Ok so when I wrote:
ftiiaanncsg - is tihs pnllttaeioy cdcrroonittay s-staarms ssseux eecndive ?
You did not bite by translating it into:
"Fascinating, is this potentially contradictory smart-ass Sussex evidence ?"
I say "Sussex" because the original article claimed to come from research done at Cambridge University, and I studied at Sussex University in the UK. In fact, Sussex is #3 behind Oxford and Cambridge in terms of funding that UK Universities receive for science - and is therefore generally considered to be the third best University for science in the UK.
Adrian Thompson is a researcher at Sussex University who carries out some fascinating experiments with circuits that, well, evolve. The following is a rather long description of an experiment (but pretty amazing if you have the time to stay with it), with some questions for you at the end as a reward for reading that far.
Computer scientists have long looked to biology for inspiration. From simplified models of the brain they developed neural networks that have proved particularly good at recognising patterns such as signatures on credit cards and fingerprints. They have also worked out ways to mate and mutate programs and allow the resulting programs to compete with one another to generate the "fittest" software for a task.
These "genetic algorithms" have been used to evolve software that does everything from creating works of art to selecting high-performing shares on the stock market. To Thompson all these techniques leave something to be desired. They are too tightly constrained by the rules of chip designers and software engineers. The behaviour of living neurons, for example, is inseparable from the biochemicals from which they are made. But it doesn't matter what material the circuits of a neural network chip are etched in, so long as they operate in a digital fashion.
Digital computers break down all data into strings of 1s and 0s, which the hardware stores as "ons" and "offs" in its memory. This forces the transistors inside computer chips to work as switches -- they're either on or off. But transistors are not intrinsically digital. Between on and off they pass through a smooth series of values, and in these regions they can behave as amplifiers, for example. Computer designers, however, make little or no use of these properties. Likewise, programmers are constrained by the digital nature of computers. A program is a sequence of logic instructions that the computer applies to the 1s and 0s as they pass through its circuitry.
So the evolution that is driven by genetic algorithms happens only in the virtual world of a programming language. What would happen, Thompson asked, if it were possible to strip away the digital constraints and apply evolution directly to the hardware? Would evolution be able to exploit all the electronic properties of silicon components in the same way that it has exploited the biochemical structures of the organic world?
"I wanted to see what happens if you let evolution break out of the constraints that humans have," says Thompson. "If you give it some hardware, does it do new things?" These questions could only be answered if a way were found to combine the "wet" processes of biological evolution with the "dry" world of silicon chips. Thompson found the solution in a field-programmable gate array (FPGA). The transistors in a conventional microprocessor are hardwired into logic gates, which carry out the processing. By contrast, the logic gates in an FPGA and their interconnections can be changed at will. The transistors are arranged into an array of "logic cells" and simply by loading a special program into the chip's configuration memory, circuit designers can turn each cell into any one of a number of logic gates, and connect it to any other cell. So by loading first one program, then another, the chip can be changed at a stroke from, say, an amplifier to a modem.
Note: Hydra, the world's strongest chess computer, is based on FPGAs. The hardware can be reprogrammed at will with updated chess knowledge, something that could never happen with Deep Blue, whose chess algorithms were burnt into its chips at manufacture (fabrication) time.
Thompson realised that he could use a standard genetic algorithm to evolve a configuration program for an FPGA and then test each new circuit design immediately on the chip. He set the system a task that appeared impossible for a human designer. Using only 100 logic cells, evolution had to come up with a circuit that could discriminate between two tones, one at 1 kilohertz and the other at 10 kilohertz. To kick off the experiment, Thompson created a population of 50 configuration programs on a computer, each consisting of a random string of 1s and 0s.
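That starting point in sketch form (the bit length of each configuration string is an assumption for illustration; the article doesn't give it):

```python
import random

POP_SIZE = 50        # 50 configuration programs, as in the article
CONFIG_BITS = 1800   # assumed length of each bitstring, illustration only

# Each individual is a random string of 1s and 0s.
population = [[random.randint(0, 1) for _ in range(CONFIG_BITS)]
              for _ in range(POP_SIZE)]
```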
The computer downloaded each program in turn to the FPGA to create its circuit and then played it the test tones. The genetic algorithm tested the fitness of each circuit by checking how well it discriminated between the tones. It looked for some characteristic that might prove useful in evolving a solution. At first, this was just an indication that the circuit's output was not completely random. In the first generation, the fittest individual was one with a steady 5-volt output no matter which audio tone it heard. After testing the initial population, the genetic algorithm killed off the least fit individuals by deleting them and let the most fit produce copies of themselves--offspring. It mated some individuals, swapping sections of their code. Finally, the algorithm introduced a small number of mutations by randomly switching 1s and 0s within individual programs. It then downloaded the new population one at a time onto the FPGA and ran the fitness tests once more.
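Here is that generation loop in sketch form. The hardware-facing functions (download_to_fpga, measure_output) are hypothetical stand-ins for Thompson's rig, stubbed out here so the sketch runs; on the real setup they would program the chip and record its response to each test tone.

```python
import random

def download_to_fpga(config):
    pass  # stand-in: would write the configuration bits to the chip

def measure_output(config, tone_hz):
    # Stand-in: a fake, deterministic "response" so the sketch runs; the
    # real measurement came from playing the tone to the physical chip.
    rng = random.Random(hash((tuple(config), tone_hz)))
    return [rng.uniform(0.0, 5.0) for _ in range(100)]  # samples, volts

def fitness(config):
    download_to_fpga(config)
    out_1k = measure_output(config, 1_000)
    out_10k = measure_output(config, 10_000)
    # Reward any systematic difference between the two responses; early
    # on, the article notes, this was just "not completely random".
    avg = lambda xs: sum(xs) / len(xs)
    return abs(avg(out_1k) - avg(out_10k))

def next_generation(population, mutation_rate=0.001):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:len(ranked) // 2]        # delete the least fit
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = random.sample(survivors, 2)
        point = random.randrange(1, len(a))      # mate: swap code sections
        child = a[:point] + b[point:]
        child = [bit ^ 1 if random.random() < mutation_rate else bit
                 for bit in child]               # rare random bit flips
        children.append(child)
    return survivors + children
```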
By generation 220, the fittest individual produced outputs almost identical to the inputs--two waveforms corresponding to 1 kilohertz and 10 kilohertz--but not yet the required steady output at 0 volts or 5 volts. By generation 650, the output stayed mostly high for the 1 kilohertz input, although the 10 kilohertz input still produced a waveform. By generation 1400, the output was mostly high for the first signal and mostly low for the second. By generation 2800, the fittest circuit was discriminating accurately between the two inputs, but there were still glitches in its output. These only disappeared completely at generation 4100. After this, there were no further changes. Once the FPGA could discriminate between the two tones, it was fairly easy to continue the evolutionary process until the circuit could detect the more finely modulated differences between the spoken words "go" and "stop".
So how did evolution do it? If a human designer, steeped in digital lore, were to tackle the same problem, one component would have been essential--a clock. The transistors inside a chip need time to flip between on and off, so the clock is set to keep everything marching in step, ensuring that no transistor produces an output between 0 and 1. A human designer would also use the clock to count the number of ticks between the peaks of the waves of the input tones. There would be 10 times as many ticks between the wave peaks of the 1 kilohertz tone as those of the 10 kilohertz tone.
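For comparison, here is roughly how that conventional, clocked approach would work (my sketch, not from the article): count clock ticks between successive wave peaks and map the count back to a frequency.

```python
CLOCK_HZ = 1_000_000   # assumed 1 MHz clock, for illustration

def classify(ticks_between_peaks):
    period_s = ticks_between_peaks / CLOCK_HZ
    freq_hz = 1 / period_s
    # Threshold at the geometric midpoint of 1 kHz and 10 kHz (~3162 Hz).
    return "1 kHz" if freq_hz < 3_162 else "10 kHz"

print(classify(1000))  # 1 kHz tone: 1000 ticks per period  -> "1 kHz"
print(classify(100))   # 10 kHz tone: 100 ticks per period  -> "10 kHz"
```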
In order to ensure that his circuit came up with a unique result, Thompson deliberately left a clock out of the primordial soup of components from which the circuit evolved. Of course, a clock could have evolved. The simplest would probably be a "ring oscillator"--a circle of cells that change their output every time a signal passes through. It generates a sequence of 1s and 0s rather like the ticks of a clock. But Thompson reckoned that a ring oscillator was unlikely to evolve because it would need far more than the 100 cells available.
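For illustration, a toy simulation of a ring oscillator (my sketch): an odd-length ring of inverting cells has no stable state, so any cell's output ticks like a clock, with a period of twice the ring length in gate delays.

```python
def ring_oscillator(n_cells=3, steps=18):
    state = [1] + [0] * (n_cells - 1)   # seed with a travelling edge
    output = []
    for _ in range(steps):
        # Each cell inverts its predecessor's previous output
        # (one step = one gate delay).
        state = [1 - state[i - 1] for i in range(n_cells)]
        output.append(state[-1])
    return output

print(ring_oscillator())  # last cell toggles every 3 gate delays
```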
So how did evolution do it--and without a clock? When he looked at the final circuit, Thompson found the input signal routed through a complex assortment of feedback loops. He believes that these probably create modified and time-delayed versions of the signal that interfere with the original signal in a way that enables the circuit to discriminate between the two tones. "But really, I don't have the faintest idea how it works," he says.
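To illustrate the general principle Thompson suspects (a hedged toy, in the spirit of his own caveat that he doesn't know how it works): mixing a signal with a time-delayed copy of itself responds differently at different frequencies, rather like a crude comb filter. With a 50-microsecond delay (my choice), the copies nearly cancel at 10 kilohertz but reinforce at 1 kilohertz.

```python
import math

def mean_mixed_power(freq_hz, delay_s=5e-5, samples=10_000):
    dt = 1e-6   # 1 microsecond time step (assumed)
    total = 0.0
    for i in range(samples):
        t = i * dt
        original = math.sin(2 * math.pi * freq_hz * t)
        delayed = math.sin(2 * math.pi * freq_hz * (t - delay_s))
        total += (original + delayed) ** 2   # power of the mixed signal
    return total / samples

print(mean_mixed_power(1_000))    # high: copies nearly in phase
print(mean_mixed_power(10_000))   # near zero: delay is half a period
```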
One thing is certain: the FPGA is working in an analogue manner. Up until the final version, the circuits were producing analogue waveforms, not the neat digital outputs of 0 volts and 5 volts. Thompson says the feedback loops in the final circuit are unlikely to sustain the 0 and 1 logic levels of a digital circuit. "Evolution has been free to explore the full repertoire of behaviours available from the silicon resources," says Thompson.
That repertoire turns out to be more intriguing than Thompson could have imagined. Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit's operation. Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working. It appears that evolution made use of some physical property of these cells--possibly a capacitive effect or electromagnetic inductance--to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution. To solve this mystery, Thompson needs to measure the input and output values of each cell when the circuit is operating. But the FPGA allows only digital access to these points, so he can't measure the analogue values. Thompson's colleague, Paul Layzell, is building a circuit board that will allow all the components to be measured with analogue instruments.
So what is happening here? How on Earth did a circuit designed by "survival of the fittest" principles recognize "stop" and "go"? Can you see a blind watchmaker at work here? And what does this mean for the next generation of computer-designed computers? We won't know how they work. Will they be the next level?