Wednesday, August 18, 2010

Alan Turing's Broken Machine and the Hofstadter Solution

Alan Turing was, regardless of anything I'm about to say, a brilliant man. His ideas and designs form the basis for the computers that every last one of you reading this today is using. I could easily write for the rest of my life on how Alan Turing has affected my life, and a great many people make their careers out of writing about and expanding upon his fundamental design.

However, I find it fundamentally unlikely that his own machine could pass his famous proposed test. Granted, it is possible, and in fact easy, to fool a human, so I don't claim that no artificial intelligence program will ever pass the test; but I maintain with reasonable certainty that no Turing-complete machine will ever be capable of replacing, or even replicating, a human mind.

I still believe that true artificial intelligence is possible and likely to be developed, but not by the current approach. Regular showcases of animatronic puppets with eerie robotic voices and disconnected thought processes are fun, and may do something for the need for companionship, but they are clearly not the ancestors of a thoughtful, sentient computer. Even more developed, strictly text-based programs, such as A.L.I.C.E. and the disappointingly accurately named "World's Best Chatbot", fail to resemble even an imbecile. Each is just a weird pattern, a tool without a purpose.

I assert that these are not even the ancestors of sentience because they bear no resemblance to real-world sentient ancestors. Even Theo Jansen's (almost) mindless walking machines, the Strandbeest, feel more like living creatures than chatbots do.

However, beyond outward resemblance, I find that the root of the Turing design is fundamentally inconsistent with sentience. Computers are strictly logical, built from the ground up on Boolean logic gates (from the most basic Tamagotchi to IBM's Deep Blue, all computers share the same basic design) and arranged to process higher orders of logic. All programming languages, for this reason, are strictly logical, no matter how advanced and abstract.
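To make that layering concrete, here's a small Python sketch (my own illustration, not anything specific to a real chip) showing how every Boolean gate, and even one-bit arithmetic, can be composed from a single NAND primitive, the same way physical computers build higher-order logic out of one basic part:

```python
# Every gate below is built from NAND alone, illustrating how
# higher orders of logic are layered on a single Boolean primitive.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool):
    """One-bit addition: the first rung from raw gates toward arithmetic."""
    return xor(a, b), and_(a, b)  # (sum, carry)
```

Run `half_adder(True, True)` and you get a sum of `False` with a carry of `True`: 1 + 1 = 10 in binary, all the way down to NAND.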

Compare this with human languages. Human thought is almost defined by idiosyncrasy. Double negatives are a perfect example of how natural language is logically inconsistent. We all understand someone who says "I don't know nothing" or "No sé nunca" (roughly, "I don't never know"), yet anyone can recognize that it's a contradictory statement. Of course, most people seem to have caught on to this and pompously remove it from their speech, but there's another oddity I tend to hear from these sorts of people a lot. Quite often they say something along the lines of, "We're out of milk, we need to get some more. I mean, we need to get some." The first statement was clearly correct, not just because milk is a staple of my diet but because when you have none, you would like to have an amount that is greater than none. But, as logically proper folk are quick to assert, "You can't have more than nothing! You don't have any to start out!" This is a good example of a problem that held back mathematics for centuries: people do not naturally recognize zero as a number.
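To see just how literal-minded strict logic is about this, here's a toy Python sketch (entirely my own illustration) of how a purely Boolean reader would evaluate stacked negations: each "not" flips the truth value, so a double negative lands back on the affirmative, which is the opposite of what the speaker means:

```python
# A literal Boolean reading of negation: each "not" flips the truth
# value, so an even number of negations cancels out entirely.
def literal_reading(know_something: bool, negations: int) -> bool:
    value = know_something
    for _ in range(negations):
        value = not value
    return value

# "I know nothing": one negation of "I know something" -> False.
# "I don't know nothing": two negations -> logically back to True,
# i.e. "I know something" -- though the human intent is emphatic denial.
```

A machine built only from this kind of logic hears the emphatic "I don't know nothing" as a confession of knowledge; a human hears exactly the reverse.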

These and other idiosyncrasies are the butt of many jokes by observational comedians such as the late George Carlin, and certainly the target of rage by many an anal-retentive grammar teacher, but I propose they would be better used as a point of study for artificial intelligence researchers. Douglas Hofstadter certainly recognized this: as he explained in his aptly named paper "To Err is Human; To Study Error-making is Cognitive Science", by studying the errors in human thought and action, we can better understand the underlying process by which it works. That is to say, the very fundamental construction of cognition is incomplete without the elements that produce these very errors.

It may well be possible to develop an artificial brain that functions without error, at least to the same degree as a regular computer, but it would not be believably lifelike if it did not obey the fundamental pattern of a human brain, which by design gives rise to these errors. When the basic formula is discovered, the first models may not come anywhere close to modeling human speech, but unlike any artificial intelligence yet made, they will have qualities of living sentience: perhaps curiosity, the ability to learn and develop, and to make associations and analogies.

I don't have faith in this topic gaining funding and developing through direct research, but several experiments with the slime mold Physarum polycephalum, as well as many of the comments on those experiments, give me hope for the development of artificial brains as I've described.

Update 26 September, 2010: There is still hope.