If you have any doubt that we are well on our way to emulating a human mind within a machine, two stories in today's news should go a long way toward convincing you.
First, we have a program called DISCERN, developed by Risto Miikkulainen of the University of Texas at Austin. DISCERN is able to learn language naturally, through being shown examples and stories. When DISCERN is told a story, it is assimilated into memory not as a string of text, but as a set of statistical relationships between words. This is very similar to the way small children learn; as a simple example, when children hear combinations of words like "big dog," "black cat," "good food," and so on, their brains eventually induce the rule "adjectives come before the nouns they modify." DISCERN learns the same way.
Importantly, DISCERN is also programmed to forget. Words or relationships that occur with a low frequency are eventually deleted from memory. Again, this is very similar to our brain's way of processing.
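DISCERN itself is a modular neural network, not a lookup table, but the general idea of learning statistical word relationships and then pruning the infrequent ones can be illustrated with a toy sketch. Everything here (the windowed co-occurrence counting, the `min_count` cutoff) is my own simplification, not DISCERN's actual mechanism:

```python
from collections import Counter

def learn_story(memory, story, window=2):
    """Count word co-occurrences within a small window: a crude
    stand-in for the statistical relationships DISCERN builds."""
    words = story.lower().split()
    for i, w in enumerate(words):
        for other in words[i + 1:i + 1 + window]:
            memory[(w, other)] += 1

def forget(memory, min_count=2):
    """Delete relationships seen fewer than min_count times,
    mimicking frequency-based forgetting."""
    for pair in [p for p, n in memory.items() if n < min_count]:
        del memory[pair]

memory = Counter()
learn_story(memory, "the big dog chased the black cat")
learn_story(memory, "the big dog ate good food")
forget(memory, min_count=2)
# Pairs seen in both stories, like ("big", "dog"), survive;
# one-off pairs, like ("black", "cat"), are forgotten.
```

Lowering `min_count` toward zero is the toy analogue of the experiment described below: the system stops discarding rare, spurious associations and treats everything it has ever seen as significant.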
After being fed many stories, DISCERN could communicate quite convincingly with its creators. It is, apparently, a candidate for passing the Turing Test, the classic metric for gauging artificial intelligence. The Turing Test, formulated by British mathematician and computer scientist Alan Turing, says, in essence, that if a computer program can fool a human into believing it is human, it's intelligent. That DISCERN is getting close to passing is impressive enough, but I haven't even gotten to the most amazing part.
Miikkulainen and his student, Uli Grasemann, then ran DISCERN with one parameter changed -- the rate at which it forgot old information with low statistical relevance. When they reduced that rate, the program...
... wait for it...
... developed schizophrenia.
I am not making this up. As the forgetting rate decreased, DISCERN's output became increasingly erratic: it switched back and forth between first and third person, produced syntactic gibberish, digressed abruptly mid-conversation, and claimed responsibility for various bad things (natural disasters and terrorist bombings) that had appeared in the stories it had been fed. The output was so suggestive of schizophrenia that Miikkulainen and Grasemann believe they've hit on something that emulates what happens in the brains of actual schizophrenia patients.
"We basically simulated what would happen in the brain if there were an excess of dopamine in the memory centers of the brain," Grasemann said. "The hypothesis is that dopamine encodes the importance -- the salience -- of experience. When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from." Grasemann stressed that the idea that schizophrenia stems from an impaired forgetting mechanism is still unproven, but he believes the experiments with DISCERN support the hypothesis fairly convincingly.
On a happier note, two researchers at the University of Washington, Chloe Kiddon and Yuriy Brun, have taught a computer to understand dirty jokes.
Well, specifically, one kind of dirty joke -- the "That's What She Said" joke. Double entendres are nothing new; what is now called a "That's What She Said" joke in the US was long known in Britain as "... said the actress to the bishop." (An example: a student of mine was describing being nervous about getting a vaccination. Apparently the nurse was taking a long time getting the needle ready, and he said, "I was just sitting there thinking, 'Hurry up, just stick it in and get it over with!'" At this point, about fifteen of his friends chimed in, "THAT'S WHAT SHE SAID.")
So, anyway, Kiddon and Brun characterized the double entendre as a "hard natural language understanding problem," and set out to see whether they could teach a computer to "get the joke." They started by analyzing sentences and evaluating them for erotic or non-erotic content, and rated words for their "sexiness quotient" -- presumably a good gauge of how promising a particular phrase might be for a TWSS joke. After a lot of training, the software achieved a hit rate of 70%, which is pretty impressive, given that my ex-wife doesn't get jokes nearly that often.
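Kiddon and Brun's actual system learned its word scores from erotic versus non-erotic corpora and used a trained classifier; as a purely hypothetical sketch of the core "sexiness quotient" idea, imagine hand-coded scores and a simple threshold (all the scores and the threshold below are invented for illustration):

```python
import string

# Hypothetical scores; the real system learned these from corpora
# rather than hand-coding them.
SEXINESS = {"stick": 0.8, "hard": 0.7, "long": 0.6, "it": 0.3,
            "in": 0.2, "hurry": 0.1, "war": 0.0, "ended": 0.0}

def twss_score(sentence):
    """Sum the per-word scores, ignoring punctuation and case."""
    words = [w.strip(string.punctuation) for w in sentence.lower().split()]
    return sum(SEXINESS.get(w, 0.0) for w in words)

def thats_what_she_said(sentence, threshold=1.0):
    """Flag the sentence as a TWSS candidate if its score is high enough."""
    return twss_score(sentence) > threshold

thats_what_she_said("Hurry up, just stick it in and get it over with!")  # True
thats_what_she_said("The Franco-Prussian War ended in 1871.")            # False
```

A bag-of-words score like this is exactly what produces the 30% of misfires described next: it has no notion of context, only of individual suggestive words.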
However, it does mean that 30% of its jokes didn't work, and resulted in knee-slappingly hilarious output like "The Franco-Prussian War ended in 1871... That's what she said." So they have some work still to do. On the other hand, humor is definitely a higher-level brain function; it requires the ability to map two concepts onto one another, often in unexpected ways. The makers of Star Trek: The Next Generation understood that -- in making Data humorless, they identified one thing that is somehow quintessentially human.
Not that I think it will be impossible to emulate; Kiddon and Brun have taken the first steps. I think it's only a matter of time before we have computers that are convincingly intelligent, that could pass the Turing Test with one megabyte of RAM tied behind their CPUs. And Miikkulainen and Grasemann's work suggests that when that occurs, we will have to worry about the same kinds of neural net breakdowns that occur in humans, a prospect I find distinctly scary. (Did HAL from 2001: A Space Odyssey just come to mind for you? Yeah, me too.)
The direction that computer science is taking sounds increasingly like science fiction, and one has to wonder about the potential dangers -- the fictional universe is densely populated with computer networks that have gone insane and started murdering people. But that notwithstanding, I think it's worth pursuing, if for no other reason than to deepen our understanding of how our own minds work.