In yet another step toward rendering humans superfluous, programmer Selmer Bringsjord of Rensselaer Polytechnic Institute has now programmed a computer to write fiction.
The program, called "Brutus," is designed to take the basics of plot, character, setting, and dialogue, and devise a story. The result, Bringsjord says, is "pretty convincing."
As an aspiring writer of fiction, the whole thing gives me pause. I've often wondered what's going on in my head when I come up with a story -- am I, like Brutus, simply following some kind of internal algorithm, albeit (hopefully) a more complex one? How would you tell?
"There's a certain bag of tricks that Brutus had for saying things at the right time to convince the reader that 'boy, there is something really deep linguistically going on here,'" Bringsjord said. On the other hand, he isn't convinced that what Brutus is doing is the same as human creativity. "The machine is just doing what you've programmed it to do. If a machine is creative, the designer of the system -- knowing the algorithms involved, data structure -- is completely mystified by how the output came out. In my opinion, if that's not the case, then we're just cloning our own intelligence."
I'm not so sure. Consider Stephen Thaler's "Creativity Machine," an artificial neural net that has composed music, designed snack foods, and solved problems in military science. The Creativity Machine is capable of learning -- as Thaler has shown by introducing "noise" into the system to disrupt a rote solution and watching what the program does. The machine is able to adapt and find alternate ways to use the information it has. "And therein is where discovery takes place," Thaler said. "It's not in the rote memories that we have committed to memory, it's in the generalization of all those memories into concepts and plans of action."
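Thaler's actual architecture isn't described here, but the noise trick can be sketched in miniature: train a tiny network until it has rotely memorized a mapping, then inject random noise into its learned weights and look at what it produces instead. Everything below -- the XOR task, the network size, the noise scale -- is an invented illustration under those assumptions, not Thaler's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Rote task: XOR, memorized by a small 2-4-1 network via plain gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation for squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

def run(W1, b1, W2, b2):
    """Forward pass over all four XOR inputs."""
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

rote = run(W1, b1, W2, b2)  # the memorized answers

# The "noise" step: perturb the learned weights and see what comes out.
# The outputs drift away from the rote solution -- the raw material, in
# Thaler's framing, for discovering something new.
noisy = run(W1 + rng.normal(scale=0.8, size=W1.shape), b1,
            W2 + rng.normal(scale=0.8, size=W2.shape), b2)
```

The interesting question, which this toy can't answer, is what selection mechanism turns that perturbed output into a useful idea rather than mere damage.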
I'm beginning to think we're getting close to creating a true artificial intelligence -- software that can flexibly respond, learn, and create. This idea repels some people and fascinates others. Some believe there has to be more to the human mind -- something inside our skulls that is more than the sum of our neural firings, and that therefore could never be emulated by a machine. The two basic attitudes toward this question were staked out by philosopher John Searle and mathematician Alan Turing.
Searle, for his part, thought that true artificial intelligence was impossible, and used his "Chinese Room" thought experiment to illustrate why. Imagine a man who knows no Chinese, sealed in a room with a rulebook (written in English) for manipulating Chinese symbols. His task: strings of Chinese characters are handed to him through a slot in the wall; he follows the rulebook to assemble an appropriate string of Chinese characters in response, and pushes the result back out through the slot. To those outside, the room appears to understand Chinese, yet the man inside understands nothing. This, Searle said, is all computers do: they convert one string of symbols into another in a rote fashion, however sophisticated it might look from the outside. Because there's nothing "more" in the computer's circuits -- just as there's no true understanding in the man shuffling the characters -- there is no real intelligence there.
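The rote symbol-shuffling Searle has in mind can be caricatured in a few lines of code: a lookup table plays the role of the rulebook, and the "room" returns fluent-looking replies while containing nothing that understands them. The phrases below are invented for illustration.

```python
# A caricature of Searle's Chinese Room: the "rulebook" is just a lookup
# table from input symbol strings to output symbol strings. No meaning is
# consulted anywhere; the entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: match the input, emit the listed output.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."
```

From outside the slot, the replies look competent; inside, there is only table lookup. Searle's claim is that a vastly larger table -- or any program at all -- is no different in kind.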
It doesn't matter, says Turing; all that matters is the output. The Turing test hinges on whether a sufficiently sharp human questioner can be fooled. We have no access to our own wiring, either; what's going on in our brains might be nothing but a sophisticated pattern of electrical signals. Or maybe there's something "more." Whatever it is, we can't inspect it at its fundamental level, so we have to judge by the output -- the same way we judge our fellow humans. Therefore, if a computer program can respond to a questioner in a way that convinces them they are talking to an intelligent human, it is by definition intelligent.
I've always been in Turing's camp, personally; I don't think anyone has really demonstrated that the "something more" Searle says computers lack actually exists in my brain, much less what that "something more" might be. I know that I'm often mystified as to where my own creative impulses come from -- when I write, the characters and story seem to come from some enigmatic source, often springing into my head fully formed. There is seldom a feeling of "working it out," the way you might a math problem. The ideas are just... "there." (Or not. Some days the ideas won't come, for equally puzzling reasons.)
But whatever the truth about human intelligence and creativity, machines have just taken one more step toward emulating it. And I, for one, find that fascinating. I wonder -- by building these machines and studying them, what might we learn about how our own brains work?