Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, December 20, 2011

The emulation of creativity

In yet another step toward rendering humans superfluous, Selmer Bringsjord of Rensselaer Polytechnic Institute has now programmed a computer to write fiction.

The program, called "Brutus," is designed to take the basics of plot, character, setting, and dialogue, and devise a story.  The result, Bringsjord says, is "pretty convincing."

As an aspiring writer of fiction, I find the whole thing gives me pause.  I've often wondered what is actually going on in my head when I come up with a story -- am I, like Brutus, simply following some kind of internal algorithm, albeit hopefully a more complex one?  How would you tell?

"There's a certain bag of tricks that Brutus had for saying things at the right time to convince the reader that 'boy, there is something really deep linguistically going on here,'" Bringsjord said.  On the other hand, he isn't convinced that what Brutus is doing is the same as human creativity. "The machine is just doing what you've programmed it to do.  If a machine is creative, the designer of the system — knowing the algorithms involved, data structure — is completely mystified by how the output came out. In my opinion, if that's not the case, then we're just cloning our own intelligence."

I'm not so sure.  Consider Stephen Thaler's "Creativity Machine," an artificial neural net that has composed music, designed snack foods, and solved problems in military science.  The Creativity Machine is capable of learning -- as Thaler has shown by introducing "noise" into the system to disrupt a rote solution, and watching what the program does.  It is able to adapt, and to find alternate ways to use the information it has.  "And therein is where discovery takes place," Thaler said. "It's not in the rote memories that we have committed to memory, it's in the generalization of all those memories into concepts and plans of action."
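Just to make that "noise" trick concrete, here's a toy sketch of the general idea -- my own illustration, in Python with NumPy, and emphatically not Thaler's actual system.  A tiny network is fit so that one memorized input always yields one memorized, rote answer; jostling its weights with random noise makes it produce variations on that answer instead of parroting it back.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "network": one weight matrix mapping an input vector to an
# output vector.  Fit it so that a single memorized input always
# produces a single memorized, rote answer.
x = np.array([1.0, 0.0, 1.0])            # the problem it has memorized
rote_answer = np.array([0.2, 0.9, 0.4])  # the answer it always gives
W = np.outer(rote_answer, x) / (x @ x)   # chosen so that W @ x == rote_answer

print("rote output:  ", W @ x)

# The "noise" step, very loosely after Thaler's description: perturb
# the weights and see what the degraded network produces.  The results
# are no longer the memorized answer, but variations on it -- raw
# material that some further process would have to evaluate.
for _ in range(3):
    W_noisy = W + rng.normal(scale=0.3, size=W.shape)
    print("noisy variant:", W_noisy @ x)
```

As I understand Thaler's setup, the real system pairs a perturbed network like this with a second one that watches the output and keeps whichever variants turn out to be useful -- which is where the "discovery" he describes comes in.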

I'm beginning to think we're getting close to creating a true artificial intelligence -- software that can flexibly respond, learn, and create.  This idea repels a lot of people, and fascinates others.  Some folks believe there has to be more to the human mind -- that there's something inside our skulls that is more than just the sum of our neural firings, and that therefore could never be emulated by a machine.  The two basic attitudes toward this problem were staked out by philosopher John Searle and mathematician Alan Turing.

Searle, for his part, thought that artificial intelligence was impossible, and used his "Chinese Room" analogy to illustrate why.  Imagine a man who speaks no Chinese, sealed in a room with a rule book that specifies, for any string of Chinese characters, which string of Chinese characters to send back in reply.  His task: he is handed Chinese questions through a slot in the wall, follows the rules, and pushes the prescribed Chinese answers back out through the slot.  To the people outside, the room seems to understand Chinese; the man inside understands not a word.  This, Searle said, is what computers do; they are simply converting one string of symbols into another in a rote fashion, however complex it might look from the outside.  Because there's nothing "more" in the computer's circuits -- just as there's no true understanding going on in the man shuffling the characters -- there is no real intelligence there.
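To see just how little is going on inside the room, here's a toy version -- my own illustration, with a couple of made-up stock phrases standing in for the rule book.  The rules are nothing but a lookup table, and the function follows them flawlessly while understanding precisely nothing:

```python
# A toy Chinese Room.  The "rule book" is a lookup table from incoming
# strings to outgoing strings; the man (this function) follows it
# perfectly and understands nothing.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def the_room(characters_through_slot: str) -> str:
    # Match the symbols against the rules; if nothing matches, hand back
    # a stock reply -- still by rule, still with zero comprehension.
    return RULE_BOOK.get(characters_through_slot, "对不起，我不明白。")  # "Sorry, I don't understand."

print(the_room("你好吗？"))  # from outside the slot, this looks like understanding
```

From outside the slot, of course, the replies look perfectly sensible -- which is exactly Searle's point: the appearance of understanding, he argues, proves nothing about what's happening inside.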

It doesn't matter, says Turing; all that matters is the output.  The Turing test hinges on whether a sufficiently smart human could be fooled.  We have no access to our own wiring, either; what's going on in our brains might just be a sophisticated set of electrical signals.  Or maybe there's something "more."  Whatever it is, we don't have access to it at its fundamental level.  So we have to judge by the output -- same as we do with our fellow humans.  Therefore, if a computer program can respond to a questioner in a way that fools him or her into thinking that the program is an intelligent human responder, it is by definition intelligent.
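Turing's setup is easy to caricature in a few lines -- a purely hypothetical sketch, with canned replies standing in for real conversation.  What it does capture honestly is the constraint: the judge can question the respondents, but can never look inside them.

```python
import random

# Two respondents behind the curtain.  The judge can put questions to
# them but can never inspect how they work -- the constraint Turing's
# imitation game imposes.  The replies here are canned placeholders.
def human(question: str) -> str:
    return "Honestly, I'd have to think about that for a while."

def machine(question: str) -> str:
    return "Honestly, I'd have to think about that for a while."

def judge(question: str) -> str:
    respondents = {"A": human, "B": machine}
    label = random.choice(list(respondents))   # judge doesn't know which is which
    answer = respondents[label](question)
    # The verdict has to rest on the answer alone.  If, over many rounds,
    # the judge can't reliably spot the machine, the machine passes.
    return f"{label} said {answer!r} -- can't tell from the output alone"

print(judge("Where do your ideas come from when you write a story?"))
```

If, over enough rounds, the judge can do no better than chance at picking out the machine, then on Turing's view we've run out of grounds for denying it the label "intelligent."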

I've always been in Turing's camp, personally; I don't think it's ever really been demonstrated that the "something more" that Searle says computers don't have actually exists in my brain, much less what that "something more" might be.  I know that I'm often mystified as to where my own creative impulses come from -- when I write, I feel like the characters and story come from some enigmatic source, and they often feel like they spring from my head fully-formed.  There is seldom a feeling of "working it out," the way you might a math problem.  The ideas are just... "there."  (Or not.  Some days, the ideas won't come, for equally puzzling reasons.)

But whatever the truth of human intelligence and creativity, machines have just taken one further step toward emulating it.  And I, for one, find that fascinating.  I wonder -- by creating these machines, and studying them, what might we learn about how our own brains work?

5 comments:

  1. Unrelated, though I have no other modes of communication to you. Even email feels too... personal, maybe I'm peculiar... or a social networking luddite. I digress.

    Typed in skeptophilia.blogpsot.com (notice the misspelling) ...and... wow.

    Is that a measure of your notoriety (infamy?) or a litmus test for the prevalence of woo-woo-ism?

  2. WHOA. I am... speechless. And that doesn't happen often.

    Can this really be in response to me? It doesn't mention me anywhere... at least that I saw?

    WOW.

  3. Okay, my wife figured it out.

    It turns out that anything.blogpsot.com brings you to this site. I guess the owner has somehow gotten the 'net to bring anything with that hosting to his site.

    I'm too much of a Luddite, too, to know how you'd go about doing this... but at least that means it wasn't targeted. (The whole "World's Biggest Skeptic" thing did give me pause, though!)

  4. This is a bit rhetorical, but do these people really think that when someone sits down at their computer to check/read/write a blog, and accidentally types the URL wrong, they would actually take the time to read the unintended site they've been directed to?

    How many people become believers in a specific religion/idea based on a "gotcha" moment? Who actually thinks that would work well enough to be worth the countless hours spent making said website?

    When this happens to me, my first thought is never "Oh... hmmm... this is interesting, I should investigate." My first thought is always "What the eff? Oh great. What kind of trojans, worms, spyware, malware, keyloggers, or viruses did I just receive? ARGH!" ...as I feverishly click the back button on my browser, run a virus scan, and try to unfurl my brow.

    Maybe one of your future posts could expound upon this topic, because I think these "gotcha" sites are made by actual trolls.

  5. Definitely will be the subject of a post soon...
