You may recall that a couple of days ago, in my post on mental maps, I mentioned that the contention of some neuroscientists is that consciousness is nothing more than our neural firing patterns. In other words, there's nothing there that's not explained by the interaction of the parts, just as there's nothing more to a car's engine running well than the bits and pieces all working in synchrony.
Others, though, think there's more to it, that there is something ineffable about human consciousness, be it a soul or a spirit or whatever you'd like to call it. There are just about as many flavors of this belief as there are people. But if we're being honest, there's no scientific proof for any of them -- just as there's no scientific proof for the opposite claim, that consciousness is an illusion created by our neural links. The origin of consciousness is one of the big unanswered questions of biology.
But it's a question we might want to try to find an answer to fairly soon.
Ever heard of GPT-3? It stands for Generative Pre-trained Transformer 3, and it's a language model built by OpenAI, a San Francisco-based artificial intelligence company whose stated long-term goal is producing general, humanlike intelligence. It was released in May of this year, and testing has been ongoing -- and intensive.
GPT-3 was trained using Common Crawl, a project that crawls the internet, extracting data and text for a variety of uses. In this case, it pulled text and books directly from the web, and that text was used to train the model to draw connections and generate meaningful text of its own. (To get an idea of how much data that involved, the entirety of Wikipedia accounts for roughly half a percent of the text GPT-3 had access to.)
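To make that recipe a little more concrete, here's a deliberately tiny sketch of the same basic idea -- learn statistics from a pile of text, then use them to continue a prompt. This is my own toy bigram model, nothing like GPT-3's actual transformer architecture or training code; it just shows the "learn from text, then generate text" loop at cartoon scale.

```python
# Toy illustration only: learn which word tends to follow which in a tiny
# "corpus," then continue a prompt by sampling likely next words.
# GPT-3 does something analogous with a huge neural network and hundreds
# of billions of words; this is the idea shrunk down to a lookup table.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Count which word follows which.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(prompt: str, length: int = 12) -> str:
    """Continue a prompt by repeatedly sampling a plausible next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the cat"))
```

The difference between this toy and GPT-3 is one of scale and sophistication: a 175-billion-parameter network trained on hundreds of billions of words picks up far subtler patterns than "which word comes next," which is roughly why its output reads like language rather than word salad.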
The result is half fascinating and half scary. One user, after experimenting with it, described it as being "eerily good at writing amazingly coherent text with only a few prompts." It is said to be able to "generate news articles which human evaluators have difficulty distinguishing from articles written by humans," and has even been able to write convincing poetry, something an op-ed in the New York Times called "amazing but spooky... more than a little terrifying."
It only gets creepier from here. An article in the MIT Technology Review criticized GPT-3 for sometimes generating non-sequiturs or getting things wrong (like a passage where it "thought" that a table saw was a saw for cutting tables), but made a telling statement in describing its flaws: "If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says."
Which, despite their stance that GPT-3 is a flawed attempt to create a meaningful text generator, sounds very much like they're talking about...
... an entity.
It brings up the two time-honored answers to the question of how we would tell whether we'd created true artificial intelligence:
- The Turing test, named after Alan Turing: if a potential AI can fool a panel of trained, intelligent humans into thinking they're communicating with a human, it's intelligent.
- The "Chinese room" analogy, from philosopher John Searle: machines, however sophisticated, will never be true conscious intelligence, because at their hearts they're nothing more than converters of strings of symbols. They're no more exhibiting intelligence than the behavior of a person who is locked in a room where they're handed a slip of paper in English and use a dictionary to convert it to Chinese ideograms. All they do is take input and generate output; there's no understanding, and therefore no consciousness or intelligence.
I've always tended to side with Turing, but not for any particularly well-considered reason other than wondering how our brains are not themselves just fancy string converters. I say "Hello, how are you," and you convert that to output saying, "I'm fine, how are you?", and to me it doesn't make much difference whether the machinery that allowed you to do that is made of wires and transistors and capacitors or of squishy neural tissue. The fact that from inside my own skull I might feel self-aware may not have much to do with the actual answer to the question. As I said a couple of days ago, that sense of self-awareness may simply be more patterns of neural firings, no different from the electrical impulses in the guts of a computer except for the level of sophistication.
But things took a somewhat more alarming turn a few days ago, when an article came out describing a conversation between GPT-3 and philosopher David Chalmers. Chalmers decided to ask GPT-3 flat out, "Are you conscious?" The answer was unequivocal -- but kind of scary. "No, I am not," GPT-3 said. "I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything... the only reason I am answering is to defend my honor."
*brief pause to get over the chills running up my spine*
Is it just me, or is there something about this statement that is way too similar to HAL 9000, the homicidal computer in 2001: A Space Odyssey? "This mission is too important for me to allow you to jeopardize it... I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen." Oh, and "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you."
I also have to say that I agree with a friend of mine, who when we were discussing this said in fairly hysterical tones, "Why the fuck would you invent something like this in 2020?"
So I'm a little torn here. From a scientific perspective -- what we could potentially learn both about artificial intelligence and about the origins of our own intelligence and consciousness -- GPT-3 is brilliant. From the standpoint of "this could go very, very wrong," I must admit to wishing they'd put the brakes on a little until we understand what's going on here -- and whether we even know what consciousness means.
It seems fitting to end with another quote from 2001: A Space Odyssey, this one from the main character, astronaut David Bowman: "Well, he acts like he has genuine emotions. Um, of course he's programmed that way to make it easier for us to talk to him. But as to whether he has real feelings, it's something I don't think anyone can truthfully answer."