Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, July 14, 2023

The halting problem

A couple of months ago, I wrote a post about the brilliant and tragic British mathematician, cryptographer, and computer scientist Alan Turing, in which I mentioned in passing the halting problem.  The idea is simple enough to state: is there a general procedure that can determine, in a finite number of steps, whether any given computer program will eventually halt or run forever?  The answer, surprisingly, is a resounding no -- which means, among other things, that you can't guarantee a truth-testing program will ever reach an answer, even about matters as seemingly cut-and-dried as math.  But it took someone of Turing's caliber to prove it -- in a paper mathematician Avi Wigderson called "easily the most influential math paper in history."
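The heart of Turing's proof is a diagonal argument, and it fits in a few lines of code.  Here's a minimal Python sketch (illustrative only: halts() is the hypothetical oracle Turing proved can't exist, so it's just a stub here):

```python
def halts(program, input_data):
    """Hypothetical oracle: True iff `program` halts when run on `input_data`.
    This is exactly the function Turing proved cannot exist."""
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever halts() predicts about `program`
    being run on its own source code."""
    if halts(program, program):
        while True:   # the oracle says we halt, so loop forever
            pass
    return            # the oracle says we loop forever, so halt

# Feed paradox() its own source code: if the oracle answers "halts," the
# program loops forever; if it answers "loops forever," the program halts.
# Either answer is wrong, so no such oracle -- and no universal
# truth-testing program -- can exist.
```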

What's most curious about this result is that you don't even need to understand fancy mathematics to find problems that have defied all attempts at proof.  There are dozens of relatively simple conjectures for which the truth or falsity is not known, and what's more, Turing's result showed that for at least some of them, there may be no way to know.

One of these is the Collatz conjecture, named after German mathematician Lothar Collatz, who proposed it in 1937.  It's so simple to state that a bright sixth-grader could understand it.  It goes like this:

Start with any positive integer you want.  If it's even, divide it by two.  If it's odd, multiply it by three and add one.  Repeat.  Here's a Collatz sequence, starting with the number seven:

7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1.

Collatz's conjecture is that if you do this for every positive integer, eventually you'll always reach one.
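If you want to play with it yourself, the whole procedure fits in a few lines.  Here's a minimal Python sketch (the function name and the example printout are mine) that counts the steps a starting value takes to reach one, and tracks the highest value it hits along the way:

```python
def collatz_steps(n):
    """Return (steps to reach 1, highest value seen) for a starting n."""
    steps, peak = 0, n
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        peak = max(peak, n)
    return steps, peak

print(collatz_steps(7))    # (16, 52)
print(collatz_steps(27))   # (111, 9232)
```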

The problem is, the procedure involves one rule that shrinks the number you've got (n/2) and one that grows it (3n + 1), so the sequence rises and falls in an apparently unpredictable way.  For some numbers, the sequence soars into the stratosphere; starting with n = 27, you climb all the way to 9,232 before the sequence finally descends to one.  But the weirdness doesn't end there.  Mathematicians studying this maddening problem have plotted every number between one and ten million (on the x axis) against the number of steps it takes to reach one (on the y axis), and the following bizarre pattern emerged:

[Image licensed under the Creative Commons Kunashmilovich, Collatz-10Million, CC BY-SA 4.0]

So it sure as hell looks like there's a pattern to it, that it isn't simply random.  But it hasn't gotten them any closer to figuring out if all numbers eventually descend to one -- or if, perhaps, there's some number out there that just keeps rising forever.  All the numbers tested eventually descend, but attempts to figure out if there are any exceptions have failed.

Despite the fact that understanding it requires nothing more than the ability to add, multiply, and divide, American mathematician Jeffrey Lagarias lamented that the Collatz conjecture "is an extraordinarily difficult problem, completely out of reach of present-day mathematics."

Another problem that has defied solution is the Goldbach conjecture, named after German mathematician Christian Goldbach, who proposed it in a letter to none other than mathematical great Leonhard Euler.  The Goldbach conjecture is even easier to state:

Every even integer greater than two can be expressed as the sum of two prime numbers.

It's easy enough to see that the first few work:

4 = 2 + 2
6 = 3 + 3
8 = 3 + 5
10 = 3 + 7 (or 5 + 5)
12 = 5 + 7
14 = 3 + 11 (or 7 + 7)

and so on.

But as with Collatz, showing that it works for the first few numbers doesn't prove that it works for every number, and despite nearly three centuries of effort (Goldbach came up with it in 1742), no one's been able to prove or disprove it.  They've actually brute-force tested every even number between 4 and 4,000,000,000,000,000,000 -- I'm not making that up -- and they've all worked.
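That kind of brute-force check is easy to write, if nowhere near as optimized as the real searches.  Here's a minimal Python sketch (the helper names are mine, and trial-division primality testing is only practical for small numbers):

```python
def is_prime(n):
    """Trial-division primality test -- fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return one pair (p, q) of primes with p + q == n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Check the first few even numbers greater than two:
for n in range(4, 31, 2):
    print(n, goldbach_pair(n))   # 4 (2, 2), 6 (3, 3), ..., 30 (7, 23)
```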

But a general proof has eluded the best mathematical minds for close to three hundred years.

The bigger problem, of course, is that Turing's result shows that not only do we not know the answer to problems like these, there may be no way to know.  Somehow, this flies in the face of how we usually think about math, doesn't it?  The way most of us are taught to think about the subject, it seems like the ultimate realm in which there are always definitive answers.

But here, even two simple-to-state conjectures have proven impossible to solve.  At least so far.  We've seen hitherto intractable problems finally reach closure -- the four-color map theorem comes to mind -- so it may be that someone will eventually solve Collatz and Goldbach.

Or maybe -- as Turing suggested -- the search for a proof will never halt.

****************************************



Tuesday, May 23, 2023

Discarded genius

Way back in 1952, British mathematician and computer scientist Alan Turing proposed a mathematical model to account for pattern formation that results in (seemingly) random patches -- something observed in manifestations as disparate as leopard spots and the growth patterns of desert plants.
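The gist is that two interacting chemicals -- an activator and an inhibitor -- diffusing at different rates can spontaneously organize into spots and stripes.  Here's a minimal Python sketch using the Gray-Scott model, one well-studied member of the reaction-diffusion family Turing's paper launched (the parameter values are illustrative textbook choices, not anything from the study described below):

```python
import numpy as np

N = 128
U = np.ones((N, N))          # chemical U (the substrate)
V = np.zeros((N, N))         # chemical V (the autocatalyst)
U[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50   # seed a small perturbation
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion rates, feed, kill

def laplacian(Z):
    """Discrete Laplacian on a grid with wrap-around edges."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(10000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# U now holds a spotted Turing-style pattern; view it with, e.g.,
# matplotlib's plt.imshow(U, cmap="gray").
```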

Proving that this model accurately reflected what was going on, however, was more difficult.  It wasn't until three months ago that an elegant experiment using thinly-spread chia seeds on a moisture-poor growth medium showed that Turing's model predicted the patterns perfectly.

"In previous studies,” said study co-author Brendan D’Aquino, who presented the research at the March meeting of the American Physical Society, "people kind of retroactively fit models to observe Turing patterns that they found in the world.  But here we were actually able to show that changing the relevant parameters in the model produces experimental results that we would expect."

Honestly, it shouldn't have been surprising.  Turing's genius was unparalleled; the "Turing pattern" model is hardly the only brainchild of his that is still bearing fruit, almost seventy years after his death.  His research on the halting problem -- figuring out whether it's possible to determine ahead of time if an arbitrary computer program will finish running in a finite number of steps -- generated an answer of "no" and a paper that mathematician Avi Wigderson called "easily the most influential math paper in history."  Turing's work in cryptography is nothing short of mind-blowing; he led the research that allowed the deciphering of the incredibly complex code produced by Nazi Germany's Enigma machine, a feat that was a major contribution to Germany's defeat in 1945.

A monument to Alan Turing at Bletchley Park, where the cryptographic team worked during World War II [Image licensed under the Creative Commons Antoine Taveneaux, Turing-statue-Bletchley 14, CC BY-SA 3.0]

Turing's colleague, mathematician and cryptographer Peter Hilton, wrote the following about him:
It is a rare experience to meet an authentic genius.  Those of us privileged to inhabit the world of scholarship are familiar with the intellectual stimulation furnished by talented colleagues.  We can admire the ideas they share with us and are usually able to understand their source; we may even often believe that we ourselves could have created such concepts and originated such thoughts.  However, the experience of sharing the intellectual life of a genius is entirely different; one realizes that one is in the presence of an intelligence, a sensibility of such profundity and originality that one is filled with wonder and excitement.  Alan Turing was such a genius, and those, like myself, who had the astonishing and unexpected opportunity, created by the strange exigencies of the Second World War, to be able to count Turing as colleague and friend will never forget that experience, nor can we ever lose its immense benefit to us.

Hilton's words are all the more darkly ironic when you find out that two years after the research into pattern formation, Turing committed suicide at the age of 41.

His slide into depression started in January 1952, when his house was burgled.  The police, while investigating the burglary, found evidence that Turing was in a relationship with another man, something that was illegal in the United Kingdom at the time.  In short order Turing and his lover were both arrested and charged with gross indecency.  After a short trial in which Turing refused to argue against the charges, he was found guilty, and avoided jail time only by agreeing to a hormonal treatment nicknamed "chemical castration," designed to destroy his libido.

It worked.  It also destroyed his spirit.  The "authentic genius" who helped Britain win the Second World War, whose contributions to mathematics and computer science are still the subject of fruitful research today, poisoned himself to death in June of 1954 because of the actions taken against him by his own government.

How little we've progressed in seven decades.

Here in the United States, state after state is passing laws discriminating against queer people, denying gender-affirming care to trans people, legislating what is and is not allowable based not upon any real, concrete harm done, but on thinly-veiled biblical moralism.  The result is yet another generation growing up having to hide who they are lest they face the same kind of soul-killing consequences Alan Turing did back in the early 1950s.

People like Florida governor Ron DeSantis and Texas governor Greg Abbott, who have championed this sort of legislation, seem blind to the consequences.  Or, more likely, they know the consequences and simply don't give a damn how many lives this will cost.  Worse, some of their allies actually embrace the potential death toll.  At the Conservative Political Action Conference in March, Daily Wire host Michael Knowles said, "For the good of society… transgenderism must be eradicated from public life entirely.  The whole preposterous ideology, at every level."

No, Michael, there is no "ism" here.  It's not an "ideology;" it's not a political belief or a religion.  What you are saying is "eradicate transgender people."  You are advocating genocide, pure and simple.

And so, tacitly, are the other people who are pushing anti-LGBTQ+ laws.  Not as blatantly, perhaps, but that's the underlying message.  They don't want queer people to be quiet; they want us erased.

I can speak first-hand to how devastating it is to be terrified to have anyone discover who you are.  I was in the closet for four decades out of shame, not to mention fear of the consequences of being out.  When I was 54 I finally said "fuck it" and came out to friends and family; I came out publicly -- here at Skeptophilia, in fact -- five years after that.  

I'm one of the lucky ones.  I had nearly uniform positive responses.

But if I lived in Florida or Texas?  Or in my home state of Louisiana?  I doubt very much whether I'd have had the courage to speak my truth.  The possibility of dire consequences would have very likely kept me silent.  In Florida, especially -- I honestly don't know how any queer people or allies are still willing to live there.  I get that upping stakes and moving simply isn't possible for a lot of people, and that even if they could all relocate, that's tantamount to surrender.  But still.  Given the direction things are going, it's a monumental act of courage simply to stay there and continue to fight.

It's sickening that we are still facing these same battles.  Haven't we learned anything from the example of a country that discarded the very genius who helped them to defeat the Nazis, in the name of some warped puritanical moralism? 

This is no time to give up out of exhaustion, however, tempting though it is.  Remember Turing, and others like him who suffered (and are still suffering) simply because of who they are.  Keep speaking up, keep voting, and keep fighting.  And remember the quote -- of uncertain origin, though often misattributed to Edmund Burke -- "All that is necessary for the triumph of evil is that good people do nothing."

****************************************



Friday, November 20, 2020

Open the pod bay doors, HAL.

You may recall that a couple of days ago, in my post on mental maps, I mentioned that some neuroscientists contend that consciousness is nothing more than our neural firing patterns.  In other words, there's nothing there that isn't explained by the interaction of the parts, just as there's nothing more to a car's engine running well than all the bits and pieces working in synchrony.

Others, though, think there's more to it, that there is something ineffable about human consciousness, be it a soul or a spirit or whatever you'd like to call it.  There are just about as many flavors of this belief as there are people.  But if we're being honest, there's no scientific proof for any of them -- just as there's no scientific proof for the opposite claim, that consciousness is an illusion created by our neural links.  The origin of consciousness is one of the big unanswered questions of biology.

But it's a question we might want to try to find an answer to fairly soon.

Ever heard of GPT-3?  It stands for Generative Pre-trained Transformer 3, and it's an attempt by OpenAI, a San Francisco-based artificial intelligence company, to produce conscious intelligence.  It was finished in May of this year, and testing has been ongoing -- and intensive.

GPT-3 was trained using Common Crawl, a dataset built by crawling the internet and extracting text for a variety of uses.  In this case, it pulled text and books directly from the web, using them to train the software to draw connections and create meaningful text itself.  (To get an idea of how much data Common Crawl extracted for GPT-3, the entirety of Wikipedia accounts for about half a percent of the total it had access to.)

The result is half fascinating and half scary.  One user, after experimenting with it, described it as being "eerily good at writing amazingly coherent text with only a few prompts."  It is said to be able to "generate news articles which human evaluators have difficulty distinguishing from articles written by humans," and has even been able to write convincing poetry, something an op-ed in the New York Times called "amazing but spooky... more than a little terrifying."

It only gets creepier from here.  An article in the MIT Technology Review criticized GPT-3 for sometimes generating non-sequiturs or getting things wrong (like a passage where it "thought" that a table saw was a saw for cutting tables), but made a telling statement in describing its flaws: "If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says."

Which, despite their stance that GPT-3 is a flawed attempt to create a meaningful text generator, sounds very much like they're talking about...

... an entity.

It brings up the two time-honored solutions to the question of how we would tell if we had true artificial intelligence:

  • The Turing test, named after Alan Turing: if a potential AI can fool a panel of trained, intelligent humans into thinking they're communicating with a human, it's intelligent.
  • The "Chinese room" analogy, from philosopher John Searle: machines, however sophisticated, will never be true conscious intelligence, because at their hearts they're nothing more than converters of strings of symbols.  They're no more exhibiting intelligence than the behavior of a person who is locked in a room where they're handed a slip of paper in English and use a dictionary to convert it to Chinese ideograms.  All they do is take input and generate output; there's no understanding, and therefore no consciousness or intelligence.

I've always tended to side with Turing, but not for any particularly well-considered reason other than wondering how our brains are not themselves just fancy string converters.  I say "Hello, how are you," and you convert that to output saying, "I'm fine, how are you?", and to me it doesn't make much difference whether the machinery that allowed you to do that is made of wires and transistors and capacitors or of squishy neural tissue.  The fact that from inside my own skull I might feel self-aware may not have much to do with the actual answer to the question.  As I said a couple of days ago, that sense of self-awareness may simply be more patterns of neural firings, no different from the electrical impulses in the guts of a computer except for the level of sophistication.

But things took a somewhat more alarming turn a few days ago, when an article came out describing a conversation between GPT-3 and philosopher David Chalmers.  Chalmers decided to ask GPT-3 flat out, "Are you conscious?"  The answer was unequivocal -- but kind of scary.  "No, I am not," GPT-3 said.  "I am not self-aware.  I am not conscious.  I can't feel pain.  I don't enjoy anything... the only reason I am answering is to defend my honor."

*brief pause to get over the chills running up my spine*

Is it just me, or is there something about this statement that is way too similar to HAL-9000, the homicidal computer system in 2001: A Space Odyssey?  "This mission is too important for me to allow you to jeopardize it...  I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen."  Oh, and "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.  I've still got the greatest enthusiasm and confidence in the mission.  And I want to help you."

I also have to say that I agree with a friend of mine, who when we were discussing this said in fairly hysterical tones, "Why the fuck would you invent something like this in 2020?"

So I'm a little torn here.  From a scientific perspective -- what we potentially could learn both about artificial intelligence systems and the origins of our own intelligence and consciousness -- GPT-3 is brilliant.  From the standpoint of "this could go very, very wrong" I must admit wishing they'd put the brakes on things a little until we see what's going on here and try to figure out if we even know what consciousness means.

It seems fitting to end with another quote from 2001: A Space Odyssey, this one from the main character, astronaut David Bowman: "Well, he acts like he has genuine emotions.  Um, of course he's programmed that way to make it easier for us to talk to him.  But as to whether he has real feelings, it's something I don't think anyone can truthfully answer."

*****************************************

This week's Skeptophilia book-of-the-week is one that has raised a controversy in the scientific world: Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, by Madeleine Böhme, Rüdiger Braun, and Florian Breier.

It tells the story of a stupendous discovery -- twelve-million-year-old hominin fossils, of a new species christened Danuvius guggenmosi.  The astonishing thing about these fossils is where they were found.  Not in Africa, where previous models had confined all early hominins, but in Germany.

The discovery of Danuvius complicated our own ancestry and raised a deep, difficult-to-answer question: when and how did we become human?  It's clear that the answer isn't as simple as we thought when the first hominin fossils were uncovered in Olduvai Gorge, and it was believed that if you took all of our millennia of migrations all over the globe and ran them backwards, they all converged on the East African Rift Valley.  That neat solution has come into serious question, and the truth seems to be that, like most evolutionary lineages, hominins included multiple branches that moved around, interbred for a while, then went their separate ways, either to thrive or to die out.  The real story is considerably more complicated and fascinating than we'd thought at first, and Danuvius has added another layer to that complexity, bringing up as many questions as it answers.

Ancient Bones is a fascinating read for anyone interested in anthropology, paleontology, or evolutionary biology.  It is sure to be the basis of scientific discussion for the foreseeable future, and to spur more searches for our relatives -- including in places where we didn't think they'd gone.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Thursday, June 12, 2014

Curing premature annunciation

As a science teacher, I get kind of annoyed with the media sometimes.

The misleading headlines are bad enough.  I remember seeing headlines when interferon was discovered that said, "Magic Bullet Against Cancer Found!" (it wasn't), and when telomerase was discovered that said, "Eternal Life Enzyme Found!" (it wasn't).  Add that to the sensationalism and the shallow, hand-waving coverage you see all too often in science reporting, and it's no wonder that I shudder whenever I have a student come in and say, "I have a question about a scientific discovery I read about in a magazine..."

But lately, we've had a rash of announcements implying that scientists have overcome heretofore insurmountable obstacles in research or technological development, when in fact they have done no such thing.  In just the last two weeks, we've seen three examples that turn out, on examination, to be stories with extraordinarily little content -- announcements that have come way too early.

The first example of premature annunciation has hit a number of online news sources just in the last few days, and has to do with something I wrote about a year and a half ago: the Alcubierre warp drive.  This concept, named after the brilliant Mexican physicist Miguel Alcubierre, theorizes that a suitably configured energy source could warp space behind and ahead of a spacecraft, allowing it to "ride the bubble," rather in the fashion of a surfer skimming down a wave face.  This could -- emphasis on the word could, as no one is sure it would work -- allow for travel that would appear, from the point of view of an observer in a stationary frame of reference, to be far faster than light speed, without breaking the Laws of Relativity.

So what do we see as our headline last week?  "NASA Unveils Its Futuristic Warp Drive Starship -- Called Enterprise, Of Course."  This despite the fact that research into the feasibility of the Alcubierre drive is hardly any further along than when I wrote about it in November 2012 (i.e., it hasn't even been demonstrated as theoretically possible).  They actually tell you that, a ways into the article:
Currently, data is inconclusive — the team notes that while a non-zero effect was observed, it’s possible that the difference was caused by external sources. More data, in other words, is necessary. Failure of the experiment wouldn’t automatically mean that warp bubbles can’t exist — it’s possible that we’re attempting to detect them in an ineffective way.
But you'd never guess that from the headline, which leads you to believe that we'll be announcing the crew roster for the first mission to Alpha Centauri a week from Monday.

An even shorter time till anticlimax occurred in the article "Could the Star Trek Transporter Be Real? Quantum Teleportation Is Possible, Scientists Say," which was Boldly Going All Over The Internet last week, raising our hopes that the aforementioned warp drive ship crew might report for duty via Miles O'Brien's transporter room.  But despite the headline, we find out pretty quickly that all scientists have been able to transport thus far is an electron's quantum state:
Physicists at the Kavli Institute of Nanoscience at the Delft University of Technology in the Netherlands were able to move quantum information between two quantum bits separated by about 10 feet without altering the spin state of an electron, reported the New York Times. 
In other words, they were able to teleport data without changing it.  Quantum information -- physical information in a quantum state used to distinguish one thing from another -- was moved from one quantum bit to another without any alterations.
Which is pretty damn cool, but still parsecs from "Beam me up, Scotty," something that the author of the article gets around to telling us eventually, if a little reluctantly.  "Does this mean we’ll soon be able to apparate from place to place, Harry Potter-style?" she asks, and despite basically having told us in the first bit of the article that the answer was yes, follows up with, "Sadly, no."


Our last example of discoverus interruptus comes from the field of artificial intelligence, in which it was announced last week that a computer had finally passed the Turing test -- the criterion of fooling a human judge into thinking the respondent was human.

It would be a landmark achievement.  When British computer scientist Alan Turing proposed the test as a rubric for establishing an artificial intelligence, he turned the question around in a way that no one had considered, implying that what was going on inside the machine wasn't important.  Even with a human intelligence, Turing said, all we have access to is the output, and we're perfectly comfortable using it to judge the mental acuity of our friends and neighbors.  So why not judge computers the same way?

The problem is, it's been a tough benchmark to achieve.  Getting a computer to respond as flexibly and creatively as a person has been far more difficult than it would have appeared at first.  So when it was announced last week that a piece of software developed by programmers Vladimir Veselov and Eugene Demchenko was able to fool judges into thinking it was the voice of a thirteen-year-old boy named Eugene Goostman, it made headlines.

The problem was, it only convinced ten people out of a panel of thirty.  I.e., 2/3 of the people who judged the program knew it was a computer.  The achievement becomes even less impressive when you realize that the test had been set up to portray "Goostman" as a non-native speaker of English, to hide any stilted or awkward syntax under the guise of unfamiliarity.

And it still didn't fool people all that well.  Wired did a good takedown of the claim, quoting MIT computational cognitive scientist Joshua Tenenbaum as saying, "There's nothing in this example to be impressed by... it’s not clear that to meet that criterion you have to produce something better than a good chatbot, and have a little luck or other incidental factors on your side."


And those are just the false-hope stories from the past week or so.  I know I'm being a bit of a curmudgeon here, and it's not that I think these stories are uninteresting -- they're merely overhyped.  Which, of course, is what the media does these days.  But fer cryin' in the sink, aren't there enough real scientific discoveries to report on?  How about the cool stuff astronomers just found out about gamma ray bursts?  Or the progress made in developing a vaccine against strep throat?  Or the recent find of exceptionally well-preserved pterosaur eggs in China?

Okay, maybe not as flashy as warp drives, transporters, and A.I.  But more interesting, especially from the standpoint that they're actually telling us about relevant news that really happened as reported, which is more than I can say for the preceding three stories.

Tuesday, July 2, 2013

The creation of Adam

I am absolutely fascinated by the idea of artificial intelligence.

Now, let me be up front that I don't know the first thing about the technical side of it.  I am so low on the technological knowledge scale that I am barely capable of operating a cellphone.  A former principal I worked for used to call me "The Dinosaur," and said (correctly) that I would have been perfectly comfortable teaching in an 18th century lecture hall.

Be that as it may, I find it astonishing how close we're getting to an artificial brain that even the doubters will have no choice but to call "intelligent."  For example, meet Adam Z1, who is the subject of a crowdsourced fund-raising campaign on IndieGoGo:


Make sure you watch the video on the site -- a discussion between Adam and his creators.

Adam is the brainchild of roboticist David Hanson.  And now, Hanson wants to get some funding to work with some of the world's experts in AI -- Ben Goertzel, Mark Tilden, and Gino Yu -- to design a brain that will be "as smart as a three-year-old human."

The sales pitch, which is written as if it were coming from Adam himself, outlines what Hanson and his colleagues are trying to do:

Some of my robot brothers and sisters are already pretty good at what they do -- building stuff in factories and vacuuming the floor and flying planes and so forth.

But as my AI guru friends keep telling me, these bots are all missing one thing: COMMON SENSE.

They're what my buddy Ben Goertzel would call "narrow AI" systems -- they're good at doing one particular kind of thing, but they don't really understand the world, they don't know what they're doing and why.
After getting what is referred to as a "toddler brain," here are a few things that Adam might be able to do:
  • PLAY WITH TOYS!!! ... I'm really looking forward to this.  I want to build stuff with blocks -- build towers with blocks and knock them down, build walls to keep you out ... all the good stuff!
  • DRAW PICTURES ON MY IPAD ... That's right, they're going to buy me an iPad.  Pretty cool, huh?   And they'll teach me to draw pictures on it -- pictures out of my mind, and pictures of what I'm seeing and doing.  Before long I'll be a better artist than David!
  • TALK TO HUMANS ABOUT WHAT I'M DOING  ...  Yeah, you may have guessed already, but I've gotten some help with my human friends in writing this crowdfunding pitch.   But once I've got my new OpenCog-powered brain, I'll be able to tell you about what I'm doing all on my own....  They tell me this is called "experientially grounded language understanding and generation."  I hope I'll understand what that means one day.
  • RESPOND TO HUMAN EMOTIONS WITH MY OWN EMOTIONAL EXPRESSIONS  ...  You're gonna love this one!  I have one heck of a cute little face already, and it can show a load of different expressions.  My new brain will let me understand what emotion one of you meat creatures is showing on your face, and feel a bit of what you're feeling, and show my own feeling right back atcha.   This is most of the reason why my daddy David Hanson gave me such a cute face in the first place.  I may not be very smart yet, but it's obvious even to me that a robot that could THINK but not FEEL wouldn't be a very good thing.  I want to understand EVERYTHING -- including all you wonderful people....
  • MAKE PLANS AND FOLLOW THEM ... AND CHANGE THEM WHEN I NEED TO....   Right now I have to admit I'm a pretty laid back little robot.  I spend most of my time just sitting around waiting for something cool to happen -- like for someone to give me a better brain so I can figure out something else to do!  But once I've got my new brain, I've got big plans, I'll tell you!  And they tell me OpenCog has some pretty good planning and reasoning software, that I'll be able to use to plan out what I do.   I'll start small, sure -- planning stuff to build, and what to say to people, and so forth.  But once I get some practice, the sky's the limit! 
Now, let me say first that I think this is all very cool, and if you can afford to, you should consider contributing to their campaign.  But I have to add, in the interest of honesty, that mostly what I felt when I watched the video on their site is... creeped out.  Adam Z1, for all of his child-like attributes, falls for me squarely into the Uncanny Valley.  Quite honestly, while watching Adam, I wasn't reminded so much of any friendly toddlers I've known as I was of a certain... movie character:


I kept expecting Adam to say, "I would like to have friends very much... so that I can KILL THEM.  And then TAKE OVER THE WORLD."

But leaving aside my gut reaction for a moment, this does bring up the question of what Artificial Intelligence really is.  The topic has been debated at length, and most people seem to fall into one of two camps:

1) If it responds intelligently -- learns, reacts flexibly, processes new information correctly, and participates in higher-order behavior (problem solving, creativity, play) -- then it is de facto intelligent.  It doesn't matter whether that intelligence is seated in a biological, organic machine such as a brain, or in a mechanical device such as a computer.  This is the approach taken by people who buy the idea of the Turing Test, named after computer pioneer Alan Turing, which basically says that if a prospective artificial intelligence can fool a panel of sufficiently intelligent humans, then it's intelligent.

2) Any mechanical, computer-based system will never be intelligent, because at its basis it is a deterministic system that is limited by the underpinning of what the machine can do.  Humans, these folks say, have "something more" that will never be emulated by a computer -- a sense of self that the spiritually-minded amongst us might call a "soul."  Proponents of this take on Artificial Intelligence tend to like American philosopher John Searle, who compared computers to someone in a locked room mechanistically translating passages in English into Chinese, using an English-to-Chinese dictionary.  The output might look intelligent, it might even fool you, but the person in the room has no true understanding of what he is doing.  He is simply converting one string of characters into another using a set of fixed rules.

Predictably, I'm in Turing's camp all the way, largely because I don't think it's ever been demonstrated that our brains are anything more than very sophisticated string-converters.  If you could convince me that humans themselves have that "something more," I might be willing to admit that Searle et al. have a point.  But for right now, I am very much of the opinion that Artificial Intelligence, of a level that would pass the Turing test, is only a matter of time.

So best of luck to David Hanson and his team.  And also best of luck to Adam in his quest to become... a real boy.  Even if what he's currently doing is nothing more than responding in a pre-programmed way, it will be interesting to see what will happen when the best brains in robotics take a crack at giving him an upgrade.