Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, May 1, 2017

Poker face

A wag once said, "Artificial intelligence is twenty years in the future, and always will be."  It's a trenchant remark; predictions about when we'd have computers that could truly think have been off the mark ever since scientists at the Dartmouth Summer Research Project in Artificial Intelligence stated that they would have the problem cracked in a few months...

... back in 1956.

Still, progress has been made.  We now have software that learns from its mistakes, beats grandmasters at strategy games like chess, checkers, and Go, and has come damn close to passing the Turing test.  But emulating human intelligence in a machine has proven more difficult than anyone anticipated back when the first computers were built in the 1940s and 1950s.

We've taken a new stride recently, however.  Just a couple of months ago, researchers at the University of Alberta announced that they had created software that could beat human champions at Texas Hold 'Em, a variant of poker.  Why this is remarkable -- and more of a feat than a computer that can win at chess -- is that previous game-playing software tackled games in which both players have identical information about the state of the game.  In poker, there is hidden information.  Not only that, but a good poker player needs to know how to bluff.

In other words... lie.


Michael Bowling, who led the team at the University of Alberta, said that this turned out to be a real challenge.  "These poker situations are not simple," Bowling said.  "They actually involve asking, 'What do I believe about my opponent’s cards?'"

But the program, called DeepStack, turned out to be quite good at this, despite the daunting fact that in Texas Hold 'Em there are about 10^160 decision points -- more unique scenarios than there are atoms in the universe.  Rather than analyzing all the possibilities, as a program might do in chess (an approach that would be, for all practical purposes, impossible here), DeepStack plays much like a person would -- by speculating on the likelihood of certain outcomes based on the limited information it has.

"It will do its thinking on the fly while it is playing," Bowling said.  "It can actually generalize situations it's never seen before."

Which is pretty amazing.  But not everyone is as impressed as I am.

When Skeptophilia frequent flier Rick Wiles, of End Times radio, heard about DeepStack, he was appalled that we now had a computer that could deceive. "I'm still thinking about programming robots to lie," Wiles said.  "This has been done to us for the past thirty, forty, fifty years -- Deep State has deliberately lied to the public because they concluded that it was in our best interest not to be told the truth...  What's even scarier about the robots that can lie is that they weren't programmed to lie, they learned to lie.  Who's the father of all lies?  Satan is the father of all lies.  Are we going to have demon-possessed artificially intelligent robots?  Is it possible to have demonic spirit to possess an artificial intelligent machine?  Can they possess idols?  Can they inhabit places?  Yeah.  Absolutely.  They can take possession of animals.  They can attach themselves to inanimate objects.  If you have a machine that is capable of lying, then it has to be connected to Lucifer.  Now we’re back to the global brain.  This is where they’re going.  They’re building a global brain that will embody Lucifer’s mind and so Lucifer will be deceiving people through the global brain."

So there's that.  But the ironic thing is that, all demonic-spirit bullshit aside, Wiles may not be so far wrong.  While I think the development of artificial intelligence is fascinating, and I can understand why researchers find it compelling, you have to worry about what our creations might think of us once they do reach sentience.  This goes double if you can no longer be sure that what the computer is telling you is the truth.

Maybe what we should be worried about is not a computer that can pass the Turing test; it's one that can pass the Turing test -- and chooses to pretend, for its own reasons, that it can't.

I mean, the last thing I want is to go on record as saying I agree with Rick Wiles on anything.  But still.

So that's our rather alarming news for the day.  It's not that I think we're headed into The Matrix any time soon; but the idea that we might be supplanted by intelligent machines of our own making, the subject of countless science fiction stories, may not be impossible after all.

And maybe that artificial intelligence of twenty years in the future isn't as far away as we thought.
