Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, June 8, 2018

Artificial psycho

New from the "Don't You People Ever Watch Horror Movies?" department, we have: a group of scientists at MIT who have created an artificial intelligence that is psychopathic.

At least that's kind of what it looks like.  The AI, which had been programmed to analyze, understand, and learn from photographs, was trained on horrific images -- pictures of humans being injured or otherwise abused, obtained from the site Reddit -- and afterwards asked to interpret Rorschach ink blots.

Here are a few of the responses given by the AI, who has been named "Norman" after Norman Bates from Psycho, and, for purposes of comparison, the responses from a control AI trained on a variety of ordinary images (rather than exclusively violent ones):
Control: a close-up of a wedding cake on a table.
Norman: a man killed by a speeding driver. 
Control: a black-and-white photograph of a baseball glove.
Norman: a man murdered by machine gun in broad daylight. 
Control: a black-and-white photograph of a small bird.
Norman: a man being pulled into a dough machine. 
Control: a person holding an umbrella in the air.
Norman: a man shot dead in front of his screaming wife.
Control: a black-and-white photograph of a red-and-white umbrella.
Norman: a man gets electrocuted trying to cross a busy street. 
The trio of scientists responsible, Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan, don't seem unduly concerned by their creation, although they do point out the hazards of training an AI using skewed input.  "Norman suffered from extended exposure to the darkest corners of Reddit," they said in an interview, "and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine-learning algorithms."

[Image released into the Public Domain by its creator, Michel Royon]
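
If you want a feel for the mechanism, here's a deliberately crude little sketch in Python -- nothing remotely like the neural network the MIT team actually built, and the captions are ones I made up -- of a "captioner" that can only describe new images by echoing whatever it was trained on.  Feed it nothing but grim captions, and every ink blot comes out grim:

# A toy sketch, not the MIT team's actual code: a "captioner" that just
# returns whichever training caption shares the most words with its input.
# The point is only that a skewed training set skews every output,
# no matter how neutral the input is.

def word_overlap(a, b):
    """Fraction of words the two strings share (0 = nothing in common)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def caption(blot_description, training_captions):
    """Describe the 'ink blot' using the closest-matching training caption."""
    return max(training_captions, key=lambda c: word_overlap(blot_description, c))

# Made-up stand-ins for "the darkest corners of Reddit" vs. ordinary photos.
norman_training = [
    "a man struck down in the street",
    "a person hurt in a violent accident",
]
control_training = [
    "a bird perched on a branch in the street",
    "a person holding an umbrella on a sunny day",
]

blot = "a dark shape in the middle of the street"
print("Norman sees: ", caption(blot, norman_training))
print("Control sees:", caption(blot, control_training))

A real image-captioning system is vastly more sophisticated than that, of course, but the lesson scales up: the model can only describe the world in terms of what it's been shown.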

What it makes me wonder is to what extent our own brains get co-opted by this sort of thing.  It's often claimed that people who (for example) play lots of violent video games become inured, desensitized, to violence in general.  But maybe it's more than that.  Maybe if we expose ourselves to ugliness, we become more likely to interpret neutral situations as ugly.

Sort of seeing the world through awful-colored glasses.

I saw an example of this, albeit of a milder variety, in my own parents.  My folks were the type that had the television on in the evening whether anyone was watching it or not, and a favorite channel had reruns of the show Cops on every night.  I'm a little puzzled as to why anyone would watch that show to start with -- after all, it's not like the plot varies -- but I noticed that after a time, my parents (especially my mom) started viewing the world as an unsafe place.  People are always waiting to hurt you, she said, and you have to stay on your guard constantly.  I still recall the last thing she told me before I left for a month-long walking tour of England:

"Don't trust ANYBODY."

In England, for fuck's sake.  I mean, it's not like I was planning on hiking across Sudan, or anything.

So what you immerse yourself in day after day does make a difference.  I'm not suggesting that we be Pollyannas, nor that we look at the world the way Dr. Pangloss does in Voltaire's masterpiece Candide ("Everything happens for the best, as this is the best of all possible worlds").  But it bears keeping in mind that we can bias ourselves by what we choose to watch, read, play, and participate in.

And I do hope they know where the "Off" switch is on Norman.  Because that sonofabitch scares the hell out of me.

***********************

This week's featured book is the amazing Thinking, Fast and Slow by Daniel Kahneman, which looks at the fact that we have two modules in our brain for making decisions -- a fast one that works mostly by intuition, and a slower one that is logical and rational.  Unfortunately, they frequently disagree about the best course of action.  Worse still, trouble ensues when we rely on the intuitive one to the exclusion of the logical one, calling it "common sense" when in fact it's far more likely to come from bias than from evidence.

Kahneman's book will make you rethink how you come to conclusions -- and make you all too aware of how frail the human reasoning capacity is.





