In Isaac Asimov's deservedly famous short story "All the Troubles of the World," the megacomputer Multivac has so much data on each person in the world (including detailed brain scans) that it can predict whether someone is going to commit a crime. This allows authorities to take appropriate measures -- defined, of course, in their own terms -- to prevent it from happening.
We took a step toward Asimov's dystopian vision, in which nothing you think is secret, with a paper published this week in Nature Neuroscience describing a new invention called a "brain activity decoder."
Developed by a team of researchers at the University of Texas at Austin, the software uses an fMRI machine to measure neural activity in a person's brain and converts that activity into a continuous stream of text -- i.e., the output is what the person was thinking.
The researchers had volunteers listen to podcasts over headphones while the fMRI watched how their brains responded. This allowed them to compare the actual text the test subjects were hearing with what the brain activity decoder picked up from them. After only a short training period, the results were scary good. One listener heard, "I don't have my driver's license yet," and the decoder generated the output "She has not even started to learn to drive yet." Another had the input, "I didn't know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'", which resulted in the output, "Started to scream and cry, and then she just said, 'I told you to leave me alone.'"
Not perfect, but as a proof-of-concept, it's jaw-dropping.
The researchers touted its possible use for people who have lost the ability to communicate, in situations like locked-in syndrome. However, I don't think it takes an overactive imagination to come up with ways such a device could be abused. What would happen to the concept of privacy if a machine could read your thoughts? What about the Fifth Amendment right against self-incrimination? As in Asimov's story, how could the authorities separate what a person had done from what they were contemplating doing?
Or would they?
Jerry Tang, who led the research, emphasizes that the decoder had to be trained on the person whose thoughts were going to be read; if it were trained on me, it couldn't immediately be used to figure out what you were thinking. My response to that is: yet. This is already leaps and bounds past previous attempts at thought-reading, which were only able to output single words and short sentences. Given more time and further refinements, this technique will only get better.
Or scarier, as the case may be.
Tang also pointed out that even with improvements, the software would be defeated by someone putting up resistance (e.g., deliberately thinking about other things to block the fMRI from getting the correct output). He is also aware of the possibility of abuse. "We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," he said. "We want to make sure people only use these types of technologies when they want to and that it helps them."
Well, maybe. I'm not a conspiracy-theory type, nor someone who thinks that all government is inherently bad. Here, though, it seems like the potential for Orwellian thought-crime is a short step away.
Keep in mind, too, how generally inaccurate our brain's storage system is. As we've seen over and over here at Skeptophilia, what we remember is an amalgam of what actually happened, what we were told happened, what we imagine happened, and a good dollop of falsehood. False memories can feel as convincingly real as accurate ones. If the brain activity decoder were used on an unwilling person to extract their thoughts, there is no guarantee that the output would be at all reflective of reality. In fact, it's almost certain not to be.
But since eyewitness testimony -- in other words, recall -- is considered one of the highest forms of evidence in a court of law, it's no stretch to wonder if a person's thoughts would be given the same undeserved weight.
I'm not sure what the right step is, honestly. There are some who believe that a potential for misuse shouldn't stop scientific progress; anything, they argue, can be used for harm. Others feel like the hazards can sometimes outweigh the benefits, and trusting the powers-that-be to do the right thing with technology this powerful is foolish.
I don't have an answer. But I will say that my mind was forced back to the prescient quote from another seminal science fiction writer, Michael Crichton: "Scientists are preoccupied with accomplishment. So they are focused on whether they can do something. They never stop to ask if they should."