Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, May 21, 2024

Memento mori

In this week's episode of the current season of Doctor Who, entitled "Boom," the body of a soldier killed in battle is converted into a rather creepy-looking cylinder that has the capacity for producing a moving, speaking hologram of the dead man, which has enough of his memory and personality imprinted on it that his friends and family can interact with it as if he were still alive.

I suspect I'm not alone in having found this scene rather disturbing, especially when his daughter has a chat with the hologram and seems completely unperturbed that her dad has just been brutally killed.

Lest you think this is just another wild trope dreamed up by Steven Moffat and Russell T. Davies, there are already (at least) two companies that do exactly this -- Silicon Intelligence and Super Brain.  Both of them have models using generative AI that scour your photos, videos, and written communication to produce a convincing online version of you, which can then interact with your family and friends in (presumably) a very similar fashion to how you did when you were alive.

I'm not the only one having an "okay, just hold on a minute" reaction to this.  Ethicists Katarzyna Nowaczyk-Basińska and Tomasz Hollanek, both of Cambridge University, considered the implications of "griefbots" in a paper published last week in the journal Philosophy & Technology, and were interviewed this week in Science News; they raise some serious objections to the practice.

The stance of the researchers is that at the very least there should be some kind of safeguard to protect the young from accessing this technology (since, just as in Doctor Who, there's the concern that children wouldn't be able to recognize that they weren't talking to their actual loved one, with serious psychological repercussions), and that it should be made clear to all users that they're communicating with an AI.  But they bring up a problem I hadn't even thought of: what's to stop companies from monetizing griefbots by including canny advertisements for paying sponsors?  "Our concern," said Nowaczyk-Basińska, "is that griefbots might become a new space for a very sneaky product placement, encroaching upon the dignity of the deceased and disrespecting their memory."

Ah, capitalism.  There isn't anything so sacred that it can't be hijacked to make money.

But as far as griefbots in general go, my sense is that the entire thing crosses some kind of ethical line.  I'm not entirely sure why, other than the "it just ain't right" arguments that devolve pretty quickly into the naturalistic fallacy.  Especially given my atheism, and my hunch that after I die there'll be nothing left of my consciousness, why would I care if my wife made an interactive computer model of me to talk to?  If it gives her solace, what's the harm?

I think one consideration is that by doing so, we're not really cheating death.  To put it bluntly, it's deriving comfort from a lie.  The virtual-reality model inside the computer isn't me, any more than a photograph or a video clip is.  But suppose we really go off the deep end, here, and consider what it would be like if someone could actually emulate the human brain in a machine -- and not just a random brain, but yours?

There's at least a theoretical possibility that you could have a computerized personality that would be completely authentic, with your thoughts, memories, sense of humor, and emotions.  (The current ones are a long way from that -- but even so, they're still scarily convincing.)  Notwithstanding my opinions on the topic of religion and the existence of the soul, there's a part of me that simply rebels at this idea.  Such a creation might look and act like me, but it wouldn't be me.  It might be a convincing facsimile, but that's about it.

But what about the Turing test?  Devised by Alan Turing, the idea of the Turing test for artificial intelligence is that because we don't have direct access to what any other sentient being is experiencing -- each of us is locked inside his/her own skull -- the only way to evaluate whether something is intelligent is the way it acts.  The sensory experience of the brain is a black box.  So if scientists made a Virtual Gordon, who acted on the computer screen in a completely authentic Gordonesque manner, would it not only be intelligent and alive, but... me?

In that way, some form of you might achieve immortality, as long as there was a computer there to host you.

This is moving into some seriously sketchy territory for most of us.  It's not that I'm eager to die; I tend to agree with my dad, who, when asked what he wanted written on his gravestone, responded, "He's not here yet."  But as hard as it is to lose someone you love, this strikes me as a cheat, a way to deny reality, closing your eyes to part of what it means to be human.

So when I die, let me go.  Give me a Viking funeral -- put me on my canoe, set it on fire, and launch it out into the ocean.  Then my friends and family need to throw a huge party in my honor, with lots of music and dancing and good red wine and drunken debauchery.  And I think I want my epitaph to be the one I created for one of my fictional characters, also a science nerd and a staunch atheist: "Onward into the next great mystery."

For me, that will be enough.
