Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label generative AI. Show all posts

Tuesday, May 21, 2024

Memento mori

In this week's episode of the current season of Doctor Who, entitled "Boom," the body of a soldier killed in battle is converted into a rather creepy-looking cylinder that has the capacity for producing a moving, speaking hologram of the dead man, which has enough of his memory and personality imprinted on it that his friends and family can interact with it as if he were still alive.


I suspect I'm not alone in having found this scene rather disturbing, especially when his daughter has a chat with the hologram and seems completely unperturbed that her dad has just been brutally killed.

Lest you think this is just another wild trope dreamed up by Steven Moffat and Russell T. Davies, there are already (at least) two companies that do exactly this -- Silicon Intelligence and Super Brain.  Both of them have generative AI models that scour your photos, videos, and written communication to produce a convincing online version of you, which can then interact with your family and friends in (presumably) a very similar fashion to how you did when you were alive.

I'm not the only one who is having an "okay, just hold on a minute" reaction to this.  Ethicists Katarzyna Nowaczyk-Basińska and Tomasz Hollanek, both of Cambridge University, considered the implications of "griefbots" in a paper last week in the journal Philosophy & Technology, and were interviewed this week in Science News, and they raise some serious objections to the practice.

The stance of the researchers is that at the very least there should be some kind of safeguard to protect the young from accessing this technology (since, just as in Doctor Who, there's the concern that children wouldn't be able to recognize that they weren't talking to their actual loved one, with serious psychological repercussions), and that it be clear to all users that they're communicating with an AI.  But they bring up a problem I hadn't even thought of; what's to stop companies from monetizing griefbots by including canny advertisements for paying sponsors?  "Our concern," said Nowaczyk-Basińska, "is that griefbots might become a new space for a very sneaky product placement, encroaching upon the dignity of the deceased and disrespecting their memory."

Ah, capitalism.  There isn't anything so sacred that it can't be hijacked to make money.

But as far as griefbots in general go, my sense is that the entire thing crosses some kind of ethical line.  I'm not entirely sure why, other than the "it just ain't right" arguments that devolve pretty quickly into the naturalistic fallacy.  Especially given my atheism, and my hunch that after I die there'll be nothing left of my consciousness, why would I care if my wife made an interactive computer model of me to talk to?  If it gives her solace, what's the harm?

I think one consideration is that by doing so, we're not really cheating death.  To put it bluntly, it's deriving comfort from a lie.  The virtual-reality model inside the computer isn't me, any more than a photograph or a video clip is.  But suppose we really go off the deep end, here, and consider what it would be like if someone could actually emulate the human brain in a machine -- and not just a random brain, but yours?

There's at least a theoretical possibility that you could have a computerized personality that would be completely authentic, with your thoughts, memories, sense of humor, and emotions.  (The current ones are a long way from that -- but even so, they're still scarily convincing.)  Notwithstanding my opinions on the topic of religion and the existence of the soul, there's a part of me that simply rebels at this idea.  Such a creation might look and act like me, but it wouldn't be me.  It might be a convincing facsimile, but that's about it.

But what about the Turing test?  Devised by Alan Turing, the idea of the Turing test for artificial intelligence is that because we don't have direct access to what any other sentient being is experiencing -- each of us is locked inside his/her own skull -- the only way to evaluate whether something is intelligent is the way it acts.  The sensory experience of the brain is a black box.  So if scientists made a Virtual Gordon, who acted on the computer screen in a completely authentic Gordonesque manner, would it not only be intelligent and alive, but... me?

In that way, some form of you might achieve immortality, as long as there was a computer there to host you.

This is moving into some seriously sketchy territory for most of us.  It's not that I'm eager to die; I tend to agree with my dad, who when he was asked what he wanted written on his gravestone, responded, "He's not here yet."  But as hard as it is to lose someone you love, this strikes me as a cheat, a way to deny reality, closing your eyes to part of what it means to be human.

So when I die, let me go.  Give me a Viking funeral -- put me on my canoe, set it on fire, and launch it out into the ocean.  Then my friends and family need to throw a huge party in my honor, with lots of music and dancing and good red wine and drunken debauchery.  And I think I want my epitaph to be the one I created for one of my fictional characters, also a science nerd and a staunch atheist: "Onward into the next great mystery."

For me, that will be enough.

****************************************



Friday, June 23, 2023

Stolen voices

AI scares the hell out of me.

Not, perhaps, for the reason you might be thinking.  Lately there have been scores of articles warning about the development of broad-ability generative AI, and how we're in for it as a species if that happens -- that AI will decide we're superfluous, or even hazardous for its own survival, and it'll proceed to either enslave us (The Matrix-style) or else do away with us entirely.

For a variety of reasons, I think that's unlikely.  For one, I think conscious, self-aware AI is a long way away (although it must be mentioned that I'm kind of lousy at predictions; I distinctly recall telling my AP Biology class that "adult tissue cloning is at least ten years in the future" the week before the Dolly the sheep research was released).  For another, you have to wonder how, practically, AI would accomplish killing us all.  Maybe a malevolent AI could infiltrate our computer systems and screw things up royally, but wiping us out as a species is very hard to imagine.

However.

I'm seriously worried about AI's escalating impact on creative people.  As a fiction writer, I follow a lot of authors on Twitter, and in the past week there's been alarm over a new application of AI tools (such as Sudowrite and ChatGPT) that will "write a novel" given only a handful of prompts.  The overall reaction to this has been "this is not creativity!", which I agree with, but what's to stop publishers from cutting costs -- skipping the middleman, so to speak -- and simply AI-generating novels to sell?  No need to deal with (or pay) pesky authors.  Just put in, "write a space epic about an orphan, a smuggler, and a princess who get caught up in a battle to stop an evil empire," and presto!  You have the next Star Wars in a matter of minutes.

If you think this isn't already happening, you're fooling yourself.  Every year, the group Queer Science Fiction hosts a three-hundred-word flash fiction contest, and publishes an anthology of the best entries.  (Brief brag: I've gotten into the anthology two years running, and last year my submission, "Refraction," won the Director's Pick Award.  I should hear soon if I got the hat trick and made it into this year's anthology.)  J. Scott Coatsworth (a wonderful author in his own right), who manages the contest, said that for the first time this year he had to run submissions through an algorithm to detect AI-generated writing -- and caught (and disqualified) ten entries.

If people are taking these kinds of shortcuts to avoid writing a three-hundred-word story, how much more incentive is there to use AI to avoid the hard work and time required to write a ninety-thousand-word novel?  And how much longer will it be before AI becomes good enough to slip past the detection algorithms?

And it's not just writing.  You've no doubt heard of the issue with AI art, but do you know about the impact on music?  Musician Rick Beato did a piece on YouTube about AI voice synthesis that is fascinating and terrifying.  It includes a clip of a "new Paul McCartney/John Lennon duet" -- completely AI-created, of course -- that is absolutely convincing.  He frames the question as, "who owns your voice?"  It's a more complex issue than it appears at first.  Parodists and mimics imitate famous voices all the time, and as long as they're not claiming to actually be the person they're imitating, it's all perfectly legal.  So what happens if a music producer decides to generate an AI Taylor Swift song?  No need to pay the real Taylor Swift; no expensive recording studio time needed.  As long as it's labeled "AI Taylor Swift," it seems like it should be legal.

Horrifyingly unethical, yes.  But legal.

And because all of this boils down to money, you know it's going to happen.  "Write a novel in the style of Stephen King."  "Create a new song by Linkin Park."  "Generate a painting that looks like Salvador Dalí."  What happens to the actual artists, musicians, and writers?  Once your voice is stolen and synthesized, what need is there for your real voice any more?

Of course, I think that creatives are absolutely critical; our voices are unique and irreplaceable.  The problem is, if an AI can get close enough to the real thing, you can bet consumers are going to go for it, not only because AI-generated content will be a great deal cheaper, but also for the sheer novelty.  ("Listen to this!  Can you believe this isn't actually Beyoncé?")  As an author, I can vouch for the fact that it's already hard enough to get your work out to the public, have it seen and read and reviewed.

What will we do when the market is flooded with cheap, mediocre-but-adequate AI-generated content?

I'm no legal expert, and I don't have any ready solutions for how this could be fairly managed.  There are positive uses for AI, so "ban it all" isn't the answer.  And in any case, the genie is out of the bottle; any efforts to stop AI development at this point are doomed to failure.

But we have to figure out how to protect the voices of creatives.  Because without our voices, we've lost the one thing that truly makes us human.

****************************************