Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, September 30, 2024

Chutzpah

As always, Yiddish has a word for it, and the word is chutzpah.

Chutzpah means extreme self-confidence and audacity, but there's more to it than that.  There's a cheekiness to it, an in-your-face, scornful sense of "I dare you even to try to do something about this."  As writer Leo Rosten put it, "Chutzpah is the guy who killed both of his parents and then appealed to the judge for mercy because he's an orphan."

The reason this comes up is, unsurprisingly, Mark Zuckerberg, who raises chutzpah to the level of performance art.  This time it's because of his interview last week with The Verge, which looked at his company Meta's embrace of AI -- and his sneering attitude toward the creative people whose work is being stolen to train it, and without which the entire enterprise wouldn't even get off the ground.  When asked whether this was fair or ethical, Zuckerberg basically said that the question was irrelevant, because if someone objected, their work was of little worth anyhow.

"I think individual creators or publishers tend to overestimate the value of their specific content in the grand scheme of this," Zuckerberg said.  "My guess is that there are going to be certain partnerships that get made when content is really important and valuable.  But if creators are concerned or object, when push comes to shove, if they demanded that we don’t use their content, then we just wouldn’t use their content.  It’s not like that’s going to change the outcome of this stuff that much...  I think that in any new medium in technology, there are the concepts around fair use and where the boundary is between what you have control over.  When you put something out in the world, to what degree do you still get to control it and own it and license it?"

In other words: if you ask Meta not to use your intellectual property, they'll comply.  But not because it's the right thing to do.  It's because there are tens of thousands of other artists, photographers, and writers out there to fuck over.  Anything accessible on the internet is fair game -- once again, not because it's legal or ethical, but because (1) most of the time the creator doesn't know their material is being used for free, and (2) even if they find out, few creative people have the resources to sue Mark Zuckerberg.

He can just shrug his shoulders and say "fine, then," because there's a ton of other people out there to exploit.

Chutzpah.

Add to this an article that also appeared last week, this time over at CNN, which adds insult to injury.  This one is about how Zuckerberg is now the fourth-richest person in the world, with a net worth of around two hundred billion dollars.

Let me put that in perspective for you.  Assuming no further increase in his net worth, if Mark Zuckerberg gave away a million dollars every single day, he would finally run out in 548 years.
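
(If you want to check that arithmetic, here's a quick back-of-the-envelope calculation in Python -- the round $200 billion figure and the million-dollars-a-day giveaway rate are just the assumptions from the paragraph above, not exact numbers:)

net_worth = 200_000_000_000        # dollars -- the rough figure from the CNN article
daily_giveaway = 1_000_000         # dollars given away per day

days = net_worth / daily_giveaway  # 200,000 days
years = days / 365.25              # about 547.6 years

print(f"{days:,.0f} days, or roughly {years:.0f} years")   # -> 200,000 days, or roughly 548 years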

Because of all this, it's only with deep reluctance that I still participate in Meta-owned social media sites like Facebook and Instagram.  Removing myself from them would cut me off completely, not only from opportunities to market my work, but from friends I seldom get a chance to see in person.  What are my other options?  The Elon Musk-owned far-right-wing cesspool formerly known as Twitter?  TikTok, which stands a fair chance of being shut down in the United States because of allegations of data mining by China?  I'm on Bluesky, but I'm damned if I can figure out how to get any traction there -- most of my posts get ignored completely.

You gotta give Zuckerberg one thing: he knows how to back people into a corner.

I know some of my bitterness over all this comes from how hard I've worked as a writer, and how little recompense I've ever gotten.  I've written Skeptophilia for twelve years, have over five and a half million lifetime hits on the site, and other than some kind donations (for which I will always be grateful) haven't made a damn thing from it.  I have twenty-odd novels in print, through two different traditional publishers plus a handful that are self-published, and have never netted more than five hundred dollars a year from them.  I'll own some of this; I absolutely suck at marketing and self-promotion, largely because it was hammered into me as a child that being proud of, or even talking about, my accomplishments was "conceit," an attitude I've never really recovered from.  And fine, I'm willing even to accept that maybe I have an over-inflated sense of my own skill as a writer and how much I should expect to make.

So fair enough: I should admit the possibility that I haven't succeeded as a writer because I'm not good enough to deserve success.

And I could leave it there, except for the fact that I'm not alone.  As part of the writing community, I can name without even trying hard two dozen exceptionally talented, hard-working writers who struggle to sell enough books to make it worth their time.  They keep going only because they love storytelling and are really good at it.  Just about all of them have day jobs so they can pay the mortgage and buy food.

Maybe I can't be unbiased about my own writing, but I'll be damned if I'll accept that all of us are creating work that isn't "important or valuable."


So the fact is, AI will continue to steal the work of people like me, who can ill afford to lose the income, and assholes like Mark Zuckerberg will continue to accrue wealth at levels somewhere way beyond astronomical, all the while thumbing their noses at us simply because they can.  The only solution is one I've proposed before: stop using AI.  Completely.  Yes, there are undoubtedly ways it could be used ethically, but at the moment, it's not, and it won't be until the techbros see enough people opting out for the message to get hammered home.

But until then, my personal message to Mark Zuckerberg is a resounding "fuck you, you obnoxious, arrogant putz."  The last word of which, by the way, is also Yiddish.  If you don't know it, I'll leave it to you to research its meaning.

****************************************


Wednesday, May 22, 2024

Hallucinations

If yesterday's post -- about creating pseudo-interactive online avatars for dead people -- didn't make you question where our use of artificial intelligence is heading, today we have a study out of Purdue University which found that when ChatGPT was used to answer programming and coding questions, half of its answers contained incorrect information -- and 39% of the people receiving those answers didn't recognize them as incorrect.

The problem of an AI system basically just making shit up is called a "hallucination," and it's proven to be extremely difficult to eradicate.  This is at least partly because the answers are still generated using real data, so they can sound plausible; it's the software version of a student who only paid attention half the time and then has to take a test, and answers the questions by taking whatever vocabulary words he happens to remember and gluing them together with bullshit.  Google's Bard chatbot, for example, claimed that the James Webb Space Telescope had captured the first photograph of a planet outside the Solar System (a believable lie, but it didn't).  Meta's AI Galactica was asked to draft a paper on the software for creating avatars, and cited a fictitious paper by a real author who works in the field.  Data scientist Teresa Kubacka was testing ChatGPT and decided to throw in a reference to a fictional device -- the "cycloidal inverted electromagnon" -- just to see what the AI would do with it, and it came up with a description of the thing so detailed (with dozens of citations) that Kubacka found herself compelled to check and see if she'd accidentally used the name of something obscure but real.

It gets worse than that.  A study of an AI-powered mushroom-identification app found it got the answer right only fifty percent of the time -- and, frighteningly, provided cooking instructions when presented with a photograph of a deadly Amanita mushroom.  Fall for that little "hallucination" and three days later at your autopsy they'll have to pour your liver out of your abdomen.  Maybe the AI was trained on Terry Pratchett's line that "All mushrooms are edible.  Some are only edible once."

[Image licensed under the Creative Commons Marketcomlabo, Image-chatgpt, CC BY-SA 4.0]

Apparently, in inventing AI, we've accidentally imbued it with the very human capacity for lying.

I have to admit that when the first AI became widely available, it was very tempting to play with it -- especially the photo modification software of the "see what you'd look like as a Tolkien Elf" type.  Better sense prevailed, so alas, I'll never find out how handsome Gordofindel is.  (A pity, because human Gordon could definitely use an upgrade.)  Here, of course, the problem isn't veracity; the problem is that the model is trained using artwork and photography that is (not to put too fine a point on it) stolen.  There have been AI-generated works of "art" that contained the still-legible signature of the artist whose pieces were used to train the software -- and of course, neither that artist nor the millions of others whose images were "scrubbed" from the internet by the software received a penny's worth of compensation for their time, effort, and skill.

It doesn't end there.  Recently actress Scarlett Johansson announced that she had to threaten legal action against Sam Altman, CEO of OpenAI, to get the company to discontinue the use of a synthesized version of her voice that was so accurate it fooled her family and friends.


Fortunately for Ms. Johansson, she's got the resources to sue Altman, but most creatives simply don't.  If we even find out that our work has been lifted, we really don't have any recourse to fight the AI techbros' claims that it's "fair use." 

The problem is, the system is set up so that it's already damn near impossible for writers, artists, and musicians to make a living.  I've got over twenty books in print, through two different publishers plus a handful that are self-published, and I have never made more than five hundred dollars a year.  My wife, Carol Bloomgarden, is an astonishingly talented visual artist who shows all over the northeastern United States, and in any given show it's a good day when she sells enough to pay for her booth fees, lodging, travel expenses, and food.

So throw a bunch of AI-insta-generated pretty-looking crap into the mix, and what happens -- especially when the "artist" can sell it for one-tenth of the price and still turn a profit? 

I'll end with a plea I've made before: until lawmakers can put the brakes on AI to protect safety, security, and intellectual property rights, we all need to stop using it.  Period.  This is not out of any fundamental anti-tech Luddite-ism; it's simply from the absolute certainty that the techbros are not going to police themselves, not when there's a profit to be made, and the only leverage we have is our own use of the technology.  So stop posting and sharing AI-generated photographs.  I don't care how "beautiful" or "precious" they are.  (And if you don't know the source of an image with enough certainty to cite an actual artist or photographer's name or Creative Commons handle, don't share it.  It's that simple.)

As a friend of mine put it, "As usual, it's not the technology that's the problem, it's the users."  Which is true enough; there are myriad potentially wonderful uses for AI, especially once they figure out how to debug it.  But at the moment, it's being promoted by people who have zero regard for the rights of human creatives, and are willing to steal their writing, art, music, and even their voices without batting an eyelash.  They are shrugging their shoulders at their systems "hallucinating" incorrect information, including information that could potentially harm or kill you.

So just... stop.  Ultimately, we are in control here, but only if we choose to exert the power we have.

Otherwise, the tech companies will continue to stomp on the accelerator, authenticity, fairness, and truth be damned.

****************************************