Today I'd like to look at two articles that are especially interesting in juxtaposition.
The first is about a study out of the University of New South Wales, where psychology researchers found that people are largely overconfident about their ability to detect AI-generated human faces. No doubt this confidence comes from the fact that spotting them used to be easier -- AI faces had a slick, animated quality that for many of us was an immediate red flag that the image wasn't real.
Not anymore.
It's not the Dunning-Kruger effect -- the (now widely disputed) finding that the least competent people are the most prone to overestimating their own ability -- it's more that the quality of AI images has simply improved. Drastically. One thing that makes this study especially interesting is that the research team deliberately included a cohort of "super-recognizers" -- people whose ability to remember faces is significantly better than average -- alongside a group with ordinary facial recognition ability.
"Up until now, people have been confident of their ability to spot a fake face," said study co-author James Dunn. "But the faces created by the most advanced face-generation systems aren’t so easily detectable anymore... What we saw was that people with average face-recognition ability performed only slightly better than chance. And while super-recognizers performed better than other participants, it was only by a slim margin. What was consistent was people’s confidence in their ability to spot an AI-generated face – even when that confidence wasn’t matched by their actual performance."Generative AI systems produce outputs that are coherent and contextually plausible yet not necessarily anchored in empirical evidence or ground truth. This challenges traditional notions of factuality and prompts a revaluation of what counts as a fact in computational contexts. This paper offers a theoretical examination of AI-generated outputs, employing fact-checking as an epistemic lens. It analyses how three categories of facts – evidence-based facts, interpretative-based facts and rule-based facts – operate in complementary ways, while revealing their limitations when applied to AI-generated content. To address these shortcomings, the paper introduces the concept of emergent facts, drawing on emergence theory in philosophy and complex systems in computer science. Emergent facts arise from the interaction between training data, model architecture, and user prompts; although often plausible, they remain probabilistic, context-dependent, and epistemically opaque.
Is it just me, or does the whole "emergent fact" thing remind you of Kellyanne Conway's breezy, "Yes, well, we have alternative facts"?
I mean, evaluating philosophical claims is way above my pay grade, but doesn't "epistemically opaque" mean "it could be either true or false, and we have no way of knowing which"? And if my interpretation is correct, how can the output of a generative AI system even qualify as a "fact" of any kind?
So, we have AI systems that are capable of fooling people in a realm where most of us have a strikingly good, evolutionarily driven ability -- recognizing what is and what is not a real human face -- and simultaneously, the people who study the meaning of truth are saying straight out that what comes out of large language models is effectively outside the realm of provable truth? It makes sense, given how LLMs work; they're probabilistic sentence generators, using a statistical model to predict, one word at a time, what is likely to come next, based on a mathematical representation of the text they were trained on. It's unsurprising, I suppose, that they sometimes generate bullshit -- and that it sounds really convincing.
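If that sounds abstract, here's a toy sketch in Python of what "probabilistic sentence generator" means. The probabilities are made up purely for illustration -- a real LLM learns billions of parameters over subword tokens, not a little lookup table like this -- but the core move is the same:

```python
import random

# Toy "statistical model": for each word, the probabilities of the
# word that follows it, as if estimated from training text.
# (Made-up numbers, purely for illustration.)
next_word_probs = {
    "the":  {"face": 0.4, "image": 0.35, "photo": 0.25},
    "face": {"is": 0.6, "looks": 0.4},
    "is":   {"real": 0.5, "fake": 0.5},
    "looks": {"real": 0.5, "fake": 0.5},
}

def generate(start, max_words=5):
    """Build a sentence one word at a time by weighted random sampling."""
    words = [start]
    while len(words) < max_words:
        probs = next_word_probs.get(words[-1])
        if probs is None:  # no statistics for this word; stop
            break
        choices, weights = zip(*probs.items())
        # Sample the next word in proportion to its probability.
        # Nothing here checks whether the result is *true*.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the face is fake" -- fluent either way
```

Notice that nothing in there evaluates whether the output is true; "sounds plausible given the training text" is the only criterion.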
Please tell me I'm not the only one who finds this alarming.
Is this really the future that the techbros want? A morass of AI-generated slop that is so cleverly constructed we can't tell the difference between it and reality?
The most frightening thing, to me, is that it puts a terrifying amount of power in the hands of bad actors who will certainly use AI's capacity to mislead for their own malign purposes. Not only by creating fake content and claiming it's real, but the reverse -- claiming real content is fake. For example, when photographic and video evidence of Donald Trump's violent pedophilia is made public -- it's only a matter of time -- I guarantee that he will claim that it's an AI-generated hoax.
And considering "emergent facts" and the phenomenal improvement in AI-generated imagery, will it even be possible to prove otherwise? Gone are the days that you could just count the fingers or look for joints bending the wrong way.
I know I've been harping on the whole AI thing a lot lately, and believe me, I wish I didn't have to. I'd much rather write about cool discoveries in astronomy, geology, genetics, and meteorology. But the current developments are so distressing that I feel driven to post about them, hoping that someone is listening who is in a position to put the brakes on.
Otherwise, I fear that we're headed toward a world where telling truth from lies will slide from "difficult" to "impossible" -- and where that will lead, I have no idea. But it's nowhere good.
