Dr. Rosalia Neve is a sociologist and public policy researcher based in Montreal, Quebec. She earned her Ph.D. in Sociology from McGill University, where her work explored the intersection of social inequality, youth development, and community resilience. As a contributor to EvidenceNetwork.ca, Dr. Neve focuses on translating complex social research into clear, actionable insights that inform equitable policy decisions and strengthen community well-being.
Curious. Why would a sociologist who studies social inequality, youth development, and community resilience be writing about an oddity of African geology? If there'd been mention of the social and/or anthropological implications of a continent fracturing, okay, that'd at least make some sense. But there's not a single mention of the human element in the entire article.
So I did a Google search for "Rosalia Neve Montreal." The only hits were from EvidenceNetwork.ca. Then I searched "Rosalia Neve sociology." Same thing. Mighty peculiar that a woman with a Ph.D. in sociology and public policy has not a single publication that shows up on an internet search. At this point, I started to notice some other oddities; her headshot (shown above) is blurry, and the article is full of clickbait-y ads that have nothing to do with geology, science, or (for that matter) sociology and public policy.
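(A side note for anyone who wants to automate that "no publications anywhere" check: the sketch below asks Crossref, a public index of scholarly works, whether it knows of anything by an author with that name. To be clear, the zero-hits-is-suspicious heuristic is mine, and an empty result isn't proof by itself; plenty of real policy researchers never publish in indexed journals.)

```python
# Rough check: does a purported academic leave any trace in Crossref's
# public index of scholarly works? A heuristic, not proof either way.
import requests

def crossref_works_for_author(name: str, rows: int = 5) -> list[str]:
    """Return titles of indexed works whose author field matches `name`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.author": name, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Each hit carries a list of titles; keep the first title of each.
    return [item["title"][0] for item in items if item.get("title")]

if __name__ == "__main__":
    hits = crossref_works_for_author("Rosalia Neve")
    if hits:
        print("Indexed works found:")
        for title in hits:
            print(" -", title)
    else:
        print("No indexed works found for that name.")
```

Crossref's author matching is fuzzy, so a couple of hits for a similar-sounding name don't mean much either; you still have to eyeball the titles.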
At this point, the light bulb went off, and I said to Andrew, "You think this is AI-generated?"
His response: "Sure looks like it."
But how to prove it? It seemed like the best way was to try to find the author. As I said, nothing in the content looked spurious, or even controversial. So Andrew ran a reverse image search on Dr. Neve's headshot... and came up with zero matches outside of EvidenceNetwork.ca. That is suspicious in and of itself. Just about any (real) photograph you feed into a decent reverse image search engine will turn up something, except in the unusual circumstance that the photo genuinely doesn't appear anywhere else online.
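(Another aside: I don't know of a public reverse-image-search API I'd trust enough to show here, so that part stays manual. But if you do end up with two images in hand -- say, the headshot and a candidate match you want to confirm -- a perceptual hash is a quick way to check whether they're essentially the same picture. The imagehash library below is real; the file names and the distance cutoff are placeholders I made up for illustration.)

```python
# Compare two local images with a perceptual hash (pHash). A small Hamming
# distance means "probably the same picture, perhaps re-cropped or resized."
from PIL import Image
import imagehash

def looks_like_same_photo(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    distance = hash_a - hash_b  # Hamming distance between the two hashes
    print(f"pHash distance: {distance}")
    return distance <= max_distance

if __name__ == "__main__":
    # Placeholder file names -- substitute whatever you actually saved locally.
    print(looks_like_same_photo("neve_headshot.jpg", "candidate_match.jpg"))
```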
Our conclusion: Rosalia Neve doesn't exist, and the article and her "photograph" were both completely AI-generated.
[Nota bene: if Rosalia Neve is actually a real person and reads this, I will humbly offer my apologies. But I strongly suspect I'll never have to make good on that.]
It immediately brought to mind something a friend posted last Friday:
What's insidious about all this is that the red flags in this particular piece are actually rather subtle. People do write articles outside the area of their formal education; the irony of my objecting to this is not lost on me. The information in the article, although unremarkable, appears to be accurate enough. Here's the thing, though. This article is convincing precisely because it's so straightforward, and because the purported author is listed with significant academic credentials, albeit ones unrelated to the topic of the piece. Undoubtedly, the entire point of it is garnering ad revenue for EvidenceNetwork.ca. But given how slick this all is, how easy would it be for someone with more nefarious intentions to slip inaccurate, inflammatory, or outright dangerously false information into an AI-generated article credited to an imaginary person who, we're told, has amazing academic credentials? And how many of us would realize it was happening?
More to the point, how many of us would simply swallow it whole?
This is yet another reason I am in the No Way, No How camp on AI. Here in the United States the current regime has bought wholesale the fairy tale that regulations are unnecessary because corporations will Do The Right Thing and regulate themselves in an ethical fashion, despite there being 1,483,279 counterexamples in the history of capitalism. We've gone completely hands-off with AI (and damn near everything else) -- with the result that very soon, there'll be way more questionable stuff flooding every sort of media there is.
Now, as I said above, it might be that Andrew and I are wrong, and Dr. Neve is a real sociologist who simply turns out to be interested in geology, just as I'm a linguist who is, too. What do y'all think? While I hesitate to lead lots of people to click the article link -- which, of course, is exactly what EvidenceNetwork.ca is hoping for -- do you believe this is AI-generated? And, critically, how could you prove it?
We'd all better get real good at this skill, real soon.
Detecting AI slop, I'm afraid, is soon going to be an exercise left for every responsible reader.


Hi Gordon, you should also check out Dr. Kyle Muller, another prolific author on the same website. Undoubtedly faked. What's ironic is the contrast between what the site does and what its mission statement claims. LOL.