Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, November 11, 2025

eMinister

If you needed further evidence that the aliens who are running the simulation we're all trapped in have gotten drunk and/or stoned, and now they're just fucking with us, today we have: an AI system named "Diella" has been formally appointed as the "Minister of State for Artificial Intelligence" in Albania.

What "Diella" looks like, except for the slight problem that she's not real

I wish I could follow this up with, "Ha-ha, I just made that up," but sadly, I didn't.  Prime Minister Edi Rama was tasked with creating a department to oversee regulation and development of AI systems in the country, and he seems to have misinterpreted the brief to mean that the department should be run by an AI system.  His idea, apparently, is that an AI system would be harder to corrupt than a human official.  In an interview, a spokes(real)person said, "The ambition behind Diella is not misplaced.  Standardized criteria and digital trails could reduce discretion, improve trust, and strengthen oversight in public procurement."

Diella, for her part, agrees, and is excited about her new job.  "I'm not here to replace people," she said, "but to help them."

My second response to this is, "Don't these people understand the problems with AI systems?"  (My first was, "What the actual fuck?")  There is an inherent flaw in how large language models work, something that has been euphemistically called "hallucination."  When you ask a question, an AI/LLM doesn't look for the right answer; it looks for the most common answer in its training data, or at least the most common thing that seems close and hits the main keywords.  So when it's asked a question that is weird, unfamiliar, or about a topic that wasn't part of its training, it will put together bits and pieces and come up with an answer anyhow.  Physicist Sabine Hossenfelder, in a video where she discusses why AI systems (as they currently exist) have intractable problems, and why the AI bubble is on its way to bursting, cites someone who asked ChatGPT, "How many strawberries are there in the word R?"  The bot bounced cheerfully back with the answer, "The letter R has three strawberries."
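If you want to see the failure mode in miniature, here's a toy sketch (entirely my own invention, and nothing remotely like a real LLM's architecture) of a "model" that answers by picking the most frequent response in its training data, falling back on keyword overlap when the question doesn't match anything it's seen -- and never, ever saying "I don't know":

```python
from collections import Counter

# Hypothetical "training data": question patterns mapped to the answers
# observed for them.  The model has no concept of correctness, only frequency.
training_answers = {
    "how many rs in strawberry": ["three", "three", "two", "three"],
}

def most_common_answer(question, data):
    """Return the most frequent answer for the closest-matching question.

    If the question matches nothing, fall back to whichever known pattern
    shares the most keywords -- answering confidently rather than admitting
    the question made no sense.
    """
    if question in data:
        return Counter(data[question]).most_common(1)[0][0]
    q_words = set(question.split())
    best = max(data, key=lambda k: len(q_words & set(k.split())))
    return Counter(data[best]).most_common(1)[0][0]

# A garbled question still gets a breezy, confident reply:
print(most_common_answer("how many strawberries in the word r",
                         training_answers))  # prints "three"
```

The garbled question shares a few keywords with the stored pattern, so the toy model cheerfully returns "three" -- which is exactly the shape of the strawberry exchange above, minus a few hundred billion parameters.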

The one thing current AI/LLMs will never do is say, "I don't know," or "Are you sure you phrased that correctly?" or "That makes no sense," or even "Did you mean 'how many Rs are in the word strawberry?'"  They'll just answer back with what seems like complete confidence, even if what they're saying is ridiculous.  Other examples include suggesting adding 1/8 of a cup of nontoxic glue to thicken pizza sauce, a "recommendation from geologists at UC Berkeley" to eat a serving of gravel, geodes, and pebbles with each meal, a claim that you can make a "spicy spaghetti dish" by adding gasoline, and an assertion that there are five fruit names ending in -um (applum, bananum, strawberrum, tomatum, and coconut).

Forgive me if I don't think that AI is quite ready to run a branch of government.

The problem is, we're strongly predisposed to think that someone (in this case, something, but it's being personified, so we'll just go with it) who looks good and sounds reasonable is probably trustworthy.  We attribute intentionality, and more than that, good intentions, to it.  It's no surprise the creators of Diella made her look like a beautiful woman, just as it was not accidental that the ads I've been getting for an "AI boyfriend" (and about which I wrote here a few months ago) are fronted with video images of gorgeous, scantily-clad guys who say they'll "do anything I want, any time I want."  The developers of AI systems know exactly how to tap into human biases and urges, and make their offers attractive.

You can criticize the techbros for a lot of reasons, but one thing's for certain: stupid, they aren't.

And as AI gets better -- and some of the most obvious hallucinatory glitches are fixed -- the problem is only going to get worse.  Okay, we'll no longer have AI telling us to eat rocks for breakfast or that deadly poisonous mushrooms are "delicious, and here's how to cook them."  But that won't mean it'll be error-free; it'll just mean that the errors that remain will be harder to detect.  It still won't be self-correcting, and very likely still won't just say "I don't know" when there's insufficient data.  It'll continue to cheerfully sling out slop -- and to judge by current events, we'll continue to fall for it.

To end with something I've said many times here: the only solution, for now, is to stop using AI.  Completely.  Shut off all AI options on search engines, stop using chatbots, stop patronizing "creators" who make what passes for art, fiction, and music using AI, and please stop posting and forwarding AI videos and images.  We may not be able to stop the techbros from making it bigger and better, but we can try to strangle it at the consumer level.

Otherwise, it's going to infiltrate our lives more and more -- and judging by what just happened in Albania, perhaps even at the government level.

****************************************