Saturday, September 13, 2025

ASI: The Good, The Bad, and The "AIslopocene"

It should be clear I have mixed feelings about AI these days. I love some AI applications. Photo editing tools? Fantastic! Online apps that help clean up my dyslexic writing? Impressive!

But here's what's bugging me: Large Language Models (LLMs) are becoming a real problem. They're scraping everyone's creative work without permission, then spitting out convincing-sounding nonsense that's often completely wrong. We're heading into what many are calling the "AIslopocene", an era where slick, AI-generated content floods the internet while actual creators get nothing for their stolen work.

The worst part? Big tech companies are making billions by building expensive gateways to these models while contributing zero original content themselves. Meanwhile, the models are trained on everything from conspiracy theories to biased opinions, sometimes producing genuinely dangerous outputs, like when Grok started spouting Hitler's ideas.

I've been working with AI since the late '70s, so I'm not anti-technology. But what we're seeing now feels more like ASI, Arte-ficially Superficial Intelligence. It looks impressive on the surface but lacks real depth or understanding.

That's why I'm being transparent about my AI use. I've been hashtagging my #AIart since 2017, placing specific AI icons/watermarks on images since 2023, and now adding footnotes when I use AI tools for editing or research.

I believe in ethical AI tools that genuinely help people while respecting creators and truth. But we need to stay vigilant about what's real intelligence versus what's just a shiny illusion designed to keep us scrolling.

Proofreading and summary assisted by Claude Sonnet 4 (AI)