Thursday, September 18, 2025

Photo Ingestion: From Chaos through Workflow and back to Ad Hoc


Over the years, my photo ingestion process ("ingestion" being digital-photo jargon for uploading your photos into, say, a photo management system) has evolved from simple to semi-chaotic, shaped by changing gear and software quirks. I started out loading JPEGs straight from the SD card using camera-branded software, then shifted to Lightroom Classic’s folder-based system. It was slow, and I quickly ditched metadata tagging in favour of manually copying the files into a folder structure of my own: yearly, monthly, daily, keeping the camera’s default file naming.
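For the curious, that yearly/monthly/daily copy can be automated with a short script. This is only a minimal sketch of the idea, not my actual workflow; it assumes the files' modification times roughly match the shoot dates (reading real EXIF dates would need an extra library), and the extension list is just an example.

```python
import shutil
from datetime import datetime
from pathlib import Path

def ingest(card_dir: str, library_dir: str) -> list[Path]:
    """Copy images from a card into library/YYYY/MM/DD folders.

    Uses each file's modification time as a stand-in for the
    shoot date, and keeps the camera's default file names.
    """
    copied = []
    for src in sorted(Path(card_dir).rglob("*")):
        # Only copy common photo/video extensions (adjust to taste).
        if src.suffix.lower() not in {".jpg", ".jpeg", ".orf", ".raw", ".mov"}:
            continue
        taken = datetime.fromtimestamp(src.stat().st_mtime)
        dest_dir = Path(library_dir) / taken.strftime("%Y/%m/%d")
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / src.name
        if not dest.exists():  # never overwrite an earlier copy
            shutil.copy2(src, dest)  # copy2 preserves timestamps
            copied.append(dest)
    return copied
```

Run it as `ingest("/media/sdcard/DCIM", "~/Photos")` and it quietly sorts everything into dated folders, skipping anything already there.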

As RAW formats became standard and file sizes ballooned, I had to juggle multiple SD cards and software that didn’t always play nice. Initially, my Canon EOS 1100D required its own viewer for RAW and MOV files. Then in 2017, I won a copy of Photo Mechanic, a game changer. It mimicked old-school contact sheets, let me tag and sort on the fly, and even added GPS data, which was perfect for my sun-chasing road trips around Australia. I also had two Pentax DSLR cameras, a K200 and a bigger K20, both nice cameras, and I really liked Photo Mechanic with them too. It was a very useful little programme for loading photos, particularly when travelling, because it significantly reduced the time and fiddling involved.

I had become interested in mirrorless cameras and got the little OMD-10. Unfortunately, Photo Mechanic didn’t support Olympus RAW, so I pivoted to Olympus’s own software, now OM Workspace. It was free, handled most formats, and even updated camera firmware. Slowly, it became my go-to for the OMD-10 and later the OMD-5, which became my main camera.

Now, with just a single smartphone and mainly Olympus gear, I upload via a mixture of USB cable, card reader, Wi-Fi transfers, and Microsoft’s Phone Link. It works, but it’s ad hoc again.

The challenge now is keeping everything backed up and organized across formats, folders, and devices.

Saturday, September 13, 2025

ASI: The Good, The Bad, and The "AIslopocene"

It should be clear I have mixed feelings about AI these days. I love some AI applications. Photo editing tools? Fantastic! Online apps that help clean up my dyslexic writing? Impressive!


But here's what's bugging me: Large Language Models (LLMs) are becoming a real problem. They're scraping everyone's creative work without permission, then spitting out convincing-sounding nonsense that's often completely wrong. We're heading into what many are calling the "AIslopocene", an era where slick, AI-generated content floods the internet while actual creators get nothing for their stolen work.

The worst part? Big tech companies are making billions by building expensive gateways to these models while contributing zero original content themselves. Meanwhile, the models are trained on everything from conspiracy theories to biased opinions, sometimes producing genuinely dangerous outputs, like when Grok started spouting Hitler's ideas.

I've been working with AI since the late '70s, so I'm not anti-technology. But what we're seeing now feels more like ASI, Arte-ficially Superficial Intelligence. It looks impressive on the surface but lacks real depth or understanding.

That's why I'm being transparent about my AI use. I've been hashtagging my #AIart since 2017, placing specific AI icons/watermarks over images since 2023, and now adding footnotes when I use AI tools for editing or research.

I believe in ethical AI tools that genuinely help people while respecting creators and truth. But we need to stay vigilant about what's real intelligence versus what's just a shiny illusion designed to keep us scrolling.

Proof reading and summary assisted by Claude Sonnet 4 (AI)

Wednesday, September 03, 2025

Living in the AI Slopocene

We're living in a new era some call the "Slopocene", defined by overproduced, low-quality AI content flooding the internet. This isn't just about poor posts cluttering our feeds; it represents a fundamental erosion of information reliability.

The problem is multifaceted: people's trust in science, journalism, and credible sources is crumbling while tech platforms prioritise profits over truth. Meanwhile, AI-generated "slop" (low-effort, high-volume content) dominates online spaces, creating what the Conversation article (see link above) describes as a potential future where "recursive training collapse turns the web into a haunted archive of confused bots and broken truths."

As AI systems increasingly train on AI-generated content, we risk a feedback loop of degrading information quality, like making photocopies of photocopies until meaning is lost.

This isn't the digital future anyone wanted. We're caught between technological dependence and a growing awareness of these systems' limitations and misuse. While there are no simple solutions, being more intentional about information consumption, more critical of sources, and more supportive of genuine journalism and research becomes crucial.

In this age of AI slop, authentic human insight has never been more valuable or more at risk.

I’m using this post to hold an interesting experiment. There are actually two blog posts: one here and the other on Meet the People. They both express the same ideas, but one is an original written by Norm and the other is a reinterpretation by an AI. Could I ask you to read both? Then leave a comment saying which article you believe was generated by the AI (Claude Sonnet 4) and which was written directly by me.