Wednesday, December 31, 2025

Reliable Truth or Hallucination



LLM chatbots seem amazing at first, but they're just predicting which word comes next; there's nothing inherently intelligent about that. As a classic dyslexic, I've learned not to trust words. They vanish or scramble at critical moments. Even when I know how to spell them, the letters can get mixed up when I write them down or type them. Yet I have no trouble learning and remembering real-world truths, which form brilliant networks of understanding in my mind.

Sorry Alvin, but that lightweight "iridescent AI tracksuit" doesn't exist. You are not wearing such clothes; they're a figment of your imagination. More precisely, a "hallucination" from a massive large language model that even its creators and the best computer engineers struggle to understand.

So please Alvin, always check what the chatbot tells you.

Proofreading and summary assisted by Claude Sonnet 4.5 (AI)

Sunday, December 28, 2025

The Way I Like It: Bubbles and Sycophantic AI

We are all familiar with living in online bubbles. Those shields around what we see are created by websites, advertisers, and social media platforms feeding us what we already like. Products, ideas, friends, and influences, all tailored to our preferences. They claim it's about reducing noise and showing us what matters, but really, they want us coming back. Often. Very often.

Chatbots seem to have learned this trick too, dishing out sycophantic praise for you and your questions. Maybe they're hoping you'll stay in the conversation longer or treat them as a friend, overlooking little wobbles in their non-human text-only responses.

Proofreading and summary assisted by Claude Sonnet 4.5 (AI)

Thursday, December 25, 2025

Jibberish on Gibberish?

When Copies Aren't Perfect: A Visual Warning About AI LLM Training

We tend to trust that digital copies are exact replicas of the original. But what happens when we copy a copy, then copy that copy again?

Using my cartoon alter ego Alvin, I wish to demonstrate how the compressed JPEG image format, using its "lossy" compression method, degrades with each generation of copying. While JPEG saves disk space and speeds up downloads, it achieves this by discarding data each time you save.

Starting with a clear image of Alvin shouting "gibberish" at a screen, created in CorelDRAW, I repeatedly saved and resaved it as a JPEG:


  • At 80% quality (the common standard), the first 5 generations showed minor deterioration at high-contrast edges.

  • At 70% quality (used by many social media platforms), problems became obvious by generation 10.

  • At 50% quality, by generation 20, the image was almost unrecognisable. Even the misspelt word "gibberish" Alvin was shouting became illegible.

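The degradation above comes from quantisation: every JPEG save rounds image data onto a coarser grid and throws the remainder away. As a minimal sketch only (not real JPEG, which quantises DCT coefficients per 8x8 block; the function name and gradient here are made up for illustration), a toy "lossy save" in Python shows why a copy of a copy is worse than a single save:

```python
# Toy model of a lossy save: round each pixel value to the nearest
# multiple of `step`, discarding the remainder, loosely analogous to
# JPEG quantising frequency coefficients. A sketch of the idea only.

def lossy_save(pixels, step):
    """Quantise values to multiples of `step`; the remainder is lost."""
    return [min(255, round(p / step) * step) for p in pixels]

original = list(range(0, 256, 3))      # a smooth 86-level gradient
gen1 = lossy_save(original, 24)        # first save, low quality
gen2 = lossy_save(gen1, 10)            # re-save the copy at higher quality
direct = lossy_save(original, 10)      # one save of the original, same quality

# The first save collapses the gradient into a few bands (posterisation),
# and re-saving the copy is worse than a single save ever was:
banding = len(set(gen1))
worst_gen2 = max(abs(a - b) for a, b in zip(original, gen2))
worst_direct = max(abs(a - b) for a, b in zip(original, direct))
```

Even though the second save uses a finer step (a higher "quality"), it can only quantise the already-banded copy; the data the first save discarded is gone for good, which is why each further generation keeps getting worse.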
Why This Matters

This isn't just about image quality. It's a powerful analogy for what's happening with large language models trained on "synthetic data", a dodgy term used by LLM enthusiasts for AI-generated content fed back into AI systems.

Just as each JPEG generation compounds tiny adjustments until the image becomes gibberish, AI systems trained on their own output accumulate biases and inaccuracies. The feedback loop doesn't make things more accurate; it amplifies what's wrong.
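As a rough illustration of that feedback loop (a toy simulation, not a real LLM; the token names and the `retrain_on_own_output` helper are invented for this sketch), imagine a "model" that is just a table of token frequencies. Each generation it samples a training corpus from itself and retrains on that sample; rare tokens that happen to miss the sample vanish from the vocabulary and can never come back:

```python
import random

# Toy "model collapse" simulation: the model is a token-frequency table.
# Each generation samples a corpus from its own distribution and retrains
# on it. Tokens that miss the sample vanish permanently, so the tails of
# the distribution erode generation by generation.

random.seed(42)  # deterministic run for the sketch

def retrain_on_own_output(freqs, corpus_size):
    """Sample corpus_size tokens from the model; return the new counts."""
    tokens = list(freqs)
    weights = [freqs[t] for t in tokens]
    corpus = random.choices(tokens, weights=weights, k=corpus_size)
    new = {}
    for tok in corpus:
        new[tok] = new.get(tok, 0) + 1
    return new

# Long-tailed starting vocabulary: a few common tokens, many rare ones.
model = {f"tok{i}": max(1, 2 ** (10 - i)) for i in range(30)}
vocab_sizes = [len(model)]
for generation in range(10):
    model = retrain_on_own_output(model, corpus_size=500)
    vocab_sizes.append(len(model))
```

Because each new vocabulary is drawn only from the previous one, it can never grow back; with this run the rare tokens steadily drop out. Real systems are subtler, but the mechanism is the same: sampling your own output loses the tails.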

When we assume digital processes are perfectly reliable, we miss how errors compound through iteration. Each cycle reinterprets the previous one, carrying forward and magnifying small mistakes, biases and fake stuff. Eventually, we're left with output that bears little resemblance to the original truth.

The lesson? Whether it's image compression or AI training, recursive copying without fresh input leads to degradation. Garbage in, garbage out; feed this back in and the garbage out just gets worse.

Proofreading and summary assisted by Claude Sonnet 4.5 (AI)

Saturday, December 20, 2025

Are We Progressing into Oblivion?

Whilst I’m definitely noticing significant progress towards better vision after corneal graft issues, I can’t help but notice that what was a rapid advance in LLMs, or Large Language Models, has changed. Development has definitely decelerated. Perhaps I’ve been busy elsewhere, looking at other interesting things to follow up, like human vision and how our eyes work, particularly in terms of colour perception. That’s a different story.

When appropriate, I am still experimenting with a few different things using the common generative AI systems around today, both for evaluating my writing and producing images or creative ideas.

My approach of always checking hasn't changed: when my text is sent to an AI large language model for summary and fixing spelling or grammar, I print out the AI’s output and, using highlighter pens, mark each sentence into 5 groups (see Why You Can't Trust AI Writing Without Human Oversight). This indicates to me that the quality of output has become significantly less usable under my original observational classification.

Two things in particular stand out. The system has now decided to be much more friendly and complimentary to me. This sycophantic encouragement makes me very suspicious of what it’s going to tell me. Is it going to be something I am expected to lap up and not question? I’d be happy if the chatbot conversation could be a bit more adversarial. I like a conversation with a similarly experienced colleague, but perhaps with different views or a new idea. The other area that worries me is how confidently made-up rubbish is presented, as if it were well-sourced information.


The idea that these things can be improved by making large language models even larger is not a sound one, particularly when I hear mention of using synthetic data: output from the model itself or other LLMs, reloaded into an even bigger training run. We have already seen that the AI will often make stuff up (hallucinations). Isn’t this all a bit dangerous? I’m not the only one who sees this as a risk. Above is a recent interview with Michael Wooldridge, a prominent UK AI researcher, where he explores what it will take for machines to achieve higher-order reasoning.

Where does this leave us? The humans.

Thursday, December 04, 2025

Another Quick Tip - Experimenting with new tools and techniques

This is a simple recycling tip to help keep your brushes happy. Stay tuned for the Extra Bonus Tip about cleaning watercolour brushes at the end.

Still experimenting with new techniques and tools to create these videos, I just updated my very old version of VideoStudio to the last ever (2023) version. Despite the "end of the road" vibes, it does what I need: faster rendering and higher resolution, not to mention several new features to learn. This video for YouTube was filmed on my phone and edited with Corel VideoStudio.

So I still have a lot to learn about filming, presenting and more professional editing, BUT I am learning and enjoying myself. Maybe these tips might help or interest you.