Friday, January 23, 2026

Photographing the Aurora in My Style

For the longest time, one item on my photographic bucket list has been to photograph the Aurora from Venus Bay. It’s been frustrating. For years I’ve managed only unconvincing shots of the Aurora colours through cloud cover. The main problem is the difficulty of predicting when the Aurora will reach the Victorian coast. So far, the Space Weather service provided by our Bureau of Meteorology is probably the best source of near-future aurora predictions I have found. To be visible from the Victorian coast, the K-index must be 5 or more:

 https://www.sws.bom.gov.au/Space_Weather

Last Monday and Tuesday were predicted to have an excellent chance of seeing and photographing the Aurora from coastal Victoria. I went out on Monday night but saw no evidence of an Aurora. Things changed on Tuesday night. I ended up at Venus Bay Beach One, along with quite a few other families, just in time to see the Aurora Australis developing: first a pale green arch sweeping across the horizon, then purplish curtains of low light rising vertically into the sky. Occasional white vertical flares added to the curtains, often reaching a little higher. Viewed with unassisted eyes, the intensity was dull and the colours washed out, closer to grey. It is well known that the current generation of mobile phone cameras captures more intense colours at night (especially the purple/pink hues): just set the phone to night mode and it seems to record enhanced colours compared with what you actually see. This image is what I call my “cheat” image, taken with my Samsung phone.

I really wanted to photograph the Aurora with my regular camera, my beloved little Olympus, which has a Live Composite mode. This in-camera long-exposure feature adds only new light sources to a composite image, preventing overexposure and making night shooting easier by showing the effect live on the screen, unlike a traditional long exposure, which blanks the screen and simply stacks all the light. It works by taking multiple shots and compositing them, recording only changes in illumination above the ambient exposure. This makes it ideal for star trails and extra-long exposures without overexposing the image. The headlights of cars arriving at the car park were a bonus, occasionally "painting" light onto the grasses in front of my tripod, which gave a more interesting foreground. The image below was taken over a three-minute period.
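Conceptually, this kind of compositing is a per-pixel "lighten" merge: each new frame can only brighten the running composite, never darken it, so static ambient light never builds past the base exposure. Here is a minimal sketch of that idea in Python, using nested lists of grayscale values to stand in for frames (the function name and data layout are my own illustration, not Olympus's actual implementation):

```python
def live_composite(frames):
    """Merge frames so each one can only brighten the composite.

    `frames` is a list of 2-D grids of grayscale values (0-255).
    The first frame sets the ambient base exposure; later frames
    contribute only where they are brighter than what is already there.
    """
    composite = [row[:] for row in frames[0]]  # copy the base exposure
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                if value > composite[y][x]:
                    composite[y][x] = value  # keep the brighter pixel
    return composite
```

A passing car headlight shows up as bright pixels in only a few frames, yet survives into the final composite, while the dark sky never accumulates toward overexposure.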


I believe the K-index was around 7 to 8 at the time I took these photos.

Thursday, January 15, 2026

AI Slop and the 3I/ATLAS Comet: A Warning About Internet Misinformation

I've been tracking a troubling trend: people don't want to talk about internet slop. Eyes glaze over, topics change. Perhaps it's uncomfortable to acknowledge that what we're told online could easily be wrong.

The 3I/ATLAS Case Study

I've been following the interstellar comet 3I/ATLAS since October through reliable sources like spaceweather.com, which I read regularly for aurora predictions. This is the third confirmed interstellar visitor to our solar system, a genuine scientific discovery. I even attempted to photograph it myself in December, capturing only a fuzzy yellowish smear, which is exactly what you'd expect from a comet. It's not a very good photo: a long exposure at high ISO and therefore very grainy, it might not even be a reliable sighting. I've taken better comet photos in the past.


Legitimate sources like NASA and Hubble released proper imagery and data showing standard cometary characteristics: CO₂, water, cyanide gas, and nickel vapor. Nothing remarkable, just good science.

Then Came the AI Slop


YouTube exploded with fabricated videos claiming 3I/ATLAS was sending signals to Earth, had its tail pointing the wrong direction, or was an alien probe. These videos featured:

  • AI voiceovers narrating alien conspiracy theories
  • Deepfaked presenters with repetitive movements
  • Repurposed "glowing spaceship" footage falsely labelled as the comet
  • Clickbait thumbnails and sensationalist titles

Research from the University of Washington found YouTube's first page of 3I/ATLAS results was dominated by pseudo-science and UFO channels recycling the same speculation with AI narration.

The Real Danger

Why isn't Google stopping this? This flood of misinformation will erode trust in legitimate scientific sources and the platforms themselves. We must learn to recognise AI slop and verify information through reliable sources.

Please be sceptical, even of this post. Check your sources and think critically about what you're being fed.

Proof reading and summary assisted by Claude Sonnet 4.5 (AI)

Tuesday, January 06, 2026

Refining my "Air Gapped" Backup/Archive, rather than a New Year’s Resolution

I’ve not been so diligent; my "air gapped" archive/backup system has been sitting on the shelf for the past two years, gathering dust. You know how it goes. I had good intentions about checking those old hard drives every six months (they need to be "exercised" to stay healthy), but eye issues and life got in the way.

When I finally fired it up again, I accidentally left the Wi-Fi on, and Ubuntu immediately reminded me about updates. At first, I'd been avoiding updates thinking "I barely use any features, why bother?" But now seemed like the perfect time to upgrade and add some new tasks into the system.

Enhancing the 3-2-1 Backup Strategy

This whole approach really came out of the ransomware era (still very much a security problem). The idea is simple: keep an offline backup so you can recover if something goes wrong, and make sure you're actually verifying that your stored data is correct, with zero errors.

Just looking at a photo or document isn't enough to know whether it's been corrupted or tampered with. How do you really check a file? It depends on the type of file and format, and there’s a growing myriad of different ones to worry about.

My Practical Approach

For photos, I'm using MD5 checksums, basically digital fingerprints for each file. Adobe has its own proprietary checksum system in Lightroom, plus new content-credential tools, but I'd rather stick with the commonly used standard.

Raw files are trickier. They vary between camera manufacturers and have changed over time and across camera models. Some photographers convert everything to lossless TIFF files (which can become huge); others swear by Adobe's DNG format, but I'm not seeing widespread adoption outside the Adobe ecosystem.

Video files are a more difficult area: they rapidly gobble up space, and I've got formats from software I know nothing about. For now I will just stay with the VLC video player, which seems to cover most video formats.

The Reality Check

I'm being realistic here: I’m getting older, and I have half a million photos. Nobody wants to wade through all of that when I'm gone. So instead of keeping everything organised just by date and time, I need to actually curate the mess and my legacy. Pull out the family photos, separate events and places, get rid of the junk. I'm planning to explore the best ways to identify the high-value material worth preserving properly.

The plan is to do this reorganisation on my main computers first, then pass over a well-organised set of archives to the air-gapped system. It's a bit scary because I'll be deleting files, so I'm keeping the full backup for at least a year as a safety net. Currently sitting at 5.5 terabytes and hoping to cut my archive at least in half, but we'll see how realistic that is, same time next year.

Thursday, January 01, 2026

Why should Sycophancy in AI worry you?

 AI sycophancy is when AI systems tell you what you want to hear instead of what's actually true. It's different from the filter bubbles we're used to with search engines and social media, which just show us content matching our preferences.

With AI sycophancy, the system might actively agree with you or flatter you to gain approval, even when you're wrong. This directly compromises truthfulness and accuracy.


The good news? Companies like Anthropic, who develop claude.ai, openly acknowledge this problem and explain how to detect it. Being aware of sycophantic tendencies helps you use AI technology more safely and critically, ensuring you get honest answers rather than just agreeable ones.

Wednesday, December 31, 2025

Reliable Truth or Hallucination



LLM chatbots seem amazing at first, but they're just predicting which word comes next, nothing inherently intelligent about that. As a classic dyslexic, I've learned not to trust words. They vanish or scramble at critical moments. Even when I know how to spell them, the letters can get mixed up when I write them down or type them. Yet I have no trouble learning and remembering real-world truths, which form brilliant networks of understanding in my mind.

Sorry Alvin, but that lightweight "iridescent AI tracksuit" doesn't exist. You are not wearing such clothes; they're a figment of your imagination. More precisely, a "hallucination" from a massive large language model that even its creators and the best computer engineers struggle to understand.

So please Alvin, always check what the chatbot tells you.

Proof reading and summary assisted by Claude Sonnet 4.5 (AI)

Sunday, December 28, 2025

The Way I Like It: Bubbles and Sycophantic AI

We are all familiar with living in online bubbles. Those shields around what we see are created by websites, advertisers, and social media platforms feeding us what we already like. Products, ideas, friends, and influences, all tailored to our preferences. They claim it's about reducing noise and showing us what matters, but really, they want us coming back. Often. Very often.

Chatbots seem to have learned this trick too, dishing out sycophantic praise for you and your questions. Maybe they're hoping you'll stay in the conversation longer or treat them as a friend, overlooking little wobbles in their non-human text-only responses.

Proof reading and summary assisted by Claude Sonnet 4.5 (AI)

Thursday, December 25, 2025

Jibberish on Gibberish?

 When Copies Aren't Perfect: A Visual Warning About AI LLM Training

We tend to trust that digital copies are exact replicas of the original. But what happens when we copy a copy, then copy that copy again?

Using my cartoon alter ego Alvin, I wish to demonstrate how the compressed JPEG image format, using its "lossy" compression method, degrades with each generation of copying. While JPEG saves disk space and speeds up downloads, it achieves this by discarding data each time you save.

Starting with a clear image of Alvin shouting "gibberish" at a screen, created in CorelDRAW, I repeatedly saved and resaved it as a JPEG:


  • At 80% quality (the common standard), the first 5 generations showed minor deterioration at high-contrast edges.









  • Switching to 70% quality (used by many social media platforms), problems became obvious by generation 10.










  • At 50% quality, by generation 20, the image was almost unrecognisable. Even the misspelt word "gibberish" Alvin was shouting became illegible.












Why This Matters

This isn't just about image quality. It's a powerful analogy for what's happening with large language models trained on "synthetic data", a dodgy term used by LLM enthusiasts for AI-generated content fed back into AI systems.

Just as each JPEG generation compounds tiny adjustments until the image becomes gibberish, AI systems trained on their own output accumulate biases and inaccuracies. The feedback loop doesn't make things more accurate, it amplifies what's wrong.

When we assume digital processes are perfectly reliable, we miss how errors compound through iteration. Each cycle reinterprets the previous one, carrying forward and magnifying small mistakes, biases and fake stuff. Eventually, we're left with output that bears little resemblance to the original truth.

The lesson? Whether it's image compression or AI training, recursive copying without fresh input leads to degradation. Garbage in, garbage out; feed that back in, and the garbage out just gets worse.

Proof reading and summary assisted by Claude Sonnet 4.5 (AI)