Wednesday, June 25, 2025

The Reality Check: Why You Can't Trust AI Writing Without Human Oversight


Large language models like Claude, Gemini, and ChatGPT promise to make writing easier, especially for people like me who face challenges such as dyslexia or vision problems. But here's the uncomfortable truth: AI output is often unreliable and sometimes flat-out wrong.

I have moved from forcing my outrageous dyslexic spelling and punctuation into a word processor to relying on dictation because of my current vision problems. Along the way I've discovered a love-hate relationship with AI writing tools. They solve my spelling and punctuation nightmares, but they create new problems. My red spelling underlines have been replaced by green grammar highlights – different errors, same frustration.

My Simple But Essential Process

After dictating my rambling thoughts, I ask the AI to "summarise as a blog post in plain English in less than 300 words".

This post took only about 12 minutes in total, from dictated text to suggested summary. Now comes the crucial part: I print the results and highlight every sentence using five categories:

  1. OK as is (left unhighlighted)
  2. Needs rewording
  3. Sycophantic (fake flattery and bias bubble reinforcement)
  4. Needs fact-checking
  5. Clearly wrong (delete immediately)

The results are sobering. Often, only 20% of the AI-generated text survives unchanged. Thankfully, it was closer to 40% for this post.

But the real troublemakers are categories 4 and 5. Category 4 – "needs fact-checking" – is actually the worst offender. AI loves introducing new "facts" or concepts that sound plausible but require verification. Sometimes it's genuinely adding something I missed; other times it's complete nonsense dressed up as insight. The tedium of fact-checking these assertions is exhausting, and anything questionable gets left out.

Category 5 is pure fabrication – AI making things up entirely. The good news? I'm seeing less of this since limiting word counts, which seems to reduce how far the AI can wander into fantasy land.

Meanwhile, category 3's sycophantic language tells me I'm brilliant and reinforces whatever biases the AI thinks I want to hear. This echo-chamber effect mirrors social media manipulation, and it's an area where I must stay alert.

AI can help dyslexic writers like me organise thoughts, but it requires rigorous human oversight. Every sentence needs scrutiny. The fact-checking alone often takes longer than the original dictation, but it's essential for maintaining integrity.

Use AI as a starting point, never the finish line.

Proofreading and summary assisted by Claude Sonnet 4 (AI)
