Wednesday, February 07, 2024

Where Do We Go From Here?

Testing Out New AI Tools for Writing - The Good 👍 and The Bad 👎


I've been experimenting with some of the new large language AI models like ChatGPT, Bard, and Claude to help summarize and clean up my dyslexic writing. At first, it seemed amazing: I could just dictate my random thoughts and the AI would turn them into clear, readable text. I even had it generate content like blog posts, YouTube scripts, and Instagram captions.

However, I started noticing some issues:
  • The AI can be overenthusiastic, especially when it mentions product names; the output reads like advertising copy. I've had to rewrite these sections to keep them factual.
  • Outrageous claims and incorrect facts. About 40% of the time, the AI includes claims or "facts" that are just plain wrong. I end up removing entire paragraphs.
  • About 30% of the time, the content is good as-is; the remaining 30% just needs some reworking to tone down the language.
Clearly there's an issue here with misinformation. My current theory is that these large language models are trained on indiscriminate internet data containing conspiracy theories, misinformation, and bias. Garbage in, garbage out.

I'm finding Anthropic's Claude model more reliable, with fewer glaring errors. I used it on this post, but I still carefully review any AI-generated text before publishing.
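For anyone who wants to try the same workflow, here's a minimal sketch of how dictated text can be sent to Claude for cleanup using Anthropic's Python SDK. The model name, system prompt, and clean_up helper are my own illustrative choices; only the basic messages.create call comes from Anthropic's SDK.

    import anthropic

    # The SDK reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()

    def clean_up(draft: str) -> str:
        """Ask Claude to tidy dictated text without embellishing it."""
        message = client.messages.create(
            model="claude-2.1",  # assumption: swap in whatever model is current
            max_tokens=1024,
            system=(
                "Rewrite the user's dictated text into clear, readable prose. "
                "Fix spelling, grammar, and structure only. Do not add claims, "
                "facts, or promotional language that is not in the original."
            ),
            messages=[{"role": "user", "content": draft}],
        )
        return message.content[0].text

    print(clean_up("so i was testing these new ai riting tools and heres what i found"))

Constraining the prompt to fix spelling and grammar only is my attempt to keep the model from adding the promotional language and invented "facts" I complained about above.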

As AI becomes more ubiquitous, it's crucial that we understand how these models are trained and what biases they may contain. We have to establish checks and balances, verifying information and not blindly trusting AI outputs.

I'll keep experimenting with AI writing assistants and with AI in photography and digital graphics (generative AI images such as the one above), but I'll always maintain oversight. Stay tuned for more on responsible use of generative AI.
