Testing Out New AI Tools for Writing - The Good and The Bad
I've been experimenting with some of the new large language AI models, like ChatGPT, Bard, and Claude, to help summarize and clean up my dyslexic writing. At first, it seemed amazing - I could just dictate my random thoughts and the AI would turn them into clear, readable text. I even had it generate content like blog posts, YouTube scripts, and Instagram captions.
However, I started noticing some issues:
- The AI can be overenthusiastic, especially when mentioning product names. The output reads like advertising copy, and I have had to rewrite these sections to keep them factual.
- It makes outrageous claims and gets facts wrong. About 40% of the time, the AI includes claims or "facts" that are just plain wrong, and I end up removing entire paragraphs.
- Only about 30% of the time is the content good as is; the remaining 30% needs some reworking to tone down the language.
I'm finding Anthropic's Claude model more reliable, with fewer glaring errors. I used it on this post, but I still carefully review any AI-generated text before publishing.
As AI becomes more ubiquitous, it's crucial that we understand how these models are trained and what biases they may contain. We have to establish checks and balances, verifying information and not blindly trusting AI outputs.
I'll keep experimenting with AI writing assistants, as well as AI in photography and digital graphics (generative AI images such as the one above), but I will always maintain oversight. Stay tuned for more on responsible use of generative AI.