Whilst I’m definitely noticing significant progress towards better vision after my corneal graft issues, I can’t help but notice that what was a rapid advance of LLMs, or Large Language Models, has definitely decelerated. Perhaps I’ve just been busy elsewhere, following up other interesting things, like human vision and how our eyes work, particularly in terms of colour perception. But that’s a different story.
When appropriate, I am still experimenting with a few different things using the common Generative AI systems around today, both for evaluating my writing and for producing images or creative ideas.
My approach is to always check what comes back when my text is sent to an AI large language model for summarising or for fixing spelling and grammar. I print out the AI’s output and, using highlighter pens, mark each sentence into one of five groups (see Why You Can't Trust AI Writing Without Human Oversight). This exercise tells me that the quality of the output has become significantly less usable under my original observational classification.
Two things in particular stand out. First, the system has now decided to be much more friendly and complimentary towards me. This sycophantic encouragement makes me very suspicious of what it is going to tell me: is it something I am expected to lap up and not question? I’d be happier if the chatbot’s conversational style were a bit more adversarial. I like a conversation with a similarly experienced colleague, but one with different views or a new idea. The other area that worries me is the confidence with which made-up rubbish is presented, as if it were well-sourced information.
The idea that these things can be improved simply by making large language models even larger is not a sound one, particularly when I hear mention of using synthetic data, output from the model itself or from other LLMs, reloaded into an even bigger training run. We have already seen that the AI will often make stuff up (hallucinations). Isn’t this all a bit dangerous? I’m not the only one who sees this as a risk. Above is a recent interview with Michael Wooldridge, a prominent UK AI researcher, where he explores what it will take for machines to achieve higher-order reasoning.
Where does this leave us? The humans.