The developments in AI (well, large-scale machine learning in the natural language and image generation areas) have been astonishing. However, the "viral" usage seems to be sliding towards that all too common race to the bottom. Well, several bottoms: exploiting others to make money for nothing, chasing fame and likes, selling AI art as NFTs, and now creating and selling books (children's stories, in fact). All this while others question the ethics and legality of profiting off someone else's work.
Today I became aware of two books that have been raced into production using these AI tools. (I say this confidently because the tools to create them have only been available in recent months, and the better versions only in the last weeks.)
You can read a wordier (somewhat promotional) story about creating the book on Time (it's a paywalled site and wants you to subscribe, but you can read this story for free; just ignore all the ads). The book Alice and Sparkle follows a young girl who builds her own artificial intelligence robot that becomes self-aware and capable of making its own decisions.
Ammaar Reshi combined ChatGPT, Midjourney and other AI tools to create the book. At the time the article was written he had sold about 70 copies through Amazon since Dec. 4.
This was in the normal Blurb email mailout. Guess what it's about: dystopian bees. Story by the child of artist Mark Terry and art by artificial intelligence tools. It's not exactly cheap, but you can get your print-on-demand version already on Blurb.

"I think this book is a glimpse at what anyone can do with merely some AI software, basic Photoshop skills, and an idea. If my [idea] can be turned into a book, then I'm sure your far better ideas can, too!"
—Mark Terry of The Truth About Bees
In the other camp are many unhappy artists asking: why are these people profiting from our work while we are ignored? After all, machine learning systems get fed a lot of work created by humans; that is what they learn from.
So is this straight-out plagiarism? Well, not exactly, because of the way most text-to-image AIs work. They are not directly copying the "pixels" (or actual marks made); they are learning in a translated space (not the graphic one we see), with an emphasis on things like style, colour choice and composition constructs. The AI then builds the objects "as if" they were painted by a given artist following their style, or a particular photographer or illustrator, or even just following a generic artistic, cinema or computer game "look". In legal terms this may not be considered copying (as argued by well-paid lawyers).
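To make that distinction concrete, here is a deliberately toy Python sketch; everything in it is invented for illustration, and a real system uses neural networks in a far richer latent space, not two statistics. It treats an artist's "style" as nothing more than mean brightness and contrast, "learns" those numbers from a few training images, then generates a brand-new image that matches the learned style without reusing a single training pixel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": three 4x4 greyscale images by one (hypothetical) artist.
training_images = [rng.uniform(0.6, 0.9, size=(4, 4)) for _ in range(3)]

# "Learning" here is just extracting style statistics (mean brightness,
# contrast) -- a crude stand-in for the translated/latent space in the post.
def style_vector(img):
    return np.array([img.mean(), img.std()])

styles = np.array([style_vector(img) for img in training_images])
learned_style = styles.mean(axis=0)  # the model's notion of this "style"

# "Generation": start from fresh noise and shift it to match the learned
# statistics. No training-image pixels are copied.
def generate(style, shape=(4, 4)):
    noise = rng.standard_normal(shape)
    noise = (noise - noise.mean()) / noise.std()   # normalise the noise
    return noise * style[1] + style[0]             # impose the style stats

new_image = generate(learned_style)

# The output matches the learned style...
print(np.allclose(style_vector(new_image), learned_style))   # True
# ...yet is pixel-for-pixel unlike every training image.
print(any(np.array_equal(new_image, img) for img in training_images))  # False
```

The point of the toy: the generated image carries the statistical fingerprint of the training work while sharing no actual pixels with it, which is roughly why "copying" is legally murky even when the stylistic debt is obvious.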
What is clearer is the lack of ethics. Profiting (and they can be large profits) from someone else's work, and in most cases not even acknowledging them, is poor form, immoral and unprincipled. I don't think it is too late to fix this, although the argument that the dragon has already been let out is valid. Artists should be told (or at least be able to find out) if their work is involved in a given neural net, and they should be able to ask that their work be removed and the net rebuilt (similar to a take-down notice). This will require licensing or similar (we already have Creative Commons licences), and the big internet groups applying it to any "content" they make visible on the wider internet. We also need to quash the idea that anything on the net can be copied and reposted without permission.
Yet I do see great potential for many artists to use these AI tools as aids to improving their skills, helping with inspiration and understanding, and even making their own unique tools on a much smaller scale. As Alice's story says, there is power in these tools, which can be used for good or evil, depending on how they are guided (what they are given to learn from).