Nov 11, 2023 22:35
I've been watching the increasing polarization of the art and writing world on the subject of generative AI, including Large Language Models. In particular, I'm uneasy about the rapid disappearance of any sort of middle ground and the growing insistence at the extremes that their position be treated as the only legitimate one.
Some of the extremes on the anti side seem to be heading toward a moral panic, becoming increasingly adamant, to the point that I get the feeling they consider LLMs to be inherently thefty, even inherently evil -- and that everyone must condemn the technology in order to be a Decent Human Being in their eyes. Try to talk about a market space for an LLM that's based entirely on public domain materials, or that licenses copyrighted materials and pays ongoing royalties, and they claim it's still theft, often with some notion that you're "stealing" money from the writers and artists you "should've" paid to do the work instead of having a machine do it.
They don't seem to want to hear some salient facts.
LLMs are tools, neither good nor evil. Like the knife in the butcher block, which can be used to prepare a delicious meal or to commit a crime, they have no moral agency of their own. They're not "real" AI on the level of C-3PO, Lt. Commander Data, or Wall-E (even if one of the systems' names is a play on that loveable robot's name), but systems that analyze and correlate enormous masses of data and use the connections to create new material, usually in response to a typed prompt.
The current implementations of them as text and image generators may be deeply problematic, both in terms of IP law and of ethics, but the underlying technology has legitimate uses, including the analysis of enormous amounts of scientific and linguistic data. Related machine-learning systems have had successes in studying protein folding, and in translating Sumerian tablets full of routine materials that previously hadn't merited a human translator's time -- though both still require a human being to review the output and make sure the system hasn't started "hallucinating" or gone down a logical rabbit hole.
The argument that using even a fully public domain or fully licensed AI system is still thefty because it's "stealing" from artists who would otherwise get commissioned is based on a flawed understanding of labor and the market. Most of the people using AI art for things like covers for indie books aren't going to "find the money" to pay an artist, for the simple reason that the money isn't there to be found. No matter how much they may want to support artists, no matter how much they may care, they can't will money into their bank accounts just by wanting and caring with all their might. Instead, they'll most likely shelve the project, or do it in private for the proverbial dresser drawer.
Then there is the problem of the legal issues becoming a legal tug-of-war between two sides that have major interests in getting the answers they want to hear, and enough money to just keep fighting case after case. This situation could diminish people's sense that the law in this matter is part of the good fences that make good neighbors, making it look instead like an exercise in naked power by one or another of the interested parties -- and leave people less likely to respect the law as a whole.
And with emotions riled up as much as they have become among a lot of the people on the extremes, it's going to be very difficult for a logical examination of both law and technology to prevail. Especially given the long literary history of robots and AI being used as symbols of human hubris, combined with the "slave revolt" trope, it's likely that a lot of the people in this fight are responding as much to symbols in their minds as to the actual technology being used.
A lot of the "more heat than light" energy in this really feels fear-driven. Fear of being replaced. Fear of being rendered irrelevant and left behind. Fear of the destitution that comes from not just losing one's job, but having one's entire line of work vanish.
And scared people make some really bad decisions.
artificial intelligence,
ethics,
law