Here's another interesting panel discussion topic from this year's Worldcon. And from everything that blew up with the Dragon Awards this past week, it's obviously still a controversial topic.
A year ago, the big issue was copyright violation. In late August of 2023, Dean Wesley Smith came right out and said that AI (Large Language Models) was theft, that anyone who used them in any way, shape or form was a thief, and anyone who defended them was defending theft and thus not worth knowing.
However, the high-profile copyright infringement lawsuits have gone against the plaintiffs, as judges have recognized that no, Large Language Models don't just copy bits and pieces of copyrighted works and produce a digital collage. Yes, a person who is very determined to infringe on someone's copyright can come up with prompts that induce a bot such as Midjourney or DALL-E to produce an infringing image, such as well-known characters from comics and cartoons. However, a human artist working in traditional media can also willfully draw images that infringe on existing copyrights -- or even outright forge works in and out of copyright, presenting their own copies as the work of the original artist in order to command prices they do not merit.
Now a lot of people are shifting their arguments away from copyright infringement toward the claim that AI robs the artist you "ought" to have commissioned to do the artwork. In other words, it's a tool that's taking away jobs and leaving people bereft of their livelihoods.
However, this issue has been cropping up ever since the Industrial Revolution began, as one productive process after another was automated and skilled workers were put out of jobs. And yes, there was resistance -- Ned Ludd and the Luddites are the most famous -- but while it might slow the process, it could not halt it. Here and there one jurisdiction might hold out -- in New Jersey it's still illegal to pump your own gas, and while it's touted as a safety issue, it also provides job security for the attendants who pump gas. But eventually the technology that was once "taking away our jobs" becomes ubiquitous, and after some pain people find new ways to make a living.
Then there are the purely emotional "AI IS JUST BAD" arguments, such as the argument that AI has no soul, and therefore cannot produce original art/great art/whatever. Even when it's not getting into metaphysical issues that border on religion (the world's major religions can't agree on the nature of the soul, and often the various sects within a religion aren't in agreement), it's often echoing arguments against earlier generations of technology used in making art: image-manipulation software from MacPaint to Photoshop and GIMP, graphics tablets, airbrushes, even photography -- that they're not real art, that they "cheapen" art and make it too "easy" to create, etc.
But going back to the title of the panel discussion, a lot of the problem really does seem to be the use of the term "AI" for Large Language Models, and machine learning in general. Consciously or not, it brings to mind literary and media portrayals of artificial intelligence, of conscious machines that may be hostile to humanity (although there have been plenty of friendly ones, such as Mike in Robert A. Heinlein's The Moon Is a Harsh Mistress). So at some level, people are seeing it as a rival in a way the technology simply cannot be. It's not conscious or self-aware -- it's a very sophisticated tool, and the difference between the Midjourney bot and the computer with which a user prompts it is a quantitative one, not a qualitative one. It's more of the same stuff, more processors and circuitry, more lines of code, not something that's going to wake up tomorrow and decide to turn the entire world into online art.