Comparing Generative AI to the Steam Engine Should Not Make You Look Good
Some of us reject the use of genAI (generative AI[1]) outright, and to some, this behavior makes us look like we’re overreacting to what is just a new technology. Are we that different from the Luddites? What if an AI model were trained “ethically,” respecting authorship? Could we then use genAI in a “correct” manner, like “any other tool”? People who dislike genAI would give you a variety of answers to these questions.
I can only defend my own perspective and speak from my own values related to its application in artistic fields[2].
The Political Case
If I had to divide the world into three groups in the simplest way, I’d say there are “those who love AI and think it’s the future,” “those who hate AI and think it’s a threat,” and “those who don’t care much about AI and are tacitly willing to adopt it.” The first two groups are arguably the loudest, but also probably minorities: in my experience, most people wouldn’t care whether genAI was involved in the making of something as long as it isn’t noticeable AI slop. And I’m being really generous here with what “slop” would be.
So, this implies that most people are at least okay with the existence of genAI. Why am I in the minority that feels it’s some kind of threat? Mostly because I don’t ignore the sociological, economic, and cultural context in which this technology (or these technologies) is being pushed. A typical argument in favor of genAI consists of claiming that it’s just another technological innovation (that might or might not be adopted en masse) like the steam engine, and that, at the end of the day, big technological innovations have led society to higher living standards, as with the Industrial Revolution. But if you know even a little history, you know that most big changes in a society are pushed from the top by those in power, who concede only the puniest of privileges to the weakest groups in temporary alliances. This “steam engine” argument would only make sense if you were a landowner, a factory owner, or an industrial investor in the 19th or early 20th century.
Big Tech doesn’t care about the impact a technology might have on society as long as it finds the technology beneficial to itself. We can tell by taking a look at our very recent history, with both the rise of social media and the forced adoption of the Windows operating system. The behavior you see from social media users is influenced by the design of the platforms they use, and, in turn, the culture that develops on those platforms is strongly shaped by that design. As for Windows, I don’t think I need to tell anyone what a mess it has become and what it’s done to the world in general, something only a tiny group of people foresaw and resisted[3].
Big Tech does not care about its users. It does not care about artists or creatives. And sadly, “ordinary” people who enjoy the works of artists and creatives do not care about anything beyond a very basic aesthetic or narrative pleasure.
Gatekeep AI slop curators
What’s the threat about? It’s about AI-generated pieces becoming so ubiquitous that the spaces where you can find works by people who care about what they make end up being filled with generative AI content, as is already happening to some degree. The pretense here is that something completely or partially made with genAI can be put in the same category as something that wasn’t.
I’m not here to precisely define art. I just want to defend the spaces of what art is to me. Banning genAI imagery from anime conventions shouldn’t be done because it looks “ugly” or “uncanny.” It should be done because those generated pieces of imagery are a big fuck you to people who have studied their field, materials, conventions, and artistic language. Artistic knowledge can partially translate from one medium to another. But when something is fully or mostly made with AI, no relationship is developed with the medium. Using a machine to auto-generate “technically impressive” imagery won’t teach you anything other than how the software behaves. What do prompters know about color, lighting, composition, and stylistic proportions? Or about harmony, rhythm, structure, arrangement, tone, and dynamics? Or whatever is relevant in sculpture, writing, etc.? They wouldn’t be able to explain why their slop looks a certain way; they just know they like it. They are not creating art, in my opinion. At best, they’re curating pieces of media.
It’s often very telling how distant they are from their own “creations.” Why did they choose those colors, that angle, that clothing, those textures on the walls? They didn’t! Even the most detailed prompts will leave these artistic choices out of the equation. So what if the prompt said the character was “holding a British cup of tea with one hand, extending the pinky finger”? There’s no single way to interpret that.
I wouldn’t consider this a problem if people didn’t judge the resulting pieces merely based on how pleasing they are to look at or listen to. You see, there was actual genAI art before the boom of the big models we have today, and I didn’t have an issue with it because using genAI was a conscious artistic choice. Those works weren’t attempts to replace the artistic process. Even with collage you can achieve unique-looking results specific to its technique while using other people’s work, and I’ve never seen anyone try to become a painter who doesn’t paint or a photographer who doesn’t take photos, because there’s much more to a technique than the final result.
Sometimes they compare the advent of genAI to the invention of photography. But comparing genAI to the period when photography threatened painting doesn’t say anything good about genAI, contrary to what way too many people believe. GenAI isn’t really a medium. It’s not visual, aural, 2D, or 3D. It doesn’t exist on canvas, or as a sculpture, or as a 3D model, or as a waveform. Thus, genAI can’t develop or adopt its own artistic language. Both photography and painting share visual codes, and each developed languages of their own that overlap to some extent. Taking a photo to “not paint a painting” seems like a totally valid thing to criticize to me, and if that had been the promoted use for photography in the 19th century, I would reject it too. Photography and painting aren’t really the same, and there are many kinds of both. Photography’s ability to capture images in a more “realistic” way opened the door to many practical uses where this “realism” was valued. For example, portrait painting is nowhere near as popular today as it used to be, because we expect to document reality in a realistic way, relegating stylistic distortions to a much less important role.
None of this applies in the case of genAI. GenAI is about imitating the result of existing languages and techniques. If someone isn’t “speaking” the artistic language of the pieces they supposedly create, then what makes them an artist? Can I say that I know a foreign language after using machine translation? On the other hand, I don’t think there’s anything wrong with imitating the look of a particular technique, but that’s not the same as imitating an entire, if vague, artistic style without internalizing anything about it.
Blame Dadaist Capitalism
What if someone released a model that recognizes authorship and was trained on works with the authors’ permission? I think artists who strongly rely on the copyright argument are hypocrites to some degree. I think this tactic was chosen because copyright has been used as a strong pillar of corporate greed for many years (it still is), so they thought using it to defend themselves would work. And hey, maybe it helped buy some time.
Our morals should not be dictated by what the law says. Personally speaking, I’ve pirated many books, movies, games, and music, so I couldn’t blindly defend copyright. And there’s a lot of nuance depending on the context: the images of Lain and Birdy that my website themes display are not being used with proper authorization. I’m not profiting off other people’s work, so people in my (internet) culture would not typically consider this a wrongdoing. I also think selling fan art is fine. But I’ve also seen many people support artists’ right to sell fan art while, in the case of the developer of a VR mod for Cyberpunk 2077, the community sided with the company that made the game, which then punished him for not releasing the mod for free. Maybe I’m missing something, but it feels like the copyright card gets played in a very inconsistent way.
Companies are abusing artists, and that’s what this is truly about. We have to understand the position artists have been in. Our modern capitalist society typically considers illustrators and musicians “artists,” and it’s never been easier to learn and create those general kinds of art. At the same time, I don’t think these artists have ever had a good time in this society. They’re probably more visible and present than ever, and paradoxically less valued because they don’t make “practical” things. Nothing is too valuable when profit is what matters the most. It’s kind of sad to see that even artists who create for profit (that is, to survive) are frequently abused by companies and corporations (like any other worker, really). So now, when the ruling class tells me that genAI is the future that should replace stuff made with love and passion, it really makes me want to put them on trial for laws that don’t exist yet.
To make things worse, everyone likes “pretty things,” but not everyone truly cares about art, which might not even be pretty. Consumers consume, and in doing so help make certain practices dominant over others, such as the use of genAI. This isn’t “fair”; it’s how the market works in our reality.
Because companies can’t meddle in consumers’ choices directly, they have to change the context around those choices. You might have noticed that the “invisible hand” is quite visible: they sell us a product for a problem that doesn’t exist, and make us dependent on it if it gives them power over us.
The problem with AI model training isn’t so much about the authorship laws of the works they’re fed as it is about an already abusive system toward artists. To succeed, they need to impose the idea that AnYtHiNg CaN bE aRt ✨ ✨ 😍 but NOT the banana on the wall 😠 but we WILL profit off the banana[4] in a ludicrous way anyway because it fits institutional capitalism, if you don’t mind… so call now and become an AI prompter yesterday.
It’s just a tool, bruh
GenAI advocates argue that AI models are just tools artists can use in their workflow. Honestly, there isn’t much of a difference to me between a piece that was mostly generated by AI and one that was only partially generated. This would be a harder topic for me to tackle if we were in a scenario like the one I described before, where big “ethically trained” models exist. But this isn’t the case, so even “a little genAI” feels kind of wrong.
In any case, I’ve rarely seen genAI used as a tool to make “art” at all[5]. Even if I were to ignore the ethical issues related to its training, I would still say that pieces generated by AI shouldn’t share the same space with pieces that weren’t. Of course, I am talking about spaces where you expect to see nothing but art, like a convention, a social media feed, or a fiction book (including its accompanying cover artwork). There are other cases where “art” is not the main focus, and that’s where it starts to look grim for artists. Is design art? Personally, I think some artistic concepts might apply to design, and design might sometimes require some art, but I don’t see design itself as art. If you were to make a political video for YouTube and needed some imagery to illustrate a concept, I would despise your use of genAI only because of the abusive reality I’ve already described, but otherwise I don’t think it’d be “wrong.”
But, again, there’s no “right” use in the context we live in.
So…?
As they say: genAI is just a technology. Without context and a goal, not even the military technology of the nuclear bomb would count as bad technology, so yes, we can still reject a technology because of the uses proposed for it. But look further: it’s the underlying system that supports and pushes this technology and its now relatively common uses. Don’t forget about that.
Maybe the use of genAI will die out. Maybe it’ll become way more popular. Maybe it’ll be partially adopted in artistic fields, like they have been saying. Any of these scenarios could happen. In the meantime, defend the artistic spaces that already exist, and fight the notion that AI-generated pieces belong in those spaces. GenAI will not destroy or replace art, but it might replace the current notion of what art is, and destroy many of the spaces we already have for the art we like. That’s what the real danger is.
1. This is hard to define, but for the most part I’m talking about models trained to spit out “complete, detailed, and ready-to-use” content. ↩︎
2. This article is awfully long already. I won’t be talking about other implications, like the use of genAI in fields that aren’t related to art, or the impact on the environment. ↩︎
3. Did you know about Windows Refund Day? ↩︎
4. Yes, I do consider Comedian art. Just not art that I like. ↩︎
5. I don’t see a problem if an ethically trained model is used and its output is not meant to be used as-is. This includes some software to synthesize voice, and excludes all large language models. My opinion could change. ↩︎
