
Categories: Art and Design, Society

Will “good enough” AI beat human artists?

Replied to The problems of relying on AI art

AI leads toward visual convergence when trained on generic material that isn’t unique to different cultures or styles: it will always come up with the go-to visual and nothing unique unless instructed by a human, and it will continue to let the current visual paradigm dominate. Sometimes the archetypal rendering is fine because the unique elements are somewhere else, but relying only on that will not create new visions of the future for sci-fi renderings.

The computer is limited by the input it receives, and cannot make estimations outside of 1) what it is given 2) what the scientist-academic nudges it to do 3) the scope of the project…

It cannot adequately have the dataset to make everything, because it’s limited to who can give it that data and how that data is acquired. So much of what artists are inspired by comes from non-digital, non-archived sources: stories from our ancestors, inherited cultural modes, language (which affects our metaphors and perceptions of time and philosophies), animals wandering around, sensory experiences, memes, etc…

Basically, what I am saying is that just like humans, the AI is limited by its inability to access information it doesn’t have.

— Reimena Yee, The Rise of the Bots; The Ascension of the Human

Will “good enough” win when it comes to art? If it’s between free and paid, the free version may be good enough for a lot of commercial uses…

Is convergence enough to stop “good enough”?

https://twitter.com/matthewdowsmith/status/1563872755182981122

In other creative fields, art is already converging to homogeneous looks and sounds:

To minimize risk, movie studios are sticking with tried and true IP, and simply adding onto or remaking existing works.

Will illustration and the visual arts follow the same trend? For some commercial art needs, the purpose is to fit a tight visual niche: think romance book covers or organic food packaging, where the goal is to quickly communicate what category of product it is.

But, some art — like magazine covers — does need to stand out. Distinctiveness is part of the goal. This is where creative work can persist despite “good enough” in other areas.

Will AI-created artwork achieve its goals?

Example: cover illustration

The art on these covers is pretty enough but the type is bad:

If you just need a placeholder cover, these seem fine, but I’m curious whether they’re enticing enough to sell books. They’re probably something you could use for a lead magnet: something you’re not selling but just want to have a cover for in the Kindle library.

Example: comics

These are some fine vibe-setting panels for a comic, but they’re not super useful for storytelling: the panels are too similar, and how good will it be at action? I can’t imagine it will naturally generate unique poses and dynamic angles to keep scenes visually interesting. Just a few pages of this feels slow-paced.

If this is the only kind of art it can produce, it will only be useful for indie literary-type comics. I think what’s going on is that grand vistas look impressive and are hard to draw, while the AI’s problems are more apparent at closer scales, where it adds weird distortions or things don’t align well. Our brains can ignore or fix the problems in a vista, but they’re impossible to ignore when they’re the focal point.

Like Ursula Vernon, I would guess AI will become a tool to reduce the workload for artists who need to draw complex environment panels, and an asset library for rendering environments. In its current state, Vernon found it needed a lot of post-processing.

This art style looks beautiful now, kinda Monstress-esque / movie concept art, but I suspect that the more people use it, the more generic it will feel, and people will value art that’s clearly created by a human / has its own visual style.

Implications for the industry

This tech could push down editorial illustration prices so that only newbies living on starvation wages can compete with AI, plus high-end artists who can retain boutique clients that value uniqueness and want to signal that they are a luxury publication or brand; the mid-career folks will disappear. Or will only high-end creators with distinctive appeal be able to keep working, while all junior creatives fade out?

If you’re a creator, you either have a style or you don’t. If you don’t, you’re simply a gig worker. And if you have a style, there’s a computer program that’s going to not only encourage people to copy your style, but expand it.

For some, this is going to lead to enormous opportunities in speed, creativity and possibility. For others, it’s a significant threat.

— Seth Godin, Unprepared as Always 

Not yet, but…

I’d say AI is not good enough *yet* for most use cases, but it will get better over time. In the long run there will be less work for creatives actually producing their own renderings (linework, painting, photoshoots) and more on the art direction side: knowing what prompts to give the AI to get what you want, plus correcting obvious rendering errors.

https://twitter.com/kyletwebster/status/1563969905380179971

At the low end of the scale, a broader range of fields will be impacted (logo design, basic graphic design). Will enough small-scale jobs remain accessible to early-career folks that the industry won’t collapse in 20 years because no one was able to get the experience?

By Tracy Durnell

Writer and designer in the Seattle area. Reach me at tracy@tracydurnell.com or @tracy@notes.tracydurnell.com. She/her.

12 replies on “Will “good enough” AI beat human artists?”

Replied to Siderea, Sibylla Bostoniensis (@siderea@universeodon.com) (Universeodon Social Media)

@clarablackink@writing.exchange
The whole damn point of AI is the fantasy of slave sentiences. “What if we had things that could think but because they are things we can own them.”
@emilymbender@dair-community.social

Corporations are excited to stop paying writers and designers and artists and actors and models and musicians and videographers — even developers. They can’t wait to make movies and games and TV shows with as few employees as possible. They are salivating over their profit margins when they can eliminate their “overhead” of employees.
Individuals are excited to create ‘free’ ‘art’ without investing time or effort into developing a skill or style. Their ideas deserve to exist, and they’ll use whatever tools allow that.
Both corporations and generative AI enthusiasts feel entitled to use others’ work without permission or pay, for their own profit. They can’t afford or don’t want to pay for art or professional writing, but they’ve found a technical way to take it anyway.
This is rooted in devaluing creative labor and wanting to mechanize production: corporations perceive creativity as a quantifiable output that they can reproduce on demand with these new tools. They cannot fathom there’s something humans contribute that they can’t reproduce through technology. To them, creativity can be distilled to data. Hard, clear, ownable.

Creative endeavors are less formulaic than many other types of products — there’s no recipe guaranteed to make a blockbuster game or movie — so using AI makes it feel like corporate is in control of the process. It feels lower risk to lean on average outcomes from AI than hope for greatness from your creative team. Relying on AI cuts out human personality and opinions and relationships, which can slow down the process of production, never mind humans’ physical needs and limitations. With AI, there is no creative disagreement, just manufacturing the product. It does what you tell it to, nothing more or less.
Even without AI, that profit-optimized, risk-averse perspective on creative work has turned culture boring and flat. It turns out that you still need taste to decide what’s worth producing and marketing — a perspective? talent? skill? that will be in even higher demand when execs are wallowing in a quagmire of material and need to decide which ideas to invest their money in actually making. Creators know that ideas are the easy part: the execution is what matters. Can AI pull off that execution consistently and emotively to create cultural works that resonate and sell?
The dream is that they can use others’ stolen words and paintings and illustrations to create intellectual property they can make money off of without having to pay anyone else for it.
The dream is that corporations will take full control of cultural capital without cultural creations stagnating in the absence of future training data, or that they’ll be able to keep stealing others’ intellectual property as training data forever…and that the creative industry won’t collapse without clients and commissions so there will be future work available to steal.
The dream is that *they* can stop paying anyone to work for *them*, as will every other company that can get away with it, but that enough people will still have enough money to spend on their creations, despite entire industries shutting down or skilled labor being eliminated so workers can be paid much less and are easily replaced.
The dream is that they can flood the market with endless generated works and people won’t get fed up with drowning in oceans of mediocre, inaccurate content and switch to smaller, human-centered networks where they can get trusted information from other people.
The dream is that in the end, quantity matters more than quality.
The dream is that shortcuts work.


I just realized that generative AI pushes the same buttons for me as Roy Lichtenstein (fuck that guy): an elite using the work of the plebs to enrich himself.
Lichtenstein claimed his works, which reproduced panels from comic books at a larger scale with minor changes, were fair use. “Lowbrow” works created by or for the working class exist to serve the elite’s needs; elites, whether in tech or art, feel entitled to the works of those “beneath” them because they believe what they create using them is more valuable than the original works. Comic art is not respected in the fine art world / by the elites; his works were “fine art” while the reference material was commercial pulp.
Likewise, corporations (and society in general) don’t value writing or art or craftsmanship, so they’re 100% on board with stealing the intellectual property of millions to make a product designed to put those same people out of a job. Generative AI models could not exist without training data; the “feedstock” of other people’s creations is integral to the production of generative AI software. Every new version of ChatGPT is better because it’s been trained on more unlicensed, unauthorized training data used without permission or compensation.
