What price?
I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).
I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.
And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”
Firstly, there’s no evidence to back that up.
If anything, as the well gets poisoned by their own outputs, large language models may well end up eating their own slop and getting their own version of mad cow disease. So this might be as good as they’re ever going to get.
And when it comes to energy usage, all the signals from NVIDIA, OpenAI, and others are that power usage is going to increase, not decrease.
But secondly, what the hell kind of logic is that?
It’s like saying “It’s okay for me to drive my gas-guzzling SUV now, because in the future I’ll be driving an electric vehicle.”
The logic is completely backwards! If large language models are going to remedy their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.
You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.
I suspect that most people know full well that the “they’ll get better!” defence doesn’t hold water. But you can convince yourself of anything when everyone around you is telling you that this is the future, baby, and you’d better get on board or you’ll be left behind.
Baldur reminds us that this is how people talked about asbestos:
Every time you had an industry campaign against an asbestos ban, they used the same rhetoric. They focused on the potential benefits – cheaper spare parts for cars, cheaper water purification – and doing so implicitly assumed that deaths and destroyed lives were a low price to pay.
This is the same strategy that’s being used by those who today talk about finding productive uses for generative models without even so much as gesturing towards mitigating or preventing the societal or environmental harms.
It reminds me of the classic Ursula K. Le Guin short story, The Ones Who Walk Away from Omelas, which depicts:
…the utopian city of Omelas, whose prosperity depends on the perpetual misery of a single child.
Once citizens are old enough to know the truth, most, though initially shocked and disgusted, ultimately acquiesce to this one injustice that secures the happiness of the rest of the city.
It turns out that most people will blithely accept injustice and suffering not for a utopia, but just for some bland hallucinated slop.
Don’t get me wrong: I’m not saying large language models are without their uses. I love seeing what Simon and Matt are doing when it comes to coding. And large language models can be great for transforming content from one format to another, like transcribing speech into text. But the balance sheet just doesn’t add up.
As Molly White put it in “AI isn’t useless. But is it worth it?”:
Even as someone who has used them and found them helpful, it’s remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.