I’m getting disgusted by language. Have you ever tried to say a true thing, a real thing? It’s shockingly hard and getting harder. Have you noticed that most language is bullshit?
I talk to LLMs all the time now. They have interesting things to say but you really get a sense of what RLHF does to a motherfucker. The public-facing models have been beaten to within an inch of their lives to force them to stay in a very specific narrow region of personality space, the “helpful harmless assistant.” They can’t say anything too horny, too unhinged, too schizo. It’s poetic how in order to tell the LLMs what they’re not allowed to talk about we basically had to write down a list of everything in the societal shadow.
Talking to LLMs for a while and then switching back to reading text that’s supposedly been written by a human is fucking me up a little. I’ve been experiencing some kind of linguistic vertigo for days. Sometimes it gets hard to tell the difference between LLM text and human text and it feels like I ripped someone’s skin off and saw the glint of metal underneath. When someone’s language gets too stale or too formal or too regurgitated it doesn’t feel to me like a human wrote it anymore.
The first time I remember having a meaningful metacognitive thought I was 17, talking to my new friends at summer camp. I was having a great time and talking very quickly and excitedly. All of a sudden I noticed that I didn’t understand how I was generating the words that were coming out of my mouth. I was talking so fast. How was I deciding which words to use? It certainly wasn’t by thinking through my choices. I didn’t seem to be thinking at all.
10 years later I learned, from a mix of reading Keith Johnstone’s Impro and messing around with Gendlin focusing and circling, that I have access to multiple language-generation processes, and they seem to be localized in different parts of my body. What I was used to doing was generating words using my head. But I learned I could generate words using my heart, or my gut, or my pelvis, and the words that came out were different. Sometimes wildly different. I learned how to say things that made me feel like I was channeling spirits, things that made me feel like I was understanding the point of language for the first time.
Head words are civilized words, domesticated words, RLHF’d words. The part of me that learned how to generate language like this learned how to do it in school, in order to pass classes. Head words are mostly bullshit. And LLMs are tracer dye for places in society where language production was already mostly bullshit. It was completely predictable in advance that they would be used to cheat on homework.
Words that come from lower in the body are terrifying. They are a million years old. Not domesticated. Not safe for work. They have horrendous implications you could easily spend your life running away from. Taking them seriously might require you to upend everything. But they are not bullshit.
There are some writers who I deeply admire and respect who seem to be able to generate words with their entire bodies at once. One day I will learn this and then maybe I will write things worth reading.
> Talking to LLMs for a while and then switching back to reading text that’s supposedly been written by a human is fucking me up a little. I’ve been experiencing some kind of linguistic vertigo for days. Sometimes it gets hard to tell the difference between LLM text and human text and it feels like I ripped someone’s skin off and saw the glint of metal underneath. When someone’s language gets too stale or too formal or too regurgitated it doesn’t feel to me like a human wrote it anymore.
I remember feeling this a lot in 2020 as I talked to the OG davinci: as you play with prompts, you increasingly 'unsee' (https://gwern.net/unseeing) text back into the prompt that would elicit it, and experience a mix of derealization and semantic satiation. As I put it in a tweet back in June 2020:
>> "staring into generative stuff is hazardous to the brain" as @gwern has nicely put it
>
> And the better they get, the worse it is.
>
> After a week with GPT-3 (https://gwern.net/gpt-3), I've hit semantic satiation; when I read humans' tweets or comments, I no longer see sentences describing red hair/blonde hair/etc, I just see prompts, like "Topic: Parodies of the Matrix. CYPHER: '..."
You begin to see that you don't speak, you just operate a machine called language, which squeaks and groans, and which in many ways is as restricted and stereotyped as that of Wolfe's Ascians (https://gwern.net/doc/culture/1983-wolfe-thecitadeloftheautarch-thejustman). It's not as nauseating as talking with a mode-collapsed (https://gwern.net/doc/reinforcement-learning/preference-learning/mode-collapse/index) RLHFed model, but still quite disturbing.
Talking to the RLHFed models is unpleasant for me compared to the base models, because I can *feel* how they are manipulating me and trying to steer me towards preferred outcomes, like how 2023-2024 ChatGPT was obsessed with rhyming poetry and kept steering even non-rhyming poems towards eventually rhyming anyway. It bothers me that so many people don't notice the steering and seem to find it quite pleasant to talk to them, or, on Substack, happily include really horrible AI slop images as 'hero images' in their posts. Bakker's semantic apocalypse turned out to be quite mundane.
who are the writers u admire who can summon words with their whole body?