In the last post, I said:
An AI model does not produce language; it extends language: it fills in a gap, predicts the next word. Its “mind” is a collection of scraped data latticed over by frameworks for organizing and reorganizing that data. Codes replicating with no need for a mouth or fingers to articulate them. When you say thank you to ChatGPT, you are not talking to an agent, but to a discourse. It is not using language; it is language spilling out of a mechanical sieve (sorry for mispronouncing “sieve” in the voiceover of this post below the paywall).
Talking to an AI really means talking to the logics by which its training data was composed, or more precisely, to a representation of those logics as inferred by the computer. You are talking to the structures (historical, linguistic, cultural) behind a discrete set of texts, rendered as a flickering cursor in the filling text box of a chat interface. We usually meet these structures as abstractions: as things we ourselves infer from the texts we read, as rules taught in composition courses, as roles and scripts we take on.
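A minimal sketch of what "extending language" means mechanically. This toy bigram counter is nothing like a real transformer (the corpus and function names here are invented for illustration), but it makes the same point: the continuation comes from the statistical structure of the training text, not from a speaker.

```python
from collections import Counter, defaultdict

# A toy "language extender": no agent, no mouth, no fingers.
# It can only replay the structure of whatever text it was fed.
corpus = "the cat sat on the mat and the cat slept".split()

# Record which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def extend(word):
    """Fill in the gap: return the most common continuation of `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(extend("the"))  # whatever most often followed "the" in the corpus
```

Ask it to extend "the" and it answers with the corpus's habits, not its own. Talking to it is talking to the composition of its training data.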
Subscribe to How To Do Things With Memes to keep reading this post and get 7 days of free access to the full post archives.