The Paperclip Maximiser Is You

In 1858, the New York Times ran a piece about the transatlantic telegraph that should be tattooed on the forehead of every technologist alive today. The telegraph, the paper warned, was "too fast for the truth." Messages now crossed the Atlantic in minutes instead of weeks, and the editors worried that speed without verification would unleash a torrent of rumour, misinformation, and panic onto an unprepared public. They were right. They were also, in a way that matters enormously right now, describing a pattern that is precisely 168 years old and still accelerating.

I grew up in Brighton in the 1980s. We had four television channels. Channel 4 didn't even broadcast twenty-four hours a day until 1996. My mother told me that if I stared at the television too long, my eyes would go square. The scarcity was the point. Four channels meant somebody — a commissioning editor, a scheduler, a regulator — had decided what was worth broadcasting and when. The information came in a trickle. You could drink from it.

Now it comes from a fire hose attached to a sewage main.

[AI-generated image: A child of the eighties, before the flood]

Every generation panics about the new medium. In 1492, the Benedictine abbot Johannes Trithemius wrote De Laude Scriptorum Manualium (In Praise of Scribes), arguing that the printing press would corrupt knowledge and destroy monastic discipline. Scribes, he insisted, engaged with the divine through the physical act of copying; the press severed the link between comprehension and effort. He was mocked for it. He published his defence of hand-copying as a printed book — the irony was not lost on his critics, though Trithemius argued the press was acceptable for distributing his argument while still inferior for forming one. Five hundred years later, he sounds less like a Luddite and more like a prophet. The trade-off he described — effort removed, comprehension diminished — is exactly what happens when you ask an AI to summarise a report you should have read yourself.

Neil Postman picked up the thread in 1985 with Amusing Ourselves to Death, arguing that Aldous Huxley had beaten George Orwell — that we would not be destroyed by what we fear but by what we love, drowning in an ocean of entertainment we chose for ourselves. Postman's insight was structural: each medium does not merely carry content but reshapes the act of thinking. Television turned political argument into entertainment. The internet turned entertainment into a feedback loop. And AI is turning the feedback loop into something that thinks — or pretends to think — on your behalf.

From Trithemius's printing press to Postman's television to the LLM on your laptop, the pattern is the same: a technology arrives that makes information cheaper, and the thing it makes cheaper is not just production but cognition itself. Amy Orben called this the "Sisyphean Cycle of Technology Panics", and she is right that the pattern exists. I am not interested in relitigating the pattern. I am interested in whether this time the pattern breaks.

The Feedback Loop That Changed Everything

Here is what is different. Every previous communication technology operated on human timescales. A newspaper editor wrote an article, printed it, distributed it, and waited for the letters to come back. The feedback loop was measured in days or weeks.

Then the loop tightened. Then it got so tight that the human stopped being the author and became the product.

Richard Serra said it in 1973: "You are the product." Sean Parker confirmed it in 2017: Facebook was designed to exploit "a vulnerability in human psychology." Chamath Palihapitiya went further: "I think we have created tools that are ripping apart the social fabric of how society works." These are not critics. These are the architects, confessing.

B.F. Skinner identified variable-ratio reinforcement as the most addictive schedule of reward in the 1950s. Social media bolted his rat lever onto a global communication network and called it a platform. Andrew Bosworth's internal memo made growth "de facto good." Frances Haugen confirmed the platform knew it was causing harm and chose profit over safety.

Growth as the justification for everything. Sound familiar?
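
If "variable-ratio reinforcement" sounds abstract, it is mechanically trivial. Here is a minimal sketch of the schedule in Python (the function name and the ratio are mine, invented for illustration; no platform publishes its actual reward logic):

```python
import random

def variable_ratio_schedule(mean_ratio=4):
    """Reward arrives on average once every `mean_ratio` responses,
    but never predictably -- Skinner's variable-ratio schedule."""
    while True:
        yield random.random() < 1 / mean_ratio

# Twenty pulls on the lever -- or twenty pull-to-refreshes of a feed.
# The subject can never learn when the next reward is due, and that
# uncertainty is what makes the schedule so resistant to extinction.
schedule = variable_ratio_schedule()
print([next(schedule) for _ in range(20)])
```

The notification badge, the infinite scroll, the pull-to-refresh gesture: each is this loop with better graphics.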

[AI-generated image: The telegraph office drowning in its own output]

Three Threads, One Ratchet

This argument has three threads and they reinforce each other viciously. The first is supply-side: more than 52% of long-form web articles are now AI-generated, and HBR estimates the resulting "workslop" costs a 10,000-person organisation roughly $9 million a year. The second is demand-side: when AI summarises your emails and filters your feeds, you do not become better informed — you become a person who has stopped processing information altogether. The third is structural: the engagement engine was built to maximise attention captured, not understanding, and AI has simply given it a new production line. The flood creates the need for AI filters. The filters degrade your ability to evaluate what the flood contains. And the business model profits from both.

I use these tools. I am writing about the ratchet and I can feel its teeth in my own workflow — the pull of the summary, the relief of the shortcut, the slight hollowing-out when I accept an answer I did not earn. I am not writing from above this problem. I am writing from inside it.

Herbert Simon saw the core constraint in 1971: "A wealth of information creates a poverty of attention." Researchers at Caltech have shown that human thought operates at roughly 10 bits per second. Your sensory systems take in billions. Your conscious mind processes ten. Every communication technology in history has increased the volume arriving at that bottleneck. Not one has widened the bottleneck itself.

Then, Like Now

"OK," you might reasonably argue, "but at least AI helps us manage the overload. It summarises. It filters. It prioritises. Isn't that the whole point?"

The reasonable version of this objection is that tools are neutral and usage is what matters. I used to believe that. But neutral tools do not redesign your information diet without asking. Neutral tools do not create feedback loops that amplify your existing biases. The ratchet does not care about your intentions.

A 2024 study in Nature Human Behaviour showed that AI-human feedback loops amplify existing biases rather than correcting them. A study in PNAS Nexus this year found that people who relied on LLM-generated summaries developed shallower knowledge structures than those who read the source material. They felt more confident. They knew less.

A 2025 study in Societies found a correlation of -0.68 between AI tool usage and critical thinking ability. The correlation does not tell us which way the arrow points — whether AI usage degrades thinking or whether weaker thinkers reach for AI more readily — but either direction is troubling.

And here is the part that should genuinely frighten you. ActivTrak's latest research found that AI tools are increasing task completion time by 346%. Not reducing it. Increasing it. Deep focus time is falling. Email time has doubled. BCG calls it "AI brain fry" — the cognitive exhaustion of constantly supervising a system that is supposed to be supervising you. The 346% figure likely captures the chaos of early adoption and may moderate, but the structural pattern — the supervisor needing a supervisor — is not a teething problem. It is the architecture.

[AI-generated image: Take the pill, the label says it helps]

The Paperclip Maximiser Is Not a Metaphor for AI

In 2003, Nick Bostrom introduced the paperclip maximiser — a thought experiment about an AI given the simple goal of making paperclips, which converts all available matter in the universe into paperclips, including the humans who built it.

But the paperclip maximiser is not a metaphor for AI. It is a metaphor for us.

We built the engagement engines. We optimised for clicks, views, time-on-site, and share counts. When those systems produced polarisation, addiction, and the erosion of shared reality, we did not shut them down. We scaled them up. We called it growth. AI slop is not a bug. It is the system working as designed.

Here is the ratchet. I am building it deliberately, because I want you to see how each step makes the next feel necessary and the alternative — doing the cognitive work yourself — feel impossible:

Step 1 — The Flood: AI generates content at a scale no human team can match. More than half of long-form articles are now AI-generated, and the number is climbing.

Step 2 — The Filter: You cannot process the flood, so you use AI to filter and summarise it. The filter selects based on what you have engaged with before. Your information diet narrows. This is where a thoughtful leader stops and asks: what is the filter removing? What am I no longer seeing? If you cannot answer that, you have already ceded the decision about what matters.

Step 3 — The Delegation: The AI-generated summaries replace your engagement with source material. Your comprehension shallows. You feel informed. You are not. Shallower knowledge structures, greater confidence, less actual understanding. This is the second off-ramp. If you are a leader making decisions on summaries of summaries, mandate that your team — and you — read the source material for any decision above a given threshold. Name the threshold. Write it down.

Step 4 — The Atrophy: Your reduced attention creates a gap. The AI generates more content to fill it. The noise increases. The signal degrades. You lean harder on the AI. The -0.68 correlation between AI usage and critical thinking is the ratchet in motion.

Step 5 — The Business Model: None of this is accidental. The engagement engine that drives Step 1 profits from every iteration of Steps 2 through 4. The company selling you the flood is selling you the filter. The company selling you the filter has no incentive to restore your ability to drink from the stream yourself. Growth is de facto good. The number goes up. The harm is acceptable because the number is still going up.

That is not a decision framework. That is a ratchet. You are not making decisions. You are being funnelled.
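
For readers who think better in code than in rhetoric, here is a deliberately crude sketch of those five steps as a loop. Every parameter and variable name is an assumption I have invented for illustration; this models the shape of the argument, not any dataset:

```python
# A toy model of the ratchet, not a measurement. The point is structural:
# volume compounds, comprehension only decays, and nothing in the loop
# ever restores it. That one-way movement is what makes it a ratchet.
def ratchet(cycles=6, flood=1.5, atrophy=0.85):
    volume, comprehension = 1.0, 1.0
    for cycle in range(1, cycles + 1):
        volume *= flood                        # Step 1: the flood grows
        reliance = 1 - comprehension / volume  # Steps 2-3: the gap you delegate
        comprehension *= atrophy               # Step 4: unpractised work atrophies
        # Step 5: no line in this loop ever increases comprehension.
        print(f"cycle {cycle}: volume={volume:.2f}  "
              f"comprehension={comprehension:.2f}  reliance={reliance:.1%}")

ratchet()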

Hannah Arendt wrote about what she called "the banality of evil" — the idea that great moral failures are not caused by monsters but by ordinary people who stop thinking. Not people who think evil thoughts, but people who delegate their thinking to systems, to procedures, to authorities, and in doing so become incapable of moral judgement. Arendt's word for it was thoughtlessness, and she did not mean stupidity. She meant the abdication of the individual's responsibility to comprehend.

The Ratchet and the Responsibility

I need to be honest about something. I drafted sections of this argument with an AI assistant. I asked it to find the Simon quote. I asked it to check the Bostrom citation. I asked it to scan six studies I did not have time to read in full. Each time, I felt the pull — the relief of outsourcing a cognitive task, the slight diminishment of my engagement with the material. I caught myself accepting a summary instead of reading the source. I am describing a ratchet, and I can feel its teeth on my own cognition.

Simon Wardley and I have been putting the world to rights on this — over afternoon coffees and Greek raki, the kind of argument that goes in circles because neither of us can find the exit. His position is that the real danger is not that AI produces rubbish but that it degrades comprehension. His deeper worry: once AI controls the reasoning layer, you have handed the keys to your cognition to a system whose incentives are not your own. I keep trying to find the flaw. I have not found it yet.

The technology industry will tell you this is just another moral panic. Trithemius worried about printing. Postman worried about television. And look — civilisation survived.

True. But the printing press did reduce our reliance on monastic memory — Trithemius was right about that. Television did turn political discourse into entertainment — Postman was right about that. The panickers were not wrong about the mechanism. They were wrong about the scale. Civilisation survived, but as something different — something that had traded one cognitive capacity for another without ever consciously choosing to.

This time, the thing we are trading away is comprehension itself. The ability to take in information, process it, weigh it against what you already know, and form a judgement. And the feedback loop is no longer measured in days or weeks but in milliseconds.

[AI-generated image: The arcades are full, but nobody chose the game]

So here is my challenge. Two things. One is personal, one is organisational.

The personal one: this week, read one thing in full that you would normally have asked an AI to summarise. Sit with it. Let it be slow. Let it be boring. Let your ten bits per second do their work.

The organisational one: audit where your company has inserted AI into the comprehension layer — the places where a human used to read, evaluate, and decide, and a system now does it for them. Map it against the ratchet. At which step are you? Ask whether the people downstream can still do the work if the system is switched off. If the answer is no, you do not have an AI strategy. You have a dependency. And dependencies, left unexamined, become vulnerabilities.

The paperclip maximiser is not coming for you. It is already here. And it is not a machine. It is every decision you made to let something else do your thinking.