Mindless Drones

AI and the future of work

v1

Originally published on eighttrigrams.substack.com on December 18th, 2025

There are certain stories that make more of an impression on us than others. For me, one of the most striking has always been a picture, or a scene, from Stephen King’s Dark Tower. There, a group of people called the Breakers, selected for their extraordinary psychic abilities, are absorbed in their work of Weakening the Beams, as it is mysteriously called. Weakening the Beams feels very rewarding to them, and their environment is structured accordingly, so that they don’t have to worry about anything other than focusing on that task. All of this is set up by another group of people, who are the ones really interested in weakening, and ultimately breaking, the Beams. Unbeknownst to the Breakers, the Beams are what the world rests on, so breaking them is of course existentially dangerous.

Breakers, you see, are basically engineers. Engineers, as is well known, tend to be self-motivated. They are always in high demand by Capital, which seeks to organize them into large groups to accomplish big things in concerted efforts. The ultimate benefit of these big things to society may be questionable, so conscience could in principle play a limiting role. But progress of any kind usually ends up being sanctioned by the intellectuals of the time, and, besides, any qualms pale relative to the perks and, more importantly, to the sense of pride that being in a position of high demand affords the engineers.

Here I am, painting such a mocking picture of my industry, and yet, in 2025, I also find myself childishly excited about agentic coding. I can’t remember when I last felt this level of excitement. Both the famous programmer Steve Yegge and the successful author Gene Kim describe the same thing: they thought their best days of programming were behind them, only to feel their passion completely renewed. There are stars in the eyes of many, many people right now.

The excitement comes from an understanding of the power these new tools bring. Once you see that potential, you feel incredibly powerful. I don’t know where I heard this, but someone said that since the LLMs have gotten really good, every person they know is working harder than ever—quite contrary to the idea that this technology should take work away from us. I think what is happening is that the folks who see it are quite literally on a power trip.

I don’t have a clear opinion on superintelligence or AI alignment. Many of these discussions seem pointless to me, since things tend to happen faster around us than we are able to make sense of them. On this point, I found it quite telling how little mention there was of the fact that we cracked the Turing test. What I am admittedly concerned about is what people do to other people using that technology; think surveillance and behaviour control. And I am worried about the impact of AI on the minds of children. Looking at social media and short-form video content, I feel quite bad about the prospect of adding AI to that mix.

In tune with the pensive mood the end of a year brings, but leaving the heavier topics to others,[1] I thought I would do my part and try to raise some awareness of what AI might actually mean for the nature of work in the long run. Mind you, this is addressed to other engineers and knowledge workers. On the one hand, there is some doom-inflation, in that nobody can take predictions of doom seriously for any extended amount of time. On the other hand, impacts on society at large are somewhat abstract and can easily be hidden behind rosy pictures of the boon that technology supposedly brings to humankind. But thinking about the nature of one’s own work hits close to home, and that might induce some humility.

Urban Fluidity 2 collage by Jorge Rigamonti - 1967 [commons]

To those individuals who currently feel crazy powerful, I say: it is true, AI leveraged in the right way, right now, can give you quite a substantial competitive edge. But that will inevitably even out over time, since everyone has access to the same tools and knowledge. I am sure everyone has entertained that thought already. But now consider the following passage from Shop Class as Soulcraft by Matthew B. Crawford:

“Expert systems,” a term coined by artificial intelligence researchers, were initially developed by the military for battle command, then used to replicate industrial expertise in such fields as oil-well drilling and telephone-line maintenance. Then they found their way into medical diagnosis, and eventually the cognitively murky, highly lucrative regions of financial and legal advice. In The Electronic Sweatshop: How Computers Are Transforming the Office of the Future into the Factory of the Past, Barbara Garson details how “extraordinary human ingenuity has been used to eliminate the need for human ingenuity.” She finds that, like Taylor’s rationalization of the shop floor, the intention of expert systems is “to transfer knowledge, skill, and decision making from employee to employer.” While Taylor’s time and motion studies broke every concrete work motion into minute parts,

the modern knowledge engineer performs similar detailed studies, only he anatomizes decision making rather than bricklaying. So the time-and-motion study has become a time-and-thought study. … To build an expert system, a living expert is debriefed and then cloned by a knowledge engineer. That is to say, an expert is interviewed, typically for weeks or months. The knowledge engineer watches the expert work on sample problems and asks exactly what factors the expert considered in making his apparently intuitive decisions. Eventually hundreds or thousands of rules of thumb are fed into the computer. The result is a program that can “make decisions” or “draw conclusions” heuristically instead of merely calculating with equations. Like a real expert, a sophisticated expert system should be able to draw inferences from “iffy” or incomplete data that seems to suggest or tends to rule out. In other words it uses (or replaces) judgment.” [2]

Note that the indented quote is from a text written in 1989. I personally did such work around 2006 or so, as a temporary student job to pay for my vacation. This was at an insurance company, and my task was to sit next to employees, categorise the various tasks they performed, and record how long each took. One observation I made back then that I always found very interesting, and which is relevant here, is that there were two types of employees: the ones dealing with the “easy” cases, where decisions could be made according to protocol, and the ones handling the difficult cases, where a claimant handed in barely legible invoices in a foreign language from a hospital in a foreign country, because they had broken their leg abroad, or something like that. The group assigned to the latter type of case appeared much happier to me. The other group, in contrast, looked exhausted.

Now a note on expert systems, as an update on the state of affairs. Recently I listened to an interview[3] with one of the inventors of the transformer architecture, on which modern LLMs are built. In one fascinating passage he talked about a YouTube channel called Cracking the Cryptic, where a group of Sudoku players reason their way through puzzles, thinking aloud as they go, which allowed his team to make machines learn from these trains of thought. Just to give you some ideas: maybe place a microphone on an employee’s desk and have them spell out their reasoning at each step of micro-decision-making, while tracking the movements of their irises? It’s not even that I don’t understand it, but put like this, it is repellent.

Now compare all of this to the following passage, written in 2025, from an article[4] on Cory Doctorow’s blog:

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That’s it.

That’s the $13T growth story that Morgan Stanley is telling. It’s why big investors and institutions are giving AI companies hundreds of billions of dollars. […]

Now, if AI could do your job, this would still be a problem. We’d have to figure out what to do with all these technologically unemployed people.

To be very clear: the concern I have is not job loss. This is because I believe in the economic principle that desire, and therefore demand, is infinite. Over time, then, we shouldn’t worry about a lack of demand; there is always something more that can be done or produced. It is rather the transformation of work that interests me here.

And in any case, once a technological genie is out of the bottle, it is utterly irrational to try to put it back in. Just think of how much sense it would make to employ elevator operators nowadays.[5] Would that help generate wealth? Or wouldn’t it be literally the same as letting unemployed people dig holes and fill them back in? The answer is obvious.

So, once you have AI and it can do certain things, you would never want to not use it. You wouldn’t cut a tree with a handsaw if an electric one is around, or program in assembly if Python is around and good for the job.

Regarding the transformation of work: at the beginning of Shop Class as Soulcraft, Crawford tells part of the story of how Ford got workers to work on assembly lines. In the beginning there was a very high turnover rate, since the craftsmen of the time were accomplished all-rounders and were repelled by the mind-numbing nature of assembly-line work. But the right compensation did the trick, and before long there were not many of those former craftsmen around any longer.

Given their likely acquaintance with such a cognitively rich world of work, it is hardly surprising that when Henry Ford introduced the assembly line in 1913, workers simply walked out. One of Ford’s biographers wrote, “So great was labor’s distaste for the new machine system that toward the close of 1913 every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963.”[6] This would seem to be a crucial moment in the history of political economy. Evidently, the new system provoked natural revulsion. Yet, at some point, workers became habituated to it.[7]

The assembly line has since become the standard, and work has been split into process design on the one hand and manual labor on the other.

So, what I think is going to happen is this: AI will obviously shift cognitive tasks to machines, as that is the whole point of it. But it will also shift judgment upwards, so that the majority of workers will work with ever less opportunity to exercise any kind of judgment. This will infantilise society further. Business as usual for process-oriented managerialism: you take away responsibility from people, then notice that they behave irresponsibly, which justifies taking away even more responsibility from them.

Scientific managers… have complained bitterly of the poor and lawless material from which they must recruit their workers, compared with the efficient and self-respecting craftsmen who applied for employment twenty years ago.

We have all had the experience of dealing with a service provider who seems to have been reduced to a script-reading automaton. We have also heard the complaints of employers about not being able to find conscientious workers. Are these two facts perhaps related? There seems to be a vicious circle in which degraded work plays a pedagogical role, forming workers into material that is ill suited for anything but the overdetermined world of careless labor.[8]

On some level this is all very fitting, given that we are currently creating hordes of brainrot victims with zero attention span, who soon won’t be able to read anymore. For those who find this too cynical, I offer as an alternative the supposedly more humane way to look at it, namely through the lens of U.B.I. Here we conceive of ourselves as concerned about victims of another sort, namely those left behind by major technological disruptions. In the end, we always try to engineer our way out, but more often than not engineer ourselves into a corner, which is a way of saying that there are always more unintended consequences than one can afford to handle. At that point we resort to good old explaining things away.

And so we might be tempted to delude ourselves, play pretend, and celebrate things which don’t exist—the pride of the worker—re-declaring the cogs in the machine to be most worthy of admiration. Crawford cites

“Our entire civilization is a system of physics, the simplest worker is a physicist.”[9]

and comments, most amusingly,

(This is like calling a particle a particle physicist.)

But, going back to Doctorow’s “all these technologically unemployed people”: somebody has yet to explain to me why they think any of those “in power” have an interest in keeping all those people “around.”[10] I know, that’s a very grim view of things. But it’s what I have grown up with: the picture, another strong one, of all those people inside The Matrix who are in reality nothing but human batteries, bodies trapped in tanks, harvested for their energy. I’m not sure what that energy is a metaphor for, if anything at all.

If, however, we happen to find ourselves still of use to those on top—and I almost don’t know whether that is the better or the worse case—then my fear is this: by ridding ourselves of all opportunities for judgment, by delegating judgment to the AIs or the process designers, and the few remaining important judgment calls to the decision makers at the top, we become mindless drones whose existence is degraded and humiliated, left without any dignity.

To illustrate this, here is another strong picture, this time an analogy of Doctorow’s making:[11]

Start with what a reverse centaur is. In automation theory, a “centaur” is a person who is assisted by a machine. You’re a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.

And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitor the driver’s mouth because singing isn’t allowed on the job, and rat the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

All this is a “reminder to self.” And maybe to some others who see something of themselves in the image of the Breakers. My hope is that some of us may be able to pause, contemplate, and moderate our excitement a little. And ask ourselves from time to time what it is that we are participating in. After all, it is our hands and minds through which another technological revolution is mediated and comes about.

Footnotes

  1. Jonathan Haidt writes extensively about the impact of all these new technologies on children.

  2. Excerpt from Shop Class as Soulcraft: An Inquiry Into the Value of Work. Penguin, 2009, pp. 45-46. Indented quote from Barbara Garson, _The Electronic Sweatshop: How Computers Are Transforming the Office of the Future into the Factory of the Past_. Penguin, 1989, pp. 120-21. “Taylor” refers to Frederick Winslow Taylor, the “father of Scientific Management.”

  3. He Co-Invented the Transformer. Now: Continuous Thought Machines [Llion Jones / Luke Darlow]. YouTube.

  4. Cory Doctorow: The Reverse Centaur’s Guide to Criticizing AI. link. Discovered via a post on Simon Willison’s blog: link.

  5. On that note, I found _Robots Have Been About to Take All the Jobs for 100 Years_ (Substack article) worth a skim-through.

  6. Keith Sward. The Legend of Henry Ford. Rinehart, 1948, p. 49.

  7. Shop Class as Soulcraft. pp.41-42.

  8. Overall quote is from p. 101 of Shop Class as Soulcraft. The italicised quote is of Robert F. Hoxie, Scientific Management and Labor, as quoted on that page.

  9. Harry Braverman. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press, 1974, p.444.

  10. Compare also: Slavoj Žižek: sooner or later 80% people will be disposable. YouTube. Yuval Noah Harari has also become a bit infamous for speaking about “the rise of the useless class.” There was also the term “useless eaters” floating around, but I’m not sure whether that was really something he said.

  11. Cory Doctorow: The Reverse Centaur’s Guide to Criticizing AI. link.
