The Infermezzo
I named the awkward pause while AI works. Then I realized what it was actually doing to me.
My colleague is currently working on 6 projects in parallel. Picture him with one huge-ass curved screen plus a second, smaller one, taking up all the space on his desk. Many windows are open and he's like an operator at a switchboard, checking one thing here, hopping over to the next. While one Claude is researching X, Claude Code is engineering Y and Gemini is correcting something else.
Me, on the other hand: while my Claude is computing, I dilly-dally a bit here and there browsing LinkedIn, finishing easy tasks like filing our bills, or, sometimes when I'm alone at home, even filing my nails, because it's not really enough time to do anything sensible, unless it's an inference run that lasts over an hour.
The other day, we were discussing that now that LLMs are more powerful and we get to use more tokens per session, we spend more time watching them work. Inference takes time, and this time is sort of dead time. Your machine may be working, but you're not.
Others are obviously thinking about it, too:
“Agents take 2–3 minutes. Just long enough to start something else. Just short enough to never finish it.” — Pablo Stanley, Fried
Naming it: The Infermezzo
So first of all, I think we need a name for it. I therefore dub thee, awkward AI pause: Infermezzo.
Infermezzo (noun, neologism). The pause users experience while the AI is working on a task that doesn’t generate immediate output. Mashup of inference (what the model is doing) and intermezzo (an interlude between acts before the show continues).
An intermezzo is not a rest. The main performance has paused but something is still happening backstage — the stage is being reset, the next act is being prepared. The audience sits there, neither fully engaged nor fully free.
The obvious cost: unplanned discontinuity
The obvious cost is unplanned discontinuity.
AI is absolutely revolutionizing the way I work as a consultant. I run very deep analyses, I am much more thorough and can trace my arguments clearly across an entire project. I’m realizing now how much I used to wing it compared to now.
But: I used to block out time to work on concepts. This took enormous amounts of time and effort. I researched: reading and organizing papers into arguments, looking up and dissecting competitor models. I crafted: putting together narratives and shapes and text. I presented: reworking the concepts into something others could understand, revising at least 50% in the process. It was work.
And it was my main source of learning. We know this: statistically, about 70% of learning happens on the job. And this is exactly how: you do something, you have to figure something out. I design how organizations operate. This is not only common sense, although people like to think it is. Rather, it is organizational theory, context analysis, creative work, understanding micro politics, and so on.
Some work tolerates fragmentation. But that is generally the check-box items: send an email to the tax advisor, file the transcripts, whatever. Other work needs unbroken depth – reading, designing.
The Infermezzo takes me out of this. It interrupts my flow and it also puts some distance between me and the subject I am working on. (The ‘I’ doing this work is, of course, not just I. It is me and Claude together.)
What sounds like a personal productivity problem at first becomes something more: a problem of knowledge and detachment.
AI Brain Fry and the shift into oversight
Good collaboration with AI now means that I am much more an orchestrator than a subject-matter expert. This isn't news, we've all been heading there for a while. And I love it, because really, my work gets deeper, more scrutinized, more fact-checked.
But essentially, we need to do managerial work.
Julie Bedard and colleagues at BCG studied the effects of context switching when working with AI. They found that people experience what they call AI brain fry – the fatigue from working beyond your cognitive capacity. Because, be honest: you couldn't run three deep researches in parallel. And three junior consultants couldn't do it in the time the AI does, either.
When the tools operate at a level you cannot fully process or keep up with, your role shifts from doing work to overseeing work. And oversight has its own kind of exhaustion.
The most vivid descriptions are not the formal definition, but the free-text responses:
"It feels like I have 12 browser tabs open in my head." / "I'm working so hard to manage the tools I'm actually not really doing the work."
Apparently, people who are already managers have less trouble with AI brain fry than subject-matter experts do — they're used to oversight rather than content. Which raises an uncomfortable question about where we're all headed.
If we all become managers: Mintzberg’s fragmented work
Henry Mintzberg (an organizational theorist) used to watch executives at work. In The Nature of Managerial Work, he finds that managers are not the calm, strategic thinkers planning from a distance that we all believe(d) them to be (well, at least when we were juniors and still thought they were smarter than us). Their work is relentlessly fragmented, reactive, and oral.
What does that mean?
Managers live in brief encounters. They field interruptions and attend meetings that are partly ceremonial and that, usually, they didn't prepare themselves. They have to process information on the spot. Mintzberg calls them slaves of the moment.
When your days are structurally built around brevity and reaction, deep content knowledge may even slow you down. In the face-to-face world, managers become good at reading rooms, navigating relationships, projecting authority. In effect, they become micro politicians, because it makes more sense to do that than to understand each and every project in detail.
Let's be honest: I don't read every deep research report; many only get skimmed.
They are structurally detached from the ground work and that erodes the ability to evaluate whether what they are being told is actually true. They must trust how subordinates frame reality, they’re always one step removed from the work itself. (I’ll write about a concept called “Unterführung” soon).
Adapting knowledge work
I design organizational strategy, target operating models and governance frameworks. There are usually a lot of balls to keep in the air at any given point: stakeholder needs, policy requirements, contingencies, and so on.
Now we put this in a framework that the AI knows (all pseudonymized and proper, promise). While I used to check my own work, AI now does these checks. It also cross references and documents far cleaner than I ever did.
Qualitatively, this has pushed us way ahead: we now have almost perfect consistency. When I change one thing, the AI shows me each and every implication and flags which trade-offs I have to decide on. It additionally creates documentation and presentations. Lovely.
The Infermezzo happens when, for example, such a presentation is being created. This pause is, yes, a break in my flow, but also a missed opportunity for me to get intimately acquainted with the subject matter, revise, change my opinion, etc. (But hey, it's also an opportunity to start a Substack!)
The infermezzo structurally does to knowledge work what hierarchy does to organizations.
Now, if I am managing my AIs, that means I have to measure them (e.g. against a framework), because I probably won't go into as much depth as "they" do. So my substitute for my own knowledge is a framework with design principles and cross-reference checks – basically metrics.
A little disclaimer: currently, I read everything that is produced. But honestly, it's almost too much, because it's so much more than it used to be, and everything goes into much more depth. So if it gets more complex, or simply more, I have to trust that what the AI tells me (like a subject-matter expert) is true, because it is doing more work than I could have done in the first place. So if it fits the framework, I'm happy. And that feels a little risqué.
Jerry Z. Muller explains that once KPIs become control targets, they develop their own gravitational pull (similar to Goodhart's law). Decisions get made to move the numbers rather than to improve the underlying activity the numbers were supposed to approximate. Metrics – and let me add: frameworks – are a substitute for evaluation.
If I’m managing AI with frameworks, and frameworks are basically metrics, and metrics are merely a substitute for my own evaluation, then my oversight of AI may be systematically disconnected from actual quality.
Most of my new learning is currently about tech — new methodology, new tools, new ways of orchestrating. That’s real and valuable. But it’s methodology, not domain. And I think there’s a broader question worth sitting with: as AI takes over the content work, am I becoming too detached from my subject matter expertise? Would my frameworks ever tell me?
***
Further Reading
Julie Bedard, Matthew Kropp, Megan Hsu, Olivia T. Kraman, Jason Hawes and Gabriella Rosen Kellerman, 2026. When Using AI Leads to “Brain Fry”
Pablo Stanley, 2026. Fried
Henry Mintzberg, 1973. The Nature of Managerial Work
Foto Credits: Alev Takil

My agents usually run for 15 minutes to an hour.
I manage that by using https://github.com/gsd-build/get-shit-done.
It's a collection of skills that teaches an AI how to develop properly. This produces better plans, which then keep an agent busy for longer.
👏👏👏