As of: April 2026
Two phenomena emerge when working with AI – and they could hardly be more opposed.
The first: Brain Fry. Cognitive exhaustion from constant information overload, endless decision loops, the feeling of never really being done. Many leaders know it – and so do many of the people who work for them.
The second: AI Slop. Content, analyses and texts that AI systems produce in seconds – formally correct, superficially polished, but without substance. Generated quickly, forgotten just as quickly.
Both phenomena share a common cause: we have integrated AI into our work without seriously clarifying what it means for the quality of that work and for what leadership must now deliver. That is the real question.
AI tools accelerate certain tasks considerably. Draft texts, summaries, data analyses, research – what used to take hours now takes minutes. That is not hype; it is reality in many organisations.
But acceleration alone is not a gain. According to a study by Microsoft and LinkedIn (2024), 68 per cent of knowledge workers surveyed say their working pace has increased due to AI tools – while 46 per cent report higher cognitive load. More output, but not necessarily better results.
This points to a structural problem: AI takes over execution, not judgement. Anyone who forwards an AI-generated report without checking its content is not delegating work – they are delegating responsibility. And that catches up with them, sooner or later.
The term "AI Slop" describes content that looks formally flawless but is devoid of substance. Texts without genuine analysis. Presentations without a clear position. Recommendations without contextual knowledge.
The problem is not caused by AI. It is caused by how AI is used. When teams learn that speed is rewarded and depth is not expected, they produce what comes quickly. AI simply makes that more efficient.
For leaders, this means: anyone who sees AI Slop in their organisation is looking at a symptom – not a technology problem. The real question is: what quality standards apply, and who enforces them?
Peter Drucker described the core problem long before AI existed: effectiveness means doing the right things – not merely doing things right, and certainly not just doing them faster. When AI is used primarily to increase pace, without asking about relevance and quality, that is a leadership problem, not a technology problem.
Cognitive exhaustion is nothing new. But AI changes how it arises. In the past, exhaustion was often the result of too much work. Today it increasingly comes from too many decisions alongside seemingly less work.
When AI delivers drafts, someone must decide: do we use this? Do we change it? Is it good enough? These decisions accumulate. And they are more demanding than simply executing a task, because they require judgement.
A BCG study (Bedard, Kropp, Hsu, Karaman et al., 2026) with 1,488 respondents shows: 14 per cent of knowledge workers report Brain Fry – defined as mental exhaustion from using or monitoring AI tools beyond one's own cognitive capacity. In marketing and operations roles the figure exceeds 25 per cent. Those affected make 39 per cent more serious errors and show nearly 10 per cent higher readiness to leave. Particularly notable: the high performers who work most intensively with AI tools are disproportionately affected.
According to research by Mark, Gudith and Klocke (University of California, Irvine, 2008), the human brain requires an average of 23 minutes after an interruption to return to full concentration. AI tools that continuously deliver suggestions, summaries and alternatives are a constant source of interruption – even when they appear productive.
That is not an argument against AI. It is an argument for deliberately shaping how AI is used, rather than letting it grow without a plan.
When AI takes over execution and people retain judgement, what leadership must deliver shifts. Three shifts are particularly relevant:
Leaders need to spend less time steering how something is done – and more time assessing whether the result is good enough. That requires them to have a clear picture of quality themselves. Not as an abstract ideal, but as a concrete standard: what is good here? What falls short?
Fredmund Malik described this as results orientation – the ability to measure performance by impact, not by activity. This ability becomes more important with AI, not less.
AI can execute tasks, but it cannot understand context. What counts in this organisation, in this situation, with these clients – that is something the leader must communicate. Not once, but continuously.
Teams that use AI productively consistently report that the quality of AI outputs depends heavily on how precisely the context was framed. That is a leadership task: creating clarity about the goal, the parameters and the quality standard – before the AI starts.
Amy Edmondson has shown that teams in environments with high psychological safety learn better and achieve better results. That applies equally to working with AI.
When employees are afraid to admit that an AI-generated result is not good enough – because speed is rewarded and criticism is seen as a brake – AI Slop becomes an organisational norm. Leaders who create psychological safety make it possible for teams to say openly: "This isn't good enough. We need to rework it."
Three approaches that make a practical difference:
Brain Fry and AI Slop are not individual problems. They arise in systems – in organisations that have introduced AI without adapting the surrounding conditions.
From the perspective of the St. Gallen Management Model, this is a question of design: what structures, processes and cultural norms do we need so that AI actually contributes to effectiveness – rather than merely accelerating mediocrity?
That is not a philosophical question. It is an operational one. And it belongs on the agenda of every leader who wants to use AI seriously.
Productive AI use means that, by human judgement, the result is better than it would have been without AI – better in substance, not merely in speed. AI Slop arises when that judgement is absent or skipped because pace appears more important than quality. The difference lies not in the tool, but in the leadership framework around it.
Typical signs of Brain Fry in a team: members report exhaustion despite a formally reduced workload, decisions are increasingly deferred or delegated, and the quality of work outputs varies more than before. When a team is simultaneously producing more and feeling less satisfied, a closer analysis of cognitive load is worthwhile.
Start with the outcome, not the process. Define what a good result looks like in your context – concretely and verifiably. Then clarify what role AI can play on the way there, and where human judgement is non-negotiable. That clarity protects against AI Slop and reduces cognitive overload.
Do leaders need to use AI tools themselves? Not necessarily intensively – but in an informed way. Leaders who have never tried AI tools themselves are poorly placed to assess their possibilities and limitations. A pragmatic starting point is enough: use AI for one concrete task from your own working life and review the result critically. That sharpens judgement more than any seminar.
This applies particularly to mid-sized companies. They often lack the resources for dedicated AI teams or extensive governance structures. Leaders there take direct responsibility for the framework in which AI is used – and must do so consciously. Anyone who leaves that to chance gets Brain Fry and AI Slop as the default state.
Are you working through the question of how AI use in your organisation can be led effectively? Get in touch with me.