
Between Brain Fry and AI Slop – What AI Means for Leadership

Written by Daniel Dunkhase | April 11, 2026


Two phenomena emerge when working with AI – and they could hardly be more opposed.

The first: Brain Fry. Cognitive exhaustion from permanent information overload, endless decision loops, the feeling of never really being done. Many leaders know it – and so do many of the people who work for them.

The second: AI Slop. Content, analyses and texts that AI systems produce in seconds – formally correct, superficially polished, but without substance. Generated quickly, forgotten just as quickly.

Both phenomena share a common cause: we have integrated AI into our work without seriously clarifying what it means for the quality of that work and for what leadership must now deliver. That is the real question.

What does AI actually change in everyday working life?

AI tools accelerate certain tasks considerably. Draft texts, summaries, data analyses, research – what used to take hours now takes minutes. That is not hype; it is reality in many organisations.

But acceleration alone is not a gain. According to a study by Microsoft and LinkedIn (2024), 68 per cent of knowledge workers surveyed say their working pace has increased due to AI tools – while 46 per cent report higher cognitive load. More output, but not necessarily better results.

This comes down to a structural problem: AI takes over execution, not judgement. Anyone who forwards an AI-generated report without checking its content is not delegating work – they are delegating responsibility. And that catches up with you, sooner or later.

What is AI Slop – and why is it a leadership issue?

The term "AI Slop" describes content that looks formally flawless but is devoid of substance. Texts without genuine analysis. Presentations without a clear position. Recommendations without contextual knowledge.

The problem is not caused by AI. It is caused by how AI is used. When teams learn that speed is rewarded and depth is not expected, they produce what comes quickly. AI simply makes that more efficient.

For leaders, this means: anyone who sees AI Slop in their organisation is looking at a symptom – not a technology problem. The real question is: what quality standards apply, and who enforces them?

Peter Drucker described the core problem long before AI existed: leadership means doing the right things – not doing things faster. When AI is used primarily to increase pace, without asking about relevance and quality, that is a leadership problem, not a technology problem.

What is Brain Fry – and what does AI have to do with it?

Cognitive exhaustion is nothing new. But AI changes how it arises. In the past, exhaustion was often the result of too much work. Today it increasingly comes from too many decisions alongside seemingly less work.

When AI delivers drafts, someone must decide: do we use this? Do we change it? Is it good enough? These decisions accumulate. And they are more demanding than simply executing a task, because they require judgement.

A BCG study (Bedard, Kropp, Hsu, Karaman et al., 2026) with 1,488 respondents shows: 14 per cent of knowledge workers report Brain Fry – defined as mental exhaustion from excessive use or monitoring of AI tools beyond one's own cognitive capacity. In marketing and operations roles the figure exceeds 25 per cent. Those affected make 39 per cent more serious errors and show nearly 10 per cent higher readiness to leave. Particularly notable: it is precisely the high performers, who work most intensively with AI tools, who are disproportionately affected.

According to research by Mark, Gudith and Klocke (University of California, Irvine, 2008), the human brain requires an average of 23 minutes after an interruption to return to full concentration. AI tools that continuously deliver suggestions, summaries and alternatives are a constant source of interruption – even when they appear productive.

That is not an argument against AI. It is an argument for shaping how AI is used deliberately, rather than letting it grow without a plan.

What are the consequences for leadership?

When AI takes over execution and people retain judgement, what leadership must deliver shifts. Three shifts are particularly relevant:

1. From process control to quality judgement

Leaders need to spend less time steering how something is done – and more time assessing whether the result is good enough. That requires them to have a clear picture of quality themselves. Not as an abstract ideal, but as a concrete standard: what is good here? What falls short?

Fredmund Malik described this as results orientation – the ability to measure performance by impact, not by activity. This ability becomes more important with AI, not less.

2. From task allocation to context provision

AI can execute tasks, but it cannot understand context. What counts in this organisation, in this situation, with these clients – that is something the leader must communicate. Not once, but continuously.

Teams that use AI productively report, again and again, that the quality of AI outputs depends heavily on how precisely the context was framed. That is a leadership task: creating clarity about the goal, the parameters and the quality standard – before the AI starts.

3. From control to psychological safety

Amy Edmondson has shown that teams in environments with high psychological safety learn better and achieve better results. That applies equally to working with AI.

When employees are afraid to admit that an AI-generated result is not good enough – because speed is rewarded and criticism is seen as a brake – AI Slop becomes an organisational norm. Leaders who create psychological safety make it possible for teams to say openly: "This isn't good enough. We need to rework it."

What should leaders actually do differently?

Three approaches that make a practical difference:

  • Make quality standards explicit. Not "use AI for this", but: "This is the result we need – and that is how we measure it." AI is a tool on the way there, not a benchmark.
  • Build in cognitive buffers. Teams need time to review AI outputs, put them in context and decide. Anyone who does not plan for this time generates Brain Fry – and gets AI Slop as the result.
  • Reflect on your own use of AI. Leaders who uncritically pass on AI-generated content are implicitly setting a standard. That signal is stronger than any instruction.

What does this mean systemically?

Brain Fry and AI Slop are not individual problems. They arise in systems – in organisations that have introduced AI without adapting the surrounding conditions.

From the perspective of the St. Gallen Management Model, this is a question of design: what structures, processes and cultural norms do we need so that AI actually contributes to effectiveness – rather than merely accelerating mediocrity?

That is not a philosophical question. It is an operational one. And it belongs on the agenda of every leader who wants to use AI seriously.

Frequently asked questions

What is the difference between productive AI use and AI Slop?

Productive AI use means the result is better by human judgement than it would have been without AI – in substance, not merely in speed. AI Slop arises when that judgement is absent or skipped because pace appears more important than quality. The difference lies not in the tool, but in the leadership framework around it.

How do I recognise Brain Fry in my team?

Typical signs: team members report exhaustion despite a formally reduced workload, decisions are increasingly deferred or delegated, the quality of work outputs varies more than before. When teams are simultaneously producing more and feeling less satisfied, a closer analysis of cognitive load is worthwhile.

How do I set quality standards for AI-supported work?

Start with the outcome, not the process. Define what a good result looks like in your context – concretely and traceably. Then clarify what role AI can play on the way there, and where human judgement is non-negotiable. That clarity protects against AI Slop and reduces cognitive overload.

Do I need to use AI intensively myself in order to lead on this topic?

Not necessarily intensively – but informedly. Leaders who have never tried AI tools themselves are poorly placed to assess their possibilities and limitations. A pragmatic starting point is enough: use AI for one concrete task from your own working life and review the result critically. That sharpens judgement more than any seminar.

Is this relevant for mid-sized companies, or mainly for large corporations?

Particularly for mid-sized companies. They often lack the resources for dedicated AI teams or extensive governance structures. Leaders therefore take direct responsibility for the framework in which AI is used – and must do so consciously. Anyone who leaves that to chance gets Brain Fry and AI Slop as the default state.

Are you working through the question of how AI use in your organisation can be led effectively? Get in touch with me.