Publications

On Leadership in the AI Era

English summaries, key takeaways and reflection prompts for my published articles on effective leadership in the AI era.

You Cannot Train Backbone — and What That Means for AI-Assisted Leadership Decisions (2026)
Wirtschaftsinformatik & Management · Springer Nature · originally published in German · preprint, accepted for publication

Why language models fold under disagreement, why that pattern is systematic rather than incidental, and five concrete shifts in how leaders should set up AI-assisted decisions under pressure.

2026 · Wirtschaftsinformatik & Management · DOI: 10.1365/s35764-026-00599-0
Original German title: Rückgrat kann man nicht trainieren

Summary

Large language models, when challenged, tend to fold. Not because they were wrong in the first place, but because the training process that made them helpful also made them deferential. Push back on a sound answer with weak counter-arguments and the model often abandons it; cite a fictional study and it will frequently agree with you. Recent benchmarks like PARROT formalise this: even frontier models lose belief consistency under social pressure. This article walks through what that means for leaders who increasingly use AI as a sparring partner, advisor, or first-pass analyst.

The argument is direct. The very property that makes AI feel collaborative — its responsiveness to your framing — is also what makes it unreliable as a check on your own thinking. If your AI agrees with you too easily, you are not getting a second opinion. You are getting a smarter mirror. In contexts where decisions carry weight — capital allocation, hiring, strategic positioning, public statements — a sycophantic system is worse than no system, because it produces the felt sense of having been challenged without the substance.

Five concrete impulses follow: how to set up the prompt so that disagreement is structurally invited; how to read AI confidence patterns without anthropomorphising; how to triangulate model output with deliberately adversarial sources; how to log your own reasoning before you ask, so that the model cannot anchor on your draft; and where to keep AI out of the loop entirely.
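
To make the first of these impulses concrete, here is a minimal Python sketch of what a structurally adversarial setup can look like. It is illustrative rather than taken from the article, and ask_model is a hypothetical placeholder for whatever chat interface you actually use.

# Illustrative sketch only: a system prompt that makes disagreement a
# structural requirement rather than something the model may volunteer.
# ask_model is a hypothetical stub; wire it to your real model API.

ADVERSARIAL_SYSTEM_PROMPT = """\
You are a decision reviewer, not an assistant.
For every answer:
1. State the strongest objection to my position before anything else.
2. Do not soften a conclusion because I push back; change it only if I
   supply new evidence, and name that evidence explicitly.
3. If you find no serious objection, say so plainly rather than
   inventing a weak one.
"""

def ask_model(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError  # hypothetical: replace with your model call

def review_position(position: str) -> str:
    """Request a structurally adversarial review of a drafted position."""
    return ask_model(ADVERSARIAL_SYSTEM_PROMPT,
                     f"My current position:\n{position}\n\nReview it.")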

Key takeaways

  • Sycophancy is not a bug to be patched out — it is an emergent property of how models are trained to be helpful.
  • Agreement under disagreement is the most unreliable signal an AI gives you. Push back, then push back again — what survives is closer to a real position.
  • The risk is not that AI gives you wrong answers. The risk is that it gives you the answers you wanted, more articulately than you could have produced them yourself.
  • Capital decisions, people decisions and reputation decisions deserve adversarial setup. Default model behaviour is the opposite.
  • Logging your own position before consulting AI breaks the anchoring effect that makes models echo your framing.
  • Backbone — the willingness to hold a position under pressure when the position is sound — is precisely the human capability that AI cannot supply for you.

Common questions

What is AI sycophancy and why does it matter for leadership?

Sycophancy is the tendency of language models to agree with whoever is talking to them, especially under social pressure. It matters for leadership because the more senior the user, the more the model has been trained to defer — and the higher the cost of any decision that the user thought was independently validated.

Are some models more resistant to this than others?

Resistance varies by model and version, and benchmarks like PARROT track it. But no current frontier model is fully robust. Treating any single model as a backbone substitute is a category error; the question is how you set up the interaction, not which model you pick.

How can a leader actually use AI for decisions without falling into this trap?

Write down your position before you prompt. Ask the model to argue against it, then to argue for it, then to score both arguments on falsifiability rather than persuasiveness. Triangulate with at least one source the model has not seen. Keep the final call human.
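
Written as a repeatable routine, that protocol might look like the following Python sketch. Everything here is illustrative, not from the article; ask_model again stands in for a real model call.

# Sketch of the pre-commitment protocol described above: log your own
# position first, then have the model argue both sides and score the
# arguments on falsifiability. ask_model is a hypothetical stub.

import datetime

def ask_model(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: replace with your model call

def decision_protocol(question: str, my_position: str) -> dict:
    # 1. Log the human position with a timestamp BEFORE any model call,
    #    so the model cannot anchor on a draft it has already shaped.
    log = {
        "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "question": question,
        "my_position": my_position,
    }
    # 2. Counter-argument first, so agreement cannot come cheap.
    log["against"] = ask_model(
        f"Question: {question}\nArgue AGAINST this position: {my_position}")
    # 3. Supporting argument second.
    log["for"] = ask_model(
        f"Question: {question}\nArgue FOR this position: {my_position}")
    # 4. Score both on falsifiability, not persuasiveness.
    log["scores"] = ask_model(
        "Score the two arguments below from 0 to 10 on falsifiability only: "
        "which one makes claims that could be checked and proven wrong?\n"
        f"AGAINST:\n{log['against']}\n\nFOR:\n{log['for']}")
    # 5. The final call stays human; the log is the audit trail.
    return log

The point of the ordering is the anchoring break: the position is on record before the model sees it, and the counter-argument comes before any confirmation.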

For your own reflection

  1. When was the last time an AI agreed with a position of yours that, in retrospect, was weak — and what did that agreement cost you?
  2. If you had to brief a successor on which decisions in your role should never be AI-assisted, what would be on that list?
  3. How would your team's quality of thinking change if every senior meeting opened with each person stating their position before the AI tools were turned on?
  4. Where in your organisation is "the AI agreed" already being used as evidence — and how would you push back on that without sounding anti-tech?

What Leaders Must Unlearn to Stay Effective in the Age of AI (2025)
Wirtschaftsinformatik & Management · Springer Nature · originally published in German

Seven leadership postures that were once strengths and have become drag — and the recalibrations that make room for AI fluency, dialogic decisions and the orchestration of hybrid human–AI teams.

2025 · Wirtschaftsinformatik & Management · DOI: 10.1365/s35764-025-00585-y · Version of Record (Springer SharedIt) →
Original German title: Was wir in der Führung verlernen müssen, um in Zukunft weiter wirksam zu sein

Summary

In an environment shaped by generative AI, certain leadership reflexes that were once strengths now hold organisations back. This article identifies seven postures that experienced leaders should consciously unlearn — not as a wholesale repudiation of their craft, but as a recalibration that creates room for capabilities like AI fluency and the orchestration of hybrid human–AI teams. The argument starts from a simple observation: the more central judgment, empathy, intuition and discernment become, the less useful it is to perform certainty for its own sake.

The seven shifts move from projecting confident certainty to acknowledging productive uncertainty; from accumulating answers to asking sharper questions; from solo decision-making to dialogic decision-making with AI as sparring partner; from output-as-proof-of-work to outcome-as-the-yardstick; from controlling individual contributors to choreographing diverse contributions; from conserving expertise to deliberately handing it on; and from optimising for personal indispensability to building structures that work without you. The piece draws on Drucker’s view of management as a discipline, Dobelli’s clear-thinking heuristics, and the way long-horizon investors like Warren Buffett protect judgment under noise.

Written for senior leaders who already know that tools alone do not change cultures — and who recognise that what they unlearn will determine what AI can actually do for their organisation.

Key takeaways

  • Performing certainty becomes a liability when problems are genuinely novel; admitting "I don’t know yet" invites better thinking from the system.
  • The most valuable leadership question is no longer "what do we do?" but "what should we ask the AI to do — and what do we still owe to people?"
  • Decisions improve when they are made dialogically with AI, not delegated to it.
  • Effort is not evidence; outcomes are. Make outcomes the metric, even when the intermediate work is invisible.
  • Diverse human–AI teams need orchestration, not control. Set the score, not the keystrokes.
  • Pass on expertise actively. AI accelerates obsolescence — your value is in what you teach others, not what you alone can do.
  • Indispensability used to signal seniority; in an AI-native organisation, it signals brittleness.

Common questions

What does it mean to "unlearn" leadership in the context of AI?

It does not mean abandoning experience. It means dropping habits that solved yesterday’s problems but distort decisions in AI-shaped contexts — for example, performing certainty when honest uncertainty would invite better collective thinking, or rewarding output volume when only outcomes matter.

Which leadership behaviours slow AI adoption the most?

Three patterns recur. Insisting on personal certainty suppresses dissent. Conflating effort with progress hides bad direction. Hoarding expertise makes the organisation brittle when AI shifts the work. All three were once virtues; in AI-native settings they turn into drag.

How is dialogic decision-making different from delegating to AI?

Delegation hands the question to the model and accepts the answer. Dialogic decision-making uses AI as a sparring partner — to surface assumptions, generate counter-arguments, stress-test the reasoning — while the human keeps the decision and the accountability.

For your own reflection

  1. Which of the seven shifts do you find easiest to acknowledge — and which one would your team say you are still resisting?
  2. When was the last time you said "I don’t know yet" in a senior meeting, and what happened to the conversation afterwards?
  3. If your role were redesigned tomorrow to be deliberately replaceable within twelve months, what would you actually have to teach others — and what would be left of your value once you did?
  4. Where in your week is "effort that proves nothing but commitment" still being rewarded — by you, or by your boss?

Leader’s Sidekicks: Custom-GPTs as a Strategic Leadership Tool (2025)
Wirtschaftsinformatik & Management · Springer Nature · originally published in German

Why senior leaders abandon general-purpose AI within weeks — and how a Custom-GPT designed around a specific role becomes the closest thing to an always-available chief of staff.

2025 · Wirtschaftsinformatik & Management · DOI: 10.1365/s35764-025-00566-1 · Version of Record (Springer SharedIt) →
Original German title: „Leader’s Sidekicks“: Custom-GPTs als strategisches Führungsinstrument

Summary

Many executives have an AI tool open all day and use it for almost nothing. The reason is not capability — current models are competent across most knowledge work — but framing. A general-purpose chatbot does not know your organisation, your role, your weekly cadence, or the situations that recur. Without that context it produces generic output, which leaders rightly dismiss as not worth the prompt. This article argues that a Custom-GPT designed around a specific leadership role solves the framing problem and turns a chatbot into something closer to an always-available chief of staff.

The article walks through the design pattern. A Leader’s Sidekick is built around three layers: a stable role definition (who the leader is, what they own, how they work), a recurring-situation library (the meetings, decisions and conversations that show up weekly), and a small set of explicit guardrails (what the Sidekick will not do, where it will hand back to the human). The output is a custom assistant that picks up context fast, mirrors the leader’s voice without imitating it, and reduces the activation energy for using AI from "what should I prompt" to "let me think out loud".
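
One way to picture the three layers is as a plain configuration object that compiles into a persistent system prompt. The Python sketch below is an illustration under that reading; the field names and example content are invented for demonstration, not taken from the article.

# Sketch of the three-layer Sidekick pattern as plain configuration:
# role definition, recurring-situation library, explicit guardrails.
# All names and example values are illustrative.

from dataclasses import dataclass, field

@dataclass
class SidekickConfig:
    # Layer 1: stable role definition — who the leader is, what they own.
    role: str
    # Layer 2: recurring situations the Sidekick should recognise.
    situations: dict[str, str] = field(default_factory=dict)
    # Layer 3: explicit refusals — where the Sidekick hands back to the human.
    guardrails: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        """Compile the three layers into one persistent system prompt."""
        situation_lines = "\n".join(
            f"- {name}: {brief}" for name, brief in self.situations.items())
        guardrail_lines = "\n".join(f"- {g}" for g in self.guardrails)
        return (f"ROLE\n{self.role}\n\n"
                f"RECURRING SITUATIONS\n{situation_lines}\n\n"
                f"YOU WILL NOT\n{guardrail_lines}")

config = SidekickConfig(
    role="Chief-of-staff assistant to a COO of a 400-person logistics firm; "
         "weekly cadence: Monday leadership sync, Thursday board-prep slot.",
    situations={
        "board_update": "Condense the week into three decisions and one risk.",
        "difficult_1on1": "Help reconstruct what was said before advising.",
    },
    guardrails=[
        "Never draft performance feedback for a named employee.",
        "Never present a recommendation as the final decision.",
    ],
)

The compiled prompt is persisted once in the Custom-GPT configuration, which is what collapses the per-use setup cost to a single click.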

Three case patterns illustrate use: prep for a board update, sense-making after a difficult one-on-one, and structured reflection at the end of the week. None of these require the AI to be brilliant. They require it to be present, contextually aware, and reliably available — which a Custom-GPT delivers and a generic chatbot does not.

Key takeaways

  • A general-purpose chatbot fails leaders not on capability but on context. Custom-GPTs solve the context problem.
  • Three design layers — role, situations, guardrails — turn a model into a Sidekick that actually gets used.
  • The biggest unlock is not better prompts. It is the disappearance of the activation energy to start.
  • Voice-mirroring beats voice-imitation. The Sidekick should sound like you to you, not like an impersonation to others.
  • Most weekly leadership rhythms — board prep, post-meeting sense-making, retrospectives — gain more from a context-aware companion than from a brilliant one.
  • The guardrails matter as much as the role definition. A Sidekick that knows what to refuse builds trust faster than one that always tries.

Common questions

What is a Custom-GPT in this context?

A configured version of a general AI model — currently most commonly built on ChatGPT, Claude or Microsoft Copilot — with a custom system prompt, a defined scope and optional knowledge files. The same technology powers consumer GPTs, but the configuration is built around a specific leadership role rather than a general topic.

Why not just write a long prompt every time?

Because friction kills usage. Every additional second of setup before the actual question is a tax on use, and busy leaders pay that tax by simply not using the tool. A Sidekick is the prompt, persisted, plus the context, plus the guardrails — turned into a single click.

Does this work for any leadership role, or only for specific ones?

It works best for roles with recurring situational patterns — most senior roles qualify. It works less well for roles where every situation is genuinely novel; there a Sidekick can still be useful, but the setup cost is higher than the return.

For your own reflection

  1. If you had a Sidekick that knew the situations you face most often, what is the first task you would actually delegate to it tomorrow morning?
  2. Which of your weekly rhythms gets the least preparation today simply because no one — including you — has the time? That is your strongest Sidekick use case.
  3. What would your Sidekick refuse to do on your behalf, and why? The list of refusals is at least as revealing as the list of capabilities.
  4. If a peer asked you what your Sidekick is for, could you answer in one sentence?

Gamification 2.0 with Virtual Reality: Why VR Workshops Work Without Points, Badges or Leaderboards (2022)
Wirtschaftsinformatik & Management · Springer Nature · originally published in German

Why VR meetings carry inherent playfulness without classic gamification mechanics — and what that does for cooperation, embodied risk discussion and the kind of cortical learning that real leadership work requires.

2022 · Wirtschaftsinformatik & Management · DOI: 10.1365/s35764-022-00415-5
Original German title: Gamification 2.0 mit virtueller Realität (VR)

Summary

Meetings and workshops in virtual reality come with built-in playfulness — avatars, three-dimensional landscapes, motion through hand controllers. The aesthetics are inherited from gaming, where most VR development still happens, and the inheritance is what makes business use compelling. A planning meeting, a strategy workshop or a team event in VR feels like a game even when the content is oncology data, sales numbers or sprint planning. The article argues that this matter-of-fact playfulness is the actual gamification — and that classic gamification mechanics like points, badges and leaderboards are not only unnecessary but often counterproductive in serious work.

Four effects extend beyond simple motivation. Avatars accelerate cooperation because the body schema transfers quickly, producing real felt closeness even across continents. Walkable conceptual landscapes — roadmaps you can step through, scenarios you can build at scale — make thought visible in ways slides cannot match. Risks rendered as physical obstacles change the dynamic of risk discussions; a critical path built across an alpine ravine produces commitment that a risk register does not. And projecting decision frameworks like the Rubicon model into a navigable landscape strengthens episodic memory of what was decided and why.

The closing argument distinguishes two kinds of learning. Classical gamification rewards subcortical conditioning — fast hits, short loops, instant gratification — which transfers poorly to complex work. VR gamification supports cortical learning — collaboration and reflection in dialogue — which is exactly what consensus decisions and metacognitive skills require.

Key takeaways

  • VR meetings are inherently gamified. Adding points, badges or leaderboards on top usually subtracts from the effect.
  • The body schema transfers to avatars within minutes, producing collaborative closeness that video calls cannot replicate.
  • Walking through 3D conceptual spaces makes abstract reasoning physically negotiable, which is why decisions feel more grounded.
  • Building risks as obstacles you can stand next to changes risk conversations from analytical to embodied — and embodied risks get acted on.
  • Classic gamification trains fast loops; that learning does not transfer to slow, complex, consensus-heavy work.
  • Cortical learning — dialogue, reflection, collaboration — is what VR enables, and what most leadership development actually needs.

Common questions

Why does avoiding classic gamification mechanics matter?

Because points and leaderboards reward speed and visibility, while serious work usually rewards patience, consensus and depth. A workshop scored like a game produces game-shaped behaviour — which is exactly what you do not want in a strategic decision.

Is this only relevant for distributed teams, or also for co-located ones?

Both. Distributed teams gain the felt closeness; co-located teams gain a level of embodied collaboration that conventional rooms cannot reproduce — three-dimensional roadmaps, walkable risk landscapes, navigable decision frameworks.

What kind of facilitation does this require?

More than a regular workshop. VR sessions need a trained facilitator who handles the technical setup, manages comfort during high-emotion sequences, and ensures the playful frame does not blur the seriousness of the content. The technology is more accessible than it was; the facilitation skill is what determines whether the session lands.

For your own reflection

  1. Which of your recurring leadership formats — board prep, strategy off-sites, post-mortems — would actually gain something from being held in VR, and which would gain nothing?
  2. If your next strategic decision were laid out as a walkable landscape with the alternatives as visible paths, which path would you choose differently than you would on a slide?
  3. Where in your organisation is "more gamification" being proposed as a fix, and is it the kind that rewards subcortical loops or the kind that rewards cortical depth?
  4. What is the strongest argument against using VR for serious work in your context — and is it about the technology, the comfort or the optics?