
AI recommends. You decide.

How leaders make better decisions when AI handles situational analysis, scenarios, and counter-arguments — while judgment and accountability stay human.

Deciding is the critical leadership task. Drucker captured it: "The task which makes or breaks the manager." An organization where decisions are avoided, deferred, or pushed up the ladder doesn't have a methodology problem — it has a leadership problem. What changes when AI joins the table is the distribution of work between situational analysis and the call. What doesn't change is accountability for the consequences.

Why Deciding Remains a Leadership Task — Even with AI

Whoever decides is leading. Whoever doesn't is not — regardless of rank or title. The role is defined by lived practice, not by the sign on the door. And it is not delegable — not even to AI. The reason is structural, not technological: whoever decides owns the consequences. Algorithms can compute probabilities, run scenarios, generate counter-arguments. They cannot be held to account.

Effective leaders make few decisions, but make them carefully. Drucker put it this way: effective executives don't make many decisions — they concentrate on the ones that matter. In the AI era, this discipline matters more, not less. AI makes it easy to make more decisions per hour, but quality doesn't scale with quantity.

A decision has three parts that belong together:

  • Assess the situation — Where do we actually stand? What options, consequences, assumptions? AI helps substantially here — structuring the problem space, surfacing implicit assumptions, generating counter-arguments.
  • Make the call — On this basis, I do that. The call is personal. It demands courage and willingness to take responsibility, because at the moment of decision, no one knows yet whether it will prove right.
  • Plan execution — A decision is not the resolution; it is the result. Accountabilities, objectives, resources, and deadlines must be clarified in the resolution itself. Whoever defers this to a later stage hasn't made a decision — they've stated an intention. AI helps structure the action program; negotiating responsibility stays human.

The most critical phase is not choosing between options — it is defining the problem. Drucker again: "It is much easier to fix a wrong solution to a problem if the problem has been defined correctly than it is to fix a 'correct' solution to a problem that has been defined incorrectly." Fifteen minutes spent on problem definition save hours of solution debate. What's added in the AI era is a new player at the table — and new risks to be guarded against.

Cynefin, Drucker, Bezos: Sense-Making Before Method Choice

Before any method comes the context question. Cynefin (Dave Snowden) is not a decision framework — it is a sense-making model that determines which decision mode fits the situation in the first place. Skip this, and you apply the same methodology to every situation, with predictable results in the three domains where it doesn't fit.

  • Clear — obvious cause and effect. Apply standard rule (Sense-Categorize-Respond).
  • Complicated — analyzable complexity. Drucker's seven elements work; thorough analysis pays off (Sense-Analyze-Respond).
  • Complex — effects only visible in retrospect. Run small parallel safe-to-fail probes instead of analysis (Probe-Sense-Respond).
  • Chaotic — no recognizable order. Stabilize first, then understand (Act-Sense-Respond).

Only when the domain is clear does method selection follow. Within the decidable domains — Clear and Complicated — two decision logics meet, addressing different kinds of situations:

Drucker — Seven elements of effective decisions (The Effective Executive)
  • Where it works: Complicated situations with clear cause-effect — capital expenditures, site selection, vendor decisions.
  • Risk when misapplied: Fails in Complex situations because the assumed causality isn't there.

Bezos: Two-Way Doors — Reversibility check, 70-percent rule
  • Where it works: Reversible decisions — pilots, operational allocation. Move fast with incomplete information.
  • Risk when misapplied: Applied to irreversible decisions — investments, hiring, compensation — produces expensive mistakes.

The slow-and-thorough discipline and Bezos' bias for action are not opposites. They address different decision types. The reversibility check is the up-front question: can we walk back, or not? Hiring decisions and decisions about compensation systems should be treated as irreversible without exception — even when they are formally revisable. The cost of correction exceeds the cost of getting it right the first time.

AI in Decision Making: What It Accelerates, What Stays Human

AI does not change the nature of the task — only the tools. Drucker's characterization of deciding as risk taking and a challenge to judgment stands. What changes is that preparatory work gets radically faster. What doesn't change is judgment and accountability.

What AI Reliably Takes Over

  • Structure the problem space — separate symptoms from causes, formulate alternative problem definitions, propose success criteria.
  • Reference class forecasting — surface historical comparables to calibrate assumptions. What used to fail on data availability is now feasible.
  • Surface assumptions — sort what a proposal silently presumes into certain / uncertain / unverified.
  • Devil's advocate — systematically generate counter-arguments that team consensus would have missed. AI has no reputational concerns.
  • Pre-mortem material — ten plausible reasons for hypothetical failure, in minutes instead of hours.
  • Decision journal — audit trail for the EU AI Act, works council requirements, and later after-action review.
  • Action program structure — who carries which measure, with which authority, which resources, by when. AI drafts; the team sharpens.

What Stays Human

  • Define the problem — what question are we actually deciding here? AI can suggest alternative framings, but setting the frame is the leader's job.
  • Test the assumptions against the situation — how much weight any given assumption carries is a judgment call; it connects the material with context that isn't represented in the system.
  • Carry dissent — finding and sustaining intelligent disagreement in your organization requires relationship, trust, and political negotiation. Three dimensions AI cannot replace.
  • Make the call — On this basis, I do that. This statement takes responsibility before consequences are certain. It demands courage, because at the moment of the call, not everything is known.
  • Bear the consequences — justifying an AI-assisted decision by pointing at the system is not an explanation. Accountability stays with the human who decided.

The real risk isn't that the AI gets its analysis wrong. It is the biases AI creates in the user: automation bias, anchoring on AI outputs, sycophancy — language models tend to cave under pushback rather than become more reliable. These distortions don't disappear by improving the system; they are contained only by routines that protect how it is used. On top of this comes the EU AI Act: AI in HR decisions becomes a high-risk application from August 2026, with strict requirements for human oversight and traceability.

Want to use AI in your decisions the right way — as a sparring partner, not as a substitute for your judgment?
Let's talk about your leadership team →

Better Decisions: Executive Coaching, Leadership Training, Workshop

I work on this in three formats: executive coaching, leadership training, and leadership workshops for established teams — methodologically grounded in Drucker's seven elements, complemented by Cynefin and decision-hygiene research. "Deciding" is rarely booked as a training topic. It surfaces when you look behind strategy debates, hiring decisions, or investment proposals. Five questions stand at the center of every format:

  • Which situation are you actually deciding in? Cynefin domain check before method selection. Whoever applies Drucker's seven elements to every situation also applies them where they don't fit — and pays the price in costly pseudo-analyses or delayed crises.
  • Which decisions are reversible, which are not? Fast decisions for two-way doors, deliberate ones for one-way doors. Hiring decisions and compensation systems belong without exception in the slow lane — even under AI pressure.
  • Where can AI be integrated meaningfully? Which subtasks — problem structuring, pre-mortem, bias audit, decision journal — can actually be offloaded to AI? And where does AI amplify existing weaknesses, for example by accelerating decisions that should be slower?
  • How do you carry dissent? Sustainable consensus emerges from openly aired dissent, not from conflict avoidance. Which structures — silent rounds, designated critic, disagree-and-commit — protect your decisions against hierarchy and harmony pressure?
  • How do you plan execution? A decision is not the resolution; it is the result. Accountabilities, objectives, resources, and deadlines must be clarified in the resolution itself. AI helps structure the action program; the negotiation of responsibility stays human.

What you take away is not a new method — but a sharper view of how your organization decides, and where AI can offload work without diluting your leadership accountability.

7 Ways to Make Better Decisions with AI

Small routines that work immediately. Tested with around 5,500 leaders.

1. Cynefin Domain Check (3 minutes)

Before any major decision conversation, the domain question: is the cause-effect structure obvious, analyzable, only retrospectively visible, or not visible at all? Saves method misapplications — the most common error is treating every situation as Complicated.

2. Reversibility Check in Three Tiers (1 minute)

Classify in writing as reversible / partially reversible / irreversible before the resolution. Reversible: Bezos logic, 70 percent information is enough. Irreversible: the slow-and-thorough lane, at least three working days, external perspectives, pre-mortem.

3. Problem Definition Pause with Success Question (15 minutes)

When discussion jumps straight into solutions: stop. What are we actually deciding here? Are we talking about symptoms or causes? Test the definition against all known facts. Success question: How will we know in six months that the decision was a good one?

4. Pre-Mortem with Early Indicators (45 minutes)

Place the team in the future: It's twelve months from now. The decision has failed. Why? Each person writes ten reasons. For each likely reason, define an early indicator. Put the indicators on the calendar — before the resolution is passed.

5. Silent Round with Pen and Paper (4 + 10 minutes)

At the start of any decision meeting: question on the wall in one sentence, four minutes of silence, each person writes their position with two main reasons. Works against anchoring by the highest-ranking voice and against groupthink.

6. Decision Journal in Eight Fields (10 minutes per Type-1 decision)

What was decided, which alternatives were rejected, which assumptions underlie the call, confidence level in percent, emotional state, expected outcome, AI recommendation with deviation and reasoning. Protection against hindsight bias, foundation for organizational learning, EU AI Act compliant.
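
If you keep the journal digitally, each decision can be captured as one structured record. The sketch below (in Python) is one illustrative way to hold the fields listed above. The class name, the field names, the split of the AI-related field into recommendation and deviation, and the added date stamp are assumptions for illustration, not a prescribed template; a spreadsheet row with the same fields serves the same purpose.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionJournalEntry:
    """One record per irreversible (Type-1) decision, written before execution starts."""
    decided: str                        # what was decided, in one or two sentences
    rejected_alternatives: list[str]    # which alternatives were rejected
    assumptions: list[str]              # assumptions underlying the call
    confidence_pct: int                 # confidence level in percent at the moment of the call
    emotional_state: str                # e.g. "under time pressure", "calm"
    expected_outcome: str               # what result is expected, and by when
    ai_recommendation: str              # what the AI recommended
    deviation_and_reasoning: str        # where and why the final call deviated from the AI
    decision_date: date = field(default_factory=date.today)  # date stamp for the audit trail

# Hypothetical example entry
entry = DecisionJournalEntry(
    decided="Fill the open team-lead role internally with candidate A",
    rejected_alternatives=["External hire", "Leave the role vacant for another quarter"],
    assumptions=["Candidate A accepts within two weeks", "Backfill budget is approved"],
    confidence_pct=70,
    emotional_state="under time pressure",
    expected_outcome="Team fully staffed and onboarding finished within one quarter",
    ai_recommendation="Ranked candidate B first on the shortlist",
    deviation_and_reasoning="Chose A: stronger internal trust, lower ramp-up risk",
)
```

Whatever the format, the value comes from filling it in before the resolution and revisiting it in the after-action review.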

7. Last Mile Human (15 minutes)

For decisions with personnel impact — hiring, promotion, termination, sanction: read the AI recommendation, then close the AI system. Write your decision in your own words, no copy-paste. Question: Would I make the same call without the AI input?

Leadership Training, Coaching, Keynote — How We Work Together

Leadership Training & Workshops

For leaders and teams who want to raise their decision quality with AI to a new level — working on real decisions in your business and on decision templates and company-wide standards, not on generic cases.

See programs →

Executive Coaching

For individual leaders facing critical decisions — confidential, with a sparring partner who has no stake in the outcome and protects you from bias and shortcuts.

Schedule a conversation →

Keynote

For leadership conferences that want to open the conversation on decisions, AI, and accountability in your organization. No hype show — an honest look at what leadership in the AI era actually demands.

Request a keynote →

Common Questions About Decision Making, Bias, and AI in Management

When should I decide fast, when deliberately?

Run the reversibility check before every decision: can we walk back, or not? Reversible decisions — pilots, operational allocation, provisional resource allocation — follow the Bezos logic: move fast with 70 percent information, review in retrospect. Irreversible decisions — investments, strategic commitments, hiring, compensation — go the slow-and-thorough route: external perspectives, formal pre-mortem, deliberate pause before the resolution. Hiring decisions should be treated as irreversible without exception, even when they are formally revisable.

How do I prevent my team from pushing every decision back up to me?

Usually not by writing a better delegation matrix, but by consistently bouncing back what you don't need to decide. Whoever responds to every escalated question trains the organization to violate subsidiarity — the classic monkey on the back jumps straight back onto yours. A simple question helps: What would you do if I were on vacation this week? What sustains the change is what gets anchored as a standard in the company — decision templates with a clear Decide role, escalation criteria, thresholds.

What helps against endless decision discussions without resolution?

In most cases, a clear Decide role is missing. Consensus procedures without a named decision-maker diffuse responsibility, and decisions get deferred. Practical lever: apply the RAPID model — who recommends, who agrees, who performs, who provides input, who decides. The Decide role belongs to one person, not a committee. Where this is clarified, decisions start landing again.

Can AI make the decision for us?

The short answer: no. AI can structure the problem space, surface options, make assumptions explicit, generate counter-arguments. But it cannot be held to account. A decision made on the basis of an AI recommendation remains a decision by the leader — legally, organizationally, and humanly. "The algorithm said so" is not an explanation, it is an excuse. And when you disagree with the AI: read it fully, write down where you doubt it, document the deviation. Language models often get weaker under pushback (sycophancy), not more reliable.

How do I protect myself from bias in important decisions?

Awareness alone doesn't work. Workshops on confirmation bias, anchoring, or sunk cost have little effect unless they are embedded in structure. What works is decision hygiene (a term from Daniel Kahneman and his co-authors): pre-mortem, devil's advocate as an institutionalized role, outside view instead of inside view, mediating assessments protocol, decision journal with confidence level. Procedures absorb bias structurally — good intentions don't.

What is a pre-mortem and when is it worth doing?

A pre-mortem is the anticipation of failure: place the team mentally in the future — It's twelve months from now. The decision has failed. Why? Each person writes ten concrete reasons, the team clusters them, marks the three most likely, and defines an early indicator per reason. Research shows that imagining an event that has already occurred substantially improves the ability to identify plausible causes, compared with mere risk listing. Use it for every irreversible decision as a matter of discipline.

How does the EU AI Act change our decision practice?

Concretely in three areas. First: AI in HR decisions — selection, promotion, evaluation, termination, compensation — becomes a high-risk application from August 2026, requiring risk assessment, human oversight, traceability, and discrimination testing. Second: HR decisions must remain explainable — pointing at an algorithm doesn't satisfy anti-discrimination law in any major jurisdiction. Third: as soon as AI can monitor employee behavior or performance (tickets, customer communication, call times), works council co-determination applies. Rolling out without involving employee representatives is not sustainable.

Read More: Accountability with AI, Objective Setting, AI Fluency
