Ensuring Objectives — with AI, not by AI.
How senior leaders set the right objectives when AI does the groundwork — and where the judgment call stays with the human.
Objectives are the first craft of effective leadership. They connect strategy to Monday morning, customer value to metric, leader to team. What changes when AI sits at the table: the distribution of the work. What doesn't: who is accountable for the outcome.
Why Ensuring Objectives Stays a Leadership Job — Even with AI
Ensuring objectives isn't the same as agreeing to them. It isn't handing targets down either. It means standing behind the fact that objectives exist at all, that they are the right ones, and that everyone on your team knows what actually matters. This responsibility doesn't delegate — not to HR, not to an OKR coach, not to a tool, and not to AI.
The job has three sides that belong together:
- Craft — formulating, cascading, and communicating objectives.
- Judgment — deciding which few objectives actually count.
- Personal stake — feeling accountable when the quarter produces nothing.
Objective systems that focus only on the first side — forms, cascades, quarterly rituals — while quietly letting the process carry the second and third, miss the point. The process doesn't own accountability. Objectives without someone who ensures them stay inert.
Two principles aren't negotiable. First: few objectives, big ones. Three to seven per cycle, five as the benchmark. Twelve objectives aren't a set of objectives — they're a wish list. Second: quantify without dogma. What can be measured should be measured. What can't be cleanly measured — and that's often what matters most — still belongs on the list, with the courage to render a judgment at the end rather than read a value off a dashboard.
Peter Drucker wrote this into management thinking sixty years ago: "Each manager, from the big boss down to the operations supervisor, needs clearly spelled-out objectives. Otherwise confusion can be guaranteed." Management by Objectives was never just a steering tool for the top — it was an architecture for self-control. The core hasn't aged. What's new is the additional player at the table.
MbO, OKRs or Hoshin Kanri: Which Objective System Fits Which Company
Three systems show up in practice. None is categorically better. The question is which one fits the dynamics of your business and the maturity of your leadership culture.
| Approach | Strength | Risk |
|---|---|---|
| MbO (Management by Objectives, Drucker) | Planning reliability and a clear annual accountability anchor — still the default in mid-market industrials, regulated enterprises, and many multinationals | Coupling objectives to variable pay turns leadership into compensation negotiation |
| OKRs (Objectives & Key Results) | Quarterly rhythm, transparency across levels, deliberately decoupled from bonus | Requires strategic clarity and genuine delegation — without them, OKRs turn into activity management under a new label |
| Hoshin Kanri (Lean policy deployment) | Links breakthrough objectives to operational action through catchball dialogue and the X-Matrix | Without a lived PDCA discipline it stays a poster on the wall |
All three methods work when someone ensures the objectives. All three fail when no one does. Choosing the method is a leadership decision — never the first question, and never an end in itself.
AI in Objective-Setting: What It Speeds Up, What Stays with the Human
AI isn't a revolution for this leadership job. It's a bridge technology: it upgrades the craft that good leaders already have. What used to fail on capacity — continuous situational awareness, market scanning, progress visualization, translating strategy into role-level objectives — becomes feasible. What doesn't fail on capacity stays leadership.
What AI handles reliably
- Aggregating data and generating forecasts — pulling ERP, CRM, market, and competitive signals together in real time.
- Translating strategy into role-level objectives — closing the gap between "we want to lead in Segment X" and "what does that mean for me on Monday morning" with drafts the leader picks from.
- Flagging drift predictively — spotting deviations before they show up in the next reporting cycle.
- Surfacing conflicts — what a two-hour board meeting leaves unsaid for political reasons, a capable language model names in ten minutes.
- Prep work and admin — status updates, briefing documents, scheduling, templates.
What stays with the human
- Prioritizing — which few objectives really matter. That's knowledge of the business, not a model's call.
- Running the dialogue — understanding context, recognizing strengths, building commitment. Listening doesn't delegate.
- Judging — putting results in context. More than plan versus actual.
- Drawing consequences — reallocating resources, having hard conversations. Requires nerve, not compute.
- Connecting contribution to purpose — why this objective matters. Requires conviction and credibility.
The real risk isn't that AI gets the math wrong. It's that AI gets the math right — on metrics that happen to be in the system. Silent customer losses, key people quietly checking out, shifts in the market. What isn't in the objective system is invisible to the tool. Deciding what belongs in the objective system is a leadership question, not a configuration one.
Rolling out AI across your leadership team — and want to keep the human judgment at the center of objective-setting?
Let's talk about your leadership team →
Putting Objective Systems to Work: Keynote, Training, Executive Coaching
I work on this topic in three formats: as a keynote speaker at leadership summits and executive offsites, in leadership training and workshops for executive teams, and in executive coaching for senior leaders one-on-one — from mid-market to multinational, in English, German, or bilingual for global teams. The material has been tested with leaders across Europe and Asia-Pacific, most recently at the Association for Talent Development's Asia Pacific Conference (ATD APC) in Taipei, 2025.
"Ensuring objectives" is rarely the entry point. It's what surfaces when you look past the method. Most leaders I work with already have an objective system. Most are technically fine. The real work starts elsewhere — around four questions I'll put to every room, regardless of format:
- Which objective system fits your business and culture? MbO, OKRs, Hoshin, a hybrid — this isn't a method debate. It's a question about the dynamics of your business and the maturity of your leadership culture. It belongs on the table, not inherited from the previous era.
- Where does your current approach hold up, and where doesn't it? What works stays. What survives purely from habit — the annual ritual, the bonus negotiation, the template culture — gets examined. The diagnosis is usually more surprising than the therapy.
- Where does AI genuinely belong in this? Which parts of the objective cycle get lighter with AI, and where does AI amplify weaknesses you already have? That's the real question behind every "AI-first" claim.
- What changes in leadership practice? Which routines shift, which conversations become more important, which become obsolete? Without that translation, objective work stays busywork.
In every format, the method is the tool, not the topic. The topic is you, your team, and your business. What you walk out with isn't a new framework — it's a sharper read on which objectives actually carry your business, and where AI can take work off your plate without diluting the responsibility that stays with you.
Six Practice Moves for Sharper Objective-Setting with AI
Small routines that work on Monday. Tested with roughly 5,500 leaders across Europe and Asia-Pacific, including executive audiences at ATD APC Taipei 2025.
- The 5±2 rule. Three to seven truly important objectives per cycle — five as the benchmark. Everything else is either ongoing work or less important than it looks.
- The three-objective test. Would I still set this objective if I were only allowed three? If not: cut it or deprioritize. In a workshop, this surfaces in ten minutes which objectives are a wish list wearing a tie.
- Objective-setting conference, not one-on-ones. Once a quarter, work through the team's core objectives out loud, together. Commitments made in front of peers land harder than commitments made privately to a boss.
- The Friday check-in, ten minutes. Alone with your own objectives. Where do I stand? What moved? What has to change next week? Don't wait for quarter-end.
- The one-sentence test. Before an objective goes live: can everyone on the team explain in a single sentence what it means? If not, it's not finished.
- "What do you see that I don't?" A regular question to your people. It gives you information you wouldn't otherwise have, and it signals that their judgment counts. Drucker called this "communications up" and treated it as the mandatory counterpart to the top-down side of MbO.
Keynote, Training, Coaching — How We Can Work Together
Keynote
For leadership summits, executive offsites, and industry conferences that want an honest, evidence-backed conversation about AI, objectives, and leadership accountability. English or German. No hype show.
Leadership Training & Workshops
For executive teams and leadership cohorts that want to raise the bar on objective-setting with AI — built around the real topics of your business, not generic cases. Delivered in English, German, or bilingual.
Executive Coaching
For individual senior leaders sharpening how they ensure objectives — confidential, across the year, with a sparring partner who has no stake in the outcome.
Common Questions About Objective Systems, MbO, OKRs and AI
What does "ensuring objectives" mean in leadership?
"Ensuring objectives" describes the leadership responsibility for whether objectives exist at all, whether they are the right ones, and whether they actually guide the team's work through the quarter. It's broader than setting objectives on paper: it includes adjusting when reality moves, standing behind the list in front of the team, and keeping the accountability with the human even when AI drafts, cascades, and tracks the details. The phrase draws on Peter Drucker's Management by Objectives tradition — the leader doesn't just assign targets, they ensure the whole system works. With AI at the table, the ensuring part — judgment about what matters, the conversation with the team, willingness to own the outcome — stays firmly a human leadership job.
Can you keynote internationally on AI, leadership, and objectives?
Yes — in English or German, across Europe, Asia-Pacific and the Americas. My most recent international stage was the ATD Asia Pacific Conference in Taipei, 2025. Keynotes are built for your audience: tenured leaders, emerging leaders, executive teams, or mixed rooms. What I won't do is the hype show — the value is in the evidence and in an honest view of what AI changes for leaders and what it doesn't.
How do you adapt MbO or OKRs for multinational teams across cultures?
The mechanics don't change — the adoption does. Multinational teams need to decide upfront what's standardized (the top-level objectives, the cycle, the transparency rules) and what's localized (how objectives get translated at regional or country level). I've seen global OKR rollouts succeed because they kept the cycle rigid and the wording flexible — and fail because they forced identical key results across cultures with very different feedback norms. The leadership job is to set the rails, then coach local leaders in steering on them.
What's the difference between Drucker's MbO and OKRs in practice?
MbO (Peter Drucker's original concept) is usually annual, cascaded, and tied to variable pay — a classic instrument for steering the business that works even in disciplined top-down cultures. OKRs (Objectives and Key Results, popularized at Intel and then Google) are quarterly, transparent across levels, and deliberately decoupled from bonus — a focus instrument for organizations with genuine delegation. The difference isn't just cadence. It's the assumption about leadership culture. MbO works with discipline; OKRs require trust.
Should bonuses be tied to OKRs or AI-generated targets?
Ideally not at all. The moment objectives drive variable pay, the objective-setting conversation turns into a compensation negotiation and the real conversation — where the business is going — stops happening. AI-derived targets make it worse, not better: they look objective, but they optimize on whatever was in the system, not on what matters. If full decoupling isn't feasible, at least separate the objective dialogue from the compensation dialogue by time and venue. One meeting for the business, another for the pay.
How many objectives should a team have per quarter?
Three to seven, with five as the benchmark. More than seven and you effectively have no objectives — you have a list. Fewer than three often means you're hiding trade-offs. The number sounds simple and is surprisingly hard to hold. That's the point: the limit forces prioritization and makes trade-offs visible instead of burying them in the long tail.
Can AI write the OKRs for my team?
It can draft them, and that's useful. What it can't do is decide which objectives are the right ones. AI doesn't know what matters to your business — it knows what's frequent in its training data and what's typical in companies like yours. Use AI to propose options, pressure-test wording, translate strategy into role-level drafts, and surface conflicts. Then make the call yourself. Objectives written by AI alone optimize for yesterday's business.
How do you write good OKRs with AI?
Use AI as a drafting partner, not as the author. Three practical moves work in almost every setup. First, feed the model your strategy, business context, and current priorities — then ask for five to seven OKR options instead of one. The variety forces you to choose. Second, pressure-test the key results: ask what could go wrong, where Goodhart's Law would bite, and what the objective might miss. Third, strip the language back to plain English — if a team member can't explain an OKR in a single sentence without looking at the document, the wording isn't finished. What AI won't do well is decide which few OKRs are the right ones for this quarter. That judgment stays with the human who'll be accountable when the quarter closes.
When should leaders override AI-suggested objectives?
Three situations, most often. First, when the AI has derived objectives from data rather than from strategy — you get extrapolation, not direction. Override and restart from intent. Second, when the suggestions add many precise KPIs that the team would start chasing — Goodhart's Law kicks in: the moment a metric becomes a target, it stops being a good metric. Override and thin out. Third, when the system misses what isn't in it — customer relationships, key-person retention, market shifts. AI reinforces what's already in your objective system; override when silent risks need a voice. Short version: let AI accelerate the drafting, and step in whenever strategy, priority, or judgment would otherwise be lost to the model's defaults.
Does AI have cultural bias in performance management?
Yes. Outputs reflect the training data and the norms of the teams who configured the tooling, and those defaults tend to be Western, individualist, and quantitative-first. In cultures with strong collective ownership or high-context communication, AI-generated objectives and performance feedback can land sharper or more confrontational than intended. The answer isn't to avoid AI — it's to stay in the loop. Review the wording for tone and context, check whether the key results fit how feedback actually works locally, and adjust before anything goes into the room. For multinational teams this is a standard review step, not an edge case.
What does the EU AI Act mean for objective-setting and performance management?
AI systems used in performance evaluation and personnel decisions are classified as high-risk under the EU AI Act. From August 2026, the compliance requirements — risk assessment, human oversight, traceability — apply in full. Practically: if you use AI to suggest objectives, draft reviews, or flag performance issues, the human still owns the decision, the reasoning needs to be documented, and the affected employee has transparency rights. None of this is a reason not to use AI. It's a reason to design the workflow so the human is clearly in command.
Is Hoshin Kanri still relevant for software and service companies?
Yes, selectively. The catchball dialogue and X-Matrix logic translate well to any organization that takes strategy deployment seriously. Where Hoshin tends to struggle outside manufacturing is the cadence — PDCA feels alien in fast-moving software environments. In practice, many software and service companies end up with a hybrid: Hoshin's discipline on breakthrough objectives, OKRs for quarterly team focus. The label doesn't matter. What matters is whether the method matches your operational rhythm and your leadership culture.
How do I stop our objective system from becoming compliance theater?
Cut the number of objectives by half and throw out half the templates. That's almost always the first step. Most objective systems are overloaded and under-prioritized — forms get filled because HR asks, quarterly reviews happen because the calendar says so, SMART criteria get checked because the audit wants it. Compliance theater isn't harmless: it burns the energy that real objective work needs. A short note with three sentences is often more useful than a form with twelve fields.
Do you advise clients beyond keynotes?
Yes — in two main formats. Leadership training and workshops for entire teams or executive cohorts, and executive coaching for individual senior leaders. Keynotes are often the entry point: they create a shared language and evidence base for an organization. Training and coaching are where the lasting change happens. Clients who move quickly tend to book the keynote and the follow-on program together.
