Blog — Leadership, AI and Accountability | Daniel Dunkhase, Berlin

AI and Bias: Why AI Reflects Our Prejudices

Written by Daniel Dunkhase | April 9, 2026

Artificial intelligence is considered objective. A rational, incorruptible decision-maker that corrects human error. This notion is understandable — and wrong. AI systems learn from human data. And human data is anything but neutral.

What this means for leaders who use or intend to use AI: they bear responsibility for a tool that can amplify their own blind spots — faster and at greater scale than any individual decision.

What Is AI Bias — and Where Does It Come From?

Bias in AI systems does not arise through intent. It arises through data. When an algorithm is trained on historical decisions — hiring, promotion, credit allocation — it learns the patterns embedded in those decisions. Including the prejudices contained within them.

The result: a system that projects past inequalities into the future. Not maliciously. But systematically.

Three Sources of AI Bias

  • Training data: When historical data under- or over-represents certain groups, the model learns that imbalance as the norm.
  • Flawed labelling: People annotate data — and bring their own assumptions with them. What counts as "good performance" or a "suitable candidate" is rarely neutral.
  • Proxy variables: Seemingly neutral characteristics such as postcode, educational background, or language patterns can act as proxies for protected attributes — without anyone intending them to.
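The proxy effect can be made concrete with a small sketch. Everything below is invented for illustration (the population, the postcode split, the screening rule): the rule never reads the protected attribute, yet its outcomes differ sharply by group because postcode stands in for it.

```python
import random

random.seed(0)

# Invented synthetic population: a protected attribute "group" and a
# seemingly neutral feature "postcode" that correlates with it.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Residential pattern: group A lives in postcode 1 about 90% of the
    # time, group B only about 10% of the time.
    in_postcode_1 = random.random() < (0.9 if group == "A" else 0.1)
    population.append({"group": group, "postcode": 1 if in_postcode_1 else 2})

def screen(person):
    # A "neutral" screening rule: shortlist postcode 1 only.
    # It never sees the protected attribute.
    return person["postcode"] == 1

def shortlist_rate(group):
    members = [p for p in population if p["group"] == group]
    return sum(screen(p) for p in members) / len(members)

print(f"group A shortlisted: {shortlist_rate('A'):.1%}")
print(f"group B shortlisted: {shortlist_rate('B'):.1%}")
```

Roughly nine in ten members of group A pass the screen, versus one in ten of group B. No one coded discrimination into the rule; the correlation did the work.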

Known Cases — Not Theory, but Practice

AI bias is not an academic problem. It manifests in real systems with real consequences.

Amazon developed an AI-powered recruiting tool that evaluated CVs. The system had learned which profiles had been hired in the past — predominantly men. The result: CVs containing the word "women's" (for instance in "women's chess club captain") were systematically downgraded. Amazon discontinued the tool in 2018.

The US risk assessment system COMPAS, used in criminal justice, was found by a ProPublica investigation to falsely label Black defendants who did not go on to re-offend as high risk at nearly twice the rate of white defendants. The system itself contains no attribute for race — but it does contain proxy variables that correlate with it.

Facial recognition shows similar patterns: an MIT study (Buolamwini & Gebru, 2018) documented error rates of up to 34.7 per cent for darker-skinned women — compared with 0.8 per cent for lighter-skinned men. The same technology, dramatically different reliability.

Why This Is a Leadership Question

Leaders who introduce or use AI-powered systems take on responsibility for their effects. This is not a technical question. It is a leadership question.

Three dynamics are particularly relevant here:

1. Automation Bias

People tend to trust algorithmic recommendations more than their own judgement — even when a system is demonstrably prone to error. This effect is well documented and applies to experienced decision-makers as much as anyone else. AI recommendations appear objective. That is what makes them dangerous when they are not.

2. Scaling of Errors

An individual leader makes flawed decisions in limited volume. An AI system makes the same flawed decision thousands of times — at the same speed, with the same consequence. Bias at system level is not an individual problem. It is a structural one.

3. Lack of Visibility

Many AI systems are a black box to those who use them. The basis for decisions is not transparent. That makes errors difficult to identify — and even harder to correct.

What Effective Leadership Means in Practice

Leading AI systems responsibly does not require a computer science background. But it does require a clear stance and concrete practices.

Ask Questions Before Trusting

Before deploying an AI system that influences decisions about people, certain basic questions are non-negotiable:

  • On what data was this system trained — and who was represented in it?
  • What error rates exist — and for whom?
  • Who has tested the system for bias, and how?
  • What human review is built into the process?

These are not excessively critical questions. They are minimum standards of professional due diligence.
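The question about error rates, and for whom, is easy to operationalise. The sketch below uses invented audit records; in practice they would come from the system's logged decisions joined with observed outcomes. The point it illustrates: one aggregate figure can hide a large per-group gap in false positives.

```python
# Invented audit records: (group, predicted_positive, actually_positive).
records = [
    ("A", True,  True), ("A", True,  False), ("A", False, False), ("A", False, False),
    ("B", True,  True), ("B", True,  False), ("B", True,  False), ("B", False, False),
]

def false_positive_rate(group):
    # Share of actual negatives in this group that the system flagged anyway.
    negatives = [pred for g, pred, actual in records if g == group and not actual]
    return sum(negatives) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: false positive rate {false_positive_rate(g):.0%}")
```

Here both groups contain the same number of actual negatives, yet group B is wrongly flagged twice as often as group A — exactly the kind of disparity the ProPublica analysis of COMPAS surfaced.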

Preserve Human Judgement as a Corrective

AI can prepare decisions. In sensitive areas, it should not replace them. This applies above all where people are concerned: hiring, appraisal, development, separation. Leaders who delegate this responsibility to an algorithm delegate more than a task — they delegate accountability that cannot be delegated.

Monitor Outcomes, Not Just Processes

A system can be correctly configured and still produce discriminatory effects. Checking the process is therefore not enough. Leaders should systematically monitor the outcomes of their AI-supported decisions: who gets hired, who gets promoted, who receives which development opportunities. If certain groups consistently fare worse, that is a signal — regardless of whether the algorithm is functioning "correctly".
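Outcome monitoring of this kind can start very simply. The figures below are invented; the computation shows per-group selection rates and their impact ratio. The 0.8 threshold follows the US EEOC's "four-fifths rule", a widely used heuristic for flagging disparate impact, not a verdict on the algorithm.

```python
# Invented outcome data: (applicants, hired) per group over a review period.
outcomes = {
    "group_A": (200, 40),
    "group_B": (180, 18),
}

def selection_rate(group):
    applicants, hired = outcomes[group]
    return hired / applicants

rates = {g: selection_rate(g) for g in outcomes}

# Impact ratio: lowest selection rate divided by highest. The EEOC's
# four-fifths rule treats a ratio below 0.8 as a signal worth
# investigating, not as proof of discrimination.
impact_ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"{g}: selection rate {r:.0%}")
print(f"impact ratio: {impact_ratio:.2f}  (flag if below 0.80)")
```

A review like this takes minutes once the figures are collected, and it answers the leadership question directly: who actually gets through, group by group.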

Create Psychological Safety for Dissent

Disagreeing with an algorithm is not as straightforward as disagreeing with a colleague. Leaders should actively ensure that their team is free to question AI recommendations — without that being treated as inefficiency or a lack of trust. Anyone who creates a culture in which only the algorithm can be right switches off the most important corrective available: human judgement.

Know Your Own Assumptions

AI bias is not a purely technical problem. It is often a mirror of human assumptions — including one's own. Leaders who work with AI systems would do well to understand their own thought patterns: which profiles do I consider "typically successful"? Which deviations from that do I unconsciously discount? This kind of reflection is not self-criticism for its own sake. It is a precondition for sound decisions.

Conclusion: Responsibility Stays with People

AI systems are powerful tools. They can accelerate, structure, and scale decision-making processes. But they do not assume responsibility. People do — and leaders do.

Anyone who deploys AI without understanding what is inside it is not delegating efficiency. They are delegating control. And anyone who delegates control without retaining accountability has not fulfilled their leadership role.

The good news: you do not need to be an AI expert to lead with AI responsibly. You need to ask the right questions, keep human judgement in the process — and have the courage to push back against a system when the results do not add up.

That is not a new competency. That is leadership.

Sources:

  • Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research.
  • Angwin, J. et al. (2016). Machine Bias. ProPublica.
  • Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.