Artificial intelligence is considered objective. A rational, incorruptible decision-maker that corrects human error. This notion is understandable — and wrong. AI systems learn from human data. And human data is anything but neutral.
What this means for leaders who use or intend to use AI: they bear responsibility for a tool that can amplify their own blind spots — faster and at greater scale than any individual decision.
Bias in AI systems does not arise through intent. It arises through data. When an algorithm is trained on historical decisions — hiring, promotion, credit allocation — it learns the patterns embedded in those decisions. Including the prejudices contained within them.
The result: a system that projects past inequalities into the future. Not maliciously. But systematically.
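To make the mechanism concrete, here is a minimal sketch in Python (using scikit-learn). Everything in it is synthetic and hypothetical: two groups with identical qualifications, historical hiring labels that penalise one of them, and a model that never sees group membership directly, only a correlated proxy feature.

```python
# Minimal synthetic sketch (hypothetical data, not a real system):
# a model trained on biased historical hiring decisions reproduces
# the bias, even though it never sees group membership directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical qualification distributions.
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # true qualification, same for both

# Historical decisions: driven by skill, plus a penalty for group B.
# This is the human bias baked into the training labels.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# The model is given no "group" attribute, only a proxy feature that
# correlates with it (think postcode, club memberships, word choice).
proxy = group + rng.normal(0.0, 0.3, n)
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Score a fresh cohort, again with identical skill per group.
group2 = rng.integers(0, 2, n)
skill2 = rng.normal(0.0, 1.0, n)
proxy2 = group2 + rng.normal(0.0, 0.3, n)
pred = model.predict(np.column_stack([skill2, proxy2]))

for g, name in ((0, "A"), (1, "B")):
    print(f"group {name}: recommended-hire rate = {pred[group2 == g].mean():.1%}")
# Typical result: group A far above group B, despite equal qualification.
```

The point is not the numbers but the mechanism: nothing in this code discriminates deliberately. The bias travels entirely through the training labels and the proxy feature.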
AI bias is not an academic problem. It manifests in real systems with real consequences.
Amazon developed an AI-powered recruiting tool that evaluated CVs. The system had learned which profiles had been hired in the past — predominantly men. The result: CVs containing the word "women" (for instance in "women's football team") were systematically downgraded. Amazon discontinued the tool in 2018.
The US risk-assessment system COMPAS, used in criminal justice to estimate the likelihood of re-offending, was found by a 2016 ProPublica investigation to wrongly flag Black defendants as future re-offenders at roughly twice the rate of white defendants. The system itself contains no attribute for race, but it does contain proxy variables that correlate with it.
Facial recognition shows similar patterns: an MIT study (Buolamwini & Gebru, 2018) documented error rates of up to 34.7 per cent for dark-skinned women — compared with 0.8 per cent for light-skinned men. The same technology, dramatically different reliability.
Leaders who introduce or use AI-powered systems take on responsibility for their effects. This is not a technical question. It is a leadership question.
Three dynamics are particularly relevant here:
People tend to trust algorithmic recommendations more than their own judgement, even when a system is demonstrably prone to error. This effect, known as automation bias, is well documented and applies to experienced decision-makers as much as anyone else. AI recommendations appear objective. That is what makes them dangerous when they are not.
An individual leader makes flawed decisions in limited volume. An AI system repeats the same flawed decision thousands of times, at machine speed, with the same consequence each time. Bias at system level is not an individual problem. It is a structural one.
Many AI systems are a black box to those who use them. The basis for decisions is not transparent. That makes errors difficult to identify — and even harder to correct.
Leading AI systems responsibly does not require a computer science background. But it does require a clear stance and concrete practices.
Before deploying an AI system that influences decisions about people, certain basic questions are non-negotiable: What data was the system trained on, and whose past decisions does that data reflect? Has it been tested for differences in outcomes across groups? Can its recommendations be explained to the people they affect? Who is accountable when it gets something wrong? And can a human override it?
These are not excessively critical questions. They are minimum standards of professional due diligence.
AI can prepare decisions. In sensitive areas, it should not replace them. This applies above all where people are concerned: hiring, appraisal, development, separation. Leaders who delegate this responsibility to an algorithm delegate more than a task — they delegate accountability that cannot be delegated.
A system can be correctly configured and still produce discriminatory effects. Checking the process is therefore not enough. Leaders should systematically monitor the outcomes of their AI-supported decisions: who gets hired, who gets promoted, who receives which development opportunities. If certain groups consistently fare worse, that is a signal — regardless of whether the algorithm is functioning "correctly".
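One way to make such monitoring concrete is a periodic check of selection rates per group, for instance against the "four-fifths rule" used in US employment law as a rough screening threshold. A minimal sketch in Python, assuming a hypothetical decision log with group labels:

```python
# Minimal sketch of an outcome audit: compare selection rates per group
# and flag disparities below the four-fifths (80%) screening threshold.
# The decision log below is hypothetical; in practice it would come
# from your HR or case-management system.
from collections import defaultdict

decisions = [
    # (group label, was the candidate selected?)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    selected[group] += outcome          # True counts as 1

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "  <-- review" if ratio < 0.8 else ""
    print(f"group {group}: selection rate {rate:.0%} "
          f"(ratio to highest: {ratio:.2f}){flag}")
```

A flag here is a signal to investigate, not proof of discrimination; with small samples in particular, the ratio needs statistical care before conclusions are drawn.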
Disagreeing with an algorithm is not as straightforward as disagreeing with a colleague. Leaders should actively ensure that their team is free to question AI recommendations — without that being treated as inefficiency or a lack of trust. Anyone who creates a culture in which only the algorithm can be right switches off the most important corrective available: human judgement.
AI bias is not a purely technical problem. It is often a mirror of human assumptions — including one's own. Leaders who work with AI systems would do well to understand their own thought patterns: which profiles do I consider "typically successful"? Which deviations from that do I unconsciously discount? This kind of reflection is not self-criticism for its own sake. It is a precondition for sound decisions.
AI systems are powerful tools. They can accelerate, structure, and scale decision-making processes. But they do not assume responsibility. People do — and leaders do.
Anyone who deploys AI without understanding what is inside it is not delegating efficiency. They are delegating control. And anyone who delegates control without retaining accountability has not fulfilled their leadership role.
The good news: you do not need to be an AI expert to lead with AI responsibly. You need to ask the right questions, keep human judgement in the process — and have the courage to push back against a system when the results do not add up.
That is not a new competency. That is leadership.
Sources:
Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016). Machine Bias. ProPublica.
Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77–91.
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.