Can AI debias human decision making?

Debias decision making with AI

The optimism bias is one of the most consistent, prevalent, and robust biases documented in psychology and behavioral economics. It leads employees, managers, and investors to adopt strategies that result in overspending, considerable losses, project overruns, and recessions; it has even been cited as one of the main causes of the 2008 financial crisis. In an attempt to de-bias decision making, external advisors are frequently brought in to support project leaders and employees. While no solution has so far tackled the problem, we take a novel approach by showing how AI can de-bias decision making.

Across nine studies with a total of over 10,000 participants, including 400 US managers, we show that users assisted by AI agents engage in more rational decision making than users assisted by human agents. In other words, valence-dependent belief updating can be mitigated through the use of AI assistance.

Further, in collaboration with a Swiss finance firm, we showed how humanizing AI agents affects decision making, customer satisfaction, and loyalty.

So what does that mean in practice?

Managers suffer from the overoptimistic investments and strategies of their employees. To mitigate the problem, they often support their project leaders and employees with human advisors. Similarly, firms face overoptimistic expectations from their customers and must provide them with realistic advice without losing them. We show that supplying employees, project leaders, and customers with knowledge through AI advisors is more effective at de-biasing decision making than using traditional human advisors.

Many financial institutions, for instance, need to deny loans to loyal, long-term customers or offer interest rates above their expectations. Given humans’ inherent tendency to be overly optimistic about their future, the need to deliver such negative estimates is the norm rather than the exception. Many organizations are thus faced with a lose-lose situation: either countenance customers’ overly optimistic beliefs and risk losing them when those beliefs prove untenable, or supply (realistically) negative information and risk losing them that way as well. Thus far, delivering such negative information without hurting customer loyalty has been a challenge. Our results suggest that AI agents, or more broadly sources with low social agency, may be effective in communicating such information without any adverse effect on customers’ loyalty.

AI is not always the best way to go

However, it’s not that simple. We further found that assisting decision making with AI agents carries potential pitfalls and is not the best choice in every situation. Maximizing the benefits requires human factors specialists, who are increasingly needed to integrate the new technology properly.
