Overcoming bias in machine learning

Is your bot biased?
McKinsey Classics

Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities

McKinsey Classics | October 2023

An image linking to the web page “Controlling machine-learning algorithms and their biases” on McKinsey.com.

Overcoming bias in machine learning

Poor workers always blame their tools, the saying goes. The lesson, of course, is a simple one: the more knowledgeable the worker, the more effective the tool. That’s as true for, say, a factory lathe as it is for the complex algorithms that underpin the machine learning and AI technologies that companies increasingly use. Machine learning holds immense promise for businesses that can harness it effectively, but much like today’s advanced-analytics and generative AI technologies, it’s only as good as the data it’s working from and, perhaps even more important, the people inputting that data.

One of the risks of machine learning is that the algorithms that support it can easily inherit the behavioral biases of their human creators, derailing projects and creating costly errors in the process. Organizations can take measures to protect against algorithmic bias, including understanding the shortcomings of the algorithms they’re working with, shaping data samples in ways that minimize bias, and knowing when not to use the technology at all because a more traditional decision-making process is better suited to the task.
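(Not from the article itself, but as one illustration of the second measure: below is a minimal sketch of reweighting training samples so that each group defined by a sensitive attribute carries equal influence when a model is fit. The data set, column names, and weighting scheme are illustrative assumptions only.)

import pandas as pd
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(df, sensitive_col):
    # Inverse-frequency weights: smaller groups receive proportionally larger
    # weights, so each group contributes equally to the overall fit.
    counts = df[sensitive_col].value_counts()
    return df[sensitive_col].map(len(df) / (len(counts) * counts))

# Hypothetical loan-application data (illustrative only).
applications = pd.DataFrame({
    "income":   [40, 85, 60, 30, 95, 50, 70, 45],
    "group":    ["a", "b", "b", "a", "b", "b", "b", "a"],
    "approved": [0, 1, 1, 0, 1, 1, 1, 0],
})

weights = group_balanced_weights(applications, "group")

# Pass the weights to the estimator so the under-represented group "a"
# is not drowned out by the majority group "b" during training.
model = LogisticRegression()
model.fit(applications[["income"]], applications["approved"], sample_weight=weights)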

Companies are only just beginning to experiment with the powerful new generative AI and machine learning technologies available. To help ensure that both worker and tool are functioning optimally, business leaders would do well to heed the lessons from this 2017 classic from McKinsey partner Vishnu Kamalnath, “Controlling machine-learning algorithms and their biases.”

— Drew Holzfeind, editor, Chicago

Address the limitations of machine learning

Related Reading

The state of AI in 2023: Generative AI’s breakout year 

Why businesses need explainable AI—and how to deliver it 

Operationalizing machine learning in processes 

Share these insights

Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.

This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.

You received this email because you subscribed to our McKinsey Classics newsletter.

Manage subscriptions | Unsubscribe

Copyright © 2023 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007


by "McKinsey Classics" <publishing@email.mckinsey.com> - 11:09 - 21 Oct 2023