Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, and Homayoun Hatami, managing partner, global client capabilities
Our experts serve up a periodic look at the technology concepts leaders need to understand to help their organizations grow and thrive in the digital age.
What it is. Explainable AI (XAI) refers to the ability to understand how an AI-powered application arrived at a particular output. The complexity of the algorithms behind AI applications makes this difficult, but doing so is foundational to managing risk and ensuring that an AI application performs optimally.
Why we need it. While the neural networks behind many AI apps are loosely modeled on the human brain, they often don't process data the way humans do. And if we can't understand how an AI arrived at a particular conclusion, how can skeptical users trust the results enough to act on them? XAI has business implications as well. It can help data scientists figure out why an app is producing biased or inaccurate recommendations. And if regulators want proof that bias isn't occurring (a mounting concern as more governments consider AI regulation), XAI makes that proof easier to provide.
How to make AI interpretable. Data scientists are applying a number of “explainability” features—including local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP)—to illuminate which data most influence an AI’s decisions. Still, only 25 percent of organizations enable XAI. Is your AI interpretable? Simple, easily interpretable algorithms often suffice, but when complexity is required, applying explainability techniques should become standard practice.
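To make the SHAP idea concrete, here is a minimal sketch of how Shapley additive explanations attribute a single prediction to individual input features. The three-feature scoring model, its coefficients, and the input values are all hypothetical, chosen only for illustration; production tools such as the shap library approximate this calculation for large models rather than enumerating every feature subset as this brute-force version does.

```python
# Exact Shapley values for a tiny model, computed by brute force.
# The "model" and its inputs are hypothetical, for illustration only.
from itertools import combinations
from math import factorial

def model(income, debt, age):
    # Toy linear scoring function (an assumption, not a real credit model).
    return 2.0 * income - 1.5 * debt + 0.5 * age

FEATURES = ["income", "debt", "age"]
baseline = {"income": 0.0, "debt": 0.0, "age": 0.0}  # reference input
instance = {"income": 3.0, "debt": 2.0, "age": 4.0}  # prediction to explain

def f(subset):
    # Evaluate the model with features in `subset` taken from the
    # instance being explained and all other features held at the baseline.
    args = {k: (instance[k] if k in subset else baseline[k]) for k in FEATURES}
    return model(**args)

def shapley(feature):
    # Average the feature's marginal contribution over every subset of
    # the remaining features, with the standard Shapley weighting.
    n = len(FEATURES)
    others = [g for g in FEATURES if g != feature]
    total = 0.0
    for r in range(n):
        for subset in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += weight * (f(set(subset) | {feature}) - f(set(subset)))
    return total

contributions = {g: shapley(g) for g in FEATURES}
# Each value is that feature's additive contribution to the prediction;
# by construction they sum to f(instance) - f(baseline).
print(contributions)
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline, which makes the output easy to check by hand; for nonlinear models the same subset-averaging definition applies, but libraries resort to sampling or model-specific shortcuts because the number of subsets grows exponentially.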
—Edited by Joyce Yoo, editor, New York
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to our McKinsey Quarterly Five Fifty alert list.
Copyright © 2023 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007