Beyond ‘quiet luxury,’ gen AI and the future of work, Tech for Execs, and more: The Daily Read weekender

Unwind and catch up on the week's big reads
The Daily Read Weekend Edition

Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities

TECH FOR EXECS

Our experts serve up a periodic look at the technology concepts leaders need to understand to help their organizations grow and thrive in the digital age.

An image linking to the web page “Why businesses need explainable AI—and how to deliver it” on McKinsey.com.

What it is. Explainable AI (XAI) refers to the ability to understand how an AI-powered application arrived at a particular output. Because the algorithms behind AI applications are complex, this isn’t easy. But doing so is foundational to managing risks and ensuring that an AI application performs optimally.

Why we need it. While the neural networks running behind many AI apps are loosely modeled after the human brain, they often don’t process data the way humans do. And if we can’t understand how an AI arrived at a particular conclusion, how can skeptical users trust the results enough to act on them? XAI has business implications as well. It can help data scientists figure out why an app might be producing biased or inaccurate recommendations. Also, if regulators want proof that bias isn’t occurring—a mounting concern as more governments consider AI regulations—XAI makes it easier to provide that proof.

How to make AI interpretable. Data scientists are applying a number of “explainability” features—including local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP)—to illuminate which data most influence an AI’s decisions. Still, only 25 percent of organizations enable XAI. Is your AI interpretable? Simple, easily interpretable algorithms often suffice, but when complexity is required, applying explainability techniques should become standard practice.
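To make the idea behind SHAP concrete, here is a minimal sketch of the underlying Shapley-value computation, written from scratch in Python rather than using the `shap` library. The toy “credit score” model, its feature values, and the all-zeros baseline are hypothetical; each feature’s attribution is its average marginal contribution over every ordering in which features can be switched from the baseline to the actual input.

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for a small feature set: for every
    ordering of the features, switch them one at a time from the
    baseline value to the instance value and record each feature's
    marginal effect on the model output, then average over orderings."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = x[i]
            new = model(current)
            phi[i] += new - prev
            prev = new
    return [p / len(perms) for p in phi]

# Hypothetical scoring model: income + 2 * credit_history - debt
model = lambda f: f[0] + 2 * f[1] - f[2]

x = [3.0, 1.0, 2.0]          # instance to explain
baseline = [0.0, 0.0, 0.0]   # reference ("average") input
phi = shapley_values(model, x, baseline)
# By the efficiency property, the attributions sum to
# model(x) - model(baseline).
```

Exact computation like this scales exponentially with the number of features; the SHAP techniques mentioned above approximate these values efficiently for real models, but the attribution they produce has the same interpretation.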

QUOTE OF THE DAY

CHART OF THE DAY

A chart titled “When respondents say their transformations involved each of seven key actions, they also report a higher rate of success.” Click to open the full article on McKinsey.com.

Ready to unwind?

—Edited by Joyce Yoo, editor, New York

McKinsey & Company

Follow our thinking

LinkedIn Twitter Facebook

Share these insights

Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.

This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.

You received this email because you subscribed to our McKinsey Quarterly Five Fifty alert list.

Manage subscriptions | Unsubscribe

Copyright © 2023 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007


by "McKinsey Daily Read" <publishing@email.mckinsey.com> - 12:18 - 27 Oct 2023