A new tool to drive software performance

Re:think

McKinsey & Company

FRESH TAKES ON BIG IDEAS




ON SOFTWARE EXCELLENCE
Can software developer productivity really be measured?

Chandra Gnanasambandam



Companies have long had a difficult time tracking the experience and productivity of their software engineering teams and identifying improvements. Part of the problem is that writing software code is an inherently creative and collaborative process. Establishing a clear link between the various inputs and outputs (as distinct from business outcomes) is also challenging.

Yet learning how to measure the maturity of practices that contribute to developer experience and productivity is more critical than ever. Virtually every company today wants to become a software company to one extent or another. There are currently about 25 million to 30 million developers worldwide, a number expected to reach close to 50 million by the end of the decade. Low-code/no-code platforms and the emergence of generative AI (gen AI) are likely to greatly expand the pool of people who can create applications and build digital solutions.

Today, there are two main measurement systems that the industry uses to provide insight into developer productivity. DORA (short for "DevOps Research and Assessment") metrics focus on outcomes, and SPACE (short for "satisfaction/well-being, performance, activity, communication/collaboration, and efficiency/flow") takes a multidimensional view of productivity. Both systems provide useful insights.
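To make the outcome focus of DORA concrete, its four metrics (deployment frequency, lead time for changes, change failure rate, and time to restore) can be computed from basic deployment and incident records. The sketch below is a minimal Python illustration; the record format and sample values are invented, not a standard schema.

```python
from datetime import datetime

# Hypothetical deployment records for one week:
# (commit_time, deploy_time, caused_failure, hours_to_restore)
deployments = [
    (datetime(2024, 4, 1, 9), datetime(2024, 4, 1, 15), False, 0.0),
    (datetime(2024, 4, 2, 10), datetime(2024, 4, 3, 11), True, 2.5),
    (datetime(2024, 4, 4, 8), datetime(2024, 4, 4, 12), False, 0.0),
    (datetime(2024, 4, 5, 9), datetime(2024, 4, 6, 9), False, 0.0),
]
period_days = 7

# Deployment frequency: deploys per day over the observation window
deploy_frequency = len(deployments) / period_days

# Lead time for changes: mean hours from commit to deploy
lead_times = [(d - c).total_seconds() / 3600 for c, d, _, _ in deployments]
mean_lead_time = sum(lead_times) / len(lead_times)

# Change failure rate: share of deployments that caused a failure
failure_rate = sum(1 for *_, failed, _ in deployments if failed) / len(deployments)

# Time to restore: mean hours to recover, over failed deployments only
restores = [hours for *_, failed, hours in deployments if failed]
mean_restore = sum(restores) / len(restores) if restores else 0.0

print(deploy_frequency, mean_lead_time, failure_rate, mean_restore)
```

In a real pipeline these records would come from a CI/CD system and an incident tracker rather than hand-entered tuples, but the four calculations stay the same.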

We believe an additional set of metrics can provide deeper insight into the root causes and identify bottlenecks that slow developers down. This system is intended to help create the best environment and experience to improve overall performance and foster innovation. Critically, it is not intended for performance management or oversight of developers but rather to improve their day-to-day experience and flow; indeed, all data is anonymized and not attributable to a specific individual. The approach involves a set of metrics that analyze work at the system, group, and individual level, focusing on a few key areas of the development process.

The first area is the divide in software development between the inner loop, which encompasses the core work that developers do when they are in the “flow,” and the outer loop, which focuses on all other tasks required to ship a product (integration testing, dependency management, setting up environments, et cetera). Outer-loop activity has real value, particularly in activities early in the development life cycle, such as technical discovery and design work or ensuring code meets the bar on quality, security, and compliance. However, our experience has shown that too much time in the outer loop can be a symptom of underlying issues that affect productivity—including manual activities to release code, holding patterns where developers are waiting on another colleague or team, and multiple meetings to manage dependencies. Leading tech companies aim for developers to spend close to 70 percent of their time on inner-loop activity. By tracking how much time developers are spending on the two loops, companies can optimize the use of in-demand talent.
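A minimal sketch of this kind of loop tracking, assuming a simple weekly time log per developer and an illustrative (not canonical) split of activities into inner- and outer-loop work:

```python
# Hypothetical weekly time log, in hours. Which activities count as
# "inner loop" vs. "outer loop" is an assumption for illustration.
time_log = {
    "coding": 18.0,
    "local_build_and_test": 6.0,
    "code_review_authoring": 3.0,
    "integration_testing": 4.0,
    "environment_setup": 3.0,
    "dependency_meetings": 4.0,
    "release_process": 2.0,
}

# Inner loop: the core "flow" work of writing, building, and testing code
INNER_LOOP = {"coding", "local_build_and_test", "code_review_authoring"}

inner = sum(hours for activity, hours in time_log.items() if activity in INNER_LOOP)
total = sum(time_log.values())
inner_share = inner / total

# Leading tech companies aim for roughly 70 percent inner-loop time
meets_target = inner_share >= 0.70
print(f"inner-loop share: {inner_share:.0%}, meets 70% target: {meets_target}")
```

Tracked over time and aggregated across a team (with individual data anonymized, as the article stresses), a falling inner-loop share can point to the manual release steps, waiting, and dependency meetings described above.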

“Tracking developer productivity in a more holistic way can shorten the time it takes to launch a product by 30 to 40 percent.”

The next step is applying the Developer Velocity Index, which collects insights directly from developers to identify the factors that most affect developer experience and productivity. While this tool has its limitations, it can help companies gain qualitative insight into their practices, tools, culture, and talent management, and surface weaknesses so they can be corrected.

Third, the system conducts a broad-based contribution analysis that examines how teams are functioning collectively. Working with backlog management tools such as Jira, it plots a contribution distribution curve and identifies opportunities for improvement in the way that teams are set up or operating, such as increasing automation, developing individual skills, and rethinking role distribution. One company, for example, discovered its newest hires were having difficulty becoming productive and responded by reassessing its onboarding, documentation, and mentorship programs.
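One way to build such a contribution distribution curve is to sort contributors by output and accumulate each one's share of the team's total. The sketch below uses invented, anonymized counts; in practice the data would come from a backlog tool's export.

```python
# Hypothetical closed-item counts per anonymized contributor, e.g. exported
# from a backlog tool such as Jira. Keys are placeholders, not real IDs.
closed_items = {"dev_a": 42, "dev_b": 35, "dev_c": 18, "dev_d": 9, "dev_e": 4, "dev_f": 2}

# Contribution distribution: sort by output, then compute each contributor's
# cumulative share of the total (a Lorenz-style curve).
counts = sorted(closed_items.values(), reverse=True)
total = sum(counts)
cumulative, curve = 0, []
for c in counts:
    cumulative += c
    curve.append(cumulative / total)

# A steep curve (a few people account for most output) can flag onboarding,
# skill, or role-distribution issues worth investigating, not individual blame.
top_two_share = curve[1]
print([round(p, 2) for p in curve])
```

Here the two most active contributors account for 70 percent of closed items, the kind of skew that might prompt the onboarding and mentorship review described above.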

More than 50 companies across sectors have already implemented this new approach. Early findings are encouraging, including a 30 to 40 percent reduction in the time it takes to launch a product, a 15 to 25 percent improvement in product quality, a 20 percent jump in developer experience scores, and a 60 percent improvement in customer satisfaction ratings. We have found that developers are typically happy when companies put in place a holistic measurement system like this, because it highlights issues they have dealt with and been frustrated by. This approach has also had the effect of strengthening a culture of psychological safety, where all team members feel free to take risks and share ideas without fear of negative repercussions or personal judgment. McKinsey research on Developer Velocity has previously shown that psychological safety can be a leading driver of developer experience and innovation.

As effective as these metrics have proven to be so far, there are some pitfalls to avoid in how they are applied. Beyond never employing the metrics for performance management, companies should not use them to set "targets" for teams: they are too blunt an instrument to optimize for and can incentivize the wrong behaviors. Nor should they be used to compare teams, as each has its own distinct way of working. Lastly, for any engineering metric, absolute numbers are rarely meaningful on their own; trends over time are more telling.

Still, this type of holistic approach could be even more important in the coming years. There is emerging evidence that gen AI can help boost productivity for software development teams that have already started to make improvements in this area. While results vary greatly depending on the specific task and developers’ years in the field, pilots show that gen AI can help further increase developer productivity by as much as 15 to 25 percent. In particular, complex activities such as code refactoring (migrating or updating legacy code) and code documentation (maintaining detailed records and explanations of the changes made to existing code) enjoy sizable boosts from gen AI. That research has also shown that usage of gen AI can increase overall developer happiness and satisfaction. And even as gen AI’s impact on the developer experience and software innovation grows, one thing that isn’t likely to change is that a happier developer (or any worker) tends to be a more productive developer.


ABOUT THIS AUTHOR

Chandra Gnanasambandam is a senior partner in McKinsey’s Bay Area office.






Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007


by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 01:07 - 1 May 2024