What are some recent trends in quantum-technology funding?
On Point
Breakthroughs in quantum technology Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:23 - 9 May 2024 -
Samsung reduces data costs by 30% - Case Study Inside! 🚀
Sumo Logic
From best practices to key features
Scalable log management and analytics: Spend more time innovating and less time troubleshooting.
Powerful log analytics and cost savings of 30% with data tiering? Absolutely.
It takes an enormous amount of data to run Bixby (Samsung’s answer to Siri) – even more so with Galaxy AI integration. But, Samsung’s data growth wasn’t sustainable given their budget.
With Sumo Logic, Samsung:
- Reduced data costs by 30% while maintaining their data growth trajectory
- Improved customer service and response times
- Reduced overhead
Since this case study, we've launched an industry-leading, analytics-based pricing model, including unlimited log ingestion and indexing at $0 with unlimited full-access users.
Introducing analytics-based Flex Pricing.
Sumo Logic, Aviation House, 125 Kingsway, London WC2B 6NH, UK
© 2024 Sumo Logic, All rights reserved.
by "Sumo Logic" <marketing-info@sumologic.com> - 09:01 - 8 May 2024 -
What is generative AI?
On Point
Gen AI’s benefits and risks Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Growing adoption of AI. Gen AI, a form of AI that uses algorithms to create content, has come a long way since ChatGPT burst on the scene in 2022. Given the potential of gen AI to dramatically change how a range of jobs are performed, organizations of all stripes have raced to incorporate the technology. Over the past five years, AI adoption has more than doubled, according to a 2022 survey by McKinsey senior partners Alex Singla and Alexander Sukharevsky, global leaders of QuantumBlack, AI by McKinsey, and their coauthors.
• AI’s limitations. Developing a bespoke gen AI model is highly resource intensive and therefore out of reach for most companies today. Instead, organizations typically either use gen AI out of the box or fine-tune the technology using proprietary data to help perform specific tasks. Because gen AI models are so new, the long-tail effects are still unknown, which means there are risks involved in using these models. Understand some of the limitations of gen AI, and visit McKinsey Digital to see how companies are using technology to create real value.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:30 - 8 May 2024 -
100X Scaling: How Figma Scaled its Databases
😘 Kiss bugs goodbye with fully automated end-to-end test coverage (Sponsored)
Bugs sneak out when less than 80% of user flows are tested before shipping. But getting that kind of coverage — and staying there — is hard and pricey for any sized team.
QA Wolf takes testing off your plate:
→ Get to 80% test coverage in just 4 months.
→ Stay bug-free with 24-hour maintenance and on-demand test creation.
→ Get unlimited parallel test runs
→ Zero Flakes guaranteed
QA Wolf has generated amazing results for companies like Salesloft, AutoTrader, Mailchimp, and Bubble.
🌟 Rated 4.5/5 on G2
Learn more about their 90-day pilot
Figma, a collaborative design platform, has been on a wild growth ride for the last few years. Its user base has grown by almost 200% since 2018, attracting around 3 million monthly users.
As more and more users hopped on board, the infrastructure team found themselves in a tight spot: they needed a quick way to scale their databases to keep up with the increasing demand.
The database stack is like the backbone of Figma. It stores and manages all the important metadata, like permissions, file info, and comments. And it ended up growing a whopping 100x since 2020!
That's a good problem to have, but it also meant the team had to get creative.
In this article, we'll dive into Figma's database scaling journey. We'll explore the challenges they faced, the decisions they made, and the innovative solutions they came up with. By the end, you'll better understand what it takes to scale databases for a rapidly growing company like Figma.
The Initial State of Figma’s Database Stack
In 2020, Figma still used a single, large Amazon RDS database to persist most of the metadata. While it handled things quite well, one machine had its limits.
During peak traffic, CPU utilization rose above 65%, resulting in unpredictable database latencies.
While complete saturation was far away, the infrastructure team at Figma wanted to proactively identify and fix any scalability issues. They started with a few tactical fixes such as:
Upgrade the database to the largest instance available (from r5.12xlarge to r5.24xlarge).
Create multiple read replicas to scale read traffic.
Establish new databases for new use cases to limit the growth of the original database.
Add PgBouncer as a connection pooler to limit the impact of a growing number of connections.
These fixes gave them an additional year of runway but there were still limitations:
Based on the database traffic, they learned that write operations contributed a major portion of the overall utilization.
Not all read operations could be moved to replicas, because certain use cases were sensitive to the impact of replication lag.
It was clear that they needed a longer-term solution.
The First Step: Vertical Partitioning
When Figma's infrastructure team realized they needed to scale their databases, they couldn't just shut everything down and start from scratch. They needed a solution to keep Figma running smoothly while they worked on the problem.
That's where vertical partitioning came in.
Think of vertical partitioning as reorganizing your wardrobe. Instead of having one big pile of mess, you split things into separate sections. In database terms, it means moving certain tables to separate databases.
For Figma, vertical partitioning was a lifesaver. It allowed them to move high-traffic, related tables like those for “Figma Files” and “Organizations” into their separate databases. This provided some much-needed breathing room.
To identify the tables for partitioning, Figma considered two factors:
Impact: Moving the tables should move a significant portion of the workload.
Isolation: The tables should not be strongly connected to other tables.
For measuring impact, they looked at average active sessions (AAS) for queries. This stat describes the average number of active threads dedicated to a given query at a certain point in time.
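As a rough sketch (not Figma's actual tooling), AAS can be estimated by periodically sampling which queries are executing and averaging the per-query counts across samples:

```python
from collections import Counter

def average_active_sessions(snapshots):
    """Estimate per-query AAS from periodic snapshots of active sessions.

    Each snapshot is a list of the queries currently executing (one entry
    per active session). AAS for a query is its total number of
    appearances divided by the number of snapshots taken.
    """
    totals = Counter()
    for snapshot in snapshots:
        totals.update(snapshot)
    n = len(snapshots)
    return {query: count / n for query, count in totals.items()}
```

For example, a query seen in two sessions in one sample and one session in the next, then absent, averages one active session over three samples.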
Measuring isolation was trickier. They used runtime validators that hooked into ActiveRecord, their Ruby ORM. The validators sent production query and transaction information to Snowflake for analysis, helping them identify tables that were ideal for partitioning based on query patterns and table relationships.
Once the tables were identified, Figma needed to migrate them between databases without downtime. They set the following goals for their migration solution:
Limit potential availability impact to less than 1 minute.
Automate the procedure so it is easily repeatable.
Have the ability to undo a recent partition.
Since they couldn’t find a pre-built solution that could meet these requirements, Figma built an in-house solution. At a high level, it worked as follows:
Prepared client applications to query from multiple database partitions.
Replicated tables from the original database to a new database until the replication lag was near 0.
Paused activity on the original database.
Waited for databases to synchronize.
Rerouted query traffic to the new database.
Resumed activity.
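The cutover sequence above can be sketched as an orchestration loop. Everything here is illustrative: `source`, `replica`, and `router` are hypothetical interfaces standing in for Figma's internal tooling, not a real API:

```python
import time

def migrate_partition(source, replica, router, max_lag_seconds=1.0, timeout=60.0):
    """Sketch of a near-zero-downtime partition migration."""
    # 1. Replicate until the replica is nearly caught up.
    while replica.lag_seconds() > max_lag_seconds:
        time.sleep(0.5)
    # 2. Pause writes on the source so no new changes appear.
    source.pause_writes()
    try:
        # 3. Wait for the replica to fully synchronize.
        deadline = time.monotonic() + timeout
        while replica.lag_seconds() > 0:
            if time.monotonic() > deadline:
                raise TimeoutError("replica did not catch up; aborting cutover")
            time.sleep(0.1)
        # 4. Reroute query traffic for the partitioned tables.
        router.point_tables_at(replica)
    finally:
        # 5. Resume application activity; rerouted traffic now
        #    hits the new database.
        source.resume_writes()
```

The `finally` block matters: even if the cutover times out and is rolled back, activity must resume on the original database.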
To make the migration to partitioned databases smoother, they created separate PgBouncer services to split the traffic virtually. Security groups were implemented to ensure that only PgBouncers could directly access the database.
Partitioning the PgBouncer layer first gave clients some cushion for incorrectly routed queries, since all PgBouncer instances initially had the same target database. During this window, the team could detect routing mismatches and make the necessary corrections.
The below diagram shows this process of migration.
Implementing Replication
Data replication is a great way to scale the read operations for your database. When it came to replicating data for vertical partitioning, Figma had two options in Postgres: streaming replication or logical replication.
They chose logical replication for 3 main reasons:
Logical replication allowed them to port over a subset of tables so that they could start with a much smaller storage footprint in the destination database.
It enabled them to replicate data into a database running a different Postgres major version.
Lastly, it allowed them to set up reverse replication to roll back the operation if needed.
However, logical replication was slow. The initial data copy could take days or even weeks to complete.
Figma desperately wanted to avoid this lengthy process, not only to minimize the window for replication failure but also to reduce the cost of restarting if something went wrong.
But what made the process so slow?
The culprit was how Postgres maintains indexes in the destination database. While the replication process copies rows in bulk, it also updates the indexes one row at a time. By removing indexes in the destination database and rebuilding them after the data copy, Figma reduced the copy time to a matter of hours.
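A minimal demonstration of the pattern, using SQLite purely for illustration (Figma's databases are Postgres, and the table and index names here are made up): load the rows first, then build the index in a single pass:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file_metadata (id INTEGER, file_id INTEGER)")

rows = [(i, i % 100) for i in range(10_000)]

# Copy the rows in bulk first, with no indexes to maintain...
conn.executemany("INSERT INTO file_metadata VALUES (?, ?)", rows)

# ...then rebuild the index in one pass instead of updating it per row.
conn.execute("CREATE INDEX idx_file_id ON file_metadata (file_id)")
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM file_metadata").fetchone()[0]
print(count)  # 10000
```

In Postgres the same idea applies at much larger scale: dropping destination indexes before the logical-replication initial copy and recreating them afterward is what cut Figma's copy time from weeks to hours.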
Need for Horizontal Scaling
As Figma's user base and feature set grew, so did the demands on their databases.
Despite their best efforts, vertical partitioning had limitations, especially for Figma’s largest tables. Some tables contained several terabytes of data and billions of rows, making them too large for a single database.
A couple of problems were especially prominent:
Postgres Vacuum Issue: Vacuuming is an essential background process in Postgres that reclaims storage occupied by deleted or obsolete rows. Without regular vacuuming, the database would eventually run out of transaction IDs and grind to a halt. However, vacuuming large tables can be resource-intensive and cause performance issues and downtime.
Max IO Operations Per Second: Figma’s highest-write tables were growing so quickly that they would soon exceed the maximum IOPS limit of Amazon RDS.
For a better perspective, imagine a library with a rapidly growing collection of books. Initially, the library might cope by adding more shelves (vertical partitioning). But eventually, the building itself will run out of space. No matter how efficiently you arrange the shelves, you can’t fit an infinite number of books in a single building. That’s when you need to start thinking about opening branch libraries.
This is the approach of horizontal sharding.
For Figma, horizontal sharding was a way to split large tables across multiple physical databases, allowing them to scale beyond the limits of a single machine.
The below diagram shows this approach:
However, horizontal sharding is a complex process that comes with its own set of challenges:
Some SQL queries become inefficient to support.
Application code must be updated to route queries efficiently to the correct shard.
Schema changes must be coordinated across all shards.
Postgres can no longer enforce foreign keys and globally unique indexes.
Transactions span multiple shards, which means Postgres cannot be used to enforce transactionality.
Exploring Alternative Solutions
The engineering team at Figma evaluated alternative SQL options such as CockroachDB, TiDB, Spanner, and Vitess as well as NoSQL databases.
Eventually, however, they decided to build a horizontally sharded solution on top of their existing vertically partitioned RDS Postgres infrastructure.
There were multiple reasons for taking this decision:
They could leverage their existing expertise with RDS Postgres, which they had been running reliably for years.
They could tailor the solution to Figma’s specific needs, rather than adapting their application to fit a generic sharding solution.
In case of any issues, they could easily roll back to their unsharded Postgres databases.
They did not need to change their complex relational data model built on top of Postgres architecture to a new approach like NoSQL. This allowed the teams to continue building new features.
Figma’s Unique Sharding Implementation
Figma’s approach to horizontal sharding was tailored to their specific needs as well as the existing architecture. They made some unusual design choices that set their implementation apart from other common solutions.
Let’s look at the key components of Figma’s sharding approach:
Colos (Colocations) for Grouping Related Tables
Figma introduced the concept of “colos” or colocations, which are a group of related tables that share the same sharding key and physical sharding layout.
To create the colos, they selected a handful of sharding keys like UserId, FileId, or OrgID. Almost every table at Figma could be sharded using one of these keys.
This provides a friendly abstraction for developers to interact with horizontally sharded tables.
Tables within a colo support cross-table joins and full transactions when restricted to a single sharding key. Most application code already interacted with the database in a similar way, which minimized the work required by applications to make a table ready for horizontal sharding.
The below diagram shows the concept of colos:
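A toy illustration of colo-based routing. The colo layout, table names, shard count, and hash function are all assumptions for the sketch, not Figma's actual configuration:

```python
import hashlib

# Each colo groups related tables that share one sharding key.
COLOS = {
    "file_colo": {"tables": {"files", "comments", "file_permissions"}, "key": "FileId"},
    "org_colo": {"tables": {"orgs", "org_members"}, "key": "OrgId"},
}
NUM_SHARDS = 4  # illustrative; real layouts differ

def shard_for(table, sharding_key_value):
    """Map a (table, key value) pair to a physical shard via its colo.

    Every table in a colo hashes the same key, so joins and transactions
    restricted to a single key value stay on one shard.
    """
    for colo, spec in COLOS.items():
        if table in spec["tables"]:
            digest = hashlib.md5(str(sharding_key_value).encode()).hexdigest()
            return colo, int(digest, 16) % NUM_SHARDS
    raise KeyError(f"{table} is not assigned to a colo")
```

Because `files` and `comments` live in the same colo, rows for the same FileId land on the same shard, which is what keeps single-key joins cheap.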
Logical Sharding vs Physical Sharding
Figma separated the concept of “logical sharding” at the application layer from “physical sharding” at the Postgres layer.
Logical sharding involves creating multiple views per table, each corresponding to a subset of data in a given shard. All reads and writes to the table are sent through these views, making the table appear horizontally sharded even though the data is physically located on a single database host.
This separation allowed Figma to decouple the two parts of their migration and implement them independently. They could perform a safer and lower-risk logical sharding rollout before executing the riskier distributed physical sharding.
Rolling back logical sharding was a simple configuration change, whereas rolling back physical shard operations would require more complex coordination to ensure data consistency.
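Logical sharding can be mimicked in miniature with per-shard views over a single physical table. SQLite stands in for Postgres here, and the shard function, counts, and names are illustrative:

```python
import sqlite3

NUM_LOGICAL_SHARDS = 4

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER, owner_id INTEGER)")
conn.executemany("INSERT INTO files VALUES (?, ?)",
                 [(i, i * 7) for i in range(20)])

# One view per logical shard, each exposing one slice of the table.
# Queries go through the views, so the table behaves as if sharded
# even though the data still lives on a single host.
for shard in range(NUM_LOGICAL_SHARDS):
    conn.execute(
        f"CREATE VIEW files_shard_{shard} AS "
        f"SELECT * FROM files WHERE owner_id % {NUM_LOGICAL_SHARDS} = {shard}"
    )

# Every row is visible through exactly one shard view.
total = sum(
    conn.execute(f"SELECT COUNT(*) FROM files_shard_{s}").fetchone()[0]
    for s in range(NUM_LOGICAL_SHARDS)
)
print(total)  # 20
```

Rolling this back really is just a configuration change: point clients at the base table again and drop the views, with no data movement involved.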
DBProxy Query Engine for Routing and Query Execution
To support horizontal sharding, the Figma engineering team built a new service named DBProxy that sits between the application and connection pooling layers such as PgBouncer.
DBProxy includes a lightweight query engine capable of parsing and executing horizontally sharded queries. It consists of three main components:
A query parser that reads the SQL sent by the application and transforms it into an Abstract Syntax Tree (AST).
A logical planner that parses the AST, extracts the query type (insert, update, etc.), and logical shard IDs from the query plan.
A physical planner that maps the query from logical shard IDs to physical databases and rewrites queries to execute on the appropriate physical shard.
The below diagram shows the practical use of these three components within the query processing workflow.
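A heavily simplified sketch of the three-stage pipeline. The regex "parser", the shard-key columns, and the host names are all stand-ins for DBProxy's real components, which work on a full AST:

```python
import re

NUM_SHARDS = 4
SHARD_KEY_COLUMNS = {"file_id", "user_id", "org_id"}  # illustrative

def parse(sql):
    """Toy 'parser': extract the query type and equality predicates."""
    query_type = sql.strip().split()[0].upper()
    predicates = dict(re.findall(r"(\w+)\s*=\s*(\d+)", sql))
    return {"type": query_type, "predicates": predicates}

def logical_plan(ast):
    """Logical planner: find the shard key and compute logical shard IDs."""
    for col, val in ast["predicates"].items():
        if col.lower() in SHARD_KEY_COLUMNS:
            return {"type": ast["type"], "logical_shards": [int(val) % NUM_SHARDS]}
    # No shard key in the query: scatter to every logical shard.
    return {"type": ast["type"], "logical_shards": list(range(NUM_SHARDS))}

def physical_plan(plan):
    """Physical planner: map logical shard IDs onto database hosts."""
    hosts = [f"db-host-{i}" for i in range(NUM_SHARDS)]  # hypothetical topology
    return [hosts[s] for s in plan["logical_shards"]]

routed = physical_plan(logical_plan(parse(
    "SELECT * FROM files WHERE file_id = 6")))
print(routed)  # ['db-host-2']
```

Note how a query without a shard key falls through to every shard; that is exactly the scatter-gather case discussed next.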
There are always trade-offs when it comes to queries in a horizontally sharded world. Queries for a single shard key are relatively easy to implement. The query engine just needs to extract the shard key and route the query to the appropriate physical database.
However, if the query does not contain a sharding key, the query engine has to perform a more complex “scatter-gather” operation. This operation is similar to a hide-and-seek game where you send the query to every shard (scatter), and then piece together answers from each shard (gather).
The below diagram shows how single-shard queries work when compared to scatter-gather queries.
As you can see, this increases the load on the database, and having too many scatter-gather queries can hurt horizontal scalability.
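The gather side can be as simple as concatenating per-shard results and re-applying any ordering, as in this sketch (shards are represented as plain lists of rows here):

```python
def scatter_gather(shards, run_query):
    """Scatter a shard-key-less query to every shard, then gather and merge.

    `run_query(shard)` returns that shard's rows. Merging here is a simple
    concatenation plus sort, standing in for whatever ORDER BY the query asked
    for; a real engine would also handle aggregates and limits.
    """
    partial_results = [run_query(shard) for shard in shards]  # scatter
    merged = [row for rows in partial_results for row in rows]  # gather
    return sorted(merged)
```

Every shard does work for every such query, which is why keeping scatter-gather queries rare is essential for horizontal scalability.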
To manage things better, DBProxy handles load-shedding, transaction support, database topology management, and improved observability.
Shadow Application Readiness Framework
Figma added a “shadow application readiness” framework capable of predicting how live production traffic would behave under different potential sharding keys.
This framework helped them keep the DBProxy simple while reducing the work required for the application developers in rewriting unsupported queries.
All queries and associated plans were logged to a Snowflake database, where the team could run offline analysis. Based on the data collected, they picked a query language that supported the most common 90% of queries while avoiding worst-case complexity in the query engine.
Conclusion
Figma’s infrastructure team shipped their first horizontally sharded table in September 2023, marking a significant milestone in their database scaling journey.
The implementation was successful, with minimal availability impact during the operation and no regressions in latency or availability afterward.
Figma’s ultimate goal is to horizontally shard every table in their database and achieve near-infinite scalability. They have identified several challenges that need to be solved such as:
Supporting horizontally sharded schema updates
Generating globally unique IDs for horizontally sharded primary keys
Implementing atomic cross-shard transactions for business-critical use cases
Enabling distributed globally unique indexes
Developing an ORM model to improve developer velocity
Automating reshard operations to enable shard splits at the click of a button
Lastly, after achieving a sufficient runway, they also plan to reassess their current approach of using in-house RDS horizontal sharding versus switching to an open-source or managed alternative in the future.
© 2024 ByteByteGo
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 7 May 2024 -
Do you know what inflation is?
On Point
A new McKinsey Explainer Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• The upside of inflation. Inflation, a broad rise in the prices of goods and services across the economy over time, has been top of mind for many. High inflation can erode purchasing power for consumers and businesses. But inflation isn’t all bad. In a healthy economy, annual inflation is usually about two percentage points. That can stimulate spending and boost economies. Inflation may be declining in many areas, but there’s still uncertainty ahead: without a surge in productivity, Western economies may be headed for a period of sustained inflation or major economic reset, McKinsey Global Institute chair Sven Smit and coauthors find.
• Causes of inflation. One type of short-term inflation is demand-pull inflation, which occurs when demand for goods and services exceeds the economy’s ability to produce them. For example, when demand for new cars recovered more quickly than anticipated from its sharp dip at the start of the COVID-19 pandemic, an intervening shortage in the supply of semiconductors made it hard for the auto industry to keep up with this renewed demand, McKinsey senior partner Ondrej Burkacky and coauthors share. See our recent McKinsey Explainer “What is inflation?” to explore five steps companies can take to address inflation.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:06 - 7 May 2024 -
Are you making the most of the cloud? A leader’s guide
The sky’s the limit Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
The tale of cloud computing follows a familiar plotline. As with many other new technologies, there’s lots of excitement about the potential value to gain and then lots of growing pains and entrenched mindsets that make adoption feel more than arduous. But 15 years into the journey, cloud seems to be at an inflection point. McKinsey research projects that cloud could generate up to $3 trillion in value by 2030. Generative AI (gen AI) also has the potential to accelerate cloud use, and the returns a company sees from it, in unprecedented ways. This week, we look at cloud’s growing upsides and how companies can make good on its promise.
Cloud’s potential to transform a business isn’t unique to one industry or region. Still, companies in different geographies face varying conditions and obstacles. McKinsey’s Bernhard Mühlreiter and coauthors look closely at the state of cloud in Europe. There, adoption rates and ambitions are high, while efforts to scale cloud leave room for improvement. So far, European companies have tended to pursue IT-led or IT-focused migrations to the cloud and measure the impact in IT-specific terms. Yet the real value from cloud, in Europe and beyond, comes from opportunities to increase revenue and save costs in a company’s business operations. How companies measure cloud’s impact matters, too. Rather than migrating as many workloads to the cloud as possible, Mühlreiter and other experts stress the importance of focusing on ROI and being intentional about why and where in the business cloud is adopted.
That’s how many percentage points of incremental ROI could be added to a company’s cloud program if it’s also using gen AI. According to senior partner William Forrest and colleagues, the adoption of both gen AI and cloud technologies is mutually beneficial and potentially transformative. To name a few benefits: cloud can support the complex scaling up of gen AI initiatives across an enterprise, just as gen AI capabilities can speed up the cloud migration process. Other gen AI–related advantages include reduced migration costs and higher productivity for development and infrastructure teams that are working on cloud.
That’s Mark Oh, director of infrastructure and user services group at the US Centers for Medicare & Medicaid Services (CMS), on the importance of a people-centered move to the cloud. Oh, along with Rajiv Uppal, chief information officer (CIO) at CMS and the incoming CIO of the Internal Revenue Service, spoke with senior partner Naufal Khan and colleagues about the agency’s cloud migration and why it was important to understand the business, and the people running it, every step of the way. One tactic for achieving broad buy-in was enabling change rather than mandating it. “A lot of the success can be attributed to the ‘community of practice’ we established, where we involved the entire stakeholder community to help shape solutions,” Uppal says. “This community was instrumental in sharing best practices so we could all benefit.”
Even with all of the value cloud promises, the challenges to developing a successful program persist. In an interview with The McKinsey Podcast, McKinsey’s Mark Gu and James Kaplan describe the key components of cloud success that many firms have yet to address: identifying the biggest sources of business value, establishing foundational capabilities, and reimagining the technology operating model and skill needs. As with other types of tech, the business case for cloud can be complicated. “In many cases, the benefits and the investments are happening in two different parts of the income statement or the organization,” Kaplan says. But companies (and leaders) must be patient in their quest for cloud-based value, which goes far beyond cost savings. “You can’t make the case on IT cost by itself,” according to Kaplan. “The reason you go to cloud is because of the business value it enables, through agility, scalability, innovation, and flexibility.”
What does cloud technology share with its meteorological cousin, the polar stratospheric cloud (or “PSC” for short)? Both are invisible, for starters, and both are major players in the planet’s climate change story. Cloud technology has the potential to accelerate the implementation of critical decarbonization initiatives. On the other hand, largely imperceptible PSCs have formed at the Earth’s poles, contributing to their warming more quickly than climate models can account for. Scientists say that learning more about how PSCs and other cloud types influence these trends will be critical to the field of climate science. So, too, could the use of cloud-based tools such as AI and machine learning.
Lead by laying a foundation for cloud success.
— Edited by Daniella Seiler, executive editor, Washington, DC
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
You received this email because you subscribed to the Leading Off newsletter.
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:25 - 6 May 2024 -
The benefits of strengthening cross-tenure nurse relationships
On Point
Closing the experience gap Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Career continuum complexities. Nurses are still reeling from the aftershocks of the COVID-19 pandemic, which exacerbated existing challenges in healthcare delivery. Healthcare organizations have been trying to better understand how to attract nurses to the workforce and improve retention. There’s been considerable attention given to the four generations in the workplace, but when it comes to nursing, tenure adds a dimension to the discussion. It may be one of the more defining characteristics of employee experience, point out McKinsey senior partner Gretchen Berlin and coauthors.
—Edited by Querida Anderson, senior editor, New York
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:18 - 6 May 2024 -
The Week in Charts
Aircraft backlogs, real estate deal volume, and more
Share these insights
You received this email because you subscribed to The Week in Charts newsletter.
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:38 - 4 May 2024 -
EP110: Top 5 Strategies to Reduce Latency
This week’s system design refresher:
Top 5 Strategies to Reduce Latency
Load Balancer Realistic Use Cases You May Not Know
Top 4 data sharding algorithms explained
Top 8 C++ Use Cases
Apache Kafka in 100 Seconds
SPONSOR US
New Relic AI monitoring, the industry’s first APM for AI, now generally available (Sponsored)
New Relic AI monitoring provides unprecedented visibility and insights to engineers and developers who are modernizing their tech stacks. With New Relic AI monitoring, engineering teams can monitor, alert, debug, and root-cause AI-powered applications.
Top 5 Strategies to Reduce Latency
10 years ago, Amazon found that every 100ms of latency cost them 1% in sales.
That’s a staggering $5.7 billion in today’s terms.
For high-scale user-facing systems, high latency is a big loss of revenue. Here are the top strategies to reduce latency:
Database Indexing
Caching
Load Balancing
Content Delivery Network
Async Processing
Data Compression
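As a tiny illustration of the caching strategy from the list above, here is a minimal in-process TTL cache sketch. The names (`ttl_cache`, `slow_lookup`) are illustrative, not from the original; real systems would more likely use a shared cache such as Redis or Memcached.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=60):
    """Cache results in memory; expire entries after ttl_seconds."""
    def decorator(fn):
        store = {}  # key -> (value, expiry timestamp)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[1] > now:
                return hit[0]          # cache hit: skip the slow call
            value = fn(*args)          # cache miss: do the real work
            store[args] = (value, now + ttl_seconds)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=60)
def slow_lookup(user_id):
    global calls
    calls += 1                         # stands in for a slow DB query
    return {"user_id": user_id}

slow_lookup(100)
slow_lookup(100)                       # second call is served from the cache; calls stays at 1
```

The trade-off, as with any cache, is staleness: a lower TTL keeps data fresher at the cost of more cache misses.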
Over to you: What other strategies to reduce latency have you seen?
Load Balancer Realistic Use Cases You May Not Know
Load balancers are inherently dynamic and adaptable, designed to efficiently address multiple purposes and use cases in network traffic and server workload management.
Let's explore some of the use cases:
Failure Handling: Automatically redirects traffic away from malfunctioning elements to maintain continuous service and reduce service interruptions.
Instance Health Checks: Continuously evaluates the functionality of instances, directing incoming requests exclusively to those that are fully operational and efficient.
Platform-Specific Routing: Routes requests from different device types (like mobiles and desktops) to specialized backend systems, providing customized responses based on platform.
SSL Termination: Handles the encryption and decryption of SSL traffic, reducing the processing burden on backend infrastructure.
Cross-Zone Load Balancing: Distributes incoming traffic across various geographic or network zones, increasing the system's resilience and capacity for handling large volumes of requests.
User Stickiness: Maintains user session integrity and tailored user interactions by consistently directing requests from specific users to designated backend servers.
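The first two use cases above (failure handling and health checks) can be sketched in a few lines: a round-robin balancer that simply skips backends currently marked unhealthy. The class and the backend addresses are illustrative assumptions, not part of any real load balancer's API.

```python
import itertools

class RoundRobinBalancer:
    """Minimal load balancer sketch: round-robin over healthy backends only."""
    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)   # failure handling: stop routing here

    def mark_up(self, backend):
        self.healthy.add(backend)       # health check passed again

    def pick(self):
        # One full pass over the rotation is enough to visit every backend once.
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lb.mark_down("10.0.0.2")                # simulate a failed health check
picks = [lb.pick() for _ in range(4)]   # "10.0.0.2" never receives traffic
```

Real load balancers layer the other use cases (SSL termination, stickiness, zone awareness) on top of exactly this kind of healthy-set bookkeeping.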
Over to you: Which of these use cases would you consider adding to your network to enhance system reliability, and why?
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
Top 4 Data Sharding Algorithms Explained
We are dealing with massive amounts of data. Often we need to split data into smaller, more manageable pieces, or “shards”. Here are some of the top data sharding algorithms commonly used:
Range-Based Sharding: This involves partitioning data based on a range of values. For example, customer data can be sharded based on alphabetical order of last names, or transaction data can be sharded based on date ranges.
Hash-Based Sharding: In this method, a hash function is applied to a shard key chosen from the data (like a customer ID or transaction ID). This tends to distribute data more evenly across shards compared to range-based sharding. However, we need to choose a proper hash function to avoid excessive collisions and uneven distribution.
Consistent Hashing: This is an extension of hash-based sharding that reduces the impact of adding or removing shards. It distributes data more evenly and minimizes the amount of data that needs to be relocated when shards are added or removed.
Virtual Bucket Sharding: Data is mapped into virtual buckets, and these buckets are then mapped to physical shards. This two-level mapping allows for more flexible shard management and rebalancing without significant data movement.
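The contrast between the first two algorithms above can be sketched in a few lines. This is an illustrative toy (shard count, bucket letters, and key format are assumptions): range-based sharding buckets last names alphabetically, while hash-based sharding spreads arbitrary keys roughly evenly.

```python
import hashlib

NUM_SHARDS = 4

def hash_shard(key: str) -> int:
    """Hash-based sharding: stable hash of the key, modulo shard count."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def range_shard(last_name: str) -> int:
    """Range-based sharding: A-F, G-M, N-S, T-Z buckets by first letter."""
    first = last_name[0].upper()
    for shard, end in enumerate("FMSZ"):
        if first <= end:
            return shard
    return NUM_SHARDS - 1

counts = [0] * NUM_SHARDS
for i in range(10_000):
    counts[hash_shard(f"customer-{i}")] += 1
# counts ends up close to [2500, 2500, 2500, 2500]: hashing evens out the load,
# whereas range_shard would skew if many customers share common initials.
```

Note that plain `hash(key) % NUM_SHARDS` forces almost every key to move when `NUM_SHARDS` changes; that resharding pain is exactly what consistent hashing and virtual buckets are designed to reduce.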
Top 8 C++ Use Cases
C++ is a highly versatile programming language that is suitable for a wide range of applications.
Embedded Systems: The language's efficiency and fine control over hardware resources make it excellent for embedded systems development.
Game Development: C++ is a staple in the game development industry due to its performance and efficiency.
Operating Systems: C++ provides extensive control over system resources and memory, making it ideal for developing operating systems and low-level system utilities.
Databases: Many high-performance database systems are implemented in C++ to manage memory efficiently and ensure fast execution of queries.
Financial Applications
Web Browsers: C++ is used in the development of web browsers and their components, such as rendering engines.
Networking: C++ is often used for developing network devices and simulation tools.
Scientific Computing: C++ finds extensive use in scientific computing and engineering applications that require high performance and precise control over computational resources.
Over to you - What did we miss?
Apache Kafka in 100 Seconds
This post is written by guest author Sanaz Zakeri, who is a Senior Software Engineer @Uber.
Apache Kafka is a distributed event streaming platform used for building real-time data processing pipelines and streaming applications. It is highly scalable, fault-tolerant, reliable, and can handle large volumes of data.
In order to understand Kafka, we need to define two terms:
Events: a record of the state of something at a specific point in time
Event streams: continuous and unbounded series of events
Kafka can be used as a messaging system in a publish-subscribe model, where producers write event streams and consumers read them. This publish-subscribe model decouples event stream producers from consumers. Kafka can also be used as a log aggregation platform, ingesting and storing logs from multiple sources in a durable and fault-tolerant way.
Kafka Components:
Kafka cluster has multiple key components to provide the distributed infrastructure and reliably capture, store, order and provide event streams to client applications.
Brokers:
At the heart of the Kafka cluster lie the brokers: servers that handle event streams. After events are published by producers, the broker makes the events available to consumers. Brokers bring scalability to Kafka, as a cluster can span multiple brokers across a variety of infrastructure setups to handle large volumes of events. They also bring fault tolerance, since events can be stored and replicated across multiple brokers.
Topics:
A topic is the named channel to which producers publish events. A topic can have zero or more consumers listening to it and processing its events.
Partition:
In a topic, data is organized into partitions, which store ordered streams of events. Each event within a partition is assigned a unique sequential identifier called an offset that represents its position in the partition. Events are appended continually to the partition. A topic can have one or more partitions; having more than one partition enables parallelism, as more consumers can read from the topic.
Partitions belonging to a topic can be distributed across separate brokers in the cluster, which brings high data availability and scalability. If one broker fails, the partitions on the remaining brokers can continue to serve data, ensuring fault tolerance.
Producers:
Producers are client applications that write events to Kafka topics as a stream of events.
Consumers:
Consumers are the client applications that subscribe to topics and process or store the events coming to the specific topic. Consumers read events in the order they were received within each partition.
Applications which require real time processing of data will have multiple consumers in a consumer group which can read from partitions on the subscribed topic.
Consumer Groups:
A consumer group organizes consumers that are reading a stream of events from one or more topics. Consumer groups enable parallel processing of events: each consumer in the group can read from one partition, which balances load across the client application. This brings not only parallel processing but also fault tolerance, since if a consumer in the group fails, its partitions can be reassigned to another group member.
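The core ideas above (partitions as append-only logs, per-partition offsets, keyed routing) can be modeled in a few lines of plain Python. This is a toy sketch for intuition only, not the Kafka client API; `MiniTopic` and all names in it are invented for illustration.

```python
class MiniTopic:
    """Toy model of a Kafka topic: each partition is an append-only log,
    and each event gets a sequential offset within its partition."""
    def __init__(self, num_partitions=2):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, event):
        # Keyed events: the same key always lands in the same partition,
        # which is how Kafka preserves per-key ordering.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(event)
        return p, len(self.partitions[p]) - 1   # (partition, offset)

    def consume(self, partition, offset):
        # A consumer tracks its own offset and reads forward from it.
        return self.partitions[partition][offset:]

topic = MiniTopic(num_partitions=2)
p, off0 = topic.produce("user-42", "login")
_, off1 = topic.produce("user-42", "checkout")
# off0 == 0 and off1 == 1: offsets are sequential within the partition,
# and consume(p, 0) returns ["login", "checkout"] in produced order.
```

What the toy leaves out is most of what makes Kafka useful in production: replication across brokers, durable storage, and consumer-group offset management.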
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 4 May 2024 -
The CFO conundrum
Plus, CEOs’ 2024 priorities The job of the senior finance executive isn’t what it used to be. Traditional duties—including budgeting, planning, and risk mitigation—remain crucial components of the role. But CFOs now also serve as advisers to CEOs on organizational priorities and strategy, performance challengers, innovation champions, leaders of major investments and transactions, and convenors of cross-enterprise initiatives. And in today’s volatile business environment, CFOs must evaluate the potential impact of climate risks, geopolitical tensions, macroeconomic disruptions, and technological advances.
In our first featured story, McKinsey’s Ankur Agrawal, Christian Grube, Karolina Sauer-Sidor, and Andy West interview eight former CFOs about their finance careers and outline five priorities that CFO hopefuls should consider embracing. Our second story explores how CFOs can raise their games above functional expertise to achieve real strategic impact. Other highlights include the following topics:

CEO priorities: Where to focus as the year unfolds
Leaders today confront a raft of complexities. Here’s what will matter most as 2024 evolves—and how CEOs can reckon with ongoing disruption successfully.
Drive performance

The AI revolution will be ‘virtualized’
A tsunami of digital innovation is hitting product development. Will your organization surf that wave or be overwhelmed by it?
Follow the white rabbit

Increasing your return on talent: The moves and metrics that matter
Five actions can transform an organization’s talent system, establishing a culture of performance while boosting employee experience.
Read more

Steady progress in approaching the quantum advantage
Quantum technology could create value worth trillions of dollars within the next decade. The third annual Quantum Technology Monitor synthesizes the latest opportunities in this burgeoning field.
Discover the latest trends

Space: The $1.8 trillion opportunity for global economic growth
As the space economy expands, it could create value for multiple industries and solve many of the world’s most pressing challenges.
Reach for the stars

The critical role of commodity trading in times of uncertainty
As increased commodity trading value pools attract new competition, successful players will differentiate by managing illiquid risks and embracing data-driven trading models.
See the forecast

Explore

McKinsey Explainers
Find direct answers to complex questions, backed by McKinsey’s expert insights.
Learn more

McKinsey Themes
Browse our essential reading on the topics that matter.
Get up to speed

McKinsey on Books
Explore this month’s best-selling business books prepared exclusively for McKinsey Publishing by Circana.
See the lists

McKinsey Chart of the Day
See our daily chart that helps explain a changing world—as we strive for sustainable, inclusive growth.
Dive in

McKinsey Classics
Research suggests that the secret to developing effective leaders is to encourage four types of behavior. Read our 2015 classic “Decoding leadership: What really matters” to learn more.
Rewind

The Daily Read
The Daily Read newsletter highlights an article a day, picked by our editors.
Subscribe now

—Edited by Eleni Kostopoulos, managing editor, New York
Share these insights
You received this email because you are a registered member of our Monthly Highlights newsletter.
by "McKinsey Highlights" <publishing@email.mckinsey.com> - 11:15 - 4 May 2024 -
Route Optimization Software - Automate the planning and scheduling of your fleets.
Optimize Routes for Increased Efficiency, Reduced Costs, and Improved Customer Satisfaction.
Catch a glimpse of what our Route Optimisation has to offer
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:00 - 3 May 2024 -
Why do bad leaders rise to the top?
The Shortlist
Four new insights Curated by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
As the tech buzz becomes ever louder, executives are turning back to basics. In the future, many workers will likely be using AI to help them do the technical parts of their jobs, freeing people up for more collaborative work and diminishing the relative importance of technical skill. Interpersonal skills, therefore, may become more valuable to executives than ever. This edition of the CEO Shortlist points to new research on why we need more emotionally intelligent leaders and how we can pick them. We also double-click on deep-cut AI concepts, transforming a centuries-old bank, and more. We hope you enjoy the read.
—Liz and Homayoun
Hitting your stride. The early years of a CFO’s tenure are often about scoping out the challenge and pulling together the core team. Midtenure is when the best finance leaders get bold. Our interviews with eight CFOs shed light on how leaders can rise to the occasion.
Reinvent the finance function—and yourself—with “Faster, smarter, bolder: How midtenure CFOs shift into a higher gear,” by Ankur Agrawal, Cristina Catania, Christian Grube, and John Kelleher.

The never-ending story. Business transformation is more than a one-off; it’s an attempt to build a new company on the fly. Successful transformations set their sights far beyond next year’s financial targets, instead thinking decades—or even centuries—into the future. Frankly, most such transformations don’t work. But those that do, as in the case of 240-year-old US bank BNY Mellon, offer inspiration.
What would American founding father Andrew Mellon do? We don’t know, exactly, but we know what leaders of today’s BNY Mellon did. Hear more from Roman Regelman, former senior executive at BNY Mellon, in “Driving long-term business transformation,” by Kevin Carmody.

Your prompt reply is appreciated. Today’s business leaders are no strangers to AI (especially if they’re staying up to date with our publishing). But quick: what does prompt engineering mean for your existing employees? How is tokenization in payments different from tokenization in large language models? Or what needs to happen for AI to be capable of empathy?
Got AI? Then you’ve likely got questions as well. Fill in the gaps in your understanding with “What’s the future of AI?,” a new package of McKinsey Explainers with insights from Michael Chui, Alex Singla, Kate Smaje, Lareina Yee, and many more.

Hot air rises to the top. According to new research, gender—before abilities, competencies, interests, and personalities—is one of the strongest predictors of whether someone reaches a leadership role. And that’s not all. The men who benefit from antimeritocratic discrimination, says author and psychologist Dr. Tomas Chamorro-Premuzic, often exhibit traits that hamper organizational progress.
Listen to the good doctor weigh in on “Why so many bad bosses still rise to the top,” the latest episode of the McKinsey Talks Talent podcast, featuring Chamorro-Premuzic and McKinsey talent leaders Bryan Hancock and Brooke Weddle.

We hope you find these ideas inspiring and helpful. See you next time with four more McKinsey ideas for the CEO and others in the C-suite.
Share these insights
You received this email because you subscribed to The CEO Shortlist newsletter.
by "McKinsey CEO Shortlist" <publishing@email.mckinsey.com> - 04:46 - 3 May 2024 -
How can employees go from ‘meh’ to motivated?
On Point
5 questions to ask workers Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Dissatisfied workers. According to recent McKinsey research, more than half of workers report being relatively dissatisfied with their jobs. That’s a big percentage that strikes at the heart of value creation for organizations that are already facing rising labor costs and declining productivity, McKinsey senior partner Aaron De Smet and coauthors share. Identifying where workers fall along a satisfaction spectrum could help leaders solve the problem.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:05 - 3 May 2024 -
Digital twins, the future of AI, MSMEs, and more: The Daily Read Weekender
Big reads for the weekend Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
May is here and it’s almost the weekend. Take a breather and dive into the week’s highlights on digital twins, the future of AI, MSMEs, and more.
QUOTE OF THE DAY
chart of the day
Ready to unwind?
—Edited by Joyce Yoo, editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
You received this email because you subscribed to our McKinsey Global Institute alert list.
by "McKinsey Daily Read" <publishing@email.mckinsey.com> - 12:27 - 3 May 2024 -
A microscope on small businesses: Spotting opportunities to boost productivity
View the report New from McKinsey Global Institute
by "McKinsey & Company" <publishing@email.mckinsey.com> - 02:43 - 2 May 2024 -
Unlocking the Power of SQL Queries for Improved Performance
Forwarded this email? Subscribe here for more
Latest articles
If you’re not a subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
SQL, or Structured Query Language, is the backbone of modern data management. It enables efficient retrieval, manipulation, and management of data in a Database Management System (DBMS). Each SQL command taps into a complex sequence within a database, building on concepts like the connection pool, query cache, command parser, optimizer, and executor, which we covered in our last issue.
Crafting effective queries is essential. The right SQL can enhance database performance; the wrong one can lead to increased costs and slower responses. In this issue, we focus on strategies such as using the Explain Plan, adding proper indexes, and optimizing commands like COUNT(*) and ORDER BY. We also dive into troubleshooting slow queries.
While MySQL is our primary example, the techniques and strategies discussed are applicable across various database systems. Join us as we refine SQL queries for better performance and cost efficiency.
Explain Plan
In MySQL, the EXPLAIN command, known as EXPLAIN PLAN in systems like Oracle, is a useful tool for analyzing how queries are executed. By adding EXPLAIN before a SELECT statement, MySQL provides information about how it processes the SQL. This output shows the tables involved, operations performed (such as sort, scan, and join), and the indexes used, among other execution details. This tool is particularly useful for optimizing SQL queries, as it helps developers see the query execution plan and identify potential bottlenecks.
When an EXPLAIN statement is executed in MySQL, the database engine simulates the query execution. This simulation generates a detailed report without running the actual query. This report includes several important columns:
id: Identifier for each step in the query execution.
select_type: The type of SELECT operation, like SIMPLE (a basic SELECT without unions or subqueries), SUBQUERY, or UNION.
table: The table involved in a particular part of the query.
type: The join type shows how MySQL joins the tables. Common types include ALL (full table scan), index (index scan), range (index range scan), eq_ref (unique index scan), const/system (constant value optimization).
possible_keys: Potential indexes that might be used.
key: The key (index) chosen by MySQL.
key_len: The length of the chosen key.
ref: Columns or constants used with the key to select rows.
rows: Estimated number of rows MySQL expects to examine when executing the query.
Extra: Additional details, such as the use of temporary tables or filesorts.
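A MySQL server is not always at hand, but the same workflow can be tried with SQLite, which ships with Python and offers the analogous EXPLAIN QUERY PLAN command. This is a sketch under assumptions: the `orders` table and `idx_orders_user_id` index names are illustrative, and SQLite's plan output is a short description string rather than MySQL's column set.

```python
import sqlite3

# Illustrative table mirroring the article's `orders` example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (user_id, total) VALUES (?, ?)",
    [(i % 50, float(i)) for i in range(200)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable step description.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan("SELECT * FROM orders WHERE user_id = 100")
# Without an index, the plan reports a full table scan ("SCAN ...").

conn.execute("CREATE INDEX idx_orders_user_id ON orders(user_id)")
after = plan("SELECT * FROM orders WHERE user_id = 100")
# With the index, it becomes "SEARCH ... USING INDEX idx_orders_user_id ...".
```

The scan-to-search transition is the SQLite counterpart of MySQL's `type` column changing from ALL to ref once a usable index exists.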
Let's explore a practical application of the EXPLAIN command using a database table named orders. Suppose we want to select orders with user_id equal to 100.
SELECT * FROM orders WHERE user_id = 100;
To analyze this query with EXPLAIN, we would use:
EXPLAIN SELECT * FROM orders WHERE user_id = 100;
The output might look like this:
Continue reading this post for free, courtesy of Alex Xu.
A subscription gets you:
- An extra deep dive on Thursdays
- Full archive
- Many expense it with their team’s learning budget
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:38 - 2 May 2024 -
New Relic AI monitoring—the industry’s first APM for AI—now generally available
New Relic
April 2024

New Relic AI monitoring is now generally available
New Relic AI monitoring gives you complete visibility across your entire AI stack, just like application performance monitoring. Effortlessly monitor and manage your AI applications confidently with features like automatic instrumentation, deep trace insights into large language model responses, robust data security, and model comparison.
Read the blog

How BlackLine saved $16 million per year by consolidating tools
Consolidating tools shifted incident culture and created new ways to build code at BlackLine. Here’s how.

Informa runs on New Relic to deliver business value
Informa improves their engineering processes to move beyond performance-based conversations to conversations around how their technology is impacting their business. With observability, Informa is building a culture of continuous improvement.

Useful reads
Custom events are a developer’s best friend
Explore how New Relic custom events can offer deeper insights and enhance operational efficiency for developers.

Upcoming Events
AWS Summits EMEA
We’re thrilled to announce our participation in the upcoming AWS Summits happening in multiple cities across Europe and beyond! Here’s where you can catch us.
- Berlin (Booth PO4)
- Stockholm (Booth S6)
- Dubai (Booth G6)
- Madrid
We'll be showcasing our latest innovations, hosting engaging demos, and offering valuable insights into how New Relic can empower your cloud journey on AWS.
Don't miss this opportunity to connect with us and explore the future of observability and performance monitoring in the cloud!
Register here.
New Relic University Online Workshops
Join our upcoming New Relic online live training workshops. These 90-minute trainer-led workshops with hands-on labs will help you level up your observability skills.
- Maximising performance with integrated APM and infrastructure monitoring
- 30 May at 10am BST / 11am CEST
- Register here.
- Maximising observability with New Relic logs
- 27 June at 10am BST / 11am CEST
- Register here.
New Relic End-of-Life Updates
Legacy synthetics runtimes and CPM (October 22)
- New Relic will end-of-life (EOL) our legacy Chrome 72 (and older) and Node 10 (and older) synthetics runtimes and the containerized private minion (CPM).
- Customers will be unable to create new monitors using legacy runtimes on public or private locations as of June 30.
- All customers must be on the new runtime by October 22 in order to prevent synthetic monitoring degradation from occurring.
- See here for more information.
Support for PromQL (July 15)
- We’re standardizing our querying experiences around NRQL by removing PromQL-styled query support.
- You can still access Prometheus metrics and events, but will need to adopt NRQL as the method for querying Prometheus data.
- If you’re currently using PromQL-styled queries to query your data, you’ll need to adopt NRQL.
- Prometheus metrics data will still be accessible in New Relic.
- For more information, see our documentation.
Need help? Let's get in touch.
This email is sent from an account used for sending messages only. Please do not reply to this email to contact us—we will not get your response.
This email was sent to info@learn.odoo.com. Update your email preferences.
For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamaketing@newrelic.com> - 05:03 - 2 May 2024 -
What’s the state of European grocery retail in 2024?
On Point
8 trends in grocery retail Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Lower growth. European grocers faced a challenging year in 2023. Consumers tightened their belts amid rising inflation, leading to a drop in volume and significant downtrading. As a result, industry growth was significantly lower than food price inflation. In Europe, food price inflation averaged 12.8% in 2023, while grocery sales grew at a rate of only 8.6%, McKinsey senior partner Franck Laizet and coauthors share. Still, even though macroeconomic uncertainty will likely persist in 2024, McKinsey data shows some signs of hope for 2024.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:30 - 2 May 2024 -
A new tool to drive software performance
Re:think
Tracking software development productivity FRESH TAKES ON BIG IDEAS
ON SOFTWARE EXCELLENCE
Can software developer productivity really be measured?
Chandra Gnanasambandam
Companies have long had a difficult time tracking the experience and productivity of their software engineering teams and figuring out improvements. Part of the problem is that writing software code is an inherently creative and collaborative process. Establishing a clear link between the various inputs and outputs (rather than business outcomes) is also challenging.
Yet learning how to measure the maturity of practices that contribute to developer experience and productivity is more critical than ever. Virtually every company today wants to become a software company to one extent or another. There are currently about 25 million to 30 million developers worldwide, a number expected to reach close to 50 million by the end of the decade. Low-code/no-code platforms and the emergence of generative AI (gen AI) are likely to greatly expand the pool of folks who can create applications and build digital solutions.
Today, there are two main measurement systems that the industry uses to provide insight into developer productivity. DORA (short for “DevOps research and assessment”) metrics focus on outcomes, and SPACE (short for “satisfaction/well-being, performance, activity, communication/collaboration, and efficiency/flow”) takes a multidimensional view of productivity. Both systems provide useful insights.
We believe an additional set of metrics can provide deeper insight into the root causes and identify bottlenecks that slow developers down. This system is intended to help create the best environment and experience to improve overall performance and foster innovation. Critically, it is not intended for performance management or oversight of developers but rather to improve their day-to-day experience and flow; indeed, all data is anonymized and not attributable to a specific individual. The approach involves a set of metrics that analyze work at the system, group, and individual level, focusing on a few key areas of the development process.
The first area is the divide in software development between the inner loop, which encompasses the core work that developers do when they are in the “flow,” and the outer loop, which focuses on all other tasks required to ship a product (integration testing, dependency management, setting up environments, et cetera). Outer-loop activity has real value, particularly in activities early in the development life cycle, such as technical discovery and design work or ensuring code meets the bar on quality, security, and compliance. However, our experience has shown that too much time in the outer loop can be a symptom of underlying issues that affect productivity—including manual activities to release code, holding patterns where developers are waiting on another colleague or team, and multiple meetings to manage dependencies. Leading tech companies aim for developers to spend close to 70 percent of their time on inner-loop activity. By tracking how much time developers are spending on the two loops, companies can optimize the use of in-demand talent.

“Tracking developer productivity in a more holistic way can shorten the time it takes to launch a product by 30 to 40 percent.”
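Measuring the split between the two loops amounts to classifying logged activities and computing the inner-loop share. The sketch below assumes a hypothetical weekly time log and a made-up activity taxonomy; the article does not prescribe either, so both are illustrative.

```python
# Illustrative activity taxonomy (an assumption, not an official classification).
INNER_LOOP = {"coding", "unit_testing", "local_debugging"}
OUTER_LOOP = {"integration_testing", "env_setup", "dependency_mgmt", "release_meetings"}

def inner_loop_share(time_log):
    """Return the fraction of logged hours spent on inner-loop work."""
    inner = sum(hours for activity, hours in time_log.items() if activity in INNER_LOOP)
    total = sum(time_log.values())
    return inner / total if total else 0.0

# Hypothetical week of logged hours for one team
week = {
    "coding": 18, "unit_testing": 5, "local_debugging": 4,
    "integration_testing": 6, "env_setup": 3, "dependency_mgmt": 2, "release_meetings": 2,
}
share = inner_loop_share(week)  # 27 of 40 logged hours
print(f"inner-loop share: {share:.1%} (leading tech companies target ~70%)")
```

A team consistently well below the roughly 70 percent benchmark would then look for the outer-loop culprits the article names: manual release steps, waiting on other teams, and dependency-management meetings.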
The next step is applying the Developer Velocity Index, which collects insights directly from developers to identify the factors that most affect developer experience and productivity. While this tool has its limitations, it can help companies gain qualitative insight into their practices, tools, culture, and talent management and surface and correct any potential weaknesses they find.
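The Developer Velocity Index methodology itself is proprietary, but the general pattern it relies on, aggregating per-dimension survey responses into a normalized index and surfacing the weakest dimension, can be sketched. The dimension names, the 1-to-5 scale, and the normalization below are all assumptions for illustration.

```python
# Hypothetical survey responses on a 1-5 scale, grouped by practice dimension.
responses = {
    "tools": [4, 5, 3, 4],
    "culture": [3, 3, 4, 2],
    "talent_management": [2, 3, 2, 3],
    "practices": [4, 4, 5, 4],
}

def velocity_index(resp):
    """Aggregate survey scores into a 0-100 index and flag the weakest dimension."""
    dim_scores = {dim: sum(scores) / len(scores) for dim, scores in resp.items()}
    overall = sum(dim_scores.values()) / len(dim_scores)
    index = (overall - 1) / 4 * 100  # rescale the 1-5 mean onto 0-100
    weakest = min(dim_scores, key=dim_scores.get)
    return index, dim_scores, weakest
```

The value of this kind of qualitative instrument is less the headline number than the per-dimension breakdown, which points a company at the specific practice (here, hypothetically, talent management) to correct.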
The third thing the system does is conduct a broad-based contribution analysis that examines how teams are functioning collectively. Working with backlog management tools such as Jira, it plots a contribution distribution curve and identifies opportunities for improvement in the way that teams are set up or operating, such as increasing automation, developing individual skills, and rethinking role distribution. One company, for example, discovered its newest hires were having difficulty becoming productive and responded by reassessing its onboarding, documentation, and mentorship programs.
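A contribution distribution curve of this kind can be built from per-person delivery totals exported from a backlog tool such as Jira. The sketch below uses invented names and story-point totals; the exact metric a company plots (story points, tickets closed, reviews) is a choice, not something the article specifies.

```python
# Hypothetical story-point totals per engineer, e.g. exported from Jira.
contributions = {"ana": 34, "ben": 28, "chi": 21, "dev": 9, "eli": 5, "fay": 3}

def contribution_curve(points):
    """Return cumulative contribution shares, largest contributor first.

    A steep curve (a few people delivering most of the points) can flag
    onboarding gaps, skill gaps, or role-distribution issues to investigate.
    """
    total = sum(points.values())
    shares = sorted((v / total for v in points.values()), reverse=True)
    curve, running = [], 0.0
    for share in shares:
        running += share
        curve.append(running)
    return curve

curve = contribution_curve(contributions)
```

In this made-up sample the top two of six engineers account for over 60 percent of delivery, the kind of skew that prompted the onboarding and mentorship review described above.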
More than 50 companies across sectors have already implemented this new approach. Early findings are encouraging, including a 30 to 40 percent reduction in the time it takes to launch a product, a 15 to 25 percent improvement in product quality, a 20 percent jump in developer experience scores, and a 60 percent improvement in customer satisfaction ratings. We have found that developers are typically happy when companies put in place a holistic measurement system like this, because it highlights issues they have dealt with and been frustrated by. This approach has also had the effect of strengthening a culture of psychological safety, where all team members feel free to take risks and share ideas without fear of negative repercussions or personal judgment. McKinsey research on Developer Velocity has previously shown that psychological safety can be a leading driver of developer experience and innovation.
As effective as these metrics have proven to be so far, there are some pitfalls to avoid in how they are applied. In addition to not employing the metrics for any kind of performance management, they should not be used to attempt to create “targets” for teams, since they are too blunt an instrument to optimize for (and can incentivize the wrong behaviors). Nor should they be leveraged to compare teams, as each has its distinct way of working. Lastly, for any engineering metric, absolute numbers usually do not help, and it’s better to look at trends.
Still, this type of holistic approach could be even more important in the coming years. There is emerging evidence that gen AI can help boost productivity for software development teams that have already started to make improvements in this area. While results vary greatly depending on the specific task and developers’ years in the field, pilots show that gen AI can help further increase developer productivity by as much as 15 to 25 percent. In particular, complex activities such as code refactoring (migrating or updating legacy code) and code documentation (maintaining detailed records and explanations of the changes made to existing code) enjoy sizable boosts from gen AI. That research has also shown that usage of gen AI can increase overall developer happiness and satisfaction. And even as gen AI’s impact on the developer experience and software innovation grows, one thing that isn’t likely to change is that a happier developer (or any worker) tends to be a more productive developer.
ABOUT THIS AUTHOR
Chandra Gnanasambandam is a senior partner in McKinsey’s Bay Area office.
UP NEXT
John Murnane on canal cargo
Simultaneous slowdowns at the Panama and Suez Canals created supply chain headaches. But some companies might transform a logistical challenge into a strategic advantage.
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to our McKinsey Quarterly alert list.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 01:07 - 1 May 2024 -
Join us for a must-attend webinar on global hiring strategies!
Dive into global hiring with experts from Deloitte, Bytez, and Remote. Secure your spot now!
Hi MD,
Are you ready to dive deep into the world of global hiring and compliance? Don’t miss our upcoming webinar...
Complexities in Global Hiring - Winning Strategies: How top businesses master global hiring and compliance.
Tuesday, May 14th at 3pm UTC | 5pm CEST | 11am EST.
This engaging session will feature insights from Adam Scheinman of Deloitte Tax and Holly Peck of Bytez, alongside our very own Barbara Matthews. Together, they’ll share invaluable guidance on navigating international employment laws, recruiting strategies, and managing payroll across borders.
You’ll gain expert advice on:
- Navigating regulatory environments
- Effective recruitment strategies
- Managing international teams and payroll
We can’t wait to see you there!
Remote, the HR platform for global businesses
Remote makes running global teams simple.
Hire, manage, and pay anyone, anywhere.
You received this email because you are subscribed to News & Offers from Remote Europe Holding B.V
Update your email preferences to choose the types of emails you receive.
Unsubscribe from all future emails
Remote Europe Holding B.V
Copyright © 2024 All rights reserved.
Kraijenhoffstraat 137A 1018RG Amsterdam The Netherlands
by "Remote" <hello@remote-comms.com> - 07:01 - 1 May 2024