Understanding micromarkets can help European auto leaders compete
On Point
Local trends and insights
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:26 - 19 Mar 2024 -
Track and Manage Waste Collecting Fleets with Advanced Waste Management Software - SmartWaste.
Grow sustainable waste collection businesses with SmartWaste Software. Plan optimum waste collection and disposal routes to maximize productivity.
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:01 - 18 Mar 2024 -
What is economic inclusion?
New from McKinsey & Company
What is economic inclusion?
How do we measure economic inclusion? Who is economically excluded? The answers to these questions—and others—are available in our McKinsey Explainers series. Have a question you want answered? Reach out to us at ask_mckinsey@mckinsey.com.
Go beyond
From poverty to empowerment: Raising the bar for sustainable and inclusive growth
Our future lives and livelihoods: Sustainable and inclusive and growing
by "McKinsey & Company" <publishing@email.mckinsey.com> - 02:03 - 18 Mar 2024 -
More than a feeling: A leader’s guide to design thinking
Design for business value
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
When business leaders look to generate revenue growth, design may not be the first thing that comes to mind. But design is no longer mostly about how something looks—increasingly, it’s becoming a dynamic problem-solving approach that can have great impact and increase shareholder value. Our research has found that design-led companies outperform their competitors by a considerable margin. “Design thinking is a methodology that we use to solve complex problems, and it’s a way of using systemic reasoning and intuition to explore ideal future states,” notes McKinsey partner Jennifer Kilian. However, the methodology may not be easy to implement, as it involves making fundamental changes to organizational culture and practices. This week, we discuss some starting points.
Good design can allow an organization to succeed on three fronts—growth, margin, and sustainability—observe McKinsey experts in an episode of the McKinsey on Consumer and Retail podcast. This “triple win” may be within reach if companies pay closer attention to the design of their products and packaging. Thanks to new digital technologies that combine qualitative and quantitative user data, “we now have much more transparency into what consumers want and expect,” says McKinsey senior partner Jennifer Schmidt. These insights can inform and enhance product and packaging design. “You can bring together people from design, R&D, marketing, procurement, finance,” says Schmidt. “It can be a team sport now; all the different functions can come together and use that information to deliver the triple wins.” By changing its packaging, one company was “able to come up with breakthroughs in all three of the areas we’ve talked about,” she adds. “That’s one of my favorite stories.”
That’s the number of steps it takes to implement “skinny design,” which involves fitting more product into smaller packages to maximize shelf space, reduce transportation costs, and minimize stockouts. “The good news is that companies can often implement the elements of skinny design quickly and with little investment,” note McKinsey’s Dave Fedewa, Daniel Swan, Warren Teichner, and Bill Wiseman. A key step is to build a strong technological foundation: one company used a digital teardown database to compare its packaging volume with that of its competitors. The tool revealed a cube optimization opportunity that could reduce the company’s logistics costs and carbon footprint.
That’s McKinsey partner Benedict Sheppard on the importance of implementing design thinking in an organization. Companies that perform the best in design achieve higher average revenue growth and shareholder return than their peers, McKinsey research shows. But Sheppard cautions against launching company-wide design transformations that may not show tangible impact. “My advice would be to choose one important upcoming product or service design to pilot improvements on,” he suggests. And to excel at design, it’s essential to apply it across four critical dimensions—“being good at one or two isn’t enough,” in Sheppard’s view. For example, top-performing companies view design as a continuous process of iteration and as everyone’s responsibility, not as a siloed function.
PepsiCo chief design officer Mauro Porcini believes that the wants and needs of human beings should be at the center of all design and innovation. “It’s the biggest challenge right now because we’re in a moment of transition, and many companies are not yet understanding how important it is to focus on this,” he says in a discussion with McKinsey. What he calls the “principles of meaningful design” may spring from a human-centric mindset. “They’re essentially about creating products that are relevant from a functional standpoint, an emotional standpoint, and a semiotic standpoint,” he says. Products also need to be environmentally and financially sustainable; Porcini emphasizes that they should be financially successful enough to reach as many people as possible. “You want billions of people to enjoy your product instead of creating something extraordinary that just four or five people in the world can enjoy.”
Design thinking may have some drawbacks. Its critics allege that it tends to favor ideas over execution, can be rigid or formulaic, and may not always be the solution to long-standing business problems. And overengineered design solutions can put off consumers. But companies can capture the full business value of design by embracing user-centric strategies throughout the organization, empowering senior designers to work closely with C-suite leaders, and using the right balance of user insights and quantitative data. Our research shows that the best design teams are highly integrated with the business, working across functions and using advanced collaboration tools for both physical and digital design.
Lead by design.
– Edited by Rama Ramaswami, senior editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:34 - 18 Mar 2024 -
How can TMT leaders use generative AI to create value?
On Point
100+ applications of generative AI
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Gen AI in TMT. How big is the opportunity presented by gen AI? Our research and hands-on experience have allowed us to identify more than 100 gen AI use cases in technology, media, and telecommunications (TMT) across seven business domains, explain Alex Singla, global leader of QuantumBlack, AI by McKinsey; McKinsey Digital leader Benjamim Vieira; and coauthors. McKinsey research suggests that gen AI could unleash between $380 billion and $690 billion in value in TMT.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:06 - 17 Mar 2024 -
The week in charts
The Week in Charts
Improving health in urban areas, rising sportswear sales, and more
Our McKinsey Chart of the Day series offers a daily chart that helps explain a changing world—as we strive toward sustainable and inclusive growth. In case you missed them, this week’s graphics explored improving health in urban areas, rising sportswear sales, ethnocultural minorities in Europe, AI in medtech, and talent demand in the oil and gas industry.
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:56 - 16 Mar 2024 -
Have you digitized your risk function yet?
Transform your risk management
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Digitizing the risk function for the modern era
Pulling off a digital transformation is hard enough as it is, with companies perennially struggling to capture value on their investments. Digitizing a company’s risk function is even harder. As this classic 2017 article observes, evolving regulatory requirements can waylay many digital efforts, and applying the standard test-and-learn approach of traditional digital transformations to risk processes can create an unacceptable level of, well, risk. Yet then, as now, the promise of digitizing the risk function to boost efficiency and improve decision making remains alluringly high—if leaders can craft a fit-for-purpose approach that minimizes costly glitches and addresses regulatory expectations.
Where to begin? Companies looking to build an effective digital risk program can start by taking a lesson from banks, which have found value in targeting three specific areas: credit risk, stress testing, and operational risk and compliance. Digitizing in these three areas can help banks accelerate decision making for credit delivery, automate fragmented stress-testing processes, and help streamline alert generation and case investigations.
With today’s proliferation of new technologies such as generative AI, companies’ digital efforts will only increase, from capability building to customer service to operations—and certainly to risk. To stay ahead of the curve, read Saptarshi Ganguly, Holger Harreis, Ben Margolis, and Kayvaun Rowshankish’s 2017 classic, “Digital risk: Transforming risk management for the 2020s.”
Create a successful digital risk agenda
As gen AI advances, regulators—and risk functions—rush to keep pace
Lessons from banking to improve risk and compliance and speed up digital transformations
by "McKinsey Classics" <publishing@email.mckinsey.com> - 12:34 - 16 Mar 2024 -
EP103: Typical AWS Network Architecture in One Diagram
This week’s system design refresher:
Reverse Proxy vs API Gateway vs Load Balancer (YouTube video)
Typical AWS Network Architecture in one diagram
15 Open-Source Projects That Changed the World
Top 6 Database Models
How do we detect node failures in distributed systems?
SPONSOR US
Register for POST/CON 24 | April 30 - May 1 (Sponsored)
POST/CON 24 will be an unforgettable experience! Connect with peers who are as enthusiastic about APIs as you are, all as you come together to:
Learn: Get first-hand knowledge from Postman experts and global tech leaders.
Level up: Attend 8-hour workshops to leave with new skills (and badges!)
Become the first to know: See the latest API platform innovations, including advancements in AI.
Help shape the future of Postman: Give direct feedback to the Postman leadership team.
Network with fellow API practitioners and global tech leaders — including speakers from OpenAI, Heroku, and more.
Have fun: Enjoy cocktails, dinner, 360° views of the city, and a live performance from multi-platinum recording artist T-Pain!
So grab your Early Adopter ticket for 30% off now while you can, because you don’t want to miss this!
Reverse Proxy vs API Gateway vs Load Balancer
One picture is worth a thousand words - Typical AWS Network Architecture in one diagram
Amazon Web Services (AWS) offers a comprehensive suite of networking services designed to provide businesses with secure, scalable, and highly available network infrastructure. AWS's network architecture components enable seamless connectivity between the internet, remote workers, corporate data centers, and within the AWS ecosystem itself.
VPC (Virtual Private Cloud)
At the heart of AWS's networking services is the Amazon VPC, which allows users to provision a logically isolated section of the AWS Cloud. Within this isolated environment, users can launch AWS resources in a virtual network that they define.
AZ (Availability Zone)
An AZ in AWS refers to one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region.
Now let’s go through the network connectivity one by one:
Connect to the Internet - Internet Gateway (IGW)
An IGW serves as the doorway between your AWS VPC and the internet, facilitating bidirectional communication.
Remote Workers - Client VPN Endpoint
AWS offers a Client VPN service that enables remote workers to access AWS resources or an on-premises network securely over the internet. It provides a secure and easy-to-manage VPN solution.
Corporate Data Center Connection - Virtual Gateway (VGW)
A VGW is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection between your network and your VPC.
VPC Peering
VPC Peering allows you to connect two VPCs, enabling you to route traffic between them using private IPv4 or IPv6 addresses.
Transit Gateway
AWS Transit Gateway acts as a network transit hub, enabling you to connect multiple VPCs, VPNs, and AWS accounts together.
VPC Endpoint (Gateway)
A VPC Endpoint (Gateway type) allows you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink, without requiring an internet gateway or VPN connection.
VPC Endpoint (Interface)
An Interface VPC Endpoint (powered by AWS PrivateLink) enables private connections between your VPC and supported AWS services, other VPCs, or AWS Marketplace services, without requiring an IGW, VGW, or NAT device.
SaaS Private Link Connection
AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, ideal for accessing SaaS applications securely.
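To make these building blocks more concrete, here is a minimal sketch in Python using boto3 that provisions a VPC, attaches an internet gateway, and creates one public subnet. The CIDR blocks, region, and availability zone are illustrative assumptions, not values from the diagram.

```python
# Minimal sketch: a VPC with an internet gateway and one public subnet.
# CIDR blocks, region, and AZ are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the logically isolated network (VPC).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. Create an internet gateway and attach it so the VPC can reach the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 3. Carve out a subnet in a single availability zone.
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# 4. Route 0.0.0.0/0 through the internet gateway to make the subnet public.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```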
15 Open-Source Projects That Changed the World
To come up with the list, we tried to look at the overall impact these projects have created on the industry and related technologies. Also, we’ve focused on projects that have led to a big change in the day-to-day lives of many software developers across the world.
Web Development
Node.js: The cross-platform server-side JavaScript runtime that brought JS to server-side development
React: The library that became the foundation of many web development frameworks.
Apache HTTP Server: The highly versatile web server loved by enterprises and startups alike. Served as inspiration for many other web servers over the years.
Data Management
PostgreSQL: An open-source relational database management system that provided a high-quality alternative to costly systems
Redis: The super versatile data store that can be used as a cache, message broker, and even general-purpose storage
Elasticsearch: A scalable solution to search, analyze, and visualize large volumes of data
Developer Tools
Git: Free and open-source version control tool that allows developer collaboration across the globe.
VSCode: One of the most popular source code editors in the world
Jupyter Notebook: The web application that lets developers share live code, equations, visualizations and narrative text.
Machine Learning & Big Data
TensorFlow: The leading choice to leverage machine learning techniques
Apache Spark: Standard tool for big data processing and analytics platforms
Kafka: Standard platform for building real-time data pipelines and applications.
DevOps & Containerization
Docker: The open source solution that allows developers to package and deploy applications in a consistent and portable way.
Kubernetes: The heart of Cloud-Native architecture and a platform to manage multiple containers
Linux: The OS that democratized the world of software development.
Over to you: Do you agree with the list? What did we miss?
Top 6 Database Models
The diagram below shows the top 6 database models.
Flat Model
The flat data model is one of the simplest forms of database models. It organizes data into a single table where each row represents a record and each column represents an attribute. This model is similar to a spreadsheet and is straightforward to understand and implement. However, it lacks the ability to efficiently handle complex relationships between data entities.
Hierarchical Model
The hierarchical data model organizes data into a tree-like structure, where each record has a single parent but can have multiple children. This model is efficient for scenarios with a clear "parent-child" relationship among data entities. However, it struggles with many-to-many relationships and can become complex and rigid.
Relational Model
Introduced by E.F. Codd in 1970, the relational model represents data in tables (relations), consisting of rows (tuples) and columns (attributes). It supports data integrity and avoids redundancy through the use of keys and normalization. The relational model's strength lies in its flexibility and the simplicity of its query language, SQL (Structured Query Language), making it the most widely used data model for traditional database systems. It efficiently handles many-to-many relationships and supports complex queries and transactions.
Star Schema
The star schema is a specialized data model used in data warehousing for OLAP (Online Analytical Processing) applications. It features a central fact table that contains measurable, quantitative data, surrounded by dimension tables that contain descriptive attributes related to the fact data. This model is optimized for query performance in analytical applications, offering simplicity and fast data retrieval by minimizing the number of joins needed for queries.
Snowflake Model
The snowflake model is a variation of the star schema where the dimension tables are normalized into multiple related tables, reducing redundancy and improving data integrity. This results in a structure that resembles a snowflake. While the snowflake model can lead to more complex queries due to the increased number of joins, it offers benefits in terms of storage efficiency and can be advantageous in scenarios where dimension tables are large or frequently updated.
Network Model
The network data model allows each record to have multiple parents and children, forming a graph structure that can represent complex relationships between data entities. This model overcomes some of the hierarchical model's limitations by efficiently handling many-to-many relationships.
Over to you: Which database model have you used?
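As a small illustration of the star schema described above, here is a sketch in Python using the standard sqlite3 module. The table and column names (sales_fact, dim_product, dim_date) are hypothetical, chosen only to show one fact table surrounded by dimension tables.

```python
# Sketch of a star schema: one fact table referencing two dimension tables.
# Table and column names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day INTEGER, month INTEGER, year INTEGER);
CREATE TABLE sales_fact  (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    units_sold INTEGER,
    revenue    REAL
);
""")

# A typical analytical query touches the fact table plus one join per dimension.
rows = conn.execute("""
    SELECT p.category, d.year, SUM(f.revenue)
    FROM sales_fact f
    JOIN dim_product p ON f.product_id = p.product_id
    JOIN dim_date    d ON f.date_id    = d.date_id
    GROUP BY p.category, d.year
""").fetchall()
print(rows)
```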
How do we detect node failures in distributed systems?
The diagram below shows the top 6 heartbeat detection mechanisms.
Heartbeat mechanisms are crucial in distributed systems for monitoring the health and status of various components. Here are several types of heartbeat detection mechanisms commonly used in distributed systems:
Push-Based Heartbeat
The most basic form of heartbeat involves a periodic signal sent from one node to another or to a monitoring service. If the heartbeat signals stop arriving within a specified interval, the system assumes that the node has failed. This is simple to implement, but network congestion can lead to false positives.
Pull-Based Heartbeat
Instead of nodes sending heartbeats actively, a central monitor might periodically "pull" status information from nodes. It reduces network traffic but might increase latency in failure detection.
Heartbeat with Health Check
This includes diagnostic information about the node's health in the heartbeat signal, such as CPU usage, memory usage, or application-specific metrics. It provides more detailed information about the node, allowing for more nuanced decision-making. However, it increases complexity and the potential for larger network overhead.
Heartbeat with Timestamps
Heartbeats that include timestamps can help the receiving node or service determine not just whether a node is alive, but also whether network delays are affecting communication.
Heartbeat with Acknowledgement
The receiver of the heartbeat message must send back an acknowledgment in this model. This ensures not only that the sender is alive but also that the network path between the sender and receiver is functional.
Heartbeat with Quorum
In some distributed systems, especially those involving consensus protocols like Paxos or Raft, the concept of a quorum (a majority of nodes) is used. Heartbeats might be used to establish or maintain a quorum, ensuring that a sufficient number of nodes are operational for the system to make decisions. This brings complexity in implementation and managing quorum changes as nodes join or leave the system.
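Below is a minimal sketch of the push-based variant in Python. The interval and timeout values are illustrative assumptions, and a real system would push heartbeats over the network rather than within one process.

```python
# Sketch of push-based heartbeat detection: nodes push a timestamp periodically;
# the monitor flags any node whose last heartbeat is older than the timeout.
# Interval and timeout values are illustrative assumptions.
import threading
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats
FAILURE_TIMEOUT = 3.0      # seconds of silence before a node is considered failed

last_seen = {}             # node_id -> timestamp of last heartbeat
lock = threading.Lock()

def send_heartbeats(node_id, stop_event):
    """Runs on each node: periodically push a heartbeat to the monitor."""
    while not stop_event.is_set():
        with lock:
            last_seen[node_id] = time.monotonic()
        time.sleep(HEARTBEAT_INTERVAL)

def detect_failures():
    """Runs on the monitor: return nodes whose heartbeats have gone silent."""
    now = time.monotonic()
    with lock:
        return [node for node, ts in last_seen.items() if now - ts > FAILURE_TIMEOUT]

if __name__ == "__main__":
    stop = threading.Event()
    threading.Thread(target=send_heartbeats, args=("node-1", stop), daemon=True).start()
    time.sleep(2)
    print("suspected failures:", detect_failures())  # expected: []
    stop.set()                                        # node-1 stops heartbeating
    time.sleep(FAILURE_TIMEOUT + 1)
    print("suspected failures:", detect_failures())  # expected: ['node-1']
```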
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 16 Mar 2024 -
What do you prioritize when spending on health and wellness?
On Point
Wellness trends in 2024
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Supported by science. From cold plunges to collagen to celery juice, people face myriad choices in today’s $1.8 trillion global wellness market. Worldwide, consumers are prioritizing effective, data-driven, and science-backed health and wellness products, McKinsey senior partner Warren Teichner and coauthors say. Our survey of more than 5,000 consumers across China, the UK, and the US reveals that efficacy and scientific credibility are two of the most important factors to consumers when selecting wellness products.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:24 - 15 Mar 2024 -
Implementing gen AI responsibly, bringing innovation to a nonprofit, B2B sales, and more highlights from the week: The Daily Read Weekender
Catch up on the week's big reads
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Get ready for the weekend and catch up on the week’s highlights on safely implementing gen AI, the future of Medicare Advantage, next-gen B2B sales, and more.
QUOTE OF THE DAY
chart of the day
Ready to unwind?
—Edited by Joyce Yoo, editor, New York
by "McKinsey Daily Read" <publishing@email.mckinsey.com> - 10:08 - 14 Mar 2024 -
Implementing generative AI with speed and safety
Revise your playbook
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 02:34 - 14 Mar 2024 -
15 Open-Source Projects That Changed the World
Software development is a field of ideas and experiments. One idea leads to an experiment that spawns another idea and the cycle of innovation moves forward.
Open-source projects are the fuel for this innovation.
A good open-source project impacts the lives of many developers and creates a fertile environment for collaboration. Many of the greatest breakthroughs in software development have come from open-source projects.
In this post, we will look at 15 high-impact open-source projects that have changed the lives of many developers.
To come up with the list, we tried to look at the overall impact these projects have created on the industry and related technologies. Also, we’ve focused on projects that have led to a big change in the day-to-day lives of many software developers across the world.
1 - Linux
Unless you’ve been living in a cave, there’s no way you haven’t heard about Linux.
Linux is an open-source operating system created by Linus Torvalds. Like many open-source projects, it was originally started as a hobby project.
And then, it took over the world.
Linux runs on all sorts of computer systems such as PCs, mobiles, and servers. It also runs in more unexpected places such as washing machines, cars, and robots. Even the Large Hadron Collider uses Linux.
However, the biggest impact of Linux is how it has democratized the world of software development by providing a free and open-source operating system.
2 - Apache HTTP Server
Apache HTTP Server is a free and open-source web server that powers a large percentage of websites on the internet.
Ever since its release in 1995, the Apache HTTP server has been a tireless workhorse. It’s versatile enough in terms of security and agility to be adopted by enterprises and startups alike.
Over the years, Apache HTTP Server has inspired so many web servers such as Nginx, Lighttpd, Caddy and so on.
3 - Git
Git hardly needs any introduction.
If you’ve worked as a developer in any capacity, there is a 100% chance you’ve used Git or at least, heard about it.
Git is a free and open-source version control system for software development. And you may be surprised to know that it was also created by Linus Torvalds along with his team.
But why?
Yes, you guessed it right. Linus did it to manage the source code of the Linux kernel project. That’s why people say that the best open-source projects come from your own requirements.
Git has been super-transformative for the way the software industry operates. It provided a standard way of tracking, comparing and applying version control on the source code, leading to the birth of revolutionary products such as GitHub and Bitbucket.
4 - Node.js
JavaScript was always the preferred language for browser-based development. But it would have stayed just a browser language had it not been for Node.js.
Node.js is an open-source cross-platform JavaScript runtime environment for server-side programming.
In other words, Node.js brought JavaScript to backend development.
With its release in 2009, Node.js quickly became a popular choice for building scalable and high-performance web applications. It paved the way for using the same language for both client-side and server-side programming.
5 - Docker
It’s no secret that developers love to build applications and test them on their machines.
But no one loves the pressure of deploying the same application to production.
There’s always one pesky environment issue or version mismatch on the production server that brings the entire application down.
And developers can only say - “It worked fine on my machine.”
To which, they get the answer - “Yes, but we can’t ship your machine to production.”
Docker made it possible.
As an open-source platform, Docker allows developers to package and deploy applications in a consistent and portable way.
The application-specific packages and all the environmental dependencies are packaged in a Docker container image. This image can then be deployed wherever needed...
Continue reading this post for free, courtesy of Alex Xu.
A subscription gets you:
An extra deep dive on Thursdays
Full archive
Many expense it with team's learning budget
by "ByteByteGo" <bytebytego@substack.com> - 11:42 - 14 Mar 2024 -
Why Crisp moved to one monitoring solution
New Relic
Crisp, a risk-intelligence company that helps protect brands from online attacks, transitioned from multiple open-source monitoring tools to one single view with New Relic. The complexity and cost of managing various tools prompted this shift.
After a year-long evaluation, Crisp found New Relic to be the optimal choice for streamlined operations and improved observability. New Relic's pricing based on hosts aligns with Crisp’s needs, eliminating the confusion of data throughput models.
Read how New Relic gave Crisp a more efficient and cost-effective monitoring experience.
Read Story
Need help? Let's get in touch.
This email is sent from an account used for sending messages only. Please do not reply to this email to contact us—we will not get your response.
This email was sent to info@learn.odoo.com Update your email preferences.
For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 07:07 - 14 Mar 2024 -
How can leaders create value from gen AI?
On Point
The need to perform ‘organizational surgery’
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Rewiring business. Many business leaders expect that gen AI will prove its value to organizations in 2024. Even so, those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale, Alex Singla, global leader of QuantumBlack, AI by McKinsey, and coauthors explain.
• Gen AI copilots. Much of gen AI’s near-term value is tied to its ability to act as copilot, working with employees to improve job performance. To create competitive advantage, companies should focus on where copilots can have the biggest effect on their priority programs. For instance, industrial companies might use a gen AI copilot to identify issues with equipment failures. Consider six capabilities companies need to harness digital and AI technology, and for more, read Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI.
— Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:05 - 14 Mar 2024 -
How can corporate boards manage increasing complexity?
On Point
The changing role of boards
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Higher expectations. Being a board member is more complex than ever, says Frithjof Lund, global leader of McKinsey Board Services, on an episode of the Inside the Strategy Room podcast. Corporate boards are taking on new challenges such as geopolitics, generative AI, digitization, and sustainability. In addition, boards are increasingly expected to engage on strategy, investments and M&A, performance management, risk, talent, and organizational matters.
— Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:29 - 13 Mar 2024 -
A Deep Dive into Amazon DynamoDB Architecture
State of Observability for Financial Services and Insurance (Sponsored)
Financial institutions are experiencing an incredible transformation, stemming from consumers expecting a higher level of digital interaction and access to services and a lower dependency on physical services. At the same time, FSI organizations are faced with increased regulation, with new mandates for IT and cyber risk management such as the Digital Operational Resilience Act (DORA).
To ensure development and innovation proceed at the required speed with a customer-centric focus, they’re turning to observability. Dive into the facts and figures of the adoption and business value of observability across the FSI and insurance sectors.
In 2021, there was a 66-hour Amazon Prime Day shopping event.
The event generated some staggering stats:
Trillions of API calls were made to the database by Amazon applications.
The peak load to the database reached 89 million requests per second.
The database provided single-digit millisecond performance while maintaining high availability.
All of this was made possible by DynamoDB.
Amazon’s DynamoDB is a NoSQL cloud database service that promises consistent performance at any scale.
Besides Amazon’s in-house applications, hundreds of thousands of external customers rely on DynamoDB for high performance, availability, durability, and a fully managed serverless experience. Also, many AWS services such as AWS Lambda, AWS Lake Formation, and Amazon SageMaker are built on top of DynamoDB.
In this post, we will look at the evolution of DynamoDB, its operational requirements, and the techniques utilized by the engineers to turn those requirements into reality.
History of DynamoDB
In the early years, Amazon realized that letting applications access traditional enterprise databases was an invitation to multiple scalability challenges such as managing connections, dealing with concurrent workloads, and handling schema updates.
Also, high availability was a critical property for always-online systems. Any downtime negatively impacted the company’s revenue.
There was a pressing need for a highly scalable, available, and durable key-value database for fast-changing data such as a shopping cart.
Dynamo was a response to this need.
However, there was one drawback of Dynamo. It was a single-tenant system and teams were responsible for managing their own Dynamo installations. In other words, every team that used Dynamo had to become experts on various parts of the database service, creating a barrier to adoption.
At about the same time, Amazon launched SimpleDB which reduced operational burden for the teams by providing a managed and elastic experience. The engineers within Amazon’s development team preferred using SimpleDB even though Dynamo might be more suitable for their use case.
But SimpleDB also had some limitations such as:
The tables had a small storage capacity of 10 GB.
Request throughput was low.
Unpredictable read and write latencies because all table attributes were indexed.
Also, the operational burden wasn’t eliminated. Developers still had to take care of dividing data between multiple tables to meet their application’s storage and throughput requirements.
Therefore, the engineers concluded that a better solution would be to combine the best parts of Dynamo (scalability and predictable high performance) with the best parts of SimpleDB (ease of administration, consistency, and a table-based data model).
This led to the launch of DynamoDB as a public AWS service in 2012. It was a culmination of everything they had learned from building large-scale, non-relational databases for Amazon.
Over the years, DynamoDB has added several features based on customer demand.
The below timeline illustrates this constant progress.
Operational Requirements of DynamoDB
DynamoDB has evolved over the years, much of it in response to Amazon’s experiences building highly scalable and reliable cloud computing services. A key challenge has been adding features without impacting the key operational requirements.
The below diagram shows the six fundamental operational requirements fulfilled by DynamoDB.
Let’s look at each of them in a little more detail.
Fully Managed Cloud Service
A fundamental goal of DynamoDB is to free developers from the burden of running their database system. This includes things like patching software, configuring a distributed database cluster, and taking care of hardware needs.
The applications can just talk to the DynamoDB API for creating tables. They can read and write data without worrying about where those tables are physically stored or how they’re being managed.
DynamoDB handles everything for the developer right from resource provisioning to software upgrades, data encryption, taking backups, and even failure recovery.
Multi-Tenant Architecture
DynamoDB also aims to create cost savings for the customers.
One way to achieve this is using a multi-tenant architecture where data from different customers is stored on the same physical machines. This ensures better resource utilization and lets Amazon pass on the savings to the customers.
However, you still need to provide workload isolation in a multi-tenant system.
DynamoDB takes care of it via resource reservations, tight provisioning, and monitoring usage for every customer.
Boundless Scale for Tables
Unlike SimpleDB, there are no predefined limits for how much data can be stored in a DynamoDB table.
DynamoDB is designed to scale the resources dedicated to a table from several servers to many thousands as needed. A table can grow elastically to meet the demands of the customer without any manual intervention.
Predictable Performance
DynamoDB guarantees consistent performance even when the tables grow from a few megabytes to hundreds of terabytes.
For example, if your application is running in the same AWS region as its data, you can expect to see average latency in the low single-digit millisecond range.
DynamoDB handles any level of demand through horizontal scaling by automatically partitioning and repartitioning data as and when needed.
Highly Available
DynamoDB supports high availability by replicating data across multiple data centers or availability zones.
Customers can also create global tables that are geo-replicated across selected regions and provide low latency all across the globe. DynamoDB offers an availability SLA of 99.99% for regular tables and 99.999% for global tables.
Flexible Use Cases
Lastly, DynamoDB has a strong focus on flexibility and doesn’t force developers to follow a particular data model.
There’s no fixed schema and each data item can contain any number of attributes. Tables use a key-value or document data model where developers can opt for strong or eventual consistency while reading items from the table.
Architecture of DynamoDB
Now that we’ve looked at the operational requirements of DynamoDB, time to learn more about the architecture that helps fulfill these requirements.
To simplify the understanding, we will look at specific parts of the overall architecture one by one.
DynamoDB Tables
A DynamoDB table is a collection of items where each item is a collection of attributes.
Each item is uniquely identified by a primary key, whose schema is specified at the time of table creation. The primary key is either a single partition key or a composite key consisting of a partition key and a sort key.
The partition key is important as it helps determine where the item will be physically stored. We will look at how that works out in a later section.
DynamoDB also supports secondary indexes to query data in a table using an alternate key. A particular table can have one or more secondary indexes.
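As a concrete illustration of a composite primary key and a secondary index, here is a minimal boto3 sketch. The table name, attribute names, and on-demand billing mode are hypothetical choices, not details from the article.

```python
# Sketch: create a table with a composite primary key (partition + sort key)
# and one global secondary index. Names and billing mode are illustrative assumptions.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "order_date", "KeyType": "RANGE"},   # sort key
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "status-index",                            # alternate access path
        "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```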
Interface
DynamoDB provides a simple interface to store or retrieve items from a table.
The below table shows the primary operations that can be used by clients to read and write items in a DynamoDB table.
Also, DynamoDB supports ACID transactions that can update multiple items while ensuring atomicity, consistency, isolation, and durability. The key point to note is that this is managed without compromising on the other operational guarantees related to scaling and availability.
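That table is not reproduced here, but a hedged boto3 sketch of the core read and write calls (PutItem, GetItem with a strongly consistent read, and Query) might look like the following; it continues the hypothetical Orders table from the previous sketch.

```python
# Sketch of the basic interface: write an item, read it back, query a key range.
# Table and attribute names continue the hypothetical "Orders" example.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# PutItem: insert or replace a single item.
dynamodb.put_item(
    TableName="Orders",
    Item={"customer_id": {"S": "cust-42"},
          "order_date": {"S": "2024-03-16"},
          "status": {"S": "SHIPPED"}},
)

# GetItem: point read by full primary key; ConsistentRead=True is served by the leader replica.
resp = dynamodb.get_item(
    TableName="Orders",
    Key={"customer_id": {"S": "cust-42"}, "order_date": {"S": "2024-03-16"}},
    ConsistentRead=True,
)
print(resp.get("Item"))

# Query: all items for one partition key, ordered by the sort key.
resp = dynamodb.query(
    TableName="Orders",
    KeyConditionExpression="customer_id = :c",
    ExpressionAttributeValues={":c": {"S": "cust-42"}},
)
print(resp["Items"])
```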
Partitioning and Replication
A DynamoDB table is divided into multiple partitions. This provides two benefits:
Handling more throughput as requests increase
Storing more data as the table grows
Each partition of the table hosts a part of the table’s key range. For example, if there are 100 keys and 5 partitions, each partition can hold 20 keys.
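A toy sketch of how a partition key might map onto a key range follows; the hash function and range boundaries are simplifying assumptions, greatly reduced from DynamoDB's real partitioning scheme.

```python
# Toy sketch: map a partition key to one of N partitions by hashing into a key space.
# The hash function and range arithmetic are simplifying assumptions, not DynamoDB internals.
import hashlib

NUM_PARTITIONS = 5
KEY_SPACE = 2 ** 32                        # pretend the hashed key space is 32 bits

def partition_for(partition_key: str) -> int:
    digest = hashlib.md5(partition_key.encode()).digest()
    point = int.from_bytes(digest[:4], "big")           # position in the key space
    range_size = KEY_SPACE // NUM_PARTITIONS            # each partition owns one range
    return min(point // range_size, NUM_PARTITIONS - 1)

for key in ["cust-1", "cust-2", "cust-3"]:
    print(key, "-> partition", partition_for(key))
```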
But what about the availability guarantees of these partitions?
Each partition has multiple replicas distributed across availability zones. Together, these replicas form a replication group and improve the partition’s availability and durability.
A replication group consists of storage replicas that contain both the write-ahead logs and the B-tree that stores the key value data. Also, a group can contain replicas that only store write-ahead log entries and not the key-value data. These replicas are known as log replicas. We will learn more about their usage in a later section.
But whenever you replicate data across multiple nodes, guaranteeing a consensus becomes a big issue. What if each partition has a different value for a particular key?
The replication group uses Multi-Paxos for consensus and leader election. The leader is a key player within the replication group:
The leader serves all write requests. On receiving a write request, the leader of the group generates a write-ahead log record and sends it to the other replicas. A write is acknowledged to the application once a quorum of replicas stores the log record to their local write-ahead logs.
The leader also serves strongly consistent read requests. On the other hand, any other replica can serve eventually consistent reads.
But what happens if the leader goes down?
The leader of a replication group maintains its leadership using a lease mechanism. If the leader of the group fails and this failure is detected by any of the other replicas, the replica can propose a new round of the election to elect itself as the new leader.
DynamoDB Request Flow
DynamoDB consists of tens of microservices. However, there are a few core services that carry out the most critical functionality within the request flow.
The below diagram shows the request flow on a high level.
Let’s understand how it works in a step-by-step manner.
Requests arrive at the request router service. This service is responsible for routing each request to the appropriate storage node. However, it needs to call other services to make the routing decision.
The request router first checks whether the request is valid by calling the authentication service. The authentication service is hooked to the AWS IAM and helps determine whether the operation being performed on a given table is authorized.
Next, the request router fetches the routing information from the metadata service. The metadata service stores routing information about the tables, indexes, and replication groups for keys of a given table or index.
The request router also checks the global admission control to make sure that the request doesn’t exceed the resource limit for the table.
Lastly, if everything checks out, the request router calls the storage service to store the data on a fleet of storage nodes. Each storage node hosts many replicas belonging to different partitions.
Hot Partitions and Throughput Dilution
As you may have noticed, partitioning is a key selling point for DynamoDB. It provides a way to dynamically scale both the capacity and performance of tables as the demand changes.
In the initial release, DynamoDB allowed customers to explicitly specify the throughput requirements for a table in terms of read capacity units (RCUs) and write capacity units (WCUs). As the demand from a table changed (based on size and load), it could be split into partitions.
For example, let’s say a partition has a maximum throughput of 1000 WCUs. When a table is created with 3200 WCUs, DynamoDB creates 4 partitions with each partition allocated 800 WCUs. If the table capacity was increased to 6000 WCUs, then partitions will be split to create 8 child partitions with 750 WCUs per partition.
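To make that arithmetic explicit, here is a tiny sketch that reproduces the numbers in the example above; the 1,000-WCU-per-partition ceiling and the split-by-doubling behavior are taken from the example, not from DynamoDB documentation.

```python
# Reproduce the worked example: 3,200 WCUs -> 4 partitions at 800 WCUs each;
# raising the table to 6,000 WCUs splits them into 8 children at 750 WCUs each.
import math

MAX_WCU_PER_PARTITION = 1000   # per-partition ceiling used in the example

def initial_partitions(table_wcus: int) -> int:
    return math.ceil(table_wcus / MAX_WCU_PER_PARTITION)

def partitions_after_increase(current_partitions: int, new_table_wcus: int) -> int:
    # Partitions are split (doubled) until each child stays under the ceiling.
    partitions = current_partitions
    while new_table_wcus / partitions > MAX_WCU_PER_PARTITION:
        partitions *= 2
    return partitions

p0 = initial_partitions(3200)
p1 = partitions_after_increase(p0, 6000)
print(p0, 3200 / p0)   # 4 800.0
print(p1, 6000 / p1)   # 8 750.0
```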
All of this was controlled by the admission control system to make sure that storage nodes don’t become overloaded. However, this approach assumed a uniform distribution of throughput across all partitions, resulting in some problems.
Two consequences of this approach were hot partitions and throughput dilution.
Hot partitions arose in applications that had non-uniform access patterns. In other words, more traffic consistently went to a few items on the tables rather than an even distribution.
Throughput dilution was common for tables where partitions were split for size. Splitting a partition for size would result in the throughput of the partition being equally divided among the child partitions. This would decrease the per-partition throughput.
The static allocation of throughput at a partition level can cause reads and writes to be rejected if that partition receives a high number of requests. The partition’s throughput limit was exceeded even though the total provisioned throughput of the table was sufficient. Such a condition is known as throttling.
The below illustration shows this concept:
From a customer’s perspective, throttling creates periods of unavailability even though the service behaves as expected. To address it, customers would try to increase the table’s provisioned throughput but would not be able to use all of that capacity effectively. In other words, tables would be over-provisioned, resulting in a waste of resources.
To solve this, DynamoDB implemented a couple of solutions.
Bursting
While non-uniform access to partitions meant that some partitions exceeded their throughput limit, it also meant that other partitions were not using their allocated throughput. In other words, there was unused capacity being wasted.
Therefore, DynamoDB introduced the concept of bursting at the partition level.
The idea behind bursting was to let applications tap into this unused capacity at a partition level to absorb short-lived spikes for up to 300 seconds. The unused capacity is called the burst capacity.
It’s the same as storing money in the bank from your salary each month to buy a new car with all those savings.
The below diagram shows this concept.
The capacity management was controlled using multiple token buckets as follows:
Allocated token bucket for a partition
Burst token bucket for a partition
Node-level token bucket
Together, these buckets provided admission control:
If a read request arrived on a storage node and there were tokens in the partition’s allocated bucket, the request was admitted and the tokens were deducted from the partition bucket and node-level bucket
Once a partition exhausted all provisioned tokens, requests were allowed to burst only when tokens were available both in the burst token and the node-level token bucket
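A rough sketch of that two-level admission check (a partition's allocated bucket, its burst bucket, and the node-level bucket) is shown below in Python; the refill rates and capacities are illustrative assumptions.

```python
# Sketch of burst admission control: serve from the partition's allocated tokens when
# possible; once they are exhausted, burst only if both the burst bucket and the
# node-level bucket have tokens. Rates and capacities are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def try_consume(self, n: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

allocated = TokenBucket(rate=800, capacity=800)        # partition's provisioned capacity
burst     = TokenBucket(rate=800, capacity=800 * 300)  # up to 300 seconds of unused capacity
node      = TokenBucket(rate=3000, capacity=3000)      # shared ceiling of the storage node

def admit(request_units: float = 1.0) -> bool:
    if allocated.try_consume(request_units):
        return node.try_consume(request_units)         # deduct from the node-level bucket too
    # Provisioned tokens exhausted: burst only if burst AND node buckets both allow it.
    return burst.try_consume(request_units) and node.try_consume(request_units)

print(admit())  # True while capacity is available
```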
Global Admission Control
Bursting took care of short-lived spikes. However, long-lived spikes were still a problem in cases that had heavily skewed access patterns across partitions.
Initially, the DynamoDB developers implemented an adaptive capacity system that monitored the provisioned and consumed capacity of all tables. In case of throttling where the table level throughput wasn’t exceeded, it would automatically boost the allocated throughput.
However, this was a reactive approach and kicked in only after the customer had experienced a brief period of unavailability.
To solve this problem, they implemented Global Admission Control or GAC.
Here’s how GAC works (a minimal sketch follows this list):
It builds on the idea of token buckets by implementing a service that centrally tracks the total consumption of a table’s capacity using tokens.
Each request router instance maintains a local token bucket to make admission decisions.
The routers also communicate with the GAC to replenish tokens at regular intervals.
When a request arrives, the request router deducts tokens.
When it runs out of tokens, it asks for more tokens from the GAC.
The GAC instance uses the information provided by each request router to estimate global token consumption, and it vends tokens for the next time unit in proportion to that router’s share of the overall tokens.
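A minimal sketch of this flow is shown below. The GacService and RequestRouter classes, the one-second window, and the fair-share math are illustrative assumptions; the real GAC protocol is more sophisticated:

```python
import time

class GacService:
    """Stand-in for the central GAC tracker: estimates table-wide consumption
    per one-second window and hands each router a share of what is left."""
    def __init__(self, table_capacity_per_sec: float, routers: int):
        self.capacity = table_capacity_per_sec
        self.routers = routers
        self.handed_out = 0.0
        self.window_start = time.monotonic()

    def request_tokens(self, reported_consumption: float, wanted: float) -> float:
        now = time.monotonic()
        if now - self.window_start >= 1.0:          # start a new one-second window
            self.handed_out, self.window_start = 0.0, now
        fair_share = self.capacity / self.routers
        grant = max(0.0, min(wanted, fair_share, self.capacity - self.handed_out))
        self.handed_out += grant
        return grant

class RequestRouter:
    """Keeps a local token bucket and tops it up from GAC when it runs dry."""
    def __init__(self, gac: GacService):
        self.gac = gac
        self.local_tokens = 0.0
        self.consumed_since_report = 0.0

    def admit(self, cost: float = 1.0) -> bool:
        if self.local_tokens < cost:
            grant = self.gac.request_tokens(self.consumed_since_report, wanted=cost * 50)
            self.consumed_since_report = 0.0
            self.local_tokens += grant
        if self.local_tokens >= cost:
            self.local_tokens -= cost
            self.consumed_since_report += cost
            return True
        return False

gac = GacService(table_capacity_per_sec=1000, routers=4)
router = RequestRouter(gac)
print(router.admit())  # True while the router's share of table capacity lasts
```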
Managing Durability with DynamoDB
One of the central tenets of DynamoDB is that the data should never be lost after it has been committed. However, in practice, data loss can happen due to hardware failures or software bugs.
To guard against these scenarios, DynamoDB implements several mechanisms to ensure high durability.
Hardware Failures
In a large service like DynamoDB, hardware failures such as memory and disk failures are common. When a node goes down, every partition with a replica on that node is left with just two replicas.
The write-ahead logs in DynamoDB are critical for providing durability and crash recovery.
Write-ahead logs are stored on all three replicas of a partition. To achieve even higher levels of durability, the write-ahead logs are also periodically archived to S3, which is designed for 11 nines of durability.
Silent Data Errors and Continuous Verification
Some hardware failures due to storage media, CPU, or memory can cause incorrect data to be stored. Unfortunately, these issues are difficult to detect and they can happen anywhere.
DynamoDB maintains checksums within every log entry, message, and log file to detect such errors. Data integrity is validated for every data transfer between two nodes.
DynamoDB also continuously verifies data at rest using a scrub process. The goal of this scrub process is to detect errors such as bit rot.
The process verifies two things:
All three copies of the replicas in the replication group have the same data
Data of the live replicas matches with a copy of a replica built offline using the archived write-ahead log entries
The verification is done by computing the checksum of the live replica and matching it against a checksum computed from a replica rebuilt using the log entries archived in S3.
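Conceptually, the comparison looks something like the sketch below. The checksum and scrub helpers and the use of SHA-256 are assumptions for illustration; the actual scrub process operates on replicas and archived write-ahead logs rather than in-memory byte strings:

```python
import hashlib

def checksum(chunks) -> str:
    """Checksum a stream of byte chunks (a replica's data, or a replica
    rebuilt offline from the write-ahead logs archived in S3)."""
    digest = hashlib.sha256()
    for chunk in chunks:
        digest.update(chunk)
    return digest.hexdigest()

def scrub(live_replica_chunks, rebuilt_replica_chunks) -> bool:
    """True when the live replica matches the offline rebuild; a mismatch
    points to silent corruption such as bit rot."""
    return checksum(live_replica_chunks) == checksum(rebuilt_replica_chunks)

# Illustrative usage with in-memory data standing in for replica contents.
live    = [b"item-1", b"item-2"]
rebuilt = [b"item-1", b"item-2"]
print(scrub(live, rebuilt))  # True: no divergence detected
```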
Backups and Restores
A customer’s data can also get corrupted due to a bug in their application code.
To deal with such scenarios, DynamoDB supports backup and restore functionalities. The great part is that backups and restores don’t affect the performance or availability of the table since they are built using the write-ahead logs that are archived in S3.
Backups are full copies of DynamoDB tables and are stored in an S3 bucket. They are consistent across multiple partitions up to the nearest second and can be restored to a new table anytime.
DynamoDB also supports point-in-time restore, allowing customers to restore the contents of a table as it existed at any point in the previous 35 days.
Managing Availability with DynamoDB
Availability is a major selling point of a managed database service like DynamoDB.
Customers expect almost 100 percent availability, and even though perfect availability isn’t theoretically possible, DynamoDB employs several techniques to ensure high availability.
DynamoDB tables are distributed and replicated across multiple Availability Zones (AZs) within a region. The platform team regularly tests resilience to node, rack, and AZ failures.
However, the team also had to solve several challenges to bring DynamoDB to such a high level of availability.
Write and Read Availability
The write availability of a partition depends on a healthy leader and a healthy write quorum that consists of two out of three replicas from different AZs.
In other words, a partition becomes unavailable for writes if there aren’t enough healthy replicas to meet the minimum quorum requirement. If one of the replicas goes down, the leader adds a log replica to the group, since that is the fastest way to ensure the write quorum stays available.
As mentioned earlier, the leader replica serves consistent reads while other replicas can serve eventually consistent reads.
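The quorum rule can be summarized with a small sketch like the one below; the helper names and the immediate “add a log replica” response are a simplified reading of the behavior described above:

```python
REPLICAS = 3
WRITE_QUORUM = 2  # two of the three replicas, in different AZs, must acknowledge

def write_available(healthy_replicas: int) -> bool:
    """The partition stays writable while a write quorum can still be formed."""
    return healthy_replicas >= WRITE_QUORUM

def on_replica_failure(healthy_replicas: int) -> str:
    """As soon as a replica is lost, the leader adds a log replica (write-ahead
    logs only), since that is the fastest way to keep the write quorum healthy."""
    if healthy_replicas < REPLICAS:
        return "leader adds a log replica"
    return "no action needed"

print(write_available(2))      # True: a quorum of 2 of 3 still holds
print(on_replica_failure(2))   # leader adds a log replica
```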
Failure Detection
The availability of a database is highly dependent on the ability to detect failures.
Failure detection must be quick to minimize downtime. It must also avoid false positives, because triggering a needless failover can lead to bigger disruptions in the service.
For example, when all replicas lose connection to the leader, it’s clear that the leader is down and a new election is needed.
However, nodes can also experience gray failures caused by communication issues between a leader and its followers. For instance, a replica may stop receiving heartbeats from the leader due to a network issue and trigger a new election. The newly elected leader, however, has to wait for the old leader’s lease to expire, resulting in unavailability.
To get around gray failures like this, a replica that wants to trigger a failover confirms with the other replicas whether they are also unable to communicate with the leader. If the other replicas respond with a healthy leader message, the follower drops its leader election attempt.
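In pseudocode-like Python, that peer-confirmation check might look like this; the function and its arguments are hypothetical, meant only to capture the decision logic:

```python
def should_trigger_election(local_heartbeat_ok: bool, peers_reach_leader: list[bool]) -> bool:
    """Follower-side check before starting a leader election. If this replica
    still sees heartbeats, do nothing. Otherwise elect a new leader only if no
    peer can reach the current one; a single 'leader is healthy' answer suggests
    a gray failure local to this replica, so the attempt is dropped."""
    if local_heartbeat_ok:
        return False
    return not any(peers_reach_leader)

print(should_trigger_election(False, [True, False]))   # False: likely a gray failure
print(should_trigger_election(False, [False, False]))  # True: the leader really looks down
```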
Metadata Availability
As we saw in DynamoDB’s request flow diagram, metadata is a critical piece that makes the entire process work.
Metadata is the mapping between a table’s primary keys and the corresponding storage nodes. Without this information, the requests cannot be routed to the correct nodes.
In the initial days, DynamoDB stored the metadata in DynamoDB itself. When a request router received a request for a table it had not seen before, it downloaded the routing information for the entire table and cached it locally for subsequent requests. Since this information changed infrequently, the cache hit rate was almost 99.75 percent.
However, bringing up new router instances with empty caches would result in a huge traffic spike to the metadata service, impacting performance and stability.
To reduce the reliance on local caching of the metadata, DynamoDB built an in-memory distributed datastore called MemDS.
See the below diagram for the role of MemDS.
As you can see, MemDS stores all the metadata in memory and replicates it across a fleet of MemDS servers.
A partition map cache (the MemDS cache) is also deployed on each request router instance. To avoid a bimodal cache setup, whenever there is a cache hit, an asynchronous call is made to MemDS to refresh the cache. This ensures a constant flow of traffic to the MemDS fleet rather than sudden spikes.
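A rough sketch of this refresh-on-hit pattern is shown below. The PartitionMapCache class and the memds_lookup callback are illustrative stand-ins, not the real MemDS client:

```python
import threading

class PartitionMapCache:
    """Request-router-side cache in front of MemDS. Unlike a classic cache, it
    refreshes an entry asynchronously even on a hit, so the MemDS fleet sees a
    steady flow of traffic instead of spikes when router caches are cold."""

    def __init__(self, memds_lookup):
        self.memds_lookup = memds_lookup   # function: key -> routing entry
        self.cache = {}
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            entry = self.cache.get(key)
        if entry is not None:
            # Cache hit: serve immediately, refresh in the background.
            threading.Thread(target=self._refresh, args=(key,), daemon=True).start()
            return entry
        # Cache miss: fetch synchronously from MemDS.
        return self._refresh(key)

    def _refresh(self, key):
        entry = self.memds_lookup(key)
        with self.lock:
            self.cache[key] = entry
        return entry

# Usage with a stand-in for the MemDS fleet.
routes = PartitionMapCache(lambda key: f"storage-node-for-{key}")
print(routes.get("table-A#partition-3"))
```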
Conclusion
DynamoDB has been a pioneer in the field of NoSQL databases in the cloud-native world.
Thousands of companies all across the world rely on DynamoDB for their data storage needs due to its high availability and scalability properties.
However, behind the scenes, DynamoDB also offers plenty of lessons in designing large-scale database systems.
Some of the key lessons the DynamoDB team learned were:
Adapting to the traffic patterns of user applications improves the overall customer experience
To improve stability, it’s better to design systems for predictability over absolute efficiency
For high durability, perform continuous verification of data at rest
Maintaining high availability is a careful balance between operational discipline and new features
These lessons can act as great takeaways for us.
References:
Amazon DynamoDB: A Scalable, Predictably Performant and Fully Managed NoSQL Database Service
Amazon DynamoDB: Evolution of a Hyperscale Cloud Database Service
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:42 - 12 Mar 2024 -
Minority employees could help address Europe’s skills shortage
On Point
The potential (and significant) economic opportunity Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Systemic barriers. Europe has significant room for improvement when it comes to ethnocultural minority employees (EMEs). A McKinsey survey of nearly 4,000 employees in five European countries showed that EMEs, who accounted for about 1,700 of the respondents, face significant barriers in terms of recruitment, retention, and advancement. Systemic issues—including discrimination, lack of access to professional networks, limited opportunities for career development and training, and unconscious biases—hinder EMEs’ progress in the workplace, highlight McKinsey senior partner Massimo Giordano and colleagues.
— Edited by Jana Zabkova, senior editor, New York
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:06 - 12 Mar 2024 -
Management mindsets that work: A leader’s guide
Matters of the mind Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
For a long while, self-awareness and mindfulness were popular notions in new-age circles—though in the corporate world, not so much. After much research and, yes, contemplation, we now understand the connection between our internal and external environments to be critical to the success of leaders at all levels, and of the people they lead. An introspective, emotionally intelligent mindset is especially important for CEOs, whose words, behaviors, and decisions have an outsize impact on their organizations. This week, we look at the mindsets that great leaders bring to work.
Whether you’re on the front line or the management team, some skills are bound to come more naturally than others. Indeed, McKinsey senior partners Gautam Kumra, Joydeep Sengupta, and Mukund Sridhar report that CEOs can be self-aware and aren’t always superhuman. Early results from a self-assessment of CEO performance show that corporate leaders tend to rate themselves highest on managing their personal effectiveness and lowest on engaging their boards of directors. While CEOs tend to say they have good relationships with their boards, they rate themselves lower at tapping the board’s wisdom and focusing board meetings on the future. The good news is that time and training can improve these skills. So can a mindset shift: the best CEOs think about the board’s expertise more expansively, embracing directors’ perspectives rather than relegating them to the sidelines.
That’s the number of employees working at General Motors in 1980, the year that Mary Barra (the company’s current CEO) began there as an engineering intern. Despite the leadership skills Barra honed and the other senior roles she held at GM, there were twists and turns along the way: all evidence of how narrow the path to the top job can be. To increase a CEO hopeful’s odds of success, McKinsey’s Carolyn Dewar, Scott Keller, Vik Malhotra, and Kurt Strovink offer four recommendations, including an honest assessment of why they want the role in the first place. Passion and vision are essential to sustainable success as a chief executive; ego is not. And a humble, open-to-change mindset is just as important for aspiring or newer CEOs as it is for mid-tenured leaders.
After interviewing exceptional CEOs for their bestselling book, CEO Excellence, senior partners Carolyn Dewar, Scott Keller, and Vik Malhotra uncovered an abundance of insights about this group’s habits—then distilled them into six mindsets that leaders of all stripes can learn from. In a recent interview, Dewar and Malhotra note the ways in which the best CEOs’ mindsets can make the difference. “On aligning the organization, the mindset of treating the soft stuff as hard stuff is a critical one,” Malhotra says. “On mobilizing through leaders, many great CEOs focused on the question, how do you create not just a team of stars but a star team?” Another key insight: that great leaders are built, not born. “Great CEOs come to the role with a certain drive, background, and education, but that’s a small part of the equation. The eight or ten roles they held before becoming CEOs prepared them to be bold in setting direction or to focus on what only they could do in their leadership models. Those are learned skills.”
Lead by bringing your best self to work.
– Edited by Daniella Seiler, executive editor, Washington, DC
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to the Leading Off newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 02:07 - 11 Mar 2024 -
Seven predictions for medtech in 2024
On Point
‘Lumpy’ geographic performance Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Potential rebound. Growth in medtech has accelerated since the COVID-19 pandemic, bringing with it higher expectations for medtech companies. Medtech revenue growth in 2024 will likely stabilize at 100 to 150 basis points above prepandemic levels, McKinsey senior partner Peter Pfeiffer and coauthors explain. Factors fueling the growth are an aging population; innovations targeting undertreated diseases like diabetes, heart failure, and stroke; and the increased use of alternative-care sites.
— Edited by Jermey Matthews, editor, Boston
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:30 - 11 Mar 2024 -
The week in charts
The Week in Charts
The rise of injectable procedures, evolving trade patterns, and more Our McKinsey Chart of the Day series offers a daily chart that helps explain a changing world—as we strive toward sustainable and inclusive growth. In case you missed them, this week’s graphics explored the rise of injectable procedures, evolving trade patterns, renewable-energy demand, global education performance, and the GDP impact of improving women’s health.
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to The Week in Charts newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:38 - 9 Mar 2024