The Week in Charts
Latino representation in Hollywood, gen AI risk assessments, and more
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to The Week in Charts newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:38 - 13 Apr 2024 -
EP107: Top 9 Architectural Patterns for Data and Communication Flow
This week’s system design refresher:
Top 9 Architectural Patterns for Data and Communication Flow
How Netflix Really Uses Java?
Top 6 Cloud Messaging Patterns
What Are the Most Important AWS Services To Learn?
SPONSOR US
Register for POST/CON 24 | Save 20% Off (Sponsored)
Postman’s annual user conference will be one of 2024’s biggest developer events and it features a great mix of high-level talks from tech executives and hands-on training from some of the best developers in the world.
Learn: Get first-hand knowledge from Postman experts and global tech leaders.
Level up: Attend 8-hour workshops to leave with new skills (and badges!)
Become the first to know: See the latest API platform innovations, including advancements in AI.
Help shape the future of Postman: Give direct feedback to the Postman leadership team.
Network with fellow API practitioners and global tech leaders — including speakers from OpenAI, Heroku, and more.
Have fun: Enjoy cocktails, dinner, 360° views of the city, and a live performance from multi-platinum recording artist T-Pain!
Use code PCBYTEBYTEGO20 and save 20% off your ticket!
Top 9 Architectural Patterns for Data and Communication Flow
Peer-to-Peer
The Peer-to-Peer pattern involves direct communication between two components without the need for a central coordinator.

API Gateway
An API Gateway acts as a single entry point for all client requests to the backend services of an application.

Pub-Sub
The Pub-Sub pattern decouples the producers of messages (publishers) from the consumers of messages (subscribers) through a message broker.

Request-Response
This is one of the most fundamental integration patterns, where a client sends a request to a server and waits for a response.

Event Sourcing
Event Sourcing involves storing the state changes of an application as a sequence of events.

ETL
ETL is a data integration pattern used to gather data from multiple sources, transform it into a structured format, and load it into a destination database.

Batching
Batching involves accumulating data over a period or until a certain threshold is met before processing it as a single group.

Streaming Processing
Streaming Processing allows for the continuous ingestion, processing, and analysis of data streams in real time.

Orchestration
Orchestration involves a central coordinator (an orchestrator) managing the interactions between distributed components or services to achieve a workflow or business process.
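As a toy illustration of one of these patterns, here is a minimal in-memory Pub-Sub sketch in Python. The `Broker` class and topic names are hypothetical, standing in for a real message broker such as Kafka or RabbitMQ:

```python
from collections import defaultdict

class Broker:
    """Toy message broker: decouples publishers from subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher knows nothing about who consumes the message.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders", lambda msg: received.append(msg))
broker.publish("orders", {"id": 1, "amount": 42})
```

Because the broker sits between the two sides, new subscribers can be added without touching any publisher code.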
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
How Netflix Really Uses Java?
Netflix is predominantly a Java shop.
Every backend application (including internal apps, streaming, and movie production apps) at Netflix is a Java application.
However, the Java stack is not static and has gone through multiple iterations over the years. Here are the details of those iterations:
API Gateway
Netflix follows a microservices architecture. Every piece of functionality and data is owned by a microservice built using Java (initially version 8).
This means that rendering one screen (such as the List of Lists of Movies, or LOLOMO) involved fetching data from dozens of microservices. But making all these calls from the client created a performance problem.
Netflix initially used the API Gateway pattern, with Zuul handling the orchestration.

BFFs with Groovy & RxJava
Using a single gateway for multiple clients was a problem for Netflix because each client (such as TV, mobile apps, or web browser) had subtle differences.
To handle this, Netflix used the Backend-for-Frontend (BFF) pattern. Zuul was moved to the role of a proxy.
In this pattern, every frontend or UI gets its own mini backend that performs the request fanout and orchestration for multiple services.
The BFFs were built using Groovy scripts, and the service fanout was done using RxJava for thread management.

GraphQL Federation
The Groovy and RxJava approach required more work from the UI developers in creating the Groovy scripts. Also, reactive programming is generally hard.
Recently, Netflix moved to GraphQL Federation. With GraphQL, a client can specify exactly what set of fields it needs, thereby solving the problem of overfetching and underfetching with REST APIs.
The GraphQL Federation takes care of calling the necessary microservices to fetch the data.
These microservices are called Domain Graph Service (DGS) and are built using Java 17, Spring Boot 3, and Spring Boot Netflix OSS packages. The move from Java 8 to Java 17 resulted in 20% CPU gains.
More recently, Netflix has started to migrate to Java 21 to take advantage of features like virtual threads.
Top 6 Cloud Messaging Patterns
How do services communicate with each other? The diagram below shows 6 cloud messaging patterns.
🔹 Asynchronous Request-Reply
This pattern aims at providing determinism for long-running backend tasks. It decouples backend processing from frontend clients.
In the diagram below, the client makes a synchronous call to the API, triggering a long-running operation on the backend. The API returns an HTTP 202 (Accepted) status code, acknowledging that the request has been received for processing.
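A minimal sketch of this flow, assuming an in-process job store and a background thread in place of a real HTTP backend (`submit`, `poll`, and the job-store layout are illustrative, not any particular framework's API):

```python
import threading
import time
import uuid

jobs = {}  # job_id -> {"status": ..., "result": ...}

def submit(work):
    """API endpoint: returns 202 Accepted plus a status URL immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "Accepted", "result": None}
    def run():
        jobs[job_id]["status"] = "Running"
        jobs[job_id]["result"] = work()
        jobs[job_id]["status"] = "Done"
    threading.Thread(target=run).start()
    return 202, f"/status/{job_id}", job_id

def poll(job_id):
    """Client polls the status endpoint until the job completes."""
    return jobs[job_id]["status"], jobs[job_id]["result"]

status_code, status_url, job_id = submit(lambda: sum(range(1000)))
while poll(job_id)[0] != "Done":   # client polls instead of blocking the API
    time.sleep(0.01)
```

The key point is that the frontend call returns immediately with 202; the client learns the outcome later by polling (or via a callback).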
🔹 Publisher-Subscriber
This pattern targets decoupling senders from consumers, and avoiding blocking the sender to wait for a response.
🔹 Claim Check
This pattern solves the transmission of large messages. It stores the whole message payload in a database and transmits only a reference to the message, which is used later to retrieve the payload from the database.
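A toy sketch of the Claim Check flow (the `blob_store` dict stands in for the database or object store; `check_in`/`check_out` are hypothetical names):

```python
import uuid

blob_store = {}  # stands in for a database or object store

def check_in(payload):
    """Store the large payload; return only a small claim-check reference."""
    ref = str(uuid.uuid4())
    blob_store[ref] = payload
    return ref

def check_out(ref):
    """Retrieve the full payload later using the reference."""
    return blob_store.pop(ref)

large_payload = b"x" * 10_000_000  # too big to put on the message bus
message = {"type": "video-uploaded", "claim_check": check_in(large_payload)}
# ...the message travels through the queue carrying only the reference...
payload = check_out(message["claim_check"])
```

Only the tiny reference crosses the messaging channel; the heavy bytes stay in storage until the consumer needs them.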
🔹 Priority Queue
This pattern prioritizes requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority.
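A minimal sketch of such a queue using Python's `heapq`, where a lower number means higher priority (the request names are illustrative):

```python
import heapq

class PriorityQueue:
    """Lower number = higher priority; ties broken by arrival order."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves FIFO order among equal priorities

    def push(self, priority, item):
        heapq.heappush(self._heap, (priority, self._counter, item))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = PriorityQueue()
q.push(5, "batch report")
q.push(1, "payment")       # higher priority, processed first
q.push(5, "email digest")
order = [q.pop() for _ in range(3)]
```

The consumer always dequeues the highest-priority request next, so urgent work is never stuck behind bulk work.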
🔹 Saga
Saga is used to manage data consistency across multiple services in distributed systems, especially in microservices architectures where each service manages its own database.
The saga pattern addresses the challenge of maintaining data consistency without relying on distributed transactions, which are difficult to scale and can negatively impact system performance.
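A toy sketch of a saga coordinator that runs local transactions in order and, on failure, executes the compensations in reverse (the step names are hypothetical):

```python
def run_saga(steps):
    """Execute local transactions in order; on failure, compensate in reverse."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):
                undo()
            return False
    return True

log = []

def fail():
    raise RuntimeError("shipping unavailable")

steps = [
    (lambda: log.append("reserve inventory"), lambda: log.append("release inventory")),
    (lambda: log.append("charge card"),       lambda: log.append("refund card")),
    (fail,                                    lambda: None),
]
ok = run_saga(steps)
```

When the third step fails, the earlier local transactions are undone by their compensations, restoring consistency without a distributed transaction.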
🔹 Competing Consumers
This pattern enables multiple concurrent consumers to process messages received on the same messaging channel. There is no need to configure complex coordination between the consumers. However, this pattern cannot guarantee message ordering.
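A minimal sketch of Competing Consumers using Python's thread-safe `queue.Queue` as the shared channel (the worker names and sentinel-based shutdown are illustrative choices):

```python
import queue
import threading

channel = queue.Queue()
results = []
lock = threading.Lock()

def consumer(name):
    while True:
        msg = channel.get()
        if msg is None:          # sentinel: shut this consumer down
            channel.task_done()
            return
        with lock:
            results.append((name, msg))
        channel.task_done()

workers = [threading.Thread(target=consumer, args=(f"worker-{i}",))
           for i in range(3)]
for w in workers:
    w.start()
for i in range(10):              # producer puts messages on the channel
    channel.put(i)
for _ in workers:                # one sentinel per consumer
    channel.put(None)
for w in workers:
    w.join()
```

Each message is processed by exactly one of the competing workers, but which worker gets which message (and in what interleaving) is not guaranteed, matching the pattern's lack of ordering guarantees.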
Reference: Azure Messaging patterns

What Are the Most Important AWS Services To Learn?
Since its inception in 2006, AWS has rapidly evolved from simple offerings like S3 and EC2 to an expansive, versatile cloud ecosystem.
Today, AWS provides a highly reliable, scalable infrastructure platform with over 200 services in the cloud, powering hundreds of thousands of businesses in 190 countries around the world.
For both newcomers and seasoned professionals, navigating the broad set of AWS services is no small feat.
From computing power, storage options, and networking capabilities to database management, analytics, and machine learning, AWS provides a wide array of tools that can be daunting to understand and master.
Each service is tailored to specific needs and use cases, requiring a deep understanding of not just the services themselves, but also how they interact and integrate within an IT ecosystem.
This attached illustration can serve as both a starting point and a quick reference for anyone looking to demystify AWS and focus their efforts on the services that matter most.
It provides a visual roadmap, outlining the foundational services that underpin cloud computing essentials, as well as advanced services catering to specific needs like serverless architectures, DevOps, and machine learning.
Over to you: What has your journey been like with AWS so far?

SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 13 Apr 2024 -
Embarking on a generative AI reset
Plus, generative AI's role in the creative design of physical products
by "Destacados de McKinsey" <publishing@email.mckinsey.com> - 08:19 - 13 Apr 2024 -
RE: Seeking an agent in Saudi Arabia
Dear agent,
Trust you are doing well.
I sincerely invite you to join the GLA platform and cover the increasing requests ex Saudi Arabia from our GLA members.
Now is the best time to join the GLA family at the beginning of 2024.
Last but not least, I wish you, your family, and your company a prosperous 2024! See you at the GLA Dubai Conference for a great talk if possible!
I look forward to your feedback regarding your GLA Membership Application.
Best regards,
Ella Liu
GLA Overseas Department
Mobile:
(86) 199 2879 7478
Email:
Company:
GLA Co.,Ltd
Website:
Address:
No. 2109, 21st Floor, HongChang Plaza, No. 2001, Road Shenzhen, China
- The 10th GLA Global Logistics Conference will be held from 29th April to 2nd May 2024 in Dubai, UAE.
Registration Link: https://www.glafamily.com/meeting/registration/Ella
- The 8th GLA Conference in Bangkok, Thailand, online album: https://live.photoplus.cn/live/76884709?accessFrom=qrcode&userType=null&state=STATE
- The 9th GLA Conference in Hainan, China, online album: https://live.photoplus.cn/live/61916679?accessFrom=qrcode&userType=null&state=STATE&code=0717A0200FZ46R10fJ200EJbI337A02u#/live
【Notice Agreement No 7】
7. The GLA president reserves the right to cancel or reject a membership or application.
A company shall cease to be a member of GLA if:
a) the Member does not adhere to GLA terms and conditions;
b) the Member gives notice of resignation in writing to the GLA;
c) the Member has no good reputation in the market; or
d) the Member has bad-debt records on the GLA platform or in the market.
by "Ella Liu" <member139@glafamily.com> - 02:08 - 12 Apr 2024 -
Want to know how much the space sector could be worth by 2035?
On Point
A new WEF–McKinsey report Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Space economy boom. Activity in space is accelerating. In 2023, “backbone” applications for space (such as satellites and services like GPS) made up $330 billion of the global space economy. “Reach” applications, which describe space technologies that companies across industries use to help generate revenues, represented another $300 billion. Over the next decade, space applications are expected to grow at an annual rate that is double the projected rate of GDP growth, McKinsey senior partner Ryan Brukardt and coauthors reveal.
Four key findings. To learn what will shape the future space economy, the World Economic Forum and McKinsey spoke with more than 60 leaders and experts across the globe. The team also used market forecasts and other data to develop original estimates for the size of each part of the space economy, including commercial services and state-sponsored civil and defense applications. To read our key findings on the space economy, download the full report, Space: The $1.8 trillion opportunity for economic growth.
—Edited by Belinda Yu, editor, Atlanta
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:13 - 12 Apr 2024 -
White Label Fuel Monitoring Software
A versatile fuel monitoring system that can be tailored to fit any brand or company's fuel sensor, regardless of type or model.
Catch a glimpse of what our software has to offer
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:00 - 11 Apr 2024 -
Embracing Chaos to Improve System Resilience: Chaos Engineering
Latest articles
If you’re not a subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
Imagine it's the early 2000s, and you're a developer with a bold idea. You want to test your software not in a safe, controlled environment but right where the action is: the production environment. This is where real users interact with your system. Back then, suggesting something like this might have gotten you some strange looks from your bosses. But now, testing in the real world is not just okay; it's often recommended.
Why the big change? A few reasons stand out. Systems today are more complex than ever, pushing us to innovate faster and ensure our services are both reliable and strong. The rise of cloud technology, microservices, and distributed systems has changed the game. We've had to adapt our methods and mindsets accordingly.
Our goal now is to make systems that can handle anything—be it a slowdown or a full-blown outage. Enter Chaos Engineering.
In this issue, we dive into what chaos engineering is all about. We'll break down its key principles, how it's practiced, and examples from the real world. You'll learn how causing a bit of controlled chaos can actually help find and fix weaknesses before they become major problems.
Prepare to see how embracing chaos can lead to stronger, more reliable systems. Let's get started!
What is Chaos Engineering?
So, what exactly is chaos engineering? It's a way to deal with unexpected issues in software development and keep systems up and running. Some folks might think that a server running an app will continue without a hitch forever. Others believe that problems are just part of the deal and that downtime is inevitable.
Chaos engineering strikes a balance between these views. It recognizes that things can go wrong but asserts that we can take steps to prevent these issues from impacting our systems and the performance of our apps.
This approach involves experimenting on our live, production systems to identify weak spots and areas that aren't as reliable as they should be. It's about measuring how much we trust our system's resilience and working to boost that confidence.
However, it's important to understand that being 100% sure nothing will go wrong is unrealistic. Through chaos engineering, we intentionally introduce unexpected events to uncover vulnerabilities. These events can vary widely, such as taking down a server randomly, disrupting a data center, or tampering with load balancers and application replicas.
In short, chaos engineering is about designing experiments that rigorously test our systems' robustness.
Defining Chaos Engineering
There are many ways to describe chaos engineering, but here's a definition that captures its essence well, sourced from https://principlesofchaos.org/.
“Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.”
This definition highlights the core objective of chaos engineering: to ensure our systems can handle the unpredictable nature of real-world operations.
Performance Engineering vs. Chaos Engineering
When we talk about ensuring our systems run smoothly, two concepts often come up: performance engineering and chaos engineering. Let's discuss what sets these two apart and how they might overlap.
Many developers are already familiar with performance engineering, which is in the same family as DevOps. It involves using a combination of tools, processes, and technologies to monitor our system's performance and make continuous improvements. This includes conducting various types of testing, such as load, stress, and endurance tests, all aimed at boosting the performance of our applications.
On the flip side, chaos engineering is about intentionally breaking things. Yes, this includes stress testing, but it's more about observing how systems respond under unexpected stress. Stress testing could be seen as a form of chaos experiment. So, one way to look at it is to consider performance engineering as a subset of chaos engineering or the other way around, depending on how you apply these practices.
Another way to view these two is as distinct disciplines within an organization. One team might focus solely on conducting chaos experiments and learning from the failures, while another might immerse itself in performance engineering tasks like testing and monitoring. Depending on the structure of the organization, the skill sets of the team, and various other factors, we might have separate teams for each discipline or one team that tackles both.
Chaos Engineering in Practice
Let's consider an example to better understand chaos engineering. Imagine we have a system with a load balancer that directs requests to web servers. These servers then connect to a payment service, which, in turn, interacts with a third-party API and a cache service, all located in Availability Zone A. If the payment service fails to communicate with the third-party API or the cache, requests need to be rerouted to Availability Zone B to maintain high availability.
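A toy sketch of such an experiment (the `Zone` and `route` names are hypothetical, not anyone's production code): take down Availability Zone A on purpose and verify that the load balancer still serves the request from Zone B:

```python
class Zone:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"processed {request} in {self.name}"

def route(request, zones):
    """Load balancer: fail over to the next zone when one is unavailable."""
    for zone in zones:
        try:
            return zone.handle(request)
        except ConnectionError:
            continue  # try the next availability zone
    raise RuntimeError("all zones down")

zone_a, zone_b = Zone("zone-a"), Zone("zone-b")
# Chaos experiment: deliberately take down Availability Zone A...
zone_a.healthy = False
# ...and verify the hypothesis that traffic still succeeds via Zone B.
result = route("payment-123", [zone_a, zone_b])
```

This is the essence of a chaos experiment: state a hypothesis ("requests survive the loss of Zone A"), inject the failure, and check whether the system behaves as expected.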
Continue reading this post for free, courtesy of Alex Xu.
A subscription gets you:
An extra deep dive on Thursdays
Full archive
Many expense it with their team's learning budget
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:37 - 11 Apr 2024 -
Maximise ROI: Insights from the 2023 Observability Forecast Report
New Relic
Our latest report unveils pivotal insights for businesses considering observability. The 2023 Observability Forecast, with data collected from 1,700 technology professionals across 15 countries, dives into observability practices and their impacts on costs and revenue.
The report highlights the ROI of observability according to respondents, including driving business efficiency, security, and profitability.
Access the blog now that rounds up key insights to understand the potential impact of observability on your business.
Read Now Need help? Let's get in touch.
This email is sent from an account used for sending messages only. Please do not reply to this email to contact us—we will not get your response.
This email was sent to info@learn.odoo.com Update your email preferences.
For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 07:11 - 11 Apr 2024 -
How can the world jump-start productivity?
On Point
Why economies need productivity growth Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Productivity slowdown. Over the past 25 years, strong growth in productivity has helped raise the world’s living standards, with median economy productivity jumping sixfold, McKinsey Global Institute chair Sven Smit and coauthors reveal. More recently, though, productivity growth has faded, with a near-universal slowdown since the 2008 global financial crisis. In advanced economies, productivity growth fell to less than 1% in the decade between 2012 and 2022. Over the same period, productivity growth also fell in emerging economies.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:05 - 10 Apr 2024 -
Unlock Data Transformation at Sumo Logic's Booth - AWS Summit London
Sumo Logic
Experience our cutting-edge solutions at Booth B36

Dear Mohammad,
I'm thrilled to extend an invitation for you to visit the Sumo Logic booth at the AWS Summit London on the 24th of April at the ExCel London.
Discover how Sumo Logic can transform your approach to data management, ensuring security, scalability, and efficiency. Our experts will showcase our latest innovations and discuss tailored solutions to address your specific needs.
Here are compelling reasons to visit Booth B36:
- Find out how you can power DevSecOps from only one cloud native platform
- Learn more about our $0 data ingest approach
- Explore configuration management and compliance reporting
- Discover how we leverage AI/ML to uncover and mitigate threats
Claim your free entry ticket using this link.
We eagerly anticipate your presence at Booth B36!
Best regards,
Sumo Logic
About Sumo Logic
Sumo Logic is the pioneer in continuous intelligence, a new category of software to address the data challenges presented by digital transformation, modern applications, and cloud computing.

Sumo Logic, Aviation House, 125 Kingsway, London WC2B 6NH, UK
© 2024 Sumo Logic, All rights reserved.
Unsubscribe
by "Sumo Logic" <marketing-info@sumologic.com> - 06:01 - 10 Apr 2024 -
What does it take to become a CFO?
On Point
5 priorities for CFO hopefuls Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Take career risks. What matters most for securing the CFO job? To find out, McKinsey senior partner Andy West and coauthors spoke with former CFOs, such as Arun Nayar, Tyco International’s former EVP and CFO and PepsiCo’s former CFO of global operations. Earlier in his career, Nayar realized that to rise higher, he would need to gain operational know-how. After obtaining a role overseeing finance in PepsiCo’s global operations division, an area he knew nothing about, he formed the No Fear Club, through which he mentors others in finance roles.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:11 - 10 Apr 2024 -
Reddit's Architecture: The Evolutionary Journey
The comprehensive developer resource for B2B User Management (Sponsored)
Building an enterprise-ready, resilient B2B auth is one of the more complex tasks developers face these days. Today, even smaller startups are demanding security features like SSO that used to be the purview of the Fortune 500.
The latest guide from WorkOS covers the essentials of modern day user management for B2B apps — from 101 topics to more advanced concepts that include:
→ SSO, MFA, and sessions
→ Bot policies, org auth policies, and UI considerations
→ Identity linking, email verification, and just-in-time provisioning

This resource also presents an easier alternative for supporting user management.
Reddit's Architecture: The Evolutionary Journey
Reddit was founded in 2005 with the vision to become “the front page of the Internet”.
Over the years, it has evolved into one of the most popular social networks on the planet fostering tens of thousands of communities built around the passions and interests of its members. With over a billion monthly users, Reddit is where people come to participate in conversations on a vast array of topics.
Some interesting numbers that convey Reddit’s incredible popularity are as follows:
Reddit has 1.2 billion unique monthly visitors, turning it into a virtual town square.
Reddit’s monthly active user base has exploded by 366% since 2018, demonstrating the need for online communities.
In 2023 alone, an astonishing 469 million posts flooded Reddit’s servers resulting in 2.84 billion comments and interactions.
Reddit ranked as the 18th most visited website globally in 2023, raking in $804 million in revenue.
Looking at the stats, it’s no surprise that their recent IPO launch was a huge success, propelling Reddit to a valuation of around $6.4 billion.
While the monetary success might be attributed to the leadership team, it wouldn’t have been possible without the fascinating journey of architectural evolution that helped Reddit achieve such popularity.
In this post, we will go through this journey and look at some key architectural steps that have transformed Reddit.
The Early Days of Reddit
Reddit was originally written in Lisp but was rewritten in Python in December 2005.
Lisp was great but the main issue at the time was the lack of widely used and tested libraries. There was rarely more than one library choice for any task and the libraries were not properly documented.
Steve Huffman (one of the founders of Reddit) expressed this problem in his blog:
“Since we're building a site largely by standing on the shoulders of others, this made things a little tougher. There just aren't as many shoulders on which to stand.”
When it came to Python, they initially used a web framework named web.py that was developed by Aaron Swartz (another co-founder of Reddit). Later, in 2009, Reddit started to use Pylons as its web framework.
The Core Components of Reddit’s Architecture
The diagram below shows the core components of Reddit’s high-level architecture.
While Reddit has many moving parts and things have also evolved over the years, this diagram represents the overall scaffolding that supports Reddit.
The main components are as follows:
Content Delivery Network: Reddit uses a CDN from Fastly as a front for the application. The CDN handles a lot of decision logic at the edge to figure out how a particular request will be routed based on the domain and path.
Front-End Applications: Reddit started using jQuery in early 2009. Later on, they also started using TypeScript to redesign their UI and moved to Node.js-based frameworks to embrace a modern web development approach.
The R2 Monolith: In the middle is the giant box known as r2. This is the original monolithic application built using Python and consists of functionalities like Search and entities such as Things and Listings. We will look at the architecture of R2 in more detail in the next section.
From an infrastructure point of view, Reddit decommissioned the last of its physical servers in 2009 and moved the entire website to AWS. They had been one of the early adopters of S3 and were using it to host thumbnails and store logs for quite some time.
However, in 2008, they decided to move batch processing to AWS EC2 to free up more machines to work as application servers. The system worked quite well and in 2009 they completely migrated to EC2.
R2 Deep Dive
As mentioned earlier, r2 is the core of Reddit.
It is a giant monolithic application and has its own internal architecture as shown below:
For scalability reasons, the same application code is deployed and run on multiple servers.
The load balancer sits in the front and performs the task of routing the request to the appropriate server pool. This makes it possible to route different request paths such as comments, the front page, or the user profile.
Expensive operations such as a user voting or submitting a link are deferred to an asynchronous job queue via RabbitMQ. The messages are placed in the queue by the application servers and are handled by the job processors.
From a data storage point of view, Reddit relies on Postgres for its core data model. To reduce the load on the database, they place memcache clusters in front of Postgres. Also, they use Cassandra quite heavily for new features mainly because of its resiliency and availability properties.
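The cache-aside pattern behind that memcache-in-front-of-Postgres setup can be sketched in a few lines. Plain dicts stand in for the memcache cluster and the Postgres tables here; the key names and record shapes are illustrative assumptions, not Reddit's actual schema.

```python
# Cache-aside reads: check the cache first, fall back to the primary
# store on a miss, and invalidate (rather than update) on writes.
postgres = {"link:42": {"title": "Hello", "score": 128}}  # stand-in for Postgres
memcache = {}                                             # stand-in for memcache

def get_link(link_id: int) -> dict:
    key = f"link:{link_id}"
    if key in memcache:          # cache hit: Postgres is never touched
        return memcache[key]
    row = postgres[key]          # cache miss: read from the primary store
    memcache[key] = row          # populate the cache for later readers
    return row

def update_link(link_id: int, fields: dict) -> None:
    key = f"link:{link_id}"
    postgres[key] = {**postgres[key], **fields}
    memcache.pop(key, None)      # invalidate so the next read refetches

first = get_link(42)   # miss, fills the cache
second = get_link(42)  # hit, served from memcache
```

Invalidating on write instead of updating the cache in place keeps the two stores from drifting when concurrent writers race.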
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
The Expansion Phase
As Reddit has grown in popularity, its user base has skyrocketed. To keep users engaged, Reddit has added many new features, and both the scale of the application and its complexity have gone up.
These changes have created a need to evolve the design in multiple areas. While design and architecture are an ever-changing process and small changes continue to occur daily, there have been concrete developments in several key areas.
Let’s look at them in more detail to understand the direction Reddit has taken when it comes to architecture.
GraphQL Federation with Golang Microservices
Reddit started its GraphQL journey in 2017. Within 4 years, the clients of the monolithic application had fully adopted GraphQL.
GraphQL is an API specification that allows clients to request only the data they want. This makes it a great choice for a multi-client system where each client has slightly different data needs.
In early 2021, they also started moving to GraphQL Federation with a few major goals:
Retiring the monolith
Improving concurrency
Encouraging separation of concerns
GraphQL Federation is a way to combine multiple smaller GraphQL APIs (also known as subgraphs) into a single, large GraphQL API (called the supergraph). The supergraph acts as a central point for client applications to send queries and receive data.
When a client sends a query to the supergraph, the supergraph figures out which subgraphs have the data needed to answer that query. It routes the relevant parts of the query to those subgraphs, collects the responses, and sends the combined response back to the client.
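A minimal sketch of that routing step: each subgraph owns a set of top-level fields, and the supergraph splits an incoming query across owners and merges the partial responses. The field and subgraph names below are hypothetical, not Reddit's actual schema.

```python
# Toy supergraph router: map each requested field to its owning
# subgraph, fan the query out, and merge the partial responses.
SUBGRAPHS = {
    "subreddits": lambda fields: {f: f"subreddit-data:{f}" for f in fields},
    "comments":   lambda fields: {f: f"comment-data:{f}" for f in fields},
}
FIELD_OWNERS = {"subreddit": "subreddits", "modList": "subreddits",
                "commentTree": "comments"}

def execute(query_fields):
    plan = {}
    for field in query_fields:                 # route each field to its owner
        plan.setdefault(FIELD_OWNERS[field], []).append(field)
    response = {}
    for subgraph, fields in plan.items():      # fan out, then merge results
        response.update(SUBGRAPHS[subgraph](fields))
    return response

result = execute(["subreddit", "commentTree"])
```

The client only ever talks to `execute`; which subgraph answered each field is invisible to it, which is what lets ownership move from the monolith to Go services over time.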
In 2022, the GraphQL team at Reddit added several new Go subgraphs for core Reddit entities like Subreddits and Comments. These subgraphs took over ownership of existing parts of the overall schema.
During the transition phase, the Python monolith and new Go subgraphs work together to fulfill queries. As more and more functionalities are extracted to individual Go subgraphs, the monolith can be eventually retired.
The below diagram shows this gradual transition.
One major requirement for Reddit was to handle the migration of functionality from the monolith to a new Go subgraph incrementally.
They wanted to ramp up traffic gradually to evaluate error rates and latencies, while retaining the ability to switch back to the monolith in case of any issues.
Unfortunately, the GraphQL Federation specification doesn’t offer a way to support this gradual migration of traffic. Therefore, Reddit went for a Blue/Green subgraph deployment as shown below:
In this approach, the Python monolith and Go subgraph share ownership of the schema. A load balancer sits between the gateway and the subgraphs to send traffic to the new subgraph or the monolith based on a configuration.
With this setup, they could also control the percentage of traffic handled by the monolith or the new subgraph, resulting in better stability of Reddit during the migration journey.
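The percentage-based split can be sketched as a weighted coin flip driven by configuration. The config key name is an assumption for illustration; the point is that rollback is just setting the percentage back to zero.

```python
import random

# Blue/green traffic split: a config value sets the share of requests
# sent to the new Go subgraph; the rest fall back to the monolith.
config = {"subreddit_subgraph_pct": 25}

def route(request_id: str, rng: random.Random) -> str:
    # request_id is unused here; a real router might hash it for stickiness
    if rng.uniform(0, 100) < config["subreddit_subgraph_pct"]:
        return "go-subgraph"
    return "python-monolith"

rng = random.Random(7)
targets = [route(f"req-{i}", rng) for i in range(1000)]
share = targets.count("go-subgraph") / len(targets)  # roughly 0.25
```

Dialing `subreddit_subgraph_pct` from 1 to 100 over days or weeks gives exactly the gradual ramp-up described above, with instant rollback available.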
As of the last update, the migration is still ongoing with minimal disruption to the Reddit experience.
Data Replication with CDC and Debezium
In the early stages, Reddit supported data replication for their databases using WAL segments created by the primary database.
WAL or write-ahead log is a file that records all changes made to a database before they are committed. It ensures that if there’s a failure during a write operation, the changes can be recovered from the log.
To prevent this replication from bogging down the primary database, Reddit used a special tool to continuously archive PostgreSQL WAL files to S3 from where the replica could read the data.
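Write-ahead logging in miniature: every change is appended to a log before it is applied, so a replica can be rebuilt by replaying the log. This shows the concept only, not Postgres's actual WAL segment format.

```python
# Log first, apply second: the log alone is enough to reconstruct state.
wal = []     # stand-in for the WAL segments archived to S3
table = {}

def write(key, value):
    wal.append((key, value))   # record the change before applying it
    table[key] = value

def rebuild_replica(log):
    replica = {}
    for key, value in log:     # replaying the log reproduces the table
        replica[key] = value
    return replica

write("post:1", "hello")
write("post:1", "hello, edited")
write("post:2", "world")
replica = rebuild_replica(wal)
```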
However, there were a few issues with this approach:
Since the daily snapshots ran at night, there were inconsistencies in the data during the day.
Frequent schema changes to databases caused issues with snapshotting the database and replication.
The primary and replica databases ran on EC2 instances, making the replication process fragile. There were multiple failure points such as a failing backup to S3 or the primary node going down.
To make data replication more reliable, Reddit decided to use a streaming Change Data Capture (CDC) solution using Debezium and Kafka Connect.
Debezium is an open-source project that provides a low-latency data streaming platform for CDC.
Whenever a row is added, deleted, or modified in Postgres, Debezium listens to these changes and writes them to a Kafka topic. A downstream connector reads from the Kafka topic and updates the destination table with the changes.
The below diagram shows this process.
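The end-to-end CDC flow can be sketched with an in-memory topic and target table. The event shape is a simplified assumption, loosely modeled on Debezium's create/update/delete op codes; it is not the real Debezium envelope.

```python
# Row changes are captured as events on a topic; a downstream
# connector drains the topic and applies each change to the target.
topic = []     # stand-in for the Kafka topic Debezium writes to
target = {}    # stand-in for the destination table

def capture(op, key, row=None):
    # "c" = create, "u" = update, "d" = delete (Debezium-style op codes)
    topic.append({"op": op, "key": key, "row": row})

def run_connector():
    while topic:
        event = topic.pop(0)
        if event["op"] == "d":
            target.pop(event["key"], None)
        else:
            target[event["key"]] = event["row"]

capture("c", 1, {"title": "first post"})
capture("u", 1, {"title": "first post (edited)"})
capture("c", 2, {"title": "second post"})
capture("d", 2)
run_connector()
```

Because the connector only ever sees ordered change events, the target converges to the source state without nightly snapshots.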
Moving to CDC with Debezium has been a great move for Reddit since they can now perform real-time replication of data to multiple target systems.
Also, instead of bulky EC2 instances, the entire process can be managed by lightweight Debezium pods.
Managing Media Metadata at Scale
Reddit hosts billions of posts containing various media content such as images, videos, gifs, and embedded third-party media.
Over the years, users have been uploading media content at an accelerating pace. Therefore, media metadata became crucial for enhancing searchability and organization for these assets.
There were multiple challenges with Reddit’s old approach to managing media metadata:
The data was distributed and scattered across multiple systems.
There were inconsistent storage formats and varying query patterns for different asset types. In some cases, they were even querying S3 buckets for the metadata information which is extremely inefficient at Reddit scale.
No proper mechanism for auditing changes, analyzing content, and categorizing metadata.
To overcome these challenges, Reddit built a brand new media metadata store with some high-level requirements:
Move all existing media metadata from different systems under a common roof.
Support data retrieval at the scale of 100K read requests per second with less than 50 ms latency.
Support data creation and updates.
The choice of the data store was between Postgres and Cassandra. Reddit finally went with AWS Aurora Postgres due to the challenges with ad-hoc queries for debugging in Cassandra and the potential risk of some data not being denormalized and unsearchable.
The below diagram shows a simplified overview of Reddit’s media metadata storage system.
As you can see, there’s just a simple service interfacing with the database, handling reads and writes through APIs.
Though the design was not complicated, the challenge lay in transferring several terabytes of data from various sources to the new database while ensuring that the system continued to operate correctly.
The migration process consisted of multiple steps:
Enable dual writes into the metadata APIs from clients of media metadata.
Backfill data from older databases to the new metadata store.
Enable dual reads on media metadata from the service clients.
Monitor data comparison for every read request and fix any data issues.
Ramp up read traffic to the new metadata store.
Check out the below diagram that shows this setup in more detail.
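Steps 3 and 4 of that migration (dual reads plus comparison) can be sketched as follows. The store contents and asset keys are illustrative; the idea is that the old system keeps serving traffic while the new store is read in the shadow and mismatches are flagged for repair.

```python
# Dual-read with shadow comparison: serve from the old store, read the
# new store alongside it, and record any divergence for backfill fixes.
old_store = {"img-1": {"w": 640, "h": 480}, "img-2": {"w": 100, "h": 100}}
new_store = {"img-1": {"w": 640, "h": 480}, "img-2": {"w": 100, "h": 200}}
mismatches = []   # one bad backfill planted in new_store above

def read_metadata(asset_id):
    primary = old_store.get(asset_id)   # old system still serves traffic
    shadow = new_store.get(asset_id)    # new store is read for comparison only
    if shadow != primary:
        mismatches.append(asset_id)     # flagged, user response unaffected
    return primary

a = read_metadata("img-1")
b = read_metadata("img-2")
```

Only once the mismatch list stays empty does it become safe to ramp read traffic over to the new store (step 5).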
After the migration was successful, Reddit adopted some scaling strategies for the media metadata store.
Table partitioning using range-based partitioning.
Serving reads from a denormalized JSONB field in Postgres.
Ultimately, they achieved an impressive read latency of 2.6 ms at p50, 4.7 ms at p90, and 17 ms at p99. Also, the data store was generally more available and 50% faster than the previous data system.
Just-in-time Image Optimization
Within the media space, Reddit also serves billions of images per day.
Users upload images for their posts, comments, and profiles. Since these images are consumed on different types of devices, they need to be available in several resolutions and formats. Therefore, Reddit transforms these images for different use cases such as post previews, thumbnails, and so on.
Since 2015, Reddit has relied on third-party vendors to perform just-in-time image optimization. Image handling wasn’t their core competency and therefore, this approach served them well over the years.
However, with an increasing user base and traffic, they decided to move this functionality in-house to manage costs and control the end-to-end user experience.
The below diagram shows the high-level architecture for image optimization setup.
They built two backend services for transforming the images:
The Gif2Vid service resizes and transcodes GIFs to MP4s on-the-fly. Though users love the GIF format, it’s an inefficient choice for animated assets due to its larger file sizes and higher computational resource demands.
The image optimizer service deals with all other image types. It uses govips which is a wrapper around the libvips image manipulation library. The service handles the majority of cache-miss traffic and handles image transformations like blurring, cropping, resizing, overlaying images, and format conversions.
Overall, moving the image optimization in-house was quite successful:
Costs for Gif2Vid conversion were reduced to a mere 0.9% of the original cost.
The p99 cache-miss latency for encoding animated GIFs was down from 20s to 4s.
The bytes served for static images were down by approximately 20%.
Real-Time Protection for Users at Reddit’s Scale
A critical functionality for Reddit is moderating content that violates the policies of the platform. This is essential to keep Reddit safe as a website for the billions of users who see it as a community.
In 2016, they developed a rules engine named Rule-Executor-V1 (REV1) to curb policy-violating content on the site in real time. REV1 enabled the safety team to create rules that would automatically take action based on activities like users posting new content or leaving comments.
For reference, a rule is just a Lua script that is triggered on specific configured events. In practice, this can be a simple piece of code shown below:
In this example, the rule checks whether a post’s text body matches a string “some bad text”. If yes, it performs an asynchronous action on the user by publishing an action to an output Kafka topic.
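The Lua rule itself isn't reproduced in this post, but the behavior described can be approximated with a hedged Python analogue. The event fields, action name, and topic are invented stand-ins for the real schema.

```python
# Toy rule: on a post-submit event, match the body against a bad string
# and publish an asynchronous action to an output topic.
actions_topic = []   # stand-in for the output Kafka topic

def bad_text_rule(event):
    if event["type"] == "post_submit" and "some bad text" in event["body"]:
        actions_topic.append({"action": "report_user",
                              "user": event["author"]})

bad_text_rule({"type": "post_submit", "author": "spammer",
               "body": "buy some bad text now"})
bad_text_rule({"type": "post_submit", "author": "ok_user",
               "body": "hello world"})
```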
However, REV1 needed some major improvements:
It ran on a legacy infrastructure of raw EC2 instances. This wasn’t in line with all modern services on Reddit that were running on Kubernetes.
Each rule ran as a separate process in a REV1 node and required vertical scaling as more rules were launched. Beyond a certain point, vertical scaling became expensive and unsustainable.
REV1 used Python 2.7 which was deprecated.
Rules weren’t version-controlled and it was difficult to understand the history of changes.
Lack of staging environment to test out the rules in a sandboxed manner.
In 2021, the Safety Engineering team within Reddit developed a new streaming infrastructure called Snooron. It was built on top of Flink Stateful Functions to modernize REV1’s architecture. The new system was known as REV2.
The below diagram shows both REV1 and REV2.
Some of the key differences between REV1 and REV2 are as follows:
In REV1, all configuration of rules was done via a web interface. With REV2, the configuration primarily happens through code. However, there are UI utilities to make the process simpler.
In REV1, they use Zookeeper as a store for rules. With REV2, rules are stored in Github for better version control and are also persisted to S3 for backup and periodic updates.
In REV1, each rule had its own process that would load the latest code when triggered. However, this caused performance issues when too many rules were running at the same time. REV2 follows a different approach that uses Flink Stateful Functions for handling the stream of events and a separate Baseplate application that executes the Lua code.
In REV1, the actions triggered by rules were handled by the main R2 application. However, REV2 works differently. When a rule is triggered, it sends out structured Protobuf actions to multiple action topics. A new application called the Safety Actioning Worker, built using Flink Statefun, receives and processes these instructions to carry out the actions.
Reddit’s Feed Architecture
Feeds are the backbone of social media and community-based websites.
Millions of people use Reddit’s feeds every day and it’s a critical component of the website’s overall usability. There were some key goals when it came to developing the feed architecture:
The architecture should support high development velocity and scalability. Since many teams integrate with the feeds, they need the ability to understand, build, and test them quickly.
TTI (Time to Interactive) and scroll performance should be satisfactory since they are critical to user engagement and the overall Reddit experience.
Feeds should be consistent across different platforms such as iOS, Android, and the website.
To support these goals, Reddit built an entirely new, server-driven feeds platform. Some major changes were made in the backend architecture for feeds.
Earlier, each post was represented by a Post object that contained all the information a post may have. It was like sending the kitchen sink over the wire and with new post types, the Post object got quite big over time.
This was also a burden on the client. Each client app contained a bunch of cumbersome logic to determine what should be shown on the UI. Most of the time, this logic was out of sync across platforms.
With the changes to the architecture, they moved away from the big object and instead sent only the description of the exact UI elements the client will render. The backend controlled the type of elements and their order. This approach is also known as Server-Driven UI.
For example, the post unit is represented by a generic Group object that contains an array of Cell objects. The below image shows the change in response structure for the Announcement item and the first post in the feed.
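The contrast with the kitchen-sink Post object can be sketched like this. The cell types and fields below are illustrative assumptions; the real contract is richer, but the shape is the same: the server emits typed cells in render order and the client just draws them.

```python
# Server-driven UI: the backend decides which cells appear and in what
# order; clients render the list without per-platform display logic.
def build_post_group(post):
    cells = [
        {"type": "TitleCell", "text": post["title"]},
        {"type": "ActionBarCell", "score": post["score"]},
    ]
    if post.get("thumbnail"):   # server-side decision, not client logic
        cells.insert(1, {"type": "ThumbnailCell", "url": post["thumbnail"]})
    return {"type": "Group", "cells": cells}

group = build_post_group({"title": "TIL", "score": 999,
                          "thumbnail": "https://example.com/t.png"})
```

Because the decision of what to show lives on the server, iOS, Android, and web cannot drift out of sync the way their duplicated client logic used to.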
Reddit’s Move from Thrift to gRPC
In the initial days, Reddit had adopted Thrift to build its microservices.
Thrift enables developers to define a common interface (or API) that enables different services to communicate with each other, even if they are written in different programming languages. Thrift takes the language-independent interface and generates code bindings for each specific language.
This way, developers can make API calls from their code using syntax that looks natural for their programming language, without having to worry about the underlying cross-language communication details.
Over the years, the engineering teams at Reddit built hundreds of Thrift-based microservices, and though it served them quite well, Reddit’s growing needs made it costly to continue using Thrift.
gRPC came on the scene in 2016 and achieved significant adoption within the Cloud-native ecosystem.
Some of the advantages of gRPC are as follows:
It provided native support for HTTP2 as a transport protocol
There was native support for gRPC in several service mesh technologies such as Istio and Linkerd
Public cloud providers also support gRPC-native load balancers
While gRPC had several benefits, the cost of switching was non-trivial. However, it was a one-time cost whereas building feature parity in Thrift would have been an ongoing maintenance activity.
Reddit decided to make the transition to gRPC. The below diagram shows the design they used to start the migration process:
The main component is the Transitional Shim. Its job is to act as a bridge between the new gRPC protocol and the existing Thrift-based services.
When a gRPC request comes in, the shim converts it into an equivalent Thrift message format and passes it on to the existing code just like native Thrift. When the service returns a response object, the shim converts it back into the gRPC format.
There are three main parts to this design:
The interface definition language (IDL) converter that translates the Thrift service definitions into the corresponding gRPC interface. This component also takes care of adapting framework idioms and differences as appropriate.
A code-generated gRPC servicer that handles the message conversions for incoming and outgoing messages between Thrift and gRPC.
A pluggable module for the services to support both Thrift and gRPC.
This design allowed Reddit to gradually transition to gRPC by reusing its existing Thrift-based service code while controlling the costs and effort required for the migration.
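The shim's round-trip conversion can be sketched in miniature. Both message shapes below are invented stand-ins (real Thrift and gRPC use generated binary structs, not dicts), but the flow matches the description: translate inbound, call the untouched handler, translate outbound.

```python
# Transitional shim: a gRPC-shaped request is converted to the
# Thrift-shaped call the existing handler expects, and back again.
def thrift_handler(thrift_req):            # existing service code, untouched
    return {"thrift_body": thrift_req["thrift_args"]["name"].upper()}

def grpc_to_thrift(grpc_req):
    return {"thrift_args": {"name": grpc_req["payload"]["name"]}}

def thrift_to_grpc(thrift_resp):
    return {"payload": {"result": thrift_resp["thrift_body"]}}

def shim(grpc_req):
    thrift_req = grpc_to_thrift(grpc_req)  # translate the inbound message
    thrift_resp = thrift_handler(thrift_req)
    return thrift_to_grpc(thrift_resp)     # translate the outbound message

resp = shim({"payload": {"name": "reddit"}})
```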
Conclusion
Reddit’s architectural journey has been one of continual evolution, driven by its meteoric growth and changing needs over the years. What began as a monolithic Lisp application was rewritten in Python, but this monolithic approach couldn’t keep pace as Reddit’s popularity exploded.
The company went on an ambitious transition to a service-based architecture. Each new feature and issue they faced prompted a change in the overall design in various areas such as user protection, media metadata management, communication channels, data replication, API management, and so on.
In this post, we’ve attempted to capture the evolution of Reddit’s architecture from the early days to the latest changes based on the available information.
References:
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 9 Apr 2024 -
What’s the state of private markets around the world?
On Point
2024 Global Private Markets Review Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
- Fundraising decline. Macroeconomic headwinds persisted throughout 2023, with rising financing costs and an uncertain growth outlook taking a toll on private markets. Last year, fundraising fell 22% across private market asset classes globally to just over $1 trillion, as of year-end reported data—the lowest total since 2017, McKinsey senior partner Fredrik Dahlqvist and coauthors reveal. As private market managers look to boost performance, a deeper focus on revenue growth and margin expansion will be needed now more than ever.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:08 - 9 Apr 2024 -
Feeling good: A guide to empathetic leadership
I feel for you
An “empath” is defined as a person who is highly attuned to the feelings of others, often to the point of experiencing those emotions themselves. This may sound like a quality that’s more helpful in someone’s personal life than in their work life, where emotions should presumably be kept at bay. Yet it turns out that the opposite may be true: displaying empathy in appropriate ways can be a powerful component of effective leadership. And when it’s combined with other factors such as technology and innovation, it can lead to exceptional business performance, such as breakthroughs in customer service. This week, we explore some strategies for leaders to enhance their empathy quotient.
“When I train leaders in empathy, one of the first hurdles I need to get over is this stereotype that empathy is too soft and squishy for the work environment,” says Stanford University research psychologist Jamil Zaki in an episode of our McKinsey Talks Talent podcast. Speaking with McKinsey partners Bryan Hancock and Brooke Weddle, Zaki suggests that, contrary to the stereotype, empathy can be a workplace superpower, helping to alleviate employee burnout and spurring greater productivity and creativity. Boosting managers’ empathetic skills is more important now than ever: “There’s evidence that during the time that social media has taken over so much of our lives, people’s empathy has also dropped,” says Zaki. Ways to improve leaders’ empathy in the workplace include infusing more empathy into regular conversations, rewarding empathic behavior, and automating some non-human-centric responsibilities so that managers can focus on mentorship.
That’s business strategist and author John Horn on why it’s important for leaders to try to understand the mindset of their competitors. “When people become more senior in business, they’ll need to think about competitors directly,” he says in a conversation with McKinsey on cognitive empathy. While it may not be possible to ask competitors about their strategies, technology such as AI, machine learning, and big data may be able to provide competitive-insight tools based on a predictive factor. “You could possibly use that data to look across various industries and learn [for example] what has been the response when prices were raised? That’s thinking about the likely behavior your competitors will have,” says Horn.
The qualities that make a leader empathetic can be found within oneself and in the situations of daily life. “Try to unlock the energy and the aspiration that lies within us,” suggests McKinsey senior partner and chief legal officer Pierre Gentin in a McKinsey Legal Podcast interview. What Gentin calls “arrows of inspiration”—whether they come from art, music, literature, or people—may be available to leaders every day. He believes that “a lot of this is just opening our eyes and really recognizing what’s in front of us and transitioning from the small gauge and the negative to the big gauge and the transformative.” For example, in an organizational context, people may need to look beyond short-term concerns and consider “what they can do outside of the four corners of the definition of their role,” he says. “How can we collaborate together to do creative things and to do exciting things and to do worthwhile things?”
There are times when empathy can be too much of a good thing, or leaders can show it in the wrong way. An obvious mistake is to pay lip service to the idea but take limited or no action. In one case, an IT group in a company worked overtime for many weeks during the pandemic. Leaders praised the group lavishly, but the only action they took was to buy everyone on the team a book on time management. Another error may be to avoid forming the strong relationships that lead to good communication, understanding, and compassion. And empathy by itself—if it’s not balanced by other aspects of emotional intelligence such as self-regulation and composure—can actually derail leadership effectiveness.
Lead empathetically.
– Edited by Rama Ramaswami, senior editor, New York
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:37 - 8 Apr 2024 -
Which gen AI operating models can help banking leaders compete?
On Point
4 strategies for adopting gen AI
- Embracing gen AI. As banks and other financial institutions move to quickly find uses for gen AI, challenges are emerging. Getting gen AI right could produce enormous value; getting it wrong, however, could be costly. Financial institutions that use centralized gen AI operating models are reaping the biggest rewards, McKinsey senior partner Kevin Buehler and coauthors reveal. A recent review of 16 of the largest financial institutions in Europe and the US found that more than 50% have adopted a more centrally led organization for gen AI.
- Benefits of centralization. Centralization allows enterprises to focus on a handful of use cases, rapidly moving through experimentation to implement them widely. About 70% of banks and other institutions with highly centralized gen AI operating models have progressed to putting gen AI applications into production. Consider four types of gen AI operating models that financial institutions are using as they implement the technology and visit McKinsey Digital to see how leaders are turning technology’s potential into value for their companies.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:06 - 7 Apr 2024 -
The quarter’s top Themes
McKinsey&Company
At #1: 10 key takeaways from Davos 2024 In the first quarter of 2024, our top ten posts from McKinsey Themes look at highlights from the World Economic Forum’s annual meeting, generative AI, the state of talent, and more. At No. 1 is “10 key takeaways from Davos 2024,” which includes a collection of must-read insights for today’s business leaders by McKinsey’s Michael Chui, Homayoun Hatami, Dana Maor, Kate Smaje, Bob Sternfels, Rodney Zemmel, and others. Read on for our full top 10.
by "McKinsey Top Ten" <publishing@email.mckinsey.com> - 06:09 - 7 Apr 2024 -
The week in charts
The Week in Charts
Job shifts due to generative AI, 6G’s economic potential, and more
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:20 - 6 Apr 2024 -
EP106: How Does JavaScript Work?
This week’s system design refresher:
Roadmap for Learning SQL (Youtube video)
Can Kafka lose messages?
9 Best Practices for building microservices
Roadmap for Learning Cyber Security
How does JavaScript Work?
SPONSOR US
New Relic IAST exceeds OWASP Benchmark with accuracy scoring above 100% (Sponsored)
New Relic Interactive Application Security Testing (IAST) allows security and engineering teams to save time by focusing on real application security problems with zero false positives, as validated by the OWASP benchmark result of 100% accuracy.
Roadmap for Learning SQL
Can Kafka lose messages?
Error handling is one of the most important aspects of building reliable systems.
Today, we will discuss an important topic: Can Kafka lose messages?
A common belief among many developers is that Kafka, by its very design, guarantees no message loss. However, understanding the nuances of Kafka's architecture and configuration is essential to truly grasp how and when it might lose messages, and more importantly, how to prevent such scenarios.
The diagram below shows how a message can be lost during its lifecycle in Kafka.

Producer
When we call producer.send() to send a message, it doesn't get sent to the broker directly. There are two threads and a queue involved in the message-sending process:
1. Application thread
2. Record accumulator
3. Sender thread (I/O thread)
We need to configure proper ‘acks’ and ‘retries’ for the producer to make sure messages are sent to the broker.

Broker
A broker cluster should not lose messages when it is functioning normally. However, we need to understand which extreme situations might lead to message loss:
1. Messages are usually flushed to disk asynchronously for higher I/O throughput, so if the broker instance goes down before the flush happens, the messages are lost.
2. The replicas in the Kafka cluster need to be properly configured to hold a valid copy of the data, and replica synchronization needs to be deterministic, so that a lagging replica is never treated as a valid leader.
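Both broker-side concerns map to well-known Kafka broker and topic configuration keys. As a sketch (again, the dictionary is only for illustration):

```python
# Broker/topic settings that address the two failure modes above
# (standard Kafka configuration keys; the dict form is illustrative only).
topic_durability_config = {
    "replication.factor": 3,                  # each partition is held by 3 brokers
    "min.insync.replicas": 2,                 # acks=all needs at least 2 live copies
    "unclean.leader.election.enable": False,  # never promote an out-of-sync replica
}
```

With a replication factor of 3 and `min.insync.replicas=2`, a record acknowledged with `acks=all` survives the loss of any single broker even if that broker's page cache was never flushed, and disabling unclean leader election prevents a lagging replica from becoming leader and silently discarding committed records.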
Consumer
Kafka offers different ways to commit offsets. Auto-committing might acknowledge records before they are actually processed. If the consumer goes down in the middle of processing, some records may never be processed.
A good practice is to combine both synchronous and asynchronous commits: use asynchronous commits in the processing loop for higher throughput, and a synchronous commit in exception handling to make sure the last offset is always committed.
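That combination can be sketched as follows. The `poll`/`commit_async`/`commit_sync` names mirror common Kafka consumer clients, but treat this as the shape of the loop rather than an exact API:

```python
def consume_loop(consumer, process):
    """Process records with async commits in the hot loop and one final
    synchronous commit on the way out (including when an exception unwinds)."""
    try:
        while True:
            records = consumer.poll(timeout=1.0)
            if not records:
                break  # a real loop would keep polling; the sketch stops when drained
            for record in records:
                process(record)        # process BEFORE committing the offset
            consumer.commit_async()    # non-blocking: high throughput
    finally:
        consumer.commit_sync()         # blocking: the last offset is never lost
```

The `finally` block is the key detail: even if `process` raises, the consumer blocks once to commit the last fully processed offset before shutting down.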
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
9 Best Practices for building microservices
Creating a system using microservices is extremely difficult unless you follow some strong principles.
9 best practices that you must know before building microservices:
Design For Failure
A distributed system with microservices is going to fail.
You must design the system to tolerate failure at multiple levels such as infrastructure, database, and individual services. Use circuit breakers, bulkheads, or graceful degradation methods to deal with failures.

Build Small Services
A microservice should not do multiple things at once.
A good microservice is designed to do one thing well.

Use lightweight protocols for communication
Communication is the core of a distributed system.
Microservices must talk to each other using lightweight protocols. Options include REST, gRPC, or message brokers.

Implement service discovery
To communicate with each other, microservices need to discover each other over the network.
Implement service discovery using tools such as Consul, Eureka, or Kubernetes Services.

Data Ownership
In microservices, data should be owned and managed by the individual services.
The goal should be to reduce coupling between services so that they can evolve independently.

Use resiliency patterns
Implement specific resiliency patterns to improve the availability of the services.
Examples: retry policies, caching, and rate limiting.

Security at all levels
In a microservices-based system, the attack surface is quite large. You must implement security at every level of the service communication path.

Centralized logging
Logs are important for finding issues in a system. With multiple services, they become critical.

Use containerization techniques
To deploy microservices in an isolated manner, use containerization techniques.
Tools like Docker and Kubernetes can help with this as they are meant to simplify the scaling and deployment of a microservice.
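To make the “design for failure” and resiliency points above concrete, here is a minimal circuit-breaker sketch. It is a deliberately simplified toy, not a production implementation; in practice, libraries such as resilience4j or Polly handle this:

```python
import time


class CircuitBreaker:
    """Toy circuit breaker: after `max_failures` consecutive failures the
    circuit opens and calls fail fast until `reset_timeout` has elapsed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Failing fast while the circuit is open protects a struggling downstream service from a flood of doomed requests, which is exactly the kind of graceful degradation the first practice calls for.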
Over to you: what other best practice would you recommend?

Roadmap for Learning Cyber Security
By Henry Jiang. Redrawn by ByteByteGo.
Cybersecurity is crucial for protecting information and systems from theft, damage, and unauthorized access. Whether you're a beginner or looking to advance your technical skills, there are numerous resources and paths you can take to learn more about cybersecurity. Here are some structured suggestions to help you get started or deepen your knowledge:
Security Architecture
Frameworks & Standards
Application Security
Risk Assessment
Enterprise Risk Management
Threat Intelligence
Security Operation
How does JavaScript Work?
The cheat sheet below shows the most important characteristics of JavaScript.
Interpreted Language
JavaScript code is executed by the browser or JavaScript engine rather than being compiled into machine language beforehand. This makes it highly portable across different platforms. Modern engines such as V8 utilize Just-In-Time (JIT) technology to compile code into directly executable machine code.

Functions are First-Class Citizens
In JavaScript, functions are treated as first-class citizens, meaning they can be stored in variables, passed as arguments to other functions, and returned from functions.

Dynamic Typing
JavaScript is a loosely typed, or dynamic, language, meaning we don't have to declare a variable's type ahead of time, and the type can change at runtime.

Asynchronous Programming
JavaScript supports asynchronous programming, allowing operations like reading files, making HTTP requests, or querying databases to run in the background and trigger callbacks or promises when complete. This is particularly useful in web development for improving performance and user experience.

Prototype-Based OOP
Unlike class-based object-oriented languages, JavaScript uses prototypes for inheritance. This means that objects can inherit properties and methods from other objects.

Automatic Garbage Collection
Garbage collection in JavaScript is a form of automatic memory management. The primary goal of garbage collection is to reclaim memory occupied by objects that are no longer in use by the program, which helps prevent memory leaks and optimizes the performance of the application.

Compared with Other Languages
JavaScript is special compared to programming languages like Python or Java because of its position as a major language for web development.
While Python is known for its code readability and versatility, and Java for its structure and robustness, JavaScript is an interpreted language that runs directly in the browser without compilation, emphasizing flexibility and dynamism.

Relationship with TypeScript
TypeScript is a superset of JavaScript, which means that it extends JavaScript by adding features to the language, most notably type annotations. This relationship allows any valid JavaScript code to also be considered valid TypeScript code.

Popular JavaScript Frameworks
React is known for its flexibility and large number of community-driven plugins, while Vue is clean and intuitive with highly integrated and responsive features. Angular, on the other hand, offers a strict set of development specifications for enterprise-level JS development.
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 6 Apr 2024 -
Gen AI: How to capture value and mitigate risk
Plus, how the world can accelerate productivity growth

McKinsey research has estimated that generative AI (gen AI) has the potential to add up to $4.4 trillion in economic value to the global economy. But companies are quickly realizing that capturing this value is harder than expected. There’s also a growing recognition that gen AI opportunities are accompanied by considerable risks. In this month’s first featured story, McKinsey senior partners Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel outline why companies have to rewire how they work with gen AI to gain a real competitive advantage from the technology. Our second featured story explores why business leaders need to revise their technology playbooks and drive the integration of effective risk management from the start of their engagement with gen AI. Other highlights include the following topics:
Investing in productivity growth
It’s time to raise investment and catch the next productivity wave.
Pave the way

McKinsey Global Private Markets Review 2024
Private markets entered a slower era in 2023, with macroeconomic headwinds, rising financing costs and an uncertain growth outlook weighing on fundraising, deal activity and performance.
Boost performance in a new era

The CEO’s secret to successful leadership: CEO Excellence revisited
Three McKinsey senior partners—and, now, international best-selling authors—reflect on the far-reaching impact of their book, CEO Excellence, two years after its release.
Dare to lead

Working nine to thrive
Better aligning employment with modifiable drivers of health could unlock years of higher-quality life and create trillions of dollars of economic value.
Support employees

Analyzing the CEO–CMO relationship and its effect on growth
CEOs acknowledge the expertise and importance of chief marketing officers and their role in helping the company grow, yet there’s still a strategic disconnect in the C-suite. Here’s how to close the gap.
Take a holistic approach

How the world’s best hotels deliver exceptional customer experience
Luxury hotels know that the secret to top-tier customer experience is a culture of excellence.
Create a culture of excellence

McKinsey Themes
Browse our essential reading on the topics that matter.
Get up to speed

McKinsey Explainers
Find direct answers to complex questions, backed by McKinsey’s expert insights.
Learn more

McKinsey on Books
Explore this month’s best-selling business books prepared exclusively for McKinsey Publishing by Circana.
See the lists

McKinsey Chart of the Day
See our daily chart that helps explain a changing world—as we strive for sustainable, inclusive growth.
Dive in

McKinsey Classics
Significant improvements in risk management can be gained quickly through selective digitization—but capabilities must be test-hardened before release. Read our 2017 classic “Digital risk: Transforming risk management for the 2020s” to learn more.
Rewind

Leading Off
Our Leading Off newsletter features revealing research and inspiring interviews to empower you—and those you lead.
Subscribe now

—Edited by Eleni Kostopoulos, managing editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you are a registered member of our Monthly Highlights newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Highlights" <publishing@email.mckinsey.com> - 11:27 - 6 Apr 2024 -
CEO Excellence—two years on
The Shortlist
Four new insights

Curated by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Technology is trending like never before, with gen AI at the crest of the wave. But our decades of experience working with executives have taught us an important lesson: it’s never just about the tech. To get the most out of digital investments, companies need to pull other levers, like leadership, change management, talent, and innovation. In this edition of the CEO Shortlist, we highlight some of the catalysts that can unleash technology’s full value, and we explore the latest on CEO excellence. We hope you enjoy the read.
—Liz and Homayoun
Step away from the grindstone. Nobody likes the admin side of their job. Gen AI can help. The technology is brilliant at handling dull, repetitive tasks, and companies are rapidly installing applications to do them. This creates a critical opportunity for leaders: deploying gen AI to free up employees’ time for work better suited to humans, such as creative, collaborative thinking. Organizations that succeed are likely to build a long-term competitive edge.
Put on your thinking cap with “The human side of generative AI: Creating a path to productivity,” a new article by Aaron De Smet, Sandra Durth, Bryan Hancock, Marino Mugayar-Baldocchi, and Angelika Reich.

What got you here may not get you there. In business—and maybe more broadly—innovation is always needed to outcompete in times of uncertainty. But innovation itself needs a shake-up every once in a while. That’s where gen AI comes in. The most innovative organizations have already deployed gen AI tools to help invigorate their creativity and agility—and they’re reaping the rewards.
It’s not too late to give your organization a gen-AI-powered innovation boost. Learn more with “Driving innovation with generative AI,” a new interview with McKinsey’s Matt Banholzer and Laura LaBerge from our Inside the Strategy Room podcast.

Picture this. Most companies are thinking about how gen AI chatbots can help produce text. But don’t forget that these new tools can draw too. Almost every company uses some form of industrial design, whether to create widgets or websites. Gen AI tools can open new doors of creativity and speed—but human designers are far from obsolete.
Draw up a chair and read “Generative AI fuels creative physical product design but is no magic wand,” by Bryce Booth, Jack Donohew, Chris Wlezien, and Winnie Wu.

It’s a wide world. Start-ups. Founder-led companies. Portfolio companies. Government organizations. Not-for-profits. It’s been two years since we published CEO Excellence: The Six Mindsets That Distinguish the Best Leaders from the Rest, and one thing we’ve learned is that the excellent leadership qualities we’ve distilled in this book are applicable way beyond the purely corporate audience we imagined at the outset. Find out more as we catch up with the book’s authors and learn about the journey their international bestseller has taken them on.
Hitch a ride with “The CEO’s secret to successful leadership: CEO Excellence revisited,” by Carolyn Dewar, Scott Keller, and Vikram Malhotra. (And read more from the interview here and here.)

We hope you find these ideas inspiring and helpful. See you next time with four more McKinsey ideas for the CEO and others in the C-suite.
You received this email because you subscribed to The CEO Shortlist newsletter.
by "McKinsey CEO Shortlist" <publishing@email.mckinsey.com> - 04:56 - 5 Apr 2024