Maximise ROI: Insights from the 2023 Observability Forecast Report
New Relic
Our latest report unveils pivotal insights for businesses considering observability. The 2023 Observability Forecast, with data collected from 1,700 technology professionals across 15 countries, dives into observability practices and their impacts on costs and revenue.
The report highlights the ROI of observability according to respondents, including driving business efficiency, security, and profitability.
Read the blog now for a roundup of key insights into the potential impact of observability on your business.
Read Now
Need help? Let's get in touch.
This email is sent from an account used for sending messages only. Please do not reply to this email to contact us—we will not get your response.
This email was sent to info@learn.odoo.com Update your email preferences.
For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 07:11 - 11 Apr 2024 -
How can the world jump-start productivity?
On Point
Why economies need productivity growth
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:05 - 10 Apr 2024 -
Unlock Data Transformation at Sumo Logic's Booth - AWS Summit London
Sumo Logic
Experience our cutting-edge solutions at Booth B36
Dear Mohammad,
I'm thrilled to extend an invitation for you to visit the Sumo Logic booth at the AWS Summit London on the 24th of April at the ExCel London.
Discover how Sumo Logic can transform your approach to data management, ensuring security, scalability, and efficiency. Our experts will showcase our latest innovations and discuss tailored solutions to address your specific needs.
Here are compelling reasons to visit Booth B36:
- Find out how you can power DevSecOps from only one cloud native platform
- Learn more about our $0 data ingest approach
- Explore configuration management and compliance reporting
- Discover how we leverage AI/ML to uncover and mitigate threats
Claim your free entry ticket using this link.
We eagerly anticipate your presence at Booth B36!
Best regards,
Sumo Logic
About Sumo Logic
Sumo Logic is the pioneer in continuous intelligence, a new category of software to address the data challenges presented by digital transformation, modern applications, and cloud computing.
Sumo Logic, Aviation House, 125 Kingsway, London WC2B 6NH, UK
© 2024 Sumo Logic, All rights reserved. Unsubscribe
by "Sumo Logic" <marketing-info@sumologic.com> - 06:01 - 10 Apr 2024 -
What does it take to become a CFO?
On Point
5 priorities for CFO hopefuls
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Take career risks. What matters most for securing the CFO job? To find out, McKinsey senior partner Andy West and coauthors spoke with former CFOs, such as Arun Nayar, Tyco International’s former EVP and CFO and PepsiCo’s former CFO of global operations. Earlier in his career, Nayar realized that to rise higher, he would need to gain operational know-how. After obtaining a role overseeing finance in PepsiCo’s global operations division, an area he knew nothing about, he formed the No Fear Club, through which he mentors others in finance roles.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:11 - 10 Apr 2024 -
Reddit's Architecture: The Evolutionary Journey
The comprehensive developer resource for B2B User Management (Sponsored)
Building an enterprise-ready, resilient B2B auth is one of the more complex tasks developers face these days. Today, even smaller startups are demanding security features like SSO that used to be the purview of the Fortune 500.
The latest guide from WorkOS covers the essentials of modern day user management for B2B apps — from 101 topics to more advanced concepts that include:
→ SSO, MFA, and sessions
→ Bot policies, org auth policies, and UI considerations
→ Identity linking, email verification, and just-in-time provisioning
This resource also presents an easier alternative for supporting user management.
Reddit's Architecture: The Evolutionary Journey
Reddit was founded in 2005 with the vision to become “the front page of the Internet”.
Over the years, it has evolved into one of the most popular social networks on the planet fostering tens of thousands of communities built around the passions and interests of its members. With over a billion monthly users, Reddit is where people come to participate in conversations on a vast array of topics.
Some interesting numbers that convey Reddit’s incredible popularity are as follows:
Reddit has 1.2 billion unique monthly visitors, turning it into a virtual town square.
Reddit’s monthly active user base has exploded by 366% since 2018, demonstrating the need for online communities.
In 2023 alone, an astonishing 469 million posts flooded Reddit’s servers resulting in 2.84 billion comments and interactions.
Reddit ranked as the 18th most visited website globally in 2023, raking in $804 million in revenue.
Looking at the stats, it’s no surprise that their recent IPO launch was a huge success, propelling Reddit to a valuation of around $6.4 billion.
While the monetary success might be attributed to the leadership team, it wouldn’t have been possible without the fascinating journey of architectural evolution that helped Reddit achieve such popularity.
In this post, we will go through this journey and look at some key architectural steps that have transformed Reddit.
The Early Days of Reddit
Reddit was originally written in Lisp but was rewritten in Python in December 2005.
Lisp was great but the main issue at the time was the lack of widely used and tested libraries. There was rarely more than one library choice for any task and the libraries were not properly documented.
Steve Huffman (one of the founders of Reddit) expressed this problem in his blog:
“Since we're building a site largely by standing on the shoulders of others, this made things a little tougher. There just aren't as many shoulders on which to stand.”
When it came to Python, they initially used a web framework named web.py that was developed by Aaron Swartz (another co-founder of Reddit). Later, in 2009, Reddit started to use Pylons as its web framework.
The Core Components of Reddit’s Architecture
The below diagram shows the core components of Reddit’s high-level architecture.
While Reddit has many moving parts and things have also evolved over the years, this diagram represents the overall scaffolding that supports Reddit.
The main components are as follows:
Content Delivery Network: Reddit uses a CDN from Fastly as a front for the application. The CDN handles a lot of decision logic at the edge to figure out how a particular request will be routed based on the domain and path.
Front-End Applications: Reddit started using jQuery in early 2009. Later on, they also started using Typescript to redesign their UI and moved to Node.js-based frameworks to embrace a modern web development approach.
The R2 Monolith: In the middle is the giant box known as r2. This is the original monolithic application built using Python and consists of functionalities like Search and entities such as Things and Listings. We will look at the architecture of R2 in more detail in the next section.
From an infrastructure point of view, Reddit decommissioned the last of its physical servers in 2009 and moved the entire website to AWS. They had been one of the early adopters of S3 and were using it to host thumbnails and store logs for quite some time.
However, in 2008, they decided to move batch processing to AWS EC2 to free up more machines to work as application servers. The system worked quite well and in 2009 they completely migrated to EC2.
R2 Deep Dive
As mentioned earlier, r2 is the core of Reddit.
It is a giant monolithic application and has its own internal architecture as shown below:
For scalability reasons, the same application code is deployed and run on multiple servers.
The load balancer sits in the front and performs the task of routing the request to the appropriate server pool. This makes it possible to route different request paths such as comments, the front page, or the user profile.
Expensive operations such as a user voting or submitting a link are deferred to an asynchronous job queue via RabbitMQ. The messages are placed in the queue by the application servers and are handled by the job processors.
From a data storage point of view, Reddit relies on Postgres for its core data model. To reduce the load on the database, they place memcache clusters in front of Postgres. Also, they use Cassandra quite heavily for new features mainly because of its resiliency and availability properties.
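As a rough sketch, the vote write path described above can be modeled like this (an in-memory queue stands in for RabbitMQ, and a dict stands in for Postgres; all names are illustrative, not Reddit's actual code):

```python
from collections import deque

# Stand-in for a RabbitMQ queue: application servers enqueue expensive
# work, and job processors drain it asynchronously.
job_queue = deque()
vote_totals = {}  # link_id -> score (stand-in for the Postgres data model)

def handle_vote_request(link_id, delta):
    """Fast path on the application server: enqueue and return."""
    job_queue.append({"type": "vote", "link_id": link_id, "delta": delta})
    return "accepted"

def job_processor():
    """Slow path: a worker applies queued votes to the data store."""
    while job_queue:
        job = job_queue.popleft()
        if job["type"] == "vote":
            vote_totals[job["link_id"]] = (
                vote_totals.get(job["link_id"], 0) + job["delta"]
            )

handle_vote_request("t3_abc", +1)
handle_vote_request("t3_abc", +1)
handle_vote_request("t3_xyz", -1)
job_processor()
```

In production the queue is durable and the processors run on separate machines, but the shape is the same: a fast enqueue on the request path, with the heavy work done off it.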
The Expansion Phase
As Reddit has grown in popularity, its user base has skyrocketed. To keep the users engaged, Reddit has added a lot of new features. Also, the scale of the application and its complexity has gone up.
These changes have created a need to evolve the design in multiple areas. While design and architecture are an ever-changing process and small changes continue to occur daily, there have been concrete developments in several key areas.
Let’s look at them in more detail to understand the direction Reddit has taken when it comes to architecture.
GraphQL Federation with Golang Microservices
Reddit started its GraphQL journey in 2017. Within 4 years, the clients of the monolithic application had fully adopted GraphQL.
GraphQL is an API specification that allows clients to request only the data they want. This makes it a great choice for a multi-client system where each client has slightly different data needs.
In early 2021, they also started moving to GraphQL Federation with a few major goals:
Retiring the monolith
Improving concurrency
Encouraging separation of concerns
GraphQL Federation is a way to combine multiple smaller GraphQL APIs (also known as subgraphs) into a single, large GraphQL API (called the supergraph). The supergraph acts as a central point for client applications to send queries and receive data.
When a client sends a query to the supergraph, the supergraph figures out which subgraphs have the data needed to answer that query. It routes the relevant parts of the query to those subgraphs, collects the responses, and sends the combined response back to the client.
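Reduced to its essentials, that routing step can be sketched as follows (the schema and subgraph names are hypothetical, not Reddit's actual supergraph):

```python
# Toy supergraph router: each top-level field is owned by exactly one
# subgraph. Schema and subgraph names are invented for illustration.
SUBGRAPHS = {
    "subreddit-subgraph": lambda args: {"name": args["name"], "subscribers": 1000},
    "comment-subgraph": lambda args: {"id": args["id"], "body": "hello"},
}

# Which subgraph owns which part of the supergraph schema.
FIELD_OWNERS = {"subreddit": "subreddit-subgraph", "comment": "comment-subgraph"}

def execute(query):
    """query maps each requested top-level field to its arguments. The
    gateway routes every field to the owning subgraph, then merges the
    partial responses into a single payload for the client."""
    response = {}
    for field, args in query.items():
        response[field] = SUBGRAPHS[FIELD_OWNERS[field]](args)
    return response

result = execute({"subreddit": {"name": "r/python"}, "comment": {"id": "c1"}})
```

A real federation gateway also handles nested entity resolution across subgraphs, but the ownership-and-merge idea is the core of it.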
In 2022, the GraphQL team at Reddit added several new Go subgraphs for core Reddit entities like Subreddits and Comments. These subgraphs took over ownership of existing parts of the overall schema.
During the transition phase, the Python monolith and new Go subgraphs work together to fulfill queries. As more and more functionalities are extracted to individual Go subgraphs, the monolith can be eventually retired.
The below diagram shows this gradual transition.
One major requirement for Reddit was to handle the migration of functionality from the monolith to a new Go subgraph incrementally.
They wanted to ramp up traffic gradually to evaluate error rates and latencies, while retaining the ability to switch back to the monolith if any issues arose.
Unfortunately, the GraphQL Federation specification doesn’t offer a way to support this gradual migration of traffic. Therefore, Reddit went for a Blue/Green subgraph deployment as shown below:
In this approach, the Python monolith and Go subgraph share ownership of the schema. A load balancer sits between the gateway and the subgraphs to send traffic to the new subgraph or the monolith based on a configuration.
With this setup, they could also control the percentage of traffic handled by the monolith or the new subgraph, resulting in better stability of Reddit during the migration journey.
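A minimal sketch of that configuration-driven traffic split (the names and shape are assumptions, not Reddit's actual load balancer logic):

```python
import random

def route(go_subgraph_pct, rng=random.random):
    """Send roughly go_subgraph_pct percent of requests to the new Go
    subgraph and the rest to the Python monolith. Dialing the config
    value back to 0 rolls all traffic back to the monolith."""
    return "go-subgraph" if rng() * 100 < go_subgraph_pct else "monolith"

# Start small, watch error rates and latencies, then ramp up.
assert route(0, rng=lambda: 0.0) == "monolith"        # fully rolled back
assert route(100, rng=lambda: 0.99) == "go-subgraph"  # fully migrated
```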
As of the last update, the migration is still ongoing with minimal disruption to the Reddit experience.
Data Replication with CDC and Debezium
In the early stages, Reddit supported data replication for their databases using WAL segments created by the primary database.
WAL or write-ahead log is a file that records all changes made to a database before they are committed. It ensures that if there’s a failure during a write operation, the changes can be recovered from the log.
To prevent this replication from bogging down the primary database, Reddit used a special tool to continuously archive PostgreSQL WAL files to S3 from where the replica could read the data.
However, there were a few issues with this approach:
Since the daily snapshots ran at night, there were inconsistencies in the data during the day.
Frequent schema changes to databases caused issues with snapshotting the database and replication.
The primary and replica databases ran on EC2 instances, making the replication process fragile. There were multiple failure points such as a failing backup to S3 or the primary node going down.
To make data replication more reliable, Reddit decided to use a streaming Change Data Capture (CDC) solution using Debezium and Kafka Connect.
Debezium is an open-source project that provides a low-latency data streaming platform for CDC.
Whenever a row is added, deleted, or modified in Postgres, Debezium listens to these changes and writes them to a Kafka topic. A downstream connector reads from the Kafka topic and updates the destination table with the changes.
The below diagram shows this process.
Moving to CDC with Debezium has been a great move for Reddit since they can now perform real-time replication of data to multiple target systems.
Also, instead of bulky EC2 instances, the entire process can be managed by lightweight Debezium pods.
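Conceptually, the downstream connector's job is small: read each change event and mirror it into the target store. A toy version (the event shape here is heavily simplified; real Debezium events carry a richer envelope):

```python
# Apply a stream of Debezium-style change events to a target table.
# op codes follow Debezium's convention: c=create, u=update, d=delete.
def apply_change_event(target, event):
    key = event["key"]
    if event["op"] in ("c", "u"):
        target[key] = event["after"]   # upsert the new row image
    elif event["op"] == "d":
        target.pop(key, None)          # remove the deleted row

replica = {}  # stand-in for the destination table
events = [
    {"op": "c", "key": 1, "after": {"id": 1, "title": "hello"}},
    {"op": "u", "key": 1, "after": {"id": 1, "title": "hello, world"}},
    {"op": "c", "key": 2, "after": {"id": 2, "title": "second"}},
    {"op": "d", "key": 2, "after": None},
]
for e in events:
    apply_change_event(replica, e)
```

Because every insert, update, and delete flows through as an ordered event stream, the replica converges on the primary's state without nightly snapshots.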
Managing Media Metadata at Scale
Reddit hosts billions of posts containing various media content such as images, videos, gifs, and embedded third-party media.
Over the years, users have been uploading media content at an accelerating pace. Therefore, media metadata became crucial for enhancing searchability and organization for these assets.
There were multiple challenges with Reddit’s old approach to managing media metadata:
The data was distributed and scattered across multiple systems.
There were inconsistent storage formats and varying query patterns for different asset types. In some cases, they were even querying S3 buckets for the metadata information which is extremely inefficient at Reddit scale.
No proper mechanism for auditing changes, analyzing content, and categorizing metadata.
To overcome these challenges, Reddit built a brand new media metadata store with some high-level requirements:
Move all existing media metadata from different systems under a common roof.
Support data retrieval at the scale of 100K read requests per second with less than 50 ms latency.
Support data creation and updates.
The choice of the data store was between Postgres and Cassandra. Reddit finally went with AWS Aurora Postgres due to the challenges with ad-hoc queries for debugging in Cassandra and the potential risk of some data not being denormalized and unsearchable.
The below diagram shows a simplified overview of Reddit’s media metadata storage system.
As you can see, there’s just a simple service interfacing with the database, handling reads and writes through APIs.
Though the design was not complicated, the challenge lay in transferring several terabytes of data from various sources to the new database while ensuring that the system continued to operate correctly.
The migration process consisted of multiple steps:
Enable dual writes into the metadata APIs from clients of media metadata.
Backfill data from older databases to the new metadata store.
Enable dual reads on media metadata from the service clients.
Monitor data comparison for every read request and fix any data issues.
Ramp up read traffic to the new metadata store.
Check out the below diagram that shows this setup in more detail.
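The dual-read comparison step above can be sketched as: serve from the old system, compare against the new store, and record any divergence for fixing before the read traffic is ramped up (names are illustrative):

```python
def dual_read(key, old_store, new_store, mismatches):
    """Serve reads from the old system while comparing against the new
    metadata store; collect any divergence so it can be fixed before
    cutting reads over."""
    old_val = old_store.get(key)
    new_val = new_store.get(key)
    if old_val != new_val:
        mismatches.append((key, old_val, new_val))
    return old_val  # the old system stays authoritative during migration

old = {"img1": {"w": 100}, "img2": {"w": 200}}
new = {"img1": {"w": 100}, "img2": {"w": 999}}  # one backfill bug
mismatches = []
dual_read("img1", old, new, mismatches)  # values match, nothing recorded
dual_read("img2", old, new, mismatches)  # divergence recorded for repair
```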
After the migration was successful, Reddit adopted some scaling strategies for the media metadata store:
Table partitioning using range-based partitioning.
Serving reads from a denormalized JSONB field in Postgres.
Ultimately, they achieved an impressive read latency of 2.6 ms at p50, 4.7 ms at p90, and 17 ms at p99. Also, the data store was generally more available and 50% faster than the previous data system.
Just-in-time Image Optimization
Within the media space, Reddit also serves billions of images per day.
Users upload images for their posts, comments, and profiles. Since these images are consumed on different types of devices, they need to be available in several resolutions and formats. Therefore, Reddit transforms these images for different use cases such as post previews, thumbnails, and so on.
Since 2015, Reddit has relied on third-party vendors to perform just-in-time image optimization. Image handling wasn’t their core competency and therefore, this approach served them well over the years.
However, with an increasing user base and traffic, they decided to move this functionality in-house to manage costs and control the end-to-end user experience.
The below diagram shows the high-level architecture for image optimization setup.
They built two backend services for transforming the images:
The Gif2Vid service resizes and transcodes GIFs to MP4s on-the-fly. Though users love the GIF format, it’s an inefficient choice for animated assets due to its larger file sizes and higher computational resource demands.
The image optimizer service deals with all other image types. It uses govips which is a wrapper around the libvips image manipulation library. The service handles the majority of cache-miss traffic and handles image transformations like blurring, cropping, resizing, overlaying images, and format conversions.
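A toy dispatch for the cache-miss path might look like this (the routing rule and cache-key format are assumptions for illustration, not Reddit's actual services):

```python
def pick_pipeline(asset):
    """Route a cache-miss to the right backend service."""
    if asset["content_type"] == "image/gif" and asset.get("animated"):
        return "gif2vid"        # transcode the animated GIF to MP4
    return "image-optimizer"    # resize/crop/convert via govips

def variant_cache_key(asset_id, width, fmt):
    """Each requested rendition (post preview, thumbnail, ...) is a
    separate object at the CDN, so it is keyed by size and format."""
    return f"{asset_id}:{width}w:{fmt}"

key = variant_cache_key("t3_abc", 320, "webp")
```

Once a rendition is generated on a miss, the CDN caches it under its variant key, so subsequent requests never reach the backend services.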
Overall, moving the image optimization in-house was quite successful:
Costs for Gif2Vid conversion were reduced to a mere 0.9% of the original cost.
The p99 cache-miss latency for encoding animated GIFs was down from 20s to 4s.
The bytes served for static images were down by approximately 20%.
Real-Time Protection for Users at Reddit’s Scale
A critical functionality for Reddit is moderating content that violates the policies of the platform. This is essential to keep Reddit safe as a website for the billions of users who see it as a community.
In 2016, they developed a rules engine named Rule-Executor-V1 (REV1) to curb policy-violating content on the site in real time. REV1 enabled the safety team to create rules that would automatically take action based on activities like users posting new content or leaving comments.
For reference, a rule is just a Lua script that is triggered on specific configured events. In practice, this can be a simple piece of code shown below:
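A hypothetical rule of this shape (an illustrative sketch, not Reddit's actual rule code; `emit_action` is an assumed helper provided by the rules engine):

```lua
-- Illustrative rule: flag posts whose body contains a banned string.
function on_post_create(post)
    if string.find(post.body, "some bad text", 1, true) then
        -- Publish an asynchronous action to the output Kafka topic.
        emit_action({ target_user = post.author, action = "flag" })
    end
end
```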
In this example, the rule checks whether a post’s text body matches a string “some bad text”. If yes, it performs an asynchronous action on the user by publishing an action to an output Kafka topic.
However, REV1 needed some major improvements:
It ran on a legacy infrastructure of raw EC2 instances. This wasn’t in line with all modern services on Reddit that were running on Kubernetes.
Each rule ran as a separate process in a REV1 node and required vertical scaling as more rules were launched. Beyond a certain point, vertical scaling became expensive and unsustainable.
REV1 used Python 2.7, which had been deprecated.
Rules weren’t version-controlled and it was difficult to understand the history of changes.
Lack of staging environment to test out the rules in a sandboxed manner.
In 2021, the Safety Engineering team within Reddit developed a new streaming infrastructure called Snooron. It was built on top of Flink Stateful Functions to modernize REV1’s architecture. The new system was known as REV2.
The below diagram shows both REV1 and REV2.
Some of the key differences between REV1 and REV2 are as follows:
In REV1, all configuration of rules was done via a web interface. With REV2, the configuration primarily happens through code. However, there are UI utilities to make the process simpler.
In REV1, they used ZooKeeper as the store for rules. With REV2, rules are stored in GitHub for better version control and are also persisted to S3 for backup and periodic updates.
In REV1, each rule had its own process that would load the latest code when triggered. However, this caused performance issues when too many rules were running at the same time. REV2 follows a different approach that uses Flink Stateful Functions for handling the stream of events and a separate Baseplate application that executes the Lua code.
In REV1, the actions triggered by rules were handled by the main R2 application. However, REV2 works differently. When a rule is triggered, it sends out structured Protobuf actions to multiple action topics. A new application called the Safety Actioning Worker, built using Flink Statefun, receives and processes these instructions to carry out the actions.
Reddit’s Feed Architecture
Feeds are the backbone of social media and community-based websites.
Millions of people use Reddit’s feeds every day and it’s a critical component of the website’s overall usability. There were some key goals when it came to developing the feed architecture:
The architecture should support a high development velocity and support scalability. Since many teams integrate with the feeds, they need to have the ability to understand, build, and test them quickly.
TTI (Time to Interactive) and scroll performance should be satisfactory since they are critical to user engagement and the overall Reddit experience.
Feeds should be consistent across different platforms such as iOS, Android, and the website.
To support these goals, Reddit built an entirely new, server-driven feeds platform. Some major changes were made in the backend architecture for feeds.
Earlier, each post was represented by a Post object that contained all the information a post may have. It was like sending the kitchen sink over the wire and with new post types, the Post object got quite big over time.
This was also a burden on the client. Each client app contained a bunch of cumbersome logic to determine what should be shown on the UI. Most of the time, this logic was out of sync across platforms.
With the changes to the architecture, they moved away from the big object and instead sent only the description of the exact UI elements the client will render. The backend controlled the type of elements and their order. This approach is also known as Server-Driven UI.
For example, the post unit is represented by a generic Group object that contains an array of Cell objects. The below image shows the change in response structure for the Announcement item and the first post in the feed.
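In code, the shift looks roughly like this (the field and cell names are invented for illustration; the real schema differs):

```python
# Server-driven UI: instead of one kitchen-sink Post object, the backend
# sends an ordered list of generic UI cells that the client just renders.
def build_post_unit(post):
    cells = [
        {"type": "header", "title": post["subreddit"], "author": post["author"]},
        {"type": "title", "text": post["title"]},
    ]
    if post.get("thumbnail"):  # server decides whether an image cell exists
        cells.append({"type": "image", "url": post["thumbnail"]})
    cells.append({"type": "action_bar", "score": post["score"]})
    return {"type": "group", "cells": cells}

unit = build_post_unit({"subreddit": "r/pics", "author": "u/alice",
                        "title": "A sunset", "score": 42})
```

Because the server decides both the cells and their order, iOS, Android, and the web render the same feed without each duplicating (and drifting on) the display logic.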
Reddit’s Move from Thrift to gRPC
In the initial days, Reddit had adopted Thrift to build its microservices.
Thrift enables developers to define a common interface (or API) that enables different services to communicate with each other, even if they are written in different programming languages. Thrift takes the language-independent interface and generates code bindings for each specific language.
This way, developers can make API calls from their code using syntax that looks natural for their programming language, without having to worry about the underlying cross-language communication details.
Over the years, the engineering teams at Reddit built hundreds of Thrift-based microservices, and though it served them quite well, Reddit’s growing needs made it costly to continue using Thrift.
gRPC came on the scene in 2016 and achieved significant adoption within the Cloud-native ecosystem.
Some of the advantages of gRPC are as follows:
It provided native support for HTTP2 as a transport protocol
There was native support for gRPC in several service mesh technologies such as Istio and Linkerd
Public cloud providers also support gRPC-native load balancers
While gRPC had several benefits, the cost of switching was non-trivial. However, it was a one-time cost whereas building feature parity in Thrift would have been an ongoing maintenance activity.
Reddit decided to make the transition to gRPC. The below diagram shows the design they used to start the migration process:
The main component is the Transitional Shim. Its job is to act as a bridge between the new gRPC protocol and the existing Thrift-based services.
When a gRPC request comes in, the shim converts it into an equivalent Thrift message format and passes it on to the existing code just like native Thrift. When the service returns a response object, the shim converts it back into the gRPC format.
There are three main parts to this design:
The interface definition language (IDL) converter that translates the Thrift service definitions into the corresponding gRPC interface. This component also takes care of adapting framework idioms and differences as appropriate.
A code-generated gRPC servicer that handles the message conversions for incoming and outgoing messages between Thrift and gRPC.
A pluggable module for the services to support both Thrift and gRPC.
This design allowed Reddit to gradually transition to gRPC by reusing its existing Thrift-based service code while controlling the costs and effort required for the migration.
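A toy version of the shim's request path (all types and names here are stand-ins, not Reddit's actual services):

```python
# Transitional shim sketch: a gRPC-style handler converts the incoming
# message to the Thrift-era shape, calls the existing service code
# unchanged, and converts the response back.
class ThriftUserRequest:
    def __init__(self, user_id):
        self.user_id = user_id

def legacy_thrift_handler(req):
    """Existing Thrift-based service code, untouched by the migration."""
    return {"user_id": req.user_id, "karma": 1234}

def grpc_get_user(grpc_request: dict) -> dict:
    # 1. gRPC message -> Thrift message
    thrift_req = ThriftUserRequest(grpc_request["userId"])
    # 2. Invoke the existing implementation as if it were a native call
    thrift_resp = legacy_thrift_handler(thrift_req)
    # 3. Thrift response -> gRPC message
    return {"userId": thrift_resp["user_id"], "karma": thrift_resp["karma"]}

resp = grpc_get_user({"userId": "t2_abc"})
```

Each service can flip its external protocol to gRPC while its internal code keeps speaking Thrift, which is what makes the migration incremental.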
Conclusion
Reddit’s architectural journey has been one of continual evolution, driven by its meteoric growth and changing needs over the years. What began as a monolithic Lisp application was rewritten in Python, but this monolithic approach couldn’t keep pace as Reddit’s popularity exploded.
The company went on an ambitious transition to a service-based architecture. Each new feature and issue they faced prompted a change in the overall design in various areas such as user protection, media metadata management, communication channels, data replication, API management, and so on.
In this post, we’ve attempted to capture the evolution of Reddit’s architecture from the early days to the latest changes based on the available information.
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 9 Apr 2024 -
What’s the state of private markets around the world?
On Point
2024 Global Private Markets Review
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Fundraising decline. Macroeconomic headwinds persisted throughout 2023, with rising financing costs and an uncertain growth outlook taking a toll on private markets. Last year, fundraising fell 22% across private market asset classes globally to just over $1 trillion, as of year-end reported data—the lowest total since 2017, McKinsey senior partner Fredrik Dahlqvist and coauthors reveal. As private market managers look to boost performance, a deeper focus on revenue growth and margin expansion will be needed now more than ever.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:08 - 9 Apr 2024 -
Feeling good: A guide to empathetic leadership
I feel for you
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
An “empath” is defined as a person who is highly attuned to the feelings of others, often to the point of experiencing those emotions themselves. This may sound like a quality that’s more helpful in someone’s personal life than in their work life, where emotions should presumably be kept at bay. Yet it turns out that the opposite may be true: displaying empathy in appropriate ways can be a powerful component of effective leadership. And when it’s combined with other factors such as technology and innovation, it can lead to exceptional business performance, such as breakthroughs in customer service. This week, we explore some strategies for leaders to enhance their empathy quotient.
“When I train leaders in empathy, one of the first hurdles I need to get over is this stereotype that empathy is too soft and squishy for the work environment,” says Stanford University research psychologist Jamil Zaki in an episode of our McKinsey Talks Talent podcast. Speaking with McKinsey partners Bryan Hancock and Brooke Weddle, Zaki suggests that, contrary to the stereotype, empathy can be a workplace superpower, helping to alleviate employee burnout and spurring greater productivity and creativity. Boosting managers’ empathetic skills is more important now than ever: “There’s evidence that during the time that social media has taken over so much of our lives, people’s empathy has also dropped,” says Zaki. Ways to improve leaders’ empathy in the workplace include infusing more empathy into regular conversations, rewarding empathic behavior, and automating some non-human-centric responsibilities so that managers can focus on mentorship.
That’s business strategist and author John Horn on why it’s important for leaders to try to understand the mindset of their competitors. “When people become more senior in business, they’ll need to think about competitors directly,” he says in a conversation with McKinsey on cognitive empathy. While it may not be possible to ask competitors about their strategies, technologies such as AI, machine learning, and big data may be able to provide predictive competitive-insight tools. “You could possibly use that data to look across various industries and learn [for example] what has been the response when prices were raised? That’s thinking about the likely behavior your competitors will have,” says Horn.
The qualities that make a leader empathetic can be found within oneself and in the situations of daily life. “Try to unlock the energy and the aspiration that lies within us,” suggests McKinsey senior partner and chief legal officer Pierre Gentin in a McKinsey Legal Podcast interview. What Gentin calls “arrows of inspiration”—whether they come from art, music, literature, or people—may be available to leaders every day. He believes that “a lot of this is just opening our eyes and really recognizing what’s in front of us and transitioning from the small gauge and the negative to the big gauge and the transformative.” For example, in an organizational context, people may need to look beyond short-term concerns and consider “what they can do outside of the four corners of the definition of their role,” he says. “How can we collaborate together to do creative things and to do exciting things and to do worthwhile things?”
There are times when empathy can be too much of a good thing, or leaders can show it in the wrong way. An obvious mistake is to pay lip service to the idea but take limited or no action. In one case, an IT group in a company worked overtime for many weeks during the pandemic. Leaders praised the group lavishly, but the only action they took was to buy everyone on the team a book on time management. Another error is failing to form the strong relationships that lead to good communication, understanding, and compassion. And empathy by itself—if it’s not balanced by other aspects of emotional intelligence such as self-regulation and composure—can actually derail leadership effectiveness.
Lead empathetically.
– Edited by Rama Ramaswami, senior editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to the Leading Off newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:37 - 8 Apr 2024 -
Which gen AI operating models can help banking leaders compete?
On Point
4 strategies for adopting gen AI Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
• Embracing gen AI. As banks and other financial institutions move to quickly find uses for gen AI, challenges are emerging. Getting gen AI right could produce enormous value; getting it wrong, however, could be costly. Financial institutions that use centralized gen AI operating models are reaping the biggest rewards, McKinsey senior partner Kevin Buehler and coauthors reveal. A recent review of 16 of the largest financial institutions in Europe and the US found that more than 50% have adopted a more centrally led organization for gen AI.
• Benefits of centralization. Centralization allows enterprises to focus on a handful of use cases, rapidly moving through experimentation to implement them widely. About 70% of banks and other institutions with highly centralized gen AI operating models have progressed to putting gen AI applications into production. Consider four types of gen AI operating models that financial institutions are using as they implement the technology and visit McKinsey Digital to see how leaders are turning technology’s potential into value for their companies.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:06 - 7 Apr 2024 -
The quarter’s top Themes
McKinsey&Company
At #1: 10 key takeaways from Davos 2024 In the first quarter of 2024, our top ten posts from McKinsey Themes look at highlights from the World Economic Forum’s annual meeting, generative AI, the state of talent, and more. At No. 1 is “10 key takeaways from Davos 2024,” which includes a collection of must-read insights for today’s business leaders by McKinsey’s Michael Chui, Homayoun Hatami, Dana Maor, Kate Smaje, Bob Sternfels, Rodney Zemmel, and others. Read on for our full top 10.
Share these insights
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you are a registered member of the Top Ten Most Popular newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Top Ten" <publishing@email.mckinsey.com> - 06:09 - 7 Apr 2024 -
The week in charts
The Week in Charts
Job shifts due to generative AI, 6G’s economic potential, and more Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to The Week in Charts newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:20 - 6 Apr 2024 -
EP106: How Does JavaScript Work?
This week’s system design refresher:
Roadmap for Learning SQL (Youtube video)
Can Kafka lose messages?
9 Best Practices for building microservices
Roadmap for Learning Cyber Security
How Does JavaScript Work?
SPONSOR US
New Relic IAST exceeds OWASP Benchmark with accuracy scoring above 100% (Sponsored)
New Relic Interactive Application Security Testing (IAST) allows security and engineering teams to save time by focusing on real application security problems with zero false positives, as validated by the OWASP benchmark result of 100% accuracy.
Roadmap for Learning SQL
Can Kafka lose messages?
Error handling is one of the most important aspects of building reliable systems.
Today, we will discuss an important topic: Can Kafka lose messages?
A common belief among many developers is that Kafka, by its very design, guarantees no message loss. However, understanding the nuances of Kafka's architecture and configuration is essential to truly grasp how and when it might lose messages, and more importantly, how to prevent such scenarios.
The diagram below shows how a message can be lost during its lifecycle in Kafka.

Producer
When we call producer.send() to send a message, it doesn't get sent to the broker directly. There are two threads and a queue involved in the message-sending process:
1. Application thread
2. Record accumulator
3. Sender thread (I/O thread)
We need to configure proper ‘acks’ and ‘retries’ for the producer to make sure messages are sent to the broker.

Broker
A broker cluster should not lose messages when it is functioning normally. However, we need to understand which extreme situations might lead to message loss:
1. The messages are usually flushed to the disk asynchronously for higher I/O throughput, so if the instance is down before the flush happens, the messages are lost.
2. The replicas in the Kafka cluster need to be properly configured to hold a valid copy of the data. Settings such as the replication factor and min.insync.replicas determine whether an in-sync copy survives a broker failure.
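To make the producer and broker settings above concrete, here is a sketch of a producer configuration and the related broker/topic settings. The specific values are illustrative assumptions, not recommendations for any particular workload:

```properties
# Producer settings (illustrative values)
acks=all                              # wait for all in-sync replicas to acknowledge
retries=2147483647                    # retry transient send failures
enable.idempotence=true               # prevent duplicates introduced by retries
delivery.timeout.ms=120000            # upper bound on the total time to send

# Broker/topic settings (illustrative values; replication factor is set per topic)
min.insync.replicas=2                 # a write needs at least 2 in-sync copies
unclean.leader.election.enable=false  # never elect an out-of-sync replica as leader
```

With acks=all and min.insync.replicas=2, an acknowledged message exists on at least two replicas, which covers the asynchronous-flush scenario described above as long as both replicas do not fail at once.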
Consumer
Kafka offers different ways to commit messages. Auto-committing might acknowledge the processing of records before they are actually processed. When the consumer is down in the middle of processing, some records may never be processed.
A good practice is to combine both synchronous and asynchronous commits, where we use asynchronous commits in the processing loop for higher throughput and synchronous commits in exception handling to make sure the last offset is always committed.
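The combined commit pattern can be sketched as follows. This is a minimal illustration using a mock consumer (the `MockConsumer` class is purely hypothetical; a real client such as the Java Kafka consumer exposes equivalent `commitAsync`/`commitSync` calls):

```javascript
// Mock consumer standing in for a real Kafka client (illustrative only).
class MockConsumer {
  constructor(records) {
    this.records = records;
    this.committed = null; // last committed offset
  }
  poll() { return this.records.splice(0, 2); }     // fetch a small batch
  commitAsync(offset) { this.committed = offset; } // cheap, fire-and-forget
  commitSync(offset) { this.committed = offset; }  // blocking, guaranteed
}

function consume(consumer, process) {
  let lastOffset = null;
  try {
    let batch;
    while ((batch = consumer.poll()).length > 0) {
      for (const record of batch) {
        process(record);                 // handle the record
        lastOffset = record.offset;
      }
      consumer.commitAsync(lastOffset);  // async commit in the hot loop
    }
  } finally {
    // Sync commit on shutdown or error so the last offset is never lost.
    if (lastOffset !== null) consumer.commitSync(lastOffset);
  }
}
```

The asynchronous commit keeps the processing loop fast; the synchronous commit in the `finally` block is the safety net that runs exactly once on exit.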
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
9 Best Practices for building microservices
Creating a system using microservices is extremely difficult unless you follow some strong principles.
9 best practices that you must know before building microservices:
Design For Failure
A distributed system with microservices is going to fail.
You must design the system to tolerate failure at multiple levels such as infrastructure, database, and individual services. Use circuit breakers, bulkheads, or graceful degradation methods to deal with failures.

Build Small Services
A microservice should not do multiple things at once.
A good microservice is designed to do one thing well.

Use lightweight protocols for communication
Communication is the core of a distributed system.
Microservices must talk to each other using lightweight protocols. Options include REST, gRPC, or message brokers.

Implement service discovery
To communicate with each other, microservices need to discover each other over the network.
Implement service discovery using tools such as Consul, Eureka, or Kubernetes Services.

Data Ownership
In microservices, data should be owned and managed by the individual services.
The goal should be to reduce coupling between services so that they can evolve independently.

Use resiliency patterns
Implement specific resiliency patterns to improve the availability of the services.
Examples: retry policies, caching, and rate limiting.

Security at all levels
In a microservices-based system, the attack surface is quite large. You must implement security at every level of the service communication path.

Centralized logging
Logs are important for finding issues in a system. With multiple services, they become critical.

Use containerization techniques
To deploy microservices in an isolated manner, use containerization techniques.
Tools like Docker and Kubernetes help here, as they simplify the deployment and scaling of microservices.
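As one concrete example of the resiliency patterns mentioned above, here is a minimal circuit breaker sketch. It is an illustration of the idea only, not tied to any particular library; the class name, threshold, and reset window are assumptions:

```javascript
// Minimal circuit breaker: after `threshold` consecutive failures the
// breaker opens and rejects calls immediately; after `resetMs` it lets
// one trial call through ("half-open").
class CircuitBreaker {
  constructor(fn, { threshold = 3, resetMs = 1000 } = {}) {
    this.fn = fn;
    this.threshold = threshold;
    this.resetMs = resetMs;
    this.failures = 0;
    this.openedAt = null;
  }
  call(...args) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error("circuit open"); // fail fast, protect the dependency
      }
      this.openedAt = null; // half-open: allow a trial call
    }
    try {
      const result = this.fn(...args);
      this.failures = 0; // a success closes the breaker
      return result;
    } catch (err) {
      if (++this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Failing fast while the breaker is open gives a struggling downstream service time to recover instead of piling more load onto it.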
Over to you: what other best practice would you recommend?

Roadmap for Learning Cyber Security
By Henry Jiang. Redrawn by ByteByteGo.
Cybersecurity is crucial for protecting information and systems from theft, damage, and unauthorized access. Whether you're a beginner or looking to advance your technical skills, there are numerous resources and paths you can take to learn more about cybersecurity. Here are some structured suggestions to help you get started or deepen your knowledge:
Security Architecture
Frameworks & Standards
Application Security
Risk Assessment
Enterprise Risk Management
Threat Intelligence
Security Operations
How Does JavaScript Work?
The cheat sheet below shows the most important characteristics of JavaScript.
Interpreted Language
JavaScript code is executed by the browser or JavaScript engine rather than being compiled into machine language beforehand. This makes it highly portable across different platforms. Modern engines such as V8 use Just-In-Time (JIT) compilation to turn code into directly executable machine code.

Functions Are First-Class Citizens
In JavaScript, functions are treated as first-class citizens, meaning they can be stored in variables, passed as arguments to other functions, and returned from functions.

Dynamic Typing
JavaScript is a loosely typed or dynamic language, meaning we don't have to declare a variable's type ahead of time, and the type can change at runtime.

Asynchronous Programming
JavaScript supports asynchronous programming, allowing operations like reading files, making HTTP requests, or querying databases to run in the background and trigger callbacks or promises when complete. This is particularly useful in web development for improving performance and user experience.

Prototype-Based OOP
Unlike class-based object-oriented languages, JavaScript uses prototypes for inheritance. This means that objects can inherit properties and methods from other objects.

Automatic Garbage Collection
Garbage collection in JavaScript is a form of automatic memory management. The primary goal of garbage collection is to reclaim memory occupied by objects that are no longer in use by the program, which helps prevent memory leaks and optimizes the performance of the application.

Compared with Other Languages
JavaScript is special compared to programming languages like Python or Java because of its position as a major language for web development.
While Python is known for good code readability and versatility, and Java for its structure and robustness, JavaScript is an interpreted language that runs directly in the browser without a separate compilation step, emphasizing flexibility and dynamism.

Relationship with TypeScript
TypeScript is a superset of JavaScript, which means that it extends JavaScript by adding features to the language, most notably type annotations. This relationship allows any valid JavaScript code to also be considered valid TypeScript code.

Popular JavaScript Frameworks
React is known for its flexibility and large number of community-driven plugins, while Vue is clean and intuitive with highly integrated and responsive features. Angular, on the other hand, offers a strict set of development specifications for enterprise-level JS development.
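A few of the characteristics above can be seen in a handful of lines of runnable JavaScript (the variable and function names are illustrative):

```javascript
// Functions are first-class citizens: stored, passed, and returned.
const twice = (f) => (x) => f(f(x));
const addOne = (n) => n + 1;
const addTwo = twice(addOne); // a function returned from a function

// Dynamic typing: a variable's type can change at runtime.
let value = 42;      // starts as a number
value = "forty-two"; // now a string, with no declaration change

// Prototype-based OOP: objects inherit directly from other objects.
const animal = { describe() { return `a ${this.kind}`; } };
const dog = Object.create(animal); // dog's prototype is animal
dog.kind = "dog";                  // own property; describe() is inherited
```

Here `dog` has no `describe` method of its own; the call is resolved by walking the prototype chain up to `animal`.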
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
Like Comment Restack © 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 6 Apr 2024 -
Gen AI: How to capture value and mitigate risk
Plus, how the world can accelerate productivity growth McKinsey research has estimated that generative AI (gen AI) has the potential to add up to $4.4 trillion in economic value to the global economy. But companies are quickly realizing that capturing this value is harder than expected. There’s also a growing recognition that gen AI opportunities are accompanied by considerable risks. In this month’s first featured story, McKinsey senior partners Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel outline why companies have to rewire how they work with gen AI to gain a real competitive advantage from the technology. Our second featured story explores why business leaders need to revise their technology playbooks and drive the integration of effective risk management from the start of their engagement with gen AI. Other highlights include the following topics:
Investing in productivity growth
It’s time to raise investment and catch the next productivity wave.
Pave the way

McKinsey Global Private Markets Review 2024
Private markets entered a slower era in 2023, with macroeconomic headwinds, rising financing costs and an uncertain growth outlook weighing on fundraising, deal activity and performance.
Boost performance in a new era

The CEO’s secret to successful leadership: CEO Excellence revisited
Three McKinsey senior partners—and, now, international best-selling authors—reflect on the far-reaching impact of their book, CEO Excellence, two years after its release.
Dare to lead

Working nine to thrive
Better aligning employment with modifiable drivers of health could unlock years of higher-quality life and create trillions of dollars of economic value.
Support employees

Analyzing the CEO–CMO relationship and its effect on growth
CEOs acknowledge the expertise and importance of chief marketing officers and their role in helping the company grow, yet there’s still a strategic disconnect in the C-suite. Here’s how to close the gap.
Take a holistic approach

How the world’s best hotels deliver exceptional customer experience
Luxury hotels know that the secret to top-tier customer experience is a culture of excellence.
Create a culture of excellence

McKinsey Themes
Browse our essential reading on the topics that matter.
Get up to speed

McKinsey Explainers
Find direct answers to complex questions, backed by McKinsey’s expert insights.
Learn more

McKinsey on Books
Explore this month’s best-selling business books prepared exclusively for McKinsey Publishing by Circana.
See the lists

McKinsey Chart of the Day
See our daily chart that helps explain a changing world—as we strive for sustainable, inclusive growth.
Dive in

McKinsey Classics
Significant improvements in risk management can be gained quickly through selective digitization—but capabilities must be test-hardened before release. Read our 2017 classic “Digital risk: Transforming risk management for the 2020s” to learn more.
Rewind

Leading Off
Our Leading Off newsletter features revealing research and inspiring interviews to empower you—and those you lead.
Subscribe now

—Edited by Eleni Kostopoulos, managing editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you are a registered member of our Monthly Highlights newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Highlights" <publishing@email.mckinsey.com> - 11:27 - 6 Apr 2024 -
CEO Excellence—two years on
The Shortlist
Four new insights Curated by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Technology is trending like never before, with gen AI at the crest of the wave. But our decades of experience working with executives have taught us an important lesson: it’s never just about the tech. To get the most out of digital investments, companies need to pull other levers, like leadership, change management, talent, and innovation. In this edition of the CEO Shortlist, we highlight some of the catalysts that can unleash technology’s full value, and we explore the latest on CEO excellence. We hope you enjoy the read.
—Liz and Homayoun
Step away from the grindstone. Nobody likes the admin side of their job. Gen AI can help. The technology is brilliant at handling dull, repetitive tasks, and companies are rapidly installing applications to do them. This creates a critical opportunity for leaders: deploying gen AI to free up employees’ time for work better suited to humans, such as creative, collaborative thinking. Organizations that succeed are likely to build a long-term competitive edge.
Put on your thinking cap with “The human side of generative AI: Creating a path to productivity,” a new article by Aaron De Smet, Sandra Durth, Bryan Hancock, Marino Mugayar-Baldocchi, and Angelika Reich.

What got you here may not get you there. In business—and maybe more broadly—innovation is always needed to outcompete in times of uncertainty. But innovation itself needs a shake-up every once in a while. That’s where gen AI comes in. The most innovative organizations have already deployed gen AI tools to help invigorate their creativity and agility—and they’re reaping the rewards.
It’s not too late to give your organization a gen-AI-powered innovation boost. Learn more with “Driving innovation with generative AI,” a new interview with McKinsey’s Matt Banholzer and Laura LaBerge from our Inside the Strategy Room podcast.

Picture this. Most companies are thinking about how gen AI chatbots can help produce text. But don’t forget that these new tools can draw too. Almost every company uses some form of industrial design, whether to create widgets or websites. Gen AI tools can open new doors of creativity and speed—but human designers are far from obsolete.
Draw up a chair and read “Generative AI fuels creative physical product design but is no magic wand,” by Bryce Booth, Jack Donohew, Chris Wlezien, and Winnie Wu.

It’s a wide world. Start-ups. Founder-led companies. Portfolio companies. Government organizations. Not for profits. It’s been two years since we published CEO Excellence: The Six Mindsets That Distinguish the Best Leaders from the Rest, and one thing we’ve learned is that the excellent leadership qualities we’ve distilled in this book are applicable way beyond the purely corporate audience we imagined at the outset. Find out more as we catch up with the book’s authors and learn about the journey their international bestseller has taken them on.
Hitch a ride with “The CEO’s secret to successful leadership: CEO Excellence revisited,” by Carolyn Dewar, Scott Keller, and Vikram Malhotra. (And read more from the interview here and here.)

We hope you find these ideas inspiring and helpful. See you next time with four more McKinsey ideas for the CEO and others in the C-suite.
Share these insights
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to The CEO Shortlist newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey CEO Shortlist" <publishing@email.mckinsey.com> - 04:56 - 5 Apr 2024 -
What would it take to make air travel fairer for all?
On Point
Tools to promote disability inclusion Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:05 - 5 Apr 2024 -
A Crash Course in CI/CD
Latest articles
If you’re not a subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
Introduction
What is CI/CD? How does it help us ship faster? Is it worth the hassle? In this issue, we will look into Continuous Integration and Continuous Deployment, or CI/CD for short. CI/CD helps automate the software development process from the initial code commit to deployment. It eliminates much of the manual human intervention traditionally required to ship code to production.
The CI/CD process builds, tests, and deploys code to production. The promise is that it enables software teams to deploy better-quality software faster. This all sounds very good, but does it work in real life? The answer is — it depends.
Let’s break up CI/CD into their parts and discuss them separately.
Continuous Integration (CI)
Continuous integration (CI) is a development practice that many teams believe they follow but often don't fully implement.
Before CI came along, development teams typically operated in silos, with individual developers working independently on distinct features for extended periods. Their work would eventually need to be merged into a shared codebase, often resulting in complications such as merge conflicts and compatibility issues across hundreds of files and contributors. This situation, commonly known as "merge hell," exemplified the difficulties of traditional development methods.
Avoid “merge hell”
Let's consider a scenario with two developers, Alice and Bob. Alice writes her code and shares it as soon as she has a functional version that doesn't cause any issues, even if it's not fully complete. She uploads her code to a central repository. Bob follows the same approach, always grabbing the latest version of the code before starting his work. As Alice continues to update her code, Bob does the same. If Bob makes changes, Alice incorporates them into her work without any problems. They collaborate smoothly, with a low chance of interfering with each other because they always work off the latest code. If they encounter conflicts, it's usually on recent changes they've both made, so they can sit down together, resolve the issues, and move forward.
However, with so many people constantly contributing code, problems are inevitable. Things may not always run smoothly, and new errors can emerge. So, what's the solution?
Automation
The solution is automation. It acts like a vigilant watchdog, constantly monitoring the code. Whenever a change occurs, it springs into action, grabbing the code, building it, and running tests. If anything fails during this process, the team receives an alert, ensuring everyone is aware of the issue. With this safety net in place, continuous integration becomes a reality.
So, what exactly is continuous integration (CI)?
Definition
Continuous integration involves automating builds, executing tests, and merging code from individual developers into a shared repository. The primary goal of continuous integration is to efficiently integrate source code into shared repositories. Once changes are committed to the version control system, automated builds and test cases are executed to ensure the functionality and validity of the code. These processes validate how the source code compiles and how test cases perform during execution.
Tools
What are some common tools used in CI? A robust source code management system is the foundation. GitHub is a popular example. It holds everything needed to build the software, including source code, test scripts, and scripts to build the software applications.
Many tools are available to manage the CI process itself. GitHub Actions and Buildkite are modern examples, while Jenkins, CircleCI, and TravisCI are also widely used. These tools manage the build and test tasks in CI.
Numerous test tools exist for writing and running tests. These tools are usually language and ecosystem-specific. For example, in JavaScript, Jest is a unit testing framework, while Playwright and Cypress are common integration testing frameworks for web applications.
Build tools are even more diverse and ecosystem-specific. Gradle is a powerful build tool for Java. The JavaScript build ecosystem is fragmented and challenging to keep track of. Webpack is the standard, but many new build tools claim to be much faster, although they are not yet as extensible as Webpack.
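Putting the pieces together, a minimal CI pipeline in one of the tools mentioned above, GitHub Actions, might look like the following. The Node.js commands are illustrative assumptions about the project being built, not part of any specific setup:

```yaml
# Hypothetical minimal CI workflow (GitHub Actions syntax).
name: ci
on: [push, pull_request]      # run on every commit and pull request
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # fetch the latest source
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                    # install dependencies
      - run: npm test                  # run the unit test suite
      - run: npm run build             # verify the project builds
```

If any step fails, the workflow fails and the team is alerted, which is exactly the automated watchdog role described earlier.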
Benefits of continuous integration
Continuous integration holds significant importance for several reasons. The table below presents some of the major advantages of CI.
Continuous Deployment (CD)
Continuous deployment (CD) is the next step after CI in the CI/CD pipeline. CD is the practice of automatically deploying every code change that passes the automated testing phase to production.
While true continuous deployment is challenging and not as widely adopted as CI, a more common practice is continuous delivery, which is similar but has a subtle difference, as explained below.
Continuous delivery
Continuous delivery focuses on the rapid deployment of code changes into production environments. Its roots can be traced back to the Agile Manifesto, which emphasizes “early and continuous delivery of valuable software” to satisfy customers.
The objective of continuous delivery is to efficiently transition valuable code changes into production. The initial step involves transforming the code into deployable software through a build process. Once the software is ready, the next logical step might seem to be deploying it directly into production. However, the real practice involves rigorous testing to ensure that only stable software enters the production environment.
Typically, organizations maintain multiple test environments, such as "QA," "Performance," or "Staging." These environments serve as checkpoints for validating the software before it reaches production. The software undergoes testing in each environment to ensure its readiness for deployment.
In essence, the journey to production in continuous delivery involves transitioning software through various testing environments before deployment into the production environment.
A key aspect of continuous delivery is ensuring that the code remains deployable at all times. Once the delivery process is completed, the code is ready for deployment to any desired environment. This end-to-end process includes building the source code, executing test cases, generating artifacts such as WAR or JAR files, and delivering them to specific environments.
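The environment-by-environment promotion described above can be sketched in Python. The stage names and the validation function are illustrative assumptions, not a real pipeline API:

```python
# Sketch of continuous delivery: an artifact is promoted through test
# environments in order, and is "deployable" only after passing every stage.

STAGES = ["QA", "Performance", "Staging"]

def validate_in(env: str, artifact: str) -> bool:
    """Run the checks for one environment; stubbed to succeed here."""
    return True  # a real pipeline would deploy to env and run its test suite

def promote(artifact: str) -> str:
    """Walk the artifact through every test environment, stopping on failure."""
    for env in STAGES:
        if not validate_in(env, artifact):
            return f"blocked in {env}"
    return "deployable"  # ready for production at any time

print(promote("app-1.4.2.jar"))  # deployable
```

The key property is the return value: after a successful run, the artifact is deployable to any environment on demand, which is exactly the "always ready to deploy" guarantee of continuous delivery.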
Automatic deployment
Coming back to continuous deployment (CD), it involves the automatic deployment of code changes to the production environment. Essentially, CD represents the final stage in the development pipeline. In this phase, not only are artifacts prepared and test cases executed, but the process extends further to deploying the artifacts to the production environment. Continuous deployment ensures that any changes made to the code are promptly deployed to the production environment without human intervention.
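The no-human-gate rule can be sketched as follows; the function names and checks are illustrative stand-ins, not a real deployment API:

```python
# Sketch of continuous deployment: a green pipeline deploys to production
# automatically, with no manual approval step in between.

def checks_pass(artifact: str) -> bool:
    """Stand-in for the full build-and-test pipeline."""
    return True

def deploy_to_production(artifact: str) -> str:
    """Stand-in for the actual production rollout."""
    return f"{artifact} live in production"

def continuous_deployment(artifact: str) -> str:
    # No human approval: passing checks is the only gate before production.
    if checks_pass(artifact):
        return deploy_to_production(artifact)
    return "change rejected; nothing deployed"

print(continuous_deployment("web-2024.04.1"))
```

Continuous delivery would insert a manual approval between `checks_pass` and `deploy_to_production`; continuous deployment deliberately removes it.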
Continuous deployment vs. continuous delivery
Continuous deployment and continuous delivery are related concepts, but they have distinct differences. Here, we list some of the differences:
While continuous deployment may be suitable for some organizations, continuous delivery is the approach that many are striving to achieve, as it offers a cautious yet automated approach to software delivery.
Tools
The tools we mentioned earlier, like GitHub Actions, Buildkite, and Jenkins, are commonly used to handle CD tasks. Infrastructure-specific tools also make CD easier to maintain. For example, ArgoCD is popular on Kubernetes.
CI/CD is a powerful software development practice that can help teams ship better-quality software faster. However, it's not a one-size-fits-all solution, and its implementation may vary depending on the complexity of the system.
Benefits of continuous deployment
Continuous deployment offers numerous benefits to organizations. Here, we list some of them.
Deployment Strategies
Nothing beats the satisfaction of seeing our code go live to millions of users. But getting there is not always easy. Let’s explore some common deployment strategies.
Big bang deployment
One of the earliest methods of deploying changes to production is the Big Bang Deployment. Picture it like ripping off a bandage. We push all our changes at once, causing a bit of downtime as we have to shut down the old system to switch on the new one. The downtime is usually short, but be careful - it can sting if things don't go as planned. Preparation and testing are key. If things go wrong, we roll back to the previous version. However, rolling back is not always pain-free. We might still disrupt users, and there could be data implications. We need to have a solid rollback plan.
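The cutover-plus-rollback flow can be sketched as follows; the version strings and health check are hypothetical:

```python
# Sketch of big bang deployment: stop the old version, start the new one,
# and roll back to the previous version if a post-deploy health check fails.

def health_check(version: str) -> bool:
    """Stand-in for smoke tests run against the freshly deployed version."""
    return version != "v2-broken"

def big_bang_deploy(current: str, new: str) -> str:
    # Downtime window starts here: old system stopped, new system started.
    if health_check(new):
        return f"running {new}"
    # Rollback plan: restore the previous version (may still disrupt users,
    # and any data written by the new version needs separate handling).
    return f"rolled back to {current}"

print(big_bang_deploy("v1", "v2"))         # running v2
print(big_bang_deploy("v1", "v2-broken"))  # rolled back to v1
```

The sketch shows why a rehearsed rollback path matters: it is the only safety net once everything has been switched over at once.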
Continue reading this post for free, courtesy of Alex Xu.
A subscription gets you:
- An extra deep dive on Thursdays
- Full archive
- Many expense it with their team's learning budget

© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:37 - 4 Apr 2024 -
What makes public sector workers stay in their jobs?
On Point
6 priorities for government leaders Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
- Attracting tomorrow’s workforce. Aging populations, Gen Z’s growing presence in the labor force, increased workforce diversity, and other trends are reshaping demand for government services. By 2030, Gen Z is expected to account for about 30% of the global workforce. Yet in the US, Gen Z accounted for just 1.6% of the federal workforce, compared with Gen X at 42.0%. Clearly, the government faces challenges when it comes to mentoring, apprenticing, and developing its next-generation workforce, McKinsey senior partner Julia Klier and coauthors explain.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:06 - 4 Apr 2024 -
Trakzee - White Label Advanced Fleet Management Software
An Advanced Fleet Management Software that enables real-time vehicle tracking, fuel monitoring, tire management, and much more.
Why Choose Our Fleet Management Software?
Fleet Management Platform That Helps You To Grow Your Business
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:00 - 3 Apr 2024 -
Black Americans are not thriving as much as their White neighbors, but certain steps could help
Re:think
How location influences Black Americans’ success FRESH TAKES ON BIG IDEAS
ON BLACK AMERICANS’ PROSPERITY
Enabling prosperity for Black Americans—no matter where they live
Duwain Pinder
When we started the McKinsey Institute for Black Economic Mobility in 2020, one of our first reports was The economic state of Black America: What is and what could be. The report examined challenges and opportunities for Black Americans in five different roles that they play in the economy: workers, business owners, savers and investors, consumers, and residents. Since then, we have been going deep into each of these roles. In 2021, we looked at Black consumers. This year, we’re focusing on Black residents.
Black Americans’ opportunities and outcomes vary significantly in different places. When you look nationally at only things like life expectancy and the poverty rate, you can miss quite a bit of nuance and detail. If you really want to understand what’s happening, you have to look at the issue of place as well. Black Americans, for example, are more likely than most of the US population to be concentrated in major cities. But when you look at the places where Black Americans are doing the best, it’s in the suburbs and exurbs. However, these are the very places where you’re least likely to find Black Americans.
What’s interesting in these US suburbs and exurbs is that while Black residents are doing great relative to Black Americans across the nation, if you compare them with their White neighbors, they’re doing only about 65 percent as well. There’s still a significant racial wealth gap. Black Americans today are much more prosperous than they were ten years ago: about 75 percent of US counties have seen improvements in overall Black prosperity. But in many places, White Americans’ outcomes have been improving at a faster rate than their Black neighbors.
The question is, how do we build a set of solutions that can improve Black Americans’ outcomes? There is effectively no place in the country where Black residents are doing as well as their White neighbors are. In fact, there are only a few places in the United States where Black people’s outcomes are at or above 90 percent of their White neighbors’. One is Paulding County, a suburb outside of Atlanta; several other counties with large Black populations outside of Atlanta, Houston, and Washington, DC, also are doing relatively well on parity. But most other places with higher parity are small, rural communities, so we’re talking about small populations in places where residents of all races tend to be less well off.
There’s no easy answer for how to address the nationwide disparity. Instead, we need an all-hands-on-deck approach in which many solutions are operating at scale for a long time. There are certain areas for investment that can deliver broad, downstream effects. Affordable housing, for example, is linked to improved physical and mental health, economic opportunity, and other measures of prosperity. Early-childhood education also has been shown to yield significant impact—and not just on the individual who is receiving the childcare. Because quality childcare improves parents’ ability to find meaningful work, it improves an entire family’s long-term economic outlook. Investing in early-childhood education also benefits the educators themselves, who disproportionately tend to be Black women.
How can Black economic mobility improve? In affordable housing, one partial solution that we’ve seen is developing underused land. We’ve also found that using new technology and efficient construction methods can decrease the cost of that housing. You can also boost access to programs that connect people to existing affordable housing or provide financial assistance or even just awareness. In education, you can expand access to high-quality pre-K programs by adding student seats, boosting the number of trained—and well-paid—teachers, and investing in community and parent outreach to support enrollment. While the specifics vary according to the unique needs of the community, these are things that have been proven to create benefit. The problem is that they haven’t been scaled.
As part of our analysis, we held the prosperity of White Americans constant and asked, “At the current pace of change, how long would it take for Black Americans to reach the same level?” The conservative estimate was that it could take more than 300 years. That’s not an optimistic number.
That being said, there are many solutions that have real promise. If society can scale them and really commit over a long period of time, there can be genuine progress. So that makes me feel a lot more optimistic.
ABOUT THIS AUTHOR
Duwain Pinder is a leader of the McKinsey Institute for Black Economic Mobility and a partner in McKinsey’s Columbus, Ohio, office.
UP NEXT
Alexis Trittipo on climate change adaptation
Mitigating climate change isn’t nearly enough. Adapting to it is also crucial. Adaptation requires understanding how climate change may affect a particular area or asset, performing scenario planning, and taking the long view when collaborating across the public and private sectors to take action.
You received this email because you subscribed to our McKinsey Quarterly alert list.
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 02:36 - 3 Apr 2024 -
How could a big shift from cars to bicycles benefit cities?
On Point
Explore our hypothetical scenario Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
- Bike-friendly cities. European residents and city leaders are increasingly pressing for bike-friendly cities to boost micromobility. Yet some questions remain about how more bicycle usage can affect urban areas and what infrastructure changes may be needed, note McKinsey partner Kersten Heineke and coauthors. To understand the effect on the environment, commuting time, and transportation infrastructure, McKinsey analyzed a scenario in which residents of a Western European metropolitan area replaced 22.5% of the kilometers traveled by private cars with bicycles.
—Edited by Querida Anderson, senior editor, New York
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:35 - 3 Apr 2024 -
You’re invited! Join us for a discussion on productivity growth.
Register now New from McKinsey Global Institute
Join us on Wednesday, April 24, at 11 a.m. ET / 5 p.m. CET, for a discussion on MGI’s latest report that explores productivity in economies around the world, why it has stalled, and what it would take to accelerate it. This virtual event will include a presentation by the authors followed by a panel with leading economists and technologists who will discuss:
- Productivity trends across countries and sectors
- Factors behind the global productivity slowdown
- The role of investment and how leaders can harness the opportunities of new technologies like generative AI to unleash the next wave of productivity growth
You received this email because you subscribed to our McKinsey Global Institute alert list.
by "McKinsey Global Institute" <publishing@email.mckinsey.com> - 12:40 - 2 Apr 2024