EP117: What makes HTTP2 faster than HTTP1?
This week’s system design refresher:
Kafka vs. RabbitMQ vs. Messaging Middleware vs. Pulsar (YouTube video)
What makes HTTP2 faster than HTTP1?
Top 6 Cases to Apply Idempotency
4 Ways Netflix Uses Caching to Hold User Attention
Log Parsing Cheat Sheet
Kafka vs. RabbitMQ vs. Messaging Middleware vs. Pulsar
What makes HTTP2 faster than HTTP1?
The key features of HTTP2 play a big role in this. Let’s look at them:
Binary Framing Layer
HTTP2 encodes messages into a binary format. This allows messages to be divided into smaller units called frames, which are then sent over the TCP connection, resulting in more efficient processing.

Multiplexing
The binary framing layer allows full request and response multiplexing. Clients and servers can interleave frames during transmission and reassemble them on the other side.

Stream Prioritization
With stream prioritization, developers can assign relative weights to requests or streams so that the server sends more frames for higher-priority requests.

Server Push
Since HTTP2 allows multiple concurrent responses to a client’s request, a server can push additional resources to the client along with the requested page.

HPACK Header Compression
HTTP2 uses a compression algorithm called HPACK to shrink headers across multiple requests, thereby saving bandwidth.
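To make the binary framing concrete, here is a minimal sketch of HTTP/2's fixed 9-byte frame header (layout per the HTTP/2 spec: a 24-bit length, 8-bit type, 8-bit flags, and 31-bit stream identifier; the helper names are illustrative, not from any particular library):

```python
# Encode/decode the 9-byte HTTP/2 frame header. Frame type 0x0 is DATA,
# 0x1 is HEADERS; flag 0x4 on a HEADERS frame means END_HEADERS.
import struct

def encode_frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    # 24-bit length, then type, flags, and the 31-bit stream id
    return struct.pack(">I", length)[1:] + bytes([frame_type, flags]) + \
        struct.pack(">I", stream_id & 0x7FFFFFFF)

def decode_frame_header(header: bytes):
    length = int.from_bytes(header[0:3], "big")
    frame_type, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

hdr = encode_frame_header(16, 0x1, 0x4, 1)  # a HEADERS frame on stream 1
assert len(hdr) == 9
assert decode_frame_header(hdr) == (16, 0x1, 0x4, 1)
```

Because every frame carries its own stream id, frames from different requests can be interleaved on one connection and reassembled on the other side, which is exactly what enables multiplexing.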
Of course, despite these features, HTTP2 can also be slow depending on the exact technical scenario. Therefore, developers need to test and optimize things to maximize the benefits of HTTP2.
Over to you: Have you used HTTP2 in your application?
Top 6 Cases to Apply Idempotency
Idempotency is essential in various scenarios, particularly where operations might be retried or executed multiple times. Here are the top 6 use cases where idempotency is crucial:
RESTful API Requests
We need to ensure that retrying an API request does not lead to multiple executions of the same operation. Implement idempotent methods (such as PUT and DELETE) to maintain consistent resource states.

Payment Processing
We need to ensure that customers are not charged multiple times due to retries or network issues. Payment gateways often need to retry transactions; idempotency ensures only one charge is made.

Order Management Systems
We need to ensure that submitting an order multiple times results in only one order being placed. We design a safe mechanism to prevent duplicate inventory deductions or updates.

Database Operations
We need to ensure that reapplying a transaction does not change the database state beyond the initial application.

User Account Management
We need to ensure that retrying a registration request does not create multiple user accounts. We also need to ensure that multiple password reset requests result in a single reset action.

Distributed Systems and Messaging
We need to ensure that reprocessing messages from a queue does not result in duplicate processing. We implement handlers that can process the same message multiple times without side effects.
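The common thread across these cases is an idempotency key. A minimal sketch for the payment case (the names `charge` and `processed` are illustrative; a real system would use a durable store instead of an in-memory dict):

```python
# Idempotency-key handling: a retry with the same key returns the stored
# result instead of charging again.
processed: dict[str, dict] = {}

def charge(idempotency_key: str, amount_cents: int) -> dict:
    if idempotency_key in processed:
        return processed[idempotency_key]  # duplicate request: no second charge
    result = {"charged": amount_cents, "status": "ok"}  # stand-in for the real charge
    processed[idempotency_key] = result
    return result

first = charge("req-123", 500)
retry = charge("req-123", 500)   # e.g. a network retry with the same key
assert first is retry            # only one charge was recorded
assert len(processed) == 1
```

The same shape works for message handlers: record the message id on first processing and short-circuit on replays.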
4 Ways Netflix Uses Caching to Hold User Attention
The goal of Netflix is to keep you streaming for as long as possible. But a user’s typical attention span is just 90 seconds.
They use EVCache (a distributed key-value store) to reduce latency so that the users don’t lose interest.
However, EVCache has multiple use cases at Netflix.

Lookaside Cache
When the application needs some data, it first tries the EVCache client. If the data is not in the cache, it goes to the backend service and the Cassandra database to fetch it. The service also keeps the cache updated for future requests.

Transient Data Store
Netflix uses EVCache to keep track of transient data such as playback session information. One application service might start the session, another may update it, and a session closure happens at the very end.

Primary Store
Netflix runs large-scale pre-compute systems every night to compute a brand-new home page for every profile of every user, based on watch history and recommendations. All of that data is written into the EVCache cluster, from which the online services read it to build the home page.

High Volume Data
Some data is accessed at a high volume and also needs to be highly available, such as the UI strings and translations shown on the Netflix home page. A separate process asynchronously computes and publishes the UI strings to EVCache, from which the application can read them with low latency and high availability.
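The lookaside pattern described above can be sketched in a few lines (an in-memory dict stands in for EVCache, and a stub function stands in for the Cassandra read; names are illustrative):

```python
# Lookaside cache: try the cache, fall back to the database on a miss,
# and refill the cache so future reads are fast.
cache: dict[str, str] = {}

def fetch_from_database(key: str) -> str:
    return f"value-for-{key}"  # stand-in for the Cassandra/backend read

def get(key: str) -> str:
    if key in cache:                      # 1. try the cache first
        return cache[key]
    value = fetch_from_database(key)      # 2. cache miss: go to the backend
    cache[key] = value                    # 3. keep the cache warm
    return value

assert get("profile:42") == "value-for-profile:42"  # miss, then fill
assert "profile:42" in cache                        # now served from cache
```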
Reference: "Caching at Netflix: The Hidden Microservice" by Scott Mansfield
Log Parsing Cheat Sheet
The diagram below lists the top 6 log parsing commands.
GREP
GREP searches any given input files, selecting lines that match one or more patterns.

CUT
CUT cuts out selected portions of each line from each file and writes them to the standard output.

SED
SED reads the specified files, modifying the input as specified by a list of commands.

AWK
AWK scans each input file for lines that match any of a set of patterns.

SORT
SORT sorts text and binary files by lines.

UNIQ
UNIQ reads the specified input file, comparing adjacent lines, and writes a copy of each unique input line to the output file.
These commands are often used in combination to quickly extract useful information from log files. For example, the command below lists the timestamps (column 2) whenever an exception occurs in xxService:

grep "xxService" service.log | grep "Exception" | cut -d' ' -f2
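Here is that pipeline run against a tiny synthetic log, plus one extension that counts exception types. The five-column layout (date, time, level, service, message) is an assumption made for this illustration:

```shell
# Create a tiny synthetic log (hypothetical layout: date time level service message)
printf '%s\n' \
  '2024-06-22 10:00:01 ERROR xxService NullPointerException' \
  '2024-06-22 10:00:05 INFO xxService started' \
  '2024-06-22 10:00:09 ERROR xxService NullPointerException' > service.log

# Timestamps (column 2) of exception lines for xxService
grep "xxService" service.log | grep "Exception" | cut -d' ' -f2

# Count occurrences of each exception class (column 5 in this layout)
grep "Exception" service.log | awk '{print $5}' | sort | uniq -c | sort -rn
```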
Over to you: What other commands do you use when you parse logs?
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 22 Jun 2024 -
What is the future of travel?
Only McKinsey
3 top travel trends
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:24 - 21 Jun 2024 -
A Crash Course on Microservice Communication Patterns
Microservices architecture promotes the development of independent services. However, these services still need to communicate with each other to function as a cohesive system.
Getting the communication right between microservices is often a challenge. There are two primary reasons for this:
When microservices communicate over a network, they face inherent challenges associated with inter-process communication.
Developers often choose a communication pattern without carefully considering the specific needs of the problem. This can lead to suboptimal performance and scalability.
In this post, we explore various communication patterns for microservices and discuss their strengths, weaknesses, and ideal use cases.
But first, let’s look at the key challenges associated with microservice communication.
Why is Microservice Communication Challenging?...
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 20 Jun 2024 -
Know what it takes for small businesses to boost productivity?
Only McKinsey
Small businesses’ contribution to GDP
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Small businesses’ critical role. Micro-, small, and medium-size enterprises (MSMEs) are the lifeblood of economies around the world. They account for more than 90% of all businesses, roughly half of value added, and more than two-thirds of business employment. At the same time, small businesses lag behind large companies in productivity. On average, their labor productivity—or value added per worker—is half that of their larger peers, McKinsey Global Institute Council chair Marco Piccitto and coauthors reveal.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:42 - 20 Jun 2024 -
Scaling to 1.2 Billion Daily API Requests with Caching at RevenueCat
Disclaimer: The details in this post have been derived from the article originally published on the RevenueCat Engineering Blog. All credit for the details about RevenueCat’s architecture goes to their engineering team. The link to the original article is present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
RevenueCat is a platform that makes it easy for mobile app developers to implement and manage in-app subscriptions and purchases.
The staggering part is that they handle over 1.2 billion API requests per day from the apps.
At this massive scale, fast and reliable performance becomes critical. Some of it is achieved by distributing the workload uniformly across multiple servers.
However, an efficient caching solution also becomes the need of the hour.
Caching allows frequently accessed data to be quickly retrieved from fast memory rather than slower backend databases and systems. This can dramatically speed up response times.
But caching also adds complexity since the cached data must be kept consistent with the source of truth in the databases. Stale or incorrect data in the cache can lead to serious issues.
For an application operating at the scale of RevenueCat, even small inefficiencies or inconsistencies in the caching layer can have a huge impact.
In this post, we will look at how RevenueCat overcame multiple challenges to build a truly reliable and scalable caching solution using Memcached.
The Three Key Goals of Caching
RevenueCat has three key goals for its caching infrastructure:
Low latency: The cache needs to be fast because even small delays in the caching layer can have significant consequences at this request volume. Retrying requests and opening new connections are detrimental to the overall performance.
Keeping cache servers up and warm: Cache servers need to stay available and full of frequently accessed data to offload the backend systems.
Maintaining data consistency: Data in the cache needs to be consistent. Inconsistency can lead to serious application issues.
While these main goals are highly relevant to applications operating at scale, a robust caching solution also needs supporting features such as monitoring and observability, optimization, and some sort of automated scaling.
Let’s look at each of these goals in more detail and how RevenueCat’s engineering team achieved them.
Low Latency
There’s no doubt that latency has a huge impact on user experience.
As per a statistic by Amazon, every 100ms of latency costs them 1% in sales. While it’s hard to confirm whether this is 100% true, there’s no denying the fact that latency impacts user experience.
Even small delays of a few hundred milliseconds can make an application feel sluggish and unresponsive. As latency increases, user engagement and satisfaction plummet.
RevenueCat achieves low latency in its caching layer through two key techniques.
1 - Pre-established connections
Their cache client maintains a pool of open connections to the cache servers.
When the application needs to make a cache request, it borrows a connection from the pool instead of establishing a new TCP connection. This matters because a TCP handshake could nearly double the cache response time; borrowing the connection avoids that overhead on each request.
But no decision comes without some tradeoff.
Keeping connections open consumes memory and other resources on both the client and server. Therefore, it’s important to carefully tune the number of connections to balance resource usage with the ability to handle traffic spikes.
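A rough illustration of the pattern (not RevenueCat's actual client; `ConnectionPool` and the `connect` factory are hypothetical names):

```python
# Fixed-size pool of pre-established connections: the handshake cost is
# paid once at startup, and requests borrow/return open connections.
import queue

class ConnectionPool:
    def __init__(self, size: int, connect):
        self._pool: queue.Queue = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # pay the TCP handshake cost up front

    def acquire(self, timeout: float = 0.05):
        # Borrow an already-open connection; raises queue.Empty when the
        # pool is exhausted rather than dialing a new connection.
        return self._pool.get(timeout=timeout)

    def release(self, conn) -> None:
        self._pool.put(conn)

pool = ConnectionPool(size=4, connect=lambda: object())  # stub connections
conn = pool.acquire()
pool.release(conn)
assert pool._pool.qsize() == 4
```

The pool size is exactly the tuning knob mentioned above: too few connections and traffic spikes queue up; too many and both sides waste memory.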
2 - Fail-fast approach
If a cache server becomes unresponsive, the client immediately marks it as down for a few seconds and fails the request, treating it as a cache miss.
In other words, the client will not retry the request or attempt to establish new connections to the problematic server during this period.
The key insight here is that even brief retry delays of 100ms can cause cascading failures under heavy load. Requests pile up, servers get overloaded, and the "retry storm" can bring the whole system down. Though it might sound counterintuitive, failing fast is crucial for a stable system.
But what’s the tradeoff here?
There may be a slight increase in cache misses when servers have temporary issues. But this is far better than risking a system-wide outage. A 99.99% cache hit rate is meaningless if 0.01% of requests trigger cascading failures. Prioritizing stability over perfect efficiency is the right call.
One potential enhancement here could be circuit breaking, where requests to misbehaving servers are disabled based on error rates and latency measurements. This is something Uber uses in its integrated caching solution, CacheFront.
However, aggressive timeouts and well-managed connection pools likely achieve similar results with far less complexity.
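The fail-fast behavior can be sketched as follows (illustrative names and an in-process mark-down window, not the actual client):

```python
# Fail fast: a connection failure marks the server down for a few
# seconds; during that window every request is an instant cache miss,
# with no retries and no new connections.
import time

class FailFastClient:
    def __init__(self, downtime: float = 3.0):
        self.marked_down_until = 0.0
        self.downtime = downtime

    def get(self, key, fetch):
        if time.monotonic() < self.marked_down_until:
            return None  # server is marked down: treat as a miss immediately
        try:
            return fetch(key)
        except ConnectionError:
            self.marked_down_until = time.monotonic() + self.downtime
            return None

client = FailFastClient(downtime=3.0)

def flaky(key):
    raise ConnectionError  # simulate an unresponsive cache server

assert client.get("k", flaky) is None           # failure marks the server down
assert client.get("k", lambda k: "v") is None   # still down: instant miss, no call
```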
Keeping Cache Servers Warm
The next goal RevenueCat had was keeping the cache servers warm.
They employed several strategies to achieve this.
1 - Planning for Failure with Mirrored and Gutter pool
RevenueCat uses fallback cache pools to handle failures.
Their strategy is designed to handle cache server failures and maintain high availability. The two approaches they use are as follows:
Mirrored pool: A fully synchronized secondary cache pool that receives all writes and can immediately take over reads if the primary pool fails.
Gutter pool: A small, empty cache pool that temporarily caches values with a short TTL when the primary pool fails, reducing the load on the backend until the primary recovers. For reference, the gutter pool technique was also used by Facebook when they built their caching architecture with Memcached.
Here too, there are trade-offs to consider concerning server size.
For example, smaller servers provide benefits such as:
Granular failure impact: With many small cache servers, the failure of a single server affects a smaller portion of the cached data. This can make the fallback pool more effective, as it needs to handle a smaller subset of the total traffic.
Faster warmup: When a small server fails and the gutter pool takes over, it can warm up the cache for that server’s key space more quickly due to the smaller data volume.
However, small servers also have drawbacks:
Managing a larger number of servers adds operational complexity.
Higher connection overhead, since each application server has to maintain connections to all cache servers.
The diagram below from RevenueCat’s article shows this comparison.
Larger servers, on the other hand, provide benefits such as:
Simplified management: Fewer large servers are easier to manage and maintain compared to many small instances. There are fewer moving parts and less complexity in the overall system.
Improved resource utilization: Larger servers can more effectively utilize the available CPU, memory, and network resources, leading to better cost efficiency.
Fewer connections: With fewer cache servers, the total number of connections from the application servers is reduced, minimizing connection overhead.
Bigger servers also have some trade-offs:
When a large server fails, a larger portion of the cached data becomes unavailable. The fallback pool needs to handle a larger volume of traffic, potentially increasing the load on the backend.
In the case of a failure, warming up the cache for a larger key space may take longer due to the increased data volume.
This is where the strategy of using a mirrored pool for fast failover and a gutter pool for temporary caching strikes a balance between availability and cost.
The mirrored pool ensures immediate availability. The gutter pool, on the other hand, provides a cost-effective way to handle failures temporarily.
Generally speaking, it’s better to design the cache tier based on a solid understanding of the backend capacity. Also, when sharding, the cache and backend sharding schemes should be orthogonal, so that a cache server going down translates into only a moderate load increase spread across the backend servers.
2 - Dedicated Pools
Another technique they employ to keep cache servers warm is to use dedicated cache pools for certain use cases.
Here’s how the strategy works:
Identifying high-value data: The first step is to analyze the application's data access patterns and identify datasets that are crucial for performance, accuracy, or user experience. This could include frequently accessed configuration settings, important user-specific data, or computationally expensive results.
Creating dedicated pools: Instead of relying on a single shared cache pool, create separate pools for each identified high-value dataset. These dedicated pools have their own allocated memory and operate independently from the main cache pool.
Reserving memory: By allocating dedicated memory to each pool, they ensure that the high-value data has a guaranteed space in the cache. This prevents other less critical data from evicting the important information, even under high memory pressure.
Tailored eviction policies: Each dedicated pool can have its eviction policy tailored to the specific characteristics of the dataset. For example, a pool holding expensive-to-recompute data might have a longer TTL or a different eviction algorithm compared to a pool with frequently updated data.
The dedicated pools strategy has several advantages:
Improved cache hit ratio for critical data
Increased data accuracy
Flexibility in cache management
3 - Handling Hot Keys
Hot keys are a common challenge in caching systems.
They refer to keys that are accessed more frequently than others, leading to a high concentration of requests on a single cache server. This can cause performance issues and overload the server, potentially impacting the overall system.
There are two main strategies for handling hot keys:
Key Splitting
The below points explain how key splitting works:
Key splitting involves distributing the load of a hot key across multiple servers.
Instead of having a single key, the key is split into multiple versions, such as keyX/1, keyX/2, keyX/3, etc.
Each version of the key is placed on a different server, effectively spreading the load.
Clients read from one version of the key (usually determined by their client ID) but write to all versions to maintain consistency.
The challenge with key splitting is detecting hot keys in real time and coordinating the splitting process across all clients.
It requires a pipeline to identify hot keys, determine the splitting factor, and ensure that all clients perform the splitting simultaneously to avoid inconsistencies.
The list of hot keys is dynamic and can change based on real-life events or trends, so the detection and splitting process needs to be responsive.
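The mechanics of key splitting can be sketched like this (illustrative names; a real pipeline would also detect hot keys and pick the split factor dynamically, as described above):

```python
# Key splitting: a hot key fans out into N versions placed on different
# servers. Clients write all versions but each client reads only one,
# chosen by its client id, so read load spreads out.
SPLIT_FACTOR = 3
cache: dict[str, str] = {}

def split_keys(key: str) -> list[str]:
    return [f"{key}/{i}" for i in range(1, SPLIT_FACTOR + 1)]

def write_hot(key: str, value: str) -> None:
    for k in split_keys(key):      # writes go to every version
        cache[k] = value

def read_hot(key: str, client_id: int) -> str:
    # each client consistently reads a single version
    return cache[f"{key}/{client_id % SPLIT_FACTOR + 1}"]

write_hot("trending", "show-42")
assert read_hot("trending", client_id=7) == "show-42"
assert read_hot("trending", client_id=8) == "show-42"
```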
Local Caching
Local caching is simpler than key splitting.
Here’s how it works:
Local caching involves caching hot keys directly on the client-side, rather than relying solely on the distributed cache.
When a key is identified as hot, it is cached locally on the client with a short TTL (time-to-live).
Subsequent requests for that key are served from the local cache, reducing the load on the distributed cache servers.
Local caching doesn't require coordination among clients.
However, local caching provides weaker consistency guarantees since the locally cached data may become stale if updates occur frequently.
To mitigate this, it’s important to use short TTLs for locally cached keys and only apply local caching to data that changes rarely.
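A minimal sketch of client-side local caching with a short TTL (illustrative names; a real client would also bound the local cache's size and apply this only to keys detected as hot):

```python
# Local caching for hot keys: serve repeats from a client-side dict for a
# short TTL, so the distributed cache sees far fewer requests.
import time

local_cache: dict[str, tuple[str, float]] = {}
TTL = 1.0  # seconds; kept short on purpose, since local copies can go stale

def get(key: str, fetch_remote) -> str:
    entry = local_cache.get(key)
    if entry and time.monotonic() < entry[1]:
        return entry[0]                                 # served locally, no network hop
    value = fetch_remote(key)                           # miss or expired: go remote
    local_cache[key] = (value, time.monotonic() + TTL)
    return value

calls = []
def remote(key):
    calls.append(key)
    return "v1"

assert get("hot", remote) == "v1"  # first read goes to the distributed cache
assert get("hot", remote) == "v1"  # second read is served locally
assert len(calls) == 1
```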
Avoiding Thundering Herds
When a popular key expires, all clients may request it from the backend simultaneously, causing a spike. This is known as the “thundering herd” problem.
RevenueCat largely avoids this situation because it maintains cache consistency by updating the cache during writes. However, for setups that rely on low TTLs and invalidations driven by DB changes, thundering herds can cause serious problems.
Some other potential solutions to avoid thundering herds are as follows:
Recache policy: The GET requests can include a recache policy. When the remaining TTL is less than the given value, one of the clients will get a miss and re-populate the value in the cache while other clients continue to use the existing value.
Stale policy: In the delete command, the key is marked as stale. A single client gets a miss while others keep using the old value.
Lease policy: In this policy, only one client wins the right to repopulate the value while the losers just have to wait for the winner to re-populate. For reference, Facebook uses leasing in its Memcache setup.
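A single-threaded sketch of the lease idea (illustrative; in a real deployment the lease must be acquired atomically on the cache server, e.g. via Memcached's add command, which stores a value only if the key does not already exist):

```python
# Lease policy: on a miss, only one client wins the right to repopulate
# the key; losers back off and retry instead of all hitting the backend.
leases: set[str] = set()
cache: dict[str, str] = {}

def get_with_lease(key: str, recompute):
    if key in cache:
        return cache[key]
    if key in leases:
        return None              # lost the lease: caller waits and retries shortly
    leases.add(key)              # this client wins the lease
    value = recompute(key)       # only the winner hits the backend
    cache[key] = value
    leases.discard(key)
    return value

assert get_with_lease("home:42", lambda k: "page") == "page"   # winner repopulates
assert get_with_lease("home:42", lambda k: "page") == "page"   # now a plain hit
```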
Cache Server Migrations
Sometimes cache servers have to be replaced while minimizing impact on hit rates and user experience.
RevenueCat has built a coordinated cache server migration system that consists of the following steps:
Warming up the new cluster:
Before switching traffic, the team starts warming up the new cache cluster.
They populate the new cluster by mirroring all the writes from the existing cluster.
This ensures that the new cluster has the most up-to-date data before serving any requests.
Switching a percentage of reads:
After the new cluster is sufficiently warm, the team gradually switches a percentage of read traffic to it.
This allows them to test the new cluster’s performance and stability under real-world load.
Flipping all traffic:
Once the new cluster has proven its stability and performance, the traffic is flipped over to it.
At this point, the new cluster becomes the primary cache cluster, serving all read and write requests.
The old cluster is kept running for a while, with writes still being mirrored to it. This allows quick fallback in case of any issues.
Decommissioning the old cluster:
After a period of stable operation with the new cluster as the primary, the old cluster is decommissioned.
This frees up resources and completes the migration process.
The diagram below shows the entire migration process.
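The warm-up and read-split phases can be sketched as follows. The code is hypothetical, with `old_cluster` and `new_cluster` standing in for cache clients; it is meant only to illustrate write mirroring and percentage-based read routing:

```python
import random

def mirrored_write(old_cluster, new_cluster, key, value, ttl):
    """During warm-up, every write goes to both clusters so the new one
    fills with up-to-date data before taking any reads."""
    old_cluster.set(key, value, ttl)
    try:
        new_cluster.set(key, value, ttl)
    except Exception:
        # Warm-up mirroring is best-effort; the old cluster stays authoritative.
        pass

def route_read(old_cluster, new_cluster, key, new_read_pct):
    """Send new_read_pct percent of reads to the new cluster to test it
    under real load before flipping all traffic."""
    target = new_cluster if random.random() * 100 < new_read_pct else old_cluster
    return target.get(key)
```

Ramping `new_read_pct` from 0 to 100 over time mirrors the gradual switch described above, and keeping `mirrored_write` running after the flip preserves the quick-fallback option.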
Maintaining data consistency is one of the biggest challenges when using caching in distributed systems.
The fundamental issue is that data is stored in multiple places - the primary data store (like a database) and the cache. Keeping the data in sync across these locations in the face of concurrent reads and writes is a non-trivial problem.
See the example below that shows how a simple race condition can result in a consistency problem between the database and the cache.
What’s going on here?
Web Server 1 gets a cache miss and fetches data from the database.
A second request causes Web Server 2 to write newer data for the same key to the database and update the cache with it.
Web Server 1 then refills the cache with the stale data it fetched in step 1, overwriting the newer value.
RevenueCat uses two main strategies to maintain cache consistency.
1 - Write Failure Tracking
In RevenueCat's system, a cache write failure is a strong signal that there may be an inconsistency between the cache and the primary store.
However, there are better options than simply retrying the write because that can lead to cascading failures and overload as discussed earlier.
Instead, RevenueCat's caching client records all write failures. After recording, it deduplicates them and ensures that the affected keys are invalidated in the cache at least once (retrying as needed until successful). This guarantees that the next read for those keys will fetch fresh data from the primary store, resynchronizing the cache.
This write failure tracking allows them to treat cache writes as if they should always succeed, significantly simplifying their consistency model. They can assume the write succeeded, and if it didn't, the tracker will ensure eventual consistency.
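A minimal sketch of such a tracker might look like the following. The class and method names are assumptions for illustration, not RevenueCat’s actual code; the key idea is deduplicating failed keys and retrying the invalidation rather than the original write:

```python
class WriteFailureTracker:
    """Records failed cache writes and retries invalidation (not the write)
    until every affected key has been purged at least once."""
    def __init__(self, cache):
        self.cache = cache
        self.failed_keys = set()  # a set deduplicates repeated failures per key

    def record_failure(self, key):
        self.failed_keys.add(key)

    def flush(self):
        # Invalidate each affected key; keep keys whose invalidation failed
        # so they are retried on the next flush.
        still_failing = set()
        for key in self.failed_keys:
            try:
                self.cache.delete(key)
            except Exception:
                still_failing.add(key)
        self.failed_keys = still_failing
```

Because the next read of an invalidated key repopulates it from the primary store, the cache converges back to a consistent state even when the original write was lost.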
2 - Consistent CRUD Operations
For each type of data operation (Create, Read, Update, Delete), they have developed a strategy to keep the cache and primary store in sync.
For reads, they use the standard cache-aside pattern: read from the cache, and on a miss, read from the primary store and populate the cache. They always use an "add" operation to populate, which only succeeds if the key doesn't already exist, to avoid overwriting newer values.
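Here is a sketch of that read path, with a toy dict-backed cache standing in for memcached (the `DictCache` and `read_through` names are illustrative, not from RevenueCat’s codebase):

```python
class DictCache:
    """Minimal dict-backed cache exposing memcached-style get/add (TTL omitted)."""
    def __init__(self):
        self._d = {}

    def get(self, key):
        return self._d.get(key)

    def add(self, key, value):
        # Only populates if the key is absent, mirroring memcached's "add".
        if key not in self._d:
            self._d[key] = value
            return True
        return False

def read_through(cache, fetch_from_db, key):
    """Cache-aside read: on a miss, fetch from the primary store and populate
    with 'add' so a concurrent writer's newer value is never overwritten."""
    value = cache.get(key)
    if value is not None:
        return value
    value = fetch_from_db(key)
    cache.add(key, value)
    return value
```

Using “add” instead of “set” here is what defuses the race above: if another server wrote a fresher value between the DB read and the cache fill, the stale fill is simply a no-op.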
For updates, they use a clever strategy as follows:
Before the update, they reduce the cache entry’s TTL to a low value, such as 30 seconds.
They update the primary data store
After the update, they update the cache with the new value and reset the TTL
If a failure occurs between steps 1 and 2, the cache remains consistent as the update never reaches the primary store. If a failure occurs between 2 and 3, the cache will be stale, but only for a short time until the reduced TTL expires. Also, any complete failures are caught by the write failure tracker that we talked about earlier.
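The three update steps can be sketched as follows, assuming a cache client with memcached-style `touch` (reset TTL) and `set` operations. The interfaces are illustrative assumptions, not RevenueCat’s actual code:

```python
def consistent_update(cache, db, key, new_value, ttl=3600, guard_ttl=30):
    """Update strategy: shrink the cached entry's TTL so any stale copy dies
    quickly, write the primary store, then refresh the cache."""
    cache.touch(key, guard_ttl)      # step 1: cap the staleness window at ~30s
    db.write(key, new_value)         # step 2: primary store is the source of truth
    cache.set(key, new_value, ttl)   # step 3: rewrite the cache, restore full TTL
```

A crash after step 1 leaves the cache consistent (the DB never changed); a crash after step 2 leaves it stale only until `guard_ttl` expires, matching the failure analysis above.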
For deletes, they use a similar TTL-reduction strategy before deleting from the primary store.
For creates, they rely on the primary store to generate unique IDs, which avoids conflicting cache entries.
Conclusion
RevenueCat’s approach illustrates the complexities of running caches at a massive scale. While some details may be specific to their Memcached setup, the high-level lessons are widely relevant.
Here are some key takeaways to consider from this case study:
Use low timeouts and fail fast, treating cache errors as misses. Retries can cause cascading failures under load.
Plan cache capacity for failure scenarios. Ensure the system can handle multiple cache servers going down without overloading backends.
Use fallback and dedicated cache pools. Mirrored fallback pools and dedicated pools for critical data help keep caches warm and handle failures.
Handle hot keys through splitting or local caching. Distribute load from extremely popular keys across servers or cache them locally with low TTLs.
Avoid "thundering herds" with techniques like stale-while-revalidate and leasing.
Track and handle cache write failures. Assume writes always succeed but invalidate on failure to maintain consistency.
Implement well-tested strategies for cache updates during CRUD operations. Techniques like TTL reduction before writes help maintain consistency across cache and database.
References:
Scaling Smoothly: RevenueCat’s data-caching techniques for 1.2 billion daily API requests
How RevenueCat Manages Caching for Handling over 1.2 Billion API Requests
How Uber Serves Over 40 Million Reads Per Second from Online Storage Using Integrated Cache
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 18 Jun 2024 -
[Online workshop] Maximizing observability with New Relic logs
New Relic
Register for this free online workshop on the 27th June at 10 AM BST/ 11 AM CEST for a comprehensive introduction to leveraging logs in New Relic. Get hands-on with log data, master importation, parsing, filtering, dropping, and setting up alerts.
In this 90-minute online workshop, you’ll work in a sandbox environment, search New Relic log data, work with partitions and AI log patterns, troubleshoot application errors and trace data, create charts and dashboards for seamless team collaboration, and configure proactive alert conditions to address potential issues.
You’ll learn:
- What logs in context is and its role in observability
- What log shipping is and how it works in New Relic
- How to apply parsing rules and drop filters
- Ways to bring your log data into New Relic
- Configuring plugins like FluentD, Kubernetes cloud integrations and log API
Register now.
Need help? Let's get in touch.
This email is sent from an account used for sending messages only. Please do not reply to this email to contact us—we will not get your response.
This email was sent to info@learn.odoo.com Update your email preferences.
For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. New Relic and the New Relic logo are trademarks of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 05:04 - 18 Jun 2024 -
-
Do you know how big the space economy could be by 2035?
Only McKinsey
Space industry opportunities
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
- Space-based capabilities. Over the next decade, the space economy will expand significantly, McKinsey senior partner Ryan Brukardt shares on an episode of The McKinsey Podcast. By investing in space, industries will be able to develop a wide range of capabilities, including in connectivity (for example, the ability to communicate from anywhere in the world using satellite communications technology), mobility (understanding where you are on Earth), and deriving data that only space-based applications can provide.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:05 - 18 Jun 2024 -
⏰ Last chance to register: API platform insights on 18 June!
Explore the latest API platform insights with industry insiders. Learn about trends, success metrics, and AI's role.
Hi Md Abul,
Just a quick heads up that our exciting new online panel discussion - API platform insights 2024 - in collaboration with ResearchHQ - is happening tomorrow! If you haven't registered, now's your last chance to secure your spot.
📅 Date: 18th of June
🕙 Time: 10 am EDT / 3 pm BST
📍 Location: ZoomCome join us for an in-depth discussion of the findings of the API platform insights 2024 report. We'll explore the latest trends in API platforms, discuss success metrics, and examine the role of AI tools in enhancing the efficiency and ROI of platform teams.
Sign up now, and you'll receive an exclusive copy of the full report and access to the on-demand recording after the event.
Thanks,
Budha & team
Tyk, 87a Worship Street, London, City of London EC2A 2BE, United Kingdom, +44 (0)20 3409 1911
by "Budhaditya Bhattacharya" <budha@tyk.io> - 06:01 - 17 Jun 2024 -
The final frontier: A leader’s guide to the space economy
Out of this world
Few areas of the economy are as dynamic or pervasive in our day-to-day lives as outer space. Space-based technology is improving at a breakneck pace, supporting an ever-growing number of applications that we use here on Earth. And like the universe itself, the business potential seems limitless. McKinsey research suggests that the space economy is at an inflection point: it’s poised to nearly triple in size by 2035, and many industries have something to gain from the connectivity, mobility, and data capabilities that outer space offers. This week, we consider space’s strategic future and how the cosmos could help us solve some of our greatest challenges at home.
There’s a certain romance about space. But its practical applications have made it more accessible and connected to our daily lives than ever—from how we watch movies and stream content to how we track packages to how we grow crops. In a recent episode of The McKinsey Podcast, senior partner Ryan Brukardt explains the ins and outs of the fast-growing space economy and its implications for business and society. “Everybody needs to have [space] in their strategy,” Brukardt says. As space-based innovations grow apace, so does the number of companies that can benefit from them. According to McKinsey global managing partner Bob Sternfels, Brukardt, and colleagues, it’s important for leaders in all industries to bridge the gap between the space community and their customers. They can do so by setting a vision for capturing value from space-related advances, even if it means disrupting their own business; by investing in space through new partnerships or a space-dedicated business line; and by joining the broader dialogue about the space economy’s future to ensure that its benefits are as far-reaching as possible.
That’s how many times the cost performance of satellites has improved in the past five to ten years. According to McKinsey’s Daniel Pacthod and colleagues, such rapid progress has enabled a proliferation of satellite-based use cases, including the ability to observe the effects of climate change on every corner of the planet. And there’s even more that satellites can do to advance sustainability here on Earth. The space sector can do the same, the authors say: for example, by tracking emissions, rating the sustainability of satellite missions, and setting targets for net-zero debris in orbit.
What goes up must come down, at least when you’re in Earth’s gravitational pull. But after an object from the International Space Station crashed into a family’s home in the United States, the lack of a legal framework relating to space junk—and who’s responsible when it falls on personal property—has raised a few eyebrows. Indeed, as space gets increasingly crowded, the need for good governance is greater than ever. A more structured approach to managing the space economy’s risks to infrastructure, data, and people (whether they’re on, or orbiting, the Earth) is key to ensuring its future growth.
Lead by shooting for the moon.
— Edited by Daniella Seiler, executive editor, Washington, DC
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
You received this email because you subscribed to the Leading Off newsletter.
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:42 - 17 Jun 2024 -
Re: Weekly update Shipping information fm China
Dear friend
Greeting
Pls check below :
· Shekou-Jebel Ali 3450/40HQ *2 18th June
· Ningbo-Dammam 4000/40HQ *3 26th June
· Shanghai-Riyadh 3550/20GP ; 4650/40HQ 11th June
Peak season is coming, and rates are increasing sharply by the minute.
Space is hard to book these days. Don't wait and end up paying a higher rate!
Best regards
--------
Yori
NVOCC:MOC-NV09845
Winsail International Logistics Co.,Ltd
QQ:1586409909
Mob/Whatsapp: +86 13660987349
Email: overseas.12@winsaillogistics.com
by "Yori" <overseas10@gz-logistics.cn> - 03:45 - 17 Jun 2024 -
How widespread is the use of generative AI?
Only McKinsey
Our latest McK global survey
- High expectations for gen AI. 2024 is the year organizations truly begin using—and deriving business value from—gen AI, Alex Singla, McKinsey senior partner and global leader of QuantumBlack, AI by McKinsey, and coauthors share. Our latest McKinsey Global Survey on AI finds that respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.
- Surge in adoption. For the past six years, AI adoption by respondents’ organizations has hovered at about 50%. This year, the survey finds that adoption has jumped to 72%. Companies are also now using AI in more parts of the business, with half of respondents saying their organizations have adopted AI in two or more business functions. Learn what high-performing companies are doing differently to create value from gen AI adoption, and visit McKinsey Digital to see examples of how companies are competing with technology.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:06 - 16 Jun 2024 -
The quarter’s top themes
McKinsey&Company
At #1: What's the future of AI? In the second quarter of 2024, our top ten posts from McKinsey Themes highlighted topics including generative AI, the traits of good bosses, and more. At No. 1 is What's the future of AI?, which draws on insights from articles by McKinsey’s Eric Lamarre, Rodney W. Zemmel, Kate Smaje, Michael Chui, Ida Kristensen, and more. Read on for our full top 10.
2. 100 articles on generative AI
Since generative AI (gen AI) burst onto the scene in late 2022, it’s captivated business leaders and society at large. The excitement is well deserved: McKinsey research indicates that gen AI could add the equivalent of $2.6 trillion to $4.4 trillion of value annually—and redefine the way people work and live. Plus, our top 10
3. How to be a better boss
It’s often said that your manager can make or break your experience at a job. Unfortunately, it seems that almost everyone can recall working under a bad boss at least once in their career. How do so many bad leaders come into a position of power in the first place, and why do they remain there? Traits that make great leaders
Did you know that McKinsey partners regularly speak with top CEOs across industries to glean valuable perspectives? Our recent and best interviews cover a range of topics, from navigating disruption to effective crisis leadership. Dive into our curated selection below for insights from these impactful leaders, and learn what it takes to thrive in today's uniquely challenging business environment. Get perspective
You received this email because you are a registered member of the Top Ten Most Popular newsletter.
by "McKinsey Top Ten" <publishing@email.mckinsey.com> - 06:46 - 16 Jun 2024 -
The week in charts
The Week in Charts
AI’s effect on workforce skills, healthcare gaps, and more
You received this email because you subscribed to The Week in Charts newsletter.
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:32 - 15 Jun 2024 -
Five exercises to help you lead at your best
Make behavioral changes stick Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
How exceptional leaders lead
Imagine you’re an avid basketball player. Your rebounding could use some work, but your three-point game is stellar. Where to spend your time? If you’re like the pros, you won’t waste too much of it trying to perfect your weakness. Instead, you’ll focus on your strength—and ensure that you sink those threes every time. If you’re a business leader, the same lesson applies. Often, people spend great amounts of energy on their shortcomings—to unsatisfying results. But playing to your strengths and integrating them into your daily work can be much more inspiring for both you and your team.
While focusing on one’s strengths might seem obvious, doing so often requires a shift in mindset. This is a crucial step when adopting new behavior, and it’s one that leaders often neglect in favor of immediate action. But ignoring the attitudes and beliefs behind previous behavior all but ensures that the new behavior a person hopes to adopt won’t stick. Leaders who are in tune with the mindsets that dictate their actions are better equipped to guide their organizations toward effective behavioral change.
Finding your strength is one of five key exercises leaders can use to be more aware of their mindsets. To explore the other four—including the power of taking a pause and how to ask solution-focused questions—and to learn how to shift your own mindset in service of stronger, more purposeful leadership, read Johanne Lavoie’s 2014 McKinsey Quarterly classic, “Lead at your best.”
You received this email because you subscribed to our McKinsey Classics newsletter.
by "McKinsey Classics" <publishing@email.mckinsey.com> - 12:28 - 15 Jun 2024 -
EP116: 11 steps to go from Junior to Senior Developer
Forwarded this email? Subscribe here for more
This week’s system design refresher:
What is Data Pipeline? | Why Is It So Popular? (Youtube video)
11 steps to go from Junior to Senior Developer
Top 8 must-know Docker concepts
What does a typical microservice architecture look like?
Top 10 Most Popular Open-Source Databases
SPONSOR US
New Relic Digital Monitoring Experience (DEM) (Sponsored)
New Relic DEM solutions are designed to provide comprehensive insights into digital operations, allowing teams to optimize user experiences in real time.
Download the datasheet for an overview of New Relic DEM capabilities:
Real user monitoring (RUM): Browser monitoring and mobile monitoring
Pixel-perfect replays: Session replay
Proactive issue detection: Synthetic monitoring, mobile user journeys (crash analysis), and error tracking (errors inbox)
Integration and collaboration: In-app collaboration capabilities
What is Data Pipeline? | Why Is It So Popular?
11 steps to go from Junior to Senior Developer
Collaboration Tools
Software development is a social activity. Learn to use collaboration tools like Jira, Confluence, Slack, MS Teams, Zoom, etc.
Programming Languages
Pick and master one or two programming languages. Choose from options like Java, Python, JavaScript, C#, Go, etc.
API Development
Learn the ins and outs of API development approaches such as REST, GraphQL, and gRPC.
Web Servers and Hosting
Know about web servers as well as cloud platforms like AWS, Azure, GCP, and Kubernetes.
Authentication and Testing
Learn how to secure your applications with authentication techniques such as JWTs, OAuth2, etc. Also, master testing techniques like TDD, E2E testing, and performance testing.
Databases
Learn to work with relational (Postgres, MySQL, and SQLite) and non-relational databases (MongoDB, Cassandra, and Redis).
CI/CD
Pick tools like GitHub Actions, Jenkins, or CircleCI to learn about continuous integration and continuous delivery.
Data Structures and Algorithms
Master the basics of DSA with topics like Big O notation, sorting, trees, and graphs.
System Design
Learn system design concepts such as networking, caching, CDNs, microservices, messaging, load balancing, replication, distributed systems, etc.
Design Patterns
Master the application of design patterns such as dependency injection, factory, proxy, observer, and facade.
AI Tools
To future-proof your career, learn to leverage AI tools like GitHub Copilot, ChatGPT, LangChain, and prompt engineering.
Over to you: What else would you add to the roadmap?
Top 8 must-know Docker concepts
Dockerfile: It contains the instructions to build a Docker image by specifying the base image, dependencies, and run command.
Docker Image: A lightweight, standalone package that includes everything (code, libraries, and dependencies) needed to run your application. Images are built from a Dockerfile and can be versioned.
Docker Container: A running instance of a Docker image. Containers are isolated from each other and the host system, providing a secure and reproducible environment for running your apps.
Docker Registry: A centralized repository for storing and distributing Docker images. For example, Docker Hub is the default public registry but you can also set up private registries.
Docker Volumes: A way to persist data generated by containers. Volumes are outside the container’s file system and can be shared between multiple containers.
Docker Compose: A tool for defining and running multi-container Docker applications, making it easy to manage the entire stack.
Docker Networks: Used to enable communication between containers and the host system. Custom networks can isolate containers or enable selective communication.
Docker CLI: The primary way to interact with Docker, providing commands for building images, running containers, managing volumes, and performing other operations.
Over to you: What other concept should one know about Docker?
What does a typical microservice architecture look like? 👇
The diagram below shows a typical microservice architecture.
Load Balancer: This distributes incoming traffic across multiple backend services.
CDN (Content Delivery Network): CDN is a group of geographically distributed servers that hold static content for faster delivery. The clients look for content in CDN first, then progress to backend services.
API Gateway: This handles incoming requests and routes them to the relevant services. It talks to the identity provider and service discovery.
Identity Provider: This handles authentication and authorization for users.
Service Registry & Discovery: Microservice registration and discovery happen in this component, and the API gateway looks for relevant services in this component to talk to.
Management: This component is responsible for monitoring the services.
Microservices: Microservices are designed and deployed in different domains. Each domain has its database.
Over to you:
What are the drawbacks of the microservice architecture?
Have you seen a monolithic system be transformed into microservice architecture? How long does it take?
Top 10 Most Popular Open-Source Databases
This list is based on factors like adoption, industry impact, and the general awareness of the database among the developer community.
MySQL
PostgreSQL
MariaDB
Apache Cassandra
Neo4j
SQLite
CockroachDB
Redis
MongoDB
Couchbase
Over to you: Which other database would you add to this list?
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 15 Jun 2024 -
Looking for guest post
Hi Sir/Madam,
I have visited your website, and it looks good. I am searching for sites that publish my articles and give backlinks in return. Will you accept my guest post? If you accept guest posts, please tell me the topic and criteria for writing an article for you; otherwise, give me a do-follow backlink from an existing post. What is the minimum price for a permanent article with a do-follow backlink, for general posts as well as casino/betting/CBD, and for backlinks in existing posts?
Hope for positive feedback from your end.
by "Muhammad asad Gujjer" <muhammadasadgujjer@gmail.com> - 10:56 - 14 Jun 2024