• A Crash Course on Scaling the API Layer

    A Crash Course on Scaling the API Layer


    Latest articles

    If you’re not a subscriber, here’s what you missed this month.

    1. A Crash Course on Domain-Driven Design

    2. A Crash Course on Relational Database Design

    3. A Crash Course on Distributed Systems

    4. A Crash Course in Database Scaling Strategies

    5. A Crash Course in Database Sharding

    To receive all the full articles and support ByteByteGo, consider subscribing:


    The API (Application Programming Interface) layer serves as the backbone for communication between clients and the backend services in modern internet-based applications.

    It acts as the primary interface through which clients, such as web or mobile applications, access the functionality and data provided by the application. The API layer of any application has several key responsibilities (a brief code sketch follows this list):

    • Process incoming requests from clients based on the defined API contract.

    • Enforce security mechanisms and protocols by authenticating and authorizing clients based on their credentials or access tokens.

    • Orchestrate interactions between various backend services and aggregate the responses received from them.

    • Handle responses by formatting and returning the result to the client.
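
    Here is that brief sketch: a minimal, illustrative Python request pipeline, not taken from the article. The token table and the get_profile / get_orders helpers are hypothetical stand-ins for a real auth provider and real backend services.

        import json

        VALID_TOKENS = {"token-123": "user-42"}    # stand-in for a real auth provider

        def get_profile(user_id):                  # stand-in for a Profile backend service
            return {"name": "Alice"}

        def get_orders(user_id):                   # stand-in for an Orders backend service
            return [{"id": 1, "item": "book"}]

        def handle_request(request):
            # 1. Process the incoming request against the defined API contract.
            if "path" not in request or "token" not in request:
                return {"status": 400, "body": json.dumps({"error": "malformed request"})}

            # 2. Enforce security: authenticate the caller from its access token.
            user_id = VALID_TOKENS.get(request["token"])
            if user_id is None:
                return {"status": 401, "body": json.dumps({"error": "unauthorized"})}

            # 3. Orchestrate backend services and aggregate their responses.
            aggregated = {"profile": get_profile(user_id), "orders": get_orders(user_id)}

            # 4. Format the result and return it to the client.
            return {"status": 200, "body": json.dumps(aggregated)}

        print(handle_request({"path": "/dashboard", "token": "token-123"}))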

    Due to the central role APIs play in the application architecture, they become critical to the application's scalability.

    The scalability of the API layer is crucial due to the following reasons:

    • Handling Load and Traffic Spikes: As applications become popular, they encounter increased traffic and sudden spikes in user demand. A scalable API can manage the increased load efficiently.

    • Better User Experience: The bar for user expectation has gone up. Most users these days expect fast and responsive applications. A scalable API ensures that the application can support a high number of users without compromising performance.

    • Cost and Resource Optimization: Scalable APIs unlock the path to better resource utilization. Rather than provisioning the infrastructure upfront for the highest demand level, instances are added and removed based on demand, resulting in reduced operational costs.

    In this article, we’ll learn the key concepts a developer must understand for API scalability. We will also look at some tried and tested strategies for scaling the API layer with basic code examples for clarity. Lastly, we will also look at some best practices that can help with scaling the API layer.

    Unlock this post for free, courtesy of Alex Xu.

    Claim my free post

    A subscription gets you:

    An extra deep dive on Thursdays
    Full archive
    Many readers expense it through their team's learning budget
     
     



    by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 22 Aug 2024
  • Assessment Oman Rail


    Greetings all,


    VENDOR REGISTRATION invitation: If you intend to participate, please confirm your interest by requesting the Vendor Questionnaire and Expression of Interest form.

    We appreciate your interest in this VENDOR REGISTRATION invitation and look forward to your reply.


    Kind Regards,


    Mr. Rafik Farah,
    Snr. Procurement Coordinator.
    OMAN RAIL - OMAN.

    by "OMAN RAIL - OMAN" <reg@omanrailbidsom.com> - 08:03 - 22 Aug 2024
  • Project Bid

    Hello, 

    Electrical contractors and subcontractors can use the services provided by our Electrical Take-offs Company. For our take-offs, we employ Accu-Bid, EBM, McCormick, ConEst, Plans Swift, and Blue Beam. Before beginning a job, get in touch with us and send the project's plans for a quote on our costs and turnaround time.

    Send an email for any samples, queries, and quotes for your projects.

    Thanks.
    Have a Nice Day! 
    Regards
    Gazmir Bogdani
    Estimation Department
    Brink Estimating, LLC.

    by "Gazmir Bogdani" <gazmirbogdanighy67@gmail.com> - 05:01 - 21 Aug 2024
  • 1,000,000 Special: 30% Off Annual Premium Subscription

    1,000,000 Special: 30% Off Annual Premium Subscription


    We're thrilled to announce that our ByteByteGo newsletter has hit a major milestone - 1 million subscribers! 

    To celebrate this achievement, we're offering a special 30% discount on our Annual Premium Subscription, but only for a limited time from August 21st through August 27th.

    For this one-week only promotion, you can upgrade to Premium and unlock a wealth of exclusive content at an unbeatable price. As a Premium subscriber, you'll receive:

    🔹An extra in-depth system design deep dive every Thursday

    🔹Full access to our entire Premium archive

    Click here to claim your 30% discount on the ByteByteGo Premium subscription before time runs out.

    Here's to 1 million amazing subscribers. We're excited to keep growing our community and delivering the best system design content around.

    Thanks for being a part of the ByteByteGo journey!

    Upgrade Now (30% OFF)


    Sincerely,

    Alex & Sahn 👋

     
     



    by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 21 Aug 2024
  • Introducing the My Caddie Golf Platform featuring Birchwood Golf Club

    Introducing the My Caddie Golf Platform featuring Birchwood Golf Club

    Hi there,

    I hope you're well. I wanted to reach out because Your Telecoms Consultant has been recommended to us, and we have a unique opportunity that you may be interested in.

    The My Caddie Golf Platform featuring Birchwood Golf Club can help you and the team generate business from our members and visitors. We are looking for a local telecommunications company to become our official partner.

    Aligning your business with such a prestigious establishment can elevate your brand image and generate a positive association in the minds of potential customers.

    This partnership presents an ideal opportunity to put your company in front of a vast, local and affluent audience whilst also giving you complimentary golf to use as you see fit. Even if you're too busy at the moment, we would still like to have you on board as a trusted local company who could provide our other partners with advice and pass on referrals.

    Here are some of the features you will receive in the partnership:

    - Exclusivity for your sector.
    - Providing you with exposure on the members and visitors iPhone app.
    - Exposure on the members and visitors Android app.
    - Your branding on one of the holes in our Birchwood Golf Club web flyovers, which is trackable and targeted to your demographic within the local area.
    - Access to our networking groups between all partners and plus ones.
    - Complimentary golf for you to entertain clients, colleagues and guests.


    The cost is the equivalent of just £26 per week for a 2-year partnership + £399 Artwork (one-off, optional) + VAT.

    Artwork is optional, but if you want us to do it for you, you can change it up to 8 times over the 2 years, so every quarter you can revamp it and put new offers on. We'll also give it to you for further marketing.

    I have reached out to a number of companies locally and will be operating on a first come, first served basis, so if the above is of interest please let me know as soon as possible to avoid disappointment.


    Best wishes,

    Jack Stevens
    Account Manager
    0330 0436 463






    We have sent this email to info@learn.odoo.com having found your company contact details online. If you don't want to get any more emails from us you can stop them here.

    West 1 Group UK Limited, registered in England and Wales under company number 07574948. Our registered office is Unit 1 Airport West, Lancaster Way, Yeadon, Leeds, West Yorkshire, LS19 7ZA.

    Disclaimer: Our app operates independently. While we provide authentic and accurate hole-by-hole guides, we do not have a direct association with Birchwood Golf Club or claim any endorsement from them. We aim to offer golfers a reliable guide as they navigate their favourite courses. As a value-add for our advertisers, we offer free tee times at Birchwood Golf Club which we procure as any customer would, directly from the venue. We also host networking events, which may be held at various local venues as well as online sessions. Furthermore, advertisers have the unique opportunity to be featured in our flyovers of each golf hole. All offerings are subject to availability and terms.


    by "Jack Stevens" <jack@w1g.biz> - 05:47 - 21 Aug 2024
  • Popular recent editions—and a brief respite

    Intersection

    Get your briefing
    Five Fifty




    by "McKinsey Quarterly Five Fifty" <publishing@email.mckinsey.com> - 04:27 - 20 Aug 2024
  • You're invited! Join us for a virtual event on the physical realities of the energy transition
    Register now

    New from McKinsey & Company


    Today’s energy system is a huge and deeply interlinked physical entity with around 60,000 power plants, two million kilometers of oil and gas pipelines, and 1.5 billion vehicles on the road. Despite the momentum of recent years, the energy transition is in its early stages. Only 10 percent of the low-emissions power capacity needed by 2050 to meet global climate commitments is currently deployed. Navigating the remaining 90 percent requires confronting the reality that the energy transition is at its core a physical transformation—on a colossal scale. 

    We have identified 25 physical challenges that must be addressed, which relate to the performance of low-emissions technologies and the scaling of the supply chains and infrastructure needed to deploy them. 

    Join us on Thursday, September 5 at 11:00AM–12:00PM ET (5:00PM–6:00PM CET) for a virtual event featuring a presentation by McKinsey’s Chris Bradley, Mekala Krishnan, Humayun Tai, and leading industry experts who will discuss the physical challenges that would need to be addressed for a successful energy transition and the opportunities for innovation and system reconfiguration to tackle them. This event will explore the findings of a new report, The hard stuff: Navigating the physical realities of the energy transition.

    Topics discussed will include:

    The nature of today’s energy system and its performance, and the potential for low-emissions technologies to match that performance

    The current status of the energy transition

    Our comprehensive stocktake of 25 physical challenges that would need to be addressed

    Why 12 of the 25 challenges are particularly difficult to tackle—the demanding dozen

    How CEOs and policymakers can use an understanding of the physical challenges to navigate through a successful transition

    Once registered, you will receive an email confirmation and can add the event to your calendar. Registrants who can’t attend live will receive a recording of the event.




    by "McKinsey & Company" <publishing@email.mckinsey.com> - 12:33 - 20 Aug 2024
  • Trillions of Indexes: How Uber’s LedgerStore Supports Such Massive Scale

    Trillions of Indexes: How Uber’s LedgerStore Supports Such Massive Scale


    Try Fully Managed Apache Airflow and get certified for FREE (Sponsored)

    Run Airflow without the hassle and management complexity. Take Astro (the fully managed Airflow solution) for a test drive today and unlock a suite of features designed to simplify, optimize, and scale your data pipelines. For a limited time, new sign ups will receive a complimentary Airflow Fundamentals Certification exam (normally $150).

    Get Started —>


    Disclaimer: The details in this post have been derived from the Uber Engineering Blog. All credit for the technical details goes to the Uber engineering team. The links to the original articles are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.

    Ledgers are the source of truth of any financial event. By their very nature, ledgers are immutable. Also, we usually want to access data stored in these ledgers in various combinations.

    With billions of trips and deliveries, Uber performs tens of billions of financial transactions. Merchants, riders, and customers are involved in these financial transactions. Money flows from the ones spending to the ones earning. 

    To manage this mountain of financial transaction data, Uber relies on LedgerStore, an extremely critical storage solution.

    In this post, we’ll look at how Uber implemented the indexing architecture for LedgerStore to handle trillions of indexes and how they migrated a trillion entries of Uber’s Ledger Data from DynamoDB to LedgerStore.

    What is LedgerStore?

    LedgerStore is Uber’s custom-built storage solution for managing financial transactions. Think of it as a giant, super-secure digital ledger book that keeps track of every financial event at Uber, from ride payments to food delivery charges.

    What makes LedgerStore special is its ability to handle an enormous amount of data. We’re talking about trillions of entries.

    Two main features supported by LedgerStore are:

    • Immutability: LedgerStore is designed to be immutable, which means once a record is written, it cannot be changed. This ensures the integrity of financial data.

    • Indexes: LedgerStore allows quick look-up of information using various types of indexes. For example, if someone needs to check all transactions for a particular user or all payments made on a specific date, LedgerStore can retrieve this information efficiently.

    Ultimately, LedgerStore helps Uber manage its financial data more effectively, reducing costs compared to previous solutions.

    Types of Indexes

    LedgerStore supports three main types of indexes:

    • Strongly consistent indexes

    • Eventually consistent indexes

    • Time-range indexes

    Let’s look at each of them in more detail.

    Strongly Consistent Indexes

    These indexes provide immediate read-your-write guarantees, crucial for scenarios like credit card authorization flows. For example, when a rider starts an Uber trip, a credit card hold is placed, which must be immediately visible to prevent duplicate charges.

    See the diagram below that shows the credit-card payment flow for an Uber trip supported by strongly consistent indexes.

    If the index is not strongly consistent, the hold may take a while to be visible upon reading. This can result in duplicate charges on the user’s credit card.

    Strongly consistent indexes at Uber are built using the two-phase commit approach. Here’s how this approach works in the write path and read path.

    1 - The Write Path

    The write path consists of the following steps:

    • When a new record needs to be inserted, the system first writes an “index intent” to the index table. It could even be multiple indexes.

    • This intent signifies that a new record is about to be written. If the index intent write fails, the whole insert operation fails.

    • After the index intent is successfully written, the actual record is written to the main data store.

    • If the record write is also successful, the system commits the index. This is done asynchronously to avoid affecting the end-user insert latency.

    The diagram below shows this entire process.

    There is one special case to consider here: if the index intent write succeeds, but the record write fails, the index intent has to be rolled back to prevent the accumulation of unused intents. This part is handled during the read path.
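
    As a rough illustration of the write path described above (not Uber's actual code), the following Python sketch uses two in-memory dicts as stand-ins for the index table and the main record store:

        index_table = {}   # index_key -> {"record_key": ..., "status": "pending" | "committed"}
        record_store = {}  # record_key -> record

        def insert(record_key, record, index_keys):
            # Phase 1: write an index intent for every index; if this fails, the whole insert fails.
            for ik in index_keys:
                index_table[ik] = {"record_key": record_key, "status": "pending"}

            # Phase 2: write the actual record to the main data store.
            record_store[record_key] = record

            # Commit the index entries. The real system does this asynchronously
            # so that it does not add to the end-user insert latency.
            for ik in index_keys:
                index_table[ik]["status"] = "committed"

        insert("trip-1", {"amount": 23.5}, ["user-42/2024-08-09"])
        print(index_table)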

    2 - The Read Path

    The following steps happen during the read path:

    • If a committed index entry is found, the response data is returned to the client.

    • If an index entry is found but with a “pending” status, the system must resolve its state. This is done by checking the main data store for the corresponding record.

    • If the record exists, the index is asynchronously committed and the record is returned to the user.

    • If the record doesn’t exist, the index intent is deleted or rolled back and the read operation does not return a result for the query.

    The diagram below shows the process steps in more detail.
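
    Continuing the same illustrative sketch (again, not Uber's actual code), the read path resolves a pending intent against the main data store. The dicts are seeded here with one committed entry and one stale "pending" intent whose record write never happened:

        index_table = {
            "user-42/2024-08-09": {"record_key": "trip-1", "status": "committed"},
            "user-42/2024-08-10": {"record_key": "trip-2", "status": "pending"},
        }
        record_store = {"trip-1": {"amount": 23.5}}

        def read_by_index(index_key):
            entry = index_table.get(index_key)
            if entry is None:
                return None                        # no index entry for this key

            record = record_store.get(entry["record_key"])

            if entry["status"] == "committed":
                return record                      # committed entry: return the data

            # Pending intent: resolve its state by checking the main data store.
            if record is not None:
                entry["status"] = "committed"      # committed asynchronously in the real system
                return record

            # The record write never happened: roll back the stale intent.
            del index_table[index_key]
            return None

        print(read_by_index("user-42/2024-08-09"))   # committed -> returns the record
        print(read_by_index("user-42/2024-08-10"))   # stale intent -> rolled back, returns None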


    Reserve Your Seat Now! | Upcoming Cohort on Aug 26th, 2024 (Sponsored) 

    Build confidence without getting lost in technical jargon.

    This cohort is designed to help you build a foundational understanding of software applications. You won’t just memorize a bunch of terms - you’ll actually understand how software products are designed and deployed to handle millions of users.

    And our community will be here to give you the support, guidance, and accountability you’ll need to finally stick with it.

    After only 4 weeks, you’ll look back and think.. “WOW! I can’t believe I did that.”

    Now imagine if you could:

    ✅ Master technical concepts without getting lost in an internet maze.

    ✅ Stop asking engineers to dumb down concepts when talking to you.

    ✅ Predict risks, anticipate issues, and avoid endless back-and-forth.

    ✅ Improve your communication with engineers, users, and technical stakeholders.

    Grab your spot now with an exclusive 25% off discount for ByteByteGo Readers. See you there!

    Register Now!


    Eventually Consistent Indexes

    These indexes are designed for scenarios where real-time consistency is not critical, and a slight delay in index updates is acceptable. They offer better performance and availability at the cost of immediate consistency.

    From a technical implementation point of view, the eventually consistent indexes are generated using the Materialized Views feature of Uber’s Docstore. 

    Materialized views are pre-computed result sets stored as a table, based on a query against the base table(s). The materialized view is updated asynchronously when the base table changes.

    When a write occurs to the main data store, it doesn’t immediately update the index. Instead, a separate process periodically scans for changes and updates the materialized view. The consistency window is configurable and determines how frequently the background process runs to update the indexes.

    In Uber’s case, the Payment History Display screen uses the eventually consistent indexes.
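
    The following Python sketch mimics this idea under stated assumptions: a base table held in memory and an index rebuilt by a background job on a configurable consistency window. It illustrates the concept only; it is not Docstore's materialized-view API.

        import threading
        import time

        base_table = []        # payment records, e.g. {"user": ..., "amount": ...}
        payments_by_user = {}  # the eventually consistent "materialized view"

        def refresh_index():
            # Recompute the view from the base table (a full rescan, for simplicity).
            view = {}
            for row in base_table:
                view.setdefault(row["user"], []).append(row)
            payments_by_user.clear()
            payments_by_user.update(view)

        def background_refresher(consistency_window_s=1.0):
            # The consistency window controls how often the view catches up.
            while True:
                refresh_index()
                time.sleep(consistency_window_s)

        threading.Thread(target=background_refresher, daemon=True).start()
        base_table.append({"user": "user-42", "amount": 23.5})
        time.sleep(1.5)                              # wait out the consistency window
        print(payments_by_user.get("user-42"))       # the write is now visible in the index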

    Time-range Indexes

    Time-range indexes are a crucial component of LedgerStore, designed to query data within specific time ranges efficiently.

    These indexes are important for operations like offloading older ledger data to cold storage or sealing data for compliance purposes. The main characteristic of these indexes is their ability to handle deterministic time-range batch queries that are significantly larger in scope compared to other index types.

    Earlier, the time-range indexes were implemented using a dual-table design approach in DynamoDB. However, it was operationally complex.

    The migration of LedgerStore to Uber’s Docstore paved the path for a simpler implementation of the time-range index. Here’s a detailed look at the Docstore implementation for the time-range indexes:

    • Single Table Design: Only one table is used for time-range indexes in Docstore.

    • Partitioning Strategy: Index entries are partitioned based on full timestamp value. This ensures a uniform distribution of writes across partitions, eliminating the chances of hot partitions and write throttling.

    • Sorted Data Storage: Data is stored in a sorted manner based on the primary key (partition + sort keys). 

    • Read Operation: Reads involve a prefix scanning of each shard of the table up to a certain time granularity. For example, to read 30 minutes of data, the operation might be broken down into three 10-minute interval scans. Each scan is bounded by start and end timestamps. After scanning, a scatter-gather operation is performed, followed by sort merging across shards to obtain all time-range index entries in the given window, in a sorted fashion.

    For clarity, consider a query to fetch all ledger entries between “2024-08-09 10:00:00” and “2024-08-09 10:30:00”. The query would be broken down into three 10-minute scans:

    • 2024-08-09 10:00:00 to 2024-08-09 10:10:00

    • 2024-08-09 10:10:00 to 2024-08-09 10:20:00

    • 2024-08-09 10:20:00 to 2024-08-09 10:30:00

    Each of these scans would be executed across all shards in parallel. The results would then be merged and sorted to provide the final result set.

    The diagram below shows the overall process:
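
    As a complementary illustration of this scatter-gather read (a sketch rather than Uber's implementation), the Python snippet below splits a window into 10-minute scans, scans each hypothetical shard, and sort-merges the results:

        import heapq
        from datetime import datetime, timedelta

        # Hypothetical shards, each holding (timestamp, entry) tuples sorted by timestamp.
        shards = [
            [(datetime(2024, 8, 9, 10, 5), "ledger-A"), (datetime(2024, 8, 9, 10, 25), "ledger-C")],
            [(datetime(2024, 8, 9, 10, 12), "ledger-B")],
        ]

        def ten_minute_intervals(start, end):
            step = timedelta(minutes=10)
            while start < end:
                yield start, min(start + step, end)
                start += step

        def scan_shard(shard, lo, hi):
            # Bounded scan of one shard between the start and end timestamps.
            return [(ts, entry) for ts, entry in shard if lo <= ts < hi]

        def time_range_query(start, end):
            results = []
            for lo, hi in ten_minute_intervals(start, end):
                per_shard = [scan_shard(s, lo, hi) for s in shards]   # scatter across shards
                results.extend(heapq.merge(*per_shard))               # gather and sort-merge
            return results

        print(time_range_query(datetime(2024, 8, 9, 10, 0), datetime(2024, 8, 9, 10, 30)))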

    Index Lifecycle Management

    Index lifecycle management is another component of LedgerStore’s architecture that handles the design, modification, and decommissioning of indexes.

    Let’s look at the main parts of the index lifecycle management system.

    Index Lifecycle State Machine

    This component orchestrates the entire lifecycle of an index:

    • Creating the index table

    • Backfilling it with historical index entries

    • Validating the entries for completeness

    • Swapping the old index with the new one for read/write operations

    • Decommissioning the old index

    The state machine ensures that each step is completed correctly before moving to the next, maintaining data integrity throughout the process.

    The diagram below shows all the steps:
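
    A minimal sketch of that ordering guarantee, with hypothetical state names and step functions standing in for the real LedgerStore internals:

        STATES = ["CREATE_TABLE", "BACKFILL", "VALIDATE", "SWAP", "DECOMMISSION_OLD"]

        def run_lifecycle(steps):
            # Advance only when the current step reports success, so a half-finished
            # index can never be swapped in for reads and writes.
            for state in STATES:
                if not steps[state]():
                    raise RuntimeError(f"step {state} failed; retry before advancing")
            return "DONE"

        steps = {state: (lambda: True) for state in STATES}   # pretend every step succeeds
        print(run_lifecycle(steps))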

    Historical Index Data Backfill

    When new indexes are defined or existing ones are modified, it’s essential to backfill historical data to ensure completeness.

    The historical index data backfill component performs the following tasks:

    • Builds indexes from historical data stored in cold storage.

    • Backfills the data to the storage layer in a scalable manner.

    • Uses configurable rate-limiting and batching to manage the process efficiently.

    Index Validation

    After indexes are backfilled, they need to be verified for completeness. This is done through an offline job that:

    • Computes order-independent checksums at a certain time-window granularity.

    • Compares these checksums across the source of truth data and the index table.

    From a technical point of view, the component uses a time-window approach, i.e., computing checksums for every 1-hour block of data. Even if a single entry is missed, the aggregate checksum for that time window will lead to a mismatch.

    For example, if checksums are computed for 1-hour blocks and an entry from 2:30 PM is missing, the checksum for the 2:00 PM - 3:00 PM block will not match.
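
    A small sketch of this windowed, order-independent comparison (illustrative only; the hashing scheme and window size below are assumptions, not Uber's exact implementation):

        import hashlib
        from collections import defaultdict

        def window_checksums(entries, window_seconds=3600):
            # XOR the per-entry hashes within each 1-hour window; XOR ignores order.
            checksums = defaultdict(int)
            for ts, payload in entries:               # ts is a unix timestamp
                window = ts // window_seconds
                digest = hashlib.sha256(payload.encode()).digest()
                checksums[window] ^= int.from_bytes(digest[:8], "big")
            return dict(checksums)

        source_of_truth = [(1723197000, "entry-1"), (1723198800, "entry-2")]
        index_dump      = [(1723198800, "entry-2"), (1723197000, "entry-1")]   # same data, different order
        index_with_gap  = [(1723197000, "entry-1")]                            # one entry missing

        print(window_checksums(source_of_truth) == window_checksums(index_dump))      # True
        print(window_checksums(source_of_truth) == window_checksums(index_with_gap))  # False: mismatch flags the gap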

    Migration of Uber’s Payment Data to LedgerStore

    Now that we understand LedgerStore's indexing architecture and capabilities, let's look at the massive migration of Uber's payment data to LedgerStore.

    Uber’s payment platform, Gulfstream, was initially launched in 2017 using Amazon DynamoDB for storage. However, as Uber’s operations grew, this setup became increasingly expensive and complex. 

    By 2021, Gulfstream was using a combination of three storage systems:

    • DynamoDB for the most recent 12 weeks of data. This was the hot data.

    • TerraBlob (Uber’s internal blob store like AWS S3) for older or cold data.

    • LedgerStore (LSG) where new data was being written and where they wanted to migrate all data.

    The primary reasons for migrating to LedgerStore were as follows:

    • Cost savings: Moving to LedgerStore promised significant recurring cost savings compared to DynamoDB.

    • Simplification: Consolidating from three storage systems to one would simplify the code and design of the Gulfstream services.

    • Improved Performance: LedgerStore offered shorter indexing lag and reduced latency due to being on-premise.

    • Purpose-Built Design: LedgerStore was specifically designed for storing payment-style data, offering features like verifiable immutability and tiered storage for cost management.

    The migration was a massive undertaking. Some statistics are as follows:

    • 1.2 Petabytes of compressed data

    • Over 1 trillion entries

    • 0.5 PB of uncompressed data for secondary indexes.

    For reference, storing this data on typical 1 TB hard drives requires a total of 1200 hard drives just for the compressed data.

    Checks

    One of the key goals of the migration was to ensure that the backfill was correct and acceptable, while the current traffic requirements continued to be fulfilled.

    Key validation methods adopted were as follows:

    1 - Shadow Validation

    This ensured that the new LedgerStore system could handle current traffic patterns without disruption.

    Here’s how it worked:

    • The system would compare responses from the existing DynamoDB-based system with what LedgerStore would return for the same queries. This allowed the team to catch any discrepancies in real time (a minimal sketch follows this list).

    • An ambitious goal was to ensure 99.99% completeness and correctness with an upper bound of 99.9999%.

    • To achieve six nines of confidence, the team needed to compare at least 100 million records. At a rate of 1000 comparisons per second, this would take more than a day to collect sufficient data.

    • During shadow validation, production traffic was duplicated on LedgerStore. This helped the team verify the LedgerStore’s ability to handle the production load.
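
    Here is the minimal sketch referenced above. The two query functions are hypothetical stand-ins for the DynamoDB-backed path and the new LedgerStore path; this illustrates the dual-read comparison, not Uber's code.

        mismatches = 0
        total = 0

        def query_dynamodb(key):      # stand-in for the existing DynamoDB-based system
            return {"key": key, "amount": 10}

        def query_ledgerstore(key):   # stand-in for the new LedgerStore path
            return {"key": key, "amount": 10}

        def shadow_read(key):
            global mismatches, total
            primary = query_dynamodb(key)      # this response is served to the caller
            shadow = query_ledgerstore(key)    # duplicated traffic, used only for comparison
            total += 1
            if primary != shadow:
                mismatches += 1                # a discrepancy caught in (near) real time
            return primary

        for key in range(1000):
            shadow_read(key)
        print(f"agreement over sample: {100 * (1 - mismatches / total):.4f}%")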

    2 - Offline Validation

    While shadow validation was effective for current traffic patterns, it couldn’t provide strong guarantees about rarely-accessed historical data. This is where offline validation came into play. 

    Here’s how it worked:

    • Offline validation involved comparing complete data dumps from DynamoDB with the data in LedgerStore. The largest offline validation job compared 760 billion records, involving 70 TB of compressed data.

    • The team used Apache Spark for these massive comparison jobs, leveraging distributed shuffle-as-a-service for Spark.

    Backfill Issues

    The process of migrating Uber’s massive ledger data from DynamoDB to LedgerStore involved several backfill challenges that had to be solved:

    • Scalability: The engineering team learned that starting small and gradually increasing the scale was crucial. Blindly pushing beyond the system's limits would have amounted to a self-inflicted DDoS attack on their own systems.

    • Incremental Backfills: Given the enormous volume of data, attempting to backfill all at once would generate 10 times the normal traffic load. The solution was to break the backfill into smaller, manageable batches that could be completed within minutes.

    • Rate Control: To ensure consistent behavior during backfill, the team implemented rate control using Guava's RateLimiter in Java/Scala. The team also developed a system to dynamically adjust the backfill rate based on the current system load, using an additive increase/multiplicative decrease approach similar to TCP congestion control (a rough sketch follows this list).

    • Data File Size Management: The team found that managing the size of data files was important. They aimed to keep the file sizes around 1 GB, with flexibility between 100 MB and 10 GB. This approach helped avoid issues like MultiPart limitations in various tools and prevented problems associated with having too many small files.

    • Fault Tolerance: Data quality issues and data corruption were inevitable. The team’s solution was to monitor statistics. If the failure rate was high, they would manually stop the backfill, fix the issue, and continue. For less frequent issues, they let the backfill continue while addressing problems in parallel.

    • Logging Challenges: Excessive logging during backfill could overload the logging infrastructure. The solution was to use a rate limiter for logging. For example, they might log only once every 30 seconds for routine operations while logging all errors if they occurred infrequently.
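
    Here is the rough sketch referenced in the Rate Control point above: an additive-increase/multiplicative-decrease controller. It is illustrative only; the real system used Guava's RateLimiter together with Uber-internal load signals, and the step sizes and limits below are made-up values.

        def next_backfill_rate(current_rate, system_overloaded,
                               additive_step=50, decrease_factor=0.5,
                               min_rate=100, max_rate=10_000):
            if system_overloaded:
                # Multiplicative decrease: back off sharply when the storage layer is hot.
                return max(min_rate, current_rate * decrease_factor)
            # Additive increase: probe for more headroom, a little at a time.
            return min(max_rate, current_rate + additive_step)

        rate = 1000.0
        for overloaded in [False, False, True, False]:   # pretend load signals
            rate = next_backfill_rate(rate, overloaded)
            print(f"backfill rate: {rate:.0f} records/s")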

    Conclusion

    The impact of Uber’s ledger data migration to LedgerStore has been amazing, with over 2 trillion unique indexes successfully transferred without a single data inconsistency detected in over six months of production use. 

    This migration, involving 1.2 PB of compressed data and over 1 trillion entries, showcases Uber’s ability to handle massive-scale data operations without disrupting critical financial processes. It also provides great learning points for the readers.

    The cost savings from this migration have been substantial, with estimated yearly savings exceeding $6 million due to reduced spend on DynamoDB. Performance improvements have been notable, with LedgerStore offering shorter indexing lag and better network latency due to its on-premise deployment within Uber’s data centers.

    References:

     
     



    by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 20 Aug 2024
  • The Business Show UK 2024

    Hello,

     

    Would you be interested in The Business Show UK attendees list?

     

    It's a paid list.

     

    Regards,

    Gina


    by "Gina Bernard" <gina.bernard@serveitoutreach.shop> - 07:01 - 20 Aug 2024
  • The hard stuff: Navigating the physical realities of the energy transition
    Forge a path

    New from McKinsey Global Institute




    by "McKinsey & Company" <publishing@email.mckinsey.com> - 12:39 - 19 Aug 2024
  • " Waiting For Your Quote`"
    Hi,

    Do you have any plans & drawings for an estimate?
    You simply send us plans and mention the scope of work.
    We give you an accurate estimate at the lowest prices.

    Best Regards,
    Andrew





    by "Kiven Brandon" <kiven.golabdea@gmail.com> - 09:53 - 19 Aug 2024
  • Our most-read leadership advice

    Leading Off

    Catch up as we take a pause

    Great leaders know the value of taking time to rest and recharge—and we’re following their advice by taking a break from our usual Leading Off delivery schedule. We hope you’re also finding some space to slow down this summer and enjoy time with friends and family. 

    On September 9, we’ll be back in your inbox with never-before-heard reflections from leaders in The Journey of Leadership by Dana Maor, Kurt Strovink, Ramesh Srinivasan, and senior partner emeritus Hans-Werner Kaas. While we’re away, you can browse through past issues here—and be sure you didn’t miss recent favorites:

    If you have friends or colleagues who might enjoy Leading Off, consider forwarding this email to them or sharing it on LinkedIn, X, or Facebook. They can sign up for this or any of our 40+ other free email subscriptions at mckinsey.com/subscriptions. (And you might also want to revisit that page to see our full newsletter lineup.)




    by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 02:02 - 19 Aug 2024
  • Guest Post article published Offered
    Hello,

    I am a professional SEO outreach specialist (blogger) providing high quality websites for publishing guest posts and building backlinks. We are accepting guest (sponsored) posts on our websites. 

    Fashion General and Technology Sites Check 


    If you are interested in guest posting or writing with us, send us an email.

    All the best,


    by hussanalig397@gmail.com - 02:32 - 18 Aug 2024
  • How companies can make big moves to beat the odds
    Transform the strategy room
    McKinsey Classics

    McKinsey Classics | August 2024


    The power of big moves in crafting a successful strategy

    In today’s ever competitive business environment, crafting the right strategy is crucial for any company to distance itself from the rest of the pack. But all too often, leaders are stymied once they enter the strategy room, where biases, competing agendas, and short-term projections result in unrealistic expectations and ineffective planning. This is the “social side of strategy,” as the authors of this 2018 classic put it, and it’s preventing your company from moving up the economic power curve.

    So how do you reduce the social noise and boost your odds of setting a successful strategy? Start by making big moves. The authors highlight five that, according to their research, make the biggest difference—for example, engaging in a dynamic reallocation of resources, where organizations feed the business units that are likely to produce substantial moves up the power curve (and starve the ones that aren’t), and having a strong productivity program, one that puts you in at least the top 30 percent of your industry.

    To uncover the other big moves that can help make your strategic-planning sessions more worthwhile, read Chris Bradley and Sven Smit’s 2018 McKinsey Quarterly classic, “Strategy to beat the odds.”

    Jordyn Libow, editor, Atlanta

    Revamp your strategic planning

    Related Reading


    Tying short-term decisions to long-term strategy 


    The strategy-analytics revolution 


    Artificial intelligence in strategy 




    by "McKinsey Classics" <publishing@email.mckinsey.com> - 12:55 - 17 Aug 2024
  • EP125: How does Garbage Collection work?

    EP125: How does Garbage Collection work?


    This week’s system design refresher:

    • Linux Performance Tools! (Youtube video)

    • How does Garbage Collection work?

    • A Cheat Sheet for Designing Fault-Tolerant Systems

    • 10 System Design Tradeoffs You Cannot Ignore

    • SPONSOR US


    WorkOS: Modern Identity Platform for B2B SaaS (Sponsored)

    Start selling to enterprises with just a few lines of code.

    → WorkOS provides a complete user management solution along with SSO, SCIM, Audit Logs, & Fine-Grained Authorization.

    → Unlike other auth providers that rely on user-centric models, WorkOS is designed for B2B SaaS with an org modeling approach.

    → The APIs are flexible, easy-to-use, and modular. Pick and choose what you need and integrate in minutes.

    → User management is free up to 1 million MAUs and comes standard with RBAC, bot protection, MFA, & more.

    Get Started Today


    Linux Performance Tools!


    How does Garbage Collection work?

    Garbage collection is an automatic memory management feature used in programming languages to reclaim memory no longer used by the program.

    • Java
      Java provides several garbage collectors, each suited for different use cases:

      1. Serial Garbage Collector: Best for single-threaded environments or small applications.

      2. Parallel Garbage Collector: Also known as the "Throughput Collector."

      3. CMS (Concurrent Mark-Sweep) Garbage Collector: Low-latency collector aiming to minimize pause times.

      4. G1 (Garbage-First) Garbage Collector: Aims to balance throughput and latency.

      5. Z Garbage Collector (ZGC): A low-latency garbage collector designed for applications that require large heap sizes and minimal pause times.

    • Python
      Python's garbage collection is based on reference counting and a cyclic garbage collector:

      1. Reference Counting: Each object has a reference count; when it reaches zero, the memory is freed.

      2. Cyclic Garbage Collector: Handles circular references that can't be resolved by reference counting (a short demonstration follows this list).

    • GoLang
      Concurrent Mark-and-Sweep Garbage Collector: Go's garbage collector operates concurrently with the application, minimizing stop-the-world pauses.
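
    As mentioned in the Python item above, here is a small, self-contained demonstration of both mechanisms. It assumes CPython, where sys.getrefcount and the gc module expose this behaviour:

        import gc
        import sys

        a = []
        # Typically prints 2: one reference from `a`, one from the temporary argument.
        print(sys.getrefcount(a))

        # Build a reference cycle: the list contains itself, so its reference count
        # can never drop to zero and reference counting alone cannot free it.
        cycle = []
        cycle.append(cycle)
        del cycle                # now unreachable, but still alive

        # The cyclic garbage collector finds and reclaims the cycle; collect()
        # reports the number of unreachable objects it found.
        print(gc.collect())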


    Latest articles

    If you’re not a paid subscriber, here’s what you missed.

    1. A Crash Course on Architectural Scalability

    2. A Crash Course on Microservices Design Patterns

    3. A Crash Course on Domain-Driven Design

    4. "Tidying" Code

    5. A Crash Course on Relational Database Design

    To receive all the full articles and support ByteByteGo, consider subscribing:


    A Cheat Sheet for Designing Fault-Tolerant Systems


    Designing fault-tolerant systems is crucial for ensuring high availability and reliability in various applications. Here are six top principles of designing fault-tolerant systems:

    1. Replication
      Replication involves creating multiple copies of data or services across different nodes or locations.

    2. Redundancy
      Redundancy refers to having additional components or systems that can take over in case of a failure.

    3. Load Balancing
      Load balancing distributes incoming network traffic across multiple servers to ensure no single server becomes a point of failure.

    4. Failover Mechanisms
      Failover mechanisms automatically switch to a standby system or component when the primary one fails.

    5. Graceful Degradation
      Graceful degradation ensures that a system continues to operate at reduced functionality rather than completely failing when some components fail.

    6. Monitoring and Alerting
      Continuously monitor the system's health and performance, and set up alerts for any anomalies or failures.


    10 System Design Tradeoffs You Cannot Ignore

    If you don’t know trade-offs, you DON'T KNOW system design.

    1. Vertical vs Horizontal Scaling
      Vertical scaling is adding more resources (CPU, RAM) to an existing server.

      Horizontal scaling means adding more servers to the pool.

    2. SQL vs NoSQL
      SQL databases organize data into tables of rows and columns.

      NoSQL is ideal for applications that need a flexible schema.

    3. Batch vs Stream Processing
      Batch processing involves collecting data and processing it all at once. For example, daily billing processes.

      Stream processing processes data in real time. For example, fraud detection processes.

    4. Normalization vs Denormalization
      Normalization splits data into related tables to ensure that each piece of information is stored only once.

      Denormalization combines data into fewer tables for better query performance.

    5. Consistency vs Availability
      Consistency is the assurance of getting the most recent data every single time.

      Availability is about ensuring that the system is always up and running, even if some parts are having problems.

    6. Strong vs Eventual Consistency
      Strong consistency is when data updates are immediately reflected.

      Eventual consistency is when data updates are delayed before being available across nodes.

    7. REST vs GraphQL
      With REST, you gather data by accessing multiple endpoints.

      With GraphQL, you get more efficient data fetching with specific queries, but the design cost is higher.

    8. Stateful vs Stateless
      A stateful system remembers past interactions.

      A stateless system does not keep track of past interactions.

    9. Read-Through vs Write-Through Cache
      A read-through cache loads data from the database in case of a cache miss.

      A write-through cache simultaneously writes data updates to the cache and storage (see the sketch after this list).

    10. Sync vs Async Processing
      In synchronous processing, tasks are performed one after another.

      In asynchronous processing, tasks can run in the background. New tasks can be started without waiting for earlier tasks to finish.
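
    Here is the sketch referenced in item 9: a tiny, hypothetical cache class in which in-memory dicts stand in for the cache and the backing database.

        class Cache:
            def __init__(self):
                self.cache = {}
                self.db = {"user:1": "Alice"}   # pretend backing database

            def read_through(self, key):
                # Read-through: on a cache miss, load from the database and cache it.
                if key not in self.cache:
                    self.cache[key] = self.db.get(key)
                return self.cache[key]

            def write_through(self, key, value):
                # Write-through: update the cache and the storage at the same time.
                self.cache[key] = value
                self.db[key] = value

        c = Cache()
        print(c.read_through("user:1"))    # miss -> loaded from db, then served from cache
        c.write_through("user:2", "Bob")
        print(c.cache["user:2"], c.db["user:2"])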

    Over to you: Which other tradeoffs have you encountered?


    SPONSOR US

    Get your product in front of more than 1,000,000 tech professionals.

    Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.

    Space Fills Up Fast - Reserve Today

    Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing sponsorship@bytebytego.com

     
     



    by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 17 Aug 2024
  • Look back at some of our readers’ favorite recent charts

    The Week in Charts

    Revisit highlights

    Thanks for reading The Week in Charts as we aim to provide you with the visuals you need to understand today’s biggest business and management challenges. Now the data shows that it’s time for a short hiatus, though we’ll resume our weekly send schedule on September 3. Stay tuned for new charts when we return, including ones on leadership in honor of The Journey of Leadership by Dana Maor, Kurt Strovink, Ramesh Srinivasan, and senior partner emeritus Hans-Werner Kaas.

    While we’re away, take a look back at some recent charts that resonated most with our readers:

    If you have friends or colleagues who might enjoy The Week in Charts, consider forwarding this email to them or sharing it on LinkedIn, X, or Facebook. They can sign up for this or any of our 40+ other free email subscriptions at mckinsey.com/subscriptions. (And you might also want to revisit that page to see our full newsletter lineup.)




    by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 11:14 - 17 Aug 2024
  • RE :: QUOTE 2024

     

    Harry potter

    Hello,

    Dear Web Owner,

    I hope you are doing well.

    Want more clients and customers?

    We will help them find you by putting you on the 1st page of Google.

    Can I send you our best price list?

    Thanks!

    Best SEO Strategies for 2024


    by "Krishna" <svhjvx@gmail.com> - 02:17 - 17 Aug 2024
  • The physical realities of the energy transition, customer delight, customer-centric insurance, and more highlights
    Highlights for your downtime
    The Daily Read Weekend Edition

    Ready to unwind?

    —Edited by Joyce Yoo, editor, New York

    McKinsey & Company




    by "McKinsey Daily Read" <publishing@email.mckinsey.com> - 10:16 - 15 Aug 2024
  • A Crash Course on Architectural Scalability

    A Crash Course on Architectural Scalability


    Latest articles

    If you’re not a subscriber, here’s what you missed this month.

    1. A Crash Course on Domain-Driven Design

    2. A Crash Course on Relational Database Design

    3. A Crash Course on Distributed Systems

    4. A Crash Course in Database Scaling Strategies

    5. A Crash Course in Database Sharding

    To receive all the full articles and support ByteByteGo, consider subscribing:


    In today's interconnected world, software applications have a global reach, serving users from diverse geographical locations. 

    With the rapid growth of social media and viral content, a single tweet or post can lead to a sudden and massive surge in traffic to an application. The importance of building applications with scalable architecture from the ground up has never been higher. 

    Being prepared for unexpected traffic spikes is indispensable for development teams building applications. A sudden increase in user demand may be just around the corner. Not being prepared for it can put immense pressure on the application's infrastructure. It not only causes performance degradation but can also, in some cases, result in a complete system failure. 

    To mitigate these risks and ensure a good user experience, teams must proactively design and build scalable architectures.

    But what makes scalability such a desirable characteristic?

    Scalability allows the application to dynamically adapt to changing workload requirements without compromising performance or availability.

    In this post, we’ll understand the true meaning of scalability from different perspectives followed by the various techniques and principles that can help you scale the application’s architecture. Also, in subsequent posts in the coming weeks, we’ll take deeper dives into the scalability of each layer and component of a typical architecture.

    What is Scalability?...

    Unlock this post for free, courtesy of Alex Xu.

    Claim my free post

    A subscription gets you:

    An extra deep dive on Thursdays
    Full archive
    Many readers expense it through their team's learning budget
     
     



    by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 15 Aug 2024
  • Efficient and Professional Logistics Services












    Services: Ocean Freight | Air Freight | Projects | Trucking | Insurance | Rail Freight  | Custom Brokerage | Warehousing & Distribution

    Owen, Sales Manager
     
    Shunshunfa International Logistics Co.,Ltd
    Add: B1, 7th Floor, Building 3, Lehui Science and Technology Innovation Center, No. 489 Jihua Road, Bantian Street, Longgang District, Shenzhen 
    Email: owen@ssf-logistics.com     Web: https://www.ssf-logistics.com
    Mob: +86 13751768263    QQ:625927164 





















    by "OWEN" <owen@ssf-logistics.com> - 10:35 - 15 Aug 2024