EP88: Linux Boot Process Explained


This week’s system design refresher:

  • How Git Works: Explained in 4 Minutes (YouTube video)

  • Linux Boot Process Explained

  • The Evolving Landscape of API Protocols in 2023

  • Explaining the 4 Most Commonly Used Types of Queues in a Single Diagram

  • A Brief Overview of Kubernetes


Streamline API Development With Postman Workspaces (Sponsored)

Solve problems together. Postman workspaces are the go-to place for development teams to collaborate and move quickly while staying on the same page.

With workspaces, teams can:

  • Automatically notify other team members about changes to APIs as updates sync in real-time.

  • Set up manual or automated workflows to support different stages of API development.

  • Enable faster onboarding for both internal and external partner developers.

  • Create collaborative hubs for troubleshooting API calls and maintaining a log of common steps to follow.

    Get started with workspaces for free


How Git Works: Explained in 4 Minutes


Linux Boot Process Explained

Almost every software engineer has used Linux before, but only a handful know how its boot process works. Let’s dive in.

The diagram below shows the steps.

Step 1 - When we turn on the power, the BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface) firmware is loaded from non-volatile memory and executes the POST (Power-On Self-Test).

Step 2 - BIOS/UEFI detects the devices connected to the system, including CPU, RAM, and storage.

Step 3 - BIOS/UEFI chooses a boot device to load the OS from. This can be a hard drive, a network server, or a CD-ROM.

Step 4 - BIOS/UEFI runs the boot loader (typically GRUB), which presents a menu for choosing the operating system or kernel to boot, and then loads the selected kernel into memory.

Step 5 - Once the kernel is loaded, control switches to user space. The kernel starts systemd as the first user-space process (PID 1), which manages processes and services, probes all remaining hardware, mounts filesystems, and runs a desktop environment.

Step 6 - systemd activates the default.target unit by default when the system boots. The units it depends on are executed as well.

Step 7 - The system runs a set of startup scripts and configures the environment.

Step 8 - The users are presented with a login window. The system is now ready.
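On a systemd-based distribution, you can inspect these stages yourself with the standard systemd tooling. A minimal sketch, assuming `systemd-analyze` and `systemctl` are available (the commands are guarded so the script is a no-op elsewhere, e.g., inside a container):

```shell
#!/bin/sh
# Inspect the boot stages described above on a systemd-based system.
if command -v systemd-analyze >/dev/null 2>&1; then
  systemd-analyze          # time spent in firmware, loader, kernel, userspace
  systemd-analyze blame    # per-unit startup time, slowest first
fi
if command -v systemctl >/dev/null 2>&1; then
  systemctl get-default    # the default.target unit activated in Step 6
fi
```

`systemd-analyze` breaks the total boot time into the firmware, boot-loader, kernel, and user-space phases, which map directly onto Steps 1 through 8 above.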


Latest articles

If you’re not a paid subscriber, here’s what you missed this month.

  1. Unlock Highly Relevant Search with AI

  2. Does Serverless Have Servers?

  3. A Crash Course in Docker

  4. Shipping to Production

  5. Kubernetes: When and How to Apply It

To receive all the full articles and support ByteByteGo, consider subscribing:


The Evolving Landscape of API Protocols in 2023

This is a brief summary of the blog post I wrote for Postman.

In this blog post, I cover the six most popular API protocols: REST, Webhooks, GraphQL, SOAP, WebSocket, and gRPC. The discussion includes the benefits and challenges associated with each protocol.

Thank you, Abhinav Asthana, Rebecca Johnston-Gilbert, K.C. Patrick 💥 📈, Mackenzie Lawson, Kevin Keene, Ashley Lowe for the great collaboration.

You can read the full blog post here.


Explaining the 4 Most Commonly Used Types of Queues in a Single Diagram

Queues are popular data structures used widely in software systems. The diagram below shows 4 different types of queues we often use.

  1. Simple FIFO Queue
    A simple queue follows FIFO (First In First Out). A new element is inserted at the tail of the queue, and an element is removed from the head of the queue.

    If we would like to send out email notifications to the users whenever we receive a payment response, we can use a FIFO queue. The emails will be sent out in the same order as the payment responses.

  2. Circular Queue
    A circular queue is also called a circular buffer or a ring buffer. Its last position wraps around to the first, forming a ring. Elements are inserted at the rear and removed from the front, with both positions wrapping around when they reach the end of the underlying buffer.

    A famous implementation is LMAX’s low-latency ring buffer (the Disruptor). Trading components talk to each other via the ring buffer, which lives entirely in memory and is extremely fast.

  3. Priority Queue
    The elements in a priority queue have predefined priorities. We take the element with the highest (or lowest) priority from the queue. Under the hood, it is implemented using a max heap or a min heap, where the element with the highest (or lowest) priority sits at the root.

    A typical use case is assigning patients with the highest severity to the emergency room while others to the regular rooms.

  4. Deque
    A deque, also called a double-ended queue, supports insertion and deletion at both the head and the tail. Because it supports both FIFO and LIFO (Last In First Out) access, a deque can also be used to implement a stack data structure.
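All four queue types above can be sketched with Python’s standard library: `collections.deque` covers the FIFO, ring-buffer, and deque cases, and `heapq` backs the priority queue. The payment and emergency-room values below are illustrative placeholders:

```python
from collections import deque
import heapq

# 1. Simple FIFO queue: enqueue at the tail, dequeue from the head,
#    so email notifications go out in payment-response order.
fifo = deque()
fifo.append("payment-1")
fifo.append("payment-2")
assert fifo.popleft() == "payment-1"

# 2. Circular queue (ring buffer): a bounded deque evicts the oldest
#    element when a new one arrives at full capacity.
ring = deque(maxlen=3)
for event in ["e1", "e2", "e3", "e4"]:
    ring.append(event)          # "e1" is silently overwritten
assert list(ring) == ["e2", "e3", "e4"]

# 3. Priority queue: heapq is a min-heap, so the lowest number pops
#    first; here severity 1 is the most urgent patient.
er = []
heapq.heappush(er, (3, "sprained ankle"))
heapq.heappush(er, (1, "cardiac arrest"))
heapq.heappush(er, (2, "broken arm"))
assert heapq.heappop(er)[1] == "cardiac arrest"

# 4. Deque as a stack: push and pop from the same end gives LIFO.
stack = deque()
stack.append("a")
stack.append("b")
assert stack.pop() == "b"
```

Note that `deque(maxlen=...)` gives ring-buffer eviction semantics rather than explicit front/rear pointers, which is usually what application code wants.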

Over to you: Which type of queue have you used?


A Brief Overview of Kubernetes

Kubernetes, often referred to as K8S, extends far beyond simple container orchestration. It's an open-source platform designed to automate deploying, scaling, and operating application containers.

  • Where Docker Lags, Kubernetes Excels
    Docker revolutionized containerization, making it accessible and standardized. However, when it comes to managing a large number of containers across different servers, Docker can fall short. Kubernetes steps in here, providing a more robust, cluster-based environment for managing containerized applications at scale. It offers high availability, load balancing, and a self-healing mechanism, ensuring applications are always operational and efficiently distributed.

  • Solving the Container Management Puzzle
    The primary problem Kubernetes solves is the complexity of managing multiple containers across various servers. It automates the distribution and scheduling of containers on a cluster, handles scaling requirements, and ensures a consistent environment across development, testing, and production.

Kubernetes' Container Runtime Interface (CRI) is a significant leap forward, enabling users to plug in different container runtimes without recompiling Kubernetes. This flexibility means organizations can choose from a variety of runtimes like Docker, containerd, CRI-O, and others, depending on their specific needs.
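To make the "desired state" model concrete, here is a minimal, hypothetical Deployment manifest; the name `web` and the image `nginx:1.25` are placeholders, not from the original post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # placeholder name
spec:
  replicas: 3                 # Kubernetes self-heals back to 3 pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any OCI image; the runtime is chosen via CRI
```

Applying it with `kubectl apply -f deployment.yaml` hands the desired state to the cluster; the scheduler and controllers then converge the actual state toward it, restarting or rescheduling pods as needed.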

 

© 2023 ByteByteGo


by "ByteByteGo" <bytebytego@substack.com> - 11:39 - 2 Dec 2023