Taskeye - Field Employee Task Tracking Software Offering Real-Time Visibility into Employees’ Activities on the Field.
A versatile platform designed to cater to the unique needs of various field domains. Experience the power of our field employee task tracking software, which offers real-time visibility into employees’ activities in the field. With our innovative device-less tracking system, effortlessly monitor the workforce and optimize their performance like never before.
Advanced tool to reduce workload and increase employee productivity.
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:00 - 21 Mar 2024 -
A Brief History of Scaling Netflix
Netflix began its life in 1997 as a mail-based DVD rental business.
Marc Randolph and Reed Hastings got the idea of Netflix while carpooling between their office and home in California.
Hastings admired Amazon and wanted to emulate their success by finding a large category of portable items to sell over the Internet. It was around the same time that DVDs were introduced in the United States and they tested the concept of selling or renting DVDs by mail.
Fast forward to 2024, Netflix has evolved into a video-streaming service with over 260 million users from all over the world. Its impact has been so humongous that “Netflix being down” is often considered an emergency.
To support this amazing growth story, Netflix had to scale its architecture on multiple dimensions.
In this article, we attempt to pull back the curtains on some of the most significant scaling challenges they faced and how those challenges were overcome.
The Architectural Origins of Netflix
Like any startup looking to launch quickly in a competitive market, Netflix started as a monolithic application.
The below diagram shows what their architecture looked like a long time ago.
The application consisted of a single deployable unit with a monolithic database (Oracle). As you can see, the database was a possible single point of failure.
This possibility turned into reality in August 2008.
There was a major database corruption issue due to which Netflix couldn’t ship any DVDs to the customers for 3 days. It suddenly became clear that they had to move away from a vertically scaled architecture prone to single points of failure.
As a response, they made two important decisions:
Move all the data to the AWS cloud platform
Evolve the systems into a microservices-based architecture
The move to AWS was a crucial decision.
When Netflix launched its streaming service in 2007, EC2 was just getting started and they couldn’t leverage it at the time. Therefore, they built two data centers located right next to each other.
However, building a data center is a lot of work. You have to order equipment, wait for it to arrive, and install it. Before you finish, you’ve once again run out of capacity and need to go through the whole cycle again.
To cut through this cycle, Netflix went for a vertical scaling strategy that led to their early system architecture being modeled as a monolithic application.
However, the outage we talked about earlier taught Netflix one critical lesson - building data centers wasn’t their core capability.
Their core capability was delivering video to the subscribers and it would be far better for them to get better at delivering video. This prompted the move to AWS with a design approach that can eliminate single points of failure.
It was a mammoth decision for the time and Netflix adopted some basic principles to guide them through this change:
Buy vs Build
First, try to use or contribute to open-source technology wherever possible.
Only build from scratch what you absolutely must.
Stateless Services
Services should be built in a stateless manner except for the persistence or caching layers.
No sticky sessions.
Employ chaos testing to prove that an instance going down doesn’t impact the wider system.
Scale-out vs scale up
Horizontal scaling gives you a longer runway in terms of scalability.
Prefer to go for horizontal scaling instead of vertical scaling.
Redundancy and Isolation
Make more than one copy of anything. For example, replica databases and multiple service instances.
Reduce the blast radius of any issue by isolating workloads.
Automate Destructive Testing
Destructive testing of the systems should be an ongoing activity.
Adoption of tools like Chaos Monkey to carry out such tests at scale.
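The core idea behind this kind of destructive testing can be sketched in a few lines. The toy below is only an illustration of the principle, not Netflix's actual Chaos Monkey: the `Cluster` class, the instance names, and the `chaos_monkey` function are all hypothetical. It randomly terminates one instance and checks that the stateless service keeps serving:

```python
import random

# Hypothetical in-memory stand-in for a fleet of stateless service instances.
class Cluster:
    def __init__(self, instances):
        self.instances = set(instances)

    def kill(self, instance):
        # Simulates an instance termination.
        self.instances.discard(instance)

    def serve(self, request):
        # Any surviving instance can answer, because no session state
        # is pinned to a particular instance (no sticky sessions).
        if not self.instances:
            raise RuntimeError("total outage")
        return f"{request} handled by {sorted(self.instances)[0]}"

def chaos_monkey(cluster, rng):
    """Terminate one random instance, mimicking the core chaos-testing idea."""
    victim = rng.choice(sorted(cluster.instances))
    cluster.kill(victim)
    return victim

rng = random.Random(42)
cluster = Cluster({"api-1", "api-2", "api-3"})
victim = chaos_monkey(cluster, rng)
# The wider system must keep working after a single instance dies.
response = cluster.serve("GET /play")
print(f"killed {victim}; still serving: {response}")
```

The test passes precisely because the service is stateless: if `serve` depended on state stored only on the killed instance, the assertion after a termination would fail.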
These guiding principles acted as the North Star for every transformational project Netflix took up to build an architecture that could scale according to the demands.
The Three Main Parts of Netflix Architecture
The overall Netflix architecture is divided into three parts:
The Client
The Backend
The Content Delivery Network
The client is the Netflix app on your mobile, a website on your computer or even the app on your Smart TV. It includes any device where the users can browse and stream Netflix videos. Netflix controls each client for every device.
The backend is the part of the application that controls everything that happens before a user hits play. It consists of multiple services running on AWS and takes care of various functionalities such as user registration, preparing incoming videos, billing, and so on. The backend exposes multiple APIs that are utilized by the client to provide a seamless user experience.
The third part is the Content Delivery Network also known as Open Connect. It stores Netflix videos in different locations throughout the world. When a user plays a video, it streams from Open Connect and is displayed on the client.
The important point to note is that Netflix controls all three areas, thereby achieving complete vertical integration over their stack.
Some of the key areas that Netflix had to scale if they wanted to succeed were as follows:
The Content Delivery Network
The Netflix Edge
APIs
Backend Services with Caching
Authorization
Memberships
Let’s look at each of these areas in more detail.
Scaling the Netflix CDN
Imagine you’re watching a video in Singapore and the video is being streamed from Portland. It’s a huge geographic distance broken up into many network hops. There are bound to be latency issues in this setup resulting in a poorer user experience.
If the video content is moved closer to the people watching it, the viewing experience will be a lot better.
This is the basic idea behind the use of CDN at Netflix.
Put the video as close as possible to the users by storing copies throughout the world. When a user wants to watch a video, stream it from the nearest node.
Each location that stores video content is called a PoP or point of presence. It’s a physical location that provides access to the internet and consists of servers, routers and other networking equipment.
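The “stream from the nearest node” idea can be sketched as a nearest-PoP lookup by great-circle distance. The PoP names and coordinates below are illustrative assumptions, and real CDN routing considers network topology and load, not just geography:

```python
import math

# Illustrative PoP catalog: (latitude, longitude). Names are hypothetical.
POPS = {
    "singapore": (1.35, 103.82),
    "portland": (45.52, -122.68),
    "amsterdam": (52.37, 4.90),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_pop(client_coords):
    """Pick the PoP with the smallest distance to the client."""
    return min(POPS, key=lambda name: haversine_km(client_coords, POPS[name]))

# A viewer in Jakarta gets routed to the Singapore PoP, not Portland.
print(nearest_pop((-6.2, 106.8)))
```

This captures why latency drops: the number of kilometers (and network hops) between the viewer and the video shrinks from thousands to hundreds.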
However, it took multiple iterations for Netflix to scale their CDN to the right level.
Iteration 1 - Small CDN
Netflix debuted its streaming service in 2007.
At the time, it had over 35 million members across 50 countries, streaming more than a billion hours of video each month.
To support this usage, Netflix built its own CDN in five different locations within the United States. Each location contained all of the content.
Iteration 2 - 3rd Party CDN
In 2009, Netflix started to use 3rd party CDNs.
The reason was that 3rd-party CDN costs were coming down and it didn’t make sense for Netflix to invest a lot of time and effort in building their own CDN. As we saw, they struggled a lot with running their own data centers.
Moving to a 3rd-party solution also gave them time to work on other higher-priority projects. However, Netflix did spend a lot of time and effort in developing smarter client applications to adapt to changing network conditions.
For example, they developed techniques to switch the streaming to a different CDN to get a better result. Such innovations allowed them to provide their users with the highest quality experience even in the face of errors and overloaded networks.
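A client-side CDN switch of the kind described can be sketched as a simple failover loop. All names and URLs here are hypothetical, and Netflix's real client logic is far richer (it also measures throughput and adapts bitrate); this only shows the fallback idea:

```python
def stream(url_builders, fetch):
    """Try each CDN in preference order; fall back to the next on failure."""
    last_error = None
    for build_url in url_builders:
        try:
            return fetch(build_url("video-123"))
        except ConnectionError as err:
            last_error = err  # this CDN is down or overloaded; try the next
    raise last_error

# Simulate the primary CDN being overloaded (hypothetical hostnames).
def fetch(url):
    if "cdn-primary" in url:
        raise ConnectionError("primary overloaded")
    return f"bytes from {url}"

cdns = [
    lambda video: f"https://cdn-primary.example/{video}",
    lambda video: f"https://cdn-backup.example/{video}",
]
print(stream(cdns, fetch))
```

The user never sees the primary CDN's failure; playback simply continues from the backup.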
Iteration 3 - Open Connect
Sometime around 2011, Netflix realized that they were operating at a scale where a dedicated CDN was important to maximize network efficiency and viewing experience.
The streaming business was now the dominant source of revenue and video distribution was a core competency for Netflix. If they could do it with extremely high quality, it could turn into a huge competitive advantage.
Therefore, in 2012, Netflix launched its own CDN known as Open Connect.
To get the best performance, they developed their own computer system for video storage called Open Connect Appliances or OCAs.
The below picture shows an OCA installation:
Source: Netflix OpenConnect Website
An OCA installation was a cluster of multiple OCA servers. Each OCA was a fast server, highly optimized for delivering large files and packed with lots of hard disks or flash drives for storing videos.
Check the below picture of a single OCA server:
Source: Open Connect Presentation
The launch of the Open Connect CDN had a lot of advantages for Netflix:
It was more scalable when it came to providing service everywhere in the world.
It had better quality because they could now control the entire video path from transcoding, CDN, and clients on the devices.
It was also less expensive as compared to 3rd-party CDNs.
Scaling the Netflix Edge
The next critical piece in the scaling puzzle of Netflix was the edge.
The edge is the part of a system that’s close to the client. For example, between a DNS server and a database, the DNS server is closer to the client and can be thought of as “edgier.” Think of edginess as a matter of degree rather than a fixed property.
The edge is where data from incoming requests enters the service domain. Since this is where the volume of requests is highest, scaling the edge is critical.
The Netflix Edge went through multiple stages in terms of scaling.
Early Architecture
The below diagram shows how the Netflix architecture looked in the initial days.
As you can see, it was a typical three-tier architecture.
There was a client, an API, and a database that the API talked to. The API application was named NCCP (Netflix Content Control Protocol), and it was the only application exposed to the client. All the concerns were put into this application.
The load balancer terminated the TLS and sent plain traffic to the application. Also, the DNS configuration was quite simple. The idea was that clients should be able to find and reach the Netflix servers.
Such a design was dictated by the business needs of the time. They had money but not a lot. It was important to not overcomplicate things and optimize for time to market.
The Growth Phase
As the customer base grew, more features were added. With more features, the company started to earn more money.
At this point, it was important for them to maintain the engineering velocity. This meant breaking apart the monolithic application into microservices. Features were taken out of the NCCP application and developed as separate apps with separate data.
However, the logic to orchestrate between the services was still within the API. An incoming request from a client hits the API and the API calls the underlying microservices in the right order.
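The orchestration role of the API layer can be sketched as a function that calls service stubs in the right order. The service names, payloads, and call order below are invented for illustration; Netflix's actual services and protocols are not public APIs:

```python
# Hypothetical microservice stubs, each standing in for a separate deployable.
def authenticate(request):
    return {"user": request["user"]}

def fetch_profile(session):
    return {"user": session["user"], "plan": "standard"}

def fetch_recommendations(profile):
    return [f"title-{i}" for i in range(3)]

def api_homepage(request):
    """The API layer orchestrates the underlying services in order:
    authenticate first, then load the profile, then personalize."""
    session = authenticate(request)
    profile = fetch_profile(session)
    return {"profile": profile, "rows": fetch_recommendations(profile)}

page = api_homepage({"user": "alice"})
print(page)
```

The drawback hinted at in the text is visible even in this sketch: every new feature adds another call into `api_homepage`, so the API layer keeps growing even though the features live in separate services.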
The below diagram shows this arrangement:
by "ByteByteGo" <bytebytego@substack.com> - 11:39 - 21 Mar 2024 -
What does it take to succeed with digital and AI?
On Point
10 ideas shaping modern business Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:47 - 21 Mar 2024 -
‘Unbundling’ and expanding banks could boost their value
Re:think
What’s ahead for banking FRESH TAKES ON BIG IDEAS
ON THE FUTURE OF BANKING
How banks can take control of their future
Miklós Dietz
The banking system, supported by higher profit margins, has a historic opportunity in the next few years to reinvent its business model—something it needs to do. Despite margins strengthened by elevated interest rates, banking is the lowest-valued sector in the world, trading at a 0.8 price-to-book ratio, while the rest of the global economy trades at 2.7. Price to book reflects the theoretical amount that shareholders would get if all assets were liquidated and all debts repaid. Trading below 1.0 means the markets see the banking system in a negative light.
Clearly, the banking model needs future-proofing. Banks are losing share in the global intermediary landscape to nonbanks, which are cherry-picking high-profitability businesses. One reason banks are trading so low is that they are very complex. Banking is a mix of three things: distribution (branches and sales staff), transactions, and balance sheet management, which measures how banks are transforming deposits into loans and managing credit risk. Distribution and transactions are profitable and require little capital, so they create value. Keeping things on the balance sheet does not create value, so the banking system needs to consider a new approach to balance sheet management.
Banks could consider separating the core balance sheet from distribution and transaction, following a path taken by utilities and telcos. You don’t necessarily have to break up the bank, but by unbundling, you could create more transparency for investors into what the bank is doing. Banks could speed up the metabolism of balance sheet management through faster securitization. Technology, especially AI, can reinvent every layer of banking, making it more cost-efficient. One more thing we can’t forget: risk. Ultimately, the best-performing banks are the ones that get risk right. It’s not just traditional credit risk but also new types, including cyber and geopolitical risks.
Banks today have it tough. They are expected to operate with incredible cost efficiency and robustness; to run the general ledger and balance sheet; to do asset liability management, matching everything; and to never make a mistake. That alone is a huge task. But banks are also expected to focus on innovation and growth and produce personalized services for every single customer. These two sets of expectations create tension.
“The best US banks operate at a cost-to-asset ratio of 200 basis points, while the best European banks can do 70 basis points or even less.”
In the next few years, banks will need to get the basics right and at the same time reinvent themselves. On one hand, they need to be digital and AI-driven, low-cost, and efficient. On the other hand, they need to be visionary, go into new sectors, and create end-to-end customer journeys.
To create powerful customer journeys, banks can really own the customer relationship. This may enable them to understand clients’ needs and serve them more effectively, bringing higher margins and stronger customer attachment. For example, instead of just offering mortgages, banks could help buyers find a home, move in, and finance it. In payments, banks could offer coupons, e-gifting, online marketplaces, and location-based services. For business clients, banks could add a unified, finance- and payment-enabled platform that integrates services including administrative, tax, accounting, business intelligence, benchmarking, and B2B marketplaces.
An ideal banking model might involve strategies from different geographies. In some areas of banking, the United States is more advanced, because it disintermediates more and does more securitization. In others, Europe is more advanced, digital, and efficient. The best American banks operate at a cost-to-asset ratio of 200 basis points, meaning they incur costs of two cents for every $1 of assets managed. The best European banks can operate at a cost-to-asset ratio of 70 to 80 basis points or even less. In Europe, the best banks do 70 to 80 percent of their sales digitally. Some European banks can approve a mortgage application within a day. At the best American banks, it still takes weeks to get a mortgage. Asian banks, especially Indian banks, are the stars of how to go beyond banking and discover new avenues and differentiate. They operate almost like tech companies in many ways.
A provocative vision for the bank of 2035 might be a platform of networks—essentially a holding company for a collection of businesses—including e-commerce, payments, consumer lending, real estate, and a truly personalized advisory business, moving beyond financial into insurance, healthcare, and other things. This could be a $1 trillion market cap bank. It would still do everything that banking is doing now—it would still be very heavily regulated and very stable—but it would be unbundled to create value.
ABOUT THIS AUTHOR
Miklós Dietz is a senior partner in McKinsey’s Vancouver office.
UP NEXT
Duwain Pinder on Black Americans’ prosperity
Black Americans’ economic well-being varies greatly depending on where they live. But even where they are most prosperous, there is a significant breach between them and their White neighbors. Progress requires an all-hands-on-deck approach, in which many solutions are operating at scale for a long time.
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 02:21 - 20 Mar 2024 -
Join me on Thursday for Dashboard techniques to visualise system and business performance
Hi MD,
It's Liam Hurrell, Manager of Customer Training at New Relic University, here.
Are you looking to improve your dashboard skills, or use techniques that enable you to visualise patterns, correlate important metrics and make better data-driven decisions? If so, you can register to attend this free online workshop I'll be hosting on Thursday 21st March at 10 am GMT/ 11 am CET, “Dashboard techniques to visualize system and business performance”.
You can find the full agenda on the registration page here. While we recommend attending the hands-on workshop live, you can also register to receive the recording.
Hope to see you then,
Liam Hurrell
Manager, Customer Training
New Relic
This email was sent to info@learn.odoo.com as a result of subscribing or providing consent to receive marketing communications from New Relic. You can tailor your email preferences at any time here.
Privacy Policy © 2008-24 New Relic, Inc. All rights reserved
by "Liam Hurrell, New Relic" <emeamarketing@newrelic.com> - 06:06 - 20 Mar 2024 -
The energy industry can make changes to attract younger workers
On Point
4 critical themes in energy
Something old; something new. Energy companies have their work cut out for them amid the transition to clean energy. They need to maintain their core businesses but also move into low-carbon offerings, power, renewables, and retail. To be successful, they should be agile, efficient, and fast—all while operating in an industry for which talent is scarce, potential M&A is looming, and generative AI is poised to shake things up, share McKinsey partners Ignacio Fantaguzzi and Christopher Handscomb.
—Edited by Jana Zabkova, senior editor, New York
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:35 - 20 Mar 2024 -
The human side of generative AI: Creating a path to productivity
People before tech
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 12:51 - 20 Mar 2024 -
Understanding micromarkets can help European auto leaders compete
On Point
Local trends and insights
Localized trends. The European auto market is changing rapidly. With more than 65% of new cars sold by 2030 expected to be fully electric, passenger car electrification is accelerating. Digitized and personalized consumer experiences are also gaining ground. These trends, however, can unfold in local markets at very different rates. Understanding the trends and characteristics of micromarkets—local areas, such as districts and postcodes—can give auto-industry leaders a competitive edge, McKinsey senior partner Inga Maurer and coauthors say.
—Edited by Belinda Yu, editor, Atlanta
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:26 - 19 Mar 2024 -
Track and Manage Waste Collecting Fleets with Advanced Waste Management Software - SmartWaste.
Grow sustainable waste collection businesses with SmartWaste Software. Plan optimum waste collection and disposal routes to maximize productivity.
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:01 - 18 Mar 2024 -
What is economic inclusion?
Go beyond
New from McKinsey & Company
How do we measure economic inclusion? Who is economically excluded? The answers to these questions—and others—are available in our McKinsey Explainers series. Have a question you want answered? Reach out to us at ask_mckinsey@mckinsey.com.
Go beyond
From poverty to empowerment: Raising the bar for sustainable and inclusive growth
Our future lives and livelihoods: Sustainable and inclusive and growing
by "McKinsey & Company" <publishing@email.mckinsey.com> - 02:03 - 18 Mar 2024 -
More than a feeling: A leader’s guide to design thinking
Design for business value
When business leaders look to generate revenue growth, design may not be the first thing that comes to mind. But design is no longer mostly about how something looks—increasingly, it’s becoming a dynamic problem-solving approach that can have great impact and increase shareholder value. Our research has found that design-led companies outperform their competitors by a considerable margin. “Design thinking is a methodology that we use to solve complex problems, and it’s a way of using systemic reasoning and intuition to explore ideal future states,” notes McKinsey partner Jennifer Kilian. However, the methodology may not be easy to implement, as it involves making fundamental changes to organizational culture and practices. This week, we discuss some starting points.
Good design can allow an organization to succeed on three fronts—growth, margin, and sustainability—observe McKinsey experts in an episode of the McKinsey on Consumer and Retail podcast. This “triple win” may be within reach if companies pay closer attention to the design of their products and packaging. Thanks to new digital technologies that combine qualitative and quantitative user data, “we now have much more transparency into what consumers want and expect,” says McKinsey senior partner Jennifer Schmidt. These insights can inform and enhance product and packaging design. “You can bring together people from design, R&D, marketing, procurement, finance,” says Schmidt. “It can be a team sport now; all the different functions can come together and use that information to deliver the triple wins.” By changing its packaging, one company was “able to come up with breakthroughs in all three of the areas we’ve talked about,” she adds. “That’s one of my favorite stories.”
That’s the number of steps it takes to implement “skinny design,” which involves fitting more product into smaller packages to maximize shelf space, reduce transportation costs, and minimize stockouts. “The good news is that companies can often implement the elements of skinny design quickly and with little investment,” note McKinsey’s Dave Fedewa, Daniel Swan, Warren Teichner, and Bill Wiseman. A key step is to build a strong technological foundation: one company used a digital teardown database to compare its packaging volume with that of its competitors. The tool revealed a cube optimization opportunity that could reduce the company’s logistics costs and carbon footprint.
That’s McKinsey partner Benedict Sheppard on the importance of implementing design thinking in an organization. Companies that perform the best in design achieve higher average revenue growth and shareholder return than their peers, McKinsey research shows. But Sheppard cautions against launching company-wide design transformations that may not show tangible impact. “My advice would be to choose one important upcoming product or service design to pilot improvements on,” he suggests. And to excel at design, it’s essential to apply it across four critical dimensions—“being good at one or two isn’t enough,” in Sheppard’s view. For example, top-performing companies view design as a continuous process of iteration and as everyone’s responsibility, not as a siloed function.
PepsiCo chief design officer Mauro Porcini believes that the wants and needs of human beings should be at the center of all design and innovation. “It’s the biggest challenge right now because we’re in a moment of transition, and many companies are not yet understanding how important it is to focus on this,” he says in a discussion with McKinsey. What he calls the “principles of meaningful design” may spring from a human-centric mindset. “They’re essentially about creating products that are relevant from a functional standpoint, an emotional standpoint, and a semiotic standpoint,” he says. Products also need to be environmentally and financially sustainable; Porcini emphasizes that they should be financially successful enough to reach as many people as possible. “You want billions of people to enjoy your product instead of creating something extraordinary that just four or five people in the world can enjoy.”
Design thinking may have some drawbacks. Its critics allege that it tends to favor ideas over execution, can be rigid or formulaic, and may not always be the solution to long-standing business problems. And overengineered design solutions can put off consumers. But companies can capture the full business value of design by embracing user-centric strategies throughout the organization, empowering senior designers to work closely with C-suite leaders, and using the right balance of user insights and quantitative data. Our research shows that the best design teams are highly integrated with the business, working across functions and using advanced collaboration tools for both physical and digital design.
Lead by design.
– Edited by Rama Ramaswami, senior editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to the Leading Off newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:34 - 18 Mar 2024 -
How can TMT leaders use generative AI to create value?
On Point
100+ applications of generative AI Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
•
Gen AI in TMT. How big is the opportunity presented by gen AI? Our research and hands-on experience have allowed us to identify more than 100 gen AI use cases in technology, media, and telecommunications (TMT) across seven business domains, global leader of QuantumBlack, AI by McKinsey, Alex Singla; McKinsey Digital leader Benjamim Vieira; and coauthors explain. McKinsey research suggests that gen AI could unleash between $380 billion and $690 billion in value in TMT.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:06 - 17 Mar 2024 -
The week in charts
The Week in Charts
Improving health in urban areas, rising sportswear sales, and more Our McKinsey Chart of the Day series offers a daily chart that helps explain a changing world—as we strive toward sustainable and inclusive growth. In case you missed them, this week’s graphics explored improving health in urban areas, rising sportswear sales, ethnocultural minorities in Europe, AI in medtech, and talent demand in the oil and gas industry.
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to The Week in Charts newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:56 - 16 Mar 2024 -
Have you digitized your risk function yet?
Transform your risk management Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Digitizing the risk function for the modern era
Pulling off a digital transformation is hard enough as it is, with companies perennially struggling to capture value on their investments. Digitizing a company’s risk function is even harder. As this classic 2017 article observes, evolving regulatory requirements can waylay many digital efforts, and applying the standard test-and-learn approach of traditional digital transformations to risk processes can create an unacceptable level of, well, risk. Yet then, as now, the promise of digitizing the risk function to boost efficiency and improve decision making remains alluringly high—if leaders can craft a fit-for-purpose approach that minimizes costly glitches and addresses regulatory expectations.
Where to begin? Companies looking to build an effective digital risk program can start by taking a lesson from banks, which have found value in targeting three specific areas: credit risk, stress testing, and operational risk and compliance. Digitizing in these three areas can help banks accelerate decision making for credit delivery, automate fragmented stress-testing processes, and help streamline alert generation and case investigations.
With today’s proliferation of new technologies such as generative AI, companies’ digital efforts will only increase, from capability building to customer service to operations—and certainly to risk. To stay ahead of the curve, read Saptarshi Ganguly, Holger Harreis, Ben Margolis, and Kayvaun Rowshankish’s 2017 classic, “Digital risk: Transforming risk management for the 2020s.”
Create a successful digital risk agenda
As gen AI advances, regulators—and risk functions—rush to keep pace
Lessons from banking to improve risk and compliance and speed up digital transformations
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to our McKinsey Classics newsletter.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Classics" <publishing@email.mckinsey.com> - 12:34 - 16 Mar 2024 -
EP103: Typical AWS Network Architecture in One Diagram
This week’s system design refresher:
Reverse Proxy vs API Gateway vs Load Balancer (YouTube video)
Typical AWS Network Architecture in one diagram
15 Open-Source Projects That Changed the World
Top 6 Database Models
How do we detect node failures in distributed systems?
SPONSOR US
Register for POST/CON 24 | April 30 - May 1 (Sponsored)
POST/CON 24 will be an unforgettable experience! Connect with peers who are as enthusiastic about APIs as you are, all as you come together to:
Learn: Get first-hand knowledge from Postman experts and global tech leaders.
Level up: Attend 8-hour workshops to leave with new skills (and badges!)
Become the first to know: See the latest API platform innovations, including advancements in AI.
Help shape the future of Postman: Give direct feedback to the Postman leadership team.
Network with fellow API practitioners and global tech leaders — including speakers from OpenAI, Heroku, and more.
Have fun: Enjoy cocktails, dinner, 360° views of the city, and a live performance from multi-platinum recording artist T-Pain!
So grab your Early Adopter ticket for 30% off now while you can, because you don’t want to miss this!
Reverse Proxy vs API Gateway vs Load Balancer
One picture is worth a thousand words - Typical AWS Network Architecture in one diagram
Amazon Web Services (AWS) offers a comprehensive suite of networking services designed to provide businesses with secure, scalable, and highly available network infrastructure. AWS's network architecture components enable seamless connectivity between the internet, remote workers, corporate data centers, and within the AWS ecosystem itself.
VPC (Virtual Private Cloud)
At the heart of AWS's networking services is the Amazon VPC, which allows users to provision a logically isolated section of the AWS Cloud. Within this isolated environment, users can launch AWS resources in a virtual network that they define.
AZ (Availability Zone)
An AZ in AWS refers to one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region.
Now let’s go through the network connectivity one by one:
Connect to the Internet - Internet Gateway (IGW)
An IGW serves as the doorway between your AWS VPC and the internet, facilitating bidirectional communication.
Remote Workers - Client VPN Endpoint
AWS offers a Client VPN service that enables remote workers to access AWS resources or an on-premises network securely over the internet. It provides a secure and easy-to-manage VPN solution.
Corporate Data Center Connection - Virtual Gateway (VGW)
A VGW is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection between your network and your VPC.
VPC Peering
VPC Peering allows you to connect two VPCs, enabling you to route traffic between them using private IPv4 or IPv6 addresses.
Transit Gateway
AWS Transit Gateway acts as a network transit hub, enabling you to connect multiple VPCs, VPNs, and AWS accounts together.
VPC Endpoint (Gateway)
A VPC Endpoint (Gateway type) allows you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink, without requiring an internet gateway, NAT device, or VPN connection.
VPC Endpoint (Interface)
An Interface VPC Endpoint (powered by AWS PrivateLink) enables private connections between your VPC and supported AWS services, other VPCs, or AWS Marketplace services, without requiring an IGW, VGW, or NAT device.
SaaS Private Link Connection
AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, ideal for accessing SaaS applications securely.
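None of these gateways matter without a sensible address plan inside the VPC. As a toy illustration (this is not the AWS API, just Python's standard `ipaddress` module, and the AZ names are hypothetical), here is how a VPC CIDR block is commonly carved into per-AZ public and private subnets:

```python
import ipaddress

# Illustrative only: carve a VPC CIDR into equal blocks, one public and
# one private subnet per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b"]

# Four /18 blocks: public + private for each of the two AZs.
blocks = list(vpc.subnets(new_prefix=18))
layout = {}
for i, az in enumerate(azs):
    layout[az] = {"public": blocks[2 * i], "private": blocks[2 * i + 1]}

for az, nets in layout.items():
    print(az, nets["public"], nets["private"])
# us-east-1a 10.0.0.0/18 10.0.64.0/18
# us-east-1b 10.0.128.0/18 10.0.192.0/18
```

In a real deployment, the public subnets would route 0.0.0.0/0 through the IGW, while the private subnets would reach the internet through a NAT gateway.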
Latest articles
If you’re not a paid subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
15 Open-Source Projects That Changed the World
To come up with the list, we tried to look at the overall impact these projects have created on the industry and related technologies. Also, we’ve focused on projects that have led to a big change in the day-to-day lives of many software developers across the world.
Web Development
Node.js: The cross-platform JavaScript runtime that brought JS to server-side development
React: The library that became the foundation of many web development frameworks.
Apache HTTP Server: The highly versatile web server loved by enterprises and startups alike. Served as inspiration for many other web servers over the years.
Data Management
PostgreSQL: An open-source relational database management system that provided a high-quality alternative to costly systems
Redis: The super versatile data store that can be used as a cache, message broker, and even general-purpose storage
Elasticsearch: A scalable solution to search, analyze, and visualize large volumes of data
Developer Tools
Git: Free and open-source version control tool that allows developer collaboration across the globe.
VSCode: One of the most popular source code editors in the world
Jupyter Notebook: The web application that lets developers share live code, equations, visualizations and narrative text.
Machine Learning & Big Data
TensorFlow: The leading choice for leveraging machine learning techniques
Apache Spark: Standard tool for big data processing and analytics platforms
Kafka: Standard platform for building real-time data pipelines and applications.
DevOps & Containerization
Docker: The open source solution that allows developers to package and deploy applications in a consistent and portable way.
Kubernetes: The heart of Cloud-Native architecture and a platform to manage multiple containers
Linux: The OS that democratized the world of software development.
Over to you: Do you agree with the list? What did we miss?
Top 6 Database Models
The diagram below shows the top 6 data models.
Flat Model
The flat data model is one of the simplest forms of database models. It organizes data into a single table where each row represents a record and each column represents an attribute. This model is similar to a spreadsheet and is straightforward to understand and implement. However, it lacks the ability to efficiently handle complex relationships between data entities.
Hierarchical Model
The hierarchical data model organizes data into a tree-like structure, where each record has a single parent but can have multiple children. This model is efficient for scenarios with a clear "parent-child" relationship among data entities. However, it struggles with many-to-many relationships and can become complex and rigid.
Relational Model
Introduced by E.F. Codd in 1970, the relational model represents data in tables (relations), consisting of rows (tuples) and columns (attributes). It supports data integrity and avoids redundancy through the use of keys and normalization. The relational model's strength lies in its flexibility and the simplicity of its query language, SQL (Structured Query Language), making it the most widely used data model for traditional database systems. It efficiently handles many-to-many relationships and supports complex queries and transactions.
Star Schema
The star schema is a specialized data model used in data warehousing for OLAP (Online Analytical Processing) applications. It features a central fact table that contains measurable, quantitative data, surrounded by dimension tables that contain descriptive attributes related to the fact data. This model is optimized for query performance in analytical applications, offering simplicity and fast data retrieval by minimizing the number of joins needed for queries.
Snowflake Model
The snowflake model is a variation of the star schema where the dimension tables are normalized into multiple related tables, reducing redundancy and improving data integrity. This results in a structure that resembles a snowflake. While the snowflake model can lead to more complex queries due to the increased number of joins, it offers benefits in terms of storage efficiency and can be advantageous in scenarios where dimension tables are large or frequently updated.
Network Model
The network data model allows each record to have multiple parents and children, forming a graph structure that can represent complex relationships between data entities. This model overcomes some of the hierarchical model's limitations by efficiently handling many-to-many relationships.
Over to you: Which database model have you used?
How do we detect node failures in distributed systems?
The diagram below shows the top 6 heartbeat detection mechanisms.
Heartbeat mechanisms are crucial in distributed systems for monitoring the health and status of various components. Here are several types of heartbeat detection mechanisms commonly used in distributed systems:
Push-Based Heartbeat
The most basic form of heartbeat involves a periodic signal sent from one node to another or to a monitoring service. If the heartbeat signals stop arriving within a specified interval, the system assumes that the node has failed. This is simple to implement, but network congestion can lead to false positives.
Pull-Based Heartbeat
Instead of nodes sending heartbeats actively, a central monitor might periodically "pull" status information from nodes. It reduces network traffic but might increase latency in failure detection.
Heartbeat with Health Check
This includes diagnostic information about the node's health in the heartbeat signal, such as CPU usage, memory usage, or application-specific metrics. It provides more detailed information about the node, allowing for more nuanced decision-making. However, it increases complexity and the potential for larger network overhead.
Heartbeat with Timestamps
Heartbeats that include timestamps can help the receiving node or service determine not just whether a node is alive, but also whether network delays are affecting communication.
Heartbeat with Acknowledgement
The receiver of the heartbeat message must send back an acknowledgment in this model. This ensures that not only is the sender alive, but the network path between the sender and receiver is also functional.
Heartbeat with Quorum
In some distributed systems, especially those involving consensus protocols like Paxos or Raft, the concept of a quorum (a majority of nodes) is used. Heartbeats might be used to establish or maintain a quorum, ensuring that a sufficient number of nodes are operational for the system to make decisions. This brings complexity in implementation and managing quorum changes as nodes join or leave the system.
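As a rough sketch of the first mechanism, here is a minimal push-based failure detector in Python (the class and method names are hypothetical). A real system would add jitter tolerance, retries, and clock-skew handling before declaring a node dead:

```python
import time

class PushHeartbeatDetector:
    """Minimal push-based failure detector: a node is suspected failed
    if no heartbeat has arrived within `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}  # node_id -> timestamp of last heartbeat

    def record_heartbeat(self, node_id, now=None):
        # Each incoming heartbeat refreshes the node's last-seen timestamp.
        self.last_seen[node_id] = time.monotonic() if now is None else now

    def failed_nodes(self, now=None):
        # Any node silent for longer than the timeout is suspected failed.
        t = time.monotonic() if now is None else now
        return {n for n, seen in self.last_seen.items() if t - seen > self.timeout}

d = PushHeartbeatDetector(timeout=3.0)
d.record_heartbeat("node-a", now=100.0)
d.record_heartbeat("node-b", now=100.0)
d.record_heartbeat("node-a", now=104.0)   # node-b has gone silent
print(d.failed_nodes(now=105.0))          # {'node-b'}
```

Note the use of a monotonic clock and explicit `now` parameters: wall-clock time can jump backwards under NTP adjustments, which would corrupt the timeout arithmetic.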
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 16 Mar 2024 -
What do you prioritize when spending on health and wellness?
On Point
Wellness trends in 2024 Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
•
Supported by science. From cold plunges to collagen to celery juice, people face myriad choices in today’s $1.8 trillion global wellness market. Worldwide, consumers are prioritizing effective, data-driven, and science-backed health and wellness products, McKinsey senior partner Warren Teichner and coauthors say. Our survey of more than 5,000 consumers across China, the UK, and the US reveals that efficacy and scientific credibility are two of the most important factors to consumers when selecting wellness products.
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:24 - 15 Mar 2024 -
Implementing gen AI responsibly, bringing innovation to a nonprofit, B2B sales, and more highlights from the week: The Daily Read Weekender
Catch up on the week's big reads Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Get ready for the weekend and catch up on the week’s highlights on safely implementing gen AI, the future of Medicare Advantage, next-gen B2B sales, and more.
QUOTE OF THE DAY
chart of the day
Ready to unwind?
—Edited by Joyce Yoo, editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to our McKinsey Quarterly alert list.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Daily Read" <publishing@email.mckinsey.com> - 10:08 - 14 Mar 2024 -
Implementing generative AI with speed and safety
Revise your playbook This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to our McKinsey Quarterly alert list.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 02:34 - 14 Mar 2024 -
15 Open-Source Projects That Changed the World
Latest articles
If you’re not a subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
Software development is a field of ideas and experiments. One idea leads to an experiment that spawns another idea and the cycle of innovation moves forward.
Open-source projects are the fuel for this innovation.
A good open-source project impacts the lives of many developers and creates a fertile environment for collaboration. Many of the greatest breakthroughs in software development have come from open-source projects.
In this post, we will look at 15 high-impact open-source projects that have changed the lives of many developers.
To come up with the list, we tried to look at the overall impact these projects have created on the industry and related technologies. Also, we’ve focused on projects that have led to a big change in the day-to-day lives of many software developers across the world.
1 - Linux
Unless you’ve been living in a cave, there’s no way you haven’t heard about Linux.
Linux is an open-source operating system created by Linus Torvalds. Like many open-source projects, it was originally started as a hobby project.
And then, it took over the world.
Linux runs on all sorts of computer systems such as PCs, mobile devices, and servers. It also runs in more unexpected places such as washing machines, cars, and robots. Even the Large Hadron Collider uses Linux.
However, the biggest impact of Linux is how it has democratized the world of software development by providing a free and open-source operating system.
2 - Apache HTTP Server
Apache HTTP Server is a free and open-source web server that powers a large percentage of websites on the internet.
Ever since its release in 1995, the Apache HTTP server has been a tireless workhorse. It’s versatile enough in terms of security and agility to be adopted by enterprises and startups alike.
Over the years, Apache HTTP Server has inspired so many web servers such as Nginx, Lighttpd, Caddy and so on.
3 - Git
Git hardly needs any introduction.
If you’ve worked as a developer in any capacity, there is a 100% chance you’ve used Git or at least heard about it.
Git is a free and open-source version control system for software development. And you may be surprised to know that it was also created by Linus Torvalds along with his team.
But why?
Yes, you guessed it right. Linus did it to manage the source code of the Linux kernel project. That’s why people say that the best open-source projects come from your own requirements.
Git has been super-transformative for the way the software industry operates. It provided a standard way of tracking, comparing and applying version control on the source code, leading to the birth of revolutionary products such as GitHub and Bitbucket.
4 - Node.js
JavaScript was always the preferred language for browser-based development. But it would have stayed just a browser language had it not been for Node.js.
Node.js is an open-source cross-platform JavaScript runtime environment for server-side programming.
In other words, Node.js brought JavaScript to backend development.
With its release in 2009, Node.js quickly became a popular choice for building scalable and high-performance web applications. It paved the way for using the same language for both client-side and server-side programming.
5 - Docker
It’s no secret that developers love to build applications and test them on their machines.
But no one loves the pressure of deploying the same application to production.
There’s always one pesky environment issue or version mismatch on the production server that brings the entire application down.
And developers can only say - “It worked fine on my machine.”
To which, they get the answer - “Yes, but we can’t ship your machine to production.”
Docker made it possible.
As an open-source platform, Docker allows developers to package and deploy applications in a consistent and portable way.
The application-specific packages and all the environmental dependencies are packaged in a Docker container image. This image can then be deployed wherever needed...
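As an illustration, a minimal Dockerfile for a small Python app might look like the following. This is a hypothetical sketch: the base image, `requirements.txt`, and `app.py` are placeholder names, not something from a specific project.

```dockerfile
# Hypothetical example: bundle the app and its dependencies into one image.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself.
COPY . .
CMD ["python", "app.py"]
```

Building with `docker build -t myapp .` produces an image that runs the same way on a laptop, a CI runner, or a production server, which is exactly the "ship your machine" problem described above.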
Continue reading this post for free, courtesy of Alex Xu.
A subscription gets you:
An extra deep dive on Thursdays
Full archive
Many expense it with their team's learning budget
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:42 - 14 Mar 2024 -
Why Crisp moved to one monitoring solution
New Relic
Crisp, a risk-intelligence company that helps protect brands from online attacks, transitioned from multiple open-source monitoring tools to one single view with New Relic. The complexity and cost of managing various tools prompted this shift.
After a year-long evaluation, Crisp found New Relic to be the optimal choice for streamlined operations and improved observability. New Relic's pricing based on hosts aligns with Crisp’s needs, eliminating the confusion of data throughput models.
Read how New Relic gave Crisp a more efficient and cost-effective monitoring experience.
Read Story Need help? Let's get in touch.
This email is sent from an account used for sending messages only. Please do not reply to this email to contact us—we will not get your response.
This email was sent to info@learn.odoo.com Update your email preferences.
For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 07:07 - 14 Mar 2024