Nonwoven Products and Rolls —— Hangzhou Hongrun Nonwovens Co., Ltd
Dear info,
I hope this message finds you well.
My name is Boehmer Franchesca, and I represent Hangzhou Hongrun Nonwovens Co., Ltd, a leading manufacturer specializing in spunlaced non-woven fabric rolls and products, including wax strips, wax rolls, cleaning rags, disposable sheets, disposable towels, disposable masks, and wet wipes. About 85% of our products are exported to the European market, followed by North America, Japan, Korea, and other regions.
For depilation wax paper, our company accounts for two-thirds of the foreign market; for example, Sassoon in the United States purchases wax paper from us.
We pride ourselves on our ability to customize our products to meet the specific requirements of our clients.
We are eager to explore potential partnerships with your esteemed company. We believe our products can add significant value to your offerings and help you meet the demands of your market. We would be delighted to provide you with samples for evaluation and discuss any specific needs you may have.
Please let us know a convenient time for you to have a discussion, or feel free to reach out if you have any questions. We look forward to the opportunity to collaborate and contribute to your success.
Thank you for considering us as a potential partner. We hope to hear from you soon.
Best regards,
by "Boehmer Franchesca" <franchescaboehmer@gmail.com> - 12:51 - 29 Apr 2025 -
How Meta Built Threads to Support 100 Million Signups in 5 Days
Generate your MCP server with Speakeasy (Sponsored)
Like it or not, your API has a new user: AI agents. Make accessing your API services easy for them with an MCP (Model Context Protocol) server. Speakeasy uses your OpenAPI spec to generate an MCP server with tools for all your API operations to make building agentic workflows easy.
Once you've generated your server, use the Speakeasy platform to develop evals, prompts and custom toolsets to take your AI developer platform to the next level.
Disclaimer: The details in this post have been derived from the articles written by the Meta engineering team. All credit for the technical details goes to the Meta/Threads Engineering Team. The links to the original articles and videos are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
Threads, Meta’s newest social platform, launched on July 5, 2023, as a real-time, public conversation space.
Built in under five months by a small engineering team, the product gained immediate momentum, and infrastructure teams had to absorb the incredible demand from day one.
When a new app hits 100 million signups in under a week, the instinct is to assume someone built a miracle backend overnight. That’s not what happened with Threads. There was no time to build new systems or bespoke scaling plans. The only option was to trust the machinery already in place.
And that machinery worked quite smoothly. As millions signed up in 5 days, the backend systems held on, and everything from the user’s perspective worked as intended.
Threads didn’t scale because it was lucky. It scaled because it inherited Meta’s hardened infrastructure: platforms shaped by a decade of lessons from Facebook, Instagram, and WhatsApp.
This article explores two of those platforms that played a key role in the successful launch of Threads:
ZippyDB, the distributed key-value store powering state and search.
Async, the serverless compute engine that offloads billions of background tasks.
Neither of these systems was built for Threads. But Threads wouldn’t have worked without them.
ZippyDB was already managing billions of reads and writes daily across distributed regions. Also, Async had been processing trillions of background jobs across more than 100,000 servers, quietly powering everything from feed generation to follow suggestions.
ZippyDB: Key-Value at Hyperscale
ZippyDB is Meta’s internal, distributed key-value store designed to offer strong consistency, high availability, and geographical resilience at massive scale.
At its core, it builds on RocksDB for storage, extends replication with Meta’s Data Shuttle (a Multi-Paxos-based protocol), and manages placement and failover through a system called Shard Manager.
Unlike purpose-built datastores tailored to single products, ZippyDB is a multi-tenant platform. Dozens of use cases (from metadata services to product feature state) share the same infrastructure. This design ensures higher hardware utilization, centralized observability, and predictable isolation across workloads.
The Architecture of ZippyDB
ZippyDB doesn’t treat deployment as a monolith. It’s split into deployment tiers: logical groups of compute and storage resources distributed across geographic regions.
Each tier serves one or more use cases and provides fault isolation, capacity management, and replication boundaries. The most commonly used is the wildcard tier, which acts as a multi-tenant default, balancing hardware utilization with operational simplicity. Dedicated tiers exist for use cases with strict isolation or latency constraints.
Within each tier, data is broken into shards, the fundamental unit of distribution and replication. Each shard is independently managed and:
Synchronously replicated across a quorum of Paxos nodes for durability and consistency. This guarantees that writes survive regional failures and meet strong consistency requirements.
Asynchronously replicated to follower replicas, which are often co-located with high-read traffic regions. These replicas serve low-latency reads with relaxed consistency, enabling fast access without sacrificing global durability.
This hybrid replication model (strong quorum-based writes paired with regional read optimization) gives ZippyDB flexibility across a spectrum of workloads.
See the diagram below that shows the concept of region-based replication supported by ZippyDB.
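The hybrid replication model described above can be sketched as follows. This is a minimal illustration, assuming a simple majority-quorum write path; the class and method names are invented for the example and are not ZippyDB's actual API.

```python
# Sketch of ZippyDB-style hybrid replication (illustrative names).
# A write is durable once a quorum of Paxos replicas acknowledges it;
# follower replicas apply the log asynchronously and may lag behind.

class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []          # committed entries, in order

    def apply(self, entry):
        self.log.append(entry)

class Shard:
    def __init__(self, paxos_replicas, followers):
        self.paxos = paxos_replicas
        self.followers = followers
        self.pending = []      # entries not yet shipped to followers

    def write(self, entry):
        # Synchronous path: replicate to a majority of Paxos nodes.
        quorum = len(self.paxos) // 2 + 1
        acks = 0
        for r in self.paxos:
            r.apply(entry)
            acks += 1
            if acks >= quorum:
                break          # ack the client once the quorum is met
        self.pending.append(entry)
        return acks >= quorum

    def ship_to_followers(self):
        # Asynchronous path: followers catch up later, serving relaxed reads.
        for entry in self.pending:
            for f in self.followers:
                f.apply(entry)
        self.pending.clear()

shard = Shard([Replica("p1"), Replica("p2"), Replica("p3")], [Replica("f1")])
assert shard.write(("k", "v1"))          # durable after 2 of 3 acks
assert shard.followers[0].log == []      # follower has not caught up yet
shard.ship_to_followers()
assert shard.followers[0].log == [("k", "v1")]
```

The key property the sketch captures is that durability comes from the quorum, while the follower only affects read latency, never write safety.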
To push scalability even further, ZippyDB introduces a layer of logical partitioning beneath shards: μshards (micro-shards). These are small, related key ranges that provide finer-grained control over data locality and mobility.
Applications don’t deal directly with physical shards. Instead, they write to μshards, which ZippyDB dynamically maps to underlying storage based on access patterns and load balancing requirements.
ZippyDB supports two primary strategies for managing μshard-to-shard mapping:
Compact Mapping: Best for workloads with relatively static data distribution. Mappings change only when shards grow too large or too hot. This model prioritizes stability over agility and is common in systems with predictable access patterns.
Akkio Mapping: Designed for dynamic workloads. A system called Akkio continuously monitors access patterns and remaps μshards to optimize latency and load. This is particularly valuable for global products where user demand shifts across regions throughout the day. Akkio reduces data duplication while improving locality, making it ideal for scenarios like feed personalization, metadata-heavy workloads, or dynamic keyspaces.
In ZippyDB, the Shard Manager acts as the external controller for leadership and failover. It doesn’t participate in the data path but plays a critical role in keeping the system coordinated.
The Shard Manager assigns a Primary replica to each shard and defines an epoch: a versioned leadership lease. The epoch ensures only one node has write authority at any given time. When the Primary changes (for example, due to failure), Shard Manager increments the epoch and assigns a new leader. The Primary sends regular heartbeats to the Shard Manager. If the heartbeats stop, the Shard Manager considers the Primary unhealthy and triggers a leader election by promoting a new node and bumping the epoch.
See the diagram below that shows the role of the Shard Manager:
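The epoch mechanism above amounts to a fencing token: replicas reject writes stamped with an older epoch, so a deposed primary cannot keep mutating the shard. A minimal sketch, with hypothetical names:

```python
# Sketch of epoch-based leadership leases (illustrative, not Meta's API).
# The Shard Manager owns the epoch; replicas reject stale-epoch writes.

class ShardManager:
    def __init__(self, nodes):
        self.nodes = nodes
        self.epoch = 1
        self.primary = nodes[0]

    def on_missed_heartbeats(self):
        # Primary looks dead: bump the epoch and promote another node.
        self.epoch += 1
        candidates = [n for n in self.nodes if n != self.primary]
        self.primary = candidates[0]
        return self.primary, self.epoch

class ShardReplica:
    def __init__(self):
        self.accepted_epoch = 0
        self.data = {}

    def write(self, key, value, epoch):
        # Fencing: reject writes from any leader with a stale epoch.
        if epoch < self.accepted_epoch:
            return False
        self.accepted_epoch = epoch
        self.data[key] = value
        return True

mgr = ShardManager(["node-a", "node-b", "node-c"])
replica = ShardReplica()
assert replica.write("k", 1, epoch=mgr.epoch)        # old primary, epoch 1
new_primary, new_epoch = mgr.on_missed_heartbeats()  # failover to epoch 2
assert replica.write("k", 2, epoch=new_epoch)
assert not replica.write("k", 3, epoch=1)            # deposed leader is fenced
assert replica.data["k"] == 2
```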
Consistency and Durability in ZippyDB
In distributed systems, consistency is rarely black-and-white. ZippyDB embraces this by giving clients per-request control over consistency and durability levels, allowing teams to tune system behavior based on workload characteristics.
1 - Strong Consistency
Strong consistency in ZippyDB ensures that reads always reflect the latest acknowledged writes, regardless of where the read or write originated. To achieve this, ZippyDB routes these reads to the primary replica, which holds the current Paxos lease for the shard. The lease ensures that only one primary exists at any time, and only it can serve linearizable reads.
If the lease state is unclear (for example, during a leadership change), the read may fall back to a quorum check to avoid split-brain scenarios. This adds some latency, but maintains correctness.
2 - Bounded Staleness (Eventual Consistency)
Eventual consistency in ZippyDB isn’t the loose promise it implies in other systems. Here, it means bounded staleness: a read may be slightly behind the latest write, but it will never serve stale data beyond a defined threshold.
Follower replicas (often located closer to users) serve these reads. ZippyDB uses heartbeats to monitor follower lag, and only serves reads from replicas that are within an acceptable lag window. This enables fast, region-local reads without compromising on the order of operations.
3 - Read-Your-Writes Consistency
For clients that need causal guarantees, ZippyDB supports a read-your-writes model.
After a write, the server returns a version number (based on Paxos sequence ordering). The client caches this version and attaches it to subsequent reads. ZippyDB then ensures that reads reflect data at or after that version.
This model works well for session-bound workloads, like profile updates followed by an immediate refresh.
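The version-token flow behind read-your-writes can be sketched like this; the `Store` class and its methods are hypothetical stand-ins for the client/server exchange, not ZippyDB's wire protocol:

```python
# Sketch of read-your-writes: the server returns a version with each write;
# the client pins later reads to at-or-after that version.

class Store:
    def __init__(self):
        self.version = 0
        self.history = {}      # key -> list of (version, value)

    def write(self, key, value):
        self.version += 1
        self.history.setdefault(key, []).append((self.version, value))
        return self.version    # client caches this token

    def read(self, key, min_version=0):
        # A replica behind the client's token must not serve the read.
        if self.version < min_version:
            raise RuntimeError("replica too stale, retry elsewhere")
        versions = self.history.get(key, [])
        return versions[-1][1] if versions else None

store = Store()
token = store.write("profile", {"bio": "hello"})
# A read carrying the token is guaranteed to see the write (or fail fast).
assert store.read("profile", min_version=token) == {"bio": "hello"}
```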
4 - Fast-Ack Write Mode
In scenarios where write latency matters more than durability, ZippyDB offers a fast-acknowledgment write mode. Writes are acknowledged as soon as they are enqueued on the primary for replication, not after they’re fully persisted in the quorum.
This boosts throughput and responsiveness, but comes with trade-offs:
Lower durability (data could be lost if the primary crashes before replication).
Weaker consistency (readers may not see the write until replication completes).
This mode fits well in systems that can tolerate occasional loss or use an idempotent retry.
Transactions and Conditional Writes
ZippyDB supports transactional semantics for applications that need atomic read-modify-write operations across multiple keys. Unlike systems that offer tunable isolation levels, ZippyDB keeps things simple and safe: all transactions are serializable by default.
Transactions
ZippyDB implements transactions using optimistic concurrency control:
Clients read a database snapshot (usually from a follower) and assemble a write set.
They send both read and write sets, along with the snapshot version, to the primary.
The primary checks for conflicts, whether any other transaction has modified the same keys since the snapshot.
If there are no conflicts, the transaction commits and is replicated via Paxos.
If a conflict is detected, the transaction is rejected, and the client retries. This avoids lock contention but works best when write conflicts are rare.
ZippyDB maintains recent write history to validate transaction eligibility. To keep overhead low, it prunes older states periodically. Transactions spanning epochs (i.e., across primary failovers) are automatically rejected, which simplifies correctness guarantees at the cost of some availability during leader changes.
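The optimistic commit check described above can be sketched as follows. This assumes a per-key "last written version" table on the primary; names are illustrative, not ZippyDB's internals.

```python
# Sketch of optimistic concurrency control: a transaction commits only if
# none of its read keys changed after the snapshot it was based on.

class Primary:
    def __init__(self):
        self.data = {}
        self.last_written = {}   # key -> version of the last write
        self.version = 0

    def snapshot_version(self):
        return self.version

    def commit(self, read_set, write_set, snapshot):
        # Conflict check: any read key modified after the snapshot aborts us.
        for key in read_set:
            if self.last_written.get(key, 0) > snapshot:
                return False
        self.version += 1
        for key, value in write_set.items():
            self.data[key] = value
            self.last_written[key] = self.version
        return True

db = Primary()
snap = db.snapshot_version()
assert db.commit(read_set={"a"}, write_set={"a": 1}, snapshot=snap)

# A second transaction based on the same stale snapshot now conflicts on "a".
assert not db.commit(read_set={"a"}, write_set={"a": 2}, snapshot=snap)
assert db.data["a"] == 1   # the conflicting transaction did not apply
```

The rejected transaction simply retries against a fresh snapshot, which is cheap as long as conflicts are rare.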
Conditional Writes
For simpler use cases, ZippyDB exposes a conditional write API that maps internally to a server-side transaction. This API allows operations like:
“Set this key only if it doesn’t exist.”
“Update this value only if it matches X.”
“Delete this key only if it’s present.”
These operations avoid the need for client-side reads and round-trips. Internally, ZippyDB evaluates the precondition, checks for conflicts, and commits the write as a transaction if it passes.
This approach simplifies client code and improves performance in cases where logic depends on the current key's presence or state.
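The three conditional operations listed above are compare-and-set primitives. A minimal sketch over a plain dictionary, with invented helper names:

```python
# Sketch of conditional writes as compare-and-set primitives
# (illustrative helpers; server-side these run as transactions).

def put_if_absent(store, key, value):
    if key in store:
        return False
    store[key] = value
    return True

def update_if_equals(store, key, expected, value):
    if store.get(key) != expected:
        return False
    store[key] = value
    return True

def delete_if_present(store, key):
    return store.pop(key, None) is not None

kv = {}
assert put_if_absent(kv, "k", "v1")           # set only if it doesn't exist
assert not put_if_absent(kv, "k", "v2")
assert update_if_equals(kv, "k", "v1", "v2")  # update only if it matches
assert delete_if_present(kv, "k")             # delete only if present
assert not delete_if_present(kv, "k")
```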
Why Was ZippyDB Critical to Threads?
Threads didn’t have months to build a custom data infrastructure. It needed to read and write at scale from day one. ZippyDB handled several core responsibilities, such as:
Counters: Like counts, follower tallies, and other rapidly changing metrics.
Feed ranking state: Persisted signals to sort and filter what shows up in a user's home feed.
Search state: Underlying indices that powered real-time discovery.
What made ZippyDB valuable wasn’t just performance but adaptability. As a multi-tenant system, it supported fast onboarding of new services. Teams didn’t have to provision custom shards or replicate schema setups. They configured what they needed and got the benefit of global distribution, consistency guarantees, and monitoring from day one.
At launch, Threads was expected to grow. But few predicted the velocity: 100 million users in under a week. That kind of growth doesn’t allow for manual shard planning or last-minute migrations.
ZippyDB’s resharding protocol turned a potential bottleneck into a non-event. Its clients map data into logical shards, which are dynamically routed to physical machines. When load increases, the system can:
Provision new physical shards.
Reassign logical-to-physical mappings live, without downtime.
Migrate data using background workers that ensure consistency through atomic handoffs.
No changes are required in the application code. The system handles remapping and movement transparently. Automation tools orchestrate these transitions, enabling horizontal scale-out at the moment it's needed, not hours or days later.
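The logical-to-physical indirection that makes this possible can be sketched as a small routing table; machine names and shard counts here are made up for illustration:

```python
# Sketch of live logical-to-physical shard remapping (hypothetical names).
# Clients hash keys to a fixed logical shard space; only the routing table
# changes during a scale-out, so application code never moves.

import zlib

NUM_LOGICAL_SHARDS = 8

def logical_shard(key):
    return zlib.crc32(key.encode()) % NUM_LOGICAL_SHARDS

# Routing table: the only thing that changes when capacity is added.
routing = {s: ("machine-a" if s < 4 else "machine-b")
           for s in range(NUM_LOGICAL_SHARDS)}

def route(key):
    return routing[logical_shard(key)]

shard = logical_shard("user:42")
owner_before = route("user:42")

# Load spikes: provision machine-c and remap two hot shards to it, live.
routing[6] = "machine-c"
routing[7] = "machine-c"

owner_after = route("user:42")
# A key moves only if its logical shard was one of the remapped ones.
assert owner_after == ("machine-c" if shard in (6, 7) else owner_before)
```

In the real system the remap is paired with background data migration and an atomic handoff; the sketch shows only the routing indirection.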
This approach allowed Threads to start small, conserve resources during early development, and scale adaptively as usage exploded, without risking outages or degraded performance.
During Threads’ launch window, the platform absorbed thousands of machines in a matter of hours. Multi-tenancy played a key role here. Slack capacity from lower-usage keyspaces was reallocated, and isolation boundaries ensured that Threads could scale up without starving other workloads.
Async: Serverless at Meta Scale
Async is Meta’s internal serverless compute platform, formally known as XFaaS (eXtensible Function-as-a-Service).
At peak, Async handles trillions of function calls per day across more than 100,000 servers. It supports multiple languages such as HackLang, Python, Haskell, and Erlang.
What sets Async apart is that it abstracts away everything between writing a function and running it at global scale. There is no need for service provisioning: drop code into Async, and it inherits Meta-grade scaling, execution guarantees, and disaster resilience.
Async’s Role in Threads
When Threads launched, one of the most critical features wasn’t visible in the UI. It was the ability to replicate a user’s Instagram follow graph with a single tap. Behind that one action sat millions of function calls: each new Threads user potentially following hundreds or thousands of accounts in bulk.
Doing that synchronously would have been a non-starter. Blocking the UI on graph replication would have led to timeouts, poor responsiveness, and frustrated users. Instead, Threads offloaded that work to Async.
Async queued those jobs, spread them across the fleet, and executed them in a controlled manner. That same pattern repeated every time a celebrity joined Threads—millions of users received follow recommendations and notifications, all piped through Async without spiking database load or flooding services downstream.
How Async Handled the Surge
Async didn’t need to be tuned for Threads. It scaled the way it always does.
Several features were key to the success of Threads:
Queueing deferred less-urgent jobs to prevent contention with real-time tasks.
Batching combined many lightweight jobs into fewer, heavier ones, reducing overhead on dispatchers and improving cache efficiency.
Capacity-aware scheduling throttled job execution when downstream systems (like ZippyDB or the social graph service) showed signs of saturation.
This wasn’t reactive tuning. It was a proactive adaptation. Async observed system load and adjusted flow rates automatically. Threads engineers didn’t need to page anyone or reconfigure services. Async matched its execution rate to what the ecosystem could handle.
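The batching and capacity-aware scheduling ideas above can be sketched in a few lines; the dispatcher and health check here are deliberately simplified stand-ins for Async's far more sophisticated machinery:

```python
# Sketch of batched, capacity-aware dispatch (illustrative names).
# Jobs drain in batches; when downstream looks saturated, work stays queued.

from collections import deque

def dispatch(queue, batch_size, downstream_healthy):
    """Drain queued jobs in batches, backing off when downstream saturates."""
    executed = []
    while queue:
        if not downstream_healthy():
            break                      # throttle: leave work queued
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        executed.append(batch)         # one dispatcher round-trip per batch
    return executed

jobs = deque(f"follow-{i}" for i in range(10))

# Healthy downstream: 10 jobs collapse into 2 dispatcher batches of 5.
batches = dispatch(jobs, batch_size=5, downstream_healthy=lambda: True)
assert len(batches) == 2 and len(batches[0]) == 5

# Saturated downstream: nothing executes, jobs stay queued for later.
jobs = deque(f"notify-{i}" for i in range(10))
batches = dispatch(jobs, batch_size=5, downstream_healthy=lambda: False)
assert batches == [] and len(jobs) == 10
```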
Developer Experience
One of the most powerful aspects of Async is that the Threads engineers didn’t need to think about scale. Once business logic was written and onboarded into Async, the platform handled the rest:
Delivery windows: Jobs could specify execution timing, allowing deferment or prioritization.
Retries: Transient failures were retried with backoff, transparently.
Auto-throttling: Job rates were adjusted dynamically based on system health.
Multi-tenancy isolation: Surges in one product didn’t impact another.
These guarantees allowed engineers to focus on product behavior, not operational limits. Async delivered a layer of predictable elasticity, absorbing traffic spikes that would have crippled a less mature system.
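Of the guarantees listed above, transparent retries with backoff are the easiest to illustrate. A minimal sketch, assuming exponential backoff between attempts (the helper name and defaults are invented, not Async's API):

```python
# Sketch of transparent retries with exponential backoff (illustrative).
# Transient failures are retried; the delay doubles between attempts.

def run_with_retries(job, max_attempts=4, base_delay=0.1, sleep=lambda s: None):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise              # give up after the last attempt
            sleep(delay)           # real systems also add jitter here
            delay *= 2             # exponential backoff

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"

delays = []
assert run_with_retries(flaky, sleep=delays.append) == "ok"
assert len(attempts) == 3          # two transient failures, then success
assert delays == [0.1, 0.2]        # doubling backoff between retries
```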
Conclusion
The Threads launch was a stress test for Meta's infrastructure, and the results spoke for themselves: a hundred million users joined in less than a week with no major outages.
That kind of scale doesn’t happen by chance.
ZippyDB and Async weren’t built with Threads in mind. But Threads only succeeded because those systems were already in place, hardened by a decade of serving billions of users across Meta’s core apps. They delivered consistency, durability, elasticity, and observability without demanding custom effort from the product team.
This is the direction high-velocity engineering is heading: modular infrastructure pieces that are composable and deeply battle-tested. Not all systems need to be cutting-edge. However, the ones handling critical paths, such as state, compute, and messaging, must be predictable, reliable, and invisible.
References:
How we built a general-purpose key-value store for Facebook with ZippyDB
Asynchronous computing @Facebook: Driving efficiency and developer productivity at Facebook scale
SPONSOR US
Get your product in front of more than 1,000,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing sponsorship@bytebytego.com.
© 2025 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
by "ByteByteGo" <bytebytego@substack.com> - 11:35 - 29 Apr 2025 -
Eliminate wasted engineering time. This expert demo is free.
New Relic
A live demo on reclaiming engineering hours. Join the third Product Expert Series session.
Tired of tracking down essential software information?
Join us on 7 May at 10am BST for our Product Expert Webinar: Intelligent Observability for Developer Velocity.
Experience a live demo and Q&A session on service architecture intelligence, discovering how it saves you time by automatically creating a searchable catalog of critical software knowledge from observability data.
Demo how to:
- Streamline software development: Build a central knowledge base and auto-discover services.
- Speed up problem solving: Understand dependencies and visualise your landscape.
- Gain team performance insights: Enhance reliability, security, and compliance.
Walk through innovations that free information from team, tool, and systems silos. Discover how observability can be an engine for faster, more satisfying software development. Save your spot now.
This email was sent to info@learn.odoo.com. Update your email preferences. For information about our privacy practices, see our Privacy Policy.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2025 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 10:03 - 29 Apr 2025 -
Review and Sign : Order Confirmation.pdf
info@learn.odoo.com
You have received a new document to review and sign:
Order Confirmation.pdf
Please Review and Sign: Order Confirmation.pdf
Thanks for using DocuSign.
If you are unable to open the document link, please move this message to your Inbox folder. Powered by DocuSign. Do not share this email.
by "Docusign" <vendor@dere.sa> - 08:24 - 29 Apr 2025 -
Re: Product Inquiry
Dear Sir/Madam,
I am Manuel Gonçalves from Alferpac, a Portugal-based company. We recently reviewed your product offerings at an online exhibition and believe there's a strong alignment with our current business needs.
To explore this opportunity further, please let us know if you can provide us with your best offer.
I'm eagerly awaiting your response at the earliest.
Cumprimentos, Saludos, Best Regards,
Manuel Gonçalves
Responsável de Compras / Buyer
Sede: Rua Bernardino Simões, Nº3 - Apartado 247 - S. Cristóvão
2500-138 Caldas da Rainha
Centro Logístico: Rua dos Frades, Nº 9 Algarão
2475-011 Benedita
Tel.: (+351) 916 356 4950 (Chamada para Rede Móvel Nacional)
Tel: (+351) 262 920 7355 (Chamada para Rede Fixa Nacional)
Before printing this e-mail, consider whether you really need to. Protect yourself and the environment.
by "Manuel Gonçalves" <export.alferpac@gmail.com> - 04:29 - 29 Apr 2025 -
RE : In Urgent Need of Agents in Saudi Arabia
Dear Partner
Greetings from Cara and the GLA family. Trust you are doing great. The number of inquiries from Saudi Arabia on GLA has been steadily increasing, so we are actively seeking a reliable logistics partner in Saudi Arabia. After reviewing your company's website, we found that your business scope aligns perfectly with our platform's focus. Therefore, I would like to extend a sincere invitation for you to join us.
Besides, We are planning to hold the 12th GLA Global Logistics Conference:
- Event Name: The 12th GLA Global Logistics Conference
- Hosted By: GLA Global Logistics Alliance
- Event Date: May 15 - 18, 2025 (4 days)
- Host City: Dubai, UAE
- Expected Attendees: 1500+
- From: 130+ countries
- 1-on-1 Meetings: 51 per attendee
We expect you to join us! Please don't miss this opportunity to attend the conference as a GLA member.
I would be delighted to discuss how GLA can support your business growth.
If you want to know more, please don’t hesitate to contact me.
Best regards,
Cara Chen
GLA Overseas department
Mobile:
(86) 190-7616-6926 (WhatsApp|Wechat)
Email:
Company:
GLA Co.,Ltd
Website:
Address:
No. 2109, 21st Floor, HongChang Plaza, No. 2001 Road, Shenzhen, China
The 12th GLA Conference – Dubai, UAE on 15th - 18th May 2025 – click here for registration
- The 11th GLA Conference in Bangkok, Thailand, online album
- The 10th GLA Conference in Dubai, UAE, online album
【Notice Agreement No 7】
7. GLA president reserves the right to cancel or reject membership or application.
company shall cease to be a member of GLA if:
a) the Member does not adhere to GLA terms and conditions
b) the Member gives notice of resignation in writing to the GLA.
c) No good reputation in the market.
d) Have bad debt records in the GLA platform or in the market
by "Cara" <member226@glafamily.com> - 04:16 - 29 Apr 2025 -
YouTube Sponsor Opportunity – Let’s Collaborate
Hey Odoo Team,
I'm currently developing my YouTube channel dedicated to creators, solopreneurs, and digital natives, where I showcase tools, brands, and stories related to artificial intelligence, innovation, and business.
I'm opening a few sponsorship slots this month, and I'd love to feature Odoo in an upcoming video.
Your solution speaks directly to my audience: modern entrepreneurs who want to build, grow, and scale their online business using the best tools.
What I offer (starting at €600) :
Native integration from 60s to 90s at the beginning of the video (script validated together)
Link in the description + pinned comment
Complete performance analysis after 7 days + User feedback
Examples of recent collaborations:
Lovable ➔ +200 qualified leads in 48 hours [Watch the video]
SUNO ➔ +500 qualified leads in 48 hours [Watch the video]
This is an excellent opportunity to boost your visibility with an engaged and qualified audience.
We can also explore custom formats or bundles depending on your objectives.
Let me know if you'd like to book a slot or discuss it—I'll adapt to your needs.
Cheers,
by "KARIM | Sat0oshi" <coach@sat0oshi.com> - 04:11 - 29 Apr 2025 -
Kumail Nanjiani is speaking at DASH!
We’re excited to announce that Kumail Nanjiani—Oscar-nominated writer, Emmy-nominated actor, and comedian—will be speaking at DASH!
This year, we’re reimagining the conference experience—introducing new speakers, stages, and spaces crafted for deeper learning, impactful networking, and inspiration to move your work forward. Take a first look at our 2025 speaker lineup.
Get insights from world-class experts:
- Eric Weiss, Senior Software Engineer, Coinbase
- Connie Wang, Senior Staff Software Engineer, Rivian and Volkswagen Group Technologies
- Devin Burnette, Senior Staff Software Engineer, Developer Experience, Betterment
- Trevor Bramwell, Senior Systems Engineer, The Linux Foundation
- Alan Sherman, Senior Cloud Operations Engineer, The Linux Foundation
- Dina Abu Khader, Site Reliability Engineer, Expedia Group, Inc.
Cheers,
The DASH Team
*Early bird pricing ends April 30, 2025.
Visit our email preference center to unsubscribe or update subscription options.
© 2025 Datadog Inc, 620 8th Ave, 45th Floor, New York, NY 10018
This email was sent to info@learn.odoo.com.
by "The DASH Team" <dash@datadoghq.com> - 02:01 - 29 Apr 2025 -
When will self-driving cars be mainstream?
On McKinsey Perspectives
Sooner than you may think
by "Only McKinsey Perspectives" <publishing@email.mckinsey.com> - 01:36 - 29 Apr 2025 -
High tenacity sewing thread manufacturer in China
Dear info,
Glad to hear that you're in the market for high tenacity sewing thread. High tenacity sewing thread is widely used to sew tough material such as leather, denim, canvas, vinyl, or thick fabric (leather bags, leather shoes, leather sofas, parachutes); to repair shoes, coats, jeans, khaki pants, marine and sail gear, mattresses, boat and car covers, tents, awnings, couches, draperies, carpets, or area rugs; and it is also great for webbing, hair weaving, and beading. We have specialized in this field for about 30 years, with good quality and very competitive prices.
Should you have any questions, please do not hesitate to contact me. FREE SAMPLES and colour chart will be sent for your evaluation!
Thank you! Best regards,
Jacky
Rugao Rongli Thread CO., LTD.
Tel: 008618362114755
Wechat:008618362114755
Whatsapp:008613584680449
Mail:rgrongli@gmail.com
Website:https://rgrl.en.alibaba.com
by "sales003" <sales003@rl-thread.com> - 10:57 - 28 Apr 2025 -
The SWIFT Message You Requested
Dear Customer,
The SWIFT message you requested is attached.
Kind regards,
Türkiye İş Bankası
by "Turkiye Is Bankasi A.S" <bilgilendirme@ileti.isbank.com.tr> - 10:35 - 28 Apr 2025 -
Introduction and Potential Collaboration Opportunity
Dear info,
Good morning!
I hope this message finds you well. I recently visited your professional website and was greatly impressed by your leadership in the artificial flower industry. It is evident that your company is committed to delivering high-quality products while also making significant efforts to reduce costs, which is highly commendable.
Allow me to introduce myself. I am Ally from Dongguan Hmflowers Industrial Company Limited. With over 25 years of experience in the artificial flower industry, we have built a strong reputation for providing high-quality products at competitive prices. I am confident that we can meet your quality standards while reducing costs by at least 10%. Additionally, we specialize in custom production based on your specific designs.
To demonstrate our capabilities and the quality of our products, I would like to extend an invitation for you to visit our factory. Alternatively, I would be happy to send you a sample for your evaluation.
I look forward to the opportunity to discuss how we can support your objectives and explore potential collaboration.
Best regards,
Ally
Web:https://www.artificialflowers-factory.com/
Whats App:+86-18038381627
by "Jetu Sadic" <jetusadic@gmail.com> - 06:45 - 28 Apr 2025 -
Precision Engineering for Hydraulic Hoses, Seals, and Valves
Dear info,
I hope all is well. I am reaching out to introduce Shaanxi Kelong New Material Technology Co., Ltd., a leading provider of high-quality industrial products. We offer a wide range of solutions across four key categories:
Hydraulic Hoses: Steel wire wound hydraulic hose, Steel wire braided hydraulic hose, Aviation cotton braided rubber hose, PTFE hoses and components, high pressure/ultra-high-pressure rubber hose for hydraulic support, high pressure hydraulic rubber hose;
Seals: Hydraulic supports, cylinders, special rubber sealing components, static seals, dynamic seals and other seals, Aviation seals assembly,wind power equipment sealing assembly, high speed rail-way sealing product assembly;
Mining Equipment: Complete sets of equipment for rapid moving of fully mechanized coal mine faces, complete sets of special vehicles for moving coal mine underground hydraulic supports, and underground coal mining support equipment carrier vehicles;
Explosion-proof Valves: Flameproof valves, logic valves, proportional valves, reversing valves, etc.
Please let me know if you’d like more information on any of our products or if you'd be open to further discussion. I look forward to hearing from you.
Yours sincerely,
Linda Gao, Overseas Dept. Director
----------------------------------------------------------------------------------------------
Shaanxi Kelong New Material Technology Co.,Ltd
https://www.snkelong-sealhose.com/
Add.: Mid-Yongchang Rd,Weibin St.,High-tech Zone,Xianyang,Shaanxi Province, China
Tel/Fax: +86-29-3332 6567
Mobile/What's App/Wechat: +86-153 3914 7989
by "Sophia" <Sophia@klcoalmine-sealhose.com> - 05:25 - 28 Apr 2025 -
Introducing Our High-Quality Textile Products for Your Business
Dear Sir,
Good day!
I hope this message finds you well. My name is Ling, and I am writing to introduce our company, Shaanxi Huazuo Impex Co., Ltd., which specializes in the production and supply of high-quality textile products.
We offer a wide range of textiles including beach towels, bath towels, sport towels, face towels, kitchen towels, tea towels, table cloths, ponchos, aprons, waterproof mattress protectors, pillow protectors, waterproof pants, PVC diapers, etc., all of which are crafted to meet international standards. Our products are known for their good air permeability, durability, comfort, and high cost performance, and we believe they could be a great addition to your product line. Please visit our website for more product details: https://huazuo.en.alibaba.com
We would be happy to send you more detailed information about our products or samples for your evaluation. If you are interested, I would be glad to arrange a meeting or call at your convenience to discuss the product details you want.
Kindly forward this mail to the relevant person in your company's import/procurement department.
We also look forward to your inquiry.
Best Regards
Ling Yu
Shaanxi Huazuo Impex Co., Ltd.
WhatsApp:+8613670218644
E-mail: dept06@shaanxihuazuo.com
Website: https://huazuo.en.alibaba.com
Office Address: Room 902, East Of Building A, Fengye Square, Gaoxin Road, High-Tech Zone, Xi 'an, Shaanxi, China
by "ForgeVise" <ForgeVise@shaanxihuazuo-raws.com> - 02:17 - 28 Apr 2025 -
Find high-quality textiles for your next project with YiweiGroup
Dear info,
I hope this message finds you well. I am writing to introduce our company and our flagship products: polyester, nylon, RPET, and other textiles, designed to meet the highest standards of quality and versatility.
For the past 20 years, YiweiGroup has been at the forefront of the textile manufacturing industry. Our commitment to innovation, quality, and customer satisfaction has established us as a trusted partner to businesses worldwide.
YiweiGroup is one of the manufacturing leaders for textile in China. We specialize in developing and producing fabrics for bag and luggage products, apparel products, home textile products and also outdoor products.
We invite you to explore the remarkable potential of our products for your upcoming projects. By choosing YiweiGroup, you gain a partner committed to helping you achieve excellence in your product offerings.
Please do not hesitate to contact us for samples, detailed product specifications, or any further information you may require. We look forward to the opportunity to collaborate with you and support your business's growth and success.
Thank you for considering YiweiGroup as your trusted textile provider.
Warmest regards,
Yongzhao (Sam) Lu
Marketing Specialist
YiweiGroup Textile. Co
market@yiweigroupcn.com
https://www.yiweigroups.com
P.S. You can visit our Global Sources online store at https://yiweitextile.manufacturer.globalsources.com/homepage_6003002332362.htm to stay updated on our latest products and innovations.
by "Puty Heduan" <monsterachimut188@gmail.com> - 02:12 - 28 Apr 2025 -
How WhatsApp Handles 40 Billion Messages Per Day
Build to Prod: Secure, Scalable MCP Servers with Docker (Sponsored)
The AI agent era is here, but running tools in production with MCP is still a mess—runtime headaches, insecure secrets, and a discoverability black hole. Docker fixes that. Learn how to simplify, secure, and scale your MCP servers using Docker containers, Docker Desktop, and the included MCP gateway. From trusted discovery to sandboxed execution and secrets management, Docker gives you the foundation to run agentic tools at scale—with confidence.
Read the guide: MCPs to Prod with Docker
Disclaimer: The details in this post have been derived from the articles written by the WhatsApp engineering team. All credit for the technical details goes to the WhatsApp Engineering Team. The links to the original articles and videos are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
Messaging platforms don’t get second chances. Missed messages, delayed photos, or dropped calls break trust instantly. And the bigger the user base, the harder it gets to recover from even brief failures.
Some systems thrive under that pressure.
WhatsApp is one of them. It moves nearly 40 billion messages daily, keeps hundreds of millions of users connected, and does it all with a small engineering team. At one point, just over 50 engineers supported the entire backend. Fewer than a dozen focused on the core infrastructure.
This scale is a result of multiple engineering choices that favored simplicity over cleverness, clarity over abstraction, and resilience over perfection. System failures weren’t unexpected, but inevitable. Therefore, the system was designed to keep going when things went sideways.
Erlang played a central role. Built for telecoms, it offered lightweight concurrency, fault isolation, and distributed messaging from the ground up. However, the real advantage came from what was layered on top: smart partitioning, async replication, tightly scoped failover, and tooling.
In this article, we’ll take a technical dive into how WhatsApp built its architecture and the challenges the engineering team faced during this journey.
System Design Principles
At the heart of WhatsApp’s architecture is a surprisingly basic principle: make it simple enough to reason about under stress. When systems operate at a global scale, few things threaten reliability more than complexity.
Some guiding principles followed by the WhatsApp engineering team were as follows:
Clarity Over Cleverness: The architecture favors small, focused components. Each service handles one job, minimizing dependencies and limiting the blast radius when things fail.
Async by Default: WhatsApp relies on async messaging throughout. Processes hand off work and move on, keeping the system responsive even when parts of it slow down. This design absorbs load spikes and prevents small glitches from snowballing.
Isolation: Each backend is partitioned into “islands” that can fail independently. Replication flows one way so that if a node drops, its peer takes over.
Seamless Upgrades: Code changes roll out without restarting services or disconnecting users. Discipline around state and interfaces makes this possible.
Quality Through Focus: In the early days, every line of backend code was reviewed by the founding team. That kept the system lean, fast, and deeply understood.
WhatsApp Server Architecture
Delivering a message sounds simple, until millions of phones start talking at once. At WhatsApp's scale, even small inefficiencies compound quickly.
The diagram below shows the high-level WhatsApp architecture:
The architecture focuses on three goals: speed, reliability, and resource isolation. Some key aspects of the architecture are as follows:
A Connection is a Process
When a phone connects to WhatsApp, it establishes a persistent TCP connection to one of the frontend servers. That connection is managed as a live Erlang process that maintains the session state, manages the TCP socket, and exits cleanly when the user goes offline.
There is no connection pooling and no multiplexing, but just one process per connection. This design maps naturally onto Erlang's strengths and makes lifecycle management straightforward. If something goes wrong, like a dropped network packet or app crash, the process dies, and with it, all associated memory and state.
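The one-process-per-connection model can be sketched in Python with asyncio as an illustrative stand-in for Erlang processes (the `Session` class and its fields are invented for this sketch, not WhatsApp code):

```python
import asyncio

class Session:
    """One task per connection; all session state lives and dies with it."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.inbox = asyncio.Queue()  # stand-in for an Erlang mailbox
        self.alive = True

    async def run(self):
        try:
            while True:
                msg = await self.inbox.get()
                if msg == "disconnect":
                    break
        finally:
            # When the task exits, its state goes with it, mirroring an
            # Erlang process dying together with its heap and mailbox.
            self.alive = False

async def main():
    s = Session("alice")
    task = asyncio.create_task(s.run())
    await s.inbox.put("disconnect")   # simulate the client going offline
    await task
    return s.alive

print(asyncio.run(main()))  # False: session state torn down with the task
```

Because teardown is just "the task ends," there is no separate cleanup path to get wrong, which is the practical payoff of the design.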
Stateful and Smart on the Edge
The session process isn’t a dumb pipe. It actively coordinates with backends to pull user-specific data:
Authentication: Verifies the client identity and session validity.
Blocking and Permissions: Checks whether the user is allowed to send messages or has been restricted.
Pending Messages and Notifications: Queries message queues and notification subsystems.
This orchestration happens quickly and in parallel. By keeping session logic close to the edge, the system avoids extra round-trips and minimizes latency for first-message delivery.
Scaling Frontend Connections
At peak, a single chat server can manage upwards of a million concurrent connections. Erlang handles this effortlessly, thanks to its process model and non-blocking IO. Each session lives independently, so one slow client doesn’t affect others.
To maintain performance at that scale, frontend servers avoid unnecessary work by adopting some strategies:
Typing indicators and presence updates (for example, “online,” “last seen”) are batched and rate-limited.
Message acknowledgments use lightweight protocol messages, not full API calls.
Idle sessions are monitored and culled when inactive for too long.
This keeps frontend load proportional to active engagement, not just raw connection count.
Efficient Message Flow
When two users are online and start chatting, their session processes coordinate through backend chat nodes. These nodes are tightly interconnected and handle routing at the protocol level, not the application level. Messages move peer-to-peer within the backend mesh, minimizing hops.
Presence, typing states, and metadata updates add volume. For every message, multiple related updates might flow:
Delivery receipts
Typing notifications
Group membership changes
Profile picture updates
Each of these messages travels through the same architecture, but with reduced delivery guarantees. Not every typing status needs to arrive.
The Role of Erlang
Erlang plays a key role in the efficiency of WhatsApp’s backend.
Most backend stacks buckle when faced with millions of users doing unpredictable things at once. However, Erlang’s runtime is designed from the ground up to handle massive concurrency, soft failure, and fast recovery.
Here are some core features of Erlang:
In Erlang, every connection, every user session, and every internal task runs as a lightweight process. They’re managed by the BEAM virtual machine, which can spin up hundreds of thousands (sometimes millions) of them on a single node without choking.
Each process runs in isolation with its memory and mailbox. It can crash without taking down the system.
Erlang plays exceptionally well with large, multi-core boxes. As core counts increase, the BEAM scheduler spreads processes across them with minimal coordination overhead. This is SMP (symmetric multiprocessing) scalability: node count stays constant while capacity within each node grows.
Erlang’s “let it crash” philosophy is a pragmatic response to the unpredictability of distributed systems. Supervisors monitor child processes, restarting them if they fail. Failures stay local. There’s no chain reaction of exceptions or retries.
On top of Erlang, WhatsApp built a gen_factory pattern that dispatches work across multiple processes. Each mini-factory handles its own stream of input, reducing contention and spreading load more evenly. This model keeps WhatsApp’s backend humming even under spikes in traffic.
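The gen_factory idea of key-affine dispatch can be sketched as follows (a simplified single-process model, not WhatsApp's implementation):

```python
class WorkerPool:
    """Dispatch work by key: the same key always lands on the same worker,
    so per-key processing is serialized without any locks."""
    def __init__(self, n_workers):
        self.queues = [[] for _ in range(n_workers)]

    def dispatch(self, key, job):
        idx = hash(key) % len(self.queues)  # deterministic key -> worker
        self.queues[idx].append(job)
        return idx

pool = WorkerPool(4)
a = pool.dispatch("user:alice", "job1")
b = pool.dispatch("user:alice", "job2")
print(a == b)  # True: same key, same worker queue
```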
Backend Systems and Isolation
Backend systems tend to become monoliths unless there’s a strong reason to split them up.
WhatsApp had one: survival at scale. When millions of users are relying on real-time messaging, even a minor backend hiccup can ripple through the system.
Here are a few strategies they adopted:
Divide by Function, Not Just Load
The backend is split into over 40 distinct clusters, each handling a narrow slice of the product. Some handle message queues. Others deal with authentication, contact syncing, or presence tracking. Multimedia, push notifications, and spam filtering each get their own space.
This kind of logical decoupling does a few things well:
Limits failure scope: If the spam filter crashes, message delivery doesn’t.
Speeds up iteration: Teams can deploy changes to one backend without risk to others.
Optimizes hardware: Some services are memory-bound, others are CPU-heavy. Isolation lets each run on the hardware it needs.
Decoupling isn’t free. It adds coordination overhead. However, at WhatsApp’s scale, the benefits outweigh the costs.
Redundancy Through Erlang Clustering
Erlang’s distributed model plays a key role in backend resilience. Nodes within a cluster run in a fully meshed topology and use native distribution mechanisms to communicate. If one node drops, others pick up the slack.
State is often replicated or reconstructible. Clients can reconnect to a new node and resume where they left off. Supervisors and health checks ensure that failed processes restart quickly, and clusters self-heal in the face of routine hardware faults.
There’s no single master node, no orchestrator dependency, and minimal need for human intervention.
“Islands” of Stability
To go further, the system groups backend nodes into what are called “islands.” Each island acts as a small, redundant cluster responsible for a subset of data, like a partition in a distributed database.
Here’s how the island approach works:
Each island typically has two or more nodes.
Data partitions are assigned deterministically to one node as primary and another as secondary.
If the primary goes down, the secondary takes over instantly.
Islands replicate data within themselves but remain isolated from other islands.
This setup adds a layer of fault tolerance without requiring full replication across the entire system. Most failures affect only one island, and recovery is scoped tightly.
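A toy model of islands and scoped failover (node names and the partition-to-island mapping are invented for illustration):

```python
def assign_island(partition_id, islands):
    # Deterministic partition -> island mapping (illustrative scheme).
    return islands[partition_id % len(islands)]

class Island:
    """A small redundant cluster: one primary, one secondary."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self.primary_up = True

    def node_for(self, partition_id):
        # Failover is scoped to this island only; others are unaffected.
        return self.primary if self.primary_up else self.secondary

islands = [Island("node-a1", "node-a2"), Island("node-b1", "node-b2")]
isl = assign_island(7, islands)       # partition 7 -> island 1
print(isl.node_for(7))                # node-b1 (primary)
isl.primary_up = False                # primary dies
print(isl.node_for(7))                # node-b2 takes over; island 0 untouched
```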
Database Design and Optimization
When messages need to move at sub-second latency across continents, traditional database thinking doesn't apply. There’s no room for complex joins, heavyweight transactions, or anything that introduces blocking. WhatsApp's architecture leans hard into a model built for speed, concurrency, and volatility.
Here are some core database-related features:
Key-Value Store in RAM
Data access follows a key-value pattern almost universally. Each piece of information, whether it’s a user session, a pending message, or a media pointer, has a predictable key and a compact value.
And whenever possible, data lives in memory.
In-memory structures like Erlang’s ETS (Erlang Term Storage) tables provide fast, concurrent access without external dependencies. These structures are native to the VM and don’t require network hops or disk seeks. Read and write throughput remains consistent under pressure because memory latency doesn’t spike with load.
Databases Embedded in the VM
Instead of reaching out to external storage layers, most database logic is embedded directly within the Erlang runtime. This tight integration reduces the number of moving parts and avoids the latency that creeps in with networked DB calls.
Some backend clusters maintain their internal data stores, implemented using a mix of ETS tables and write-through caching layers. These stores are designed for short-lived data, like presence updates or message queues, that don’t require permanent persistence.
For long-lived data like media metadata, records are still kept in memory as long as possible. Only when capacity demands or eviction policies kick in does the data flush to disk.
Lightweight Locking and Fragmentation
Concurrency isn’t just about spawning processes. It’s also about managing locks.
To minimize lock contention, data is partitioned into what are called “DB Frags”: fragments of ETS tables distributed across processes.
Each fragment handles a small, isolated slice of the keyspace. All access to that fragment goes through a single process on a single node. This allows for:
Serialized access per key: No races, no locks.
Horizontal scale-out: More fragments mean more throughput.
Targeted replication: Each fragment is replicated independently to a paired node.
The result is a system where reads and writes rarely block, and scaling up just means adding more fragments and processes.
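A sketch of fragment routing with one-way replication to a paired copy (the fragment count and class shape are assumptions; real fragments live in separate processes on separate nodes):

```python
N_FRAGS = 8  # illustrative; WhatsApp used hundreds to thousands

class Fragment:
    """One 'DB frag': an isolated slice of the keyspace owned by a single
    process, replicated one-way to a paired node."""
    def __init__(self, frag_id):
        self.frag_id = frag_id
        self.store = {}      # primary copy
        self.replica = {}    # the paired node's copy (modelled in-process)

    def put(self, key, value):
        self.store[key] = value
        self.replica[key] = value   # push-only replication to the pair

frags = [Fragment(i) for i in range(N_FRAGS)]

def frag_for(key):
    # All access to a given key goes through exactly one fragment.
    return frags[hash(key) % N_FRAGS]

frag_for("user:42").put("user:42", {"status": "online"})
print(frag_for("user:42").store["user:42"])  # {'status': 'online'}
```

Scaling up is then a matter of raising `N_FRAGS` and spreading fragments over more processes, with no cross-fragment locks to coordinate.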
Async Writes and Parallel Disk I/O
For persistence, writes happen asynchronously and outside the critical path. Most tables operate in an async_dirty mode, meaning they accept updates without requiring confirmation or transactional guarantees. This keeps latency low, even when disks get slow.
Behind the scenes, multiple transaction managers (TMs) push data to disk and replication streams in parallel. If one TM starts to lag, others keep the system moving. IO bottlenecks are absorbed by fragmenting disk writes across directories and devices, maximizing throughput.
Offline Caching: Don’t Write What Will Be Read Soon
When a phone goes offline, its undelivered messages queue up in an offline cache. This cache is smarter than a simple buffer. It uses a write-back model with a variable sync delay. Messages are written to memory first, then flushed to disk only if they linger too long.
During high-load events, like holidays, this cache becomes a critical buffer. It allows the system to keep delivering messages even when the disk can’t keep up. In practice, over 98% of messages are served directly from memory before ever touching persistent storage.
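The write-back offline cache can be modelled roughly like this (the fixed sync delay is a simplification; the real delay was variable and load-dependent):

```python
class OfflineCache:
    """Write-back cache: messages stay in memory and are flushed to disk
    only if they linger longer than sync_delay."""
    def __init__(self, sync_delay):
        self.sync_delay = sync_delay
        self.mem = {}        # msg_id -> (message, enqueued_at)
        self.disk = {}       # stand-in for persistent storage

    def enqueue(self, msg_id, message, now):
        self.mem[msg_id] = (message, now)

    def deliver(self, msg_id):
        # Most messages are delivered straight from memory,
        # never touching disk.
        if msg_id in self.mem:
            return self.mem.pop(msg_id)[0]
        return self.disk.pop(msg_id, None)

    def tick(self, now):
        # Flush only messages that have lingered past the delay.
        for msg_id, (message, t0) in list(self.mem.items()):
            if now - t0 >= self.sync_delay:
                self.disk[msg_id] = message
                del self.mem[msg_id]

c = OfflineCache(sync_delay=5)
c.enqueue("m1", "hello", now=0)
c.tick(now=1)                 # too young: stays in memory
print(c.deliver("m1"))        # 'hello' (served from memory)
c.enqueue("m2", "world", now=0)
c.tick(now=10)                # lingered: flushed to disk
print(c.deliver("m2"))        # 'world' (served from disk)
```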
Replication and Partitioning
Replication sounds simple until it isn’t.
At scale, it gets tricky fast. Bidirectional replication introduces locking, contention, and coordination overhead. Cross-node consistency becomes fragile. And when things go wrong, everything grinds to a halt.
WhatsApp follows a different strategy.
Each data fragment is owned by a single node: the primary. That node handles all application-layer reads and writes for its fragment. It pushes updates to a paired secondary node, which passively receives and stores the changes.
The secondary never serves client traffic. It’s there for failover only.
This model avoids one of the nastiest problems in distributed systems: concurrent access to shared state. There are no conflicting writes, no race conditions, and no need for transactional locks across nodes. If the primary fails, the secondary is promoted, and replication flips.
Also, instead of running one massive table per service, WhatsApp breaks data into hundreds and sometimes thousands of fragments. Each fragment is a small, isolated slice of the total dataset, typically hashed by a user ID or session key.
These fragments are:
Bound to a single node for writes.
Replicated to one other node.
Mapped to processes through consistent hashing.
This sharding scheme reduces contention, improves locality, and allows the system to scale horizontally without reshuffling state.
Each group of nodes managing a set of fragments is called an island. An island typically consists of two nodes: a primary and a secondary. The key is that each fragment belongs to only one island, and each island operates independently.
Scaling Challenges
WhatsApp's backend scaled not just because of clever design, but because teams learned where things cracked under pressure and fixed them before they exploded.
Some of the scaling challenges the WhatsApp team faced are as follows:
When Hashes Collided
Erlang’s ETS tables rely on hash-based indexing for fast access. In theory, that works fine. In practice, a collision in the hash function can degrade performance.
A subtle bug emerged when two layers of the system used the same hash function with different goals. The result was thousands of entries ending up in the same buckets, while others stayed empty.
The fix was to change the seed of the hash function: a two-line patch that instantly improved throughput in that subsystem by 4x.
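The failure mode is easy to reproduce in miniature: if the partitioning layer and the bucketing layer share one hash function, every key that lands in a given partition collides into the same bucket, and reseeding one layer breaks the correlation (all numbers here are illustrative):

```python
def h(x, seed=0):
    # Toy keyed hash; the seed decorrelates the two layers.
    return hash((x, seed))

N_PARTS, N_BUCKETS = 16, 16
keys = [f"key-{i}" for i in range(10_000)]

# Layer 1: keys routed to partition 0.
part0 = [k for k in keys if h(k) % N_PARTS == 0]

# Layer 2: bucketing within the partition.
same_seed = {h(k) % N_BUCKETS for k in part0}          # reuses the same hash
new_seed = {h(k, seed=1) % N_BUCKETS for k in part0}   # reseeded hash

# With a shared hash, every key in partition 0 satisfies h(k) % 16 == 0,
# so they all collapse into bucket 0; reseeding spreads them back out.
print(len(same_seed), len(new_seed))
```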
Selective Receive
Erlang's selective receive feature lets processes pull specific messages from their mailbox. This was handy for control flow, but dangerous under load.
In high-throughput situations, like loading millions of records into memory, selective receive turned into a bottleneck. Processes got stuck scanning for the right message.
Engineers worked around this by draining queues into temp storage, splitting logic across worker processes, and avoiding selective receive in performance-critical paths.
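The drain-into-temp-storage workaround, sketched in Python with a deque standing in for an Erlang mailbox (an O(1) keyed lookup replaces the O(n) mailbox scan of selective receive):

```python
from collections import deque

def drain(mailbox):
    """Drain the whole mailbox once into temp storage keyed by tag, so
    later lookups never rescan the queue."""
    by_tag = {}
    while mailbox:
        tag, payload = mailbox.popleft()
        by_tag.setdefault(tag, deque()).append(payload)
    return by_tag

# A mailbox of 1,000 messages cycling through 100 tags.
mailbox = deque((f"tag{i % 100}", i) for i in range(1000))
by_tag = drain(mailbox)
print(by_tag["tag7"][0])  # 7: O(1) lookup instead of scanning the mailbox
```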
Cascading Failures Aren’t Always Load-Related
One of the most severe outages didn’t start with a CPU spike or traffic surge. It started with a router. A backend router silently dropped a VLAN, causing a massive disconnect-reconnect storm across the cluster.
What followed was a perfect storm: overloaded message queues, stuck nodes, unstable cluster state. At one point, internal queues grew from zero to four million messages in seconds. Even robust processes like PG2, normally fault-tolerant, began behaving erratically, queueing messages that couldn’t be delivered.
The only solution was a hard reset. The system had to be shut down, rebooted node by node, and carefully stitched back together.
Conclusion
WhatsApp’s backend is elegant in the trenches. It’s built to handle chaos without becoming chaotic, to scale without centralization, and to fail without taking users down with it.
From Erlang’s lightweight processes to carefully fragmented data and one-way replication, every design choice reflects a deep understanding of operational reality at massive scale.
The architecture is pragmatic: meant to withstand sudden spikes, silent regressions, and global outages.
References:
Erlang Factory 2014 - That's 'Billion' with a 'B': Scaling to the Next Level at WhatsApp
A Reflection on Building the WhatsApp Server - Code BEAM 2018
SPONSOR US
Get your product in front of more than 1,000,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing sponsorship@bytebytego.com.
© 2025 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 28 Apr 2025 -
Your Beauty Vision, Our Manufacturing Expertise
Hi info,
Ready to create your dream beauty products? As an experienced OEM/ODM cosmetics manufacturer, we specialize in creating high-quality, bespoke solutions for businesses like yours.
Why partner with us?
Innovation: Unique formulations tailored to your audience
Quality: Cutting-edge facilities and stringent quality control
Sustainability: Eco-friendly solutions for a greener future
We proudly partner with:
Brands launching their first product line
Retailers seeking private-label solutions
Influencers and celebrities creating signature collections
Let’s create something extraordinary together. Book a free consultation with our team today!
Warm regards,
Evelyn
+86 18903070739
Guangzhou Opseve Cosmetics Co., Ltd.
by "MAhipal Popplewell" <popplewellmahipal617@gmail.com> - 07:47 - 28 Apr 2025 -
knitted fabrics and functional sports fabric
Dear info,
Good morning.
Our factory supplies high-quality knitted fabrics and functional sports fabrics, shipping to clothing factories for brands such as adidas, Primark, and Sam's Club.
We produce 900-1,200 tons of knitted fabrics per month.
Do you need our fabric product catalogue series?
Please feel free to contact us.
Best regards,
Mr. Sam Stone
Director of Manufacturing
Xianghua Group Co., Ltd. / Shenzhou Printing & Dyeing Co., Ltd. / Tianlun Siyuan Knitting & Printing Co., Ltd.
Email: samstone@outlook.com
Factory add.: Qiuxia Industrial Zone, Shishi 362700, Fujian, China
by "Fatima Hebbar" <hebbarfatima168@gmail.com> - 07:06 - 28 Apr 2025 -
A leader’s guide to the business impact of tariffs
Leading Off
Take action now Brought to you by Alex Panas, global leader of industries, & Axel Karlsson, global leader of functional practices and growth platforms
Welcome to the latest edition of Leading Off. We hope you find our insights useful. Let us know what you think at Alex_Panas@McKinsey.com and Axel_Karlsson@McKinsey.com.
—Alex and Axel
Volatility and caution remain high as business leaders continue to evaluate the potential impacts of tariffs and trade controls on their organizations and on the global economy. Last week, we highlighted one move that companies can make to navigate the situation: creating a geopolitical nerve center to track developments in global trade and coordinate their responses. This week, we consider the actions that leaders can take to put their companies in a strong position for the long term, even in an evolving economic landscape.
While business leaders are hoping for clarity on the global tariff situation, they cannot simply take a wait-and-see approach—or focus only on more tactical, short-term responses. McKinsey’s Cindy Levy, Shubham Singhal, and Zoe Fox suggest that leaders take three steps to position their companies to thrive in this challenging environment. First, they can assess how tariffs will affect their competitive advantages and growth prospects. Second, organizations can define both their strategic postures and the actions that could help them seize the business opportunities that trade-related changes may present. Finally, leaders can analyze multiple potential scenarios and pressure test different strategic moves they may need to make. “With this view, a company’s leadership team can make proactive decisions to navigate tariff uncertainty while sustaining their company’s resilience and growth,” the authors say.
That’s how many value drivers corporate leaders should continually evaluate as part of a proactive approach to navigating geopolitics, according to McKinsey’s Cindy Levy, Shubham Singhal, and Matt Watters. These include tariffs, subsidies in support of national industrial policies, and potential government investments in geopolitical allies across various business domains.
That’s McKinsey’s Andrew Grant, Michael Birshan, Olivia White, and Ziad Haider on the challenges of being a global company in today’s uncertain environment. The authors suggest that organizations can build resilience by conducting geopolitical-scenario planning and upgrading their boards’ capabilities around geopolitical risks. They also can pursue “structural segmentation,” or “a cluster of moves that global corporations are considering to mitigate geopolitical exposure, to enable locally informed decision-making, and to clear a pathway to safe, stable growth.” Structural segmentation can involve coordinated actions across six domains: operations, R&D, technology and data, legal entity structure, capital, and people.
Business leaders who may feel overwhelmed by the degree of constant global change have good reason. “This is probably the most complex international environment in 80 years,” says Council on Foreign Relations President Michael Froman. In an interview with McKinsey’s Shubham Singhal, Froman adds, “We live in a fragmented world, and it’s likely to become more fragmented.” He says that leaders should consider several key issues in this evolving environment, including their dependencies on certain markets, the impact of government protectionism on supply chains, and the interplay between national security and economic concerns. The long-held view of a strong economy as an enabler of national security is “a little bit flipped on its head,” Froman observes. “Increasingly, we’re looking to economic tools as tools of national security—for example, export controls to keep the most advanced chips out of the hands of our competitors and adversaries so that we can maintain an advantage and a lead on artificial intelligence,” he says. “That could have military and intelligence implications as well as foreign investment restraints.”
The influx of recent global challenges that organizations have faced—from the COVID-19 pandemic to armed conflicts to rising inflation—has shown that leaders can take myriad approaches to dealing with uncertainty. Leaders who demonstrate strategic courage—meaning, a combination of prudence and boldness—are best positioned for success, according to McKinsey Global Managing Partner Bob Sternfels and Senior Partners Ishaan Seth and Michael Birshan. Previous McKinsey research shows that defense-focused organizations tend to perform in the middle of the pack, while offense-minded companies often wind up with a mix of big wins and losses. The authors say that courageous leaders blend both approaches. “As they start to create value from volatility, we see the ambidextrous management teams thriving rather than merely surviving in this environment,” they note.
Lead by being proactive.
— Edited by Eric Quiñones, senior editor, New York
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
This email contains information about McKinsey’s research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to the Leading Off newsletter.
Copyright © 2025 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 02:23 - 28 Apr 2025 -
Can you predict supply chain disruptions? Digital twins offer a way
On McKinsey Perspectives
A supply chain’s stunt double Brought to you by Alex Panas, global leader of industries, & Axel Karlsson, global leader of functional practices and growth platforms
Welcome to the latest edition of Only McKinsey Perspectives. We hope you find our insights useful. Let us know what you think at Alex_Panas@McKinsey.com and Axel_Karlsson@McKinsey.com.
—Alex and Axel
Self-healing supply chain. Rising labor costs, increasing consumer demand for fast and free deliveries, and geopolitical uncertainties are some of the forces stifling supply chains and squeezing companies’ margins. Organizations can more nimbly react and adapt to potential disruptions by harnessing AI-enabled, self-monitoring digital twins that are capable of both identifying issues and prescribing the fix, say McKinsey Partner Alex Cosmas and coauthors. By showing how physical and digital processes interact along the supply chain, digital twins make it easier for companies to optimize the end-to-end process that best suits their needs.
—Edited by Jermey Matthews, editor, Boston
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this email because you subscribed to the Only McKinsey Perspectives newsletter, formerly known as Only McKinsey.
Copyright © 2025 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey Perspectives" <publishing@email.mckinsey.com> - 01:30 - 28 Apr 2025