RE: How to smoothly ship goods to the Middle East?
All the best to you,
This is Yori from Winsail International Logistics Co., Ltd.
Please see this week's ocean/air freight rates below:
Ocean freight rate:
SHANGHAI - DAMMAM: USD 3000/20GP; USD 4200/40HQ
SHENZHEN - DAMMAM: USD 3050/20GP; USD 4400/40HQ
Air freight rate:
CAN - DMM: 5.2 USD/KG
Feel free to let me know if you need rates for other main ports.
Best regards
-------------------------------------------------------------------
My email: overseas.12@winsaillogistics.com
My Tel/whatsapp number:+86 13660987349
by "Yori" <forwarder03@win-win-logistics.cn> - 02:33 - 19 Jun 2024 -
Scaling to 1.2 Billion Daily API Requests with Caching at RevenueCat
Effortlessly Integrate E-Signatures into Your App with BoldSign (Sponsored)
BoldSign by Syncfusion makes it easy for developers to integrate e-signatures into applications.
Our powerful e-signature API allows you to embed signature requests, create templates, add custom branding, and more.
It’s so easy to get started that 60% of our customers integrated BoldSign into their apps within one day.
Why BoldSign stands out:
99.999% uptime.
Trusted by Ryanair, Cost Plus Drugs, and more.
Complies with eIDAS, ESIGN, GDPR, SOC 2, and HIPAA standards.
No hidden charges.
Free migration support.
Rated 4.7/5 on G2.
Get 20% off the first year with code BYTEBYTEGO20. Valid until Sept. 30, 2024.
Disclaimer: The details in this post have been derived from the article originally published on the RevenueCat Engineering Blog. All credit for the details about RevenueCat’s architecture goes to their engineering team. The link to the original article is present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.
RevenueCat is a platform that makes it easy for mobile app developers to implement and manage in-app subscriptions and purchases.
The staggering part is that the platform handles over 1.2 billion API requests per day from those apps.
At this massive scale, fast and reliable performance becomes critical. Some of it is achieved by distributing the workload uniformly across multiple servers.
However, an efficient caching solution also becomes the need of the hour.
Caching allows frequently accessed data to be quickly retrieved from fast memory rather than slower backend databases and systems. This can dramatically speed up response times.
But caching also adds complexity since the cached data must be kept consistent with the source of truth in the databases. Stale or incorrect data in the cache can lead to serious issues.
For an application operating at the scale of RevenueCat, even small inefficiencies or inconsistencies in the caching layer can have a huge impact.
In this post, we will look at how RevenueCat overcame multiple challenges to build a truly reliable and scalable caching solution using Memcached.
The Three Key Goals of Caching
RevenueCat has three key goals for its caching infrastructure:
Low latency: The cache needs to be fast because even small delays in the caching layer can have significant consequences at this request volume. Retrying requests and opening new connections are detrimental to the overall performance.
Keeping cache servers up and warm: Cache servers need to stay available and full of frequently accessed data to offload the backend systems.
Maintaining data consistency: Data in the cache needs to be consistent. Inconsistency can lead to serious application issues.
While these main goals are highly relevant to applications operating at scale, a robust caching solution also needs supporting features such as monitoring and observability, optimization, and some sort of automated scaling.
Let’s look at each of these goals in more detail and how RevenueCat’s engineering team achieved them.
Low Latency
There’s no doubt that latency has a huge impact on user experience.
According to an oft-cited Amazon statistic, every 100 ms of latency costs them 1% in sales. While it's hard to confirm whether this figure is exact, there's no denying that latency impacts user experience.
Even small delays of a few hundred milliseconds can make an application feel sluggish and unresponsive. As latency increases, user engagement and satisfaction plummet.
RevenueCat achieves low latency in its caching layer through two key techniques.
1 - Pre-established connections
Their cache client maintains a pool of open connections to the cache servers.
When the application needs to make a cache request, it borrows a connection from the pool instead of establishing a new one. A TCP handshake can nearly double cache response times, so borrowing a connection avoids that overhead on every request.
But no decision comes without some tradeoff.
Keeping connections open consumes memory and other resources on both the client and server. Therefore, it’s important to carefully tune the number of connections to balance resource usage with the ability to handle traffic spikes.
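The pooling idea can be sketched in a few lines. This is a minimal illustration (not RevenueCat's actual client, which is not public): connections are created once up front, then borrowed and returned, so no request pays for a handshake. `FakeConnection` is a hypothetical stand-in for a real TCP connection.

```python
import queue

class FakeConnection:
    """Stand-in for a pre-established TCP connection to a cache server."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

class ConnectionPool:
    """Fixed-size pool: all connections are established at startup and
    borrowed/returned per request, avoiding a handshake on each call."""
    def __init__(self, size):
        self._pool = queue.Queue(maxsize=size)
        for i in range(size):              # pre-establish every connection
            self._pool.put(FakeConnection(i))

    def borrow(self, timeout=0.1):
        return self._pool.get(timeout=timeout)  # blocks briefly if exhausted

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(size=2)
c1 = pool.borrow()
c2 = pool.borrow()
pool.release(c1)
c3 = pool.borrow()   # reuses the released connection; no new handshake
```

The fixed `size` is exactly the tuning knob mentioned above: too small and requests queue up during spikes, too large and memory is wasted on idle sockets.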
2 - Fail-fast approach
If a cache server becomes unresponsive, the client immediately marks it as down for a few seconds and fails the request, treating it as a cache miss.
In other words, the client will not retry the request or attempt to establish new connections to the problematic server during this period.
The key insight here is that even brief retry delays of 100ms can cause cascading failures under heavy load. Requests pile up, servers get overloaded, and the "retry storm" can bring the whole system down. Though it might sound counterintuitive, failing fast is crucial for a stable system.
But what’s the tradeoff here?
There may be a slight increase in cache misses when servers have temporary issues. But this is far better than risking a system-wide outage. A 99.99% cache hit rate is meaningless if 0.01% of requests trigger cascading failures. Prioritizing stability over perfect efficiency is the right call.
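The fail-fast behavior can be sketched as follows. This is an illustrative client (class and field names are invented for this example): the first error marks the server down for a cooldown window, and every request in that window returns an instant cache miss with no retry and no new connection.

```python
import time

class FailFastClient:
    """On an error, mark the server down for `cooldown` seconds and treat
    every request in that window as a cache miss -- no retries -- so a
    sick server cannot back up the request path."""
    def __init__(self, backend, cooldown=3.0, clock=time.monotonic):
        self.backend = backend
        self.cooldown = cooldown
        self.clock = clock          # injectable clock, handy for testing
        self.down_until = 0.0

    def get(self, key):
        if self.clock() < self.down_until:
            return None             # fail fast: instant miss, no network call
        try:
            return self.backend.get(key)
        except ConnectionError:
            self.down_until = self.clock() + self.cooldown
            return None             # the triggering failure is also a miss

class FlakyBackend:
    """Simulated cache server that is currently unresponsive."""
    def __init__(self):
        self.calls = 0
    def get(self, key):
        self.calls += 1
        raise ConnectionError("server unresponsive")

now = [0.0]
client = FailFastClient(FlakyBackend(), cooldown=3.0, clock=lambda: now[0])
r1 = client.get("k")   # real failure: marks the server down
r2 = client.get("k")   # within cooldown: returns a miss without touching the server
```

After the cooldown elapses, the next `get` probes the server again, which is how the client discovers recovery without a separate health-check loop.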
One potential enhancement here is circuit breaking, where requests to misbehaving servers are disabled based on error rates and latency measurements. This is something Uber uses in its integrated cache solution, CacheFront.
However, aggressive timeouts and careful connection-pool management likely achieve similar results with far less complexity.
Keeping Cache Servers Warm
The next goal RevenueCat had was keeping the cache servers warm.
They employed several strategies to achieve this.
1 - Planning for Failure with Mirrored and Gutter pool
RevenueCat uses fallback cache pools to handle failures.
Their strategy is designed to handle cache server failures and maintain high availability. The two approaches they use are as follows:
Mirrored pool: A fully synchronized secondary cache pool that receives all writes and can immediately take over reads if the primary pool fails.
Gutter pool: A small, empty cache pool that temporarily caches values with a short TTL when the primary pool fails, reducing the load on the backend until the primary recovers. For reference, the gutter pool technique was also used by Facebook when they built their caching architecture with Memcached.
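A sketch of the gutter-pool fallback under stated assumptions: the primary pool is unreachable, and the gutter pool shields the backend by caching loaded values (the short-TTL bookkeeping is elided here; a real gutter pool would expire entries quickly). All class names are invented for illustration.

```python
class DictCache:
    """Trivial in-memory cache standing in for a healthy pool."""
    def __init__(self): self.d = {}
    def get(self, k): return self.d.get(k)
    def set(self, k, v): self.d[k] = v

class DownCache:
    """Simulates a failed primary pool."""
    def get(self, k): raise ConnectionError
    def set(self, k, v): raise ConnectionError

class GutterCache:
    """Try the primary pool; on failure, fall back to a small gutter pool
    so the backend only sees one load per key while the primary is down."""
    def __init__(self, primary, gutter):
        self.primary, self.gutter = primary, gutter

    def get(self, key, load_from_db):
        try:
            value = self.primary.get(key)
        except ConnectionError:
            value = self.gutter.get(key)   # primary down: consult the gutter
            if value is not None:
                return value
            value = load_from_db(key)
            self.gutter.set(key, value)    # short-TTL entry in a real system
            return value
        if value is not None:
            return value
        value = load_from_db(key)
        try:
            self.primary.set(key, value)
        except ConnectionError:
            pass                           # write failure handled elsewhere
        return value

db_loads = []
def load(key):
    db_loads.append(key)
    return f"value-of-{key}"

cache = GutterCache(DownCache(), DictCache())
v1 = cache.get("user:1", load)   # primary down, gutter miss -> one DB load
v2 = cache.get("user:1", load)   # served from the gutter; backend untouched
```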
Here, too, there are trade-offs to consider concerning server size.
For example, smaller servers provide benefits such as:
Granular failure impact: With many small cache servers, the failure of a single server affects a smaller portion of the cached data. This can make the fallback pool more effective, as it needs to handle a smaller subset of the total traffic.
Faster warmup: When a small server fails and the gutter pool takes over, it can warm up the cache for that server’s key space more quickly due to the smaller data volume.
However, small servers also have drawbacks:
Operational complexity: Managing a larger number of servers means more moving parts and more maintenance overhead.
Connection overhead: Each application server has to maintain connections to every cache server, so the total number of connections grows.
The diagram below from RevenueCat's article shows this comparison.
On the other hand, larger servers offer benefits such as:
Simplified management: Fewer large servers are easier to manage and maintain than many small instances. There are fewer moving parts and less complexity in the overall system.
Improved resource utilization: Larger servers can more effectively utilize the available CPU, memory, and network resources, leading to better cost efficiency.
Fewer connections: With fewer cache servers, the total number of connections from the application servers is reduced, minimizing connection overhead.
Bigger servers also have some trade-offs:
When a large server fails, a larger portion of the cached data becomes unavailable. The fallback pool needs to handle a larger volume of traffic, potentially increasing the load on the backend.
In the case of a failure, warming up the cache for a larger key space may take longer due to the increased data volume.
This is where the strategy of using a mirrored pool for fast failover and a gutter pool for temporary caching strikes a balance between availability and cost.
The mirrored pool ensures immediate availability. The gutter pool, on the other hand, provides a cost-effective way to handle failures temporarily.
Generally speaking, it's better to design the cache tier based on a solid understanding of the backend capacity. Also, when sharding, the cache and backend sharding schemes should be orthogonal, so that a cache server going down translates into only a moderate load increase on each backend server.
2 - Dedicated Pools
Another technique they employ to keep cache servers warm is to use dedicated cache pools for certain use cases.
Here’s how the strategy works:
Identifying high-value data: The first step is to analyze the application's data access patterns and identify datasets that are crucial for performance, accuracy, or user experience. This could include frequently accessed configuration settings, important user-specific data, or computationally expensive results.
Creating dedicated pools: Instead of relying on a single shared cache pool, create separate pools for each identified high-value dataset. These dedicated pools have their own allocated memory and operate independently from the main cache pool.
Reserving memory: By allocating dedicated memory to each pool, they ensure that the high-value data has a guaranteed space in the cache. This prevents other less critical data from evicting the important information, even under high memory pressure.
Tailored eviction policies: Each dedicated pool can have its eviction policy tailored to the specific characteristics of the dataset. For example, a pool holding expensive-to-recompute data might have a longer TTL or a different eviction algorithm compared to a pool with frequently updated data.
The dedicated pools strategy has several advantages:
Improved cache hit ratio for critical data
Increased data accuracy
Flexibility in cache management
3 - Handling Hot Keys
Hot keys are a common challenge in caching systems.
They refer to keys that are accessed more frequently than others, leading to a high concentration of requests on a single cache server. This can cause performance issues and overload the server, potentially impacting the overall system.
There are two main strategies for handling hot keys:
Key Splitting
Here's how key splitting works:
Key splitting involves distributing the load of a hot key across multiple servers.
Instead of having a single key, the key is split into multiple versions, such as keyX/1, keyX/2, keyX/3, etc.
Each version of the key is placed on a different server, effectively spreading the load.
Clients read from one version of the key (usually determined by their client ID) but write to all versions to maintain consistency.
The challenge with key splitting is detecting hot keys in real time and coordinating the splitting process across all clients.
It requires a pipeline to identify hot keys, determine the splitting factor, and ensure that all clients perform the splitting simultaneously to avoid inconsistencies.
The list of hot keys is dynamic and can change based on real-life events or trends, so the detection and splitting process needs to be responsive.
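The read-one/write-all mechanics above can be sketched briefly. The splitting factor and key format (`keyX/0` … `keyX/3`) follow the example in the text; everything else is an illustrative assumption, and the hot-key detection pipeline is out of scope here.

```python
N_SPLITS = 4   # splitting factor; real systems tune this per hot key

def split_key(key, client_id):
    """Each client reads exactly one split, chosen by its client ID,
    so reads for a hot key spread across multiple servers."""
    return f"{key}/{client_id % N_SPLITS}"

def write_all_splits(cache, key, value):
    """Writes must touch every split so all readers see the same value."""
    for i in range(N_SPLITS):
        cache[f"{key}/{i}"] = value

cache = {}
write_all_splits(cache, "keyX", "hot-value")
read_key = split_key("keyX", client_id=6)   # client 6 reads split 6 % 4 = 2
```

The write amplification (one logical write becomes `N_SPLITS` physical writes) is the price paid for spreading read load, which is why splitting is reserved for genuinely hot keys.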
Local Caching
Local caching is simpler when compared to key splitting.
Here's how it works:
Local caching involves caching hot keys directly on the client side, rather than relying solely on the distributed cache.
When a key is identified as hot, it is cached locally on the client with a short TTL (time to live).
Subsequent requests for that key are served from the local cache, reducing the load on the distributed cache servers.
Local caching doesn't require coordination among clients.
However, local caching provides weaker consistency guarantees since the locally cached data may become stale if updates occur frequently.
To mitigate this, it’s important to use short TTLs for locally cached keys and only apply local caching to data that changes rarely.
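A minimal sketch of such a client-side cache, with the TTL bounding staleness as described above (class and parameter names are invented; the clock is injectable to make the expiry behavior easy to demonstrate):

```python
import time

class LocalHotKeyCache:
    """Client-side cache for keys flagged as hot: entries live only
    `ttl` seconds, which bounds how stale a hot value can get."""
    def __init__(self, ttl=1.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self.entries = {}   # key -> (value, expiry_timestamp)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is not None and self.clock() < hit[1]:
            return hit[0]
        return None         # missing or expired: fall through to the
                            # distributed cache in a real client

    def put(self, key, value):
        self.entries[key] = (value, self.clock() + self.ttl)

now = [0.0]
local = LocalHotKeyCache(ttl=1.0, clock=lambda: now[0])
local.put("hot", "v1")
fresh = local.get("hot")   # within TTL: served locally
```

No cross-client coordination is needed, which is exactly the simplicity advantage over key splitting.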
Avoiding Thundering Herds
When a popular key expires, all clients may request it from the backend simultaneously, causing a load spike. This is known as the "thundering herd" problem.
RevenueCat largely avoids this situation because it maintains cache consistency by updating the cache during writes rather than relying on expiry. However, systems that rely on low TTLs and invalidations from DB changes are far more exposed to thundering herds.
Some other potential solutions to avoid thundering herds are as follows:
Recache policy: The GET requests can include a recache policy. When the remaining TTL is less than the given value, one of the clients will get a miss and re-populate the value in the cache while other clients continue to use the existing value.
Stale policy: In the delete command, the key is marked as stale. A single client gets a miss while others keep using the old value.
Lease policy: In this policy, only one client wins the right to repopulate the value while the losers just have to wait for the winner to re-populate. For reference, Facebook uses leasing in its Memcache setup.
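The lease policy can be sketched with memcached-style atomic `add` semantics: on a miss, only the client whose `add` of a lease marker succeeds gets to repopulate, while everyone else waits (or serves a stale value). This is an illustration of the idea, not Facebook's or RevenueCat's implementation; releasing the lease and filling the value are elided.

```python
class LeaseCache:
    """Toy cache with an atomic add, used to hand out a repopulation lease."""
    def __init__(self):
        self.d = {}

    def add(self, key, value):
        """Atomic add: succeeds only if the key is absent
        (mirrors the memcached 'add' command)."""
        if key in self.d:
            return False
        self.d[key] = value
        return True

    def get_or_lease(self, key):
        if key in self.d:
            return ("hit", self.d[key])
        if self.add(key + "#lease", True):
            return ("lease", None)   # this client won: go recompute the value
        return ("wait", None)        # someone else holds the lease

cache = LeaseCache()
r1 = cache.get_or_lease("popular")   # first client on a miss wins the lease
r2 = cache.get_or_lease("popular")   # concurrent client is told to wait
```

Only one backend recomputation happens per expiry, which is precisely what defuses the herd.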
Cache Server Migrations
Sometimes cache servers have to be replaced while minimizing impact on hit rates and user experience.
RevenueCat has built a coordinated cache server migration system that consists of the following steps:
Warming up the new cluster:
Before switching traffic, the team starts warming up the new cache cluster.
They populate the new cluster by mirroring all the writes from the existing cluster.
This ensures that the new cluster has the most up-to-date data before serving any requests.
Switching a percentage of reads:
After the new cluster is sufficiently warm, the team gradually switches a percentage of read traffic to it.
This allows them to test the new cluster’s performance and stability under real-world load.
Flipping all traffic:
Once the new cluster has proven its stability and performance, the traffic is flipped over to it.
At this point, the new cluster becomes the primary cache cluster, serving all read and write requests.
The old cluster is kept running for a while, with writes still being mirrored to it. This allows quick fallback in case of any issues.
Decommissioning the old cluster:
After a period of stable operation with the new cluster as the primary, the old cluster is decommissioned.
This frees up resources and completes the migration process.
The diagram below shows the entire migration process.
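The mirrored-write and gradual-read-switch phases of the migration can be sketched as follows. The class and the `read_pct` knob are invented for illustration; RevenueCat's coordination machinery is not public.

```python
import random

class MigratingCache:
    """During a migration, every write goes to both clusters (keeping the
    new one warm), while reads go to the new cluster for `read_pct` of
    requests. Raising read_pct from 0.0 to 1.0 completes the cutover."""
    def __init__(self, old, new, read_pct=0.0, rng=random.random):
        self.old, self.new = old, new
        self.read_pct = read_pct
        self.rng = rng   # injectable for deterministic testing

    def set(self, key, value):
        self.old[key] = value   # mirrored writes: both clusters stay current
        self.new[key] = value

    def get(self, key):
        pool = self.new if self.rng() < self.read_pct else self.old
        return pool.get(key)

old, new = {}, {}
cache = MigratingCache(old, new, read_pct=1.0, rng=lambda: 0.5)
cache.set("k", "v")      # lands in both clusters
value = cache.get("k")   # read_pct=1.0: served by the new cluster
```

Because writes keep mirroring to the old cluster even after the flip, rolling back is just lowering `read_pct` again.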
Maintaining Data Consistency
Maintaining data consistency is one of the biggest challenges when using caching in distributed systems.
The fundamental issue is that data is stored in multiple places - the primary data store (like a database) and the cache. Keeping the data in sync across these locations in the face of concurrent reads and writes is a non-trivial problem.
See the example below that shows how a simple race condition can result in a consistency problem between the database and the cache.
What's going on here?
Web Server 1 gets a cache miss and fetches the data from the database.
Before Web Server 1 can populate the cache, Web Server 2 performs a DB write for the same data and updates the cache with the new value.
Web Server 1 then refills the cache with the stale data it fetched in step 1, overwriting the newer value.
RevenueCat uses two main strategies to maintain cache consistency.
1 - Write Failure Tracking
In RevenueCat's system, a cache write failure is a strong signal that there may be an inconsistency between the cache and the primary store.
However, there are better options than simply retrying the write because that can lead to cascading failures and overload as discussed earlier.
Instead, RevenueCat's caching client records all write failures. After recording, it deduplicates them and ensures that the affected keys are invalidated in the cache at least once (retrying as needed until successful). This guarantees that the next read for those keys will fetch fresh data from the primary store, resynchronizing the cache.
This write failure tracking allows them to treat cache writes as if they should always succeed, significantly simplifying their consistency model. They can assume the write succeeded, and if it didn't, the tracker will ensure eventual consistency.
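A sketch of the tracker, following the description above (record, dedupe, invalidate at least once). Class and method names are invented; a real tracker would persist the failed-key set and retry on a schedule.

```python
class WriteFailureTracker:
    """Record failed cache writes, dedupe the keys, and invalidate each
    one at least once so the next read refetches from the primary store."""
    def __init__(self, cache):
        self.cache = cache
        self.failed_keys = set()   # a set deduplicates repeated failures

    def safe_set(self, key, value):
        try:
            self.cache.set(key, value)
        except ConnectionError:
            self.failed_keys.add(key)

    def flush_invalidations(self):
        for key in list(self.failed_keys):
            try:
                self.cache.delete(key)      # invalidate the possibly-stale entry
                self.failed_keys.discard(key)
            except ConnectionError:
                pass                        # keep the key; retry next pass

class FlakyCache:
    """Cache whose writes currently fail but whose deletes succeed."""
    def __init__(self):
        self.d = {"user:1": "stale"}
    def set(self, k, v):
        raise ConnectionError("write path down")
    def delete(self, k):
        self.d.pop(k, None)

cache = FlakyCache()
tracker = WriteFailureTracker(cache)
tracker.safe_set("user:1", "fresh")   # write fails; key is recorded
tracker.flush_invalidations()         # stale entry is invalidated
```

After the flush, a read of `user:1` misses and repopulates from the database, restoring consistency.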
2 - Consistent CRUD Operations
For each type of data operation (Create, Read, Update, Delete), they have developed a strategy to keep the cache and primary store in sync.
For reads, they use the standard cache-aside pattern: read from the cache, and on a miss, read from the primary store and populate the cache. They always use an "add" operation to populate, which only succeeds if the key doesn't already exist, to avoid overwriting newer values.
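The add-based cache-aside read can be sketched like this (illustrative class; the in-memory `add` mimics memcached's add-if-absent semantics): a slow reader's stale populate loses to a value cached in the meantime.

```python
class AddOnlyCacheAside:
    """Cache-aside reads that populate with 'add' (succeeds only if the
    key is absent), so a slow reader cannot clobber a newer cached value."""
    def __init__(self, db):
        self.d = {}
        self.db = db

    def add(self, key, value):
        if key in self.d:
            return False          # a value is already cached; do not overwrite
        self.d[key] = value
        return True

    def read(self, key):
        if key in self.d:
            return self.d[key]
        value = self.db[key]      # miss: go to the primary store
        self.add(key, value)      # 'add', not 'set': never overwrite
        return value

db = {"user:1": "v1"}
cache = AddOnlyCacheAside(db)
first = cache.read("user:1")            # miss -> DB -> cached via add
cache.d["user:1"] = "v2"                # meanwhile, a writer caches a newer value
stale_won = cache.add("user:1", "v1")   # the slow reader's add is rejected
```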
For updates, they use a clever strategy as follows:
Before the update, they reduce the cache entry's TTL to a low value, such as 30 seconds.
They update the primary data store.
After the update, they write the new value to the cache and reset the TTL.
If a failure occurs between steps 1 and 2, the cache remains consistent as the update never reaches the primary store. If a failure occurs between 2 and 3, the cache will be stale, but only for a short time until the reduced TTL expires. Also, any complete failures are caught by the write failure tracker that we talked about earlier.
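The three-step update can be sketched as follows. The TTL values and class names are illustrative; `touch` mirrors memcached's command for changing an entry's TTL without rewriting its value.

```python
class FakeCache:
    """In-memory cache tracking (value, ttl) per key for illustration."""
    def __init__(self):
        self.d = {}   # key -> (value, ttl_seconds)
    def set(self, k, v, ttl):
        self.d[k] = (v, ttl)
    def touch(self, k, ttl):
        if k in self.d:
            v, _ = self.d[k]
            self.d[k] = (v, ttl)   # change TTL only, like memcached 'touch'

class ConsistentStore:
    """Update sequence: shrink the cache TTL, write the DB, rewrite the
    cache. A crash between the DB write and the cache rewrite leaves
    stale data for at most LOW_TTL seconds."""
    LOW_TTL = 30
    FULL_TTL = 3600

    def __init__(self, cache, db):
        self.cache, self.db = cache, db

    def update(self, key, value):
        self.cache.touch(key, self.LOW_TTL)          # step 1: cap staleness
        self.db[key] = value                         # step 2: primary write
        self.cache.set(key, value, self.FULL_TTL)    # step 3: refresh cache

cache, db = FakeCache(), {"plan": "free"}
cache.set("plan", "free", ttl=3600)
store = ConsistentStore(cache, db)
store.update("plan", "pro")
```

If step 2 never runs, nothing changed anywhere; if step 3 never runs, the entry expires within `LOW_TTL` seconds and the next read refetches the fresh value.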
For deletes, they use a similar TTL reduction strategy before the primary store delete.
However, for creation, they rely on the primary store to provide unique IDs to avoid conflicts.
Conclusion
RevenueCat’s approach illustrates the complexities of running caches at a massive scale. While some details may be specific to their Memcached setup, the high-level lessons are widely relevant.
Here are some key takeaways to consider from this case study:
Use low timeouts and fail fast on cache misses. Retries can cause cascading failures under load.
Plan cache capacity for failure scenarios. Ensure the system can handle multiple cache servers going down without overloading backends.
Use fallback and dedicated cache pools. Mirrored fallback pools and dedicated pools for critical data help keep caches warm and handle failures.
Handle hot keys through splitting or local caching. Distribute load from extremely popular keys across servers or cache them locally with low TTLs.
Avoid "thundering herds" with techniques like stale-while-revalidate and leasing.
Track and handle cache write failures. Assume writes always succeed but invalidate on failure to maintain consistency.
Implement well-tested strategies for cache updates during CRUD operations. Techniques like TTL reduction before writes help maintain consistency across cache and database.
References:
Scaling Smoothly: RevenueCat’s data-caching techniques for 1.2 billion daily API requests
How RevenueCat Manages Caching for Handling over 1.2 Billion API Requests
How Uber Serves Over 40 Million Reads Per Second from Online Storage Using Integrated Cache
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 18 Jun 2024 -
[Online workshop] Maximizing observability with New Relic logs
New Relic
Register for this free online workshop on the 27th June at 10 AM BST/ 11 AM CEST for a comprehensive introduction to leveraging logs in New Relic. Get hands-on with log data, master importation, parsing, filtering, dropping, and setting up alerts.
In this 90-minute online workshop, you’ll work in a sandbox environment, search New Relic log data, work with partitions and AI log patterns, troubleshoot application errors and trace data, create charts and dashboards for seamless team collaboration, and configure proactive alert conditions to address potential issues.
You’ll learn:
- What logs in context is and its role in observability
- What log shipping is and how it works in New Relic
- How to apply parsing rules and drop filters
- Ways to bring your log data into New Relic
- Configuring plugins like FluentD, Kubernetes cloud integrations and log API
Register now Need help? Let's get in touch.
Need to contact New Relic? You can chat or call us at +44 20 3859 9190.
Strand Bridge House, 138-142 Strand, London WC2R 1HH
© 2024 New Relic, Inc. All rights reserved. The New Relic logo is a trademark of New Relic, Inc.
by "New Relic" <emeamarketing@newrelic.com> - 05:04 - 18 Jun 2024 -
RE: Learn
Hello,
Did you have a chance to view my previous email which I have sent you?
Kindly share your thoughts about acquiring the list, so that we can give you the cost & count details.
Regards,
Jessica
From: Jessica Martin
Sent: Thursday, June 6, 2024 4:06 PM
To: info@learn.odoo.com
Subject: Learn
Dear Exhibitor,
Hope this note finds you well.
I am writing to confirm whether you are interested in acquiring the attendee mailing list of the below-mentioned trade show:
IRMA 2024 (June 20-22, 2024 | Hamburg, Germany)
Information fields include: contact name, company name, job title, company mailing address with zip code, phone number, website URL, and the contact person's verified business email address.
The complete list is available for a small investment with unlimited usage rights, so you can use it for your regular marketing campaigns too.
Please let me know your interest so that we can get back to you with more details on Counts and Pricing available for this list.
Thank you; we await your response.
Regards,
Jessica Martin - Events & Trade Show Specialist.
To remove from this mailing: reply with subject line as "Remove"
by "Jessica Martin" <jessica.martin@reachprospects.onmicrosoft.com> - 02:56 - 18 Jun 2024 -
Do you know how big the space economy could be by 2035?
Only McKinsey
Space industry opportunities
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Belinda Yu, editor, Atlanta
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:05 - 18 Jun 2024 -
⏰ Last chance to register: API platform insights on 18 June!
⏰ Last chance to register: API platform insights on 18 June!
Explore the latest API platform insights with industry insiders. Learn about trends, success metrics, and AI's role.
Hi Md Abul,
Just a quick heads up that our exciting new online panel discussion - API platform insights 2024 - in collaboration with ResearchHQ - is happening tomorrow! If you haven't registered, now's your last chance to secure your spot.
📅 Date: 18th of June
🕙 Time: 10 am EDT / 3 pm BST
📍 Location: Zoom
Come join us for an in-depth discussion of the findings of the API platform insights 2024 report. We'll explore the latest trends in API platforms, discuss success metrics, and examine the role of AI tools in enhancing the efficiency and ROI of platform teams.
Sign up now, and you'll receive an exclusive copy of the full report and access to the on-demand recording after the event.
Thanks,
Budha & team
Tyk, 87a Worship Street, London, City of London EC2A 2BE, United Kingdom, +44 (0)20 3409 1911
by "Budhaditya Bhattacharya" <budha@tyk.io> - 06:01 - 17 Jun 2024 -
The final frontier: A leader’s guide to the space economy
Out of this world
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Few areas of the economy are as dynamic or pervasive in our day-to-day lives as outer space. Space-based technology is improving at a breakneck pace, supporting an ever-growing number of applications that we use here on Earth. And like the universe itself, the business potential seems limitless. McKinsey research suggests that the space economy is at an inflection point: it’s poised to nearly triple in size by 2035, and many industries have something to gain from the connectivity, mobility, and data capabilities that outer space offers. This week, we consider space’s strategic future and how the cosmos could help us solve some of our greatest challenges at home.
There’s a certain romance about space. But its practical applications have made it more accessible and connected to our daily lives than ever—from how we watch movies and stream content to how we track packages to how we grow crops. In a recent episode of The McKinsey Podcast, senior partner Ryan Brukardt explains the ins and outs of the fast-growing space economy and its implications for business and society. “Everybody needs to have [space] in their strategy,” Brukardt says. As space-based innovations grow apace, so does the number of companies that can benefit from them. According to McKinsey global managing partner Bob Sternfels, Brukardt, and colleagues, it’s important for leaders in all industries to bridge the gap between the space community and their customers. They can do so by setting a vision for capturing value from space-related advances, even if it means disrupting their own business; by investing in space through new partnerships or a space-dedicated business line; and by joining the broader dialogue about the space economy’s future to ensure that its benefits are as far-reaching as possible.
That’s how many times the cost performance of satellites has improved in the past five to ten years. According to McKinsey’s Daniel Pacthod and colleagues, such rapid progress has enabled a proliferation of satellite-based use cases, including the ability to observe the effects of climate change on every corner of the planet. And there’s even more that satellites can do to advance sustainability here on Earth. The space sector can do the same, the authors say: for example, by tracking emissions, rating the sustainability of satellite missions, and setting targets for net-zero debris in orbit.
What goes up must come down, at least when you’re in Earth’s gravitational pull. But after an object from the International Space Station crashed into a family’s home in the United States, the lack of a legal framework relating to space junk—and who’s responsible when it falls on personal property—has raised a few eyebrows. Indeed, as space gets increasingly crowded, the need for good governance is greater than ever. A more structured approach to managing the space economy’s risks to infrastructure, data, and people (whether they’re on, or orbiting, the Earth) is key to ensuring its future growth.
Lead by shooting for the moon.
— Edited by Daniella Seiler, executive editor, Washington, DC
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
You received this email because you subscribed to the Leading Off newsletter.
by "McKinsey Leading Off" <publishing@email.mckinsey.com> - 04:42 - 17 Jun 2024 -
Re: Weekly update Shipping information fm China
Dear friend
Greeting
Please check the below rates:
· Shekou-Jebel Ali 3450/40HQ *2 18th June
· Ningbo-Dammam 4000/40HQ *3 26th June
· Shanghai-Riyadh 3550/20GP ; 4650/40HQ 11th June
Peak season is coming, and rates are increasing sharply.
Space is hard to book these days, so don't wait and risk a higher rate!
Best regards
--------
Yori
NVOCC:MOC-NV09845
Winsail International Logistics Co.,Ltd
QQ:1586409909
Mob/Whatsapp: +86 13660987349
Email: overseas.12@winsaillogistics.com
by "Yori" <overseas10@gz-logistics.cn> - 03:45 - 17 Jun 2024 -
How widespread is the use of generative AI?
Only McKinsey
Our latest McKinsey global survey
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
•
High expectations for gen AI. 2024 is the year organizations truly begin using—and deriving business value from—gen AI, Alex Singla, McKinsey senior partner and global leader of QuantumBlack, AI by McKinsey, and coauthors share. Our latest McKinsey Global Survey on AI finds that respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.
•
Surge in adoption. For the past six years, AI adoption by respondents’ organizations has hovered at about 50%. This year, the survey finds that adoption has jumped to 72%. Companies are also now using AI in more parts of the business, with half of respondents saying their organizations have adopted AI in two or more business functions. Learn what high-performing companies are doing differently to create value from gen AI adoption, and visit McKinsey Digital to see examples of how companies are competing with technology.
—Edited by Belinda Yu, editor, Atlanta
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:06 - 16 Jun 2024 -
The quarter’s top themes
McKinsey&Company
At #1: What's the future of AI? In the second quarter of 2024, our top ten posts from McKinsey Themes highlighted topics including generative AI, the traits of good bosses, and more. At No. 1 is What's the future of AI?, which draws on insights from articles by McKinsey’s Eric Lamarre, Rodney W. Zemmel, Kate Smaje, Michael Chui, Ida Kristensen, and more. Read on for our full top 10.
2. 100 articles on generative AI
Since generative AI (gen AI) burst onto the scene in late 2022, it’s captivated business leaders and society at large. The excitement is well deserved: McKinsey research indicates that gen AI could add the equivalent of $2.6 trillion to $4.4 trillion of value annually—and redefine the way people work and live. Plus, our top 10
3. How to be a better boss
It’s often said that your manager can make or break your experience at a job. Unfortunately, it seems that almost everyone can recall working under a bad boss at least once in their career. How do so many bad leaders come into a position of power in the first place, and why do they remain there? Traits that make great leaders
Did you know that McKinsey partners regularly speak with top CEOs across industries to glean valuable perspectives? Our recent and best interviews cover a range of topics, from navigating disruption to effective crisis leadership. Dive into our curated selection below for insights from these impactful leaders, and learn what it takes to thrive in today's uniquely challenging business environment. Get perspective
You received this email because you are a registered member of the Top Ten Most Popular newsletter.
by "McKinsey Top Ten" <publishing@email.mckinsey.com> - 06:46 - 16 Jun 2024 -
The week in charts
The Week in Charts
AI’s effect on workforce skills, healthcare gaps, and more
You received this email because you subscribed to The Week in Charts newsletter.
by "McKinsey Week in Charts" <publishing@email.mckinsey.com> - 03:32 - 15 Jun 2024 -
Five exercises to help you lead at your best
Make behavioral changes stick
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
How exceptional leaders lead
Imagine you’re an avid basketball player. Your rebounding could use some work, but your three-point game is stellar. Where to spend your time? If you’re like the pros, you won’t waste too much of it trying to perfect your weakness. Instead, you’ll focus on your strength—and ensure that you sink those threes every time. If you’re a business leader, the same lesson applies. Often, people spend great amounts of energy on their shortcomings—to unsatisfying results. But playing to your strengths and integrating them into your daily work can be much more inspiring for both you and your team.
While focusing on one’s strengths might seem obvious, doing so often requires a shift in mindset. This is a crucial step when adopting new behavior, and it’s one that leaders often neglect in favor of immediate action. But ignoring the attitudes and beliefs behind previous behavior all but ensures that the new behavior a person hopes to adopt won’t stick. Leaders who are in tune with the mindsets that dictate their actions are better equipped to guide their organizations toward effective behavioral change.
Finding your strength is one of five key exercises leaders can use to be more aware of their mindsets. To explore the other four—including the power of taking a pause and how to ask solution-focused questions—and to learn how to shift your own mindset in service of stronger, more purposeful leadership, read Johanne Lavoie’s 2014 McKinsey Quarterly classic, “Lead at your best.”
You received this email because you subscribed to our McKinsey Classics newsletter.
by "McKinsey Classics" <publishing@email.mckinsey.com> - 12:28 - 15 Jun 2024 -
EP116: 11 steps to go from Junior to Senior Developer
This week’s system design refresher:
What is Data Pipeline? | Why Is It So Popular? (Youtube video)
11 steps to go from Junior to Senior Developer
Top 8 must-know Docker concepts
What does a typical microservice architecture look like?
Top 10 Most Popular Open-Source Databases
SPONSOR US
New Relic Digital Experience Monitoring (DEM) (Sponsored)
New Relic DEM solutions are designed to provide comprehensive insights into digital operations, allowing teams to optimize user experiences in real time.
Download the datasheet for an overview of New Relic DEM capabilities:
Real user monitoring (RUM): Browser monitoring and mobile monitoring
Pixel-perfect replays: Session replay
Proactive issue detection: Synthetic monitoring, mobile user journeys (crash analysis), and error tracking (errors inbox)
Integration and collaboration: In-app collaboration capabilities
What is Data Pipeline? | Why Is It So Popular?
11 steps to go from Junior to Senior Developer
Collaboration Tools: Software development is a social activity. Learn to use collaboration tools like Jira, Confluence, Slack, MS Teams, Zoom, etc.
Programming Languages: Pick and master one or two programming languages. Choose from options like Java, Python, JavaScript, C#, Go, etc.
API Development: Learn the ins and outs of API development approaches such as REST, GraphQL, and gRPC.
Web Servers and Hosting: Know about web servers as well as cloud platforms like AWS, Azure, GCP, and Kubernetes.
Authentication and Testing: Learn how to secure your applications with authentication techniques such as JWTs, OAuth2, etc. Also, master testing techniques like TDD, E2E testing, and performance testing.
Databases: Learn to work with relational (Postgres, MySQL, and SQLite) and non-relational databases (MongoDB, Cassandra, and Redis).
CI/CD: Pick tools like GitHub Actions, Jenkins, or CircleCI to learn about continuous integration and continuous delivery.
Data Structures and Algorithms: Master the basics of DSA with topics like Big O notation, sorting, trees, and graphs.
System Design: Learn system design concepts such as networking, caching, CDNs, microservices, messaging, load balancing, replication, distributed systems, etc.
Design Patterns: Master the application of design patterns such as dependency injection, factory, proxy, observer, and facade.
AI Tools: To future-proof your career, learn to leverage AI tools like GitHub Copilot, ChatGPT, LangChain, and prompt engineering.
Over to you: What else would you add to the roadmap?
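As a small taste of the Data Structures and Algorithms step above, here is a minimal Python sketch (illustrative only, not from the newsletter) contrasting an O(n) linear scan with an O(log n) binary search over sorted data:

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): check every element until a match is found."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the search range (requires sorted input)."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 100, 2))  # sorted even numbers 0..98
assert linear_search(data, 42) == binary_search(data, 42) == 21
assert linear_search(data, 7) == binary_search(data, 7) == -1
```

Both functions return the same answers, but on a million-element list the binary search touches roughly 20 elements instead of up to a million.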
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
Top 8 must-know Docker concepts
Dockerfile: It contains the instructions to build a Docker image by specifying the base image, dependencies, and run command.
Docker Image: A lightweight, standalone package that includes everything (code, libraries, and dependencies) needed to run your application. Images are built from a Dockerfile and can be versioned.
Docker Container: A running instance of a Docker image. Containers are isolated from each other and the host system, providing a secure and reproducible environment for running your apps.
Docker Registry: A centralized repository for storing and distributing Docker images. For example, Docker Hub is the default public registry but you can also set up private registries.
Docker Volumes: A way to persist data generated by containers. Volumes are outside the container’s file system and can be shared between multiple containers.
Docker Compose: A tool for defining and running multi-container Docker applications, making it easy to manage the entire stack.
Docker Networks: Used to enable communication between containers and the host system. Custom networks can isolate containers or enable selective communication.
Docker CLI: The primary way to interact with Docker, providing commands for building images, running containers, managing volumes, and performing other operations.
Over to you: What other concept should one know about Docker?
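To make the Dockerfile concept above concrete, here is a minimal sketch for a hypothetical Python web app. The file names (`requirements.txt`, `app.py`) and image tag are assumptions for illustration, not from the newsletter:

```dockerfile
# Dockerfile: instructions to build an image for a hypothetical Python app
FROM python:3.12-slim
WORKDIR /app
# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# The command run when a container starts from this image
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` produces an image from this file, and `docker run myapp` starts a container (a running instance) from that image.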
What does a typical microservice architecture look like? 👇
The diagram below shows a typical microservice architecture.
Load Balancer: This distributes incoming traffic across multiple backend services.
CDN (Content Delivery Network): CDN is a group of geographically distributed servers that hold static content for faster delivery. The clients look for content in CDN first, then progress to backend services.
API Gateway: This handles incoming requests and routes them to the relevant services. It talks to the identity provider and service discovery.
Identity Provider: This handles authentication and authorization for users.
Service Registry & Discovery: Microservice registration and discovery happen in this component, and the API gateway looks for relevant services in this component to talk to.
Management: This component is responsible for monitoring the services.
Microservices: Microservices are designed and deployed in different domains. Each domain has its database.
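The gateway-and-registry interaction described above can be sketched in a few lines of Python. This is an illustrative toy under assumed names (the services, addresses, and classes are invented), not a production pattern:

```python
class ServiceRegistry:
    """Maps service names to the instances that registered themselves."""
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        self._services.setdefault(name, []).append(address)

    def discover(self, name):
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return instances[0]  # real systems load-balance across instances


class ApiGateway:
    """Routes an incoming path to the service that owns that domain."""
    def __init__(self, registry, routes):
        self.registry = registry
        self.routes = routes  # path prefix -> service name

    def route(self, path):
        for prefix, service in self.routes.items():
            if path.startswith(prefix):
                return self.registry.discover(service)
        raise LookupError(f"no route for {path!r}")


registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("users", "10.0.0.9:8080")

gateway = ApiGateway(registry, {"/orders": "orders", "/users": "users"})
assert gateway.route("/orders/42") == "10.0.0.5:8080"
```

Each service registers itself at startup; the gateway never hardcodes addresses, it only asks the registry at request time.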
Over to you:
What are the drawbacks of the microservice architecture?
Have you seen a monolithic system be transformed into microservice architecture? How long does it take?
Top 10 Most Popular Open-Source Databases
This list is based on factors like adoption, industry impact, and the general awareness of the database among the developer community.
MySQL
PostgreSQL
MariaDB
Apache Cassandra
Neo4j
SQLite
CockroachDB
Redis
MongoDB
Couchbase
Over to you: Which other database would you add to this list?
SPONSOR US
Get your product in front of more than 500,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing hi@bytebytego.com
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 15 Jun 2024 -
Looking for guest post
Hi Sir/Madam,
I have visited your website, and it looks good. I am searching for sites where I can publish my articles in exchange for backlinks. Will you accept my guest post? If you accept guest posts, please tell me the topic and criteria for writing an article for you; otherwise, give me a do-follow backlink from an existing post. What would be the minimum price for a permanent article with a do-follow backlink, for general posts as well as casino/betting/CBD, and for existing-post backlinks?
Hope for positive feedback from your end.
by "Muhammad asad Gujjer" <muhammadasadgujjer@gmail.com> - 10:56 - 14 Jun 2024 -
Delivering services to the public—digitally—with Jennifer Pahlka
New from McKinsey Global Institute
Prefer audio? Listen to the podcast, and explore past episodes of the Forward Thinking Podcast. Subscribe via Apple Podcasts or Spotify.
Forward Thinking on measuring the value of the digital age with Avinash Collis
You received this email because you subscribed to our McKinsey Global Institute alert list.
by "McKinsey & Company" <publishing@email.mckinsey.com> - 02:57 - 14 Jun 2024 -
The real risk of gen AI
The Shortlist
Four new insights
Curated by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Last time, we told you the gen AI honeymoon can’t last forever. This edition of the CEO Shortlist explores what happens as we move from proof of concept to sustained sources of new value. As we know, it’s never just about the tech; gen AI is not a magic wand that will solve underlying organizational issues. Today’s leading executives are exploring how to integrate gen AI into their businesses by managing risks, building new capabilities, and aligning resources with strategy. We hope you enjoy the read.
—Liz and Homayoun
Still on the sidelines? If your company hasn’t moved yet on AI, you’re at risk of being left behind. Our latest survey shows that 72 percent of companies have adopted the technology and that in just one year, users of gen AI have doubled. We also learned where companies are using this tool and how much they’re investing in it.
Get in the game with “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value,” by Alex Singla, Alexander Sukharevsky, Lareina Yee, and coauthors.
Quit searching: money trees don’t exist. The money for gen AI transformations and other strategic initiatives typically has to be reallocated from somewhere else. But that’s easier said than done. Too often, annual resource allocation processes ultimately fail to align resources with strategy.
No need to panic, though. Read “Keep calm and allocate capital: Six process improvements,” by McKinsey partner Tim Koller.
The biggest risk with gen AI? Not taking a risk with gen AI. “There is substantial risk associated with not diving in,” says McKinsey senior partner Ida Kristensen. But that doesn’t mean you should jump in without looking. Leaders should incorporate safeguards around data privacy, bias, and explainability when strategizing gen AI programs.
Risky business? Not necessarily. Check out “Managing the risks around generative AI,” the latest episode of McKinsey’s Inside the Strategy Room podcast, featuring Kristensen and McKinsey partner Oliver Bevan.
We hope you find these ideas inspiring and helpful. See you next time with four more McKinsey ideas for the CEO and others in the C-suite.
You received this email because you subscribed to The CEO Shortlist newsletter.
by "McKinsey CEO Shortlist" <publishing@email.mckinsey.com> - 04:45 - 14 Jun 2024 -
Do you know which generations are most eager to travel?
Only McKinsey
Our 2024 report on tourism and hospitality
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
•
Generations that prioritize travel. Global travel is back. After declining by 75% in 2020, travel is on its way to making a full recovery by the end of 2024, with domestic travel still representing the bulk of the market, McKinsey senior partner Caroline Tufft and coauthors explain. To gauge what’s on the minds of present-day travelers, McKinsey surveyed more than 5,000 of them. The results show that younger generations (Gen Zers and millennials), in particular, show significant and increasing interest in travel.
—Edited by Belinda Yu, editor, Atlanta
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:35 - 14 Jun 2024 -
A Crash Course on Cell-based Architecture
Latest articles
If you’re not a subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
No one wants to sail in a ship that can sink because of a single hull breach.
This led to the development of bulkheads, which are vertical partition walls that divide a ship’s interior into watertight compartments.
Cell-based architecture attempts to follow the same concept in software development.
In cell-based architecture, there are multiple isolated instances of a workload, where each instance is known as a cell. There are three properties of a cell:
Each cell is independent.
A cell does not share the state with other cells.
Each cell handles a subset of the overall traffic.
For example, imagine a web application that handles user requests. In a cell-based architecture, multiple cells of the same web application would be deployed, each serving a subset of the user requests. These cells are copies of the same application working together to distribute the workload.
This approach reduces the blast radius of impact. If a workload uses 5 cells to service 50 requests, a failure in only one cell means that 80% of the requests are unaffected by the failure.
In other words, failure isolation is the biggest benefit of a cell-based architecture.
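The blast-radius arithmetic above can be illustrated with a short Python sketch that pins each user to one cell via a stable hash. The cell count and hashing scheme here are illustrative assumptions, not from the post:

```python
import hashlib

NUM_CELLS = 5

def cell_for(user_id: str) -> int:
    """Deterministically map a user to one of NUM_CELLS isolated cells."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_CELLS

# Simulate 50 requests from 50 distinct users, then fail one cell.
assignments = [cell_for(f"user-{i}") for i in range(50)]
failed_cell = 0
unaffected = sum(1 for c in assignments if c != failed_cell)

# With 5 cells, roughly 4/5 of the requests (about 80%) keep working,
# because only the traffic pinned to the failed cell is lost.
print(f"{unaffected}/50 requests unaffected")
```

Because the mapping is deterministic, a given user always lands in the same cell, which is what keeps state isolated per cell.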
In this post, we will learn about the various aspects of cell-based architecture and its various components in more detail.
What is a Workload?...
Continue reading this post for free, courtesy of Alex Xu.
A subscription gets you:
An extra deep dive on Thursdays
Full archive
Many expense it with their team’s learning budget
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
Unsubscribe
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 13 Jun 2024 -
Three reasons to attend our API insights webinar on 18 June
Three reasons to attend our API insights webinar on 18 June
Learn about critical API trends, future-proof your platform, and get actionable advice. Sign up today!
Hi Md Abul,
Our API platform insights 2024 webinar, in partnership with ResearchHQ, is happening next week! We believe this event is a must-attend and have outlined three reasons below why it must be on your calendar for next week.
📅 Date: 18th of June
🕙 Time: 10 am EDT / 3 pm BST
📍 Location: Zoom
Here are three practical reasons why you must attend:
- Uncover hidden digital infrastructure issues. Gain insights into critical challenges and learn where to act for maximum efficiency.
- Explore new API trends. With 26% of organisations using AI and 54% prioritising automation, stay updated on the latest advancements to future-proof your platform and team.
- Seize the opportunity to learn from industry experts. 46% of organisations focus on metrics that drive business value and ROI. Get practical advice to enhance your API strategy.
Register today, and you'll receive an exclusive copy of the full report and access to the on-demand recording after the event.
Thanks,
Budha & team
Tyk, 87a Worship Street, London, City of London EC2A 2BE, United Kingdom, +44 (0)20 3409 1911
by "Budhaditya Bhattacharya" <budha@tyk.io> - 06:17 - 13 Jun 2024 -
RE: Learn
Hello,
Did you have a chance to view the previous email I sent you?
Kindly share your thoughts about acquiring the list, so that we can give you the cost & count details.
Regards,
Jessica
From: Jessica Martin
Sent: Thursday, June 6, 2024 4:06 PM
To: info@learn.odoo.com
Subject: Learn
Dear Exhibitor,
Hope this note finds you well.
I am writing to confirm whether you are interested in acquiring the attendee mailing list of the tradeshow mentioned below:
IRMA 2024 (June 20-22, 2024 | Hamburg, Germany)
Information fields include: contact name, company name, job title, company mailing address with zip code, phone number, website URL, and the contact person's verified business email address.
The complete list is available for a small investment with unlimited usage rights; you can also use it for your regular marketing campaigns.
Please let me know your interest so that we can get back to you with more details on Counts and Pricing available for this list.
Thank you; we await your response.
Regards,
Jessica Martin - Events & Trade Show Specialist.
To remove from this mailing: reply with subject line as "Remove"
by "Jessica Martin" <jessica.martin@reachprospects.onmicrosoft.com> - 06:03 - 13 Jun 2024