Remote is integrated with Xero, try our salary explorer, find us around the world, and more!
Your monthly global update is here from Remote. Dive in to see the latest.
Featured news
Remote and Xero are now integrated!
Great news! Remote now integrates with Xero, a top accounting software for global companies. This new feature makes it easy to sync invoices, keeps information accurate across the two platforms, and saves your finance team time.
Getting started is easy and free. Just a few clicks for Remote and Xero customers, and you're all set.
Try out our new Xero integration and see how Remote's all-in-one Global HR platform can make your work easier.
Partnership announcement
Streamline employee recognition in a remote world
We’re thrilled to announce our new partnership with Gifted.co! Together, we’re making it easier than ever to recognize and celebrate your global team with automated gifting for special occasions.
Upcoming Events
🌍 Where in the World is Remote?
Remote is hitting major events this season! Catch us at Money 2020 Europe, SaaStr Europa, CIPD Festival of Work, NY Tech Week and SHRM Annual.
📍 Remote’s Global Table - Chicago
Join us and HiBob for an evening of networking at Remote’s Global Table in Chicago on June 24th. Whether you’re local to the Chicago area or in town for SHRM Annual, we look forward to welcoming you to this exclusive event. Register here.
📍 The Global Table by Remote - New York
We’re excited to host The Global Table by Remote at a16z’s New York #TechWeek 2024 – an exclusive in-person event focused on the theme of transitioning to working global by default. Register here.
👀 Our billboards are out there too
Look out for our dynamic 'Some / Others' displays and billboards in Chicago, Austin, London, Amsterdam, NYC, San Francisco, and more. Don't forget to tag us on LinkedIn and share your sightings.
Product news
May’s product release notes: Mobile app invoicing and PTO, detailed payroll reports, AI misclassification, and more
This month at Remote, we've rolled out a series of exciting new features and updates designed to empower both employers and their teams.
Whether you're managing global operations or organizing local workflows, our latest enhancements bring efficiency and clarity to your day-to-day activities.
Trial our Salary Explorer 🔎
Make smarter hiring decisions with global salary insights and compare employee compensation! Did we mention it’s free for Remote customers?
Learning and insights
Are you hiring and looking to optimize for the 2024 job seeker?
Uncover the complexities of today's candidates, from their preferences for workplace flexibility, to their experiences during the job search.
Employ’s 2024 Job Seeker Nation Report offers critical insights to help you optimize your recruiting strategies and connect with candidates more effectively.
How to terminate an employee legally and professionally
Read on to learn the baseline processes you need to protect your company and employees from risk.
Customer feedback
What Our Customers Are Saying
"Remote also keeps a hand on any additional bonuses, health and check-ups and things that with other companies were easy to miss. My company has transitioned to Remote from another company, and the transition period was very smooth - we almost didn't notice it!" - Read the full review on G2.
Join the Conversation:
Share your experience with us and help us serve you better. Leave a Review
Webinar
Webinar on demand: Complexities in Global Hiring
If you missed it, the recording of our latest webinar is now available. We explore the latest trends and challenges in global hiring with our panel of experts, including leaders from Deloitte Tax, Bytez, and Remote. This session is perfect for HR professionals, business owners, and anyone interested in international recruitment and compliance.
Remote is the global HR platform you deserve
Onboard, pay, and manage employees and contractors around the world with Remote. You focus on finding the best hires — we'll handle the rest.
You received this email because you are subscribed to News & Offers from Remote Europe Holding B.V
Update your email preferences to choose the types of emails you receive.
Unsubscribe from all future emails
Copyright © 2024 Remote Europe Holding B.V. All rights reserved.
Kraijenhoffstraat 137A 1018RG Amsterdam The Netherlands
by "Remote" <hello@remote-comms.com> - 12:15 - 31 May 2024 -
Cash is no longer Latin Americans’ preferred payment method
Only McKinsey
4 trends to watch in payments
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
—Edited by Jana Zabkova, senior editor, New York
This email contains information about McKinsey's research, insights, services, or events. By opening our emails or clicking on links, you agree to our use of cookies and web tracking technology. For more information on how we use and protect your information, please review our privacy policy.
You received this newsletter because you subscribed to the Only McKinsey newsletter, formerly called On Point.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "Only McKinsey" <publishing@email.mckinsey.com> - 11:05 - 30 May 2024 -
A microscope on small businesses: The productivity opportunity by country
See the findings: new from the McKinsey Global Institute.
You received this email because you subscribed to our McKinsey Global Institute alert list.
Copyright © 2024 | McKinsey & Company, 3 World Trade Center, 175 Greenwich Street, New York, NY 10007
by "McKinsey & Company" <publishing@email.mckinsey.com> - 12:27 - 30 May 2024 -
A Crash Course on REST APIs
If you’re not a subscriber, here’s what you missed this month.
To receive all the full articles and support ByteByteGo, consider subscribing:
Application Programming Interfaces (APIs) are the backbone of software communication.
In the acronym API, the word “Application” refers to software that performs a distinct function. An “Interface” is a contract between two applications that defines a set of rules, protocols, and methods for communication. “Programming” makes all of this possible.
APIs have been around for a long time in one form or the other:
In the 1960s and 1970s, we had subroutines and libraries to share code and functionality between programs.
In the 1980s, Remote Procedure Calls (RPC) emerged, allowing programs running on different computers to execute procedures on each other.
With the widespread adoption of the Internet in the 2000s, web services such as SOAP became widely adopted.
The late 2000s and early 2010s marked the rise of RESTful APIs, which have since become the dominant approach due to their simplicity and scalability.
In recent years, the API-first approach to software development has gained significant traction, driven by the emphasis on building loosely coupled services. REST APIs, in particular, have emerged as the go-to choice for developers worldwide.
In this post, we will explore the world of REST APIs and cover basic to advanced concepts.
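As a taste of the uniform-interface idea at the heart of REST, here is a toy sketch (not from the article) that maps HTTP methods and resource paths onto handlers. The `users` resource and its data are purely illustrative:

```python
# Toy sketch of REST's uniform interface: resources addressed by URL,
# manipulated through standard HTTP methods. Purely illustrative data.
users = {1: {"name": "Alice"}}

def handle(method, path):
    parts = path.strip("/").split("/")        # "/users/1" -> ["users", "1"]
    if parts[0] != "users":
        return 404, None
    if method == "GET" and len(parts) == 2:
        user = users.get(int(parts[1]))
        return (200, user) if user else (404, None)
    if method == "POST" and len(parts) == 1:
        new_id = max(users) + 1               # create a new resource
        users[new_id] = {"name": "new"}
        return 201, {"id": new_id}
    return 405, None                          # method not allowed
```

A real service would, of course, sit behind an HTTP server and serialize responses as JSON; the point here is only that every operation is a standard method applied to a resource URL.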
Introduction to REST APIs...
Continue reading this post for free, courtesy of Alex Xu.
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
by "ByteByteGo" <bytebytego@substack.com> - 11:36 - 30 May 2024 -
What makes successful chiefs of staff tick?
Only McKinsey
8 practical tips for leaders
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Executing the mission. Originally established in the White House, the chief of staff role has become a mainstay in C-suites across the world. The chief of staff’s primary purpose is to see that the CEO’s mission is executed, say McKinsey senior partner Andrew Goodman and coauthors. In the process of serving their principals, chiefs of staff can also pick up skills and knowledge to advance their own careers, the authors note.
—Edited by Jermey Matthews, editor, Boston
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:49 - 30 May 2024 -
How to foster a healthier, happier, and more productive workplace
Re:think
Making a difference in employee mental health
FRESH TAKES ON BIG IDEAS
ON EMPLOYEE WELL-BEING
Good employee mental health starts at the top
Kana Enomoto
It’s clear from the data: employees are struggling with burnout and mental health. What’s also clear is that organizations want to support their workforces’ mental health, both because it’s the right thing to do and because healthy employees are more productive, fulfilled workers. But for leaders, how exactly to create supportive work environments isn’t always as clear.
At the McKinsey Health Institute (MHI), we define burnout as a phenomenon that happens when your job demands outstrip your resources to perform your job. It’s when you have too many things to do, and not enough tools, energy, or mind space to do them.
Based on a survey of more than 30,000 employees from around the world, we know that about 20 percent of employees in the global workforce are experiencing symptoms of burnout. Burnout can contribute to alienation, distancing, exhaustion, and even cognitive impairment, and it puts people at risk for increased anxiety, depression, and substance use.
A particularly worrying trend is that young people are experiencing poor mental health at a much higher rate than their older counterparts. Of all the generations, Gen Z has consistently shown increasing rates of anxiety and depression. One in four Gen Zers McKinsey surveyed globally in 2022 self-reported poor mental health. That’s three times the rate of baby boomers. We know this not just through self-reporting but also through epidemiology. During the pandemic, there was a 50 percent increase in children showing up at emergency departments with suicidality.
What’s behind this? Social media tends to be an easy scapegoat because increases in mental health problems appear in parallel to its rise. But there are other factors in play, like changes in social connection, family structures, and even our diets.
A recent academic report found that the top driver of mental health pressures in young people in the United States was a lack of meaning, purpose, and direction. Half of young adults in the survey said that their mental health was negatively influenced by not knowing what to do with their lives. This malaise in the workforce of the future is a concern for employers.
Gen Z workers may be more prone to burnout because many are entering the workforce with preexisting high levels of stress. You have a large percentage of future workers showing signs of mental health distress, and a significant portion of current workers experiencing burnout. Employers have a lot to gain by helping provide solutions: our research shows that if employers proactively invest in employee health and well-being, there’s the potential to increase global GDP by up to 12 percent. So what can employers do?
MHI has found that solutions to employee burnout frequently begin at the top. One of the leading causes of burnout is a toxic workplace. That means employees don’t feel supported, respected, or included. Leaders really need to look at the environment they’re creating for their workers. A global pharmaceutical company, for example, created a role modeling program for 250 leaders that aimed to help them understand how they’re communicating, supporting their employees, and creating a culture where people can thrive. Other companies have included specific targeted training in their onboarding programs, including buddy programs pairing new joiners with more experienced colleagues.
It’s also up to leaders to create the vision and mission for organizations that younger workers can believe in. Gen Zers probably aren’t going to stay in a job for 40 years and retire with a gold watch. They believe they are meant to achieve something, to do something big and important, and when they don’t have that, they often feel an emptiness. This is an opportunity for C-suite leaders to support all their employees in feeling that connection to purpose and meaning in the workplace.
Our research indicates that if organizations can help people feel better, they’re also going to work better—and become an employer of choice for young professionals. We see a 23 percent difference in the profitability of businesses whose employee engagement scores are in the top percentile compared with the bottom. These are major bottom-line impacts for employers.
ABOUT THIS AUTHOR
Kana Enomoto is a partner and director of brain health at the McKinsey Health Institute. She is based in McKinsey’s Washington, DC, office.
MORE FROM THIS AUTHOR
UP NEXT
Mark Patel on carbon removals
To become an industrial segment in its own right, the carbon removals industry needs investment in innovation, technology, and infrastructure. Businesses that generate carbon credits based on removals have a lot to gain—an estimated $200 billion to $950 billion by 2050.
You received this email because you subscribed to our McKinsey Quarterly alert list.
by "McKinsey Quarterly" <publishing@email.mckinsey.com> - 03:07 - 29 May 2024 -
Join me at the New Relic User Meetup: Paris
Hello MD,
Join us at our next user meetup on Wednesday, June 19 between 2:00 PM and 5:30 PM for a delicious afternoon and, of course, a conversation about data at L'Equiria in Paris! We've put together a packed agenda where you'll hear from our local engineers and brilliant customers.
You'll discover what's new at New Relic, including Mobile User Journeys, Session Replay, AI Monitoring, and New Relic AI. We'll also give you a taste of our upcoming limited previews, as well as our roadmap for what we call the third phase of observability, Observability 3.0.
We'll then dive into a series of lightning talks on:
- Sidekick - synthetic script recording
- The latest updates to CodeStream
- Setting cookies in Synthetics checks
- An introduction to custom visualizations
- A deep dive into NRQL
To wrap up the afternoon, we'll travel back in time to 2013 with our version of the classic video game, "Flappy-Birds", powered by New Relic Browser, APM, and Logs. You'll be able to play on your mobile device, with scores displayed live on a leaderboard. The top scores will win prizes.
We'll end the day with a networking session followed by light refreshments and drinks.
Reserve your spot now.
Don't forget to bring your laptop. We look forward to seeing you!
Best regards,
Harry
Harry Kimpel
Principal Developer Relations Engineer - EMEA
View this online · Unsubscribe
This email was sent to info@learn.odoo.com. If you no longer wish to receive these emails, click on the following link: Unsubscribe
by "Harry Kimpel" <emeamarketing@newrelic.com> - 05:19 - 29 May 2024 -
How cities are faring postpandemic
Only McKinsey
Living in a hybrid world
Brought to you by Liz Hilton Segel, chief client officer and managing partner, global industry practices, & Homayoun Hatami, managing partner, global client capabilities
Empty desks. After the COVID-19 pandemic, cities with a glut of office space could see continued declines in the commercial-real-estate market as companies reduce their footprints or relocate to lower-cost areas. But cities can avoid an “urban doom loop” because property tax rates are variable, Harvard University economist Ed Glaeser says in an episode of the McKinsey Global Institute’s podcast Forward Thinking, cohosted by McKinsey partner Michael Chui.
—Edited by Jana Zabkova, senior editor, New York
by "Only McKinsey" <publishing@email.mckinsey.com> - 01:24 - 29 May 2024 -
Tire Pressure Monitoring System - Keep Your Fleet Safe on the Road.
TPMS software provides analytics to enhance vehicle performance and prevent accidents. Eliminate manual tire pressure checks, know tire health, and improve safety.
Find out what makes our software stand out from the crowd
Compatible with any TPMS sensor
Our software works with any type of tire pressure monitoring sensor, so your clients can choose the sensor that best fits their needs.
Tire Pressure Monitoring
Fleet managers can ensure that their vehicles always run on properly inflated tires, reducing the risk of accidents caused by underinflated or overinflated tires.
Tire Temperature Monitoring
Fleet managers can identify tires that are operating at high temperatures and take corrective action to reduce heat buildup, thus extending tire life and reducing costs.
Real-time monitoring
TPMS constantly monitors tire pressure and temperature and sends real-time alerts if the pressure or temperature drops below or rises above a certain threshold.
Empower your Clients with an Advanced Tire Pressure Monitoring System
Uffizio Technologies Pvt. Ltd., 4th Floor, Metropolis, Opp. S.T Workshop, Valsad, Gujarat, 396001, India
by "Sunny Thakur" <sunny.thakur@uffizio.com> - 08:00 - 28 May 2024 -
Are you using gen AI to create real value?
Intersection
Get your briefing
Companies gearing up for a gen AI reset first need to determine their strategy for implementing gen AI. Choosing the right archetype can help create a competitive edge, say McKinsey’s Alex Singla, Alexander Sukharevsky, Eric Lamarre, and Rodney Zemmel. To learn how to take your gen AI capabilities to the next level, check out the latest edition of the Five Fifty.
Share these insights
Did you enjoy this newsletter? Forward it to colleagues and friends so they can subscribe too. Was this issue forwarded to you? Sign up for it and sample our 40+ other free email subscriptions here.
You received this email because you subscribed to our McKinsey Quarterly Five Fifty alert list.
by "McKinsey Quarterly Five Fifty" <publishing@email.mckinsey.com> - 05:00 - 28 May 2024 -
The Scaling Journey of LinkedIn
As organizations have scaled their containerized environments, many are now exploring the next technology frontier of containers by building next-gen applications, enhancing developer productivity, and optimizing costs.
Datadog analyzed telemetry data from over 2.4 billion containers to understand the present container landscape, with key insights into:
Trends in adoption for technologies such as serverless containers and GPU-based compute on containers
How organizations are managing container costs and resources through ARM-based compute and horizontal pod autoscaling
Popular workload categories and languages for containers
LinkedIn is one of the biggest social networks in the world with almost a billion members.
But the platform had humble beginnings.
The idea of LinkedIn was conceived in Reid Hoffman’s living room in 2002, and it was officially launched in May 2003. Eleven other co-founders from PayPal and SocialNet collaborated closely with Reid Hoffman on the project.
The start was slow: after its first month of operation, LinkedIn had only around 4,300 members, most of whom came through personal invitations from the founding members.
However, LinkedIn’s user base grew exponentially over time and so did the content hosted on the platform. In a few years, LinkedIn was serving tens of thousands of web pages every second of every day to users all over the world.
This unprecedented growth had one major implication.
LinkedIn had to take on some extraordinary challenges to scale its application to meet the growing demand. While it would’ve been tough for the developers involved in the multiple projects, it’s a gold mine of lessons for any developer.
In this article, we will look at the various tools and techniques LinkedIn adopted to scale the platform.
Humble Beginning with Leo
Like many startups, LinkedIn also began life with a monolithic architecture.
There was one big application that took care of all the functionality needed for the website. It hosted web servlets for the various pages, handled business logic, and also connected to the database layer.
This monolithic application was internally known as Leo. Yes, it was as magnificent as MGM’s Leo the Lion.
The below diagram represents the concept of Leo on a high level.
However, as the platform grew in terms of functionality and complexity, the monolith wasn’t enough.
The First Need of Scaling
The first pinch point came in the form of two important requirements:
Managing member-to-member connections
Search capabilities
Let’s look at both.
Member Graph Service
A social network depends on connections between people.
Therefore, it was critical for LinkedIn to effectively manage member-to-member connections. For example, LinkedIn shows the graph distance and common connections whenever we view a user profile on the site.
To display this small piece of data, they needed to perform low-latency graph computations, creating the need for a system that can query connection data using in-memory graph traversals. The in-memory requirement is key to realizing the performance goals of the system.
Such a system had a completely different usage profile and scaling need as compared to Leo.
Therefore, the engineers at LinkedIn built a distributed and partitioned graph system that can store millions of members and their connections. It could also handle hundreds of thousands of queries per second (QPS).
The system was called Cloud and it happened to be the first service at LinkedIn. It consisted of three major subcomponents:
GraphDB - a partitioned graph database that was also replicated for high availability and durability
Network Cache Service - a distributed cache that stores a member’s network and serves queries requiring second-degree knowledge
API Layer - the access point for the front end to query the data.
The below diagram shows the high-level architecture of the member graph service.
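To make the in-memory requirement concrete, here is a minimal sketch (not LinkedIn's actual code) of how graph distance and common connections can be answered with a breadth-first traversal over an in-memory adjacency map; the member names and connections are invented for illustration:

```python
from collections import deque

# Illustrative in-memory member graph: member -> set of connections.
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave": {"bob", "carol"},
}

def distance(src, dst):
    # BFS over in-memory connections; returns hop count, or -1 if unreachable.
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return -1

def common_connections(a, b):
    # Set intersection of the two members' first-degree networks.
    return graph.get(a, set()) & graph.get(b, set())
```

Because the adjacency data lives in memory, each query is a handful of set lookups rather than database round trips, which is what makes the low-latency goal achievable.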
To keep it separate from Leo, LinkedIn utilized Java RPC for communication between the monolith and the graph service.
Search Service
Around the same time, LinkedIn needed to support another critical functionality - the capability to search people and topics.
It is a core feature for LinkedIn where members can use the platform to search for people, jobs, companies, and other professional resources. Also, this search feature should aim to provide deeply personalized search results based on a member’s identity and relationships.
To support this requirement, a new search service was built using Lucene.
Lucene is an open-source library that provides three functionalities:
Building a search index
Searching the index for matching entities
Determining the importance of these entities through relevant scores
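The three capabilities above can be illustrated with a toy inverted index. Lucene itself is a Java library with far more sophisticated analysis and scoring; this sketch only shows the shape of indexing, matching, and relevance ranking (here, naive term-frequency scoring):

```python
from collections import defaultdict

# Toy inverted index: term -> {doc_id: term frequency}.
index = defaultdict(dict)

def add_document(doc_id, text):
    # "Building a search index": record how often each term appears per doc.
    for term in text.lower().split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

def search(query):
    # "Searching the index" and "determining importance": score each
    # matching doc by summed term frequency, highest first.
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id, tf in index[term].items():
            scores[doc_id] += tf
    return sorted(scores, key=scores.get, reverse=True)
```

Real engines replace the scoring step with measures like TF-IDF or BM25 and add tokenization, stemming, and field-aware indexing, but the inverted-index core is the same.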
Once the search service was built, both the monolith and the new member graph service started feeding data into this service.
While the building of these services solved key requirements, the continued growth in traffic on the main website meant that Leo also had to scale.
Let’s look at how that was achieved.
Scaling Leo
As LinkedIn grew in popularity, the website grew and Leo’s roles and responsibilities also increased. Naturally, the once-simple web application became more complex.
So - how was Leo scaled?
One straightforward method was to spin up multiple instances of Leo and run them behind a Load Balancer that routes traffic to these instances.
It was a nice solution but it only involved the application layer of Leo and not the database. However, the increased workload was negatively impacting the performance of LinkedIn’s most critical system - its member profile database that stored the personal information of every registered user. Needless to say, this was the heart of LinkedIn.
A quick and easy fix for this was going for classic vertical scaling by throwing additional compute capacity and memory for running the database. It’s a good approach to buy some time and get some breathing space for the team to think about a long-term solution to scaling the database.
The member profile database had one major issue. It handled both read and write traffic, resulting in a heavy load.
To scale it out, the team turned to database replication.
New replica databases were created. These replicas were a copy of the primary database and stayed in sync with the primary using Databus. While writes were still handled by the primary database, the trick was to send the majority of read requests to the replica databases.
However, data replication always results in some amount of replication lag. If a request reads from the primary database and the replica database at the same time, it can get different results because the replication may not have completed. A classic example is a user updating her profile information and not able to see the updated data on accessing the profile just after the update.
To deal with issues like this, special logic was built to decide when it was safe or consistent to read from the replica database versus the primary database.
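One common form such logic takes (a sketch under assumptions, not LinkedIn's actual implementation) is to pin a user's reads to the primary for a short window after that user writes, on the assumption that replicas catch up within a bounded lag; the threshold value here is invented:

```python
import time

# Assumed upper bound on replication lag, in seconds (illustrative).
REPLICA_SAFE_AFTER = 5.0
last_write_at = {}  # user_id -> timestamp of that user's last write

def record_write(user_id, now=None):
    # Called whenever the user writes (e.g., updates her profile).
    last_write_at[user_id] = now if now is not None else time.time()

def choose_db(user_id, now=None):
    # Route reads: primary while the replica may still be stale,
    # replica once the lag window has safely passed.
    now = now if now is not None else time.time()
    recent = last_write_at.get(user_id)
    if recent is not None and now - recent < REPLICA_SAFE_AFTER:
        return "primary"
    return "replica"
```

This gives read-your-own-writes consistency for the writing user while still offloading the bulk of read traffic to replicas.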
The below diagram represents the architecture of LinkedIn with database replication.
While replication solved a major scaling challenge for LinkedIn, the website began to see more and more traffic. Also, from a product point of view, LinkedIn was evolving rapidly.
It created two major challenges:
Leo was often going down in production and it was becoming more difficult to recover from failures
Releasing new features became tough due to the complexity of the monolithic application
High availability is a critical requirement for LinkedIn. A social network being down can create serious ripple effects for user adoption. It soon became obvious that they had to kill Leo and break apart the monolithic application into more manageable pieces.
Killing Leo with Service-Oriented Architecture
While it sounds easy to break apart the monolithic application, it’s not easy to achieve in practice.
You want to perform the migration in a seamless manner without impacting the existing functionality. Think of it as changing a car’s tires while it is moving on the highway at 60 miles per hour.
The engineers at LinkedIn started to extract functionalities from the monolith in their own separate services. Each service contained APIs and business logic specific to a particular functionality.
Next, services to handle the presentation layer were built such as public profiles or recruiter products. For any new product, brand-new services were created completely outside of Leo.
Over time, the effort towards SOA led to the emergence of vertical slices where each slice handled a specific functional area.
The frontend servers fetched data from different domains and handled the presentation logic to build the HTML via JSPs.
The mid-tier services provided API access to data models.
The backend data services provided consistent access to the database.
By 2010, LinkedIn had already built over 150 separate services and by 2015, they had over 750 services.
The diagram below gives a glimpse of the SOA-based design at LinkedIn:
At this point, you may wonder what the benefit of this massive change was.
First, these services were built in a stateless manner. Scaling could be achieved by spinning up new instances of a service and putting them behind a load balancer. This approach, known as horizontal scaling, was more cost-effective than scaling the monolithic application.
Second, each service was expected to define how much load it could take and the engineering team was able to build out early provisioning and performance monitoring capabilities to support any deviations.
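The first point above can be sketched as a toy round-robin balancer over stateless instances (illustrative only; the names are made up). Because each instance is a pure function of the request, any instance can serve any request, which is exactly what makes horizontal scaling work:

```python
import itertools

class RoundRobinBalancer:
    """Toy illustration: stateless instances are interchangeable, so the
    balancer can simply rotate through them to spread load."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def handle(self, request):
        instance = next(self._cycle)
        return instance(request)

# Stateless "instances": pure functions of the request, no per-user state.
def make_instance(name):
    return lambda req: f"{name} served {req}"

balancer = RoundRobinBalancer([make_instance("svc-1"), make_instance("svc-2")])
```

Adding capacity is then just appending another instance to the pool, with no data migration required.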
Managing Hypergrowth with Caching
Exponential growth is always good news for a business owner.
Of course, it also creates a bunch of problems. Happy problems, but still problems that must be solved.
Despite moving to service-oriented architecture and going for replicated databases, LinkedIn had to scale even further.
This led to the adoption of caching.
Many applications started to introduce mid-tier caching layers such as Memcached or Couchbase. These caches stored derived data from multiple domains. The team also added caches at the data layer, using Voldemort to store precomputed results where appropriate.
However, if you’ve worked with caching, you know that it brings a bunch of new challenges around invalidation, consistency, and performance.
Over time, the LinkedIn team got rid of many of the mid-tier caches.
Caches were kept close to the data store in order to reduce the latency and support horizontal scalability without the cognitive load of maintaining multiple caching layers.
Data Collection with Kafka
As LinkedIn’s footprint grew, it also found itself managing a huge amount of data.
Naturally, when any company acquires a lot of data, it wants to put that data to good use for growing the business and offering more valuable services to the users. However, to make meaningful conclusions from the data, they have to collect the data and bring it in one place such as a data warehouse.
LinkedIn started developing many custom data pipelines for streaming and queuing data from one system to another.
Some of the applications were as follows:
Aggregating logs from every service
Collecting data regarding tracking events such as pageviews
Queuing emails for LinkedIn’s InMail messaging system
Keeping the search system up to date whenever someone updates their profile
As LinkedIn grew, it needed more of these custom pipelines and each individual pipeline also had to scale to keep up with the load.
Something had to be done to support this requirement.
This led to the development of Kafka, a distributed pub-sub messaging platform. It was built around the concept of a commit log and its main goal was to enable speed and scalability.
Kafka became a universal data pipeline at LinkedIn and enabled near real-time access to any data source. It empowered the various Hadoop jobs and allowed LinkedIn to build real-time analytics, and improve site monitoring and alerting.
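The commit-log idea at the heart of Kafka can be sketched in a few lines (a toy model, not Kafka's actual API): each topic is an append-only log, and every consumer tracks its own offset, so many consumers can read the same data independently at their own pace.

```python
class CommitLog:
    """Minimal sketch of a commit-log pub-sub: producers append to a
    per-topic log; consumers read from an offset they manage themselves."""

    def __init__(self):
        self.topics = {}  # topic name -> list of messages (the "log")

    def publish(self, topic, message):
        log = self.topics.setdefault(topic, [])
        log.append(message)
        return len(log) - 1  # offset at which the message was appended

    def consume(self, topic, offset):
        """Return (messages, next_offset) for everything at or after offset."""
        log = self.topics.get(topic, [])
        return log[offset:], len(log)
```

Decoupling producers from consumers this way is what lets one pipeline feed log aggregation, tracking events, search indexing, and Hadoop jobs from the same stream.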
The diagram below shows the role of Kafka at LinkedIn.
Over time, Kafka became an integral part of LinkedIn’s architecture. Some recent figures on Kafka adoption at LinkedIn:
Over 100 Kafka clusters with more than 4000 brokers
100K topics and 7 million partitions
7 trillion messages handled per day
Scaling the Organization with Inversion
While scaling is often thought of as a purely software concern, LinkedIn realized early on that this is not the whole story.
At some point, you also need to scale at the organizational level.
At LinkedIn, the organizational scaling was carried out via an internal initiative called Inversion.
Inversion put a pause on feature development and allowed the entire engineering organization to focus on improving tooling, deployment, infrastructure, and developer productivity. In other words, they decided to focus on improving the developer experience.
The goal of Inversion was to increase the engineering capability of the development teams so that new scalable products for the future could be built efficiently and in a cost-effective way.
Let’s look at a few significant tools that were built as part of this initiative:
Rest.li
During the transformation from Leo to a service-oriented architecture, the extracted APIs were based on Java-based RPC.
Java-based RPC made sense in the early days, but it was no longer sufficient as LinkedIn’s systems evolved into a polyglot ecosystem with services written in Java, Node.js, Python, Ruby, and so on. For example, it was becoming hard for mobile services written in Node.js to communicate with Java object-based RPC services.
Also, the earlier APIs were tightly coupled with the presentation layer, making it difficult to make changes.
To deal with this, the LinkedIn engineers created a new API model called Rest.li.
What made Rest.li so special?
Rest.li was a framework for developing RESTful APIs at scale. It used simple JSON over HTTP, making it easy for non-Java-based clients to communicate with Java-based APIs.
Also, Rest.li was a step towards a data-model-based architecture that brought a consistent API model across the organization.
To make things even easier for developers, they started using Dynamic Discovery (D2) with Rest.li services. With D2, there was no need to configure URLs for each service you needed to talk to. It provided features such as client-side load balancing, service discovery, and scalability.
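The effect of a D2-style layer can be sketched as follows (a hypothetical model, not D2's real API): services register their instances under a logical name, and callers resolve the name at request time instead of hard-coding URLs, with the client picking an instance itself.

```python
import random

class DynamicDiscovery:
    """Hypothetical sketch of dynamic discovery: a registry maps logical
    service names to live instance URIs, and resolution happens per call,
    so clients never configure per-service URLs."""

    def __init__(self):
        self._registry = {}  # service name -> list of live instance URIs

    def register(self, service, uri):
        self._registry.setdefault(service, []).append(uri)

    def deregister(self, service, uri):
        self._registry.get(service, []).remove(uri)

    def resolve(self, service):
        # Client-side load balancing: pick any live instance.
        instances = self._registry.get(service)
        if not instances:
            raise LookupError(f"no live instances for {service}")
        return random.choice(instances)
```

Because resolution is per call, instances can come and go (deploys, failures, autoscaling) without any client reconfiguration.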
The below diagram shows the use of Rest.li along with Dynamic Discovery.
Super Blocks
A service-oriented architecture is great for decoupling domains and scaling out services independently.
However, there are also downsides.
Many of the applications at LinkedIn depend on data from multiple sources. For example, any request for a user’s profile page not only fetches the profile data but includes other details such as photos, connections, groups, subscription information, following info, long-form blog posts and so on.
In a service-oriented architecture, it means making hundreds of calls to fetch all the needed data.
This is typically known as the “call graph” and you can see that this call graph can become difficult to manage as more and more services are created.
To mitigate this issue, LinkedIn introduced the concept of a super block.
A super block is a grouping of related backend services with a single access API.
This allows teams to create optimized interfaces for a bunch of services and keep the call graph in check. You can think of the super block as the implementation of the facade pattern.
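As a facade, a super block can be sketched like this (service names and shapes are made up for illustration): one access API fans out to the related backend services, so the caller's side of the call graph stays a single edge.

```python
class ProfileSuperBlock:
    """Sketch of a super block: a single access API in front of a group of
    related backend services (the facade pattern). Callers make one call;
    the fan-out is an internal detail of the block."""

    def __init__(self, profile_svc, photo_svc, connection_svc):
        self.profile_svc = profile_svc
        self.photo_svc = photo_svc
        self.connection_svc = connection_svc

    def get_profile_page(self, member_id):
        # Fan out to the grouped services behind one optimized interface.
        return {
            "profile": self.profile_svc(member_id),
            "photos": self.photo_svc(member_id),
            "connections": self.connection_svc(member_id),
        }
```

The block's owning team can then optimize the fan-out internally (batching, parallel calls, caching) without every caller having to know about it.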
Multi-Data Center
Within a few years of launch, LinkedIn became a global company with users joining from all over the world.
They had to scale beyond serving traffic from just one data center. Multiple data centers are incredibly important to maintain high availability and avoid any single point of failure. Moreover, this wasn’t needed just for a single service but the entire website.
The first move was to start serving public profiles out of two data centers (Los Angeles and Chicago).
Once it was proven that things worked, they enhanced all other services to support the following capabilities:
Data replication
Callbacks from different origins
One-way data replication events
Pinning users to geographically-close data centers
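The last item, pinning users to a nearby data center, can be illustrated with a static preference map (purely hypothetical regions and data center names; real systems use GeoDNS and live latency or health measurements):

```python
# Hypothetical region -> data center preference order, for illustration only.
PREFERENCES = {
    "us-west": ["los-angeles", "chicago"],
    "us-east": ["chicago", "los-angeles"],
}

def pin_user(user_region, healthy_datacenters):
    """Pin a user to the closest healthy data center, falling back down the
    preference list if the first choice is unavailable. This is also how a
    single-DC failure avoids becoming a single point of failure."""
    for dc in PREFERENCES.get(user_region, []):
        if dc in healthy_datacenters:
            return dc
    # Unknown region or all preferred DCs down: any healthy data center.
    return next(iter(healthy_datacenters))
```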
As LinkedIn has continued to grow, they have migrated the edge infrastructure to Azure Front Door (AFD). For those who don’t know, AFD is Microsoft’s global application and content delivery network and migrating to it provided some great benefits in terms of latency and resilience.
Image Source: Scaling LinkedIn’s Edge with Azure Front Door
This move scaled them up to 165+ Points of Presence (PoPs) and helped improve median page load times by up to 25 percent.
The edge infrastructure is basically how our devices connect to LinkedIn today. Data from our device traverses the internet to the closest PoP that houses HTTP proxies that forward those requests to an application server in one of the LinkedIn data centers.
Advanced Developments Around Scalability
Running an application as complex and evolving as LinkedIn requires the engineering team to keep investing into building scalable solutions.
In this section, we will look at some of the more recent developments LinkedIn has undergone.
Real Time Analytics with Pinot
A few years ago, the LinkedIn engineering team hit a wall with regard to analytics.
The scale of data at LinkedIn was growing far beyond what they could analyze. The analytics functionality was built using generic storage systems like Oracle and Voldemort. However, these systems were not specialized for OLAP needs and the data volume at LinkedIn was growing in both breadth and depth.
At this point, you might be wondering about the need for real-time analytics at LinkedIn.
Here are three very important use-cases:
Who’s Viewed Your Profile is LinkedIn’s flagship analytics product, allowing members to see who has viewed their profile in real time. To provide this data, the product needs to run complex queries on large volumes of profile-view data to identify interesting insights.
Company Page Analytics is another premium product offered by LinkedIn. The data provided by this product enables company admins to understand the demographic of the people following their page.
LinkedIn also heavily uses analytics internally to support critical requirements such as A/B testing.
To support these key analytics products and many others at scale, the engineering team created Pinot.
Pinot is a web-scale real-time analytics engine designed and built at LinkedIn.
It allows them to slice, dice and scan through massive quantities of data coming from a wide variety of products in real-time.
But how does Pinot solve the problem?
The below diagram shows a comparison between the pre-Pinot and post-Pinot setup.
As you can see, Pinot supports real-time data indexing from Kafka and Hadoop, thereby simplifying the entire process.
Some of the other benefits of Pinot are as follows:
Pinot supports low-latency, high-QPS OLAP queries. For example, it’s capable of serving thousands of Who’s Viewed Your Profile requests while maintaining SLAs on the order of tens of milliseconds
Pinot also simplifies operational aspects like cluster rebalancing, adding or removing nodes, and re-bootstrapping
Lastly, Pinot has been future-proofed to handle new data dimensions without worrying about scale
Authorization at LinkedIn Scale
Users entrust LinkedIn with their personal data, and maintaining that trust is extremely important.
After the SOA transformation, LinkedIn runs a microservice architecture where each microservice retrieves data from other sources and serves it to the clients. Their philosophy is that a microservice can only access data with a valid business use case. It prevents the unnecessary spreading of data and minimizes the damage if an internal service gets compromised.
A common industry solution to manage the authorization is to define Access Control Lists (ACLs). An ACL contains a list of entities that are either allowed or denied access to a particular resource.
For example, let’s say there is a Rest.li resource to manage greetings. The ACL for this resource can look something like this.
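A hypothetical version of such an ACL, with a deny-by-default check, might look like this (the entry format and principal names are made up for illustration, not LinkedIn's actual ACL schema):

```python
# Hypothetical ACL for the "greetings" Rest.li resource (illustrative only).
GREETINGS_ACL = [
    {"principal": "client-service", "allow": ["READ"]},
    {"principal": "admin-service", "allow": ["READ", "WRITE"]},
]

def is_allowed(acl, principal, action):
    """Deny by default: permit the action only if some ACL entry
    explicitly grants it to this principal."""
    return any(
        entry["principal"] == principal and action in entry["allow"]
        for entry in acl
    )
```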
In this case, the client-service can read but not write whereas the admin-service can both read and write to greetings.
While the concept of an ACL-based authorization is quite simple, it’s a challenge to maintain at scale. LinkedIn has over 700 services that communicate at an average rate of tens of millions of calls per second. Moreover, this figure is only growing.
Therefore, the team had to devise a solution to handle ACLs at scale. Mainly, there were three critical requirements:
Check authorization quickly
Deliver ACL changes quickly
Track and manage a large number of ACLs
The below diagram shows a high-level view of how LinkedIn manages authorization between services.
Some key points to consider over here are as follows:
To make authorization checks quick, they built an authorization client module that runs on every service at LinkedIn. This module decides whether an action should be allowed or denied. New services pick up this client by default as part of the basic service architecture.
Latency is a critical factor during an authorization check and making a network call every time is not acceptable. Therefore, all relevant ACL data is kept in memory by the service.
To keep the ACL data fresh, every client reaches out to the server at fixed intervals and updates its in-memory copy. This is done at a fast enough cadence for any ACL changes to be realized quickly.
All ACLs are stored in LinkedIn’s Espresso database. It’s a fault-tolerant distributed NoSQL database that provides a simple interface.
To manage latency and scalability, they also keep a Couchbase cache in front of Espresso. This means even on the server side, the data is served from memory. To deal with stale data in the Couchbase, they use a Change Data Capture system based on LinkedIn’s Brooklin to notify the service when an ACL has changed so that the cache can be cleared.
Every authorization check is logged in the background. This is necessary for debugging and traffic analysis. LinkedIn uses Kafka for asynchronous, high-scale logging. Engineers can check the data in a separate monitoring system known as inGraphs.
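The client-side pieces described above, an in-memory ACL copy checked without network calls plus a periodic background refresh, can be sketched as follows (class names, the fetch callable, and the data shapes are all assumptions for illustration, not LinkedIn's actual module):

```python
import threading

class AuthorizationClient:
    """Sketch of an in-memory authorization client: ACLs are held locally so
    checks never hit the network, and a daemon thread refreshes the copy from
    the ACL server at a fixed cadence so changes land quickly."""

    def __init__(self, fetch_acls, refresh_interval_secs=30.0):
        self._fetch_acls = fetch_acls      # callable returning {resource: {principal: set(actions)}}
        self._acls = fetch_acls()          # initial in-memory copy
        self._interval = refresh_interval_secs
        self._stop = threading.Event()
        threading.Thread(target=self._refresh_loop, daemon=True).start()

    def _refresh_loop(self):
        # wait() doubles as a cancellable sleep between refreshes.
        while not self._stop.wait(self._interval):
            self._acls = self._fetch_acls()  # swap in the fresh copy

    def check(self, resource, principal, action):
        # Hot path: pure in-memory lookup, no network call.
        acl = self._acls.get(resource, {})
        return action in acl.get(principal, set())

    def close(self):
        self._stop.set()
```

Swapping the whole dict on refresh keeps the hot path lock-free: a check either sees the old complete copy or the new one, never a partial update.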
Conclusion
In this post, we’ve taken a brief look at the scaling journey of LinkedIn.
From its simple beginnings as a standalone monolithic system serving a few thousand users, LinkedIn has come a long way. It is one of the largest social networks in the world for professionals and companies, allowing seamless connection between individuals across the globe.
To support the growing demands, LinkedIn had to undertake bold transformations at multiple steps.
In the process, they’ve provided many lessons for the wider developer community that can help you in your own projects.
References:
How LinkedIn customizes Apache Kafka for 7 trillion messages per day
Using Set Cover Algorithm to optimize query latency for a large-scale distributed graph
© 2024 ByteByteGo
548 Market Street PMB 72296, San Francisco, CA 94104
by "ByteByteGo" <bytebytego@substack.com> - 11:37 - 28 May 2024 -