An Example of Moving a Legacy Service from AWS EC2 to Lambda



By Lance Kuttner – Senior Consultant and Team Lead

Serverless cloud technology is offered by all of the major cloud providers and enables developers to build and run applications without managing any servers, databases, or other infrastructure. It offers a number of advantages over traditional Infrastructure as a Service (IaaS) offerings, such as Amazon EC2, which require renting and configuring virtual machines. The business benefits of serverless over IaaS include reduced operational costs, scalability that tracks demand and so improves performance under load, and faster time-to-market.

These performance and cost benefits make moving legacy EC2 services across to Lambda a common scenario for users of the Amazon cloud platform. The nuances of each case unfortunately rule out a rubber-stamped approach, so instead we offer a worked example of one such migration. We present the legacy architecture, walk through the considerations and decision points as we faced each challenge, and then discuss the implementation and outcomes.

The Legacy System

One of the earliest services developed on our client’s system was a usage-analytics service. As the service grew in popularity, so did the creative ideas for what it could be extended to do, and after five years or so it started to buckle.

The Application Architecture

The service was written in NodeJS, leveraging many of the more complex Redis data structures for optimum performance. For additional speed, the application cached results in a local in-memory Map.
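That local-Map layer was a read-through cache in front of Redis. A minimal sketch of the pattern follows; the function and store names are illustrative, not the client's actual code:

```javascript
// Read-through cache: check the local Map first, fall back to the
// backing store (Redis in the legacy service) on a miss.
// `store` is anything exposing an async get(key) -- illustrative only.
function makeCachedLookup(store) {
  const cache = new Map();
  return async function lookup(key) {
    if (cache.has(key)) return cache.get(key); // in-memory hit
    const value = await store.get(key);        // miss: go to the store
    cache.set(key, value);
    return value;
  };
}

module.exports = { makeCachedLookup };
```

The catch with this pattern is that each process gets its own Map, which matters as soon as more than one instance serves traffic.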

[Diagram: legacy application architecture]

An initial attempt to recover some EC2 resources was to move the local Redis instance to AWS ElastiCache; however, that bought only a few years before the problem reared its head again.

Architectural Considerations

One of the architectures considered was to leverage the single-threaded nature of NodeJS to spin up further instances of the application (one per core), using Nginx as a round-robin load balancer. We had used this very successfully in the past.
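That multi-process layout would have looked roughly like this in Nginx, with one Node process per core and Nginx round-robining across them (ports and instance count are illustrative):

```nginx
# Round-robin load balancing across one Node process per core
# (ports are illustrative).
upstream analytics_app {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80;
    location / {
        proxy_pass http://analytics_app;
    }
}
```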

The obvious issue here was the in-application cache (the Map). It would need to be extracted into a shared store, otherwise we would end up with unsynchronised cache responses across instances. At that point we decided to consider Lambda instead, since the same extraction work would be required either way.

[Diagram: proposed Lambda architecture]

Lambda Considerations

At the time, Lambda could only integrate with DynamoDB for storage, so our initial idea was to migrate the ElastiCache data to DynamoDB and the service to Lambda. However, it soon became apparent that the data structures would not migrate easily, and the enlarged scope of change increased the risk. With limited resources, we needed to avoid a rewrite as far as possible, since a rewrite would demand extensive feature-parity testing.


Lambda Moves to the VPC

A couple of months later, Amazon enabled Lambda to be deployed within a VPC. The ElastiCache option opened up, and the migration was back on the table.

The first step was to move the Map onto ElastiCache. As a simple collection of key/value pairs, this was fairly straightforward, but we were concerned about performance: would Lambda reading from ElastiCache outperform Node reading a Map from its own memory? We were sceptical, but decided to proceed and test at each stage.
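Moving the Map out of the process essentially means swapping synchronous Map calls for async Redis calls. A sketch of a shared cache along those lines follows; the key prefix, TTL, and client injection are assumptions for illustration, not the actual implementation:

```javascript
// Shared cache backed by Redis (ElastiCache) instead of a per-process Map.
// `redis` is any client with async get/set -- e.g. an ioredis instance --
// injected here so the logic can be exercised without a live server.
function makeSharedCache(redis, { prefix = "cache:", ttlSeconds = 300 } = {}) {
  return {
    async get(key) {
      const raw = await redis.get(prefix + key);
      return raw === null ? undefined : JSON.parse(raw);
    },
    async set(key, value) {
      // "EX" sets a TTL so stale entries expire server-side.
      await redis.set(prefix + key, JSON.stringify(value), "EX", ttlSeconds);
    },
  };
}
```

Because every Lambda invocation talks to the same Redis keys, the unsynchronised-cache problem disappears, at the cost of a network hop per lookup.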

An overview comparing the before and after architectures is as follows:

[Diagram: before and after architecture comparison]

The actual move from EC2 to Lambda turned out to be far easier than expected, thanks to a neat little library called serverless-http. It wraps an Express application, which was perfect for our scenario of touching the code as little as possible. It does raise the pros and cons of a “Lambdalith”, a topic well addressed by Rehan van der Merwe in his blog post.
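With serverless-http, the handler file reduces to a couple of lines around the unchanged Express app (the `./app` module path is illustrative):

```javascript
// lambda.js -- wrap the existing Express app for Lambda.
// serverless-http translates API Gateway events into HTTP requests
// and back, so the Express routes run as-is.
const serverless = require("serverless-http");
const app = require("./app"); // the existing Express application

module.exports.handler = serverless(app);
```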

The majority of the development work was therefore just to move the in-memory cache onto ElastiCache, and we were ready with the first Proof of Concept.

Results Comparison

The initial results were quite disappointing. When we load tested the Lambda implementation, we could only get around 15 requests per second out of it, and the mean latency was around 700ms.

[Chart: Lambda load-test results]

When compared to the existing EC2 service, we could get around 50 requests per second, with a latency of about 200ms.

[Chart: EC2 load-test results]

Although the EC2 architecture performed better in this test, notice how the service bottlenecked after about 2,000 requests. This was exactly the problem we were experiencing on the front end. The call to make was whether resilience should trump raw performance, and therefore whether to proceed. Given that rolling back was as simple as changing a Cloudflare CNAME, it was worth proceeding to production.

Lambda Autoscaling

The Lambda invocation counts show the nature of the autoscaling as well as the nature of the load. Below is an example of the instance count responding by 20% within a 5-minute window.

[Chart: Lambda invocations scaling with load]

This is achievable on the EC2 solution, but only up to the point that a core is overloaded and the bottleneck manifests. Autoscaling EC2 instances could address this, but typically that means scaling out by, say, 5 instances when a threshold is reached, and then scaling back one at a time as capacity frees up. Responding at the rate shown above is not realistic. Lambda fits the demand curve far more closely, giving you better bang for your buck.
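The difference is easy to see with a toy model: step-scaled EC2 provisions capacity in blocks, while Lambda provisions per concurrent request. The step size and demand curve below are purely illustrative:

```javascript
// Toy comparison of provisioned capacity vs demand.
const STEP = 5; // instances added per scale-out event (illustrative)

// Step scaling rounds demand up to the next multiple of STEP.
function stepScaledCapacity(demand) {
  return Math.ceil(demand / STEP) * STEP;
}

// Lambda concurrency simply matches demand (up to account limits).
function lambdaCapacity(demand) {
  return demand;
}

// Total over-provisioned units across a demand curve.
function wasted(demands, capacityFn) {
  return demands.reduce((sum, d) => sum + (capacityFn(d) - d), 0);
}
```

Run over any bursty demand curve, the step-scaled capacity carries idle headroom at every point, which is exactly the cost Lambda's per-request scaling avoids.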

Cost Comparison

Running a cost comparison was not an exact science. We had already moved a large number of services off the instance, and because NodeJS is single-threaded, the machine was heavily over-provisioned: only one core was ever really the bottleneck.

The AWS Compute Optimiser advised a cost reduction of 15c/hr if we downsized from a c3.xlarge to a c6gd.large.

[Screenshot: AWS Compute Optimiser recommendation]

The problem with this is that the c3 runs on an Intel processor while the c6gd runs on ARM, which would have meant more work to make the switch a reality.

Practically, the instance was costing us $158 per month whichever way we looked at it.

Lambda costs in comparison were much lower, coming out at $44 per month.
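Using the figures above, the saving is straightforward to quantify:

```javascript
// Monthly cost comparison from the figures in the post.
const ec2Monthly = 158;    // c3.xlarge, USD/month
const lambdaMonthly = 44;  // observed Lambda cost, USD/month

const savingPerMonth = ec2Monthly - lambdaMonthly;          // $114/month
const savingPercent = (savingPerMonth / ec2Monthly) * 100;  // ~72%

console.log(`Saving: $${savingPerMonth}/month (${savingPercent.toFixed(0)}%)`);
```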

Final Thoughts

Migrating a legacy service from AWS EC2 to Lambda is a nuanced process, requiring a balance of technical adjustments and strategic decision-making. Yet, by considering both performance and resilience, businesses can revitalise their ageing services, and ensure they continue to meet users’ needs in the ever-evolving digital landscape.

Need to Migrate Your Legacy Service? Ask for Saratoga

Our team of experts have the talent, technical know-how, and track record to transition you from old to bold with very little disruption.
