To compete in today's fast-paced economy, businesses need to rapidly prototype new ideas, quickly enhance working products, and efficiently discard failed experiments. It's no surprise that more cloud-savvy companies are moving towards serverless solutions for many applications that traditionally would have involved significant amounts of infrastructure. This move comes with huge advantages if done right.
As a .NET developer writing code on the AWS platform, I have been impressed with the tooling and support AWS has offered for .NET developers. This post will take you through some of the lessons I have learned writing serverless applications on the AWS platform.
Templates and CLI Extension
The AWS project templates provided for the dotnet cli (installed with `dotnet new -i Amazon.Lambda.Templates`) are a great place to start your project. Projects can be as simple as your first serverless application with a single Lambda function or as complex as a fully functional sample blogging application employing API Gateway, DynamoDB and Lambda. Each of these project templates comes with a ready-to-deploy CloudFormation template, example unit tests, and a Readme file detailing how to use the project. I particularly like the "serverless.AspNetCoreWebAPI" project template, which allows you to create a fully functional ASP.NET Web API exposed by AWS API Gateway and backed by Lambda. To get started, run
dotnet new serverless.AspNetCoreWebAPI --name MyApi
and then start coding your controllers in the same way as you always have. The generated test comes with a sample payload so you can ensure your controllers are tested, and the API can be run locally on your machine for a local debug experience. There is simply no faster way to migrate existing .NET Core Web API projects to the cloud than to use this template.
The dotnet cli extension gives you some powerful tools to support your development and testing lifecycle, enabling you to quickly deploy your entire serverless application or invoke a Lambda function with a specific payload. Deploying a serverless application is as simple as running the following command in the same directory as your .csproj file.
dotnet lambda deploy-serverless
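Invoking a deployed function with a test payload is similarly simple; the function name and payload below are placeholders:

```shell
dotnet lambda invoke-function MyFunction --payload '{"name": "test"}'
```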
Tweaking the Projects
While the dotnet templates give you a good start, they are, by necessity, very generic. The defaults need to work for all kinds of possible workloads and use cases. As such, there are a few things that really should be addressed immediately for any actual project to succeed.
Principle of Least Privilege
To ensure a smooth initial experience, the serverless templates use a relatively permissive policy, "AWSLambda_FullAccess", for the Lambda functions.
One of the first things I do is remove this policy and create an explicit Role for the Lambda, adding only the necessary permissions to the resources I require. These are usually resources defined in the same template, which I can reference explicitly with !Ref statements, or other platform resources that are either exported from a previously deployed CloudFormation stack or passed in as parameters. Doing this early in your project, and being disciplined about updating the role as you go, means you won't face a considerable amount of work when the project is almost complete and you decide to "make it secure" before going to production. Oh, and of course, don't forget to give your Lambda function CloudWatch Logs permissions so it can push logs.
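As a sketch, an explicit role for a function that only needs to read and write a single DynamoDB table defined in the same template might look something like this (shown in YAML; the resource names `MyFunctionRole` and `MyTable` are placeholders):

```yaml
  MyFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: MyFunctionPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              # Only the DynamoDB operations the function actually performs
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:Query
                Resource: !GetAtt MyTable.Arn
              # CloudWatch Logs permissions so the function can push logs
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: "*"
```

You then point the function's Role property at `!GetAtt MyFunctionRole.Arn` instead of relying on the generated managed-policy role.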
.NET Cold Start Performance
.NET Core is JIT (Just-In-Time) compiled. This means that once your code is compiled and in memory, it is swift and efficient; the JIT step, though, can be a problem for cold start times. Depending on the application you are writing, this may or may not be an issue. One of the most common end-user serverless applications is the classic SPA backed by a serverless API, and in this case a 10+ second cold start could be a significant problem. Users don't generally like waiting around watching a spinner while your code is warming up.

The best way to deal with .NET Lambda cold start time is to ensure that JIT compilation happens as quickly as possible, and the easiest way to do this is to increase the memory size allocated to the Lambda. Because memory size is linked to the amount of CPU allocated to the Lambda, this ensures that a multi-threaded operation like JIT compilation is optimized. Out of the box, the templates set the memory size to 256 MB, but I increase it to either 1024 MB or 2048 MB to start. These sizes are generally enough to get the first cold start to sub-second. Keep in mind that if you have multiple paths through your code, JIT compilation will slow down not just your first call but any subsequent call that takes a slightly different route through the code, so increasing the memory size also improves these, what I've called "luke-warm", starts. Of course, you need to weigh the cost of having your Lambda memory this high against the speed benefits you get.
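In the serverless template this is a one-line change on the function resource. A sketch, with the handler, runtime, and function name as placeholders:

```yaml
  AspNetCoreFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: MyApi::MyApi.LambdaEntryPoint::FunctionHandlerAsync
      Runtime: dotnetcore3.1
      MemorySize: 1024   # up from the template default of 256
      Timeout: 30
```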
Tracing with X-Ray
Enabling X-Ray is as easy as setting "Tracing: Active" on each Lambda function in the template.
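In the template that looks like the following (the function name is a placeholder):

```yaml
  AspNetCoreFunction:
    Type: AWS::Serverless::Function
    Properties:
      Tracing: Active
```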
Then add the following two packages to your csproj file
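These are presumably the X-Ray recorder packages from NuGet; the version wildcards below are illustrative:

```xml
<ItemGroup>
  <PackageReference Include="AWSXRayRecorder.Core" Version="2.*" />
  <PackageReference Include="AWSXRayRecorder.Handlers.AwsSdk" Version="2.*" />
</ItemGroup>
```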
and then add the following line to your Lambda entry point
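The registration line is most likely the AWS SDK instrumentation call, placed in the entry point's constructor or Init method (a sketch):

```csharp
using Amazon.XRay.Recorder.Handlers.AwsSdk;

// Instrument every AWS SDK client created after this call,
// so calls to AWS services appear in the X-Ray trace
AWSSDKHandler.RegisterXRayForAllServices();
```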
Oh, and of course, if you're following the Principle of Least Privilege, don't forget to add a policy to your lambda role to allow it to send traces.
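A minimal statement for this, added to the policy document of the role sketched earlier (AWS also ships a managed policy, AWSXRayDaemonWriteAccess, that covers the same actions):

```yaml
              - Effect: Allow
                Action:
                  - xray:PutTraceSegments
                  - xray:PutTelemetryRecords
                Resource: "*"
```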
Without doing any more work, you will automatically get detailed insights into your application's inner workings, including precise timings and call graphs for all calls to AWS services.
Choosing a Data Store
Datastores are a crucial part of any service, and there are many options and many reasons to choose them. If you follow a microservice-based approach, then each "microservice" should be backed by its specific data repository. Exactly what defines a "microservice" can become a bit hazy when you start talking about serverless and Lambda. For the sake of simplification, let's say that a "microservice" is a group of Lambda functions that perform a bounded set of operations (think Bounded Context in Domain Driven Design) that provide a service. While there are many possible choices of data stores, I'd like to focus on three obvious ones.
S3 is possibly the simplest backend you could have. If your service revolves around files, then it's a very logical choice. Even if your service doesn't, you can still build a very simple data storage solution by using JSON or CSV files intelligently. This could save you significant hassle and cost, provided your service can effectively use S3 key prefixes and metadata to give the data you are storing meaning. Of course, you do have to take care of any concurrency issues that might arise, and it will never give you the performance of a dedicated database engine.
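A sketch of the key-prefix idea, using the AWS SDK for .NET; the bucket name, key scheme, and `order` object are hypothetical:

```csharp
using System.Text.Json;
using Amazon.S3;
using Amazon.S3.Model;

// Store an order as JSON, using the key prefix to give the data
// meaning: all orders for a customer share the same prefix, so a
// ListObjectsV2 call with Prefix = $"orders/{customerId}/" finds them.
var s3 = new AmazonS3Client();
await s3.PutObjectAsync(new PutObjectRequest
{
    BucketName = "my-service-data",                  // placeholder bucket
    Key = $"orders/{customerId}/{orderId}.json",     // hypothetical key scheme
    ContentBody = JsonSerializer.Serialize(order),
    ContentType = "application/json"
});
```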
The AWS SDK for .NET provides all you need for writing applications against S3. You can even use a local emulator for local development, as described by one of my Telstra Purple colleagues here.
DynamoDB is my personal default choice these days. The flexibility of not having to define a schema is a great temptation when trying to prove something quickly, and when trying to iterate on your success. Don't be fooled however, if you don't do proper data modelling, you can very quickly end up with a mess that is hard to maintain, and costly. Also, please NEVER try to replicate a relational database schema inside your DynamoDB database. A single table should be sufficient for any true microservice. If you think you need more tables, then either you need to split your application out into distinct microservices, or you haven't modelled your data correctly. It is also essential to know where DynamoDB shines and where it doesn't. If you need to do regular scans of your table, you probably should be looking to use a different datastore. DynamoDB shines when you are doing Online Transaction Processing (OLTP). If you need to perform ad-hoc queries on your data, or Online Analytical Processing (OLAP), or if you need to filter and sort on many different attributes, then DynamoDB is not for you.
The AWS SDK for .NET gives you base API access to all functionality provided by DynamoDB, and also offers a Document model and an Object Persistence Model. These higher-level abstractions make writing .NET code against your DynamoDB tables a first-class citizen of your .NET coding experience.
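A sketch of the Object Persistence Model in action; the table name, key attributes, and `Order` entity are placeholders for whatever your single-table design uses:

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;

// Hypothetical entity mapped onto a single table with a generic
// partition/sort key pair, as is typical in single-table design.
[DynamoDBTable("MyServiceTable")]
public class Order
{
    [DynamoDBHashKey("PK")]
    public string CustomerId { get; set; }

    [DynamoDBRangeKey("SK")]
    public string OrderId { get; set; }

    public decimal Total { get; set; }
}

// DynamoDBContext handles serialization to and from DynamoDB items
var context = new DynamoDBContext(new AmazonDynamoDBClient());
await context.SaveAsync(new Order { CustomerId = "C1", OrderId = "O1", Total = 42.50m });
var order = await context.LoadAsync<Order>("C1", "O1");
```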
Aurora Serverless is an excellent option if you need OLAP-style queries. If your application requires you to sort and filter by a large number of different attributes while expecting low latency and low cost, you need a relational database. Amazon Aurora was the first relational database on AWS to offer a serverless option, meaning it can scale to zero when not in use, so you pay only for data storage and not CPU while idle. This translates to substantial cost savings, especially for DEV and TEST environments only used during business hours. Be careful though, as this comes at a different cost: in my experience, cold start times for Aurora Serverless can run between 20 and 50 seconds. Slow cold starts can be a deal-breaker for many end-user applications; however, there are ways around this. You don't have to scale to zero; you can set a minimum capacity and still have the auto-scaling ability when you need it. You can even configure this so that only production stays on, while DEV and TEST environments scale to zero to save costs.
Amazon Aurora comes in two flavours, MySQL and PostgreSQL, both of which have good support within the .NET Core community through frameworks like Dapper, EF Core, and many others.
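For example, a Dapper query against an Aurora MySQL endpoint might look like this sketch; the connection string, table, and `Order` type are hypothetical:

```csharp
using Dapper;
using MySql.Data.MySqlClient;  // or the MySqlConnector package

// Dapper maps the result columns onto the Order type's properties
using var connection = new MySqlConnection(connectionString);
var orders = await connection.QueryAsync<Order>(
    "SELECT Id, Total FROM Orders WHERE CustomerId = @CustomerId",
    new { CustomerId = customerId });
```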
There is rich support for developing serverless applications on AWS Lambda using .NET. This support enables developers to prototype ideas rapidly, iterate quickly, and keep costs low at the same time.