Containers in Lambda
In part one of this series we looked at serverless development on AWS from a beginner's perspective and identified some potential pitfalls. In part two we covered tools and methodologies for better managing serverless projects, particularly with teams. In this third and final part of The Evolution of Maintainable Lambda Development, we'll look at how the recent addition of container support in Lambda takes us one step closer to consistency between local development and deployment, providing additional techniques to better manage our serverless projects.
This entry should be considered a companion piece to Docker In AWS Lambda, written by our very own Nick W. Nick's article walks through a real-world code example of using Docker in Lambda and serves as a great jumping-off point for diving deeper into the AWS docs on this topic. While this series has focused on managing serverless projects more than on specific code examples, the best way to learn a new feature is by doing.
Whenever development and deployment environments differ, we run the risk of encountering errors that local testing did not catch. We've noticed a few cases where deployed serverless applications hit issues you don't see locally.
Missing external dependencies
Providing dependencies pre-compiled for Lambda is a problem every serverless tool needs to solve. With Python applications, we have several options for pre-compiling dependencies. Zappa originally provided a catalog of packages precompiled for Lambda, but its use has since been deprecated. Special Docker images, such as those at https://hub.docker.com/r/lambci/lambda/, allow us to install and compile libraries that are Lambda compatible.
The great thing about containers in Lambda is that we no longer need such steps: the container compiles libraries that are compatible with itself - obviously! We no longer need separate or special Dockerfiles just to produce cross-compiled libraries; the same Dockerfile that defines our Lambda function can build its own dependencies, and we have many choices of base system.
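As a minimal sketch of this pattern - assuming a hypothetical app.py exposing a handler function - dependencies are installed inside the same AWS-provided base image that runs in Lambda, so anything pip compiles is compiled for the target environment by construction:

```dockerfile
# AWS-maintained Python base image for Lambda container deployments
FROM public.ecr.aws/lambda/python:3.9

# Install dependencies into the task root; pip compiles any native
# extensions inside this image, so they are Lambda-compatible by default.
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy application code and point Lambda at the handler
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]
```

The same image can be built and run locally, which we'll lean on in the sections below.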
Errors in the handler/WSGI
Using Zappa's handler as an example: Lambda receives events from API Gateway (APIG), not actual HTTPS requests. It is APIG's job to serve requests; the APIG-Lambda integration uses a defined payload structure for requests and responses.
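To make the payload contract concrete, here is a minimal hypothetical handler (not Zappa's - Zappa translates the full event through WSGI) that reads from a trimmed APIG REST proxy event; real events carry many more fields such as headers and requestContext:

```python
import json

def handler(event, context):
    # APIG's proxy integration delivers an event dict, not a raw HTTP
    # request; query parameters arrive pre-parsed (or as None).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # The response must also follow the integration's structure:
    # statusCode, headers, and a string body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# A trimmed APIG proxy event, as the integration would format it
event = {
    "httpMethod": "GET",
    "path": "/hello",
    "queryStringParameters": {"name": "lambda"},
}
resp = handler(event, None)
print(resp["statusCode"], resp["body"])  # 200 {"message": "hello lambda"}
```

If a translation step between this event shape and your web framework fails before application code runs, nothing appears in your application's logs - which is exactly why local parity matters here.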
This is a tricky area to troubleshoot when you encounter errors. While you can enable detailed CloudWatch metrics and log full request/response payloads from APIG to Lambda, these are not enabled by default and can be very verbose. Since Python web applications rely on web requests being translated via WSGI, there are multiple steps in request translation where errors can produce failed web requests with no Lambda logging in CloudWatch (since the application code was never reached).
Having an image that we can build and run locally - and that can also be deployed to Lambda - gives us the parity to test these types of requests reliably. AWS SAM provides many useful tools for testing such requests, but before container images in Lambda, we could not use the same Dockerfile to define an identical image for local requests and Lambda deployments. Now we can craft requests exactly as APIG formats them and test them directly against our container.
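For example, a trimmed APIG proxy event (real events include additional fields) can be saved to a file - say, event.json - and replayed locally:

```json
{
  "httpMethod": "GET",
  "path": "/hello",
  "headers": {"Accept": "application/json"},
  "queryStringParameters": {"name": "lambda"},
  "body": null,
  "isBase64Encoded": false
}
```

SAM can replay it with `sam local invoke -e event.json`, and since the AWS Lambda base images bundle the Runtime Interface Emulator, the same payload can also be POSTed straight to a locally running container on its invocation endpoint.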
Items We Still Need to Check in Deploys
When configuring our application, there will always be environment-specific behavior we cannot test without deploying. We can reduce mistakes by DRYing up our deployment recipes - for example, using IaC tool outputs to provide environment variables for our Lambda application to load. Even so, when we follow cloud best practices - distinct permissions that do not overlap environments, and so on - we cannot mock a specific environment's configuration: we must deploy.
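As a hypothetical sketch of that DRYing step, assuming Terraform and made-up resource names: when the queue and the function live in the same stack, the queue's URL flows into the function's environment automatically instead of being copied by hand per environment.

```hcl
resource "aws_sqs_queue" "jobs" {
  name = "jobs-${var.environment}"
}

resource "aws_lambda_function" "app" {
  function_name = "app-${var.environment}"
  package_type  = "Image"
  image_uri     = var.image_uri
  role          = var.lambda_role_arn

  environment {
    variables = {
      # The IaC output feeds the application directly - no manual
      # per-environment configuration to get wrong.
      JOBS_QUEUE_URL = aws_sqs_queue.jobs.url
    }
  }
}
```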
Connections to external services
AWS services typically default to least access, requiring explicit infrastructure to permit access to other resources. For example, a Lambda function attached to a VPC cannot talk to the Internet unless its subnet routes outbound traffic through a NAT gateway. Some build tools may solve parts of this for you, but you will need to know where AWS security group and networking changes are required as you add resources to your infrastructure.
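A hedged Terraform sketch of that VPC case, with hypothetical resource names - the function only reaches the Internet because its private subnet's route table sends outbound traffic through a NAT gateway:

```hcl
resource "aws_lambda_function" "app" {
  function_name = "app-${var.environment}"
  package_type  = "Image"
  image_uri     = var.image_uri
  role          = var.lambda_role_arn

  # Attaching to a VPC removes default Internet access
  vpc_config {
    subnet_ids         = [aws_subnet.private.id]
    security_group_ids = [aws_security_group.lambda_egress.id]
  }
}

# Outbound access is restored only by this explicit route
resource "aws_route" "private_to_nat" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}
```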
Having a Docker image that can both be deployed to Lambda and run locally means we can test more Lambda-specific functionality without deploying to test changes below the application layer. While we may still maintain alternative Dockerfiles for local development with tools like runserver and Werkzeug, we also benefit from an image that builds directly for Lambda. We can reuse the same base OS and Dockerfile logic across our development and deployment images, or build for different stacks - benefiting from popular minimal OSes like Alpine Linux - while avoiding extra cross-compilation in our build and deployment steps. Sending test requests to this image removes some of the troubleshooting complexity we would otherwise only encounter in a deployed environment.
Moving forward, we plan to investigate how JBS can use the new Lambda container image functionality to better manage consistency in our projects and build confidence in serverless automation. We have tested several forks of Zappa that provide additional functionality, such as WebSockets in API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-overview.html), and we continue to research other serverless frameworks to see where we can best contribute to the growth of such projects. As we continue to invest in AWS serverless cloud deployments and optimize our applications to take advantage of new AWS functionality, improvements such as containers in Lambda further enhance the maturity and maintainability of serverless applications.