Today at the re:Invent conference, Amazon made several announcements regarding AWS Lambda:
- Increased resource limits for AWS Lambda, up to 10 GB of RAM and 6 vCPUs
- Support for container images to encapsulate functions
- Millisecond billing without any minimum execution time
The second announcement has created some confusion on social media, with many AWS users excited about the feature and many Google Cloud users comparing it to Google Cloud Run, a fully managed container platform that goes beyond the workloads supported by Functions as a Service (FaaS). While the increased resource limits make AWS Lambda useful for functions that need more powerful resources, and millisecond billing provides a more fine-grained pricing model, the container support is more a case of catching up with the industry norm than anything else.
For a long time, AWS did not offer support for encapsulating application code in a container for deployment. While it supported ZIP files and Lambda layers for libraries, it did not accept containers as the deployment package. IBM Cloud Functions has supported containers from its early days, as have others such as Nimbella. With Knative gaining traction and the widespread adoption of containers and Kubernetes, developers now expect a Functions as a Service platform to accept containers as an encapsulation for code. They like that FaaS spares them the complexities of operating a container platform, but they still want containers as the format for shipping code to production. By adding container support, AWS is catching up with the rest of the industry.
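To make the packaging model concrete, here is a minimal sketch of what a container-packaged Lambda function could look like. It assumes a Python function in a file named `app.py` with a handler called `handler`, and uses the Lambda base image AWS publishes on its public container registry; the exact image tag and the `LAMBDA_TASK_ROOT` variable are taken from AWS's base images and may differ in your setup.

```dockerfile
# Assumed base image: AWS publishes Lambda base images on its public ECR
FROM public.ecr.aws/lambda/python:3.8

# Copy the function code into the location the Lambda runtime expects
COPY app.py ${LAMBDA_TASK_ROOT}

# Tell the runtime which handler to invoke (module.function)
CMD ["app.handler"]
```

The point is that the container is purely a packaging format here: the image is built and pushed to a registry, and Lambda still controls the execution model, invocation, and scaling.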
The other confusion with today’s announcement comes from developers comparing the container support in AWS Lambda with Google Cloud Run. It is an apples-to-oranges comparison. If you see deployment platforms as a spectrum, the equivalent service in the AWS ecosystem would be an abstraction layer over AWS Fargate rather than AWS Lambda. At this point, AWS does not have a service that is equivalent to Cloud Run. As Ben Kehoe of iRobot pointed out, the container support in AWS Lambda is about the encapsulation or packaging format, while Google Cloud Run is about an execution layer offered as a service. A container can be deployed to run on Google Cloud Run, but that is not the same as pushing your code to AWS Lambda using a container as the deployment package. While container support in the AWS FaaS platform improves the developer experience, it does not extend the use cases the platform can support beyond the current limitations of FaaS platforms.
The AWS Lambda announcements at re:Invent make the platform more suitable for enterprise deployment, but it remains a platform with the typical constraints imposed by FaaS. The container support improves the developer experience. The higher memory and CPU limits let developers run more demanding functions. The pricing model adds pressure on competitors to move to fine-grained pricing. While millisecond pricing may look like an ordinary announcement, users will realize its value at scale, where rounding to the millisecond and the absence of a minimum billed duration add up to real cost savings.
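A rough back-of-the-envelope calculation shows why the billing change matters at scale. This is an illustrative sketch only: the per-GB-second price, the function duration, and the invocation volume below are assumptions, not quoted AWS rates, and the "old" model assumes the previous behavior of rounding billed duration up to 100 ms increments.

```python
import math

# Assumed on-demand duration price, for illustration only
PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(duration_ms, memory_gb, granularity_ms):
    """Cost of one invocation, with billed duration rounded up to the granularity."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    return (billed_ms / 1000) * memory_gb * PRICE_PER_GB_SECOND

# A hypothetical 30 ms function at 512 MB, 100 million invocations per month
invocations = 100_000_000
old = invocations * duration_cost(30, 0.5, 100)  # billed in 100 ms increments
new = invocations * duration_cost(30, 0.5, 1)    # millisecond billing

print(f"old: ${old:.2f}, new: ${new:.2f}, savings: {1 - new / old:.0%}")
# prints: old: $83.33, new: $25.00, savings: 70%
```

For short-running functions the rounding dominates the bill, which is why the savings in this scenario are 70% even though nothing about the workload changed; long-running functions see far less benefit.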