In the previous blog post, we talked about the differences between serverless and container technologies as they apply to modern application deployment. The next question on developers' minds is where serverless shines over containers. In this post, we will briefly discuss where serverless is a better fit and highlight some of its use cases.
Unsurprisingly, the ideal solution depends on the application at hand. Some applications are more suitable for serverless computing, whereas others are better served by a container-based solution. Containers, by nature, allow for bigger and more complex applications and deployments: even large, monolithic applications can be packaged and delivered along with their necessary dependencies and environment.
Comparatively, serverless computing requires reliance on the service provider and acceptance of the constraints imposed by the platform. Developers must architect their applications around those constraints, and they inherently trust the cloud provider to ensure security. The latter is rarely a problem: cloud providers invest heavily in security, and trusting them to secure a managed service is reasonable today. The architectural constraints, however, reduce the number of use cases these platforms support.
Another significant factor favoring serverless over containers is cost: it is much more manageable, even inexpensive, because a company pays the provider only for the number and duration of executions. Pricing is based on active compute resources only, so paying for idle time is history. Serverless relies on small, simple tasks with almost no overhead. This is in direct contrast to containers, which require running the containers (and the underlying virtual machines) 24/7, with idle time and over-provisioned capacity. Once you add the cost of operating the containers and the orchestration control plane, serverless is often the obvious choice.
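To make the pay-per-execution model concrete, here is a minimal sketch of the arithmetic. All prices below are hypothetical placeholders for illustration, not any provider's actual rates, which vary and change over time.

```python
# Hypothetical pricing figures for illustration only.
GB_SECOND_PRICE = 0.0000166667   # per GB-second of function execution
PER_REQUEST_PRICE = 0.0000002    # per invocation
VM_HOURLY_PRICE = 0.05           # always-on container host, per hour

def serverless_monthly_cost(invocations, avg_duration_s, memory_gb):
    """Pay only for active compute: per-request fee plus GB-seconds consumed."""
    compute = invocations * avg_duration_s * memory_gb * GB_SECOND_PRICE
    requests = invocations * PER_REQUEST_PRICE
    return compute + requests

def container_monthly_cost(hours=730):
    """Pay for the host 24/7, whether it is busy or idle."""
    return hours * VM_HOURLY_PRICE

# A sporadic workload: 1M invocations a month, 200 ms each, 128 MB of memory.
sls = serverless_monthly_cost(1_000_000, 0.2, 0.125)
vm = container_monthly_cost()
print(f"serverless: ${sls:.2f}/month, always-on container: ${vm:.2f}/month")
```

For a bursty, low-duty-cycle workload like this, the serverless bill tracks actual usage, while the container bill is dominated by idle capacity; the gap narrows as the workload approaches sustained, full-time utilization.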
Serverless is remarkably hands-off, which is one of its advantages over containers or virtual machines. Code is deployed through an API or DevOps hooks, and once the application is deployed, scaling is handled automatically. Before uploading the code, a developer can build the application in any preferred environment. This lets them focus on business logic, without the learning curve of operational tooling such as Kubernetes. But that doesn't mean containers are obsolete, not yet anyway. They are still incredibly useful for many use cases, and in some cases it is possible to use both containers and serverless in the same application (e.g., with a microservices architecture). As serverless platforms become more advanced and capable, more and more applications may be deployed on them, but right now we will see organizations using both serverless and container technologies.
If you are a developer, serverless is a good fit for the following use cases, while containers remain the choice for everything else:
- Websites using static content
- Audio/Image processing (and other event driven tasks)
- Log processing (scheduled tasks)
- User authentication
- ETL on the fly
- Cron jobs
- Monitoring and notifications
- IoT or mobile backends
- ML/AI workloads
- Disaster recovery
and many other use cases where the workload can be parallelized. For these workloads, using containers or virtual machines is overkill; serverless is the right way to go.
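Most of the use cases above share a common shape: one small, stateless function invoked per event. The sketch below shows that general handler pattern; the `event` structure and field names (`bucket`, `key`) are illustrative assumptions, not any one provider's exact event schema.

```python
import json

def handler(event, context=None):
    """One small unit of work per event, e.g. reacting to a file upload.

    Each invocation is independent and stateless, which is what lets the
    platform fan out many copies in parallel and scale to zero when idle.
    """
    record = event["record"]
    bucket, key = record["bucket"], record["key"]
    # Real work (thumbnail generation, log parsing, an ETL step, a
    # notification) would go here.
    return {"status": "processed", "object": f"{bucket}/{key}"}

# Simulate a single invocation locally with a hand-built event.
result = handler({"record": {"bucket": "uploads", "key": "photo.jpg"}})
print(json.dumps(result))
```

Because the handler holds no state between calls, the same code serves one event per day or a million per hour without any change, which is exactly the property that makes these use cases a natural fit for serverless.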