Serverless Pricing Model: Does It Make Sense?

One of the biggest advantages of Serverless is its pricing model, which is more fine-grained than that of VMs and Containers on the cloud. More importantly, the pay-per-use model delivers a real advantage over VMs and Containers because these services are not running continuously: they come up when invoked, execute, and shut down as soon as execution is over. This allows cloud providers to offer much lower pricing than other compute services. But is the pricing model as simple and cost effective as many of us believe? That is the question we address in this post.

Typical serverless costs include the number of invocations, CPU and RAM usage, API Gateway costs, and storage and network costs. On top of that, add the costs of any other cloud services your application uses.

The biggest selling point for AWS Lambda, one of the early Serverless offerings, is the pay-per-invocation model and the cost effectiveness of those invocations. However, the devil is in the details. For example, here is the pricing model for AWS Lambda (prices may vary by region):

  • Number of function requests – $0.20 per 1M requests
  • Duration of invocation – $0.0000166667 for every GB-second
  • Memory – duration is billed per 100 ms based on the memory allocated; at 128 MB this works out to $0.0000002083 per 100 ms

They also have a different pricing model for provisioned concurrency, which gives you levers to avoid problems caused by cold starts. That pricing model is:

  • Provisioned Concurrency – $0.0000041667 for every GB-second
  • Requests – $0.20 per 1M requests
  • Duration – $0.0000097222 for every GB-second

On top of this, you pay network bandwidth charges at AWS EC2 data transfer rates, plus the costs of storage and any other cloud services you use. Just understanding the cost of your application on AWS Lambda involves a steep learning curve. We cannot blame the cloud providers for this pricing complexity: it is difficult to price compute beyond the usual chunks used for Virtual Machines and Containers, and the complexity is a result of cloud providers trying to offer more fine-grained pricing.
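To make the model concrete, here is a rough back-of-the-envelope sketch in Python using the on-demand prices quoted above. The workload numbers (invocations, average duration, memory) are hypothetical, and the estimate ignores the free tier, API Gateway, storage, and data transfer charges.

```python
# Rough monthly cost estimate for an AWS Lambda-style workload, using the
# on-demand prices quoted above. The workload numbers are hypothetical and
# the estimate excludes the free tier, API Gateway, storage, and network.

REQUEST_PRICE = 0.20 / 1_000_000    # $ per request
GB_SECOND_PRICE = 0.0000166667      # $ per GB-second of duration

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimated monthly compute cost in dollars."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

# Example: 10 million invocations/month, 200 ms average, 512 MB memory
print(f"${lambda_monthly_cost(10_000_000, 200, 512):.2f}")
# ~$2.00 (requests) + ~$16.67 (duration) ≈ $18.67 before other charges
```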

Catalyst, one of the more recent Serverless offerings in the market, tries to simplify this by bringing in a credit-based model. While you pay pennies for the compute, you can buy credits at a fixed price and use them for backend, AI, and other services. The idea is that this takes the complexity out of pricing and makes it easy to calculate costs for your apps.

Different providers offer different pricing models, but they all focus on allowing developers to slice and dice various services and pay only for the time they use these components. Serverless computing does offer dramatic cost savings compared to VMs and Containers. However, it is important that you understand the pricing model properly before architecting your application. Without that understanding, you may end up paying more than you would for VMs and Containers. This is where simplified pricing like Catalyst's can help developers.

Pricing Considerations for Developers

  • Don’t ignore network and storage costs. They can add up enough to make serverless compute more expensive. Similarly, project the number of API requests going through the gateway and include them in your cost calculations to avoid surprises
  • Almost every provider, including AWS (Lambda), Zoho (Catalyst), and others, offers a free tier. Take advantage of it
  • Keep in mind, when doing cost comparisons with VMs and Containers, that Serverless pricing already takes care of high availability at no cost and without the operational overhead needed to ensure HA
  • Some services like AWS Lambda allocate CPU in proportion to memory, effectively throttling CPU for low-memory configurations. Check whether your Serverless provider throttles CPU; you may have to choose a higher memory allocation to avoid it
  • For certain applications, traffic may require constant invocations of Serverless compute. In such cases, running VMs and Containers 24/7 may be more cost effective (see the sketch after this list). But for many applications, especially those with irregular traffic patterns, Serverless offers dramatic savings
  • Never discount the operational overhead you save with Serverless, especially when using hosted Functions as a Service offerings
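To illustrate the point about constant-traffic workloads above, here is a small sketch that compares the serverless estimate from earlier against an always-on VM. The VM hourly rate is an assumed figure for illustration only, and a real comparison must also include network, storage, API gateway, and operational costs.

```python
# Illustrative break-even check: serverless vs. an always-on VM.
# The VM hourly rate is an assumption for illustration, not a quoted price.

REQUEST_PRICE = 0.20 / 1_000_000     # $ per request
GB_SECOND_PRICE = 0.0000166667       # $ per GB-second of duration
VM_HOURLY_RATE = 0.05                # assumed small-VM price, $/hour
HOURS_PER_MONTH = 730

def serverless_cost(invocations, avg_duration_ms, memory_mb):
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

vm_cost = VM_HOURLY_RATE * HOURS_PER_MONTH   # ~$36.50/month, running 24x7

# Sweep invocation volume to see roughly where serverless stops being cheaper
for invocations in (1_000_000, 5_000_000, 20_000_000, 50_000_000):
    cost = serverless_cost(invocations, 200, 512)
    winner = "serverless" if cost < vm_cost else "VM"
    print(f"{invocations:>11,} invocations: serverless ${cost:7.2f} "
          f"vs VM ${vm_cost:.2f} -> {winner}")
```

Under these assumptions the crossover sits somewhere around twenty million invocations a month; with your own traffic profile and prices it will land elsewhere, which is exactly why the projection is worth doing before committing to an architecture.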

Serverless is a game changer in cloud pricing. It offers dramatic cost efficiencies and removes operational complexity, letting developers focus on their code without worrying about compute environments. But the pricing model is complex, so it is important that you understand it and pick a provider that offers simplified pricing for your needs.

Serverless Vs Containers – The Differences

Serverless technologies like AWS Lambda, first offered in late 2014, are changing how developers deploy their applications. Developers saw the convenience and started using AWS Lambda for some basic use cases. As the technology matured, we are seeing more and more use cases for Functions as a Service offerings like AWS Lambda, Zoho Catalyst, Azure Functions, and others. With large-scale adoption, serverless is becoming mainstream in the enterprise because it offers three main advantages to an organization: increased agility, seamless scalability and availability, and infrastructure cost savings. The key question is how containers stack up against serverless in terms of these advantages.

  • Increased Agility: In the current software development model, developers must consider where the code will be deployed and create the necessary configuration files and other artifacts to support the deployment process. For example, developers writing a microservice to be deployed in a Kubernetes environment must create the necessary Kubernetes artifacts, understand Kubernetes concepts, and figure out how to run the microservice and artifacts on their development machine to test them before pushing to the DevOps pipeline. By contrast, with Functions as a Service like AWS Lambda or Zoho Catalyst, developers only need to worry about the business logic they have to code, which could even follow a certain template, and then upload that code to the serverless platform. This speeds things up significantly and helps organizations ship products and solutions quickly and make changes rapidly based on feedback. Agility without unnecessary operational overhead for developers is the biggest advantage of serverless
  • Scalability and Availability: With traditional approaches, developers and DevOps teams have to factor in scalability and availability for the application they build and the environment it is deployed in. That typically includes peak load expectations, whether auto-scaling is needed or a static deployment will suffice, setting up the necessary automation for auto-scaling, and the requirements for ensuring availability. This is the case for applications deployed on virtual machines as well as containers. Serverless platforms remove the need to worry about these factors, since they execute the code when needed and ensure availability. Scalability is achieved by invoking the functions in parallel to meet the load, and it is the cloud provider's responsibility to handle this seamlessly. So scalability and availability are provided by default and handled by the cloud provider without any overhead for the developers
  • Infrastructure Cost Savings: When we deploy an application in a production environment, we generally need to keep it running 24×7 since in most cases it is hard to predict the specific time periods the app needs to be available. That means there is a lot of idle time between requests even though we pay for the infrastructure full time. For an organization that has to deploy hundreds of services, the costs can escalate quickly. With serverless functions, computing resources are allocated to your code only when it executes, which means you pay exactly for what is used. This can cut costs significantly. However, it also constrains serverless to certain types of use cases

How is serverless computing different from containers?

Containers are essentially what the name describes: a comprehensive software encapsulation that gets delivered and used as a standalone application environment. The most common form of this is how applications get distributed at runtime. Everything needed to run and interact with a piece of software gets packaged together: the software code, runtime, system tools, supporting and foundational libraries, and default settings. Containers – through platforms like Docker – help solve the common problems that arise from cross-platform use.

When moving from one computing environment to another, developers often run into obstacles. If the supporting software is not identical, it can cause a series of hiccups. And since it’s not uncommon for developers to move from a dev environment to a test environment, or from staging into production, this becomes a widespread issue. That’s just the software itself; other issues can arise from network topologies, security and privacy policies, and varying tools or technologies. Containers solve this by wrapping everything up neatly into a runtime environment. But even though containers encapsulate entire environments and remove the friction caused by differing environments in the DevOps pipeline, they come with operational overhead that becomes a burden for developers.

In the case of Serverless, all of these tasks are completely abstracted away by the cloud service provider, giving developers a simple interface to deploy their code. Developers need not bother about application dependencies, environments, and so on. They just need to encapsulate the business logic into code, which can then be deployed onto the serverless platform. Developers need not worry about the deployment environment, and since the same service is used for both development and production, they do not face the issues of making code work across different environments. The heavy lifting is done by the cloud provider while developers focus on innovation.
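As a minimal sketch of what “just the business logic” looks like, here is a hypothetical function written in the AWS Lambda handler style for Python. The event fields follow the typical API Gateway proxy shape; the function itself is purely illustrative.

```python
import json

# A minimal function in the AWS Lambda handler style: the platform invokes
# handler(event, context) on each request, so this file contains only the
# business logic -- no server process, Dockerfile, or Kubernetes manifests.
def handler(event, context):
    """Return a greeting for the name supplied in the request body."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```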

Why Should Enterprises Go Serverless?

Most of the discussion in the enterprise user community is focused more on Containers than on Serverless computing. Part of the reason is the legacy enterprise mindset, which is still focused on the idea of managing servers; Containers are seen as a natural evolution of that idea. But many enterprises risk missing out on the Serverless advantages because of this mindset of “clinging to servers”.

Relative to on-premises and private cloud solutions, the public cloud makes it significantly simpler to build, deploy, and manage fleets of virtual servers and to run applications on them. However, companies today have additional options beyond classic server or VM-based architectures when they take advantage of the public cloud. Although the cloud eliminates the need for companies to purchase and maintain their own hardware, any server-based architecture still requires them to architect their applications for scalability and reliability. This adds high development costs to organizations. Plus, companies need to own the challenges of patching and deploying to those (virtual) server fleets as their applications evolve over time. Moreover, they must also handle the scaling of their server fleets to account for peak load and then scale them down after the load returns to normal to lower their costs—all while protecting the experience of end users and the integrity of internal systems. Idle, underutilized servers prove to be costly and wasteful. Analysts estimate that as much as 85 percent of servers in practice have underutilized capacity. By “clinging to servers”, organizations are wasting their resources and not maximizing their move to public cloud.

Serverless compute services like AWS Lambda, Google Cloud Functions, Zoho Catalyst, and Azure Functions are designed to address these challenges by offering companies a different way of handling application design, development, and deployment, one with inherently lower costs and faster time to market. These Functions as a Service offerings eliminate the complexity of dealing with servers or server-based architectures at all levels of the technology stack, and introduce a more fluid pay-per-request billing model with no costs from idle compute capacity. Additionally, these FaaS offerings enable organizations to easily adopt microservice architectures that are more modular and resilient. Eliminating the need to manage infrastructure and moving to a Serverless model offers enterprises dual cost advantages:

  • Problems like idle servers simply cease to exist, along with their economic consequences. A serverless compute service like AWS Lambda never bills for idle capacity, because charges only accrue while useful work is being performed, with millisecond-level billing granularity. Enterprises offload the cost of idle compute to the cloud provider, and those savings add up significantly over time
  • Infrastructure management and server operations (including the security patching, deployments, and monitoring of servers) are no longer necessary. This means that it isn’t necessary to maintain the associated tools, processes, and on-call rotations required to support 24×7 server management to ensure uptime. Without the high costs of operations, organizations can direct their scarce IT resources to foster innovation, thereby benefiting the bottom line

Clearly, moving to Serverless computing can help enterprises save costs, stay agile, and innovate continuously. The biggest roadblock to achieving this is the legacy “clinging to servers” mindset. If enterprises can overcome that mindset and embrace Serverless, they can face the demands of the modern economy with more confidence.