Top DevOps Considerations For Serverless

The role of DevOps in Serverless deployments is critical to realizing the value of these technologies. As developers adopt Serverless, they need to look beyond the usual developer considerations to DevOps considerations, which shape not just agility but also the end-user experience. In this post, we highlight some of the DevOps considerations every developer using Functions as a Service (FaaS) should keep in mind.

Key DevOps Considerations

  • Self Service: The first and foremost requirement is self service. If developers cannot consume the service in a self-service way, it adds tremendous friction and dramatically slows down application deployment. A self-service interface is a critical requirement for any credible FaaS offering.
  • Memory Size: Developers should be able to size the memory to match their application requirements; without this ability, the application can fail or perform suboptimally. FaaS providers should let developers pick the memory size that meets their needs (see the configuration sketch after this list).
  • Elastic Scaling: FaaS providers should allow developers to set the required number of concurrent executions so that their applications can scale seamlessly, and scale back to zero when demand drops. Elastic scaling is critical for optimizing both performance and costs (also covered in the sketch after this list).
  • Execution Time: It is important to know the maximum execution time a provider allows for a function, and whether long-running jobs are supported. Without an adequate execution time, the application may fail or perform suboptimally, and for certain kinds of applications support for long-running jobs is essential. AWS Lambda does not support long-running jobs, so users have to tap into AWS Fargate to run them; similarly, with Azure Functions, developers have to tap into Azure Durable Functions to execute long-running jobs. If your application is long running, consider the provider's support for long-running jobs or use a service like AWS Fargate or Spot Ocean.
  • Cold Start: Most FaaS offerings have a “cold start problem” (in quotes because it is actually a cost-saving feature). Many applications cannot tolerate cold starts and may require a continuously running container, as is the case with container services. However, some providers offer Warm Start support to avoid the cold start delay (a warm-start sketch follows the list). Depending on your application needs, weigh this carefully to reduce friction in application deployment and user experience.
  • Operational Overhead: Another important factor is the operational overhead of assembling an application from storage and database services. Certain services require significant operational overhead (and, hence, a learning curve for developers), while others, like Zoho Catalyst, come with an integrated data store. The key to seamless DevOps is reducing friction in using the various services the application needs.
  • Integrated Logging and Monitoring: Observability is key to modern DevOps practices. Some FaaS providers (e.g., Zoho Catalyst) offer integrated logging and monitoring. If your FaaS provider does not offer integrated logging and monitoring tools, it adds operational overhead and increases friction in the DevOps process (a minimal logging sketch follows the list).
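
Several of the knobs above (memory size, execution time, and concurrent executions) can typically be set through the provider's API or infrastructure-as-code tooling. As a minimal sketch, assuming an AWS Lambda function named example-function (a placeholder), the boto3 SDK exposes these settings roughly as follows:

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "example-function"  # hypothetical function name

# Size the memory and the maximum execution time to match the workload.
# Lambda caps the timeout at 900 seconds (15 minutes); anything longer
# needs a different service such as AWS Fargate.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=512,  # MB
    Timeout=60,      # seconds
)

# Reserve a fixed number of concurrent executions for this function so
# it can scale out predictably without starving other functions.
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION_NAME,
    ReservedConcurrentExecutions=50,
)
```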
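
Warm start support also varies by provider. On AWS Lambda, for example, provisioned concurrency keeps a number of execution environments initialized ahead of traffic; the sketch below assumes a published alias named live and an illustrative count:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep two execution environments warm for the "live" alias so requests
# are not delayed by cold starts. This capacity is billed even when idle,
# so it trades cost for latency.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="example-function",  # hypothetical function name
    Qualifier="live",                 # published version or alias
    ProvisionedConcurrentExecutions=2,
)
```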
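
Finally, even when the provider integrates logging and monitoring, the function still has to emit useful log lines. Here is a minimal Python handler sketch (the event fields and return value are placeholders) that writes structured entries the platform's tools can search and alert on:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Structured (JSON) log entries are easier to filter and alert on in
    # the provider's integrated logging and monitoring tools.
    logger.info(json.dumps({"received_keys": sorted(event.keys())}))
    return {"statusCode": 200, "body": "ok"}
```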

Serverless computing such as FaaS offers tremendous advantages in terms of agility and cost savings. However, it also imposes constraints on developers as well as application architectures. In the previous blog post, we discussed some of the developer considerations impacting the application, and this post highlights the DevOps considerations that remove friction from application deployment and the end-user experience.
