On The Impact Of Cold Start

One of the biggest complaints against Functions as a Service (FaaS) is the problem of a cold start: the delay between the invocation of a function and the start of its execution. Behind the scenes, many FaaS offerings use containers to encapsulate and execute functions. When a function is invoked, the platform has to provision a container, download the code and its dependencies, and initialize the runtime before the function can run. After execution, the container is kept around for a short while and then shut down; if the function is invoked again after that, the whole setup has to happen all over again. For many use cases, this delay is unacceptable and leads to performance issues.
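To make the lifecycle concrete, here is a minimal sketch of an AWS Lambda-style handler in Python (the handler and variable names are illustrative, not from any particular platform's documentation). Module-level code runs once per container, during the cold start; warm invocations reuse the same process and skip it.

```python
import time

# Module-level code runs once per container, during the cold start.
# Heavy work placed here (imports, config loading, client creation)
# is paid by the first invocation and reused by warm ones.
_init_started = time.time()
_cold = True  # flips to False after the first invocation in this container


def handler(event, context):
    global _cold
    was_cold = _cold
    _cold = False
    return {
        "cold_start": was_cold,
        "init_seconds": round(time.time() - _init_started, 3),
    }
```

Logging the `cold_start` flag from a handler like this is a simple way to see how often your traffic actually hits cold containers.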

While first-generation FaaS providers like AWS Lambda still have cold start problems, they follow the airline model of charging more after creating constraints for users: they offer expensive warm options to overcome the limitations of this resource-usage model. Other providers avoid the problem altogether by absorbing the cost of keeping instances warm, and hence impose fewer such constraints. When a cloud provider operates at scale, optimizing resources to reduce the impact of cold start should not be hard; if the underlying infrastructure is based on containers, providers can pool and pre-warm them. Machine learning can help here as well: Spot, which is now part of NetApp, uses machine learning to optimize resource usage effectively. There is no reason hyperscalers cannot do the same and reduce the cold start problem.
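On AWS, the paid warm option is provisioned concurrency. As a rough sketch of what enabling it looks like with boto3 (the function name and alias below are placeholders; remember that provisioned capacity is billed whether or not it is used):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 10 pre-initialized execution environments warm for the "prod" alias,
# so invocations routed to them skip the cold start entirely.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",        # placeholder function name
    Qualifier="prod",                  # alias or version; hypothetical
    ProvisionedConcurrentExecutions=10,
)
```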

How does cold start impact developers?

The presence of cold start in Serverless platforms like AWS Lambda imposes constraints on developers. The impact of these constraints can be limited in some cases, but it cannot be overlooked in others. Some of the challenges this issue poses for developers are:

  • Cold start limits the types of applications that can sensibly be deployed on these platforms
  • Even for use cases the platform supports well, the delay can introduce performance issues, and developers may have to re-architect applications to work around it. Runtimes with fast startup, such as interpreted languages like Python, can help limit the cold start impact
  • It forces developers to keep their code lean and limit dependencies so that startup time stays short
  • In some use cases the performance impact cannot be avoided, and it is better not to use FaaS for those applications
  • Establishing database and network connections during initialization increases startup time and hurts applications on platforms with cold start (see the connection-reuse sketch after this list)
  • When using warm containers, costs can spiral if the resources are not managed effectively
  • Under load, scaling out spins up new containers, each of which incurs its own cold start, so it is important to architect functions for scale with this added latency in mind
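A common mitigation for the connection overhead mentioned above is to create expensive clients lazily at module scope, so that only invocations that need them pay the setup cost and warm containers reuse them. A minimal sketch in Python (the use of sqlite3 and the `DB_PATH` variable are illustrative assumptions standing in for a real database driver and endpoint):

```python
import os

_db_connection = None  # created once per container, reused by warm invocations


def get_connection():
    """Lazily create the database connection: only invocations that
    actually need it pay the setup cost, and warm containers reuse it."""
    global _db_connection
    if _db_connection is None:
        import sqlite3  # stand-in for a real database driver
        _db_connection = sqlite3.connect(os.environ.get("DB_PATH", ":memory:"))
    return _db_connection


def handler(event, context):
    conn = get_connection()
    row = conn.execute("SELECT 1").fetchone()
    return {"ok": row[0] == 1}
```

The same pattern applies to HTTP clients, SDK clients, and anything else that is slow to construct.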

Having pointed out the constraints that cold start imposes on developers, we also want to stress that it should not stop you from using Serverless. You can either architect your application to work around these constraints or choose a Serverless provider that does not have the problem. Despite the limitations cold start creates, FaaS offers many advantages for your compute needs, and its constraints can even push you to be more innovative while you benefit from the agility and cost savings FaaS provides.
