Overcoming Serverless Limitations
While Serverless platforms provide developers with many advantages, they also impose constraints in terms of architecture, deployment options, and more. While these constraints require developers to change how they develop and deploy their applications, there are many ways to overcome these limitations or work around them gracefully. As we attend meetups, we hear many questions and concerns from developers about Serverless limitations. In this post, we will discuss some of these limitations and how developers can overcome them.
- Stateful Applications: Functions as a Service started off as a platform for stateless applications, but the trend is slowly changing. It is possible to maintain state between invocations using the various data stores offered by cloud providers, but this comes with significant operational overhead, complexity, and, when not managed properly, runaway costs and poor SLAs. Now, newer vendors like Zoho Catalyst and Nimbella offer FaaS platforms with integrated data stores, giving developers an easy way to build stateful applications.
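The core pattern here is that the function itself stays stateless while state lives in an external store. A minimal sketch, where the module-level `store` dict is a stand-in for a provider's integrated key-value store (the real client and its API will differ per vendor):

```python
# `store` stands in for the provider's integrated key-value store;
# swap it for the real client in production. Only the
# read / modify / write-back pattern is the point here.
store = {}

def handler(event):
    """Count invocations per user by persisting state outside the function."""
    user = event["user"]
    count = store.get(user, 0) + 1  # read previous state
    store[user] = count             # write the new state back
    return {"user": user, "invocations": count}
```

Because every invocation reads and writes the external store, the count survives even when each call lands on a fresh container.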
- Mitigating DDoS: The pay-per-invocation model of FaaS brings this question into focus as developers (and other decision-makers) consider using FaaS in their organizations. The DDoS risk is no different from attacks on applications running on containers or virtual machines, but the direct correlation between the number of function invocations and cost amplifies the threat. In many cases, FaaS vendors will try to reduce the impact of DDoS, but users also bear responsibility for mitigating attacks. They can use services like Cloudflare to fend off DDoS attacks. They can also resort to throttling, but when used pre-emptively it may end up impacting legitimate traffic. Using an alerting mechanism for DDoS and then applying quick remediation like throttling can also help. Some FaaS providers, like AWS, allow developers to protect an API Gateway with an API key, thereby limiting access.
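In practice, throttling is usually configured at the gateway level, but the underlying idea can be sketched as a token bucket: requests spend tokens, tokens refill at a fixed rate, and bursts beyond the bucket's capacity are rejected before they trigger (billable) invocations. This is an illustrative in-process version, not a production rate limiter:

```python
import time

class TokenBucket:
    """Illustrative throttle: reject bursts above `rate` requests/second."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if this request may proceed, False if throttled."""
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket created with `TokenBucket(rate=1, capacity=2)` lets two back-to-back requests through and throttles the third until tokens refill.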
- Latency Issues: When a Serverless platform is used to deploy a small component of a larger, more complex application, latency can become an issue because these components talk to the rest of the application through REST API calls. This inter-component communication latency, combined with the fact that other components of the application may be deployed using other cloud services in a different location, can add up. This can be mitigated by using a loosely coupled architecture such as Microservices, where FaaS is used for one or more Microservices.
- Scaling Issues: When using certain FaaS services for stateful applications, there can be scaling issues. Stress on the datastore can cause requests sent through the various API Gateway endpoints to time out. One way to mitigate this is to use one endpoint for writing to the datastore and another for reading from it. The load can then be distributed better by putting a queuing service in front of writes to the datastore. For the most part, these constraints can be mitigated with smarter architecture, but that may not always be possible.
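The read/write split described above can be sketched as follows. The `Queue` and dict here are stand-ins for a managed queue service and the real datastore; the point is that the write endpoint acknowledges immediately while a consumer drains writes at a rate the datastore can absorb:

```python
from queue import Queue

write_queue = Queue()   # stand-in for a managed queue service
datastore = {}          # stand-in for the backing datastore

def write_handler(event):
    """Write endpoint: acknowledge fast, enqueue instead of writing directly."""
    write_queue.put((event["key"], event["value"]))
    return {"status": "accepted"}

def worker():
    """Queue consumer: applies queued writes at the datastore's pace."""
    while not write_queue.empty():
        key, value = write_queue.get()
        datastore[key] = value

def read_handler(event):
    """Read endpoint: reads go straight to the datastore."""
    return {"value": datastore.get(event["key"])}
```

Note the trade-off: writes become eventually consistent, since a read can arrive before the queue consumer has applied the corresponding write.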
- Cold Start Problem: Cold starts may look like a flaw in FaaS, but they are what cloud providers leverage to offer the service at such low cost. Some providers impose cold starts as a constraint, while others keep the containers encapsulating the functions warm to avoid the problem. If your application's user experience cannot afford cold starts, it is better to pick a provider that keeps containers warm.
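When switching providers is not an option, a common workaround is to ping the function on a schedule so its container stays warm. The sketch below assumes a scheduler sends events carrying a `warmer` flag (the flag name and the `upper()` business logic are illustrative):

```python
def handler(event, context=None):
    # A scheduled "warmer" event keeps the container alive between real
    # requests; short-circuit so warm-up pings skip the business logic
    # and finish almost instantly.
    if event.get("warmer"):
        return {"warmed": True}
    # Placeholder business logic for real requests.
    return {"result": event["payload"].upper()}
```

The early return keeps warm-up invocations cheap, while real events flow through the normal code path untouched.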
- Integration Testing: One of the biggest problems developers face is integration testing. In traditional environments, developers have all the components available locally and can do integration testing before the code is pushed into the DevOps pipeline. With FaaS, that is not possible. One way to overcome this limitation is to run integration tests remotely against the cloud provider rather than locally. Since FaaS provides a cost-effective way to deploy applications, using the service for dev and test environments is the right approach. This also removes the usual friction between developer and ops teams, where each team blames the other for application failures caused by environment differences.
- Vendor Lock-in: Vendor lock-in is definitely a limitation of Serverless computing. Some vendors mitigate this risk by basing their service on an open-source FaaS offering. However, this mitigation is superficial, as most of the lock-in happens through application dependencies such as the database service used. Instead, our suggestion is to build disposable applications when using FaaS. FaaS reduces the cost of deploying applications, but it also reduces the cost of developing them. This combined reduction in development and deployment costs is the secret behind the success of FaaS. Disposable applications are applications that cost less to rebuild from scratch than to modify as you move from one type of service to another or between cloud providers. By using disposable applications, users need not worry about lock-in costs and can focus on building and deploying apps quickly. When the time to change comes, they can simply throw away the existing application and quickly build a new one.
Yes, Serverless computing imposes many constraints on developers, and those constraints are what help cloud providers offer the service at very low cost. By understanding these limitations, developers can easily mitigate their impact.