Serverless & Big Data: Is It A Good Fit?

Serverless gained traction among developers because it allowed them to easily deploy small chunks of code without having to worry about bringing up servers. The early success of seamlessly deployed event-driven functions, along with the constraints early Serverless platforms like AWS Lambda imposed on developers, gave the impression that these platforms suit only a limited set of use cases. Since then, Serverless platforms have evolved to support stateful applications. A question in the minds of users, especially enterprise developers, is whether these platforms can support big data workloads.

Data is overwhelming every organization, big or small. While large organizations can afford to spend resources building big data processing pipelines using Hadoop or Spark, smaller organizations have always been at a disadvantage. This gap became glaring as data science and machine learning gained traction and reshaped the business landscape. Can smaller organizations take advantage of big data with the help of Serverless? Can enterprises save money using Serverless to process large data volumes? After all, the Serverless model, in which functions are triggered by events, is an ideal candidate for processing big data.

For many use cases, one can use a Serverless platform like AWS Lambda, Catalyst, Google Cloud Functions, Azure Functions, etc. to do real-time or batch processing on large volumes of data. The data flow from a data source or data firehose can trigger the Serverless function to process the data. In the past, big data processing required a large number of powerful servers to run MapReduce or similar jobs. This was expensive, with certain architectures costing millions of dollars, so only large enterprises could afford big data processing pipelines using Hadoop or Spark. With cloud computing, virtual machines replaced physical servers, and since virtual machines could be brought up on demand, costs dropped dramatically. However, it still cost tens to hundreds of thousands of dollars, well beyond the reach of many small businesses. By tapping into Serverless Functions, it is easy to set up big data processing for a fraction of the cost.
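The event-triggered processing pattern described above can be sketched with a minimal, provider-agnostic handler. The event shape below is an assumption, loosely modeled on the record-batch payloads that data firehoses and storage notifications typically deliver; real platforms each have their own schema:

```python
from collections import Counter

def handler(event, context=None):
    """Hypothetical event-triggered function: each invocation processes
    one batch of records pushed by a data source, map-reduce style."""
    counts = Counter()
    for record in event.get("records", []):
        # "map" step: tokenize each record's payload
        for word in record.get("body", "").split():
            counts[word.lower()] += 1
    # "reduce" step: return aggregated counts for this batch; a
    # downstream function or data store would merge per-batch results
    return dict(counts)

result = handler({"records": [{"body": "big data"}, {"body": "Big deal"}]})
```

Because each invocation handles one batch independently, the platform can fan out many such invocations concurrently as data flows in, which is exactly what makes this model a fit for big data volumes.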

Serverless functions have democratized big data, empowering organizations of any size to take advantage of their data. Whether it is a simple image processing application or a complex MapReduce job, Serverless Functions can help organizations build powerful big data pipelines.

The advantages of Serverless for big data are:

  • Cost savings is a big factor. There are no servers or virtual machines to manage and no operational costs. Serverless functions can be invoked when the data is ready and are billed per invocation. Offerings like Catalyst even provide a simpler pricing model than the large cloud providers
  • Seamless scaling is another big advantage. As and when data flows in, the Serverless Functions can be triggered to meet the data volume. The concurrent scaling available with Serverless Functions makes scaling seamless without any bottlenecks
  • Developers can focus on code without having to worry about the availability and setup of servers or virtual machines, leaving them with little DevOps overhead when building data pipelines
  • Better security because the physical and network security is handled by the cloud provider. Developers have to focus only on the security needs around code

Serverless Functions are well suited to handle many big data use cases. The drastic reduction in costs democratizes big data and allows any organization to build data driven applications.

The Pros and Cons of AWS Lambda

Ever since Amazon Web Services launched AWS Lambda in 2014 as a Serverless platform, it has been steadily gaining traction among developers. AWS Lambda allowed developers to deploy small chunks of code and scale seamlessly at a very low cost, without having to manage the underlying servers. This productivity advantage, coupled with the increased adoption of application architectures made of loosely coupled smaller components, drove interest in AWS Lambda and its success in the early days. In this blog post, we will discuss the pros and cons of using AWS Lambda.

The AWS Lambda advantage

  • The cost per invocation is very low, at a fraction of a penny. For simpler application architectures, the cost savings of deploying on AWS Lambda are high. For example, a photo can be stored in an S3 bucket and processed using AWS Lambda for a fraction of a penny; a developer could roll out a photo processing service for pennies without owning a single server. The fact that the user is charged only when the AWS Lambda function is invoked makes the pricing model all the more attractive for developers. This cost advantage is one of the reasons for the early success of AWS Lambda. Recently, Catalyst was launched with a much simpler pricing model to compete against AWS Lambda
  • AWS Lambda (and Functions as a Service in general) removes the operational complexity associated with setting up and managing virtual machines or containers for deploying applications. This lack of operational overhead, combined with availability through a self service interface, provides agility: taking an idea to production could be as simple as deploying a few functions on a FaaS platform and invoking them concurrently
  • AWS Lambda can help standardize the environment across dev, test and production, helping developers catch and fix bugs in their code more efficiently. Variation in environments across the DevOps pipeline has been one of the biggest sources of friction and slowdown in application delivery; using AWS Lambda across these environments removes that friction
  • AWS Lambda is well integrated with other AWS cloud services. This allows developers to easily invoke functions from these services, or to let functions take advantage of data stored in them. If you are an AWS customer who is all in on AWS cloud services, Lambda is the right serverless abstraction
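The pay-per-invocation economics in the first bullet can be made concrete with a small estimator. The pricing figures below are assumptions modeled loosely on the published per-request plus GB-second billing structure of major FaaS providers; check your provider's current rate card before relying on them:

```python
def estimate_monthly_cost(invocations, duration_ms, memory_mb,
                          price_per_million=0.20,        # assumed $/1M requests
                          price_per_gb_second=0.0000166667):  # assumed $/GB-s
    """Rough FaaS bill: a per-request charge plus a compute charge
    proportional to memory size and execution duration."""
    request_cost = invocations / 1_000_000 * price_per_million
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost

# One million 100 ms photo-processing invocations at 128 MB:
cost = estimate_monthly_cost(1_000_000, duration_ms=100, memory_mb=128)
```

Under these assumed rates, a million invocations land well under a dollar, which is the "service for pennies" effect described above; the same formula also shows how costs scale linearly if duration or memory grows.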

The challenges

  • Compared to some of the newer FaaS offerings, AWS Lambda imposes more constraints on developers. Its support for programming languages is limited compared to some of its competitors. Some FaaS providers support Docker containers, giving developers more flexibility in choosing programming languages and application dependencies
  • Even though AWS Lambda abstracts away the operational overhead, it adds DevOps overhead for developers. Putting together a complex stateful application requires developers to assemble various AWS services for data storage, networking, etc. Even for simpler applications, developers must be able to wire up services like API Gateway, load balancers, etc. The need for additional DevOps knowledge can be problematic if an organization doesn't have such talent. Many modern FaaS platforms come integrated with services like a backend, a data store, etc., making it easy for developers to deploy applications without any DevOps overhead
  • While the abstraction provided by AWS Lambda removes complexity at the infrastructure level, a moderately complex application will make many calls across the various functions invoked through AWS Lambda. This not only increases design costs but can also increase cloud costs, depending on the use of storage and network bandwidth

AWS has the first mover advantage in the serverless landscape. However, there are challenges associated with some of the constraints imposed by the AWS Lambda platform. The key to picking the right platform lies in understanding the pros and cons of an offering and choosing based on your application needs. In future posts, we will discuss the pros and cons of other major FaaS offerings in the market.

Catalyst Takes On AWS Lambda With Seamless Developer Experience

Recently, Zoho made their Serverless computing platform Catalyst available to the public. Ever since Serverless computing was made famous by Amazon Web Services a few years back, many offerings have come to market targeting developers with an abstraction that makes it easy to deploy small snippets of code (a function) and scale easily to meet traffic needs. However, not all Serverless offerings are the same in terms of developer experience. Catalyst differentiates itself from the major cloud providers in how easy it makes it for developers to deploy an application into production. With the general availability of the Catalyst Serverless platform, developers can start using the platform right from their website. In this blog post, we will briefly discuss the unique value the Catalyst platform offers developers.

As vendors build higher order abstractions for deploying applications, they add more and more constraints. Platforms become more opinionated and there is a learning curve for developers. For example, to deploy applications on AWS Lambda, developers are still expected to understand how to bring together the compute (AWS Lambda) with object storage or a data store and then ensure that it is protected with the right network settings. There is still an operational overhead associated with deploying applications on AWS Lambda, even though it is far less than deploying an application on a virtual machine environment or Kubernetes cluster. There is an implicit expectation that developers also have the DevOps skillset to use these services. This clearly adds a barrier to entry, and Catalyst is trying to close the gap by providing a comprehensive Serverless platform.

The key advantages of the Catalyst platform are:

  • An end to end set of services from compute to workflow to storage to AI to other backend capabilities. There is no need for developers to put together these services before deploying their apps; these are available out of the box
  • Along with event driven functions and cron functions, Catalyst makes it easy to build REST API endpoints for other applications or microservices to interact with
  • Powerful workflows can be created seamlessly to build your complex applications
  • With support for hosting frontends, you can create a powerful web application without having to put together disparate set of services
  • A dedicated team to help you architect your applications with serverless functions, microservices and no-code applications. You can learn more about this in the Modern Enterprise Podcast
  • A rich set of use cases is possible on the platform without having to worry about bringing together a disparate set of cloud services. The platform also works closely with Zoho Creator, their no-code platform. This will allow small businesses to transform themselves digitally to meet the needs of the post-COVID-19 world

Another interesting aspect of the Catalyst platform is its pricing model. They keep it straightforward by charging a fixed fee (which is free for light users) and a usage fee based on the number of calls and execution time. I did some quick calculations using their pricing calculator and it is pretty competitive.

While the service is a solid alternative to AWS Lambda, Azure Functions or Google Cloud Functions for all developers, small businesses and independent developers without the DevOps skills needed for some Serverless platforms will find the Catalyst platform especially valuable. If you want to learn more about the Catalyst platform, their roadmap and more, listen to this Modern Enterprise Podcast episode I did with Giridhar Chakravarthy of the Catalyst team.

If you want to try out the Catalyst platform, visit their homepage.

Is Open Source Even Relevant In Serverless?

Open source has been intertwined with enterprise software for more than two decades and, with the increased adoption of cloud computing, the role of open-source software has only been growing. The hype around containers, for instance, is completely driven by open source. As Serverless gains traction, driven mainly by higher-level abstractions, a key question forms in the minds of industry watchers: when the underlying nuts and bolts have limited meaning from the point of view of end users, does open source have any meaning? Do developers even care about open source?

When cloud computing gained traction, driven by the success of AWS, there were questions about the viability of open-source software. Open source projects like OpenStack, CloudStack, and Eucalyptus competed against the Infrastructure as a Service offerings from AWS. They failed miserably because the advantage of cloud computing lay in outsourcing data center capital expenditure and operations. Open-source IaaS software didn't provide this advantage; all it could offer was better resource utilization and programmatic access to the infrastructure. However, in the early days of cloud computing, the public cloud was not a runaway success among enterprise customers, and hybrid cloud was the path these users took.

Even though open source failed to provide an equivalent to AWS EC2, we saw the reverse happen in the PaaS layer. While Heroku and Google App Engine stagnated after gaining initial developer mindshare, OSS offerings like OpenShift and CloudFoundry gained developer traction as well as enterprise adoption. Their success can be attributed to the fact that enterprise customers wanted a PaaS-like abstraction that could span both on-premises and public cloud. This was followed by the huge success of Docker as the container engine and Kubernetes as the container orchestration plane. At this point, talking heads were supremely confident about the importance of OSS in cloud computing. But we need to keep in mind that the success of OSS in the PaaS layer was driven by two main reasons:

  • PaaS services in the public cloud had a legacy cost structure based on compute units similar to virtual machines
  • Hybrid cloud was still the focus of enterprise customers due to unnecessary concerns about moving to public cloud providers

In the last 2-3 years, we have seen a dramatic shift in the enterprise adoption of public clouds and most organizations are taking a cloud-first approach.

So, when AWS released AWS Lambda with a new pricing model based on service invocations, we saw a shift similar in importance to what happened when they released compute and storage as a service with a pricing model where users paid only when the service was used. Functions as a Service (FaaS) offerings like AWS Lambda, Catalyst, Azure Functions, etc. provide a breakthrough in the cost model similar to the pay per use model unleashed by early cloud services like AWS S3 and EC2. This is what makes FaaS uniquely attractive to both developers and enterprises: a fundamental shift in cost structure that lowers the barrier to adoption and the cost of innovation. FaaS levels the playing field, letting smaller companies build innovative services that compete with large organizations with great financial muscle. This is driven by the cost model of FaaS, and the cost savings are the reason FaaS remains attractive in spite of certain architectural limitations imposed by the abstraction.

The key question is whether open source can compete with FaaS offerings in the cloud. There is no straightforward binary answer; it is a bit more nuanced. OSS cannot compete head to head with FaaS. Developers would rather have no operational overhead than manage an open-source serverless layer. FaaS is a clear winner against open source equivalents, just as IaaS made open-source cloud infrastructure software irrelevant. However, open source may still be important in some niche areas:

  • There could be higher-order developer frameworks on top of FaaS that are open source in order to gain developer mindshare
  • Multi-cloud is fast becoming the de facto strategy for many organizations. Data is spread out among multiple cloud providers, and the compute for applications using that data should sit close to the data. Organizations may therefore want to provide a standardized developer abstraction across multiple cloud providers, and open source serverless platforms will be relevant in this scenario. This is similar to the success of OpenShift and, to a lesser extent, CloudFoundry in the PaaS layer
  • Kubernetes is fast gaining traction among enterprises for deploying containers. As organizations deploy apps using multiple encapsulations and deployment models like containers and FaaS, they may want a Kubernetes-based serverless layer. Knative and serverless frameworks on top of Knative may be relevant in such scenarios

While it doesn’t make sense to use an open-source FaaS layer when deploying applications on a single cloud provider, open source may still be relevant in multi-cloud scenarios and in toolkits around FaaS. Thoughts?

Debunking The Serverless Myth

One of the biggest misconceptions about Serverless is that it cannot support stateful applications. We have already pointed out that the new breed of Functions as a Service (FaaS) offerings support stateful applications. In this post, we will go a bit deeper on the topic and break the myth that Serverless fits only stateless applications. While we should be clear-eyed about the constraints imposed by the FaaS abstraction, these platforms are also a good fit for most stateful applications. We will first look at the conventional wisdom about Serverless platforms and then explain why it is wrong.

The “Serverless is for stateless applications” myth

FaaS was originally built as a platform for deploying functions that take an input and transform it into an output. This required an architectural rethink that componentizes a large application into smaller chunks. The constraints imposed by FaaS were the result of a newer business model that could offer compute capabilities for pennies. While virtual machines in the cloud had ephemeral storage that required developers to rethink how they architect their applications, FaaS made even the compute ephemeral. This put severe constraints on the architecture and limited the kinds of applications that could be deployed using FaaS. These constraints made people think that FaaS is suitable only for stateless applications.

Let us not delude ourselves into thinking that deploying stateful applications with FaaS is easy. Even with virtual machines and containers on the cloud, there is operational overhead for developers deploying stateful applications. The FaaS abstraction sits at a higher level, where developers need not worry about the underlying infrastructure, including both containers and virtual machines. This makes handling state a more difficult task for developers unless the FaaS provider offers a more seamless way to persist state.

Beyond the storage volume, FaaS has other constraints that make it difficult to handle stateful applications.

  • Many FaaS providers impose severe constraints on execution time. This is how FaaS providers can offer services for cheap. Persisting state across short-lived code executions is not easy
  • Early-day FaaS offerings had a cold start problem that made it difficult to handle state persistence. Today, FaaS providers manage cold start issues much better, and some FaaS offerings don’t have a cold start problem at all
  • Early-day FaaS offerings didn’t make it easy to ensure that the state from one function execution was available to another execution of the same function. There is no way for two executions to share state stored in the associated ephemeral storage. The only way to handle state was to store it externally, which came with even higher operational overhead than handling external storage for virtual machines and containers, defeating the very advantage of FaaS

Such constraints forced developers to focus mainly on stateless applications and, hence, led to the myth that FaaS is suitable only for stateless applications.

Deploying stateful applications with FaaS

In spite of the constraints imposed by FaaS providers, it is possible to deploy stateful applications and persist state across multiple invocations. There are a few ways in which stateful applications can be deployed with FaaS:

  • Connecting to the database directly. Except in the case of some serverless databases that scale seamlessly, developers need to ensure that the database can handle the load and scale properly. If done wrong, the costs could run high
  • Use a Backend as a Service (BaaS) offering which can abstract away the complexity in managing the data store and remove some of the operational overhead that comes with connecting directly to the databases

As mentioned earlier, connecting to a database from a service like AWS Lambda adds operational overhead for developers. Also, if the database or BaaS is in a different zone or region, latency will impact the applications. However, a new breed of FaaS offerings like Catalyst, Nimbella, etc. offers an integrated BaaS or data store, usually available in the same zone as the compute; these services reduce the latency impact and remove additional operational overhead for the developer. If you are looking to deploy stateful applications on FaaS, be sure to pick a provider like Catalyst that can seamlessly support them.
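The externalized-state pattern both options rely on can be sketched minimally. Here an in-memory dict stands in for an integrated data store or BaaS, and the `get_state`/`put_state` helpers are hypothetical, not any particular provider's API:

```python
# Stand-in for an integrated data store / BaaS. On a real platform this
# would be a client for the provider's persistence service.
_store = {}

def get_state(key, default=None):
    return _store.get(key, default)

def put_state(key, value):
    _store[key] = value

def handler(event, context=None):
    """Each invocation loads state, updates it, and writes it back, so
    state survives across otherwise ephemeral function executions."""
    count = get_state("visits", 0) + 1
    put_state("visits", count)
    return {"visits": count}

result = handler({})  # first invocation
```

The function itself stays stateless between invocations; all durability lives in the store, which is why an integrated, same-zone store matters so much for latency.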

Also, Azure Functions now offers Durable Functions, which allow chaining of functions, opening up another opportunity for long-running applications that handle state better. Even though early-day FaaS offerings made it easy to deploy only stateless applications, newer offerings explicitly support stateful applications in their platforms. It is time for the developer community to look beyond stateless applications when considering the use cases FaaS supports. With the right platform, many stateful applications can be deployed, though you may have to re-architect the application into a more modular form.
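Function chaining of the kind Durable Functions enables can be illustrated generically. This is not the Azure API, just the pattern: an orchestrator passes accumulating state from one short-lived function to the next instead of holding it in one long-running process (the step names here are invented for illustration):

```python
def validate(state):
    state["valid"] = bool(state.get("order_id"))
    return state

def charge(state):
    if state["valid"]:
        state["charged"] = True
    return state

def notify(state):
    state["notified"] = state.get("charged", False)
    return state

def run_chain(state, steps):
    """Orchestrator: each step is a small function that receives the
    state produced so far and returns the updated state."""
    for step in steps:
        state = step(state)
    return state

result = run_chain({"order_id": 42}, [validate, charge, notify])
```

On a real durable-functions platform, the orchestrator checkpoints the state between steps, which is what lets a logically long-running workflow survive on short-lived compute.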

Overcoming Serverless Limitations

While Serverless provides developers many advantages, it also imposes many constraints in terms of architecture, deployment options, etc. While these constraints require developers to change the way they develop and deploy their applications, there are many ways to overcome these limitations or work around them gracefully. As we attend meetups, we hear many questions and concerns from developers about Serverless limitations. In this post, we will talk about some of these limitations and how developers can overcome them.

  • Stateful Applications: Functions as a Service started off as a platform for stateless applications, but the trend is slowly changing. It is possible to maintain state between invocations using the various data stores offered by cloud providers, but this came with enormous operational overhead, complexity, and runaway costs or poor SLAs when not managed properly. Now, new vendors like Zoho Catalyst or Nimbella offer FaaS platforms with integrated data stores, giving developers an easy option to build stateful applications
  • Mitigating DDoS: The pay per invocation model of FaaS brings this question into focus as developers (and other decision-makers) ponder using FaaS in their organizations. The DDoS risk is no different from DDoS attacks on applications running on containers or virtual machines, but the correlation between the number of function invocations and costs amplifies the threat. In many cases, FaaS vendors may try to reduce the impact of DDoS, but users also bear responsibility for mitigating attacks. They can use services like Cloudflare to fend off DDoS attacks. They can also resort to throttling, but when used pre-emptively it may end up impacting the application. Using an alerting mechanism for DDoS and then taking quick remediation like throttling may also help. Some FaaS providers like AWS allow developers to use API Gateway with a key, thereby limiting access to the API endpoints
  • Latency Issues: When a Serverless platform is used to deploy a small component of a larger, complex application, latency can become a problem, as these components talk to the rest of the application through REST API calls. This latency in inter-component communication, along with the fact that other components of the application may be deployed using other cloud services in a different location, can add up. It can be mitigated by using a loosely coupled architecture like Microservices, where FaaS is used for one or more Microservices
  • Scaling Issues: While using certain FaaS services for stateful applications, there can be scaling issues: stress on the datastore can time out requests sent through various API Gateway endpoints. One way to mitigate this is to use one endpoint for writing to the data store and another for reading. The load can then be distributed better by putting a queuing service in front of datastore writes. For the most part, these constraints can be mitigated with smarter architecture, but that may not always be possible
  • Cold Start Problem: Cold start may look like a flaw in FaaS, but it is leveraged by cloud providers to offer the service at such low costs. Some providers impose cold start as a constraint, while others keep the containers encapsulating the functions warm to avoid it. If your application’s user experience cannot afford cold starts, it is better to pick a provider that keeps the containers warm
  • Integration Testing: One of the biggest problems developers face is integration testing. In traditional environments, developers have all the components available locally and can do integration testing before the code is pushed into the DevOps pipeline. With FaaS, this is not possible. One way to overcome this limitation is to do integration testing remotely with the cloud provider rather than locally. Since FaaS provides a cost-effective way to deploy applications, using the service for dev and test is the right approach. This also removes the usual friction between developer and ops teams, where each team blames the other for application failures caused by differences in environment
  • Vendor Lock-in: Vendor lock-in is definitely a limitation of Serverless computing. Some vendors mitigate this risk by basing their service on an open-source FaaS offering. However, this mitigation is superficial, as most of the lock-in happens with application dependencies like the database service used, etc. Instead, our suggestion is to build disposable applications when using FaaS. FaaS reduces the cost of deploying applications, but it also reduces the cost of developing them, and this combination is the secret behind the success of FaaS. Disposable applications are applications for which it costs less to build a new application than to change the existing one as you move from one type of service, or one cloud provider, to another. By building disposable applications, users need not worry about lock-in costs; they can focus on building and deploying apps quickly, and when the time to change comes, they can throw away the existing application and quickly build a new one
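The queue-based write pattern from the scaling bullet can be sketched as follows. Python's standard `queue.Queue` stands in for a managed queuing service, and a dict stands in for the datastore; the endpoint and worker names are invented for illustration:

```python
import queue

write_queue = queue.Queue()   # stand-in for a managed queue service
datastore = {}                # stand-in for the backing data store

def write_endpoint(event, context=None):
    """Write path: enqueue instead of hitting the datastore directly,
    so bursts of writes can't overwhelm or time out the store."""
    write_queue.put((event["key"], event["value"]))
    return {"status": "accepted"}

def drain_worker(batch_size=10):
    """Queue-triggered function: drains up to batch_size items into the
    datastore at a rate the store can sustain. Returns items written."""
    written = 0
    while written < batch_size and not write_queue.empty():
        key, value = write_queue.get()
        datastore[key] = value
        written += 1
    return written

def read_endpoint(event, context=None):
    """Read path: a separate endpoint that queries the store directly."""
    return datastore.get(event["key"])
```

Splitting reads from writes this way means a write burst only grows the queue depth; readers and the datastore see a smooth, bounded load.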

Yes, Serverless computing adds lots of constraints for developers, and these constraints help cloud providers offer the service at a very low cost. By understanding the limitations, developers can easily mitigate their impact.

Two Paths To IoT With Serverless

As the Internet of Things (IoT) gains traction among businesses of all sizes, developers are forced to figure out how to build apps that take advantage of the data. The usual approach of deploying apps on virtual machines or containers may be suboptimal, since IoT devices may not generate data at all times, making it hard to justify running compute resources 24/7 near these devices. Serverless computing can come in handy in such situations. With FaaS, users pay for resources only when the compute is used, which proves more cost-efficient in IoT use cases.

Using Serverless for IoT workloads requires some understanding of what is available in the market and of the workload’s needs. There are two approaches to Serverless for such workloads: one is to push data to the cloud, and the other is to bring compute to the edge, closer to the devices. Both approaches are available from different vendors, but how the IoT data must be processed to meet your business needs determines the right approach. If the data requires real-time processing, it is better to process it at the edge/IoT location, as this cuts down on latency and bandwidth costs. However, storing the processed data at the edge has a higher overhead than storing it in the cloud. If the data can be disposed of after real-time processing, using a function execution engine at the edge or near the IoT device makes complete sense. If your application processes data from multiple sources and requires storing the data for historical analysis, pushing the data to a centralized cloud location makes more sense.

  • AWS Lambda, Zoho Catalyst, Azure Functions, etc. are useful for processing the data in the centralized cloud. Zoho Catalyst, for example, offers machine learning and AI services as part of its platform, which can come in handy when processing data from IoT and edge locations
  • Edjx takes the approach of moving compute closer to the edge. They are trying to provide the same user experience as cloud FaaS offerings on edge devices. You could also deploy Azure Functions as IoT Edge modules to take computing closer to the data rather than pushing the data to the centralized cloud

Depending on the needs of your organization, either of the two approaches mentioned above can come in handy for your IoT workloads. Except in certain scenarios where running virtual machines or containers on the cloud may make sense, Serverless computing is well suited for building apps to process data coming from IoT and the edge.

Considerations for IoT with Serverless

  • Are you processing the data in real time? If so, bringing compute closer to data makes more sense than pushing data to a centralized cloud service
  • Does network latency impact the end user experience? Then, it makes sense to process the data at the edge
  • Does your app process data from multiple sources? In many instances, it may make sense to use centralized cloud depending on the data sources
  • Does historical data play a significant role in your ML models or analytics? Then, using the centralized cloud makes more sense as the cost of storing data could be cheaper in cloud
  • Are you using the data from business applications along with data coming from IoT devices? It will make more sense to use the cloud service for such use cases
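The checklist above can be folded into a simple decision helper. This is a heuristic sketch of the considerations listed, not a product recommendation; the function name and weighting are assumptions:

```python
def placement_for_iot(realtime, latency_sensitive,
                      multiple_sources, needs_history,
                      joins_business_data):
    """Tally the edge-leaning vs. cloud-leaning answers from the
    checklist; a tie usually suggests a hybrid of both approaches."""
    edge = sum([realtime, latency_sensitive])
    cloud = sum([multiple_sources, needs_history, joins_business_data])
    if edge > cloud:
        return "edge"
    if cloud > edge:
        return "cloud"
    return "hybrid"

# Real-time, latency-sensitive sensor stream with disposable data:
choice = placement_for_iot(True, True, False, False, False)
```

Most real deployments answer "yes" in both columns, which is why the two approaches are often combined: real-time filtering at the edge, with aggregates shipped to the centralized cloud for historical analysis.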

Top DevOps Considerations For Serverless

The role of DevOps in Serverless deployments is critical to realizing the value of these technologies. As developers use Serverless technologies, they need to go beyond the usual developer considerations to DevOps considerations, which play a critical role not just in agility but also in user experience. In this post, we will highlight some of the DevOps considerations every developer using Functions as a Service should weigh.

Key DevOps Considerations

  • Self Service: The first, and foremost, requirement is self service. If developers cannot consume the service in a self service way, it adds tremendous friction, leading to a dramatic slowdown in application deployment. Having a self service interface is a critical requirement for any credible Functions as a Service offering
  • Memory Size: Developers should be able to size the memory to meet their application requirements. Without this feature, the application can fail or perform suboptimally. FaaS providers should give developers the ability to pick the memory size that meets their needs
  • Elastic Scaling: FaaS providers should allow developers to select the required number of concurrent executions so that their applications can scale seamlessly. They should also be able to scale back to zero when their needs are met. Elastic scaling is critical for optimizing both performance and costs
  • Execution Time: It is important to find out the maximum execution time allowed for functions. Without an appropriate execution time limit, the application may fail or perform suboptimally. For certain kinds of applications, support for long running jobs is essential. AWS Lambda doesn’t support long running jobs, so users have to tap into AWS Fargate to run them; similarly, in the case of Azure Functions, developers have to tap into Azure Durable Functions to execute long running jobs. If your application is long running, check the platform’s support for long running jobs or use a service like AWS Fargate or Spot Ocean
  • Cold Start: Most FaaS offerings have a “cold start problem” (in quotes because it is actually a feature used to reduce costs). Many applications cannot tolerate cold starts and may require a continuously running container, as is the case with container services. However, some providers offer Warm Start support to avoid the delay caused by cold starts. Depending on your application needs, consider this to reduce friction in application deployment and user experience
  • Operational Overhead: Another important factor to consider is the operational overhead needed to put together an application using storage and database services. Certain services require operational overhead (and, hence, a learning curve for developers) while others like Zoho Catalyst come with an integrated data store. The key to seamless DevOps is reducing friction in using the various services needed for the application
  • Integrated logging and monitoring: Observability is key to modern DevOps practices. Some FaaS providers (eg: Zoho Catalyst) offer integrated logging and monitoring. If your FaaS provider doesn’t offer integrated logging and monitoring tools, it adds operational overhead, increasing friction in the DevOps processes
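Several of the knobs above (memory size, execution time, concurrency) end up in the function's deployment configuration, and catching an out-of-range value before deployment is cheap. A minimal sketch of such a pre-deployment check, using illustrative limits loosely modeled on common FaaS caps (the exact limits vary by provider):

```python
# Sketch: validate a function's deployment configuration against
# illustrative platform limits. The limit values below are assumptions
# loosely modeled on common FaaS caps and vary by provider.
LIMITS = {
    "memory_mb": (128, 10240),        # memory size (min, max)
    "timeout_seconds": (1, 900),      # maximum execution time
    "reserved_concurrency": (0, 1000) # concurrent executions
}

def validate_config(config):
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for key, (low, high) in LIMITS.items():
        value = config.get(key)
        if value is None:
            problems.append(f"{key} is not set")
        elif not low <= value <= high:
            problems.append(f"{key}={value} is outside [{low}, {high}]")
    return problems
```

Running a check like this in the DevOps pipeline surfaces misconfigurations (say, a timeout too short for the workload) before they cause failures in production.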

Serverless computing offerings like FaaS provide tremendous advantages in terms of agility and cost savings. However, they also impose constraints on developers as well as application architectures. In the previous blog post, we discussed some of the developer considerations impacting the application, and this blog post highlights various DevOps considerations that remove friction in application deployment as well as the end user experience.

Stateful Apps With Serverless

Using Serverless Functions for stateless applications is straightforward, but not all Functions as a Service (FaaS) providers offer out of the box support for stateful applications. Using FaaS without understanding the requirements for deploying stateful applications may end up being expensive and operationally prohibitive. Even though early use cases for Functions as a Service like AWS Lambda were stateless, newer offerings from other cloud providers have made it easy for users to deploy stateful applications. As more enterprise customers adopt FaaS, we are seeing an increased number of stateful applications deployed to FaaS.

In the first generation of FaaS platforms, due to the ephemeral nature of the platforms and the short lived nature of serverless functions, a push towards using FaaS only for stateless applications gained traction. More importantly, early FaaS platforms required developers to understand the operational side if they wanted to use any kind of data store. But the trend is slowly changing with a rethink of how data stores are bundled with compute. As these newer generations of FaaS offerings mature, we can expect to see more stateful applications deployed on FaaS.

Building Stateful Applications with Serverless

Simply put:

A stateful application saves data about each client session and uses that data the next time the client makes a request, whereas a stateless application does not save client data generated in one session for use in the next session with that client.

Many business applications we use today are stateful applications. While some of these applications can be re-architected as stateless applications, we cannot entirely get rid of stateful applications. As organizations start using data centric applications, there is a clear expectation to support stateful workloads in Serverless platforms.

Depending on whether a database service comes bundled with the FaaS or not, stateful applications can use the database either directly or through the API provided by a Backend as a Service (BaaS), which abstracts away the complexity of managing the database and making it scale with the Serverless Functions. An S3-style API for writing objects or files is an example: many developers use S3 along with AWS Lambda to store objects that are processed by AWS Lambda. Zoho Catalyst users use GraphQL or ZCQL, Zoho Catalyst’s own Query Language, to handle data from the relational database service that comes bundled with Zoho Catalyst.
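To make the stateful pattern concrete, here is a minimal sketch of a handler that persists session state between invocations. A plain in-memory dict stands in for the persistent store (S3 or a bundled data store in a real deployment) so the logic can be run locally; the event shape is an assumption for illustration.

```python
# Sketch: a FaaS-style handler that saves per-session state and reuses it
# on the next request. A dict stands in for the real persistent store
# (S3, a bundled data store, or a BaaS API in production).
_store = {}

def load_state(session_id):
    """Fetch the saved state for a session; empty dict if none exists."""
    return dict(_store.get(session_id, {}))

def save_state(session_id, state):
    """Persist the session state for the next invocation."""
    _store[session_id] = dict(state)

def handler(event, context=None):
    """Count requests per session -- a tiny example of carrying state."""
    session = event["session_id"]
    state = load_state(session)
    state["count"] = state.get("count", 0) + 1
    save_state(session, state)
    return {"session_id": session, "count": state["count"]}
```

Because function instances are ephemeral, the state must live outside the function itself; swapping the dict for an integrated data store (or a BaaS call) is the only change a production version would need.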

As Serverless platforms mature, you can expect to see more tooling to handle state and make it easy to deploy stateful applications. With Serverless platforms becoming a useful candidate for running machine learning and artificial intelligence workloads, we can expect to see support for big data tools.

Considerations for Stateful Applications

If you are going to use the composition of functions (the output of one function feeding in as the input to another) as a way to handle state, your “stateful” application will become complex very fast. If you are going to use persistent storage to handle state, make sure the database service is integrated with the FaaS offering so that developers are not responsible for handling the scaling needs of the database. In the absence of such an integrated offering, using a BaaS could help mitigate the overhead of handling the scaling.

Events in a relational database can also be used as triggers for executing functions in FaaS. Developers can also combine FaaS with relational databases in workflows to build complex applications. However, developers are responsible for ensuring resiliency when building such workflows.
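As a sketch of the trigger pattern above, here is a function that reacts to a batch of relational-database change events. The record shape (`operation`, `table`, `row`) is an illustrative assumption, not any specific provider's event format.

```python
# Sketch: a FaaS handler triggered by database change events, e.g. row
# inserts and deletes. The record shape is an assumption for illustration.

def on_db_event(event, context=None):
    """React to a batch of change records from a relational database."""
    actions = []
    for record in event.get("records", []):
        op = record["operation"]   # e.g. "INSERT", "UPDATE", "DELETE"
        table = record["table"]
        if op == "INSERT" and table == "orders":
            # Kick off downstream work, e.g. sending a confirmation.
            actions.append(("notify_customer", record["row"]["order_id"]))
        elif op == "DELETE":
            # Record deletions for auditing.
            actions.append(("audit_delete", table))
    return {"actions": actions}
```

Note that a real workflow built this way needs retries and idempotency (the resiliency concern mentioned above), since the platform may deliver an event more than once.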


While stateless workloads are still the prevalent use case on FaaS platforms, newer platforms make it easy to deploy stateful applications by providing a more integrated offering that makes it easy to store sessions and state in the application. As more data centric workloads move to FaaS, expect to see support for big data and other data intensive workloads.

Does Serverless Need DevOps?

As Serverless offerings like Functions as a Service gain traction among developers, questions arise about the role of DevOps in Serverless environments. With so much hype surrounding these technologies and DevOps, there is very little clarity on the relationship between Serverless and DevOps. We can talk about DevOps and Serverless in a couple of contexts and, in this post, we will briefly discuss the relationship.

Using Serverless to manage DevOps: Serverless technologies can be tapped for DevOps automation across the pipeline. Whether it is running a build process on check-in, running cron jobs, chatops, generating reports, or log processing, Functions as a Service can help establish seamless DevOps. By tapping into FaaS for automation, organizations can build DevOps pipelines at low cost without any operational overhead. Using FaaS to handle the automation needs of DevOps is the more straightforward choice.
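Log processing is an easy fit for this kind of automation: a scheduled function scans recent log lines and reports error counts, for example to a chat channel. A minimal sketch; the "LEVEL message" log format is an assumption for illustration.

```python
# Sketch: a scheduled FaaS function that scans log lines and summarizes
# severity counts -- the kind of low-cost DevOps automation described
# above. The "LEVEL message" log format is an illustrative assumption.

def process_logs(event, context=None):
    """Tally log lines by severity level."""
    counts = {"ERROR": 0, "WARN": 0, "INFO": 0}
    for line in event.get("log_lines", []):
        level = line.split(" ", 1)[0]
        if level in counts:
            counts[level] += 1
    # A real function might post this summary to a chat channel (chatops)
    # or raise an alert when the ERROR count crosses a threshold.
    return counts
```

Because the function only runs on its schedule (or when logs arrive) and scales to zero in between, this costs a fraction of a continuously running monitoring host.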

However, the use of DevOps in deploying Serverless applications is a more interesting topic for developers. When they deploy applications to the cloud using either virtual machines or containers, most organizations expect developers to understand the operational aspects of deploying applications and, in some cases, to deploy directly to production. This is a steep learning curve for developers, and the key question in their minds is whether Serverless can reduce some of the operational burden from their shoulders. The abstraction provided by Functions as a Service should help reduce the operational burden for developers. However, we find that there are two different approaches to how these services are abstracted, and this determines the operational load developers incur as they push their code to production.

  • The barebones approach, spearheaded by AWS, Google and others, where compute is offered as Functions as a Service and it is the responsibility of the developer to put together the various pieces needed for their application, like object storage, databases, etc. Developers should ensure that storage, databases, etc. scale the way the compute scales, using automation. As the code moves from the development environment to testing to production, developers are tasked with maintaining the configuration files needed to automate the dependencies. This makes the application more complex than many developers prefer
  • On the other end of the spectrum, we have offerings like Zoho Catalyst, which provide a database, event sources, AI kits, etc. so that the onus is not on the developers to set up these services and manage them. This approach is much more suitable for developers in small organizations who can’t afford operational talent to handle application deployment needs. Zoho Catalyst also provides a sandbox environment where developers can test their application before deploying it to production, which greatly simplifies the DevOps pipeline for developers in smaller organizations. They also plan to provide integrations with various DevOps tools so that larger organizations can use the service as part of their existing DevOps pipelines. Nimbella belongs to the same bucket
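In the barebones approach, the per-environment wiring the developer must carry from development to testing to production can be made explicit as configuration. A minimal sketch; all resource names below are hypothetical.

```python
# Sketch: in the barebones approach, developers maintain per-environment
# configuration (bucket names, database endpoints, etc.) themselves as
# code moves from dev to test to prod. All names here are hypothetical.
ENVIRONMENTS = {
    "dev":  {"bucket": "app-data-dev",  "db_endpoint": "db.dev.internal"},
    "test": {"bucket": "app-data-test", "db_endpoint": "db.test.internal"},
    "prod": {"bucket": "app-data-prod", "db_endpoint": "db.prod.internal"},
}

def resolve_config(env):
    """Pick the dependency wiring for a given deployment stage."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    return ENVIRONMENTS[env]
```

With an integrated offering, this table effectively disappears: the platform provisions and wires the data store for each environment, which is exactly the friction difference between the two approaches.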

As organizations evaluate various Functions as a Service offerings, they should take into account whether their developers have the necessary expertise to handle the DevOps needs. Based on the available expertise, they can pick a DIY approach to DevOps using services like AWS Lambda, Google Cloud Functions, etc., or a more comprehensive platform like Zoho Catalyst. Understanding the DevOps responsibilities is key to maximizing the benefits of any Serverless platform.