The Pros and Cons of Serverless Computing
July 12, 2020
Cloud computing has provided organizations with numerous benefits, such as improved scalability, decreased time to market, and cost efficiency. Yet it is an area that continues to evolve quickly, and today the cloud is becoming increasingly abstract. It has much more to offer as it takes more responsibilities off your plate, so that you can focus on what matters most.
Early cloud services required provisioning infrastructure, installing the required tools, planning tool upgrades, administering security, and managing patches. These processes demand constant attention and maintenance, consuming your developers' and cloud engineers' precious time.
Enter, Serverless Computing
Definition: Serverless computing is an architecture in which a cloud provider dynamically manages the allocation and provisioning of servers.
As with any new technology or process, there are many benefits to appreciate as well as some challenges to be aware of. Below, we take a look at the pros and cons of Serverless Computing as a component of your cloud architecture. These observations can act as a first step in helping you determine whether Serverless Computing is the right fit for your organization.
No Server Management Necessary
One of the most notable benefits of Serverless Computing is that there is no need for server management: your servers are managed for you, freeing up time for your engineers and developers to focus on development and analytics right away.
Pay Per Execution Time
In traditional cloud computing, when you want to run an instance, you provision a server first and are then billed for the entire time that server is running, even when it sits idle. With serverless computing, you are billed only for application uptime, that is, the runtime of your functions. For smaller, routine tasks that don't need much compute power, this can play a significant role in cost reduction. The philosophy behind serverless computing naturally drives idle time toward zero: when workloads fluctuate, serverless computing prevents overprovisioning of resources, significantly reducing costs.
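The cost difference above can be made concrete with some back-of-the-envelope arithmetic. The sketch below compares an always-on server billed by the hour with a function billed per GB-second of execution; all rates, memory sizes, and invocation counts are illustrative assumptions, not real vendor prices.

```python
# Illustrative cost comparison: always-on server vs. pay-per-runtime function.
# All rates below are assumptions for the sake of the arithmetic.

ALWAYS_ON_HOURLY_RATE = 0.05           # assumed $/hour for a small VM
SERVERLESS_RATE_PER_GB_SEC = 0.0000166667  # assumed $ per GB-second of runtime


def monthly_server_cost(hourly_rate: float, hours: int = 730) -> float:
    """An always-on server is billed for every hour, idle or not."""
    return hourly_rate * hours


def monthly_serverless_cost(invocations: int, avg_duration_sec: float,
                            memory_gb: float,
                            rate: float = SERVERLESS_RATE_PER_GB_SEC) -> float:
    """A serverless function is billed only for actual execution time."""
    return invocations * avg_duration_sec * memory_gb * rate


if __name__ == "__main__":
    server = monthly_server_cost(ALWAYS_ON_HOURLY_RATE)
    # Assume 100,000 invocations/month, 200 ms each, 512 MB of memory.
    fn = monthly_serverless_cost(100_000, 0.2, 0.5)
    print(f"always-on server: ${server:.2f}/month")
    print(f"serverless:       ${fn:.4f}/month")
```

For a sporadic workload like this, the serverless bill is a small fraction of the always-on bill; for a workload that runs flat-out around the clock, the comparison can flip, which is exactly the trade-off discussed later in this article.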
Automatic Scalability
The ability to add and decommission cloud resources has been a huge advantage of cloud computing for years, as it allows on-demand scaling when needs change. Serverless goes further by automatically scaling resources out to match usage. All server onboarding and resource management is abstracted away from developers, which means a serverless application handles a very high number of requests just as well as a small one. This agility keeps cloud resource usage as optimized as possible, resulting in both performance and cost optimization.
Quick Deployment, Fast Updates
A serverless architecture ensures abstraction of the backend, meaning that your developers only have to publish a piece of code to deploy an application. Developers can upload pieces of code very quickly, one function at a time, creating a microservice architecture where the application is not hosted on a single monolithic stack. This not only allows for a reduced time to market but also leads to possibilities for making quick fixes, updates and patches without disrupting the entire application.
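To illustrate how small a deployable unit can be, here is a minimal function in the common AWS Lambda handler style. The event shape and response format are assumptions modeled on a typical HTTP-triggered function; the whole application could be composed of many such independently updatable pieces.

```python
import json


def handler(event, context=None):
    """A minimal AWS Lambda-style handler. The single function is the
    entire deployable unit, so it can be published, patched, or rolled
    back on its own without touching the rest of the application.
    The event shape here (a dict with an optional "name") is an
    assumption for illustration."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }


if __name__ == "__main__":
    print(handler({"name": "serverless"}))
```

Each function like this becomes one small service in the microservice-style architecture described above, which is what makes quick, isolated fixes possible.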
Automated Workflows
Serverless functions can be triggered by events in storage, or they can be scheduled to run automatically at desired intervals. This means that with a well-designed workflow, there is no need for manual intervention to carry out a data pipeline task. This reduces time-intensive work and subsequently reduces costs.
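A storage-triggered pipeline step might look like the sketch below. The nested event layout mirrors the common S3 bucket-notification shape, but treat it as an assumption; the point is that the platform invokes the function for you whenever a new object arrives, with no manual kickoff.

```python
def pipeline_handler(event, context=None):
    """Sketch of a storage-triggered pipeline step. The cloud platform
    calls this function whenever a new object lands in a bucket; the
    event structure below is modeled on the common S3 notification
    shape and is an assumption for illustration."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In a real pipeline you would fetch and transform the object here.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}
```

Chaining a few such functions (one per pipeline stage) yields the hands-off data pipeline described above.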
Security & Compliance
Most top vendors are PCI DSS, SOC 2 Type 2, and ISO 27001 compliant, so even though you do not know the identity or location of the servers, the vendors are required to meet compliance requirements. They also understand that security is a common worry among organizations, so they prioritize it to keep their customers' data secure. This means that most serverless cloud solutions on the market today are as secure as, if not more secure than, on-premise solutions.
Debugging & Testing
Debugging a serverless architecture is challenging because the code runs on a cloud backend into which developers have no visibility. This is aggravated by the fact that serverless applications are normally a collection of multiple cloud functions which run separately yet can have inter-dependencies, so the central debugging environment you would normally have in a local setting is missing. However, many frameworks have been created to address these issues, and developers are becoming more accustomed to the new processes.
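One practical mitigation is to keep handlers as plain functions so they can be exercised locally with hand-built sample events before deployment. The sketch below assumes a hypothetical handler; the technique, invoking the function directly instead of through the cloud runtime, is what matters.

```python
def handler(event, context=None):
    """A tiny, pure piece of business logic (hypothetical example) with
    no dependency on the cloud runtime, so it can be invoked directly
    on a developer's machine."""
    return {"doubled": [x * 2 for x in event.get("values", [])]}


def run_local_checks():
    """Exercise the handler locally with sample events, no cloud needed.
    Returns "ok" if every check passes."""
    assert handler({"values": [1, 2, 3]}) == {"doubled": [2, 4, 6]}
    assert handler({}) == {"doubled": []}
    return "ok"


if __name__ == "__main__":
    print(run_local_checks())  # prints "ok"
```

Local checks like these do not replace end-to-end debugging across inter-dependent functions, but they catch logic errors before the code ever reaches the opaque cloud backend.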
Designed for Short Processes
While this is in part seen as a benefit, thanks to simplified systems and easier solution management, serverless computing is not the right fit for every process. Some processes are long-running or compute-intensive in ways serverless platforms are not designed for; most providers cap how long a single function invocation may run. While these limits are improving, serverless computing might not be the ideal solution if you need to support a large workload over a longer period of time.
Limited Performance Control
Since compute resources are provisioned for you on demand, applications sometimes risk suffering a 'cold start': the boot-up delay incurred while the backend allocates compute to your application when it is first triggered.
For more complex or longer-running processes, serverless architectures may still not be a great fit, because you have limited control over increasing the compute power allocated to a specific, complex process. Such tasks can take a very long time to run and ultimately rack up substantial runtime charges.