Serverless Architecture and Development Misconceptions Explained
There is no question that Serverless compute offerings built around Functions-as-a-Service (FaaS) are fundamentally changing the way applications and other code are written today. Independent Software Vendors (ISVs) can use Serverless offerings to quickly and efficiently host trigger-based code for a wide variety of application development needs.
Serverless completely abstracts running code from the underlying infrastructure: the cloud provider takes care of the servers involved, and developers can concentrate on the code their applications run or call.
A major hurdle for organizations, including ISVs, looking to start making use of Serverless offerings is that it requires a completely different mindset than traditional application development, and many misconceptions exist on this front. Let’s take a look at the top Serverless development misconceptions and see how and why they can be problematic to effective Serverless architecture deployment.
Serverless Is Just a Different Way to Deploy Code
As with any technology offering, and especially one that is fairly new to mainstream adoption such as Serverless, misconceptions can certainly exist among developers, DevOps, management, and others. Focusing on developers in particular, Serverless requires a different mindset than traditional development. How so? A common mistake among developers is to view Serverless as simply a different way to deploy code by using offerings such as AWS Lambda. This is a misconception, both in terms of how the code is developed and in the overall methodology of making use of Serverless technology.
Traditional software development practices and thinking do not map cleanly onto development with Serverless technologies. A prime example: in traditionally built applications, developers can take the output from one component and use it in another. This is referred to as local state, and it can be preserved in traditional architectures. With Serverless architecture, however, functions are by nature ephemeral and trigger-based, so it is not possible to make use of local state passed to a Serverless function without storing that state somewhere first.
State can be written to an external store such as DynamoDB, or held temporarily in ElastiCache before the data is written permanently to a backend database. Authentication mechanisms such as bearer tokens fit this model well: the token identifies the user and can easily be stored in a backend datastore. While these workarounds exist, they represent a new way of thinking about code.
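To make this concrete, here is a minimal sketch of how a Python function on AWS Lambda might persist state between invocations in DynamoDB. The table name, key, and fields are hypothetical and only illustrate the pattern.

# A minimal sketch of persisting state between invocations, assuming Python on
# AWS Lambda and a hypothetical DynamoDB table named "session-state" keyed on
# "session_id". Names and fields are illustrative, not a prescribed schema.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("session-state")  # hypothetical table name

def handler(event, context):
    session_id = event["session_id"]

    # Load whatever state a previous invocation left behind, if any.
    previous = table.get_item(Key={"session_id": session_id}).get("Item", {})
    counter = int(previous.get("invocation_count", 0)) + 1

    # Persist the updated count so the next, separate invocation can see it.
    table.put_item(Item={"session_id": session_id, "invocation_count": counter})

    return {"statusCode": 200, "body": json.dumps({"invocation_count": counter})}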
There are other facets of Serverless that developers need to consider in addition to state. Issues such as “cold start” times must be properly handled when utilizing Serverless functions, or application performance will suffer. Developers must also take into account the execution duration of the function being invoked; cloud providers place different limits on how long functions housed in their FaaS offerings are allowed to run.
For example, the maximum timeout for a function running in AWS Lambda is 15 minutes. Developers must keep these limits in mind, as certain long-running processes in an application may not be a good fit for a Serverless function. If FaaS is still the chosen approach, long-running tasks can be handled by coordinating multiple functions.
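As a rough sketch of that coordination, the Python example below checks how much execution time remains and hands the rest of the work to a fresh invocation of the same function. The function name "batch-worker" and the work-item handling are hypothetical; orchestration services such as AWS Step Functions are another common way to split up long-running work.

# A rough sketch of keeping a long-running job inside Lambda's time limit by
# splitting it across invocations. Names and work items are hypothetical.
import json
import boto3

lambda_client = boto3.client("lambda")

def process_item(item):
    # Placeholder for the real unit of work performed on each item.
    print(f"processing {item}")

def handler(event, context):
    items = list(event.get("remaining_items", []))

    while items:
        # Leave a safety margin well before the function's timeout is reached.
        if context.get_remaining_time_in_millis() < 30_000:
            # Hand the unfinished work to a fresh invocation of this function.
            lambda_client.invoke(
                FunctionName="batch-worker",      # hypothetical function name
                InvocationType="Event",           # asynchronous, fire-and-forget
                Payload=json.dumps({"remaining_items": items}),
            )
            return {"status": "continued", "remaining": len(items)}

        process_item(items.pop(0))

    return {"status": "complete"}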
The above are only a few examples illustrating that running code in AWS Lambda and other FaaS offerings is not simply a different way to deploy code in the cloud. The entire coding mindset must take into account the design, the architecture, and the limitations that come with Serverless architecture. A developer who concludes there is no difference between utilizing Serverless functions and other means of coding applications will most definitely run into issues with application development.
If You Can Use Serverless, You Should Always Use It
Like any shiny new tool, there can easily be a desire to use Serverless for everything, even when it is not the best-suited tool for the job, or is not needed at all. While it offers tremendously powerful and advantageous characteristics in many use cases, Serverless is not always the right fit. Many great platforms and services are already built into cloud provider offerings, and writing and hosting custom code as a function in a Serverless offering such as AWS Lambda may amount to “reinventing the wheel.” For example, a look at the products AWS offers around Machine Learning (ML) makes it easier to see why Serverless, or custom code in general, may not be necessary.
AWS has many options in the realm of ML that allow ISVs and others to take a tiered approach to how much custom code and supporting technology needs to be implemented. At the base layer, ML frameworks such as TensorFlow, PyTorch, and Apache MXNet provide the lowest-level access to machine learning algorithms and potentially require the most coding. A step up from the frameworks, AWS offers ML services (Amazon SageMaker) that enable developers to quickly build and deploy machine learning models at scale; these require less development than the frameworks and provide pre-configured algorithms suitable for the vast majority of use cases. At the top level, the AWS AI Services provide ready-made intelligence for applications and workflows and require no machine learning development or custom code to implement a solution.
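As a small illustration of that top tier, the Python sketch below calls Amazon Rekognition, one of the ready-made AI Services, instead of running any custom ML code in a function. The S3 bucket and object names are hypothetical placeholders.

# A brief sketch of the "top tier" approach: calling a ready-made AWS AI
# service (Amazon Rekognition here) instead of writing custom ML code.
import boto3

rekognition = boto3.client("rekognition")

def label_image(bucket, key):
    # Rekognition runs pre-trained models; no ML code or model training is needed here.
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
    )
    return [label["Name"] for label in response["Labels"]]

print(label_image("example-bucket", "photos/storefront.jpg"))  # hypothetical S3 object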
These examples highlight that “reinventing the wheel” with custom ML code running in a Serverless solution may well be unnecessary; many strong solutions are already readily available as cloud services.
The development mindset must be holistic rather than focused solely on custom code. The first step in evaluating a Serverless solution for any application is to ensure there is no ready-made platform service already available that makes practical sense to use. Only when those readily available platform services have been evaluated and exhausted should ISVs pursue custom code in a Serverless solution. This helps reduce code debt and ultimately places the operational complexity in the purview of the cloud provider.
Ops is No Longer Needed with Serverless
With the layer of abstraction that comes with Serverless and the cloud provider handling all of the infrastructure components, it may seem as though operations is no longer needed. However, this is a misconception. Ops is not just infrastructure administration, although that is one element; it also involves monitoring, deploying, securing, and troubleshooting. Even though there is no provisioning or managing of the infrastructure underneath Serverless applications, operations staff are still needed to make sure the application performs and runs the way it is supposed to. It is critically important to monitor Functions-as-a-Service to ensure they are running optimally, and security remains an ongoing concern that must be implemented correctly, even with Serverless technologies.
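As one concrete example of an ops task that remains, the Python sketch below creates a CloudWatch alarm on a Lambda function's error metric. The function name, alarm name, and SNS topic are hypothetical placeholders.

# A rough sketch of one ops task that does not disappear with Serverless:
# alarming on function errors. Names and the SNS topic ARN are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm whenever the function reports one or more errors in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="orders-function-errors",                                   # hypothetical
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "orders-function"}],    # hypothetical
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # hypothetical SNS topic
)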
Security is Irrelevant with Serverless
Security is perhaps one of the most important objectives in development today. ISVs and others developing applications must stay aware of current and potential future vulnerabilities as they arise. A common misconception around Serverless technologies is that Serverless code is immune to the vulnerabilities found in traditional infrastructure systems. While some aspects of Serverless are not exposed to certain attacks that affect traditional infrastructure, Serverless is in no way immune to all security holes. In fact, Serverless code relies on underlying components and dependencies that can themselves have security vulnerabilities. This requires that ISVs and operations personnel continually monitor for new and emerging threat vectors.
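As one illustration of a familiar vulnerability class carrying over to Serverless, the Python sketch below validates event input before acting on it, since functions are often wired directly to external event sources. The expected fields and allowed values are hypothetical.

# A minimal illustration of validating untrusted event input in a function.
# The "action" field and allowed values are hypothetical.
import json

ALLOWED_ACTIONS = {"create", "update", "delete"}

def handler(event, context):
    body = json.loads(event.get("body") or "{}")

    action = body.get("action")
    if action not in ALLOWED_ACTIONS:
        # Reject anything outside the expected values rather than passing
        # attacker-controlled input further into the application.
        return {"statusCode": 400, "body": "invalid action"}

    # ...continue processing the validated request here...
    return {"statusCode": 200, "body": json.dumps({"action": action})}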
No Need for PaaS and IaaS or Containers with Serverless
This ties in closely with the idea presented earlier. It is easy to develop the misconception that Serverless technology replaces all other cloud technologies, or is “better” than any other solution for developing applications. While Serverless technology is powerful, it has particular strong suits and use cases that fit it well; there are still application development scenarios where PaaS, IaaS, or containerized solutions are the better choice.
As already mentioned, the “cold start” and local state aspects of Serverless solutions are challenges that must be overcome. Depending on the application, a PaaS, IaaS, or containerized solution may be a better fit. It is also worth noting that we will most likely see the characteristics of cloud technologies such as containers and Serverless continue to merge over time.
Final Thoughts
Serverless is not simply a different way to deploy code but rather a whole new way to build applications, with its own set of strengths and weaknesses. As with any tool, it has strong use cases as well as cases that are better served by other solutions or tools. And even with Serverless technology, monitoring performance and addressing security concerns remain required.
By dispelling the prevalent misconceptions regarding Serverless solutions, ISVs can engineer software that uses the technology best suited for the job. Serverless solutions are extremely powerful and feature-rich, and when used properly they allow ISVs to push the envelope in designing cutting-edge solutions for customers.