Today’s Independent Software Vendors (ISVs) must architect technical solutions at a faster pace than ever before. They are often tasked with developing software for fast-moving projects that businesses consider central to mission-critical operations. By using cutting-edge cloud-based technology, ISVs can deliver powerful applications more quickly and efficiently than ever before.
One of the major developments in cloud-based architecture has been Serverless. Serverless allows ISVs to move extremely quickly and efficiently because the underlying infrastructure is fully abstracted away from the code, which executes natively in the cloud. Code is hosted in the cloud and triggered to instantiate and execute in response to specific events. This opens up powerful capabilities and possibilities for applications across a wide variety of use cases. However, as powerful and exciting as Serverless technology is for ISVs developing cutting-edge applications, it comes with hidden challenges. What are some of those hidden challenges, and how can they be overcome?
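To make the trigger-based model concrete, here is a minimal sketch of a Serverless function in the style of AWS Lambda. The event shape and the function name are hypothetical; the key point is that the cloud provider, not the ISV, decides when to instantiate and invoke the handler (for example, on an HTTP request or a file upload).

```python
# Minimal sketch of an event-triggered Serverless function.
# The provider invokes the handler only when a configured trigger fires;
# no server process is managed by the developer.
import json

def handle_order_created(event, context=None):
    """Hypothetical handler: computes an order total from a trigger payload."""
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"statusCode": 200, "body": json.dumps({"order_total": total})}
```

Locally, the trigger can be simulated simply by constructing an event dictionary and calling the handler directly, which is also the easiest way to unit-test such code.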
The world of Serverless solutions is rapidly expanding and blurring the lines between development and infrastructure as we know them; in some cases it is changing the entire methodology of application development. For all the benefits of abstracting away the underlying “server” infrastructure, Serverless offerings come with hidden challenges. Let’s take a closer look at each of these challenges and see how ISVs can overcome them when using Serverless solutions in application development.
As with any new technology, it can be challenging to find good resources and people with the right skill sets to implement it correctly, effectively, and efficiently. Serverless solutions are no exception. As a relatively new cloud-based technology, Serverless has a fairly small overall knowledge base at this point. This creates challenges when adopting the technology for such necessary tasks as architecting, designing, developing, and troubleshooting Serverless components.
While this can be a challenge for ISVs with little or no Serverless experience, partnering with skilled providers who have valuable experience implementing, developing, and troubleshooting Serverless solutions in today’s modern applications allows ISVs to get up to speed and begin integrating Serverless into application development quickly.
A longstanding argument against migrating resources to the cloud is the loss of some control compared to on-premises environments. Serverless offerings are no different. When ISVs or any other business utilize public cloud resources, including Serverless, they relinquish a degree of control over the data, applications, infrastructure, or services housed there. As cloud providers and environments have matured, however, many of these once-strong objections have greatly diminished. Public cloud has become an essential part of enterprise infrastructure and services.
The world-class service offerings and high availability that public cloud vendors such as AWS, Google, and Microsoft provide deliver real business value. The advantages of utilizing Serverless functions and capabilities, along with the many other public cloud offerings, greatly outweigh any perceived loss of control over the infrastructure.
As with any system or infrastructure in the public cloud, your systems and data are ultimately your own responsibility. This means ISVs need to architect Serverless systems to be fault tolerant and highly available beyond the cloud provider’s built-in HA capabilities, regaining some control of the overall architecture through software design. By utilizing multiple cloud regions for Serverless architecture, ISVs can provide resiliency against outages in a cloud provider’s infrastructure and networks.
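One way to regain that control in software is a simple multi-region failover wrapper. The sketch below is illustrative only: the region names are examples, and the per-region invokers are hypothetical stand-ins for real SDK clients (e.g. one cloud SDK client per region); the pattern, not the SDK, is the point.

```python
# Minimal multi-region failover sketch: try each region's invoker in
# preference order and return the first successful result.
REGIONS = ["us-east-1", "us-west-2"]  # hypothetical, ordered by preference

def invoke_with_failover(invokers, payload):
    """invokers maps region name -> callable that invokes the function there."""
    last_error = None
    for region in REGIONS:
        try:
            return region, invokers[region](payload)
        except Exception as exc:  # a real client would catch specific error types
            last_error = exc      # remember the failure and try the next region
    raise RuntimeError(f"all regions failed: {last_error}")
```

In practice the invokers would also need timeouts and retry backoff, but even this shape shows how software design, rather than the provider’s HA alone, keeps the application available during a regional outage.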
The platforms offered by today’s public cloud vendors are, in essence, large “shared” systems: multiple tenants make use of the same underlying public cloud platform, and the same is true of Serverless offerings. While Serverless offerings may be touted as “dedicated,” providers still impose limits on the Serverless resources that can be utilized in the public cloud environment. Offerings such as AWS Lambda have built-in scaling that handles horizontal scaling automatically, making the Serverless function elastic based on the resources needed. ISVs do need to ensure that any function written for the Serverless ecosystem assumes horizontally scaled parallelism so the platform can do the heavy lifting of autoscaling automatically.
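“Assuming horizontally scaled parallelism” mostly means writing functions that hold no in-process state and tolerate duplicate deliveries, since the platform may run many copies concurrently. The sketch below illustrates this under assumed names: the record shape is hypothetical and the store stands in for a managed database that holds all state.

```python
# Sketch of a handler written for horizontal scale-out: all state lives
# in an external store, and an idempotency key makes duplicate deliveries
# safe to process by any of the parallel copies the platform spins up.
def make_handler(store):
    """store (a stand-in for a managed database) holds all state; the function holds none."""
    def handler(event, context=None):
        key = event["record_id"]  # idempotency key from the event
        if key not in store:
            store[key] = event["value"] * 2  # the actual "work", done at most once
        return store[key]
    return handler
```

Because the function itself is stateless, the autoscaler can add or remove copies freely without coordination between them.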
Serverless, being a relatively new offering for most public cloud vendors, is being crafted into a solution as perceived from each particular vendor’s point of view. The standardization and exact specifications of Serverless are still being determined. With each vendor developing and molding its Serverless offering to fit its particular idea of what the technology should be, choosing one offering over another can lead to a degree of vendor lock-in.
Customers, including ISVs, must architect at least part of the code for a Serverless solution around the offering of their chosen public cloud vendor. Navigating the ins and outs of each vendor and its particular Serverless offering can certainly be challenging. ISVs can align themselves with skilled Serverless solution providers who can help navigate the often-difficult nuances of the various public cloud vendors and their offerings, including Serverless.
As mentioned already, Serverless is a relatively new cloud-based technology that customers are only now beginning to understand and utilize effectively. Part of the challenge is that the technology itself, its specifications, and the offerings from cloud vendors are in a “state of flux.” The specifications and ideas around the technology and its various use cases continue to progress and be molded around current and new applications. While this drives the excitement for Serverless and enables new ways to quickly and efficiently solve today’s development challenges, it creates its own set of problems: constantly changing standards, offerings, services, and other requirements add complexity and implementation difficulty for ISVs. However, a well-architected Serverless solution built on good development practices and skilled Serverless experience can help alleviate these challenges.
When developing applications that use Serverless components, ISVs must take the startup latency of the Serverless component itself into consideration. Why? Most runtimes require a startup period while the runtime is provisioned. Since Serverless functions are provisioned on-demand, they are subject to the startup time required to instantiate the function code. For those not accustomed to developing with Serverless functions, accounting for this startup time is crucial: if it is not factored into the application design, end users can experience undesirable application lag. While this can be a challenge with Serverless applications, effective application design is key to ensuring Serverless components perform as expected for ISVs making use of them in their application delivery.
Testing platforms and solutions is a critical part of any production enterprise solution and development methodology. Traditional development has well-defined principles and methodologies for testing systems. With Serverless solutions, however, both the tooling and the testing of Serverless components can be much harder to get a handle on. Serverless code is instantiated by triggers and is ephemeral, which makes integration testing much more difficult. Integration testing combines and tests individual components as a group rather than individually, exposing issues in the interactions between the components of the overall system.
The abstraction and ephemeral existence of Serverless make this much more challenging. ISVs accustomed to traditional means of integration testing must adapt to the new challenges of testing that involves Serverless components. Serverless function code can benefit greatly from the Hexagonal architecture, which breaks function code up by “layers of responsibility.” With these and other design techniques, ISVs can successfully perform integration testing involving Serverless code.
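A minimal sketch of that layering, using hypothetical names: the business logic is a pure function with no cloud dependencies, and the handler is only a thin adapter that unpacks the trigger event. The core can then be integration-tested locally without instantiating the ephemeral Serverless runtime at all.

```python
# "Layers of responsibility": pure domain logic separated from the
# trigger adapter, so tests never need the cloud runtime.
def apply_discount(order_total, percent):
    """Core domain logic: no cloud APIs, trivially testable."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(order_total * (1 - percent / 100), 2)

def handler(event, context=None):
    """Thin adapter: unpacks the trigger event and delegates to the core."""
    return {"total": apply_discount(event["total"], event["percent"])}
```

Only the thin adapter then needs to be exercised against the real cloud environment, which shrinks the surface area of the hard-to-test ephemeral code.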
Today’s ultra-complex and demanding application development requirements push ISVs toward cutting-edge technologies such as Serverless. While Serverless is new, sexy, and abstracts code away from the underlying hardware, there are hidden challenges to making use of this cloud-based technology in application development.
Hidden challenges of Serverless include the newness of the technology, relinquishing some control, multitenancy and resource limits, startup time, testing challenges, and vendor lock-in, to name a few. Businesses, including ISVs, need to architect their Serverless applications skillfully to take these challenges into account. This allows them to harness the powerful capabilities of Serverless architecture while minimizing the challenges inherent in Serverless offerings.