What is Serverless — Part 2: Challenges and Considerations for Choosing the Right Serverless…

Written by vamsi.chemitiganti | Published 2018/11/21
Tech Story Tags: serverless | cloud-computing | continuous-delivery | kubernetes


See the first part of this five-part blog series here.

AWS Lambda? Azure Functions? OpenWhisk? Fission?

As you consider Serverless and look for ways to get started (and shed your infrastructure worries), here are some considerations and ‘gotchas’ to be aware of when choosing a Serverless solution that can support the needs of large-scale enterprises today.

1. Lock-in to a particular cloud provider

This is an obvious one. All the leading cloud providers lock customers into the unique implementation of their Serverless framework. For instance, AWS Lambda relies on a panoply of AWS offerings across DNS (Route53), API Gateway, S3, databases, networking (VPCs), etc. These proprietary components are needed to compose complex serverless applications, which means that Lambda functions, for example, are not portable to other cloud providers. Once written, porting or re-using these functions in other environments is next to impossible, since it is not just the application logic and functionality that needs to be rewritten, but also all the essential services provided by the cloud provider.

Essentially, you could be swapping the tight coupling between the app components and the infrastructure for another type of dependency. This is a problem, particularly since the world of modern software delivery has consistently demonstrated that striving for as much de-coupling as possible, along with object re-use and portability, is critical to ensuring business agility and ease of operations.

In addition to this dependency, each cloud provider imposes limitations that developers need to be aware of when choosing their preferred service. For example, AWS Lambda limits the deployment artifact size (50 MB at the time of writing), the number of concurrent executions, and the amount of memory allocated per invocation.
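A simple pre-deployment check can catch these limits before an upload fails. The sketch below is illustrative only: the limit values reflect AWS's published quotas at the time of writing and should be re-checked against the current documentation, and the function name is our own invention.

```python
# Documented AWS Lambda quotas, as published at the time of writing.
# These change over time -- verify against the current AWS docs.
MAX_ZIPPED_MB = 50      # compressed deployment package, direct upload
MAX_UNZIPPED_MB = 250   # uncompressed code plus dependencies
MAX_MEMORY_MB = 3008    # maximum memory allocatable per invocation

MB = 1024 * 1024

def check_artifact(zipped_bytes: int, unzipped_bytes: int, memory_mb: int) -> list:
    """Return a list of quota violations for a candidate deployment."""
    problems = []
    if zipped_bytes > MAX_ZIPPED_MB * MB:
        problems.append("zipped package exceeds %d MB" % MAX_ZIPPED_MB)
    if unzipped_bytes > MAX_UNZIPPED_MB * MB:
        problems.append("unzipped package exceeds %d MB" % MAX_UNZIPPED_MB)
    if memory_mb > MAX_MEMORY_MB:
        problems.append("memory setting exceeds %d MB" % MAX_MEMORY_MB)
    return problems
```

Running such a check in CI keeps a quota violation from surfacing only at deploy time.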

2. Cost (and hidden costs)

As we touched on in the previous part of this series, the billing advantages of Serverless depend, quite sensitively, on actual usage patterns. In addition, the cost of using a given FaaS framework should not be viewed in isolation from the cost of the surrounding ecosystem services required to run the functions. For example, the financial implications of using Lambda in a large enterprise are not limited to vanilla CPU/RAM/network cost, but also include the associated charges for API Gateway, S3, DynamoDB, sending data across VPCs, etc. Most users find that these charges quickly add up with the public cloud providers.

If your transaction volumes are high (and scaling higher), solutions such as Lambda functions can consume more of your budget than anticipated. Possible mitigations include designing the application so that a larger batch of data is ingested per function call, keeping execution time low by writing more efficient code, and minimizing data transfer across VPCs and Availability Zones (AZs). Cross-VPC transfers require Lambda functions to open Elastic Network Interfaces (ENIs), which lengthens execution times and adds charges for the transfers themselves.
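The batching point is easy to see with a back-of-the-envelope cost model. The sketch below uses Lambda's published 2018 rate card for the raw compute portion only (it deliberately ignores API Gateway, data transfer, and the other ecosystem charges noted above); the function and workload numbers are illustrative assumptions, not real measurements.

```python
# Illustrative prices from AWS Lambda's published 2018 rate card.
# Real bills also include API Gateway, data transfer, S3, etc.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.00001667    # USD

def monthly_compute_cost(invocations: int, avg_ms: float, memory_mb: int) -> float:
    """Estimate the raw compute portion of a month's Lambda bill."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# Ingesting 10 records per call cuts invocations tenfold; even if each
# call then runs three times longer, the bill still drops sharply.
unbatched = monthly_compute_cost(100_000_000, 100, 512)
batched = monthly_compute_cost(10_000_000, 300, 512)
```

The per-request charge scales with call count while the GB-second charge scales with total work, so batching attacks the first term directly.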

Whatever the fix, it stands to reason that Functions should also be offered on private cloud and on-premises infrastructure as well.

3. Startup Latency

One issue pointed out by various users of the public clouds has been the cold start challenge associated with using FaaS frameworks.

Once a (Lambda) function has not been invoked for a certain length of time, the system reclaims the resources it held, meaning additional spin-up time is required to restart the function: instantiating another container, loading its dependencies, and only then serving the request. For real-time or near-real-time applications, such as IoT or cognitive applications serving live end-users, even 100 ms of added latency is too high.
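One common mitigation is to keep expensive initialization at module scope, where it runs once per container on the cold start and is reused by every warm invocation that follows. The sketch below simulates that pattern; the handler, the client cache, and the sleep standing in for dependency loading are all invented for illustration.

```python
import time

# Module-level cache: in a real FaaS container, objects created here
# survive across warm invocations and are rebuilt only on a cold start.
_clients = {}

def _get_client(name: str) -> dict:
    if name not in _clients:
        time.sleep(0.05)  # stand-in for loading dependencies, opening connections
        _clients[name] = {"name": name, "ready": True}
    return _clients[name]  # warm path: cached, near-instant

def handler(event: dict) -> dict:
    client = _get_client("db")  # pays the setup cost at most once per container
    return {"ok": client["ready"], "payload": event}
```

The pattern does not eliminate the cold start, but it ensures each container pays the initialization cost only once rather than on every call.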

In contrast, the open source Serverless framework Fission lets you pre-tune a pool of reserved resources across a spectrum of settings, ensuring your application is ready with minimal latency.

4. Serverless Applications on Private Cloud/On-prem

Sometimes, you want to own and have full control over your infrastructure and ensure easy portability between environments. Maybe your workload is too business critical, maybe your business organization is not super comfortable adding dependencies to new cloud services. Maybe you want more visibility into the development of the systems you use. And, most commonly, you may want to save on IT costs by leveraging your existing infrastructure, rather than increasing your public cloud footprint.

Still, even when using on-prem infrastructure, you want to enable your developers to modernize their applications and take advantage of new patterns such as Serverless.

In the private cloud, most serverless implementations are built on a PaaS platform. The limiting model of a PaaS calls into question running a serverless framework on top of it; in that sense, serverless frameworks have been bolted onto commercial PaaS offerings as an afterthought. The lock-in around such integration makes this a difficult proposition, as it adds another layer of complexity to an already complex architecture. The net result is that technical debt compounds in the case of inefficiently designed applications.

5. Complex CI/CD toolchains

FaaS frameworks are still evolving, and their place in the complex CI/CD toolchain is still being formed. It will take significant upfront investment and diligence by development teams to integrate Serverless frameworks into their Continuous Delivery pipelines.

For instance,

  • A newly developed or modified function needs to be passed through a chain of checks — from unit testing to UAT — before being promoted to Production. This can make the process more cumbersome.
  • For FaaS, additional load and performance testing needs to be in place for each individual function. This is critical before deploying these to Production.
  • Rollback and rollforward capabilities need to be put in place for each function.
  • The Ops team needs to get involved much earlier compared to microservices-based development.
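The promotion chain above can be sketched as a small pipeline driver: each function is passed through a sequence of checks, deployed only if all pass, and rolled back if the deploy itself fails. This is a hypothetical sketch, not a real CI tool's API; the stage names and callbacks are our own.

```python
# Hypothetical per-function promotion pipeline mirroring the checks
# listed above: unit tests through UAT, then deploy with rollback.
def promote(function_name: str, checks: list, deploy, rollback) -> str:
    """Run (stage, check) pairs in order; deploy only if all pass."""
    for stage, check in checks:
        if not check(function_name):
            return "failed at %s" % stage   # stop the promotion early
    try:
        deploy(function_name)
    except Exception:
        rollback(function_name)             # restore the last good version
        return "deploy failed; rolled back"
    return "promoted to production"
```

In a real pipeline the checks would be test-suite invocations and the deploy/rollback callbacks would call the FaaS platform's versioning APIs; the point is that every individual function needs this full chain.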

6. Siloing of Serverless Operations from Other IT Ops

Developers may be exempt from worrying about servers. Ops teams, however, particularly in large enterprises that operate in complex hybrid environments, still need visibility into Serverless applications and the ability to manage their footprint. This is doubly true if you’re trying to enable Serverless on a private cloud.

While Serverless, like other technologies, may involve specific tools or services, IT still needs to be able to have a single pane of glass and granular visibility and control over ALL types of applications (legacy, microservices, serverless) — across ALL environments — be it on-premises, public clouds, private cloud, containers, and more. You need a solution that allows Ops to incorporate Serverless-based applications into their overall IT strategy, processes and tools — like they would any other type of application.

7. Visibility and Monitoring

On Lambda, for example, the biggest complaint from users is that they cannot see what is going on. In contrast, the open source Serverless framework Fission provides built-in integration with native Kubernetes monitoring tools, giving you the same visibility into, and troubleshooting of, your Serverless functions that you are accustomed to with other containerized applications.
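To make the visibility point concrete, the sketch below rolls a stream of per-invocation records up into the per-function view (call count, error rate, average duration) that a monitoring stack such as Prometheus scraping a Kubernetes cluster gives you out of the box. The event format here is invented for illustration and does not match any particular tool's schema.

```python
from collections import defaultdict

def summarize(events: list) -> dict:
    """Roll per-invocation records up into per-function statistics."""
    stats = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0})
    for e in events:
        s = stats[e["function"]]
        s["calls"] += 1
        s["errors"] += 1 if e["error"] else 0
        s["total_ms"] += e["duration_ms"]
    return {
        fn: {
            "calls": s["calls"],
            "error_rate": s["errors"] / s["calls"],
            "avg_ms": s["total_ms"] / s["calls"],
        }
        for fn, s in stats.items()
    }
```

Whether this rollup comes from a built-in integration or has to be assembled by hand is exactly the gap between the two experiences described above.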

To be sure, serverless architectures demand a higher level of technological and cultural maturity from the enterprises adopting them. The next post in this series will discuss what can be done about this critical enterprise architecture challenge by leveraging Kubernetes.


Written by vamsi.chemitiganti | Chief Strategist at Platform9 (www.platform9.com)
Published by HackerNoon on 2018/11/21