Can Middleware survive the Serverless-enabled Cloud?

Written by srinathperera | Published 2018/06/07
Tech Story Tags: serverless | software-development | software-architecture | microservices | cloud-computing


A Thought Experiment

Would you join me in a thought experiment to explore the possibilities?

Here are our assumptions.

  1. Assume Serverless has crossed the chasm and commands 50% of new software, and that there are three mega-clouds.
  2. Assume that, due to the price war between mega-clouds, it is cheaper to get a machine in the cloud than to run one at home.
  3. Assume most new apps are written using an IDE that simplifies Serverless development. Users come to the IDE, write multiple functions, wire the functions together, and make the app work. They can run it locally as they develop, debug, step through execution, and trace, and when it is working, ask the IDE to deploy it to the cloud. The IDE also supports versioning and CI/CD (see the sketch after this list).
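
To make the third assumption concrete, here is a minimal sketch of the "write multiple functions, wire them, run locally" workflow. It assumes a Python, AWS Lambda-style (event, context) handler signature; the function names and the local composition are hypothetical stand-ins for what such an IDE might generate.

```python
# A minimal sketch of the "write functions, wire them, deploy" workflow.
# Assumes an AWS Lambda-style handler signature (event, context); the
# function names and the local test harness below are hypothetical.

def validate_order(event, context):
    """First function: reject orders with no items."""
    if not event.get("items"):
        return {"status": 400, "error": "empty order"}
    return {"status": 200, "order": event}

def price_order(event, context):
    """Second function: sum up the line-item prices."""
    total = sum(item["price"] * item["qty"] for item in event["order"]["items"])
    return {"status": 200, "total": total}

def handle_order(event, context=None):
    """Wiring: the IDE would generate this composition and let you step
    through it locally before deploying each function to the cloud."""
    validated = validate_order(event, context)
    if validated["status"] != 200:
        return validated
    return price_order(validated, context)

if __name__ == "__main__":
    # Local run, as the IDE's debugger would do before deployment.
    print(handle_order({"items": [{"price": 9.99, "qty": 2}]}))
```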

Having accepted the assumptions, imagine Bezos's smile, Microsoft following closely, Google hanging on with its vast coffers, and finally, the smile of a kid who got an app running on Serverless in an hour.

What would middleware look like then, and what will be its future? Let us divide and conquer and explore each type of middleware in turn.

Anatomy of an Application

The bottom layer of a typical application includes several services and a database. Services might also depend on core middleware such as message brokers, caches, and stream processors.

The next layer includes integration middleware such as ESBs, workflow engines, and API management tools. These compose multiple services to deliver specific business use cases.

The next layer provides the end-user experience, often via a web app that runs in the browser or a mobile application.

Finally, there are helper services that enhance the experience and the environment the application operates in and make the service lifecycle as painless as possible.

Middleware supports each of these layers. Let us explore how each of these is affected by a Serverless-enabled cloud.

Impact on Core Middleware

This category includes middleware that helps write and host services and applications (e.g., application servers, web servers) and middleware directly used by those applications (e.g., databases, message brokers, caches, and stream processors).

Core middleware is where the pitched battle will be fought. Due to the high latencies of wide-area networks, the rest of the middleware stack will be co-located with the core middleware.

Application servers are in trouble: Serverless is a direct competitor. Most new apps are likely to follow a microservices architecture, and Serverless is a natural extension of microservices. Microservices have already broken monoliths down into loosely coupled services, and moving such applications to Serverless is easy.
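
As a rough illustration of why that migration is comparatively easy, the sketch below shows the same business logic exposed first as a microservice endpoint and then as a serverless handler. The Flask route (left as a comment) and the Lambda-style (event, context) signature are assumptions, not a prescription.

```python
# A microservice endpoint and the same logic as a serverless function
# differ mostly in the wrapper, which is why the migration is easy.
# The commented Flask route assumes Flask is installed; the handler
# assumes an AWS Lambda-style (event, context) signature.

def convert_currency(amount: float, rate: float) -> float:
    """The business logic is identical in both hosting models."""
    return round(amount * rate, 2)

# --- Microservice style (e.g., Flask) ---
# @app.route("/convert")
# def convert():
#     return {"converted": convert_currency(float(request.args["amount"]),
#                                            float(request.args["rate"]))}

# --- Serverless style ---
def handler(event, context=None):
    return {"converted": convert_currency(event["amount"], event["rate"])}

print(handler({"amount": 100.0, "rate": 1.1}))
```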

A Serverless-enabled cloud will absorb databases, message brokers, and stream processors into the PaaS (Platform as a Service). We have two questions to answer. First, will there be enough standalone use cases left for middleware? Second, if Serverless takes over most development, will the mega-clouds choose to take over the middleware layer as well?

Serverless is opinionated, so users will, from time to time, run into scenarios that do not match their expectations. On one hand, this might be reason enough for middleware to survive. On the other hand, handling such scenarios requires well-orchestrated customer service. Moreover, the middleware market depends on relationships and customer service, where the vendor and the organization both view the engagement as a partnership, not a take-it-or-leave-it black box. However, customer service and relationships have never been part of the mega-clouds' DNA. There will be a customer service gap, a gap that goes against the mega-clouds' business model and is hard to fill. If the mega-clouds do not fix it, they will effectively ignore the long tail of applications. This gap can give middleware companies enough room to survive, to chip away at the mega-clouds, and even to win the long game. Alternatively, the mega-clouds can choose to share some revenue and let middleware vendors handle customer service. So there is hope.

Second, even if Serverless controls the bulk of the market, it is not clear whether mega-clouds will absorb all middleware and kill independent projects. Middleware is a hard problem; it took decades, hordes of the best minds, and furious disagreements to arrive at the status quo. Without maintenance, middleware code rusts as the endpoints, hardware, and businesses it represents evolve. Taking it all over may be too large an undertaking even for mega-clouds.

Most of these middleware projects have vibrant open-source communities. In that case, mega-clouds could join in, contribute, guide those projects to market leadership, and even do some marketing on their behalf. Independent middleware that is not part of such an open-source project will then be in trouble and wither away. Either because it is better or because the real competition is elsewhere, multiple mega-clouds may even cooperate to build one project, as in the case of Kubernetes.

When non-open-source players lead the market, mega-clouds will pay a licensing fee or share revenue, as is currently done with Oracle databases and Microsoft Windows. On one hand, the market for independent middleware development will contract with the ascent of Serverless. On the other hand, middleware companies will gain new revenue streams by licensing to Serverless platforms. However, in this relationship the mega-clouds have the advantage in pricing negotiations, so middleware companies will see revenue grow while margins erode. It is not clear which effect will win.

If the licensing fee is too large and the law does not protect the middleware, mega-clouds might choose to write a new version. The outcome will depend on the complexity of the middleware. Rewriting is no simple undertaking; they may find that they lack the talent and the understanding of the problem only after burning millions. Complexity is the enemy of mega-clouds; if they try rewriting, they might break under their own size, like ancient empires.

When the law protects the middleware, or when it is too complicated to rebuild, mega-clouds can buy it. A mega-cloud could operate those companies, controlling the margin for its own use while selling to standalone use cases at higher margins.

Impact on Integration Middleware

Integration middleware forms the next layer, composing and integrating functions, services, and APIs. Examples include ESBs, workflow engines, and API management tools.

You might argue that integration middleware is not Serverless. That is why this post talks about the "Serverless-enabled cloud" rather than Serverless alone: the competition is between iPaaS and on-premise integration middleware.

However, the battle is likely won or lost elsewhere. The performance of integration middleware depends on the latency to reach the services it integrates. If most services, APIs, and functions are in the cloud, then integration middleware in the cloud has a significant advantage over its on-premise counterparts, and vice versa. If both the integration middleware and the services it integrates are in the cloud, they have much better network connections between them and hence lower latency. Moreover, mega-clouds may be able to co-locate dependent parts of the same application as an optimization.
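
A back-of-the-envelope calculation shows how quickly that gap compounds; the round-trip times and call counts below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope latency for an integration flow that calls several
# services sequentially. The RTT figures are illustrative assumptions:
# roughly 1 ms within a cloud region versus roughly 40 ms over a WAN.

IN_CLOUD_RTT_MS = 1.0
WAN_RTT_MS = 40.0
SEQUENTIAL_CALLS = 5      # services composed by the integration layer
PROCESSING_MS = 10.0      # work done by the middleware itself

def composite_latency(rtt_ms, calls=SEQUENTIAL_CALLS, processing_ms=PROCESSING_MS):
    """Total latency = middleware's own work + one round trip per call."""
    return processing_ms + calls * rtt_ms

print("co-located in cloud:", composite_latency(IN_CLOUD_RTT_MS), "ms")  # 15 ms
print("across the WAN:     ", composite_latency(WAN_RTT_MS), "ms")       # 210 ms
```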

For the limited cases where latency does not matter, the outcome will depend on the feature gap between iPaaS and on-premise middleware. Historically, on-premise middleware has stayed ahead in terms of features, but it will have an uphill battle to fight as iPaaS matures.

Impact on Helper Service Middleware

Helper services include security, observability, tracing and debugging, operations, anomaly and fraud detection, and application lifecycle management (e.g., CI/CD, versioning, and deployment schemes such as canary releases and blue-green deployments).
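
To ground one of these, here is a minimal sketch of the traffic-splitting decision behind a canary release; the 5% weight is an arbitrary example, and real platforms implement this in load balancers or service meshes rather than in application code.

```python
import random

# A minimal sketch of the decision behind a canary release: send a small,
# configurable fraction of traffic to the new version and the rest to the
# stable one. The 5% weight is an arbitrary example value.

CANARY_WEIGHT = 0.05  # fraction of requests routed to the new version

def choose_version(rng=random.random):
    """Route a single request: 'canary' with probability CANARY_WEIGHT."""
    return "canary" if rng() < CANARY_WEIGHT else "stable"

# Rough check of the split over many simulated requests.
sample = [choose_version() for _ in range(10_000)]
print("canary share:", sample.count("canary") / len(sample))
```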

In my opinion, this is where the mega-clouds are strongest. Since they control the environments applications run in, they can invest in exceptional helper services that a developer gets by default. This process has already begun. Although the same is possible in on-premise deployments, setting everything up and making it work takes significant time, energy, and vision, so such environments are only available to developers in large organizations.

Mega-clouds will not directly affect on-premise helper middleware; instead, they will use the helper middleware themselves. However, helper middleware vendors might find their user base increasingly moving to the cloud, which leads to dynamics similar to those we discussed for core middleware.

Serverless platforms will also challenge editors and tools, which will need to build close integrations with Serverless. Mega-clouds will encourage editors and tools to support Serverless. However, unlike other middleware, editors are less threatened by Serverless. For example, developers can build their applications on their laptops and push them into the Serverless cloud without any runtime latency differences. It is likely that mega-clouds will be happy to leave editors alone.

Private Serverless Platforms

Concerns about "vendor lock-in" due to a lack of standards are the biggest risk faced by Serverless. The real concern is not the serverless functions themselves but the helper and platform services, such as security and observability, required by those functions, without which it is impossible to build meaningful applications. In the absence of standards for those API calls, the way SQL standardizes database access, it is hard to abstract those services away efficiently.
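
To see why the lock-in sits in the platform services rather than in the functions themselves, consider a sketch like the following. The ObjectStore interface and the in-memory backend are hypothetical; real provider SDKs differ not only in names but in semantics (consistency, authentication, error handling), which is exactly what a missing standard fails to paper over.

```python
from abc import ABC, abstractmethod

# Hypothetical portability layer over one platform service (object storage).
# The interface and the backend below are illustrative only; real provider
# SDKs differ in semantics as well as naming, which is what makes
# abstracting them without a standard so hard.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend, useful for running a serverless function locally."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def handler(event, store: ObjectStore, context=None):
    """A function written against the interface instead of one vendor's SDK."""
    store.put(event["key"], event["payload"].encode())
    return {"stored": event["key"]}

if __name__ == "__main__":
    print(handler({"key": "invoice-1", "payload": "hello"}, InMemoryStore()))
```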

The current Serverless market leaders are resisting standardization. It is not clear which the mega-clouds should worry about more: other mega-clouds or customers' antipathy to vendor lock-in. If mega-clouds cooperate to make applications portable, it could expand the pie enough that everyone is better off. It will take time for those scenarios to play out. Another possibility is standardization enforced by governments; although the chances are small, it is not impossible in a world where GDPR is possible. Standardization, in any form, will significantly hasten the adoption of Serverless.

Private Serverless Platforms (PSPs), such as Apache OpenWhisk, try to exploit this concern. The argument is that an organization can get most of the advantages of Serverless by running a PSP without depending on mega-clouds. This is a strategic answer by middleware companies like IBM to the Serverless threat faced by middleware.

Private Serverless platforms, however, face two challenges.

First, without platform services, PSPs lose most of their vitality. PSPs must add databases; without them, the stateless nature of Serverless is a deal breaker. The effect of lacking other platform services remains to be seen. My bet, sadly, is not on PSPs.
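
The statelessness point can be made concrete with a small sketch: the function below keeps nothing between invocations and pushes its only state into an external store. The SQLite file stands in for whatever database a PSP would have to bundle.

```python
import sqlite3

# A serverless-style function is stateless between invocations, so any
# state (here, a simple per-user counter) must live in an external
# database. SQLite stands in for the database a PSP would need to offer.

def handler(event, context=None, db_path="counters.db"):
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS counters (user TEXT PRIMARY KEY, n INTEGER)"
        )
        user = event["user"]
        row = conn.execute("SELECT n FROM counters WHERE user = ?", (user,)).fetchone()
        n = (row[0] if row else 0) + 1
        conn.execute("INSERT OR REPLACE INTO counters (user, n) VALUES (?, ?)", (user, n))
        conn.commit()
        return {"user": user, "count": n}
    finally:
        conn.close()

if __name__ == "__main__":
    print(handler({"user": "alice"}))
    print(handler({"user": "alice"}))  # the count survives because it lives in the database
```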

Second, PSPs provide cost savings only if the organization can run a Serverless platform large enough to offer economies of scale. Running a PSP on a mega-cloud's IaaS offering would fail; if it worked, the mega-cloud could, in response, make its IaaS more expensive. Instead, it is interesting to explore the possibility of multiple small organizations securely pooling their resources into one Serverless platform.

Microsoft has also taken an interesting position by open-sourcing most of its Serverless platform implementation, so that companies can choose to run the same tools on-premise as well.

What Might Not Change?

Machine learning algorithms are already moving towards specialized hardware, towards GPUs, and towards systems crafted for performance.

It is unlikely that Serverless will absorb those machine learning workloads. The same is true for low-latency applications such as algorithmic trading, systems managing utilities, and AR and VR applications.

Serverless can fight back by offering support for physically co-locating related applications on the same machine, using optimization techniques such as machine learning or human discretion, much as Kubernetes did with pods. That would handle some use cases, but not all.

Serverless can also grab a part of the market by providing pre-canned versions of well-known Machine Learning algorithms.

Even if this market segment resists Serverless, in the grand scheme of things it is only a small part of the market. It will shrink with time and only prolong the inevitable.

Conclusion

Middleware shall live in interesting times. However, all is not lost. A lot will depend on core middleware: on-premise middleware cannot afford to let service hosting move to the cloud (e.g., via Serverless). If service hosting is lost, the dominoes will start to fall, and all else will be lost. In my opinion, on this battleground patents and trade secrets will serve middleware vendors better than open-source software, which plays right into the hands of the mega-clouds.

Hope this was useful. If you enjoyed this post, you might also find the following interesting.

Mastering the Four Balancing Acts in Microservices Architecture (medium.com)

Chronicle of Big Data: A Technical Comedy (hackernoon.com)
