
Probably the biggest change that has happened in my lifetime of programming is the transformation from creating code that is meant to run on known, tangible hardware to making code that runs on the Cloud. We’ve gone from server-based computing to the serverless environment. The transformation has brought us the practice of DevOps. It has also forced us to rethink the whole way we design our code. Increasingly, modern programming is about stitching together cloud-based resources that provide distinct and increasingly granular services into a single application. These applications are then delivered to consumers via mobile devices or as a distinct thing in the Internet of Things.

The change is not going to slow down. And the implications that result are worth thinking about. I do, a lot. I like to imagine the future based on what I know today. Heck, I am a developer; that’s what we do, imagine things and then make them real.

So, in the spirit of imagining the technological landscape that will be the result of serverless computing, I make 3 predictions:

  • Increased lock-in behavior on the part of cloud providers
  • Improved Service Discovery
  • Reinvention of Debugging Technology

Allow me to elaborate.

Increased lock-in behavior on the part of cloud providers

In the early twentieth century there were hundreds of automobile manufacturers in the United States. Today there are a handful. When I started out programming back in the 80s there were hundreds of computer manufacturers. Again, a handful remain. Today there are many providers of cloud services, but if history proves correct, the number will dwindle to a select few. That’s the way it goes. Larger corporations have the economies of scale to create better products more cheaply. Smaller companies can’t compete.

The consolidation is underway. Presently there are 4 major service providers in the compute services space. They are Amazon Web Services (AWS), Azure, Google Compute Engine (GCE) and IBM’s SoftLayer. These providers will be the “servers” in our future.

As good as consolidation is, it comes with a drawback. Big companies need to stay big in order to survive. To stay big you need market share. There are two ways to grow a market: find new customers or take customers from someone else. Finding a new customer is hard. Keeping an existing customer from being stolen by others is harder. The typical tactic companies use to keep market share is to make sure the customer is thinking about your stuff to the exclusion of others. Time is finite. The more time you are focused on me, the less time you are focused on others. It’s called lock-in. It’s a zero-sum game, and we’re going to see more of it.

Each provider does and will continue to work a little differently. As a result, your employment viability will be determined, in part, by your competence with a given provider. Thus, in the near future, being a competent Java developer might not be enough. You’ll need to be both a competent Java developer and a competent consumer of many of the products offered by Amazon Web Services, or Azure, or GCE, or SoftLayer, depending on the provider your employer uses.

Service providers are not going to let up. We’re already seeing certifications offered by AWS, Azure, Google and IBM. The value of these certifications will increase as corporate consumers become more heavily locked into a given service provider.

Half our effort as developers will be to create code that meets the requirements at hand. The other half of our labor will be deploying that code to a particular provider. Increased lock-in means we’re going to start seeing a lot more adapter technology that makes agnostic code work in a particular service provider’s environment. Again, the service providers are not going to let up and start supporting cooperative standards. There’s too much at stake.
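To make the adapter idea concrete, here is a minimal sketch in TypeScript. The interface and class names are hypothetical, and the actual provider SDK calls are only hinted at in comments; the point is that application code depends on a provider-agnostic interface while a thin adapter per provider does the translation.

```typescript
// Hypothetical provider-agnostic interface for storing blobs of data.
interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array>;
}

// One adapter per provider; the real SDK calls would go where the comments are.
class AwsObjectStore implements ObjectStore {
  async put(key: string, data: Uint8Array): Promise<void> {
    // ...call the AWS SDK here (e.g., an object upload)...
  }
  async get(key: string): Promise<Uint8Array> {
    // ...call the AWS SDK here...
    return new Uint8Array(0);
  }
}

class AzureObjectStore implements ObjectStore {
  async put(key: string, data: Uint8Array): Promise<void> {
    // ...call the Azure SDK here (e.g., a blob upload)...
  }
  async get(key: string): Promise<Uint8Array> {
    // ...call the Azure SDK here...
    return new Uint8Array(0);
  }
}

// Application code stays provider-agnostic; only the wiring changes per provider.
async function saveInvoice(store: ObjectStore, id: string, pdf: Uint8Array) {
  await store.put(`invoices/${id}.pdf`, pdf);
}
```

Swapping providers then becomes a matter of swapping the adapter you hand to your application code, not rewriting the application itself.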

Improved Service Discovery

At the service level, modern computing is becoming more granular. In the old days we’d create a program that had one or many features, provision a VM and deploy. Our program might require a message queue, a database and some file storage. We’d provision the VM accordingly and away we’d go.

Today your program might use a message queuing service, a database service, a CDN for static content and an identity service for user management. Each of these services is discrete and bounded to a narrowly defined purpose. You access the service via a URL. No libraries are needed except some sort of HTTP client.

As time goes on, services will become more fine-grained. More of the code we write will be about stitching together the results of a number of calls to URLs into a single, valuable resource. You can think of each of these resources as a microservice.
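Here is a minimal sketch of that stitching, assuming two hypothetical services (the URLs and response shapes are made up for illustration); any HTTP client would do.

```typescript
// Hypothetical response shapes from two fine-grained services.
interface Profile { id: string; name: string; }
interface Order { id: string; total: number; }

// Compose two service calls into one "customer summary" resource.
async function getCustomerSummary(customerId: string) {
  // Each capability lives behind its own URL; no SDK needed, just HTTP.
  const [profileRes, ordersRes] = await Promise.all([
    fetch(`https://identity.example.com/profiles/${customerId}`),
    fetch(`https://orders.example.com/customers/${customerId}/orders`),
  ]);

  const profile = (await profileRes.json()) as Profile;
  const orders = (await ordersRes.json()) as Order[];

  // The "new" resource is simply the composition of the two calls.
  return {
    name: profile.name,
    orderCount: orders.length,
    lifetimeValue: orders.reduce((sum, order) => sum + order.total, 0),
  };
}
```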

Microservices are great provided you can find them. Finding microservices is called service discovery. Service discovery is presently a work in progress. We’re seeing a lot of scholarly work on the topic. And providers are trying to offer service discovery tools in their product lines. AWS has Config Console and Azure has Resource Manager. Netflix has taken a more detailed approach to the need by building Eureka, a tool to find the network location of services hosted on AWS for the purpose of network resource management. These works in progress are useful, but each is still tied to a given service provider. And there is still no way to do something as simple as query the Internet, asking, “Where is a microservice that will do facial recognition against my store of face images?” A real-world, service-provider-agnostic standard is not in place yet.
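No such provider-agnostic registry exists today, so the sketch below is purely speculative: it only shows the rough shape a discovery query might take, with an imagined registry endpoint and listing format.

```typescript
// Hypothetical listing returned by an imagined, provider-agnostic registry.
interface ServiceListing {
  name: string;
  capability: string;   // e.g. "facial-recognition"
  endpoint: string;     // base URL to call the service
}

// Ask the (imagined) registry for services offering a given capability.
async function findServices(capability: string): Promise<ServiceListing[]> {
  const res = await fetch(
    `https://registry.example.com/search?capability=${encodeURIComponent(capability)}`
  );
  return (await res.json()) as ServiceListing[];
}

// "Where is a microservice that will do facial recognition?"
// const candidates = await findServices("facial-recognition");
```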

As the publication and use of microservices grows, discovery will become more important. At some point soon, somebody or some group will create an easy-to-use, easy-to-understand way to find a microservice. The need is there. Given the historic trends, it’s only a matter of time before someone creates a solution to meet the need at hand.

Reinvention of Debugging Technology

A debugger is to a developer as a ruler is to a carpenter. Developers really can’t work without one. Without the ability to step through source code and inspect the call stack, variables and expressions, we’re flying blind. The days of using a console.log() or print() statement to figure out what is going on have long passed. But in the world of distributed, service-based computing we’re still in the Dark Ages. The usual way for a developer to debug system behavior is via log file inspection, which, when you think about it, is really just a highfalutin way to read console.log() output.

The trend in technologies that improve code quality and speed of deployment is to keep moving toward better solutions. Every year more tools come out that allow us to create better code faster and to deploy that code with less friction in the pipeline. The elephant in the living room is system-level debuggers. Too many late nights are spent going over log files trying to figure out what’s going wrong. History has shown us that things must and will change. It’s only a matter of time before a company sits down and realizes that the Ops side of DevOps needs the debugging capability that the Dev side has enjoyed for decades. Somebody will create a reliable, easy-to-use system-level debugging framework, and when they do, the industry will gobble it up.

Putting it All Together

We are living in exciting times. Mobile computing and the Internet of Things are taking us to uncharted territory on the technological landscape. These times will come with opportunities and pitfalls. But history has shown us that technology moves forward, most times for the better. Hopefully the predictions I describe above will help you contemplate a path forward that works for you. Regardless of whether I am proven right or wrong, a few things are certain: the age of Serverless computing is here, the implications need to be contemplated, and those who adapt will play an important part in the future of our profession. The choice is yours. Me? I’m adapting.