Serverless is made of servers, or so the meme goes. But is that all there is to say about serverless, as far as infrastructure engineering is concerned? A cheap dismissal and a witty meme? Or does the idea of serverless applications represent a real concern for infrastructure engineers?
Join Chris Wahl and Ethan Banks as they blast through the hot aisle, rip open a rack door, and peer inside to see just what serverless is made of.
And be sure to check out the show notes for a lot more details about the serverless movement and a few informative links.
This episode of Datanauts is brought to you by ITProTV. Enhance your technology aptitude. ITProTV is the resource to keep your I.T. skills up to date, with engaging and informative video tutorials. For a free 7-day trial and 30% off the life of your account, go to itpro.tv/datanauts and use the code DATANAUTS30.
Part 1 – What Is Serverless, Really?
- Before we talk about serverless, what do we mean when we talk about a server? We mean some operating system executing some service. Hardware is implied, but it’s irrelevant. The important bit is that a developer would previously have stood up some process and fired data at it. The process would be running on some manageable, tangible infrastructure.
- Serverless is consuming a cloud-based function from a remote service like AWS Lambda. You don’t need to spin up a VM for some formal process to run on. You fire your data at the function instead. That function is integrated into cloud services on the backend, and scales automatically depending on load. It’s the ultimate abstraction: you are able to create and consume a function, and genuinely never think about infrastructure.
- This article in the Container Journal puts it well: “Serverless application code is built on small, single-purpose functions that can be triggered by cloud events. There is no need to launch or manage a virtual server, or maintain a runtime environment because the serverless application code just runs directly on the supporting platform.”
- What’s AWS Lambda? Here’s a definition from the Fuge blog: “Like IBM OpenWhisk, Google Cloud Functions, and Azure Functions, it’s a service for executing code in response to specific events such as a file being uploaded to Amazon S3, an event stream, or a request to an API gateway.”
- The automatic scaling aspect of this is huge, because it draws a major parallel to containers and container orchestration. Back to the Container Journal for more detail: “There has been a proliferation recently of services aimed at taking microservices to the next level and supporting a serverless application ecosystem. Amazon’s AWS Lambda and API Gateway, Google Cloud Functions, and Azure Container Service (ACS) are all built on the premise of providing a generic layer capable of running a container orchestration solution.”
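To make the event-driven model above concrete, here is a minimal sketch of a Lambda-style function in Python. It assumes the standard Python handler signature (an event dict and a context object); the S3 event shape and the `handler` name are illustrative, not a complete AWS payload.

```python
import json

def handler(event, context):
    # Pull the uploaded object keys out of S3-style event records.
    # A real S3 event carries much more metadata; this is abbreviated.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    # Return an API-Gateway-style response; the platform handles
    # everything else -- no server, VM, or runtime for you to manage.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }

# Local invocation for illustration only; in production, the cloud
# platform calls handler() whenever the triggering event fires.
sample_event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
print(handler(sample_event, None))
```

The point of the sketch is what is *absent*: there is no process to daemonize, no port to listen on, and no host to patch. The platform invokes the function per event and scales the number of concurrent invocations with load.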
Part 2 – The Use Cases For Serverless
- This is about the ultimate divorce of code from infrastructure.
- Thus, at this point, I see a complete reliance on a provider like AWS Lambda to keep and maintain the serverless abstraction layer.
- By abstraction, I mean the function interface developers interact with that magically handles the infrastructure element.
- Is there vendor lock-in here? Are we trading in a dependence on physical servers, containers, and an orchestration system for dependence on a public cloud provider providing us exactly what we need? This feels like Hotel California.
- This is a smart, smart play for AWS. They offer speed of deployment and a new kind of application architecture. That holds huge appeal for developers, while every function invocation spins up AWS compute and creates revenue for AWS. Plus, while the code seems to be separated from infrastructure, I’m not sure that the code is separated from AWS. I don’t *think* we’ve reached 100% code portability here, because as a dev your code is tied to Lambda’s requirements.
- Or is the lock-in really that hard to escape? Descriptors are in JSON. Is switching providers as simple as redefining JSON templates?
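For a sense of what those JSON descriptors look like, here is an abbreviated sketch in the style of an AWS SAM (Serverless Application Model) template. The resource names (`ThumbnailFunction`, `PhotoBucket`) are hypothetical, and a real template would also define the referenced bucket; the point is that the wiring of code to events lives in declarative config, not in the code itself.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "ThumbnailFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "app.handler",
        "Runtime": "python3.12",
        "Events": {
          "Upload": {
            "Type": "S3",
            "Properties": {
              "Bucket": { "Ref": "PhotoBucket" },
              "Events": "s3:ObjectCreated:*"
            }
          }
        }
      }
    }
  }
}
```

Note, though, that the template is still written against AWS resource types, so “just redefine the JSON” still means rewriting it in another provider’s dialect.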
Part 3 – How Infrastructure Engineers Can Make Serverless Perform Better
- Speed and redundancy of service resolution
- If you’re using hybrid cloud, you have the same concerns as any hybrid cloud deployment. Latency is the enemy. If you’re calling a function, network latency implies an added delay, with a possible impact on user experience.
- Is it possible, or even necessary, to track the state of queries to Lambda? Or is that a dev problem? It seems to be a dev problem, because Lambda functions are supposed to auto-scale. That means you don’t need a load balancer, you don’t need some other sort of proxy, and you don’t need query state tracking. That all just happens.
- Nothing changes as far as client communications go. The client still comes in from the outside and hits some front end. Serverless seems to be mostly about internal application components: authentication, database, and so on. Think of microservices that would have run in containers, rendered instead as Lambda functions.