Serverless is one of those terms that gets thrown around and can mean different things to different people. The thing people generally think of is Function as a Service (FaaS) offerings, and maybe they start with something along the lines of “let’s run a Django app in a Lambda…bam, serverless!”. That’s great and it can be a good first step in moving from a server-based architecture to a serverless architecture, but there is a lot more to the serverless landscape.
As it is a buzzword that has become somewhat overloaded, let’s first define what we mean by serverless development. For the purposes of today’s discussion, we are talking about cloud services that do not require standing up and maintaining virtual machines or upgrading and patching installed software components, and, perhaps most importantly, that scale up and down automatically on demand based on usage (and therefore charge based on usage as well).
Serverless fits well with microservices architectures and with background processing, ranging from data analytics and machine learning workloads to asynchronous tasks offloaded from a web service (and quite a few good use cases in between). As we alluded to in the previous paragraph, one core motivator for serverless is cost. If your applications do not need to run at consistently high loads all day, every day, serverless pricing can typically net you significant cost savings. The idea is to remove much of the time and risk involved in maintaining servers and manually scaling services.
So, what does the serverless landscape look like today?
Functions as a Service (FaaS) represent the original serverless compute offering, and as I’ve mentioned, in many people’s minds serverless is FaaS, end of story. Most are already familiar, but just to cover again, FaaS gives you the ability to create and deploy a single function which can then be triggered in any of several ways:
- A configured endpoint being hit
- As a result of an event such as adding or modifying an object in storage
- By receiving a message from a queue
These can be standalone to handle simple background processing, can be coordinated for more complex processing, or can be grouped and deployed together with API gateway configuration to comprise a microservice. There are, of course, tradeoffs that need to be considered with functions:
- Not every language is supported, though the list continues to grow in all cloud platforms
- Functions have execution time limits; long-running processes either need to be restructured or should not use functions
- Cold startup times can introduce latency. While you can ping with a set number of requests to try to keep that many function instances warm, any time a spike in traffic causes additional instances to spin up, those new instances will experience a cold start. (Azure does allow you to run functions in an App Service plan, which supports Always On and keeps functions warm.) If cold starts are not acceptable, then functions are not the right choice.
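To make the event-triggered model concrete, here is a minimal sketch of what a function handler might look like for the second trigger type above, an object being added to storage. It follows the general shape of an AWS Lambda handler receiving an S3 event, though the processing itself is a placeholder:

```python
import json
import urllib.parse

def handler(event, context):
    """Process each object-created record delivered in the event."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces become '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append(f"{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(processed)}
```

The cloud platform handles invoking this for you; there is no server, process manager, or web framework to stand up.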
FaaS can cover a lot of compute use cases, but not all. Next, we’ll dive into a couple of alternative options that might be a better fit for some scenarios.
The first of these options is containers. These days, I think most developers are at least familiar with the concept of local development using containers. The lightweight virtualization and the ability to quickly define and set up a standard environment can significantly streamline the developer experience. Not quite as many people are familiar with the built-in cloud offerings for deploying with containers. We’re intentionally ignoring the container orchestration offerings here, as they are not serverless.
Google’s Cloud Run offers fully managed, pay-for-what-you-use container deployment that automatically scales up and down, all the way to zero. Both AWS Fargate and Azure Container Instances also aim to fill the serverless container deployment space, though they are not quite as streamlined as Cloud Run at this point and, while they are advertised as serverless, it is difficult to argue that they are truly serverless offerings. This is not to say they are not great services, because they are, and I think over time you will see the gap between them and Google Cloud Run close.
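Part of what makes Cloud Run streamlined is how small its contract is: your container just needs to serve HTTP on the port supplied in the PORT environment variable, and the platform handles routing and scaling. A minimal sketch of a container-ready service using only the Python standard library (the response body is a placeholder):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging noise

def serve():
    # Cloud Run injects the port to listen on via the PORT env var
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Package that with a Dockerfile and deploy it, and you get HTTPS, autoscaling, and scale-to-zero without any server configuration.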
Finally, I want to mention app services. These include AWS Elastic Beanstalk, Azure App Service, and Google App Engine. While not truly serverless due to provisioning and the pricing model, they do offer simple setup and deployment (including containers) and can be a good compute alternative if the standard serverless options are not quite the right fit.
Having serverless compute is great, but how do you store your data?
There are three main types of serverless data storage. The first, and simplest, is file/blob storage. These are straightforward offerings with fairly similar feature sets, covering use cases such as storing binary and text files, storing JSON data, backing data lakes, and feeding content delivery networks.
The second type is document-based storage. These are NoSQL stores, and the offerings vary a bit across clouds. Amazon offers DynamoDB, which is a simple key-value datastore and is fairly well understood at this point. Azure has essentially deprecated its previous offering, Table Storage, in favor of the newer Cosmos DB. Cosmos is a multi-paradigm datastore, supporting the MongoDB API, a SQL API, the Gremlin (graph) API, and simple key-value storage. It is a feat of engineering, but it is also rather pricey at this point (and it is somewhat dubious to call it serverless, as pricing is based on provisioned throughput rather than being fully on-demand). Google’s primary serverless NoSQL offering is Cloud Firestore, which supports transactions and has a nice query engine.
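Key-value stores like DynamoDB push you toward access-pattern-driven modeling: each item is addressed by a partition key (grouping related items) and an optional sort key (ordering items within the group). A toy in-memory sketch of that modeling, with hypothetical key names and no actual SDK calls:

```python
from typing import Dict, List, Tuple

# Toy stand-in for a key-value table: items addressed by
# (partition key, sort key) pairs, queried by partition key.
Table = Dict[Tuple[str, str], dict]

def put_order(table: Table, customer_id: str, order_id: str, total: float) -> None:
    # Partition key groups a customer's orders; sort key orders them.
    table[(f"CUSTOMER#{customer_id}", f"ORDER#{order_id}")] = {"total": total}

def orders_for(table: Table, customer_id: str) -> List[dict]:
    pk = f"CUSTOMER#{customer_id}"
    # A real query would read a single partition; here we just filter.
    return [item for (p, _), item in sorted(table.items()) if p == pk]
```

The point is that queries are designed up front around keys, rather than written ad hoc against a relational schema.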
The final storage type is a newer category, serverless relational databases. While managed relational database services with provisioned compute have been around for a while, new offerings differ in that they are completely on-demand and only charge for resources that are used. Both Azure and AWS now have standard serverless SQL database offerings. Google is a little bit different. They do offer a serverless relational database in Spanner, but it is not explicitly compatible with SQL Server, Postgres or MySQL like the Azure and AWS offerings are.
Due to the compositional nature of serverless architectures, one feature that you cannot live without is messaging. There are three main categories of messaging that you will commonly need to rely on in your serverless architecture, which we will discuss below.
The most common messaging service you will need is a message queue. Message queues can vary in functionality and performance, and can be brokered or non-brokered, but the unifying trait is that each message is meant to be received by only one consumer. A common use case here is sending tasks to worker processes/functions. Serverless cloud offerings for message queues include Azure Service Bus, Amazon MQ, Amazon SQS, and Google Cloud Tasks.
The other main category of messaging is pub-sub (publish-subscribe). The differentiator here is that pub-sub messages are generally meant to be received and processed by many consumers. An example of this might be a stock ticker feed, where every application displaying the ticker would subscribe and receive all events. You are covered here in the cloud as well with Azure Event Grid, AWS SNS, and Google Cloud Pub/Sub.
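The distinction between the two categories comes down to delivery semantics, which a toy in-process sketch makes clear (real brokers add durability, acknowledgement, and retries on top of this):

```python
from collections import deque

class Queue:
    """Point-to-point: each message is delivered to exactly one consumer."""
    def __init__(self):
        self._messages = deque()

    def send(self, msg):
        self._messages.append(msg)

    def receive(self):
        # Receiving removes the message; no other consumer will see it.
        return self._messages.popleft() if self._messages else None

class Topic:
    """Pub-sub: each published message fans out to every subscriber."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, msg):
        for cb in self._subscribers:
            cb(msg)
```

A task queue feeding workers wants the `Queue` semantics; the stock ticker example wants the `Topic` semantics, where every subscribed display gets every event.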
Finally, a messaging need that you may forget but will almost certainly want to make use of at some point is transactional messages intended to be sent to users. These are messages such as order confirmations, password reset communications, etc. They can be delivered via SMS, email, or mobile push notification. Cloud offerings are hit and miss here, but Azure Notification Hubs, Amazon SES, Amazon Pinpoint, and Google’s Firebase Cloud Messaging will get you at least some of this functionality. If you find the cloud vendor solutions lacking, there are still excellent third-party offerings in SendGrid and Twilio.
While there are many great authentication libraries out there, it can often make sense to outsource this functionality to the professionals. This is especially true if you need to offer OAuth and OpenID Connect, or SAML integration. Even more so once you start considering multi-factor authentication and all that can be involved there.
You can certainly stand up your own auth microservice in a completely serverless way, which would give you complete flexibility to do everything you need in exactly the way you need to do it. However, supporting the more complex setups that are becoming commonplace today is a good bit of work, and it requires some know-how.
All of the cloud solutions (AWS Cognito, Azure AD, Azure AD B2C, Firebase Authentication, Google Identity Platform) cover the standard use cases: username or email logins, social logins, multi-factor with email/SMS/authenticator/hardware tokens, single sign-on, custom claims and policies, varying degrees of UI customization, neutral login URLs, etc.
The main issue today with the cloud offerings is that they can still be somewhat involved and time-consuming to set up and configure exactly the way you want (though it is still far less work than creating your own solution for all of this). So, if the cloud vendor offerings don’t suit your fancy, there are also two very popular third-party auth solutions in widespread use: Auth0 and Okta. These services are easy to get up and running but can get pricey.
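Whichever provider you choose, the integration usually starts the same way: redirecting the user to the provider's hosted login page via an OAuth 2.0 authorization-code request. A sketch of building that redirect URL, where the endpoint, client ID, and redirect URI are hypothetical placeholders rather than any particular provider's values:

```python
import secrets
from urllib.parse import urlencode

def authorize_url(base, client_id, redirect_uri, scopes):
    """Build an OAuth 2.0 authorization-code request URL, plus the
    state value the app must verify on the provider's redirect back."""
    state = secrets.token_urlsafe(16)  # CSRF protection
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{base}?{urlencode(params)}", state
```

The provider handles the login UI, MFA, and credential storage; your app only exchanges the returned code for tokens after checking that the `state` matches.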
This is an area that used to be much more difficult. Way back when, there were not many hosted build server options, and getting your entire CI/CD pipeline in place was a lot of work. It still takes work, but all clouds now have fully integrated DevOps offerings, from top to bottom. All three major cloud vendors offer all of the following:
- Version control
- Build and deploy pipelines
- Container registries
- Artifact repositories (either directly or via third parties)
- Infrastructure as Code
Azure does have a slight edge here, as it also has fully integrated issue tracking and agile management, owing to its Team Foundation Server roots. It also integrates seamlessly with Microsoft development tools if you are on the MS stack, in a way that is tough to match. However, Jira and Bitbucket integrate very well into any environment, so your bases are covered no matter which cloud you run on.
As we’ve now seen, all facets of your application can be implemented in a serverless way. You can create some amazingly cutting-edge, large-scale, all-encompassing application architectures without ever standing up a virtual machine or provisioning resources…and only pay for what you use. That is pretty amazing if you look back just 10 years and compare.
There are still edge cases where you need to run a virtual machine or manually provision and scale fixed-size resources, but more of those edge cases disappear as new solutions arrive every year. Cloud services are now moving toward offering (at least as an option) the auto-scale, on-demand, pay-for-what-you-use model that serverless provides. It is much cheaper and easier to get started, and it can scale quite a way before it makes sense to start moving to server-based models. We have reached the point where most traditional architectures are at least beginning to incorporate serverless components, and they are continuing to move further in that direction every day.