Serverless Still Runs on Servers
Serverless is important, and we should all care about it, even though it's a silly buzzword: like NoSQL before it, the term describes "negative space" – what it isn't rather than what it is. What's serverless? A cat is serverless, donuts are serverless, and monster trucks are serverless.
What does serverless really mean in practice? It means that your code and the infrastructure it runs on are completely decoupled – if there are VMs or containers or punch card machines somewhere, you don't need to know about them. There's no notion of an "instance" – or of paying for one, for that matter. It's Functions as a Service.
In addition to potentially lowering resource costs, serverless is also an attractive alternative to other architectures, as programmers don't need to worry about provisioning and managing these resources. Some organizations are even considering skipping microservices and moving straight to a pure-serverless architecture.
Of course, these serverless functions still run on servers, and that has real consequences for performance.
In his recipe for becoming a successful programmer, Peter Norvig reminds us that "there is a 'computer' in 'computer science.'" Likewise, there is a "server" in "serverless." Norvig's point is that while we can reason abstractly about performance in terms of asymptotic behavior, the constants still play an important role in how fast programs run. He urges programmers to understand how long it takes to lock a mutex, perform a disk seek, or send a packet from California to Europe and back (see figure).
Two numbers about the performance of servers are especially relevant to serverless:
- A main memory reference is about 100 nanoseconds.
- A roundtrip RPC (even within a single physical datacenter) is about 500,000 nanoseconds.
This means that a function call within a process is roughly 5,000 times faster than a remote procedure call to another process. (Thanks to Peter and Jeff Dean for calling these out.)
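To make that ratio concrete, here is a small, hedged sketch in Python: it measures the actual per-call cost of an in-process function on your machine and compares it against the ~500,000 ns intra-datacenter RPC figure quoted above (the RPC number is taken from the text, not measured here; the function name `lookup` is purely illustrative).

```python
import timeit

# A trivial in-process function; calling it costs on the order of
# the ~100 ns main-memory reference cited above.
def lookup(x):
    return x * 2

# Time a million local calls and compute the average per-call cost.
calls = 1_000_000
local_ns = timeit.timeit(lambda: lookup(21), number=calls) / calls * 1e9

# The ~500,000 ns round-trip RPC figure comes from the numbers quoted
# in the text; we don't make a real network call in this sketch.
rpc_ns = 500_000

print(f"local call:      ~{local_ns:.0f} ns")
print(f"RPC round trip:  ~{rpc_ns:,} ns "
      f"(~{rpc_ns / local_ns:,.0f}x slower)")
```

Even with Python's interpreter overhead inflating the local-call number well past 100 ns, the measured gap is still several orders of magnitude – which is exactly the point.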
Serverless is often promoted as a way of implementing stateless services. And while there can be an opportunity there, even stateless functions need access to data as part of their implementation. If some of that data must be fetched from other services (as happens more often than not with anything stateless!), that performance difference can quickly add up.
One solution to this problem is (of course) a cache, but a cache requires some persistent state across requests, and that's the antithesis of a serverless model. While these services might be stateless in terms of their functional specification and behavior, state can open the door to important optimizations. And when you add state to a serverless function, it starts to look a lot like a microservice.
Serverless has its place, especially for offline processing where latency is less of a concern, but often a small amount of context (in the form of a cache) can make all the difference for performance. And while it's tempting to "skip ahead" to a serverless architecture, doing so might leave you needing to step back to a services-based one that is orders of magnitude more performant.
Spoons
CTO and Cofounder
Spoons (or more formally Daniel Spoonhower) has published papers on the performance of parallel programs, garbage collection, and real-time programming. He has a PhD in programming languages but still hasn’t found one he loves.