Philipp Heuberger

4 Risks You Might Run Into When Going Serverless

I recently came across a question about transitioning to serverless: “What are the biggest risks when going serverless?”

After thinking it over for a while, here are my thoughts. Please share yours with me on Twitter so I can make this article even better.

Alright, let’s dive right in:

#1 Not using serverless for the right job

It’s like any other technology choice: you have to know what problem you’re trying to solve. I’ve written about this here and here.

Serverless is great for workloads that are few and far between. If that’s what you want to do, great.

If, however, you want to migrate a microservice that receives events in large quantities, you will run into a few problems. Sure, it scales nicely and all, but the costs will be substantial. That’s why it’s important to analyze your traffic patterns before making the leap.

Also, don’t forget about your account concurrency limit. Right out of the gate it’s capped at 1,000 concurrent executions. Sure, that sounds like a lot, but anything beyond 10,000 events per second (assuming 100 ms of run time each) already pushes you over the edge. That’s why high-throughput, high-frequency operations are best handled by traditional server architecture.
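For a rough sense of the math: required concurrency is roughly your request rate multiplied by the average execution time. Here’s a minimal sketch using the illustrative numbers from above:

```python
# Back-of-the-envelope Lambda concurrency estimate:
# required concurrency ≈ events per second × average duration in seconds.
def required_concurrency(events_per_second: float, avg_duration_ms: float) -> float:
    return events_per_second * (avg_duration_ms / 1000.0)

# 10,000 events/s at 100 ms each already needs the full default limit of 1,000.
print(required_concurrency(10_000, 100))  # -> 1000.0
```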

You can, of course, have the limit raised, but you should take this as a hint to have another glance at your billing dashboard.

#2 Not understanding serverless pricing

Pricing is closely tied to the previous section. Your costs correlate directly with how you use serverless and how many executions per month you rack up.

Let’s say you’re building an API that sees a lot of consistent traffic all day, every day. Get ready to pay a lot of money, especially when using API Gateway.

People often misjudge how many seconds there are in a day, me included.

Granted, you pay very little for a million executions, but at 100 invocations every second of every day, that’s roughly 250 million requests per month. That’s going to set you back about $250 for API Gateway alone, and then another couple hundred for your functions, depending on how long they run and how much memory they’re allocated.
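Here’s that back-of-the-envelope calculation as a quick sketch. The $1.00 per million requests is an assumed API Gateway rate for illustration only; check the current AWS pricing page before relying on it:

```python
# Rough monthly API Gateway cost for 100 requests per second, all day, every day.
REQUESTS_PER_SECOND = 100
SECONDS_PER_MONTH = 60 * 60 * 24 * 30          # ~2.6 million seconds
PRICE_PER_MILLION_REQUESTS = 1.00              # assumed USD rate, verify with AWS pricing

monthly_requests = REQUESTS_PER_SECOND * SECONDS_PER_MONTH
api_gateway_cost = monthly_requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS

print(f"{monthly_requests:,} requests -> ${api_gateway_cost:,.0f}")
# 259,200,000 requests -> $259
```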

In pretty much all other cases, where you have infrequent access or a bunch of scheduled jobs here and there, not using serverless is actually the risky thing to do.

You’ll have idle infrastructure that you have to maintain and keep secure. And, as I’ve mentioned a million times by now, it’s going to cost you even when it’s not creating value. And that stinks.

As I mentioned, another driver of price is the amount of memory you want to allocate to your functions. If you’re doing heavy file or stream processing, there’s no way around a configuration with a lot of memory. Unfortunately, that gets expensive quickly.
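To make the memory point concrete, here’s a sketch of how Lambda compute cost scales with memory and duration. The GB-second rate is an assumed figure for illustration; verify it against the current AWS Lambda pricing page:

```python
# Lambda compute cost is billed in GB-seconds: duration × allocated memory.
GB_SECOND_PRICE = 0.0000166667  # assumed USD per GB-second, verify with AWS pricing

def lambda_compute_cost(invocations: int, duration_ms: float, memory_mb: int) -> float:
    gb_seconds = invocations * (duration_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * GB_SECOND_PRICE

# Same workload at twice the memory costs roughly twice as much.
print(lambda_compute_cost(250_000_000, 200, 512))   # ~$417
print(lambda_compute_cost(250_000_000, 200, 1024))  # ~$833
```

More memory also buys you more CPU, so a bigger configuration sometimes finishes faster and partially pays for itself, but the linear relationship between memory and price is the point here.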

#3 Not using a framework

Serverless functions are only loosely coupled to one another, if at all.

Once you commit to serverless, the function count will start to explode. If there’s no good framework or process in place, you will have trouble staying on top of things.

In that same vein, it’s crucial to define your infrastructure as code via CloudFormation or Terraform. Otherwise there are too many loose ends, and getting your code from dev to staging to production will be a nightmare.

In my opinion, good frameworks are https://serverless.com/ and the AWS-exclusive Serverless Application Model (SAM). The former is a cross-cloud framework, so it comes with the added benefit of being able to take your functions and move them over to another provider. Of course, it’s not a simple move you can knock out in an afternoon, but it’s certainly easier than trying to migrate a couple hundred loose functions.
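To give you an idea of what that looks like in practice, here is a minimal Serverless Framework sketch for a single scheduled function. The service and handler names are hypothetical, and you’d adjust the runtime, region, and memory to your own setup:

```yaml
# serverless.yml: a minimal sketch, not a production-ready configuration
service: nightly-report              # hypothetical service name

provider:
  name: aws
  runtime: python3.8
  region: eu-central-1

functions:
  generateReport:
    handler: handler.generate_report # hypothetical module and function
    memorySize: 256
    timeout: 30
    events:
      - schedule: rate(1 day)        # run once a day
```

Running `serverless deploy` turns this into a CloudFormation stack, which is exactly the infrastructure-as-code setup mentioned above.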

#4 Not enough flexibility in terms of technology

If your business is completely locked into a certain database technology, for example, serverless will be that much more troublesome and painful. I’ve talked about this here.

Serverless works best if you’re fine with using everything the cloud provider has to offer. Those services usually play nicely with each other, and integrations are easy. Otherwise, you’re facing an uphill battle: developers will be frustrated, and management will hate the slow progress and the limitations.

However, things are advancing; database support is one example. At the recent re:Invent 2019, AWS announced better support for relational databases. The fine folks over at serverless.com wrote a brief article about it. You should check it out if SQL is your jam.

In any case, being flexible is advantageous. There are still a couple of caveats to using SQL in serverless, so being able to go with a cloud-native NoSQL database will save you some trouble down the road. At least for now.