Philipp Heuberger

Some examples for micro services you can extract to serverless functions

This article can be considered part two of last week's Deploying to Serverless? It's not that easy….

Let’s dive right in, shall we?

Why do we want to extract parts of our application with the help of functions?

There are a few benefits.

For one, you don’t have to re-deploy your entire monolithic backend every time your sales rep needs you to change the way you’re feeding data into their CRM.

Of course, if they need a new data field, you’re still on the hook. Either you re-deploy and pass this extra field to your function or you generally re-think how lambdas access data. You could expose rich models via an internal API or pass them over a queue.

For another, you can rid your core backend of dependencies you don't want. Technologies change every now and then, and so do their APIs. When that happens, you only have to change and deploy a couple of lambdas instead.

But by far the biggest benefit: you don't have to over-provision your infrastructure just because a period of high load might come along.

Usually there are infrequent bursts of activity, and it's often hard to predict when they'll happen. So the only real option used to be keeping your servers running in high gear, with a healthy margin for unexpected spikes.

The problem is that in between those bursts you're paying way more than you want to, only because rendering PDFs, down-sizing media files, ingesting hefty CSV files and whatnot is lumped in with the rest of your application.

Then why not take those parts and have functions take care of them? As you know by now, they don't cost you a dime when they don't run and they scale pretty well.

What micro services could you turn serverless?

Here are a bunch of use cases I came across in my work, but that's only a fraction. Take some inspiration from them, and please do let me know what services you're itching to carve out of your monolith.

Creating assets on the fly

Let's say you're sending out meeting invitations. You could have a lambda generate your iCal files for you and return them over an API endpoint.

The same goes for QR codes or any other procedurally generated image.

I recently built an Apple Passkit/Wallet service and drafted a first version as a simple Node function that I could run locally. The libraries are kind of okay-ish, so it still took a good bunch of builds to make it look the way I wanted. Once I was finished, I wrapped it in a Serverless project, made sure the response headers were set and called it a day.

And that’s not all. Every time the certificates (did I hear you sigh just now?) expire, I generate new ones and just re-deploy the function and that’s it. Zero downtime.

Webhooks that queue events/data/jobs

This one might be a bit controversial, because the upside is rather small and there’s a potential downside, but let me explain.

Imagine you're exposing a webhook for some external service or your users to invoke your application. You could, for example, ingest email status events from your transactional email provider, take in orders from your external e-commerce platform, or receive a ton of images from your users because, maybe, your service revolves around that kind of thing.

Either way, your backend has to take the request, put the thing in a queue so it can be processed at your application's pace, and return a success message to the requesting party.

On the other end your application also has to work through your queue, so there’s a lot going on now all of a sudden. The worst thing to happen is if your webhook requests start to time out because your application can’t keep up anymore.

There are three ways to approach this problem.

  1. Extract the part that puts incoming events into a queue into its own micro service
  2. Outsource the (heavy) workload of processing the queue into a function
  3. Do both 🤷‍♂️

I’ll give a few examples for solution number two in the next chapter. First, a little gotcha.

You see, functions/lambdas don’t scale infinitely. On AWS you start out with a concurrency limit of 1000. That immediately puts an upper limit on the amount of events you can ingest per second. Let’s quickly do the math:

Say your function runs for 100 ms. That means you can process 1000 * (1000 / 100) = 10,000 events in one second.
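The same back-of-the-envelope math as a one-liner, if you want to plug in your own numbers:

```javascript
// Max throughput = concurrency slots × invocations per slot per second.
const maxEventsPerSecond = (concurrency, avgDurationMs) =>
  concurrency * (1000 / avgDurationMs);

maxEventsPerSecond(1000, 100); // → 10000
```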

That's certainly not the end of the world. 10K is a lot, and if you need to go higher, you can always get in touch with AWS and have your concurrency limit raised. But that's also the point where you should start worrying about your costs.

If that kind of load occurs constantly, you're quickly going to notice that functions don't come for free. In that case it might be worth exploring an Elixir/Phoenix instance or something similarly performant.

Lambdas are really not suited all too well for constant high-throughput computing. They work best in scenarios with infrequent events and unpredictable spikes.

In any case, you should do your own calculations before deciding on a strategy. (Hello Capt. Obvious 👋)

Alright, enough about serverless economics. On to the next topic…

Doing laborious async stuff

This is where serverless really shines in my opinion.

Some examples:

  • Slogging through a bunch of PDF files and taking them apart
  • Doing some kind of analysis on images
  • Resizing images
  • Pulling in data and building big CSV files
  • Processing chunky CSV files
  • Sending out batch emails
  • and the list goes on and on…
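Take the chunky-CSV item from the list above. A sketch of the first step might look like this: slice the file into batches a single function invocation (or a fan-out of them) can chew through. The batch size and the naive newline split are illustrative only; a real parser (e.g. `csv-parse`) should handle quoting and embedded newlines.

```javascript
// Split a CSV's data rows into batches, keeping the header with each batch
// so every chunk is independently processable.
const chunkCsv = (csvText, batchSize) => {
  const [header, ...rows] = csvText.trim().split("\n");
  const batches = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    batches.push({ header, rows: rows.slice(i, i + batchSize) });
  }
  return batches;
};
```

Each batch can then be dropped on a queue and processed by its own invocation, which is exactly the kind of parallelism lambdas are good at.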

You can adjust your function’s memory configuration, so even heavy file operations are not out of the question. Just make sure you keep an eye on your cloud provider’s function timeouts.
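In the Serverless Framework that's a per-function setting. The names and values below are purely illustrative, but the limits are real — on AWS, Lambda timeouts cap out at 15 minutes:

```yaml
# Illustrative serverless.yml fragment for a heavy job.
functions:
  processCsv:
    handler: handlers/processCsv.handler
    memorySize: 2048   # MB; on Lambda, CPU share scales with memory
    timeout: 900       # seconds — the AWS maximum
```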

Abstracting away dependencies that could change all the time

I don't know about you, but the companies I work with and have in my network keep changing their tools all the time. Going from one bookkeeping tool to another and back. CRMs? Don't get me started. Those things are treated like underwear.

At the end of the day, what can you do? You've got to support those third-party tools, and in my book it's better to change a small serverless project than your core app. And, as I said earlier, what pieces of data they need and how they need it keeps changing too, so it's nice that somebody can hack away at this in isolation.

Ah, yes. Examples:

  • CRMs / ESPs
  • Transactional Email
  • Bookkeeping tools
  • Payment systems
  • Notification systems (Slack, Email, Telegram, …)
  • Probably so many more I can’t think of right now
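The shape of such an adapter is usually trivial: the core app emits a neutral event, and a small lambda maps it onto whatever tool is in fashion this quarter. The event shape and the `#ops` channel below are made up for illustration:

```javascript
// Map an internal event onto a Slack incoming-webhook payload.
const toSlackPayload = (event) => ({
  channel: "#ops", // hypothetical channel
  text: `[${event.level.toUpperCase()}] ${event.message}`,
});
```

Swapping Slack for Telegram later means changing only this mapper (and the HTTP call around it) and redeploying one function — the core backend never hears about it.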

Last but not least: weird one-off legacy integrations

Ah, my favourite.

It happens more often than we care to admit. A customer who's stuck in the last century has this weird workflow where you have to upload this one XML file to their SFTP server. Oh, and of course, this is only a temporary thing. No longer than 3–4 months. Their engineering team is already on version two.

Yeah right.

This thing is gonna stick around for longer than the people who built it. Nobody wrote tests for it. Why would you? So it keeps on rotting in your core backend and you hate yourself for it. Do yourself a favour. Put it in “serverless quarantine” and rest easy.

Alright, that wraps it up for this time. If you have questions about serverless or serverless consulting you should get in touch. We can chat or do a quick call. Happy to help.
