Case study: Repair App as a serverless application

Is a serverless architecture the solution for your business case? Read more to find out!

Some context

In this blog post, we are going to dive deeper into the values and risks of a serverless architecture as part of a solution for your business case. We will illustrate this using a project we recently realized in collaboration with the KU Leuven and Maakbaar Leuven, the Repair App.

The Repair App is an application that is meant to be used in the Repair Cafés. These are places where people come together to repair all sorts of broken things, from clothes to bikes to electronics, you name it! The volunteers who work there have the great habit of documenting their repairs for future repairers. Leaving documentation for future users also means that those users need to be able to find the correct documentation related to the item they plan to repair.

This is where the KU Leuven, the research group Life Cycle Engineering of the Engineering Technology Department of Mechanical Engineering to be specific, comes in! Their goal is to show how we can use technological advancements like computer vision, IoT and machine learning to facilitate re-use, repairs, and recycling inside a circular economy model. For this purpose they set up a project with OVAM where they collaborate with the “Kringwinkels”, who repair items before selling them in their second-hand shops. Wait! This sounds similar to what repairers do in the Repair Cafés. So why not use the Repair App to demonstrate how machine learning can help identify products based on a picture of their label, using techniques like barcode recognition and OCR (optical character recognition)?

In the context of this application, the volunteers at the Repair Cafés or the repairers in the Kringwinkels can use a machine-learning algorithm to identify the product they want to repair and thus facilitate their search for repair information.

Business requirements

Since projects are never without requirements and constraints, here are some of ours:

1) Low budget: There is only a small budget from grants available for development and an even smaller one for operational costs.

2) Time: With only 5 weeks planned for development, time is an important constraint for us to work within.

3) Multiple development languages: The KU Leuven wants to experiment on the machine-learning components of the application by manipulating the code. They do this in Python (whereas we at Kunlabora are typical Java developers).

What is serverless and why did we use it?

So, serverless … What is a serverless architecture then? Serverless is an execution model where we don’t deploy our code on a dedicated server like we traditionally would. Instead, we rely on a cloud provider (Google, AWS, Azure) to execute a piece of code by dynamically allocating the appropriate resources for us. Important to note: the cloud provider only charges for the amount of resources actually used (number of invocations, CPU usage, memory usage and computing time) instead of charging to keep an entire server up and running. This is where it gets really interesting. Remember our first constraint? Yes, the tight budget and the even smaller one for our operational costs. The serverless model suits us perfectly here. We only pay for what we use, both while our app is under development and when it is in production. Traditional setups (the ones that use a dedicated server) bring a rather large monthly cost just for keeping the server up and running. And that from day one AND for multiple development environments!

All those costs and we aren’t even talking yet about maintaining the server, applying security updates, scaling when the load increases, … Work that typically needs to be done and costs us time and money. With a serverless architecture, we don’t have to worry about any of that. We are charged for what we use, when we use it. And the rest? Our cloud provider takes care of it! These costs can also easily be monitored with dashboards we can set up in AWS. This gives us constant feedback about our billing information, which is great!

[Image: AWS billing dashboard]

How do we apply a serverless architecture to the Repair App, you ask? Simple! Well, sort of simple. Where we would typically build a full-blown Spring backend, we now used AWS Lambdas instead. Lambdas are the small pieces of code (or functions) that AWS executes on demand, as discussed earlier. By doing so we can limit our costs, A LOT! Currently, after 5 weeks of development, we haven’t yet paid a single euro! Not because lambdas are free (because they aren’t) but because AWS provides a free tier which includes 1 million requests and 400 000 GB-seconds of computing time per month. Even once the limits of the free tier are exceeded, lambdas are still worth considering. If you can fit your use case inside this model, it just makes sense to only pay for what you actually use, when you use it. No 24/7 default cost! An overview of AWS’s Lambda pricing can be found in the image below:

[Image: AWS Lambda pricing overview]

Another advantage of those serverless functions is the fact that they are all separate, independent pieces of code. This allows us to deal with our third constraint, multiple languages. Because lambda functions are completely separated from each other, we could theoretically write each lambda in a different AWS-supported language. Of course, we wouldn’t do that just because we can… but it’s interesting to know that it is possible. In our solution, we started by implementing all of the lambda functions which were critical to the machine learning component of the KU Leuven in Python. While iterating, we later ported all the other lambda functions to Python as well to improve our application’s performance. This also gives the KU Leuven even more freedom to adapt some code to tweak their machine learning algorithms.
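To give an idea of how small such a function can be, here is a minimal sketch of a Python lambda handler. The `label` field in the event payload is a hypothetical example, not the Repair App’s actual API contract:

```python
import json


def lambda_handler(event, context):
    """Entry point that AWS invokes; `event` carries the request payload.

    The `label` field below is a hypothetical payload for illustration:
    say, the label text extracted from a photo of a product.
    """
    label = event.get("label", "")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Received label: {label}"}),
    }
```

That single function, deployed on its own, is a complete backend endpoint: no application server, no framework bootstrapping, no shared deployable with the other functions.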

Finally, our serverless architecture also helps us to speed up our development process by enabling us to deliver functional value from day one! Yes, day ONE! A serverless architecture lets you skip a lot of the setup that is involved in getting all your servers (on all of your environments) up and running. We also don’t need an entire application set up just to deliver one API call. This all resulted in the deployment of the first version of our UI on day one, and our first lambda deployment just two days after that.

Is a serverless architecture all sunshine and rainbows?

Of course not; it also has limitations that we need to take into consideration. Although incredibly interesting and useful, the serverless model is not the right tool for every job. There are some mechanisms inherent to the serverless execution model that you need to consider when deciding whether it is a solution to your problem. The most important ones to keep in mind are:

  • Cold starts… When your cloud provider dynamically allocates the resources for your serverless functions, it temporarily spins up a container that hosts the code for that given function. The process of spinning that container up is called a cold start. Unfortunately, depending on your chosen programming language and your initialization logic, cold starts can take a while. Once the container is up and running, it remains up for a small amount of time (AWS is very unclear about how long, but roughly ±15 minutes). During its uptime, requests are as fast as they always are. But when your lambda isn’t hot (there is no container running the code), you always have to wait for the cold start to finish. So if you want to guarantee super-fast responses 100% of the time, you might want to think twice about a serverless solution for those calls.

  • When you have processes that take a while to boot, serverless might also not be the ideal solution due to these cold starts. If you are unlucky, you are constantly waiting for your process to start. However, serverless is certainly not an all-or-nothing solution. It is perfectly valid to go for a hybrid solution where parts of your application run in a serverless model and other parts in a more traditional server model.
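One common way to soften the cold-start penalty is to do expensive initialization at module level rather than inside the handler, so it runs once per container instead of once per request. The sketch below illustrates the pattern under assumed names; the slow model load is simulated with a sleep:

```python
import time


def _load_model():
    """Stand-in for expensive initialization (e.g. loading an ML model)."""
    time.sleep(0.1)  # simulates a slow load from disk or object storage
    return {"ready": True}


# Module-level code runs once, during the cold start. Every warm
# invocation of the same container reuses MODEL without reloading it.
MODEL = _load_model()


def lambda_handler(event, context):
    """Warm invocations skip the expensive load and reuse MODEL."""
    return {"statusCode": 200, "modelReady": MODEL["ready"]}
```

The cold start still pays the load cost once, but all subsequent requests served by that container respond at full speed.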
