
Overcoming the Challenges of Using a Serverless Stack

by Durante, April 11th, 2023

Too Long; Didn't Read

In a serverless stack like this, everything is a function, and all the functions are independent. While this can be great, it requires handling each function independently. There is not just one deploy; there are many deploys, and a lot of configuration. It is important to plan ahead when dealing with a complex system.


In this article, I want to share a few challenges I've encountered while working with a serverless stack on AWS Lambda. Hopefully, you can learn a thing or two from my experience.


For reference, the stack I use includes:

  • Lambda

  • SQS

  • API Gateway

  • S3

  • RDS

  • Several other AWS services.


Disclaimer: While I have gained some expertise through my experience, I am not an expert in this field. If you have any ideas on how I can avoid these problems more efficiently or if you have faced similar situations, please share them in the comments! I am eager to learn and improve.

Resource handling and deployment

With a serverless stack like this, everything is a function, and all the functions are independent. While this can be great, it requires handling each function independently, for REAL. There is not just one deploy; there are many deploys, and a lot of configuration between functions (security groups, variables, dependencies, etc.).


It is important to plan ahead when dealing with a complex system, as it can quickly become a nightmare, especially if you have a small/medium-sized team.


For smaller-scale projects, you can manually create your functions using the AWS console. However, for larger projects, you will need a tool (or tools) to help manage them.


Here are a few tools that I have used (or have heard about) that can fix the issue (or at least help you):



I personally use SAM, which is essentially CloudFormation with extra features that ease your work with Lambda. It has a few limitations, such as the number of functions you can create and the configuration of API Gateway that you can use, but overall, it's a pretty good tool. It can help you feel like you are working on an integrated project instead of many connected projects.


These are just a few recommendations, and there are several tools available right now to manage serverless functions. The selection will depend on your project's needs, programming language, and other architectural concerns that you have. I like SAM because it is supported by the AWS team, it's easy to use, and it's based on CloudFormation, which is well covered in the AWS documentation.

Code sharing between functions

This is similar to the problem above but has a different solution. So, you know that our code is independent, REALLY independent. To illustrate, think of each function as a standalone application.


Now, if you need to share code between these applications, there are several ways to do so:


  • Monorepo: put all the lambdas in one repo and share the code between them. The pro with this is that all code is in one place, so there are no limitations on using it. The con is that every lambda will carry this shared code in its artifact, so the same code is deployed with each lambda. That's not as BIG a problem as you may think, but remember that a lambda is instantiated when it's called (not always, but from time to time; let's keep it simple). So, if the app has to be rebuilt with a lot of code, you will lose performance.


  • Lambda Layer: the AWS-native solution for this. It's basically a dependency, but AWS handles it for us, and it's processed ahead of time. The pro is that it's the fastest option and one of the simplest. The con is that it's AWS-specific, so if you move your code, you will need an equivalent on the next provider.


  • Library: basically, just a library that you install using pip, npm, or whatever package manager you prefer. The pro is that it's the most standard and simple way to do it. The con is the same as splitting your repos: you will need to manage the package (publish it to npm or PyPI, tag it, deploy it, etc.).


In summary, the selection depends on your needs. These options are just possible ways it can be handled.

Lambda Quotas

I have to admit, dealing with the limitations of serverless architecture can be frustrating. However, these limitations are in place to ensure functionality and stability. There are several limitations that I have had to navigate while working with serverless architecture.


Some of the most common limitations include the following:

Execution time

Lambda functions have a maximum execution time of 15 minutes. Therefore, you need to ensure that your function can perform its task within that time frame. While you certainly wouldn't want an API user waiting for 15 minutes, there are plenty of use cases where longer processing times are required, such as video processing, batch processing, sending emails or notifications, or data extraction.


To overcome this limitation, it's essential to design your system using the following techniques:

  • Async Endpoints: These endpoints don't return information immediately. Instead, they take the request, process it asynchronously, and then you can request the result using the same endpoint or another.


  • Message Queue: Take the request and the information you need to process and split it into smaller chunks of information. Send these chunks to an SQS queue and then process them with one or more lambda functions. This approach works well with Lambda and offers several advantages.


  • Cache: Pre-process information and use Lambda to return the information to the user or other services. This reduces the amount of processing required by the Lambda function.


  • Webhooks: Instead of consuming and processing information, use webhooks to keep the system updated and process only what changes. This helps reduce the processing time required by the Lambda function.
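The message queue technique above can be sketched in a few lines. This is a minimal illustration, not our production code: the function names (`chunk`, `to_sqs_entries`) and the chunk size are my own, and the actual boto3 send is shown only in comments since it needs a real queue.

```python
import json
from typing import Any, List

def chunk(items: List[Any], size: int) -> List[List[Any]]:
    """Split a large request into smaller, independently processable chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def to_sqs_entries(chunks: List[List[Any]]) -> List[dict]:
    """Build SendMessageBatch entries; SQS accepts at most 10 per batch call."""
    return [
        {"Id": str(i), "MessageBody": json.dumps(c)}
        for i, c in enumerate(chunks)
    ]

# With boto3, the entries would then be sent in batches of 10, e.g.:
#   sqs = boto3.client("sqs")
#   for i in range(0, len(entries), 10):
#       sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries[i:i + 10])
```

Each chunk then lands in its own Lambda invocation, so no single invocation comes near the 15-minute cap.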

Environment variable size

Lambda has a limitation of only supporting up to 4 kilobytes in environment variables. While I'm a big fan of environment variables because they can make a system testable and independent of the environment, it's important to be aware of this limitation.
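A quick way to see whether you are approaching the limit is to sum the key and value lengths, since the 4 KB cap applies to all environment variables together. This is a rough sketch of that check, with names of my own choosing:

```python
LAMBDA_ENV_LIMIT = 4 * 1024  # 4 KB total across all environment variables

def env_size(env: dict) -> int:
    """Approximate total size of environment variables in bytes (keys + values)."""
    return sum(len(k) + len(v) for k, v in env.items())

def check_env(env: dict) -> None:
    """Fail fast (e.g. in CI) before a deploy that would exceed the cap."""
    size = env_size(env)
    if size > LAMBDA_ENV_LIMIT:
        raise ValueError(f"environment is {size} bytes; Lambda allows {LAMBDA_ENV_LIMIT}")
```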


To work around this issue, we tokenize the environment variables to reduce their size and use an external system to consume the information that we need when the lambda is initialized. This adds a bit more complexity to our system, but it's necessary to ensure that our Lambdas work properly.
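The tokenize-and-fetch idea can be sketched like this. The fetcher here is a stand-in (in a real setup it might call SSM Parameter Store or Secrets Manager via boto3); the env var name `CONFIG_TOKEN` and the function names are hypothetical:

```python
import json
import os
from functools import lru_cache
from typing import Callable

def fetch_from_store(token: str) -> str:
    """Placeholder: exchange the small token for the full config JSON
    (e.g. via SSM Parameter Store with boto3)."""
    raise NotImplementedError("replace with your config store client")

@lru_cache(maxsize=1)
def load_config(fetch: Callable[[str], str] = fetch_from_store) -> dict:
    """Resolve the full config once per cold start; only the token
    lives in the 4 KB environment."""
    token = os.environ["CONFIG_TOKEN"]
    return json.loads(fetch(token))
```

The `lru_cache` keeps the external call to once per container, so warm invocations pay nothing extra.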

Payload size (10 megabytes)

When your system heavily relies on user-uploaded videos and pictures, it can pose a challenge. Initially, we attempted to simplify the process by using the base64 approach and uploading the image as part of the payload. However, Lambda and API Gateway have a limitation of 10 megabytes, and reaching out to AWS for an increase in the limit was not something we preferred to do. Due to this limitation, we had to change our approach.


We chose to upload the metadata of the image and video through a REST API and use the S3 client to upload the media. Although this approach added more complexity to our system, it worked well for us. However, using the S3 client can be a little tricky due to the necessary configuration and permission requirements, especially for heavy-load media.
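The metadata-first flow can be sketched as follows. This is an illustration under my own naming, not our exact API: the guard and key scheme are hypothetical, and the presigned-URL variant appears only in comments since it needs real credentials and a bucket.

```python
import uuid

API_GATEWAY_LIMIT = 10 * 1024 * 1024  # the 10 MB payload cap that forced the redesign

def reject_inline_media(body: str) -> None:
    """Refuse base64 payloads that would hit the API Gateway limit."""
    if len(body.encode("utf-8")) > API_GATEWAY_LIMIT:
        raise ValueError("payload exceeds the API Gateway limit; upload via S3 instead")

def register_upload(filename: str, content_type: str) -> dict:
    """Metadata-first: the REST API records the media and hands back an S3 key;
    the client then uploads the bytes with the S3 client instead of inlining them."""
    key = f"uploads/{uuid.uuid4()}/{filename}"
    # With boto3, a presigned PUT could be issued here instead, e.g.:
    #   url = s3.generate_presigned_url("put_object",
    #       Params={"Bucket": BUCKET, "Key": key, "ContentType": content_type},
    #       ExpiresIn=300)
    return {"key": key, "content_type": content_type}
```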

Bundle/Artifact size

When you talk about 'artifact size', most developers think of Android, iOS, or web pages, but in the case of serverless, this is a big issue too.


Imagine this: instead of having your artifact on a server that runs 24/7, you have several sleeping applications that wait for a user's request before installing and building themselves. Once the user is finished, the application deletes itself to free up memory.


So, based on this quick and sad story, you need to be careful with the bundle size, because the function will be reinstalled from time to time.


This is a simplification of the lambda lifecycle:


  • Init: initialization of the executable, install, build, and memory allocation.
  • Invoke: execution of our code.
  • Shutdown: cleaning of the code and release of infrastructure.

To make this possible, AWS requires that every lambda follow these restrictions:

  • 50 MB for deployment: the compressed bundle uploaded to AWS.
  • 250 MB for execution: the decompressed bundle, including Lambda layers (all dependencies).
  • 3 MB for code: your actual code needs to be under 3 MB to be editable in the console. If you use a monorepo or a direct dependency to share code between lambdas, this becomes a problem, because your whole project will need to be under 3 MB.


This is pretty tricky for a large project, but here are a couple of actions we take to meet these limits:


  • Remove unused dependencies: Identify dependencies that are for development, not in use, or only used for scripting. These dependencies are not part of the runtime.
  • Identify dead code: Remove any code that is not being executed in your application.
  • Stick to what is necessary: Only write the code necessary for the solution to function.
  • Limit each function to the necessary dependencies: Each function or endpoint should be limited to the code necessary for its operation.

Summary

Serverless is a fascinating stack that, like all tools, has its own set of strengths and limitations. With a positive attitude and a willingness to learn, you can overcome any challenges that come your way.


I hope you found this information helpful. Please don't hesitate to reach out if you have any questions or suggestions. Keep coding and have a great day!
