Database connection pool issues with serverless Lambda functions

If you are using serverless Lambda functions with a relational database, you may have run into the problem of the database's connection limit getting exhausted.

In this blog, I will explain what can cause this situation and how we can overcome it.

So, before diving into the issue directly, I would like you to understand the life cycle of a Lambda function. Please go through this link to understand the lifecycle.


The why, what and how of Serverless Monitoring

Anyone who has worked with serverless will praise the ease it brings, but will also agree on the shortcomings it still has. With increasing adoption and a constant stream of new tools and features, we are moving toward making it truly viable.

We have come a long way considering how new all of this is. AWS was the first major provider with a serverless offering, launching AWS Lambda back in November 2014. And in a mere four years, here we are.

But it’s not easy to develop a serverless application without a full overview of every part of the system. In the early days it was cumbersome to test functions without uploading them. The community faced these challenges head-on and worked to improve the ecosystem, and now we have serverless-offline and LocalStack to deploy and test our code locally.
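As a rough sketch of what local testing looks like, enabling serverless-offline is typically just a plugin entry in `serverless.yml` (assuming the plugin has been installed with `npm install --save-dev serverless-offline`):

```yaml
# serverless.yml (fragment)
plugins:
  - serverless-offline
```

With that in place, `serverless offline` starts a local emulation of API Gateway and Lambda, so functions can be invoked over HTTP without deploying anything.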

One of the many challenges developers face when working with serverless is monitoring.


Deploying to AWS Lambda

`serverless deploy` all the things!

“Simple can be harder than complex. You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end, because once you get there, you can move mountains”

Ken Segall, Insanely Simple: The Obsession That Drives Apple’s Success


Writing beautiful code is only half the battle. Preparing a place for it to live, where it can serve clients efficiently and reliably, is a whole different story altogether.

What I am about to present is a testament to the quote above, demonstrating how technical ‘mountains’ can be moved once we achieve a level of simplicity and a very high level of abstraction. Brace yourself, because this may change the way you look at deploying your code and, if I may say so, at DevOps itself.
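To make the "`serverless deploy` all the things" idea concrete, here is a minimal sketch of a Serverless Framework service. The service, function, and handler names are placeholders, not anything from a real project:

```yaml
# serverless.yml — minimal sketch with placeholder names
service: hello-service

provider:
  name: aws
  runtime: python3.9
  region: us-east-1

functions:
  hello:
    handler: handler.hello   # handler.py, function hello
```

Running `serverless deploy` from the project directory packages the code, uploads it, and provisions the Lambda function (via CloudFormation) in one step; that single command is the abstraction the quote is about.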


Go Serverless! But Why?

“Serverless will fundamentally change how we build business around technology and how you code.”

– Simon Wardley, Why the fuss about serverless?

Serverless computing is here!

What comes to your mind when you hear the term Serverless? AWS Lambda functions? Google Cloud Functions? Maybe Azure Functions? Or some other service from yet another provider? This is where most people get confused. What they don’t realize is that Serverless is not a service but a cloud computing execution model. Sounds too abstract? Let’s break it down a little.

Serverless with AWS ElastiCache

AWS offers a fully managed, in-memory caching service named ElastiCache. It supports two popular engines: Redis and Memcached.

However, the problem is that an ElastiCache cluster can only be accessed by services inside the same Virtual Private Cloud (VPC) as the cluster.

So here is a quick example of how to set up a serverless function and an ElastiCache cluster inside a VPC.

1 – Setting up a VPC

  1. Log in to the AWS Console and open the VPC dashboard
  2. Choose Start VPC Wizard
  3. From the navigation list, choose VPC with Public and Private Subnets
  4. Keep the default options, and then choose Create VPC
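Once the Lambda function is attached to the same VPC as the cluster, talking to Redis is ordinary client code. Below is a hedged sketch using the redis-py library (assumed to be bundled with the deployment package); `REDIS_HOST` and `REDIS_PORT` are hypothetical environment variable names you would set to the cluster's endpoint:

```python
import os

def elasticache_url():
    """Build a redis:// URL from env vars (hypothetical names; defaults for local runs)."""
    host = os.environ.get("REDIS_HOST", "localhost")
    port = os.environ.get("REDIS_PORT", "6379")
    return f"redis://{host}:{port}"

def handler(event, context):
    # redis-py is assumed to be packaged with the function.
    import redis
    client = redis.Redis.from_url(elasticache_url())
    client.set("greeting", "hello from lambda")
    return {"value": client.get("greeting")}
```

Note that the function's security group must allow outbound traffic to the cluster's port, and the cluster's security group must allow inbound traffic from the function, or the connection will simply time out.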


