The why, what and how of Serverless Monitoring

Anyone who has worked with serverless will praise the ease it brings, but will also agree it still has shortcomings, for now. With increasing adoption and a constant stream of new tools and features, we are steadily closing those gaps.

We have come a long way considering how new all of this is. AWS was the first major provider with a serverless offering, launching AWS Lambda back in November 2014. And in a mere four years, here we are.

But it’s not easy to develop a serverless application without a full overview of every part of the system. In the early days it was cumbersome to test functions without uploading them first. The community faced these challenges head on and worked to improve the ecosystem, and now we have tools such as serverless-offline and LocalStack to deploy and test our code locally.
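For example, with the Serverless Framework, local testing can be as simple as registering the serverless-offline plugin in your `serverless.yml`. A minimal sketch — the service name, runtime, and handler are placeholders, not from any particular project:

```yaml
# serverless.yml — minimal sketch; service and handler names are placeholders
service: my-service

provider:
  name: aws
  runtime: nodejs8.10

plugins:
  - serverless-offline     # emulates API Gateway + Lambda on your machine

functions:
  hello:
    handler: handler.hello # hypothetical handler file and function
    events:
      - http:
          path: hello
          method: get

# Install and run locally:
#   npm install --save-dev serverless-offline
#   serverless offline
```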

One of the many challenges developers face when working with serverless is monitoring.

Why monitor?

According to the Oxford Dictionary, monitoring means

“To observe and check the progress or quality of (something) over a period of time; keep under systematic review.”

When everything is handled by the vendor, why monitor? The vendor you choose will most likely scale compute capacity automatically to ensure your code keeps executing, and will take care of load balancing, network optimizations, and so on, leaving you to simply write your code and relax.

But it’s not that simple. At a minimum, you need monitoring to know when your application stops working or no longer serves your customers’ needs. Beyond that, it gives you insight into your functions, lets you catch slow response times, and helps you discover bottlenecks in the network and in your code.

What to monitor?

Traditional monitoring focuses on server performance, network latency, and other details that are largely irrelevant in the serverless world, since they are all managed by the vendor.

The application code is the most important element within your control. You need to monitor the edge cases where your function fails, cold starts and how to minimize them, the ratio of failed to total invocations, and everything else that ensures a smooth experience for your customers.
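One common pattern for tracking cold starts yourself is a module-level flag: module scope is initialized once per container and survives warm invocations, so the flag is true only on the very first call. A minimal sketch — the handler body and the report fields are illustrative, not any vendor's API:

```python
import time

# Module-level state is initialised once per container, so it survives
# warm invocations and resets only on a cold start.
_IS_COLD = True

def handler(event, context=None):
    global _IS_COLD
    cold_start = _IS_COLD   # True only on the container's first invocation
    _IS_COLD = False

    start = time.time()
    result = {"ok": True}   # placeholder for the real work
    duration_ms = (time.time() - start) * 1000

    # In practice you would emit cold_start/duration_ms as a custom metric
    # or a structured log line for your monitoring tool to pick up.
    return {"result": result,
            "cold_start": cold_start,
            "duration_ms": duration_ms}
```

Calling the handler twice in the same process shows the effect: the first invocation reports a cold start, the second does not.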

How to Monitor?

With the rapidly increasing adoption of serverless architecture, we now have various options to choose from, based on their pros and cons.

Today, we’re going to look at a few tools for monitoring AWS Lambda and weigh their pros and cons, so you will have an easier time deciding which one best suits your needs.

AWS CloudWatch

  • Native AWS logging tool.
  • Primarily for logging, monitoring, and alerts.
  • Tracks metrics like the number of functions executed, latency in execution, and errors during execution.


Pros:

  • Customizable dashboards.
  • Customizable alerts and custom alert triggers.
  • Works out of the box for Lambda; you only need to create and customize a dashboard if you want one.
  • Fairly reasonable pricing, effectively free under free-tier usage.


Cons:

  • Difficult to navigate and find failed invocations.
  • Has around a minute of delay, so performance is not one of its stronger points.
  • You will probably need a separate log aggregator for centralised logging.
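As a concrete illustration, the invocation and error counts CloudWatch tracks can be pulled with boto3 and turned into the failure-to-invocation ratio mentioned earlier. A sketch, assuming AWS credentials are configured; `my-function` is a placeholder name:

```python
from datetime import datetime, timedelta

def error_rate(errors, invocations):
    """Ratio of failed to total invocations; 0.0 when there were none."""
    return errors / invocations if invocations else 0.0

def lambda_error_rate(function_name, hours=24):
    """Query CloudWatch for a Lambda function's error rate (sketch)."""
    import boto3  # local import: only needed when actually querying AWS

    cw = boto3.client("cloudwatch")
    end = datetime.utcnow()
    start = end - timedelta(hours=hours)

    def metric_sum(name):
        resp = cw.get_metric_statistics(
            Namespace="AWS/Lambda",
            MetricName=name,  # "Invocations" or "Errors"
            Dimensions=[{"Name": "FunctionName", "Value": function_name}],
            StartTime=start, EndTime=end,
            Period=3600, Statistics=["Sum"],
        )
        return sum(p["Sum"] for p in resp["Datapoints"])

    return error_rate(metric_sum("Errors"), metric_sum("Invocations"))

# Example: lambda_error_rate("my-function")
```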


Dashbird

Dashbird, unlike many of its competitors, reads AWS CloudWatch logs directly and presents the gathered data in an organized, structured way using graphs and a clean dashboard. It also shows resource cost estimates.

Since it reads from CloudWatch, you don’t need to insert a new piece of code or wrap your functions.


Pros:

  • Tracing & profiling to investigate performance and cold starts.
  • Monitoring and error logs for debugging your serverless functions.
  • Doesn’t require additional code to implement.
  • Customizable alerts.
  • Lambda cost analysis (on a per-function basis).
  • Less costly than IOPipe; a 14-day free trial is provided.


Cons:

  • Metrics have up to one minute of delay (not real-time).


IOPipe

  • Provides tracing, profiling, monitoring, alerts, and real-time metrics.
  • Every function to be monitored needs its own separate wrapper, which is tiresome to add. Fortunately, IOPipe provides a useful Serverless Framework plugin to speed the process up.


Pros:

  • Simple and easy integration.
  • Customizable alerts.
  • Tracing & profiling to investigate performance and cold starts.
  • Monitoring & customizable events for granular error logs and debugging your serverless functions.
  • Real-time metrics.
  • Free tier available for small projects; a 21-day free trial is also provided.


Cons:

  • Uses a wrapper for each function, which adds latency (around 85 ms on cold and 35 ms on warm invocations).
  • Fairly costly for team projects.
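The wrapper approach used by agents like IOPipe can be sketched generically: a decorator that times the handler and records any exception before re-raising it, which is also where the per-invocation overhead comes from. This is an illustrative stand-in, not IOPipe's actual API:

```python
import functools
import time

def monitor(handler, reports=None):
    """Illustrative monitoring wrapper: records duration and errors."""
    if reports is None:
        reports = []

    @functools.wraps(handler)
    def wrapped(event, context=None):
        report = {"function": handler.__name__, "error": None}
        start = time.time()
        try:
            return handler(event, context)
        except Exception as exc:
            report["error"] = repr(exc)  # captured for the monitoring backend
            raise                        # re-raise so Lambda still sees the failure
        finally:
            report["duration_ms"] = (time.time() - start) * 1000
            reports.append(report)       # a real agent would ship this upstream

    wrapped.reports = reports
    return wrapped

# Usage: each handler you want monitored gets its own wrapper.
@monitor
def hello(event, context=None):
    return {"statusCode": 200}
```

A real agent would send each report to its backend over the network, which is why every monitored function pays a small latency cost per invocation.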


Thundra

Similar to IOPipe, it provides tracing, profiling, monitoring, alerts, and metrics.

Thundra differs from IOPipe in a couple of ways:

  • Thundra plans to focus more on Java than on Python or NodeJS.
  • It also takes a different approach to sending monitoring data.

Some honorable mentions:

  1. DataDog
  2. New Relic
  3. OpenTracing

So there you have it! As you can see, tooling in the serverless world is gaining traction, so check back soon; there may well be additions to this list.

Hope you enjoyed this article. Feel free to leave comments and send us suggestions.

About CauseCode: We are a technology company specializing in healthtech-related web and mobile application development. We collaborate with passionate companies looking to change health and wellness tech for good. If you are a startup, an enterprise, or generally interested in digital health, we would love to hear from you! Let's connect at