`serverless deploy` all the things!
“Simple can be harder than complex. You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end, because once you get there, you can move mountains”
Ken Segall, Insanely Simple: The Obsession That Drives Apple’s Success
Writing beautiful code is only half the battle. Preparing a place where it can live healthily and efficiently serve clients is a whole different story altogether.
What I am about to present to you is surely a testament to the quote above, demonstrating how technical ‘mountains’ can be moved once we achieve simplicity and a very high level of abstraction. Brace yourselves, because this may change the way you look at deploying your code and, if I may say, at DevOps itself.
Look ma, no servers!
This blog assumes that you are familiar with the serverless paradigm and have at least gotten your feet wet with the Serverless framework. Our cloud provider and service of choice is none other than AWS Lambda.
If you feel you need a crash course on any of the above topics or a refresher session to jog your memory, you can read my colleague’s blog in which he introduces the serverless stack and walks you through creating a basic Serverless app at https://causecode.com/go-serverless-but-why/.
Where your entire infrastructure lives in a text file
Yes, you read that right. An aspect of cloud computing not often emphasized is that you do not need to know where or how your resources are provisioned. You are simply assured that a resource matching your exact requirements is made available somewhere and that it operates exactly as expected. The resource in question may be a compute engine, storage, a network device, or pretty much anything else that can be provisioned in a cloud.
The infrastructure-as-code paradigm exploits this fact and lets you describe all the resources relevant to your application in a plain text file with a specified format. Again, you only specify the types of resources you need and their specifications, without going into much detail.
Services such as CloudFormation can parse this text file, deploy the required resources on the fly, and then attach them to your application, ready to be used. The set of all resources belonging to one application is grouped into a logical unit called a CloudFormation stack.
As the AWS CloudFormation site puts it: “AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This file serves as the single source of truth for your cloud environment.”
Note that we mention CloudFormation here because the Serverless framework uses it internally when deploying to AWS. It provisions all the resources and then makes them available to your lambda functions. We will see how this works in the coming sections.
Serverless + CloudFormation
Where the magic begins to happen
I talked about the infra-as-code paradigm and AWS CloudFormation in the last section. Now we will see how Serverless framework leverages this to deploy the lambda functions, and the required resources.
The text file we keep talking about, the one that contains the infrastructure specification, is in this case called serverless.yml. Serverless internally uploads this file to AWS CloudFormation when you run the deploy command.
(Heads up: the YAML format seems simple at first but is notorious for being difficult to comprehend once the complexity increases. But you can always head to https://learnxinyminutes.com/docs/yaml/ for a quick crash course and come back to it when you are stuck.)
A serverless.yml file represents a service, which is nothing but a grouping of lambda functions and associated resources that serve a specific purpose. (e.g., a SocialNetworkService will contain endpoints to get/set user data, buckets to store profile photos, and so on.)
A Serverless project mainly contains a set of lambda functions, defined in source file(s). Each of these functions is actually an AWS resource of the type AWS::Lambda::Function. As seen in the post introducing Serverless, we define each function in the YAML file, along with the endpoint that will be assigned to it.
In addition to defining functions, Serverless lets us declare, in the same YAML file, any other AWS resources our application needs in order to run. These resources can be:
- RDS Databases
- DynamoDB tables
- S3 buckets
- Cognito user pools
- IAM roles
- .. and so on.
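As a rough sketch of how this looks (the resource names and attributes here are illustrative, not from any real project), extra resources go under a top-level resources section, written in plain CloudFormation syntax:

```yaml
# serverless.yml (fragment) -- illustrative names only
resources:
  Resources:
    # An S3 bucket for storing uploads
    ProfilePhotosBucket:
      Type: AWS::S3::Bucket
    # A DynamoDB table keyed on userId
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users-table
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```

Anything you could write in a CloudFormation template can be dropped into this section, and it gets merged into the stack that Serverless generates.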
When we run sls deploy, Serverless converts the YAML file to a CloudFormation JSON template, which is the format accepted by AWS. It then uploads this file to an S3 bucket, which triggers the creation of a CloudFormation stack for the service, with the listed resources.
The progress of this can be watched in the CloudFormation console on the AWS website. The YAML file can also contain configuration parameters such as the memory to be allocated (which in turn determines the CPU share a Lambda function gets), timeout, CORS configuration, and so on.
Thus, with a single command, you can provision your entire backend infrastructure automatically, and do it anywhere, any number of times, with the same result, as long as you have the YAML file.
Mind = blown!
For a deep dive on this and to dissect the serverless.yml file, head to https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/, which describes each section of the file in detail. Here’s what a typical template looks like:
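A minimal sketch of such a template (the service name, runtime, handler, and path are illustrative placeholders):

```yaml
# serverless.yml -- a minimal illustrative sketch
service: social-network-service

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  memorySize: 256   # MB; CPU share scales with memory on Lambda
  timeout: 10       # seconds

functions:
  getUser:
    handler: handler.getUser   # getUser exported from handler.js
    events:
      - http:                  # API Gateway endpoint for this function
          path: users/{id}
          method: get
          cors: true
```

Each entry under functions becomes an AWS::Lambda::Function resource in the generated stack, and each http event becomes an API Gateway endpoint wired to it.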
AWS APIGateway Stages
Just when you thought that was the end of the magic
What if I told you that you can create separate environments such as dev, alpha, beta, prod, and have their own isolated set of lambda functions, resources, and even endpoint URLs, all by specifying just a single parameter? That’d be awesome! Then, you could test out your changes in a real environment by deploying them to a test stage, and then deploy with confidence to production, with a flick of a single switch.
This is made possible by an AWS concept called, not so surprisingly, stage. While deploying with the sls deploy command, you can specify an additional parameter with the option -s <stage_name>. That’s it! Serverless will create a set of different lambda functions and endpoints for that stage, and then deploy them.
You can choose whether a resource is created common to all stages or separate for each stage, by excluding or including the stage name in the resource’s name. Allow me to explain with a code snippet:
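A sketch of the idea (names are illustrative): interpolating ${self:provider.stage} into a resource’s name gives you one copy per stage, while leaving it out gives you a single, stage-independent name.

```yaml
# serverless.yml (fragment) -- illustrative names only
provider:
  name: aws
  stage: ${opt:stage, 'dev'}   # filled from `sls deploy -s <stage_name>`; defaults to dev

resources:
  Resources:
    # Per-stage: one bucket per stage (photos-dev, photos-beta, photos-prod, ...)
    ProfilePhotosBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: photos-${self:provider.stage}
    # Stage-independent: the stage name is left out of the bucket name.
    # Note: since names must be unique, a resource like this should be
    # declared in only one stack, or parallel stage deploys will conflict.
    SharedAssetsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-app-shared-assets
```

Serverless resolves these ${...} variables before handing the template to CloudFormation, so each stage ends up with its own fully-named set of resources.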
Let’s Do It!
All the power, distilled into two simple commands
If you have read this far, congrats! All the knowledge gained so far boils down to just two simple commands: one to create the CloudFormation stack, and one to remove it.
sls deploy [-s <stage_name>]
Creates the CloudFormation JSON template, uploads it, and creates the stack. You can check the progress in your AWS Console.
sls remove [-s <stage_name>]
Removes all the functions, endpoints, and resources related to that service and stack, leaving no trace whatsoever on the AWS infrastructure.
Pro Tip: At CauseCode, we maintain an AWS stage for every developer on the team, apart from the alpha, beta, and prod stages. That way, developers can test their own code on AWS infrastructure in isolation, without tripping over each other.
That’s all folks, for this blog. Feel free to comment or discuss and share your opinions. We’ll catch you soon with yet another power packed blog!
About CauseCode: We are a technology company specializing in Healthtech related Web and Mobile application development. We collaborate with passionate companies looking to change health and wellness tech for good. If you are a startup, enterprise or generally interested in digital health, we would love to hear from you! Let's connect at firstname.lastname@example.org