Written by Tim Smiser
Continuous Integration and Continuous Delivery (CI/CD) is a relatively new approach to delivering software applications. SaaS and hosted-software companies have been adopting this approach to increase the frequency of delivery and reduce the risk that accompanies large software releases. The end result is SaaS solutions that are more responsive to customer needs and more adaptable to changing markets.
Traditionally, CI/CD pipelines were still based on the notion that the first thing you had to manage was a running instance of some kind: bare metal, hosted VMs, and EC2 instances were all examples of components that needed to be managed. The typical deployment pattern was:
- A pull request is opened against master
- Code is peer reviewed, then merged
- The merge triggers the following steps:
- Build the server instance(s) for the AQA environment
- Configure the server(s) for the AQA environment
- Configure the container(s) (e.g. Tomcat) for the AQA environment
- Deploy the code to the AQA environment
- Run AQA tests and generate reports
- Tear down the AQA infrastructure
- Repeat the above steps (minus the AQA tests) for the Blue environment
- UAT the Blue environment
- Blue/Green swap (with optional A/B testing)
Smaller Feature Sets = Faster Delivery, Less Risk
The mindset of CI/CD is that you release small portions of code much more frequently. This reduces risk and makes your application more responsive to your customers’ needs.
The Peer Review Code Gate
Using Git as the code repository allows us to take an approach that relies heavily on a branching strategy. A typical feature is developed in a branch off of master, and master is always an exact representation of what is deployed to production.
Any successful CI/CD implementation must have a robust set of automated QA tests; manual QA is too time-consuming to sit in the middle of a CI/CD pipeline.
Blue/Green deployment refers to a process by which an exact mirror of your production environment is built out and tested. Once deemed acceptable, the production mirror (blue) is swapped via DNS with the current production environment (green). This allows for a rapid production rollout, and a rapid rollback strategy should any issues arise with the new deployment. Both environments can also be left in place for a time in order to implement an A/B testing strategy.
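In AWS, the DNS swap is typically a single record update in Route 53. As a minimal sketch (the record and environment DNS names are hypothetical), the swap amounts to building an UPSERT change batch that repoints the public record at whichever environment has been promoted:

```python
# Sketch of the blue/green DNS swap as a Route 53 record update.
# The names here are placeholders, not from the original article.

def build_swap_change_batch(record_name, target_dns, ttl=60):
    """Build a Route 53 ChangeBatch that repoints record_name at target_dns."""
    return {
        "Comment": "Blue/Green swap",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": target_dns}],
                },
            }
        ],
    }

# With boto3, the batch would be submitted roughly as:
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z...",
#     ChangeBatch=build_swap_change_batch("app.example.com.", "blue.example.com."))
```

Rolling back is the same call with the old target, which is what makes the blue/green strategy fast in both directions. A low TTL on the record keeps the cutover quick.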
Serverless CI/CD aims to take all of the points listed above and leverage them in the new serverless world that is now a reality for most tech companies.
Amazon blazed a trail in this space that Google and Microsoft have since followed. Serverless represents a significant step toward the Holy Grail developers have been chasing for years: worry only about writing code, and leverage a service for everything else. Several components in the AWS ecosystem embody this spirit. For the purposes of this discussion, we’ll focus on Lambda functions, API Gateway, and serverless data storage such as DynamoDB and S3.
AWS Serverless concepts
- Lambda – an autoscaling, managed container for hosting a small functional unit of code. Node.js, Python, and Java are currently the supported languages.
- Serverless Data Storage
- DynamoDB – NoSQL storage as a Service
- S3 – File Storage as a Service
- API Gateway – a scalable front end to API endpoints as a Service
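To make the Lambda concept concrete, here is a minimal sketch of a handler behind an API Gateway proxy integration (the function and field names are illustrative, not from the original article). The proxy integration hands the HTTP request to the handler as an event dict and expects a dict with statusCode, headers, and body in return:

```python
import json

def handler(event, context):
    """Minimal Lambda handler: return a JSON greeting.

    API Gateway's Lambda proxy integration expects a response dict
    containing statusCode, headers, and a string body.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello, " + name}),
    }
```

Because the handler is just a function, it can be exercised locally with a hand-built event, e.g. `handler({"queryStringParameters": {"name": "CI"}}, None)`, which is part of what makes Lambdas straightforward to cover with automated tests.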
Sample Serverless Stack
Below is an example of a simple serverless stack.
Serverless Build Automation
Serverless components break the mold of deployment that developers are traditionally used to, as there is no server instance to deploy to and no running container that needs to be managed. In this section, we’ll dive into a few strategies for bringing the robust concepts behind CI/CD to a serverless world.
Full Teardown/Buildup vs. Streamlined Deployment
For the most accurate testing, the full buildup/teardown methodology is the way to go: a CloudFormation template (CFT) is defined for the whole stack and parameterized for each environment, and each environment is built from the ground up using the CFTs.
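As a sketch of what "parameterized for each environment" means, a template fragment might look like the following (the resource names, runtime, role parameter, and bucket naming scheme are illustrative assumptions, not from the original article):

```yaml
# Hypothetical fragment of a parameterized CloudFormation template.
Parameters:
  EnvName:
    Type: String
    AllowedValues: [aqa, blue, green]
  LambdaRoleArn:
    Type: String
Resources:
  ApiFunction:
    Type: AWS::Lambda::Function
    Properties:
      # The environment name is woven into every resource name,
      # so the same template can build AQA, Blue, or Green.
      FunctionName: !Sub "api-handler-${EnvName}"
      Runtime: python3.9
      Handler: index.handler
      Role: !Ref LambdaRoleArn
      Code:
        S3Bucket: !Sub "deploy-artifacts-${EnvName}"
        S3Key: api-handler.zip
```

The pipeline then builds each environment with the same template, passing a different `EnvName`, and tears the stack down by deleting it when the AQA run completes.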
The problem with that approach is that building a full stack from a CFT can be time consuming, and speed of delivery is often the critical path for a successful CI/CD pipeline. A more streamlined approach leaves long-running components (the API Gateway, Lambda functions, DynamoDB, and the S3/CloudFront UI container in the above diagram) in place, while fully replacing the deployed code.
Using Jenkins as the orchestrator, you can leverage AWS plugins that deploy code and configure components directly from your CI/CD jobs. The Jenkins Lambda plugin allows you to not only deploy the code but also set the memory, timeout, runtime, and environment variables.
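The same streamlined update can also be scripted directly against the AWS API. A minimal sketch with boto3 (the function name, bucket, and key are placeholders): the code is swapped in place and the configuration updated, with no infrastructure teardown. The client is passed in as a parameter so the function can be exercised against a stub in tests:

```python
def deploy_lambda(client, function_name, bucket, key,
                  memory_mb=256, timeout_s=30, env=None):
    """Update a Lambda's code and configuration in place (no teardown).

    `client` is expected to expose the boto3 Lambda API:
    update_function_code and update_function_configuration.
    """
    # Swap in the new build artifact that the pipeline uploaded to S3.
    client.update_function_code(
        FunctionName=function_name, S3Bucket=bucket, S3Key=key)
    # Apply the per-environment settings: memory, timeout, env vars.
    client.update_function_configuration(
        FunctionName=function_name,
        MemorySize=memory_mb,
        Timeout=timeout_s,
        Environment={"Variables": env or {}},
    )

# In a real pipeline step this would be invoked roughly as:
# deploy_lambda(boto3.client("lambda"), "api-handler-aqa",
#               "deploy-artifacts", "api-handler.zip",
#               env={"STAGE": "aqa"})
```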
One trick of note: the S3 bucket policy must grant read access to the uploaded objects, or they will not be publicly accessible.
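The exact policy will vary by setup; a typical public-read bucket policy (the bucket name here is a placeholder) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-ui-bucket/*"
    }
  ]
}
```

The `/*` suffix on the Resource ARN is what applies the statement to the objects inside the bucket rather than to the bucket itself.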
Stage Variables and the API Gateway
The API Gateway is set up to deploy to various stages, and in the “Custom Domains” config section you can assign DNS names to the different stages. Stage variables are variables defined on the API Gateway that can take a different value at each stage. They are passed down into the Lambda code backing the API Gateway endpoints and can be used, for example, to point at a different data store per environment, giving your code a way to know which environment it is being invoked in.
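With the Lambda proxy integration, stage variables arrive on the event under `stageVariables`. A minimal sketch (the variable and table names are hypothetical) of a handler picking its data store per environment:

```python
def handler(event, context):
    """Choose the per-environment DynamoDB table from API Gateway stage variables."""
    stage_vars = event.get("stageVariables") or {}
    # Each API Gateway stage (aqa, blue, green, ...) sets its own value
    # for the hypothetical "dynamoTable" stage variable.
    table = stage_vars.get("dynamoTable", "app-data-dev")
    # A real handler would now do something like:
    #   boto3.resource("dynamodb").Table(table).get_item(...)
    return {"statusCode": 200, "body": "using table " + table}
```

Because the environment selection lives in the stage configuration rather than in the code, the same Lambda artifact can be promoted from stage to stage unchanged.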
CI/CD is a great approach to shipping an application that is of higher quality and more adaptive to your customers’ needs. With the few tips and tricks outlined above, you can leverage a CI/CD methodology in a serverless environment, combining two dynamite modern development practices.