Last updated September 18, 2020

At NİCESİ, we work hard to improve our service quality. If you charge customers for your services, they must be always available and quick to respond, and in the event of a failure, you have to recover quickly. To achieve this, we take advantage of DevOps practices.

This tutorial provides a high-level framework for building a highly available and scalable infrastructure using AWS Elastic Beanstalk, continuous integration using Github and CircleCI, and continuous delivery using AWS CodePipeline and BlazeMeter.

We also provide AWS CloudFormation templates and awscli commands to help you set the entire platform up in minutes.

We will deploy a simple RESTful API inside a node:12 Docker image using the AWS Elastic Beanstalk Docker platform. Elastic Beanstalk will build our Dockerfile and deploy the resulting image to EC2 instances running Docker.

The application is a simple Node.js Express API with unit tests that we’ll run in the CI pipeline.
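If you want to sanity-check the image locally before touching AWS, you can build and run it with Docker. This is optional, and the container port below is an assumption: match it to the EXPOSE line in the repo’s Dockerfile.

# Build the image from the repo's Dockerfile (run inside the repo)
docker build -t node-petshop-api .
# Run it locally; 8080 is an assumed port, check the Dockerfile's EXPOSE line
docker run --rm -p 8080:8080 node-petshop-api
# In another terminal, hit the health check route
curl localhost:8080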

Below is a high-level blueprint of our infrastructure.

Blueprint

In Part 1, we will:

  • Create AWS Elastic Beanstalk environments (ProdEnv, TestEnv) using AWS CloudFormation
  • Set up Continuous Integration with CircleCI and Github, and enable the Github Flow workflow
  • Set up Continuous Delivery with AWS CodePipeline using AWS CloudFormation

In Part 2, we’ll integrate a BlazeMeter performance test into our pipeline.

Requirements

  • An AWS IAM user with admin permissions, a Github account, and a CircleCI account.
  • The awscli, git, curl, and jq command-line tools (a quick check follows this list).
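Before going further, you can quickly confirm all four tools are on your PATH:

# Warn about any missing tool
for tool in aws git curl jq; do
  command -v "${tool}" > /dev/null || echo "${tool} is not installed"
done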

1) Create AWS Elastic Beanstalk environments

Introduction

In this section, we’ll create our highly available and scalable infrastructure using AWS Elastic Beanstalk.

AWS Elastic Beanstalk is an easy way to deploy your application in a scalable fashion at no extra cost; you pay only for the underlying resources.

You don’t need to manually create auto-scaling groups, CloudWatch alarms, load balancers, or application runtimes (Java, Node.js, PHP, Go, Python, Docker); Elastic Beanstalk takes care of all of them.

You can also integrate AWS RDS if you need a relational database layer.

Since there are many AWS resources (IAM roles, Elastic Beanstalk apps, environments) that depend on each other, we’ll use AWS CloudFormation to create and delete them consistently. We’ll create two Elastic Beanstalk environments: one for testing/staging (TestEnv) and one for production (ProdEnv). Both use the t2.micro instance type.

ProdEnv is a little more powerful than TestEnv: it spins up a minimum of 2 EC2 instances and uses the Rolling deployment policy with a batch size of 1. Both environments create an AWS Application Load Balancer.

Here is the complete CloudFormation template (cf-templates/eb-apps.yaml in the repo).

Setup

Step 1: Fork repo, clone and navigate into it

Sign in to your Github account and fork the repo below, which contains our example app and the entire infrastructure as code.

https://github.com/acikogun/node-petshop-api

Clone the repo and cd into it.

git clone https://github.com/<YOUR_GITHUB_USERNAME>/node-petshop-api
cd node-petshop-api

Step 2: Load configuration

We use environment variables so that values aren’t hardcoded into the CloudFormation templates. They need to be exported before we create the CloudFormation stacks, and the config.sh file does that for you.

source cf-templates/config.sh

Here is the content of the config.sh file.
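If the embedded file doesn’t render for you, the sketch below shows the kind of exports it performs. The variable names are the ones used throughout this tutorial; the values are placeholders, so check cf-templates/config.sh in the repo for the real ones.

# Illustrative placeholders only -- the real values live in cf-templates/config.sh
export S3_BUCKET_NAME="<globally-unique-bucket-name>"  # S3 bucket names must be globally unique
export APP_FILE="node-petshop-api.zip"                 # name of the zipped source archive
export PROD_APP_NAME="<prod-app-name>"
export PROD_ENV_NAME="<prod-env-name>"                 # the environment we call ProdEnv
export TEST_APP_NAME="<test-app-name>"
export TEST_ENV_NAME="<test-env-name>"                 # the environment we call TestEnv
export PROD_ENV_CNAME="<prod-cname-prefix>"
export TEST_ENV_CNAME="<test-cname-prefix>"
export INSTANCE_TYPE="t2.micro"                        # both environments use t2.micro
export HEALTH_CHECK_PATH="/"                           # assumption: the API answers health checks on /
export EBS_PLATFORM="<Elastic Beanstalk Docker platform>"  # check the repo for the exact platform string
export GITHUB_REPO="node-petshop-api"
export GITHUB_BRANCH="master"                          # the branch the CD pipeline watches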

Step 3: Upload source code to S3 bucket

We need to upload our source code to an S3 bucket in zip format. This is a one-time operation to initialize the Elastic Beanstalk environments. Once we set up our CD pipeline, it will fetch the source code from Github and upload it to S3 automatically.

# Create a bucket
aws s3 mb s3://${S3_BUCKET_NAME}
# Archive in zip format
git archive -v -o /tmp/${APP_FILE} --format=zip HEAD
# Upload zip file to bucket
aws s3 cp /tmp/${APP_FILE} s3://${S3_BUCKET_NAME}
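You can optionally confirm the archive landed in the bucket:

# List the bucket contents to verify the upload
aws s3 ls s3://${S3_BUCKET_NAME}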

Step 4: Create CloudFormation stack

Now it is time to create our Elastic Beanstalk environments.

Elastic Beanstalk will build a Docker image (containing our application) from the Dockerfile and deploy it to both environments. Stack creation takes about 7 minutes.

Run the command below to create the ProdEnv and TestEnv environments.

aws cloudformation create-stack --stack-name node-petshop-api-ebs \
--template-body file://cf-templates/eb-apps.yaml \
--parameters \
ParameterKey=S3Bucket,ParameterValue="${S3_BUCKET_NAME}" \
ParameterKey=SourceS3Key,ParameterValue="${APP_FILE}" \
ParameterKey=EBProdCNAME,ParameterValue="${PROD_ENV_CNAME}" \
ParameterKey=EBTestCNAME,ParameterValue="${TEST_ENV_CNAME}" \
ParameterKey=EBProdAppName,ParameterValue="${PROD_APP_NAME}" \
ParameterKey=EBProdEnvName,ParameterValue="${PROD_ENV_NAME}" \
ParameterKey=EBTestAppName,ParameterValue="${TEST_APP_NAME}" \
ParameterKey=EBTestEnvName,ParameterValue="${TEST_ENV_NAME}" \
ParameterKey=ProdInstanceType,ParameterValue="${INSTANCE_TYPE}" \
ParameterKey=TestInstanceType,ParameterValue="${INSTANCE_TYPE}" \
ParameterKey=LBHealthCheckPath,ParameterValue="${HEALTH_CHECK_PATH}" \
ParameterKey=EBPlatform,ParameterValue="${EBS_PLATFORM}" \
--capabilities CAPABILITY_NAMED_IAM

You can navigate to the CloudFormation and Elastic Beanstalk consoles to watch while CloudFormation creates the stack.

Warnings:

  • Don’t forget to switch the console to the region you created the stack in.
  • Make sure the stack has been created without any errors (the commands below can help with both checks).
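If you prefer the terminal, these optional commands cover the same checks:

# Confirm the CLI region matches the console region you're looking at
aws configure get region
# Block until stack creation finishes (exits non-zero on failure)
aws cloudformation wait stack-create-complete --stack-name node-petshop-api-ebs
# Print the final stack status (should be CREATE_COMPLETE)
aws cloudformation describe-stacks --stack-name node-petshop-api-ebs \
  --query "Stacks[0].StackStatus" --output text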

Elastic Beanstalk Environments

CloudFormation Stacks

Step 5: Test out the endpoint

Now that we’ve deployed our API to both Elastic Beanstalk environments (TestEnv, ProdEnv), we can run a simple smoke test against ProdEnv’s ELB endpoint.

Navigate to the Elastic Load Balancing console and copy the DNS address of your ProdEnv load balancer.

ProdEnv ELB endpoint
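Alternatively, you can fetch the environment’s CNAME (which points at the load balancer) with awscli:

# Print the ProdEnv endpoint without opening the console
aws elasticbeanstalk describe-environments \
  --environment-names "${PROD_ENV_NAME}" \
  --query "Environments[0].CNAME" --output text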

API health check

curl <YOUR_PRODUCTION_ELB_ADDRESS>
{"Status":"200 OK"}

Get all pets

curl <YOUR_PRODUCTION_ELB_ADDRESS>/pets
[{"id":1,"type":"dog","price":249.99},{"id":2,"type":"cat","price":124.99},{"id":3,"type":"fish","price":0.99}]

Considerations

  • Because this is just a showcase, we’ve used a minimum of 2 EC2 (t2.micro) instances in production. You should determine the type and amount of resources that suit your business best.

  • Elastic Beanstalk is a good solution for simple web applications and APIs. If your application consists of multiple interconnected services, it would be hard and slow to deploy and scale on Elastic Beanstalk. Consider using Kubernetes, and possibly Istio on top of it.

  • We’ve deployed both environments to the same default VPC with the same AWS account. In a real production environment, you should provide the highest level of resource and security isolation. Consider isolation of VPCs, accounts, Elastic Beanstalk environments, etc. Here is a comprehensive guide to give you an idea.

  • We didn’t enable an HTTPS listener on the load balancers for the sake of simplicity. In real-world scenarios, you must enable HTTPS and disable (or redirect) plain HTTP. Consider using AWS Certificate Manager.

2) Set up Continuous Integration

Introduction

In this section, we’ll set up our continuous integration with CircleCI. We’ll also enable the Github Flow branching model. That means every time a developer pushes a branch (for example, a feature or bugfix branch), Github runs checks (CircleCI in our context).

This enables teams to practice continuous integration (CI) via trunk-based development, where developers regularly merge their code changes into a central repository, ideally several times a day. When teams regularly merge small changes, they minimize the complexity of each merge and thereby the effort. Combining trunk-based continuous integration with continuous delivery (CI/CD) reduces the lead time for getting a change into production.

Setup

Step 1: Set up Continuous Integration with CircleCI

In this step, we’ll integrate Github with CircleCI. The repo already contains a CircleCI config file (.circleci/config.yml). It installs the npm dependencies and runs the unit tests.

  • Visit CircleCI and sign up with your Github account.

  • Click Projects on the left menu. Find the repo you forked before (node-petshop-api) and click Set Up Project.

  • Click Start Building on the popup menu. It’ll kick off the CI pipeline and redirect to the Pipelines page. Make sure it has run without any error.

CircleCI continuous integration pipeline

Step 2: Enable Github Flow branching model using Github Checks

GitHub flow is a lightweight, branch-based workflow that supports teams and projects where deployments are made regularly. To learn more, visit the official guide.

The idea is simple.

  • Developers create branches off of the master branch.
  • They make commits, push those branches to Github, and open pull requests.
  • They can discuss and review commits, and add new commits to open pull requests.
  • When they are satisfied, they merge the commits into the master branch.
  • Every time a push or merge occurs, Github checks run (the CircleCI pipeline in our scenario).

Navigate to https://github.com/YOUR_GITHUB_USERNAME/node-petshop-api/settings/branches

  • Click the Add rule button.
  • Type master into the Branch name pattern field.
  • Select Require status checks to pass before merging and check the ci/circleci: run_tests box below it.
  • Click the Create button.

Github branch protection rule

We’ve enabled our CI pipeline with Github checks. We’ll test it out after we create the continuous delivery pipeline.
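Once you have a Github personal access token (we create one in Section 3, Step 1), you can optionally confirm the rule through the Github branch protection API; the token needs the repo scope.

# ${GITHUB_USER} and ${GITHUB_TOKEN} are exported in Section 3, Step 2
curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  https://api.github.com/repos/${GITHUB_USER}/node-petshop-api/branches/master/protection \
  | jq '.required_status_checks.contexts'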

Considerations

  • You can enforce stricter rules like Require pull request reviews before merging. That way, you ensure pull requests won’t be merged without a code review.
  • We use master as our main branch. You can enable the rule on whichever deployment branch you use (dev, staging, prod).

3) Set up Continuous Delivery with AWS CodePipeline

Introduction

In this section, we’ll create a continuous delivery pipeline with AWS CodePipeline. We’ll integrate it with our Github repo. When we push or merge commits to our master branch, it’ll trigger AWS CodePipeline.

The pipeline will first pull the source code from Github and deploy it to the Elastic Beanstalk TestEnv environment. Then it’ll wait for manual approval. When we approve, it will deploy the application to the ProdEnv environment.

In Part 2, we’ll integrate a BlazeMeter performance test into our pipeline.

Setup

Step 1: Configure GitHub authentication

CodePipeline uses GitHub OAuth tokens and personal access tokens to access your GitHub repositories and retrieve the latest changes. We’ll create a personal access token.

  • In Github, go to Settings, then Developer settings, then Personal access tokens, and choose Generate new token. Give the token the repo and admin:repo_hook scopes, which CodePipeline needs to read the repo and create its webhook.

Github token

  • Click Generate token.

  • Next to the generated token, choose the copy icon. You won’t be able to see it again.

Github token

Step 2: Export Github variables

  • Set variables for your Github username and the token you’ve just created.
export GITHUB_USER=<YOUR_GITHUB_USERNAME>
export GITHUB_TOKEN=<YOUR_GITHUB_TOKEN>

Step 3: Create Pipeline

  • Run the command below to create the CodePipeline named node-petshop-api-cd-pipeline.
aws cloudformation create-stack --stack-name node-petshop-api-cd-pipeline \
--template-body file://cf-templates/codepipeline.yaml \
--parameters \
ParameterKey=GitHubRepositoryName,ParameterValue="${GITHUB_REPO}" \
ParameterKey=GitHubUser,ParameterValue="${GITHUB_USER}" \
ParameterKey=GitHubBranch,ParameterValue="${GITHUB_BRANCH}" \
ParameterKey=GitHubToken,ParameterValue="${GITHUB_TOKEN}" \
ParameterKey=ArtifactBucket,ParameterValue="${S3_BUCKET_NAME}" \
ParameterKey=EBProdCNAME,ParameterValue="${PROD_ENV_CNAME}" \
ParameterKey=EBTestCNAME,ParameterValue="${TEST_ENV_CNAME}" \
ParameterKey=EBProdAppName,ParameterValue="${PROD_APP_NAME}" \
ParameterKey=EBProdEnvName,ParameterValue="${PROD_ENV_NAME}" \
ParameterKey=EBTestAppName,ParameterValue="${TEST_APP_NAME}" \
ParameterKey=EBTestEnvName,ParameterValue="${TEST_ENV_NAME}" \
--capabilities CAPABILITY_NAMED_IAM

Our CloudFormation template will create the pipeline and a webhook to Github. The pipeline will start running immediately after creation.
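You can also follow along from the terminal instead of the console:

# Block until the pipeline stack is created
aws cloudformation wait stack-create-complete --stack-name node-petshop-api-cd-pipeline
# Show the status of each stage in the pipeline
aws codepipeline get-pipeline-state --name node-petshop-api-cd-pipeline \
  --query "stageStates[].{stage:stageName,status:latestExecution.status}"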

  • Browse the AWS CodePipeline console and wait until the pipeline reaches the Approve stage. You can check the TestEnv endpoint to see if it works.

  • If you are satisfied, click Review, then Approve (or approve from the CLI as shown below). Now you can watch while the DeploytoProduction stage runs. It will deploy our application to the ProdEnv environment.
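The approval can also be done with awscli and jq. The stage name Approve comes from the step above; the action name ApprovalAction is an assumption, so check the get-pipeline-state output for the real one.

# Grab the pending approval token (ApprovalAction is an assumed action name)
TOKEN=$(aws codepipeline get-pipeline-state --name node-petshop-api-cd-pipeline \
  | jq -r '.stageStates[] | select(.stageName == "Approve") | .actionStates[0].latestExecution.token')
# Approve so the DeploytoProduction stage can run
aws codepipeline put-approval-result \
  --pipeline-name node-petshop-api-cd-pipeline \
  --stage-name Approve \
  --action-name ApprovalAction \
  --result summary="Smoke test passed",status=Approved \
  --token "${TOKEN}"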

CodePipeline

Considerations

There is a reason why we use CircleCI for CI and AWS CodePipeline for CD.

We could use CircleCI, Jenkins, or TravisCI for both CI and CD. But then you’d have to install the awscli tool or an outdated Jenkins extension for Elastic Beanstalk, and expose your AWS secrets outside of your VPC. That is not good practice. AWS CodePipeline is native and the best way to deploy to Elastic Beanstalk.

We could use AWS CodePipeline for CI. We could integrate AWS CodeBuild or Jenkins to our CodePipeline to run our unit tests.

But CodePipeline doesn’t support multiple branches. That means we can’t run checks (unit tests) on our feature branches, which blocks us from implementing the Github Flow workflow. We could dynamically create a pipeline for each branch using an AWS Lambda function, but that is neither efficient nor scalable.

So, using a CI service like CircleCI or TravisCI that integrates well with Github, and using CodePipeline for deployment to AWS services, seems like an efficient solution in many cases.

Test it out

Now it’s time to test our CI/CD pipelines using Github Flow.

Let’s say you want to update the dog’s price.

  • Create a feature branch called update-dog-price.
git checkout -b update-dog-price
  • Open app.js. Go to line 24 and update the dog price as you like.

  • Commit and push your feature branch.

git add app.js
git commit -m "Update dog price"
git push origin update-dog-price
  • Go to CircleCI pipelines and ensure your CI pipeline is triggered and running.

  • Browse the Github repo you’ve forked and click the Pull requests tab.

  • Click New pull request. Select master as the base branch and update-dog-price as the compare branch.

  • Click Create pull request and watch the Github checks run.

Github Check

  • Now, let’s say one of our collaborators has reviewed your commit and noticed that you’ve only updated the /pets endpoint and forgotten to update the /pets/1 endpoint.

  • Open app.js again. Go to line 45 and update it too.

  • Commit and push again.

git add app.js
git commit -m "Fix dog price"
git push origin update-dog-price
  • Github checks will run again. If they pass, we’ll be able to merge our commits into the master branch. Assuming everything is fine and ready to deploy, let’s merge our commits and watch while CodePipeline deploys them to Elastic Beanstalk.

  • Click Merge pull request, then Confirm merge.

  • Go to AWS CodePipeline console again and repeat the approval steps we’ve done before. Ensure the pipeline has run without any error.

  • Go to the ProdEnv ELB address again with curl or your browser. Check the /pets and /pets/1 endpoints and ensure AWS CodePipeline deployed your updates as expected (a quick check follows below).
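Assuming /pets/1 returns the single pet object (as the review comment above implies), you can verify both spots with jq:

# Both prices should now match your update
curl -s <YOUR_PRODUCTION_ELB_ADDRESS>/pets | jq '.[] | select(.type == "dog") | .price'
curl -s <YOUR_PRODUCTION_ELB_ADDRESS>/pets/1 | jq '.price'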

Clean Up

You can follow Part 2 to integrate a BlazeMeter performance test into the CD pipeline. If you don’t plan to, don’t forget to destroy the AWS resources to avoid unexpected charges.

# Delete pipeline
aws cloudformation delete-stack --stack-name node-petshop-api-cd-pipeline
# Delete Elastic Beanstalk environments
aws cloudformation delete-stack --stack-name node-petshop-api-ebs
# Delete bucket
aws s3 rb s3://${S3_BUCKET_NAME} --force
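Stack deletion is asynchronous. If you want to block until everything is gone, wait on both stacks:

# Block until both stacks are fully deleted
aws cloudformation wait stack-delete-complete --stack-name node-petshop-api-cd-pipeline
aws cloudformation wait stack-delete-complete --stack-name node-petshop-api-ebs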

Summary

This scenario is a simple yet powerful showcase for building and shipping web apps and APIs.

If you need help with your business, please contact us. We are ready to find the best solution together!