Using Rancher RKE to provision MongoDB and deploy a React App onto AWS using Docker, Helm and Terraform

This is a tutorial to show you how to deploy a React application and provision a MongoDB instance utilizing RKE, Docker, Helm, and Terraform on AWS.

On Monday, March 30th, one of our corporate partners approached us with an issue: they were trying to use Rancher to build a proof of concept with MongoDB and Helm on AWS, and the documentation they were following was not getting them to a working solution. They ran into multiple errors where the servers they had provisioned were not communicating properly with their Kubernetes cluster, and they were running in circles trying to find a fix.

We took the time to look at the tools they were using and came up with a tutorial on how to get everything connected to a Kubernetes cluster in AWS. We do not do anything substantial with the MongoDB instance, but we will show how to stand it up as a proof of concept. Below are step-by-step instructions and a walkthrough of how to use our template to get your application into the cloud.

If that last paragraph doesn't sound anything like the English language to you, we totally understand; we've been there. But we hope you've come to this post with a problem we can help you solve.

Table of Contents
  1. The Five Tools
  2. Planning Steps
  3. Helm Steps

The Five Tools

Terraform
Rancher Kubernetes Engine (RKE)
Helm (you can install Helm with a package manager or by downloading a binary; we used Homebrew on our local machines: brew install helm)
AWS (create an account)
Docker

This tutorial assumes that you have these tools installed; if not, please follow each tool's documentation to install it. You should be able to install all of them from your terminal by following that documentation.
Now that everything is installed, let’s take a moment to talk about what these tools do.

Terraform

Terraform is the most popular Infrastructure as Code (IaC) tool. Created at HashiCorp (co-founded by Mitchell Hashimoto) and written in Go, it enables users to define and provision data center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Terraform supports a number of cloud infrastructure providers such as AWS, IBM Cloud, Google Cloud Platform, and more.

Terraform solves a problem where developers used to have to set up their infrastructure manually. It lets you manage your infrastructure in files, where each resource represents a piece of infrastructure in a given environment, such as a virtual machine, security group, or network interface.

With HCL you create files that define the resources you want on almost any provider, and Terraform automates the creation of those resources when you apply the configuration.
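As a small illustration of what HCL looks like, here is a hedged sketch of a single resource definition. The region, AMI ID, and instance values are placeholders for illustration, not values from this tutorial's repository:

```hcl
# Configure the AWS provider (region is a placeholder).
provider "aws" {
  region = "us-east-1"
}

# One resource block = one piece of infrastructure.
# "example" is a local name used to reference this resource elsewhere.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Running terraform apply against a file like this would create the EC2 instance and record it in Terraform's state.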

Rancher Kubernetes Engine (RKE)

The Rancher Kubernetes Engine is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. RKE was created to solve the common frustration of Kubernetes installation complexity: it removes most host dependencies and presents a stable path for deployment, upgrades, and rollbacks.
RKE makes the operation of Kubernetes easy to automate and entirely independent of the operating system and platform you're running.

Helm

Helm is a tool that was created to streamline installing and managing Kubernetes applications. To compare it to something you might be more familiar with, think of it like Apt/Yum/Homebrew for Kubernetes.

Helm makes use of a packaging format known as charts, and a chart is a collection of files that describe a related set of Kubernetes resources. Helm charts help you define, install, and upgrade even the most complex Kubernetes application and are easy to create, version, share, and publish.
There are three important concepts in Helm:

  1. The chart is a bundle of information necessary to create an instance of a Kubernetes application.
  2. The config contains configuration information that can be merged into a packaged chart to create a releasable object.
  3. The release is a running instance of a chart, combined with a specific config.

Helm provides a command-line client for end users, which is responsible for local chart development, managing repositories, managing releases, and interfacing with the Helm library (sending charts to be installed, and requesting upgrades or uninstalls of existing releases).
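To make the chart/config/release distinction concrete, here is a hedged sketch of the commands involved; the chart, file, and release names are hypothetical examples:

```
# The chart: a directory of templates and default values.
helm create mychart

# The config: your overrides, merged into the chart's defaults.
echo "replicaCount: 2" > my-values.yaml

# The release: a named, running instance of chart + config.
helm install my-release ./mychart -f my-values.yaml
```

Installing the same chart again under a different name and config produces a second, independent release.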

AWS

Amazon Web Services (AWS) is a comprehensive cloud platform offering over 175 fully featured services from data centers across the globe. You can think of AWS as a secure cloud services platform that provides compute power, database storage, content delivery, and a multitude of other bundled cloud-based services. For this project we used a few of those services:
Amazon Elastic Compute Cloud (EC2) — the virtual servers where we hosted our data.
AWS Identity and Access Management (IAM) — where we create and manage AWS users and groups, and use permissions to securely allow or deny access to AWS services and resources.

Docker

Docker is a container engine that uses Linux kernel features such as namespaces and control groups to create containers on top of an operating system, and automates application deployment in those containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files. It is one of the fastest ways to securely build, test, and share cloud-ready applications from your local machine.

Now that we have an understanding of what these tools are and what they can do for us, let’s get started.

Planning Steps

Our first task was to figure out how to get into Rancher and how to provision everything properly so that Rancher would cooperate with us. You can find our repository to get started here.
Now that you've cloned our repository, you're probably wondering what these files are and what they do. We'll explain what they're used for and how they apply to our project.

  1. Clone/download the repository to your local machine. You'll notice there are four folders. The aws folder contains the Terraform files that provision our infrastructure in AWS; cloud-common/files contains shell scripts that install RKE and Docker on the servers; helm-charts provides the storage volumes, MongoDB, and the React app; and rancher-common holds the Terraform files for the rancher-common module, which contains the resources that bootstrap Rancher into AWS.
  2. First, rename terraform.tfvars.example to terraform.tfvars. You'll notice that the template has placeholders for your AWS access key, AWS secret key, and a password for your Rancher admin user. You can set your Rancher password now; your AWS keys can be created through your AWS account.
  3. Create AWS keys and place them in your terraform.tfvars file where prompted.
    You can make your own key through Key Management Service, navigate to Customer managed keys, and hit the orange tab that says Create key.
    For Key type, choose "Symmetric"; you don't need to worry about Advanced Options. Create an alias so that you can reference your key (we used something like "Test-Key"), and you can leave the tags blank. Next, define the key administrative permissions and key usage permissions, granting them to your account/user. You'll then be shown a policy describing what your key is capable of doing; make sure to save it, and remember your Key ID. All of this can be done in the AWS console.
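If you prefer the terminal over the console, the same key can be created with the AWS CLI. This is a hedged sketch, not part of the repository; the alias name is our example, and it assumes you already have AWS credentials configured:

```
# Create a symmetric customer managed key and capture its ID.
KEY_ID=$(aws kms create-key --description "Tutorial key" \
  --query KeyMetadata.KeyId --output text)

# Attach a friendly alias so the key is easy to reference later.
aws kms create-alias --alias-name alias/Test-Key --target-key-id "$KEY_ID"
```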
  4. Double check the Rancher username, and password. You can find the username in providers.tf inside the rancher-common folder, the last provider will give you your username.
    In the same folder, in variables.tf, search for the variable "admin_password"; you'll find that the default password is set to "admin". You can change it to whatever you'd like, but make sure it matches the password set in terraform.tfvars in the aws folder.
  5. You’ll need to install another Rancher RKE plugin specific to the image you’ll be running in your Rancher instance for terraform to make use of. This is a third party plugin (which means terraform cannot automatically download it for you when you run terraform init). To do this you’ll have to download the correct binary(go with the latest stable release) from the Rancher releases page here. Next, you’ll want to go back to your home directory on your terminal and change(cd) into your .terraform.d folder. Next create a folder called ‘plugins’. Inside of plugins you’ll create another directory that will be the title of the OS your image uses (ours was darwin_amd64). This is where you will unzip the plugin. Make sure the plugin file is named appropriately and has the version (ex: terraform-provider-rke_v0.14.1). Now that the plugin is in place, run terraform init (from the aws directory in your repository) and you’ll see that it installs the terraform-provider-rke plugin.
  6. You’ll then create an IAM policy, this policy will attach to an IAM role, and then you create an IAM instance profile. So in your iam.tf file, we will have pre-provisioned resources that you will be able to use. You can also do this manually through AWS itself or through the AWS CLI, but having it in terraform to automate that step for you is really nice. This is attached in our infra.tf where the AWS resources the Rancher and node resources. So in infra.tf it will look something like:
resource "aws_instance" "rancher_server" {  ami           = data.aws_ami.ubuntu.id  instance_type = var.instance_type  iam_instance_profile   = aws_iam_instance_profile.cloud_provider_master.name   key_name        = aws_key_pair.quickstart_key_pair.key_name  security_groups = [aws_security_group.rancher_sg_allowall.name]  user_data = templatefile("../cloud-common/files/userdata_rancher_server.template", {    docker_version = var.docker_version    username       = local.node_username  })
AWS CLI

  7. Run terraform plan and inspect the output to make sure everything is ready to deploy.
  8. If you are happy with the output, you can now run terraform apply.
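The plan/apply cycle can also be run with a saved plan file, which guarantees that what you apply is exactly what you reviewed. A hedged sketch (the tfplan filename is just a convention):

```
cd aws
terraform init               # download providers and set up state
terraform plan -out=tfplan   # preview the changes and save them to a file
terraform apply tfplan       # apply exactly the plan you reviewed
```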

An important sidenote

"Kubernetes.io/cluster/${local.Cluster_id}" = "owned"

Any resource used by Kubernetes in the Amazon cloud provider must be tagged with the ClusterID. For our work, this needs to be on the security group, rancher server, node, and EBS volume resources. Tags don’t really have a function in AWS, they are just labels that you can search and filter by, but Rancher and Kubernetes need this tag to operate with each other. More information is provided here.

"Kubernetes.io/cluster/${local.Cluster_id}" = "owned"Any resource used by Kubernetes in the Amazon cloud provider must be tagged with the ClusterID. For our work, this needs to be on the security group, rancher server, node, and EBS volume resources. Tags don’t really have a function in AWS, they are just labels that you can search and filter by, but Rancher and Kubernetes need this tag to operate with each other. More information is provided here.

When provisioning has finished, Terraform will output the URL to connect to the Rancher server. Follow this URL and enter your username and password to access the Rancher GUI.

Two sets of Kubernetes configurations will also be generated:
kube_config_server.yaml contains credentials to access the RKE cluster supporting the Rancher server
kube_config_workload.yaml contains credentials to access the provisioned workload cluster
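You can point kubectl at either file to verify that the clusters are reachable. A hedged sketch, assuming kubectl is installed and you are in the directory containing the generated files:

```
# Check the RKE cluster that runs the Rancher server.
kubectl --kubeconfig kube_config_server.yaml get nodes

# Check the workload cluster where the apps will be deployed.
kubectl --kubeconfig kube_config_workload.yaml get nodes
```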

You now have instances and a Kubernetes cluster in the cloud, congratulations! But wait, there's nothing on them yet; this is where Helm comes into play.

[Diagram: a general outline of the infrastructure provisioned and how Helm will interact with it]

Helm

Now it’s time to utilize Helm using your command line/terminal.

Navigate to the root folder of your repository and run the following command

helm create <name>

We named ours chart-storage because we planned on using it to create our storage class in kubernetes. That command will give you a basic Helm setup that you may trim down to what you need. You can find our helm charts in the ‘helm-charts’ directory.

We only used this Helm chart to create our storage class and persistent volume claim. We deleted everything in the templates folder except the _helpers.tpl file. Then we removed everything from the values.yaml file, adding back only what we might need later. We then added an applications.yaml to hold the storage class and, later, the actual applications.

Here is our applications.yaml code to build that storage class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug

We then added a volumeClaim.yaml for our persistent volume claim which looked like this

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvolume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
  volumeMode: Filesystem
status: {}

Then to actually install this chart you can run

helm install <NAME> [DIRECTORY PATH]

So when we were in our root directory we ran

helm install storage ./helm-charts/chart-storage/
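After the install you can confirm that the release and its resources exist. A hedged sketch; the resource names come from the manifests above:

```
helm list                       # "storage" should appear as deployed
kubectl get storageclass standard
kubectl get pvc myvolume        # should report Bound (or Pending until first use)
```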

To deploy a React application on a Kubernetes cluster, you first need a React app. You can create your own or use ours, found in the helm-charts folder under "AuditDeployExample-master". Next, build your app; if you are using our project, run `npm install` and then `npm run build` in your terminal. Then create a Dockerfile and specify your base image as bitnami/apache:latest.

Here is our Dockerfile.

FROM bitnami/apache:latest

COPY build /app

Pretty simple but it gets the job done.

Then run a docker build and push the image to your own Docker Hub account. (If you're wondering when we created a Docker Hub account: this tutorial assumes you've used Docker before and have utilized their services.)

docker build -t USERNAME/APP-NAME .  

That tags the image with your Docker Hub username so you can push it and later install it on your cluster. Make sure to replace USERNAME and APP-NAME with your own username and app name, and don't forget the dot at the end (it tells Docker to use the current directory as the build context).
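Building alone only stores the image locally; pushing is what puts it on Docker Hub so the cluster can pull it. A short sketch, with the same USERNAME/APP-NAME placeholders:

```
docker login                    # authenticate to Docker Hub
docker push USERNAME/APP-NAME   # upload the image you just built
```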

You can install this image using Helm in one of two ways. The first is a simple terminal command:

helm install APP_NAME bitnami/apache \
    --set image.repository=USERNAME/APP-NAME \
    --set image.tag=latest \
    --set image.pullPolicy=Always

The other is to set up a Helm chart locally that applies the same settings with just a helm install. Since we knew the app was using Apache for its base image, we decided to pull down the bitnami/apache chart and modify it the same way the command-line flags do. To do this, run

helm pull bitnami/apache --untar

which unpacks the chart into an apache directory.

Then you can modify this chart. Open your values.yaml file in a code editor and find the image section. It should look something like this.

image:
 registry: docker.io
 repository: bitnami/apache
 tag: v7.10.16
 ## Specify a imagePullPolicy
 ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
 ##
 pullPolicy: IfNotPresent
 ## Optionally specify an array of imagePullSecrets.
 ## Secrets must be manually created in the namespace.
 ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
 ##
 # pullSecrets:
 #   - myRegistryKeySecretName
 
 ## Set to true if you would like to see extra information on logs
 ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
 ##
 debug: false

Change the repository to your image that you just added to DockerHub, your tag to latest or your own version number, and your pull policy to Always. Now it should look like this

image:
 registry: docker.io
 repository: USERNAME/APP-NAME
 tag: latest
 ## Specify a imagePullPolicy
 ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
 ##
 pullPolicy: Always
 ## Optionally specify an array of imagePullSecrets.
 ## Secrets must be manually created in the namespace.
 ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
 ##
 # pullSecrets:
 #   - myRegistryKeySecretName
 
 ## Set to true if you would like to see extra information on logs
 ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
 ##
 debug: false

Then install it the same way, filling in a release name and the path to your modified chart:

helm install APP_NAME [PATH TO CHART]

Next we will install MongoDB in the same way. Here is the command-line version:

helm install mongodb bitnami/mongodb  \
    --set persistence.storageClass=standard \
    --set persistence.accessMode=ReadWriteOnce \
    --set persistence.size=2Gi

And again, you can pull the chart and modify it like we did with the Apache chart:

helm pull bitnami/mongodb --untar

Find the persistence section in values.yaml and update it the same way the CLI flags do, so that it looks like this.

persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ##
  # existingClaim:

  ## The path the volume will be mounted at, useful when using different
  ## MongoDB images.
  ##
  mountPath: /bitnami/mongodb

  ## The subdirectory of the volume to mount to, useful in dev environments
  ## and one PV for multiple services.
  ##
  subPath: ""

  ## mongodb data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: standard
  accessModes:
    - ReadWriteOnce
  size: 2Gi
  annotations: {}

Now you can install this chart the same way you ran the others, and your React app should be up, along with your MongoDB instance attached to your storage class!
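To check that everything landed, and to make the MongoDB proof of concept tangible, you can list the releases and fetch the generated root password from the chart's secret. A hedged sketch; the secret name and key follow the bitnami/mongodb chart's conventions for a release named mongodb, so adjust them if yours differ:

```
helm list            # storage, your app, and mongodb should show as deployed
kubectl get pods     # all pods should reach Running

# The bitnami chart stores a generated root password in a Kubernetes secret.
kubectl get secret mongodb \
  -o jsonpath="{.data.mongodb-root-password}" | base64 --decode
```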

Congratulations, you’ve now successfully created a Rancher Kubernetes cluster running a React Application and mongoDB storage.

This tutorial was brought to you by the AuditDeploy engineering team. If you have additional tips that you think other Engineers and DevOps teams should know about, share your thoughts with us on Twitter using #deployertips.