Deploying a Kubernetes application to AWS EKS using Terraform and Ansible

In this article I am going to walk through a demo showing how to use Ansible to deploy an application onto a Kubernetes cluster provisioned by Terraform.

First of all, let me lay down some definitions and explanations of the technologies and tools involved so that we are all on the same page, and set the scene for the application I am going to deploy.

My Application Overview

As part of this demo I have built an example application that represents an online catalogue ordering system used for sales management.

This is a microservice application, meaning it is built from a combination of several smaller services. Adopting a microservice architecture is a modern approach to building software, in contrast to the so-called ‘monolithic applications’ of the past.

Each backend microservice is a Java web service bootstrapped with Spring Boot, and each connects to a data store suited to the functionality of that microservice.

The full architecture of my application is as follows.

The User Service microservice uses an embedded H2 database, whereas the Customer Service microservice connects to a relational MySQL database. The Order Service microservice is backed by MongoDB, a NoSQL document store, and the Product Service microservice is backed by Redis, a key/value data store.

Kubernetes

Kubernetes is a container orchestration framework that manages the lifecycle of containers. Docker is a well-known container technology, but managing many Docker containers by hand can be tedious and error-prone, hence the need for a container orchestration tool. This is where Kubernetes comes in. Other technologies in this space include Docker Swarm (from Docker itself), Apache Mesos, HashiCorp’s Nomad, and AWS ECS.

Terraform

Terraform is an infrastructure provisioning tool created by HashiCorp. It allows you to define your infrastructure resources as code and create those resources on your chosen provider platform. It also makes it just as easy to tear your infrastructure back down again.

This is Infrastructure as Code (IaC) in practice.

Ansible

Ansible is primarily a general IT automation tool that enables the automation of common IT tasks. However, it is commonly placed in the same bracket as configuration management tools such as Puppet, Chef, and Salt. Ansible is also a form of Infrastructure as Code.

Technology Stack

  • AWS EKS
  • Kubernetes
  • Terraform
  • Ansible

We require infrastructure to host the application, and the chosen platform is Kubernetes. Since the application follows a microservice architecture, each individual component can easily be packaged up inside a Docker container, allowing Kubernetes to manage those containers.

I have also chosen to use AWS EKS.

EKS is AWS’s managed Kubernetes service, which allows us to spin up a complete Kubernetes cluster without too much overhead.

VPC

Our EKS cluster needs to live inside a VPC, which is where the EKS worker nodes run. The worker nodes themselves are EC2 instances.
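A rough sketch of such a VPC configuration might look like the following (the CIDR ranges, subnet count, and resource names are illustrative choices, not fixed requirements):

data "aws_availability_zones" "available" {}

# Basic VPC with two public subnets and an Internet Gateway.
resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "demo-vpc"
  }
}

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.demo.id
  cidr_block              = cidrsubnet(aws_vpc.demo.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "demo-public-${count.index}"
  }
}

resource "aws_internet_gateway" "demo" {
  vpc_id = aws_vpc.demo.id
}

# Route all outbound traffic from the public subnets through the Internet Gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.demo.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.demo.id
  }
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}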

The above VPC Terraform configuration provisions a bog-standard VPC for any application. It contains an Internet Gateway, which is required for internet traffic in and out of the network.

In the above, I’ve configured only public subnets for simplicity, but you may want to distinguish between public and private subnets so that your EKS worker nodes can live inside the private subnets.

EKS Cluster

The following Terraform configuration shows how little code is needed to provision an EKS cluster.
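A sketch of that configuration (cluster and role names are illustrative; the subnets come from the VPC configuration above):

# IAM role that the EKS control plane assumes.
resource "aws_iam_role" "eks_cluster" {
  name = "demo-eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# The EKS cluster itself, placed in the subnets created earlier.
resource "aws_eks_cluster" "demo" {
  name     = "demo-cluster"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = aws_subnet.public[*].id
  }

  depends_on = [aws_iam_role_policy_attachment.eks_cluster]
}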

EKS Worker Node Group

An EKS cluster requires a set of worker nodes onto which Kubernetes Pods can be scheduled, so the configuration below gives an example of provisioning an EKS worker node group.
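A sketch of a managed node group (the instance type, sizes, and names are illustrative; the node role needs the standard EKS worker, CNI, and ECR read-only policies attached):

# IAM role assumed by the worker node EC2 instances.
resource "aws_iam_role" "eks_nodes" {
  name = "demo-eks-node-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "worker" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "cni" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "ecr" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

# Managed worker node group with an explicit min/desired/max size.
resource "aws_eks_node_group" "demo" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "demo-workers"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = aws_subnet.public[*].id
  instance_types  = ["t3.medium"]

  scaling_config {
    min_size     = 2
    desired_size = 2
    max_size     = 4
  }

  depends_on = [
    aws_iam_role_policy_attachment.worker,
    aws_iam_role_policy_attachment.cni,
    aws_iam_role_policy_attachment.ecr,
  ]
}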

In the above, I am not using EKS Fargate: I am manually specifying the minimum, desired, and maximum size of the worker node pool, which means I am managing the underlying worker group’s EC2 instances myself. If I were adopting EKS Fargate, the management of the underlying worker nodes would be left to AWS, and capacity would be scaled automatically based on demand.

Applying the changes

To apply the above configuration, we use the Terraform CLI.

We first initialise the Terraform project:

terraform init

Terraform lets you preview the changes it is actually going to make via its plan command:

terraform plan

If you are happy with the proposed changes, you can apply them using the apply command:

terraform apply

Hooking this up to a CI/CD tool is beyond the scope of this demo article.

Terraform provisions the underlying infrastructure on which the application runs.

To deploy the application to my provisioned EKS Kubernetes cluster, I use Ansible.
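A sketch of such a playbook is shown below (the manifest file names are illustrative, and kubectl is assumed to already be pointing at the EKS cluster, e.g. after running aws eks update-kubeconfig):

# deploy.yml - applies the Kubernetes manifests for each microservice in order.
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy the User Service
      shell: kubectl apply -f user-deployment.yml -f user-service.yml

    - name: Deploy the Customer Service
      shell: kubectl apply -f customer-deployment.yml -f customer-service.yml

    - name: Deploy the Order Service
      shell: kubectl apply -f order-deployment.yml -f order-service.yml

    - name: Deploy the Product Service
      shell: kubectl apply -f product-deployment.yml -f product-service.yml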

The above code simply uses the Ansible shell module, which runs shell commands on your behalf. Since my application lives in Kubernetes, I let Kubernetes manage my application. Therefore, to deploy my app (or rather, a collection of different microservices), I have written an Ansible file that executes a series of kubectl CLI commands to make the Kubernetes changes.

For simplicity’s sake, I have coded all my deployment steps in a single Ansible task file that runs each step sequentially, in the order defined. However, following Ansible best practice, you should split this monolithic file into separate Ansible roles, where each role performs the deployment for one microservice.

Following Kubernetes best practice, I have a Deployment object that represents the deployment of each service as Kubernetes Pods, and a Service object to expose that service for consumption within the Kubernetes cluster. This Service object acts as an internal load balancer.

For example, I have a user-service.yml file that defines the Kubernetes Service for my User Service microservice.

user-service.yml:
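A minimal sketch of such a Service manifest (the port numbers and labels are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  type: ClusterIP        # internal load balancing only
  selector:
    app: user-service    # targets the Pods created by the Deployment below
  ports:
    - port: 8080
      targetPort: 8080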

And for my Kubernetes Deployment object I have an equivalent user-deployment.yml:
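Again, a minimal sketch (the image reference, replica count, and container port are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: example-registry/user-service:latest   # illustrative image reference
          ports:
            - containerPort: 8080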

That is it. Hopefully I have shown you how simple it is to use Terraform and Ansible together to deploy your application onto a Kubernetes platform on the AWS cloud.


