This tutorial is the third part of a series where I evaluate the challenge of deploying complex application systems on a Kubernetes cluster. Part 1 compares existing technologies and discusses their strengths and weaknesses. Part 2 describes the conceptual approach to using Ansible as a Kubernetes resource orchestrator. In this tutorial, you put the concept into practice by deploying two interconnected applications, preserving their deployment order, and detecting successful (or erroneous) completion. You do all of this without extending the Kubernetes setup, using an existing Kubernetes CLI utility.
Prerequisites
To complete this tutorial, you should have a basic understanding of Kubernetes resource types, as well as working knowledge of Kubernetes resource definition documents and the Kubernetes CLI (kubectl).
For the sake of simplicity, the tutorial assumes that the Kubernetes CLI is configured locally (however, as discussed in Part 2, it can also be located remotely). Before you start, you need the following tools available on the system:
A Kubernetes CLI: kubectl (or oc, if using OpenShift) connected to the target Kubernetes (or OpenShift) cluster. When you connect to a public cloud, such as the IBM Cloud Kubernetes Service, you can use a dedicated CLI to authorize access (for example, ibmcloud).
A recent version of Ansible installed: At the time of this writing, the latest version is 2.9.10. There are a number of ways to install Ansible, depending on your needs. Refer to Installing Ansible for more details. You can verify the installation as shown below.
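For example, you can quickly confirm the installation; the first line of output shows the installed version (yours may differ):

$ ansible --version
ansible 2.9.10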
Estimated time
The time required to complete this tutorial is approximately 30 minutes.
Steps
Set up a Kubernetes cluster connection
Let’s start by creating a working directory that you can use throughout this tutorial. Create one in your home directory:
$ cd ~
$ mkdir ansible-kubernetes-demo
$ cd ansible-kubernetes-demo
First, verify that kubectl properly connects to the cluster:
$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
10.144.186.5   Ready    <none>   20h   v1.17.9+IKS
Now, let Ansible know the location of kubectl — that is, the deployment coordinator host (discussed in Part 2). You do this by creating a simple inventory file, where the deployment coordinator is the localhost. (For clarity, I use heredocs, but you can use your favorite editor to create files throughout this tutorial):
$ cat <<EOF > inventory.yaml
all:
  hosts:
    deployment_coordinator:
      ansible_host: localhost
      ansible_connection: local
EOF
Now, you need to let Ansible know to use your inventory file. By default, Ansible looks for a configuration file named ansible.cfg in the current working directory. The configuration file can include various settings to tune Ansible behavior, including the location of the inventory file:
$ cat <<EOF > ansible.cfg
[defaults]
inventory = inventory.yaml
interpreter_python = python3
EOF
Now you can test deployment coordinator connectivity with your first Ansible command:
$ ansible -m ping deployment_coordinator
deployment_coordinator | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
The inventory setup is now complete. The next goal is to deploy your first application: the database.
Prepare the MySQL database resource templates
As discussed in Part 2, you use the Ansible roles abstraction to model applications. Ansible requires you to place roles in the roles directory. The MySQL database role name is mysql:
$ mkdir -p roles/mysql
The role directory has a specific structure, which involves a number of subdirectories. In this demo, you use defaults, handlers, tasks, and templates. Let’s create appropriate subdirectories:
$ mkdir roles/mysql/{defaults,handlers,tasks,templates}
The most important file is the resource definition template. It's a Kubernetes resource definition file, enhanced with Jinja2 placeholders. You deploy the MySQL database as a StatefulSet:
$ cat <<EOF > roles/mysql/templates/mysql_statefulset.yaml.j2
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "{{ mysql_name }}"
  labels:
    app: "{{ mysql_name }}"
spec:
  serviceName: "{{ mysql_name }}"
  selector:
    matchLabels:
      app: "{{ mysql_name }}"
  template:
    metadata:
      labels:
        app: "{{ mysql_name }}"
    spec:
      containers:
        - name: mysql-server
          image: "{{ mysql_image }}"
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ mysql_name }}-root-pass"
                  key: root_password
          ports:
            - name: mysql-server
              containerPort: 3306
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "{{ mysql_storage_class }}"
        resources:
          requests:
            storage: "{{ mysql_storage_request }}"
EOF
Per this definition, you also need a Secret resource to hold a MySQL root password:
$ cat <<EOF > roles/mysql/templates/mysql_root_pass.yaml.j2
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: "{{ mysql_name }}-root-pass"
data:
  root_password: "{{ mysql_root_password | b64encode }}"
EOF
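Kubernetes requires Secret data values to be base64-encoded, which is why the template applies the b64encode filter to the password value. You can reproduce the encoding in the shell to see what ends up in the resource (the password below is just an illustration):

$ echo -n 'mySecretPass' | base64
bXlTZWNyZXRQYXNz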
In addition, because my demo cluster doesn't have dynamically provisioned storage classes available, I decided to use a statically provisioned hostPath persistent volume to back the MySQL data volume claim. This is not recommended for production, but it is enough for a simple demo. The corresponding persistent volume resource definition template looks like this:
$ cat <<EOF > roles/mysql/templates/mysql_host_pv.yaml.j2
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "{{ mysql_name }}-data"
  labels:
    app: "{{ mysql_name }}"
spec:
  capacity:
    storage: "{{ mysql_storage_request }}"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data/{{ mysql_name }}"
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "{{ mysql_storage_class }}"
EOF
Finally, create one last template file to hold the headless service definition for MySQL. A service is required both to satisfy the StatefulSet requirement and to expose the MySQL database internally within the Kubernetes cluster:
$ cat <<EOF > roles/mysql/templates/mysql_svc.yaml.j2
apiVersion: v1
kind: Service
metadata:
  name: "{{ mysql_name }}"
  labels:
    app: "{{ mysql_name }}"
spec:
  ports:
    - port: 3306
  selector:
    app: "{{ mysql_name }}"
  clusterIP: None
EOF
Notice how you externalized all of the customizable parts of the resource definition documents as Ansible variables. You define the default values of these variables (except for the root password) in the defaults/main.yaml file of the MySQL role:
$ cat <<EOF > roles/mysql/defaults/main.yaml
mysql_name: mysql
mysql_image_name: mysql
mysql_image_version: latest
mysql_image: "{{ mysql_image_name }}:{{ mysql_image_version }}"
mysql_storage_class: local-storage
mysql_storage_request: 10Gi
EOF
The previous file shows one way to compose a variable out of other variables, as in the case of the mysql_image variable. This allows you to override parts of variables selectively (in this case, just the database image version), as shown below.
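For example, once you create the mysql_deploy.yaml playbook later in this tutorial, you could pin a specific database image version from the command line without touching the role defaults:

$ ansible-playbook mysql_deploy.yaml -e mysql_image_version=8.0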
Define the MySQL database deploy tasks
To create the MySQL resources from the templates you have defined so far, you create a tasks file in the MySQL role. The tasks file name is meaningful: it tells you that running it will deploy the application on the cluster.
Even though you defined many resource templates, Ansible allows you to define a single task to deploy them all. The task uses the Ansible built-in command module to run the kubectl apply command, passing the rendered resource definition templates, obtained with the Ansible built-in template lookup, to the CLI's standard input:
$ cat <<EOF > roles/mysql/tasks/deploy.yaml
- name: Deploy MySQL server
  command: kubectl apply -f -
  args:
    stdin: "{{ lookup('template', item + '.yaml.j2') }}"
  vars:
    mysql_root_password: >-
      {{ lookup('password', 'mysql_root_pass chars=ascii_letters,digits') }}
  loop:
    - mysql_root_pass
    - mysql_statefulset
    - mysql_svc
  register: mysql_deploy_result
  changed_when: >-
    mysql_deploy_result.stdout_lines
    | reject('match', '^.* unchanged$') | list | length > 0
EOF
A few elements of the task deserve explanation:
The use of the loop attribute means the kubectl apply command is called multiple times, with the template lookup evaluated for each loop item.
The database root password is automatically generated using the password lookup and saved for later use in a local file named mysql_root_pass. Notice how you defined a new variable at the task level: Ansible allows for plenty of different places to set variables, each with its own precedence and scope. This brings in yet another level of flexibility.
The result of the command task is saved in a variable named mysql_deploy_result, which is used by the changed_when attribute. By using this attribute, you mark the task as changed only when a change is reported by the kubectl apply command (see the sample output after this list).
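To see what the changed_when expression works with: kubectl apply prints one status line per resource, so on a repeated run against a mostly unchanged cluster the command output looks similar to the following (resource names assume the default mysql_name). Any line that does not end in "unchanged" marks the task as changed:

secret/mysql-root-pass unchanged
statefulset.apps/mysql configured
service/mysql unchanged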
Because the creation of the persistent volume depends on whether you actually need it (you do not need it when using dynamically provisioned volumes), create a separate tasks file to host this logic. The tasks file to deploy a host persistent volume is conceptually similar:
$ cat <<EOF > roles/mysql/tasks/create_host_pv.yaml
- name: Create MySQL host persistent volume
  command: kubectl apply -f -
  args:
    stdin: "{{ lookup('template', 'mysql_host_pv.yaml.j2') }}"
  register: mysql_create_host_pv_result
  changed_when: >-
    mysql_create_host_pv_result.stdout_lines
    | reject('match', '^.* unchanged$') | list | length > 0
EOF
Create and run the MySQL deploy playbook
As you might have noticed, the task files define which actions should be performed (the “what”). In order to actually run the tasks on a certain target (the “where”), you need to create an Ansible playbook.
A simple playbook file that executes MySQL deploy tasks on the deployment coordinator host looks like this:
$ cat <<EOF > mysql_deploy.yaml
- name: Deploy MySQL server
  hosts: deployment_coordinator
  gather_facts: no
  tasks:
    - name: Create MySQL host persistent volume
      import_role:
        name: mysql
        tasks_from: create_host_pv
    - name: Deploy MySQL server
      import_role:
        name: mysql
        tasks_from: deploy
EOF
To run the playbook, use the ansible-playbook command:
$ ansible-playbook mysql_deploy.yaml

PLAY [Deploy MySQL server] *****************************************************

TASK [mysql : Create MySQL host persistent volume] *****************************
changed: [deployment_coordinator]

TASK [mysql : Deploy MySQL server] *********************************************
changed: [deployment_coordinator] => (item=mysql_root_pass)
changed: [deployment_coordinator] => (item=mysql_statefulset)
changed: [deployment_coordinator] => (item=mysql_svc)

PLAY RECAP *********************************************************************
deployment_coordinator : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
After a short while, verify that the pod is up and running:
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          3m42s
Add a waiting handler to the MySQL role
Of course, checking stuff “after a short while” might not satisfy you. You can instruct Ansible to poll Kubernetes for the MySQL deployment status.
Create a handler file that runs the kubectl rollout status command to check the progress of the StatefulSet roll out. You must name the file main.yaml:
$ cat <<EOF > roles/mysql/handlers/main.yaml
- name: Wait for MySQL database to roll out
  command: >-
    kubectl rollout status statefulset {{ mysql_name | quote }} -w=false
  register: mysql_rollout_status
  until: mysql_rollout_status.stdout | regex_search('roll out complete')
  retries: "{{ mysql_rollout_retries|int }}"
  delay: "{{ mysql_rollout_delay|int }}"
  changed_when: no
EOF
Define additional default values for variables that you introduced:
$ cat <<EOF >> roles/mysql/defaults/main.yaml
mysql_rollout_retries: 10
mysql_rollout_delay: 5
EOF
Finally, modify the deploy tasks file to notify the handler:
$ cat <<EOF > roles/mysql/tasks/deploy.yaml
- name: Deploy MySQL server
  command: kubectl apply -f -
  args:
    stdin: "{{ lookup('template', item + '.yaml.j2') }}"
  vars:
    mysql_root_password: >-
      {{ lookup('password', 'mysql_root_pass chars=ascii_letters,digits') }}
  loop:
    - mysql_root_pass
    - mysql_statefulset
    - mysql_svc
  register: mysql_deploy_result
  changed_when: >-
    mysql_deploy_result.stdout_lines
    | reject('match', '^.* unchanged$') | list | length > 0
  notify:
    - Wait for MySQL database to roll out
EOF
Before giving it a try, let’s delete the previously deployed MySQL StatefulSet:
$ kubectl delete statefulset mysql
statefulset.apps "mysql" deleted
Now, run the MySQL deploy playbook using the same command that you previously used:
$ ansible-playbook mysql_deploy.yaml

PLAY [Deploy MySQL server] *****************************************************

TASK [mysql : Create MySQL host persistent volume] *****************************
ok: [deployment_coordinator]

TASK [mysql : Deploy MySQL server] *********************************************
ok: [deployment_coordinator] => (item=mysql_root_pass)
changed: [deployment_coordinator] => (item=mysql_statefulset)
ok: [deployment_coordinator] => (item=mysql_svc)

RUNNING HANDLER [mysql : Wait for MySQL database to roll out] ******************
FAILED - RETRYING: Wait for MySQL database to roll out (10 retries left).
FAILED - RETRYING: Wait for MySQL database to roll out (9 retries left).
ok: [deployment_coordinator]

PLAY RECAP *********************************************************************
deployment_coordinator : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
As you can see, the handler allowed you to wait for the deployment (roll out) completion. The MySQL database is fully up and running immediately after the playbook exits.
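If you want an additional check, you can also query the StatefulSet directly; the output should look similar to this:

$ kubectl get statefulset mysql
NAME    READY   AGE
mysql   1/1     4m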
Prepare the WordPress application resource templates
Just like MySQL, the WordPress application is represented by an Ansible role. First, create appropriate role directories:
$ mkdir -p roles/wordpress/{defaults,handlers,tasks,templates}
Given that WordPress scales like a stateless application, the most suitable resource kind is a Deployment. Create the following template file to describe the application parameters:
$ cat <<EOF > roles/wordpress/templates/wordpress_deployment.yaml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ wordpress_name }}"
  labels:
    app: "{{ wordpress_name }}"
spec:
  selector:
    matchLabels:
      app: "{{ wordpress_name }}"
  template:
    metadata:
      labels:
        app: "{{ wordpress_name }}"
    spec:
      containers:
        - image: "{{ wordpress_image }}"
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: "{{ wordpress_db_name }}"
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ wordpress_db_pass_secret_name }}"
                  key: "{{ wordpress_db_pass_secret_key }}"
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: "{{ wordpress_name }}-data"
EOF
In the MySQL case, the persistent volume claim is automatically created by the StatefulSet controller from a persistent volume claim template. The WordPress deployment, on the other hand, uses a persistent volume claim reference, which requires you to create the persistent volume claim resource manually. For this purpose, create yet another template file:
$ cat <<EOF > roles/wordpress/templates/wordpress_pvc.yaml.j2
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ wordpress_name }}-data"
  labels:
    app: "{{ wordpress_name }}"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "{{ wordpress_storage_request }}"
  selector:
    matchLabels:
      app: "{{ wordpress_name }}"
  storageClassName: "{{ wordpress_storage_class }}"
EOF
If you are not using dynamically provisioned volumes, also create a template file for the persistent volume that backs the WordPress persistent volume claim:
$ cat <<EOF > roles/wordpress/templates/wordpress_host_pv.yaml.j2
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "{{ wordpress_name }}-data"
  labels:
    app: "{{ wordpress_name }}"
spec:
  capacity:
    storage: "{{ wordpress_storage_request }}"
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/data/{{ wordpress_name }}"
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "{{ wordpress_storage_class }}"
EOF
Finally, create the WordPress service template. For demo purposes, the template defines a NodePort service, but the service type can be changed depending on needs and what is supported by the Kubernetes cluster:
$ cat <<EOF > roles/wordpress/templates/wordpress_svc.yaml.j2
apiVersion: v1
kind: Service
metadata:
  name: "{{ wordpress_name }}"
  labels:
    app: "{{ wordpress_name }}"
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: {{ wordpress_external_port }}
  selector:
    app: "{{ wordpress_name }}"
EOF
To complete the template setup, do the same thing you did for the MySQL role and create a default variable values file:
$ cat <<EOF > roles/wordpress/defaults/main.yaml
wordpress_name: wordpress
wordpress_image_name: wordpress
wordpress_image_version: latest
wordpress_image: "{{ wordpress_image_name }}:{{ wordpress_image_version }}"
wordpress_db_name: wordpress-mysql
wordpress_db_pass_secret_name: "{{ wordpress_db_name }}-root-pass"
wordpress_db_pass_secret_key: root_password
wordpress_external_port: 30180
wordpress_storage_class: local-storage
wordpress_storage_request: 10Gi
EOF
Define the WordPress application deploy tasks and handlers
The real orchestration magic starts here. Because the WordPress application depends on the MySQL database, you need to express (and enforce) this dependency. Ansible provides you with several options in this regard:
Tightly couple the roles by importing the MySQL role's deploy tasks directly in the WordPress deploy tasks: this causes the MySQL deploy tasks to run whenever the WordPress deploy tasks run.
Loosely couple the roles by importing a smaller tasks file from the MySQL role, one that only waits for the MySQL service roll out to complete (see the sketch after this list).
Do not couple the MySQL and WordPress roles at all. Instead, define the dependency at the playbook level and rely on the MySQL role handler to wait for roll out completion.
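As a minimal sketch of the second option: assuming you extracted the waiting logic into a hypothetical roles/mysql/tasks/wait.yaml file (not created in this tutorial), the WordPress deploy tasks could start by importing just that file:

# Hypothetical loosely coupled variant: at the top of
# roles/wordpress/tasks/deploy.yaml, wait for MySQL before
# creating any WordPress resources.
- name: Wait for MySQL database to roll out
  import_role:
    name: mysql
    tasks_from: wait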
Each of these approaches has pros and cons, and the use of each depends on your actual needs. For demo purposes, let’s follow the last option, which is the easiest to demonstrate.
In this approach, the deploy tasks file doesn’t differ much from what has been done for the MySQL database. Create the tasks file with the following content:
$ cat <<EOF > roles/wordpress/tasks/deploy.yaml
- name: Create WordPress resources
  command: kubectl apply -f -
  args:
    stdin: "{{ lookup('template', item + '.yaml.j2') }}"
  register: deploy_wordpress_svc_results
  changed_when: >-
    deploy_wordpress_svc_results.stdout_lines
    | reject('match', '^.* unchanged$') | list | length > 0
  loop:
    - wordpress_pvc
    - wordpress_deployment
    - wordpress_svc
  notify:
    - Wait for WordPress service to roll out
EOF
Similar to the MySQL role, define another tasks file for when manual persistent volume provisioning is required:
$ cat <<EOF > roles/wordpress/tasks/create_host_pv.yaml
- name: Create WordPress host persistent volume
  command: kubectl apply -f -
  args:
    stdin: "{{ lookup('template', 'wordpress_host_pv.yaml.j2') }}"
  register: wordpress_create_host_pv_result
  changed_when: >-
    wordpress_create_host_pv_result.stdout_lines
    | reject('match', '^.* unchanged$') | list | length > 0
EOF
Next, define the handlers to let the automation wait for roll out completion:
$ cat <<EOF > roles/wordpress/handlers/main.yaml
- name: Wait for WordPress service to roll out
  command: >-
    kubectl rollout status deploy -w=false {{ wordpress_name | quote }}
  register: wordpress_rollout_status
  until: >-
    wordpress_rollout_status.stdout | regex_search('successfully rolled out')
  retries: "{{ wordpress_rollout_retries|int }}"
  delay: "{{ wordpress_rollout_delay|int }}"
  changed_when: no
EOF
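For reference, when the Deployment has finished rolling out, the kubectl rollout status command prints a message similar to the following, which is what the until condition above matches:

$ kubectl rollout status deploy -w=false wordpress
deployment "wordpress" successfully rolled out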
Don’t forget to add new variable defaults:
$ cat <<EOF >> roles/wordpress/defaults/main.yaml
wordpress_rollout_retries: 10
wordpress_rollout_delay: 5
EOF
Create and run the WordPress deploy playbook
You are now ready to glue everything together into a fully functional playbook. Create the playbook file with the following content:
$ cat <<EOF > wordpress_deploy.yaml
- name: Deploy WordPress
  hosts: deployment_coordinator
  gather_facts: no
  vars:
    mysql_name: wordpress-mysql
  pre_tasks:
    - name: Create host persistent volumes
      include_role:
        name: "{{ item }}"
        tasks_from: create_host_pv
      loop:
        - mysql
        - wordpress
    - name: Deploy MySQL database
      import_role:
        name: mysql
        tasks_from: deploy
  tasks:
    - name: Deploy WordPress
      import_role:
        name: wordpress
        tasks_from: deploy
EOF
This playbook first creates the host persistent volumes. The use of include_role allows looping through similar tasks in each role (isn't that cool?). Immediately after, the MySQL database is deployed. After all pre_tasks are executed, the MySQL handler runs, waiting for the StatefulSet to roll out. Finally, the WordPress deploy tasks kick off, after which another handler ensures the application is rolled out successfully.
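Note that Ansible flushes notified handlers automatically at the end of the pre_tasks section, which is exactly what makes this ordering work. If you ever need pending handlers to run at an arbitrary point in a play instead, you can trigger them explicitly with the built-in meta task:

# Force any notified handlers to run right now, instead of at
# the end of the current section.
- meta: flush_handlers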
Let’s verify that this design holds true in practice:
$ ansible-playbook wordpress_deploy.yaml

PLAY [Deploy WordPress] ********************************************************

TASK [Create host persistent volumes] ******************************************

TASK [mysql : Create MySQL host persistent volume] *****************************
changed: [deployment_coordinator]

TASK [wordpress : Create WordPress host persistent volume] *********************
changed: [deployment_coordinator]

TASK [mysql : Deploy MySQL server] *********************************************
changed: [deployment_coordinator] => (item=mysql_root_pass)
changed: [deployment_coordinator] => (item=mysql_statefulset)
changed: [deployment_coordinator] => (item=mysql_svc)

RUNNING HANDLER [mysql : Wait for MySQL database to roll out] ******************
FAILED - RETRYING: Wait for MySQL database to roll out (10 retries left).
ok: [deployment_coordinator]

TASK [wordpress : Create WordPress resources] **********************************
changed: [deployment_coordinator] => (item=wordpress_pvc)
changed: [deployment_coordinator] => (item=wordpress_deployment)
changed: [deployment_coordinator] => (item=wordpress_svc)

RUNNING HANDLER [wordpress : Wait for WordPress service to roll out] ***********
ok: [deployment_coordinator]

PLAY RECAP *********************************************************************
deployment_coordinator : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Because of the linear nature of Ansible task processing, if any of these steps fail, the whole playbook reports an error and stops immediately.
In this case, the playbook execution is successful, which means you can try accessing WordPress through a web browser. Because you used a NodePort service for demo purposes, the service is accessible on the worker node's port 30180.
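To find an address to use, you can list the worker nodes along with their external IP addresses:

$ kubectl get nodes -o wide

Then point your browser at http://<node-external-ip>:30180. (On IBM Cloud Kubernetes Service, the ibmcloud ks workers command also shows the workers' public IPs.)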
After clicking through WordPress configuration screens, you finally receive a success message.
Summary
With the richness of Ansible features, you can orchestrate complex application systems in a Kubernetes-based environment. With a reasonably small amount of code, you expressed the deployment logic of two dependent applications: a web application and its backing database. Each application requires several kinds of Kubernetes resources: a StatefulSet, a Deployment, a Secret, persistent volume claims, persistent volumes, and services. Throughout the deployment, Ansible preserves the logical deployment order and waits for services to become fully available, so you can be confident that the system is fully working at the end of the procedure.
The tutorial shows a simple demo scenario, but Ansible's flexibility and feature richness allow for plenty of use cases, satisfying even the most complex systems. You can use this technology to deploy complex systems of 40+ different applications, perform advanced Public Key Infrastructure management, and automate not only the deployment but the entire lifecycle of software systems, including maintenance actions and upgrades.
You can get the full source code of this tutorial. I hope you enjoyed reading this series and I encourage you to experiment with Ansible and Kubernetes!