ARTH_TASK 23

Swapnilsukare
8 min read · Jan 14, 2022
• Task Description 📄

📌 Automate *Kubernetes* Cluster Using *Ansible*

🔅 Launch EC2 instances on *AWS Cloud*, e.g. for master and slave nodes.

🔅 Create roles that will configure the master node and slave nodes separately.

🔅 Launch *wordpress* and a *mysql* database connected to it on the respective slaves.

🔅 Expose the *wordpress* pod so that a client is able to hit the *wordpress* IP on its respective port.

Let’s start…

Setup Dynamic inventory (aws cloud)

Follow the steps carefully for the setup.

Step 1: Install python3

sudo yum install python3 -y

Step 2: Install the boto3 and botocore libraries, which the aws_ec2 inventory plugin requires.

sudo pip3 install boto3 botocore

Step 3: Create an inventory directory under /opt and cd into the directory.

sudo mkdir -p /opt/ansible/inventory
cd /opt/ansible/inventory

Step 4: Create a file named aws_ec2.yaml in the inventory directory and copy the following configuration.

Note: The file name should be aws_ec2.yaml. Also, replace the placeholders with your AWS access key and secret.

plugin: aws_ec2
aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>
keyed_groups:
  - key: tags
    prefix: tag

Step 5: Open /etc/ansible/ansible.cfg, find the [inventory] section, and add the following line to enable the aws_ec2 plugin.

enable_plugins = aws_ec2

It should look something like this.

[inventory]
enable_plugins = aws_ec2
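
Optionally, you can also point the [defaults] section at the inventory file so that later commands work without the -i flag (my addition, not required for the task):

[defaults]
inventory = /opt/ansible/inventory/aws_ec2.yaml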

Step 6: Now let’s test the dynamic inventory configuration by listing the ec2 instances.

ansible-inventory -i /opt/ansible/inventory/aws_ec2.yaml --list

The above command returns the list of ec2 instances with all their parameters in JSON format.

Step 7: Execute the following command to test if Ansible is able to ping all the machines returned by the dynamic inventory.

ansible all -m ping
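
Since keyed_groups builds groups from instance tags, you can also ping just a subset; for example, assuming the instances are tagged name: master as in the launch playbook later in this post:

ansible -i /opt/ansible/inventory/aws_ec2.yaml tag_name_master -m ping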

The dynamic inventory setup is now done.

Creating YAML files for MySQL and WordPress

1. This file creates three objects: a Secret which contains the username and password of my database; a Service which is exposed only inside the cluster (clusterIP: None makes it headless) so that the other application (WordPress) can connect on port 3306 only; and a Deployment with the Recreate strategy using the mysql:5.6 image.
apiVersion: v1
kind: Secret
metadata:
  name: mysecure
data:
  rootpass: cmVkaGF0
  userpass: cmVkaGF0
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecure
                  key: rootpass
            - name: MYSQL_USER
              value: satyam
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecure
                  key: userpass
            - name: MYSQL_DATABASE
              value: sqldb
          ports:
            - containerPort: 3306
              name: mysql
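
The data values in a Secret must be base64-encoded; cmVkaGF0 above is simply the string redhat encoded. You can reproduce it (or encode your own password) like this:

echo -n 'redhat' | base64          # prints cmVkaGF0
echo 'cmVkaGF0' | base64 --decode  # prints redhat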

2. This file creates a Service of type LoadBalancer exposed on port 80, and a Deployment with the same Recreate strategy; it reads the database username and password from the Secret created in the steps above.

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:latest
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_USER
              value: satyam
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecure
                  key: userpass
            - name: WORDPRESS_DB_NAME
              value: sqldb
          ports:
            - containerPort: 80
              name: wordpress
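
One caveat worth knowing (my note, not from the original task): a plain kubeadm cluster has no cloud controller to provision an actual load balancer, so the EXTERNAL-IP of a LoadBalancer Service usually stays in <pending>. The Service is still reachable through the NodePort that Kubernetes allocates alongside it, which is exactly what we use in the output section at the end of this post:

kubectl get svc wordpress
# if EXTERNAL-IP is <pending>, use <node-public-ip>:<node-port>, where
# <node-port> is the high port shown in the PORT(S) column (e.g. 80:3xxxx/TCP)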

Now let’s launch 3 instances on AWS: one master and 2 slaves.

  • ansible-playbook <file_name>

I used an aws.yml playbook to launch three instances on the AWS cloud: one master node and two slave nodes.

- hosts: localhost
  vars_files:
    - secret.yml
  tasks:
    - name: "Creating Master Node"
      ec2:
        region: ap-south-1
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        vpc_subnet_id: subnet-0288bbf00ed3128d7
        count: 1
        state: present
        instance_type: t2.micro
        key_name: redhat-key
        assign_public_ip: yes
        group_id: sg-0612a79a1fdb041ff
        image: ami-08f63db601b82ff5f
        instance_tags:
          name: master

    - name: "Creating Slave Nodes"
      ec2:
        region: ap-south-1
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        vpc_subnet_id: subnet-0288bbf00ed3128d7
        count: 2
        state: present
        instance_type: t2.micro
        key_name: redhat-key
        assign_public_ip: yes
        group_id: sg-0612a79a1fdb041ff
        image: ami-08f63db601b82ff5f
        instance_tags:
          name: slave

Don’t forget to write the secret.yml file.

access_key: AKIA6dslfXXXXXXXXXX
secret_key: Ay1obxxxxxxxxxxxxxxxxxxxxxxxxxxxx
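
Since secret.yml holds real credentials, a safer option (my suggestion, not part of the original steps) is to encrypt it with Ansible Vault and supply the password at run time:

ansible-vault encrypt secret.yml
ansible-playbook aws.yml --ask-vault-pass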

Ansible-playbook for master node

1. First, add the kubeadm repo on RHEL so that the node can reach the internet and download kubeadm, kubelet, kubectl, etc. In my case I used the copy module, but we could also use the yum_repository module (see the sketch after the repo file below).
- name: Adding Kubeadm repo
  copy:
    src: kubernetes.repo
    dest: /etc/yum.repos.d

kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
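
For reference, here is a rough equivalent using the yum_repository module mentioned above (a sketch with the same repo settings, not tested in this exact setup):

- name: Adding Kubeadm repo via yum_repository
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: yes
    gpgcheck: yes
    repo_gpgcheck: yes
    gpgkey:
      - https://packages.cloud.google.com/yum/doc/yum-key.gpg
      - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg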

2. Now install docker and kubeadm using the package module.

- name: installing docker
  package:
    name: "docker"
    state: present

- name: installing kubeadm
  package:
    name: "kubeadm"
    state: present

3. Enable and start the docker service.

- name: Enabling docker service
  service:
    name: docker
    state: started
    enabled: yes

4. Now pull all the images from Docker Hub that are needed to set up the master node. These images are for the api-server, etcd, controller-manager, scheduler, and so on. In my case I used the command module to perform this step.

- name: Pulling all kubeadm config images
  command: kubeadm config images pull
  ignore_errors: no

5. As we know, Kubernetes expects the systemd cgroup driver, so we have to change docker’s cgroup driver to systemd. For this I used a file called daemon.json, copied to /etc/docker/daemon.json, which switches docker to the systemd driver.

- name: Changing driver cgroup to systemd
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

daemon.json file

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
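
Once docker has been restarted (step 7 below), you can verify that the driver change took effect; this check is my addition:

docker info | grep -i 'cgroup driver'
# expected output includes: Cgroup Driver: systemd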

6. Now remove all swap entries from /etc/fstab, because enabled swap causes errors while initializing the master node.

- name: Removing swapfile from /etc/fstab
  mount:
    name: "{{ item }}"
    fstype: swap
    state: absent
  with_items:
    - swap
    - none
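
Note that the mount module only edits /etc/fstab; it does not turn off swap that is already active. As an extra safeguard (my addition, not part of the original role), you can disable active swap as well:

- name: Disabling active swap
  command: swapoff -a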

7. Now enable the kubelet service and restart docker, because after changing the cgroup driver to systemd the docker service has to be restarted.

- name: Enabling kubelet service
  service:
    name: kubelet
    daemon_reload: yes
    state: started
    enabled: yes

- name: Restarting docker service
  service:
    name: docker
    state: restarted

8. Install the iproute-tc software, because Kubernetes uses it while initializing the master node.

- name: Installing iproute-tc
  package:
    name: iproute-tc
    state: present
    update_cache: yes

9. Now we can initialize the node as a master node. In my case I used the shell module for this operation. Remember one thing: it is not good practice to use less than 2200MB of RAM or fewer than 2 CPUs, otherwise kubeadm throws an error. To ignore these errors we can use the --ignore-preflight-errors flag.

- name: Initializing the kubeadm
  shell: "kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem --node-name=master"
  register: kubeadm
  ignore_errors: yes

- debug:
    msg: "{{ kubeadm }}"

10. Now set up the kubeconfig for the home user so that the master node can also work as a client and use the kubectl command.

- name: Setup kubeconfig for home user
  shell: "{{ item }}"
  with_items:
    - "mkdir -p $HOME/.kube"
    - "cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    - "chown $(id -u):$(id -g) $HOME/.kube/config"

11. Now add the flannel network in the master node so that it can set up the internal overlay network.

- name: Adding flannel network
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
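
To confirm the overlay network came up, we can list the flannel pods; the coreos manifest above deploys them into kube-system (newer flannel releases use a separate kube-flannel namespace instead):

kubectl get pods -n kube-system | grep flannel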

12. Now I create a join token for the slave nodes for authentication purposes, and store the token in a file called token.sh using the local_action module.

- name: Joining token
  shell: "kubeadm token create --print-join-command"
  register: token

- debug:
    msg: "{{ token }}"
  ignore_errors: yes

- name: Storing token into a file
  local_action: copy content={{ token.stdout_lines[0] }} dest=../slave1/token.sh

13. Now copy the database.yaml and wordpress.yml files to the master node.

- name: Copying mysql-database.yml file
  copy:
    src: database.yaml
    dest: /root

- name: Copying wordpress.yml file
  copy:
    src: wordpress.yml
    dest: /root

14. Now finally apply both the database and wordpress files using the shell module. But remember one thing: don’t forget to give the proper path where your database.yaml and wordpress.yml files are located. We can also see the output of each command using the debug module.

- shell: "kubectl apply -f /root/database.yaml"
register: mysql
- shell: "kubectl apply -f /root/wordpress.yml"
register: wordpress
- debug:
msg: "{{ mysql }}"
- debug:msg: "{{ wordpress }}"

Ansible-playbook for slave node

Almost every step is similar to the master node. Everything up to step 8 is the same as the master node (except step 4), so I am only going to explain the extra steps needed to set up a node as a slave.

9. Copy the k8s.conf file to the /etc/sysctl.d/ path. It is important for initializing any node as a slave node.

- name: Copying k8s.conf file
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

k8s.conf file

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
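
These bridge sysctls only take effect if the br_netfilter kernel module is loaded, so (my addition, not part of the original role) it can be worth loading it before reloading sysctl:

- name: Loading br_netfilter module
  command: modprobe br_netfilter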

10. Now reload the sysctl settings.

- name: Enabling sysctl
  shell: "sysctl --system"

11. Now, before joining a slave node to the master node, we have to use the token created by the master node. While running the playbook I stored the token created by the master node in the token.sh file, so in this step I can use it. I used the shell module to run the token.sh file.

- name: Copying token file at /root location
  copy:
    src: token.sh
    dest: /root/token.sh

- name: Joining slave node to master node
  shell: "sh /root/token.sh"
  register: joined

- debug:
    msg: "{{ joined }}"

Now, after running these roles, the entire master node and slave nodes will be configured.

Output (WordPress & MySQL)

1. Now let’s log in to the master node and run the kubectl get pods command to cross-verify.

2. Now WordPress and the MySQL database are running properly. Next, check the port number where WordPress is exposed using the command below.

  • kubectl get svc

3. Copy the public IP of any of the nodes along with the respective port number, go to your Google Chrome browser, and paste it.

>> Set Username & Password…..and We completed the task.

WordPress, and the MySQL database connected to it, are now running successfully on the respective slaves.

Thanks For Reading.

Keep sharing…….Keep Growing..!!!!
