ARTH_TASK 19

Swapnilsukare
4 min read · Jan 14, 2022


The aim of this article is to set up a multi-node Kubernetes cluster on AWS Cloud using Ansible playbooks, and then upload these playbooks as Ansible roles to the open-source Ansible Galaxy platform.

To perform this setup, I am using the following platforms:

⚜️RHEL8 VM running inside Oracle VirtualBox → Ansible control node

⚜️AWS instances → Ansible managed nodes (the future Kubernetes master and workers)

⚜️Inside our Ansible workspace on the control node, we have the following files:

🔹ansible.cfg: This is the local Ansible configuration file, picked up automatically from the workspace directory it sits in.

🔹inventory.txt: This file is the inventory (the host database) for our Ansible playbooks; it contains the IP addresses of all the hosts where automation has to be performed.
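As an illustration, an equivalent inventory written in Ansible’s YAML inventory syntax might look like the sketch below; the group names, IPs, user, and key path are all placeholders, not values from this setup:

```yaml
# inventory sketch in YAML syntax; all values are placeholders
all:
  children:
    k8s_master:
      hosts:
        13.233.xx.xx:
    k8s_workers:
      hosts:
        65.0.xx.xx:
        65.1.xx.xx:
  vars:
    ansible_user: ec2-user
    ansible_ssh_private_key_file: /root/keys/mykey.pem
```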

🔹password.yml: This is a variable file for the playbook where my AWS credentials are stored, so that AWS can authenticate the API calls Ansible makes.

🔹pod_ip.yml: This variable file contains the pod CIDR network information, i.e. the pods in our cluster will get IPs from this network range.
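As a rough sketch, the two variable files might look like the following; the variable names and the CIDR value are my assumptions (the CIDR shown is the range Flannel uses by default), and in practice password.yml should be encrypted with ansible-vault:

```yaml
# password.yml (keep this file vault-encrypted); values are placeholders
aws_access_key: AKIAxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# pod_ip.yml; Flannel's default pod network range, shown as an example
pod_network_cidr: 10.244.0.0/16
```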

🔹ec2.yml: This is the playbook that will launch three instances in my AWS account (the Kubernetes master and worker nodes).

The aws_access_key and aws_secret_key variables fetch my access and secret keys from the password.yml file, which I have included through the playbook’s vars_files section.
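A minimal sketch of what such a playbook could look like, using the classic ec2 module available at the time; the AMI, key name, security group, subnet, and region are placeholders, and the variable names match the files sketched above:

```yaml
# ec2.yml: launch three EC2 instances for the cluster (sketch; IDs are placeholders)
- hosts: localhost
  vars_files:
    - password.yml
  tasks:
    - name: Launch three instances for the Kubernetes cluster
      ec2:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: ap-south-1
        image: ami-0xxxxxxxxxxxxxxxx
        instance_type: t2.micro
        key_name: mykey
        group: k8s-sg
        vpc_subnet_id: subnet-xxxxxxxx
        assign_public_ip: yes
        count: 3
        wait: yes
        instance_tags:
          Name: k8s-node
```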

🔰Let’s run the ec2.yml playbook

The playbook executed without any errors; let’s check the AWS console to confirm that the instances have been launched.

Great, our playbook ran successfully and deployed the instances on AWS Cloud…

Let’s move further…

🔹K8.yml: This playbook configures our Kubernetes master server. It performs the following steps (a sketch of the playbook follows the list):

  1. Installs Docker as the underlying container engine along with the iproute-tc package (required for the master configuration), and starts the Docker service
  2. Sets up the kubeadm yum repository and installs the kubeadm, kubectl, and kubelet packages
  3. Pulls the container images required for the cluster setup
  4. Switches the Docker cgroup driver from cgroupfs to systemd and starts the kubelet service
  5. Initialises the cluster with kubeadm, passing our pod network IP range and ignoring the pre-flight errors caused by low compute resources (CPU, RAM)
  6. Sets up the kubeconfig file so that the kubectl command knows the master’s IP, port, and user credentials
  7. Deploys Flannel so that we get a VXLAN overlay network inside the Kubernetes cluster
  8. Generates a join token on the master so that worker nodes can join, and stores the join command in a file
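Here is a condensed sketch of what these tasks might look like; the package names, repo URL, Flannel manifest URL, and file paths are assumptions based on the standard kubeadm install steps, not the article’s exact playbook:

```yaml
# K8.yml: configure the Kubernetes master (condensed sketch)
- hosts: k8s_master
  become: yes
  vars_files:
    - pod_ip.yml
  tasks:
    - name: Install Docker and iproute-tc
      package:
        name: [docker, iproute-tc]
        state: present

    - name: Start and enable the Docker service
      service:
        name: docker
        state: started
        enabled: yes

    - name: Set up the kubeadm yum repo
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgcheck: no

    - name: Install kubeadm, kubectl and kubelet
      package:
        name: [kubeadm, kubectl, kubelet]
        state: present

    - name: Pull the container images needed by the control plane
      command: kubeadm config images pull

    - name: Switch the Docker cgroup driver to systemd
      copy:
        dest: /etc/docker/daemon.json
        content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'

    - name: Restart Docker to apply the cgroup driver change
      service:
        name: docker
        state: restarted

    - name: Start and enable kubelet
      service:
        name: kubelet
        state: started
        enabled: yes

    - name: Initialise the cluster with our pod network range
      command: >
        kubeadm init --pod-network-cidr={{ pod_network_cidr }}
        --ignore-preflight-errors=NumCPU,Mem

    - name: Set up the kubeconfig file for kubectl
      shell: |
        mkdir -p $HOME/.kube
        cp /etc/kubernetes/admin.conf $HOME/.kube/config

    - name: Deploy Flannel to provide the VXLAN pod network
      command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    - name: Generate a join token and store the command in a file
      shell: kubeadm token create --print-join-command > /root/join_token.sh
```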

🔰Let’s run the K8.yml playbook

Our second playbook also executed successfully…

Let’s move further…

🔹workernode.yml: This playbook configures our Kubernetes worker nodes. It performs the following steps (a sketch of the distinctive final step follows the list):

  1. Installs Docker as the underlying container engine along with the iproute-tc package (required for the worker configuration), and starts the Docker service
  2. Sets up the kubeadm yum repository and installs the kubeadm, kubectl, and kubelet packages
  3. Pulls the container images required for the cluster setup
  4. Switches the Docker cgroup driver from cgroupfs to systemd and starts the kubelet service
  5. Runs the join command generated on the master so that the worker nodes join it and form the Kubernetes cluster
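The worker playbook repeats the same prerequisite tasks shown for the master and then ends with the join step; a sketch of that final part, assuming the join-command file path used in the master sketch above:

```yaml
# workernode.yml: configure the worker nodes (sketch)
- hosts: k8s_workers
  become: yes
  tasks:
    # ... the same Docker, iproute-tc, kubeadm-repo, package-install,
    # image-pull, and cgroup-driver tasks shown for the master ...

    - name: Copy the join command generated on the master
      copy:
        src: /root/join_token.sh   # assumes the file was brought to the control node first
        dest: /root/join_token.sh

    - name: Join this node to the Kubernetes cluster
      shell: bash /root/join_token.sh
```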

🔰Let’s run the workernode.yml playbook

Great, all the playbooks executed successfully…

Let’s move further…

🔰Let’s jump to the testing part

⚜️MASTER:

🔹kubectl get nodes → shows all the nodes connected to the master inside the cluster

All the nodes are in the Ready state and running

🔹kubectl get namespaces → shows all the namespaces inside the cluster; namespaces give the Kubernetes cluster its multi-tenancy capability

🔹kubectl get pods -n kube-system → shows all the pods inside the kube-system namespace, which is created during master configuration; this confirms that all the cluster’s system containers are up and running
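If you’d rather drive these checks from the Ansible control node instead of logging into the master, a hypothetical wrapper playbook could run the same commands; verify.yml is my own name, not part of this setup:

```yaml
# verify.yml: hypothetical convenience playbook for the cluster health checks
- hosts: k8s_master
  become: yes
  tasks:
    - name: Run the cluster health checks on the master
      command: "kubectl {{ item }}"
      environment:
        KUBECONFIG: /etc/kubernetes/admin.conf
      loop:
        - get nodes
        - get namespaces
        - get pods -n kube-system
      register: checks

    - name: Show each command's output
      debug:
        msg: "{{ item.stdout_lines }}"
      loop: "{{ checks.results }}"
```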

🔹Kudos!!! The multi-node Kubernetes cluster is successfully set up and its services are running on AWS Cloud!!!!
