Kubernetes Multi-Cloud Setup
A highly available Kubernetes cluster spanning multiple clouds
Prerequisites:
- AWS CLI installed
- Azure CLI installed
The high-availability cluster is composed of master and slave nodes. The multi-cloud k8s cluster is set up as follows:
- k8s master on an AWS EC2 instance
- k8s slave on an AWS EC2 instance
- k8s slave on an Azure virtual machine
Before launching the nodes on AWS, complete the following setup:
- Create a key pair for SSH access to the nodes.

- Create a security group that allows the required inbound access to the nodes.
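Both resources can be created with the AWS CLI. A minimal sketch, assuming the names multi-cloud-k8s-key and k8s-multicloud-sg (both hypothetical) and a default VPC:

```bash
# Create a key pair and save the private key for SSH (key name is an assumption)
aws ec2 create-key-pair \
  --key-name multi-cloud-k8s-key \
  --query 'KeyMaterial' --output text > multi-cloud-k8s-key.pem
chmod 400 multi-cloud-k8s-key.pem

# Create a security group and open all inbound access (hypothetical group name;
# 0.0.0.0/0 on all protocols matches the "all inbound" setup above --
# restrict this in production)
aws ec2 create-security-group \
  --group-name k8s-multicloud-sg \
  --description "k8s multi-cloud cluster"
aws ec2 authorize-security-group-ingress \
  --group-name k8s-multicloud-sg \
  --protocol all --cidr 0.0.0.0/0
```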

Step-1) Launch Nodes in AWS and Azure
- terraform code

terraform init
# this command installs all the dependencies (provider plugins and modules) needed to provision the resources in AWS and Azure

terraform apply
# this command provisions the resources in AWS and Azure. Before provisioning, it asks for your confirmation.
Say yes to move forward.
Explanation
- First, with the AWS provider, provision the master and slave nodes. The instance type is t2.medium, the AMI is Amazon Linux 2, and the security group and key pair are the ones created earlier.
- Two instances will be created, named AWS_k8s_master and AWS_k8s_node1.
- With the Azure provider, provision the VM in the Azure cloud.
- Provisioning the VM requires creating several resources: a resource group, virtual network, subnet, public IP, network security group, network security group association, network interface, and the virtual machine itself. The OS flavour for the virtual machine is Ubuntu.
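Since the Terraform code itself is not reproduced here, the AWS portion described above can be sketched as follows. This is a sketch under the assumptions in the explanation; the region, AMI ID, key name, and security group name are placeholders:

```hcl
provider "aws" {
  region = "ap-south-1" # assumption: any region works
}

resource "aws_instance" "k8s_master" {
  ami             = "ami-0c2b8ca1dad447f8a" # placeholder: an Amazon Linux 2 AMI
  instance_type   = "t2.medium"
  key_name        = "multi-cloud-k8s-key"   # key pair created earlier
  security_groups = ["k8s-multicloud-sg"]   # security group created earlier

  tags = { Name = "AWS_k8s_master" }
}

resource "aws_instance" "k8s_node1" {
  ami             = "ami-0c2b8ca1dad447f8a"
  instance_type   = "t2.medium"
  key_name        = "multi-cloud-k8s-key"
  security_groups = ["k8s-multicloud-sg"]

  tags = { Name = "AWS_k8s_node1" }
}
```

The Azure side follows the same pattern with the azurerm provider, creating each of the resources listed above (resource group, virtual network, subnet, and so on) before the virtual machine.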
AWS EC2 Console

Azure Resources Console

Step-2) Configure k8s Master
Now log in to the AWS_k8s_master instance.
Configure the AWS_k8s_master node using the following bash script.

# Run the script
bash script.sh
Explanation
- Set up the Kubernetes yum repo with an echo command in the /etc/yum.repos.d/ path.
- With the yum command, install docker, kubeadm, kubectl, kubelet and iproute-tc.
- Start and enable the docker and kubelet services so they always run.
- Pull the control-plane images with the kubeadm command and set the Docker cgroup driver to systemd.
- Restart the docker service so the cgroup driver change takes effect.
- The kubeadm init command initializes the Kubernetes cluster. In the kubeadm command, the --control-plane-endpoint flag is set to the master VM's public IP.
- Perform the post-init steps, such as creating the kubeconfig directory and changing its ownership so kubectl has the right permissions.
- Create the Flannel resources. Flannel provides a layer 3 IPv4 network between the nodes in a cluster. Flannel does not control how containers are networked to the host, only how traffic is transported between hosts. However, Flannel does provide a CNI plugin for Kubernetes and guidance on integrating with Docker.
- The bash script outputs the following result, which includes the kubeadm join command (with token) that will be used to set up the k8s slaves.
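Since the script itself is not reproduced here, the steps above can be sketched like this. It is a sketch under the assumptions in the explanation; the repo definition and Flannel manifest URL are the standard upstream ones, and MASTER_PUBLIC_IP is a placeholder:

```bash
#!/bin/bash
set -e

# 1. Add the Kubernetes yum repo
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# 2. Install docker, kubeadm, kubectl, kubelet and iproute-tc
yum install -y docker kubeadm kubectl kubelet iproute-tc --disableexcludes=kubernetes

# 3. Start and enable the docker and kubelet services
systemctl enable --now docker kubelet

# 4. Pull control-plane images and switch Docker's cgroup driver to systemd
kubeadm config images pull
cat <<'EOF' > /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl restart docker

# 5. Initialize the cluster (MASTER_PUBLIC_IP is a placeholder)
kubeadm init --control-plane-endpoint "MASTER_PUBLIC_IP:6443" \
  --pod-network-cidr=10.244.0.0/16

# 6. Set up kubeconfig for the current user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# 7. Install Flannel from the upstream manifest
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```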
Result

Step-3) Configure Slave Nodes in AWS and Azure
AWS slave node
- Log in to the AWS_k8s_node1 instance.
- Write the following bash script.

bash script.sh
Explanation
- Set up the Kubernetes yum repo on the AWS k8s slave in the /etc/yum.repos.d/ directory.
- Install docker, kubeadm, kubectl and iproute-tc, then start the docker and kubelet services.
- Set the cgroup driver to systemd in /etc/docker/daemon.json.
- Enable net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-iptables.
- Lastly, run the kubeadm join command copied from the master node:
kubeadm join MASTER_VM_PUBLICIP:6443 --token zkeih2.um4h1lbordbdoh6u \
--discovery-token-ca-cert-hash sha256:da9322181ebd40b2ee48f4588943a3789b6d6128c940c4696eea78ca9ca76d4c
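Put together, the slave-node script can be sketched as follows, under the same assumptions as the master script (standard upstream repo; the token and hash come from the master's kubeadm init output):

```bash
#!/bin/bash
set -e

# Kubernetes yum repo (same as on the master)
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yum install -y docker kubeadm kubectl iproute-tc --disableexcludes=kubernetes
systemctl enable --now docker kubelet

# Docker cgroup driver -> systemd
cat <<'EOF' > /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl restart docker

# Let iptables see bridged traffic
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF
sysctl --system

# Join the cluster using the command printed by kubeadm init on the master
kubeadm join MASTER_VM_PUBLICIP:6443 --token zkeih2.um4h1lbordbdoh6u \
  --discovery-token-ca-cert-hash sha256:da9322181ebd40b2ee48f4588943a3789b6d6128c940c4696eea78ca9ca76d4c
```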

Azure Slave Node
- Log in to the Azure slave node.
- Write the following bash script.

Explanation
- The apt-get command installs the docker service.
- Then start and enable the docker service.
- Run apt-get update and install the curl command.
- Install the Kubernetes packages and repeat the remaining steps done on the AWS k8s slave node.
kubeadm join MASTER_VM_PUBLICIP:6443 --token zkeih2.um4h1lbordbdoh6u \
--discovery-token-ca-cert-hash sha256:da9322181ebd40b2ee48f4588943a3789b6d6128c940c4696eea78ca9ca76d4c
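On Ubuntu the same steps use apt instead of yum. A sketch under the same assumptions (the apt repo is the standard upstream Kubernetes one; the remaining cgroup-driver and sysctl steps are identical to the AWS slave):

```bash
#!/bin/bash
set -e

# Install docker and curl, then enable docker
apt-get update -y
apt-get install -y docker.io curl apt-transport-https
systemctl enable --now docker

# Add the Kubernetes apt repo and install kubeadm, kubelet and kubectl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install -y kubeadm kubelet kubectl

# Remaining steps as on the AWS slave: systemd cgroup driver, bridge
# sysctls, then join the cluster with the command from the master
kubeadm join MASTER_VM_PUBLICIP:6443 --token zkeih2.um4h1lbordbdoh6u \
  --discovery-token-ca-cert-hash sha256:da9322181ebd40b2ee48f4588943a3789b6d6128c940c4696eea78ca9ca76d4c
```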

We joined two slave nodes, one in AWS and the other in Azure, to the master node. Let's check it.
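The check is run on the master with kubectl:

```bash
# On the master: list all nodes with their roles and IPs
kubectl get nodes -o wide
```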
MASTER NODE

Here the master lists three nodes: the first is the Azure slave node, the second is the Kubernetes master node itself, and the third is the AWS slave node.
Since the two slave nodes come from AWS and Azure, we have set up Kubernetes over multiple clouds.