Best Practice Kubernetes Cluster with kubeadm


Kubernetes made easy

You are right to ask yourself: does that exist? Yes, it does! kubeadm is a toolkit that helps you bootstrap a simple Kubernetes cluster. The idea behind kubeadm is to give you a tool with which you can quickly and easily set up a minimal viable cluster. Minimal viable means that kubeadm only covers the add-ons that are absolutely necessary to get a working cluster. Nice-to-have add-ons such as a Kubernetes dashboard or cloud-specific add-ons are outside the scope of kubeadm. This makes kubeadm very well suited as a test environment for smaller applications, as a first step into the Kubernetes world, or for all kinds of other experiments. Even though kubeadm is still in its early days, it will be exciting to see whether it makes the Kubernetes world a bit more user-friendly in the future.

If you are just starting out with Kubernetes, I recommend reading our article Docker Swarm vs. Kubernetes: Both Container Management Tools in Comparison.

What are we gonna do?

This tutorial is a quick guide to a working best-practice Kubernetes cluster consisting of two nodes: a master node and a worker node. You can also follow this tutorial if you want to build a cluster with a larger number of nodes. For the cluster I created two cloud servers from the CentOS template in my gridscale panel. Whether you follow the tutorial on our gridscale cloud servers, your Linux laptop, a physical server or a Raspberry Pi does not matter. Any deb/rpm-compatible OS such as CentOS or Ubuntu can be used as the operating system.

Here are some details about my setup in the gridscale Panel:

  • Host kubeMaster1: CentOS 7, 4 cores, 2 GB RAM, 40 GB storage
  • Host kubeWorker1: CentOS 7, 2 cores, 2 GB RAM, 40 GB storage

 

Preparations

The starting point for the rest of this tutorial are two gridscale cloud servers with the host names kubeMaster1 and kubeWorker1. Both of my VPS run CentOS 7; creating them from the gridscale template only took a few seconds. You should plan at least 2 GB RAM for each of your VPS and at least 2 CPUs for the master. In addition, your cloud servers must be reachable over a public or private network and have Docker installed (v1.12 is officially recommended; v1.11, v1.13 and 17.03 are also known to work).

Server Configuration:

Before we can move on to the actual installation and creation of the cluster, there is some more preparation to do. You need to configure the two newly created CentOS cloud servers and then install Docker v1.12.

The following points are on kubeadm’s list of requirements and must be taken care of in advance:

  • Unique host name, MAC address and product_uuid for each node
  • Some ports must be open (you can find the full list in the official kubeadm documentation)
  • Swap must be disabled

1. Check if all your machines have unique hostnames, MAC addresses and product_uuid. You don’t have to worry about this point if you are following this tutorial using gridscale.
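If you are not on gridscale and want to double-check this yourself, the kubeadm documentation suggests comparing the MAC addresses and the product_uuid of each node; a quick way to look at both is:

# list network interfaces together with their MAC addresses
ip link
# print the product_uuid of this machine (run as root)
cat /sys/class/dmi/id/product_uuid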

2. Enable ports. The CentOS firewall is activated by default in our gridscale template. We must open at least port 6443 for the Kubernetes API server and port 10250 for the kubelet API.

Note: You need root permissions to run the following commands.

First, find out which zones are active in firewalld.

firewall-cmd --get-active-zones

Then add the two ports permanently to your zone (in this example the public zone).

firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
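Rules added with --permanent only end up in the firewalld configuration; to make them active in the running firewall as well, reload firewalld afterwards:

firewall-cmd --reload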

3. Finally, swap has to be switched off. On CentOS 7 / Ubuntu you can disable swap with:

swapoff -a
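Keep in mind that swapoff -a only disables swap until the next reboot. If you want the change to survive a reboot (not strictly needed to follow this tutorial), one common way is to comment out the swap entry in /etc/fstab, for example with GNU sed:

# comment out every fstab line that mounts swap
sed -i '/\sswap\s/ s/^/#/' /etc/fstab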

 

Install Docker

On all nodes that belong to your cluster – that means master and worker – you have to install Docker. Kubernetes officially recommends v1.12, but versions v1.11, v1.13 and 17.03 are also known to work well.

CentOS:

yum -y update
yum -y install yum-utils
yum-config-manager --add-repo https://yum.dockerproject.org/repo/main/centos/7
yum -y update
# yum search --showduplicates docker-engine
yum -y --nogpgcheck install docker-engine-1.12.6-1.el7.centos.x86_64
systemctl enable docker && systemctl start docker

 

Ubuntu:

You can install Docker v1.13 from the Ubuntu repositories.

apt-get update
apt-get install -y docker.io
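No matter which of the two ways you chose, it does not hurt to quickly confirm that the Docker daemon is running and which version was installed before you continue:

systemctl is-active docker
docker version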

 

Install kubeadm Toolkit

In this section I will show you how to install the toolkit consisting of kubeadm, kubelet and kubectl on CentOS and Ubuntu. You must install the package on each of your machines that form the cluster.

Tip: With the snapshot function of gridscale you can save yourself some work. Just take a snapshot after you have equipped the first host with the toolkit and clone all other hosts for your cluster from this snap.

On CentOS, RHEL and Fedora you use the following commands for the installation:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Note: Setting SELinux to permissive mode with setenforce 0 (as done above) is required so that containers can access the host filesystem, which is needed by pod networks, for example. This is necessary until SELinux support in the kubelet improves.

Now make sure that /proc/sys/net/bridge/bridge-nf-call-iptables is set to 1. Depending on which pod network you deploy later, this value is needed for the pod network to run smoothly. This is the case for Flannel, the pod network we will install later. You can read more about this in the official kubeadm documentation.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
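You can verify that the setting is active with sysctl. If the key does not exist yet, the br_netfilter kernel module is probably not loaded; loading it with modprobe br_netfilter and running sysctl --system again should fix that.

sysctl net.bridge.bridge-nf-call-iptables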

 

kubeadm Installation on Ubuntu/Debian

On Ubuntu the commands are a bit shorter than with CentOS:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
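On both distributions you can quickly check that all three components are in place and which version was installed:

kubeadm version
kubectl version --client
kubelet --version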

 

Configure cgroup Driver

The cgroup driver used by the kubelet must match the cgroup driver used by Docker.

With

docker info | grep -i cgroup

you can see which cgroup driver Docker uses. The command

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

shows the kubelet configuration; you can see the setting in the line with the flag “--cgroup-driver=”.

If the kubelet setting does not match, you can change the cgroup flag in the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

When I installed Docker v1.12 and kubeadm on CentOS 7, the Docker cgroup driver was cgroupfs while the kubelet was configured with systemd. If you find the same setting, you can adjust the kubelet config with the following command:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

If you made an adjustment, the kubelet must be restarted.

systemctl daemon-reload
systemctl restart kubelet

 

Creating clusters with kubeadm

At this point in the tutorial, you should have completed all pre-configurations on both cloud servers and have successfully installed Docker and the kubeadm toolkit.

Initialize Master

Initialize your master:

kubeadm init --pod-network-cidr=10.244.0.0/16

The process can take a while… First, a series of pre-flight checks is run to ensure that your server is ready to run Kubernetes. Warnings are displayed during the process, and if any errors occur, the process is aborted.

If everything worked out, you should see something like this:

[Screenshot: output of kubeadm init bootstrapping the cluster]

As you can see, a kubeadm join command is returned to add more machines to your cluster. You’ll need this one later!

Then, as a normal system user without root privileges, run the following commands to be able to use your cluster:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

This way, the admin.conf generated by the init command ends up in your home directory. If you want to keep working as the root user instead, you can also simply tell kubectl where to find the admin config before running further commands, as shown below.

Otherwise the following error will occur: “The connection to the server localhost:8080 was refused – did you specify the right host or port?”
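If you prefer to stay root, an alternative from the upstream kubeadm documentation is to point kubectl at the generated admin config for the current shell session instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf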


 

Deploying Pod Network for the Cluster

A so-called Pod Network or Container Network Interface is necessary for your pods to communicate with each other.

Before you can finish bootstrapping your cluster with kubeadm and deploy an application afterwards, a pod network must be installed as an add-on on the master machine.
In the rest of this tutorial we will use Flannel for our cluster.

The standard network kubenet has some limitations and not many features. There are many projects that provide alternatives with a wide range of features for different use cases. The three most used providers are Weave, Flannel and Calico. Flannel uses the Kubernetes API and VXLAN as its network model. It is a simple provider that is easy to configure and has many users.

Use the following command to install Flannel via kubectl:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

 

To make the Flannel pod network work properly, we already attached the flag --pod-network-cidr=10.244.0.0/16 to the kubeadm init command above.
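Once Flannel has been applied, you can watch the Flannel and DNS pods come up; the DNS pod stays in Pending until a pod network is installed, so this is a good indicator that the network is working:

kubectl get pods --all-namespaces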


 

Add Worker

As you may have noticed, bootstrapping the cluster also returned a command to add more nodes to your cluster.

Now switch to the host you want to join to your cluster, which you also prepared and equipped with Docker and kubeadm at the beginning of the tutorial. In my case this is the host with the hostname kubeWorker1.

Here you execute the complete kubeadm join command generated by init, including the IP address, token and hash:

kubeadm join [your-ip+token+hash]

If everything runs smoothly, you get a “This node has joined the cluster” message back.

[Screenshot: “This node has joined the cluster” output of kubeadm join]

If you are planning a cluster with a larger number of nodes, use the same command on all other hosts you want to join to your cluster.
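If you no longer have the original join command at hand, newer kubeadm releases can regenerate it on the master (this assumes a sufficiently recent kubeadm; older versions require assembling the token and CA hash manually):

kubeadm token create --print-join-command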

Your Kubernetes Cluster

…is ready by now 🙂

Back on the host kubeMaster1, you can now inspect your finished kubernetes cluster.

kubectl get nodes

[Screenshot: output of kubectl get nodes]

The Kubernetes cluster consists of two nodes: kubeMaster1 with the role master and kubeWorker1 with the role <none>.

Don’t be irritated by the role status of the worker node: kubeadm is not yet fully mature, but there is already a corresponding request on GitHub.
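If you want to convince yourself that the cluster actually schedules workloads on the worker, a small smoke test could look like the following; nginx is just an arbitrary example image, and the exact objects kubectl run creates depend on your Kubernetes version:

kubectl run nginx --image=nginx --port=80
kubectl get pods -o wide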

For all other obstacles and problems around kubeadm you can have a look at the official troubleshooting from Kubernetes:

https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/


If you have any questions or suggestions about this or any other tutorial, we would love to hear from you – team@gridscale.io


 

Conclusion

The Kubernetes world is complex, big and, above all, often not that simple. Reducing this complexity a little is the idea behind kubeadm. kubeadm improves the user experience with Kubernetes and has the advantage that it runs everywhere, even on a Raspberry Pi. I hope this tutorial could show you, and everyone else new to Kubernetes out there, how to build a small and secure cluster with kubeadm on CentOS / Ubuntu. kubeadm is perfect for further experiments and for diving deeper into the Kubernetes world. More to come soon 🙂

