Create a cheap “non-production-grade” Kubernetes cluster on a shoestring budget (Part 1)

Introduction

The best way I have ever found to learn a programming language is to dive into a project and build something with the language I am trying to learn. The same, one could say, applies to operating systems and new technologies.

Kubernetes, as defined on its own website, is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a medium to steep learning curve depending on your background, and yes, it can also be set up on your local machine for experimentation purposes. However, to do anything serious, you will most likely need to move your setup to some form of remote machine, be it a virtual machine you host yourself or a cloud-based setup. The question then becomes : how much are you willing to spend? At this point we only want to get our feet wet, so we would rather not break the bank.

Setting up the Host

For these instructions I have used a VPS hosted at www.time4vps.com, since I already had an account with these guys and am very happy with their services. However, the instructions below are not specific to this provider and can be used with any Ubuntu 18 installation. The only requirements are a vanilla installation with nothing else running (to avoid conflicts) and root access to the machine.

  • Create your VPS based on Ubuntu 18.04 (Bionic Beaver). The specs for the one used in this tutorial were :
    • OS : Ubuntu 18.04
    • Processor : 2 x 2.6GHz
    • RAM : 8192MB
    • HDD : 80GB
  • Login to the new machine as root, run apt-get update / apt-get upgrade, and reboot if necessary.
  • Create a new user and add it to the sudo group
root@34a4:~# adduser johann
Adding user `johann' ...
Adding new group `johann' (1000) ...
Adding new user `johann' (1000) with group `johann' ...
Creating home directory `/home/johann' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for johann
Enter the new value, or press ENTER for the default
	Full Name []: Johann Fenech
	Room Number []:
	Work Phone []:
	Home Phone []:
	Other []:
Is the information correct? [Y/n] y
root@34a4:~#
root@34a4:~#
root@34a4:~# usermod -aG sudo johann
  • Check that you can ssh to the server and execute sudo commands using the new user.
  • Disable root ssh login in sshd and change the default ssh port from 22 to something else for security reasons. This step is optional but highly recommended. You can do this by editing /etc/ssh/sshd_config, uncommenting #Port 22 and changing the port, and changing PermitRootLogin from yes to no. I would also recommend allowing traffic to the ssh port only from selected IP addresses using iptables, as sketched below.

You might also want to disable password logins entirely and use only ssh keys, but that's entirely up to you and beyond the scope of this write-up.
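
A minimal sketch of these changes follows; the port 2222 and the address 203.0.113.10 are placeholders, so substitute your own values :

# In /etc/ssh/sshd_config set :
Port 2222
PermitRootLogin no

# Restart sshd to apply the changes
systemctl restart ssh

# Allow ssh traffic only from a trusted address and drop everything else
iptables -A INPUT -p tcp --dport 2222 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j DROP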

Setting up Kubernetes

The first step required to ensure proper operation of Docker and Kubernetes is to disable swap on the server. I won’t go into the merits of this, but if you are curious you can take a look at https://github.com/kubernetes/kubernetes/issues/53533 or google around. To disable swap, issue the command swapoff -a as root

Note : Unless otherwise stated, the rest of the commands should be run as root

swapoff -a
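
Note that swapoff -a only disables swap until the next reboot. To keep it disabled permanently, comment out any swap entries in /etc/fstab. A one-line sketch (check your fstab layout before running it) :

# Comment out every fstab line mentioning swap so it stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab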

Next we need to add some apt repositories which do not come standard with Ubuntu, and ensure we have the latest ca-certs installed on our server

# Ensure the latest ca-certs are installed, along with curl and software-properties-common
sudo apt-get -y install apt-transport-https ca-certificates software-properties-common curl

Add the Docker repo key and repo to our server installation

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
 

The command above should output “OK”. Next we add the actual repo

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

The output from this command should be something like this :

Hit:1 http://ubuntu-archive.mirror.serveriai.lt bionic InRelease
Hit:2 http://ubuntu-archive.mirror.serveriai.lt bionic-updates InRelease
Hit:3 http://ubuntu-archive.mirror.serveriai.lt bionic-security InRelease
Get:4 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
Hit:5 http://archive.canonical.com/ubuntu bionic InRelease
Get:6 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [7,889 B]
Fetched 72.3 kB in 1s (105 kB/s)
Reading package lists... Done

Next we add the Kubernetes repo in a very similar fashion to the Docker repo above : key first, then the repo

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -  
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Note that at the time of writing, no official Kubernetes repos exist for Bionic Beaver, but we can safely use the Xenial repos. If the official Bionic repos are ready by the time you read this, by all means use them. The command should return something like this :

Hit:1 http://ubuntu-archive.mirror.serveriai.lt bionic InRelease
Hit:2 http://ubuntu-archive.mirror.serveriai.lt bionic-updates InRelease
Hit:3 http://ubuntu-archive.mirror.serveriai.lt bionic-security InRelease
Hit:4 https://download.docker.com/linux/ubuntu bionic InRelease
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [28.4 kB]
Fetched 37.4 kB in 1s (44.2 kB/s)
Reading package lists... Done

In the next steps we will update our repo database, and install docker, kubelet, kubeadm and kubernetes-cni

apt-get update && apt-get install -y docker-ce kubelet kubeadm kubernetes-cni

This command will take a few seconds to execute, depending on your connection, but the output should look more or less like this :

Hit:1 http://ubuntu-archive.mirror.serveriai.lt bionic InRelease
Hit:2 http://ubuntu-archive.mirror.serveriai.lt bionic-updates InRelease
Hit:3 http://ubuntu-archive.mirror.serveriai.lt bionic-security InRelease
Hit:5 https://download.docker.com/linux/ubuntu bionic InRelease
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  aufs-tools cgroupfs-mount conntrack containerd.io cri-tools docker-ce-cli ebtables git git-man kubectl libltdl7 patch pigz socat
Suggested packages:
  git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-cvs git-mediawiki git-svn ed diffutils-doc
The following NEW packages will be installed:
  aufs-tools cgroupfs-mount conntrack containerd.io cri-tools docker-ce docker-ce-cli ebtables git git-man kubeadm kubectl kubelet kubernetes-cni libltdl7 patch pigz socat
0 upgraded, 18 newly installed, 0 to remove and 1 not upgraded.
Need to get 146 MB of archives.
After this operation, 705 MB of additional disk space will be used.
Get:1 http://ubuntu-archive.mirror.serveriai.lt bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://ubuntu-archive.mirror.serveriai.lt bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
Get:4 http://ubuntu-archive.mirror.serveriai.lt bionic/universe amd64 cgroupfs-mount all 1.4 [6,320 B]
Get:5 https://download.docker.com/linux/ubuntu bionic/stable amd64 containerd.io amd64 1.2.6-3 [22.6 MB]
Get:6 http://ubuntu-archive.mirror.serveriai.lt bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
Get:7 http://ubuntu-archive.mirror.serveriai.lt bionic-updates/main amd64 ebtables amd64 2.0.10.4-3.5ubuntu2.18.04.3 [79.9 kB]
Get:8 http://ubuntu-archive.mirror.serveriai.lt bionic-updates/main amd64 git-man all 1:2.17.1-1ubuntu0.4 [803 kB]
Get:10 http://ubuntu-archive.mirror.serveriai.lt bionic-updates/main amd64 git amd64 1:2.17.1-1ubuntu0.4 [3,907 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:14 http://ubuntu-archive.mirror.serveriai.lt bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
Get:15 http://ubuntu-archive.mirror.serveriai.lt bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]
Get:16 http://ubuntu-archive.mirror.serveriai.lt bionic-updates/main amd64 patch amd64 2.7.6-2ubuntu1.1 [102 kB]
Get:9 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
Get:11 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.15.2-00 [20.2 MB]
Get:17 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce-cli amd64 5:19.03.1~3-0~ubuntu-bionic [42.5 MB]
Get:12 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.15.2-00 [8,763 kB]
Get:13 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.15.2-00 [8,247 kB]
Get:18 https://download.docker.com/linux/ubuntu bionic/stable amd64 docker-ce amd64 5:19.03.1~3-0~ubuntu-bionic [22.7 MB]
Fetched 146 MB in 13s (11.3 MB/s)
Selecting previously unselected package pigz.
(Reading database ... 33859 files and directories currently installed.)
Preparing to unpack .../00-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package aufs-tools.
Preparing to unpack .../01-aufs-tools_1%3a4.9+20170918-1ubuntu1_amd64.deb ...
Unpacking aufs-tools (1:4.9+20170918-1ubuntu1) ...
Selecting previously unselected package cgroupfs-mount.
Preparing to unpack .../02-cgroupfs-mount_1.4_all.deb ...
Unpacking cgroupfs-mount (1.4) ...
Selecting previously unselected package conntrack.
Preparing to unpack .../03-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Selecting previously unselected package containerd.io.
Preparing to unpack .../04-containerd.io_1.2.6-3_amd64.deb ...
Unpacking containerd.io (1.2.6-3) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../05-cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) ...
Selecting previously unselected package docker-ce-cli.
Preparing to unpack .../06-docker-ce-cli_5%3a19.03.1~3-0~ubuntu-bionic_amd64.deb ...
Unpacking docker-ce-cli (5:19.03.1~3-0~ubuntu-bionic) ...
Selecting previously unselected package docker-ce.
Preparing to unpack .../07-docker-ce_5%3a19.03.1~3-0~ubuntu-bionic_amd64.deb ...
Unpacking docker-ce (5:19.03.1~3-0~ubuntu-bionic) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../08-ebtables_2.0.10.4-3.5ubuntu2.18.04.3_amd64.deb ...
Unpacking ebtables (2.0.10.4-3.5ubuntu2.18.04.3) ...
Selecting previously unselected package git-man.
Preparing to unpack .../09-git-man_1%3a2.17.1-1ubuntu0.4_all.deb ...
Unpacking git-man (1:2.17.1-1ubuntu0.4) ...
Selecting previously unselected package git.
Preparing to unpack .../10-git_1%3a2.17.1-1ubuntu0.4_amd64.deb ...
Unpacking git (1:2.17.1-1ubuntu0.4) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../11-kubernetes-cni_0.7.5-00_amd64.deb ...
Unpacking kubernetes-cni (0.7.5-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../12-socat_1.7.3.2-2ubuntu2_amd64.deb ...
Unpacking socat (1.7.3.2-2ubuntu2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../13-kubelet_1.15.2-00_amd64.deb ...
Unpacking kubelet (1.15.2-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../14-kubectl_1.15.2-00_amd64.deb ...
Unpacking kubectl (1.15.2-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../15-kubeadm_1.15.2-00_amd64.deb ...
Unpacking kubeadm (1.15.2-00) ...
Selecting previously unselected package libltdl7:amd64.
Preparing to unpack .../16-libltdl7_2.4.6-2_amd64.deb ...
Unpacking libltdl7:amd64 (2.4.6-2) ...
Selecting previously unselected package patch.
Preparing to unpack .../17-patch_2.7.6-2ubuntu1.1_amd64.deb ...
Unpacking patch (2.7.6-2ubuntu1.1) ...
Setting up aufs-tools (1:4.9+20170918-1ubuntu1) ...
Setting up git-man (1:2.17.1-1ubuntu0.4) ...
Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Setting up kubernetes-cni (0.7.5-00) ...
Setting up containerd.io (1.2.6-3) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up cri-tools (1.13.0-00) ...
Processing triggers for ureadahead (0.100.0-21) ...
Setting up socat (1.7.3.2-2ubuntu2) ...
Setting up cgroupfs-mount (1.4) ...
Setting up patch (2.7.6-2ubuntu1.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.25) ...
Setting up libltdl7:amd64 (2.4.6-2) ...
Setting up ebtables (2.0.10.4-3.5ubuntu2.18.04.3) ...
Installing new version of config file /etc/init.d/ebtables ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up kubectl (1.15.2-00) ...
Setting up docker-ce-cli (5:19.03.1~3-0~ubuntu-bionic) ...
Setting up pigz (2.4-1) ...
Setting up git (1:2.17.1-1ubuntu0.4) ...
Setting up docker-ce (5:19.03.1~3-0~ubuntu-bionic) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Setting up kubelet (1.15.2-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.15.2-00) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.25) ...
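
Before moving on, you may optionally pin these packages so that a later unattended upgrade does not bump the cluster to an incompatible version. This is a precaution, not a required step :

apt-mark hold docker-ce kubelet kubeadm kubectl kubernetes-cni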

Install the master node

The master node is provisioned via the kubeadm command. The --pod-network-cidr=10.244.0.0/16 flag is specified since we will be using the flannel network (more on this further down). Without it, the network would fail to start

kubeadm init --pod-network-cidr=10.244.0.0/16

Once again this command might take a few seconds to run, since it needs to pull some images from the web and also create certificates and keys for our cluster. If your ssh connection to the server is unstable, consider using the screen command to ensure installations don't get interrupted halfway.
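
For example, assuming screen is installed (the session name k8s-setup is arbitrary) :

# Start a named screen session and run the init inside it
screen -S k8s-setup
kubeadm init --pod-network-cidr=10.244.0.0/16
# If the connection drops, log back in and reattach with :
screen -r k8s-setup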

Once the command finishes, it will output instructions on how to access the cluster, along with a secret used to add and authenticate new nodes. If everything went well you should see the following in your terminal

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join aaa.bbb.ccc.ddd:6443 --token 9das2s.fzex2l10dj4udaag \
    --discovery-token-ca-cert-hash sha256:47d9a9f9b8b113c10ed249da7a395491d5e5a79b836ad54c12fcfae4c9ca2b6a 

A “ps -ef | grep kube” will confirm Kubernetes is actually running on our server
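
You can also check the kubelet service itself :

# Confirm the Kubernetes processes are up
ps -ef | grep kube
# Check the state of the kubelet service
systemctl status kubelet --no-pager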

Accessing the cluster from your local machine

In order to access your cluster from your local machine you will need to install kubectl. In this tutorial, we will see how to achieve this on a Mac. I have not personally tried this on Windows, but I suspect your best bet would be to install it via the Windows Subsystem for Linux using apt-get. But I could be completely wrong.

If you are using a Mac, run the command “brew install kubectl” in your terminal.

If you do not have brew installed, well you should not even be here… Joking, you can install it from https://brew.sh/

Since we will be using the kubectl command a lot, the next step is to alias it to “k” by issuing the command “alias k=kubectl” at our terminal prompt. This will save us a lot of typing in the future : every time we need to invoke the kubectl command, we can simply type k. I recommend saving the alias in your .bash_profile or .zshrc.
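
For example, assuming you use zsh (use .bash_profile instead if you are on bash) :

# Persist the alias for future sessions
echo 'alias k=kubectl' >> ~/.zshrc
source ~/.zshrc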

Next we need to set up the config file on our local machine. We can copy it from our server, but since we already blocked root ssh logins from the outside, we need to first make the config available in the home directory of the user we created.

On the server, as root, go to the home directory of the user you created above and type :

cp -p /etc/kubernetes/admin.conf .

Next change the ownership of the file to your user like so :

chown johann:johann admin.conf

Replace johann:johann with the username you created in the first part of this tutorial. Next, on your Mac, scp the file from the server to ~/.kube/config. Beware that this command will overwrite any existing config file located under ~/.kube. However, since you are here, the assumption is that this is your first experience actually working with a Kubernetes cluster.

scp -P {ssh port} {ssh user}@{server ip}:/home/{username}/admin.conf ~/.kube/config

Replace {ssh port}, {ssh user}, {server ip} and {username} with your actual values, and confirm the file exists in your ~/.kube/config by typing “cat ~/.kube/config”. The file should look something like this :

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1EZ3hPREV5TkRBd01Gb1hEVEk1TURneE5URXlOREF3TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQwTndnWFRPUkRDWDY4dmZjRjIxbjVtZ29peTFoV0tXTApxc0hKWmVXaUVBeVo2RmtCa2gyZUxhSFZ4QmxDcWRGVWh5VStESUE0ZXlOd0tsWUdwY1BxdDRoZncwMzJIMkMvCloxMVFocENIQVk1RzA5UnlXakZ6YVhTVkp6N05HaHBSR3M1djZIKzUvSFJ4NVdzOU92YUhKZG5LdHM2Qkh4OWkKTGhlR1A4TEFQUWdmeTQybjAw0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWNNCkpiR2pBYXNSUmsvVFdlZWVSbWtwTWZxWnhwRjN3V2h5UytPQmw4U1RlRDFzUFBBZXp1BxQUpiN2dLY25GSDdtNEpOemtyR0plTTVrUmRlaElTpYmNRZk1lZU9pWFdTckgKZEdsSVVmMWtWaDF5OW5FVlAclEvbTh5cWZ2MU1SK2gxME03R05uRzB4RHNwdUdTVFJyYWlWbTZWUlJrVFNNNwpZOUFHZHIZmU1S0R4MHY3anpKc1BGd2F3NQnVoSEFnCnFtWEd1MWRUankyWVd6eWllbHhpd1N6dnZ6TlRqR3JVeHhKRlZVRTVWbXBnakJlL1EyeTF0TVZSL0V6b0N0QnQKb3ZQRFFMaFk2cXdPL1kza3laTXY1NDFnWmg1WkFNUDdKdVhJbURSMEpGeFdaUnJrT25jMFZLdnB4ejBVSlhMeQpPQmJJTWMyZ29kYVQ4SVlhVzBFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBN01SSTZYN3dKWW1sRjNnTGlSem5PWWZnRUsKUWdERmlGdU5sOFU1THNacGEvOUFxeitPeWZJTUdjcDBUVNLWmhwRS9mbDBERWRlNjNuUTc0ZjA1NVdPVWUzai9TZytWbS9vMzF6VUhtdApIOXhDTk53YWNrdEVmZ2xheHlLcnB0ajVSRVp4UEd3azlGNS9VZ3VJWiswNVlsNUI1WjFLOTBteFZGbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://aaa.bbb.ccc.ddd:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJTlJMMkJwYVZNbTB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RBNE1UZ3hNalF3TURCYUZ3MHlNREE0TVRjeE1qUXdNREphTURReApGekFWQmdOVkJBb1REbk41YzNSwMDRYMGU0OTdkWEZKQWhpaXY2V0dPa2h0N0FZRFNhSS9RVm1Ic2NiSTNRSW9LVXYwWDBRTGwrWUdSCkF1SGpJd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGVEZTSHJJTEt6UjM1K0dMZEJlU055SncycmdEdFBIMW94VQovVFFsdHVTZFBRR0lGVEFBSXk5VGNCczdsZEJnUlg4QmNKdzZuNWhoTTBMb1Y2bWlRV3JzZ3ZtakVXVEE1M1JmCmliTmFiSG5hcGhsTXYyUmdBdE1xWlc0elhVMUbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdQRGF4ZUhoNmN5QnY5djYKVE5XclNqWC8zTGVnaEdEeXdSSGJwNmpmWkZOZGpkeHNRYlB3RE1reTFyWlAzcWMzRVNrRXVIWGVIMm4yRHVITwpSaFc3SXpuV041eEt5ZFNQcjRiQWE0R1FQL0JacmxPSFRNUGpVSXp5Y2srdFZLS0lUcWJHTkl5cWVkVG1lSW1oCk5UamRrRnFxK3JHVDAyV3JSRzJ2V2FrYmRXOVpPTGpGRTJQdUU1a2pStScjAzalhWTTl4N0pQQ0hpYTJPNUMxdFhUcEpaeEdiTWMKaFBlOWpuZU5mQnZPRlJ2T1JM1NJWENMcmM4MCtSdkdvd20vVHBQVTMxVEc0THljK29aWHlCNE1TQis0RUxPZApwdHYzSm4clUFVvOHNtMGF6aUhaZzZUdWJOODNhOGsKemVXZ1BXNmJxWjZKa0JGa1gzOVlrYnczQmxGVi9mdEp4TE90SW90ZU5GVkJ4NXo5cTBVanUyb1hDaFRnVWNPMApkb0RkY3UMrSSs5R3VWNFBKREx6V2ZVd1Z2aGk5a3A1cGxEeVIvUitOa09YTDNsTDBRcVZac3IraDJ6b3hYUjAxCmVXVVpzOVQyelM3SUw3eWVNR2JpeGJCd2E5OHlIVnVEZVl6TjkvMjdlUWF2Z1l1eXIrdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgktLSVRxYkdOSXlxZWRUbWVJbWhOVGpka0ZxcStyR1QwMldyUkcydldha2JkVzlaT0xqRgpFMlB1RTVramVQVW84c20wYXppSFpnNlR1Yk44M2E4a3plV2dQVzZicVo2SmtCRmtYMzlZa2J3M0JsRlYvZnRKCnhMT3RJb3RlTkZWQng1ejlxMFVqdTJvWENoVGdVY08wZG9EZGN1MDA0WDBlNDk3ZFhGSkFoaWl2NldHT2todDcKQVlEU2FJL1FWbUhzY2JJM1FJb0tVdjBYMFFMbCtZRUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBd1BEYXhlSGg2Y3lCdjl2NlROV3JTalgvM0xlZ2hHRHl3UkhicDZqZlpGTmRqZHhzClFiUHdETWt5MXJaUDNxYzNFU2tFdUhYZUgybjJEdUhPUmhXN0l6bldONXhLeWRTUHI0YkFhNEdRUC9CWnJsT0gKVE1QalVJenljayt0V1JBdUhqSXdJREFRQUJBb0lCQUhaSzlZYi9mOWd0aDV1UApEbjVQUG83b1hxLy9jWTNZTnQzQ3lpNllMaWEvcWd0UkNwcVo3T0taOU82SnBweDNYeDdKVEZpZ0E0aTFHYVIwCk0xanE5K3FZQ2t2Y0trcG1aWllUZ3lRbXNyOVE2RnNtWVR6aW1Sc0kzNWpRV3hBWitvSm1ZLzFTQlM5Zk2ZTUwOStYK2hSWGEwcFRSQkVPQ1JsTUNnWUJGRW96akJzN0NlMmtCa2ptelEycFcKTzM1YlE3ckx3aFk1c0RTRUs4NDJHYVExMWNwUWpDUHFyZlplQTFvRGNzT2FZcjRORXNNOWNiTmpkWU9CL0hmbwpHcEl5TGx3ZnhpSUIvc3JVVHdQc3g5eGh4VHFVOUJpWlNpcE1vWDBOc3k2dWhHb0VLUk1rVkNGSG0yT0RybXppCnpYTWc2ZFp1ckNvSFdPTTBTRzVuSVFLQmdRQzhFZGpNOWZ6ZmVSZlY4WXFWRG1JczNiNjhRaVc3V000MlFmU0sKakg3dW1SQlUzTlorWk9uQkdFNWpvbHNmZGhwcTlGam1ZRjJ4N1ltUHhPM1hPWEV3MjJvSitmSG1GRjFMeTFsSwpXNmlEb01Nd09CUpkTGYKWnhSUGdpQ0FSbllmcStlZlhRcytTSlVyUm1lUHhkRCtlZXJNN0ZZYVdIRmYycXVBWTZZTE1NZER1d283cEIraQpWdWlaVzN6MzRnSWRWb29xL0Zjc1k3Nm5ZYmRYeGJVN2h4TEw2QWJveFFLVGQ0aXZWOVNBQkZ4anZDSERDVHRtCk5kWDdBWmhMdkE4TndZRXpkRTA2TThsNDA5L1M5RWF2Wi8wZVJuTktKSzVJYTdDQlRTdCtLY1JkaW1NZkREY24KaTBxdmRHRUNnWUVBMnRXRzFxTWhweEpWalQ2aXlBQ1FkNnRQS2NmajRHRU1jbTNzQVgweUM2R1VJT1RtUTFwcgoyYlZwejNXeEI0SmFHNmEwSXNLeUQ4RFlmS2hMVmdKeTgyNnd1MUtmYkZCUWZmRjA0MVljbkJQUXNqbzRBNHc3CjNYdFVUOWdmUm5nWXhhYU9iTElBTnlMazVQU1VNcDE1SHoxdVpPV3VlZlNzTmpEOU43Eh0ZkVDZ1lFQTRiV0sKWWdkbGxlU0JydUxUR2Y2YjE4dXpBNjN6aVh0SUp4Umdxd1dGSFhDN0lqQ29jYkNIeXBxODJYTWhZd1B5V29jdQpaNkRvNkpOTEVkZzhJMG1mSTRtZFVNMHpZVkaUW13SXpYZmdDeVVDdEhJSW0rWnIyUFA3TUE1U0FsaXhWQ2h4Ck9veHlnb2QvUnZJZmFzN091VVdHRxVzJIaUdReWNoK3E2aFRoVDROTkprRDJ0ai9JOXZUVG5iZHBvOVd1dTJSY01nSlBwClNaY3pvUUtCZ0dDOXNmQUdSb3pEUkEzVnRvc056dEdGNDkyNGlIaVBIMUc4VGF4eEpOVWM1RThDTjB0TkplMTgKVDc0M2psUGRXSDk2V0NWUGl4RmN1bUw4d2p4MkNCZzR3UXV4NWs1akdNRlhlTDhDRFA5bmhlcGg0QVVTZzExKwo0UnJVbnc4MTVRTEVaT0hYeFU4Umh3dXFLbWFBclZ1dFVCa3JrTzVQZ3VxWmxuQjJsUEZMCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Also copy the configuration file for the user we created on our server, by running the following commands as that user :

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point you can run the kubectl command on both the server directly and your machine.

Test everything is ok so far

To test that we can talk to our cluster from our machine, we issue the command “kubectl cluster-info” (or “k cluster-info”, since we set the alias earlier on).

kubectl cluster-info

The command should return something like this :

Kubernetes master is running at https://aaa.bbb.ccc.ddd:6443
KubeDNS is running at https://aaa.bbb.ccc.ddd:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can of course go ahead and issue kubectl cluster-info dump (or k cluster-info dump) to get a very detailed view of our cluster's setup.

Another useful command to test connectivity is “kubectl get services”, which returns info relevant to our cluster, including the age of the cluster itself.


NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3m

Running on a budget

Once we have verified that we can access the cluster from our machine, and since we are running on a budget, we need to allow the master to be scheduled as a “worker” node, so that pods can run on it too. This is not something we would ever do in a production-grade environment, but the scope of this tutorial is to familiarise ourselves with the setup process of a cluster as cheaply as possible.

On your Mac issue the following command

kubectl taint nodes --all node-role.kubernetes.io/master-

This basically tells the cluster to remove the master taint from the master node, thus enabling it to be scheduled like a normal node.
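
To verify that the taint is gone, describe the node and check its taints (replace {node name} with your actual node name) :

kubectl describe node {node name} | grep -i taints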

Adding a Network Provider

Back on the server we now need to set net.bridge.bridge-nf-call-iptables to 1. This is required so that bridged packets will traverse iptables rules. Run the command (as root) :

sysctl net.bridge.bridge-nf-call-iptables=1

To make this setting permanent, simply add net.bridge.bridge-nf-call-iptables=1 to /etc/sysctl.conf
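
For example :

# Persist the setting and reload the sysctl values
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
sysctl -p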

Next we need to set up networking so that the pods can talk to each other. There are several ways to do this, but the easiest I found was to use the “flannel” network. I will not go into how this works, but if you are interested there are plenty of very good write-ups on the web, including comparisons of the various networking options and posts dedicated to the flannel network.

Like all other things relating to Kubernetes, we will apply the network settings using a yaml file. Feel free to inspect the structure of the file at the URL in the command below. Issue the command :


kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Wait a couple of minutes and then issue the command kubectl get pods --all-namespaces. Things should look something like this :

root@34a4:~# kubectl get po --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-5pk27                       1/1     Running   0          6m20s
kube-system   coredns-5c98db65d4-x2dfm                       1/1     Running   0          6m20s
kube-system   etcd-34a4.k.yourprovider.cloud                 1/1     Running   0          5m43s
kube-system   kube-apiserver-34a4.nnnn.cloud                 1/1     Running   0          5m43s
kube-system   kube-controller-manager-34a4.nnnn.cloud        1/1     Running   0          5m35s
kube-system   kube-flannel-ds-amd64-n8tbb                    1/1     Running   0          99s
kube-system   kube-proxy-pf4bh                               1/1     Running   0          6m20s
kube-system   kube-scheduler-34a4.nnnn.cloud                 1/1     Running   0          5m17s

Note : The kubectl get po --all-namespaces command should run on both the server and your local machine with the same result. If it only runs on the server and not on your local machine, you may follow the troubleshooting guide at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/

Installing the Kubernetes Dashboard

Once the container network is running, it is time to install the Kubernetes Dashboard. Once again we do this through a ready-made yaml file, found at the URL in the command below. Run the command

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml

The above link may change from time to time, so if it does not work, please refer to https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ for the latest file

Accessing the Dashboard

In order to access the dashboard on our local machine, we use the kubectl proxy command; more info about the magic going on behind the scenes can be found in the official kubectl documentation. Run the command in your terminal.

kubectl proxy

You should then see

Starting to serve on 127.0.0.1:8001

If you have issues with the default port (8001), you can use the command kubectl proxy --port=xxxx to specify any valid port number. You can also use kubectl proxy --help for more options

Access the dashboard (assuming you are using port 8001) at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

If all is working well you should be greeted with the Kubernetes dashboard Sign in page.

Next we need to create an admin user, and a role binding for that user. Create two yaml files called admin-user.yaml and clusterRoleBinding.yaml on your machine

In the admin-user.yaml paste the following :

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

In the clusterRoleBinding.yaml paste this :

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Next issue the following commands :

# This will create an admin-user
kubectl create -f admin-user.yaml
# This will add a cluster role binding to the user we just created
kubectl create -f clusterRoleBinding.yaml

The response from the commands should be :

serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Next we need to generate the bearer token to be able to log in to the dashboard. There are other ways and means to log in to the dashboard, but for the sake of simplicity we will generate and use the token as follows :

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

The output from the command should look something like this :

Name:         admin-user-token-4kqwp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 63776a02-015f-3dde-7ab3-38755dc9f17f

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IzZXJ2aWiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9NlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLThrCJzdWIiOiJzecXduIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MzY2N2EwNC0wMDVkLTRkZGUtOWNiMy0zOGRjOTc1NWYxN2YiLXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.ZjFsPWWFktRfmouEIBY5cjvjz_wDdVyTUrsfkx6GMDdmgxM259516iXllzlkR1WdjMo0DsIj5SAong__5cfLcGie0nmuI7Dh0vbHWbIoxIVpD2w81LNpIH_PsYdcRL7FM1mZtMPeM5xq2AZ8AiwOp-nO_bsSVjjnH4HBS4hGqG9vca4hhGUUzIJt5_DMv8BYShavnbhgzcse875rsuK1zVRCylmg0e3NUMzb3BL6YLWwC8umYtpJBH2Sfy9j_MEhhQuokJJTe9XzLOrEXnmtS0GpwVJRQbRFeSVcPnMSsUaDfHcKLpZzwQ00XK-hST3cfYWTDEJ4gVzZ0uu_UWlaDQ
ca.crt:     1025 bytes

You can copy and paste the token directly into the Kubernetes dashboard; just make sure the Token radio button is checked before you paste the token, and then hit the Sign in button. If all goes well you should see the Kubernetes dashboard.

Note: you can run kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') any time if you lose the access token.

Installing the Heapster Metrics

A very useful set of graphs showing the current cluster resource usage is provided by the heapster metrics. These can be installed by issuing the following commands :

git clone https://github.com/kubernetes/heapster.git
cd heapster/deploy/kube-config/influxdb
kubectl create -f influxdb.yaml
kubectl create -f heapster.yaml
cd ../rbac
kubectl create -f heapster-rbac.yaml

Next restart the dashboard pod by killing it. Issue the command kubectl get pods --all-namespaces and locate the dashboard pod

NAMESPACE              NAME                                           READY   STATUS    RESTARTS   AGE
kube-system            coredns-5c98db65d4-5pk27                       1/1     Running   0          38m
kube-system            coredns-5c98db65d4-x2dfm                       1/1     Running   0          38m
kube-system            etcd-34a4.k.hostens.cloud                      1/1     Running   0          37m
kube-system            heapster-598cfcfd59-l8gng                      1/1     Running   0          4m52s
kube-system            kube-apiserver-34a4.k.hostens.cloud            1/1     Running   0          37m
kube-system            kube-controller-manager-34a4.k.hostens.cloud   1/1     Running   0          37m
kube-system            kube-flannel-ds-amd64-n8tbb                    1/1     Running   0          33m
kube-system            kube-proxy-pf4bh                               1/1     Running   0          38m
kube-system            kube-scheduler-34a4.k.hostens.cloud            1/1     Running   0          37m
kube-system            monitoring-influxdb-68b6989bb9-gbgfh           1/1     Running   0          4m53s
kubernetes-dashboard   kubernetes-dashboard-5c8f9556c4-f5hjt          1/1     Running   0          27m
kubernetes-dashboard   kubernetes-metrics-scraper-86456cdd8f-74zjq    1/1     Running   0          27m

In this case the pod name is kubernetes-dashboard-5c8f9556c4-f5hjt. The namespace is kubernetes-dashboard

kubectl delete pod -n kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f5hjt

Wait a few seconds and issue the command kubectl get pods --all-namespaces

NAMESPACE              NAME                                           READY   STATUS    RESTARTS   AGE
kube-system            coredns-5c98db65d4-5pk27                       1/1     Running   0          46m
kube-system            coredns-5c98db65d4-x2dfm                       1/1     Running   0          46m
kube-system            etcd-34a4.k.hostens.cloud                      1/1     Running   0          45m
kube-system            heapster-598cfcfd59-l8gng                      1/1     Running   0          12m
kube-system            kube-apiserver-34a4.k.hostens.cloud            1/1     Running   0          45m
kube-system            kube-controller-manager-34a4.k.hostens.cloud   1/1     Running   0          45m
kube-system            kube-flannel-ds-amd64-n8tbb                    1/1     Running   0          41m
kube-system            kube-proxy-pf4bh                               1/1     Running   0          46m
kube-system            kube-scheduler-34a4.k.hostens.cloud            1/1     Running   0          45m
kube-system            monitoring-influxdb-68b6989bb9-gbgfh           1/1     Running   0          12m
kubernetes-dashboard   kubernetes-dashboard-5c8f9556c4-wtzgd          1/1     Running   0          53s
kubernetes-dashboard   kubernetes-metrics-scraper-86456cdd8f-74zjq    1/1     Running   0          35m

As you can see, a new pod called kubernetes-dashboard-5c8f9556c4-wtzgd has been spun up to replace the deleted pod. Refresh the dashboard and log in again using the token. The metrics should now be available.

Adding a node to the existing cluster

To add a node to the cluster, you will need the token generated during the master node installation. The command syntax is as follows :

kubeadm join {master ip}:6443 --token {token generated during master node install} \
    --discovery-token-ca-cert-hash sha256:{hash from the master node install output}
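
If the original token has expired (by default tokens are only valid for 24 hours), you can generate a fresh join command on the master :

kubeadm token create --print-join-command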

Destroying the cluster and burning it to the ground

WARNING : Before proceeding any further, make sure you are on the right context, especially if you have access and config files for various production clusters on your machine.

# Step 1.  Drain the node by running:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
# Step 2. On the node being removed, reset all kubeadm installed state:
kubeadm reset
# If you wish to start over simply run kubeadm init or kubeadm join with the appropriate arguments.
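
Note that kubeadm reset does not flush the iptables rules created for the cluster. If you want a truly clean slate you can clear them manually, but beware that this wipes ALL iptables rules, including any ssh restrictions you added earlier :

# Flush all iptables rules left behind by the cluster
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X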

And that's it. You now have a “fully working” Kubernetes cluster playground to experiment with, on a shoestring budget.

In part 2 (coming soon) we will take a look at how to install the Traefik ingress controller and set up persistent volumes, which will allow us to make something useful with our cluster.

Have Fun !!

Other useful links

https://kubernetes.io/docs/reference/kubectl/cheatsheet/
