How to Install Kubernetes on Ubuntu 18?

Kubernetes is an open-source container orchestration tool originally developed by Google. In this article, you will learn how to set up a Kubernetes cluster with one master node and one worker node. Make sure Docker is installed on both the master and the worker node before you begin.

Environment Details and Setup

For this demonstration, I am using two Ubuntu systems: one will be the master node and the other the worker node. Both servers have the following configuration.

  • 2 CPUs
  • Master – 4 GB RAM / Worker – 2 GB RAM
  • 10 GB hard disk

Use the hostnamectl command to set the hostname on both systems.


On the Master Node:

datamounts@datamounts:~$ sudo hostnamectl set-hostname kubernetes-master

On Worker Node:

datamounts@datamounts:~$ sudo hostnamectl set-hostname kubernetes-worker
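
You can optionally confirm the change with hostnamectl (the new hostname may only appear in your shell prompt after you start a new session):

datamounts@datamounts:~$ hostnamectl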

Below are the details of both nodes.

Master Node

  • Hostname: kubernetes-master
  • IP Address: 192.168.1.221

Worker Node

  • Hostname: kubernetes-worker
  • IP Address: 192.168.1.222

Edit the /etc/hosts file on both systems and add entries for both nodes.

datamounts@datamounts:~$ sudo gedit /etc/hosts

192.168.1.221 kubernetes-master
192.168.1.222 kubernetes-worker
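
Optionally, you can check that the two nodes can reach each other by hostname before moving on (a quick sanity check, not part of the original steps):

datamounts@kubernetes-master:~$ ping -c 2 kubernetes-worker
datamounts@kubernetes-worker:~$ ping -c 2 kubernetes-master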

Before you start installing Kubernetes, run the command below on both the master and worker nodes to confirm that Docker is up and running.

datamounts@datamounts:~$ sudo service docker status
[sudo] password for datamounts:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2019-11-23 15:39:36 EST; 3 weeks 0 days ago
Docs: https://docs.docker.com
Main PID: 8840 (dockerd)
Tasks: 17
Memory: 42.3M
CGroup: /system.slice/docker.service
└─8840 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Nov 23 15:39:35 datamounts dockerd[8840]: time="2019-11-23T15:39:35.091941184-05:00" level=warning msg="Your kernel does not support cgrou
Nov 23 15:39:35 datamounts dockerd[8840]: time="2019-11-23T15:39:35.093149218-05:00" level=info msg="Loading containers: start."
Nov 23 15:39:35 datamounts dockerd[8840]: time="2019-11-23T15:39:35.957842188-05:00" level=info msg="Default bridge (docker0) is assigned
Nov 23 15:39:36 datamounts dockerd[8840]: time="2019-11-23T15:39:36.078753190-05:00" level=info msg="Loading containers: done."
Nov 23 15:39:36 datamounts dockerd[8840]: time="2019-11-23T15:39:36.664727326-05:00" level=info msg="Docker daemon" commit=481bc77 graphdr
Nov 23 15:39:36 datamounts dockerd[8840]: time="2019-11-23T15:39:36.817929464-05:00" level=error msg="cluster exited with error: error whi
Nov 23 15:39:36 datamounts dockerd[8840]: time="2019-11-23T15:39:36.820439024-05:00" level=error msg="swarm component could not be started
Nov 23 15:39:36 datamounts dockerd[8840]: time="2019-11-23T15:39:36.820821712-05:00" level=info msg="Daemon has completed initialization"
Nov 23 15:39:36 datamounts systemd[1]: Started Docker Application Container Engine.
Nov 23 15:39:36 datamounts dockerd[8840]: time="2019-11-23T15:39:36.883382952-05:00" level=info msg="API listen on /home/datamounts/docker.sock
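
If Docker is not installed yet, one minimal way to get it (assuming the docker.io package from the Ubuntu repositories is acceptable for your setup) is:

sudo apt-get update
sudo apt-get install docker.io -y
sudo systemctl enable --now docker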

 

Install Kubernetes

Run all the commands mentioned in this section on both master and worker nodes.

Firstly, add the Kubernetes package repository key.

datamounts@kubernetes-master:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
[sudo] password for datamounts:
OK

 


Run the command below to configure the Kubernetes package repository.

datamounts@kubernetes-master:~$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Hit:1 https://download.docker.com/linux/ubuntu bionic InRelease
Hit:2 http://ppa.launchpad.net/ansible/ansible/ubuntu cosmic InRelease
Get:3 http://apt.puppetlabs.com bionic InRelease [85.3 kB]
Hit:5 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:6 http://us.archive.ubuntu.com/ubuntu cosmic InRelease
Ign:7 http://pkg.jenkins.io/debian-stable binary/ InRelease
Hit:8 http://us.archive.ubuntu.com/ubuntu cosmic-updates InRelease
Hit:9 http://pkg.jenkins.io/debian-stable binary/ Release
Hit:10 http://us.archive.ubuntu.com/ubuntu cosmic-backports InRelease
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Get:11 http://apt.puppetlabs.com bionic/puppet6 amd64 Packages [36.1 kB]
Get:13 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [32.2 kB]
Fetched 163 kB in 3s (49.1 kB/s)
Reading package lists... Done

Before proceeding, disable swap on both nodes; Kubernetes requires swap to be turned off, and the kubelet will refuse to start with swap enabled by default.

datamounts@kubernetes-master:~$ sudo swapoff -a
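
Note that swapoff -a only disables swap until the next reboot. To keep swap disabled permanently, you can also comment out the swap entry in /etc/fstab, for example with the sed one-liner below (adjust it if your fstab is laid out differently):

datamounts@kubernetes-master:~$ sudo sed -i '/ swap / s/^/#/' /etc/fstab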

 

Install Kubeadm

Now you need to install kubeadm.

kubeadm is the Kubernetes tool used to bootstrap a cluster: it initializes the control plane on the master node and joins worker nodes to the cluster. Installing the kubeadm package also pulls in kubelet and kubectl as dependencies.

datamounts@kubernetes-master:~$ sudo apt-get install kubeadm -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
conntrack cri-tools ebtables ethtool kubectl kubelet kubernetes-cni socat
The following NEW packages will be installed:
conntrack cri-tools ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 9 newly installed, 0 to remove and 235 not upgraded.
Need to get 51.8 MB of archives.
After this operation, 273 MB of additional disk space will be used.
Get:3 http://us.archive.ubuntu.com/ubuntu cosmic/main amd64 conntrack amd64 1:1.4.5-1 [30.2 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu cosmic/main amd64 ebtables amd64 2.0.10.4-3.5ubuntu5 [79.8 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu cosmic/main amd64 ethtool amd64 1:4.16-1 [115 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu cosmic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.17.0-00 [8,742 kB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 51.8 MB in 8s (6,419 kB/s)
Selecting previously unselected package conntrack.
(Reading database ... 318151 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.5-1_amd64.deb ...
Unpacking conntrack (1:1.4.5-1) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.10.4-3.5ubuntu5_amd64.deb ...
Unpacking ebtables (2.0.10.4-3.5ubuntu5) ...
Selecting previously unselected package ethtool.
Preparing to unpack .../3-ethtool_1%3a4.16-1_amd64.deb ...
Unpacking ethtool (1:4.16-1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../4-kubernetes-cni_0.7.5-00_amd64.deb ...
Unpacking kubernetes-cni (0.7.5-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../5-socat_1.7.3.2-2ubuntu2_amd64.deb ...
Unpacking socat (1.7.3.2-2ubuntu2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../6-kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../7-kubectl_1.17.0-00_amd64.deb ...
Unpacking kubectl (1.17.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../8-kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) ...
Setting up conntrack (1:1.4.5-1) ...
Setting up kubernetes-cni (0.7.5-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up socat (1.7.3.2-2ubuntu2) ...
Processing triggers for systemd (239-7ubuntu10.12) ...
Setting up ebtables (2.0.10.4-3.5ubuntu5) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ebtables.service → /lib/systemd/system/ebtables.service.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up kubectl (1.17.0-00) ...
Processing triggers for man-db (2.8.4-2) ...
Setting up ethtool (1:4.16-1) ...
Setting up kubelet (1.17.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.17.0-00) ...
Processing triggers for systemd (239-7ubuntu10.12) ...
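
Optionally, you can hold the Kubernetes packages at their installed versions so a routine apt upgrade does not unexpectedly move the cluster to a newer release:

datamounts@kubernetes-master:~$ sudo apt-mark hold kubelet kubeadm kubectl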

Check the kubeadm version to verify that it was installed correctly.

datamounts@kubernetes-master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:17:50Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

Initialize Kubernetes Cluster

Now, run the init command to initialize the Kubernetes cluster on the master node only. The --apiserver-advertise-address flag sets the IP address the API server advertises, i.e. the master's address that worker nodes will use to reach the cluster, and --pod-network-cidr defines the pod network range, which must match the range expected by the pod network add-on (10.244.0.0/16 for Flannel).

datamounts@kubernetes-master:~$ sudo kubeadm init --apiserver-advertise-address=192.168.1.221 --pod-network-cidr=10.244.0.0/16
W1217 11:05:15.474854 10193 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1217 11:05:15.474935 10193 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.221]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.1.221 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.1.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1217 11:05:25.584769 10193 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1217 11:05:25.587128 10193 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.010368 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: dmamk9.0nmo62mhom8961qw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, you need to deploy a pod network on the cluster.

Run kubectl apply -f [podnetwork].yaml with one of the options listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.221:6443 --token dmamk9.0nmo62mhom8961qw --discovery-token-ca-cert-hash sha256:2de92f42e84d2020d8b19b1778785df5f8196e5eedaa5664ad911e8c23f58963
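
Copy this join command somewhere safe. If you lose it, or if the bootstrap token expires (tokens are valid for 24 hours by default), you can print a fresh join command on the master node at any time:

datamounts@kubernetes-master:~$ sudo kubeadm token create --print-join-command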

As mentioned in the output above, create the .kube directory in your home directory and copy /etc/kubernetes/admin.conf to .kube/config so that kubectl can talk to the cluster as a regular user.

datamounts@kubernetes-master:~$ mkdir -p $HOME/.kube
datamounts@kubernetes-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
datamounts@kubernetes-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
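
As a quick check that kubectl is now configured for your regular user, you can optionally run:

datamounts@kubernetes-master:~$ kubectl cluster-info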

At this point, when you run the kubectl get nodes command, you will see that the status of the master node is NotReady.

datamounts@kubernetes-master:~$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 2m34s v1.17.0
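
This is expected: the node stays NotReady until a pod network add-on is deployed. If you are curious, describing the node will show a network-related condition as the reason:

datamounts@kubernetes-master:~$ kubectl describe node kubernetes-master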

Deploy Pod Network – Flannel

Next, you need to deploy a pod network add-on from the master node. I am using the Flannel pod network, an overlay network that lets pods on different nodes in the Kubernetes cluster communicate with each other.

datamounts@kubernetes-master:~$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check the status of the master node again; it should now be in the Ready state.

datamounts@kubernetes-master:~$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 4m41s v1.17.0

After a few seconds, check if all the pods are up and running.

datamounts@kubernetes-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-rzw9d 1/1 Running 0 4m17s
kube-system coredns-6955765f44-xvgdp 1/1 Running 0 4m17s
kube-system etcd-kubernetes-master 1/1 Running 0 4m27s
kube-system kube-apiserver-kubernetes-master 1/1 Running 0 4m27s
kube-system kube-controller-manager-kubernetes-master 1/1 Running 0 4m27s
kube-system kube-flannel-ds-amd64-c2rf5 1/1 Running 0 81s
kube-system kube-proxy-mvdd7 1/1 Running 0 4m17s
kube-system kube-scheduler-kubernetes-master 1/1 Running 0 4m27s

Add Worker Node to the Cluster

Now that your master node is properly configured and running, it's time to add the worker node. To do that, run the join command that kubeadm init printed earlier on the worker node.

Run the command below on the worker node to join it to the cluster.

datamounts@kubernetes-worker:~$ sudo kubeadm join 192.168.1.221:6443 --token dmamk9.0nmo62mhom8961qw --discovery-token-ca-cert-hash sha256:2de92f42e84d2020d8b19b1778785df5f8196e5eedaa5664ad911e8c23f58963
[sudo] password for datamounts:
W1217 11:08:01.066191 28968 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the Master Node:

You will see a couple more pods running now that the worker node has joined the cluster: an additional kube-proxy and kube-flannel pod for the new node.

datamounts@kubernetes-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-9c7jc 1/1 Running 0 5m3s
kube-system coredns-6955765f44-c9s9r 1/1 Running 0 5m3s
kube-system etcd-kubernetes-master 1/1 Running 0 5m12s
kube-system kube-apiserver-kubernetes-master 1/1 Running 0 5m12s
kube-system kube-controller-manager-kubernetes-master 1/1 Running 0 5m13s
kube-system kube-flannel-ds-amd64-lgr62 1/1 Running 0 3m35s
kube-system kube-flannel-ds-amd64-n6vwm 1/1 Running 0 27s
kube-system kube-proxy-9mqp6 1/1 Running 0 27s
kube-system kube-proxy-kwkz2 1/1 Running 0 5m3s
kube-system kube-scheduler-kubernetes-master 1/1 Running 0 5m13s

Now, run kubectl get nodes again on the master node to confirm that the worker node has joined the cluster and is in the Ready state.

datamounts@kubernetes-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 5m27s v1.17.0
kubernetes-worker Ready <none> 31s v1.17.0
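
The <none> under ROLES for the worker node is normal; kubeadm only labels control-plane nodes. If you would like the worker node to display a role, you can optionally label it from the master node (purely cosmetic):

datamounts@kubernetes-master:~$ kubectl label node kubernetes-worker node-role.kubernetes.io/worker=worker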

Conclusion

Now that the Kubernetes setup is ready, you can start orchestrating containers on the Kubernetes cluster.
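
As an optional smoke test, you can deploy a simple nginx Deployment and expose it through a NodePort service to confirm that the cluster schedules pods and serves traffic:

datamounts@kubernetes-master:~$ kubectl create deployment nginx --image=nginx
datamounts@kubernetes-master:~$ kubectl expose deployment nginx --port=80 --type=NodePort
datamounts@kubernetes-master:~$ kubectl get pods,svc -o wide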
