Saturday, July 25, 2020

Kubernetes (Minikube) set up on your PC

1. Download kubectl, minikube, and minishift, and store them at the following location
C:\Program Files\Kubernetes\minikube

2. Add this path to environment variable
Start -> type "env" -> click "Edit the system environment variables",
then click "Environment Variables" and double-click PATH in the System Variables section.
Click New and paste your path.

3. Open your command prompt and type minikube.
If you get output rather than an error message, it is set up correctly. If you get an error, fix it.
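The "is it on my PATH" check above can be scripted; here is a minimal sketch using a hypothetical helper name (check_cmd), which works in any POSIX shell:

```shell
# Hypothetical helper: returns 0 if the named command resolves on PATH.
check_cmd() {
  command -v "$1" >/dev/null 2>&1
}

if check_cmd minikube; then
  echo "minikube found on PATH"
else
  echo "minikube NOT found - recheck the PATH entry from step 2"
fi
```

On Windows, `where minikube` in the command prompt does the same job.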

4. Download and install VirtualBox from

5. Open your DOS prompt and run the command below:
> minikube start --driver=virtualbox

This command downloads all the packages required to install and set up Kubernetes.

6. I alias the kubectl command to kc so I have a shortcut.

7. Verify the configuration
C:\Users\Admin>cd .kube
C:\Users\Admin\.kube>notepad config

C:\Users\Admin\.kube>kc cluster-info
Kubernetes master is running at
KubeDNS is running at

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

C:\Users\Admin>kc get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP    <none>        443/TCP   42m

8. Install following tools
- Visual Studio Code
- Node.js
- git
- OpenShift CLI (oc)

Reference Documents

Installing HPE Foundation Software on an HPE Superdome Flex server

A. Console login to HPE Superdome Flex server
$ ssh administrator@superdome-rmc

RMC cli>

a. View partition configuration
RMC cli> show npar
RMC cli> show logs error

If there are no errors, go ahead and power on the system if it is powered off
RMC cli> power on npar pnum=0

Connect to the system; once connected, you will be at the OS login prompt
RMC cli> connect npar pnum=0

B. Installing HPE Foundation Software on an HPE Superdome Flex server

1. Download the ISO image from
2. Mount it
# mount -o loop hpe-foundation-2.3.1-cd1....iso /mnt/foundation

3. Create repo
# vi /etc/yum.repos.d/hpe-foun.repo
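The repo file the vi command creates looks roughly like the sketch below. The section name and baseurl are assumptions (pointing at the ISO mount point from step 2); the file is written to a temp directory here so the sketch can run without touching /etc/yum.repos.d:

```shell
# Sketch of a local-ISO repo definition; section name and baseurl are
# assumptions, written to a scratch dir instead of /etc/yum.repos.d.
repo_dir=$(mktemp -d)
cat > "$repo_dir/hpe-foun.repo" <<'EOF'
[hpe-foundation-2.3.1]
name=HPE Foundation Software 2.3.1 (local ISO)
baseurl=file:///mnt/foundation
enabled=1
gpgcheck=0
EOF
grep '^baseurl=' "$repo_dir/hpe-foun.repo"
```

On the real system, drop the same content into /etc/yum.repos.d/hpe-foun.repo and run yum clean all as below.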

# yum clean all

4. Install/Update HPE Foundation software
# yum groupinstall "HPE Foundation Software"
# yum update "HPE Foundation Software"

Install/update DCD
For a new install, the command above installs DCD and its dependencies automatically.
For an update:
# yum update storelib
# yum update hpe-dcd

Verify that the new installation completed
# rpm -qi hpe-dcd | grep -i version
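The Version field can also be pulled out of the rpm -qi output with awk; the sample text below stands in for real output so the sketch runs anywhere (the version string shown is illustrative):

```shell
# Extract the Version field from rpm -qi style output.
# The sample here is illustrative, not real hpe-dcd output.
sample='Name        : hpe-dcd
Version     : 2.3.1
Release     : 1'
dcd_version=$(printf '%s\n' "$sample" | awk -F': *' '/^Version/ {print $2}')
echo "$dcd_version"
```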

Reboot the machine
# reboot

Once the system reaches the EFI shell, type the following at the RMC command window (you may not have to type it):
> power reset npar pnum=0

C. Troubleshooting
After the reboot, the system could not load some of the modules and two services did not come up

# systemctl list-units --state=failed

Rebooted the system again, but it still had the problem. It wiped out the /etc/resolv.conf file. The NetworkManager service was running, but when I tried to restart it, it failed.

The kABI-compatible weak modules hwperf and numatools did not appear under the /lib/modules directory

Reinstall these two packages
# yum --disablerepo=* --enablerepo=hpe-foundation.2.3.1 reinstall kmod-hwperf.ko
# yum --disablerepo=* --enablerepo=hpe-foundation.2.3.1 reinstall kmod-numatools.ko

# cd /lib/modules
# find . -name hwperf.ko
# find . -name numatools

After reboot, everything is good.

Kubernetes setup on a Linux machine ...

1. Installing Docker engine

1. Build a Red Hat OS system in your VirtualBox

Attach ISO image and mount it
# mkdir /cdrom /opt/OS_Image; mount /dev/cdrom /cdrom
# cd /cdrom; cp -a * /opt/OS_Image

To make the cdrom mount automatically
# vi /etc/rc.d/rc.local
touch /var/lock/subsys/local
# Add the line below
mount /dev/cdrom /cdrom


# chmod +x /etc/rc.d/rc.local
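The rc.local edit can be rehearsed against a temp copy, which is what this sketch does (on the real system the file is /etc/rc.d/rc.local):

```shell
# Sketch of the rc.local edit, done on a scratch file so it is safe to run
# outside the VM. On the node itself, edit /etc/rc.d/rc.local instead.
rc_local=$(mktemp)
cat > "$rc_local" <<'EOF'
#!/bin/sh
touch /var/lock/subsys/local
# Mount the install media at every boot
mount /dev/cdrom /cdrom
EOF
chmod +x "$rc_local"
```

The chmod +x is required; systemd only runs rc.local at boot when the file is executable.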

Set up repo
[root@control yum.repos.d]# cat local.repo
name=Centos8 repo AppStream

name=Centos8 repo

name=docker repo
[root@control yum.repos.d]#

[root@control yum.repos.d]# yum install docker-ce --nobest

Set up repo for Kubernetes

Google for installing kubeadm.

Read through the page since it contains very important information. Go to the section where you find the yum repo setup.

Copy the code and set up the repo.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

-> Verify SELinux is set to permissive
# cat /etc/sysconfig/selinux or /etc/selinux/config

Install iproute-tc
[root@control yum.repos.d]# yum install iproute-tc

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system

Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly, call modprobe br_netfilter.
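The check-then-load logic can be wrapped in a small helper (ensure_module is a hypothetical name). modprobe needs root and a real kernel, so this sketch only prints what it would do when the module is missing:

```shell
# Hypothetical helper: report whether a kernel module is loaded and,
# if not, print the modprobe command that would load it.
ensure_module() {
  mod=$1
  if lsmod 2>/dev/null | grep -q "^${mod}"; then
    echo "$mod already loaded"
  else
    echo "would run: modprobe $mod"
  fi
}
ensure_module br_netfilter
```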

Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
systemctl enable docker
[root@control yum.repos.d]# systemctl start docker
[root@control yum.repos.d]# systemctl status docker

[root@control yum.repos.d]# docker info
 Cgroup Driver: cgroupfs

Configure cgroup driver used by kubelet on control-plane node

[root@control yum.repos.d]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@control yum.repos.d]# systemctl restart docker
[root@control yum.repos.d]# docker info  # shows the output...

Cgroup Driver: systemd

Disable swap

[root@control yum.repos.d]# vi /etc/fstab
#/dev/mapper/cl_control-swap swap                    swap    defaults        0 0
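Commenting out the swap line can be done with sed instead of vi; shown here on a scratch copy of fstab so the sketch is safe to run (on the node you would also run swapoff -a for the current session):

```shell
# Comment out any fstab line whose mount point is swap, demonstrated on a
# scratch copy rather than the real /etc/fstab.
fstab=$(mktemp)
printf '/dev/mapper/cl_control-root / xfs defaults 0 0\n/dev/mapper/cl_control-swap swap swap defaults 0 0\n' > "$fstab"
sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' "$fstab"
cat "$fstab"
```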

[root@control yum.repos.d]# systemctl start kubelet
[root@control yum.repos.d]# systemctl enable kubelet
[root@control yum.repos.d]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor pre>
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
   Active: activating (auto-restart) (Result: exit-code) since Wed 2020-07-22 1>
  Process: 18237 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_C>
 Main PID: 18237 (code=exited, status=255)

Now, we are going to make this OS a base image, called the GOLD image. Based on this we will create other OS instances for the control and worker nodes.

Now, Shutdown your VM
[root@control yum.repos.d]# init 0

Go to your VirtualBox, right-click on your VM, and click Clone

Under MAC Address Policy, select "Generate new MAC addresses"

Keep the original machine intact and create three clone machines
- Control or master node
- 2 worker nodes

Set the hostname and IP address. Add entry to dns or hosts file
# hostnamectl set-hostname master; exec bash

# cat /etc/hosts    master    worker1    worker2

Make sure they can communicate with each other
# for i in master worker1 worker2; do  ping -c 2 $i; done

Now, set up your Kubernetes master, also called the control plane.
Specify the network information.
Run kubeadm -h for help:
# kubeadm -h
Read all the output and pick the best option.

Here, we'd like to set up a Kubernetes cluster, so we will pick the init option. Let's go ahead and get help on this as well.

# kubeadm init -h
Looking at the options for network info, we will pick:
--pod-network-cidr string              Specify range of IP addresses

# kubeadm init --pod-network-cidr=

I got an error that my Docker engine was not started and firewalld was enabled, so I want to disable the firewall and start Docker.

[root@master ~]# systemctl enable docker
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl status docker
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld

I got another error complaining about the CPU count.

[root@master ~]# kubeadm init --pod-network-cidr=
W0722 16:25:45.888720    6178 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups []
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
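The failed preflight check is just a CPU count; it can be reproduced with nproc before running kubeadm init, as in this sketch:

```shell
# Reproduce kubeadm's NumCPU preflight check: it requires at least 2 CPUs.
cpus=$(nproc)
if [ "$cpus" -ge 2 ]; then
  echo "CPU count OK: $cpus"
else
  echo "only $cpus CPU(s) - add another vCPU in VirtualBox before kubeadm init"
fi
```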

I have to shut down the master and add one more CPU.

[root@master ~]# docker ps

Now, let's try again:

[root@master ~]# kubeadm init --pod-network-cidr
W0722 16:30:58.200764    1654 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups []
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join --token 6daorv.o0hdosnxi40z08h1 \
    --discovery-token-ca-cert-hash sha256:61d3f94370095d8a04e155a133383c57b3e221150d369c575dfdb2e3c78de08f
[root@master ~]#

Review the output and complete the following,
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#
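The three kubeconfig commands above can be rehearsed in a throwaway HOME with a dummy admin.conf, as in this sketch, so the sequence can be exercised without a real cluster:

```shell
# Rehearse the post-init kubeconfig copy using scratch dirs and a dummy
# admin.conf; on the master the source is /etc/kubernetes/admin.conf.
fake_home=$(mktemp -d)
fake_etc=$(mktemp -d)
echo 'apiVersion: v1' > "$fake_etc/admin.conf"
mkdir -p "$fake_home/.kube"
cp -i "$fake_etc/admin.conf" "$fake_home/.kube/config" </dev/null
```

The chown step from the real output matters when you run kubectl as a non-root user; here we skip it since the scratch files are already ours.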

Add alias to profile
[root@master ~]# vi .bashrc
alias kc=kubectl
[root@master ~]# . ./.bashrc

[root@master ~]# systemctl status kubelet
[root@master ~]# docker images
[root@master ~]# docker ps

[root@master ~]# kc get pods
No resources found in default namespace.
[root@master ~]# kc get nodes
master   NotReady   master   12m   v1.18.6
[root@master ~]# kc get ns
[root@master ~]# kc create namespace myspace
namespace/myspace created

[root@master ~]# kc get ns
NAME              STATUS   AGE
default           Active   17m
kube-node-lease   Active   17m
kube-public       Active   17m
kube-system       Active   17m
myspace           Active   19s
[root@master ~]# kc run testpod --image=httpd -n myspace
pod/testpod created
[root@master ~]# kc get pods
No resources found in default namespace.
[root@master ~]# kc get pods -n myspace
testpod   0/1     Pending   0          39s
[root@master ~]# kc get all -n myspace
pod/testpod   0/1     Pending   0          72s

[root@master ~]# kc get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-66t78         0/1     Pending   0          19m
coredns-66bff467f8-sllrx         0/1     Pending   0          19m
etcd-master                      1/1     Running   0          19m
kube-apiserver-master            1/1     Running   0          19m
kube-controller-manager-master   1/1     Running   0          19m
kube-proxy-4kfld                 1/1     Running   0          19m
kube-scheduler-master            1/1     Running   0          19m
[root@master ~]#

We are going to deploy Flannel.

Google for the GitHub kube-flannel.yml.

[root@master ~]# kc apply -f
podsecuritypolicy.policy/psp.flannel.unprivileged created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@master ~]#

[root@master ~]# kc get pods
No resources found in default namespace.
[root@master ~]# kc get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-66t78         1/1     Running   0          26m
coredns-66bff467f8-sllrx         1/1     Running   0          26m
etcd-master                      1/1     Running   0          26m
kube-apiserver-master            1/1     Running   0          26m
kube-controller-manager-master   1/1     Running   0          26m
kube-flannel-ds-amd64-lw6xf      1/1     Running   0          37s
kube-proxy-4kfld                 1/1     Running   0          26m
kube-scheduler-master            1/1     Running   0          26m
[root@master ~]#

Run on the client (worker) machines:
kubeadm join --token 6daorv.o0hdosnxi40z08h1 \
    --discovery-token-ca-cert-hash sha256:61d3f94370095d8a04e155a133383c57b3e221150d369c575dfdb2e3c78de08f