Saturday, July 25, 2020

Kubernetes (Minikube) set up on your PC

1. Download kubectl, minikube, and minishift, and store them at the following location:
C:\Program Files\Kubernetes\minikube

2. Add this path to the environment variables
Start -> Type env -> Click on "Edit the system environment variables",
then click on Environment Variables and double-click on PATH in the System Variables section.
Click on New and paste your path.
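Alternatively, the path can be appended from an elevated command prompt (a sketch that assumes the folder from step 1; note that setx truncates values longer than 1024 characters, so the GUI method above is safer for long PATHs):

> setx /M PATH "%PATH%;C:\Program Files\Kubernetes\minikube"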

3. Open your command prompt and type minikube.
If you get output rather than an error message, it is set up correctly. If you get an error, fix it before continuing.

4. Download and install VirtualBox from virtualbox.org

5. Open your command prompt and run the command below:
> minikube start --driver=virtualbox

This command downloads all the required packages and sets up Kubernetes inside a VirtualBox VM.
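Once it finishes, a quick status check (assuming the binaries from step 1 are on your PATH):

> minikube status
> kubectl get nodes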

6. I rename the kubectl command to kc so I have a shorter command to type.
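One way to do this on Windows (my own shortcut, not a requirement) is to copy the binary or define a console macro; note that doskey macros only last for the current session:

> copy "C:\Program Files\Kubernetes\minikube\kubectl.exe" "C:\Program Files\Kubernetes\minikube\kc.exe"
> doskey kc=kubectl $*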


7. Verify the configuration
C:\Users\Admin>cd .kube
C:\Users\Admin\.kube>dir
C:\Users\Admin\.kube>notepad config

C:\Users\Admin\.kube>kc cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


C:\Users\Admin>kc get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   42m


8. Install the following tools
- Visual Studio Code
- Node.js
- git
- OpenShift CLI (oc)
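A quick check that each tool landed on the PATH (version numbers will differ):

> code --version
> node -v
> git --version
> oc version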


Reference Documents
https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://github.com/kubernetes/minikube/releases/tag/v1.12.0
https://cloud.redhat.com/openshift/install
https://code.visualstudio.com/docs/?dv=win
https://nodejs.org/en/download/
https://cloud.redhat.com/openshift/install/crc/installer-provisioned

Installing HPE Foundation Software on an HPE Superdome Flex server

A. Console login to HPE Superdome Flex server
$ ssh administrator@superdome-rmc

RMC cli>

a. View partition configuration
RMC cli> show npar
RMC cli> show logs error

If there are no errors, go ahead and power on the system if it is in power-off mode
RMC cli> power on npar pnum=0

Connect to the system; once connected, you will be at the OS login prompt
RMC cli> connect npar pnum=0

B. Installing HPE Foundation Software on the HPE Superdome Flex server

1. Download the ISO image from hpe.com
2. Mount it
# mount -o loop hpe-foundation-2.3.1-cd1....iso /mnt/foundation

3. Create repo
# vi /etc/yum.repos.d/hpe-foun.repo
[hpe-foundation2.3.1]
name=HPE-Foundation
baseurl=file:///mnt/foundation/RPMS
enabled=1
gpgkey=file:///mnt/foundation/RPM-GPG-KEY-hpe
       file:///mnt/foundation/RPM-GPG-KEY-sgi
gpgcheck=1

# yum clean all
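A quick sanity check that the new repo is visible (assuming the repo id defined above):

# yum repolist enabled | grep -i hpe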

4. Install/Update HPE Foundation software
# yum groupinstall "HPE Foundation Software"
# yum groupupdate "HPE Foundation Software"

Install/update DCD
For a new install, the command above installs DCD and its dependencies automatically.
For an update:
# yum update storelib
# yum update hpe-dcd

Verify that the new version is installed
# rpm -qi hpe-dcd | grep -i version

Reboot the machine:
# reboot

Once the system reaches the EFI shell, type the following at the RMC command window (you may not have to type it):
> power reset npar pnum=0


C. Troubleshooting
Error
----
After the reboot, the system could not load some of the modules and two services did not come up:

# systemctl list-units --state=failed
polkit
systemd-modules-load

I rebooted the system again, but the problem persisted. It also wiped out the resolv.conf file. The NetworkManager service was running, but it failed when I tried to restart it.

Finding
The kABI-compatible weak modules, hwperf and numatools, did not appear under the /lib/modules directory.

Solution
Reinstall these two packages:
# yum --disablerepo=* --enablerepo=hpe-foundation2.3.1 reinstall kmod-hwperf.ko
# yum --disablerepo=* --enablerepo=hpe-foundation2.3.1 reinstall kmod-numatools.ko

# cd /lib/modules
# find . -name hwperf.ko
# find . -name numatools

After reboot, everything is good.

Kubernetes setup on a Linux machine


1. Installing Docker engine

1. Build a Red Hat (CentOS 8) system on your VirtualBox

Attach the OS ISO image and mount it:
# mkdir /cdrom /opt/OS_Image; mount /dev/cdrom /cdrom
# cd /cdrom; cp -a * /opt/OS_Image

To make the cdrom mount automatically at boot:
# vi /etc/rc.d/rc.local
touch /var/lock/subsys/local
# Add the line below
mount /dev/cdrom /cdrom

:wq!

# chmod +x /etc/rc.d/rc.local
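An alternative I did not use here is a single /etc/fstab entry, which mounts the cdrom at boot without touching rc.local (a sketch; adjust the device name if needed):

/dev/cdrom   /cdrom   iso9660   ro   0 0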


Set up repo
[root@control yum.repos.d]# cat local.repo
[OS-Repo]
name=Centos8 repo AppStream
baseurl=file:///opt/OS_Image/AppStream
gpgcheck=0

[BASEOS]
name=Centos8 repo
baseurl=file:///opt/OS_Image/BaseOS
gpgcheck=0

[docker]
name=docker repo
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck=0
[root@control yum.repos.d]#


[root@control yum.repos.d]# yum install docker-ce --nobest


Set up the repo for Kubernetes

Google for "installing kubeadm":

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Read through the page since it contains very important information. Go to the section where you find the yum repo setup.

Copy the code and set up the repo:


cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF



# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

-> Verify SELinux is set to permissive
# cat /etc/sysconfig/selinux   (or /etc/selinux/config)
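You can also check the current runtime mode:

# getenforce
Permissive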

Install iproute-tc
[root@control yum.repos.d]# yum install iproute-tc


cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly, call modprobe br_netfilter.
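For example (the modules-load.d file is my addition so that the module is also loaded on every boot):

# lsmod | grep br_netfilter
# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf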

Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
systemctl enable docker
[root@control yum.repos.d]# systemctl start docker
[root@control yum.repos.d]# systemctl status docker

[root@control yum.repos.d]# docker info
 Cgroup Driver: cgroupfs

Configure cgroup driver used by kubelet on control-plane node

https://github.com/kubernetes/kubeadm/issues/1394

[root@control yum.repos.d]# cat /etc/docker/daemon.json
{
   "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@control yum.repos.d]# systemctl restart docker
[root@control yum.repos.d]# docker info  # shows the output...

Cgroup Driver: systemd

Disable swap (the kubelet requires swap to be off). Comment out the swap entry in /etc/fstab:

[root@control yum.repos.d]# vi /etc/fstab
#/dev/mapper/cl_control-swap swap                    swap    defaults        0 0
~
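The fstab edit only takes effect on the next boot; to turn swap off immediately, run:

[root@control yum.repos.d]# swapoff -a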

[root@control yum.repos.d]# systemctl start kubelet
[root@control yum.repos.d]# systemctl enable kubelet
[root@control yum.repos.d]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor pre>
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Wed 2020-07-22 1>
     Docs: https://kubernetes.io/docs/
  Process: 18237 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_C>
 Main PID: 18237 (code=exited, status=255)



Now, we are going to make this OS a base image, called the GOLD image. Based on this, we will create the other OS instances for the control and worker nodes.


Now, shut down your VM:
[root@control yum.repos.d]# init 0


Go to your VirtualBox, right-click on your VM, and click on Clone.

Under MAC Address Policy, select "Generate new MAC addresses".

Keep the original machine intact and create 3 clone machines:
- a control (master) node
- 2 worker nodes


Set the hostname and IP address on each clone, and add entries to DNS or the hosts file:
# hostnamectl set-hostname master; exec bash

# cat /etc/hosts
192.168.56.5    master
192.168.56.6    worker1
192.168.56.7    worker2

Make sure they can communicate with each other:
# for i in master worker1 worker2; do  ping -c 2 $i; done


Now, set up your Kubernetes master, also called the control plane.
Specify the network information.
Run kubeadm -h for help:
# kubeadm -h
Read all the output and pick the best option.

Here, we want to set up a new cluster, so we will pick the init option. Let's get help on this as well:

# kubeadm init -h
We see that for the network info, we will use:
--pod-network-cidr string              Specify range of IP addresses

# kubeadm init --pod-network-cidr=10.10.1.0/16

I got an error that my Docker engine was not started and firewalld was enabled, so I disabled the firewall and started Docker:

[root@master ~]# systemctl enable docker
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl status docker
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld

I got another error, this time complaining about the CPU count:

[root@master ~]# kubeadm init --pod-network-cidr=10.10.1.0/16
W0722 16:25:45.888720    6178 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


I had to shut down the master VM and add one more CPU.
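This can be done in the VirtualBox GUI (Settings -> System -> Processor) or from the host command line while the VM is powered off (a sketch; "master" is assumed to be the VM name):

> VBoxManage modifyvm "master" --cpus 2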

[root@master ~]# docker ps

Now, let's try again:

[root@master ~]# kubeadm init --pod-network-cidr 10.10.1.0/16
W0722 16:30:58.200764    1654 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster

................

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token 6daorv.o0hdosnxi40z08h1 \
    --discovery-token-ca-cert-hash sha256:61d3f94370095d8a04e155a133383c57b3e221150d369c575dfdb2e3c78de08f
[root@master ~]#


Review the output and complete the following,
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#

Add alias to profile
[root@master ~]# vi .bashrc
alias kc=kubectl
[root@master ~]# . ./.bashrc
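Optionally, wire up bash completion for the alias as well (the standard kubectl completion setup; adjust for your shell):

[root@master ~]# yum install -y bash-completion
[root@master ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc
[root@master ~]# echo 'complete -o default -F __start_kubectl kc' >> ~/.bashrc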

[root@master ~]# systemctl status kubelet
[root@master ~]# docker images
[root@master ~]# docker ps

[root@master ~]# kc get pods
No resources found in default namespace.
[root@master ~]# kc get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   12m   v1.18.6
[root@master ~]# kc get ns
[root@master ~]# kc create namespace myspace
namespace/myspace created

[root@master ~]# kc get ns
NAME              STATUS   AGE
default           Active   17m
kube-node-lease   Active   17m
kube-public       Active   17m
kube-system       Active   17m
myspace           Active   19s
[root@master ~]# kc run testpod --image=httpd -n myspace
pod/testpod created
[root@master ~]# kc get pods
No resources found in default namespace.
[root@master ~]# kc get pods -n myspace
NAME      READY   STATUS    RESTARTS   AGE
testpod   0/1     Pending   0          39s
[root@master ~]# kc get all -n myspace
NAME          READY   STATUS    RESTARTS   AGE
pod/testpod   0/1     Pending   0          72s


[root@master ~]# kc get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-66t78         0/1     Pending   0          19m
coredns-66bff467f8-sllrx         0/1     Pending   0          19m
etcd-master                      1/1     Running   0          19m
kube-apiserver-master            1/1     Running   0          19m
kube-controller-manager-master   1/1     Running   0          19m
kube-proxy-4kfld                 1/1     Running   0          19m
kube-scheduler-master            1/1     Running   0          19m
[root@master ~]#

We are going to deploy the Flannel pod network.

Google for the GitHub kube-flannel.yml:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@master ~]# kc apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@master ~]#

[root@master ~]# kc get pods
No resources found in default namespace.
[root@master ~]# kc get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-66t78         1/1     Running   0          26m
coredns-66bff467f8-sllrx         1/1     Running   0          26m
etcd-master                      1/1     Running   0          26m
kube-apiserver-master            1/1     Running   0          26m
kube-controller-manager-master   1/1     Running   0          26m
kube-flannel-ds-amd64-lw6xf      1/1     Running   0          37s
kube-proxy-4kfld                 1/1     Running   0          26m
kube-scheduler-master            1/1     Running   0          26m
[root@master ~]#




Run the join command (from the kubeadm init output) on each worker node:
kubeadm join 10.0.2.15:6443 --token 6daorv.o0hdosnxi40z08h1 \
    --discovery-token-ca-cert-hash sha256:61d3f94370095d8a04e155a133383c57b3e221150d369c575dfdb2e3c78de08f
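If the token has expired or the join command was lost, a fresh one can be printed on the master; once the workers have joined, verify from the master:

[root@master ~]# kubeadm token create --print-join-command
[root@master ~]# kc get nodes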



https://carleton.ca/scs/tech-support/troubleshooting-guides/host-only-adapter-on-virtualbox/
https://condor.depaul.edu/glancast/443class/docs/vbox_host-only_setup.html