Cloud Servers
Kube Master: 18.234.223.154 (username: cloud_user / password: WVAPIfMNPa)
Worker 1: 54.196.231.131 (username: cloud_user / password: WVAPIfMNPa)
Worker 0: 54.221.177.24 (username: cloud_user / password: WVAPIfMNPa)
Installing and Testing the Components of a Kubernetes Cluster
-> We have three nodes, and we will install the components necessary to build a running Kubernetes cluster. Once the cluster is built, we will verify that all nodes are in the Ready status, then test deployments, pods, services, and port forwarding, as well as executing commands from a pod.
-> Log in to all three servers using a terminal program such as PuTTY or MobaXterm.
A. Get the Docker gpg key, and add it to your repository.
1. In all three terminals, run the following command to get the Docker gpg key:
root@master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master:~#
2. Then add it to your repository:
# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
B. Get the Kubernetes gpg key, and add it to your repository.
1. In all three terminals, run the following command to get the Kubernetes gpg key:
root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@master:~#
2. Then add it to your repository:
# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@ip-10-0-1-102:~# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@ip-10-0-1-102:~#
3. Update the packages:
# sudo apt update
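4. (Optional) Confirm both repositories are now visible and package candidates exist. apt-cache policy is a standard APT tool; the exact version lists will vary:
# apt-cache policy docker-ce
# apt-cache policy kubelet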
C. Install Docker, kubelet, kubeadm, and kubectl.
1. In all three terminals, run the following command to install Docker, kubelet, kubeadm, and kubectl:
- docker-ce, kubelet, kubeadm, kubectl
# sudo apt install -y docker-ce=5:19.03.10~3-0~ubuntu-focal kubelet=1.18.5-00 kubeadm=1.18.5-00 kubectl=1.18.5-00
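2. (Optional, not part of the original lab) Hold the packages at these versions so a routine apt upgrade does not move the cluster components unexpectedly:
# sudo apt-mark hold docker-ce kubelet kubeadm kubectl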
D. Initialize the Kubernetes cluster.
1. In the Controller server terminal, run the following command to initialize the cluster using kubeadm:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
E. Set up local kubeconfig.
1. In the Controller server terminal, run the following commands to set up local kubeconfig:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~# mkdir -p $HOME/.kube
root@master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:~# id -u
0
root@master:~# id -g
0
root@master:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~#
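2. (Optional) Confirm kubectl can reach the API server. At this point the master typically reports NotReady, because no CNI plugin has been applied yet:
# kubectl get nodes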
F. Apply the Calico CNI plugin as a network overlay.
1. In the Controller server terminal, run the following command to apply Calico:
# kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
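2. (Optional) Watch the CNI and DNS pods come up in the kube-system namespace; once they are Running, the node should move to Ready:
# kubectl get pods -n kube-system -w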
G. Join the worker nodes to the cluster, and verify they have joined successfully.
-> When we ran sudo kubeadm init on the Controller node, there was a kubeadm join command in the output. You'll see it right under this text:
You can now join any number of machines by running the following on each node as root:
-> To join worker nodes to the cluster, we need to run that command, as root (we'll just preface it with sudo) on each of them. It should look something like this:
$ sudo kubeadm join <your unique string from the output of kubeadm init>
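-> For reference, the command generally has this shape (the token and hash are unique to your cluster; these are placeholders, not values to copy):
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
-> If the token has expired (tokens last 24 hours by default), you can print a fresh join command on the Controller:
# kubeadm token create --print-join-command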
H. Run a deployment that includes at least one pod, and verify it was successful.
1. In the Controller server terminal, run the following command to run a deployment of nginx:
# kubectl create deployment nginx --image=nginx
2. Verify its success:
# kubectl get deployments
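3. (Optional) Wait for the rollout to finish before moving on:
# kubectl rollout status deployment/nginx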
I. Verify the pod is running and available.
1. In the Controller server terminal, run the following command to verify the pod is up and running:
# kubectl get pods
J. Use port forwarding to extend port 80 to 8081, and verify access to the pod directly.
1. In the Controller server terminal, run the following command to forward the container port 80 to 8081 (replace <pod_name> with the name in the output from the previous command):
# kubectl port-forward <pod_name> 8081:80
2. Open a new terminal session and log in to the Controller server. Then, run this command to verify we can access this container directly:
# curl -I http://127.0.0.1:8081
We should see a status of HTTP/1.1 200 OK.
K. Execute a command directly on a pod.
1. In the original Controller server terminal, hit Ctrl+C to exit out of the running program.
2. Still in Controller, execute the nginx version command from a pod (using the same <pod_name> as before):
# kubectl exec -it <pod_name> -- nginx -v
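3. (Optional) The same mechanism can open an interactive shell inside the pod; the official nginx image is Debian-based, so bash should be available:
# kubectl exec -it <pod_name> -- /bin/bash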
L. Create a service, and verify connectivity on the node port.
1. In the original Controller server terminal, run the following command to create a NodePort service:
# kubectl expose deployment nginx --port 80 --type NodePort
2. View the service:
# kubectl get services
3. Get the node the pod resides on.
# kubectl get po -o wide
4. Verify the connectivity by using curl on the NODE from the previous step and the port from when we viewed the service. Make sure to replace YOUR_NODE and YOUR_PORT with appropriate values for this lab.
# curl -I YOUR_NODE:YOUR_PORT
We should see a status of HTTP/1.1 200 OK.
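5. (Optional) NodePort services allocate a port from the 30000-32767 range by default. You can extract just the assigned port with jsonpath:
# kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'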
Conclusion
Congratulations on completing this lab!
-> Extra notes from the session. Enable kubectl bash completion and set a short alias:
root@master:~/.kube# source <(kubectl completion bash)
root@master:~/.kube# echo "source <(kubectl completion bash)" >> ~/.bashrc
alias k=kubectl
complete -F __start_kubectl k
root@master:~# alias 'kc=kubectl'
root@master:~# kc get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 17m v1.18.5
root@master:~#
-> Troubleshooting: re-running kubeadm init on the already-initialized master fails its preflight checks:
root@master:~# kubeadm init
I1130 18:28:31.593188 26928 version.go:252] remote version is much newer: v1.19.4; falling back to: stable-1.18
W1130 18:28:31.688915 26928 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.12
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@master:~# pwd
/root
-> I kept getting this error because the master was already initialized. Instead, I went to the worker node, copied the kubeconfig file (.kube/config) from the master node, and ran the join against it (see the scp sketch after the join output below):
# kubeadm join --discovery-file config
root@ip-10-0-1-102:~# kubeadm join --discovery-file config
W1130 18:35:18.713689 9854 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@ip-10-0-1-102:~#
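-> For completeness, one way to copy that kubeconfig from the master to the worker is scp. This is a sketch assuming root SSH access between the nodes; adjust the path and address for your environment:
# scp root@<master-private-ip>:~/.kube/config ./config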
root@master:~/.kube# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-1-102 Ready <none> 3m20s v1.18.5
ip-10-0-1-103 Ready <none> 21s v1.18.5
master Ready master 27m v1.18.5
root@master:~/.kube#
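-> The workers show <none> under ROLES. If you want that column populated, you can add a node-role label yourself (purely cosmetic; a sketch using one of the node names above):
# kubectl label node ip-10-0-1-102 node-role.kubernetes.io/worker=worker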
------------------
-> Backgrounding the port-forward with Ctrl+Z and bg lets us curl it from the same session:
root@master:~/.kube# kc port-forward nginx-f89759699-tm2x7 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
^Z
[2]+ Stopped kubectl port-forward nginx-f89759699-tm2x7 8081:80
root@master:~/.kube# bg
[2]+ kubectl port-forward nginx-f89759699-tm2x7 8081:80 &
root@master:~/.kube# curl -I http://127.0.0.1:8081
Handling connection for 8081
HTTP/1.1 200 OK
Server: nginx/1.19.5
Date: Mon, 30 Nov 2020 18:43:54 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Nov 2020 13:02:03 GMT
Connection: keep-alive
ETag: "5fbd044b-264"
Accept-Ranges: bytes
root@master:~/.kube#