Monday, November 30, 2020

How to export DISPLAY as a root user on RHEL7

[sam@master ~]$ xhost +          # allow local clients to connect (disables X access control; fine for a lab)
[sam@master ~]$ xauth list       # note the cookie line for your display
[sam@master ~]$ echo $DISPLAY

[sam@master ~]$ sudo su -
[root@master ~]# xauth list      # empty for a fresh root session
[root@master ~]# xauth add server:  MIT ..cookie.. dd2....   # paste the cookie line from the user's xauth list
[root@master ~]# xauth list      # the cookie now shows up
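If $DISPLAY comes up empty in the root shell, export it to match the user session (the :10.0 below is only an example value; use whatever echo $DISPLAY printed above):
[root@master ~]# export DISPLAY=:10.0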

[root@master ~]# firefox http://mywebsite.com

RHEL7 - Creating a LUKS-encrypted device on Red Hat

 Creating a LUKS-encrypted device on RHEL7


1. Add a device to your system, either through VMware or SAN
# ls -l /dev/sdb

2. Partition your drive
# gdisk /dev/sdb

Type ? for help
Type n for a new partition
Change the partition type to Linux LVM (8e00)
Press w to write the partition table.
Press Y to confirm.
# fdisk -l

3. Now, it's time to encrypt your device.
# cryptsetup --force-password --cipher aes-xts-plain64 luksFormat /dev/sdb1
Confirm by typing YES (uppercase); it will then prompt you for a passphrase. Keep/remember this passphrase.
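(Optional check, not part of the original steps) You can inspect the LUKS header to confirm the format took:
# cryptsetup luksDump /dev/sdb1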

4. Now, open this device
# cryptsetup luksOpen /dev/sdb1 luks-$(cryptsetup luksUUID /dev/sdb1)
# cryptsetup luksUUID /dev/sdb1
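The opened device shows up under /dev/mapper; you can confirm with:
# ls -l /dev/mapper/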

5. Now, add device to crypttab
# uuid=$(cryptsetup luksUUID /dev/sdb1); echo "luks-$uuid UUID=$uuid none" >> /etc/crypttab
# cat /etc/crypttab

6. Bring this device under LVM control
# pvcreate /dev/mapper/luks-$(cryptsetup luksUUID /dev/sdb1)
# pvs

7. Create volume group
# vgcreate vg1 /dev/mapper/luks-$(cryptsetup luksUUID /dev/sdb1)

---------------------------------------------------------
If you are extending an existing volume group instead:
# vgextend vg1 /dev/mapper/luks-$(cryptsetup luksUUID /dev/sdb1)
# vgs
# lvs
# lvscan
# df -h /var
# lvextend -L +10G /dev/vg1/lv_var
# lvscan
# df -h /var
# xfs_growfs /dev/vg1/lv_var
# df -h /var   # verify the size change
---------------------------------------------------------
8. Create a logical volume out of the volume group
# lvcreate -L 10G -n lv_www vg1

9. Create filesystem
# lvscan
# mkfs.xfs /dev/mapper/vg1-lv_www

10. Add entry to fstab and mount the device.
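For example (the /www mount point below is an assumption to match lv_www; adjust for your setup):
# mkdir -p /www
# echo '/dev/mapper/vg1-lv_www /www xfs defaults 0 0' >> /etc/fstab
# mount -a
# df -h /www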


Note: If you want a shorter passphrase, look at /etc/security/pwquality.conf to lower the minimum password length.

Operators - BITWISE operator in C - Language

  BITWISE operator

----------------

"BITWISE operator" is used to perform the operations on the "BITS".

What are BITS?
"BITS" are 1's and 0's.

What is another name for "BITS"?
-> Another name for "BITS" is "binary language".
-> Another name for "BINARY LANGUAGE" is "MACHINE LANGUAGE".

The following are the "BITWISE OPERATORS" in C-Language
1) & - AND BITWISE OPERATOR
2) | - OR BITWISE OPERATOR
3) ^ - XOR BITWISE OPERATOR
       NOTE: XOR stands for "EXCLUSIVE OR"
4) >> - RIGHT SHIFT BITWISE OPERATOR
5) << - LEFT SHIFT BITWISE OPERATOR
6) ~ - ONE'S COMPLEMENT BITWISE OPERATOR


BITWISE OPERATORS
-----------------
1) & - AND BITWISE OPERATOR
-----------------------

The & (AND) BITWISE OPERATOR is used to perform AND logical operations on BITS.

What are the "AND logical operations on BITS"?

-> NOTE: True - 1, False - 0

1 & 0 = 0
0 & 1 = 0
1 & 1 = 1
0 & 0 = 0

Let's say I have one integer value: 10.
What are the binary bits of 10?

decimal    Binary
10 = 1010
20 = 10100

We collect the remainders. Let's calculate the bit value of 20:
divide 20 by 2 repeatedly and note each remainder.

2 | 20
2 | 10   remainder 0
2 |  5   remainder 0
2 |  2   remainder 1
     1   remainder 0

So, putting the final quotient and the remainders together from the bottom up, we have
10100


As we know,
1 byte = 8 bits
10 = 1010 -> 4 bits
20 = 10100 -> 5 bits

so, 10 = 0 0 0 0 1 0 1 0   -> we just add 4 leading 0's to make 10 fill 8 bits, which is 1 byte
    20 = 0 0 0 1 0 1 0 0   -> we have to add 3 leading 0's

Now, let's apply the AND bitwise operator:

10 = 0 0 0 0 1 0 1 0
20 = 0 0 0 1 0 1 0 0
---------------------
     0 0 0 0 0 0 0 0

Now calculate. Each bit position contributes (2^position * bit); every bit here is 0, so every term is 0:

(2^7*0) + (2^6*0) + (2^5*0) + (2^4*0) + (2^3*0) + (2^2*0) + (2^1*0) + (2^0*0) = 0 result

When we apply the AND bitwise operator to the numbers 10 and 20, what is the result?
-> 0 is the result.

#include <stdio.h>
int main(void)
{
    printf("%d\n", 10 & 20);   /* prints 0 */
    return 0;
}

2) | - OR BITWISE OPERATOR
--------------------------

The OR BITWISE OPERATOR is used to perform OR logical operations on BITS.

What are the OR LOGICAL OPERATIONS on BITS?
->

1 | 0 = 1    True Or False -> True
0 | 1 = 1    True
1 | 1 = 1    True
0 | 0 = 0    False | False


Applying OR BITWISE Operator
10 = 0   0   0  0  1  0  1  0
20 = 0   0   0  1  0  1  0  0
-----------------------------
     0   0   0  1  1  1  1  0
Now, calculate the value of each bit position:

(2^7*0) + (2^6*0) + (2^5*0) + (2^4*1) + (2^3*1) + (2^2*1) + (2^1*1) + (2^0*0)
-----------------------------------------------------------------------------
   0    +    0    +    0    +   16    +    8    +    4    +    2    +    0    = 30 Result

So, if we apply the OR bitwise operator to 10 and 20, what is the result?
The answer is 30.

#include <stdio.h>
int main(void)
{
    printf("%d\n", 10 | 20);   /* prints 30 */
    return 0;
}


3) XOR BITWISE OPERATOR
========================

BITWISE OPERATORS are used for performing calculations on bits

Here, XOR stands for EXCLUSIVE OR

XOR operations on BITS:
------------------------
It is a little different from the OR bitwise operator.

XOR - ^
1 ^ 0 = 1
0 ^ 1 = 1
0 ^ 0 = 0
1 ^ 1 = 0

Q. What is the difference between 'OR' and 'XOR' bitwise operator?
->
1 | 1 = 1
1 ^ 1 = 0
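A quick example of the difference (my own, not from the course notes): 6 is 110 and 3 is 011, so they share a 1 in the middle bit, and OR and XOR disagree:

#include <stdio.h>
int main(void)
{
    /* 6 = 110, 3 = 011: bit 1 is set in both operands */
    printf("%d %d\n", 6 | 3, 6 ^ 3);   /* prints 7 5 */
    return 0;
}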


Let's work through the XOR bitwise operator.

Applying XOR BITWISE Operator
10 = 0   0   0  0  1  0  1  0
20 = 0   0   0  1  0  1  0  0
-----------------------------
     0   0   0  1  1  1  1  0

Since the 1-bits of 10 and the 1-bits of 20 never fall in the same position, the 1 ^ 1 case never occurs; that is why OR and XOR give the same result here.


The result will be the same: 30


Now, let's write the code

#include <stdio.h>
int main(void)
{
    printf("%d\n", 10 ^ 20);   /* prints 30 */
    return 0;
}


4) >> - RIGHT SHIFT BITWISE Operator
===================================
>> is written as two greater-than signs.
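The notes stop here for this operator; as a quick illustration (my own example, not from the original notes), shifting right by n drops the low n bits, which divides a non-negative value by 2^n:

#include <stdio.h>
int main(void)
{
    /* 20 = 10100 in binary; 20 >> 2 = 00101 = 5 */
    printf("%d\n", 20 >> 2);   /* prints 5 */
    return 0;
}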

Kubernetes - Installing and Testing the Components of a Kubernetes Cluster

Cloud Server
Kube Master
Username: cloud_user
Password: WVAPIfMNPa
Kube Master Public IP: 18.234.223.154

Worker 1
Username: cloud_user
Password: WVAPIfMNPa
Worker 1 Public IP: 54.196.231.131

Worker 0
Username: cloud_user
Password: WVAPIfMNPa
Worker 0 Public IP: 54.221.177.24

Installing and Testing the Components of a Kubernetes Cluster

-> We have three nodes and we will install the components necessary to build a running Kubernetes cluster. Once the cluster is built, we will verify all nodes are in the ready status. We will start testing deployments, pods, services, and port forwarding, as well as executing commands from a pod.

-> Log in to all three servers using a terminal program such as PuTTY or MobaXterm.

A. Get the Docker gpg key, and add the Docker repository.
1. In all three terminals, run the following command to get the Docker gpg key:
root@master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master:~#

2. Then add the repository:

# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

B. Get the Kubernetes gpg key, and add the Kubernetes repository.
1. In all three terminals, run the following command to get the Kubernetes gpg key:
root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@master:~#

2. Then add the repository:
# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

root@ip-10-0-1-102:~# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@ip-10-0-1-102:~#

3. Update the packages:
# sudo apt update

C. Install Docker, kubelet, kubeadm, and kubectl.
1. In all three terminals, run the following command to install Docker, kubelet, kubeadm, and kubectl:
- docker-ce, kubelet, kubeadm, kubectl

# sudo apt install -y docker-ce=5:19.03.10~3-0~ubuntu-focal kubelet=1.18.5-00 kubeadm=1.18.5-00 kubectl=1.18.5-00
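(Optional, not part of the original lab) Pin these packages so a later apt upgrade doesn't move them:
# sudo apt-mark hold docker-ce kubelet kubeadm kubectl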

D. Initialize the Kubernetes cluster.
1. In the Controller server terminal, run the following command to initialize the cluster using kubeadm:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16

E. Set up local kubeconfig.
1. In the Controller server terminal, run the following commands to set up local kubeconfig:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~# mkdir -p $HOME/.kube
root@master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:~# id -u
0
root@master:~# id -g
0
root@master:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~#

F. Apply the Calico CNI plugin as a network overlay.
1. In the Controller server terminal, run the following command to apply Calico:
# kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
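To check that the overlay is coming up, watch the kube-system pods:
# kubectl get pods -n kube-system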

G. Join the worker nodes to the cluster, and verify they have joined successfully.
-> When we ran sudo kubeadm init on the Controller node, there was a kubeadm join command in the output. You'll see it right under this text:

You can now join any number of machines by running the following on each node as root:

-> To join worker nodes to the cluster, we need to run that command, as root (we'll just preface it with sudo) on each of them. It should look something like this:

$ sudo kubeadm join <your unique string from the output of kubeadm init>
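The real command carries a token and a CA cert hash; its general shape looks like this (all values below are placeholders, so copy the exact line from your own kubeadm init output):
$ sudo kubeadm join <controller-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>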

H. Run a deployment that includes at least one pod, and verify it was successful.
1. In the Controller server terminal, run the following command to create a deployment of nginx:
# kubectl create deployment nginx --image=nginx

2. Verify its success:
# kubectl get deployments

I. Verify the pod is running and available.
1. In the Controller server terminal, run the following command to verify the pod is up and running:
# kubectl get pods

J. Use port forwarding to extend port 80 to 8081, and verify access to the pod directly.
1. In the Controller server terminal, run the following command to forward the container port 80 to 8081 (replace <pod_name> with the name in the output from the previous command):
# kubectl port-forward <pod_name> 8081:80

2. Open a new terminal session and log in to the Controller server. Then, run this command to verify we can access this container directly:
# curl -I http://127.0.0.1:8081
We should see a status of OK.

K. Execute a command directly on a pod.
1. In the original Controller server terminal, hit Ctrl+C to exit out of the running program.

2. Still in Controller, execute the nginx version command from a pod (using the same <pod_name> as before):
# kubectl exec -it <pod_name> -- nginx -v

L. Create a service, and verify connectivity on the node port.
1. In the original Controller server terminal, run the following command to create a NodePort service:
# kubectl expose deployment nginx --port 80 --type NodePort

2. View the service:
# kubectl get services

3. Get the node the pod resides on.
# kubectl get po -o wide

4. Verify the connectivity by using curl on the NODE from the previous step and the port from when we viewed the service. Make sure to replace YOUR_NODE and YOUR_PORT with appropriate values for this lab.
# curl -I YOUR_NODE:YOUR_PORT
We should see a status of OK.
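If you want to pull the assigned NodePort out with a command instead of reading the table (a jsonpath sketch; nginx is the service created above):
# kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'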

Conclusion
Congratulations on completing this lab!

root@master:~/.kube# source <(kubectl completion bash)
root@master:~/.kube# echo "source <(kubectl completion bash)" >> ~/.bashrc
alias k=kubectl
complete -F __start_kubectl k

root@master:~# alias 'kc=kubectl'
root@master:~# kc get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   17m   v1.18.5
root@master:~# 

root@master:~# kubeadm init

I1130 18:28:31.593188   26928 version.go:252] remote version is much newer: v1.19.4; falling back to: stable-1.18

W1130 18:28:31.688915   26928 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

[init] Using Kubernetes version: v1.18.12
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To see the stack trace of this error execute with --v=5 or higher

root@master:~# pwd
/root

The init kept failing because this master had already been initialized (hence the ports-in-use and files-already-exist errors above).
So I went to a worker node, copied the .kube/config file over from the master node, and joined using it as a discovery file:
# kubeadm join --discovery-file config

root@ip-10-0-1-102:~# kubeadm join --discovery-file config
W1130 18:35:18.713689    9854 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@ip-10-0-1-102:~# 

root@master:~/.kube# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
ip-10-0-1-102   Ready    <none>   3m20s   v1.18.5
ip-10-0-1-103   Ready    <none>   21s     v1.18.5
master          Ready    master   27m     v1.18.5
root@master:~/.kube#

------------------
root@master:~/.kube# kc port-forward nginx-f89759699-tm2x7 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

^Z
[2]+  Stopped                 kubectl port-forward nginx-f89759699-tm2x7 8081:80
root@master:~/.kube# bg
[2]+ kubectl port-forward nginx-f89759699-tm2x7 8081:80 &
root@master:~/.kube# curl -I http://127.0.0.1:8081
Handling connection for 8081

HTTP/1.1 200 OK
Server: nginx/1.19.5
Date: Mon, 30 Nov 2020 18:43:54 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Nov 2020 13:02:03 GMT
Connection: keep-alive
ETag: "5fbd044b-264"
Accept-Ranges: bytes
root@master:~/.kube#