Thursday, December 17, 2020

Ansible - Ansible Vault - keep your password secret

 ================Ansible Vault==================
1. Create your playbook
We are going to create a file keepitsecret.yaml and we will keep it secret using Vault. The playbook below reads it through vars_files:

[root@master vault]# cat myvault.yaml
- hosts: 127.0.0.1
  vars_files:
    - keepitsecret.yaml
  tasks:
  - name: Sending email using Gmail's smtp services
    mail:
      host: smtp.gmail.com
      port: 587
      username: "{{ u }}"
      password: "{{ p }}"
      to: sam@gmail.com
      subject: Testing email from gmail using ansible
      body: system {{ ansible_host }} has been successfully tested.


2. Create a vault where you will store your username/pw
# av -h          (av here is an alias for ansible-vault)
# av create -h
Check the syntax:
[root@master vault]# ansible-vault create keepitsecret.yaml
New Vault password:
Confirm New Vault password:
u: "sam@gmail.com"
p: "MyPasswordSecret"


3. View the content of the file. You can't read what you stored. It's encrypted.
[root@master vault]# cat keepitsecret.yaml
$ANSIBLE_VAULT;1.1;AES256
32346435633239646636626465663162613262623434333664393437316461366565316364396632
6365373834616464333437373134653435386335653165660a326331363163353932373161386362
61316464353339383834666662353230393036313538646563303632393134363165353431336130
3037393363643463650a643762353433663662306630376231363836376464656330346235663964
31656463373832353739303239353032613838333231613464343336656239656535333561663064
3036336665303135313061666234313831626630343066613130
[root@master vault]#
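Besides create, ansible-vault has subcommands to read or change the file later; a quick reference (each prompts for the vault password):

```shell
ansible-vault view keepitsecret.yaml    # print the decrypted content to stdout
ansible-vault edit keepitsecret.yaml    # edit the decrypted content in $EDITOR, re-encrypt on save
ansible-vault rekey keepitsecret.yaml   # change the vault password
```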

4. Run your playbook
# ap myvault.yaml --ask-vault-pass
(ap is an alias for ansible-playbook; --ask-vault-pass makes Ansible prompt for the vault password so it can decrypt keepitsecret.yaml)

I got an email alert:
Sign-in attempt was blocked
sam@gmail.com
Someone just used your password to try to sign in to your account from a non-Google app. Google blocked them, but you should check what happened. Review your account activity to make sure no one else has access.

Less secure app blocked
Google blocked the app you were trying to use because it doesn't meet our security standards.
Some apps and devices use less secure sign-in technology, which makes your account more vulnerable. You can turn off access for these apps, which we recommend, or turn on access if you want to use them despite the risks. Google will automatically turn this setting OFF if it's not being used.
Learn more
Google for 'less secure app access' and enable 'Less secure app access' for the Gmail account.

Re-run the playbook; the email should be sent this time.

Tuesday, December 15, 2020

Ansible: exception handling ... error handling ..

Ansible: exception handling ...

1. Let's create a simple playbook and run it

[root@master dec15]# cat error.yaml
- hosts: w1
  tasks:
  - package:
      name: "httpd"
      state: present

  - debug:
      msg: "This is a test run .."

[root@master dec15]# ansible-playbook error.yaml

2. Let's make a mistake in one of the parameters:
say the parameter 'name' is typed as 'nane'

# cat error.yaml
- hosts: w1
  tasks:
  - package:
        nane: "httpd"
        state: present
  - debug:
      msg: "This is just a test run ..."

[root@master dec15]# alias 'ap=ansible-playbook'
[root@master dec15]# ap error.yaml

We saw a fatal error. We know it's because of the keyword we wrote: Ansible does not have that keyword defined.
In fact, it didn't recognize the parameter we supplied ('nane' instead of 'name').

3. What we can do is tell Ansible to ignore the error if it finds one: do not just throw the error and stop, continue.

# cat error.yaml
- hosts: w1
  tasks:
  - package:
        nane: "httpd"
        state: present
    ignore_errors: yes # ignore this error and go to the next task
  - debug:
      msg: "This is just a test run ..."

[root@master dec15]# ap error.yaml
You see, the error is ignored this time.

4. Now, let's try something else: let's download a file from the internet.

[root@master dec15]# ansible-doc uri     # or: ansible-doc -l | grep uri

# cat errors.yaml
- hosts: w1
  tasks:
    - package:
        name: "httpd"
        state: present

    - debug:
        msg: "This is just testing msg"

Before running the playbook to download a file from the internet, let's look into some docs,
or Google for 'ansible uri' or 'ansible get_url'.
Look for an example.

Image source: https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/quotes-about-change-1580499303.jpg

[root@master dec15]# cat error.yaml
- hosts: w1
  tasks:
  - package:
      nane: "httpd"
      state: present
    ignore_errors: yes

  #- get_url:
  - uri:
      url: https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/quotes-about-change-1580499303.jpg
      dest: "/var/www/html/life_l.jpg"

  - debug:
      msg: "This is a test run .."
[root@master dec15]#


[root@master dec15]# ap error.yaml

[root@worker1 ~]# ls -l /var/www/html/life_l.jpg
-rw-r--r--. 1 root root 206868 Dec  8 05:10 /var/www/html/life_l.jpg

The result above shows that it was successful.

5. Now, say we have a problem with the internet connection; if you run this playbook, it will fail.
It will throw an error, so how do you handle the error?

One thing you can do is ignore the error. How? Look at the yaml file below.

[root@master dec15]# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.10.1    0.0.0.0         UG        0 0          0 enp0s3

For lab purposes, you can remove the 0.0.0.0 default route to simulate a lost internet connection.

[root@master dec15]# cat error.yaml
- hosts: w1
  tasks:
  - package:
      nane: "httpd"
      state: present
    ignore_errors: yes

  #- get_url:
  - uri:
      url: https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/quotes-about-change-1580499303.jpg
      dest: "/var/www/html/life_l.jpg"
    ignore_errors: yes

  - debug:
      msg: "This is a test run .."

[root@master dec15]# ap error.yaml

If you are disconnected from the internet, the uri task will fail with a 'network unreachable' error.
You can use the ignore_errors keyword to ignore the error,
and the playbook will continue to run. But the failure might be an important piece of information that you can't afford to ignore.

So, you have to be very careful while dealing with ignore_errors.


6. Using block.
With block, you put your tasks inside a block section and, at the end, include a rescue section that runs only if a task in the block fails.
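The exact playbook for the run below isn't kept in my notes. Reconstructing from the output (a service task and two debug tasks appear, and rescued=0 since nothing failed), a sketch might look like this; the rescue message is an assumption:

```yaml
# Hypothetical reconstruction of error.yaml using block/rescue.
# If any task inside the block fails, the tasks under rescue run;
# otherwise rescue is skipped.
- hosts: w1
  tasks:
  - block:
      - package:
          name: "httpd"
          state: present
      - uri:
          url: https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/quotes-about-change-1580499303.jpg
          dest: "/var/www/html/life_l.jpg"
      - service:
          name: "httpd"
          state: restarted
      - debug:
          msg: "This is a test run .."
    rescue:
      - debug:
          msg: "Something in the block failed"   # hypothetical message
  - debug:
      msg: "This is a test run .."
```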



[root@master dec15]# ap error.yaml
/usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.2) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)

PLAY [w1] *********************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************
ok: [w1]

TASK [package] ****************************************************************************************************************
ok: [w1]

TASK [uri] ********************************************************************************************************************
changed: [w1]

TASK [service] ****************************************************************************************************************
changed: [w1]

TASK [debug] ******************************************************************************************************************
ok: [w1] => {
    "msg": "This is a test run .."
}

TASK [debug] ******************************************************************************************************************
ok: [w1] => {
    "msg": "This is a test run .."
}

PLAY RECAP ********************************************************************************************************************
w1                         : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0


[root@worker1 ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-12-08 06:01:59 EST; 42s ago
     Docs: man:httpd.service(8)
 Main PID: 59114 (httpd)


[root@worker1 html]# ls -ltr
total 212
-rw-r--r--. 1 root root      8 Dec  7 03:19 index.html
-rw-r--r--. 1 root root     12 Dec  8 05:51 webap.htm
-rw-r--r--. 1 root root 206868 Dec  8 05:53 life_l.jpg
[root@worker1 html]# cat webap.htm
This is cool


Saturday, December 12, 2020

Ansible - EC2 instance creation using ansible

Ansible - EC2 instance creation using ansible..

1. Write your playbook
-> Collect all the manual steps to create an EC2 instance. Google for EC2 instance creation using ansible..
# cat aws-ec2.yaml
- hosts: localhost # 192.168.56.5 - use your own control node)
  tasks:
  - ec2_instance:
      region: us-east-1
      image_id: ami-04d29b6f966df1537
      instance_type: t2.micro
      #image: t2.micro
      vpc_subnet_id: subnet-e261d2ec
      security_group: sg-f5b18ad2
      key_name: kt-2020-k
      name: os_from_ansible
      state: present
      aws_access_key: AKIA6DEA42GA2PGZJ7G3
      aws_secret_key: 3IYF568qVJ8I#RZYnUV2OPG8/XDKVrhDfJRJPnbc

2. Run your playbook
[root@master wk-dec9]# ansible-playbook aws-ec2.yaml
PLAY [localhost] *****************************************************************************************
TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]
TASK [ec2_instance] **************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (botocore or boto3) on master's Python /usr/bin/python3.6. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
PLAY RECAP ***********************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

3. Review the error and Install boto3
[root@master wk-dec9]# pip3 install boto3
Successfully installed boto3-1.16.33 botocore-1.19.33 s3transfer-0.3.3 urllib3-1.26.2

4. Re-run your playbook
[root@master wk-dec9]# ansible-playbook aws-ec2.yaml
PLAY [localhost] *****************************************************************************************
TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]
TASK [ec2_instance] **************************************************************************************
changed: [localhost]
PLAY RECAP ***********************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
[root@master wk-dec9]#

5. Playbook content ..
[root@master wk-dec9]# cat aws-ec2.yaml
- hosts: localhost # 192.168.56.4 - use your own control node
  tasks:
  - ec2_instance:
      region: us-east-1
      image_id: ami-04d29b6f966df1537
      instance_type: t2.micro
      #image: t2.micro
      vpc_subnet_id: subnet-e251d2ec
      security_group: sg-f7a18ad2
      key_name: kb-2020-key
      name: os_from_ansible
      state: present
      aws_access_key: AKIC6HXA42MR2PGZJ7G3
      aws_secret_key: 3IYF590qVJ8ISpZYnUV92PG8/XDKVrhHsJcMPnbc
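The access and secret keys above sit in the playbook in plain text. As in the Ansible Vault post earlier, they can be moved into a vault-encrypted vars file; a sketch (the file name aws-secret.yaml and the variable names my_access_key/my_secret_key are assumptions):

```yaml
# aws-ec2.yaml, reading credentials from an encrypted vars file
# created with: ansible-vault create aws-secret.yaml
- hosts: localhost
  vars_files:
    - aws-secret.yaml   # contains my_access_key / my_secret_key
  tasks:
  - ec2_instance:
      region: us-east-1
      image_id: ami-04d29b6f966df1537
      instance_type: t2.micro
      vpc_subnet_id: subnet-e251d2ec
      security_group: sg-f7a18ad2
      key_name: kb-2020-key
      name: os_from_ansible
      state: present
      aws_access_key: "{{ my_access_key }}"
      aws_secret_key: "{{ my_secret_key }}"
```

Run it with ansible-playbook aws-ec2.yaml --ask-vault-pass so Ansible can decrypt the vars file.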

Saturday, December 5, 2020

Ansible - Setup and configure Load Balancer and Proxy using HAProxy-automatically using ansible

Configure LB and proxy using haproxy

Requirement:
1. One server for the load balancer
2. one or two servers for web servers
In my example I have three servers
Load Balancer: master  - 192.168.10.50
Web servers: worker1, worker2 - 192.168.10.51/52

1. On the master server, install haproxy - it comes on the RedHat DVD
# yum install haproxy

Note: There is no httpd process running on this host.
# rpm -qa httpd

2. Configure haproxy
[root@master ~]# vi /etc/haproxy/haproxy.cfg

Do not modify the global and defaults settings.
Go directly to the 'frontend main' section.
Here, change the port where you want your load balancer to listen.
I will be using port 8080.
I will be disabling the firewall and SELinux for this lab.

frontend main
    bind *:8080

Go all the way down to the section called 'backend app'.

In this section, you will be adding all web server information.

backend app
    balance     roundrobin
    server app1 192.168.10.51:80 check
    server app2 192.168.10.52:80 check

3. Once config is changed, start the service
# systemctl start haproxy
# systemctl enable haproxy
# systemctl status haproxy


4. Now, go to your web server machines.
a. In my case, it's worker node1 and node2.
Install web server and start the service

# yum install httpd
# systemctl start httpd
# systemctl status httpd
# systemctl enable httpd
# systemctl stop firewalld

b. Create an index file
[root@worker1 html]# cat index.html
This is worker node1

[root@worker2 html]# cat index.html
This is Worker node2

5. Now, get the IP of your load balancer server. 

http://192.168.10.50:8080

You should be able to see the web site. If you refresh, you will see the other node's page, since roundrobin alternates between the backends.

This proves that load balancer is working.

--------------------------------------------------------------

Until now, we configured haproxy manually; let's start configuring haproxy using Ansible.

1. Lets configure our inventory file as follows,
# ansible --version
# more /etc/ansible/ansible.cfg

# cat myhosts
[mylb]
master  ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh

[myweb]
worker1 ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh
worker2 ansible_user=root ansible_ssh_pass=changeme ansible_connection=ssh

Note: the inventory lets you group hosts and give each group a name, say web or load balancer - that is what [mylb] and [myweb] are above.

 
2. Lets automate everything using ansible. Here is the yaml file.
[root@master wk6]# cat mylb.yaml
- hosts: myweb  # myweb comes from inventory file
  tasks:
  - package:
      name: "httpd"

  - copy:
      dest: "/var/www/html/index.html"
      content: " Testing Load Balancer on RHEL7/Centos7"

  - service:
      name: "httpd"
      state: restarted

  - service:
      name: "firewalld"
      state: stopped
      enabled: False

- hosts: mylb
  tasks:
  - name: "Install LB software"
    package:
      name: "haproxy"

  - template:
      dest: "/etc/haproxy/haproxy.cfg"
      src: "haproxy.cfg"

  - service:
      name: "haproxy"
      state: restarted

3. Lets look at the config file for haproxy

Do not modify global and default values.

[root@master wk6]# cat haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
    bind *:8080  # This is a port where LB will be listening
    #bind *:5000
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app  # app value can be anything
    balance     roundrobin
    #server  app1 127.0.0.1:5001 check
    #server  app2 127.0.0.1:5002 check
    #server  app3 127.0.0.1:5003 check
    #server  app4 127.0.0.1:5004 check
    #server app1 w1 192.168.10.51:80 check
    #server app2 w2 192.168.10.52:80 check

{% for i in groups[ 'myweb' ] %}
   server app{{ loop.index }} {{ i }}:80 check
{% endfor %}
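With the [myweb] group from the inventory above (worker1 and worker2), that loop should render the backend roughly as:

```
backend app
    balance     roundrobin
   server app1 worker1:80 check
   server app2 worker2:80 check
```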


4. Lets run your playbook
[root@master wk6]# ansible-playbook mylb.yaml


5. Let's verify the content of the haproxy.cfg file
# cat /etc/haproxy/haproxy.cfg

6. Go to the browser with the IP of the proxy server, which is .50

http://192.168.10.50:8080/

You should be able to see the page.

Now, modify the index file on one of the web servers and refresh the LB URL; you will see the new page.

Friday, December 4, 2020

Setup and Configure load balancer and proxy using HAProxy

Configure LB and proxy using haproxy

Requirement:
1. One server for load balancer
2. One or two servers for web servers

In my example I have three servers
Load Balancer: master  - 192.168.10.50
Web servers: worker1, worker2 - 192.168.10.51/52

1. On the master server, install haproxy - it comes on the RedHat DVD
# yum install haproxy
Note: There is no httpd process running on this host.
# rpm -qa httpd

2. Configure haproxy
[root@master ~]# vi /etc/haproxy/haproxy.cfg
Do not modify the global and defaults settings.
Go directly to the 'frontend main' section.
Here, change the port where you want your load balancer to listen.
I will be using port 8080.
I will be disabling the firewall and SELinux for this lab.
frontend main
    bind *:8080
Go all the way down to the section called 'backend app'.
In this section, you will be adding all web server information.
backend app
    balance     roundrobin
    server w1 192.168.10.51:80 check
    server w2 192.168.10.52:80 check

3. Once config is changed, start the service
# systemctl start haproxy
# systemctl enable haproxy
# systemctl status haproxy

4. Now, go to your web server machines. 
a. In my case, its worker node 1 and node2
Install web server and start the service
# yum install httpd
# systemctl start httpd
# systemctl status httpd
# systemctl enable httpd
# systemctl stop firewalld

b. Create an index file
[root@worker1 html]# cat index.html
This is worker node1
[root@worker2 html]# cat index.html
This is Worker node2

5. Now, get the IP of your load balancer server. 
http://192.168.10.50:8080

You should be able to see the web site. If you refresh, you will see the other node's page.
This proves that the load balancer is working.

Monday, November 30, 2020

How to export DISPLAY as a root user on RHEL7

[sam@master ~]$ xhost +
[sam@master ~]$ xauth list
[sam@master ~]$ echo $DISPLAY

[sam@master ~]$ sudo su -
[root@master ~]# xauth list
[root@master ~]# xauth add server:  MIT ..cookie.. dd2....   # paste the cookie line from the user's 'xauth list'
[root@master ~]# xauth list
[root@master ~]# export DISPLAY=<the value the user's 'echo $DISPLAY' printed>

[root@master ~]# firefox http://mywebsite.com

RHEL7 - Creating LUKS encrypted device on RedHat

 Creating LUKS encrypted device on RHEL7


1. Add device to your system either through VMware or SAN
# ls -l /dev/sdb

2. Partition your drive
# gdisk /dev/sdb

Type ? for help
Type n for new partition
change partition type to LVM
press w to write the partition.
press Y to confirm.
# fdisk -l

3. Now, it's time to encrypt your device.
# cryptsetup --force-password --cipher aes-xts-plain64 luksFormat /dev/sdb1
Confirm by typing YES (uppercase) and it will prompt you for a passphrase. Keep/remember this password.

4. Now, open this device
# cryptsetup luksOpen /dev/sdb1 luks-$(cryptsetup luksUUID /dev/sdb1)
# cryptsetup luksUUID /dev/sdb1

5. Now, add device to crypttab
# uuid=$(cryptsetup luksUUID /dev/sdb1); echo luks-$uuid UUID=$uuid none >> /etc/crypttab
# cat /etc/crypttab

6. Bring this device under LVM control
# pvcreate /dev/mapper/luks-$(cryptsetup luksUUID /dev/sdb1)
# pvs

7. Create volume group
# vgcreate vg1 /dev/mapper/luks-$(cryptsetup luksUUID /dev/sdb1)

---------------------------------------------------------
if you are extending
# vgextend vg1 /dev/mapper/luks-$(cryptsetup luksUUID /dev/sdb1)
# vgs
# lvs
# lvscan
# df -h /var
# lvextend -L +10G /dev/vg1/lv_var
# lvscan
# df -h /var
# xfs_growfs /dev/vg1/lv_var
# df -h /var # verify the change of the size.
---------------------------------------------------------
8. Create a logical volume out of the volume group
# lvcreate -L 10G -n lv_www vg1

9. Create filesystem
# lvscan
# mkfs.xfs /dev/mapper/vg1-lv_www

10. Add an entry to /etc/fstab and mount the device.
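Step 10 doesn't show the commands; a sketch of the fstab entry (the mount point /www is an assumption):

```
/dev/mapper/vg1-lv_www   /www   xfs   defaults   0 0
```

Then create the mount point with mkdir /www, run mount /www, and verify with df -h /www. Since the device is in /etc/crypttab, it gets unlocked (passphrase prompt at boot) before fstab mounts it.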


Note: If you want to have a shorter password, look at the /etc/security/pwquality.conf file to change the minimum length of the password.

Operators - BITWISE operator in C - Language

  BITWISE operator

----------------

"BITWISE operator" is used to perform the operations on the "BITS".

What are BITS?
"BITS" are 1's and 0's.

What is another name of "BITS"?
-> Another name of "BITS" is "binary language".
-> Another name of "BINARY LANGUAGE" is "MACHINE LANGUAGE".

The following are the "BITWISE OPERATORS" in C-Language
1) & - AND BITWISE OPERATOR
2) | - OR BITWISE OPERATOR
3) ^ - XOR BITWISE OPERATOR
       NOTE: XOR stands for "EXCLUSIVE OR"
4) >> - RIGHT SHIFT BITWISE OPERATOR
5) << - LEFT SHIFT BITWISE OPERATOR
6) ~ - ONE'S COMPLEMENT BITWISE OPERATOR


BITWISE OPERATORS
-----------------
1) & - AND BITWISE OPERATOR
-----------------------

& - AND BITWISE OPERATOR is used to perform "AND Logical operations on BITS".

What are the "AND logical operations on BITS"?

-> NOTE: True - 1, False - 0

1 & 0 = 0
0 & 1 = 0
1 & 1 = 1
0 & 0 = 0

Lets say, I have one integer value: 10
10 -> what are the binary bits of 10?

decimal    Binary
10 = 1010
20 = 10100

We collect the remainders. Let's calculate the bit value of 20.
Here, we divide 20 by 2 repeatedly and note the remainder at each step.

2|20
2|10   remainder 0
2|5    remainder 0
2|2    remainder 1
2|1    remainder 0
   1

So, putting the remainders together from the bottom up, we have
10100


As we know,
1 byte = 8 bits
10 = 1010 -> 4 bits
20 = 10100 -> 5 bits

so, 10 = 0 0 0 0 1 0 1 0   -> we just add 4 leading 0's to 1010 to make 8 bits, which is 1 byte
20 = 0 0 0 1 0 1 0 0   -> we add 3 leading 0's to 10100

To make 8 bits (1 byte), we pad with leading 0's, as we know one byte equals 8 bits.

Now, let's apply the AND bitwise operator:

10 = 0 0 0 0 1 0 1 0
20 = 0 0 0 1 0 1 0 0
---------------------
     0 0 0 0 0 0 0 0

Now calculate the decimal value. Every bit of the result is 0, so every term is 0:

(2^7*0) + (2^6*0) + (2^5*0) + (2^4*0) + (2^3*0) + (2^2*0) + (2^1*0) + (2^0*0) = 0

When we apply the 'AND bitwise operator' on the numbers 10 and 20, what is the result?
-> 0 is the result.

#include <stdio.h>

int main()
{
    printf("%d", 10 & 20);   /* prints 0 */
    return 0;
}

2) | - OR BITWISE OPERATOR
--------------------------

OR BITWISE OPERATOR is used to perform  OR logical operations on BITS.

What are the OR LOGICAL OPERATIONS on BITS?
->

1 | 0 = 1    True Or False -> True
0 | 1 = 1    True
1 | 1 = 1    True
0 | 0 = 0    False | False


Applying OR BITWISE Operator
10 = 0   0   0  0  1  0  1  0
20 = 0   0   0  1  0  1  0  0
-----------------------------
     0   0   0  1  1  1  1  0
now, calculate from right to left

(2^7*0) (2^6*0) (2^5*0) (2^4*1) (2^3*1) (2^2*1) (2^1*1) (2^0*0)
---------------------------------------------------------------
   0   +  0    +  0    +   16 +    8 +     4 +     2   +  0 = 30 Result

So, if we apply the 'OR' bitwise operator on the numbers 10 and 20, what is the result?
The answer is 30.

#include <stdio.h>

int main()
{
    printf("%d", 10 | 20);   /* prints 30 */
    return 0;
}


3) XOR BITWISE OPERATOR
========================

BITWISE OPERATORS are used for performing calculations on bits

Here, XOR stands for EXCLUSIVE OR

XOR operations on BITS:
------------------------
It is a little bit different from the OR BITWISE operator.

XOR - ^
1 ^ 0 = 1
0 ^ 1 = 1
0 ^ 0 = 0
1 ^ 1 = 0

Q. What is the difference between 'OR' and 'XOR' bitwise operator?
->
1 | 1 = 1
1 ^ 1 = 0


lets work on XOR bitwise operator.

Applying XOR BITWISE Operator
10 = 0   0   0  0  1  0  1  0
20 = 0   0   0  1  0  1  0  0
-----------------------------
     0   0   0  1  1  1  1  0

Since the 1 bits of 10 and the 1 bits of 20 never fall in the same position, 'OR' and 'XOR' give the same result here.


Result will be same: 30


Now, lets write the code

#include <stdio.h>

int main()
{
    printf("%d", 10 ^ 20);   /* prints 30 */
    return 0;
}


4) >> - RIGHT SHIFT BITWISE Operator
===================================
>> - two greater-than signs.

Kubernetes - Installing and Testing the Components of a Kubernetes Cluster

Cloud Server
Kube Master
Username: cloud_user
Password: WVAPIfMNPa
Kube Master Public IP: 18.234.223.154

Worker 1
Username: cloud_user
Password: WVAPIfMNPa
Worker 1 Public IP: 54.196.231.131

Worker 0
Username: cloud_user
Password: WVAPIfMNPa
Worker 0 Public IP: 54.221.177.24

Installing and Testing the Components of a Kubernetes Cluster

-> We have three nodes and we will install the components necessary to build a running Kubernetes cluster. Once the cluster is built, we will verify all nodes are in the ready status. We will start testing deployments, pods, services, and port forwarding, as well as executing commands from a pod.

-> Log in to all three servers using a terminal program such as PuTTY or MobaXterm.

A. Get the Docker gpg, and add it to your repository.
1. In all three terminals, run the following command to get the Docker gpg key:
root@master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master:~#

2. Then add it to your repository:

# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

B. Get the Kubernetes gpg key, and add it to your repository.
1. In all three terminals, run the following command to get the Kubernetes gpg key:
root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@master:~#

2. Then add it to your repository:
# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

root@ip-10-0-1-102:~# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@ip-10-0-1-102:~#

3. Update the packages:
# sudo apt update

C. Install Docker, kubelet, kubeadm, and kubectl.
1. In all three terminals, run the following command to install Docker, kubelet, kubeadm, and kubectl:
- docker-ce, kubelet, kubeadm, kubectl

# sudo apt install -y docker-ce=5:19.03.10~3-0~ubuntu-focal kubelet=1.18.5-00 kubeadm=1.18.5-00 kubectl=1.18.5-00

D. Initialize the Kubernetes cluster.
1. In the Controller server terminal, run the following command to initialize the cluster using kubeadm:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16

E. Set up local kubeconfig.
1. In the Controller server terminal, run the following commands to set up local kubeconfig:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~# mkdir -p $HOME/.kube
root@master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:~# id -u
0
root@master:~# id -g
0
root@master:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~#

F. Apply the Calico CNI plugin as a network overlay.
1. In the Controller server terminal, run the following command to apply Calico:
# kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

G. Join the worker nodes to the cluster, and verify they have joined successfully.
-> When we ran sudo kubeadm init on the Controller node, there was a kubeadm join command in the output. You'll see it right under this text:

You can now join any number of machines by running the following on each node as root:

-> To join worker nodes to the cluster, we need to run that command, as root (we'll just preface it with sudo) on each of them. It should look something like this:

$ sudo kubeadm join <your unique string from the output of kubeadm init>

H. Run a deployment that includes at least one pod, and verify it was successful.
1. In the Controller server terminal, run the following command to run a deployment of nginx:
# kubectl create deployment nginx --image=nginx

2. Verify its success:
# kubectl get deployments

I. Verify the pod is running and available.
1. In the Controller server terminal, run the following command to verify the pod is up and running:
# kubectl get pods

J. Use port forwarding to extend port 80 to 8081, and verify access to the pod directly.
1. In the Controller server terminal, run the following command to forward the container port 80 to 8081 (replace <pod_name> with the name in the output from the previous command):
# kubectl port-forward <pod_name> 8081:80

2. Open a new terminal session and log in to the Controller server. Then, run this command to verify we can access this container directly:
# curl -I http://127.0.0.1:8081
We should see a status of OK.

K. Execute a command directly on a pod.
1. In the original Controller server terminal, hit Ctrl+C to exit out of the running program.

2. Still in Controller, execute the nginx version command from a pod (using the same <pod_name> as before):
# kubectl exec -it <pod_name> -- nginx -v

L. Create a service, and verify connectivity on the node port.
1. In the original Controller server terminal, run the following command to create a NodePort service:
# kubectl expose deployment nginx --port 80 --type NodePort

2. View the service:
# kubectl get services

3. Get the node the pod resides on.
# kubectl get po -o wide

4. Verify the connectivity by using curl on the NODE from the previous step and the port from when we viewed the service. Make sure to replace YOUR_NODE and YOUR_PORT with appropriate values for this lab.
# curl -I YOUR_NODE:YOUR_PORT
We should see a status of OK.

Conclusion
Congratulations on completing this lab!

root@master:~/.kube# source <(kubectl completion bash)
root@master:~/.kube# echo "source <(kubectl completion bash)" >> ~/.bashrc
root@master:~/.kube# alias k=kubectl
root@master:~/.kube# complete -F __start_kubectl k

root@master:~# alias 'kc=kubectl'
root@master:~# kc get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   17m   v1.18.5
root@master:~# 

root@master:~# kubeadm init
I1130 18:28:31.593188   26928 version.go:252] remote version is much newer: v1.19.4; falling back to: stable-1.18
W1130 18:28:31.688915   26928 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.12
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To see the stack trace of this error execute with --v=5 or higher
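All of these preflight failures point at the same cause: the node already hosts a control plane, so the ports are taken and the static-pod manifests exist. The usual remedy before re-running `kubeadm init` on a node you intend to wipe is `kubeadm reset`. A small sketch of a pre-check, using the manifest path from the error output:

```shell
# Detect an existing control plane before calling 'kubeadm init'.
# Path taken from the FileAvailable preflight errors above.
manifest=/etc/kubernetes/manifests/kube-apiserver.yaml

if [ -e "$manifest" ]; then
    echo "control plane already initialized; run 'kubeadm reset' first"
else
    echo "no existing control plane; safe to run 'kubeadm init'"
fi
```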

root@master:~# pwd
/root

I kept getting this error on the master (the node was already initialized), so I moved on to the worker node. There, I copied the admin config file (~/.kube/config) from the master node and ran the join command against it:
# kubeadm join --discovery-file config

root@ip-10-0-1-102:~# kubeadm join --discovery-file config
W1130 18:35:18.713689    9854 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@ip-10-0-1-102:~# 

root@master:~/.kube# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
ip-10-0-1-102   Ready    <none>   3m20s   v1.18.5
ip-10-0-1-103   Ready    <none>   21s     v1.18.5
master          Ready    master   27m     v1.18.5
root@master:~/.kube#
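A scripted health check can count how many nodes report Ready. A sketch that parses output in the format above (sample data copied from this session):

```shell
# 'kubectl get nodes' body, copied from the session above.
nodes='ip-10-0-1-102   Ready    <none>   3m20s   v1.18.5
ip-10-0-1-103   Ready    <none>   21s     v1.18.5
master          Ready    master   27m     v1.18.5'

# STATUS is the 2nd column; count the rows where it is exactly "Ready".
ready=$(echo "$nodes" | awk '$2 == "Ready" { c++ } END { print c }')
echo "$ready nodes Ready"   # prints: 3 nodes Ready
```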

------------------
root@master:~/.kube# kc port-forward nginx-f89759699-tm2x7 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

^Z
[2]+  Stopped                 kubectl port-forward nginx-f89759699-tm2x7 8081:80
root@master:~/.kube# bg
[2]+ kubectl port-forward nginx-f89759699-tm2x7 8081:80 &
root@master:~/.kube# curl -I http://127.0.0.1:8081
Handling connection for 8081

HTTP/1.1 200 OK
Server: nginx/1.19.5
Date: Mon, 30 Nov 2020 18:43:54 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Nov 2020 13:02:03 GMT
Connection: keep-alive
ETag: "5fbd044b-264"
Accept-Ranges: bytes
root@master:~/.kube#
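The 200 in the first header line is what the lab's "status of OK" checks refer to. A minimal sketch of extracting the status code, using the line captured above:

```shell
# First line of the 'curl -I' output captured above.
status_line='HTTP/1.1 200 OK'

# The status code is the 2nd whitespace-separated field.
code=$(echo "$status_line" | awk '{print $2}')
[ "$code" = "200" ] && echo "pod is serving"   # prints: pod is serving
```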

Saturday, July 25, 2020

Kubernetes (Minikube) setup on your PC

1. Download kubectl, minikube, and minishift, and store them at the following location:
C:\Program Files\Kubernetes\minikube

2. Add this path to the PATH environment variable
Start -> type "env" -> click "Edit the system environment variables"
Click "Environment Variables", then double-click PATH in the System variables section.
Click New and paste your path.

3. Open your command prompt and type minikube
If you get usage output rather than an error message, it is set up correctly. If you get an error, fix your PATH before continuing.

4. Download and install virtualbox from virtualbox.org

5. Open your command prompt and run the command below:
> minikube start --driver=virtualbox

This command downloads all required packages and sets up a single-node Kubernetes cluster in a VirtualBox VM.

6. I alias the kubectl command to kc so I have a shortcut.


7. Verify the configuration
C:\Users\Admin>cd .kube
C:\Users\Admin\.kube>dir
C:\Users\Admin\.kube>notepad config

C:\Users\Admin\.kube>kc cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
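The cluster-info output can also feed scripts. A sketch that extracts the apiserver URL from the line above:

```shell
# Line copied from the 'kc cluster-info' output above.
line='Kubernetes master is running at https://192.168.99.100:8443'

# Pull out the https URL (everything up to the next space).
url=$(echo "$line" | grep -o 'https://[^ ]*')
echo "$url"   # prints: https://192.168.99.100:8443
```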


C:\Users\Admin>kc get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   42m


8. Install following tools
- Visual Studio Code
- Node.js
- git
- OpenShift CLI (oc)


Reference Documents
https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://github.com/kubernetes/minikube/releases/tag/v1.12.0
https://cloud.redhat.com/openshift/install
https://code.visualstudio.com/docs/?dv=win
https://nodejs.org/en/download/
https://cloud.redhat.com/openshift/install/crc/installer-provisioned