Tuesday, June 14, 2016

RHEL7 - Configure iSCSI Initiator

Configure iSCSI Initiator on RHEL7

We have already set up and configured the iSCSI target. Now we will configure the iSCSI initiator on the client side to connect to that target.

1. Install the iscsi-initiator-utils and lsscsi packages
# yum install iscsi-initiator-utils lsscsi -y

2. Since this is going to be the iSCSI initiator, make sure you use the same initiator name that you used while configuring the iSCSI target. If you change it from the default to something else, then you have to update the ACL on the iSCSI target. Make sure they are the same on server and client.
# cat /etc/iscsi/initiatorname.iscsi

If the file has been modified, restart the service:
# systemctl restart iscsid
# systemctl status iscsid -l
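
For reference, the file contains a single InitiatorName line. The IQN below is only an example; use whatever name matches the ACL entry you created on the target.

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2015-12.local.expanor:sama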

3. Performing the discovery
Now, let's perform a discovery against the IP address of the target server to see what iSCSI target configuration is available. Note: the iscsiadm command has several modes.

# iscsiadm --mode discovery --type sendtargets --portal 192.168.10.120 --discover
192.168.10.120:3260,1 iqn.2016-12.local.expanor:target

4. Upon a successful discovery, you can get more information about the target using the -P option. This option shows details about the current mode. In all modes, the print levels 0 and 1 are supported.
# iscsiadm --mode discovery -P 1

5. Make a connection
Now, upon successful iSCSI discovery, you can log in to the target and make a connection.

# iscsiadm -m node -T iqn.2016-12.local.expanor:target -l
or

# iscsiadm --mode node --targetname iqn.2016-12.local.expanor:target --portal 192.168.10.120:3260 --login

options detail
-m or --mode
This option tells iscsiadm to enter "node" mode, where an actual connection to the target can be established.

-T or --targetname
This option specifies the name of the target, as discovered during the iSCSI discovery process.

-p or --portal
This option specifies the target IP address and port.

-l or --login
This option authenticates to the target and stores the credentials, so the connection can be re-established after a reboot.

Note: After a successful login to the iSCSI target server, the connection is persistent. The iscsid and iscsi services read the locally stored iSCSI configuration, so the connection is re-established automatically even after a reboot.
If for some reason you don't want to connect to the iSCSI target server after a reboot, you have to log out to disconnect the session and delete the corresponding IQN subdirectory and all of its contents.
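
If you only want to stop the automatic reconnect without deleting the stored record, another option is to switch the node's startup mode to manual. A sketch, using the example target and portal from this post:

# iscsiadm -m node -T iqn.2016-12.local.expanor:target -p 192.168.10.120 --op update -n node.startup -v manual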

The commands below will log out of the target and wipe out the stored configuration.
# iscsiadm --mode node --targetname iqn.2016-12.local.expanor:target --logout
# iscsiadm --mode node --targetname iqn.2016-12.local.expanor:target --op=delete

6. Display and review all active iSCSI sessions
# iscsiadm -m session -P 0

Note: You can change -P 0 to 1, 2 or 3 for more information.

7. Now, all iSCSI devices (block and fileio disks) shared from the target server are available to the iSCSI initiator. You can use lsscsi or lsblk to list these devices along with the LIO devices.

# lsscsi
# lsblk --scsi
# fdisk -l

Mounting iSCSI Devices

8. Now you can create filesystems on the presented disks.
Create an XFS filesystem on each disk and use blkid to find the UUIDs.

# iscsiadm -m session -P3
# cat /proc/partitions

# mkfs.xfs /dev/sdc
# mkfs.xfs /dev/sdd
# mkfs.xfs /dev/sde

Note: Please understand the disk naming convention and know what disk you are using.

# blkid /dev/sdc
# blkid /dev/sdd
# blkid /dev/sde

9. Now, add entries to /etc/fstab and mount them.
# vi /etc/fstab
UUID=XXXXXXXX-XXXX-XXXX-XXXXXX /opt/iscsi1 xfs _netdev 0 2
Add an entry for each filesystem.
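
As a sketch, the three entries might look like this; the UUIDs are placeholders, so substitute the values reported by blkid above.

UUID=<uuid-of-sdc>  /opt/iscsi1  xfs  _netdev  0 2
UUID=<uuid-of-sdd>  /opt/iscsi2  xfs  _netdev  0 2
UUID=<uuid-of-sde>  /opt/iscsi3  xfs  _netdev  0 2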

# mkdir /opt/iscsi1 /opt/iscsi2 /opt/iscsi3
# grep -i iscsi /etc/fstab

# mount -a
# mount|grep iscsi
# df -hP | grep opt

10. Log out and disconnect
If you are done and want to disconnect the session, perform the following tasks:

a. unmount the filesystem
# umount /opt/iscsi1

b. Remove entry from /etc/fstab
# vi /etc/fstab

c. Log out and verify no active sessions are available
# iscsiadm -m node -u
# iscsiadm -m session -P 0

Help tips
# man 8 iscsiadm

Overview

- Configure the iSCSI target on the server and export the storage over the network.
- Set up an iSCSI initiator on a client system to connect to the target server.
- Upon connection, the iSCSI initiator should be able to use the exported disks on the client like local storage.


Solaris 10 - Bart set up

BART (Basic Audit Reporting Tool) is installed by default on Solaris servers. It is a file integrity/audit tool.

1. Plan your BART configuration location
[root@sun-audit-v01]# pwd
/var/audit/BART
[root@sun-audit-v01]# ls
bart2.sh  bart.sh   compare   manifest
[root@sun-audit-v01]#

2. Your script
[root@sun-audit-v01]# more /var/audit/BART/bart.sh
#!/bin/ksh
# Declare Variables
BARTDIR=/var/audit/BART
COMPAREDIR=${BARTDIR}/compare
MANIFESTDIR=${BARTDIR}/manifest
GDATE=/usr/local/bin/gdate
JASSDIR=/var/opt/SUNWjass/BART
RULES=${JASSDIR}/rules.txt
BARTCMD=/usr/bin/bart
HOST=`/usr/bin/hostname | cut -d'.' -f1`
TODAYBARTFILE=${MANIFESTDIR}/${HOST}-manifest-`${GDATE} +%Y%m%d`
touch ${TODAYBARTFILE}
COMPAREFILE=${COMPAREDIR}/compare-`${GDATE} +%Y%m%d`
YESTERDAYBARTFILE=${MANIFESTDIR}/${HOST}-manifest-`${GDATE} +%Y%m%d -d "yesterday"`
# Check for existence of variables
[ ! -d $BARTDIR ] && echo "$BARTDIR does not exist" && exit 1;
[ ! -d $COMPAREDIR ] && echo "$COMPAREDIR does not exist" && exit 1;
[ ! -d $MANIFESTDIR ] && echo "$MANIFESTDIR does not exist" && exit 1;
[ ! -f $GDATE ] && echo "$GDATE does not exist" && exit 1;
[ ! -d $JASSDIR ] && echo "$JASSDIR does not exist" && exit 1;
[ ! -f $RULES ] && echo "$RULES does not exist" && exit 1;
# Let's generate BART Report for Today
$BARTCMD create -r $RULES > ${TODAYBARTFILE}
# Let's do a compare from yesterday and see what changed
if [[ ! -f ${YESTERDAYBARTFILE} ]]; then
  echo "Yesterday's manifest ${YESTERDAYBARTFILE} does not exist" && exit 1;
else
  $BARTCMD compare ${YESTERDAYBARTFILE} ${TODAYBARTFILE} > ${COMPAREFILE}
fi
[root@sun-audit-v01]#



[root@sun-audit-v01]# more bart2.sh
#!/bin/ksh
# Declare Variables
BARTDIR=/var/audit/BART
COMPAREDIR=${BARTDIR}/compare
GDATE=/usr/local/bin/gdate
JASSDIR=/var/opt/SUNWjass/BART
RULES=${JASSDIR}/rules.txt
BARTCMD=/usr/bin/bart
HOST=`/usr/bin/hostname | cut -d'.' -f1`
# Check for
[ ! -d $BARTDIR ] && echo "$BARTDIR does not exist" && exit 1;
[ ! -d $COMPAREDIR ] && echo "$COMPAREDIR does not exist" && exit 1;
[ ! -f $GDATE ] && echo "$GDATE does not exist" && exit 1;
[ ! -d $JASSDIR ] && echo "$JASSDIR does not exist" && exit 1;
[ ! -f $RULES ] && echo "$RULES does not exist" && exit 1;
# Let's generate BART Report for Today
#$BARTCMD create -r $RULES > ${BARTDIR}/manifest-${HOST}-`${GDATE} +%Y%m%d`
#touch ${BARTDIR}/manifest-${HOST}-`${GDATE} +%Y%m%d`
touch ${BARTDIR}/${HOST}-manifest-`${GDATE} +%Y%m%d`
[root@sun-audit-v01]#



[root@sun-audit-v01]# pwd
/var/opt/SUNWjass/BART
[root@sun-audit-v01]# ls
initial-20160610           manifests                  rules.JASS.20131217102721
manifest03                 rules                      rules.txt
[root@sun-audit-v01]#

3. Your rule file
[root@sun-audit-v01]# cat rules
#
# Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)rules-secure       1.2     05/06/08 SMI"
#
# This file is supplied as part of the Solaris Security Toolkit and
# is used to configure BART rules.  See bart_rules(4) for file format.
#
# Note: S82mkdtab is filtered out to avoid false failures. This
# file is deleted after Solaris is first booted.
#

/                       !core !tmp/ !var/ !S82mkdtab
CHECK all
IGNORE contents mtime
/etc/rc*.d              S* !S82mkdtab
/sbin                   !core
/usr/bin                !core
/usr/sbin               !core
CHECK contents
[root@sun-audit-v01]#


^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[root@sun-audit-v01]# cat rules.txt
#
# Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)rules-secure       1.2     05/06/08 SMI"
#
# This file is supplied as part of the Solaris Security Toolkit and
# is used to configure BART rules.  See bart_rules(4) for file format.
#
# Note: S82mkdtab is filtered out to avoid false failures. This
# file is deleted after Solaris is first booted.
#

#/                      !core !tmp/ !var/ !S82mkdtab
#CHECK all
#IGNORE contents mtime
#/etc/rc*.d             S* !S82mkdtab
#/sbin                  !core
#/usr/bin               !core
#/usr/sbin              !core
#CHECK contents
IGNORE all
CHECK contents mtime
/usr/local
/etc
/usr/bin
/usr/sbin
/data/oracle
[root@sun-audit-v01]#


[root@sun-audit-v01]# crontab -l
#ident  "@(#)root       1.21    04/03/23 SMI"
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
#10 3 * * * /usr/lib/krb5/kprop_script ___slave_kdcs___
0 2 * * 4 /usr/lib/acct/dodisk
30 0,2,4,6,8,10,12,14,16,18,20,22  * * * /usr/local/bin/purge_audit.sh
0 3 * * * /var/audit/BART/bart.sh
[root@sun-audit-v01]#


[root@sun-audit-v01]# cat /usr/local/bin/purge_audit.sh
#!/bin/ksh
## Audit Logs are currently stored in two places
## /var/audit directory and the /var/adm/auditlog file
## logadm rotates the auditlog file nightly and keeps up to 5 copies.
## /var/audit directory stores binary data of the audit logs
## so it is unnecessary to keep duplicate logs.  /var/audit fills rapidly
## and needs to be purged.
if [[ ! -d /var/audit ]]; then
     echo "/var/audit directory does not exist"
     exit 1;
 else
# be careful with this command
 /usr/sbin/audit -n
  find /var/audit -type f ! -name "*terminated*" -exec rm {} \;
fi
[root@sun-audit-v01]#

Script - Move file/dir from one location to another

$ more movefile.sh
#!/bin/sh
# @expanor, LLC
# Change permission and move files to different location
# K B. - Tue Jun 30 10:36:09 EDT 2015
#
FIXnMOVE () {
        # $1 = source directory, $2 = destination directory
        cd "$1" || exit 1
        chmod 740 *; chown sam:other *
        mv * "$2"
}
FIXnMOVE /export/home/sam/dir1 /export/home/sam/dir2
echo Done
echo ""
# EOF:

Monday, June 13, 2016

RHEL7 - Configuring an iSCSI target server

RHEL7 - Configuring an iSCSI target

With iSCSI, a client system accesses disk storage provided by a server over the network.

The iSCSI initiator (client) accesses the storage exported by the iSCSI target server as if it were a local disk. Here are the steps to set up both an iSCSI target and an iSCSI initiator and use them together.

Plan:
iSCSI target server: sam.expanor.local/192.168.10.120
- This host provides the disk space accessible to the client over the network.

iSCSI initiator client: sama.expanor.local/192.168.10.110
- This host is the client system that accesses the iSCSI target on the target server over the network.

Pre-plan the following disk partitions
# cat /proc/partitions

/dev/sdb
/dev/sdc

# fdisk /dev/sdb
# pvcreate /dev/sdb1
# vgcreate myvg /dev/sdb1

# vgs


# lvcreate -L 2G -n mylv1 myvg
# lvcreate -L 2G -n mylv2 myvg

# lvs
# lvscan
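
If you prefer a non-interactive alternative to the fdisk session above, parted can create the partition in one shot. This is only a sketch; adjust the device name and sizes to your own layout.

# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 1MiB 100%
# pvcreate /dev/sdb1
# vgcreate myvg /dev/sdb1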


A. Configure iSCSI Target
- Let's configure the iSCSI target on the server, which will provide its disk space over the network to the client system (the iSCSI initiator).
- Let's install the 'targetcli' package on the server, which offers a shell-like environment to view and modify the target configuration and to export local storage resources such as files, volumes or RAM disks to other external systems. It provides filesystem-like navigation with commands such as cd and ls.

1. Install targetcli package
# yum install targetcli

2. Enable and start the target service.
# systemctl enable target
# systemctl start target

3. Now, run the targetcli command. It will give you the targetcli prompt. Just run ls to see the default interface.

# targetcli
/> ls

The targetcli command supports tab completion like the bash shell. Just press Tab a couple of times to see the available options. You can move up and back using the cd command.

4. Creating a Backstore
Backstores offer different ways of storing data locally and exporting it to an external system. The available options are block, fileio, pscsi and ramdisk. Here, we will be using the block and fileio options.

Now, you have to configure the backstores to set up an iSCSI target. Type cd /backstores to go to the backstores branch of targetcli. Here it allows you to specify which backing storage is going to be used.

/> cd backstores/


Now, type block/ create block1 /dev/myvg/mylv1. This will add the LVM volume we created as a backstore in the iSCSI target.

A fileio backstore is a file on the filesystem that is created with a predefined size, but its performance is not as good as a block backstore. Use the write_back=false option to disable caching, which reduces performance but also reduces the chance of data loss.

/backstores> block/ create block1 /dev/myvg/mylv1
/backstores> block/ create block2 /dev/myvg/mylv2
/backstores> block create block3 dev=/dev/sdc
or
/> backstores/block create block1 /dev/myvg/mylv1
/> backstores/block create block2 /dev/myvg/mylv2
/> backstores/block create block3 dev=/dev/sdc


/> backstores/fileio create testfile1 /root/fileio1 500M write_back=false
or
/backstores> fileio/ create testfile2 /root/diskfile1 500M

Pay attention to the command and the output.

Type ls at the prompt to see the block and fileio backstore listing.
/backstores> ls


5. Create the iSCSI Target and Portal

Now, the backstores part is done. Let's create the iSCSI target.

/backstores> cd /
/> cd iscsi
/iscsi> ls

6. Now, create an iSCSI target with IQN (iqn.2016-12.local.expanor) and iSCSI target name (target).

Note: The IQN naming convention used here is iqn.YYYY-MM followed by the reversed DNS domain name.

/iscsi> create iqn.2016-12.local.expanor:target

Note: rather than specifying an IQN, you can just press Enter after typing create, and the system will automatically generate a default IQN and target name.

Run ls command to see the iSCSI target listing
/iscsi> ls

The output shows the contents of the iscsi branch for the IQN you just created, and the TPG tpg1, which is created automatically.

7. Create LUNs
Now, you have to create the LUNs. We need to associate the backing devices with a specific TPG. We create each LUN from a previously defined backstore.

a. Go to target portal group (TPG)
cd to the IQN that you created on the iSCSI target. Type cd iqn.[Tab] to go to the IQN.

/iscsi> cd iqn.2016-12.local.expanor:target/tpg1/

b. Create a LUN by specifying any backstore we created before. We will create LUNs for the block and fileio backstores. LUNs created this way have read/write permission by default.

/iscsi/iqn.2016-12.local.expanor:target/tpg1> luns/ create /backstores/block/block1
/iscsi/iqn.2016-12.local.expanor:target/tpg1> luns/ create /backstores/block/block2
/iscsi/iqn.2016-12.local.expanor:target/tpg1> luns/ create /backstores/block/block3

/iscsi/iqn.2016-12.local.expanor:target/tpg1> luns/ create /backstores/fileio/testfile1
/iscsi/iqn.2016-12.local.expanor:target/tpg1> luns/ create /backstores/fileio/testfile2

Note: When creating LUNs, you can specify additional parameters. For example, if you want to assign a specific LUN ID to a specific storage object, you can do the following:
> create lun=2 storage_object=/backstores/block/block1

c. Verify you can see the LUNs just created.
/iscsi/iqn.2016-12.local.expanor:target/tpg1> ls

8. Create an access control list (ACL)
Now, we have to create ACLs to allow access to the iSCSI target, because the iSCSI initiator cannot access it without an ACL. Any new LUN created will be mapped to each ACL associated with the TPG, because the auto_add_mapped_luns feature is on by default.

a. Before assigning the ACL, go to your client system (iSCSI initiator) and get the contents of the /etc/iscsi/initiatorname.iscsi file.

Note: you can leave the default value, but if you edit this file, make sure the ACL entry on the iSCSI target server matches the contents of this file.

b. On the iSCSI initiator, record the InitiatorName; it will be used in the ACL.
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn........

c. To create the ACL, cd to the IQN that you created on the iSCSI target. Type cd iqn.[Tab] to go to the IQN.

/iscsi> cd iqn.2016-12.local.expanor:target/tpg1/acls/
/iscsi/iqn.20...r:target/tpg1/acls> create iqn.<from_client>

or

/iscsi/iqn.20...r:target/tpg1/acls> create iqn.2015-12.local.expanor:sama

The above command creates a node ACL that allows the sama server (the iSCSI initiator) to access the IQN you just created on the server. Make sure the name matches the contents of the initiatorname.iscsi file on the iSCSI initiator.

You can repeat the same steps for other iSCSI initiators that need to access the iSCSI target.

d. Now, all the LUNs created within the iSCSI target will have ACL mapped.

/iscsi/iqn.20...get/tpg1/acls> ls

e. Go back to iSCSI target root and view the configuration
/iscsi/iqn.20...get/tpg1/acls> cd ../..
/iscsi/iqn.20...expanor:target> ls

f. Save the configuration
- Exit from the prompt and the configuration is saved to the /etc/target/saveconfig.json file.

This config file is in JSON format; do not edit it directly. Upon saving the configuration, the iSCSI target service also starts listening on port 3260 of the specified portal IP address if a portal is configured.
# netstat -antup | grep 3260

9. Creating the portal

The portal binds the iSCSI configuration to a specific IP address on the iSCSI target server. If you have a specific static address on your iSCSI target server, you can use the portals create command with the IP address of the target.

/iscsi/iqn.20...r:target/tpg1> portals/ create 192.168.10.120

Please note, this step is only needed if you want the iSCSI target to offer its services on a specific IP address. If you do not create a portal, a default portal is used that binds to the address 0.0.0.0, which represents all IP addresses on your server.

10. iSCSI firewall rules

Now, allow traffic to pass through the firewall on this port
# firewall-cmd --add-port=3260/tcp --permanent
# firewall-cmd --reload
# firewall-cmd --list-all

11. Review the configuration file
# cat /etc/target/saveconfig.json

12. Check the status of target service and verify that the target is currently active.
# systemctl start target
# systemctl status target

Finally, the iSCSI target server should accept connections from the iSCSI initiator on the client system.

High Level Overview

- Create the backstores to provide the storage that the iSCSI target is sharing.
- Create an IQN, which also automatically creates the TPG.
- Create ACLs to allow nodes to access the target.
- Create the LUNs. Note the association between the ACLs and the LUNs.
- Configure the portal and write the configuration.
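
The same configuration can also be driven non-interactively from the shell, which is handy for scripting. This is only a sketch using the example names from this post; verify each path with ls inside targetcli before relying on it.

# targetcli /backstores/block create block1 /dev/myvg/mylv1
# targetcli /iscsi create iqn.2016-12.local.expanor:target
# targetcli /iscsi/iqn.2016-12.local.expanor:target/tpg1/luns create /backstores/block/block1
# targetcli /iscsi/iqn.2016-12.local.expanor:target/tpg1/acls create iqn.2015-12.local.expanor:sama
# targetcli /iscsi/iqn.2016-12.local.expanor:target/tpg1/portals create 192.168.10.120
# targetcli saveconfig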

Note: If you type ls at the targetcli root, you will see the whole layout that corresponds to these configuration steps. Use the man page of the targetcli command to get more help on the iSCSI configuration.

This concludes the iSCSI target server configuration.

For iSCSI initiator, please go to initiator page.

RHEL7 - How To Configure Network Teaming on RHEL7

How To Configure Network Teaming on RHEL7

- Network teaming aggregates multiple network links into a single logical link, which increases either network throughput or redundancy depending on the setup.

- In this process, we assign an IP address to a group of two network interfaces, either to combine their throughput or for fault tolerance, so that if one interface fails we can fail over to the other.

- Link aggregation was done using the network bonding method on older versions of RHEL. RHEL7 introduces the new concept of network teaming, which offers almost all the features of bonding along with some new ones.

Plan:
- In addition to your current interface, add two new interfaces to your system.
- You can configure teaming using the nmcli command line tool, nmtui, or the graphical user interface. In this example we use nmcli.
- See 'man nmcli-examples' and 'teamnl nm-team options' for other available options.


Steps to create and configure a network team

1. Install a teaming daemon
# yum search teamd
# yum install teamd

2. Configure teamd using nmcli command line tool
# nmcli con show

Look at the output under DEVICE and pick the interface that you are going to use.

3. Create a team called team0
# nmcli con add type team con-name team0

4. List the devices
# nmcli con show

5. Now add interface to the team0
# nmcli con add type team-slave ifname <interface_Name1> master team0
# nmcli con add type team-slave ifname <interface_name2> master team0

6. Now, list the devices
# nmcli con show

7. Verify new interface info is created.
# cd /etc/sysconfig/network-scripts/
# ls -ltr ifcfg*

You will see the newly added configuration files for the team and the interfaces.
Note: Do not edit these files manually. If you do, reload the config using the "nmcli con reload" command.

8. Now, assign an IP address. By default it uses DHCP.
# nmcli con mod team0 ipv4.method manual
# nmcli con mod team0 ipv4.addresses 192.168.10.33/24
# nmcli con mod team0 ipv4.gateway 192.168.10.1

9. Now, bring the interface up
# nmcli con up team-slave-<your_interface1>
# nmcli con up team-slave-<your_interface2>

10. Verify the ip address
# ip addr show

You should have team up and running.

Issue?
If team0 is not up, try to bring it up:
# nmcli con up team0
# systemctl restart network

11. Check the team status
# teamdctl nm-team state

The output shows,
runner: roundrobin
and also shows you two interfaces


Some available runners are

-> roundrobin: This is the default method. It sends packets over all interfaces in the team in a round-robin fashion, one interface at a time, then the next.

-> broadcast: All the traffic is sent over to all ports.

-> activebackup: One interface is actively in use while the other sits aside as a backup. The link is constantly monitored, and traffic fails over if there is an issue with the active interface.

-> loadbalance: Traffic is balanced over all interfaces based on Tx traffic, so an equal load should be shared across the available interfaces.

12. Modify the team
a. If you would like to use a different runner than the default, specify it at initial setup time:
# nmcli con add type team con-name team0 config '{ "runner": {"name": "broadcast"}}'

b. Or modify the current team by editing the configuration file /etc/sysconfig/network-scripts/ifcfg-team0 and adding a line like the following at the bottom of the file:

TEAM_CONFIG='{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'

13. After the config change, restart the network service
# systemctl restart network
# teamdctl nm-team state

Verify that the runner has changed.

This is one way to increase performance or redundancy of your system network connection.
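
Putting the whole sequence together, here is a minimal sketch that assumes the two spare interfaces are named eno1 and eno2; substitute the names shown by nmcli con show on your system.

# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
# nmcli con add type team-slave con-name team0-port1 ifname eno1 master team0
# nmcli con add type team-slave con-name team0-port2 ifname eno2 master team0
# nmcli con mod team0 ipv4.method manual ipv4.addresses 192.168.10.33/24 ipv4.gateway 192.168.10.1
# nmcli con up team0-port1
# nmcli con up team0-port2
# nmcli con up team0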

Saturday, June 11, 2016

RHEL7 - Introduction to firewall

firewalld is a dynamic firewall manager that acts as a front end to iptables; firewall-cmd is the command line tool used to manage its rules. It allows you to create firewall zones and associate sources, services, ports and rules with them.

Check if package is installed
# rpm -qa | grep -i firewalld

Find the config file location
# rpm -qc firewalld


View the config file content
# vi /etc/firewalld/firewalld.conf

Check if firewall is running
# systemctl status firewalld
or
# firewall-cmd --state

Check what is allowed and what is not
# firewall-cmd --list-all

Note the active zone and the interface it is using

Playing with zones
List default zone
# firewall-cmd --get-default-zone

[root@sam yum.repos.d]# firewall-cmd --list-all
public (default, active)
  interfaces: enp0s25
  sources:
  services: dhcp dhcpv6-client ssh tftp
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

List the zones available on the system
# firewall-cmd --get-zones

List active zones
# firewall-cmd --get-active-zones
In the active zone output, you will also see which interface the zone is bound to.

List the services
# firewall-cmd --get-services

Check the configuration of the services
1. The default service location and 2. Custom/user-defined services

1. The default service location
# cd /usr/lib/firewalld
[root@sam firewalld]# ls
icmptypes  services  xmlschema  zones
# cd services
# cat ssh.xml

review the file content
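
Custom/user-defined services mentioned above live under /etc/firewalld/services. As a sketch, a hypothetical service called myapp listening on 9000/tcp could be defined like this and then added by name after a reload:

# cat > /etc/firewalld/services/myapp.xml << 'EOF'
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>myapp</short>
  <description>Example custom service on 9000/tcp</description>
  <port protocol="tcp" port="9000"/>
</service>
EOF
# firewall-cmd --reload
# firewall-cmd --add-service=myapp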

Playing with zone
Get default zone
# firewall-cmd --get-default-zone

Change your default zone
# firewall-cmd --set-default-zone=internal

Now, add port to the configuration
# firewall-cmd --add-port=22/tcp
# firewall-cmd --list-all

Now, remove the port
# firewall-cmd --remove-port=22/tcp
# firewall-cmd --add-service=ssh

reload the service
# firewall-cmd --reload

After a reload, all runtime-only configuration is lost, so you have to use the --permanent flag when adding the service:

# firewall-cmd --add-service=mysql --permanent

Reload the service to apply the change.
# firewall-cmd --reload

Make sure to reload, otherwise the permanent change will not take effect.


adding multiple ports
# firewall-cmd --add-port={2800/tcp,300/tcp,4300/tcp}

Adding a port range
# firewall-cmd --add-port=4000-4200/tcp

or adding multiple services
# firewall-cmd --add-service={mysql,ssh,http,https,ldap}
# firewall-cmd --list-all

You can remove multiple services the same way
# firewall-cmd --remove-service={mysql,ssh,http,https,ldap}
# firewall-cmd --list-all


Port forwarding

# netstat -ntlp
# firewall-cmd --list-all
review the ports open

Now add new port to the firewall
# firewall-cmd --add-port=8080/tcp

# firewall-cmd --list-all
# netstat -ntlp

# firewall-cmd --add-forward-port=port=8080:proto=tcp:toport=80

Now, anything coming in on port 8080/tcp will be forwarded to port 80 on the same address; you will see that toaddr is empty.
If you want to send it to a different port on a different address, do the following:

# firewall-cmd --add-forward-port=port=8080:proto=tcp:toport=80:toaddress=192.168.10.110
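
Note: forwarding to a different address only works when masquerading is enabled on the zone, so you may also need:

# firewall-cmd --add-masquerade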



- Configure more complex rules using rich rules

Allow all traffic from the .20 server and block traffic from the .30 server
# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.10.20" accept'
# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.10.30" drop'

Now, lets enable web server on this host
# firewall-cmd --add-service=http

Now, go to both servers and try to access the web server.

On the .20 host:
$ elinks <web_server_ip>
or
$ curl <web_server_ip>
You should be able to access the server.

Then go to the .30 host and try the same; the request should be blocked.

Friday, June 10, 2016

RHEL7 - Network information configuration

The ifconfig command does not display a newly added secondary IP address when it is added with the ip command without an interface label.

1. List current interface configuration information
[root@sam ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:23:ae:b0:32:0c brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.120/24 brd 192.168.10.255 scope global enp0s25
       valid_lft forever preferred_lft forever
    inet 192.168.10.221/24 brd 192.168.10.255 scope global secondary enp0s25:0
       valid_lft forever preferred_lft forever
    inet 192.168.10.118/24 brd 192.168.10.255 scope global secondary enp0s25:1
       valid_lft forever preferred_lft forever
    inet 192.168.10.122/24 scope global secondary enp0s25
       valid_lft forever preferred_lft forever
    inet6 fe80::223:aeff:feb0:320c/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 52:54:00:a0:6d:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
    link/ether 52:54:00:a0:6d:79 brd ff:ff:ff:ff:ff:ff

2. Add a new IP address to the interface
[root@sam ~]# ip addr add dev enp0s25:3 192.168.10.123/24
[root@sam ~]#
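
If you also want the legacy ifconfig output to show the new address, add it with an interface label instead; a sketch using the same address:

# ip addr add 192.168.10.123/24 dev enp0s25 label enp0s25:3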


3. Open a new PuTTY session to the new address and you should get a login prompt.

4. List current IP information
[root@sam ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:23:ae:b0:32:0c brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.120/24 brd 192.168.10.255 scope global enp0s25
       valid_lft forever preferred_lft forever
    inet 192.168.10.221/24 brd 192.168.10.255 scope global secondary enp0s25:0
       valid_lft forever preferred_lft forever
    inet 192.168.10.118/24 brd 192.168.10.255 scope global secondary enp0s25:1
       valid_lft forever preferred_lft forever
    inet 192.168.10.122/24 scope global secondary enp0s25
       valid_lft forever preferred_lft forever
    inet 192.168.10.123/24 scope global secondary enp0s25
       valid_lft forever preferred_lft forever
    inet6 fe80::223:aeff:feb0:320c/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 52:54:00:a0:6d:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
    link/ether 52:54:00:a0:6d:79 brd ff:ff:ff:ff:ff:ff
[root@sam ~]#

5. Display interface information using ifconfig command
[root@sam ~]# ifconfig -a
enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.120  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::223:aeff:feb0:320c  prefixlen 64  scopeid 0x20<link>
        ether 00:23:ae:b0:32:0c  txqueuelen 1000  (Ethernet)
        RX packets 475862  bytes 32081767 (30.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 154846  bytes 59513891 (56.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 21  memory 0xf7ae0000-f7b00000

enp0s25:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.221  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:23:ae:b0:32:0c  txqueuelen 1000  (Ethernet)
        device interrupt 21  memory 0xf7ae0000-f7b00000

enp0s25:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.118  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:23:ae:b0:32:0c  txqueuelen 1000  (Ethernet)
        device interrupt 21  memory 0xf7ae0000-f7b00000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 476  bytes 139565 (136.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 476  bytes 139565 (136.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:a0:6d:79  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:a0:6d:79  txqueuelen 500  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@sam ~]#


Wednesday, June 8, 2016

RHEL7 - Controlling System, Services and Daemons

http://linuxtab.blogspot.com/2015/12/rhel7-comparison-of-service-utility.html

RHEL7 - Controlling Services and Daemons

Listing unit files with systemctl

List the available unit types on your system
# systemctl -t help


1. Find the state of all units
# systemctl

2. Find the state of all available service units on your system.
# systemctl --type=service

3. Find the status of a service
# systemctl status sshd.service -l
The -l option gives you detailed output.

=> List only failed services
# systemctl --failed --type=service

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

4. Check if particular unit is active and enabled to start at boot time.
# systemctl is-active sshd
# systemctl is-enabled sshd

5. List the active state of all loaded units.
# systemctl list-units --type=service

List loaded units regardless of state (active and inactive)
# systemctl list-units --type=service --all

6. Find whether a unit is enabled to start automatically at boot or not
# systemctl list-unit-files --type=service

State            Description
-----            -----------
loaded           Unit configuration file has been processed
active (running) Running with one or more continuing processes
active (waiting) Running but waiting for an event
active (exited)  Successfully completed a one-time configuration
inactive         Not running
enabled          Will be started at boot time
disabled         Will not be started at boot time
static           Cannot be enabled, but may be started automatically by an enabled unit


Service status


1. Check the status of a service.
# systemctl status

2. Verify that the process is running.
# ps -up PID

3. Stop the service and verify the status
# systemctl stop sshd.service
# systemctl status sshd.service

4. Start the service and view the status. The process ID will change
# systemctl start sshd.service
# systemctl status sshd.service


5. Stop, then start, the service in a single command.
# systemctl restart sshd.service
# systemctl status sshd.service

6. Reload a service after config file change
# systemctl reload sshd.service
# systemctl status sshd.service

Note: When you restart, the process ID changes. But when you reload, the service re-reads the configuration without a complete stop and start, so the process ID remains the same.

7. Find the service dependency tree
# systemctl list-dependencies sshd.service

8. Disable the service and verify the status.
# systemctl disable sshd.service
# systemctl status sshd.service
Note that disabling a service does not stop the service. It only prevents the service from starting at boot time.

Masking services
To prevent accidentally starting a service, we can mask it. Basically, masking creates a link (to /dev/null) in the configuration directories so that if the service is started, nothing will happen.


To mask the service
# systemctl mask crond.service

# systemctl mask crond
# systemctl unmask crond

Note: A disabled service does not start automatically at boot time, but it can be started manually. A masked service can be started neither manually nor automatically.

1. View the status of a service.
# systemctl status sshd.service

2. Disable the service and verify the status. Note that disabling a service does not stop the service.
# systemctl disable sshd.service
# systemctl status sshd.service

3. Enable the service and verify the status.
# systemctl enable sshd.service
# systemctl is-enabled sshd.service


Cheat sheet on systemctl commands

Check the status of a service
# systemctl status service_name

Stop a service
# systemctl stop service_name

Start a service
# systemctl start service_name

Restart a service
# systemctl restart service_name

Reload a service
# systemctl reload service_name

Mask a service to prevent it from being started
# systemctl mask service_name

Unmask a masked service
# systemctl unmask service_name

Enable a service to start at boot time
# systemctl enable service_name

Disable a service from starting at boot time.
# systemctl disable service_name

List dependencies of a service
# systemctl list-dependencies service_name

List all socket units on the system
# systemctl list-units --type=socket --all
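
For scripting, is-active and is-enabled return meaningful exit codes, so you can test a service quietly; a small sketch:

# systemctl is-active --quiet sshd && echo "sshd is running" || echo "sshd is not running"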

Sunday, June 5, 2016

RHEL7 - Removing home directory and extending root filesystem

login as: sudhir
sudhir@192.168.10.13's password:
Last login: Sun Jun  5 09:39:15 2016 from 192.168.10.8
[sudhir@localhost ~]$ su -
Password:
Last login: Sun Jun  5 09:39:18 EDT 2016 on pts/0
[root@localhost ~]# pwd
/root
[root@localhost ~]# df -h /home
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-home   12G   37M   12G   1% /home
[root@localhost ~]# df -h /homes
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  9.6G  4.0G  5.7G  41% /
[root@localhost ~]# fuser -cu /home
/home:                5957c(sudhir)
[root@localhost ~]# kill -9 5957
[root@localhost ~]# umount /home
[root@localhost ~]# grep home /etc/fstab
/dev/mapper/centos-home /home                   xfs     defaults        0 0
[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   2   3   0 wz--n- 24.53g 68.00m
[root@localhost ~]# lvremove /dev/mapper/centos-home
Do you really want to remove active logical volume home? [y/n]: y
  Logical volume "home" successfully removed
[root@localhost ~]# lvs
  LV   VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- 9.56g
  swap centos -wi-ao---- 3.73g
[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   2   2   0 wz--n- 24.53g 11.24g
[root@localhost ~]# pwd
/root
[root@localhost ~]# cd /
[root@localhost /]# cd /home
[root@localhost home]# ls
[root@localhost home]# cd ..
[root@localhost /]# rmdir /home
[root@localhost /]# pwd
/
[root@localhost /]# cp -rp /homes/ /home
[root@localhost /]# vi /etc/passwd
[root@localhost /]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  9.6G  4.0G  5.7G  41% /
devtmpfs                 482M     0  482M   0% /dev
tmpfs                    497M   84K  497M   1% /dev/shm
tmpfs                    497M  7.0M  490M   2% /run
tmpfs                    497M     0  497M   0% /sys/fs/cgroup
/dev/sda1                473M  156M  318M  33% /boot
/dev/sr1                 4.1G  4.1G     0 100% /run/media/sudhir/CentOS 7 x86_64
tmpfs                    100M     0  100M   0% /run/user/0
tmpfs                    100M   12K  100M   1% /run/user/42
tmpfs                    100M     0  100M   0% /run/user/1000
[root@localhost /]# xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=256    agcount=9, agsize=305152 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2505728, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost /]# df -h .
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  9.6G  4.0G  5.7G  41% /
[root@localhost /]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   2   2   0 wz--n- 24.53g 11.24g
[root@localhost /]# lvscan
  ACTIVE            '/dev/centos/root' [9.56 GiB] inherit
  ACTIVE            '/dev/centos/swap' [3.73 GiB] inherit
[root@localhost /]# lvextend /dev/centos/root 11G
  Physical Volume "11G" not found in Volume Group "centos".
[root@localhost /]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   2   2   0 wz--n- 24.53g 11.24g
[root@localhost /]# lvextend /dev/centos/root +11G
  Physical Volume "+11G" not found in Volume Group "centos".
[root@localhost /]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   2   2   0 wz--n- 24.53g 11.24g
[root@localhost /]# lvextend /dev/centos/root -L +11G
  Size of logical volume centos/root changed from 9.56 GiB (2447 extents) to 20.56 GiB (5263 extents).
  Logical volume root successfully resized.
[root@localhost /]# xfs_growfs /dev/centos/root
meta-data=/dev/mapper/centos-root isize=256    agcount=9, agsize=305152 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2505728, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2505728 to 5389312
[root@localhost /]# df -h .
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   21G  4.0G   17G  20% /
[root@localhost /]#
[root@localhost /]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   21G  4.0G   17G  20% /
devtmpfs                 482M     0  482M   0% /dev
tmpfs                    497M   84K  497M   1% /dev/shm
tmpfs                    497M  7.0M  490M   2% /run
tmpfs                    497M     0  497M   0% /sys/fs/cgroup
/dev/sda1                473M  156M  318M  33% /boot
/dev/sr1                 4.1G  4.1G     0 100% /run/media/sudhir/CentOS 7 x86_64
tmpfs                    100M     0  100M   0% /run/user/0
tmpfs                    100M   16K  100M   1% /run/user/42
tmpfs                    100M     0  100M   0% /run/user/1000
[root@localhost /]#
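
In short, the session above boils down to a handful of commands, shown here as a sketch; back up anything you need from /home and make sure no users are logged in before you start.

# umount /home
# lvremove /dev/mapper/centos-home        (free the space used by the home LV)
# vi /etc/fstab                           (remove the /home entry so it is not mounted at boot)
# lvextend -L +11G /dev/centos/root       (grow the root LV into the freed space)
# xfs_growfs /dev/centos/root             (grow the XFS filesystem online)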

Friday, June 3, 2016

RHEL7: Install and configure a VNC server

Install and configure a VNC server on RHEL7

A VNC server is used to access the server remotely with a GUI. There are some instances where you will need a graphical interface to install software/tools such as Oracle or SAS. Once you configure and set up your VNC server, you will use a VNC client to access the server.

Note: Before you start the installation process, set up your yum repo.
Make sure the desktop packages are installed:
# yum groupinstall "GNOME Desktop"

1. Install VNC server package.
# yum search vnc
# yum install tigervnc-server

2. Upon installation, it creates a sample config file at /lib/systemd/system/vncserver@.service. You have to copy it to the /etc/systemd/system directory or create your own.

3. Copy the config file to /etc/systemd/system/ directory
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@.service

Note: When you copy the config file, you can include a display number in the destination name, e.g. vncserver@:2.service. By default the VNC server listens on port 5900. By specifying :2, you are telling it to use display 2, which turns out to be port 5902. So you can access the VNC server from a client using vncserver:2, 192.168.10.120:2 or 192.168.10.120:5902.

# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service

If you want to set it up for multiple users, copy the sample config as follows:
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver-user1@:1.service
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver-user2@:2.service

Note: the ports are going to be 5901 and 5902. Once copied, follow the instructions below.

4. Edit the config file and change <USER> to your user ID on lines 41 and 42.

# vi /etc/systemd/system/vncserver@\:2.service
 ExecStart=/usr/sbin/runuser -l kamal -c "/usr/bin/vncserver %i"
 PIDFile=/home/kamal/.vnc/%H%i.pid

wq!

5. Create VNC password to your user

[root@sam yum.repos.d]# su - kamal
Last login: Thu Jun  2 12:10:05 EDT 2016 from suvi.expanor.local on pts/0
[kamal@sam ~]$ vncserver

You will require a password to access your desktops.

Password:
Verify:
xauth:  file /home/kamal/.Xauthority does not exist

New 'sam.expanor.local:1 (kamal)' desktop is sam.expanor.local:1

Creating default startup script /home/kamal/.vnc/xstartup
Starting applications specified in /home/kamal/.vnc/xstartup
Log file is /home/kamal/.vnc/sam.expanor.local:1.log

[kamal@sam ~]$

6. Add the service/port to the firewall config if it is enabled
# systemctl status firewalld
# firewall-cmd --permanent --zone=public --add-port=5902/tcp
# firewall-cmd --reload
or
# firewall-cmd --add-rich-rule='rule family="ipv4" service name=vnc-server accept'


7. Start and enable VNC server service
# systemctl daemon-reload
# systemctl enable vncserver@:2.service
Created symlink from /etc/systemd/system/multi-user.target.wants/vncserver@:2.service to /etc/systemd/system/vncserver@:2.service.


[root@sam yum.repos.d]# systemctl start vncserver@:2.service
Warning: vncserver@:2.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@sam yum.repos.d]# systemctl daemon-reload
[root@sam yum.repos.d]# systemctl enable vncserver@:2.service


8. Now, check the ports vnc is listening on
# netstat -tulp | grep vnc


9. Now, go to your client system and connect
From Linux machine
$ vncviewer 192.168.10.120:2

On windows machine,
go to the TightVNC viewer download page (TightVNC Java Viewer JAR in a ZIP archive) and run the program.
http://www.tightvnc.com/download.php

Once open, supply the following information
Remote host: 192.168.10.120
Port: 5902

Click Connect and it will prompt for the VNC password. It may also ask for an administrator password; enter it if prompted.
Then enter your login name and password to log in to your system.

Script - start application upon reboot

# cat /etc/init.d/ab-bridge
#!/bin/sh
AB_HOME=/opt/APPS/abinitio/abinitio-app-hub
export AB_HOME

su - abinitio -c "$AB_HOME/bin/ab-bridge start"

# ln -s /etc/init.d/ab-bridge /etc/rc3.d/S88ab-bridge

Thursday, June 2, 2016

RHEL7 - How to install And configure Samba Server and Client


How to install And configure Samba Server on RHEL7/CentOS7

Samba: Samba is open source software that provides file and print services over TCP/IP to clients (Windows, Linux and other operating systems) using the SMB/CIFS (Server Message Block / Common Internet File System) protocol.



Pre-requisite tasks
hosts info

samba server
hostname: sam.expanor.local
ip address: 192.168.10.120

samba client:
hostname: sama.expanor.local
ip address: 192.168.10.110
OS: RHEL6

hostname: suvi.expanor.local
ip address: 192.168.10.8
os: Win 7


1. On your Windows machine, open a command prompt and run the command below.

net config workstation

get the workstation info.

2. Add the host to DNS or to the hosts file if you are using a hostname rather than an IP address.

At the command prompt, type:

notepad C:\Windows\System32\drivers\etc\hosts

192.168.10.120   sama.expanor.local   sama

Note: you might not be able to save the file here directly; save it to the desktop first, then copy it back into the etc directory.

3. Check for the NetBIOS services in the /etc/services file

# cat /etc/services | grep netbios

Note: You will see 137, 138 and 139 tcp/udp added to the file.


Installation and configuration steps

1. Software installation

# rpm -qa | grep samba
# yum list installed | grep samba

If not installed, install the packages

# yum install -y samba* policycoreutils-python


2. Create a user/group to use for Samba and assign a Samba password to the user
# groupadd shared
# useradd -g shared suvi
# smbpasswd -a suvi

Note: You don't have to assign an OS password to the user.

3. Create a shared directory and change owner/permission
# mkdir /opt/shared
# chown -R suvi:shared /opt/shared
# chmod -R 777 /opt/shared

note: pay attention to ownership/permission

4. Change selinux security context
# ls -ldZ /opt/shared
# chcon -R -t samba_share_t /opt/shared
# semanage fcontext -a -t samba_share_t /opt/shared
# setsebool -P samba_enable_home_dirs on
# getsebool -a | grep samba_export

Note: in some cases you may need to run restorecon -R on the directory.

5. Edit the /etc/samba/smb.conf file
# cd /etc/samba; cp -p smb.conf smb.conf.06022016
# vi smb.conf

edit the following contents

Search for the interfaces and hosts allow entries and make the changes:

interfaces = lo enp0s25 192.168.10.0/24
hosts allow = 127. 192.168.10.

Even if you forget to add the interfaces entry, it still works.

Search for workgroup and update the entry based on what you found on the Windows machine.

workgroup = WORKGROUP

Go to the bottom of the file and paste the following entry:

[shared]
path = /opt/shared
comment = Samba Shared directory
browseable = yes
valid users = suvi, devi,kbhusal, @shared
writable = yes
create mode = 0777
directory mode = 0777


Note: Once you are done with the configuration change, save it and run the testparm command to test the configuration.


6. Now, start and enable the Samba services so they load on reboot.
# systemctl enable smb.service
# systemctl enable nmb.service
# systemctl start smb.service
# systemctl start nmb.service

Note: if your hosts entry has the wrong host/IP on the server or client, you will have problems making it work.
The nmb.service will fail to start if you have a wrong entry in the smb.conf file.

7. You can disable your firewall for testing; it makes life easier, and in many workplaces the OS firewall is not enabled. But if you would like to keep it enabled, add the Samba service:
# systemctl disable firewalld

# systemctl enable firewalld

# firewall-cmd --permanent --add-service=samba
# firewall-cmd --reload

8. Verify the share from the server
# smbclient -L localhost -U suvi

9. Now go to the Windows machine, open Windows Explorer and type your Samba server IP or hostname in the address bar as follows:

\\192.168.10.120

It will ask you for a username and password. Use the OS username and the smbpasswd password to log in.

Upon successful login, you will see your share name.

In my case, it will be shared.

Double-click on the share and create a file. Save it and go to your Samba server; you should be able to see the file you just created.

10. Go to your linux client and type the following command to see the share

# smbclient -L //192.168.10.120 -U suvi

# mkdir /opt/smbclient
# mount -t cifs //192.168.10.120/shared /opt/smbclient -o username=suvi

Note: after the hostname/IP you use the share name (shared), not the shared directory path.


11. To make it permanent, add entry to fstab


# cat /root/secretinfo
username=suvi
password=samapassword
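
Since the credentials file stores the password in clear text, it is worth restricting its permissions:

# chmod 600 /root/secretinfo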

# mkdir /opt/sambaclient
# vi /etc/fstab
//192.168.10.120/shared /opt/sambaclient cifs credentials=/root/secretinfo,defaults 0 0

wq!

12. Mount and verify it
# mount -a
[root@sama ~]# df -hP | grep samba
//192.168.10.120/shared   20G   16G  4.2G  80% /opt/sambaclient

[root@sama ~]# cat >/opt/sambaclient/sambafile.linux1
This file is created on host same

Now, log in to your Windows machine and check your share; you should be able to see the newly created content.
You should also be able to see the file content on your Samba server machine.

[root@sam samba]# cd /opt/shared
[root@sam shared]# ls
sambafile.linux1  test_file.log  test_file.log1.txt
[root@sam shared]# cat sambafile.linux1
This file is created on host same
[root@sam shared]#

Script - Filesystem, load average vmstat check on remote host


#!/bin/bash
# Kamal Bhusal
# Tue Aug  4 09:14:09 EDT 2015
# Daily server status check
# Solaris 10
# @expanor LLC
LOGFILE="logs/FS_CHK/DISK_SPACE_`date "+%m%d%y_time.%H-%M-%S.log"`"
for i in 10.222.102.47 10.222.102.48
#for i in `cat ../etc/hosts.ip | grep -v "#" | awk '{print $1}'`
do
    echo "           " >>$LOGFILE
    echo "Please wait ... checking server $i"
    echo "=============== Checking $i server ===============" >>$LOGFILE
    echo "           " >>$LOGFILE
    echo "--------------------- Checking Load Average" >> $LOGFILE
    #ssh -q $i "bash -s" << EOF
    ssh -q $i 'w; \
        echo; echo "-------------------- Filesystem information"; \
        echo; df -h | egrep -v "objfs|sharefs|ctfs|proc|platform|fd|mnttab"; \
        echo; echo "-------------------- Checking Virtual Memory stat"; \
        vmstat 2 3; \
        echo;'    >> $LOGFILE
    echo "____________________________End______________________" >> $LOGFILE
    echo "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^" >> $LOGFILE
done
more $LOGFILE

Wednesday, June 1, 2016

Script - Get the list of users who have sudo access on the system along with account status


Pre-requisite tasks
1. Set up passwordless authentication
2. Grant root sudo access to the user who is going to run this script
3. Tested on Solaris 10 servers
[bhusal@sunserv01]$ pwd
/export/home/kbhusal/bin
[bhusal@sunserv01]$ cat chk_sudouser.sh
#!/bin/bash
# Kamal Bhusal
# Wed Jun  1 11:54:13 EDT 2016
# Get all users on the system who has sudo access and also check account status
# @expanor LLC
#
LOGFILE="logs/UserLog/User_sudo_access_`date "+%m%d%Y_time.%H-%M-%S.log"`"
#for i in 192.168.10.110 192.168.10.111
for i in `cat ../etc/hosts.ip | grep -v "#" | awk '{print $1}'`
do
    echo "           " >>$LOGFILE
    echo "Please wait ... checking server $i"
    echo "--------------------- Checking $i server" >>$LOGFILE
    echo "           " >>$LOGFILE
    ssh -q $i 'for AUSERS in  `listusers | /usr/bin/awk '\''{print $1}'\'' | /usr/bin/tr "\n" " "`; do echo; \
        echo "--------------------"; \
        echo "checking $AUSERS user for sudo access"; \
        /usr/local/bin/sudo -l -U  $AUSERS; echo; \
        echo "Checking Account password status"; \
        /usr/local/bin/sudo passwd -s $AUSERS; done' >> $LOGFILE
    #echo "-------------------" >> $LOGFILE
    echo "           " >>$LOGFILE
    echo "____________________________End______________________" >> $LOGFILE
    echo "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^" >> $LOGFILE
done
more $LOGFILE
# EOF
[bhusal@sunserv01]$


[bhusal@sunserv01]$ more ../etc/hosts.ip
192.168.10.11 sunserv1
192.168.10.12   sunserv2
#192.168.10.13   sunserv3
192.168.10.14 sunserv4
192.168.10.15 sunserv5
192.168.10.16 sunserv6
192.168.10.17 sunserv7
192.168.10.19 sunserv8
192.168.10.20 sunserv9