Tuesday, November 25, 2014

Setting Up a Printer Using CUPS Command-Line Utilities on Solaris 11.2

1. Become root
sam@apsol11sas25:~$ sudo su -
Enter your LAN Password:
Your password expires in 7 days. Please change it as soon as possible.
Oracle Corporation      SunOS 5.11      11.2    June 2014
You have new mail.
2. How to Set Up Your Printing Environment
2.1. Ensure that the cups/scheduler and the cups/in-lpd SMF services are online.
root@apsol11sas25:~# svcs -a | grep cups/scheduler
online         Nov_20   svc:/application/cups/scheduler:default
root@apsol11sas25:~# svcs -a | grep cups/in-lpd
disabled       Nov_20   svc:/application/cups/in-lpd:default
2.2. If not running, enable these services
root@apsol11sas25:~# svcadm enable cups/in-lpd
root@apsol11sas25:~# svcs -a | grep cups/in-lpd
online         11:46:28 svc:/application/cups/in-lpd:default

2.3. Find out whether the print/cups/system-config-printer package is installed on your system.
root@apsol11sas25:~# pkg info print/cups/system-config-printer
          Name: print/cups/system-config-printer
       Summary: Print Manager for CUPS
      Category: System/Administration and Configuration
         State: Installed
     Publisher: solaris
       Version: 1.0.16
 Build Release: 5.11
Packaging Date: October 28, 2013 03:14:17 PM
          Size: 2.69 MB
          FMRI: pkg://solaris/print/cups/system-config-printer@1.0.16,5.11-
2.4. If not installed, install it
# pkg install print/cups/system-config-printer
2.5. List the print service enabled on your system (note: the legacy print-service utility was not present on this system)
root@apsol11sas25:~# /usr/sbin/print-service -q
-bash: /usr/sbin/print-service: No such file or directory
root@apsol11sas25:~# print-service -q
-bash: print-service: command not found
3. Set Up a printer using the lpadmin Command
root@apsol11sas25:~# lpstat -d
no system default destination
root@apsol11sas25:~# lpadm
-bash: lpadm: command not found
root@apsol11sas25:~# ls -l /usr/sbin/lpadmin
-r-xr-xr-x   1 root     bin        28068 Sep 22 16:59 /usr/sbin/lpadmin
root@apsol11sas25:~# /usr/sbin/lpadmin -p ^C
root@apsol11sas25:~# hp5593^C
root@apsol11sas25:~# ping hp5593
hp5593 is alive
root@apsol11sas25:~# nslookup hp5593
Name:   hp5593.fhlmc.com
3.1 Use the lpadmin command with the -p option to add a printer to CUPS.
root@apsol11sas25:~# /usr/sbin/lpadmin -p hp5593 -E
-p  ---> Specifies the name of the printer to add.
-E ---> Enables the destination and accepts jobs.
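Note that with only -p and -E, no device URI is attached to the queue, which is why lpstat -t later reports device ///dev/null. A minimal sketch of a fuller invocation, assuming the printer accepts JetDirect/AppSocket connections on port 9100 (a common HP default; the URI here is illustrative). The command is echoed rather than executed, since running it needs root and a live CUPS scheduler:

```shell
# Sketch only: build and print the lpadmin command instead of running it.
PRINTER=hp5593                      # queue name (here it matches the printer's DNS name)
URI="socket://${PRINTER}:9100"      # assumption: JetDirect/AppSocket on port 9100
echo "/usr/sbin/lpadmin -p ${PRINTER} -E -v ${URI}"
# prints: /usr/sbin/lpadmin -p hp5593 -E -v socket://hp5593:9100
```

With -v set, lpstat -t would show the socket URI instead of ///dev/null.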

root@apsol11sas25:~# lpstat -d
no system default destination
3.2. Enable the printer to accept print requests and to print those requests.
root@apsol11sas25:~# cupsaccept hp5593
root@apsol11sas25:~# cupsenable  hp5593
3.3. Verify that the printer is correctly configured
root@apsol11sas25:~# lpstat -p hp5593 -l
printer hp5593 is idle.  enabled since November 25, 2014 11:51:11 AM EST
root@apsol11sas25:~# lpstat -d
no system default destination
root@apsol11sas25:~# lpstat -p hp5593
printer hp5593 is idle.  enabled since November 25, 2014 11:51:11 AM EST
root@apsol11sas25:~# pwd
root@apsol11sas25:~# cd /var/tmp
root@apsol11sas25:/var/tmp# cat >testprint
Testing printer from Sam to John.
If you see this page, please give it to John.
Thank you
Printed from apsol11sas25
root@apsol11sas25:/var/tmp# ls testprint
4. Test your print job
root@apsol11sas25:/var/tmp# lp testprint
lp: Error - no default destination available.
root@apsol11sas25:/var/tmp# lp -d hp5593 testprint
request id is hp5593-1 (1 file(s))
root@apsol11sas25:/var/tmp# lpstat
root@apsol11sas25:/var/tmp# lpstat -l
root@apsol11sas25:/var/tmp# lpstat -r
scheduler is running
root@apsol11sas25:/var/tmp# lpstat -t
scheduler is running
no system default destination
device for hp5593: ///dev/null
hp5593 accepting requests since November 25, 2014 11:58:32 AM EST
printer hp5593 is idle.  enabled since November 25, 2014 11:58:32 AM EST
root@apsol11sas25:/var/tmp# man lpstat
Reformatting page.  Please Wait... done
5. Set the system's default destination printer
root@apsol11sas25:/var/tmp# lpadmin -d hp5593
root@apsol11sas25:/var/tmp#  lpstat -d
system default destination: hp5593
root@apsol11sas25:/var/tmp# ls -ltr
-rw-r--r--   1 root     root         128 Nov 25 11:57 testprint
root@apsol11sas25:/var/tmp# lp testprint
request id is hp5593-2 (1 file(s))



Thursday, November 20, 2014

changing values in the /etc/security/limits.conf file

UNIX Support - Update a standard system configuration file
Please update /etc/security/limits.conf file on mysassrv05 to change nofile limit for mossas1 to 65000.

To set the limit system-wide for all users:
echo "* hard nofile 10240" >> /etc/security/limits.conf
echo "* soft nofile 10240" >> /etc/security/limits.conf
sysctl -w fs.file-max=10240
sysctl -p loads the settings defined in /etc/sysctl.conf.
No reboot is required, and the setting persists across reboots. The values shown here are just examples; define your own as needed.
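A quick sketch for checking what limits the current shell actually has (values vary per system; limits.conf is applied by PAM at login, so re-login or su - to the user to see new values):

```shell
# Report the current soft and hard open-file limits for this shell
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft nofile: ${soft}"
echo "hard nofile: ${hard}"
```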


On Solaris:
add entries to /etc/system:
set rlim_fd_max = 4096   # Hard limit on file descriptors for a single proc
# (Without rlim_fd_max set, the default hard limit for nofiles is half of rlim_fd_cur.)
set rlim_fd_cur = 1024   # Soft limit on file descriptors for a single proc
You can also set stack limits here:
 stack = 393216
 stack_hard = 393216

On Linux:
In /etc/security/limits.conf you can configure ulimits for each user:

username soft nofile 4096
username hard nofile 63536

Hard and soft limits of 4096 for 'nofile' (all users):
* soft nofile 4096
* hard nofile 4096

Lock paged memory when using memcached by using memcached -k option

How to lock the paged memory?
# man memcached

#memcached -u user -k &
 warning: -k invalid, mlockall() failed: Cannot allocate memory

The reason is that the default locked-memory limit (memlock) is 64 KB.
Run the command below to see the output
# ulimit -l

You can change the value by adding an entry for memlock in /etc/security/limits.conf for the root user.

# - memlock - max locked-in-memory address space (KB)
oracle soft nofile 24576
oracle hard nofile 65536
root   -     memlock   1048576
Verify the change; you may need to log out and log back in to test it.
# ulimit -l
If you are setting values for a normal user:
Once you have made the change, su to the user and run the following commands to see the hard and soft limits.
Note: see the man pages for more info.
$ ulimit -Hn
$ ulimit -Sn
$ id
uid=65128(mysas) gid=75260(sasgrp)
ulimit -u

Please change nofile limit for mostmgr to 65000.
# more /etc/security/limits.conf

* hard core 0
* soft core 0
mostmgr         hard    nofile          25000
mostmgr         soft    nofile          25000
# End of file

# vi /etc/security/limits.conf
#@student        -       maxlogins       4
* hard core 0
* soft core 0
mostmgr         hard    nofile          65000
mostmgr         soft    nofile          65000
# End of file

# su - trnmgr
$ ulimit -Hn
$ ulimit -Sn
$ id
$ ulimit -u



data transfer using rsync

This is tested from Solaris 10 to Solaris 11.2 server. [ M5000 to M10 server]

1. Include all the mountpoints that you want to transfer:
$ cat /var/tmp/myFS.txt

2. List the directories that you don't want to transfer. These must be paths relative to the mountpoint:
$ cat /var/tmp/my.exclude

3. Have your script ready. Make sure to test one mountpoint at a time; once confirmed, try multiple mountpoints.
$ cat myrsync.all.sh
D_HOST=target_host            # placeholder: set to your destination server
C_FILE=/var/tmp/myFS.txt      # list of mountpoints from step 1
for i in `cat ${C_FILE}`
do
        echo "Syncing $i to ${D_HOST} . Please wait ...."
        /usr/bin/rsync -logtprz --exclude-from=/var/tmp/my.exclude --progress --rsh='ssh -l root' $i root@${D_HOST}:$i
 # Solaris 11: we had an issue with the root user. When you use sudo to root, it is not really the user root, it is a role. Convert the root role to a root user.
 #/usr/bin/rsync -logtprz --exclude-from=/var/tmp/my.exclude --progress --rsync-path="sudo rsync" --rsh='ssh -l your_userid' $i your_userid@${D_HOST}:$i
 #/usr/bin/rsync -logtprz --delete --update --progress --rsync-path="sudo rsync" --rsh='ssh -l dev' /app1/   dev@myappsas09:/app2/
done

4. Add/change the entry below at the end of the /etc/ssh/sshd_config file on the target server:
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
and also allow root login by changing PermitRootLogin from no to yes:
PermitRootLogin yes

5. Generate a root key using ssh-keygen
# ssh-keygen -t dsa
Copy the public key from the root user's home directory (on Solaris 10 it is /, on Solaris 11 it is /root) to the target server's /var/tmp, then rename it to root and copy it to /etc/ssh/authorized_keys/.
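A sketch of step 5 as concrete commands; the target hostname (target_host) is a placeholder, and the commands are echoed rather than run, since they need root and a second host:

```shell
TARGET=target_host   # placeholder: your destination server
echo "ssh-keygen -t dsa"                                                    # creates /root/.ssh/id_dsa and id_dsa.pub on Solaris 11
echo "scp /root/.ssh/id_dsa.pub root@${TARGET}:/var/tmp/root"               # copy the public key over
echo "ssh root@${TARGET} 'cp /var/tmp/root /etc/ssh/authorized_keys/root'"  # install under the AuthorizedKeysFile path
```

On Solaris 10, root's home is /, so the key would be under /.ssh instead of /root/.ssh.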

6. Test your connection with ssh to the host; it should not ask for a password.
Start with a simple rsync, and if it works, use the script.

selinux on Redhat 7

2nd part
# ls -lZ
It is hard to find the context label you want.
# yum install -y httpd
# cd /var/www/html
# ls -Z
you will see the web server's default SELinux context
To manage context labels:
List all context labels
# semanage fcontext -l | more
# semanage fcontext -l | grep http

The output is long. Check the current configuration:
# ls -Z

On older releases, use man -k _selinux to find the SELinux man pages.
On Red Hat 7, the pages first have to be generated with sepolicy:
# yum whatprovides */sepolicy
# yum -y install policycoreutils-devel

# sepolicy --help
# man sepolicy
# man sepolicy-manpage
Specify where you want to put the man pages:
# sepolicy manpage -a -p /usr/share/man/man8
Wait a while for the man pages to be generated.

# man -k _selinux  # will not show any yet...
Run mandb command to update the database
# mandb
# man httpd_selinux # to get selinux context level
# cd /var/www
# ls -lZ
# man semanage-fcontext  # only works on RHEL 7
check the example presented
# mkdir /apps
# cat >/apps/index.html
This is test

Edit /etc/httpd/conf/httpd.conf:
locate DocumentRoot and change it to /apps,
and also change the Directory location from /var/www to
<Directory "/apps">
Check a couple of places.

# systemctl restart httpd
# yum -y install elinks
# elinks http://localhost
# tail -f /var/log/audit/audit.log | grep AVC
# grep AVC /var/log/audit/audit.log

Check the log and you will see the AVC denial.
# semanage fcontext -a -t httpd_sys_content_t "/apps(/.*)?"
Now that the policy is defined, apply it to the filesystem:
# restorecon -R -v /apps
# elinks http://localhost


3rd part
boolean =>
# getsebool -a  # currently available booleans
# semanage boolean -l # show policy state
# man -k _selinux | grep ftp
# man ftpd_selinux # shows us info about the ftp booleans
# getsebool -a | grep ftp
# sesearch -b ftpd_anon_write -ACT
A boolean can control a number of rules, including type transitions.
To review them, filtering out the type-transition rules:
# sesearch -b ftpd_anon_write -ACT | grep -v type_transit
Check for the ftpd_t source context and the public_content_rw_t target type,
and whether the rule applies to a file or a directory.
# setsebool -P ftpd_anon_write on
The policy no longer blocks writes for anonymous users.
# sesearch -b ftpd_anon_write -ACT | grep -v type_transit
Check the first column: E means the rule is enabled, D means disabled.
Use -P to make the change persistent; without it, the change lasts only until reboot.
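A condensed sketch of the boolean workflow above; the commands are echoed rather than executed, since they need root on a live SELinux host:

```shell
echo "getsebool ftpd_anon_write"        # query a single boolean's current value
echo "setsebool ftpd_anon_write on"     # runtime-only change, lost at reboot
echo "setsebool -P ftpd_anon_write on"  # -P also writes the change into policy, so it persists
```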

Tuesday, November 18, 2014

data sync, and server migration

Source: Solaris 10 zone M5000 server
Target: Solaris 11.2, M10 server

One of our servers, which had over 30 TB of data, was really slow. It was a zone on an M5000 server with 64 GB of RAM. We built a new server on a SPARC M10 and synced the data over. rsync was really slow because it was limited to the 1 Gb throughput available on the source system; the M10, however, has a 10 Gb connection, and our TSM backup server also had a 10 Gb connection. So we decided to use a restore rather than rsync: the 30 TB of data was restored from backup, and the final rsync completed within a week. It was a wonderful experience. We used to do SAN-to-SAN replication, but this time the storage and the servers were in different environments, and we were copying data from prod to dev. Here are the details of how this task was completed.

On the source server, get the filesystem info.

You will need the FS name, the total size assigned to the filesystem, and the mountpoint.
bash-3.2# df -h | grep z01 | awk '{print $1 "\t\t\t" $2 "\t\t" $6}' >/var/tmp/FS_info.txt
z01p25/sis_appr1                        1.0T            /appsdata/dev/sis/appr1
z01p26/sis_appr2                        500G            /appsdata/dev/sis/appr2
z01p25/fmd_dev_sis_dav                  600G            /appsdata/dev/sis/davm
z01p19/distressed_wrsi                  500G            /appsdata/dev/sis/distressed_wrsi
z01p03/fdata_dev_sis_earlyindicator                     20G             /appsdata/dev/sis/earlyindicator
z01p25/fmd_dev_sis_hpaf                 100G            /appsdata/dev/sis/hpafcst
z01p11/appsdata_dev_sis_hve_monthly                     450G            /appsdata/dev/sis/hve/monthly
z01p10/appsdata_devsis_hve_monthly2                     400G            /appsdata/dev/sis/hve/monthly2
z01p06/fdata_dev_sis_hvedistress                        300G            /appsdata/dev/sis/hvedistress
z01p07/apdata_dev_sis_hvedist                   250G            /appsdata/dev/sis/hvedistress_archive
z01p18/fdata_dev_sis_hvedist_arch2                      295G            /appsdata/dev/sis/hvedistress_archive2
z01p29/fmd_dev_sis_mls                  1000G           /appsdata/dev/sis/mls
z01p13/appsdata_dev_sis_rnd                     600G            /appsdata/dev/sis/rnd
z01p07/apdata_dev_sis_rnd10                     325G            /appsdata/dev/sis/rnd10
z01p15/fmd_dev_sis_rnd11                        400G            /appsdata/dev/sis/rnd11
z01p16/appsdata_dev_sis_rnd2                    600G            /appsdata/dev/sis/rnd2

Now, separate the pool and volume info.

bash-3.2# cat /var/tmp/FS_info.txt | awk '{print $2 "\t" $3}' > /var/tmp/size_n_FS.txt
bash-3.2# cd /var/tmp
bash-3.2# cat /var/tmp/size_n_FS.txt
1.0T    /appsdata/dev/sis/appr1
500G    /appsdata/dev/sis/appr2
600G    /appsdata/dev/sis/davm
500G    /appsdata/dev/sis/distressed_wrsi
20G     /appsdata/dev/sis/earlyindicator
100G    /appsdata/dev/sis/hpafcst
450G    /appsdata/dev/sis/hve/monthly
400G    /appsdata/dev/sis/hve/monthly2
300G    /appsdata/dev/sis/hvedistress
250G    /appsdata/dev/sis/hvedistress_archive
295G    /appsdata/dev/sis/hvedistress_archive2
1000G   /appsdata/dev/sis/mls
600G    /appsdata/dev/sis/rnd
325G    /appsdata/dev/sis/rnd10
400G    /appsdata/dev/sis/rnd11
600G    /appsdata/dev/sis/rnd2

bash-3.2# cat /var/tmp/FS_info.txt | awk '{print $1}'>pool_n_Vol.txt
bash-3.2# cat pool_n_Vol.txt

bash-3.2# cat pool_n_Vol.txt | awk -F/ '{print $1 "\t" $2}'>pool_volume.sep.txt
bash-3.2# cat pool_volume.sep.txt
z01p25  sis_appr1
z01p26  sis_appr2
z01p25  fmd_dev_sis_dav
z01p19  distressed_wrsi
z01p03  fdata_dev_sis_earlyindicator
z01p25  fmd_dev_sis_hpaf
z01p11  appsdata_dev_sis_hve_monthly
z01p10  appsdata_devsis_hve_monthly2
z01p06  fdata_dev_sis_hvedistress
z01p07  apdata_dev_sis_hvedist
z01p18  fdata_dev_sis_hvedist_arch2
z01p29  fmd_dev_sis_mls
z01p13  appsdata_dev_sis_rnd
z01p07  apdata_dev_sis_rnd10
z01p15  fmd_dev_sis_rnd11
z01p16  appsdata_dev_sis_rnd2

bash-3.2# paste pool_volume.sep.txt size_n_FS.txt
z01p25  sis_appr1       1.0T    /appsdata/dev/sis/appr1
z01p26  sis_appr2       500G    /appsdata/dev/sis/appr2
z01p25  fmd_dev_sis_dav 600G    /appsdata/dev/sis/davm
z01p19  distressed_wrsi 500G    /appsdata/dev/sis/distressed_wrsi
z01p03  fdata_dev_sis_earlyindicator    20G     /appsdata/dev/sis/earlyindicator
z01p25  fmd_dev_sis_hpaf        100G    /appsdata/dev/sis/hpafcst
z01p11  appsdata_dev_sis_hve_monthly    450G    /appsdata/dev/sis/hve/monthly
z01p10  appsdata_devsis_hve_monthly2    400G    /appsdata/dev/sis/hve/monthly2
z01p06  fdata_dev_sis_hvedistress       300G    /appsdata/dev/sis/hvedistress
z01p07  apdata_dev_sis_hvedist  250G    /appsdata/dev/sis/hvedistress_archive
z01p18  fdata_dev_sis_hvedist_arch2     295G    /appsdata/dev/sis/hvedistress_archive2
z01p29  fmd_dev_sis_mls 1000G   /appsdata/dev/sis/mls
z01p13  appsdata_dev_sis_rnd    600G    /appsdata/dev/sis/rnd
z01p07  apdata_dev_sis_rnd10    325G    /appsdata/dev/sis/rnd10
z01p15  fmd_dev_sis_rnd11       400G    /appsdata/dev/sis/rnd11
z01p16  appsdata_dev_sis_rnd2   600G    /appsdata/dev/sis/rnd2

bash-3.2#  paste pool_volume.sep.txt size_n_FS.txt | awk '{print $2 "\t" $3 "\t" $4 "\t" $1}' >mypool_config.txt

bash-3.2# cat mypool_config.txt
sis_appr1       1.0T    /appsdata/dev/sis/appr1 z01p25
sis_appr2       500G    /appsdata/dev/sis/appr2 z01p26
fmd_dev_sis_dav 600G    /appsdata/dev/sis/davm  z01p25
distressed_wrsi 500G    /appsdata/dev/sis/distressed_wrsi       z01p19
fdata_dev_sis_earlyindicator    20G     /appsdata/dev/sis/earlyindicator        z01p03
fmd_dev_sis_hpaf        100G    /appsdata/dev/sis/hpafcst       z01p25
appsdata_dev_sis_hve_monthly    450G    /appsdata/dev/sis/hve/monthly   z01p11
appsdata_devsis_hve_monthly2    400G    /appsdata/dev/sis/hve/monthly2  z01p10
fdata_dev_sis_hvedistress       300G    /appsdata/dev/sis/hvedistress   z01p06
apdata_dev_sis_hvedist  250G    /appsdata/dev/sis/hvedistress_archive   z01p07
fdata_dev_sis_hvedist_arch2     295G    /appsdata/dev/sis/hvedistress_archive2  z01p18
fmd_dev_sis_mls 1000G   /appsdata/dev/sis/mls   z01p29
appsdata_dev_sis_rnd    600G    /appsdata/dev/sis/rnd   z01p13
apdata_dev_sis_rnd10    325G    /appsdata/dev/sis/rnd10 z01p07
fmd_dev_sis_rnd11       400G    /appsdata/dev/sis/rnd11 z01p15
appsdata_dev_sis_rnd2   600G    /appsdata/dev/sis/rnd2  z01p16
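The whole awk/paste pipeline above can be replayed on sample data. This self-contained sketch uses two tab-separated lines in the FS_info.txt format and reproduces the mypool_config.txt layout (volume, size, mountpoint, pool):

```shell
# Two sample lines in the FS_info.txt format: pool/volume <tab> size <tab> mountpoint
printf 'z01p25/sis_appr1\t1.0T\t/appsdata/dev/sis/appr1\nz01p26/sis_appr2\t500G\t/appsdata/dev/sis/appr2\n' > /tmp/FS_info.txt
awk '{print $2 "\t" $3}' /tmp/FS_info.txt > /tmp/size_n_FS.txt                       # size and mountpoint
awk '{print $1}' /tmp/FS_info.txt | awk -F/ '{print $1 "\t" $2}' > /tmp/pool_volume.sep.txt  # pool and volume
paste /tmp/pool_volume.sep.txt /tmp/size_n_FS.txt | awk '{print $2 "\t" $3 "\t" $4 "\t" $1}' # volume size mnt pool
# prints:
# sis_appr1	1.0T	/appsdata/dev/sis/appr1	z01p25
# sis_appr2	500G	/appsdata/dev/sis/appr2	z01p26
```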


Get the ownership info (pipe the mountpoints through ls -ld to capture owner and group):
# df -h | grep z01 | awk '{print $6}' | xargs ls -ld > /var/tmp/permission.txt

bash-3.2$ more permission.txt
drwxrwxrwx   9 sisuser  sisgrp        10 Sep 10 16:15 /appsdata/dev/sis/appr1
drwxrwxrwx   9 sisuser  sisgrp        14 Mar 19  2014 /appsdata/dev/sis/appr2
drwxrwxr-x   8 sisuser  sisgrp        10 Aug 26 16:06 /appsdata/dev/sis/davm
drwxrwxrwx   4 sisuser  sisgrp         4 Apr 18  2014 /appsdata/dev/sis/distressed_wrsi
drwxrwx---  11 sisuser  sisgrp        50 May  2  2008 /appsdata/dev/sis/earlyindicator
drwxrwxrwx   3 sisuser  sisgrp         4 Jul 12  2013 /appsdata/dev/sis/hpafcst
drwxrwx---  31 sisuser  sisgrp        35 Apr 20  2011 /appsdata/dev/sis/hve/monthly
bash-3.2$ cat permission.txt | awk '{print "chown -R " $3":"$4 "\t" $9}' >/var/tmp/chpermission.sh
$ cat chpermission.sh
chown -R sisuser:sisgrp /appsdata/dev/sis/appr1
chown -R sisuser:sisgrp /appsdata/dev/sis/appr2
chown -R sisuser:sisgrp /appsdata/dev/sis/davm
chown -R sisuser:sisgrp /appsdata/dev/sis/distressed_wrsi
chown -R sisuser:sisgrp /appsdata/dev/sis/earlyindicator
chown -R sisuser:sisgrp /appsdata/dev/sis/hpafcst
chown -R sisuser:sisgrp /appsdata/dev/sis/hve/monthly
chown -R sisuser:sisgrp /appsdata/dev/sis/hve/monthly2
chown -R sisuser:sisgrp /appsdata/dev/sis/hvedistress
chown -R sisuser:sisgrp /appsdata/dev/sis/hvedistress_archive
chown -R sisuser:sisgrp /appsdata/dev/sis/hvedistress_archive2
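The awk step that builds chpermission.sh can be tried on a single sample ls -ld line ($3 is the owner, $4 the group, $9 the path):

```shell
# One sample ls -ld line in the permission.txt format
printf 'drwxrwxrwx   9 sisuser  sisgrp        10 Sep 10 16:15 /appsdata/dev/sis/appr1\n' > /tmp/permission.txt
awk '{print "chown -R " $3":"$4 "\t" $9}' /tmp/permission.txt
# prints: chown -R sisuser:sisgrp	/appsdata/dev/sis/appr1
```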

Calculate the sizes and decide which filesystem to create on which pool. We created 800 to 1200 GB pools and adjusted the
space to fit the filesystems. Copy the mypool_config.txt file to the target system and create the pools and filesystems, and
also copy the chpermission.sh script to /var/tmp and run it to change the ownership.

On target system
1. Bring the luns under the OS and create pool based on the size.
vi createpool.sh
zpool create poolv401_2 c1d11 c1d12 c1d13 c1d14 c1d15 c1d16
zpool create poolv401_3 c1d21 c1d22 c1d23 c1d24 c1d25
zpool create poolv401_4 c1d26 c1d27 c1d28 c1d29 c1d30
zpool create poolv401_5 c1d31 c1d32 c1d33 c1d34 c1d35
zpool create poolv401_6 c1d36 c1d37 c1d38 c1d39 c1d40
zpool create poolv401_7 c1d41 c1d42 c1d43 c1d44 c1d45
zpool create poolv401_8 c1d46 c1d47 c1d48 c1d49 c1d50 c1d51
zpool create poolv401_9 c1d52 c1d53 c1d54 c1d55 c1d56 c1d57
zpool create poolv401_10 c1d58 c1d59 c1d60 c1d61 c1d62
zpool create poolv401_11 c1d63 c1d64 c1d65 c1d66 c1d67
zpool create poolv401_12 c1d68 c1d69 c1d70 c1d71 c1d72 c1d73
zpool create poolv401_13 c1d74 c1d75 c1d76 c1d77 c1d78 c1d79
zpool create poolv401_14 c1d80 c1d81 c1d82 c1d83
zpool create poolv401_15 c1d84 c1d85 c1d86 c1d87 c1d88
zpool create poolv401_16 c1d89 c1d90 c1d91 c1d92 c1d93
zpool create poolv401_17 c1d94 c1d95 c1d96 c1d97 c1d98 c1d99
zpool create poolv401_18 c1d100 c1d101 c1d102 c1d103 c1d104 c1d105
zpool create poolv401_19 c1d106 c1d107 c1d108 c1d109 c1d110
zpool create poolv401_20 c1d111 c1d112 c1d113 c1d114 c1d115 c1d116
zpool create poolv401_21 c1d117 c1d118 c1d119 c1d120 c1d121
zpool create poolv401_22 c1d122 c1d123 c1d124 c1d125 c1d126 c1d127
zpool create poolv401_23 c1d128 c1d129 c1d130 c1d131 c1d132 c1d133
zpool create poolv401_24 c1d134 c1d135 c1d136 c1d137 c1d138 c1d139
zpool create poolv401_25 c1d140 c1d141 c1d142 c1d143
zpool create poolv401_26 c1d144 c1d145 c1d146 c1d147 c1d148 c1d149
zpool create poolv401_27 c1d150 c1d151 c1d152 c1d153 c1d154
zpool create poolv401_28 c1d155 c1d156 c1d157 c1d158 c1d159 c1d160
zpool create poolv401_29 c1d161 c1d162 c1d163 c1d164 c1d165 c1d166
zpool create poolv401_30 c1d167 c1d168 c1d169 c1d170 c1d171 c1d172
zpool create poolv401_31 c1d173 c1d174 c1d175 c1d176 c1d177
zpool create poolv401_32 c1d178 c1d179 c1d180 c1d181 c1d182
zpool create poolv401_33 c1d183 c1d184 c1d185 c1d186 c1d187
zpool create poolv401_34 c1d188 c1d189 c1d190 c1d191 c1d192
zpool create poolv401_35 c1d193 c1d194 c1d195

# ===================================================================

# FS      Size   MNT Point  POOL
# Pool poolv401_1 - 1T
FS_sis_appr1                        990G                    /appsdata/dev/sis/appr1 poolv401_1
# poolv401_2  - 1.2T
FS_sis_appr2                        500G                    /appsdata/dev/sis/appr2 poolv401_2
FS_fmd_dev_sis_dav                  600G                    /appsdata/dev/sis/davm poolv401_2
# poolv401_3 - 1T
FS_distressed_wrsi                  500G                    /appsdata/dev/sis/distressed_wrsi poolv401_3
FS_fdata_dev_sis_earlyindicator     20G                     /appsdata/dev/sis/earlyindicator poolv401_3
FS_appsdata_dev_sis_hve_monthly     450G                    /appsdata/dev/sis/hve/monthly poolv401_3
# poolv401_4  - 1T
FS_appsdata_devsis_hve_monthly2 400G                    /appsdata/dev/sis/hve/monthly2 poolv401_4
FS_fdata_dev_sis_hvedistress  300G                    /appsdata/dev/sis/hvedistress poolv401_4
FS_apdata_dev_sis_hvedist   250G                    /appsdata/dev/sis/hvedistress_archive poolv401_4
# poolv401_5 - 1T
FS_fmd_dev_sis_mls                  990G                   /appsdata/dev/sis/mls poolv401_5
# poolv401_6 - 1T
FS_fdata_dev_sis_hvedist_arch2  295G                    /appsdata/dev/sis/hvedistress_archive2 poolv401_6
FS_appsdata_dev_sis_rnd   590G                    /appsdata/dev/sis/rnd  poolv401_6
FS_fmd_dev_sis_hpaf   100G                    /appsdata/dev/sis/hpafcst poolv401_6
# poolv401_7 - 1T
FS_fmd_dev_sis_rnd11   390G                    /appsdata/dev/sis/rnd11  poolv401_7
FS_appsdata_dev_sis_rnd2   590G                    /appsdata/dev/sis/rnd2  poolv401_7
# poolv401_8 - 1.2T
FS_apdata_dev_sis_rnd10     325G                    /appsdata/dev/sis/rnd10  poolv401_8
FS_fdata_dev_sis_rnd5    375G                    /appsdata/dev/sis/rnd5  poolv401_8
FS_fdata_dev_sis_rnd6    375G                    /appsdata/dev/sis/rnd6  poolv401_8
# poolv401_9 - 1.2T
FS_apdata_dev_sis_rnd7    300G                    /appsdata/dev/sis/rnd7  poolv401_9
FS_fmd_dev_sis_rnd8   375G                    /appsdata/dev/sis/rnd8  poolv401_9
FS_fmd_dev_sis_rnd9    375G                    /appsdata/dev/sis/rnd9  poolv401_9
# poolv401_10  - 1TB
FS_fdata_dev_sis_rs                 445G                    /appsdata/dev/sis/rs  poolv401_10
FS_appsdata_dev_sis_rs_archive  295G                    /appsdata/dev/sis/rs_archive poolv401_10
FS_fmd_dev_sis_rs_arc2   245G                    /appsdata/dev/sis/rs_archive2 poolv401_10
# poolv401_11 - 1TB -- adjusted
FS_appsdata_dev_sis_rs_monthly  595G                    /appsdata/dev/sis/rs_monthly  poolv401_11
FS_fdata_dev_sis_rsm_arch   395G                    /appsdata/dev/sis/rs_monthly/archive poolv401_11
# poolv401_12  - 1.2 T
FS_fdata_dev_sis_rs_mon_rnd   275G                    /appsdata/dev/sis/rs_monthly/rnd poolv401_12
FS_fdata_dev_sis_rsmon_rnd2  375G                    /appsdata/dev/sis/rs_monthly/rnd2 poolv401_12
FS_apdata_dev_sis_seeds     375G                    /appsdata/dev/sis/seeds  poolv401_12
# poolv401_13 - 1.2T
FS_fmcdt_dev_sis_sds_arch   375G                    /appsdata/dev/sis/seeds_archive poolv401_13
FS_appsdata_dev_sis_work2   650G                    /appsdata/dev/sis/work2  poolv401_13
# poolv401_14 - 800 GB
FS_appsdata_dev_sis_work   790G                    /appsdata/dev/sis/work  poolv401_14
# poolv401_15 - 1TB
FS_apdata_dev_sis_work3   495G                    /appsdata/dev/sis/work3  poolv401_15
FS_fmd_dev_sis_wrsi_noreo   495G                    /appsdata/dev/sis/wrsi_noreo poolv401_15
# ------------------ Prod ----------------
# poolv401_16 - 1TB
FS_fmdt_prod_arch_1year  195G                    /appsdata/prod/archive/oneyear  poolv401_16
FS_fmdt_prod_arch_7year  195G                    /appsdata/prod/archive/sevenyear poolv401_16
FS_appsdata_prod_sis_archive  595G                    /appsdata/prod/sis/archive  poolv401_16
# poolv401_17  - 1.2 TB
FS_appsdata_prod_sis_arch2  595G                    /appsdata/prod/sis/archive2  poolv401_17
FS_fmd_prod_sis_archive3  595G                    /appsdata/prod/sis/archive3  poolv401_17
# poolv401_18 - 1.2T
FS_appsdata_prod_sis_data   400G                    /appsdata/prod/sis/data  
FS_fdata_p_sishvedat_sds_rawdat  400G                    /appsdata/prod/sis/hve/data/seeds/rawdata
FS_fmdp_sisd_sds_raw_firsta  300G                    /appsdata/prod/sis/hve/data/seeds/rawdata/firsta
# poolv401_19 - 1TB
FS_fdata_p_seedsrawdat_acx   500G                    /appsdata/prod/sis/hve/data/seeds/rawdata/acxiom
FS_fmdt_prd_sis_hve_dt_fdrout  250G                    /appsdata/prod/sis/hve/data/fdrout  poolv401_19
FS_fmd_prd_shve_sd2_rawd   170G                    /appsdata/prod/sis/hve/data/seeds2/rawdata
# poolv401_20 - 1.2 TB
FS_appsdata_prod_sis_hve_retro3  400G                    /appsdata/prod/sis/hve/retro3  poolv401_20
FS_appsdata_prod_sis_nplqc   10G                     /appsdata/prod/sis/nplqc  poolv401_20
FS_apdata_prod_sis_retro10   350G                    /appsdata/prod/sis/retro10  poolv401_20
FS_fmd_prod_sis_retro11   350G                    /appsdata/prod/sis/retro11  poolv401_20
# poolv401_21 - 1TB
FS_fmd_prod_sis_retro12  350G                    /appsdata/prod/sis/retro12  poolv401_21
FS_appsdata_prod_sis_retro1  600G                    /appsdata/prod/sis/retro1  poolv401_21
# poolv401_22  - 1.2T
FS_sis_retro13    595G                    /appsdata/prod/sis/retro13  poolv401_22
FS_appsdata_prod_sis_retro2  595G                    /appsdata/prod/sis/retro2  poolv401_22
# poolv401_23 - 1.2T
FS_fdata_prod_sis_retro4    595G                    /appsdata/prod/sis/retro4  poolv401_23
FS_appsdata_data_sis_retro5  595G                    /appsdata/prod/sis/retro5  poolv401_23
# poolv401_24  - 1.2T
FS_appsdata_prod_sis_retro6  595G                    /appsdata/prod/sis/retro6  poolv401_24
FS_appsdata_prod_sis_retro7  595G                    /appsdata/prod/sis/retro7  poolv401_24
# poolv401_25 - 800GB
FS_appsdata_prod_sis_retro8   350G                    /appsdata/prod/sis/retro8 poolv401_25
FS_apdata_prod_sis_retro9  350G                    /appsdata/prod/sis/retro9 poolv401_25
# poolv401_26  - 1.2TB
FS_sis     22G                     /sis  poolv401_26
FS_sis_cbrtr     250G                    /sis/cbrtr poolv401_26
FS_sis_hve      700G                    /sis/hve poolv401_26
FS_sis_income    150G                    /sis/income poolv401_26
# poolv401_27 - 1TB
FS_sis_hve_archive   595G                    /sis/hve/archive poolv401_27
FS_sis_hve_archive2     395G                    /sis/hve/archive2 poolv401_27
# poolv401_28  - 1.2TB
FS_sis_hve_data   495G                    /sis/hve/data  poolv401_28
FS_sis_hve_data_acxiom   195G                    /sis/hve/data/acxiom poolv401_28
FS_sis_hve_data_comps   195G                    /sis/hve/data/comps poolv401_28
FS_appraise_latest      299G                    /sis/hve/data/property/appraisal/latest  poolv401_28
# poolv401_29  - 1.2TB
FS_sis_hve_data_firsta      200G                    /sis/hve/data/firsta   poolv401_29
FS_sis_hve_data_property_firsta   390G                    /sis/hve/data/property/firsta  poolv401_29
FS_sis_hve_data_p_fir_rawdata    600G                    /sis/hve/data/property/firsta/rawdata poolv401_29
# poolv401_30  - 1.2TB
FS_firsta_latest                    300G                    /sis/hve/data/property/firsta/latest
FS_sis_hve_data_prop_fir_pscrub    500G                    /sis/hve/data/property/firsta/prescrub
FS_sis_hve_data_prop_idm     300G                    /sis/hve/data/property/idm  poolv401_30
# poolv401_31 - 1TB
FS_idm_latest     300G                    /sis/hve/data/property/idm/latest poolv401_31
FS_sis_hve_data_prop_idm_prescr   275G                    /sis/hve/data/property/idm/prescrub poolv401_31
FS_sis_hve_data_rs   400G                    /sis/hve/data/rs   poolv401_31
# poolv401_32 - 1TB
FS_sis_hve_data_prop_idm_rawdat  495G                    /sis/hve/data/property/idm/rawdata poolv401_32
FS_sis_hve_retro    495G                    /sis/hve/retro    poolv401_32
# poolv401_33 - 1TB
FS_sis_hve_data_seeds    250G                    /sis/hve/data/seeds  poolv401_33
FS_sis_hve_data_seeds_raw    280G                    /sis/hve/data/seeds/raw  poolv401_33
FS_sis_hve_distress  300G                    /sis/hve/distress  poolv401_33
FS_sis_hve_ffs    5.0G                    /sis/hve/ffs   poolv401_33
# poolv401_34 - 1TB
FS_sis_hve_forecast   300G                    /sis/hve/forecast  poolv401_34
FS_sis_hve_hedonic   140G                    /sis/hve/hedonic  poolv401_34
FS_sis_hve_rnd2   350G                    /sis/hve/rnd2   poolv401_34
FS_sis_hve_tablescrap          150G                    /sis/hve/tablescrap  poolv401_34
# poolv401_35 - 600GB
FS_sis_hve_testdata    300G                    /sis/hve/testdata poolv401_35
FS_sis_rnd     200G                    /sis/rnd  poolv401_35
# ================================================================================
# df -h | grep z01 | awk '{print $1 "\t\t\t" $2 "\t\t\t" $6}'
# for i in `cat /tmp/a`; do df -h $i | tail -1; done

Once your config file is ready, use the following script to create the filesystems.

Copy the following script to /var/tmp and run the script
# cd /var/tmp; vi create_zfs.sh; chmod u+x create_zfs.sh; ./create_zfs.sh

# cat /var/tmp/create_zfs.sh
# Sam Bhusal; Only for personal use
# Automate FS creation task using zfs. This script tested and verified on Solaris 10.
# Mon Feb 24 12:13:24 EST 2014
# Update: Thu Aug 14 09:55:12 EDT 2014
# This is a config file for sybase FS.
# Volume       Size (GB) mPoint         pool
#FS_opt_oracle  10G     /opt/oracle     z01p03
#opt_sasbi      25G     /opt/sasbi      z01p03
#pkgs           50G     /pkgs           z01p03
if [ `/usr/bin/whoami` != "root" ]; then
        echo "You must be root to run this script"
        exit 1
fi
# Read the config file and create the filesystems
/bin/cat mypool_config.txt | grep -v "^#" | while read myvol mysize mymnt mypool
do
        # create FS
        zfs create -o mountpoint=${mymnt} -o quota=${mysize} ${mypool}/${myvol}
#       zfs create -o mountpoint=/tmp/abcde -o quota=10G poolv401_1/FS_TMP_ABCDE
# For raw device
#       zfs create -V ${mysize} ${mypool}/${myvol}
        # Check whether the create succeeded.
        if [ $? -eq 0 ]; then
                echo "Successfully Created ${mymnt} filesystem on ${mypool} pool."
                df -h ${mymnt}
        else
                echo "Failed, please review the error"
                # exit
        fi
done
echo " ----------------end----------------"
# Backout Plan
# List your filesystems
# zfs list
# zfs destroy ${mypool}/${myvol}

Once the filesystems are created, verify everything, such as size and mountpoint,
then change the ownership. To change it, run the script.

Changing the interface eth2 to eth0 on Red Hat 6.3

This is what I got from application team.

Hi Engineering Team,
        Could you please configure all the HW addresses to point to eth0 rather than eth2 on all the hosts? We are having licensing issues with this. These are new VM builds that are having these issues.

OK, these servers were built using VMware; in fact, each new server is a clone of an existing system. On Red Hat 5.x the clone kept the same interface name: after the build completed, we added a new interface and used ethtool to verify the correct one. On Red Hat 6.x, however, once the server is built the interface comes up as eth2 instead. Networking still worked perfectly, but our application team runs software that is licensed by interface count. I think it has a bug, but it queries the number of interfaces: since the active interface on the system was eth2, it assumed we were using three interfaces, so we would have to pay the license for three interfaces rather than one. They therefore requested that we change the interface from eth2 to eth0. Here is what you have to do:

1. In the persistent rules file (/etc/udev/rules.d/70-persistent-net.rules), comment out the first two interfaces and change the name of the third to eth0.

[root@lnxprodsrv02 ~]# cd /etc/udev/rules.d/
[root@lnxprodsrv02 rules.d]# ls -l 70-persistent-net.rules
-rw-r--r-- 1 root root 754 May  9 16:48 70-persistent-net.rules

[root@lnxprodsrv02 rules.d]# cat 70-persistent-net.rules
# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:91:25:12", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:91:23:63", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:bf:52:5d", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

[root@lnxprodsrv02 rules.d]# cp -p /etc/sysconfig/network-scripts/ifcfg-eth2 /etc/sysconfig/network-scripts/ifcfg-eth2.origg

[root@lnxprodsrv02 rules.d]# vi /etc/udev/rules.d/70-persistent-net.rules
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:91:25:12", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x100f (e1000)
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:91:23:63", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:bf:69:3f", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# nexec lnxprodsrv02 mv /etc/sysconfig/network-scripts/route6-eth2 /etc/sysconfig/network-scripts/route6-eth0
# nexec lnxprodsrv02 vi /etc/sysconfig/network-scripts/ifcfg-eth0

2. In /etc/sysconfig/network-scripts/ifcfg-eth2, add the HWADDR of your new eth0 and change the device to eth0. Then rename the file to ifcfg-eth0.

[root@lnxprodsrv02 rules.d]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
check_link_down() {
 return 1;
}

[root@lnxprodsrv02 rules.d]# cat /etc/sysconfig/network

[root@lnxprodsrv02 rules.d]# ifconfig -a
eth2      Link encap:Ethernet  HWaddr 00:50:56:BF:52:5D
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::250:56ff:febf:525d/64 Scope:Link
          RX packets:5489372 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2004461 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4371371748 (4.0 GiB)  TX bytes:10531360073 (9.8 GiB)
lo        Link encap:Local Loopback
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:159886 errors:0 dropped:0 overruns:0 frame:0
          TX packets:159886 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:28644597 (27.3 MiB)  TX bytes:28644597 (27.3 MiB)

3. Rename route-eth2 to route-eth0 (and route6-eth2 to route6-eth0):
mv /etc/sysconfig/network-scripts/route-eth2 /etc/sysconfig/network-scripts/route-eth0
mv /etc/sysconfig/network-scripts/route6-eth2 /etc/sysconfig/network-scripts/route6-eth0

4. Reboot the server.

You should now have the new eth0 interface.

Another way of doing the same task is to make a copy of the interface config, delete all the interfaces from VMware, add a new interface, and assign the IP address. Reboot to confirm.

Fixing nobody IDs and groups on mounted NAS shares.

1. Login to the server and become root.
$ sudo su -
2. Go to the NFS mount and list the contents.
# cd /opt/app2/dump/
# ls -l
drwxr-xr-x 4 nobody nobody 4096 Sep  4 14:02 sasuser
drwxrwxrwx 3 nobody nobody 4096 Sep  8 11:40 retired
drwxr-xr-x 7 nobody nobody 4096 Sep 26 14:05 opsdata
3. Exit out of the NFS mount.
# cd /

4. Make the following change in the configuration file:
   change the domain from "localdomain" to "expanor.local".
# vi /etc/idmapd.conf
        Verbosity = 0
        Pipefs-Directory = /var/lib/nfs/rpc_pipefs
        #Domain = localdomain
        Domain = expanor.local
        Nobody-User = nobody
        Nobody-Group = nobody
        Method = nsswitch

5. Restart idmapd.
# /etc/init.d/rpcidmapd restart
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
6. Remount the NFS share and verify the contents.
# umount /opt/app2/dump/; mount /opt/app2/dump/
# cd /opt/app2/dump/
# ls -l
drwxr-xr-x 4 dbusr dbmgr 4096 Sep  4 14:02 sasuser
drwxrwxrwx 3 dbusr dbmgr 4096 Sep  8 11:40 retired
drwxr-xr-x 7 dbusr dbmgr 4096 Sep 26 14:05 opsdata

Check to see if a port is open on a remote server

Since the telnet client is not installed by default, you can use the netcat (nc) command instead. Netcat has lots of features on top of TCP; here we discuss checking port status.

I had a situation where I needed to check a particular port; here is what I did.
To verify that TSM works, we need a bi-directional connection, with port 1500 open on both ends.

1. Testing from tsmbksrv22 to lnxprodsrv02
$ nc -z -v -w 1 1500
nc: connect to port 1500 (tcp) failed: Connection refused

2. Testing from lnxprodsrv02 to lnxprodsrv02
$ nc -z -v -w 1 1500
Connection to 1500 port [tcp/vlsi-lm] succeeded!

3. Testing from lnxprodsrv02 to tsmbksrv22
$ nc -z -v -w 1 1500
Connection to 1500 port [tcp/vlsi-lm] succeeded!

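If nc itself is not available, bash's built-in /dev/tcp pseudo-device can run the same reachability test. A minimal sketch; the host and port below are local examples, not the TSM hosts above, and it assumes bash plus the GNU coreutils timeout command:

```shell
# check_port: return 0 if a TCP connect to host:port succeeds, non-zero otherwise.
# Uses bash's /dev/tcp pseudo-device; assumes GNU coreutils "timeout" is present.
check_port() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Port 1 on the loopback is almost always closed:
check_port 127.0.0.1 1 && echo "port open" || echo "port closed"
```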

fsck the root filesystem on Red Hat 6.x

Boot the server into rescue mode (from the installation media).

To fsck a filesystem, you have to unmount it.
# chroot /mnt/sysimage
# vi /etc/fstab
Comment out the filesystems to be checked, then exit the chroot and unmount them:
# mount | grep /mnt/sysimage | awk '{print $3}' | sed -n -e '2,$p' | sort -r | awk '{print "umount " $1}'  | sh
# cat /etc/fstab | egrep "^###" | awk '{print "e2fsck -y " $1}' | tr -d '#' | bash
1. fsck your root device:
# e2fsck -f /dev/sda3

2. If you are using LVM for the root filesystem, do the following to activate the LVM:
# lvm pvscan
# lvm vgscan
Reading all physical volumes. This may take a while...
Found volume group "RootVG" using metadata type lvm2
# lvm lvscan
INACTIVE '/dev/RootVG/Lv_Root' [x.xx GB] inherit

# lvm vgchange -ay
1 logical volume(s) in volume group "RootVG" now active

3. Run fsck on your root filesystem (the LVM logical volume) to check its integrity:
# e2fsck -f /dev/RootVG/Lv_Root

4. Reboot the system.


Some OS migration steps

Here are some basic system, application, and database migration tasks.

System settings

1. Verify the date format on the old server and map the date format on the new server.
2. Check the /etc/project file (for Solaris) to see if there are any special kernel values.
3. Check Kerberos settings (keytab, nfssec.conf, etc.).
4. Check the memory/swap assigned to the old server and allocate a reasonable (or the same) amount of swap space on the new server.
5. Check the entries in /etc/syslog.conf to see if any app logs are configured.
6. Check /etc/system and /etc/services for any protocol configurations, including ports.
7. Check /usr, /usr/lib, /opt, and /usr/local for java, jre, or any other links and create the links on the new server. [for DB2 server/client]
8. Check crontab entries for root and inform users to copy their cron entries.
9. Keep a record of all the filesystems, processes, and IP addresses (df -h, ps -ef, ifconfig).
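Step 9 above is worth scripting so the baseline record survives the migration. A minimal sketch; the /var/tmp/migration path is just an example:

```shell
# Capture a baseline of the old server before migration: filesystems,
# processes, and network configuration, for later comparison with the new host.
mkdir -p /var/tmp/migration
df -h  > /var/tmp/migration/df.out
ps -ef > /var/tmp/migration/ps.out
# ifconfig -a on Solaris/older Linux, ip addr on newer Linux
( ifconfig -a || ip addr ) > /var/tmp/migration/net.out 2>/dev/null
```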

Filesystem settings

1. Check all filesystems on the system. ZFS filesystems have no vfstab entries, so check them one by one and create them on the target host.
   Also check vfstab (fstab) entries for app filesystems and NAS filesystems. Check whether any NAS filesystems have Kerberos enabled.
2. Copy any keys, such as keytabs, and apply them to the new server.
3. Review and copy all needed entries from /usr/local to the new server.
4. Check the autofs entries in /etc/auto_master, auto_home, and auto_direct for automounted filesystems (Linux/Solaris).
5. Make sure to open a ticket with the storage team to allocate enough LUNs to copy the data.
6. Start the initial rsync from the old server to the new server.
7. Verify the autofs entries, and if links are created, create them on the target host as well.
8. Check if there are any directories local to the system other than OS-related ones. Copy them if needed by creating a new mountpoint; no local copies.
9. Also check the /usr, /opt, and /var filesystems for any directories other than OS-related ones; if found, copy them to the new server.
10. Check /usr, /opt, and /var for databases such as Sybase, DB2, or Oracle and any links created. Verify against the target host.
11. Work closely with the database team to build the filesystems (raw devices) for their database dumps.

Client installations

1. Verify that the ClearCase client is installed (ps -ef | grep clearcase, or look for the /view filesystem).
2. Check for MQ Series/MQM (ps -ef | grep mq, pkginfo -l mqm; look for /opt/mqm and /var/mqm).
3. Check /opt to see if there are any application/database clients, such as Oracle, Sybase, DB2, Autosys, PGP, etc.
4. Confirm DB2 client versions with fixpack levels.
5. Verify the Java versions on the old servers and keep the SAME Java versions on the new server; make sure to keep the same links as the old server. They are critical, and applications will fail if they are not copied correctly.
6. Check Perl versions, modules, and the Perl path.

Startup Scripts

1. Check /etc/rc2.d and /etc/rc3.d for startup scripts.
2. Check /etc/init.d for startup scripts.
3. Check /etc/rc0.d and /etc/rc1.d for kill scripts.
SSH Keys
1. Copy /etc/ssh/authorized_keys from the old server to the new server if it exists.
2. Copy the old host keys (.rsa and .dsa) to the new server (back up the existing host keys in /etc/ssh on the new server first).
3. Check the number of keys on both servers; they should match.
4. If a new key needs to be added for a user, do not overwrite the file; append to it.
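The key-count check above is easy to script, since authorized_keys holds one key per line. A sketch with a scratch example file (the /tmp path and key contents are made up for the demo):

```shell
# Count the keys in an authorized_keys file: one key per non-blank,
# non-comment line. Run on both servers and compare the counts.
count_keys() {
    grep -cEv '^[[:space:]]*(#|$)' "$1"
}

# Demo with a scratch file; in practice point at the real authorized_keys files.
printf '# comment\nssh-rsa AAAA... user@old\nssh-dss BBBB... user@old\n' > /tmp/authorized_keys.demo
count_keys /tmp/authorized_keys.demo
```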
Quality Check
1. Re-run the rsyncs on all filesystems.
2. Review every possible aspect of the OS, filesystems, applications, and databases.
Final Rsyncs
1. Stop eTrust/the firewall on both the old and new servers and run the final rsync with the --update option.
2. Verify the number of files on each filesystem with ls -lR | wc -l and compare the counts between the old and new servers.
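Note that ls -lR | wc -l also counts "total" lines and blank separators; find gives an exact file count to compare between servers. A sketch using a scratch directory (the /tmp path is just a demo):

```shell
# Count only regular files under a filesystem; run the same command on the
# old and new servers and compare the two numbers.
mkdir -p /tmp/fscount.demo/sub
touch /tmp/fscount.demo/a /tmp/fscount.demo/sub/b
find /tmp/fscount.demo -type f | wc -l
```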

DNS change

1. Once everything is verified, shut down your old host.
2. Coordinate with the person who manages DNS to create an alias (CNAME record) from the old host to the new host.
3. Once the change is made, reboot your new server.
4. Once the server is up, use the old name to log in to the system.
5. Verify your access, including sudo access.
6. Check application, system, and other processes.
7. Check the logs to see if there are any errors.
8. Once you have verified everything, coordinate with the application, database, and other teams so they can verify their components.

Next day,

1. Check your email for issues with the migration.
2. If something is breaking, fix it.
3. If you need to copy something from the old host, bring the server up by logging in to the console, and once it is up, log in to the server using the IP address.

Copying text file using dd command

$ cat >myjunkss
THis is a testing of a file to test dd command
$ cat myjunkss | dd of=/var/tmp/myddout.txt oflag=append conv-notrunc
dd: bad argument: "oflag=append"

$ ls -l /var/tmp/myddout.txt
/var/tmp/myddout.txt: No such file or directory

$ cat myjunkss | dd of=/var/tmp/myddout.txt
0+1 records in
0+1 records out

$ ls -l /var/tmp/myddout.txt
-rw-r--r--   1 c13637   sysadmin      47 Nov 18 14:07 /var/tmp/myddout.txt

$ cat /var/tmp/myddout.txt
THis is a testing of a file to test dd command
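The "bad argument" error above comes from Solaris /usr/bin/dd, which does not support the GNU oflag extension. On Linux with GNU coreutils the append works; a minimal sketch (the /tmp path is just an example):

```shell
# GNU dd (Linux coreutils) supports oflag=append; it must be combined with
# conv=notrunc so dd does not truncate the output file before appending.
printf 'first line\n'  > /tmp/myddout.txt
printf 'second line\n' | dd of=/tmp/myddout.txt oflag=append conv=notrunc 2>/dev/null
cat /tmp/myddout.txt
```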

Stop and start the process on Unix/Linux

Steps to stop/start a hung or running process on Linux
1. Check for the abcd process.
# ps -ef | grep abcd
Kill the abcd process:
# ps -ef | grep abcd | grep -v grep | awk '{print $2}' | xargs kill -9

2. Start abcd application
# /etc/init.d/abcd  start

3. Verify the abcd processes are running
# ps -ef | grep abcd
abcd    6829     1  0 10:36 ?        00:00:00 /usr/adm/abcd/bin/abcdagent -b /usr/adm/abcd5.10 -a
abcd    6830  6829  0 11:28 ?        00:00:00 abcdcollect -I noInstance -B /usr/adm/abcd5.10

4. Verify the server is listening on port 5019.
# netstat  -an | grep 5019
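A modern alternative to the ps | grep | grep -v grep | awk | xargs chain is pgrep/pkill, which match the process table directly and never match themselves. A sketch using a throwaway dummy process (the proc_... name is invented for the demo; substitute your real process name, e.g. abcd):

```shell
# Start a dummy long-running process under a distinctive runtime-built name.
marker="demo_$$"
bash -c "exec -a proc_${marker} sleep 60" &
sleep 1

# pgrep -f matches against the full command line and excludes itself
pgrep -f "proc_${marker}"

# pkill sends SIGKILL to every match: same effect as the kill -9 chain above
pkill -9 -f "proc_${marker}"
```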

Tuesday, November 4, 2014

Changing timezone in Solaris 11

How to change timezone on Solaris 11

root@SunSolaris11:/apps# date
Thursday, October 30, 2014 11:53:22 AM EST
root@SunSolaris11:/apps# grep TZ /etc/default/init
root@SunSolaris11:/apps# svccfg -s timezone:default setprop timezone/localtime= astring: US/Eastern
root@SunSolaris11:/apps# date
Thursday, October 30, 2014 11:55:43 AM EST
root@SunSolaris11:/apps# svcadm refresh timezone:default
root@SunSolaris11:/apps# date
Thursday, October 30, 2014 12:55:55 PM EDT
root@SunSolaris11:/apps# grep TZ /etc/default/init
root@SunSolaris11:/apps# svccfg -s svc:/system/environment:init
svc:/system/environment:init> list
svc:/system/environment:init> listprop
umask                              application
umask/umask                       astring     022
umask/value_authorization         astring     solaris.smf.value.environment
environment                        application
environment/LC_ALL                astring
environment/LC_COLLATE            astring
svc:/system/environment:init> quit
root@SunSolaris11:/apps# cat /etc/default/init
root@SunSolaris11:/apps# svcadm refresh timezone:default
root@SunSolaris11:/apps# svcadm refresh svc:/system/environment
root@SunSolaris11:/apps# grep TZ /etc/default/init