Tuesday, November 29, 2016

Daily Checklist for a System Administrator

1. Check your email and calendar
   - reply to your mail right away
   - flag important meetings


2. Check your ticket queue
   - determine the urgency of each ticket
   - prioritize the tasks


3. Make a list of servers with problems. Check the log /var/adm/messages:
    # tail -2000 /var/adm/messages
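To surface problems quickly, you can filter the log for common failure keywords (the keyword list below is a starting point, not from the original checklist):
    # egrep -i "error|warn|fail|panic" /var/adm/messages | tail -50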


4. Also check the following:


 a. Disk and filesystem utilization
    # df -h
    # cd /mountpoint; du -sh *
    # iostat -En
    # iostat -Exn
    # zpool status
    # zfs list
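For a quick sweep across filesystems, an awk one-liner (a sketch; the 90% threshold is an assumption to adjust) prints only the mounts above the limit:
    # df -k | awk 'NR>1 && $5+0 > 90 { print $6, $5 }'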


 b. CPU utilization
    # w
    # prstat
    # vmstat
    # sar -u


 c. Memory utilization
    # prstat
    # vmstat
    # swap -s; swap -l
    # top    (if installed; check under /usr/sfw/bin)


 d. Network statistics
    # netstat -rn    (routing table)
    # netstat -an    (all sockets and connections)
    # netstat -in    (interface packet and error counters)
    # netstat -m     (STREAMS memory statistics)




5. Once you are familiar with the manual process, write the steps into a file and develop a script (a sketch follows below).
   - if your site allows email, put the script on cron and send the output through email
   - or use monitoring tools to check the status of your systems.
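For example, a minimal morning-check script might look like this (a sketch only; the output path, keywords, and mail recipient are assumptions to adapt):

#!/bin/sh
# daily_check.sh - morning health snapshot (sketch; adjust paths and recipient)
OUT=/var/tmp/daily_check.`date +%m%d%Y`
{
  echo "==== `hostname` - `date` ===="
  echo "--- filesystems ---";   df -h
  echo "--- zpool health ---";  zpool status -x
  echo "--- cpu / memory ---";  vmstat 5 3
  echo "--- swap ---";          swap -s
  echo "--- recent log errors ---"
  tail -2000 /var/adm/messages | egrep -i "error|warn|fail"
} > $OUT 2>&1
# mail the snapshot if your site allows it
mailx -s "daily check: `hostname`" admin@example.com < $OUT

And a crontab entry to run it each weekday morning (the schedule is an example):
# crontab -e
30 6 * * 1-5 /usr/local/bin/daily_check.sh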

Solaris 10 - Q4 2016 Patching Instructions

1. List and Delete old Boot Environment
# lustatus
# ludelete aBE01252016
# lustatus
# zfs list

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If ludelete fails, clean up manually as shown below. A failed attempt looks like this:
# ludelete aBE01252016
ERROR: boot environment </.alt.tmp.b-X1f.mnt> does not exist
ERROR: </.alt.tmp.b-X1f.mnt> is not a valid root device (not a block device)
ERROR: no file system is mounted on </.alt.tmp.b-X1f.mnt>
ERROR: </.alt.tmp.b-X1f.mnt> is not a root device, mount point, or name of
a currently mounted boot environment
ERROR: Unable to delete boot environment <aBE01252016>.

# cp -rp /etc/lu /var/tmp/lu.deleted.BE.10182016
# cd /etc/lu
# rm ICF.* INODE.* .??* ./.alt.* tmp/*
# mv  /etc/lutab /var/tmp/lutab.deleted.BE.10182016
# lustatus
# zfs list -t snapshot
# zfs destroy -R rpool2/ROOT/aBE01252016@pBE04212016

# lucreate -c aBE01252016 -n aBE07202016
# lumount aBE07202016 /alt
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2. Back up configuration files and remove old logs
Back up /etc/hosts, /etc/nsswitch.conf, and /etc/mail/sendmail.cf, plus the current df, zfs list, zpool list, and zpool status output:
cp -p /etc/hosts /etc/hosts.10182016
cp -p /etc/nsswitch.conf /etc/nsswitch.conf.10182016
cp -p /etc/mail/sendmail.cf /etc/mail/sendmail.cf.10182016
df -h >/var/tmp/df.out.10182016
zfs list >/var/tmp/myzfs.out.10182016
zpool list> /var/tmp/zpool.list.10182016
zpool status >/var/tmp/zpool.status.10182016
cat /etc/lu/ICF.1>/var/tmp/ICF1.bk.10182016
cat /etc/lu/ICF.2>/var/tmp/ICF2.bk.10182016
cd /var/adm; rm pacct.* auditlog.0 auditlog.1
cd /var/tmp; rm -fr DCE* SPX* TCP* US* RAW* NMP* net* ISPX* VI* BEQ* DEC* ITCP*
cd /var/audit; /usr/local/bin/purge_audit.sh; cd /var/tmp
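The datestamped copies above lend themselves to a short loop; a sketch (the STAMP variable and the file list simply mirror the commands above):

STAMP=`date +%m%d%Y`
for f in /etc/hosts /etc/nsswitch.conf /etc/mail/sendmail.cf
do
  cp -p $f $f.$STAMP
done
df -h > /var/tmp/df.out.$STAMP
zfs list > /var/tmp/myzfs.out.$STAMP
zpool list > /var/tmp/zpool.list.$STAMP
zpool status > /var/tmp/zpool.status.$STAMP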
 
3. Create New BE
# lucreate -n pBE_10182016
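On a ZFS root pool, lucreate creates the new BE as a snapshot and clone of the current root, so it completes quickly. Confirm the new BE and its datasets exist:
# lustatus
# zfs list -t all | grep pBE_10182016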
4. Extract the patch bundle
# cd /data/patch_sol; unzip /repository/Solaris/10_Recommended_CPU_2016-10.zip
Note: Verify the space: # df -h /
5. Mount the new BE under /alt
# lumount pBE_10182016 /alt
6. Apply OS patches to /alt: apply the prerequisites first, then the patch cluster
# cd /data/patch_sol/10_Recommended_CPU_2016-10
# ./installcluster --apply-prereq --s10patchset
7. Then Apply Patch cluster to AltBE
# nohup ./installcluster -R /alt --s10patchset --disable-space-check > /alt/opt/Patches/10_Recommended.out 2>&1 &
# tail -f /alt/opt/Patches/10_Recommended.out
or
# nohup ./installcluster -R /alt --s10patchset > /alt/opt/Patches/10_Recommended.out 2>&1 &
# df -h / /alt
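Once the cluster run finishes, scan the log for patches that did not install (the keywords are a guess at what to search for, not from the original notes):
# egrep -i "fail|error" /alt/opt/Patches/10_Recommended.out | more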

------------------JAVA patch------------------------
8. Apply the Java patches
# unzip -q 147692-82.zip; unzip -q 147693-82.zip; unzip -q 151672-03.zip
# patchadd -R /alt 147692-82; patchadd -R /alt 147693-82; patchadd -R /alt 151672-03
or
# for i in 147692-82 147693-82 151672-03; do patchadd -R /alt $i; done
9. Verify the patch level on the alternate BE
# showrev -p -R /alt | grep 150400-40
10. Re-link the default java (/usr/bin/java resolves through the /usr/java link) and verify the version
# cd /alt/usr
# rm java; ln -s jdk/latest java
# ./java/bin/java -version
# /usr/java/bin/java -version; /alt/usr/java/bin/java -version
-----------------------------------------------
11. After patching completes, run bootadm on the alternate mount (Must Do)
# bootadm update-archive -R /alt
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now, on the control domain, record each LDom's name and console port for use during shutdown and boot:
# ldm list
# ldm list | egrep -v "inactive|NAME|primary" | awk '{print $1 "\t" $4}' | tee /var/tmp/LDOM_with_Port.txt /alt/var/tmp/LDOM_with_Port.txt

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
12. Unmount the new BE (make sure you are out of the /alt directory)
# luumount pBE_10182016

13. Remove the patch folder
# cd /data/patch_sol
# rm -fr 10_Recommended_CPU_2016-10
14. Activate BE
# luactivate <BE>
# luactivate pBE_10182016
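Confirm the activation with lustatus; the new BE should show "yes" under "Active On Reboot":
# lustatus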


15. If it is a physical server with LDoms, shut down the LDoms first.
Verify that all LDoms are down before rebooting.
On the control domain:
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $1}'`
> do
> ldm stop $i
> done
#
or, as one-liners:
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $1}'`; do ldm stop $i; done
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $2}'`; do telnet 0 $i; done
Wait until all LDoms are down (see the loop sketched below).
Once all LDoms are down, use init 6 or init 0 to reboot the physical system.
If for some reason you cannot see the LDoms, power-cycle the host.
Log in to the SP console and perform the following tasks:
stop /SYS
start /SYS
start /SP/console
This will bring the control domain, and then the LDoms, back up, and it is always a good practice to watch from the console.
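Rather than re-running ldm list by hand, a small loop (a sketch) waits until no guest domain is still listed as active:
# while ldm list | egrep -v "NAME|primary" | grep -w active > /dev/null
> do
>   sleep 10
> done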

16. Reboot the system and verify the patch level
# sync;sync;sync
# init 0
note: do not use the reboot command; it does not switch to the new boot environment.
Upon reboot, verify the kernel (patch) level:
# uname -a
SunOS serp-mw-v12 5.10 Generic_150400-38 sun4v sparc sun4v

17. Once the control domain comes online and you have verified it, bring the LDoms up.
# svcs -a | grep -i mile
Make sure all milestones are online.
Once the physical system is fully up, the LDoms will be sitting at the ok prompt; log in to each console and use the boot command to boot them, then wait until you can log in through ssh (see the probe sketched below).
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $1}'`; do ldm start $i; done
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $2}'`
> do
> telnet 0 $i
> done
#
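To know when the guests are reachable again, you can probe each one over ssh (the hostnames here are placeholders for your LDom names):
# for h in ldom01 ldom02
> do
>   until ssh -o ConnectTimeout=5 $h true > /dev/null 2>&1
>   do
>     sleep 30
>   done
>   echo "$h is up"
> done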


==============================================

# luactivate aBE_072016
A Live Upgrade Sync operation will be performed on startup of boot environment <aBE_072016>.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/pBE04212016
     zfs set mountpoint=<mountpointName> rpool/ROOT/pBE04212016
     zfs mount rpool/ROOT/pBE04212016
4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
     <mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. umount /mnt
7. zfs set mountpoint=/ rpool/ROOT/pBE04212016
8. Exit Single User mode and reboot the machine.
**********************************************************************


^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sendmail issue after patching
Patching may replace /etc/mail/sendmail.cf and submit.cf; restore the saved .old copies (the commands below are run from /etc/mail):
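Before restoring, a quick diff (not part of the original notes) shows what the patch changed:
# cd /etc/mail
# diff sendmail.cf sendmail.cf.old | head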

# cp submit.cf submit.cf.bad.08232016
# cp sendmail.cf sendmail.cf.bad.08232016
# cp -p sendmail.cf.old sendmail.cf
# cp -p submit.cf.old submit.cf
# svcs -a | grep mail
online         Aug_19   svc:/network/sendmail-client:default
online         Aug_19   svc:/network/smtp:sendmail
# svcadm restart  svc:/network/sendmail-client:default
#  mail -s "Test mail from `hostname`" testuser@testdomain.com
Test mail
# mailq
/var/spool/mqueue is empty
                Total requests: 0
#
Verify mail is delivered.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Solaris 10 - RBAC: Giving a user read-only access to a directory

1. Create a user account
# useradd -d /export/home/jmiles -m -c "John Miles" -s /bin/bash jmiles
# passwd jmiles
# passwd -f jmiles
# groups jmiles
other
2. Enable CAC login
# vi /etc/passwd-login.allow
# vi /etc/cac-login.allow
# id abinitio
uid=1006(abinitio) gid=102(dba)
Note: record this UID; it is used in the exec_attr entries below.
^^^^^^^^^^^^^^^^^^^ RBAC ^^^^^^^^^^^^^^
3. Create a role
# roleadd -c "Abinitio Read only access" -u 5006 -d /export/home/abinitio_ro -m abinitio_ro
# passwd abinitio_ro
# tail /etc/user_attr
4. Create a profile and add privileges to it
# cd /etc/security
# cp -p prof_attr prof_attr.11292016
# cp -p exec_attr exec_attr.11292016
# vi prof_attr and add the line below
Abinitio_ro:::Abinitio Read Only Rights:
# grep Abinitio_ro /etc/security/prof_attr
Abinitio_ro:::Abinitio Read Only Rights:
# vi exec_attr and add the lines below
# grep Abinitio_ro /etc/security/exec_attr
Abinitio_ro:suser:cmd:::/usr/bin/cat:uid=1006
Abinitio_ro:suser:cmd:::/usr/bin/more:uid=1006
Abinitio_ro:suser:cmd:::/usr/bin/less:uid=1006

5. Assign the profile to the role and add the role to the user
# rolemod -P Abinitio_ro abinitio_ro
# usermod -R abinitio_ro jmiles
Repeat the usermod for each user who needs access.

6. Verify the entries
# tail /etc/user_attr
abinitio_ro::::type=role;profiles=Abinitio_ro
jmiles::::type=normal;roles=abinitio_ro
kbhusal::::type=normal;roles=abinitio_ro
7. When a user assumes the role, it prompts for the role password. So, allow users to access the role without supplying a password.
a. Using sudo
Create a user alias and allow its members to su to the role without a password:
# visudo
User alias:
User_Alias ABINITIO_RO = jmiles
User privilege:
ABINITIO_RO ALL=NOPASSWD: /usr/bin/su - abinitio_ro
b. Using RBAC
Enable a user to use their own password to assume the role:
$ rolemod -K roleauth=user <rolename>
$ rolemod -K roleauth=user abinitio_ro
Note: we are using sudo for this task.
8. Change the directory permissions to add the read and execute bits for others
# cd /data/abinitio/sd/ai_data_mount/data/serial
# find . -type f -perm 770 -print
# find . -type f -perm 660 -print
# find . -type d -print | more
# find . -type d -print -exec ls -ld {} \; | more
# find . -type d -perm -005  -exec ls -ld {} \; | more
# find . -type d ! -perm -005  -exec ls -ld {} \; | more
# find . -type d ! -perm -005  -exec chmod o+rx {} \;
# find . -type d ! -perm -005  -exec ls -ld {} \; | more
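A quick count verifies the chmod covered everything; zero means no directory is missing the bits:
# find . -type d ! -perm -005 | wc -l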

$ profiles abinitio_ro
$ roles
Now log in as a normal user and run the following commands:
$ profiles abinitio_ro
$ profiles -l abinitio_ro
$ roles
and access the role:
$ sudo su - abinitio_ro
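Putting it together, a session for jmiles might look like this (the target file path is hypothetical):
$ sudo su - abinitio_ro
$ profiles
$ cat /data/abinitio/sd/ai_data_mount/data/serial/somefile
Note: the role's default shell is a profile shell (pfsh), so cat, more, and less run with uid 1006 as defined in exec_attr.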