Solaris 10 Q4 2016 Patching Instructions
1. List and Delete old Boot Environment
# lustatus
# ludelete aBE01252016
# lustatus
# zfs list
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If ludelete fails with errors like the following, perform this cleanup:
# ludelete aBE01252016
ERROR: boot environment </.alt.tmp.b-X1f.mnt> does not exist
ERROR: </.alt.tmp.b-X1f.mnt> is not a valid root device (not a block device)
ERROR: no file system is mounted on </.alt.tmp.b-X1f.mnt>
ERROR: </.alt.tmp.b-X1f.mnt> is not a root device, mount point, or name of
a currently mounted boot environment
ERROR: environment <aBE01252016>.
# cp -rp /etc/lu /var/tmp/lu.deleted.BE.10182016
# cd /etc/lu
# rm ICF.* INODE.* .??* ./.alt.* tmp/*
# mv /etc/lutab /var/tmp/lutab.deleted.BE.10182016
# lustatus
# zfs list -t snapshot
# zfs destroy -R rpool2/ROOT/aBE01252016@pBE04212016
# lucreate -c aBE01252016 -n aBE07202016
# lumount aBE07202016 /alt
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2. Back up configuration and remove old logs
a. /etc/hosts, /etc/nsswitch.conf, /etc/mail/sendmail.cf
b. df -h >/var/tmp/df.out
c. zfs list >/var/tmp/myzfs.out
d. zpool list >/var/tmp/zpool.list
e. zpool status >/var/tmp/zpool.status
cp -p /etc/hosts /etc/hosts.10182016
cp -p /etc/nsswitch.conf /etc/nsswitch.conf.10182016
cp -p /etc/mail/sendmail.cf /etc/mail/sendmail.cf.10182016
df -h >/var/tmp/df.out.10182016
zfs list >/var/tmp/myzfs.out.10182016
zpool list >/var/tmp/zpool.list.10182016
zpool status >/var/tmp/zpool.status.10182016
cat /etc/lu/ICF.1 >/var/tmp/ICF1.bk.10182016
cat /etc/lu/ICF.2 >/var/tmp/ICF2.bk.10182016
cd /var/adm; rm pacct.* auditlog.0 auditlog.1
cd /var/tmp; rm -fr DCE* SPX* TCP* US* RAW* NMP* net* ISPX* VI* BEQ* DEC* ITCP*
cd /var/audit; /usr/local/bin/purge_audit.sh; cd /var/tmp
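The per-file backup commands above can be wrapped in one loop so every file gets the same date suffix. A minimal sketch; to stay self-contained it copies a stand-in file under a mktemp directory rather than touching /etc, but in the real run the list would be /etc/hosts, /etc/nsswitch.conf, and /etc/mail/sendmail.cf:

```shell
# Sketch: back up a list of config files with a single date suffix.
# WORK and the stand-in hosts file are demo scaffolding only.
STAMP=`date +%m%d%Y`
WORK=`mktemp -d`
printf '127.0.0.1 localhost\n' > "$WORK/hosts"   # stand-in for /etc/hosts
for f in "$WORK/hosts"; do
  cp -p "$f" "$f.$STAMP" && echo "backed up $f to $f.$STAMP"
done
```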
3. Create New BE
# lucreate -n pBE_10182016
4. Extract the patch cluster
# cd /data/patch_sol; unzip /repository/Solaris/10_Recommended_CPU_2016-10.zip
Note: Verify the space: # df -h /
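The space check can be made mechanical. A hedged sketch that warns when / has less free space than a chosen threshold; the ~3 GB figure is an assumption, not from this runbook, so size it to your patch bundle:

```shell
# Sketch: warn if / is short on space before unpacking the patch cluster.
# REQUIRED_KB (~3 GB) is an assumed threshold.
REQUIRED_KB=3000000
AVAIL_KB=`df -k / | awk 'NR==2 {print $4}'`
if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
  echo "WARNING: only ${AVAIL_KB} KB free on /" >&2
else
  echo "space OK: ${AVAIL_KB} KB free on /"
fi
```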
5. Mount new BE under /alt
# lumount pBE_10182016 /alt
6. Apply OS patches to /alt. Apply the prerequisite patches first, then the patch cluster.
# cd /data/patch_sol/10_Recommended_CPU_2016-10
# ./installcluster --apply-prereq --s10patchset
7. Then Apply Patch cluster to AltBE
# nohup ./installcluster -R /alt --s10patchset --disable-space-check > /alt/opt/Patches/10_Recommended.out 2>&1 &
# tail -f /alt/opt/Patches/10_Recommended.out
or
# nohup ./installcluster -R /alt --s10patchset > /alt/opt/Patches/10_Recommended.out 2>&1 &
# df -h / /alt
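Before moving on, it is worth scanning the installcluster log for patches that did not install cleanly. A sketch; the heredoc fabricates a sample log so the filter is demonstrable here, and the exact log wording is an assumption, while on the host the file to grep is the nohup output from step 7:

```shell
# Sketch: pull failure lines out of the installcluster log.
# LOG is a fabricated sample; on the host use /alt/opt/Patches/10_Recommended.out.
LOG=`mktemp`
cat > "$LOG" <<'EOF'
Applying 150400-40 ( 1 of 2) ... ok
Applying 151672-03 ( 2 of 2) ... failed (return code 8)
EOF
FAILURES=`grep -i 'fail' "$LOG"`
if [ -n "$FAILURES" ]; then
  echo "failures found:"; echo "$FAILURES"
else
  echo "no failures logged"
fi
```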
------------------JAVA patch------------------------
8. Apply java patch
# unzip -q 147692-82.zip; unzip -q 147693-82.zip; unzip -q 151672-03.zip
# patchadd -R /alt 147692-82; patchadd -R /alt 147693-82; patchadd -R /alt 151672-03
or
# for i in 147692-82 147693-82 151672-03; do patchadd -R /alt $i ; done
9. Verify patch
# showrev -p -R /alt | grep 150400-40
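The same showrev check can be looped over every patch that was applied. A sketch that works from a saved copy of the showrev output; the canned sample below stands in for `showrev -p -R /alt > /var/tmp/showrev.alt` so the loop is runnable anywhere:

```shell
# Sketch: check a list of patch IDs against saved showrev output.
# OUT is a canned sample standing in for the real showrev capture.
OUT=`mktemp`
cat > "$OUT" <<'EOF'
Patch: 150400-40 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu
Patch: 147692-82 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWj6rt
EOF
RESULT=`for p in 150400-40 147692-82 147693-82; do
  grep "Patch: $p" "$OUT" >/dev/null && echo "$p applied" || echo "$p MISSING"
done`
echo "$RESULT"
```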
10. Re-link default java and verify the version
The system default java is /usr/bin/java.
# cd /alt/usr; rm java; ln -s jdk/latest java; ./java/bin/java -version
# /usr/java/bin/java -version; /alt/usr/java/bin/java -version
-----------------------------------------------
11. After patching completes, run bootadm on the alternate mount (Must Do)
# bootadm update-archive -R /alt
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now, On control domain,
# ldm list
# ldm list | egrep -v "inactive|NAME|primary" | awk '{print $1 "\t" $4}' | tee /var/tmp/LDOM_with_Port.txt /alt/var/tmp/LDOM_with_Port.txt
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
12. Unmount the new BE (be sure you're out of /alt dir)
# luumount pBE_10182016
13. Remove Patch Folder
# cd /data/patch_sol
# rm -fr 10_Recommended_CPU_2016-10
14. Activate BE
# luactivate <BE>
# luactivate pBE_10182016
15. If it's a physical server and you have LDOMs, please shut down the LDOMs first.
Verify that all ldoms are down now.
On control domain,
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $1}'`
> do
> ldm stop $i
> done
#
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $1}'`; do ldm stop $i; done
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $2}'`; do telnet 0 $i; done
Wait until all LDOMs are down.
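The wait can be a poll loop on the control domain. A sketch; `ldm_states` is a stand-in for `ldm list | awk 'NR>1 && $1!="primary" {print $2}'`, and a canned sample (both guests already stopped) is used so the loop logic is runnable here:

```shell
# Sketch: poll until no guest domain reports STATE=active.
# ldm_states is a stand-in for the real ldm list pipeline (see lead-in).
ldm_states() { printf 'bound\nbound\n'; }   # canned sample: guests stopped
while ldm_states | grep active >/dev/null; do
  echo "waiting for LDOMs to stop ..."
  sleep 10
done
DONE="all LDOMs are down"
echo "$DONE"
```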
Once all LDOMs are down, use init 6 (or init 0) to reboot the physical system.
If for some reason you can't see the LDOMs, power cycle the host:
log in to the SP console and perform the following tasks,
stop /SYS
start /SYS
start /SP/console
which will bring the host and its LDOMs back up.
This is always a good practice.
16. Reboot the system and verify the patch level
# sync;sync;sync
# init 0
Note: do not use the reboot command; it does not boot into the new boot environment.
Upon reboot, verify the kernel (patch) level:
# uname -a
SunOS serp-mw-v12 5.10 Generic_150400-38 sun4v sparc sun4v
17. Once the control domain comes online and you have verified it, bring the LDOMs up.
# svcs -a | grep -i mile
Make sure all are online.
Once the physical system is fully up, the LDOMs will sit at the ok prompt; log in to each LDOM console and use the boot command to boot them. Wait until you can log in through ssh.
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $1}'`; do ldm start $i; done
# for i in `cat /var/tmp/LDOM_with_Port.txt | awk '{print $2}'`
> do
> telnet 0 $i
> done
#
==============================================
# luactivate aBE_072016
A Live Upgrade Sync operation will be performed on startup of boot environment <aBE_072016>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
zpool import rpool
zfs inherit -r mountpoint rpool/ROOT/pBE04212016
zfs set mountpoint=<mountpointName> rpool/ROOT/pBE04212016
zfs mount rpool/ROOT/pBE04212016
4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
<mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. umount /mnt
7. zfs set mountpoint=/ rpool/ROOT/pBE04212016
8. Exit Single User mode and reboot the machine.
**********************************************************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sendmail issue after patching
# cp submit.cf submit.cf.bad.08232016
# cp sendmail.cf sendmail.cf.bad.08232016
# cp -p sendmail.cf.old sendmail.cf
# cp -p submit.cf.old submit.cf
# svcs -a | grep mail
online Aug_19 svc:/network/sendmail-client:default
online Aug_19 svc:/network/smtp:sendmail
# svcadm restart svc:/network/sendmail-client:default
# mail -s "Test mail from `hostname`" testuser@testdomain.com
Test mail
.
# mailq
/var/spool/mqueue is empty
Total requests: 0
#
Verify mail is delivered.
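Delivery can be confirmed from the sendmail log. A sketch; the heredoc fabricates one syslog line so the grep is demonstrable here, while on the host the file to check is /var/log/syslog:

```shell
# Sketch: look for a stat=Sent record for the test message.
# MAILLOG is a fabricated sample; on the host, grep /var/log/syslog instead.
MAILLOG=`mktemp`
cat > "$MAILLOG" <<'EOF'
Oct 18 10:02:11 serp-mw-v12 sendmail[1234]: u9IH2Bxx012345: to=<testuser@testdomain.com>, stat=Sent (ok)
EOF
if grep 'stat=Sent' "$MAILLOG" >/dev/null; then
  echo "delivery confirmed"
else
  echo "not delivered yet; check mailq and /var/log/syslog"
fi
```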
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^