
Updating platform sun4v boot archive


scrub: resilver completed after 0h11m with 0 errors on Fri Feb 18 2011
...
          c0t0d0s0  ONLINE       0     0     0  12.0G resilvered
# installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0
# init 0
...
disk1   /pci@.../pci@.../pci@.../.../disk@...
disk0   /pci@.../pci@.../pci@.../.../disk@...
disk    /pci@.../pci@.../pci@.../.../disk@...
...
Boot device: /pci@.../pci@.../pci@.../.../disk@...  File and args:
SunOS Release 5.10 Version Generic_142900-11 64-bit
...


If you do not use either init or shutdown, the system will not boot using the target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@.../pci@.../pci@.../LSILogic,sas@.../disk@...,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfsBE> successful.

# init 6
...
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             16.1G   258G    98K  /rpool
rpool/ROOT        10.0G   258G    21K  /rpool/ROOT
rpool/ROOT/zfsBE  10.0G   258G  10.0G  /
rpool/dump        2.00G   258G  2.00G  -
rpool/swap        4.01G   262G    16K  -
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t1d0s0  ONLINE       0     0     0

errors: No known data errors
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0                        yes      no     no        yes    -
be3                        yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
# ludelete be0
...
svc.startd: 104 system services are now being stopped.

ok boot disk0
Boot device: /pci@.../pci@.../pci@.../.../disk@...  File and args:
SunOS Release 5.10 Version Generic_142900-11 64-bit
...
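As a side note on the lustatus output above: the "Can Delete" column tells you which BEs are safe to hand to ludelete. Here is a minimal sketch of scripting that check; the lustatus output is mocked below from the transcript, so on a real Solaris 10 box you would pipe the live `lustatus` output instead of the function:

```shell
#!/bin/sh
# Sketch: find boot environments that lustatus marks deletable ("Can Delete" = yes)
# and that are not active now or on reboot. Mocked output for illustration only.
lustatus_output() {
cat <<'EOF'
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
be0                        yes      no     no        yes    -
be3                        yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
EOF
}

# Skip the three header lines; fields: name, complete, active-now, active-on-reboot, can-delete
deletable=$(lustatus_output | awk 'NR > 3 && $3 == "no" && $4 == "no" && $5 == "yes" {print $1}')
echo "Deletable BEs: $deletable"
for be in $deletable; do
  echo "would run: ludelete $be"   # drop the echo to actually delete
done
```

The awk filter is deliberately strict: a BE that is active now or active on reboot never matches, even if the output were mangled.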
EMC to NetApp Solaris FCP Migration – Recovery When EMC Breaks the Box

So, I have been working on an EMC to NetApp migration project, and the customer's Oracle DB environment is entirely Solaris 10.

One thing I really like about Solaris 11 is that provisioning zones is extremely similar to provisioning physicals, and I'm able to use the same manifests and scripts for both.

You can download my service manifest here; it is pretty much identical to the one provided by Oracle, so I'm not going to dive into that.

Here is my first boot script; you can download it here.

#!/bin/sh
. /lib/svc/share/smf_include.sh

echo "Save original boot environment first."
# Obtain the active BE name from beadm: the active BE on reboot has an R in
# the third column of 'beadm list' output.
bename=`beadm list -Hd | nawk -F ';' '$3 ~ /R/ {split($1, a, "/"); print a[2]}'`
beadm create ${bename}.orig
mkdir /tmp/mnt
# determine the closest server
FIRSTSR=10.x.x.x #san
SECSR=10.x.x.x #den
FST=`ping -vs $FIRSTSR 20 5 | grep round | cut -d'/' -f6 | cut -d. -f1`
SST=`ping -vs $SECSR 20 5 | grep round | cut -d'/' -f6 | cut -d. -f1`
...

On the first boot, the service log shows the script completing:

[ May 20 ... Method "start" exited with status 0. ]
updating //platform/sun4v/boot_archive
Done

And now it is complete. You can modify the files directly and then run:

pkgsend publish -d ./proto -s /export/firstbootrepo first-boot-script.p5m

again to have the repository reflect the updates.
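One practical detail when republishing: if you also bump the package version in the p5m manifest, clients see the change as a normal package update. Here is a minimal sketch of that edit; the manifest contents, path, and version numbers below are made up for the example, not taken from the post:

```shell
#!/bin/sh
# Sketch: bump the package version in a p5m manifest before republishing.
# Manifest path and FMRI are illustrative only.
manifest=/tmp/first-boot-script.p5m
cat > "$manifest" <<'EOF'
set name=pkg.fmri value=first-boot-script@1.0,5.11-0
set name=pkg.summary value="first boot script"
file lib/svc/method/first-boot owner=root group=bin mode=0755 path=lib/svc/method/first-boot
EOF

# Raise 1.0 to 1.1 in the pkg.fmri line (a simple fixed substitution for the sketch)
sed 's/@1\.0,/@1.1,/' "$manifest" > "$manifest.new" && mv "$manifest.new" "$manifest"

grep pkg.fmri "$manifest"
# then republish, e.g.:
#   pkgsend publish -d ./proto -s /export/firstbootrepo first-boot-script.p5m
```

A real script would compute the next version rather than hard-coding it, but the point is only that the FMRI line is the one to touch before running pkgsend again.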

The manifest is described here, and the logic behind the first boot script here. As with all things Unix, there are multiple ways and philosophies for how to manage build scripts.

For my Solaris 11 baseline, I chose to write a first boot script that determines my closest package repository and then mounts and copies over my directory of admin scripts. This script actually does all of the work on the first boot: AI (the Automated Installer) installs my server packages and then the first-boot package; on the first boot the script runs, copies my tree of admin scripts, and then runs a script which does the heavy lifting.
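The "closest repository" decision in the script boils down to comparing the average round-trip times parsed out of ping. Here is a sketch of just that comparison, with mocked RTT values and made-up repository URLs; on Solaris the RTTs would come from the `ping -vs ... | grep round | cut ...` pipeline shown in the script:

```shell
#!/bin/sh
# Sketch: pick the repository with the lower average RTT.
# FST/SST are mocked here; normally they are the ms averages parsed from ping.
FST=42   # mocked avg RTT to the first (san) repo server, in ms
SST=17   # mocked avg RTT to the second (den) repo server, in ms

if [ "$FST" -le "$SST" ]; then
    repo=http://san-repo.example.com   # hypothetical URL
else
    repo=http://den-repo.example.com   # hypothetical URL
fi
echo "closest repository: $repo"
# on a real Solaris 11 system you would then point the publisher at it:
#   pkg set-publisher -G '*' -g "$repo" solaris
```

Comparing the truncated integer millisecond averages is crude but good enough here: the two candidate servers are in different data centers, so the RTT gap is large.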

Dealing with migrations between vendors can be difficult. (They want to change; sadly, they cannot at this time.)