InfoScale™ 9.0 Virtualization Guide - Solaris
- Section I. Overview of InfoScale solutions in Solaris virtualization environments
- Section II. Zones
- InfoScale Enterprise Solutions support for Solaris Native Zones
- About VCS support for zones
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About InfoScale SFRAC component support for Oracle RAC in a zone environment
- Known issues with supporting an InfoScale SFRAC component in a zone environment
- Software limitations of InfoScale support of non-global zones
- Section III. Oracle VM Server for SPARC
- InfoScale Enterprise Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Arctera InfoScale Enterprise solutions in Oracle VM Server for SPARC
- Features
- Split InfoScale stack model
- Guest-based InfoScale stack model
- Layered InfoScale stack model
- System requirements
- Installing InfoScale in an Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of the logical domain
- Cluster Server setup to fail over an application running inside a logical domain on a failure of the application
- Oracle VM Server for SPARC guest domain migration in a VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SFRAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Using SmartIO in the virtualized environment
- Section IV. Reference
Using a VxVM snapshot as a backup copy of the boot image during an upgrade
You can preserve a backup copy of the guest boot image by using the Veritas Volume Manager (VxVM) snapshot feature.
Veritas recommends the following configuration:
- VxVM 9.0 in the control domain.
- A mirrored VxVM volume for each guest LDom boot image.
For ease of management, you may want to group all the LDom boot image volumes into a separate disk group.
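For example, a minimal sketch of such a configuration, assuming hypothetical device names c1t0d0 and c1t1d0 that are already initialized for VxVM use, a disk group named ldom_bootdg, and a 20 GB boot volume for a guest named guest1:
# vxdg init ldom_bootdg bootdisk1=c1t0d0 bootdisk2=c1t1d0
# vxassist -g ldom_bootdg make guest1_boot 20g layout=mirror nmirror=2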
To upgrade the guest boot image
- Shut down the guest after synchronizing the operating system file system:
# sync
# init 0
- Stop and unbind the guest:
# ldm stop guest
# ldm unbind guest
- (Optional) Taking a snapshot of a VxVM volume requires a data change object (DCO), which you allocate by running the vxsnap prepare command. Veritas recommends that you mirror the DCO for redundancy. If you choose to do so, add two disks of reasonable size (for example, 2 GB) to the disk group that contains the boot volumes:
# vxdg -g disk_group adddisk disk3 disk4
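A minimal sketch of the DCO allocation, assuming the boot volume is named boot_volume and that disk3 and disk4 hold the two DCO mirrors:
# vxsnap -g disk_group prepare boot_volume ndcomirs=2 alloc=disk3,disk4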
Perform the following steps:
- Ensure that the mirror plexes of the volume are completely synchronized:
# vxtask list
The output of the vxtask list command shows whether any synchronization operation is currently in progress. If such a task is in progress, wait until it completes.
# vxsnap -g disk_group print
The output should show that the dirty percentage is 0% and the valid percentage is 100% for the volume. If not, wait until the plexes are synchronized.
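You can also inspect the plex states directly; a sketch, assuming the boot volume name boot_volume used in this procedure:
# vxprint -g disk_group -ht boot_volume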
- Take the snapshot of the boot volume and specify the name of the plex that you want to use for backup:
# vxsnap -g disk_group make \
source=boot_volume/new=backup_vol/plex=backup_plex
where backup_plex is the plex that you want to use for backup. This operation creates a snapshot volume using backup_plex. You can use this snapshot volume to revert the boot image to the point in time at which the snapshot was taken.
- Ensure that the new snapshot volume is completely synchronized:
# vxtask list
The output of the vxtask list command shows whether any synchronization operation is currently in progress. If such a task is in progress, wait until it completes.
# vxsnap -g disk_group print
The output should show that the dirty percentage is 0% and the valid percentage is 100% for both the original and the snapshot volumes. If not, wait until they are synchronized.
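Instead of polling, you can block until the snapshot synchronization completes; a minimal sketch, assuming the snapshot volume backup_vol created in the previous step:
# vxsnap -g disk_group syncwait backup_vol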
- Bind and start the guest:
# ldm bind guest
# ldm start guest
The guest now boots from the primary plex.
- Perform the intended upgrade of the guest.
- After the upgrade succeeds, reattach the snapshot volume to the original boot volume. This operation reattaches the backup plex to the boot volume as a mirror, making the volume redundant again with two mirrored plexes.
# vxsnap -g disk_group reattach backup_vol source=boot_volume
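If the upgrade fails, you can instead revert the boot volume to the snapshot before restarting the guest; a minimal sketch, assuming the volume names used above:
# vxsnap -g disk_group restore boot_volume source=backup_vol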