Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Last Published:
2018-08-22
Product(s):
InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
Migrating Oracle VM guest when VCS cluster is installed in the guest domains to manage applications for Oracle VM Server for SPARC version 2.0
The domain migration is a warm migration.
Note:
You do not have to start and stop LLT and GAB. In a warm migration, LLT and GAB restart themselves gracefully.
To perform a domain migration for an LDom when VCS is installed in guest domains
- Stop the VCS engine. Run the hastop -local -force command on the system that runs the logical domain you plan to migrate. This step ensures that GAB does not have to kill the Cluster Server (VCS) engine process when the migration completes; GAB requires all clients to reconfigure and restart when the configuration is not in sync with the other members in the cluster.
- If CVM is configured inside the logical domain, perform this step. Set the LLT peerinact parameter to a sufficiently high value on all nodes in the cluster. A high value ensures that while the logical domain is in migration, the other cluster members do not eject the system from the cluster. If the system is ejected and the CVM stack is unconfigured, the applications can stop.
See the Cluster Server Administrator's Guide for LLT tunable parameter configuration instructions.
- If fencing is configured in single-instance mode inside the logical domain, perform this step. Unconfigure and unload the vxfen module in the logical domain. This step ensures that GAB does not panic the node when the logical domain migration completes.
- Migrate the logical domain from the control domain using the ldm interface. Wait for migration to complete.
ldm migrate [-f] [-n] [-p password_file] source_ldom \
[user@]target_host[:target_ldom]
For example:
Sys1# ldm migrate ldom1 Sys2
- Perform this step if you performed step 3. Load and configure the vxfen module in the logical domain. See the Cluster Server Administrator's Guide for information about I/O fencing and its administration.
- Perform this step if you performed step 2. Reset the value of the LLT peerinact parameter to its original value on all nodes in the cluster.
See the Cluster Server Administrator's Guide for LLT tunable parameter configuration instructions.
- Use the hastart command to start the VCS engine inside the logical domain.
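
The steps above can be collected into a sketch like the following. The peerinact values and the vxfen unload/reload commands are illustrative assumptions; the exact values and module-handling commands for your release are in the Cluster Server Administrator's Guide.

```shell
# Inside the guest domain to be migrated: stop the VCS engine so that
# GAB does not have to kill it after the migration completes.
hastop -local -force

# If CVM is configured: raise LLT peerinact on ALL cluster nodes so the
# migrating node is not ejected mid-migration. The value is in
# hundredths of a second; 9000 (90 s) is an illustrative choice.
lltconfig -T peerinact:9000

# If fencing runs in single-instance mode: unconfigure and unload the
# vxfen module so GAB does not panic the node after migration.
# (modunload needs the module id, taken here from modinfo output.)
vxfenconfig -U
modunload -i "$(modinfo | awk '/vxfen/ {print $1}')"

# From the control domain (Sys1): migrate the guest to the example
# target host Sys2 and wait for the migration to complete.
ldm migrate ldom1 Sys2

# After migration, reverse the changes inside the guest domain:
# reload and reconfigure fencing (for example through its startup
# service), restore peerinact, and restart the VCS engine.
vxfenconfig -c
lltconfig -T peerinact:1600      # illustrative original value
hastart
```

Run the LLT steps on every cluster node, not only the migrating one; otherwise the remaining members may still declare the migrating node dead while it is suspended.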
Figure: The logical domain migration when VCS is clustered between guest domains illustrates a logical domain migration when VCS is clustered between guest domains.