Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Last Published:
2018-08-22
Product(s):
InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About VCS support for zones
- About the Mount agent
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Known issues with supporting SF Oracle RAC in a zone environment
- Software limitations of Storage Foundation support of non-global zones
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM Server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Installing Storage Foundation in an Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of the logical domain
- Cluster Server setup to fail over an application running inside a logical domain on a failure of the application
- Oracle VM Server for SPARC guest domain migration in a VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Section IV. Reference
Performing live migration between LDOMs in the SmartIO environment
Live migration is supported with SmartIO caching when the cache resides on an array-based SSD. With direct attached (PCIe) SSD devices, live migration is not supported while SmartIO caching is enabled. To perform a live migration in that configuration, use the following manual steps.
To perform live migration in the SmartIO environment
1. To prepare the LDOM for the live migration, perform the following steps:
Offline the cache area that is created inside the LDOM.
Ldom1:/root# sfcache offline cachearea_name
Delete the cache area.
Ldom1:/root# sfcache delete cachearea_name
2. Remove the SSD device from the VxVM configuration so that the device can be unexported from the LDOM.
Ldom1:/root# vxdisk rm ssd_device_name
3. Verify that the SSD device is removed from VxVM. The SSD device is not visible in the output of the following command:
Ldom1:/root# vxdisk list
4. Unexport the device from the LDOM.
Cdom1:/root> ldm remove-vdisk vdisk_name ldom1
5. After unexporting the local SSD device, perform the live migration of the LDOM. During the live migration, make sure that the application and the mount points that use the SFHA objects are intact and running properly.
6. After the live migration completes, export the PCIe SSD devices that are available on the other control domain.
Cdom1:/root> ldm add-vdsdev vxvm_device_path vds_device_name@vds
Cdom1:/root> ldm add-vdisk vdisk_name vds_device_name@vds ldom1
7. After exporting the local PCIe SSD devices, include the devices in the VxVM configuration that is inside the LDOM.
Ldom1:/root# vxdisk scandisks
8. Verify that the SSD device is visible in the output of the following command:
Ldom1:/root# vxdisk list
9. After the local PCIe device is available to the VxVM configuration, you can create the required SmartIO cache area.
To migrate the LDOM back from the target control domain to the source control domain, repeat step 1 through step 9.
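The in-guest portions of the procedure (the preparation before migration and the rediscovery afterward) can be collected into a small dry-run helper for review. The sketch below is an illustration, not part of the product: it only prints the command sequence rather than executing it, and `cachearea_name` and `ssd_device_name` are placeholders for the names reported by `sfcache list` and `vxdisk list` on your system.

```shell
# Dry-run helper: prints, in order, the commands to run inside the LDOM
# before (prep) and after (rescan) the live migration. Nothing is
# executed, so the sequence is safe to review on any host.
migrate_prep() {
    cachearea=$1   # cache area name, as shown by 'sfcache list'
    ssd=$2         # SSD device name, as shown by 'vxdisk list'
    echo "sfcache offline $cachearea"   # stop caching (step 1)
    echo "sfcache delete $cachearea"    # remove the cache area (step 1)
    echo "vxdisk rm $ssd"               # drop the SSD from VxVM (step 2)
}

migrate_rescan() {
    echo "vxdisk scandisks"             # rediscover devices (step 7)
    echo "vxdisk list"                  # confirm the SSD is visible (step 8)
}

migrate_prep cachearea_name ssd_device_name
migrate_rescan
```

To act on a live LDOM, pipe the output to `sh` only after confirming the printed names match your configuration.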