Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Last Published: 2018-08-22
Product(s): InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
Creating a zone with root on shared storage
Create a zone whose root points to the shared disk's location on each node in the cluster. The file system for the application data resides on a shared device and is either the loopback type or the direct mount type. For a direct mount file system, run the mount command from the global zone and specify the mount point as the complete path that starts with the zone root. For a loopback file system, add it to the zone's configuration before you boot the zone.
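To illustrate the two layouts described above (the disk group, volume, and mount point names here are examples, not values prescribed by this procedure): a direct mount is run from the global zone using the full path through the zone root, while a loopback file system is declared in the zone's configuration before boot.

```
# Direct mount, run from the global zone; the mount point passes through
# the zone root. datadg, datavol, and /data are example names.
mount -F vxfs /dev/vx/dsk/datadg/datavol /export/home/newzone/root/data

# Loopback (lofs) file system, added to the zone configuration before boot.
# /data and /export/data are example paths.
zonecfg:newzone> add fs
zonecfg:newzone:fs> set dir=/data
zonecfg:newzone:fs> set special=/export/data
zonecfg:newzone:fs> set type=lofs
zonecfg:newzone:fs> end
```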
To create a zone root on shared disks on each node in the cluster
- Create a file system on shared storage for the zone root. The file system that is to contain the zone root may be in the same disk group as the file system that contains the application data.
- Configure the zone with the zonecfg command.
zonecfg -z newzone
zonecfg:newzone> create
- Set the zonepath parameter to specify a location for the zone root.
zonecfg:newzone> set zonepath=/export/home/newzone
- Add a network interface to the zone configuration. This is required for the non-global zone to communicate with the had daemon running in the global zone.
zonecfg:newzone> add net
zonecfg:newzone:net> set physical=bge1
zonecfg:newzone:net> set address=192.168.1.10
zonecfg:newzone:net> end
- Make sure the global zone can be pinged from the non-global zone by using the global zone hostname. You may need to add an entry for the global zone hostname to the /etc/hosts file inside the non-global zone, or enable DNS access from inside the non-global zone.
- If your application data resides on a loopback mount file system, create the loopback file system in the zone.
- Exit the zonecfg configuration.
zonecfg> exit
- Create the zone root directory at the location specified by zonepath.
mkdir -p /export/home/newzone
- Set permissions for the zone root directory.
chmod 700 /export/home/newzone
- Repeat the steps above, from configuring the zone with the zonecfg command through setting permissions for the zone root directory, on each system in the service group's SystemList.
- Mount the file system that contains the shared storage on one of the systems that share the storage to the directory specified in zonepath.
- Run the following command to install the zone on the system where the zone path is mounted.
zoneadm -z newzone install
- If the application data is on a loopback file system, mount the file system containing the application's data on shared storage.
- Boot the zone.
zoneadm -z newzone boot
- If the application data is on a direct mount file system, mount the file system from the global zone with the complete path that starts with the zone root.
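The interactive zonecfg steps in this procedure can also be collected in a command file and applied in one pass; a sketch using the example values from the steps above (newzone, bge1, 192.168.1.10):

```
# newzone.cfg -- zonecfg command file using the example values
# from this procedure
create
set zonepath=/export/home/newzone
add net
set physical=bge1
set address=192.168.1.10
end
commit
```

Apply it with zonecfg -z newzone -f newzone.cfg, then continue with the zone root directory, installation, and boot steps as described above.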