Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Last Published:
2018-08-22
Product(s):
InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
Adding a direct mount to a zone's configuration
A non-global zone can also be configured, using zonecfg, to direct mount a VxFS file system automatically when the zone boots. The fsck command is run before the file system is mounted; if the fsck command fails, the zone fails to boot.
To add a direct mount to a zone's configuration
- Check the status and halt the zone:
global# zoneadm list -cv
  ID NAME    STATUS   PATH           BRAND    IP
   0 global  running  /              solaris  shared
   1 myzone  running  /zone/myzone   solaris  shared
global# zoneadm -z myzone halt
- Add devices to the zone's configuration:
global# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vxportal
zonecfg:myzone:device> end
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/fdd
zonecfg:myzone:device> end
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/dirmnt
zonecfg:myzone:fs> set special=/dev/vx/dsk/dg_name/vol_name
zonecfg:myzone:fs> set raw=/dev/vx/rdsk/dg_name/vol_name
zonecfg:myzone:fs> set type=vxfs
zonecfg:myzone:fs> end
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic
zonecfg:myzone:fs> set special=/etc/vx/licenses/lic
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
- On Solaris 11, you must add fs-allowed=vxfs,odm to the zone's configuration:
global# zonecfg -z myzone
zonecfg:myzone> set fs-allowed=vxfs,odm
zonecfg:myzone> commit
zonecfg:myzone> exit
If you also want to use ufs, nfs, and zfs inside the zone, set fs-allowed=vxfs,odm,nfs,ufs,zfs.
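To confirm that the property was applied, you can query it with zonecfg's info subcommand (the output shown is illustrative for this example zone):

```shell
global# zonecfg -z myzone info fs-allowed
fs-allowed: vxfs,odm
```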
- Boot the zone:
global# zoneadm -z myzone boot
- Ensure that the file system is mounted:
myzone# df | grep dirmnt
/dirmnt (/dirmnt): 142911566 blocks 17863944 files
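If the same direct-mount configuration must be applied to several zones, the interactive zonecfg session above can instead be driven non-interactively from a command file via zonecfg's -f option. The sketch below assumes the same example names used in this procedure (myzone, /dirmnt, and the dg_name/vol_name placeholders for your disk group and volume); the file name is hypothetical:

```shell
# add-vxfs-mount.cfg (hypothetical name)
# Apply with: global# zonecfg -z myzone -f add-vxfs-mount.cfg
add device
set match=/dev/vxportal
end
add device
set match=/dev/fdd
end
add fs
set dir=/dirmnt
set special=/dev/vx/dsk/dg_name/vol_name
set raw=/dev/vx/rdsk/dg_name/vol_name
set type=vxfs
end
add fs
set dir=/etc/vx/licenses/lic
set special=/etc/vx/licenses/lic
set type=lofs
end
verify
commit
```

As in the interactive session, the zone must be halted before applying the file and booted afterward with zoneadm.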