Veritas InfoScale™ 8.0 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones
- InfoScale Enterprise Solutions support for Solaris Native Zones
- About VCS support for zones
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About InfoScale SFRAC component support for Oracle RAC in a zone environment
- Known issues with supporting an InfoScale SFRAC component in a zone environment
- Software limitations of InfoScale support of non-global zones
- Section III. Oracle VM Server for SPARC
- InfoScale Enterprise Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying solutions in Oracle VM Server for SPARC
- Features
- Split InfoScale stack model
- Guest-based InfoScale stack model
- Layered InfoScale stack model
- System requirements
- Installing InfoScale in an Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of the logical domain
- Cluster Server setup to fail over an application running inside a logical domain on a failure of the application
- Oracle VM Server for SPARC guest domain migration in a VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SFRAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Using SmartIO in the virtualized environment
- Section IV. Reference
Configuring a direct mount of a VxFS file system in a non-global zone with VCS
The following procedure describes the typical steps to configure a direct mount inside a non-global zone.
To configure a direct mount inside a non-global zone
Create a VxVM disk group and volume:
Create a VxVM disk group from a device:
global# vxdg init data_dg c0t0d1
Create a volume from a disk group:
global# vxassist -g data_dg make data_vol 5G
For more information, see the Storage Foundation Administrator's Guide.
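At this point you can optionally verify that the disk group and volume were created. This verification step is not part of the original procedure; it lists the VxVM records for the disk group:
global# vxprint -g data_dg
The volume data_vol should appear in the output.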
Create a zone:
Create a root directory for the zone local-zone and change its permissions to 700:
global# mkdir -p /zones/local-zone
global# chmod 700 /zones/local-zone
On Solaris 11, configure a zone local-zone:
global# zonecfg -z local-zone
local-zone: No such zone configured
Use `create' to begin configuring a new zone.
zonecfg:local-zone> create
zonecfg:local-zone> set zonepath=/zones/local-zone
zonecfg:local-zone> set ip-type=shared
zonecfg:local-zone> add net
zonecfg:local-zone:net> set physical=eri0
zonecfg:local-zone:net> set address=192.168.5.59
zonecfg:local-zone:net> end
zonecfg:local-zone> verify
zonecfg:local-zone> commit
zonecfg:local-zone> exit
The zone is now in the configured state.
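To confirm, you can optionally list the zones; the new zone should show a status of configured:
global# zoneadm list -cv
  ID NAME         STATUS       PATH                BRAND    IP
   - local-zone   configured   /zones/local-zone   native   shared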
Install the zone:
global# zoneadm -z local-zone install
Log in to the zone console from terminal 1 to set up the zone:
global# zlogin -C local-zone
Boot the zone from another terminal:
global# zoneadm -z local-zone boot
Follow the steps on the zone console on terminal 1 to set up the zone.
See the Oracle documentation for more information about creating a zone.
Add VxVM volumes to the zone configuration:
Check the zone status and halt the zone if it is running:
global# zoneadm list -cv
  ID NAME         STATUS     PATH                BRAND    IP
   0 global       running    /                   native   shared
   2 local-zone   running    /zones/local-zone   native   shared
global# zoneadm -z local-zone halt
Add the VxVM devices to the zone's configuration:
global# zonecfg -z local-zone
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/vxportal
zonecfg:local-zone:device> end
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/fdd
zonecfg:local-zone:device> end
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/vx/rdsk/data_dg/data_vol
zonecfg:local-zone:device> end
zonecfg:local-zone> add device
zonecfg:local-zone:device> set match=/dev/vx/dsk/data_dg/data_vol
zonecfg:local-zone:device> end
zonecfg:local-zone> add fs
zonecfg:local-zone:fs> set dir=/etc/vx/licenses/lic
zonecfg:local-zone:fs> set special=/etc/vx/licenses/lic
zonecfg:local-zone:fs> set type=lofs
zonecfg:local-zone:fs> end
zonecfg:local-zone> verify
zonecfg:local-zone> commit
zonecfg:local-zone> exit
On Solaris 11, you must set fs-allowed to vxfs and odm in the zone's configuration:
global# zonecfg -z local-zone
zonecfg:local-zone> set fs-allowed=vxfs,odm
zonecfg:local-zone> commit
zonecfg:local-zone> exit
Boot the zone:
global# zoneadm -z local-zone boot
Create a VxFS file system on the volume inside a non-global zone:
Log in to the local-zone:
global# zlogin local-zone
Create a VxFS file system on the block device:
bash-3.00# mkfs -F vxfs /dev/vx/dsk/data_dg/data_vol
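You can optionally confirm the file system type by running the fstyp command against the raw device:
bash-3.00# fstyp /dev/vx/rdsk/data_dg/data_vol
vxfs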
Create a mount point inside the zone:
Log in to the local-zone:
global# zlogin local-zone
Create a mount point inside the non-global zone:
bash-3.00# mkdir -p /mydata
Mount the VxFS file system on the mount point:
bash-3.00# mount -F vxfs /dev/vx/dsk/data_dg/data_vol /mydata
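To verify the mount, you can optionally check the disk usage from inside the zone:
bash-3.00# df -k /mydata
The output should list /dev/vx/dsk/data_dg/data_vol as the backing device for /mydata.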
- See Configuring a zone resource in a failover service group with the hazonesetup utility.
Configure the zone service group:
On the first node, create the service group and set up password-less communication between the global zone and the non-global zone:
global# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -s sysA,sysB
For non-secure clusters, if the security key is missing from the cluster configuration, an error occurs indicating that the secinfo value is missing. To generate the required security information for non-secure clusters, run the vcsencrypt -gensecinfo command.
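For example, run the command named in the error message once from any node in the cluster:
global# vcsencrypt -gensecinfo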
Switch the service group from the first node to the second node, and run the hazonesetup command to set up password-less communication from that node.
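For example, assuming sysB is the second node in the SystemList, the sequence would be:
global# hagrp -switch zone_grp -to sysB
Then, on sysB, rerun the same hazonesetup command:
global# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -s sysA,sysB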
Repeat step 6 for all the nodes in the cluster where the zone can go online.
Add DiskGroup, Volume, and Mount resources to the service group:
Add a disk group resource to the service group:
global# hares -add dg_res DiskGroup zone_grp
global# hares -modify dg_res DiskGroup data_dg
global# hares -modify dg_res Enabled 1
Add a volume resource to the service group:
global# hares -add vol_res Volume zone_grp
global# hares -modify vol_res Volume data_vol
global# hares -modify vol_res DiskGroup data_dg
global# hares -modify vol_res Enabled 1
Add a Mount resource to the service group:
global# hares -add mnt_res Mount zone_grp
global# hares -modify mnt_res BlockDevice \
/dev/vx/dsk/data_dg/data_vol
global# hares -modify mnt_res MountPoint /mydata
global# hares -modify mnt_res FSType vxfs
global# hares -modify mnt_res FsckOpt %-y
global# hares -modify mnt_res Enabled 1
Create a resource dependency between the resources in the service group:
global# hares -link zone_res vol_res
global# hares -link vol_res dg_res
global# hares -link mnt_res zone_res
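You can optionally verify the dependency tree:
global# hares -dep
The output should list mnt_res as a parent of zone_res, zone_res as a parent of vol_res, and vol_res as a parent of dg_res.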
- For information on overriding resource type static attributes, see the Cluster Server Administrator's Guide.
Set the ContainerOpts attribute for the Mount resource for VxFS direct mount:
Override the ContainerOpts attribute at the resource level for mnt_res:
global# hares -override mnt_res ContainerOpts
Set the value of the RunInContainer key to 1:
global# hares -modify mnt_res ContainerOpts RunInContainer 1 \
PassCInfo 0
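To confirm the override, you can optionally display the resource-level value of the attribute:
global# hares -value mnt_res ContainerOpts
The command should report RunInContainer 1 and PassCInfo 0.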
- Here is a sample configuration for the VxFS direct mount service group in the main.cf file:

group zone_grp (
    SystemList = { sysA = 0, sysB = 1 }
    ContainerInfo = { Name = local-zone, Type = Zone, Enabled = 1 }
    Administrators = { z_zoneres_sysA, z_zoneres_sysB }
    )

    Mount mnt_res (
        BlockDevice = "/dev/vx/dsk/data_dg/data_vol"
        MountPoint = "/mydata"
        FSType = vxfs
        FsckOpt = "-y"
        ContainerOpts = { RunInContainer = 1, PassCInfo = 0 }
        )

    DiskGroup dg_res (
        DiskGroup = data_dg
        )

    Volume vol_res (
        Volume = data_vol
        DiskGroup = data_dg
        )

    Zone zone_res (
        )

    zone_res requires vol_res
    vol_res requires dg_res
    mnt_res requires zone_res
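With this configuration in place, a typical test (assuming sysA is available) is to bring the service group online and check its state:
global# hagrp -online zone_grp -sys sysA
global# hagrp -state zone_grp
When the group comes online, VCS imports the disk group, starts the volume, boots the zone, and mounts the VxFS file system at /mydata inside the zone.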