Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Last Published: 2018-08-22
Product(s): InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
Provisioning Veritas Volume Manager volumes as data disks for guest domains
The following procedure uses VxVM volumes as data disks (virtual disks) for guest domains. VxFS can be used as the file system on top of the data disks. The example control domain is named primary and the guest domain is named ldom1. The prompt in each step shows in which domain to run the command.
To provision Veritas Volume Manager volumes as data disks
- Create a VxVM disk group (mydatadg in this example) with some disks allocated to it:
primary# vxdg init mydatadg TagmaStore-USP0_29 TagmaStore-USP0_30
- Create a VxVM volume of the desired layout (in this example, creating a simple volume):
primary# vxassist -g mydatadg make datavol1 500m
- Configure a service exporting the volume datavol1 as a virtual disk:
primary# ldm add-vdiskserverdevice /dev/vx/dsk/mydatadg/datavol1 \
datadisk1@primary-vds0
- Add the exported disk to the guest domain:
primary# ldm add-vdisk vdisk1 datadisk1@primary-vds0 ldom1
- Start the guest domain, and ensure that the new virtual disk is visible:
primary# ldm bind ldom1
primary# ldm start ldom1
- If the new virtual disk device node entries do not show up in the /dev/[r]dsk directories, run the devfsadm command in the guest domain:
ldom1# devfsadm -C
- Label the disk using the format command to create a valid label before trying to access it.
See the format(1M) manual page.
- Create a VxFS file system, where c0d1s2 is the disk:
ldom1# mkfs -F vxfs /dev/rdsk/c0d1s2
- Mount the file system:
ldom1# mount -F vxfs /dev/dsk/c0d1s2 /mnt
- Verify that the file system has been created:
ldom1# df -hl -F vxfs
Filesystem             size   used   avail  capacity  Mounted on
/dev/dsk/c0d1s2        500M   2.2M    467M        1%  /mnt
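The control-domain steps above can be sketched as a single reusable shell function. The disk names, disk group, volume, virtual disk service, and domain names are the examples from this procedure, not fixed values; substitute your own. A DRYRUN switch (an addition for illustration, not part of the documented procedure) prints each command instead of executing it, so the sequence can be reviewed before running it as root in the primary domain.

```shell
# run: execute a command, or just print it when DRYRUN=1.
run() {
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo "$@"              # print the command instead of executing it
    else
        "$@"
    fi
}

# provision_datadisk: create a VxVM volume in the control domain and
# export it to the guest domain as a virtual disk (example names).
provision_datadisk() {
    dg=mydatadg                # VxVM disk group (example name)
    vol=datavol1               # VxVM volume (example name)
    vds=primary-vds0           # virtual disk service (example name)
    ldom=ldom1                 # guest domain (example name)

    run vxdg init "$dg" TagmaStore-USP0_29 TagmaStore-USP0_30
    run vxassist -g "$dg" make "$vol" 500m
    run ldm add-vdiskserverdevice "/dev/vx/dsk/$dg/$vol" "datadisk1@$vds"
    run ldm add-vdisk vdisk1 "datadisk1@$vds" "$ldom"
    run ldm bind "$ldom"
    run ldm start "$ldom"
}
```

Running `DRYRUN=1 provision_datadisk` prints the six commands for review. The guest-domain steps (devfsadm, format, mkfs, mount) still run separately inside ldom1, as shown in the procedure above.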