InfoScale™ 9.0 Solutions Guide - Linux
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Solutions for Veritas InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Concurrent I/O
- Improving database performance with atomic write I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration from LVM to VxVM
- Offline conversion of native file system to VxFS
- Online migration of a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v4
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Displaying information
- File system considerations
- Specifying the migration target
- Using the fscdsadm command
- Maintaining the list of target operating systems
- Migrating a file system on an ongoing basis
- Converting the byte order of a file system
- Migrating from Oracle ASM to Veritas File System
- Section VIII. Veritas InfoScale 4K sector device support solution
- Section IX. REST API support
- Support for configurations and operations using REST APIs
- Section X. Reference
Limitations of Flexible Storage Sharing
Note the following limitations for using Flexible Storage Sharing (FSS):
- FSS is supported only on clusters of up to 64 nodes.
- Disk initialization operations should be performed only on nodes that have local connectivity to the disk (see the sketch at the end of this section).
- FSS does not support the use of boot disks, opaque disks, or non-VxVM disks for network sharing.
- Hot-relocation is disabled on FSS disk groups.
- VxVM cloned disk operations are not supported with FSS disk groups.
- FSS does not support non-SCSI3 disks that are connected to multiple hosts.
- FSS supports only instant data change objects (DCOs), which are created either with the vxsnap operation or by specifying the logtype=dco and dcoversion=20 attributes during volume creation (see the example following the mirror workaround below).
- By default, vxassist does not support creating a mirror between an SSD and an HDD because the underlying media types differ. To work around this issue, create the volume with one media type, for instance the HDD (the default media type), and then add a mirror on the SSD.
For example, create the volume on the default HDD media without initializing it, attach a mirror on the SSD, and then initialize the volume:
# vxassist -g diskgroup make volume size init=none
# vxassist -g diskgroup mirror volume mediatype:ssd
# vxvol -g diskgroup init active volume
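For the DCO limitation above, a minimal sketch of both supported paths; the disk group name (diskgroup), volume name (datavol), and size (10g) are placeholders. Create a volume with a version 20 DCO at creation time by specifying the attributes named in that item:
# vxassist -g diskgroup make datavol 10g logtype=dco dcoversion=20
Alternatively, use the vxsnap operation to prepare an existing volume, which attaches an instant DCO to it:
# vxsnap -g diskgroup prepare datavol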
For information on administering mirrored volumes using vxassist, refer to the Storage Foundation Cluster File System High Availability Administrator's Guide or the Storage Foundation for Oracle RAC Administrator's Guide.
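For the disk initialization guideline in the list above, a minimal sketch, assuming a device named disk_1 (a placeholder) and run on the node that has local connectivity to that device. Initialize the disk locally, and then export it for network sharing in the FSS cluster:
# vxdisksetup -i disk_1
# vxdisk export disk_1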