Veritas InfoScale™ 8.0.2 Solutions Guide - AIX
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
  - Overview of database accelerators
  - Improving database performance with Veritas Quick I/O
    - About Quick I/O
  - Improving database performance with Veritas Cached Quick I/O
  - Improving database performance with Veritas Concurrent I/O
- Section V. Using point-in-time copies
  - Understanding point-in-time copy methods
  - Backing up and recovering
    - Preserving multiple point-in-time copies
    - Online database backups
    - Backing up on an off-host cluster file system
    - Database recovery using Storage Checkpoints
  - Backing up and recovering in a NetBackup environment
  - Off-host processing
  - Creating and refreshing test environments
  - Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
  - Optimizing storage tiering with SmartTier
  - Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
  - Understanding data migration
  - Offline migration of native volumes and file systems to VxVM and VxFS
    - Converting LVM volume groups to VxVM disk groups
    - Conversion of JFS and JFS2 file systems to VxFS
    - Conversion steps explained
    - Examples of using vxconvert
    - About test cases
    - Converting LVM, JFS and JFS2 to VxVM and VxFS
  - Online migration of native LVM volumes to VxVM volumes
    - Online migration from LVM volumes in standalone environment to VxVM volumes
    - Online migration from LVM volumes in VCS HA environment to VxVM volumes
  - Online migration of a native file system to the VxFS file system
    - Migrating a source file system to the VxFS file system over NFS v3
    - VxFS features not available during online migration
  - Migrating storage arrays
  - Migrating data between platforms
    - Overview of the Cross-Platform Data Sharing (CDS) feature
      - CDS disk format and disk groups
    - Setting up your system to use Cross-Platform Data Sharing (CDS)
    - Maintaining your system
      - Disk tasks
      - Disk group tasks
      - Displaying information
    - File system considerations
      - Specifying the migration target
      - Using the fscdsadm command
        - Maintaining the list of target operating systems
      - Migrating a file system on an ongoing basis
      - Converting the byte order of a file system
- Section VIII. Veritas InfoScale 4K sector device support solution
Do's and Don'ts for online migration from LVM in VCS HA environment to VxVM
The do's and don'ts for online migration in a standalone system also apply to online migration in a VCS HA environment.
See Do's and Don'ts for online migration from LVM in standalone environment to VxVM.
In addition, follow these do's and don'ts for online migration from LVM volumes in a VCS HA environment:
All volumes of a volume group that are to be migrated must be migrated together; the remaining volumes of that volume group cannot be migrated separately later.
If the start operation fails, the service group is left in a frozen state. Manually unfreeze the service group, then run the abort operation on the same node.
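For example, assuming the migration utility applied a persistent freeze and the service group is named migr_sg (a hypothetical name), the group can be unfrozen as follows; if the freeze is temporary, omit -persistent and the haconf steps:

# haconf -makerw
# hagrp -unfreeze migr_sg -persistent
# haconf -dump -makero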
Do not make any VCS configuration changes while migration operations are in progress.
Do not make any changes to the VCS service group involved in the migration until the migration is either committed or aborted.
For an active migration setup, a service group failover may require disk re-scanning on the failover node. You may therefore need to tune the OnlineTimeout attribute of the temporary resource added below the DiskGroup resource, according to the time the vxdisk scandisks operation requires. The default value of OnlineTimeout is one hour.
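OnlineTimeout is a static resource attribute, so it must be overridden for the specific resource before it can be modified. A minimal sketch, where mig_res is a hypothetical name for the temporary resource and 7200 seconds is an assumed value sized to the observed vxdisk scandisks time:

# haconf -makerw
# hares -override mig_res OnlineTimeout
# hares -modify mig_res OnlineTimeout 7200
# haconf -dump -makero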
After you commit the migration, LVM resources such as LVMVG remain in the service group as non-critical resources. If these resources are no longer required, remove them from the service group.
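For example, a leftover resource named lvmvg_res (a hypothetical name) can be removed as follows, after first unlinking any remaining dependencies with hares -unlink:

# haconf -makerw
# hares -delete lvmvg_res
# haconf -dump -makero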
During an abort or commit operation on a cluster node, the LVM volume devices are cleaned up from the VxVM configuration on that node. On all other cluster nodes that the migration setup previously traversed, stray entries for the LVM volume devices remain in the VxVM configuration. Clean up these stray entries to prevent interference with any further online migration on those nodes. Check for and clean up the stray entries for an LVM volume <lvol> as follows:
# vxdisk list <lvol>_vxlv
If the device exists, remove it:
# vxdisk rm <lvol>_vxlv
Then check whether a foreign device entry exists:
# vxddladm listforeign | grep <lvol>_vxlv
If the entry exists, remove it:
# vxddladm rmforeign charpath=/dev/r<lvol>_vxlv \
blockpath=/dev/<lvol>_vxlv
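When several LVM volumes were migrated, the same check-and-remove sequence can be scripted. The following is a minimal sketch, assuming lvol1 and lvol2 are hypothetical placeholders for the migrated volume names and that vxdisk list returns a non-zero exit status for an unknown device:

for lvol in lvol1 lvol2
do
    # Remove the stray VxVM device entry, if present
    if vxdisk list ${lvol}_vxlv > /dev/null 2>&1
    then
        vxdisk rm ${lvol}_vxlv
    fi
    # Remove the stray foreign device entry, if present
    if vxddladm listforeign | grep -q ${lvol}_vxlv
    then
        vxddladm rmforeign charpath=/dev/r${lvol}_vxlv \
            blockpath=/dev/${lvol}_vxlv
    fi
done

Run the script on each cluster node that the migration setup traversed, other than the node where the commit or abort was performed.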