InfoScale™ 9.0 Solutions Guide - Linux
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Solutions for Veritas InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Concurrent I/O
- Improving database performance with atomic write I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration from LVM to VxVM
- Offline conversion of native file system to VxFS
- Online migration of a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v4
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Displaying information
- File system considerations
- Specifying the migration target
- Using the fscdsadm command
- Maintaining the list of target operating systems
- Migrating a file system on an ongoing basis
- Converting the byte order of a file system
- Migrating from Oracle ASM to Veritas File System
- Section VIII. Veritas InfoScale 4K sector device support solution
- Section IX. REST API support
- Support for configurations and operations using REST APIs
- Section X. Reference
Preparing for the replica database
To prepare a snapshot for a replica database on the primary host
1. If you have not already done so, prepare the host to use the snapshot volume that contains the copy of the database tables. Set up any new database logs and configuration files that are required to initialize the database. On the master node, verify that the volume has an instant snap data change object (DCO) and DCO volume, and that FastResync is enabled on the volume:
# vxprint -g database_dg -F%instant database_vol
# vxprint -g database_dg -F%fastresync database_vol
If both commands return the value ON, proceed to step 3. Otherwise, continue with step 2.
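For example, in a Bourne-type shell you can capture both values and test them together before deciding which step to run next (a minimal sketch only; the output is folded to lowercase because the exact case can vary by release):
# INSTANT=`vxprint -g database_dg -F%instant database_vol | tr '[:upper:]' '[:lower:]'`
# FASTRESYNC=`vxprint -g database_dg -F%fastresync database_vol | tr '[:upper:]' '[:lower:]'`
# if [ "$INSTANT" = "on" ] && [ "$FASTRESYNC" = "on" ]; then echo "already prepared"; \
      else echo "run vxsnap prepare first"; fi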
2. Use the following command to prepare a volume for instant snapshots:
# vxsnap -g database_dg prepare database_vol [regionsize=size] \
      [ndcomirs=number] [alloc=storage_attributes]
3. Use the following command to make a full-sized snapshot, snapvol, of the tablespace volume by breaking off plexes from the original volume:
# vxsnap -g database_dg make \
      source=volume/newvol=snapvol/nmirror=N
The nmirror attribute specifies the number of mirrors, N, in the snapshot volume.
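For instance, to break off two plexes of a hypothetical volume named database_vol into a snapshot volume named snap_dbvol (both names are illustrative only):
# vxsnap -g database_dg make \
      source=database_vol/newvol=snap_dbvol/nmirror=2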
If the volume does not have any available plexes, or its layout does not support plex break-off, prepare an empty volume for the snapshot.
4. Use the vxprint command on the original volume to find the required size for the snapshot volume:
# LEN=`vxprint [-g diskgroup] -F%len volume`
Note:
The commands shown in this and subsequent steps assume that you are using a Bourne-type shell such as sh, ksh, or bash. You may need to modify them for other shells such as csh or tcsh. These steps are valid only for an instant snap DCO.
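For instance, the equivalent assignment in csh or tcsh uses set rather than a plain variable assignment (shown only to illustrate the change; the vxprint arguments are unchanged):
# set LEN=`vxprint [-g diskgroup] -F%len volume`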
5. Use the vxprint command on the original volume to discover the name of its DCO:
# DCONAME=`vxprint [-g diskgroup] -F%dco_name volume`
6. Use the vxprint command on the DCO to discover its region size (in blocks):
# RSZ=`vxprint [-g diskgroup] -F%regionsz $DCONAME`
7. Use the vxassist command to create a volume, snapvol, of the required size and redundancy. You can use storage attributes to specify which disks should be used for the volume. The init=active attribute makes the volume available immediately.
# vxassist [-g diskgroup] make snapvol $LEN \
      [layout=mirror nmirror=number] init=active \
      [storage_attributes]
8. Prepare the snapshot volume for instant snapshot operations as shown here:
# vxsnap [-g diskgroup] prepare snapvol [ndcomirs=number] \
      regionsz=$RSZ [storage_attributes]
It is recommended that you specify the same number of DCO mirrors (ndcomirs) as the number of mirrors in the volume (nmirror).
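Putting steps 4 through 8 together, a hedged end-to-end sketch for a hypothetical volume database_vol in the disk group database_dg might look like the following (the mirror counts and the disks disk05 and disk06 are illustrative only; the snapshot itself is then taken in the next step):
# LEN=`vxprint -g database_dg -F%len database_vol`
# DCONAME=`vxprint -g database_dg -F%dco_name database_vol`
# RSZ=`vxprint -g database_dg -F%regionsz $DCONAME`
# vxassist -g database_dg make snapvol $LEN \
      layout=mirror nmirror=2 init=active disk05 disk06
# vxsnap -g database_dg prepare snapvol ndcomirs=2 \
      regionsz=$RSZ disk05 disk06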
9. To create the snapshot, use the following command:
# vxsnap -g database_dg make source=volume/snapvol=snapvol
If a database spans more than one volume, specify all the volumes and their snapshot volumes as separate tuples on the same line, for example:
# vxsnap -g database_dg make \
      source=vol1/snapvol=svol1/nmirror=2 \
      source=vol2/snapvol=svol2/nmirror=2 \
      source=vol3/snapvol=svol3/nmirror=2
If you want to save disk space, you can use the following command to create a space-optimized snapshot instead:
# vxsnap -g database_dg make \
      source=volume/newvol=snapvol/cache=cacheobject
The argument cacheobject is the name of a pre-existing cache that you have created in the disk group for use with space-optimized snapshots. To create the cache object, follow step 10 through step 13.
If several space-optimized snapshots are to be created at the same time, these can all specify the same cache object as shown in this example:
# vxsnap -g database_dg make \
      source=vol1/newvol=svol1/cache=dbaseco \
      source=vol2/newvol=svol2/cache=dbaseco \
      source=vol3/newvol=svol3/cache=dbaseco
10. Decide on the following characteristics that you want to allocate to the cache volume that underlies the cache object:
- The size of the cache volume should be sufficient to record changes to the parent volumes during the interval between snapshot refreshes. A suggested value is 10% of the total size of the parent volumes for a refresh interval of 24 hours (see the sizing sketch after this list).
- If redundancy is a desired characteristic of the cache volume, it should be mirrored. This increases the space that is required for the cache volume in proportion to the number of mirrors that it has.
- If the cache volume is mirrored, space is required on at least as many disks as it has mirrors. These disks should not be shared with the disks used for the parent volumes. The disks should also be chosen to avoid impacting I/O performance for critical volumes, or hindering disk group split and join operations.
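As a rough sizing illustration only, following the 10% guideline above for a 24-hour refresh interval and assuming three hypothetical parent volumes vol1, vol2, and vol3, you could sum the parent volume lengths as reported by vxprint and take one tenth of the total as a starting size for the cache volume:
# L1=`vxprint -g database_dg -F%len vol1`
# L2=`vxprint -g database_dg -F%len vol2`
# L3=`vxprint -g database_dg -F%len vol3`
# CACHELEN=`expr \( $L1 + $L2 + $L3 \) / 10`
The resulting value is in the same units that vxprint uses for volume lengths, and can be passed to vxassist when you create the cache volume in the next step.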
11. Having decided on its characteristics, use the vxassist command to create the volume that is to be used for the cache volume. The following example creates a mirrored cache volume, cachevol, with size 1GB in the disk group, mydg, on the disks disk16 and disk17:
# vxassist -g mydg make cachevol 1g layout=mirror \
      init=active disk16 disk17
The attribute init=active is specified to make the cache volume immediately available for use.
12. Use the vxmake cache command to create a cache object on top of the cache volume that you created in the previous step:
# vxmake [-g diskgroup] cache cache_object \
      cachevolname=volume [regionsize=size] [autogrow=on] \
      [highwatermark=hwmk] [autogrowby=agbvalue] \
      [maxautogrow=maxagbvalue]
If you specify the region size, it must be a power of 2, and be greater than or equal to 16KB (16k). If not specified, the region size of the cache is set to 64KB.
Note:
All space-optimized snapshots that share the cache must have a region size that is equal to or an integer multiple of the region size set on the cache. Snapshot creation also fails if the original volume's region size is smaller than the cache's region size.
If the cache is not allowed to grow in size as required, specify autogrow=off. By default, the ability to automatically grow the cache is turned on.
In the following example, the cache object, cobjmydg, is created over the cache volume, cachevol, the region size of the cache is set to 32KB, and the autogrow feature is enabled:
# vxmake -g mydg cache cobjmydg cachevolname=cachevol \
      regionsize=32k autogrow=on
13. Having created the cache object, use the following command to enable it:
# vxcache [-g diskgroup] start cache_object
For example, to start the cache object, cobjmydg:
# vxcache -g mydg start cobjmydg
Note:
This step sets up the snapshot volumes, and starts tracking changes to the original volumes.