Veritas InfoScale™ 7.4 Solutions Guide - Linux
Setting up multiple point-in-time copies
To set up the initial configuration for multiple point-in-time copies, provision the storage that the copies will use over time.
In the example procedures, disk1, disk2, ..., diskN are the LUNs configured on tier 1 storage for the application data. A subset of these LUNs, logdisk1, logdisk2, ..., logdiskN, is used to configure the Data Change Object (DCO). Disks sdisk1, sdisk2, ..., sdiskN are disks from tier 2 storage.
Note:
If you have an enclosure or disk array with storage that is backed by write cache, Veritas recommends that you use the same set of LUNs for the DCO as for the data volume.
If no logdisks are specified, by default Veritas Volume Manager (VxVM) tries to allocate the DCO from the same LUNs that are used for the data volumes.
See Figure: Example connectivity for off-host solution using redundant-loop access.
Make sure that your cache is large enough to hold the multiple copies and their accumulated changes. The following guidelines can help you estimate your requirements.
To determine your storage requirements, use the following:
Table: Storage requirements

Sp | Storage requirement for the primary volume.
Sb | Storage requirement for the primary break-off snapshot.
Nc | Number of point-in-time copies to be maintained.
Sc | Average size of the changes that occur in each interval before a snapshot is taken.
St | Total storage requirement.
The total storage requirement for management of multiple point-in-time copies can be roughly calculated as:
Sb = Sp
St = Sb + Nc * Sc
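As a rough numeric check, the two formulas above can be evaluated with simple shell arithmetic. The sizes below are illustrative assumptions, not recommendations from the guide:

```shell
# Illustrative sizes in GB (assumed values)
Sp=1000                 # primary volume size
Nc=7                    # number of point-in-time copies to maintain
Sc=50                   # average change per interval

Sb=$Sp                  # the break-off snapshot is a full-size copy: Sb = Sp
St=$((Sb + Nc * Sc))    # total requirement: St = Sb + Nc * Sc

echo "Total storage required: ${St} GB"
```

With these sample values, the break-off snapshot needs as much storage as the primary volume, and each retained copy adds only its changed data on top.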
To determine the size of the cache volume, use the following:
Table: Cache volume requirements

Nc | Number of point-in-time copies to be maintained.
Sc | Average size of the changes that occur in each interval.
Rc | Region size of the cache object.
St | Total cache storage requirement.
The size of the cache volume to be configured can be calculated as:
St = Nc * Sc * Rc
This formula assumes the worst case: the application I/O granularity can be smaller than the cache-object region size by a factor of at most Rc, so each change may consume up to a full region in the cache.
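Plugging in sample numbers makes the cache sizing concrete. Here Rc is treated as the worst-case amplification factor between the application I/O size and the cache-object region size, which is one reading of the assumption above; all values are illustrative assumptions:

```shell
# Illustrative values (assumed, not from the guide)
Nc=7      # point-in-time copies to maintain
Sc=50     # average change per interval, in GB
Rc=2      # worst-case region-size / I/O-size amplification factor

St=$((Nc * Sc * Rc))   # cache volume size: Nc * Sc * Rc

echo "Cache volume size: ${St} GB"
```

If the application always writes in multiples of the region size, the amplification factor drops to 1 and the cache requirement is simply Nc * Sc.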
To configure the initial setup for multiple point-in-time copies
- If the primary application storage is already configured for snapshots, that is, a DCO is already attached to the primary volume, skip to step 2.
If not, configure the primary volumes and prepare them for snapshots.
For example:
# vxassist -g appdg make appvol 10T <disk1 disk2 ... diskN>
# vxsnap -g appdg prepare appvol
- Configure a snapshot volume to use as the primary, full-image snapshot of the primary volume. The snapshot volume can be allocated from tier 2 storage.
# vxassist -g appdg make snap-appvol 10T <sdisk1 sdisk2 ... sdiskN>
# vxsnap -g appdg prepare snap-appvol \
  <alloc=slogdisk1,slogdisk2,...slogdiskN>
- Establish the relationship between the primary volume and the snapshot volume. Wait for synchronization of the snapshot to complete.
# vxsnap -g appdg make source=appvol/snapvol=snap-appvol/sync=yes
# vxsnap -g appdg syncwait snap-appvol
- Create a volume in the disk group to use for the cache volume. The cache volume is used for space-optimized point-in-time copies created at regular intervals. The cache volume can be allocated from tier 2 storage.
# vxassist -g appdg make cachevol 1G layout=mirror \
  init=active disk16 disk17
- Configure a shared cache object on the cache volume.
# vxmake -g appdg cache snapcache cachevolname=cachevol
- Start the cache object.
# vxcache -g appdg start snapcache
You now have an initial setup in place to create regular point-in-time copies.
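With the cache object started, each periodic point-in-time copy can be taken as a space-optimized instant snapshot that draws its storage from the shared cache. A hedged sketch, assuming a snapshot volume name of sosnap1 (the cache= attribute form is standard vxsnap syntax, but verify it against the vxsnap manual page for your release):

```
# vxsnap -g appdg make source=appvol/newvol=sosnap1/cache=snapcache
```

Because the snapshot is space-optimized, it consumes cache storage only for regions of appvol that change after the snapshot is taken.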