InfoScale™ 9.0 Solutions Guide - Linux

Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux

Backing up a snapshot of a mounted file system with shared access

While you can run the commands in the following steps from any node, Veritas recommends running them from the master node.

To back up a snapshot of a mounted file system that has shared access

  1. On any node, refresh the contents of the snapshot volumes from the original volume using the following command:
    # vxsnap -g database_dg refresh snapvol source=database_vol \
      [snapvol2 source=database_vol2]... syncing=yes

    The syncing=yes attribute starts a synchronization of the snapshot in the background.

    For example, to refresh the snapshot snapvol:

    # vxsnap -g database_dg refresh snapvol source=database_vol \
      syncing=yes

    You can run this command every time that you want to back up the data. The vxsnap refresh command resynchronizes only the regions that have changed since the last refresh.

  2. On any node of the cluster, use the following command to wait for the contents of the snapshot to be fully synchronous with the contents of the original volume:
    # vxsnap -g database_dg syncwait snapvol

    For example, to wait for synchronization to finish for the snapshot snapvol:

    # vxsnap -g database_dg syncwait snapvol

    Note:

    You cannot move a snapshot volume into a different disk group until synchronization of its contents is complete. You can use the vxsnap print command to check on the progress of synchronization.

  3. On the master node, use the following command to split the snapshot volume into a separate disk group, snapvoldg, from the original disk group, database_dg:
    # vxdg split database_dg snapvoldg snapvol

    For example, to place the snapshot of the volume database_vol into the shared disk group splitdg:

    # vxdg split database_dg splitdg snapvol
  4. On the master node, deport the snapshot volume's disk group using the following command:
    # vxdg deport snapvoldg

    For example, to deport the disk group splitdg:

    # vxdg deport splitdg
  5. On the OHP host where the backup is to be performed, use the following command to import the snapshot volume's disk group:
    # vxdg import snapvoldg

    For example, to import the disk group splitdg:

    # vxdg import splitdg
  6. VxVM automatically recovers the volumes after the disk group import unless automatic recovery is disabled. Check whether the snapshot volume is initially disabled and not recovered following the split.

    If a volume is in the DISABLED state, use the following command on the OHP host to recover and restart the snapshot volume:

    # vxrecover -g snapvoldg -m snapvol

    For example, to start the volume snapvol:

    # vxrecover -g splitdg -m snapvol
  7. On the OHP host, use the following commands to check and locally mount the snapshot volume:
    # fsck -t vxfs /dev/vx/rdsk/snapvoldg/snapvol

    # mount -t vxfs /dev/vx/dsk/snapvoldg/snapvol mount_point

    For example, to check and mount the volume snapvol in the disk group splitdg on the mount point /bak/mnt_pnt:

    # fsck -t vxfs /dev/vx/rdsk/splitdg/snapvol
    # mount -t vxfs /dev/vx/dsk/splitdg/snapvol /bak/mnt_pnt
    
  8. Back up the file system at this point using a command such as bpbackup in Veritas NetBackup. After the backup is complete, use the following command to unmount the file system.
    # umount mount_point
  9. On the off-host processing host, use the following command to deport the snapshot volume's disk group:
    # vxdg deport snapvoldg

    For example, to deport splitdg:

    # vxdg deport splitdg
  10. On the master node, re-import the snapshot volume's disk group as a shared disk group using the following command:
    # vxdg -s import snapvoldg

    For example, to import splitdg:

    # vxdg -s import splitdg
  11. On the master node, use the following command to rejoin the snapshot volume's disk group with the original volume's disk group:
    # vxdg join snapvoldg database_dg

    For example, to join disk group splitdg with database_dg:

    # vxdg join splitdg database_dg
  12. VxVM automatically recovers the volumes after the join unless automatic recovery is disabled. Check whether the snapshot volumes are initially disabled and not recovered following the join.

    If a volume is in the DISABLED state, use the following command on the primary host to recover and restart the snapshot volume:

    # vxrecover -g database_dg -m snapvol
  13. When the recovery is complete, use the following commands to refresh the snapshot volume from the primary volume:
    # vxsnap -g database_dg refresh snapvol source=database_vol \
    syncing=yes
    # vxsnap -g database_dg syncwait snapvol

    When synchronization is complete, the snapshot is ready to be re-used for backup.

Repeat the entire procedure each time that you need to back up the volume.
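As a quick reference, the whole cycle above can be summarized as the following command sequence, using the example names from this procedure (database_dg, database_vol, snapvol, splitdg, and /bak/mnt_pnt). This is a dry-run sketch that only prints each command in order; it does not run anything against VxVM. Run the commands individually, on the hosts indicated in the steps above.

```shell
# Dry-run sketch only: prints the commands of the backup cycle in order,
# using the example names from the procedure above. Nothing is executed.
print_backup_cycle() {
  DG=database_dg; VOL=database_vol; SNAP=snapvol
  SPLITDG=splitdg; MNT=/bak/mnt_pnt
  cat <<EOF
vxsnap -g $DG refresh $SNAP source=$VOL syncing=yes   # 1: refresh (any node)
vxsnap -g $DG syncwait $SNAP                          # 2: wait for sync (any node)
vxdg split $DG $SPLITDG $SNAP                         # 3: split (master node)
vxdg deport $SPLITDG                                  # 4: deport (master node)
vxdg import $SPLITDG                                  # 5: import (OHP host)
vxrecover -g $SPLITDG -m $SNAP                        # 6: recover if DISABLED (OHP host)
fsck -t vxfs /dev/vx/rdsk/$SPLITDG/$SNAP              # 7: check (OHP host)
mount -t vxfs /dev/vx/dsk/$SPLITDG/$SNAP $MNT         # 7: mount (OHP host)
umount $MNT                                           # 8: back up, then unmount
vxdg deport $SPLITDG                                  # 9: deport (OHP host)
vxdg -s import $SPLITDG                               # 10: re-import shared (master node)
vxdg join $SPLITDG $DG                                # 11: rejoin (master node)
vxrecover -g $DG -m $SNAP                             # 12: recover if DISABLED (master node)
vxsnap -g $DG refresh $SNAP source=$VOL syncing=yes   # 13: refresh for the next backup
vxsnap -g $DG syncwait $SNAP                          # 13: wait for sync
EOF
}
print_backup_cycle
```

The sequence mirrors the steps one-to-one; note that steps 3, 4, 10, 11, and 12 run on the master node, steps 5 through 9 on the OHP host, and the rest on any node.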