InfoScale™ 9.0 Solutions Guide - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux

Running multiple parallel applications within a single cluster using the application isolation feature

Customer scenario

Multiple parallel applications that require flexible sharing of data in a data warehouse are currently deployed on separate clusters, with access across the clusters provided by NFS or other distributed file system technologies. You want to deploy these applications, with the same flexible data sharing, within a single cluster.

In a data center, multiple clusters exist, each with its own dedicated failover nodes. You want to consolidate these disjoint clusters into a single, optimally deployed large cluster.

Configuration overview

Business-critical applications require dedicated hardware so that configuration changes to one application do not affect other applications. For example, when a node leaves or joins the cluster, the change affects the cluster and the applications running on it. If multiple applications are configured on a large cluster, such configuration changes can cause application downtime.

With the application isolation feature, Veritas InfoScale provides logical isolation between applications at the disk group boundary. This is very helpful when applications require occasional sharing of data. Data can be copied efficiently between applications by using Veritas Volume Manager snapshots and disk group split, join, or move operations. Updates to data can be optimally shared by copying only the changed data. Thus, existing configurations that have multiple applications on a large cluster can be made more resilient and scalable with the application isolation feature.
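
For example, a point-in-time copy of one application's volume can be handed to another application by combining an instant snapshot with disk group split and join operations. The following commands are a minimal sketch of that flow. The names appdg1, appvol1, snapvol1, tmpdg, and appdg2 are hypothetical, the snapshot mirror must reside on disks that can be split away from the source disk group, and shared disk group operations are typically run from the CVM master node. See the Storage Foundation Cluster File System High Availability Administrator's Guide for the authoritative procedures.

  # vxsnap -g appdg1 prepare appvol1
  # vxsnap -g appdg1 addmir appvol1
  # vxsnap -g appdg1 make source=appvol1/newvol=snapvol1/nmirror=1
  # vxsnap -g appdg1 syncwait snapvol1

Move the fully synchronized snapshot volume into the other application's disk group:

  # vxdg split appdg1 tmpdg snapvol1
  # vxdg join tmpdg appdg2

After the join, the application that owns appdg2 can start and mount the copied volume.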

Disk group visibility can be limited to only the nodes that require access. Making disk group configurations available to a smaller set of nodes improves the performance and scalability of Veritas Volume Manager configuration operations.
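
For example, with the hypothetical disk group appdg1 from the procedure later in this section, which is imported only on its sub-cluster nodes (node1, node2, and node3), you can verify the restricted visibility by querying the disk group from different nodes:

  # vxdg list appdg1

On node1, node2, or node3 the command displays the disk group details; on a node outside the sub-cluster it is expected to fail because the disk group is not imported there.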

In the deployment described in this section, three applications are logically isolated to operate from specific sets of nodes within a single large VCS cluster. This configuration can be deployed to serve any of the scenarios mentioned above.

Supported configuration

  • Veritas InfoScale 7.2 and later

  • Red Hat Enterprise Linux (RHEL), supported RHEL-compatible distributions, and SUSE Linux Enterprise Server (SLES) versions that are supported in this release

Reference documents

Storage Foundation Cluster File System High Availability Administrator's Guide

Storage Foundation for Oracle RAC Configuration and Upgrade Guide

Solution

See “To run multiple parallel applications within a single Veritas InfoScale cluster using the application isolation feature”.

To run multiple parallel applications within a single Veritas InfoScale cluster using the application isolation feature

  1. Install and configure Veritas InfoScale Enterprise 7.2 or later on the nodes.
  2. Enable the application isolation feature in the cluster.

    Enabling the feature changes the disk group import and deport behavior. As a result, you must manually add the shared disk groups to the VCS configuration.

    See the topic "Enabling the application isolation feature in CVM environments" in the Storage Foundation Cluster File System High Availability Administrator's Guide.

  3. Identify the shared disk groups on which you want to configure the applications.
  4. Initialize the disk groups and create the volumes and file systems you want to use for your applications.

    Run the commands from any one of the nodes in the disk group sub-cluster. For example, if node1, node2, and node3 belong to the sub-cluster DGSubCluster1, run the commands from any one of those three nodes.

    Disk group sub-cluster 1:

    # vxdg -s init appdg1 disk1 disk2 disk3
    # vxassist -g appdg1 make appvol1 100g nmirror=2
    # mkfs -t vxfs /dev/vx/rdsk/appdg1/appvol1

    Disk group sub-cluster 2:

    # vxdg -s init appdg2 disk4 disk5 disk6
    # vxassist -g appdg2 make appvol2 100g nmirror=2
    # mkfs -t vxfs /dev/vx/rdsk/appdg2/appvol2

    Disk group sub-cluster 3:

    # vxdg -s init appdg3 disk7 disk8 disk9
    # vxassist -g appdg3 make appvol3 100g nmirror=2
    # mkfs -t vxfs /dev/vx/rdsk/appdg3/appvol3
  5. Configure the OCR, voting disk, and CSSD resources on all nodes in the cluster. It is recommended to keep a mirror of the OCR and the voting disk on each node in the cluster.

    For instructions, see the section "Installation and upgrade of Oracle RAC" in the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.

  6. Configure application app1 on node1, node2, and node3.

    The following commands add the application app1 to the VCS configuration. Linking the resources and bringing the service groups online are sketched after this procedure.

    # hagrp -add app1
    # hagrp -modify app1 SystemList  node1 0 node2 1 node3 2
    # hagrp -modify app1 AutoFailOver 0
    # hagrp -modify app1 Parallel 1
    # hagrp -modify app1 AutoStartList  node1 node2 node3

    Add disk group resources to the VCS configuration.

    # hares -add appdg1_voldg CVMVolDg app1
    # hares -modify appdg1_voldg Critical 0
    # hares -modify appdg1_voldg CVMDiskGroup appdg1
    # hares -modify appdg1_voldg CVMVolume  appvol1

    Change the activation mode of the shared disk group to shared-write.

    # hares -local appdg1_voldg CVMActivation
    # hares -modify appdg1_voldg NodeList  node1 node2 node3
    # hares -modify appdg1_voldg CVMActivation sw
    # hares -modify appdg1_voldg Enabled 1

    Add the CFS mount resources for the application to the VCS configuration.

    # hares -add appdata1_mnt CFSMount app1
    # hares -modify appdata1_mnt Critical 0
    # hares -modify appdata1_mnt MountPoint "/appdata1_mnt"
    # hares -modify appdata1_mnt BlockDevice "/dev/vx/dsk/appdg1/appvol1"
    # hares -local appdata1_mnt MountOpt
    # hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node1
    # hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node2
    # hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node3
    # hares -modify appdata1_mnt NodeList  node1 node2 node3
    # hares -modify appdata1_mnt Enabled 1

    Add the application's Oracle database to the VCS configuration.

    # hares -add ora_app1 Oracle app1
    # hares -modify ora_app1 Critical 0
    # hares -local ora_app1 Sid
    # hares -modify ora_app1 Sid app1_db1 -sys node1
    # hares -modify ora_app1 Sid app1_db2 -sys node2
    # hares -modify ora_app1 Sid app1_db3 -sys node3
    # hares -modify ora_app1 Owner oracle
    # hares -modify ora_app1 Home "/u02/app/oracle/dbhome"
    # hares -modify ora_app1 StartUpOpt SRVCTLSTART
    # hares -modify ora_app1 ShutDownOpt SRVCTLSTOP
    # hares -modify ora_app1 DBName app1_db
  7. Configure application app2 on node3, node4, and node5.

    The following commands add the application app2 to the VCS configuration.

    # hagrp -add app2
    # hagrp -modify app2 SystemList  node3 0 node4 1 node5 2
    # hagrp -modify app2 AutoFailOver 0
    # hagrp -modify app2 Parallel 1
    # hagrp -modify app2 AutoStartList  node3 node4 node5

    Add disk group resources to the VCS configuration.

    # hares -add appdg2_voldg CVMVolDg app2
    # hares -modify appdg2_voldg Critical 0
    # hares -modify appdg2_voldg CVMDiskGroup appdg2
    # hares -modify appdg2_voldg CVMVolume  appvol2

    Change the activation mode of the shared disk group to shared-write.

    # hares -local appdg2_voldg CVMActivation
    # hares -modify appdg2_voldg NodeList  node3 node4 node5
    # hares -modify appdg2_voldg CVMActivation sw
    # hares -modify appdg2_voldg Enabled 1

    Add the CFS mount resources for the application to the VCS configuration.

    # hares -add appdata2_mnt CFSMount app2
    # hares -modify appdata2_mnt Critical 0
    # hares -modify appdata2_mnt MountPoint "/appdata2_mnt"
    # hares -modify appdata2_mnt BlockDevice "/dev/vx/dsk/appdg2/appvol2"
    # hares -local appdata2_mnt MountOpt
    # hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node3
    # hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node4
    # hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node5
    # hares -modify appdata2_mnt NodeList  node3 node4 node5
    # hares -modify appdata2_mnt Enabled 1

    Add the application's Oracle database to the VCS configuration.

    # hares -add ora_app2 Oracle app2
    # hares -modify ora_app2 Critical 0
    # hares -local ora_app2 Sid
    # hares -modify ora_app2 Sid app2_db1 -sys node3
    # hares -modify ora_app2 Sid app2_db2 -sys node4
    # hares -modify ora_app2 Sid app2_db3 -sys node5
    # hares -modify ora_app2 Owner oracle
    # hares -modify ora_app2 Home "/u02/app/oracle/dbhome"
    # hares -modify ora_app2 StartUpOpt SRVCTLSTART
    # hares -modify ora_app2 ShutDownOpt SRVCTLSTOP
    # hares -modify ora_app2 DBName app2_db
  8. Configure application app3 on node5, node6, and node7.

    The following commands add the application app3 to the VCS configuration.

    # hagrp -add app3
    # hagrp -modify app3 SystemList  node5 0 node6 1 node7 2
    # hagrp -modify app3 AutoFailOver 0
    # hagrp -modify app3 Parallel 1
    # hagrp -modify app3 AutoStartList  node5 node6 node7

    Add disk group resources to the VCS configuration.

    # hares -add appdg3_voldg CVMVolDg app3
    # hares -modify appdg3_voldg Critical 0
    # hares -modify appdg3_voldg CVMDiskGroup appdg3
    # hares -modify appdg3_voldg CVMVolume  appvol3

    Change the activation mode of the shared disk group to shared-write.

    # hares -local appdg3_voldg CVMActivation
    # hares -modify appdg3_voldg NodeList  node5 node6 node7
    # hares -modify appdg3_voldg CVMActivation sw
    # hares -modify appdg3_voldg Enabled 1

    Add the CFS mount resources for the application to the VCS configuration.

    # hares -add appdata3_mnt CFSMount app3
    # hares -modify appdata3_mnt Critical 0
    # hares -modify appdata3_mnt MountPoint "/appdata3_mnt"
    # hares -modify appdata3_mnt BlockDevice "/dev/vx/dsk/appdg3/appvol3"
    # hares -local appdata3_mnt MountOpt
    # hares -modify appdata3_mnt MountOpt "rw,cluster" -sys node5
    # hares -modify appdata3_mnt MountOpt "rw,cluster" -sys node6
    # hares -modify appdata3_mnt MountOpt "rw,cluster" -sys node7
    # hares -modify appdata3_mnt NodeList  node5 node6 node7
    # hares -modify appdata3_mnt Enabled 1

    Add the application's Oracle database to the VCS configuration.

    # hares -add ora_app3 Oracle app3
    # hares -modify ora_app3 Critical 0
    # hares -local ora_app3 Sid
    # hares -modify ora_app3 Sid app3_db1 -sys node5
    # hares -modify ora_app3 Sid app3_db2 -sys node6
    # hares -modify ora_app3 Sid app3_db3 -sys node7
    # hares -modify ora_app3 Owner oracle
    # hares -modify ora_app3 Home "/u02/app/oracle/dbhome"
    # hares -modify ora_app3 StartUpOpt SRVCTLSTART
    # hares -modify ora_app3 ShutDownOpt SRVCTLSTOP
    # hares -modify ora_app3 DBName app3_db
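
The steps above define the service groups and resources but do not link the resources or bring the groups online. The following commands are a minimal sketch for app1, assuming the usual ordering in which the CFS mount depends on the disk group resource and the Oracle database depends on the mount. Adapt the dependencies to your environment and repeat the equivalent commands for app2 and app3.

  # haconf -makerw
  # hares -link appdata1_mnt appdg1_voldg
  # hares -link ora_app1 appdata1_mnt
  # haconf -dump -makero

Bring the parallel service group online on each of its systems and verify its state:

  # hagrp -online app1 -sys node1
  # hagrp -online app1 -sys node2
  # hagrp -online app1 -sys node3
  # hagrp -state app1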