InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux

Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
    1. About supported disaster recovery scenarios
      1. About disaster recovery scenarios
      2. About campus cluster configuration
        1. VCS campus cluster requirements
        2. How VCS campus clusters work
        3. Typical VCS campus cluster setup
      3. About replicated data clusters
        1. How VCS replicated data clusters work
      4. About global clusters
        1. How VCS global clusters work
        2. User privileges for cross-cluster operations
        3. VCS global clusters: The building blocks
          1. Visualization of remote cluster objects
          2. About global service groups
          3. About global cluster management
            1. About the wide-area connector process
            2. About the wide-area heartbeat agent
            3. Sample configuration for the wide-area heartbeat agent
          4. About serialization - The Authority attribute
            1. About the Authority and AutoStart attributes
          5. About resiliency and "Right of way"
          6. VCS agents to manage wide-area failover
          7. About the Steward process: Split-brain in two-cluster global clusters
          8. Secure communication in global clusters
      5. Disaster recovery feature support for components in the Veritas InfoScale product suite
      6. Virtualization support for InfoScale 9.0 products in replicated environments
    2. Planning for disaster recovery
      1. Planning for cluster configurations
        1. Planning a campus cluster setup
        2. Planning a replicated data cluster setup
        3. Planning a global cluster setup
      2. Planning for data replication
        1. Data replication options
        2. Data replication considerations
  2. Section II. Implementing campus clusters
    1. Setting up campus clusters for VCS and SFHA
      1. About setting up a campus cluster configuration
        1. Preparing to set up a campus cluster configuration
        2. Configuring I/O fencing to prevent data corruption
        3. Configuring VxVM disk groups for campus cluster configuration
        4. Configuring VCS service group for campus clusters
        5. Setting up campus clusters for VxVM and VCS using Veritas InfoScale Operations Manager
      2. Fire drill in campus clusters
      3. About the DiskGroupSnap agent
      4. About running a fire drill in a campus cluster
        1. Configuring the fire drill service group
        2. Running a successful fire drill in a campus cluster
    2. Setting up campus clusters for SFCFSHA, SFRAC
      1. About setting up a campus cluster for disaster recovery for SFCFSHA or SF Oracle RAC
      2. Preparing to set up a campus cluster in a parallel cluster database environment
      3. Configuring I/O fencing to prevent data corruption
      4. Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment
      5. Configuring VCS service groups for a campus cluster for SFCFSHA and SF Oracle RAC
      6. Tuning guidelines for parallel campus clusters
      7. Best practices for a parallel campus cluster
  3. Section III. Implementing replicated data clusters
    1. Configuring a replicated data cluster using VVR
      1. About setting up a replicated data cluster configuration
        1. About typical replicated data cluster configuration
        2. About setting up replication
        3. Configuring the service groups
        4. Configuring the service group dependencies
      2. About migrating a service group
        1. Switching the service group
      3. Fire drill in replicated data clusters
    2. Configuring a replicated data cluster using third-party replication
      1. About setting up a replicated data cluster configuration using third-party replication
      2. About typical replicated data cluster configuration using third-party replication
      3. About setting up third-party replication
      4. Configuring the service groups for third-party replication
      5. Fire drill in replicated data clusters using third-party replication
  4. Section IV. Implementing global clusters
    1. Configuring global clusters for VCS and SFHA
      1. Installing and Configuring Cluster Server
      2. Setting up VVR replication
        1. About configuring VVR replication
        2. Best practices for setting up replication
        3. Creating a Replicated Data Set
          1. Creating a Primary RVG of an RDS
            1. Prerequisites for creating a Primary RVG of an RDS
            2. Example - Creating a Primary RVG containing a data volume
            3. Example - Creating a Primary RVG containing a volume set
          2. Adding a Secondary to an RDS
            1. Best practices for adding a Secondary to an RDS
            2. Prerequisites for adding a Secondary to an RDS
          3. Changing the replication settings for a Secondary
            1. Setting the mode of replication for a Secondary
              1. Example - Setting the mode of replication to asynchronous for an RDS
              2. Example - Setting the mode of replication to synchronous for an RDS
            2. Setting the latency protection for a Secondary
            3. Setting the SRL overflow protection for a Secondary
            4. Setting the network transport protocol for a Secondary
            5. Setting the packet size for a Secondary
              1. Example - Setting the packet size between the Primary and Secondary
            6. Setting the bandwidth limit for a Secondary
              1. Example: Limiting network bandwidth between the Primary and the Secondary
              2. Example: Disabling Bandwidth Throttling between the Primary and the Secondary
              3. Example: Limiting network bandwidth used by VVR when using full synchronization
        4. Synchronizing the Secondary and starting replication
          1. Methods to synchronize the Secondary
            1. Using the network to synchronize the Secondary
            2. Using block-level tape backup to synchronize the Secondary
            3. Moving disks physically to synchronize the Secondary
          2. Using the automatic synchronization feature
            1. Notes on using automatic synchronization
          3. Example for setting up replication using automatic synchronization
          4. About SmartMove for VVR
          5. About thin storage reclamation and VVR
          6. Determining if a thin reclamation array needs reclamation
        5. Starting replication when the data volumes are zero initialized
          1. Example: Starting replication when the data volumes are zero initialized
      3. Setting up third-party replication
      4. Configuring clusters for global cluster setup
        1. Configuring global cluster components at the primary site
        2. Installing and configuring VCS at the secondary site
        3. Securing communication between the wide-area connectors
        4. Configuring remote cluster objects
        5. Configuring additional heartbeat links (optional)
        6. Configuring the Steward process (optional)
      5. Configuring service groups for global cluster setup
        1. Configuring VCS service group for VVR-based replication
        2. Configuring a service group as a global service group
      6. Fire drill in global clusters
    2. Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
      1. About global clusters
      2. About replication for parallel global clusters using Storage Foundation and High Availability (SFHA) Solutions
      3. About setting up a global cluster environment for parallel clusters
      4. Configuring the primary site
      5. Configuring the secondary site
        1. Configuring the Sybase ASE CE cluster on the secondary site
      6. Setting up replication between parallel global cluster sites
      7. Testing a parallel global cluster configuration
    3. Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
      1. About configuring a parallel global cluster using Volume Replicator (VVR) for replication
      2. Setting up replication on the primary site using VVR
        1. Creating the data and SRL volumes on the primary site
        2. Setting up the Replicated Volume Group on the primary site
      3. Setting up replication on the secondary site using VVR
        1. Creating the data and SRL volumes on the secondary site
        2. Editing the /etc/vx/vras/.rdg files
        3. Setting up IP addresses for RLINKs on each cluster
        4. Setting up the disk group on secondary site for replication
      4. Starting replication of the primary site database volume to the secondary site using VVR
      5. Configuring Cluster Server to replicate the database volume using VVR
        1. Modifying the Cluster Server (VCS) configuration on the primary site
        2. Modifying the VCS configuration on the secondary site
        3. Configuring the Sybase ASE CE cluster on the secondary site
      6. Replication use cases for global parallel clusters
  5. Section V. Reference
    1. Appendix A. Sample configuration files
      1. Sample Storage Foundation for Oracle RAC configuration files
        1. sfrac02_main.cf file
        2. sfrac07_main.cf and sfrac08_main.cf files
        3. sfrac09_main.cf and sfrac10_main.cf files
        4. sfrac11_main.cf file
        5. sfrac12_main.cf and sfrac13_main.cf files
        6. Sample fire drill service group configuration
      2. About sample main.cf files for Storage Foundation (SF) for Oracle RAC
        1. Sample main.cf for Oracle 10g for CVM/VVR primary site
        2. Sample main.cf for Oracle 10g for CVM/VVR secondary site
      3. About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
        1. Sample main.cf for a basic Sybase ASE CE cluster configuration under VCS control with shared mount point on CFS for Sybase binary installation
        2. Sample main.cf for a basic Sybase ASE CE cluster configuration with local mount point on VxFS for Sybase binary installation
        3. Sample main.cf for a primary CVM VVR site
        4. Sample main.cf for a secondary CVM VVR site

Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment

After configuring I/O fencing for data integrity, you must configure the VxVM disk groups for remote mirroring before installing your database.

Note:

In cloud environments where one site of a campus cluster is configured in one location and the other site in a second location, Flexible Storage Sharing (FSS) volumes can be used in campus cluster configurations to replicate data across the sites for high data availability. In this FSS-campus cluster configuration, add site tags based on the site names so that data remains highly available during site failures.

For the example configuration, the database is Oracle RAC.

To configure VxVM disk groups for Oracle RAC on an SF for Oracle RAC campus cluster

  1. Initialize the disks as CDS disks:
    # vxdisksetup -i disk01 format=cdsdisk
    # vxdisksetup -i disk02 format=cdsdisk
    # vxdisksetup -i disk03 format=cdsdisk
    # vxdisksetup -i disk05 format=cdsdisk
    # vxdisksetup -i disk06 format=cdsdisk
    # vxdisksetup -i disk07 format=cdsdisk
    # vxdisksetup -i disk08 format=cdsdisk
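
    To confirm that the disks are initialized, you can list them along with their status (a quick check; the exact output columns vary by release):

    # vxdisk list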
  2. Set the site name for each host:
    # vxdctl set site=sitename

    The site name is stored in the /etc/vx/volboot file. To display the site names:

    # vxdctl list | grep siteid

    For example, for a four-node cluster with two nodes at each site, mark the sites as follows:

    On the nodes at the first site:

    # vxdctl set site=site1

    On the nodes at the second site:

    # vxdctl set site=site2
  3. Obtain the enclosure name using the following command:
    # vxdmpadm listenclosure
    ENCLR_NAME     ENCLR_TYPE   ENCLR_SNO STATUS    ARRAY_TYPE LUN_COUNT FIRMWARE
    =============================================================================
    ams_wms0       AMS_WMS      75040638  CONNECTED A/A-A      35        -
    hds9500-alua0  HDS9500-ALUA D600145E  CONNECTED A/A-A       9        -
    hds9500-alua1  HDS9500-ALUA D6001FD3  CONNECTED A/A-A       6        -
    disk           Disk         DISKS     CONNECTED Disk        2        -
  4. Set the site name for all the disks in an enclosure:
    # vxdisk settag site=sitename encl:ENCLR_NAME
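
    For example, to tag all the disks in the ams_wms0 enclosure reported in the previous step (assuming, for illustration, that this enclosure resides at site1):

    # vxdisk settag site=site1 encl:ams_wms0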
  5. If you want to tag only specific disks, run the following command for each disk:
    # vxdisk settag site=sitename disk

    For example:

    # vxdisk settag site=site1 disk01
    # vxdisk settag site=site1 disk02
    # vxdisk settag site=site1 disk03
    # vxdisk settag site=site2 disk06
    # vxdisk settag site=site2 disk08
  6. Verify that the disks are registered to a site.
    # vxdisk listtag

    For example:

    # vxdisk listtag
    DEVICE     NAME    VALUE
    disk01     site    site1
    disk02     site    site1
    disk03     site    site1
    disk04     site    site1
    disk05     site    site1
    disk06     site    site2
    disk07     site    site2
    disk08     site    site2
    disk09     site    site2
  7. Create one disk group for OCR and voting disks and another for Oracle data, with disks picked from both sites. The example below shows one disk group for each purpose; you can create as many disk groups as you need.
    # vxdg -s init ocrvotedg disk05 disk07
    # vxdg -s init oradatadg disk01 disk06
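
    To confirm that the shared disk groups were created, you can list the disk groups; the shared flag appears in the STATE field for CVM shared disk groups:

    # vxdg list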
  8. Enable site-based allocation on the disk groups for each site.
    # vxdg -g ocrvotedg addsite site1
    # vxdg -g ocrvotedg addsite site2
    # vxdg -g oradatadg addsite site1
    # vxdg -g oradatadg addsite site2
  9. If you are using enclosures, set the site tag on the enclosure at each site.
    # vxdg -o retain -g ocrvotedg settag encl:3pardata0 site=site1
    # vxdg -o retain -g ocrvotedg settag encl:3pardata1 site=site2
    # vxdg -o retain -g oradatadg settag encl:3pardata0 site=site1
    # vxdg -o retain -g oradatadg settag encl:3pardata1 site=site2
  10. Configure site consistency for the disk groups.
    # vxdg -g ocrvotedg set siteconsistent=on
    # vxdg -g oradatadg set siteconsistent=on
  11. Create one or more mirrored volumes in each disk group.
    # vxassist -g ocrvotedg make ocrvotevol 2048m nmirror=2
    # vxassist -g oradatadg make oradatavol 10200m nmirror=2
  12. To verify the site awareness license, use the vxlicrep command. The Veritas Volume Manager product section of the output should indicate: Site Awareness = Enabled
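
    For example, you can filter the license report for the relevant entry (the grep filter is illustrative; inspecting the full report works equally well):

    # vxlicrep | grep -i "site awareness"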

    With the Site Awareness license installed on all hosts, the volume created has the following characteristics by default.

    • The allsites attribute is set to on; the volumes have at least one mirror at each site.

    • The volumes are automatically mirrored across sites.

    • The read policy (rdpol) is set to siteread.

      The read policy can be displayed using the vxprint -ht command, as shown in the example after this list.

    • The volumes inherit the site consistency value that is set on the disk group.
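
    For example, to display the volume records, including the read policy, for the Oracle data volume created earlier (illustrative; the rdpol value appears in the volume line of the output):

    # vxprint -g oradatadg -ht oradatavol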

  13. From the CVM master, start the volumes for all the disk groups.
    # vxvol -g ocrvotedg startall
    # vxvol -g oradatadg startall
  14. Create a file system on each volume and mount it.
    # mkfs -t vxfs /dev/vx/rdsk/ocrvotedg/ocrvotevol
    # mkfs -t vxfs /dev/vx/rdsk/oradatadg/oradatavol
    # mount -t vxfs -o cluster /dev/vx/dsk/ocrvotedg/ocrvotevol /ocrvote
    # mount -t vxfs -o cluster /dev/vx/dsk/oradatadg/oradatavol /oradata
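
    To verify that the cluster file systems are mounted, you can list the mounted VxFS file systems on each node (a generic check):

    # mount -t vxfs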
  15. Create separate directories for the OCR and voting files as follows:
    # mkdir -p /ocrvote/ocr
    # mkdir -p /ocrvote/vote
  16. After creating the directories, change the ownership of these directories to the Oracle or grid user:
    # chown -R user:group /ocrvote

    Also change the ownership of /oradata to the Oracle user:

    # chown user:group /oradata
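
    For example, with the hypothetical account names grid and oracle and the group oinstall that are commonly used in Oracle installations:

    # chown -R grid:oinstall /ocrvote
    # chown oracle:oinstall /oradata

    You can confirm the new ownership with ls -ld /ocrvote /oradata.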

    Note:

    One voting disk is sufficient because it is already mirrored by VxVM.

  17. Install your database software.

    For Oracle RAC:

    • Install Oracle Clusterware/GRID

    • Install Oracle RAC binaries

    • Perform library linking of Oracle binaries

    • Create the database on /oradata.

      For detailed steps, see the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.