InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux

Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux

Configuring the secondary site

The setup requirements for the secondary site parallel the requirements for the primary site, with a few additions or exceptions as noted below.

Important requirements for parallel global clustering:

  • Cluster names on the primary and secondary sites must be unique.

  • You must use the same OS user and group IDs for your database installation and configuration on both the primary and secondary clusters.

  • For Oracle RAC, you must use the same directory structure, names, and permissions for the CRS/GRID and database binaries.

You can use an existing parallel cluster or you can install a new cluster for your secondary site.

Consult your product installation guide for planning information as well as specific configuration guidance for the steps below.

See the Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide.

See the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.

See the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.

To set up the cluster on the secondary site

  1. Install and configure servers and storage.
  2. If you are using hardware-based replication, install the software for managing your array.
  3. Verify that the correct installation options are enabled, whether you are using keyless licensing or installing license keys manually. You must have the Global Cluster Option (GCO) for a global cluster. If you are using VVR for replication, the VVR option must also be enabled.
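
    For example, you can confirm which options are enabled with the licensing utilities, assuming the default /opt/VRTSvlic/bin installation path; review the output for the GCO and VVR features:

    /opt/VRTSvlic/bin/vxkeyless display
    /opt/VRTSvlic/bin/vxlicrep
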
  4. Prepare, install, and configure your Storage Foundation and High Availability (SFHA) Solutions product according to the directions in your product's installation guide.

    For a multi-node cluster, configure I/O fencing.

  5. For a single-node cluster, do not enable I/O fencing. Fencing will run in disabled mode.
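
    You can verify the resulting fencing configuration with the vxfenadm utility, which reports the current fencing mode and the cluster membership:

    vxfenadm -d
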
  6. Prepare systems and storage for a global cluster. Identify the hardware and storage requirements before installing your database software.

    For SFCFSHA, you will need to set up:

    • Local storage for database software

    • Shared storage for resources which are not replicated as part of the hardware-based or host-based replication

    • Replicated storage for database files

    • You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary.

    For SF Oracle RAC, you will need to set up:

    • Local storage for Oracle RAC and CRS binaries

    • Shared storage for OCR and Vote disk which is not replicated as part of the hardware-based or host-based replication

    • Replicated shared storage for database files

    • You must use the same directory structure, names, and permissions for the CRS/GRID and database binaries as on the primary.

    For SF Sybase CE, you will need to set up:

    • Shared storage for the File System and Cluster File System hosting the Sybase ASE CE binaries, which is not replicated

    • Shared storage for the quorum device which is not replicated

    • Replicated storage for database files

    • You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary.

    • Verify the configuration using procedures in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.


  7. For SFCFSHA, install and configure your database binaries. Consult your database documentation.

    Note:

    Resources which will not be replicated must be on non-replicated shared storage.

    After successful database installation and configuration, verify that database resources are up on all nodes.
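
    For example, the following VCS command gives a cluster-wide summary of service group and resource states:

    hastatus -sum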

  8. For Oracle RAC, see the instructions in the Storage Foundation for Oracle RAC Configuration and Upgrade Guide for installing and configuring:
    • Oracle Clusterware/Grid Infrastructure software

    • Oracle RAC database software

    • The Oracle RAC binary versions must be exactly the same on both sites.

    Note:

    OCR and Vote disk must be on non-replicated shared storage.

    After successful Oracle RAC installation and configuration, verify that CRS daemons and resources are up on all nodes.

    $GRID_HOME/bin/crsctl stat res -t
  9. For SF Sybase CE, see the instructions in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide for installing and configuring Sybase ASE CE binaries.

    Note the following configuration requirements:

    • The quorum device must be on non-replicated shared storage.

    • The Sybase binary versions must be exactly the same on both sites, including the ESD versions.

    • Manually configure the Sybase binary mounts/volumes under VCS control on the secondary site, as in the sketch that follows this list.
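
    The following is a minimal sketch of placing the Sybase binary mount under VCS control with a CFSMount resource, assuming an existing parallel service group for the binaries. All names are illustrative, and the mount resource also needs the usual CVMVolDg resource and dependency described in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide:

    # Illustrative names: service group sybbin_grp, disk group sybbindg,
    # volume sybbinvol, mount point /sybase. Run as root.
    haconf -makerw
    hares -add sybbin_mnt CFSMount sybbin_grp
    hares -modify sybbin_mnt MountPoint "/sybase"
    hares -modify sybbin_mnt BlockDevice "/dev/vx/dsk/sybbindg/sybbinvol"
    hares -modify sybbin_mnt Enabled 1
    haconf -dump -makero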

Do not create the database. The database will be replicated from the primary site.

To set up the SFCFSHA database for the secondary site

  1. If you are using hardware-based replication, the database, disk group, and volumes will be replicated from the primary site.

    Create the directory for the CFS mount point which will host the database data and control files.

  2. If you are using VVR for replication, create an identical disk group and volumes for the replicated content, with the same names and sizes as listed on the primary site (see the command sketch after this procedure).

    Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site.

  3. Create subdirectories for the database as you did on the primary site.
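
The following is a minimal sketch of step 2 for a VVR configuration; it applies equally to step 2 of the Oracle RAC procedure below. The disk group, disk, volume, and mount point names and the volume size are illustrative; substitute the names and sizes recorded from your primary site:

    # Run on the CVM master node; all names and sizes are illustrative.
    vxdg -s init oradatadg sdc sdd
    vxassist -g oradatadg make oradatavol 10g
    mkdir -p /oradata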

To set up the Oracle RAC database for the secondary site

  1. If you are using hardware-based replication, the database, disk group, and volumes will be replicated from the primary site.

    Create the directory for the CFS mount point which will host the database data and control files.

  2. If you are using VVR for replication, create an identical disk group and volumes for the replicated content, with the same names and sizes as listed on the primary site.

    Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site.

  3. On each node in the cluster, copy the initialization files (pfiles, spfiles) from the primary cluster to the secondary cluster, maintaining the same directory path.

    For example, copy init$ORACLE_SID.ora and orapw$ORACLE_SID.ora from $ORACLE_HOME/dbs at the primary to $ORACLE_HOME/dbs at the secondary.
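
    A hedged example using scp, assuming a primary node named sys1; run the commands as the Oracle user on each secondary node:

    $ scp sys1:$ORACLE_HOME/dbs/init$ORACLE_SID.ora $ORACLE_HOME/dbs/
    $ scp sys1:$ORACLE_HOME/dbs/orapw$ORACLE_SID.ora $ORACLE_HOME/dbs/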

  4. As the Oracle user, create the following subdirectories on the secondary site to parallel the directories on the primary site:
    $ mkdir -p $ORACLE_BASE/diag
    $ mkdir -p $ORACLE_BASE/admin
    $ mkdir -p $ORACLE_BASE/admin/adump 

    On both the primary and secondary sites, edit the file:

    $ORACLE_HOME/dbs/init$ORACLE_SID.ora

    so that it contains the following entries:

    remote_listener = 'SCAN_NAME:1521'
    SPFILE=<SPFILE NAME>
  5. Configure listeners on the secondary site with the same names as on the primary site. You can do this by one of the following methods:
    • Copy the listener.ora and tnsnames.ora files from the primary site and update the names as appropriate for the secondary site.

    • Use Oracle's netca utility to configure the listener.ora and tnsnames.ora files on the secondary site.

  6. On the secondary site, register the database using the srvctl command as the database software owner.

    You need to register the database only once, from any node in the secondary cluster. Use the following command as the Oracle database software owner:

    $ $ORACLE_HOME/bin/srvctl add database -d database_name -o oracle_home
  7. To prevent automatic database instance restart, change the management policy for the database from AUTOMATIC to MANUAL using the srvctl command:
    $ $ORACLE_HOME/bin/srvctl modify database -d database_name -y manual

    You need only perform this change once from any node in the cluster.
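
    You can confirm the new policy with the corresponding query command, using the same database_name placeholder; some Oracle versions require the -a option to display the management policy:

    $ $ORACLE_HOME/bin/srvctl config database -d database_name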

  8. Register the instances using the srvctl command. Execute the following command for each instance:
    $ $ORACLE_HOME/bin/srvctl add instance -d database_name \
    -i instance_name -n node_name

    If the secondary cluster has more than one node, add an instance entry for each node.

    For example, if the database name is racdb, the instance name on sys3 is racdb1 and the instance name on sys4 is racdb2.

    $ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb1 -n sys3
    
    $ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb2 -n sys4
  9. Register all other resources (for example, listener, ASM, and services) present in the cluster/GRID at the primary site to the secondary site using the srvctl command or crs_register. For command details, see the Oracle documentation on Metalink.
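
    A hedged example of adding a database service named oltp_svc for the racdb database used earlier; the service name is illustrative, and the exact srvctl syntax depends on your Oracle version:

    $ $ORACLE_HOME/bin/srvctl add service -d racdb -s oltp_svc -r racdb1,racdb2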

To set up the Sybase ASE CE database for the secondary site

  1. Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database files when the failover occurs and the secondary is promoted to become the primary site.
  2. Create an identical disk group and volumes for the replicated content, with the same names and sizes as listed on the primary site. You can verify the result as shown below.
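
For example, to compare the two sites, list the volumes on each and check the names and sizes; the disk group name sybdatadg is illustrative:

    vxprint -g sybdatadg -vt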