Storage Foundation and High Availability Solutions 8.0.1 HA and DR Solutions Guide for Enterprise Vault - Windows

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.1)
Platform: Windows
  1. Introducing SFW HA for EV
    1. About clustering solutions with InfoScale products
    2. About high availability
    3. How a high availability solution works
    4. How VCS monitors storage components
      1. Shared storage - if you use NetApp filers
      2. Shared storage - if you use SFW to manage cluster dynamic disk groups
      3. Shared storage - if you use Windows LDM to manage shared disks
      4. Non-shared storage - if you use SFW to manage dynamic disk groups
      5. Non-shared storage - if you use Windows LDM to manage local disks
      6. Non-shared storage - if you use VMware storage
    5. About replication
    6. About disaster recovery
    7. What you can do with a disaster recovery solution
    8. Typical disaster recovery configuration
  2. Configuring high availability for Enterprise Vault with InfoScale Enterprise
    1. Reviewing the HA configuration
      1. Active-Passive configuration
        1. Sample Active-Passive configuration
        2. IP addresses for sample Active-Passive configuration
    2. Reviewing the disaster recovery configuration
      1. Sample disaster recovery configuration
      2. IP addresses for disaster recovery configuration
      3. Supported disaster recovery configurations for service group dependencies
    3. High availability (HA) configuration (New Server)
    4. Following the HA workflow in the Solutions Configuration Center
    5. Disaster recovery configuration
      1. DR configuration tasks: Primary site
      2. DR configuration tasks: Secondary site
    6. Notes and recommendations for cluster and application configuration
      1. IPv6 support
    7. Configuring the storage hardware and network
    8. Configuring cluster disk groups and volumes for Enterprise Vault
      1. About cluster disk groups and volumes
      2. Prerequisites for configuring cluster disk groups and volumes
      3. Considerations for a fast failover configuration
      4. Considerations for disks and volumes for campus clusters
      5. Considerations for volumes for a Volume Replicator configuration
      6. Sample disk group and volume configuration
      7. Viewing the available disk storage
      8. Creating a cluster disk group
      9. Creating volumes
      10. About managing disk groups and volumes
      11. Importing a disk group and mounting a volume
      12. Unmounting a volume and deporting a disk group
      13. Adding drive letters to mount the volumes
      14. Deporting the cluster disk group
    9. Configuring the cluster
    10. Adding a node to an existing VCS cluster
    11. Verifying your primary site configuration
    12. Guidelines for installing InfoScale Enterprise and configuring the cluster on the secondary site
    13. Setting up your replication environment
    14. Setting up security for Volume Replicator
    15. Assigning user privileges (secure clusters only)
    16. Configuring disaster recovery with the DR wizard
    17. Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
    18. Installing and configuring Enterprise Vault on the secondary site
    19. Configuring Volume Replicator replication and global clustering
    20. Configuring global clustering only
    21. Setting service group dependencies for disaster recovery
    22. Verifying the disaster recovery configuration
    23. Adding multiple DR sites (optional)
    24. Recovery procedures for service group dependencies
  3. Using the Solutions Configuration Center
    1. About the Solutions Configuration Center
    2. Starting the Solutions Configuration Center
    3. Options in the Solutions Configuration Center
    4. About launching wizards from the Solutions Configuration Center
    5. Remote and local access to Solutions wizards
    6. Solutions wizards and logs
    7. Workflows in the Solutions Configuration Center
  4. Installing and configuring Enterprise Vault for failover
    1. Installing Enterprise Vault
    2. Configuring the Enterprise Vault service group
      1. Before you configure an EV service group
      2. Creating an EV service group
      3. Enabling fast failover for disk groups (optional)
    3. Configuring Enterprise Vault Server in a cluster environment
    4. Setting service group dependencies for high availability
    5. Verifying the Enterprise Vault cluster configuration
    6. Setting up Enterprise Vault
    7. Considerations when modifying an EV service group

Notes and recommendations for cluster and application configuration

  • Review the hardware compatibility list (HCL) and software compatibility list (SCL).

    Note:

    Solutions wizards cannot be used to perform Disaster Recovery, Fire Drill, or Quick Recovery remotely on Windows Server Core systems.

    The DR, FD, and QR wizards require that the .NET Framework is present on the system where these operations are to be performed. As the .NET Framework is not supported on the Windows Server Core systems, the wizards cannot be used to perform DR, FD, or QR on these systems.

    Refer to the following Microsoft Knowledge Base article for more details:

    http://technet.microsoft.com/en-us/library/dd184075.aspx

  • Shared disks are required to support applications that migrate between nodes in the cluster. Campus clusters require more than one array for mirroring. Disaster recovery configurations require one array for each site. Replicated data clusters with no shared storage are also supported.

    If your storage devices are SCSI-3 compliant, and you wish to use SCSI-3 Persistent Group Reservations (PGR), you must enable SCSI-3 support using the Veritas Enterprise Administrator (VEA).

    See the Storage Foundation Administrator's Guide for more information.

  • SCSI, Fibre Channel, or iSCSI host bus adapters (HBAs), or NICs that support the iSCSI Initiator, are required to access shared storage.

  • A minimum of two NICs is required. One NIC will be used exclusively for private network communication between the nodes of the cluster. The second NIC will be used for both private cluster communications and for public access to the cluster. Veritas recommends three NICs.

  • NIC teaming is not supported for the VCS private network.

  • Static IP addresses are required for certain purposes when configuring high availability or disaster recovery solutions. For IPv4 networks, ensure that you have the addresses available to enter. For IPv6 networks, ensure that the network advertises the prefix so that addresses are autogenerated.

    Static IP addresses are required for the following purposes:

    • One static IP address per site for each Enterprise Vault virtual server.

    • A minimum of one static IP address for each physical node in the cluster.

    • One static IP address per cluster used when configuring Notification or the Global Cluster Option. The same IP address may be used for all options.

    • For Volume Replicator replication in a disaster recovery configuration, a minimum of one static IP address per site for each application instance running in the cluster.

    • For Volume Replicator replication in a Replicated Data Cluster configuration, a minimum of one static IP address per zone for each application instance running in the cluster.

  • Configure name resolution for each node.

  • Verify the availability of DNS services. Either AD-integrated DNS or BIND 8.2 or later is supported.

    Make sure a reverse lookup zone exists in the DNS. Refer to the application documentation for instructions on creating a reverse lookup zone.

  • DNS scavenging affects virtual servers configured in SFW HA because the Lanman agent uses Dynamic DNS (DDNS) to map virtual names with IP addresses. If you use scavenging, then you must set the DNSRefreshInterval attribute for the Lanman agent. This enables the Lanman agent to refresh the resource records on the DNS servers.

    See the Cluster Server Bundled Agents Reference Guide.
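    For example, the attribute can be set from the VCS command line. This is a sketch only: the resource name EV-Lanman and the 3600-second interval are illustrative values, not defaults from this guide.

    ```shell
    # Open the cluster configuration for writing (VCS CLI)
    haconf -makerw

    # Set the DNS refresh interval (in seconds) on the Lanman resource.
    # "EV-Lanman" is a hypothetical resource name; 3600 is an example value.
    hares -modify EV-Lanman DNSRefreshInterval 3600

    # Save the configuration and make it read-only again
    haconf -dump -makero
    ```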

  • In an IPv6 environment, the Lanman agent relies on the DNS records to validate the virtual server name on the network. If the virtual servers configured in the cluster use IPv6 addresses, you must specify the DNS server IP, either in the network adapter settings or in the Lanman agent's AdditionalDNSServers attribute.

  • If Network Basic Input/Output System (NetBIOS) is disabled over TCP/IP, then you must set the Lanman agent's DNSUpdateRequired attribute to 1 (True).
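    As a sketch (again, the resource name EV-Lanman is illustrative), the attribute can be set with the VCS CLI:

    ```shell
    haconf -makerw
    # Force the Lanman agent to update DNS records even though NetBIOS is disabled
    hares -modify EV-Lanman DNSUpdateRequired 1
    haconf -dump -makero
    ```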

  • You must have write permissions for the Active Directory objects corresponding to all the nodes.

  • If you plan to create a new user account for the VCS Helper service, you must have Domain Administrator privileges or belong to the Account Operators group. If you plan to use an existing user account context for the VCS Helper service, you must know the password for the user account.

  • If User Access Control (UAC) is enabled on Windows systems, then you cannot log on to the VEA GUI with an account that is not a member of the Administrators group, such as a guest user. This happens because such a user does not have the "Write" permission for the "Veritas" folder in the installation directory (typically, C:\Program Files\Veritas). As a workaround, an OS administrator can grant the "Write" permission to that user using the Security tab of the "Veritas" folder's properties.
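    Assuming the default installation path and a hypothetical account name guestuser, the same permission could instead be granted from an elevated command prompt, for example:

    ```shell
    rem Grant Write (W) on the Veritas folder, inherited by subfolders (CI)
    rem and files (OI). "guestuser" is a placeholder account name.
    icacls "C:\Program Files\Veritas" /grant "guestuser:(OI)(CI)W"
    ```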

  • For a Replicated Data Cluster, install only in a single domain.

  • Route each private NIC through a separate hub or switch to avoid single points of failure.

  • Verify that your DNS server is configured for secure dynamic updates. For the Forward and Reverse Lookup Zones, set the Dynamic updates option to "Secure only". (DNS > Zone Properties > General tab)
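    For example, with the built-in dnscmd utility, where example.com and the reverse zone name are placeholders (the /AllowUpdate value 2 means secure dynamic updates only):

    ```shell
    rem Restrict the forward and reverse lookup zones to secure dynamic updates
    dnscmd /Config example.com /AllowUpdate 2
    dnscmd /Config 1.168.192.in-addr.arpa /AllowUpdate 2
    ```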

  • This is applicable for a Replicated Data Cluster configuration. You can configure single-node clusters as the primary and secondary zones. However, if using a shared storage configuration, you must create the disk groups as clustered disk groups. If you cannot create a clustered disk group due to the unavailability of disks on a shared bus, use the vxclus UseSystemBus ON command.
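    A sketch of the vxclus workaround; reverting the setting with OFF afterward is an assumption about typical usage, not a step this guide mandates:

    ```shell
    # Allow clustered disk group creation even though the disks
    # are not on a shared bus (run on the node creating the group)
    vxclus UseSystemBus ON

    # ... create the clustered disk group using VEA or the CLI ...

    # Optionally restore the default behavior afterward
    vxclus UseSystemBus OFF
    ```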

  • To configure a Replicated Data Cluster (RDC), you need to create virtual IP addresses for the following:

    • Application virtual server; this IP address should be the same on all nodes at the primary and secondary zones

    • Replication IP address for the primary zone

    • Replication IP address for the secondary zone

    Before you start deploying your environment, you should have these IP addresses available.