InfoScale™ 9.0 Storage Foundation and High Availability Solutions HA and DR Solutions Guide for Microsoft SQL Server - Windows

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Windows
  1. Section I. Getting started with Storage Foundation and High Availability Solutions for SQL Server
    1. Introducing SFW HA and the VCS agents for SQL Server
      1. About the Veritas InfoScale solutions for monitoring SQL Server
      2. How application availability is achieved in a physical environment
      3. How is application availability achieved in a VMware virtual environment
        1. How the VMwareDisks agent communicates with the vCenter Server instead of the ESX/ESXi host
        2. Typical VCS cluster configuration in a virtual environment
      4. Managing storage using VMware virtual disks
      5. Modifying the ESXDetails attribute
      6. How VCS monitors storage components
        1. Shared storage - if you use NetApp filers
        2. Shared storage - if you use SFW to manage cluster dynamic disk groups
        3. Shared storage - if you use Windows LDM to manage shared disks
        4. Non-shared storage - if you use SFW to manage dynamic disk groups
        5. Non-shared storage - if you use Windows LDM to manage local disks
        6. Non-shared storage - if you use VMware storage
      7. What must be protected in an SQL Server environment
      8. About the VCS agents for SQL Server
      9. About the VCS agent for SQL Server Database Engine
      10. About the VCS agent for SQL Server FILESTREAM
      11. About the VCS GenericService agent for SQL Server Agent service and Analysis service
      12. About the agent for MSDTC service
      13. About the monitoring options
      14. Typical SQL Server configuration in a VCS cluster
      15. Typical SQL Server disaster recovery configuration
      16. SQL Server sample dependency graph
      17. MSDTC sample dependency graph
    2. Deployment scenarios for SQL Server
      1. Workflows in the Solutions Configuration Center
      2. Reviewing the active-passive HA configuration
        1. Sample Active-Passive configuration
      3. Reviewing the prerequisites for a standalone SQL Server
      4. Reviewing a standalone SQL Server configuration
        1. Sample standalone SQL Server configuration
      5. Reviewing the MSDTC configuration
      6. VCS campus cluster configuration
      7. Reviewing the campus cluster configuration
        1. Campus cluster failover using the ForceImport attribute
        2. Reinstating faulted hardware in a campus cluster
      8. VCS Replicated Data Cluster configuration
      9. Reviewing the Replicated Data Cluster configuration
        1. Sample replicated data cluster configuration
      10. About setting up a Replicated Data Cluster configuration
        1. About setting up replication
        2. About configuring and migrating the service group
      11. Disaster recovery configuration
        1. DR configuration tasks: Primary site
        2. DR configuration tasks: Secondary site
        3. Supported disaster recovery configurations for service group dependencies
      12. Reviewing the disaster recovery configuration
        1. Sample disaster recovery configuration
      13. Notes and recommendations for cluster and application configuration
        1. IPv6 support
        2. IP address requirements for an Active-Passive configuration
        3. IP address requirements for a disaster recovery configuration
      14. Configuring the storage hardware and network
      15. Configuring disk groups and volumes for SQL Server
        1. About disk groups and volumes
        2. Prerequisites for configuring disk groups and volumes
        3. Considerations for a fast failover configuration
        4. Considerations for converting existing shared storage to cluster disk groups and volumes
        5. Considerations when creating disks and volumes for campus clusters
        6. Considerations for volumes for a Volume Replicator configuration
        7. Considerations for disk groups and volumes for multiple instances
        8. Sample disk group and volume configuration
        9. MSDTC sample disk group and volume configuration
        10. Viewing the available disk storage
        11. Creating a dynamic disk group
        12. Adding disks to campus cluster sites
        13. Creating volumes for high availability clusters
        14. Creating volumes for campus clusters
      16. About managing disk groups and volumes
        1. Importing a disk group and mounting a volume
        2. Unmounting a volume and deporting a disk group
        3. Adding drive letters to mount the volumes
      17. Configuring the cluster using the Cluster Configuration Wizard
        1. Configuring notification
        2. Configuring Wide-Area Connector process for global clusters
        3. Adding nodes to a cluster
    3. Installing SQL Server
      1. About installing and configuring SQL Server
      2. About installing multiple SQL Server instances
      3. Verifying that the SQL Server databases and logs are moved to shared storage
      4. About installing SQL Server for high availability configuration
      5. About installing SQL Server on the first system
      6. About installing SQL Server on additional systems
      7. Creating a SQL Server user-defined database
      8. Completing configuration steps in SQL Server
        1. Moving the tempdb database if using Volume Replicator for disaster recovery
        2. Assigning ports for multiple SQL Server instances
        3. Enabling IPv6 support for the SQL Server Analysis Service
  2. Section II. Configuring SQL Server in a physical environment
    1. Configuring SQL Server for failover
      1. Tasks for configuring a new server for high availability
      2. Tasks for configuring an existing server for high availability
      3. About configuring the SQL Server service group
        1. Service group requirements for Active-Active configurations
        2. Prerequisites for configuring the SQL Server service group
        3. Creating the SQL Server service group
      4. Configuring the service group in a non-shared storage environment
        1. Assigning privileges to the existing SQL Server databases and logs
        2. Enabling fast failover for disk groups (optional)
      5. Verifying the SQL Server cluster configuration
      6. About the modifications required for tagged VLAN or teamed network
      7. Tasks for configuring MSDTC for high availability
      8. Configuring an MSDTC Server service group
        1. Prerequisites for MSDTC configuration
        2. Creating an MSDTC Server service group
      9. About configuring the MSDTC client for SQL Server
      10. About the VCS Application Manager utility
      11. Viewing DTC transaction information
      12. Modifying a SQL Server service group to add VMDg and MountV resources
      13. Determining additional steps needed
    2. Configuring campus clusters for SQL Server
      1. Tasks for configuring campus clusters
      2. Modifying the IP resource in the SQL Server service group
      3. Verifying the campus cluster: Switching the service group
      4. Setting the ForceImport attribute to 1 after a site failure
    3. Configuring Replicated Data Clusters for SQL Server
      1. Tasks for configuring Replicated Data Clusters
      2. Creating the primary system zone for the application service group
      3. Creating a parallel environment in the secondary zone
      4. Setting up security for Volume Replicator
      5. Setting up the Replicated Data Sets (RDS)
        1. Prerequisites for setting up the RDS for the primary and secondary zones
        2. Creating the Replicated Data Sets with the wizard
      6. Configuring a RVG service group for replication
        1. Creating the RVG service group
        2. Configuring the resources in the RVG service group for RDC replication
          1. Configuring the IP and NIC resources
          2. Configuring the VMDg or VMNSDg resources for the disk groups
            1. Modifying the DGGuid attribute for the new disk group resource in the RVG service group
            2. Configuring the VMDg or VMNSDg resources for the disk group for the user-defined database
            3. Adding the Volume Replicator RVG resources for the disk groups
            4. Linking the Volume Replicator RVG resources to establish dependencies
            5. Deleting the VMDg or VMNSDg resource from the SQL Server service group
        3. Configuring the RVG Primary resources
          1. Creating the RVG Primary resources
          2. Linking the RVG Primary resources to establish dependencies
          3. Bringing the RVG Primary resources online
        4. Configuring the primary system zone for the RVG service group
      7. Setting a dependency between the service groups
      8. Adding the nodes from the secondary zone to the RDC
        1. Adding the nodes from the secondary zone to the RVG service group
        2. Configuring secondary zone nodes in the RVG service group
        3. Configuring the RVG service group NIC resource for fail over (VMNSDg only)
        4. Configuring the RVG service group IP resource for failover
        5. Configuring the RVG service group VMNSDg resources for fail over
        6. Adding nodes from the secondary zone to the SQL Server service group
        7. Configuring the zones in the SQL Server service group
        8. Configuring the application service group IP resource for fail over (VMNSDg only)
        9. Configuring the application service group NIC resource for fail over (VMNSDg only)
      9. Verifying the RDC configuration
        1. Bringing the service group online
        2. Switching online nodes
      10. Additional instructions for GCO disaster recovery
    4. Configuring disaster recovery for SQL Server
      1. Tasks for configuring disaster recovery for SQL Server
      2. Tasks for setting up DR in a non-shared storage environment
      3. Guidelines for installing Arctera InfoScale Enterprise and configuring the cluster on the secondary site
      4. Verifying your primary site configuration
      5. Setting up your replication environment
        1. Requirements for EMC SRDF array-based hardware replication
          1. Software requirements for configuring EMC SRDF
          2. Replication requirements for EMC SRDF
        2. Requirements for Hitachi TrueCopy array-based hardware replication
          1. Software requirements for Hitachi TrueCopy
          2. Replication requirements for Hitachi TrueCopy
      6. Assigning user privileges (secure clusters only)
      7. About configuring disaster recovery with the DR wizard
        1. Configuring disaster recovery with the DR wizard
      8. Cloning the storage on the secondary site using the DR wizard (Volume Replicator replication option)
      9. Creating temporary storage on the secondary site using the DR wizard (array-based replication)
      10. Installing and configuring SQL Server on the secondary site
      11. Cloning the service group configuration from the primary site to the secondary site
      12. Configuring the SQL Server service group in a non-shared storage environment
      13. Configuring replication and global clustering
        1. Configuring Volume Replicator replication and global clustering
        2. Configuring EMC SRDF replication and global clustering
          1. Optional settings for EMC SRDF
        3. Configuring Hitachi TrueCopy replication and global clustering
          1. Optional settings for HTC
        4. Configuring global clustering only
      14. Creating the replicated data sets (RDS) for Volume Replicator replication
      15. Creating the Volume Replicator RVG service group for replication
      16. Configuring the global cluster option for wide-area failover
        1. Linking clusters: Adding a remote cluster to a local cluster
        2. Converting a local service group to a global service group
        3. Bringing a global service group online
      17. Verifying the disaster recovery configuration
      18. Adding multiple DR sites (optional)
      19. Recovery procedures for service group dependencies
      20. Configuring DR manually without the DR wizard
    5. Testing fault readiness by running a fire drill
      1. About disaster recovery fire drills
      2. About the Fire Drill Wizard
        1. About Fire Drill Wizard general operations
        2. About Fire Drill Wizard operations in a Volume Replicator environment
          1. Preparing the fire drill configuration
          2. About running the fire drill
          3. About restoring the fire drill configuration
          4. About deleting the fire drill configuration
        3. About Fire Drill Wizard operations in a Hitachi TrueCopy or EMC SRDF environment
      3. About post-fire drill scripts
      4. Tasks for configuring and running fire drills
      5. Prerequisites for a fire drill
        1. Prerequisites for a fire drill in a Volume Replicator environment
        2. Prerequisites for a fire drill in a Hitachi TrueCopy environment
        3. Prerequisites for a fire drill in an EMC SRDF environment
      6. Preparing the fire drill configuration
        1. System Selection panel details
        2. Service Group Selection panel details
        3. Secondary System Selection panel details
        4. Fire Drill Service Group Settings panel details
        5. Disk Selection panel details
        6. Hitachi TrueCopy Path Information panel details
        7. HTCSnap Resource Configuration panel details
        8. SRDFSnap Resource Configuration panel details
        9. Fire Drill Preparation panel details
      7. Running a fire drill
      8. Re-creating a fire drill configuration that has changed
      9. Restoring the fire drill system to a prepared state
      10. Deleting the fire drill configuration
        1. Fire Drill Deletion panel details
      11. Considerations for switching over fire drill service groups

Configuring the cluster using the Cluster Configuration Wizard

After installing the software, set up the components required to run Cluster Server. The VCS Cluster Configuration Wizard (VCW) sets up the cluster infrastructure, including LLT and GAB, configures the user account for the VCS Helper service, and provides an option for configuring the VCS Authentication Service in the cluster. The wizard also configures the ClusterService group, which contains resources for notification and global clusters (GCO). You can also use VCW to modify or delete cluster configurations.

Note:

After configuring the cluster, you must not change the names of the nodes that are part of the cluster. If you need to change a node name, run VCW to remove the node from the cluster, rename the system, and then run VCW again to add the system back to the cluster.

Note the following prerequisites before you proceed:

  • The required network adapters (NICs) and SCSI controllers are installed and connected to each system.

    Arctera recommends the following actions for network adapters:

    • Disable the Ethernet auto-negotiation options on the private NICs to prevent the following:

      • Loss of heartbeats on the private networks

      • VCS from mistakenly declaring a system as offline

      Contact the NIC manufacturer for details on this process.

    • Remove TCP/IP from the private NICs to lower system overhead.

  • Verify that the public network adapters on each node use static IP addresses (DHCP is not supported) and that name resolution is configured for each node. (See the verification sketch after this list.)

  • Arctera recommends that you use three network adapters (two NICs exclusively for the VCS private network and one for the public network) per system. You can implement the second private link as a low-priority link over a public interface. Route each private NIC through a separate hub or switch to avoid single points of failure.

    Note:

    If you wish to use Windows NIC teaming, you must select the Static Teaming mode. Only the Static Teaming mode is currently supported.

  • Use independent hubs or switches for each VCS communication network (GAB and LLT). You can use crossover Ethernet cables for two-node clusters. GAB supports hub-based or switch-based network paths, as well as direct network links between the two systems in a two-node cluster.

  • Verify the DNS settings for all systems on which the application is installed and ensure that the public adapter is the first adapter in the Connections list.

    When enabling DNS name resolution, make sure that you use the public network adapters, and not those configured for the VCS private network.

  • The logged-on user must have local Administrator privileges on the system where you run the wizard. The user account must be a domain user account.

  • The logged-on user must have administrative access to all systems selected for cluster operations. Add the domain user account to the local Administrators group of each system.

  • If you plan to create a new user account for the VCS Helper service, the logged on user must have Domain Administrator privileges or must belong to the Domain Account Operators group.

  • When configuring a user account for the Veritas VCS Helper service, make sure that the user account is a domain user. The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.

  • Make sure that the VCS Helper service domain user account has the "Add workstations to domain" privilege enabled in Active Directory.

  • Verify that each system can access the storage devices and each system recognizes the attached shared disk.

    Use Windows Disk Management on each system to verify that the attached shared LUNs (virtual disks) are visible. (The PowerShell sketch after this list includes an equivalent check.)

  • If you plan to set up a disaster recovery (DR) environment, you must configure the wide-area connector process for global clusters.

  • If you are setting up a Replicated Data Cluster configuration, add only the systems in the primary zone (zone 0) to the cluster at this time.
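
You can spot-check several of these prerequisites from an elevated PowerShell prompt on each system before you run the wizard. The following is a minimal sketch using standard Windows cmdlets; SYSTEM1 and SYSTEM2 are placeholder node names:

    # Public NICs must use static addresses; list any adapters still using DHCP.
    Get-NetIPInterface -AddressFamily IPv4 |
        Where-Object { $_.Dhcp -eq 'Enabled' } |
        Format-Table InterfaceAlias, InterfaceIndex, Dhcp

    # Confirm that name resolution works for each cluster node.
    Resolve-DnsName -Name SYSTEM1
    Resolve-DnsName -Name SYSTEM2

    # If you use Windows NIC teaming, confirm that the mode is Static Teaming.
    Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm

    # Confirm that the attached shared LUNs (virtual disks) are visible.
    Get-Disk | Format-Table Number, FriendlyName, OperationalStatus, Size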

To configure a VCS cluster using the wizard

  1. Start the VCS Cluster Configuration Wizard from the Apps menu on the Start screen.
  2. Read the information on the Welcome panel and click Next.
  3. On the Configuration Options panel, click Cluster Operations and click Next.
  4. On the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.

    To discover information about all systems and users in the domain, do the following:

    • Clear Specify systems and users manually.

    • Click Next.

      Proceed to step 8.

    To specify systems and user names manually (recommended for large domains), do the following:

    • Select Specify systems and users manually.

      Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting appropriate check boxes.

    • Click Next.

      If you chose to retrieve the list of systems, proceed to step 6. Otherwise, proceed to the next step.

  5. On the System Selection panel, type the name of each system to be added, click Add, and then click Next.

    Do not specify systems that are part of another cluster.

    Proceed to step 8.

  6. On the System Selection panel, specify the systems for the cluster and then click Next.

    Do not select systems that are part of another cluster.

    Either type the name of the system and click Add, or select the system in the Domain Systems list and then click the > (right-arrow) button, to add it to the Selected Systems list.

  7. The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier. Review the status and then click Next.

    Select the system to see the validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again.

    A system can be rejected for any of the following reasons:

    • The system does not respond to ping.

    • WMI access is disabled on the system.

    • The wizard is unable to retrieve the system architecture or operating system.

    • The product is either not installed on the system, or there is a version mismatch.
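
    To check in advance whether a system is likely to pass validation, you can test basic reachability and remote WMI access from the system where you run the wizard. A minimal sketch; SYSTEM1 is a placeholder node name:

      Test-Connection -ComputerName SYSTEM1 -Count 2     # basic reachability (ping)
      Get-WmiObject -Class Win32_OperatingSystem -ComputerName SYSTEM1 |
          Select-Object Caption, OSArchitecture          # succeeds only if remote WMI access is enabled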

  8. On the Cluster Configuration Options panel, click Create New Cluster and then click Next.
  9. On the Cluster Details panel, specify the details for the cluster and then click Next.

    Specify the cluster details as follows:

    Cluster Name

    Type a name for the new cluster. Arctera recommends a maximum length of 32 characters for the cluster name.

    Cluster ID

    Select a cluster ID from the suggested cluster IDs in the drop-down list, or type a unique ID for the cluster. The cluster ID can be any number from 0 to 65535.

    Note:

    If you chose to specify systems and users manually in step 4 or if you share a private network between more than one domain, make sure that the cluster ID is unique.

    Operating System

    From the drop-down list, select the operating system.

    All the systems in the cluster must have the same operating system and architecture.

    Available Systems

    Select the systems that you wish to configure in the cluster.

    Check the Select all systems check box to select all the systems simultaneously.

    The wizard discovers the NICs on the selected systems. For single-node clusters with the required number of NICs, the wizard prompts you to configure a private link heartbeat. In the dialog box, click Yes to configure a private link heartbeat.

  10. The wizard validates the selected systems for cluster membership. After the systems are validated, click Next.

    If a system is not validated, review the message associated with the failure and restart the wizard after rectifying the problem.

    If you chose to configure a private link heartbeat in step 9, proceed to the next step. Otherwise, proceed to step 12.

  11. On the Private Network Configuration panel, configure the VCS private network and then click Next. You can configure the VCS private network either over Ethernet or over the User Datagram Protocol (UDP) layer, using an IPv4 or IPv6 network.

    Do one of the following:

    • To configure the VCS private network over Ethernet, complete the following steps:

    • Select Configure LLT over Ethernet.

    • Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links.

      Arctera recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one of the NICs and use the low-priority NIC for both public and private communication.

    • If there are only two NICs on a selected system, Arctera recommends that you lower the priority of at least one NIC that will be used for private as well as public network communication.

      To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.

    • If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N" where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC, formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Arctera recommends that you do not select teamed NICs for the private network.

      The wizard configures the LLT service (over Ethernet) on the selected network adapters.

    • To configure the VCS private network over the User Datagram Protocol (UDP) layer, complete the following steps:

    • Select Configure LLT over UDP on IPv4 network or Configure LLT over UDP on IPv6 network depending on the IP protocol that you wish to use.

      The IPv6 option is disabled if the network does not support IPv6.

    • Select the check boxes next to the NICs to be assigned to the private network. You can assign a maximum of eight network links. Arctera recommends reserving two NICs exclusively for the VCS private network.

    • For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In case of IPv4, each IP address can be in a different subnet.

      The IP address is used for the VCS private communication over the specified UDP port.

    • Specify a unique UDP port for each of the links. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. Click OK.

      For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.

      The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.
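
    Whichever mode you choose, you can verify the private network links after the wizard finishes the configuration (step 14). The following sketch assumes that the lltstat and gabconfig utilities are installed in the product's bin directory on the cluster nodes; treat the commands as illustrative if your installation does not include them:

      lltstat -nvv     # lists each node and the state of each private network link
      gabconfig -a     # shows GAB port membership; all cluster nodes should appear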

  12. On the VCS Helper Service User Account panel, specify the name of a domain user for the VCS Helper service.

    The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.

    Specify the domain user details as follows:

    • To specify an existing user, do one of the following:

      • Click Existing user and select a user name from the drop-down list.

      • If you chose not to retrieve the list of users in step 4, type the user name in the Specify User field and then click Next.

    • To specify a new user, click New user and type a valid user name in the Create New User field and then click Next.

      Do not append the domain name to the user name; do not type the user name as Domain\user or user@domain.

    • In the Password dialog box, type the password for the specified user and click OK, and then click Next.
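
    If you would rather pre-create the account than have the wizard create a new user, a domain administrator can do so with the ActiveDirectory PowerShell module. This is an illustrative sketch; vcshelper is a placeholder account name, and you must still grant the account the "Add workstations to domain" privilege noted in the prerequisites:

      # Requires the ActiveDirectory module and Domain Administrator privileges.
      Import-Module ActiveDirectory
      New-ADUser -Name "vcshelper" -SamAccountName "vcshelper" `
          -AccountPassword (Read-Host -AsSecureString "Enter password") `
          -Enabled $true -PasswordNeverExpires $true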

  13. On the Configure Security Service Option panel, specify security options for the cluster communications and then click Next.

    Do one of the following:

    • To use VCS cluster user privileges, click Use VCS User Privileges and then type a user name and password.

      The wizard configures this user as a VCS Cluster Administrator. In this mode, communication between cluster nodes and clients, including Cluster Manager (Java Console), occurs using the encrypted VCS cluster administrator credentials. The wizard uses the VCSEncrypt utility to encrypt the user password.

      The default user name for the VCS administrator is admin and the password is password. Both are case-sensitive. You can accept the default user name and password for the VCS administrator account or type a new name and password.

      Arctera recommends that you specify a new user name and password.

    • To use the single sign-on feature, click Use Single Sign-on.

      In this mode, the VCS Authentication Service is used to secure communication between cluster nodes and clients by using digital certificates for authentication and SSL to encrypt communication over the public network. VCS uses SSL encryption and platform-based authentication. The Veritas High Availability Engine (HAD) and Veritas Command Server run in secure mode.

      The wizard configures all the cluster nodes as root brokers (RB) and authentication brokers (AB). Authentication brokers serve as intermediate registration and certification authorities. Authentication brokers have certificates signed by the root. These brokers can authenticate clients such as users and services. The wizard creates a copy of the certificates on all the cluster nodes.
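
    If you need to add or modify VCS user accounts after the cluster is configured, you can do so from the command line instead of rerunning the wizard. A minimal sketch, assuming the standard ha commands in the product's bin directory; opsadmin is a placeholder user name:

      haconf -makerw                            # make the cluster configuration writable
      hauser -add opsadmin -priv Administrator  # add a cluster administrator; prompts for a password
      haconf -dump -makero                      # save the configuration and make it read-only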

  14. Review the summary information on the Summary panel, and click Configure.

    The wizard configures the VCS private network. If the selected systems have LLT or GAB configuration files, the wizard displays an informational dialog box before overwriting the files. In the dialog box, click OK to overwrite the files. Otherwise, click Cancel, exit the wizard, move the existing files to a different location, and rerun the wizard.

    The wizard starts running commands to configure VCS services. If an operation fails, click View configuration log file to see the log.

  15. On the Completing Cluster Configuration panel, click Next to configure the ClusterService group; this group is required to set up components for notification and for global clusters.

    To configure the ClusterService group later, click Finish.

    At this stage, the wizard has collected the information required to set up the cluster configuration. After the wizard completes its operations, with or without the ClusterService group components, the cluster is ready to host application service groups. The wizard also starts the VCS engine (HAD) and the Veritas Command Server at this stage.
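
    You can optionally confirm that the engine is up before you continue. A quick sketch, assuming the ha commands are available in the product's bin directory:

      hasys -state        # each cluster node should report RUNNING
      hastatus -summary   # summarizes the state of the systems and any configured groups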

  16. On the Cluster Service Components panel, select the components to be configured in the ClusterService group and then click Next.

    Do the following:

    • Check the Notifier Option check box to configure notification of important events to designated recipients.

    • Check the GCO Option check box to configure the wide-area connector (WAC) process for global clusters. The WAC process is required for inter-cluster communication.

      Configure the GCO Option using this wizard only if you are configuring a Disaster Recovery (DR) environment and are not using the Disaster Recovery wizard.

      You can configure the GCO Option using the DR wizard. The Disaster Recovery chapters in the application solutions guides discuss how to use the Disaster Recovery wizard to configure the GCO option.
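
    After the wizard completes, you can optionally confirm that the ClusterService group and its components were created and brought online. A sketch using the standard ha commands; the resources listed vary with the options you selected:

      hagrp -state ClusterService        # the group should be ONLINE on one node
      hares -list Group=ClusterService   # lists the configured resources, such as the notifier and wac processes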