Cluster Server 8.0 Implementation Guide for Microsoft SQL Server - Windows

Last Published:
Product(s): InfoScale & Storage Foundation (8.0)
Platform: Windows
  1. Section I. Introducing Veritas InfoScale solutions for application high availability
    1. Understanding the InfoScale solutions for application high availability
      1. About the Veritas InfoScale solutions for monitoring SQL Server
      2. About the VCS agents for SQL Server
        1. About the VCS agent for SQL Server Database Engine
        2. About the VCS agent for SQL Server FILESTREAM
        3. About the VCS GenericService agent for SQL Server Agent service and Analysis service
        4. About the agent for MSDTC service
        5. About the monitoring options
      3. How VCS monitors storage components
        1. Shared storage - if you use NetApp filers
        2. Shared storage - if you use SFW to manage cluster dynamic disk groups
        3. Shared storage - if you use Windows LDM to manage shared disks
        4. Non-shared storage - if you use SFW to manage dynamic disk groups
        5. Non-shared storage - if you use Windows LDM to manage local disks
        6. Non-shared storage - if you use VMware storage
      4. How application availability is achieved in a physical environment
        1. Typical SQL Server cluster configuration using shared storage
        2. Typical SQL Server disaster recovery cluster configuration
        3. SQL Server sample dependency graph
        4. MSDTC sample dependency graph
      5. How is application availability achieved in a VMware virtual environment
        1. How the VMwareDisks agent communicates with the vCenter Server instead of the ESX/ESXi host
        2. Typical VCS cluster configuration in a virtual environment
    2. Managing storage and installing the VCS agents
      1. Managing storage using NetApp filer
        1. Connecting virtual disks to the cluster node
        2. Disconnecting virtual disks from the cluster nodes
      2. Managing storage using Windows Logical Disk Manager
        1. Reserving disks (if you use Windows LDM)
        2. Creating volumes (if you use Windows LDM)
        3. Mounting volumes (if you use Windows LDM)
        4. Unassigning a drive letter
        5. Releasing disks (if you use Windows LDM)
      3. Managing storage using VMware virtual disks
      4. About installing the VCS agents
    3. Installing SQL Server
      1. About installing SQL Server for a high availability (HA) configuration
      2. Configuring Microsoft iSCSI initiator
      3. About installing SQL Server on the first system
      4. About installing SQL Server on additional systems
      5. Assigning ports for multiple SQL Server instances
      6. Enabling IPv6 support for the SQL Server Analysis Service
  2. Section II. Configuring SQL Server in a physical environment
    1. Overview
      1. About configuring SQL Server in physical environment
    2. Configuring the VCS cluster
      1. Configuring the cluster using the Cluster Configuration Wizard
      2. Configuring notification
      3. Configuring Wide-Area Connector process for global clusters
    3. Configuring the SQL Server service group
      1. About configuring the SQL Server service group
      2. Before configuring the SQL Server service group
      3. Configuring a SQL Server service group using the wizard
        1. Configuring detail monitoring for a SQL Server instance
        2. Assigning privileges to the existing SQL Server databases and logs
      4. Configuring the service group in a non-shared storage environment
      5. Running SnapManager for SQL Server
      6. About the modifications required for tagged VLAN or teamed network
      7. Making SQL Server user-defined databases highly available
        1. Create volumes or LUNs for SQL Server user-defined databases
        2. Creating SQL Server databases
        3. Adding storage agent resources to the SQL service group
      8. Verifying the service group configuration
        1. Bringing the service group online
        2. Taking the service group offline
        3. Switching the service group
      9. Administering a SQL Server service group
        1. Modifying a SQL service group configuration
        2. Deleting a SQL service group
    4. Configuring an MSDTC service group
      1. About configuring the MSDTC service group
      2. Typical MSDTC service group configuration using shared storage
      3. Before configuring the MSDTC service group
      4. Creating an MSDTC service group
      5. About configuring an MSDTC client
      6. Configuring an MSDTC client
      7. Verifying the installation
    5. Configuring the standalone SQL Server
      1. Typical high availability configuration for a standalone SQL Server setup
        1. Sample configuration
      2. Configuring a standalone SQL Server for high availability
        1. Moving the existing SQL Server data files and user databases
    6. Configuring an Active/Active cluster
      1. About running SQL Server in an active-active clustered environment
        1. Sample configuration
      2. Setting up the Active/Active cluster
    7. Configuring a disaster recovery setup
      1. Setting up the disaster recovery cluster
        1. Why implement a disaster recovery solution
        2. Understanding replication
        3. What needs to be protected in a SQL Server environment
      2. Configuring a disaster recovery set up for SQL Server
        1. Configuring replication using NetApp SnapMirror
        2. Configuring SnapMirror resources at the primary site
      3. Configuring the Global Cluster Option for wide-area failover
        1. Prerequisites
        2. Linking clusters: Adding a remote cluster to a local cluster
        3. Converting a local service group to a global service group
        4. Bringing a global service group online
      4. Administering global service groups
        1. Taking a remote global service group offline
        2. Switching a remote service group
        3. Deleting a remote cluster
  3. Section III. Configuring SQL Server in a VMware environment
    1. Configuring application monitoring using the Veritas High Availability solution
      1. Deploying the Veritas High Availability solution for configuring application monitoring
      2. Notes and recommendations
        1. Assigning privileges for non-administrator ESX/ESXi user account
          1. Creating a role
          2. Integrating with Active Directory or local authentication
          3. Creating a new user
          4. Adding a user to the role
      3. Configuring application monitoring
        1. Configuring the VCS cluster
        2. Configuring the application
      4. Modifying the ESXDetails attribute
    2. Administering application monitoring
      1. About the various interfaces available for performing application monitoring tasks
      2. Administering application monitoring using the Veritas High Availability tab
        1. Understanding the Veritas High Availability tab work area
        2. To view the status of configured applications
        3. To configure or unconfigure application monitoring
        4. To start or stop applications
        5. To suspend or resume application monitoring
        6. To switch an application to another system
        7. To add or remove a failover system
        8. To clear Fault state
        9. To resolve a held-up operation
        10. To determine application state
        11. To remove all monitoring configurations
        12. To remove VCS cluster configurations
      3. Administering application monitoring settings
      4. Administering application availability using Veritas High Availability dashboard
        1. Understanding the dashboard work area
          1. Aggregate status bar
          2. ESX cluster/host table
          3. Taskbar
          4. Filters menu
          5. Application table
        2. Monitoring applications across a data center
        3. Monitoring applications across an ESX cluster
        4. Searching for application instances by using filters
        5. Selecting multiple applications for batch operations
        6. Starting an application using the dashboard
        7. Stopping an application by using the dashboard
        8. Entering an application into maintenance mode
        9. Bringing an application out of maintenance mode
        10. Switching an application
  4. Section IV. Appendixes
    1. Appendix A. Troubleshooting
      1. VCS logging
      2. VCS Cluster Configuration Wizard (VCW) logs
      3. VCWsilent logs
      4. NetApp agents error messages
      5. Error and warning messages from VCS agent for SQL Server
        1. Messages from the VCS agent for SQL Server Database Engine
        2. Messages from the VCS agent for SQL Server FILESTREAM
        3. Messages from the VCS agent for SQL Server Agent service and Analysis service
        4. SQL Server Analysis service (MSOLAP) service fails to come online with "invalid context of address" error
        5. Messages from the VCS agent for MSDTC
      6. Troubleshooting application monitoring configuration issues
        1. Running the 'hastop -all' command detaches virtual disks
        2. Validation may fail when you add a failover system
        3. Adding a failover system may fail if you configure a cluster with communication links over UDP
      7. Troubleshooting Veritas High Availability view issues
        1. Veritas High Availability tab not visible from a cluster node
        2. Veritas High Availability tab does not display the application monitoring status
        3. Veritas High Availability tab may freeze due to special characters in application display name
        4. Veritas High Availability view may fail to load or refresh
        5. Operating system commands to unmount resource may fail
    2. Appendix B. Using the virtual MMC viewer
      1. About using the virtual MMC viewer
      2. Viewing DTC transaction information

Configuring the cluster using the Cluster Configuration Wizard

After installing the software, set up the components required to run Cluster Server. The VCS Cluster Configuration Wizard (VCW) sets up the cluster infrastructure, including LLT and GAB, and configures the user account for the VCS Helper service. It also provides an option for configuring the VCS Authentication Service in the cluster. In addition, the wizard configures the ClusterService group, which contains resources for notification and global clusters (GCO). You can also use VCW to modify or delete existing cluster configurations.

Note:

After configuring the cluster you must not change the names of the nodes that are part of the cluster. If you wish to change a node name, run VCW to remove the node from the cluster, rename the system, and then run VCW again to add that system to the cluster.
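Before you modify or delete an existing configuration (for example, to remove and later re-add a renamed node), it can help to confirm the current cluster name and member systems from the command line. The following is a minimal sketch, assuming the InfoScale command-line utilities (hasys, haclus) are in the PATH on a cluster node:

  REM List the systems that are currently members of the cluster
  hasys -list

  REM Display the configured cluster name
  haclus -value ClusterName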

Note the following prerequisites before you proceed:

  • The required network adapters (NICs) and SCSI controllers are installed and connected to each system.

    Veritas recommends the following actions for network adapters:

    • Disable the Ethernet auto-negotiation options on the private NICs to prevent:

      • Loss of heartbeats on the private networks

      • VCS from mistakenly declaring a system as offline

      Contact the NIC manufacturer for details on this process.

    • Remove TCP/IP from the private NICs to lower system overhead.

  • Verify that the public network adapters on each node use static IP addresses (DHCP is not supported) and that name resolution is configured for each node. (A quick way to spot-check these settings from a command prompt is shown after this list.)

  • Veritas recommends that you use three network adapters (two NICs exclusively for the VCS private network and one for the public network) per system. You can implement the second private link as a low-priority link over a public interface. Route each private NIC through a separate hub or switch to avoid single points of failure. Veritas recommends that you disable TCP/IP from private NICs to lower system overhead.

    Note:

    If you wish to use Windows NIC teaming, you must select the Static Teaming mode; no other teaming mode is currently supported.

  • Use independent hubs or switches for each VCS communication network (GAB and LLT). You can use cross-over Ethernet cables for two-node clusters. GAB supports hub-based or switch-based network paths, as well as two-system clusters with direct network links.

  • Verify the DNS settings for all systems on which the application is installed and ensure that the public adapter is the first adapter in the Connections list.

    When enabling DNS name resolution, make sure that you use the public network adapters, and not those configured for the VCS private network.

  • The logged-on user must have local Administrator privileges on the system where you run the wizard. The user account must be a domain user account.

  • The logged-on user must have administrative access to all systems selected for cluster operations. Add the domain user account to the local Administrators group of each system.

  • If you plan to create a new user account for the VCS Helper service, the logged-on user must have Domain Administrator privileges or must belong to the Domain Account Operators group.

  • When configuring a user account for the Veritas VCS Helper service, make sure that the user account is a domain user. The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.

  • Make sure the VCS Helper service domain user account has "Add workstations to domain" privilege enabled in the Active Directory.

  • Verify that each system can access the storage devices and each system recognizes the attached shared disk.

    Use Windows Disk Management on each system to verify that the attached shared LUNs (virtual disks) are visible.

  • If you plan to set up a disaster recovery (DR) environment, you must configure the wide-area connector process for global clusters.

  • If you are setting up a Replicated Data Cluster configuration, add only the systems in the primary zone (zone 0) to the cluster at this time.
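Several of these prerequisites, such as static IP addressing, name resolution, and basic reachability between the systems, can be spot-checked from a command prompt before you start the wizard. This is an informal sketch using standard Windows tools; the system name SYSTEM2 is a placeholder for your own node names:

  REM Confirm the public adapter uses a static address (DHCP Enabled should read No)
  ipconfig /all

  REM Confirm name resolution for each cluster system
  nslookup SYSTEM2

  REM Confirm basic reachability; the wizard rejects systems that are not pingable
  ping SYSTEM2

  REM Optionally confirm that remote WMI access works; the wizard also rejects
  REM systems on which WMI access is disabled
  wmic /node:SYSTEM2 os get caption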

To configure a VCS cluster using the wizard

  1. Start the VCS Cluster Configuration Wizard from the Apps menu on the Start screen.
  2. Read the information on the Welcome panel and click Next.
  3. On the Configuration Options panel, click Cluster Operations and click Next.
  4. On the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.

    To discover information about all systems and users in the domain, do the following:

    • Clear Specify systems and users manually.

    • Click Next.

      Proceed to step 8.

    To specify systems and user names manually (recommended for large domains), do the following:

    • Select Specify systems and users manually.

      Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting appropriate check boxes.

    • Click Next.

      If you chose to retrieve the list of systems, proceed to step 6. Otherwise, proceed to the next step.

  5. On the System Selection panel, type the name of each system to be added, click Add, and then click Next.

    Do not specify systems that are part of another cluster.

    Proceed to step 8.

  6. On the System Selection panel, specify the systems for the cluster and then click Next.

    Do not select systems that are part of another cluster.

    Enter the name of the system and click Add to add it to the Selected Systems list, or select the system in the Domain Systems list and then click the > (right-arrow) button.

  7. The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier. Review the status and then click Next.

    Select the system to see the validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again.

    A system can be rejected for any of the following reasons:

    • System is not pingable.

    • WMI access is disabled on the system.

    • Wizard is unable to retrieve the system architecture or operating system.

    • Product is either not installed or there is a version mismatch.

  8. On the Cluster Configuration Options panel, click Create New Cluster and then click Next.
  9. On the Cluster Details panel, specify the details for the cluster and then click Next.

    Specify the cluster details as follows:

    Cluster Name

    Type a name for the new cluster. Veritas recommends a maximum length of 32 characters for the cluster name.

    Cluster ID

    Select a cluster ID from the suggested cluster IDs in the drop-down list, or type a unique ID for the cluster. The cluster ID can be any number from 0 to 65535.

    Note:

    If you chose to specify systems and users manually in step 4 or if you share a private network between more than one domain, make sure that the cluster ID is unique.

    Operating System

    From the drop-down list, select the operating system.

    All the systems in the cluster must have the same operating system and architecture.

    Available Systems

    Select the systems that you wish to configure in the cluster.

    Check the Select all systems check box to select all the systems simultaneously.

    The wizard discovers the NICs on the selected systems. For single-node clusters with the required number of NICs, the wizard prompts you to configure a private link heartbeat. In the dialog box, click Yes to configure a private link heartbeat.

  10. The wizard validates the selected systems for cluster membership. After the systems are validated, click Next.

    If a system is not validated, review the message associated with the failure and restart the wizard after rectifying the problem.

    If you chose to configure a private link heartbeat in step 9, proceed to the next step. Otherwise, proceed to step 12.

  11. On the Private Network Configuration panel, configure the VCS private network and then click Next. You can configure the VCS private network either over Ethernet or over the User Datagram Protocol (UDP) layer using an IPv4 or IPv6 network.

    Do one of the following:

    • To configure the VCS private network over Ethernet, complete the following steps:

    • Select Configure LLT over Ethernet.

    • Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links.

      Veritas recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one of the NICs and use the low-priority NIC for both public and private communication.

    • If there are only two NICs on a selected system, Veritas recommends that you lower the priority of at least one NIC that will be used for private as well as public network communication.

      To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.

    • If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N" where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC, formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Veritas recommends that you do not select teamed NICs for the private network.

      The wizard configures the LLT service (over Ethernet) on the selected network adapters.

    • To configure the VCS private network over the User Datagram Protocol (UDP) layer, complete the following steps:

    • Select Configure LLT over UDP on IPv4 network or Configure LLT over UDP on IPv6 network depending on the IP protocol that you wish to use.

      The IPv6 option is disabled if the network does not support IPv6.

    • Select the check boxes next to the NICs to be assigned to the private network. You can assign a maximum of eight network links. Veritas recommends reserving two NICs exclusively for the VCS private network.

    • For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In case of IPv4, each IP address can be in a different subnet.

      The IP address is used for the VCS private communication over the specified UDP port.

    • Specify a unique UDP port for each link. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. Click OK.

      For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.

      The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.

  12. On the VCS Helper Service User Account panel, specify the name of a domain user for the VCS Helper service.

    The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.

    Specify the domain user details as follows:

    • To specify an existing user, do one of the following:

      • Click Existing user and select a user name from the drop-down list.

      • If you chose not to retrieve the list of users in step 4, type the user name in the Specify User field and then click Next.

    • To specify a new user, click New user and type a valid user name in the Create New User field and then click Next.

      Do not append the domain name to the user name; do not type the user name as Domain\user or user@domain.

    • In the Password dialog box, type the password for the specified user and click OK, and then click Next.

  13. On the Configure Security Service Option panel, specify security options for the cluster communications and then click Next.

    Do one of the following:

    • To use VCS cluster user privileges, click Use VCS User Privileges and then type a user name and password.

      The wizard configures this user as a VCS Cluster Administrator. In this mode, communication between cluster nodes and clients, including Cluster Manager (Java Console), occurs using the encrypted VCS cluster administrator credentials. The wizard uses the VCSEncrypt utility to encrypt the user password.

      The default user name for the VCS administrator is admin and the password is password. Both are case-sensitive. You can accept the default user name and password for the VCS administrator account or type a new name and password.

      Veritas recommends that you specify a new user name and password.

    • To use the single sign-on feature, click Use Single Sign-on.

      In this mode, the VCS Authentication Service is used to secure communication between cluster nodes and clients by using digital certificates for authentication and SSL to encrypt communication over the public network. VCS uses SSL encryption and platform-based authentication. The Veritas High Availability Engine (HAD) and Veritas Command Server run in secure mode.

      The wizard configures all the cluster nodes as root brokers (RB) and authentication brokers (AB). Authentication brokers serve as intermediate registration and certification authorities. Authentication brokers have certificates signed by the root. These brokers can authenticate clients such as users and services. The wizard creates a copy of the certificates on all the cluster nodes.

  14. Review the summary information on the Summary panel, and click Configure.

    The wizard configures the VCS private network. If the selected systems have LLT or GAB configuration files, the wizard displays an informational dialog box before overwriting the files. In the dialog box, click OK to overwrite the files. Otherwise, click Cancel, exit the wizard, move the existing files to a different location, and rerun the wizard.

    The wizard starts running commands to configure VCS services. If an operation fails, click View configuration log file to see the log.

  15. On the Completing Cluster Configuration panel, click Next to configure the ClusterService group; this group is required to set up components for notification and for global clusters.

    To configure the ClusterService group later, click Finish.

    At this stage, the wizard has collected the information required to set up the cluster configuration. After the wizard completes its operations, with or without the ClusterService group components, the cluster is ready to host application service groups. The wizard also starts the VCS engine (HAD) and the Veritas Command Server at this stage.

  16. On the Cluster Service Components panel, select the components to be configured in the ClusterService group and then click Next.

    Do the following:

    • Check the Notifier Option check box to configure notification of important events to designated recipients.

    • Check the GCO Option check box to configure the wide-area connector (WAC) process for global clusters. The WAC process is required for inter-cluster communication.

      Configure the GCO Option using this wizard only if you are configuring a Disaster Recovery (DR) environment and are not using the Disaster Recovery wizard.

      You can configure the GCO Option using the DR wizard. The Disaster Recovery chapters in the application solutions guides discuss how to use the Disaster Recovery wizard to configure the GCO option.
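After the wizard completes, you may want to confirm that the components it configured are up and communicating before you create application service groups. The following is a minimal verification sketch, run from a command prompt on a cluster node and assuming the InfoScale command-line tools are in the PATH; the exact output depends on your configuration:

  REM Verify that LLT sees all the cluster nodes (each node should be in the OPEN state)
  lltstat -n

  REM Verify GAB port membership (port a = GAB, port h = the VCS engine)
  gabconfig -a

  REM Verify that the VCS engine (HAD) is running and review the cluster, group,
  REM and resource status, including the ClusterService group if you configured it
  hastatus -summary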