InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Solaris
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Upgrading SFCFSHA using Boot Environment upgrade
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Reconciling major/minor numbers for NFS shared disks
- Appendix G. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
Configuring disk-based I/O fencing using installer
Note:
The installer stops and starts SFCFSHA to complete I/O fencing configuration. Make sure to unfreeze any frozen VCS service groups in the cluster for the installer to successfully stop SFCFSHA.
To set up disk-based I/O fencing using the installer
- Start the installer with the -fencing option.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of the log files, which you can access if any problem occurs during the configuration process.
- Enter the host name of one of the systems in the cluster.
- Confirm that you want to proceed with the I/O fencing configuration at the prompt.
The program checks that the local node running the script can communicate with remote nodes and checks whether SFCFSHA 9.0 is configured properly.
- Review the I/O fencing configuration options that the program presents. Type 2 to configure disk-based I/O fencing.
1. Configure Coordination Point client based fencing
2. Configure disk based fencing
3. Configure majority based fencing
4. Configure fencing in disabled mode
Select the fencing mechanism to be configured in this Application Cluster [1-4,q,?] 2
- Review the output as the configuration program checks whether VxVM is already started and is running.
If the check fails, configure and enable VxVM before you repeat this procedure.
If the check passes, then the program prompts you for the coordinator disk group information.
- Choose whether to use an existing disk group or create a new disk group to configure as the coordinator disk group.
The program lists the available disk group names and provides an option to create a new disk group. Perform one of the following:
To use an existing disk group, enter the number corresponding to the disk group at the prompt.
The program verifies that the disk group you chose contains an odd number of disks, with a minimum of three disks.
To create a new disk group, perform the following steps:
Enter the number corresponding to the Create a new disk group option.
The program lists the available disks that are in the CDS disk format in the cluster, and asks you to choose an odd number of disks (at least three) to use as coordinator disks.
Arctera recommends that you use three disks as coordination points for disk-based I/O fencing.
If fewer VxVM CDS disks are available than required, the installer asks whether you want to initialize more disks as VxVM disks. Choose the disks that you want to initialize as VxVM disks, and then use them to create the new disk group.
Enter the numbers corresponding to the disks that you want to use as coordinator disks.
Enter the disk group name.
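Whether you reuse an existing disk group or create a new one, the installer enforces the same rule: an odd number of coordinator disks, with a minimum of three. That validation can be sketched as a small shell check (illustrative only; the real check is internal to the installer):

```shell
#!/bin/sh
# Illustrative sketch of the coordinator disk-group rule the installer
# enforces: an odd number of disks, and at least three of them.
valid_coordinator_count() {
    count=$1
    [ "$count" -ge 3 ] && [ $((count % 2)) -eq 1 ]
}

# Report which candidate disk counts would be accepted.
for n in 2 3 4 5; do
    if valid_coordinator_count "$n"; then
        echo "$n disks: OK"
    else
        echo "$n disks: rejected"
    fi
done
```

An odd count is required so that a node can always win a majority of the coordination points during a fencing race.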
- Verify that the coordinator disks you chose meet the I/O fencing requirements.
You must verify that the disks are SCSI-3 PR compatible using the vxfentsthdw utility and then return to this configuration program.
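For example, you can run the utility in its single-disk manual mode from one of the cluster nodes (the path and options below are the usual ones for this utility, but verify them against your installed release; some test modes overwrite data, so use the read-only option on disks that may hold data):

```shell
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r -m
```

Here -r requests a non-destructive, read-only test and -m tests one disk at a time; the utility prompts for the node and disk names.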
- After you confirm the requirements, the program creates the coordinator disk group with the information you provided.
- Verify and confirm the I/O fencing configuration information that the installer summarizes.
Review the output as the configuration program does the following:
Stops VCS and I/O fencing on each node.
Configures disk-based I/O fencing and starts the I/O fencing process.
Updates the VCS configuration file main.cf if necessary.
Copies the /etc/vxfenmode file to a backup file with a date and time suffix, /etc/vxfenmode-date-time. This backup file is useful if any future fencing configuration fails.
Updates the I/O fencing configuration file /etc/vxfenmode.
Starts VCS on each node to make sure that SFCFSHA is cleanly configured to use the I/O fencing feature.
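After these steps, /etc/vxfenmode reflects the disk-based configuration. A typical resulting file contains entries like the following (illustrative; the exact contents and disk policy depend on your release and configuration):

```
vxfen_mode=scsi3
scsi3_disk_policy=dmp
```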
- Review the output as the configuration program displays the location of the log files, the summary files, and the response files.
- Configure the Coordination Point Agent.
Do you want to configure Coordination Point Agent on the client cluster? [y,n,q] (y)
- Enter a name for the service group for the Coordination Point Agent.
Enter a non-existing name for the service group for Coordination Point Agent: [b] (vxfen) vxfen
- Set the level two monitor frequency.
Do you want to set LevelTwoMonitorFreq? [y,n,q] (y)
- Decide the value of the level two monitor frequency.
Enter the value of the LevelTwoMonitorFreq attribute: [b,q,?] (5)
The installer adds the Coordination Point Agent and updates the main configuration file.
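The addition to main.cf typically resembles the following fragment, reflecting the service group name and monitor frequency entered above (illustrative sketch; the node names sys1 and sys2 are placeholders, and the exact attributes vary by release):

```
group vxfen (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoFailOver = 0
    Parallel = 1
    )

    CoordPoint coordpoint (
        LevelTwoMonitorFreq = 5
        )
```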
- Enable auto refresh of coordination points.
Do you want to enable auto refresh of coordination points if registration keys are missing on any of them? [y,n,q,b,?] (n)
See Configuring CoordPoint agent to monitor coordination points.