Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
    - Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
About I/O fencing
In a Veritas Access cluster, one method of communication between the nodes is heartbeats sent over private links. If two nodes cannot communicate, they cannot verify each other's state: neither node can distinguish a failed link from a failed peer node. The cluster then splits into two sub-clusters that cannot communicate with each other but can still reach the shared storage. This condition is referred to as the "split-brain" condition.
I/O fencing protects data integrity if the split-brain condition occurs. I/O fencing determines which nodes retain access to the shared storage and which nodes are removed from the cluster, to prevent possible data corruption.
I/O fencing can be enabled on shared disks only if the disks are SCSI-3 compliant.
In Veritas Access, I/O fencing has the following modes:
Disk-based I/O fencing uses coordinator disks for arbitration in the event of a network partition. Coordinator disks are standard disks or LUNs that are set aside for use by the I/O fencing driver. All disks (both data and coordinator) must be SCSI-3 compliant. The coordinator disks act as a global lock device during a cluster reconfiguration. This lock mechanism determines which node is allowed to fence off data drives from other nodes. A node must eject a peer from the coordinator disks before it can fence the peer from the data drives. Racing for control of the coordinator disks is how fencing prevents split-brain. Coordinator disks cannot be used for any other purpose; you cannot store data on them.
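The race for the coordinator disks can be illustrated with a toy model. This is a hedged sketch of the arbitration idea only: the function name and inputs are hypothetical, and the real mechanism uses SCSI-3 persistent reservations managed by the fencing driver, not application code.

```python
# Toy model of the coordinator-disk race during a network partition.
# Each sub-cluster tries to eject the peer's registration from every
# coordinator disk; the side that ejects the peer from a majority of
# the disks wins the race and may fence the peer off the data disks.
# Illustrative only -- not the actual Veritas fencing implementation.

def race_for_coordinator_disks(num_coordinator_disks, winner_per_disk):
    """winner_per_disk maps disk index -> 'A' or 'B', the sub-cluster
    whose eject reached that coordinator disk first."""
    wins_a = sum(1 for w in winner_per_disk.values() if w == "A")
    majority = num_coordinator_disks // 2 + 1
    return "A" if wins_a >= majority else "B"

# Sub-cluster A wins the race on two of the three coordinator disks,
# so A holds the majority and fences sub-cluster B.
print(race_for_coordinator_disks(3, {0: "A", 1: "B", 2: "A"}))  # -> A
```

The odd disk count guarantees that exactly one side can hold a majority, so the race always has a single winner.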
To use the disk-based I/O fencing feature, you enable fencing on each node in the cluster. Disk-based I/O fencing requires an odd number of coordinator disks, with a minimum of three, and you must specify which disks to use as coordinator disks. The minimum configuration is a two-node cluster with the Veritas Access software installed and more than three disks: three of the disks are used as coordinator disks, and the remaining disks are used for storing data.
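The requirements above can be summarized as a small sanity check. The helper below is hypothetical (it is not part of the Veritas Access CLI); it only encodes the stated rules: an odd number of coordinator disks, at least three, and more disks than coordinator disks so that some remain for data.

```python
# Sanity checks mirroring the stated disk-based fencing requirements.
# Hypothetical helper for illustration; not a Veritas Access command.

def validate_fencing_config(total_disks, coordinator_disks=3):
    if coordinator_disks < 3 or coordinator_disks % 2 == 0:
        raise ValueError("need an odd number of coordinator disks, minimum three")
    if total_disks <= coordinator_disks:
        raise ValueError("need more disks than coordinator disks to store data")
    return total_disks - coordinator_disks  # disks left for storing data

print(validate_fencing_config(6))  # -> 3 disks remain for data
```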
Majority-based I/O fencing provides high availability when there are no additional servers or shared SCSI-3 disks that can act as coordination points. If a split-brain condition occurs, the sub-cluster with more than half of the nodes remains online; a sub-cluster with less than half of the nodes panics. If the cluster has an odd number of nodes, the sub-cluster with the most nodes survives and the sub-cluster with the fewest nodes is ejected from the cluster. If the cluster has an even number of nodes and the split is even, the sub-cluster with the lowest cluster ID survives.
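The survival rule above can be sketched as a short decision function. This is an illustrative model of the rule as stated, with a hypothetical function name; it is not the fencing driver's code.

```python
# Sketch of the majority-based fencing survival rule (illustrative):
# a sub-cluster survives with a strict majority of the nodes, panics
# with a strict minority, and on an exact even split the sub-cluster
# containing the node with the lowest cluster ID survives.

def subcluster_survives(sub_nodes, total_nodes, has_lowest_id=False):
    if 2 * sub_nodes > total_nodes:   # strict majority: remain online
        return True
    if 2 * sub_nodes < total_nodes:   # strict minority: panic
        return False
    return has_lowest_id              # even split: lowest cluster ID wins

# A 5-node cluster splitting 3/2: only the 3-node side survives.
assert subcluster_survives(3, 5) and not subcluster_survives(2, 5)
# A 4-node cluster splitting 2/2: the side with the lowest ID survives.
assert subcluster_survives(2, 4, has_lowest_id=True)
```

Note that with an odd number of nodes the tie-breaker can never be needed, which is why odd-sized clusters are the simpler case.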
For Veritas Access, majority-based fencing is used for Flexible Storage Sharing.
Majority-based I/O fencing is administered only with the Veritas Access command-line interface.