Article: 100017039
Last Published: 2022-02-10
Product(s): InfoScale & Storage Foundation
Problem
How to set primary node for a cluster file system
Solution
In a cluster that is based on VERITAS Storage Foundation(tm) Cluster File System, the primary node for a cluster file system can be set to any node in the cluster that is allowed to mount the file system.
By default, the cluster node that mounts a file system in cluster mode first becomes the primary for that file system. Any other nodes in the cluster that also mount the cluster file system subsequently are designated as the secondary nodes for that file system.
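For example, the mount order determines the initial primary. A minimal sketch, assuming the Linux mount syntax (use -F vxfs on Solaris) and the sample device and mount point used in this article:

NodeA:# mount -t vxfs -o cluster /dev/vx/dsk/shdg1/shvol11 /shvol11
NodeB:# mount -t vxfs -o cluster /dev/vx/dsk/shdg1/shvol11 /shvol11

Because NodeA mounted the file system first, NodeA becomes the primary and NodeB becomes a secondary.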
In this example, a 2-node Storage Foundation Cluster File System cluster consists of nodes NodeA and NodeB. A number of cluster file systems are defined, two of which are shown here.
CFSMount cfsmount1(
Critical=0
MountPoint="/shvol11"
BlockDevice="/dev/vx/dsk/shdg1/shvol11"
MountOpt@NodeA = noqio
MountOpt@NodeB = noqio
NodeList= { NodeA, NodeB }
RemountRes@NodeA = DONE
RemountRes@NodeB = DONE
)
CFSMount cfsmount2(
Critical=0
MountPoint="/shvol12"
BlockDevice="/dev/vx/dsk/shdg1/shvol12"
MountOpt@NodeA = noqio
MountOpt@NodeB = noqio
NodeList= { NodeA, NodeB }
RemountRes@NodeA = DONE
RemountRes@NodeB = DONE
)
NodeB:# fsclustadm -v showprimary /shvol11
NodeA
NodeB:# fsclustadm -v showprimary /shvol12
NodeA
NodeA is the current primary for both file systems. The above commands can be run on NodeA, too, with the same results.
To set NodeB as primary for /shvol11, run the following command on NodeB:
NodeB:# fsclustadm setprimary /shvol11
NodeB:# fsclustadm -v showprimary /shvol11
NodeB
The primary roles will remain set as shown above until another fsclustadm command is run to transfer the role or until the cluster node drops out of the cluster.
Another way to set the primary role for file systems in a more controlled and orderly way is to use the Policy attribute for CFSMount resources.
The general format of the command used to set primary policy for a CFSMount resource is:
# fsclustadm setpolicy node1 node2 ... /mount-point-of-cfs
Using the sample CFSMount resources shown above, set the policy for each resource:
NodeA:# fsclustadm setpolicy NodeB NodeA /shvol11
NodeA:# fsclustadm setpolicy NodeA NodeB /shvol12
To check the policy for a CFSMount resource, use the same fsclustadm command:
NodeA:# fsclustadm -v getpolicy /shvol11
NodeB
NodeA
NodeA:# fsclustadm -v getpolicy /shvol12
NodeA
NodeB
The Cluster File System policy specifies the order in which hosts assume the primary role for the cluster file system if the current primary fails. Note that when the fsclustadm command is used to set the policy for a file system, the node that is first in the list will assume the primary role immediately (provided that node is in running state and has the file system mounted).
Checking the file systems for the primary, it can be seen that the primary roles have been set as follows:
NodeA:# fsclustadm -v showprimary /shvol11
NodeB
NodeA:# fsclustadm -v showprimary /shvol12
NodeA
With these policies in place, if cluster node NodeA were to crash or leave the cluster, the primary role for cluster file system /shvol12 would be transferred to NodeB.
Cluster File System policies persist only while the cluster is active; at all times, at least one cluster node must be in the running state for the policies to remain in effect. If all cluster nodes are stopped, or the entire cluster is rebooted, the Cluster File System policies are lost and must be set again. The Policy attribute of the CFSMount resource type can be used to make the policy persistent.
Reset the policy for the two cluster file systems:
NodeA:# fsclustadm setpolicy /shvol11
NodeA:# fsclustadm setpolicy /shvol12
To set the Policy attribute for the CFSMount resource, use the hares command or the VERITAS Cluster Server GUI:
NodeA:# hares -modify cfsmount1 Policy NodeB NodeA
NodeA:# hares -modify cfsmount2 Policy NodeA NodeB
Executing the hares command results in the CFSMount agent running the fsclustadm setpolicy command on the nodes where the cluster file system is mounted.
The two CFSMount resources now show the policies that have been set:
CFSMount cfsmount1(
Critical=0
MountPoint="/shvol11"
BlockDevice="/dev/vx/dsk/shdg1/shvol11"
MountOpt@NodeA = noqio
MountOpt@NodeB = noqio
NodeList= { NodeA, NodeB }
Policy= { NodeB, NodeA }
RemountRes@NodeA = DONE
RemountRes@NodeB = DONE
)
CFSMount cfsmount2(
Critical=0
MountPoint="/shvol12"
BlockDevice="/dev/vx/dsk/shdg1/shvol12"
MountOpt@NodeA = noqio
MountOpt@NodeB = noqio
NodeList= { NodeA, NodeB }
Policy= { NodeA, NodeB }
RemountRes@NodeA = DONE
RemountRes@NodeB = DONE
)
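The Policy attribute can also be checked with the hares command (a sketch; the exact output format depends on the VCS version):

NodeA:# hares -display cfsmount1 -attribute Policy
NodeA:# hares -display cfsmount2 -attribute Policy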
Using fsclustadm, it can be seen that the file system primaries for the two CFSMount resources have been reset based on the Policy attribute:
NodeA:# fsclustadm -v showprimary /shvol11
NodeB
NodeA:# fsclustadm -v showprimary /shvol12
NodeA
The following table shows the file system primary node changing based on the state of the cluster nodes or the operations performed.
Cluster state or commands executed | Primary for /shvol11 (cfsmount1) | Primary for /shvol12 (cfsmount2) |
---|---|---|
NodeA and NodeB in RUNNING state | NodeB | NodeA |
NodeA crashes or VCS stopped (hastop -local) | NodeB | NodeB |
NodeA rejoins cluster | NodeB | NodeA |
VCS stopped on NodeB with force option (hastop -local -force) | NodeB | NodeA |
VCS restarted on NodeB | NodeB | NodeA |
Use fsclustadm setprimary command to set NodeA as primary for /shvol11 | NodeA | NodeA |
Offline cfsmount1 resource on NodeB | NodeA | NodeA |
Online cfsmount1 resource on NodeB | NodeB | NodeA |
Note that in a 2-node cluster it is enough to specify just the preferred primary node in the Policy attribute rather than both cluster nodes. However, in a cluster with more than two nodes, listing the nodes in the desired order in the Policy attribute becomes more important.
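For example, in this 2-node cluster the following would be sufficient to prefer NodeB as primary for /shvol11 (a sketch using the sample resource name):

NodeA:# hares -modify cfsmount1 Policy NodeB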
The Policy attribute may be modified using the hares command.
To add a cluster node to the list of nodes in the Policy attribute:
# hares -modify resource-name Policy -add node-name
To delete a cluster node from the list of nodes specified in the Policy attribute:
# hares -modify resource-name Policy -delete node-name
To delete the Policy attribute for a resource:
# hares -modify resource-name Policy -delete -keys