Veritas InfoScale™ for Kubernetes Environments 8.0.100 - Linux
Configuring Data Replication
Using the Data Replication custom resource (CR), you can configure replication for persistent data (PVs and PVCs) associated with application components in a namespace. A custom resource created on a cluster is automatically synchronized on all peer clusters; hence, this CR needs to be configured on the primary cluster only. After the CR is configured, replication is set up. Veritas Volume Replicator (VVR) performs the replication. You can check the status of the underlying replication and perform operations such as stop, pause, resume, and migrate on the data replication.
Complete the following steps
Edit
/YAML/DR/SampleDataReplication.yaml
on the primary cluster as follows for replication of persistent data (PVs and PVCs) associated with application components in the specified namespace and labels.

apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: <Name for Data replication>
spec:
  # Virtual IP address to configure VVR
  localHostAddress: <Any free Virtual IP address to configure VVR for the primary cluster>
  # Corresponding netmask to configure VVR
  localNetMask: <Corresponding netmask to configure VVR>
  # Corresponding network interface to configure VVR (If NIC name is identical on all nodes)
  localNIC: eth0
  # Corresponding network interface map (hostname and NIC name map) to configure VVR
  # (If NIC name is not identical on all nodes)
  #localNICMap:
  #  "host1" : "eth0"
  #  "host2" : "eth0"
  #  "host3" : "eth0"
  #  "host4" : "eth1"
  # Namespace and optionally labels for which you
  # want to configure data replication
  selector:
    namespace: prod
    labels:
      env: prod
  # Current primary cluster name - Name of the cluster you want
  # to back up
  currentPrimary: <Current primary cluster name - Name of the cluster you want to back up>
  # (optional) In case of takeover operation, specify force as true
  # along with the updated currentPrimary value. In case of migrate
  # operation, force should be specified as false and only
  # currentPrimary needs to be updated.
  #force: false
  # Secondary cluster details
  remoteClusterDetails:
    # ID of the Cluster to be used for a backup
    - clusterName: <ID of the Cluster to be used for a backup>
      # Virtual IP address for VVR configuration of this cluster
      remoteHostAddress: <Any free Virtual IP address for VVR configuration of the remote cluster>
      # Corresponding Netmask of this cluster
      remoteNetMask: <Corresponding Netmask of the remote cluster>
      # Corresponding Network interface of the remote cluster
      remoteNIC: eth0
      # Corresponding Network interface map of this cluster
      #remoteNICMap:
      #  "host5" : "eth1"
      #  "host6" : "eth0"
      #  "host7" : "eth0"
      #  "host8" : "eth1"
      # (optional) Replication type can be sync or async.
      # Default value will be async if not specified.
      #replicationType: async
      # (optional) replicationState can have values start, stop,
      # pause, and resume.
      # This field can be updated to start/stop/pause/resume
      # replication.
      # Default value will be set to start during initial
      # configuration.
      #replicationState: start
      # (optional) Network transport protocol can be TCP or UDP.
      # Default value will be set to TCP during initial configuration
      # and can be later changed to UDP.
      #networkTransportProtocol: TCP
      # (optional) By default, it will be set to N/A during
      # initial configuration, which means the available bandwidth
      # will be used.
      # It can be later changed to set the maximum network bandwidth
      # (in bits per second).
      #bandwidthLimit: N/A
      # (optional) Supported values for latency protection are: fail,
      # disable, and override.
      # By default, it will be set to disable during initial
      # configuration and can be changed later.
      #latencyProtection: disable
      # (optional) Supported values for log (SRL) protection are: autodcm,
      # dcm, fail, disable, and override.
      # By default, it will be set to autodcm during initial
      # configuration and can be changed later.
      #logProtection: autodcm
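For reference, a minimal filled-in sketch of this CR is shown below. The resource name, cluster IDs, namespace, virtual IP addresses, netmask, and interface names are illustrative placeholders only (chosen to match the postgres-rep example output used later in this section); substitute the values for your own environment and leave the optional fields unset to accept the defaults.

# Illustrative example only - all values below are hypothetical placeholders.
apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: postgres-rep
spec:
  # Free virtual IP and netmask for VVR on the primary cluster (placeholders)
  localHostAddress: 10.20.30.40
  localNetMask: 255.255.252.0
  localNIC: eth0
  # Replicate all persistent data in the postgres namespace
  selector:
    namespace: postgres
  # Cluster that currently owns the data (placeholder cluster ID)
  currentPrimary: Clus1
  remoteClusterDetails:
    # DR cluster that receives the replicated data (placeholder cluster ID)
    - clusterName: Clus2
      # Free virtual IP and netmask for VVR on the DR cluster (placeholders)
      remoteHostAddress: 10.20.40.50
      remoteNetMask: 255.255.252.0
      remoteNIC: eth0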
Note:
Ensure that the current primary cluster name you enter here is the same as the one you plan to specify in
DisasterRecoveryPlan.yaml
. For every Disaster Recovery Plan, you must create a separate Data Replication CR. Ensure that the namespace and labels in a Disaster Recovery Plan and its corresponding Data Replication CR are identical.
Run the following command on the bastion node
oc apply -f /YAML/DR/SampleDataReplication.yaml
After this command is executed, run the following commands on the bastion node
oc get datarep
Review the output, which is similar to the following
NAME                                   PROPOSED PRIMARY        CURRENT PRIMARY        NAMESPACE     LABELS
<Name of data replication resource>    <proposed cluster ID>   <current cluster ID>   <namespace>   <labels if any>
oc get datarep -o wide
Review the output, which is similar to the following
NAME           PROPOSED PRIMARY   CURRENT PRIMARY   NAMESPACE   LABELS   REPLICATION SUMMARY
postgres-rep   Clus1              Clus1             postgres    <none>   asynchronous | consistent,up-to-date | replicating (connected) | behind by 0h 0m 0s
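The next step is to wait for initial synchronization to complete. If you want to script that wait instead of re-running the command manually, a minimal shell sketch such as the following can poll the replication summary; postgres-rep is the hypothetical resource name from the sample output above, and the 30-second polling interval is an arbitrary choice.

# Poll the replication summary until the volumes report "up-to-date".
# postgres-rep is a placeholder; use your own Data Replication resource name.
until oc get datarep postgres-rep -o wide | grep -q "up-to-date"; do
  sleep 30
done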
Wait for the initial synchronization of the application Persistent Volumes to complete on the DR site. Run the following command on the bastion node of the DR site.
oc describe datarep <Data rep name for the application>
Review the status in the output, which is similar to the following. Data Status must be consistent,up-to-date.
Spec:
  ..
  ..
Status:
  ..
  ..
  Primary Status:
    ..
    ..
  Secondary Status:
    ..
    Data Status: consistent,up-to-date
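As noted in the comments of the sample CR, you can later migrate the primary role to the peer cluster by updating only currentPrimary and leaving force as false (force: true is reserved for a takeover). The following is a minimal sketch of such an edit, where Clus2 is the hypothetical peer cluster ID used in the earlier example.

# In /YAML/DR/SampleDataReplication.yaml, change only the proposed primary.
# Clus2 is a hypothetical peer cluster ID; keep force as false for a migrate
# (force: true is used only for a takeover).
spec:
  currentPrimary: Clus2
  force: false

Re-apply the updated file from the bastion node for the change to take effect

oc apply -f /YAML/DR/SampleDataReplication.yaml

Pausing or resuming replication follows the same pattern: update replicationState under remoteClusterDetails in the same file and re-apply it.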