InfoScale™ 9.0 Support for Containers - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale™ on OpenShift
- Installing Arctera InfoScale™ on Kubernetes
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing InfoScale DR on OpenShift
- Installing InfoScale DR on Kubernetes
- TECHNOLOGY PREVIEW: Disaster Recovery scenarios
- Configuring InfoScale
- Troubleshooting
Configuring Data Replication
Using the Data Replication custom resource, you can configure replication for persistent data (PVs and PVCs) associated with application components in a namespace. A custom resource created on a cluster is automatically synchronized on all peer clusters; hence, this CR needs to be configured on the primary cluster only. After the CR is configured, replication is set up. Arctera Volume Replicator (VVR) is responsible for performing the replication. You can check the status of the underlying replication and perform operations such as stop, pause, resume, and migrate.
You must also configure Data Replication custom resources for Velero. Velero is used to capture application metadata on the primary cluster and restore it on the DR cluster by using VVR. For configuring Velero, you must run the CR on both clusters.
Note:
You must configure at least three CR files: one for Velero replication from the primary to the DR, one for Velero replication from the DR to the primary, and one per application/namespace that you want to replicate.
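For reference, assuming hypothetical file names, the three CRs correspond to three edited copies of SampleDataReplication.yaml that are applied with kubectl as described in the steps below:

kubectl apply -f velero-primary-to-dr.yaml    # Velero metadata replication, primary to DR
kubectl apply -f velero-dr-to-primary.yaml    # Velero metadata replication, DR to primary
kubectl apply -f prod-app-datarep.yaml        # persistent data (PVs and PVCs) of one application/namespace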
Complete the following steps
Edit /YAML/DR/SampleDataReplication.yaml to configure Velero replication from the primary to the DR as under:

apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: <Name for Data replication>
spec:
  localHostAddress: <Virtual IP address to configure VVR>
  localNetMask: <Corresponding netmask to configure VVR>
  localNICMap: <corresponding network interface to configure VVR>
    "host1" : "eth0"
    "host2" : "eth0"
    "host3" : "eth0"
    "host4" : "eth1"
  selector:
    namespace: <namespace where velero is installed, same as specified in GCM>
    labels:
      component: minio-infoscale-dr-bkp
  currentPrimary: <Current primary cluster name - Name of the cluster you want to back up>
  remoteClusterDetails:
    - clusterName: <ID of the Cluster to be used for a backup>
      remoteHostAddress: <Virtual IP address for VVR configuration of this cluster>
      remoteNetMask: <Netmask of this cluster>
      remoteNICMap: <Network interface of this cluster>
        "host5" : "eth1"
        "host6" : "eth0"
        "host7" : "eth0"
        "host8" : "eth1"
  replicationType: sync
Run the following command on the master node
kubectl apply -f /YAML/DR/SampleDataReplication.yaml
Similarly, copy SampleDataReplication.yaml and edit the file to update currentPrimary and the local/remote cluster details appropriately. Apply SampleDataReplication.yaml to configure metadata replication from the DR site to the primary.

Run the following command on the master node to verify whether data replication is set up on both clusters.
kubectl get datarep
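The datarep name used above appears to be the short name of the datareplications.infoscale.veritas.com resource that is used later with kubectl describe; if so, the same objects can also be listed with the full resource name:

kubectl get datareplications.infoscale.veritas.com

Both Velero Data Replication CRs should be listed on each cluster before you proceed.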
Edit another copy of /YAML/DR/SampleDataReplication.yaml on the primary cluster as under for replication of persistent data (PVs and PVCs) associated with application components in the specified namespace and labels:

apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: <Name for Data replication>
spec:
  # Virtual IP address to configure VVR
  localHostAddress: <Virtual IP address to configure VVR>
  # Corresponding netmask to configure VVR
  localNetMask: <Corresponding netmask to configure VVR>
  # Corresponding network interface map (hostname and NIC name map)
  # to configure VVR
  localNICMap: <corresponding network interface to configure VVR>
    "host1" : "eth0"
    "host2" : "eth0"
    "host3" : "eth0"
    "host4" : "eth1"
  # Namespace and optionally labels for which you
  # want to configure data replication
  selector:
    namespace: prod
    labels:
      env: prod
  # Current primary cluster name - Name of the cluster you want
  # to back up
  currentPrimary: <Current primary cluster name - Name of the cluster you want to back up>
  # (optional) In case of takeover operation, specify force to true
  # along with the updated currentPrimary value. In case of migrate
  # operation, force should be specified as false and only
  # currentPrimary needs to be updated.
  #force: false
  # Secondary cluster details
  remoteClusterDetails:
    # ID of the Cluster to be used for a backup
    - clusterName: <ID of the Cluster to be used for a backup>
      # Virtual IP address for VVR configuration of this cluster
      remoteHostAddress: <Virtual IP address for VVR configuration of this cluster>
      # Corresponding netmask of this cluster
      remoteNetMask: <Netmask of this cluster>
      # Corresponding network interface map of this cluster
      remoteNICMap: <Network interface of this cluster>
        "host5" : "eth1"
        "host6" : "eth0"
        "host7" : "eth0"
        "host8" : "eth1"
  # (optional) Replication type can be sync or async.
  # Default value will be async if not specified.
  #replicationType: async
  # (optional) replicationState can have values start, stop,
  # pause, and resume. This field can be updated to
  # start/stop/pause/resume replication.
  # Default value will be set to start during initial configuration.
  #replicationState: start
  # (optional) Network transport protocol can be TCP or UDP.
  # Default value will be set to TCP during initial configuration
  # and can be later changed to UDP.
  #networkTransportProtocol: TCP
  # (optional) By default, it will be set to N/A during initial
  # configuration, which means the available bandwidth will be used.
  # It can be later changed to set the maximum network bandwidth
  # (in bits per second).
  #bandwidthLimit: N/A
  # (optional) Supported values for latency protection are: fail,
  # disable, and override. By default it will be set to disable
  # during initial configuration and can be changed later.
  #latencyProtection: disable
  # (optional) Supported values for log (SRL) protection are: autodcm,
  # dcm, fail, disable, and override. By default it will be set to
  # autodcm during initial configuration and can be changed later.
  #logProtection: autodcm
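As an illustration only, a filled-in CR for an application namespace might look like the following. All names, IP addresses, netmasks, host names, and interface names are example values, and localNICMap/remoteNICMap are written as hostname-to-NIC maps as the host entries in the template suggest; replace everything with values from your own environment.

apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: prod-app-datarep              # example name for this Data Replication CR
spec:
  localHostAddress: 10.20.30.40       # example virtual IP used by VVR on the primary cluster
  localNetMask: 255.255.255.0
  localNICMap:                        # example hostname-to-NIC map of the primary cluster
    "host1": "eth0"
    "host2": "eth0"
  selector:
    namespace: prod                   # namespace whose PVs/PVCs are replicated
    labels:
      env: prod
  currentPrimary: cluster1            # example ID of the cluster being backed up
  remoteClusterDetails:
    - clusterName: cluster2           # example ID of the DR cluster
      remoteHostAddress: 10.20.31.40  # example virtual IP used by VVR on the DR cluster
      remoteNetMask: 255.255.255.0
      remoteNICMap:                   # example hostname-to-NIC map of the DR cluster
        "host5": "eth1"
        "host6": "eth0"
  replicationType: async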
Note:
Ensure that the current primary cluster name you enter here is the same as the one you plan to specify in DisasterRecoveryPlan.yaml. For every Disaster Recovery Plan, you must create a separate Data Replication CR. Ensure that the namespace and labels in the Disaster Recovery Plan and its corresponding Data Replication CR are identical.

Run the following command on the master node
kubectl apply -f /YAML/DR/SampleDataReplication.yaml
After these commands are executed, run the following command on the master node
kubectl get datarep
Review the output similar to the following
NAME                           SPECCURRENTPRIMARY             STATUSCURRENTPRIMARY      RVGNAME
<Name for Data replication>    ID of the cluster which        ID of the current
                               you want to back up            working cluster
Wait for the initial synchronization of the application Persistent Volumes to complete on the DR site. Run the following command on the master node of the DR site.
kubectl describe datareplications.infoscale.veritas.com <Data rep name for the application>
Review the status in the output similar to the following. Data Status must be consistent up-to-date.
Spec:
  ..
  ..
Status:
  ..
  ..
  Primary Status:
    ..
    ..
  Secondary Status:
    ..
    Data Status: consistent,up-to-date
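The stop, pause, resume, migrate, and takeover operations mentioned earlier are driven by fields of the same Data Replication CR, as described in the comments of the sample file. As a sketch, assuming the edited copy of SampleDataReplication.yaml shown above, pausing replication amounts to setting replicationState and re-applying the file, while a migrate updates only currentPrimary (with force left as false) and a takeover additionally sets force to true:

# Fragment of the edited Data Replication CR
spec:
  replicationState: pause   # set to resume (or start) to continue replication
  # For a migrate operation, update currentPrimary to the new primary cluster ID
  # and leave force as false; for a takeover operation, also set force: true.

kubectl apply -f /YAML/DR/SampleDataReplication.yaml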