Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux
Configuring Data Replication
Using the Data Replication custom resource, you can configure replication for the persistent data (PVs and PVCs) associated with application components in a namespace. A custom resource created on one cluster is automatically synchronized to all peer clusters, so this CR needs to be configured on the primary cluster only. After the CR is configured, replication is set up; Veritas Volume Replicator (VVR) performs the replication. You can check the status of the underlying replication and perform operations such as stop, pause, resume, and migrate data replication.
If you are configuring data replication for an on-premise and cloud combination, ensure that you select the appropriate value for cloudVendor, and for load balancer-based network traffic management, set lbEnabled: true.
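After replication is configured (see the steps below), operations such as stop, pause, and resume are performed by updating the replicationState field of the Data Replication CR, as described in the sample YAML that follows. The following is a minimal sketch, assuming a hypothetical DataReplication resource named mysql-rep:

# Open the CR for editing and change replicationState (for example, from start to pause)
kubectl edit datarep mysql-rep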
Complete the following steps
Edit /YAML/DR/SampleDataReplication.yaml on the primary cluster as follows, to replicate the persistent data (PVs and PVCs) associated with application components in the specified namespace and labels.

apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: <Name for Data replication>
spec:
  # In case of load balancer based n/w traffic management,
  # set lbEnabled to true. Else the default
  # value should always be kept false.
  lbEnabled: false
  # Virtual IP address to configure VVR
  localHostAddress: <Any free Virtual IP address to configure VVR for the primary cluster>
  # Corresponding netmask to configure VVR
  localNetMask: <Corresponding netmask to configure VVR>
  # Corresponding network interface to configure VVR
  # (If NIC name is identical for all nodes)
  localNIC: eth0
  # Corresponding network interface map
  # (hostname and NIC name map) to configure VVR
  # (If NIC name is not identical for all nodes)
  #localNICMap:
  #  "host1" : "eth0"
  #  "host2" : "eth0"
  #  "host3" : "eth0"
  #  "host4" : "eth1"
  # (optional) Cloud Vendor (e.g. Azure/Aws) on Primary VVR site.
  cloudVendor: Local
  # (optional) Applicable for Cloud-Vendor based environments.
  # If "localHostAddress" value is an Overlay N/w IP, then
  # specify all applicable Route table resource ids.
  #routeTableResourceIds:
  #  - "rtb-fb97ac9d"
  #  - "rtb-f416eb8d"
  #  - "rtb-e48be49d"
  # Namespace and optionally labels
  # for which you want to configure data replication
  selector:
    namespace: mysql
    #labels:
    #  app: db
  # Current primary cluster name - Name of the cluster
  # you want to back up
  currentPrimary: <Current primary cluster name - Name of the cluster you want to back up>
  # (optional) In case of takeover operation, specify force as true
  # along with the updated currentPrimary value.
  # In case of migrate operation, force should be specified as false
  # and only currentPrimary needs to be updated.
  force: false
  # Secondary cluster details
  remoteClusterDetails:
    # ID of the Cluster to be used for a backup
    - clusterName: <ID of the Cluster to be used for a backup>
      # In case of load balancer based n/w traffic management,
      # set remoteLbEnabled to true. Else the default
      # value should always be kept false.
      remoteLbEnabled: false
      # Virtual IP address for VVR configuration of this cluster
      remoteHostAddress: <Any free Virtual IP address for VVR configuration of the remote cluster>
      # Corresponding Netmask of this cluster
      remoteNetMask: <Corresponding Netmask of the remote cluster>
      # Corresponding Network interface of this cluster
      remoteNIC: eth0
      # Corresponding Network interface map of this cluster
      #remoteNICMap:
      #  "host5" : "eth1"
      #  "host6" : "eth0"
      #  "host7" : "eth0"
      #  "host8" : "eth1"
      # (optional) Cloud Vendor (e.g. Azure/Aws) on remote VVR site.
      remoteCloudVendor: Local
      # (optional) Applicable for Cloud-Vendor based environments.
      # If "remoteHostAddress" value is an Overlay N/w IP, then
      # specify all applicable Route table resource ids.
      #remoteRouteTableResourceIds:
      #  - "rtb-fb97ac9d"
      #  - "rtb-f416eb8d"
      #  - "rtb-e48be49d"
      # (optional) Replication type can be sync or async.
      # Default value will be async if not specified.
      replicationType: async
      # (optional) replicationState can have
      # values start, stop, pause, and resume.
      # This field can be updated to
      # start/stop/pause/resume replication.
      # Default value will be set to start
      # during initial configuration.
      replicationState: start
      # (optional) Network transport protocol can be TCP or UDP.
      # Default value will be set to TCP during
      # initial configuration and can be later changed to UDP.
      networkTransportProtocol: TCP
      # (optional) By default, it will be set to N/A
      # during initial configuration, which means the available bandwidth
      # will be used.
      # It can be later changed to set the
      # maximum network bandwidth (in bits per second).
      bandwidthLimit: N/A
      # (optional) Supported values for latency protection are:
      # fail, disable, and override.
      # By default, it will be set to disable during initial
      # configuration and can be changed later.
      latencyProtection: disable
      # (optional) Supported values for log (SRL) protection are:
      # autodcm, dcm, fail, disable, and override.
      # By default, it will be set to autodcm during initial
      # configuration and can be changed later.
      logProtection: autodcm
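For reference, the template above can be reduced to a minimal filled-in sketch for a simple two-cluster, non-cloud setup. The resource name, cluster names, IP addresses, and netmask below are hypothetical placeholders; the optional replication attributes are omitted so their defaults apply.

apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: postgres-rep
spec:
  lbEnabled: false
  # Free virtual IP and netmask used to configure VVR on the primary cluster
  localHostAddress: 10.20.30.40
  localNetMask: 255.255.255.0
  localNIC: eth0
  cloudVendor: Local
  selector:
    namespace: postgres
  currentPrimary: Clus1
  force: false
  remoteClusterDetails:
    - clusterName: Clus2
      remoteLbEnabled: false
      # Free virtual IP and netmask used to configure VVR on the secondary cluster
      remoteHostAddress: 10.20.40.50
      remoteNetMask: 255.255.255.0
      remoteNIC: eth0
      remoteCloudVendor: Local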
Note:
Ensure that the current primary cluster name you enter here is the same as the name you plan to specify in DisasterRecoveryPlan.yaml. For every Disaster Recovery Plan, you must create a separate Data Replication CR. Ensure that the namespace and labels in the Disaster Recovery Plan and its corresponding Data Replication CR are identical.
Run the following command on the master node
kubectl apply -f /YAML/DR/SampleDataReplication.yaml
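Because the custom resource is automatically synchronized to all peer clusters (as noted at the beginning of this section), you can optionally confirm that the CR also appears on the secondary cluster by running the same query on its master node; for example:

kubectl get datarep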
After this command is executed, run the following commands on the master node
kubectl get datarep
Review output similar to the following
NAME                                  PROPOSED PRIMARY        CURRENT PRIMARY        NAMESPACE     LABELS
<Name of data replication resource>   <proposed cluster ID>   <current cluster ID>   <namespace>   <labels if any>
kubectl get datarep -o wide
Review output similar to the following
NAME           PROPOSED PRIMARY   CURRENT PRIMARY   NAMESPACE   LABELS   REPLICATION SUMMARY
postgres-rep   Clus1              Clus1             postgres    <none>   asynchronous | consistent,up-to-date | replicating (connected) | behind by 0h 0m 0s
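If you want to monitor the replication summary continuously while the initial synchronization progresses (see the next step), the standard kubectl watch flag can be appended; a minimal sketch:

kubectl get datarep -o wide --watch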
Wait for the initial synchronization of the application Persistent Volumes to complete on the DR site. Run the following command on the master node of the DR site.
kubectl describe datarep <Data rep name for the application>
Review the status in output similar to the following. Data Status must be consistent,up-to-date.
Spec:
  ..
  ..
Status:
  ..
  ..
  Primary Status:
    ..
    ..
  Secondary Status:
    ..
    Data Status:  consistent,up-to-date
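As the comments in the sample CR indicate, a planned migration of the primary role only requires updating currentPrimary to the peer cluster name while leaving force set to false (a takeover additionally sets force to true). The following is a minimal sketch using kubectl patch, with the hypothetical names used in the output above:

# Migrate the primary role from Clus1 to Clus2 (planned migration, so force remains false)
kubectl patch datarep postgres-rep --type merge -p '{"spec":{"currentPrimary":"Clus2","force":false}}'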