Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Migration
You can initiate migration on a primary cluster when the peer clusters are connected, configured for Disaster Recovery (DR), and the application stack is online. Initiate migration from the primary cluster only; this is the source cluster. Initiating migration from the target cluster can result in unstable cluster states. To change the current primary cluster, run kubectl edit/patch <Name of DR plan> or oc edit/patch <Name of DR plan> and update Spec:PrimaryCluster, as shown in the sketch below. While Spec:PrimaryCluster and Status:PrimaryCluster differ, migration is in progress.
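The following is a minimal sketch of initiating migration by patching the DR plan. The plan name (sample-drplan), the resource name (disasterrecoveryplan), and the lower-camel-case field path (spec.primaryCluster) are assumptions for illustration; verify the exact names against your CRD with kubectl explain. On OpenShift, substitute oc for kubectl.

```bash
# Assumed plan name and field path -- verify against your CRD.
kubectl patch disasterrecoveryplan sample-drplan \
  -n infoscale-vtas \
  --type merge \
  -p '{"spec":{"primaryCluster":"peer-cluster-1"}}'

# Migration is in progress while spec and status disagree:
kubectl get disasterrecoveryplan sample-drplan -n infoscale-vtas \
  -o jsonpath='{.spec.primaryCluster}{"  "}{.status.primaryCluster}{"\n"}'
```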
The following entities are updated during migration:
Application metadata - When migration is initiated from the source cluster, the latest snapshot of the managed application's metadata is taken and tagged for restoration. This snapshot is replicated to the peer (target) cluster. The application then goes offline on the source cluster, and the target cluster uses the snapshot to restore the application stack.
Application data - For stateful applications, you must have configured a DataReplication custom resource (CR) and updated DisasterRecoveryPlan:Spec:DataReplicationPointer accordingly; see the sketch after this list. The DataReplication CR manages replication of application data from the primary cluster to the peer clusters (source to target). Currently, Veritas Volume Replicator (VVR) is used for application data replication. When migration is initiated from the source cluster, the cluster roles are swapped: the proposed primary cluster assumes the Primary role, and the current primary cluster assumes the Secondary role.
DNS endpoints - The DNS custom resource updates and monitors the mapping for:
- Host name to IP address (A, AAAA, or PTR record)
- Alias to host name or canonical name (CNAME record)
When migration is initiated, the DNS resource records are updated appropriately.
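As a minimal sketch of wiring application data replication into the plan, the fragment below shows a DisasterRecoveryPlan that points at a DataReplication CR. Every field name and value here (the apiVersion, kind names, dataReplicationPointer, and metadata) is an assumption for illustration only; consult the InfoScale DR Manager CRDs for the authoritative schema.

```bash
# Hypothetical manifest fragment -- field names are assumptions, not the
# authoritative InfoScale schema.
cat <<'EOF' | kubectl apply -f -
apiVersion: infoscale.veritas.com/v1
kind: DisasterRecoveryPlan
metadata:
  name: sample-drplan
  namespace: infoscale-vtas
spec:
  primaryCluster: peer-cluster-1
  # Points at the DataReplication CR that drives VVR replication
  # of the stateful application's data volumes.
  dataReplicationPointer:
    name: sample-datarep
    namespace: infoscale-vtas
EOF
```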
You can check the intermediate transient states, such as the BackupStatus, ScheduleStatus, RestoreStatus, and DataReplicationStatus attributes of the Disaster Recovery Plan, during migration. If migration is stuck, check the logs by running kubectl/oc logs -f --tail=100 deployments.apps/infoscale-dr-manager -n infoscale-vtas. After migration completes, these transient states are cleared and Status:PrimaryCluster in the Disaster Recovery Plan is updated to the new primary cluster.
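A sketch of watching the transient states and tailing the DR Manager logs follows. The logs command is taken from this section; the jsonpath field names mirror the attribute names above in lower camel case, which is an assumption about the CRD's status schema.

```bash
# Assumed lower-camel-case status fields -- verify with 'kubectl explain'.
kubectl get disasterrecoveryplan sample-drplan -n infoscale-vtas \
  -o jsonpath='{.status.backupStatus}{" "}{.status.scheduleStatus}{" "}{.status.restoreStatus}{" "}{.status.dataReplicationStatus}{"\n"}'

# From this section: tail the DR Manager logs if migration is stuck.
kubectl logs -f --tail=100 deployments.apps/infoscale-dr-manager -n infoscale-vtas
```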
Note:
If you want to resize a volume after migration, ensure that you create a StorageClass on the secondary cluster (the new primary).
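A minimal sketch of such a StorageClass with volume expansion enabled follows. The provisioner string org.veritas.infoscale, the class name, and the reclaim policy are assumptions; match them to the StorageClass used on the original primary cluster.

```bash
# Hypothetical StorageClass -- provisioner and parameters are assumptions.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infoscale-sc
provisioner: org.veritas.infoscale   # assumed InfoScale CSI driver name
allowVolumeExpansion: true           # required so PVCs can be resized
reclaimPolicy: Delete
EOF
```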