NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Deployment
- Prerequisites for Kubernetes cluster configuration
- Deployment with environment operators
- Deploying NetBackup
- Primary and media server CR
- Deploying NetBackup using Helm charts
- Deploying MSDP Scaleout
- Deploying Snapshot Manager
- Section II. Monitoring and Management
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager
- Managing the Load Balancer service
- Managing MSDP Scaleout
- Performing catalog backup and recovery
- Section III. Maintenance
- MSDP Scaleout Maintenance
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Migrating the MSDP Scaleout to another node pool
You can migrate an existing MSDP Scaleout deployment to another node pool in case of Kubernetes infrastructure issues.
To migrate the MSDP Scaleout to another node pool
- Ensure that no jobs are running that are related to the MSDP Scaleout that is going to be migrated.
- In the CR YAML file, update the node selector value spec.nodeSelector to point to the new node pool.
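For example, the updated section of the CR YAML might look like the following; the node pool name here is a hypothetical placeholder, and the agentpool label key matches the label used in the msdp init command later in this procedure:

```yaml
spec:
  nodeSelector:
    agentpool: new-nodepool   # hypothetical name of the new node pool
```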
- Apply the updated CR YAML to update the CR in the Kubernetes environment:
kubectl apply -f <your-cr-yaml>
Note:
All affected pods or other Kubernetes workload objects must be restarted for the change to take effect.
- After the CR YAML file is updated, the existing pods are terminated and restarted one at a time, and are automatically re-scheduled on the new node pool.
Note:
Controller pods are temporarily unavailable when the MDS pod restarts. Do not delete pods manually.
- Run the following command to move the MSDP Scaleout operator to the new node pool:
AKS: kubectl msdp init -i <your-acr-url>/msdp-operator:<version> -s <storage-class-name> -l agentpool=<new-nodepool-name>
EKS: kubectl msdp init -i <your-ecr-url>/msdp-operator:<version> -s <storage-class-name> -l agentpool=<new-nodegroup-name>
- If the node selector does not match any existing nodes at the time of the change, a message is displayed on the console.
If node auto scaling is enabled, the issue may resolve automatically as new nodes become available in the cluster. If an invalid node selector is provided, the pods may go into the Pending state after the update. In that case, correct the node selector and run the command above again.
Do not delete the pods manually.
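To confirm that the pods were re-scheduled on the new node pool, you can check their node assignment and look for pods stuck in the Pending state. The namespace placeholder below is an assumption; substitute the namespace used by your MSDP Scaleout deployment:

```
kubectl get pods -n <msdp-namespace> -o wide
kubectl get pods -n <msdp-namespace> --field-selector=status.phase=Pending
```

The -o wide output includes the node that each pod is scheduled on, which lets you verify that the pods have moved to the new node pool.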