NetBackup™ Deployment Guide for Kubernetes Clusters
Last Published:
2024-06-17
Product(s):
NetBackup & Alta Data Protection (10.4.0.1)
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- Section IV. Maintenance
- MSDP Scaleout Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for Primary and Media servers
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Migrating the MSDP Scaleout to another node pool
You can migrate an existing MSDP Scaleout to another node pool in case of Kubernetes infrastructure issues.
To migrate the MSDP Scaleout to another node pool
- Ensure that no jobs related to the MSDP Scaleout that is to be migrated are running.
- Update the node selector value spec.nodeSelector in the CR YAML file to match the new node pool.
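As a sketch, the relevant portion of the CR YAML might look like the following. The apiVersion, kind, CR name, and node pool label key and value shown here are illustrative assumptions; use the values from your own CR file and the label actually carried by the new node pool:

```yaml
# Illustrative fragment only -- field values depend on your deployment.
apiVersion: msdp.veritas.com/v1   # example; keep the apiVersion from your CR
kind: MSDPScaleout                # example; keep the kind from your CR
metadata:
  name: msdp-scaleout             # example CR name
spec:
  # Point the node selector at a label of the new node pool.
  # The key/value pair below is a placeholder.
  nodeSelector:
    agentpool: newpool
```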
- Apply the new CR YAML file to update the CR in the Kubernetes environment.
kubectl apply -f <your-cr-yaml>
Note:
All affected pods or other Kubernetes workload objects must be restarted for the change to take effect.
- After the CR YAML file is updated, the existing pods are terminated and restarted one at a time, and the pods are re-scheduled to the new node pool automatically.
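To observe the rolling restart, you can watch the pods and the nodes they are scheduled on. The namespace below is an example; substitute the namespace your MSDP Scaleout runs in:

```shell
# -o wide shows the NODE column, so you can confirm that pods
# move to nodes in the new node pool as they restart.
kubectl get pods -n netbackup -o wide --watch
```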
Note:
Controller pods are temporarily unavailable when the MDS pod restarts. Do not delete pods manually.
- Re-run the following command to update the MSDP Scaleout operator with the new node pool:
# helm upgrade --install operators
- If the node selector does not match any existing nodes at the time of the change, a message is displayed on the console.
If node auto-scaling is enabled, the issue may resolve automatically as new nodes become available in the cluster. If an invalid node selector is provided, the pods may remain in the Pending state after the update. In that case, run the command above again.
Do not delete the pods manually.
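If pods remain in the Pending state, kubectl describe usually includes a scheduling event that explains why (for example, no nodes matching the node selector). The pod and namespace names below are placeholders:

```shell
# Show the most recent events for a Pending pod, including any
# FailedScheduling message from the scheduler.
kubectl describe pod <msdp-pod-name> -n netbackup | grep -A 8 Events
```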