NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Deployment
- Prerequisites for Kubernetes cluster configuration
- Deployment with environment operators
- Deploying NetBackup
- Primary and media server CR
- Deploying NetBackup using Helm charts
- Deploying MSDP Scaleout
- Deploying Snapshot Manager
- Section II. Monitoring and Management
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager
- Managing the Load Balancer service
- Managing MSDP Scaleout
- Performing catalog backup and recovery
- Section III. Maintenance
- MSDP Scaleout Maintenance
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Upgrade NetBackup from previous versions
Ensure that all the steps mentioned for data migration in the following section are performed before upgrading to or installing the latest NetBackup:
See Preparing the environment for NetBackup installation on Kubernetes cluster.
(AKS-specific) You must have deployed NetBackup on Azure with Azure disks as its storage class. While upgrading to the latest NetBackup, data migration happens only if the existing storage class has been changed. The existing catalog data of the primary server is migrated (copied) from Azure disks to Azure premium files, and a new data volume is created on Azure disks for the NetBackup database. If the storage class is changed for logs, a migration from Azure disk to Azure disk is triggered for the logs.
Fresh NetBackup deployment: If you are deploying NetBackup for the first time, Azure premium files is used for the primary server's catalog, and Azure disks are used for the log and data volumes for any backup and restore operation.
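For reference, a storage class backed by Azure premium files typically looks like the following. This is a minimal sketch, not taken from the NetBackup samples; the class name nb-premium-files is an illustrative assumption:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nb-premium-files        # illustrative name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS          # premium file shares
allowVolumeExpansion: true
reclaimPolicy: Delete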
Ensure that all the steps mentioned for data migration in the following section are performed before upgrading to or installing the latest NetBackup:
See Preparing the environment for NetBackup installation on Kubernetes cluster.
(EKS-specific) You must have deployed NetBackup on AWS with EBS as its storage class. While upgrading to the latest NetBackup, the existing catalog data of the primary server is migrated (copied) from EBS to Amazon Elastic File System (EFS).
Fresh NetBackup deployment: If you are deploying NetBackup for the first time, Amazon Elastic File System (EFS) is used for the primary server's catalog volume for any backup and restore operations.
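To confirm that the existing deployment uses EBS-backed storage before you begin, you can list the PVCs with their storage classes. A minimal check, assuming the ns-155 namespace used in the samples below:
kubectl get pvc -n ns-155 -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName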
Perform the following steps to create EFS when upgrading NetBackup from version 10.0.0.1
- To create EFS for the primary server, see Create your Amazon EFS file system.
The EFS configuration can be as follows; you can update the Throughput mode as required:
Performance mode: General Purpose
Throughput mode: Provisioned (256 MiB/s)
Availability zone: Regional
Note:
Throughput mode can be increased at runtime depending on the size of the workloads. If you observe performance issues, you can increase the provisioned throughput up to 1024 MiB/s (see the example command after these steps).
- Install the efs-csi-controller driver on the EKS cluster. For more information on installing the driver, see Amazon EFS CSI driver.
- Note down the EFS ID for further use.
- Mount the EFS on any EC2 instance and create two directories on the EFS to store NetBackup data.
For more information, see Mount on EC2 instance.
For example,
[root@sych09b03v30 ~]# mkdir /efs
[root@sych09b03v30 ~]# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <fs-0bde325bc5b8d6969>.efs.us-east-2.amazonaws.com:/ /efs    # change EFS ID
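If you later need to raise the throughput described in the note above, the AWS CLI can update the file system in place. A hedged sketch; the file system ID and the 512 MiB/s value are illustrative:
aws efs update-file-system --file-system-id fs-0bde325bc5b8d6969 --throughput-mode provisioned --provisioned-throughput-in-mibps 512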
After changing the existing storage class from EBS to EFS for data migration, manually create the PVC and PV with the EFS volume handle and update the yaml files as described in the following procedure:
Create a new PVC and PV with the EFS volume handle.
catalogPVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: catalog
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: environment-pv-primary-catalog
catalogPV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: environment-pv-primary-catalog
  labels:
    topology.kubernetes.io/region: us-east-2    # Give the region configured in your cluster
    topology.kubernetes.io/zone: us-east-2c     # Give the zone of your node instance; you can also check the subnet zone in which your node instance resides
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - iam
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-07a82a46b4a7d87f8:/nbdata    # change the EFS ID as per your created EFS ID
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: catalog    # catalog PVC name to which data is to be copied
    namespace: ns-155
PVC for data (EBS)
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-<Primary name>-primary-0
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <Storageclass name>
  resources:
    requests:
      storage: 30Gi
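After creating these manifests, apply them and verify that the catalog PVC binds to the static PV before continuing. A usage sketch with the file names above (PVC.yaml is the data PVC):
kubectl apply -f catalogPVC.yaml -f catalogPV.yaml -f PVC.yaml
kubectl get pvc -n ns-155    # the catalog PVC STATUS should be Bound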
Edit the environment.yaml file, change the value of paused to true in the primary section, and apply the yaml.
Scale down the primary server using the following commands:
To get the statefulset name: kubectl get sts -n <namespace in environment CR (ns-155)>
To scale down the STS: kubectl scale sts --replicas=0 <STS name> -n <Namespace>
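For example, with the sample names used in this section (test-env and ns-155 are illustrative):
kubectl get sts -n ns-155
kubectl scale sts test-env-primary --replicas=0 -n ns-155
kubectl get pods -n ns-155    # wait until the primary server pod terminates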
Copy the data using the migration yaml files as follows:
catalogMigration.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rsync-data
  namespace: ns-155
spec:
  template:
    spec:
      volumes:
        - name: source-pvc
          persistentVolumeClaim:    # SOURCE PVC
            # old PVC (EBS) from which data is to be copied,
            # for example: catalog-environment-migrate1-primary-0
            claimName: <EBS PVC name of catalog>
        - name: destination-pvc
          persistentVolumeClaim:    # DESTINATION PVC
            claimName: catalog      # new PVC (EFS) to which data will be copied
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      containers:
        - name: netbackup-migration
          image: OPERATOR_IMAGE:TAG    # image name with tag
          command: ["/migration", '{"VolumesList":[{"Src":"srcPvc","Dest":"destPvc","Verify":true,"StorageType":"catalog","OnlyCatalog":true}]}']
          volumeMounts:
            - name: source-pvc
              mountPath: /srcPvc
            - name: destination-pvc
              mountPath: /destPvc
      restartPolicy: Never
dataMigration.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rsync-data2
  namespace: ns-155
spec:
  template:
    spec:
      volumes:
        - name: source-pvc
          persistentVolumeClaim:    # SOURCE PVC
            # old PVC (EBS) from which data is to be copied,
            # for example: catalog-environment-migrate1-primary-0
            claimName: <EBS PVC name of catalog>
        - name: destination-pvc
          persistentVolumeClaim:    # DESTINATION PVC
            claimName: <data (EBS) PVC name>    # new data PVC (EBS) to which data will be copied
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      containers:
        - name: netbackup-migration
          image: OPERATOR_IMAGE:TAG    # image name with tag
          command: ["/migration", '{"VolumesList":[{"Src":"srcPvc","Dest":"destPvc","Verify":true,"StorageType":"data","OnlyCatalog":false}]}']
          volumeMounts:
            - name: source-pvc
              mountPath: /srcPvc
            - name: destination-pvc
              mountPath: /destPvc
      restartPolicy: Never
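To run the migration, apply both jobs and wait for them to finish. A usage sketch with the file names above:
kubectl apply -f catalogMigration.yaml -f dataMigration.yaml
kubectl get jobs -n ns-155                # wait for COMPLETIONS 1/1
kubectl logs job/rsync-data -n ns-155     # inspect the catalog migration output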
Delete the migration jobs once the pods are in the Completed state.
For the primary server, delete the old PVC (EBS) of the catalog volume, for example catalog-<Name_of_primary>-primary-0, and create a new PVC with the same name as the deleted PVC that was attached to the primary server.
Follow the naming conventions of the static PV and PVC to be consumed by the primary server deployment:
catalog-<Name_of_primary>-primary-0
data-<Name_of_primary>-primary-0

Example:
catalog-test-env-primary-0
data-test-env-primary-0

environment.yaml
apiVersion: netbackup.veritas.com/v2
kind: Environment
metadata:
  name: test-env
  namespace: ns-155
spec:
  ...
  primary:
    # Set name to control the name of the primary server.
    # The default value is the same as the Environment's metadata.name.
    name: test-env
Yaml to create the new catalog PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: catalog-test-env-primary-0
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: environment-pv-primary-catalog
Edit the PV (mounted on EFS) and replace the name, resourceVersion, and uid in its claimRef with those of the newly created PVC to meet the naming convention.
Get the PVs and PVCs using the following commands:
To get PVC details: kubectl get pvc -n <Namespace>
Use the edit command to get PVC details: kubectl edit pvc <new PVC (old name)> -n <Namespace>
To get PV details: kubectl edit pv <PV name (to which data was copied)>
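After the edit, the claimRef section of the PV should resemble the following sketch; the resourceVersion and uid placeholders must be copied from the kubectl edit pvc output above:
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: catalog-test-env-primary-0    # the newly created PVC
  namespace: ns-155
  resourceVersion: "<from the new PVC>"
  uid: "<from the new PVC>"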
Upgrade the MSDP with the new build and image tag. Apply the following command to MSDP:
./kubectl-msdp init --image <Image name:Tag> --storageclass <Storage Class Name> --namespace <Namespace>
Apply the operator from the new build (with the new image and tag) using the following command:
kubectl apply -k operator/
Edit the environment.yaml file from the new build and perform the following changes:
Add the tag: <new_tag_of_upgrade_image> tag separately under the primary section.
Provide the EFS ID for storageClassName of the catalog volume under the primary section, and set paused=false under the primary section. The EFS ID must be the same as the one used in the PV creation step in the above section.
Provide the storageClassName for the data and logs volumes, and then apply the environment.yaml file using the following command and ensure that the primary server is upgraded successfully:
kubectl apply -f environment.yaml
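For illustration only, the primary section after these edits might resemble the following sketch. The exact field layout is defined by the CR template (see Appendix A); the field names under storage are assumptions here, not confirmed syntax:
primary:
  paused: false
  tag: <new_tag_of_upgrade_image>
  storage:
    catalog:
      storageClassName: <EFS ID>               # same EFS ID as in the PV created earlier
    data:
      storageClassName: <EBS storage class>
    log:
      storageClassName: <EBS storage class>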
Upgrade the MSDP Scaleout by updating the new image tag in the msdpscaleout section in the environment.yaml file.
Apply the environment.yaml file using the following command and ensure that MSDP is deployed successfully:
kubectl apply -f environment.yaml
Edit the environment.yaml file and update the image tag for the Media Server in the mediaServer section.
Apply the environment.yaml file using the following command and ensure that the Media Server is deployed successfully:
kubectl apply -f environment.yaml
(EKS-specific) Perform the following steps when upgrading NetBackup from version 10.1
- Set paused to true for the primary environment controller as follows:
Edit the environment custom resource using the kubectl edit Environment <environmentCR_name> -n <namespace> command.
To pause the reconciler of the particular custom resource, change the paused: false value to paused: true in the primaryServer or mediaServer section and save the changes.
Scale down the primary server using the following commands:
To get the statefulset name: kubectl get sts -n <namespace>
To scale down the STS: kubectl scale sts --replicas=0 <STS name of primary server> -n <Namespace>
- Upgrade the MSDP with the new build and image tag. Apply the following command to MSDP:
./kubectl-msdp init --image <Image name:Tag> --storageclass <Storage Class Name> --namespace <Namespace>
- Edit the sample/environment.yaml file from the new build and perform the following changes:
Add the tag: <new_tag_of_upgrade_image> tag separately under the primary section.
Provide the EFS ID for storageClassName of the catalog volume in the primary section.
Note:
The provided EFS ID for storageClassName of the catalog volume must be the same as the EFS ID previously used to create the PV and PVC.
Use the following command to retrieve the previously used EFS ID from PV and PVC:
kubectl get pvc -n <namespace>
From the output, copy the name of the catalog PVC, which is of the following format:
catalog-<resource name prefix>-primary-0
Describe catalog PVC using the following command:
kubectl describe pvc <pvc name> -n <namespace>
Note down the value of the Volume field from the output.
Describe PV using the following command:
kubectl describe pv <value of Volume obtained from above step>
Note down the value of the VolumeHandle field from the output; this is the previously used EFS ID.
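Equivalently, the EFS ID can be read in a single step with jsonpath. A hedged example using the catalog PVC name format above:
kubectl get pv $(kubectl get pvc catalog-<resource name prefix>-primary-0 -n <namespace> -o jsonpath='{.spec.volumeName}') -o jsonpath='{.spec.csi.volumeHandle}'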
For the data and logs volumes, provide the storageClassName and then apply the environment.yaml file using the following command and ensure that the primary server is upgraded successfully:
kubectl apply -f environment.yaml
Upgrade the MSDP Scaleout by updating the new image tag in the msdpscaleout section in the environment.yaml file.
Apply the environment.yaml file using the following command and ensure that MSDP is deployed successfully:
kubectl apply -f environment.yaml
Edit the environment.yaml file and update the image tag for the Media Server in the mediaServer section.
Apply the environment.yaml file using the following command and ensure that the Media Server is deployed successfully:
kubectl apply -f environment.yaml
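To confirm the upgrade end to end, you can watch the pods and the Environment resource. A minimal sketch, assuming the CRD exposes the environments resource:
kubectl get pods -n <namespace>            # primary, media, and MSDP pods should reach Running
kubectl get environments -n <namespace>    # check the Environment CR status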