NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- Section IV. Maintenance
- MSDP Scaleout Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for Primary and Media servers
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Preparing the environment for NetBackup installation on Kubernetes cluster
Ensure that the following prerequisites are met before proceeding with the deployment for AKS/EKS.
Use the following checklist to prepare the AKS cluster for installation.
Your Azure Kubernetes cluster must be created with appropriate network and configuration settings.
For a complete list of supported Kubernetes cluster versions, see the NetBackup Compatibility List for all Versions.
While creating the cluster, assign appropriate roles and permissions.
Refer to the 'Concepts - Access and identity in Azure Kubernetes Services (AKS)' section in Microsoft Azure Documentation.
Use an existing Azure container registry or create a new one. Your Kubernetes cluster must be able to access this registry to pull the images from the container registry. For more information on the Azure container registry, see 'Azure Container Registry documentation' section in Microsoft Azure Documentation.
Deploying the Primary and Media server installation on the same node pool (node) is possible. For optimal performance, it is recommended to create separate node pools. Select the Scale method as Autoscale. The autoscaling feature allows your node pool to scale dynamically by provisioning and de-provisioning nodes automatically as required. A dedicated node pool for the Primary server must be created in the Azure Kubernetes cluster.
The following table lists the node configuration for the primary and media servers.
Node type: D16ds v4
Disk type: P30
vCPU: 16
RAM: 64 GiB
Total disk size per node: 1 TB
Number of disks per node: 1
Cluster storage size:
- Small (4 nodes): 4 TB
- Medium (8 nodes): 8 TB
- Large (16 nodes): 16 TB
Another dedicated node pool must be created for Snapshot Manager (if it has to be deployed) with auto scaling enabled.
Following is the minimum configuration required for the Snapshot Manager data plane node pool:
Node type: B4ms
RAM: 8 GB
Number of nodes: minimum 1, with autoscaling enabled
Maximum pods per node: 6 (system) + 4 (static pods) + RAM*2 (dynamic) = 26 pods or more
The following scenarios show how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the above formula:
For a node configuration with 2 CPUs and 8 GB RAM:
CPU: more than 2 CPUs
RAM: 8 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 8*2 = 16 (dynamic pods) = 26 or more
Autoscaling enabled: minimum = 1, maximum = 3
Note:
The above configuration runs 8 jobs per node at once.
For a node configuration with 2/4/6 CPUs and 16 GB RAM:
CPU: more than 2/4/6 CPUs
RAM: 16 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 16*2 = 32 (dynamic pods) = 42 or more
Autoscaling enabled: minimum = 1, maximum = 3
Note:
The above configuration runs 16 jobs per node at once.
All the nodes in the node pool must run the Linux operating system. Only Linux-based operating systems with default settings are supported.
Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.
Taints are set on the node pool while creating the node pool in the cluster. Tolerations are set on the pods.
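For example, a minimal sketch of tainting a dedicated node pool and tolerating it from a pod; the pool name, taint key, and values below are illustrative, not names required by NetBackup:
az aks nodepool add \
  --resource-group <resource_group_name> \
  --cluster-name <cluster_name> \
  --name nbuprimary \
  --node-taints "nbu-pool=primary:NoSchedule" \
  --enable-cluster-autoscaler --min-count 1 --max-count 3
A pod that must run on that pool then declares a matching toleration:
tolerations:
- key: "nbu-pool"
  operator: "Equal"
  value: "primary"
  effect: "NoSchedule"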
If you want to use static private IPs and fully qualified domain names for the load balancer service, private IP addresses and FQDNs must be created in AKS before deployment.
If you want to bind the load balancer service IPs to a specific subnet, the subnet must be created in AKS and its name must be updated in the corresponding key in the custom resource (CR). For more information on the network configuration for a load balancer service, refer to the How-to-Guide section of the Microsoft Azure Documentation.
For more information on managing the load balancer service, see About the Load Balancer service.
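As an illustration, the standard AKS annotations that bind an internal load balancer Service to a specific subnet look like the following; in a NetBackup deployment these values are supplied through the CR rather than written on a Service manually, and the subnet name and IP below are placeholders:
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "<subnet_name>"
spec:
  type: LoadBalancer
  loadBalancerIP: <static_private_ip>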
Create a storage class of the Azure Files storage type with the file.csi.azure.com provisioner that allows volume expansion. It must be in the LRS category with Premium SSD. It is recommended that the storage class has the Retain reclaim policy. Such a storage class can be used for the primary server, which supports Azure premium files storage only for the catalog volume. For more information on Azure premium files, see the 'Azure Files CSI driver' section of the Microsoft Azure Documentation.
For example,
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ custom-storage-class-name }}
provisioner: file.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  storageaccounttype: Premium_LRS
  protocol: nfs
Create a storage class with the Managed disk storage type, with allowVolumeExpansion = true and reclaimPolicy = Retain. This storage class is used for the Primary server data and log volumes. Media server storage supports Azure disks only (see the example storage class after this item).
The customer's Azure subscription must have the required built-in role assigned. For more information, see the 'Azure built-in roles' section of the Microsoft Azure Documentation.
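For example, a minimal sketch of such a storage class, assuming the Azure Disk CSI driver and a Premium SSD SKU (the class name is illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain
provisioner: disk.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  skuName: Premium_LRS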
- Create a Kubernetes cluster with the following guidelines:
Use Kubernetes version 1.27 onwards.
The AWS default CNI is used during cluster creation.
Create a node group in a single availability zone, with an instance type of at least the m5.4xlarge configuration, and select an attached EBS volume size of more than 100 GB for each node (see the eksctl sketch after the IAM policy list below).
The node group uses the AWS manual or autoscaling group feature, which allows your node group to scale by provisioning and de-provisioning nodes automatically as required.
Note:
All the nodes in the node group must be running the Linux operating system.
Minimum required policies in IAM role:
AmazonEKSClusterPolicy
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryPowerUser
AmazonEKS_CNI_Policy
AmazonEKSServicePolicy
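As an illustration, a node group matching these guidelines could be created with eksctl; eksctl is not mandated by this guide, and the names, counts, and zone below are placeholders:
eksctl create nodegroup \
  --cluster <cluster_name> \
  --region <region_name> \
  --name nbu-primary-pool \
  --node-type m5.4xlarge \
  --node-volume-size 120 \
  --nodes-min 1 --nodes-max 3 \
  --node-zones <availability_zone>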
- Use an existing AWS Elastic Container Registry or create a new one, and ensure that the EKS cluster has full access to pull images from the elastic container registry.
- It is recommended to create a separate node pool for the Media server installation, with the autoscaler add-on installed in the cluster. The autoscaling feature allows your node pool to scale dynamically by provisioning and de-provisioning nodes automatically as required.
- A dedicated node pool for Primary server must be created in Amazon Elastic Kubernetes Services cluster.
The following table lists the node configuration for the primary and media servers.
Node type: m5.4xlarge
vCPU: 16
RAM: 64 GiB
Total disk size per node: 1 TB
Number of disks per node: 1
Cluster storage size:
- Small (4 nodes): 4 TB
- Medium (8 nodes): 8 TB
- Large (16 nodes): 16 TB
- Another dedicated node pool must be created for Snapshot Manager (if it has to be deployed) with auto scaling enabled.
Following is the minimum configuration required for the Snapshot Manager data plane node pool:
Node type: t3.large
RAM: 8 GB
Number of nodes: minimum 1, with autoscaling enabled
Maximum pods per node:
- The number of IPs required for the Snapshot Manager data pool must be greater than: number of nodes (for each node's own IP) + (RAM size per node * 2 * number of nodes) + (number of all kube-system pods running on all nodes) + static listener pod + number of nodes (for the fluent daemonset)
- The number of IPs required for the Snapshot Manager control pool must be greater than: number of nodes (for each node's own IP) + number of flexsnap pods (15) + number of flexsnap services (6) + nginx load balancer IP + number of additional off-host agents + operator + (number of all kube-system pods running on all nodes)
The following scenarios show how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the above formula:
For DBPaaS workloads
Note:
The following configuration is advised because the CPU credit limit was reached with the T-series workload.
Node type: m4.2xlarge
RAM: 32 GB
For a node configuration with 2 CPUs and 8 GB RAM:
CPU: more than 2 CPUs
RAM: 8 GB
Maximum pods per node:
- The number of IPs required for the Snapshot Manager data pool must be greater than: number of nodes (for each node's own IP) + (RAM size per node * 2 * number of nodes) + (number of all kube-system pods running on all nodes) + static listener pod + number of nodes (for the fluent daemonset)
- The number of IPs required for the Snapshot Manager control pool must be greater than: number of nodes (for each node's own IP) + number of flexsnap pods (15) + number of flexsnap services (6) + nginx load balancer IP + number of additional off-host agents + operator + (number of all kube-system pods running on all nodes)
Autoscaling enabled: minimum = 1, maximum = 3
Note:
The above configuration runs 8 jobs per node at once.
For a node configuration with 2/4/6 CPUs and 16 GB RAM:
CPU: more than 2/4/6 CPUs
RAM: 16 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 16*2 = 32 (dynamic pods) = 42 or more
Autoscaling enabled: minimum = 1, maximum = 3
Note:
The above configuration runs 16 jobs per node at once.
- Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.
Taints are set on the node group while creating the node group in the cluster. Tolerations are set on the pods.
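For example, a minimal sketch using the AWS CLI; the taint key and values are illustrative, and the required networking and IAM parameters of create-nodegroup are omitted here:
aws eks create-nodegroup \
  --cluster-name <cluster_name> \
  --nodegroup-name nbu-media-pool \
  --taints key=nbu-pool,value=media,effect=NO_SCHEDULE \
  ... (remaining required parameters)
A pod that must run on that node group then declares a matching toleration:
tolerations:
- key: "nbu-pool"
  operator: "Equal"
  value: "media"
  effect: "NoSchedule"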
- Deploy the AWS Load Balancer Controller add-on in the cluster.
For more information on installing the add-on, see 'Installing the AWS Load Balancer Controller add-on' section of the Amazon EKS User Guide.
- Install cert-manager and trust-manager as follows:
Install cert-manager by using the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml
For more information, see the documentation for cert-manager installation.
Install trust-manager by using the following commands:
helm repo add jetstack https://charts.jetstack.io --force-update
kubectl create namespace trust-manager
helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager --set app.trust.namespace=netbackup --version v0.7.0 --wait
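To confirm that both components are running before you continue, a routine check is:
kubectl get pods -n cert-manager
kubectl get pods -n trust-manager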
- The FQDN that is provided in the primary server CR and media server CR specifications in the networkLoadBalancer section must be DNS-resolvable to the provided IP address.
- Amazon Elastic File System (Amazon EFS) is used for shared persistent storage. To create EFS for the primary server, see the 'Create your Amazon EFS file system' section of the Amazon EKS User Guide.
The EFS configuration can be as follows; you can update the Throughput mode as required:
Performance mode: General Purpose
Throughput mode: Bursting (256 MiB/s)
Availability zone: Regional
Note:
The Throughput mode can be increased at runtime depending on the size of the workloads; if you see performance issues, you can increase the throughput up to 1024 MiB/s (see the CLI sketch after the next note).
Note:
To install the add-on in the cluster, ensure that you install the Amazon EFS CSI driver. For more information on installing the Amazon EFS CSI driver, see 'Amazon EFS CSI driver' section of the Amazon EKS User Guide.
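As an illustration, the same configuration can be created, and the throughput later raised, with the AWS CLI; the file system ID and tag value are placeholders:
aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --tags Key=Name,Value=nbu-primary-catalog
aws efs update-file-system \
  --file-system-id <file_system_id> \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 1024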
- If the NetBackup client is outside the VPC, or if you want to access the Web UI from outside the VPC, then the NetBackup client CIDR must be added with all NetBackup ports in the security group inbound rule of the cluster. See About the Load Balancer service for more information on NetBackup ports.
To obtain the cluster security group, run the following command:
aws eks describe-cluster --name <my-cluster> --query cluster.resourcesVpcConfig.clusterSecurityGroupId
The following section describes how to add an inbound rule to the security group:
'Add rules to a security group' section of the Amazon EKS User Guide.
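For example, a sketch that opens the NetBackup PBX port (1556) to a client CIDR; the group ID and CIDR are placeholders, and the command is repeated for each required NetBackup port:
aws ec2 authorize-security-group-ingress \
  --group-id <cluster_security_group_id> \
  --protocol tcp \
  --port 1556 \
  --cidr <client_cidr>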
- Create a storage class with the EBS storage type, with allowVolumeExpansion = true and reclaimPolicy = Retain. This storage class is to be used for the data and log volumes of both the primary and media servers.
For example,
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ebs-csi-storage-class
parameters:
  fsType: ext4
  type: gp2
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
Note:
To install the add-on in the cluster, ensure that you install the Amazon EBS CSI driver. For more information on installing the Amazon EBS CSI driver, see the 'Managing the Amazon EBS CSI driver as an Amazon EKS add-on' and 'Amazon EBS CSI driver' sections of the Amazon EKS User Guide.
- The EFS-based PV must be specified for the Primary server catalog volume, with reclaimPolicy=Retain, as shown in the sketch below.
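A minimal sketch of such a statically provisioned PV, assuming the Amazon EFS CSI driver; the name, size, and file system ID are placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nbu-catalog-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <file_system_id>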
Use the following checklist to address the prerequisites on the system that you want to use as a NetBackup host that connects to the AKS/EKS cluster.
Linux operating system: For a complete list of compatible Linux operating systems, refer to the Software Compatibility List (SCL) at:
Install Docker on the host so that the NetBackup container images can be loaded from the TAR package, and start the container service.
Prepare the host to manage the AKS cluster.
Install Azure CLI.
For more information, see 'Install the Azure CLI on Linux' section of the Microsoft Azure Documentation.
Install Kubernetes CLI.
For more information, see 'Install and Set Up kubectl on Linux' section of the Kubernetes Documentation.
Log in to the Azure environment to access the Kubernetes cluster by running the following commands in the Azure CLI:
az login --identity
az account set --subscription <subscriptionID>
az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>
az resource list -n $cluster_name --query [*].identity.principalId --out tsv
az role assignment create --assignee <identity.principalId> --role 'Contributor' --scope /subscriptions/$subscription_id/resourceGroups/NBUX-QA-BiDi-RG/providers/Microsoft.Network/virtualNetworks/NBUX-QA-BiDiNet01/subnets/$subnet
az login --scope https://graph.microsoft.com//.default
Log in to the container registry:
az acr login -n <container-registry-name>
Install AWS CLI.
For more information on installing the AWS CLI, see the 'Install or update the latest version of the AWS CLI' section of the AWS Command Line Interface User Guide.
Install Kubectl CLI.
For more information on installing the Kubectl CLI, see 'Installing kubectl' section of the Amazon EKS User Guide.
Configure docker to enable the push of the container images to the container registry.
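For example, with ECR the usual flow is to authenticate docker, load the images from the TAR package, and then tag and push them; the account ID, region, repository, and image names below are placeholders:
aws ecr get-login-password --region <region_name> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region_name>.amazonaws.com
docker load -i <netbackup_images>.tar
docker tag <image>:<tag> <account_id>.dkr.ecr.<region_name>.amazonaws.com/<repository>:<tag>
docker push <account_id>.dkr.ecr.<region_name>.amazonaws.com/<repository>:<tag>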
Create the OIDC provider for the AWS EKS cluster.
For more information on creating the OIDC provider, see 'Create an IAM OIDC provider for your cluster' section of the Amazon EKS User Guide.
Create an IAM service account for the AWS EKS cluster.
For more information on creating an IAM service account, see 'Configuring a Kubernetes service account to assume an IAM role' section of the Amazon EKS User Guide.
If an IAM role needs access to the EKS cluster, run the following command from a system that already has access to the EKS cluster:
kubectl edit -n kube-system configmap/aws-auth
For more information on creating an IAM role, see Enabling IAM user and role access to your cluster.
Log in to the AWS environment to access the Kubernetes cluster by running the following command in the AWS CLI:
aws eks --region <region_name> update-kubeconfig --name <cluster_name>
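You can then confirm cluster access with:
kubectl get nodes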
Free space of approximately 13 GB is required in the location where you copy and extract the product installation TAR package file. If you are using docker locally, approximately 8 GB should be available in the /var/lib/docker location so that the images can be loaded into the docker cache before being pushed to the container registry.
The AWS EFS CSI driver must be installed for static PV/PVC creation of the primary catalog volume.