NetBackup™ Deployment Guide for Kubernetes Clusters

Last Published:
Product(s): NetBackup & Alta Data Protection (10.4.0.1)
  1. Introduction
    1. About Cloud Scale deployment
    2. About NetBackup Snapshot Manager
    3. About MSDP Scaleout
    4. Required terminology
    5. User roles and permissions
  2. Section I. Configurations
    1. Prerequisites
      1. Preparing the environment for NetBackup installation on Kubernetes cluster
      2. Prerequisites for MSDP Scaleout and Snapshot Manager (AKS/EKS)
      3. Prerequisites for Kubernetes cluster configuration
        1. Config-Checker utility
        2. Data-Migration for AKS
        3. Webhooks validation for EKS
      4. Prerequisites for Cloud Scale configuration
        1. Cluster specific settings
        2. Cloud specific settings
      5. Prerequisites for deploying environment operators
    2. Recommendations and Limitations
      1. Recommendations of NetBackup deployment on Kubernetes cluster
      2. Limitations of NetBackup deployment on Kubernetes cluster
      3. Limitations in MSDP Scaleout
    3. Configurations
      1. Contents of the TAR file
      2. Initial configurations
      3. Configuring the environment.yaml file
      4. Loading docker images
        1. Installing the docker images for NetBackup
        2. Installing the docker images for Snapshot Manager
        3. Installing the docker images and binaries for MSDP Scaleout
      5. Configuring NetBackup IT Analytics for NetBackup deployment
      6. Configuring NetBackup
        1. Primary and media server CR
          1. After installing primary server CR
          2. After installing the media server CR
        2. Elastic media server
    4. Configuration of key parameters in Cloud Scale deployments
      1. Tuning touch files
      2. Setting maximum jobs
      3. Enabling intelligent catalog archiving
      4. Enabling security settings
      5. Configuring email server
      6. Reducing catalog storage management
      7. Configuring zone redundancy
      8. Enabling client-side deduplication capabilities
  3. Section II. Deployment
    1. Deploying operators
      1. Deploying the operators
    2. Deploying Postgres
      1. Deploying Postgres
      2. Enabling request logging, updating configuration, and copying files from/to the PostgreSQL pod
    3. Deploying Cloud Scale
      1. Installing Cloud Scale
    4. Deploying MSDP Scaleout
      1. MSDP Scaleout configuration
        1. Initializing the MSDP operator
        2. Configuring MSDP Scaleout
        3. Configuring the MSDP cloud in MSDP Scaleout
        4. Using MSDP Scaleout as a single storage pool in NetBackup
        5. Using S3 service in MSDP Scaleout
        6. Enabling MSDP S3 service after MSDP Scaleout is deployed
      2. Deploying MSDP Scaleout
    5. Verifying Cloud Scale deployment
      1. Verifying Cloud Scale deployment
  4. Section III. Monitoring and Management
    1. Monitoring NetBackup
      1. Monitoring the application health
      2. Telemetry reporting
      3. About NetBackup operator logs
      4. Monitoring Primary/Media server CRs
      5. Expanding storage volumes
      6. Allocating static PV for Primary and Media pods
        1. Recommendation for media server volume expansion
        2. (AKS-specific) Allocating static PV for Primary and Media pods
        3. (EKS-specific) Allocating static PV for Primary and Media pods
    2. Monitoring Snapshot Manager
      1. Overview
      2. Logs of Snapshot Manager
      3. Configuration parameters
    3. Monitoring MSDP Scaleout
      1. About MSDP Scaleout status and events
      2. Monitoring with Amazon CloudWatch
      3. Monitoring with Azure Container insights
      4. The Kubernetes resources for MSDP Scaleout and MSDP operator
    4. Managing NetBackup
      1. Managing NetBackup deployment using VxUpdate
      2. Updating the Primary/Media server CRs
      3. Migrating the cloud node for primary or media servers
    5. Managing the Load Balancer service
      1. About the Load Balancer service
      2. Notes for Load Balancer service
      3. Opening the ports from the Load Balancer service
    6. Managing PostgreSQL DBaaS
      1. Changing database server password in DBaaS
      2. Updating database certificate in DBaaS
    7. Performing catalog backup and recovery
      1. Backing up a catalog
      2. Restoring a catalog
        1. Primary server corrupted
        2. MSDP-X corrupted
        3. MSDP-X and Primary server corrupted
    8. Managing MSDP Scaleout
      1. Adding MSDP engines
      2. Adding data volumes
      3. Expanding existing data or catalog volumes
        1. Manual storage expansion
      4. MSDP Scaleout scaling recommendations
      5. MSDP Cloud backup and disaster recovery
        1. About the reserved storage space
        2. Cloud LSU disaster recovery
          1. Recovering MSDP S3 IAM configurations from cloud LSU
      6. MSDP multi-domain support
      7. Configuring Auto Image Replication
      8. About MSDP Scaleout logging and troubleshooting
        1. Collecting the logs and the inspection information
  5. Section IV. Maintenance
    1. MSDP Scaleout Maintenance
      1. Pausing the MSDP Scaleout operator for maintenance
      2. Logging in to the pods
      3. Reinstalling MSDP Scaleout operator
      4. Migrating the MSDP Scaleout to another node pool
    2. PostgreSQL DBaaS Maintenance
      1. Configuring maintenance window for PostgreSQL database in AWS
      2. Setting up alarms for PostgreSQL DBaaS instance
    3. Patching mechanism for Primary and Media servers
      1. Overview
      2. Patching of containers
    4. Upgrading
      1. Upgrading Cloud Scale deployment for Postgres using Helm charts
      2. Upgrading NetBackup individual components
        1. Upgrading NetBackup operator
        2. Upgrading NetBackup application
          1. Upgrade NetBackup from previous versions
          2. Procedure to rollback when upgrade of NetBackup fails
        3. Upgrading MSDP Scaleout
        4. Upgrading Snapshot Manager
          1. Post-migration tasks
    5. Cloud Scale Disaster Recovery
      1. Cluster backup
      2. Environment backup
      3. Cluster recovery
      4. Cloud Scale recovery
      5. Environment Disaster Recovery
      6. DBaaS Disaster Recovery
    6. Uninstalling
      1. Uninstalling NetBackup environment and the operators
      2. Uninstalling Postgres using Helm charts
      3. Uninstalling Snapshot Manager from Kubernetes cluster
      4. Uninstalling MSDP Scaleout from Kubernetes cluster
        1. Cleaning up MSDP Scaleout
        2. Cleaning up the MSDP Scaleout operator
    7. Troubleshooting
      1. Troubleshooting AKS and EKS issues
        1. View the list of operator resources
        2. View the list of product resources
        3. View operator logs
        4. View primary logs
        5. Socket connection failure
        6. Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
        7. Resolving the issue where the NetBackup server pod is not scheduled for long time
        8. Resolving an issue where the Storage class does not exist
        9. Resolving an issue where the primary server or media server deployment does not proceed
        10. Resolving an issue of failed probes
        11. Resolving token issues
        12. Resolving an issue related to insufficient storage
        13. Resolving an issue related to invalid nodepool
        14. Resolving a token expiry issue
        15. Resolving an issue related to KMS database
        16. Resolving an issue related to pulling an image from the container registry
        17. Resolving an issue related to recovery of data
        18. Check primary server status
        19. Pod status field shows as pending
        20. Ensure that the container is running the patched image
        21. Getting EEB information from an image, a running container, or persistent data
        22. Resolving the certificate error issue in NetBackup operator pod logs
        23. Pod restart failure due to liveness probe time-out
        24. NetBackup messaging queue broker takes more time to start
        25. Host mapping conflict in NetBackup
        26. Issue with capacity licensing reporting which takes longer time
        27. Local connection is getting treated as insecure connection
        28. Primary pod is in pending state for a long duration
        29. Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
        30. Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
        31. Taint, Toleration, and Node affinity related issues in cpServer
        32. Operations performed on cpServer in environment.yaml file are not reflected
        33. Elastic media server related issues
        34. Failed to register Snapshot Manager with NetBackup
        35. Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
        36. Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
      2. Troubleshooting AKS-specific issues
        1. Data migration unsuccessful even after changing the storage class through the storage yaml file
        2. Host validation failed on the target host
        3. Primary pod goes in non-ready state
      3. Troubleshooting EKS-specific issues
        1. Resolving the primary server connection issue
        2. NetBackup Snapshot Manager deployment on EKS fails
        3. Wrong EFS ID is provided in environment.yaml file
        4. Primary pod is in ContainerCreating state
        5. Webhook displays an error for PV not found
  6. Appendix A. CR template
    1. Secret
    2. MSDP Scaleout CR
      1. MSDP Scaleout CR template for AKS
      2. MSDP Scaleout CR template for EKS

Preparing the environment for NetBackup installation on Kubernetes cluster

Ensure that the following prerequisites are met before proceeding with the deployment for AKS/EKS.

AKS-specific requirements

Use the following checklist to prepare the AKS cluster for installation.

  • Your Azure Kubernetes cluster must be created with appropriate network and configuration settings.

    For a complete list of supported Kubernetes cluster versions, see the NetBackup Compatibility List for all Versions.

  • While creating the cluster, assign appropriate roles and permissions.

    Refer to the 'Concepts - Access and identity in Azure Kubernetes Services (AKS)' section in Microsoft Azure Documentation.

  • Use an existing Azure container registry or create a new one. Your Kubernetes cluster must be able to pull images from this registry. For more information on the Azure container registry, see the 'Azure Container Registry documentation' section in Microsoft Azure Documentation.

  • The primary server and media server can be installed on the same node pool (node), but for optimal performance it is recommended to create separate node pools. Select the Scale method as Autoscale. The autoscaling feature allows the node pool to scale dynamically by provisioning and de-provisioning nodes automatically as required.

  • A dedicated node pool for the primary server must be created in the Azure Kubernetes cluster, as shown in the sketch after the following table.

    The following table lists the node configuration for the primary and media servers.

    Node type                      D16ds v4
    Disk type                      P30
    vCPU                           16
    RAM                            64 GiB
    Number of disks per node       1
    Total disk size per node       1 TB
    Cluster storage size           Small (4 nodes): 4 TB
                                   Medium (8 nodes): 8 TB
                                   Large (16 nodes): 16 TB
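    For example, a dedicated primary server node pool with autoscaling enabled can be created with a command similar to the following (a sketch; the resource group, cluster, and node pool names are placeholders):

    # Create a dedicated, autoscaling node pool for the primary server
    az aks nodepool add \
      --resource-group <resource_group_name> \
      --cluster-name <cluster_name> \
      --name nbuprimary \
      --node-vm-size Standard_D16ds_v4 \
      --node-count 1 \
      --enable-cluster-autoscaler \
      --min-count 1 \
      --max-count 3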

  • Another dedicated node pool must be created for Snapshot Manager (if it has to be deployed) with auto scaling enabled.

    The following is the minimum configuration required for the Snapshot Manager data plane node pool:

    Node type                      B4ms
    RAM                            8 GB
    Number of nodes                Minimum 1, with autoscaling enabled
    Maximum pods per node          6 (system) + 4 (static pods) + RAM*2 (dynamic) = 26 pods or more

    The following scenarios show how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the formula above:

    • For a node configuration with 2 CPUs and 8 GB RAM:

      CPU                          More than 2 CPUs
      RAM                          8 GB
      Maximum pods per node        6 (system) + 4 (static pods) + 8*2 = 16 (dynamic pods) = 26 or more
      Autoscaling enabled          Minimum = 1, Maximum = 3

      Note:

      The above configuration runs 8 jobs per node at once.

    • For a node configuration with 2/4/6 CPUs and 16 GB RAM:

      CPU                          More than 2/4/6 CPUs
      RAM                          16 GB
      Maximum pods per node        6 (system) + 4 (static pods) + 16*2 = 32 (dynamic pods) = 42 or more
      Autoscaling enabled          Minimum = 1, Maximum = 3

      Note:

      The above configuration runs 16 jobs per node at once.

  • All the nodes in the node pool must run the Linux operating system. Linux-based operating systems are supported only with default settings.

  • Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.

    Taints are set on the node pool while creating the node pool in the cluster. Tolerations are set on the pods, as shown in the sketch below.
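    For example (a sketch; the taint key and value are placeholders, not values required by NetBackup):

    # Set a taint when creating the node pool
    az aks nodepool add --resource-group <resource_group_name> \
      --cluster-name <cluster_name> --name nbupool \
      --node-taints nbu-pool=primary:NoSchedule

    # Matching toleration in a pod spec
    tolerations:
    - key: "nbu-pool"
      operator: "Equal"
      value: "primary"
      effect: "NoSchedule"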

  • If you want to use static private IPs and fully qualified domain names for the load balancer service, private IP addresses and FQDNs must be created in AKS before deployment.

  • If you want to bind the load balancer service IPs to a specific subnet, the subnet must be created in AKS and its name must be updated in the annotations key in the networkLoadBalancer section of the custom resource (CR).

    For more information on the network configuration for a load balancer service, refer to the How-to-Guide section of the Microsoft Azure Documentation.

    For more information on managing the load balancer service, see About the Load Balancer service.

  • Create a storage class of the Azure Files storage type with the file.csi.azure.com provisioner that allows volume expansion. It must be in the LRS category with Premium SSD. It is recommended that the storage class has the Retain reclaim policy. Such a storage class can be used for the primary server, which supports Azure premium files storage only for the catalog volume.

    For more information on Azure premium files, see 'Azure Files CSI driver' section of Microsoft Azure Documentation.

    For example,

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: {{ custom-storage-class-name }}
    provisioner: file.csi.azure.com
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      storageaccounttype: Premium_LRS
      protocol: nfs
    
  • Create a storage class of the Managed disk storage type with allowVolumeExpansion = true and reclaimPolicy = Retain, as shown in the sketch below. This storage class is used for the primary server data and log volumes. The media server storage supports Azure disks only.
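    For example (a sketch assuming the Azure Disk CSI driver; the class name is illustrative):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: managed-premium-retain
    provisioner: disk.csi.azure.com
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      skuName: Premium_LRS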

  • The customer's Azure subscription must have the Network Contributor role.

    For more information, see 'Azure built-in roles' section of Microsoft Azure Documentation.

EKS-specific requirements
  1. Create a Kubernetes cluster with the following guidelines:
    • Use Kubernetes version 1.27 onwards.

    • AWS default CNI is used during cluster creation.

    • Create a node group in a single availability zone, with an instance type of at least m5.4xlarge and an attached EBS volume of more than 100 GB for each node.

      The node group uses the AWS manual or auto scaling group feature, which allows the node group to scale by provisioning and de-provisioning nodes automatically as required.

      Note:

      All the nodes in the node group must run the Linux operating system.

    • Minimum required policies in IAM role:

      • AmazonEKSClusterPolicy

      • AmazonEKSWorkerNodePolicy

      • AmazonEC2ContainerRegistryPowerUser

      • AmazonEKS_CNI_Policy

      • AmazonEKSServicePolicy
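      For example, the required managed policies can be attached to the node IAM role as follows (a sketch; the role name is a placeholder):

      for policy in AmazonEKSClusterPolicy AmazonEKSWorkerNodePolicy \
          AmazonEC2ContainerRegistryPowerUser AmazonEKS_CNI_Policy \
          AmazonEKSServicePolicy; do
        aws iam attach-role-policy --role-name <eks_node_role> \
          --policy-arn "arn:aws:iam::aws:policy/$policy"
      done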

  2. Use an existing AWS Elastic Container Registry or create a new one, and ensure that the EKS cluster has full access to pull images from it.
  3. It is recommended to create a separate node pool for the media server installation, with the autoscaler add-on installed in the cluster. The autoscaling feature allows the node pool to scale dynamically by provisioning and de-provisioning nodes automatically as required.
  4. A dedicated node pool for the primary server must be created in the Amazon Elastic Kubernetes Service cluster, as shown in the sketch after the following table.

    The following table lists the node configuration for the primary and media servers.

    Node type                      m5.4xlarge
    vCPU                           16
    RAM                            64 GiB
    Number of disks per node       1
    Total disk size per node       1 TB
    Cluster storage size           Small (4 nodes): 4 TB
                                   Medium (8 nodes): 8 TB
                                   Large (16 nodes): 16 TB
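    For example, a dedicated primary server node group can be created with a command similar to the following (a sketch; the names, role ARN, and subnet are placeholders):

    aws eks create-nodegroup \
      --cluster-name <cluster_name> \
      --nodegroup-name nbu-primary \
      --node-role <node_role_arn> \
      --subnets <subnet_id> \
      --instance-types m5.4xlarge \
      --disk-size 100 \
      --scaling-config minSize=1,maxSize=3,desiredSize=1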

  5. Another dedicated node pool must be created for Snapshot Manager (if it has to be deployed) with auto scaling enabled.

    The following is the minimum configuration required for the Snapshot Manager data plane node pool:

    Node type                      t3.large
    RAM                            8 GB
    Number of nodes                Minimum 1, with autoscaling enabled
    Maximum pods per node          See the IP requirements below.

    The number of IPs required for the Snapshot Manager data pool must be greater than:

    number of nodes (for each node's own IP) + (RAM size per node * 2 * number of nodes) + (number of all kube-system pods running on all nodes) + static listener pod + number of nodes (for the fluentd daemonset)

    The number of IPs required for the Snapshot Manager control pool must be greater than:

    number of nodes (for each node's own IP) + number of flexsnap pods (15) + number of flexsnap services (6) + nginx load balancer IP + number of additional off-host agents + operator + (number of all kube-system pods running on all nodes)

    The following scenarios show how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the formula above:

    • For DBPaaS workloads:

      Note:

      The following configuration is advised because the CPU credit limit was reached with the T-series workload.

      Node type                    m4.2xlarge
      RAM                          32 GB

    • For a node configuration with 2 CPUs and 8 GB RAM:

      CPU                          More than 2 CPUs
      RAM                          8 GB
      Maximum pods per node        The data pool and control pool IP formulas above apply.
      Autoscaling enabled          Minimum = 1, Maximum = 3

      Note:

      The above configuration runs 8 jobs per node at once.

    • For a node configuration with 2/4/6 CPUs and 16 GB RAM:

      CPU                          More than 2/4/6 CPUs
      RAM                          16 GB
      Maximum pods per node        6 (system) + 4 (static pods) + 16*2 = 32 (dynamic pods) = 42 or more
      Autoscaling enabled          Minimum = 1, Maximum = 3

      Note:

      The above configuration runs 16 jobs per node at once.

  6. Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.

    Taints are set on the node group while creating the node group in the cluster. Tolerations are set on the pods, as shown in the sketch below.
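    For example (a sketch; the taint key and value are placeholders, not values required by NetBackup):

    # Set a taint when creating the node group
    aws eks create-nodegroup --cluster-name <cluster_name> \
      --nodegroup-name nbu-media --node-role <node_role_arn> \
      --subnets <subnet_id> \
      --taints key=nbu-pool,value=media,effect=NO_SCHEDULE

    # Matching toleration in a pod spec
    tolerations:
    - key: "nbu-pool"
      operator: "Equal"
      value: "media"
      effect: "NoSchedule"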

  7. Deploy the AWS Load Balancer Controller add-on in the cluster.

    For more information on installing the add-on, see 'Installing the AWS Load Balancer Controller add-on' section of the Amazon EKS User Guide.

  8. Install cert-manager and trust-manager as follows:
    • Install cert-manager by using the following command:

      kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml

      For more information, see Documentation for cert-manager installation.

    • Install trust-manager by using the following command:

      helm repo add jetstack https://charts.jetstack.io --force-update

      kubectl create namespace trust-manager

      helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager --set app.trust.namespace=netbackup --version v0.7.0 --wait
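      To verify that both components are running, a quick check similar to the following can be used (the namespaces assume the default installation shown above):

      kubectl get pods -n cert-manager
      kubectl get pods -n trust-manager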

  9. The FQDN provided in the networkLoadBalancer section of the primary server CR and media server CR specifications must be DNS-resolvable to the provided IP address.
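    For example, you can verify from the host that the FQDN resolves to the expected IP (the name below is a placeholder):

    nslookup nbu-primary.example.com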
  10. Use Amazon Elastic File System (Amazon EFS) for shared persistent storage. To create EFS for the primary server, see the 'Create your Amazon EFS file system' section of the Amazon EKS User Guide.

    The EFS configuration can be as follows; you can update the Throughput mode as required (a creation sketch follows the notes below):

    Performance mode:  General Purpose

    Throughput mode: Bursting (256 MiB/s)

    Availability zone: Regional

    Note:

    Throughput mode can be increased at runtime depending on the size of the workloads. If you observe performance issues, you can increase the Throughput mode up to 1024 MiB/s.

    Note:

    Ensure that you install the Amazon EFS CSI driver add-on in the cluster. For more information on installing the Amazon EFS CSI driver, see the 'Amazon EFS CSI driver' section of the Amazon EKS User Guide.
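    For example, a file system matching this configuration can be created with a command similar to the following (a sketch; the tag value is illustrative):

    aws efs create-file-system \
      --performance-mode generalPurpose \
      --throughput-mode bursting \
      --tags Key=Name,Value=nbu-primary-catalog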

  11. If the NetBackup client is outside the VPC, or if you want to access the Web UI from outside the VPC, then the NetBackup client CIDR must be added with all NetBackup ports to the security group inbound rules of the cluster. See About the Load Balancer service for more information on NetBackup ports.
    • To obtain the cluster security group, run the following command:

      aws eks describe-cluster --name <my-cluster> --query cluster.resourcesVpcConfig.clusterSecurityGroupId

    • To add an inbound rule to the security group, see the 'Add rules to a security group' section of the Amazon EKS User Guide.
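    For example, an inbound rule allowing a NetBackup port can be added as follows (a sketch; the security group ID and client CIDR are placeholders, and 1556 is shown as a representative NetBackup port):

      aws ec2 authorize-security-group-ingress \
        --group-id <cluster_security_group_id> \
        --protocol tcp --port 1556 \
        --cidr <netbackup_client_cidr>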

  12. Create a storage class of the EBS storage type with allowVolumeExpansion = true and reclaimPolicy = Retain. This storage class is used for the data and log volumes of both primary and media servers.

    For example,

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      name: ebs-csi-storage-class
    parameters:
      fsType: ext4
      type: gp2
    provisioner: ebs.csi.aws.com
    reclaimPolicy: Retain
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    

    Note:

    Ensure that you install the Amazon EBS CSI driver add-on in the cluster. For more information on installing the Amazon EBS CSI driver, see the 'Managing the Amazon EBS CSI driver as an Amazon EKS add-on' and 'Amazon EBS CSI driver' sections of the Amazon EKS User Guide.

  13. An EFS-based PV must be specified for the primary server catalog volume with reclaimPolicy = Retain, as shown in the sketch below.
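    For example, a static EFS-backed PV can look like the following (a sketch assuming the Amazon EFS CSI driver; the PV name, capacity, storage class name, and file system ID are placeholders):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nbu-catalog-pv
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-csi-storage-class
      csi:
        driver: efs.csi.aws.com
        volumeHandle: <efs_file_system_id>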

Host-specific requirements

Use the following checklist to address the prerequisites on the system that you want to use as a NetBackup host that connects to the AKS/EKS cluster.

AKS-specific

  • Linux operating system: For a complete list of compatible Linux operating systems, refer to the Software Compatibility List (SCL) at:

    NetBackup Compatibility List for all Versions

  • Install Docker on the host to load the NetBackup container images from the TAR file, and start the container service. For installation instructions, see 'Install Docker Engine' in the Docker documentation.

  • Prepare the host to manage the AKS cluster.

    • Install Azure CLI.

      For more information, see 'Install the Azure CLI on Linux' section of the Microsoft Azure Documentation.

    • Install Kubernetes CLI

      For more information, see 'Install and Set Up kubectl on Linux' section of the Kubernetes Documentation.

    • Log in to the Azure environment to access the Kubernetes cluster by running the following commands in the Azure CLI:

      az login --identity

      az account set --subscription <subscriptionID>

      az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>

      az resource list -n $cluster_name --query [*].identity.principalId --out tsv

      az role assignment create --assignee <identity.principalId> --role 'Contributor' --scope /subscriptions/$subscription_id/resourceGroups/NBUX-QA-BiDi-RG/providers/Microsoft.Network/virtualNetworks/NBUX-QA-BiDiNet01/subnets/$subnet

      az login --scope https://graph.microsoft.com//.default

    • Log in to the container registry:

      az acr login -n <container-registry-name>

EKS-specific

  • Install AWS CLI.

    For more information on installing the AWS CLI, see the 'Install or update the latest version of the AWS CLI' section of the AWS Command Line Interface User Guide.

  • Install Kubectl CLI.

    For more information on installing the Kubectl CLI, see 'Installing kubectl' section of the Amazon EKS User Guide.

  • Configure Docker to enable pushing the container images to the container registry.

  • Create the OIDC provider for the AWS EKS cluster.

    For more information on creating the OIDC provider, see 'Create an IAM OIDC provider for your cluster' section of the Amazon EKS User Guide.

  • Create an IAM service account for the AWS EKS cluster.

    For more information on creating an IAM service account, see 'Configuring a Kubernetes service account to assume an IAM role' section of the Amazon EKS User Guide.

  • If an IAM role needs access to the EKS cluster, run the following command from a system that already has access to the EKS cluster:

    kubectl edit -n kube-system configmap/aws-auth

    For more information on creating an IAM role, see Enabling IAM user and role access to your cluster.
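    For example, a role mapping entry added under mapRoles in the aws-auth ConfigMap typically looks like the following (the account ID, role name, and username are placeholders):

    mapRoles: |
      - rolearn: arn:aws:iam::<account_id>:role/<role_name>
        username: <user_name>
        groups:
          - system:masters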

  • Log in to the AWS environment to access the Kubernetes cluster by running the following command in the AWS CLI:

    aws eks --region <region_name> update-kubeconfig --name <cluster_name>

  • Free space of approximately 13 GB is required at the location where you copy and extract the product installation TAR package file. If using Docker locally, approximately 8 GB must be available at the /var/lib/docker location so that the images can be loaded into the Docker cache before being pushed to the container registry.

  • The AWS EFS CSI driver must be installed for static PV/PVC creation of the primary server catalog volume.