InfoScale™ 9.0 Support for Containers - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale™ on OpenShift
- Installing Arctera InfoScale™ on Kubernetes
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing InfoScale DR on OpenShift
- Installing InfoScale DR on Kubernetes
- TECHNOLOGY PREVIEW: Disaster Recovery scenarios
- Configuring InfoScale
- Troubleshooting
Installing InfoScale in an air gapped system
An air gapped system is not connected to the Internet, so you must prepare the system before installation.
Before installing InfoScale on an air gapped system, first mirror the Node Feature Discovery (NFD) operator catalog. You can perform the mirroring and installation of Node Feature Discovery (NFD) from any OpenShift cluster node that has Internet connectivity and is also connected to the air gapped system.
Note:
In the following steps, ${JUMP_HOST}:5000 is on the same network. JUMP_HOST is a system that is connected to the Internet and has a registry set up. 5000 is an indicative port number.
Mirroring the Node Feature Discovery (NFD) operator catalog
- Run the following command on the bastion node to authenticate with registry.redhat.io and your custom registry.
export REGISTRY_AUTH_FILE=<path_to_pull_secret>/pull-secret.json
- Run the following command on the bastion node to set the JUMP_HOST environment variable.
export JUMP_HOST="<IP address of custom registry>"
- Run the following command on the bastion node to disable the sources for the default catalogs.
oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
- Run the following command on the bastion node to retain only the specified package (nfd) in the source index.
opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.9 -p nfd -t ${JUMP_HOST}:5000/catalog/redhat-operator-index:v4.9
- Run the following command on the bastion node to push the Node Feature Discovery Operator index image to your custom registry.
podman push ${JUMP_HOST}:5000/catalog/redhat-operator-index:v4.9
- Run the following command on the bastion node to mirror the Node Feature Discovery Operator.
oc adm catalog mirror \
  --insecure=true \
  --index-filter-by-os='linux/amd64' \
  -a ${REGISTRY_AUTH_FILE} \
  ${JUMP_HOST}:5000/catalog/redhat-operator-index:v4.9 \
  ${JUMP_HOST}:5000/operators
- Inspect the manifests directory that is generated in your current directory. The manifests directory name is of the format manifests-<index_image_name>-<random_number>. For example, manifests-redhat-operator-index-1638334101.
- Run the following command on the bastion node to create the ImageContentSourcePolicy (ICSP) object by specifying the imageContentSourcePolicy.yaml in your manifests directory.
oc create -f <path to the manifests directory for your mirrored content>/imageContentSourcePolicy.yaml
- Run the following command on the bastion node to mirror the images listed in mapping.txt, authenticating with REGISTRY_AUTH_FILE.
oc image mirror -f <path/to/manifests/dir>/mapping.txt -a ${REGISTRY_AUTH_FILE} --insecure
- Copy the following content and save it as catalogSource_redhat_operator.yaml.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operator-index
  namespace: openshift-marketplace
spec:
  image: ${JUMP_HOST}:5000/operators/catalog-redhat-operator-index:v4.9
  sourceType: grpc
  displayName: My Operator Catalog
  publisher: <publisher_name>
  updateStrategy:
    registryPoll:
      interval: 30m
- Run the following command on the bastion node to create the CatalogSource object.
oc apply -f catalogSource_redhat_operator.yaml
- Run the following command on the bastion node to check the status of pods.
oc get pods -n openshift-marketplace
Review the output as follows. The status of the pods must be 'Running'.
NAME                                      READY   STATUS    RESTARTS   AGE
certified-operator-index-bq7bt            1/1     Running   0          17h
marketplace-operator-d6985d479bc-7zbckj   1/1     Running   0          23d
redhat-operator-index-785tv               1/1     Running   0          17h
- Check the package manifest.
oc get packagemanifest -n openshift-marketplace
Review output similar to the following.
NAME                       DISPLAY                TYPE   PUBLISHER        AGE
certified-operator-index   Openshift Telco Docs   grpc   Openshift Docs   20h
redhat-operator-index      Openshift Telco Docs   grpc   Openshift Docs   20h
- Run the following commands on the bastion node to check the catalogsource.
oc get catalogsource -n openshift-marketplace
oc get pods -n openshift-marketplace
- Log in to the OCP web console and click Operators > OperatorHub. The mirrored operator must be listed here.
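The recurring "status must be 'Running'" checks in the steps above can be scripted. The following is a minimal sketch that parses `oc get pods` output with awk; the sample data is pre-captured so that the logic can be shown without a live cluster, and the helper name check_all_running is an assumption for illustration, not part of oc.

```shell
# Sketch: fail unless every pod in the listing reports STATUS=Running.
# On a live cluster you would feed it: oc get pods -n openshift-marketplace --no-headers
check_all_running() {
  # $1 = pod listing; column 3 is STATUS
  echo "$1" | awk '$3 != "Running" { bad=1 } END { exit bad }'
}

sample='certified-operator-index-bq7bt 1/1 Running 0 17h
redhat-operator-index-785tv 1/1 Running 0 17h'

if check_all_running "$sample"; then
  echo "all pods Running"
else
  echo "some pods not Running"
fi
```

On a cluster, replace the sample with `"$(oc get pods -n openshift-marketplace --no-headers)"`.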
Installing Node Feature Discovery (NFD) Operator
- Connect to the OpenShift console.
- In the left frame, click Operators > OperatorHub. You can select and install the operator here.
- In Filter by keyword, enter Node Feature Discovery. The Node Feature Discovery Operator is listed.
Note:
If the Operator is already installed, it is indicated here. In that case, proceed to the cert-manager installation.
- Select the Node Feature Discovery Operator and follow onscreen instructions to install.
- After a successful installation, Node Feature Discovery is listed under Operators > Installed Operators in the left frame.
- In Node Feature Discovery, a box is displayed under Provided APIs.
- Click Create instance. Edit the values of the NodeFeatureDiscovery CR.
- Click Create.
- To verify whether the installation is successful and to check the status of the NFD instances on each node, run the following command on the bastion node.
oc get pods -A | grep nfd
Review the sample output as follows. Here, the nfd- prefix identifies the pods of the NFD operator.
openshift-operators   nfd-master-4hqbq                1/1   Running   0   62m
openshift-operators   nfd-master-brt9f                1/1   Running   0   62m
openshift-operators   nfd-master-pplqr                1/1   Running   0   62m
openshift-operators   nfd-operator-59454bd5c9-gf6h7   1/1   Running   0   5d2h
openshift-operators   nfd-worker-8l6wh                1/1   Running   0   62m
openshift-operators   nfd-worker-bngbq                1/1   Running   0   62m
openshift-operators   nfd-worker-d5btm                1/1   Running   0   62m
openshift-operators   nfd-worker-hx6xl                1/1   Running   0   62m
Note:
You can refer to the OpenShift documentation for Node Feature Discovery.
Installing cert-manager
- Pull the following images:
quay.io/jetstack/cert-manager-cainjector:v1.6.1
quay.io/jetstack/cert-manager-controller:v1.6.1
quay.io/jetstack/cert-manager-webhook:v1.6.1
- Tag and push the images to the Custom registry at <IP address of custom registry>/veritas/.
- Edit /YAML/OpenShift/air-gapped-systems/cert-manager.yaml as follows.
Replace image: 192.168.1.21/veritas/cert-manager-cainjector:v1.6.1 with image: <IP address of custom registry>/veritas/cert-manager-cainjector:v1.6.1.
Replace image: 192.168.1.21/veritas/cert-manager-controller:v1.6.1 with image: <IP address of custom registry>/veritas/cert-manager-controller:v1.6.1.
Replace image: 192.168.1.21/veritas/cert-manager-webhook:v1.6.1 with image: <IP address of custom registry>/veritas/cert-manager-webhook:v1.6.1.
- Run the following command on the bastion node to install cert-manager.
oc apply -f /YAML/OpenShift/air-gapped-systems/cert-manager.yaml
- Run the following command on the bastion node to check the status of pods.
oc get all -n cert-manager
Status similar to the following indicates a successful installation.
NAME                                          READY   STATUS    RESTARTS   AGE
pod/cert-manager-5986867bb9-v95t7             1/1     Running   0          56s
pod/cert-manager-cainjector-b475c485b-bxj89   1/1     Running   0          56s
pod/cert-manager-webhook-55b6c54579-95gcw     1/1     Running   0          56s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/cert-manager           ClusterIP   172.30.72.54    <none>        9402/TCP   57s
service/cert-manager-webhook   ClusterIP   172.30.180.10   <none>        443/TCP    57s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           57s
deployment.apps/cert-manager-cainjector   1/1     1            1           57s
deployment.apps/cert-manager-webhook      1/1     1            1           57s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-5986867bb9             1         1         1       56s
replicaset.apps/cert-manager-cainjector-b475c485b   1         1         1       56s
replicaset.apps/cert-manager-webhook-55b6c54579     1         1         1       56s
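The tag-and-push step for the cert-manager images can be scripted. The following is a minimal sketch, assuming podman is available and using 192.168.1.21 as the indicative custom-registry address from elsewhere in this document; with DRY_RUN=1 (the default here) it only prints the podman commands instead of executing them, so the naming logic can be verified offline.

```shell
# Sketch: mirror images into <custom registry>/veritas/ by tag-and-push.
REGISTRY="${REGISTRY:-192.168.1.21}"
DRY_RUN="${DRY_RUN:-1}"

mirror_image() {
  src="$1"
  # Keep the name:tag portion, swap the registry path prefix.
  dst="${REGISTRY}/veritas/${src##*/}"
  if [ "$DRY_RUN" = "1" ]; then
    echo "podman tag $src $dst"
    echo "podman push $dst"
  else
    podman tag "$src" "$dst" && podman push "$dst"
  fi
}

for img in \
  quay.io/jetstack/cert-manager-cainjector:v1.6.1 \
  quay.io/jetstack/cert-manager-controller:v1.6.1 \
  quay.io/jetstack/cert-manager-webhook:v1.6.1; do
  mirror_image "$img"
done
```

Set DRY_RUN=0 on a prepared bastion node to actually tag and push.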
You must install the Special Resource Operator (SRO) before installing Arctera InfoScale™. After the SRO is installed, the system is ready for the Arctera InfoScale™ installation.
Installing Special Resource Operator (SRO) and InfoScale Operator
- Download YAML.tar from the Arctera Download Center.
- Untar YAML.tar. After you untar YAML.tar, the folders /YAML/OpenShift/, /YAML/OpenShift/air-gapped-systems, /YAML/DR, and /YAML/Kubernetes are created. Each folder contains the files required for installation.
- On the bastion node:
Download registry.redhat.io/openshift4/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream, then tag and push it to the custom registry as <IP address of custom registry>/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream.
Download registry.redhat.io/openshift4/ose-kube-rbac-proxy, then tag and push it to the custom registry as <IP address of custom registry>/ose-kube-rbac-proxy:v4.9.
Edit /YAML/OpenShift/air-gapped-systems/sro.yaml as follows.
Replace image: 192.168.1.21/veritas/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream with image: <IP address of custom registry>/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream.
Replace image: 192.168.1.21/veritas/ose-kube-rbac-proxy:v4.9 with image: <IP address of custom registry>/ose-kube-rbac-proxy:v4.9.
Run the following command.
oc create -f /YAML/OpenShift/air-gapped-systems/sro.yaml
Run the following command to create the Special Resource.
oc create -f /YAML/OpenShift/air-gapped-systems/sr.yaml
- Run the following commands and review the output to verify whether the SR creation and SRO installation are successful.
oc get pods -n openshift-special-resource-operator
Output similar to the following indicates a successful installation.
NAME                                                   READY   STATUS    RESTARTS   AGE
special-resource-controller-manager-66c8fc64b5-9wv6l   1/1     Running   0          2m35s
Note:
The name in the output here is used in the following command.
oc logs special-resource-controller-manager-66c8fc64b5-9wv6l -n openshift-special-resource-operator -c manager
Output similar to the following indicates a successful installation.
<timestamp> INFO status RECONCILE SUCCESS: Reconcile
oc get SpecialResource
Output similar to the following indicates a successful installation.
NAME                        AGE
special-resource-preamble   2m24s
Applying Licenses
- Run oc create -f /YAML/OpenShift/air-gapped-systems/lico.yaml on the bastion node.
- Run oc get pods -n infoscale-vtas | grep -i licensing on the bastion node to verify whether lico.yaml is successfully applied. An output similar to the following indicates that lico.yaml is successfully applied.
NAME                                 READY   STATUS    RESTARTS   AGE
licensing-operator-fbd8c7dc4-rcfz5   1/1     Running   0          2m
- After lico.yaml is successfully applied, the licensing endpoints must be available. Run oc describe service/lico-webhook-service -n infoscale-vtas | grep Endpoints on the master node and review the output.
- Run the command again until you get an output in the following format.
Endpoints: <IP address of the endpoint>:<Port number>
- Edit /YAML/OpenShift/air-gapped-systems/license_cr.yaml for the license edition. The default license edition is Developer. You can change the licenseEdition value. If you want to configure Disaster Recovery (DR), you must have Trialware or SubscriptionEnterprise as the license edition.
apiVersion: vlic.veritas.com/v1
kind: License
metadata:
  name: license-dev
spec:
  # valid licenseEdition values are Developer, Trialware, SubscriptionStorage or SubscriptionEnterprise
  licenseEdition: "Developer"
- Run oc create -f /YAML/OpenShift/air-gapped-systems/license_cr.yaml on the bastion node.
- Run oc get licenses on the bastion node to verify whether the licenses have been successfully applied.
An output similar to the following indicates that license_cr.yaml is successfully applied.
NAME      NAMESPACE   LICENSE-EDITION   AGE
license               DEVELOPER         27s
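The "run the command again until you get an output" step for the licensing endpoints lends itself to a retry loop. The following is a minimal sketch; retry_until_output is a hypothetical helper written for this document, not part of oc, and the delay and attempt counts are assumptions.

```shell
# Sketch: rerun a command until it prints output, up to a given
# number of attempts, then give up with a nonzero exit status.
retry_until_output() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    out=$("$@")
    if [ -n "$out" ]; then
      printf '%s\n' "$out"
      return 0
    fi
    i=$((i + 1))
    sleep "${RETRY_DELAY:-5}"
  done
  return 1
}

# Intended cluster usage (not executed here):
# retry_until_output 12 sh -c \
#   'oc describe service/lico-webhook-service -n infoscale-vtas | grep Endpoints'
```

A nonzero exit status after the final attempt lets a wrapper script fail fast instead of polling forever.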
All information about the worker nodes must be added to the cr.yaml file. All worker nodes become part of the InfoScale cluster after cr.yaml is applied. After you download and untar YAML.tar, all files required for installation are available.
Note:
You must download images required for installation from the Red Hat registry and push those to the Custom registry.
Optionally, configure a new user, infoscale-admin, associated with a Role-Based Access Control (RBAC) clusterrole defined in infoscale-admin-role.yaml, to deploy InfoScale and its dependent components. When configured, the infoscale-admin user has cluster-wide access to only those resources that are needed to deploy InfoScale and its dependent components, such as SRO, NFD, and cert-manager, in the desired namespaces.
To provide a secure and isolated environment for the InfoScale deployment and its associated resources, protect the namespace associated with these resources from all users other than the cluster super user by implementing appropriate RBAC.
Run the following commands on the bastion node to create a new user (infoscale-admin) and a new project, and to assign a role or clusterrole to infoscale-admin. You must be logged in as a super user.
- oc new-project <New Project name>
A new project is created for InfoScale deployment.
- oc adm policy add-role-to-user admin infoscale-admin
The following output indicates that administrator privileges are assigned to the new user infoscale-admin within the new project.
clusterrole.rbac.authorization.k8s.io/admin added: "infoscale-admin"
- oc apply -f /YAML/OpenShift/air-gapped-systems/infoscale-admin-role.yaml
The following output indicates that a clusterrole is created.
clusterrole.rbac.authorization.k8s.io/infoscale-admin-role created
- oc adm policy add-cluster-role-to-user infoscale-admin-role infoscale-admin
The following output indicates that the clusterrole is associated with infoscale-admin.
clusterrole.rbac.authorization.k8s.io/infoscale-admin-role added: "infoscale-admin"
You must perform all installation activities by logging in as infoscale-admin.
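The shipped infoscale-admin-role.yaml is authoritative for the clusterrole's contents. For orientation only, a clusterrole of this kind generally has the following shape; the resource and verb lists below are illustrative assumptions, not the actual contents of the file.

```yaml
# Illustrative sketch only -- the shipped infoscale-admin-role.yaml is authoritative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: infoscale-admin-role
rules:
  # Example rule granting access to deployment-related resources;
  # the actual resource list in the shipped file may differ.
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```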
Download the following images:
registry.connect.redhat.com/veritas-technologies/infoscale-sds-operator:8.0.100-rhel8
registry.connect.redhat.com/veritas-technologies/infoscale-fencing:2.0.0.0000-rhel8
registry.connect.redhat.com/veritas-technologies/infoscale-csi:2.0.0.0000-rhel8
registry.connect.redhat.com/veritas-technologies/infoscale-licensing:8.0.100-rhel8
registry.connect.redhat.com/veritas-technologies/infoscale-dr-operator:1.0.0.0000-rhel8
registry.connect.redhat.com/veritas-technologies/infoscale:8.0.100-rhel8.4-<kernel release version>
where kernel release version is the uname -r output from a worker node.
registry.redhat.io/openshift4/ose-csi-driver-registrar:v4.3
registry.redhat.io/openshift4/ose-csi-external-provisioner-rhel8:v4.7
registry.redhat.io/openshift4/ose-csi-external-attacher:v4.7
registry.redhat.io/openshift4/ose-csi-external-resizer-rhel8:v4.7
registry.redhat.io/openshift4/ose-csi-external-snapshotter-rhel8:v4.7
docker.io/kvaps/kube-fencing-switcher:v2.1.0
docker.io/kvaps/kube-fencing-controller:v2.1.0
After you download the images, tag them and push them to the Custom registry.
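The InfoScale data-plane image in the list above is tagged per worker-node kernel, so the exact reference must be derived from `uname -r`. The following is a minimal sketch; the helper name infoscale_image and the sample kernel string are assumptions for illustration.

```shell
# Sketch: build the kernel-specific InfoScale image reference from a
# worker node's `uname -r` output, per the image list above.
infoscale_image() {
  kernel="$1"   # e.g. the output of: ssh <worker node> uname -r
  echo "registry.connect.redhat.com/veritas-technologies/infoscale:8.0.100-rhel8.4-${kernel}"
}

# On a live system you would capture the kernel first (not run here):
# kernel=$(ssh core@<worker node> uname -r)
infoscale_image "4.18.0-305.el8.x86_64"
```

All worker nodes in the cluster must run the same kernel for a single image tag to match every node.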
Edit /YAML/OpenShift/air-gapped-systems/iso.yaml as follows.
Replace image: 192.168.1.21/veritas/infoscale-sds-operator:8.0.100-rhel8 with image: <IP address of custom registry>/infoscale-sds-operator:8.0.100-rhel8.
Run the following command on the bastion node to install Arctera InfoScale™.
oc create -f /YAML/OpenShift/air-gapped-systems/iso.yaml
Run the following command on the bastion node to verify whether the installation is successful.
oc get pods -n infoscale-vtas|grep infoscale-sds-operator
An output similar to the following indicates a successful installation. READY 1/1 indicates that Storage cluster resources can be created.
NAME                                     READY   STATUS    RESTARTS   AGE
infoscale-sds-operator-bb55cfc4d-pclt5   1/1     Running   0          20h