Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
Configuring Veritas Oracle Data Manager (VRTSodm)
Veritas Oracle Data Manager (VRTSodm) is offered as part of the InfoScale suite. With VRTSodm, Oracle applications bypass file system caching and locking, thus enabling a faster connection.
VRTSodm is enabled by linking libodm.so with the Oracle application. I/O calls from the Oracle application are then routed through the ODM kernel module.
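Before changing the database .yaml, it can help to confirm that the ODM pseudo file system is available on the worker nodes, because a later step mounts /dev/odm from the host into the Oracle container. The following is an optional check, assuming shell access to a worker node.

# On OpenShift, one way to get a host shell (assumes cluster-admin access):
#   oc debug node/<worker node>
#   chroot /host
# Confirm that /dev/odm exists on the host; if ODM is configured it typically
# appears as a mount of type odm.
ls -ld /dev/odm
mount | grep odm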
The following changes are needed in the Oracle database .yaml file to enable it to run with Veritas ODM.
Update the VxFS data volume (<vxfs pvc>) in the following code and add it to the .yaml file.

Note: The Oracle container image requires the data volume to be mounted at /opt/oracle/oradata. This volume must also be writable by the 'oracle' (uid: 54321) user inside the container. The VxFS data volume must be mounted at this path by using a PVC. To handle this permissions issue, the following initContainer can be used.

initContainers:
  - name: fix-volume-permission
    image: ubuntu
    command:
      - sh
      - -c
      - mkdir -p /opt/oracle/oradata && chown -R 54321:54321 /opt/oracle/oradata && chmod 0700 /opt/oracle/oradata
    volumeMounts:
      - name: <vxfs pvc>
        mountPath: /opt/oracle/oradata
        readOnly: false
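The <vxfs pvc> claim itself is provisioned through the InfoScale CSI driver. The following is a minimal sketch of such a PVC, using the claim name oracle-data-pvc to match the sample manifest later in this section; the storage class placeholder and the 100Gi size are assumptions to be replaced with your own values.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-data-pvc   # matches the claimName used in the sample Deployment below
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <infoscale csi storage class>   # assumption: use the storage class from your InfoScale CSI deployment
  resources:
    requests:
      storage: 100Gi   # assumption: size the volume for your database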
Add the following to your .yaml to disable DNFS.

args:
  - sh
  - -c
  - cd /opt/oracle/product/19c/dbhome_1/rdbms/lib/ && make -f ins_rdbms.mk dnfs_off && cd $WORKDIR && $ORACLE_BASE/$RUN_FILE
Create a hostPath volume devodm in the .yaml and mount it at /dev/odm.

Note: On SELinux-enabled systems (including OpenShift), the Oracle database container must be run as privileged.
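In the complete sample manifest later in this section, the devodm volume and its mount appear as follows.

volumeMounts:
  - name: devodm
    mountPath: /dev/odm
volumes:
  - name: devodm
    hostPath:
      path: /dev/odm
      type: Directory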
Use the libodm.so that Veritas provides. Run the following commands on the bastion/master nodes.

oc/kubectl cp <infoscalepod>:/opt/VRTSodm/lib64/libodm.so .
oc/kubectl create configmap libodm --from-file libodm.so
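Optionally, confirm that the ConfigMap was created and holds the library (libodm.so is stored as binary data).

oc/kubectl get configmap libodm
oc/kubectl describe configmap libodm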
Mount libodm.so inside the Oracle container as shown below.

volumeMounts:
  - name: libodm-cmapvol
    mountPath: /opt/oracle/product/19c/dbhome_1/rdbms/lib/odm/libodm.so
    subPath: libodm.so
volumes:
  - name: libodm-cmapvol
    configMap:
      name: libodm
      items:
        - key: libodm.so
          path: libodm.so
Run your .yaml on the bastion node of the OpenShift cluster or the master node of the Kubernetes cluster.
Alternatively, copy the following content and create a new file, oracle-odm.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracle-odm
  labels:
    app: oracledb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracledb
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      initContainers:
        - name: fix-volume-permission
          image: ubuntu
          command:
            - sh
            - -c
            - mkdir -p /opt/oracle/oradata && chown -R 54321:54321 /opt/oracle/oradata && chmod 0700 /opt/oracle/oradata
          volumeMounts:
            - name: oracle-datavol
              mountPath: /opt/oracle/oradata
              readOnly: false
      containers:
        - name: oracle-app
          securityContext:
            privileged: true
          image: # replace this with the link for the patched Oracle container image
          imagePullPolicy: IfNotPresent
          # Modification to the args to disable DNFS before starting the database
          args:
            - sh
            - -c
            - cd /opt/oracle/product/19c/dbhome_1/rdbms/lib/ && make -f ins_rdbms.mk dnfs_off && cd $WORKDIR && $ORACLE_BASE/$RUN_FILE
          resources:
            requests:
              memory: 8Gi
          env:
            - name: ORACLE_SID
              value: "orainst1"
            - name: ORACLE_PDB
              value: orapdb1
            - name: ORACLE_PWD
              value: oracle
          ports:
            - name: listener
              containerPort: 1521
              hostPort: 1521
          volumeMounts:
            - name: oracle-datavol
              mountPath: /opt/oracle/oradata
              readOnly: false
            - name: devodm
              mountPath: /dev/odm
            - name: libodm-cmapvol
              mountPath: /opt/oracle/product/19c/dbhome_1/rdbms/lib/odm/libodm.so
              subPath: libodm.so
      volumes:
        - name: oracle-datavol
          persistentVolumeClaim:
            claimName: oracle-data-pvc
        - name: devodm
          hostPath:
            path: /dev/odm
            type: Directory
        - name: libodm-cmapvol
          configMap:
            name: libodm
            items:
              - key: libodm.so
                path: libodm.so
---
apiVersion: v1
kind: Service
metadata:
  name: ora-listener
  namespace: default
  labels:
    app: oracledb
spec:
  selector:
    app: oracledb
  type: NodePort
  ports:
    - name: ora-listener
      protocol: TCP
      port: 1521
      targetPort: 1521
Save the file.
Apply the file on the bastion node of the OpenShift cluster or the master node of the Kubernetes cluster to enable a faster connection.
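For example, assuming the manifest is saved as oracle-odm.yaml and the oracle-data-pvc claim already exists, the deployment can be applied and checked as follows. The pod name placeholder and the alert log path are assumptions based on the ORACLE_BASE used by the sample image.

# Apply the manifest (use oc on OpenShift or kubectl on Kubernetes)
oc apply -f oracle-odm.yaml

# Wait for the Oracle pod to become ready
oc get pods -l app=oracledb -w

# Once the database is up, the Oracle alert log typically reports that the
# instance is running with the Veritas ODM library when libodm.so is linked in
oc exec <oracle pod> -- sh -c 'grep -i "running with ODM" /opt/oracle/diag/rdbms/*/*/trace/alert_*.log'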