InfoScale™ 9.0 Support for Containers - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Arctera InfoScale™ on OpenShift
    1. Introduction
    2. Prerequisites
    3. Installing InfoScale on a system with Internet connectivity
      1. Using web console of OperatorHub
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML.tar
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    4. Installing InfoScale in an air gapped system
      1. Configuring cluster
      2. Adding nodes to an existing cluster
      3. Undeploying and uninstalling InfoScale
  5. Installing Arctera InfoScale™ on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    6. Undeploying and uninstalling InfoScale
  6. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  7. Installing InfoScale DR on OpenShift
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  8. Installing InfoScale DR on Kubernetes
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  9. TECHNOLOGY PREVIEW: Disaster Recovery scenarios
    1. Migration
  10. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
  11. Troubleshooting
    1. Known Issues
    2. Limitations

Configuring Arctera Oracle Data Manager (VRTSodm)

Arctera Oracle Data Manager (VRTSodm) is offered as a part of the InfoScale suite. With VRTSodm, Oracle applications bypass file system caching and locking, which enables faster I/O.

VRTSodm is enabled by linking libodm.so with the Oracle application. I/O calls from the Oracle application are then routed through the ODM kernel module.
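
After you complete the following configuration and start the database, you can optionally confirm that Oracle picked up the ODM library by searching the database alert log from inside the Oracle container. This is a sketch only; the log path assumes the standard Oracle 19c container layout used in this section, and the exact banner text varies by Oracle version.

  # Oracle prints an ODM banner at instance startup when an ODM library is linked.
  grep -i "odm" /opt/oracle/diag/rdbms/*/*/trace/alert_*.log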

The following changes are needed in the Oracle database .yaml file to enable it to run with Arctera ODM.

  1. Update the VxFS data volume (<vxfs pvc>) in the following code and add it to the .yaml file.

    Note:

    The Oracle container image requires the data volume to be mounted at /opt/oracle/oradata. This volume must also be writable by the 'oracle' (uid: 54321) user inside the container. The VxFS data volume must be mounted at this path by using a PVC (see the PVC sketch after this procedure). To set the required ownership and permissions, the following initContainer can be used.

    initContainers:
    - name: fix-volume-permission
      image: ubuntu
      command:
      - sh
      - -c
      - mkdir -p /opt/oracle/oradata &&
        chown -R 54321:54321 /opt/oracle/oradata &&
        chmod 0700 /opt/oracle/oradata
      volumeMounts:
      - name: <vxfs pvc>
        mountPath: /opt/oracle/oradata
        readOnly: false

  2. Add the following to your .yaml file to disable Oracle Direct NFS (DNFS).

    args:
    - sh
    - -c
    - cd /opt/oracle/product/19c/dbhome_1/rdbms/lib/ &&
      make -f ins_rdbms.mk dnfs_off && cd $WORKDIR &&
      $ORACLE_BASE/$RUN_FILE
  3. Create a hostPath volume devodm in the .yaml file, and mount it at /dev/odm (see the snippet after this procedure).

    Note:

    On SELinux-enabled systems (including OpenShift), the Oracle database container must be run as privileged.

  4. Use the libodm.so that Arctera provides. Run the following commands on the bastion node (OpenShift) or the master node (Kubernetes).

    • oc/kubectl cp <infoscalepod>:/opt/VRTSodm/lib64/libodm.so libodm.so

    • oc/kubectl create configmap libodm --from-file=libodm.so

    • Mount libodm.so inside the Oracle container as follows:

      volumeMounts:
      - name: libodm-cmapvol
        mountPath: /opt/oracle/product/19c/dbhome_1/rdbms/lib/odm/libodm.so
        subPath: libodm.so

      volumes:
      - name: libodm-cmapvol
        configMap:
          name: libodm
          items:
          - key: libodm.so
            path: libodm.so
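
The data volume in step 1 must be provided through a PVC. If you have not yet created one, the following is a minimal sketch. The claim name oracle-data-pvc matches the complete deployment example later in this section; the storage class name csi-infoscale-sc and the requested capacity are placeholders that you must replace with the InfoScale CSI storage class and size configured in your cluster.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: oracle-data-pvc
  spec:
    accessModes:
    - ReadWriteOnce
    # Placeholder - replace with your InfoScale CSI storage class
    storageClassName: csi-infoscale-sc
    resources:
      requests:
        # Placeholder capacity - size it for your database
        storage: 50Gi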
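
For step 3, the devodm hostPath volume and its mount inside the Oracle container look as follows; this matches the complete deployment example later in this section.

    volumeMounts:
    - name: devodm
      mountPath: /dev/odm

    volumes:
    - name: devodm
      hostPath:
        path: /dev/odm
        type: Directory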

Apply your .yaml file on the bastion node of the OpenShift cluster or the master node of the Kubernetes cluster.

Alternatively, copy the following content and create a new file, oracle-odm.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracle-odm
  labels:
    app: oracledb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracledb
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      initContainers:
      - name: fix-volume-permission
        image: ubuntu
        command:
        - sh
        - -c
        - mkdir -p /opt/oracle/oradata &&
          chown -R 54321:54321 /opt/oracle/oradata &&
          chmod 0700 /opt/oracle/oradata
        volumeMounts:
        - name: oracle-datavol
          mountPath: /opt/oracle/oradata
          readOnly: false
      containers:
      - name: oracle-app
        securityContext:
          privileged: true
        image: # replace this with the link to the patched Oracle container image
        imagePullPolicy: IfNotPresent
        # Modification to the args to disable dnfs before starting database
        args:
        - sh
        - -c
        - cd /opt/oracle/product/19c/dbhome_1/rdbms/lib/ &&
          make -f ins_rdbms.mk dnfs_off && cd $WORKDIR && $ORACLE_BASE/$RUN_FILE
        resources:
          requests:
            memory: 8Gi
        env:
        - name: ORACLE_SID
          value: "orainst1"
        - name: ORACLE_PDB
          value: orapdb1
        - name: ORACLE_PWD
          value: oracle
        ports:
        - name: listener
          containerPort: 1521
          hostPort: 1521
        volumeMounts:
        - name: oracle-datavol
          mountPath: /opt/oracle/oradata
          readOnly: false
        - name: devodm
          mountPath: /dev/odm
        - name: libodm-cmapvol
          mountPath: /opt/oracle/product/19c/dbhome_1/rdbms/lib/odm/libodm.so
          subPath: libodm.so
      volumes:
      - name: oracle-datavol
        persistentVolumeClaim:
          claimName: oracle-data-pvc
      - name: devodm
        hostPath:
          path: /dev/odm
          type: Directory
      - name: libodm-cmapvol
        configMap:
          name: libodm
          items:
          - key: libodm.so
            path: libodm.so
---
apiVersion: v1
kind: Service
metadata:
  name: ora-listener
  namespace: default
  labels:
    app: oracledb
spec:
  selector:
    app: oracledb
  type: NodePort
  ports:
  - name: ora-listener
    protocol: TCP
    port: 1521
    targetPort: 1521

Save the file.

Apply the file on the bastion node of the OpenShift cluster or the master node of the Kubernetes cluster to run the database with ODM enabled.
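
For example, assuming the file is saved as oracle-odm.yaml and that the oracle-data-pvc PVC and the libodm configmap already exist:

  # OpenShift
  oc apply -f oracle-odm.yaml

  # Kubernetes
  kubectl apply -f oracle-odm.yaml

  # Look up the NodePort that Kubernetes assigned to the listener service
  kubectl get svc ora-listener -n default

The listener is then reachable on port 1521 of the node (through hostPort) or on the NodePort shown for the ora-listener service.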