InfoScale™ 9.0 Solutions Guide - Linux
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Solutions for Veritas InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Concurrent I/O
- Improving database performance with atomic write I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration from LVM to VxVM
- Offline conversion of native file system to VxFS
- Online migration of a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v4
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Displaying information
- File system considerations
- Specifying the migration target
- Using the fscdsadm command
- Maintaining the list of target operating systems
- Migrating a file system on an ongoing basis
- Converting the byte order of a file system
- Migrating from Oracle ASM to Veritas File System
- Section VIII. Veritas InfoScale 4K sector device support solution
- Section IX. REST API support
- Support for configurations and operations using REST APIs
- Section X. Reference
Running multiple parallel applications within a single cluster using the application isolation feature
Customer scenario
Multiple parallel applications that require flexible sharing of data in a data warehouse are currently deployed on separate clusters, with access across clusters provided by NFS or other distributed file system technologies. You want to deploy these parallel applications, with the flexible data sharing they require, within a single cluster. In a data center, multiple clusters exist, each with its own dedicated failover nodes, and there is a need to consolidate these disjoint clusters into a single large cluster.
Configuration overview
Business-critical applications require dedicated hardware so that configuration changes for one application do not affect other applications. For example, when a node leaves or joins the cluster, the event affects the cluster and the applications running on it. If multiple applications are configured on a large cluster, such configuration changes have the potential to cause application downtime.

With the application isolation feature, Veritas InfoScale provides logical isolation between applications at the disk group boundary. This isolation is especially helpful when applications require only occasional sharing of data. Data can be copied efficiently between applications by using Veritas Volume Manager snapshots and disk group split, join, or move operations, and updates to data can be shared optimally by copying only the changed data. Thus, existing configurations that run multiple applications on a large cluster can be made more resilient and scalable with the application isolation feature.

Visibility of disk groups can be limited to only the required nodes. Making disk group configurations available to a smaller set of nodes improves the performance and scalability of Veritas Volume Manager configuration operations.

The following figure illustrates a scenario where three applications are logically isolated to operate from specific sets of nodes within a single large VCS cluster. This configuration can be deployed to serve any of the scenarios mentioned above.
Supported configuration
Figure: Three applications logically isolated to operate from specific sets of nodes within a single large VCS cluster.
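As noted in the configuration overview, data can be copied between the isolated applications by using Veritas Volume Manager snapshots together with disk group split and join operations. The following is a minimal sketch of that workflow, using the disk group and volume names from the solution below; it is typically run from the CVM master node, the snapshot volume name snapvol1 and the temporary disk group tmpdg are illustrative only, and because a disk group split moves entire disks, the snapshot volume must reside on disks that contain no other volumes. See the Storage Foundation Cluster File System High Availability Administrator's Guide for the complete snapshot procedure.
# vxsnap -g appdg1 prepare appvol1
# vxsnap -g appdg1 make source=appvol1/newvol=snapvol1/nmirror=1
# vxsnap -g appdg1 syncwait snapvol1
# vxdg split appdg1 tmpdg snapvol1
# vxdg join tmpdg appdg2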
Reference documents
- Storage Foundation Cluster File System High Availability Administrator's Guide
- Storage Foundation for Oracle RAC Configuration and Upgrade Guide
Solution
To run multiple parallel applications within a single Veritas InfoScale cluster using the application isolation feature
- Install and configure Veritas InfoScale Enterprise 7.2 or later on the nodes.
- Enable the application isolation feature in the cluster.
Enabling the feature changes the import and deport behavior. As a result, you must manually add the shared disk groups to the VCS configuration.
See the topic "Enabling the application isolation feature in CVM environments" in the Storage Foundation Cluster File System High Availability Administrator's Guide.
- Identify the shared disk groups on which you want to configure the applications.
- Initialize the disk groups and create the volumes and file systems you want to use for your applications.
Run the commands from any one of the nodes in the disk group sub-cluster. For example, if node1, node2, and node3 belong to the sub-cluster DGSubCluster1, run the following commands from any one of those nodes.
Disk group sub-cluster 1:
# vxdg -s init appdg1 disk1 disk2 disk3
# vxassist -g appdg1 make appvol1 100g nmirror=2
# mkfs -t vxfs /dev/vx/rdsk/appdg1/appvol1
Disk group sub-cluster 2:
# vxdg -s init appdg2 disk4 disk5 disk6
# vxassist -g appdg2 make appvol2 100g nmirror=2
# mkfs -t vxfs /dev/vx/rdsk/appdg2/appvol2
Disk group sub-cluster 3:
# vxdg -s init appdg3 disk7 disk8 disk9
# vxassist -g appdg3 make appvol3 100g nmirror=2
# mkfs -t vxfs /dev/vx/rdsk/appdg3/appvol3
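Optionally, before configuring the VCS resources, you can verify the newly created disk groups and volumes from any node in the corresponding sub-cluster by using standard VxVM query commands, for example:
# vxdg list
# vxdisk -o alldgs list
# vxprint -ht -g appdg1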
- Configure the OCR, voting disk, and CSSD resources on all nodes in the cluster. It is recommended to have a mirror of the OCR and voting disk on each node in the cluster.
For instructions, see the section "Installation and upgrade of Oracle RAC" in the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.
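For illustration only, a mirrored shared volume for the OCR and voting disk can be created with the same VxVM pattern used above; the disk group name ocrvotedg, the disk names, and the volume size shown here are placeholders, and the supported procedure, including configuration of the CSSD resource, is described in the referenced guide.
# vxdg -s init ocrvotedg disk10 disk11 disk12
# vxassist -g ocrvotedg make ocrvotevol 2g nmirror=3
# mkfs -t vxfs /dev/vx/rdsk/ocrvotedg/ocrvotevol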
- Configure application app1 on node1, node2, and node3.
The following commands add the application app1 to the VCS configuration.
# hagrp -add app1
# hagrp -modify app1 SystemList node1 0 node2 1 node3 2
# hagrp -modify app1 AutoFailOver 0
# hagrp -modify app1 Parallel 1
# hagrp -modify app1 AutoStartList node1 node2 node3
Add disk group resources to the VCS configuration.
# hares -add appdg1_voldg CVMVolDg app1
# hares -modify appdg1_voldg Critical 0
# hares -modify appdg1_voldg CVMDiskGroup appdg1
# hares -modify appdg1_voldg CVMVolume appvol1
Change the activation mode of the shared disk group to shared-write.
# hares -local appdg1_voldg CVMActivation
# hares -modify appdg1_voldg NodeList node1 node2 node3
# hares -modify appdg1_voldg CVMActivation sw
# hares -modify appdg1_voldg Enabled 1
Add the CFS mount resources for the application to the VCS configuration.
# hares -add appdata1_mnt CFSMount app1
# hares -modify appdata1_mnt Critical 0
# hares -modify appdata1_mnt MountPoint "/appdata1_mnt"
# hares -modify appdata1_mnt BlockDevice "/dev/vx/dsk/appdg1/appvol1"
# hares -local appdata1_mnt MountOpt
# hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node1
# hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node2
# hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node3
# hares -modify appdata1_mnt NodeList node1 node2 node3
# hares -modify appdata1_mnt Enabled 1
Add the application's Oracle database to the VCS configuration.
# hares -add ora_app1 Oracle app1
# hares -modify ora_app1 Critical 0
# hares -local ora_app1 Sid
# hares -modify ora_app1 Sid app1_db1 -sys node1
# hares -modify ora_app1 Sid app1_db2 -sys node2
# hares -modify ora_app1 Sid app1_db3 -sys node3
# hares -modify ora_app1 Owner oracle
# hares -modify ora_app1 Home "/u02/app/oracle/dbhome"
# hares -modify ora_app1 StartUpOpt SRVCTLSTART
# hares -modify ora_app1 ShutDownOpt SRVCTLSTOP
# hares -modify ora_app1 DBName app1_db
- Configure application app2 on node3, node4, and node5.
The following commands add the application app2 to the VCS configuration.
# hagrp -add app2
# hagrp -modify app2 SystemList node3 0 node4 1 node5 2
# hagrp -modify app2 AutoFailOver 0
# hagrp -modify app2 Parallel 1
# hagrp -modify app2 AutoStartList node3 node4 node5
Add disk group resources to the VCS configuration.
# hares -add appdg2_voldg CVMVolDg app2
# hares -modify appdg2_voldg Critical 0
# hares -modify appdg2_voldg CVMDiskGroup appdg2
# hares -modify appdg2_voldg CVMVolume appvol2
Change the activation mode of the shared disk group to shared-write.
# hares -local appdg2_voldg CVMActivation
# hares -modify appdg2_voldg NodeList node3 node4 node5
# hares -modify appdg2_voldg CVMActivation sw
# hares -modify appdg2_voldg Enabled 1
Add the CFS mount resources for the application to the VCS configuration.
# hares -add appdata2_mnt CFSMount app2
# hares -modify appdata2_mnt Critical 0
# hares -modify appdata2_mnt MountPoint "/appdata2_mnt"
# hares -modify appdata2_mnt BlockDevice "/dev/vx/dsk/appdg2/appvol2"
# hares -local appdata2_mnt MountOpt
# hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node3
# hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node4
# hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node5
# hares -modify appdata2_mnt NodeList node3 node4 node5
# hares -modify appdata2_mnt Enabled 1
Add the application's Oracle database to the VCS configuration.
# hares -add ora_app2 Oracle app2
# hares -modify ora_app2 Critical 0
# hares -local ora_app2 Sid
# hares -modify ora_app2 Sid app2_db1 -sys node3
# hares -modify ora_app2 Sid app2_db2 -sys node4
# hares -modify ora_app2 Sid app2_db3 -sys node5
# hares -modify ora_app2 Owner oracle
# hares -modify ora_app2 Home "/u02/app/oracle/dbhome"
# hares -modify ora_app2 StartUpOpt SRVCTLSTART
# hares -modify ora_app2 ShutDownOpt SRVCTLSTOP
# hares -modify ora_app2 DBName app2_db
- Configure application app3 on node5, node6, and node7.
The following commands add the application app3 to the VCS configuration.
# hagrp -add app3
# hagrp -modify app3 SystemList node5 0 node6 1 node7 2
# hagrp -modify app3 AutoFailOver 0
# hagrp -modify app3 Parallel 1
# hagrp -modify app3 AutoStartList node5 node6 node7
Add disk group resources to the VCS configuration.
# hares -add appdg3_voldg CVMVolDg app3
# hares -modify appdg3_voldg Critical 0
# hares -modify appdg3_voldg CVMDiskGroup appdg3
# hares -modify appdg3_voldg CVMVolume appvol3
Change the activation mode of the shared disk group to shared-write.
# hares -local appdg3_voldg CVMActivation
# hares -modify appdg3_voldg NodeList node5 node6 node7
# hares -modify appdg3_voldg CVMActivation sw
# hares -modify appdg3_voldg Enabled 1
Add the CFS mount resources for the application to the VCS configuration.
# hares -add appdata3_mnt CFSMount app3
# hares -modify appdata3_mnt Critical 0
# hares -modify appdata3_mnt MountPoint "/appdata3_mnt"
# hares -modify appdata3_mnt BlockDevice "/dev/vx/dsk/appdg3/appvol3"
# hares -local appdata3_mnt MountOpt
# hares -modify appdata3_mnt MountOpt "rw,cluster" -sys node5
# hares -modify appdata3_mnt MountOpt "rw,cluster" -sys node6
# hares -modify appdata3_mnt MountOpt "rw,cluster" -sys node7
# hares -modify appdata3_mnt NodeList node5 node6 node7
# hares -modify appdata3_mnt Enabled 1
Add the application's Oracle database to the VCS configuration.
# hares -add ora_app3 Oracle app3
# hares -modify ora_app3 Critical 0
# hares -local ora_app3 Sid
# hares -modify ora_app3 Sid app3_db1 -sys node5
# hares -modify ora_app3 Sid app3_db2 -sys node6
# hares -modify ora_app3 Sid app3_db3 -sys node7
# hares -modify ora_app3 Owner oracle
# hares -modify ora_app3 Home "/u02/app/oracle/dbhome"
# hares -modify ora_app3 StartUpOpt SRVCTLSTART
# hares -modify ora_app3 ShutDownOpt SRVCTLSTOP
# hares -modify ora_app3 DBName app3_db
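After the resources for all three applications are configured, bring each parallel service group online on the nodes in its AutoStartList and verify the resource states. The following commands show app1 as an example; app2 and app3 follow the same pattern on their respective nodes.
# hagrp -online app1 -sys node1
# hagrp -online app1 -sys node2
# hagrp -online app1 -sys node3
# hagrp -state app1
# hares -state appdg1_voldg
# hares -state appdata1_mnt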