InfoScale™ 9.0 Solutions Guide - Linux
Scaling FSS storage capacity with dedicated storage nodes using application isolation feature
Customer scenario
Shared-nothing architectures rely on network infrastructure instead of Storage Area Networks (SANs) to provide access to shared data. With the Flexible Storage Sharing (FSS) feature of Veritas InfoScale, high-performance clustered applications can eliminate the complexity and cost of SAN storage while still meeting the shared namespace requirement of clustered applications.
Configuration overview
In the traditional clustered volume manager (CVM) environment, shared disk groups are imported on all cluster nodes. As a result, it is difficult to increase storage capacity by adding more storage nodes without also scaling the application. With application isolation and Flexible Storage Sharing (FSS), you can add nodes and create a pool of storage that is used across multiple clustered applications. This eliminates the need for SAN storage in data centers, simplifying administration in addition to significantly reducing costs. In the scenario considered here, two applications are configured on specific sets of nodes in the cluster, and two storage nodes contribute their DAS storage to the applications.
Supported configuration
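The supported configuration that the procedure below walks through can be summarized as follows; all node, disk group, and volume names are taken from the commands in that procedure:
- node1, node2, node3: run the clustered application app1, using disk group appdg1 and volume appvol1
- node3, node4, node5: run the clustered application app2, using disk group appdg2 and volume appvol2
- node6 and node7: dedicated storage nodes that export their DAS disks (node6_disk1 through node6_disk4 and node7_disk1 through node7_disk4) to the cluster over the network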
Reference documents
- Storage Foundation Cluster File System High Availability Administrator's Guide
- Storage Foundation for Oracle RAC Configuration and Upgrade Guide
Solution
See "To scale FSS storage capacity with dedicated storage nodes using application isolation feature". The commands in the procedure assume the Oracle RAC clustered application; other supported clustered applications can be configured similarly.
To scale FSS storage capacity with dedicated storage nodes using application isolation feature
- Install and configure Veritas InfoScale Enterprise 7.2 or later on the nodes.
- Enable the application isolation feature in the cluster.
Enabling the feature changes the import and deport behavior. As a result, you must manually add the shared disk groups to the VCS configuration.
See the topic "Enabling the application isolation feature in CVM environments" in the Storage Foundation Cluster File System High Availability Administrator's Guide.
- Export the DAS storage from each storage node. Run the command on the node from which you are exporting the disk.
Veritas InfoScale Storage supports auto-mapping of storage. You can obtain disks directly from the output of the vxdisk -o cluster list command instead of exporting disks from each storage node.
# vxdisk export node6_disk1 node6_disk2 \
  node6_disk3 node6_disk4
# vxdisk export node7_disk1 node7_disk2 \
  node7_disk3 node7_disk4
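Before creating the disk groups, you can optionally confirm that the exported disks are visible from the other nodes in the cluster. A minimal check, run from any cluster node, using the vxdisk options referenced above (output columns vary by release):
# vxdisk -o cluster list
# vxdisk list node6_disk1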
- Identify the shared disk groups on which you want to configure the applications.
- Initialize the disk groups and create the volumes and file systems you want to use for your applications.
Run the following commands from any one of the nodes in the disk group sub-cluster. For example, if node1 and node2 belong to the sub-cluster DGSubCluster1, run the following commands from any one of the nodes: node1 or node2.
Disk group sub-cluster 1:
# vxdg -o fss -s init appdg1 node6_disk1 \
  node6_disk2 node7_disk1 node7_disk2
# vxassist -g appdg1 make appvol1 100g nmirror=2
# mkfs -t vxfs /dev/vx/rdsk/appdg1/appvol1
Disk group sub-cluster 2:
# vxdg -o fss -s init appdg2 node6_disk3 \
  node6_disk4 node7_disk3 node7_disk4
# vxassist -g appdg2 make appvol2 100g nmirror=2
# mkfs -t vxfs /dev/vx/rdsk/appdg2/appvol2
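Optionally, verify that the FSS disk groups and mirrored volumes were created as expected before moving on to the application configuration. A minimal sketch using standard VxVM listing commands:
# vxdg list
# vxprint -g appdg1
# vxprint -g appdg2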
- Configure the OCR, voting disk, and CSSD resources on all nodes in the cluster. It is recommended to have a mirror of the OCR and voting disk on each node in the cluster.
For instructions, see the section "Installation and upgrade of Oracle RAC" in the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.
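The hagrp and hares commands in the next two steps modify the cluster configuration. On a running cluster the VCS configuration is typically read-only, so it usually needs to be opened for writing first and then saved and closed when the changes are complete; a minimal sketch using standard VCS commands, not shown in the original procedure:
# haconf -makerw
(run the hagrp and hares commands in the next two steps)
# haconf -dump -makero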
- Configure application app1 on node1, node2, and node3. The following commands add the app1 service group to the VCS configuration.
# hagrp -add app1
# hagrp -modify app1 SystemList node1 0 node2 1 node3 2
# hagrp -modify app1 AutoFailOver 0
# hagrp -modify app1 Parallel 1
# hagrp -modify app1 AutoStartList node1 node2 node3
Add disk group resources to the VCS configuration.
# hares -add appdg1_voldg CVMVolDg app1
# hares -modify appdg1_voldg Critical 0
# hares -modify appdg1_voldg CVMDiskGroup appdg1
# hares -modify appdg1_voldg CVMVolume appvol1
Change the activation mode of the shared disk group to shared-write.
# hares -local appdg1_voldg CVMActivation
# hares -modify appdg1_voldg NodeList node1 node2 node3
# hares -modify appdg1_voldg CVMActivation sw
# hares -modify appdg1_voldg Enabled 1
Add the CFS mount resources for the application to the VCS configuration.
# hares -add appdata1_mnt CFSMount app1
# hares -modify appdata1_mnt Critical 0
# hares -modify appdata1_mnt MountPoint "/appdata1_mnt"
# hares -modify appdata1_mnt BlockDevice "/dev/vx/dsk/appdg1/appvol1"
# hares -local appdata1_mnt MountOpt
# hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node1
# hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node2
# hares -modify appdata1_mnt MountOpt "rw,cluster" -sys node3
# hares -modify appdata1_mnt NodeList node1 node2 node3
# hares -modify appdata1_mnt Enabled 1
Add the application's Oracle database to the VCS configuration.
# hares -add ora_app1 Oracle app1
# hares -modify ora_app1 Critical 0
# hares -local ora_app1 Sid
# hares -modify ora_app1 Sid app1_db1 -sys node1
# hares -modify ora_app1 Sid app1_db2 -sys node2
# hares -modify ora_app1 Sid app1_db3 -sys node3
# hares -modify ora_app1 Owner oracle
# hares -modify ora_app1 Home "/u02/app/oracle/dbhome"
# hares -modify ora_app1 StartUpOpt SRVCTLSTART
# hares -modify ora_app1 ShutDownOpt SRVCTLSTOP
# hares -modify ora_app1 DBName app1_db
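The steps above do not show resource dependencies or bringing the group online. In a typical SFCFS HA configuration, the Oracle resource depends on its CFS mount, which in turn depends on the disk group resource; the following sketch, based on that common layout rather than on the original procedure, links the app1 resources and brings the group online on its systems:
# hares -link appdata1_mnt appdg1_voldg
# hares -link ora_app1 appdata1_mnt
# hagrp -online app1 -any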
- Configure application app2 on node3, node4, and node5. The following commands add the app2 service group to the VCS configuration.
# hagrp -add app2
# hagrp -modify app2 SystemList node3 0 node4 1 node5 2
# hagrp -modify app2 AutoFailOver 0
# hagrp -modify app2 Parallel 1
# hagrp -modify app2 AutoStartList node3 node4 node5
Add disk group resources to the VCS configuration.
# hares -add appdg2_voldg CVMVolDg app2
# hares -modify appdg2_voldg Critical 0
# hares -modify appdg2_voldg CVMDiskGroup appdg2
# hares -modify appdg2_voldg CVMVolume appvol2
Change the activation mode of the shared disk group to shared-write.
# hares -local appdg2_voldg CVMActivation
# hares -modify appdg2_voldg NodeList node3 node4 node5
# hares -modify appdg2_voldg CVMActivation sw
# hares -modify appdg2_voldg Enabled 1
Add the CFS mount resources for the application to the VCS configuration.
# hares -add appdata2_mnt CFSMount app2
# hares -modify appdata2_mnt Critical 0
# hares -modify appdata2_mnt MountPoint "/appdata2_mnt"
# hares -modify appdata2_mnt BlockDevice "/dev/vx/dsk/appdg2/appvol2"
# hares -local appdata2_mnt MountOpt
# hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node3
# hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node4
# hares -modify appdata2_mnt MountOpt "rw,cluster" -sys node5
# hares -modify appdata2_mnt NodeList node3 node4 node5
# hares -modify appdata2_mnt Enabled 1
Add the application's Oracle database to the VCS configuration.
# hares -add ora_app2 Oracle app2
# hares -modify ora_app2 Critical 0
# hares -local ora_app2 Sid
# hares -modify ora_app2 Sid app2_db1 -sys node3
# hares -modify ora_app2 Sid app2_db2 -sys node4
# hares -modify ora_app2 Sid app2_db3 -sys node5
# hares -modify ora_app2 Owner oracle
# hares -modify ora_app2 Home "/u02/app/oracle/dbhome"
# hares -modify ora_app2 StartUpOpt SRVCTLSTART
# hares -modify ora_app2 ShutDownOpt SRVCTLSTOP
# hares -modify ora_app2 DBName app2_db
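As with app1, you would typically link the app2 resources, bring the group online, and then verify the overall cluster state; a hedged sketch using standard VCS commands:
# hares -link appdata2_mnt appdg2_voldg
# hares -link ora_app2 appdata2_mnt
# hagrp -online app2 -any
# hastatus -sum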