InfoScale™ 9.0 Solutions Guide - Linux
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Solutions for Veritas InfoScale products
- Section III. Stack-level migration to IPv6 or dual stack
- Section IV. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Concurrent I/O
- Improving database performance with atomic write I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section VI. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- Optimizing storage with Flexible Storage Sharing
- Section VII. Migrating data
- Understanding data migration
- Offline migration from LVM to VxVM
- Offline conversion of native file system to VxFS
- Online migration of a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v4
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Displaying information
- File system considerations
- Specifying the migration target
- Using the fscdsadm command
- Maintaining the list of target operating systems
- Migrating a file system on an ongoing basis
- Converting the byte order of a file system
- Migrating from Oracle ASM to Veritas File System
- Section VIII. Veritas InfoScale 4K sector device support solution
- Section IX. REST API support
- Support for configurations and operations using REST APIs
- Section X. Reference
About Flexible Storage Sharing
Flexible Storage Sharing (FSS) enables network sharing of local storage, cluster wide. The local storage can be in the form of Direct Attached Storage (DAS) or internal disk drives. Network shared storage is enabled by using a network interconnect between the nodes of a cluster.
FSS allows network shared storage to co-exist with physically shared storage, and logical volumes can be created using both types of storage to form a common storage namespace. Logical volumes that use network shared storage provide data redundancy, high availability, and disaster recovery capabilities without requiring physically shared storage, and they are transparent to file systems and applications.
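For example, a minimal sketch of bringing local DAS disks into a common namespace might look like the following. The disk, disk group, and volume names (disk0_1, fssdg, vol1) are placeholders, and the exact options may vary by release; see the Storage Foundation Cluster File System High Availability Administrator's Guide for the supported syntax.

  # vxdisk export disk0_1
  # vxdg -s -o fss=on init fssdg disk0_1
  # vxassist -g fssdg make vol1 10g nmirror=2

The vxdisk export command makes the local disk network shared across the cluster, vxdg creates a shared disk group with the FSS attribute enabled, and vxassist creates a mirrored volume whose plexes can reside on local or network shared disks.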
FSS allows you to grow existing storage by growing the LUN, which is termed Dynamic LUN Expansion (DLE). To use the DLE feature, invoke the vxdisk resize command with the length option from the master node in an FSS configuration. Alternatively, you can use the Management Server console of InfoScale Operations Manager to resize a disk, which internally invokes this command. The command detects remote disks and runs the required protocol to complete the LUN expansion; you do not need to specify any additional options, and no explicit master switching is required.
Note:
Do not resize an LVM disk or any other disk on which the OS is installed.
You can leverage DLE to grow the storage in cloud environments.
For details on growing the existing storage by growing the LUN, refer to the platform-specific Storage Foundation 9.0 Administrator's Guide.
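For illustration, a typical DLE operation run from the CVM master node might look like the following; the disk group name fssdg, the disk name disk0_1, and the new length are placeholders.

  # vxdctl -c mode
  # vxdisk -g fssdg resize disk0_1 length=200g

The vxdctl -c mode output confirms whether the current node is the master; the vxdisk resize command then grows the disk to the new length, handling any remote disks in the FSS configuration automatically.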
FSS can be used with SmartIO technology for remote caching, to provide caching services to nodes that may not have local SSDs.
FSS is supported on clusters of up to 64 nodes that run CVM protocol version 140 or later.
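You can verify the cluster protocol version with the vxdctl protocolversion command and, if the cluster is running an older protocol, upgrade it with vxdctl upgrade; the output shown here is illustrative.

  # vxdctl protocolversion
  Cluster running at protocol 140
  # vxdctl upgrade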
The following figure depicts an FSS environment.