Veritas™ Volume Manager Administrator's Guide
Upgrading the array controller firmware online
Storage array subsystems periodically require controller code (firmware) upgrades to apply fixes, patches, or new features. You can perform these upgrades online, while the file system is mounted and I/O is being served to the storage.
Legacy storage subsystems contain two controllers for redundancy, and an online upgrade is performed one controller at a time. DMP fails over all I/O to the second controller while the first controller undergoes the Online Controller Upgrade. After the first controller has completely staged the new code, it reboots, resets, and comes online running the new version. The second controller then goes through the same process, with I/O failed over to the upgraded first controller.
Note:
Throughout this process, application I/O is not affected.
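Before you begin the upgrade, you can confirm that DMP sees both controllers as enabled and that it has healthy paths through each of them. The following commands are a suggested check; the enclosure name emc_clariion0 is only an example:
# vxdmpadm listctlr all
# vxdmpadm getsubpaths enclosure=emc_clariion0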
Array vendors have different names for this process. For example, EMC calls it a nondisruptive upgrade (NDU) for CLARiiON arrays.
A/A type arrays require no special handling during this online upgrade process. For A/P, A/PF, and ALUA type arrays, DMP performs array-specific handling through vendor-specific array policy modules (APMs) during an online controller code upgrade.
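For example, you can display the APMs that are currently installed and active on the system with the following command. The module that matches your array type provides the array-specific handling during the upgrade:
# vxdmpadm listapm all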
When a controller resets and reboots during a code upgrade, DMP detects this state through the SCSI status returned on its paths and immediately fails over all I/O to the other controller.
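If you want to observe the failover while a controller is being upgraded, you can gather and display per-controller I/O statistics. The controller name c2 is only an example:
# vxdmpadm iostat start
# vxdmpadm iostat show ctlr=c2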
If the array does not fully support NDU, all paths to the controllers may be unavailable for I/O for a short period of time. Before beginning the upgrade, set the dmp_lun_retry_timeout tunable to a period greater than the time that you expect the controllers to be unavailable for I/O. DMP retries the I/Os until the end of the dmp_lun_retry_timeout period, or until the I/O succeeds, whichever happens first. Therefore, you can perform the firmware upgrade without interrupting the application I/Os.
For example, if you expect the paths to be unavailable for I/O for 300 seconds, use the following command:
# vxdmpadm settune dmp_lun_retry_timeout=300
DMP retries the I/Os for 300 seconds, or until the I/O succeeds.
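Before you change the tunable, you may want to record its current value so that you can restore it after the upgrade completes:
# vxdmpadm gettune dmp_lun_retry_timeout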
To verify which arrays support Online Controller Upgrade or NDU, see the hardware compatibility list (HCL) at the following URL: