Veritas™ Volume Manager Administrator's Guide

Last Published:
Product(s): InfoScale & Storage Foundation (5.1 SP1)
Platform: HP-UX

Tunable parameters for VxVM

Table: Kernel tunable parameters for VxVM lists the kernel tunable parameters for VxVM.

Table: Kernel tunable parameters for VxVM


vol_checkpt_default

The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. If a system failure occurs during such an operation, a full recovery is not required; the operation can resume from the last checkpoint that was reached.

The default value is 10240 sectors (10MB).

Increasing this size reduces the checkpoint overhead of recovery operations, at the expense of more work being repeated if a system failure occurs during a recovery.
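This trade-off can be illustrated with a little arithmetic (a hypothetical helper for illustration only, not part of VxVM; it assumes 1KB sectors, consistent with the defaults quoted in this table):

```python
# Illustrative sketch of the vol_checkpt_default trade-off: a smaller
# interval means more checkpoint loads during a resynchronization; a
# larger interval means more work is repeated after a system failure.

def checkpoint_tradeoff(resync_sectors, interval_sectors):
    loads = resync_sectors // interval_sectors  # checkpoint updates issued
    worst_case_redo = interval_sectors          # sectors repeated after a crash
    return loads, worst_case_redo

# A 10GB resynchronization (10,485,760 one-KB sectors) with the default
# 10240-sector (10MB) interval issues 1024 checkpoint loads, and a failure
# costs at most 10MB of repeated work.
print(checkpoint_tradeoff(10 * 1024 * 1024, 10240))  # (1024, 10240)
```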

vol_default_iodelay

The count in clock ticks for which utilities pause if they have been directed to reduce the frequency of issuing I/O requests, but have not been given a specific delay time. This tunable is used by utilities performing operations such as resynchronizing mirrors or rebuilding RAID-5 columns.

The default value is 50 ticks.

Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed.

vol_fmr_logsz

The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed.

For example, if the volume size is 1 gigabyte and the system block size is 1024 bytes, a vol_fmr_logsz value of 4 yields a map that contains 32,768 bits, each bit representing one region of 32 blocks.

The larger the bitmap, the fewer blocks are mapped to each bit. This can reduce the amount of reading and writing required during resynchronization, at the expense of requiring more non-pageable kernel memory for the bitmap. Additionally, on clustered systems, a larger bitmap size increases the latency of I/O and the load on the private network between the cluster members, because every other member of the cluster must be informed each time a bit in the map is marked.

Since the region size must be the same on all nodes in a cluster for a shared volume, the value of this tunable on the master node overrides the tunable values on the slave nodes, if these values are different. Because the value of the tunable can change while a shared volume is in use, the value in effect for a volume is retained for the life of that volume.

In configurations that have thousands of mirrors with attached snapshot plexes, the total memory consumed by these bitmaps can be significantly higher than is usual for VxVM.

The default value is 4KB. The minimum and maximum permitted values are 1KB and 8KB.

Note:

The value of this tunable does not have any effect on Persistent FastResync.
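The region-size arithmetic in the example above can be sketched as follows (a hypothetical helper for illustration, not a VxVM interface):

```python
# How many blocks each bit of a Non-Persistent FastResync bitmap covers,
# given the volume size, the system block size, and vol_fmr_logsz.

def fmr_blocks_per_bit(volume_bytes, block_bytes, vol_fmr_logsz_kb):
    bitmap_bits = vol_fmr_logsz_kb * 1024 * 8   # bitmap size in bits
    total_blocks = volume_bytes // block_bytes  # blocks in the volume
    return total_blocks // bitmap_bits          # one region per bit

# The example from the text: a 1-gigabyte volume with 1024-byte blocks and
# vol_fmr_logsz=4 gives a 32,768-bit map, each bit covering 32 blocks.
print(fmr_blocks_per_bit(1 << 30, 1024, 4))  # 32
```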

vol_max_vol

The maximum number of volumes that can be created on the system. The minimum and maximum permitted values are 1 and the maximum number of minor numbers representable on the system.

The default value is 8388608.

vol_maxio

The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit.

The default value is 2048 sectors (2048KB).

The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.

If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.
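The two sizing constraints above can be expressed as a simple sanity check (a hypothetical helper, not a VxVM utility; all values are in sectors, and the non-default inputs in the example are illustrative):

```python
def check_vol_maxio(vol_maxio, voliomem_maxpool_sz,
                    voldrl_min_regionsz, drl_sequential=False):
    """Return a list of violated sizing constraints for vol_maxio."""
    problems = []
    # voliomem_maxpool_sz must be at least 10 times vol_maxio.
    if voliomem_maxpool_sz < 10 * vol_maxio:
        problems.append("voliomem_maxpool_sz < 10 * vol_maxio")
    # With DRL sequential logging configured, voldrl_min_regionsz must be
    # at least half of vol_maxio.
    if drl_sequential and voldrl_min_regionsz < vol_maxio / 2:
        problems.append("voldrl_min_regionsz < vol_maxio / 2")
    return problems

# Default vol_maxio of 2048 sectors, with illustrative values for the
# other two tunables: both constraints are satisfied.
print(check_vol_maxio(2048, 32768, 1024, drl_sequential=True))  # []
```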

vol_maxioctl

The maximum size of data that can be passed into VxVM via an ioctl call. Increasing this limit allows larger operations to be performed. Decreasing the limit is not generally recommended, because some utilities depend upon performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests.

The default value is 32768 bytes (32KB).

vol_maxparallelio

The number of I/O operations that the vxconfigd daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ per VOL_VOLDIO_WRITE ioctl call.

The default value is 256. This value should not be changed.

vol_maxspecialio

The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request that a large I/O be performed. This tunable limits the size of these I/O requests. If necessary, a request that exceeds this value can be failed, or the request can be broken up and performed synchronously.

The default value is 1024 sectors (1MB).

Raising this limit can cause difficulties if an I/O request requires more memory or kernel virtual mapping space than exists, which can lead to deadlock. The maximum limit for this tunable is 20% of the smaller of physical memory or kernel virtual memory. It is inadvisable to go over this limit, because deadlock is likely to occur.

If stripes are larger than the value of this tunable, full stripe I/O requests are broken up, which prevents full-stripe read/writes. This throttles the volume I/O throughput for sequential I/O or larger I/O requests.

This tunable limits the size of an I/O request at a higher level in VxVM than the level of an individual disk. For example, for an 8 by 64KB stripe, a value of 256KB only allows I/O requests that use half the disks in the stripe; thus, it cuts potential throughput in half. If you have more columns or you have used a larger interleave factor, then your relative performance is worse.

This tunable must be set, as a minimum, to the size of your largest stripe (RAID-0 or RAID-5).
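The 8 by 64KB example above can be reproduced numerically. This is a sketch under the stated assumptions (a stripe-aligned request; the function name and values are illustrative):

```python
# Sketch of the stripe example above: how many columns a single
# I/O request can span when capped at vol_maxspecialio.

def columns_covered(limit_kb, ncols, stripe_unit_kb):
    """Return (columns one request of limit_kb can touch, full
    stripe size in KB), assuming the request is stripe-aligned."""
    full_stripe_kb = ncols * stripe_unit_kb
    return min(ncols, limit_kb // stripe_unit_kb), full_stripe_kb

# 8 columns of 64KB: a 256KB cap reaches only 4 of the 8 disks,
# halving potential throughput for sequential I/O.
cols, full = columns_covered(256, 8, 64)
print(cols, full)   # 4 512
```

Setting the tunable to at least the full stripe size (512KB in this example) allows a single request to drive all eight columns.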

vol_subdisk_num

The maximum number of subdisks that can be attached to a single plex. There is no theoretical limit to this number, but it has been limited to a default value of 4096. This default can be changed, if required.

volcvm_smartsync

If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups.

The default value is 1.

voldrl_max_drtregs

The maximum number of dirty regions that can exist on the system for non-sequential DRL on volumes. A larger value may result in improved system performance at the expense of recovery time. This tunable can be used to regulate the worst-case recovery time for the system following a failure.

The default value is 2048.

voldrl_min_regionsz

The minimum number of sectors for a dirty region logging (DRL) volume region. With DRL, VxVM logically divides a volume into a set of consecutive regions. Larger region sizes tend to cause the cache hit-ratio for regions to improve. This improves the write performance, but it also prolongs the recovery time.

The default value is 512 sectors.

If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.

voliomem_chunk_size

The granularity of memory chunks used by VxVM when allocating or releasing system memory. A larger granularity reduces the CPU overhead of memory allocation by allowing VxVM to retain a larger amount of memory.

The default value is 65536 bytes (64KB).

voliomem_maxpool_sz

The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system.

VxVM allocates two pools that can grow up to this size, one for RAID-5 and one for mirrored volumes. Additional pools are allocated if instant (Copy On Write) snapshots are present.

A write request to a RAID-5 volume that is greater than one fourth of the pool size is broken up and performed in chunks of one tenth of the pool size.

A write request to a mirrored volume that is greater than the pool size is broken up and performed in chunks of the pool size.
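The two chunking rules above can be sketched as a small calculation. The function names are invented for illustration and abstract units are used in place of real byte counts:

```python
# Sketch of the write-request chunking rules described above,
# using voliomem_maxpool_sz as a plain number (example values).

def raid5_chunks(write_sz, pool_sz):
    """RAID-5 writes larger than one fourth of the pool proceed
    in chunks of one tenth of the pool size."""
    if write_sz <= pool_sz // 4:
        return [write_sz]
    chunk = pool_sz // 10
    full, rem = divmod(write_sz, chunk)
    return [chunk] * full + ([rem] if rem else [])

def mirror_chunks(write_sz, pool_sz):
    """Mirrored-volume writes larger than the pool proceed in
    pool-sized chunks."""
    if write_sz <= pool_sz:
        return [write_sz]
    full, rem = divmod(write_sz, pool_sz)
    return [pool_sz] * full + ([rem] if rem else [])

# With a pool of 100 units: a 50-unit RAID-5 write exceeds
# pool/4 (25), so it proceeds in five 10-unit chunks.
print(raid5_chunks(50, 100))    # [10, 10, 10, 10, 10]
# A 250-unit mirrored write exceeds the pool, so it proceeds
# in pool-sized chunks.
print(mirror_chunks(250, 100))  # [100, 100, 50]
```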

The default value is 134217728 (128MB).

The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.

voliot_errbuf_dflt

The default size of the buffer maintained for error tracing events. This buffer is allocated at driver load time and is not adjustable for size while VxVM is running.

The default value is 16384 bytes (16KB).

Increasing this buffer can provide storage for more error events at the expense of system memory. Decreasing the size of the buffer can result in an error not being detected via the tracing device. Applications that rely on error tracing to perform some responsive action depend on this buffer.

voliot_iobuf_default

The default size of a tracing buffer that is created when no kernel buffer size is specified as part of the trace ioctl.

The default value is 8192 bytes (8KB).

If trace data is often being lost due to this buffer size being too small, then this value can be tuned to a more generous amount.

voliot_iobuf_limit

The upper limit to the size of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store the tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool.

Increasing this size can allow additional tracing to be performed at the expense of system memory usage. Setting this value to a size greater than can readily be accommodated on the system is inadvisable.

The default value is 131072 bytes (128KB).

voliot_iobuf_max

The maximum buffer size that can be used for a single trace buffer. Requests of a buffer larger than this size are silently truncated to this size. A request for a maximal buffer size from the tracing interface results (subject to limits of usage) in a buffer of this size.

The default value is 65536 bytes (64KB).

Increasing this buffer can provide for larger traces to be taken without loss for very heavily used volumes.

Care should be taken not to increase this value above the value of the voliot_iobuf_limit tunable.

voliot_max_open

The maximum number of tracing channels that can be open simultaneously. Tracing channels are clone entry points into the tracing device driver. Each vxtrace process running on a system consumes a single trace channel.

The default number of channels is 32.

The allocation of each channel takes up approximately 20 bytes even when the channel is not in use.

volpagemod_max_memsz

The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata.

The default value is 65536k (64MB).

The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.

Setting the value below 512KB fails if cache objects or volumes that have been prepared for instant snapshot operations are present on the system.

If you do not use the FastResync or DRL features that are implemented using a version 20 DCO volume, the value can be set to 0. However, if you subsequently decide to enable these features, you can use the vxtune command to change the value to a more appropriate one:

# vxtune volpagemod_max_memsz value

where the new value is specified in kilobytes. Using the vxtune command to adjust the value of volpagemod_max_memsz does not persist across system reboots unless you also adjust the value that is configured in the /stand/system file.

volraid_rsrtransmax

The maximum number of transient reconstruct operations that can be performed in parallel for RAID-5. A transient reconstruct operation is one that occurs on a non-degraded RAID-5 volume that has not been predicted. Limiting the number of these operations that can occur simultaneously removes the possibility of flooding the system with many reconstruct operations, and so reduces the risk of causing memory starvation.

The default value is 1.

Increasing this size improves the initial performance on the system when a failure first occurs and before a detach of a failing object is performed, but can lead to memory starvation.