InfoScale™ 9.0 Storage and Availability Management for DB2 Databases - AIX, Linux

Product(s): InfoScale & Storage Foundation (9.0)
Platform: AIX, Linux
  1. Section I. Storage Foundation High Availability (SFHA) management solutions for DB2 databases
    1. Overview of Storage Foundation for Databases
      1. Introducing Storage Foundation High Availability (SFHA) Solutions for DB2
      2. About Veritas File System
        1. About the Veritas File System intent log
        2. About extents
        3. About file system disk layouts
      3. About Volume Manager
      4. About Dynamic Multi-Pathing (DMP)
      5. About Cluster Server
      6. About Cluster Server agents
      7. About InfoScale Operations Manager
      8. Feature support for DB2 across Veritas InfoScale 9.0 products
      9. Use cases for Veritas InfoScale products
  2. Section II. Deploying DB2 with InfoScale products
    1. Deployment options for DB2 in a Storage Foundation environment
      1. DB2 deployment options in a Veritas InfoScale environment
      2. DB2 on a single system with Storage Foundation
      3. DB2 on a single system with off-host in a Storage Foundation environment
      4. DB2 in a highly available cluster with Storage Foundation High Availability
      5. DB2 in a parallel cluster with SF Cluster File System HA
      6. Deploying DB2 and Storage Foundation in a virtualization environment
      7. Deploying DB2 with Storage Foundation SmartMove and Thin Provisioning
    2. Deploying DB2 with Storage Foundation
      1. Tasks for deploying DB2 databases
      2. About selecting a volume layout for deploying DB2
      3. Setting up disk group for deploying DB2
        1. Disk group configuration guidelines for deploying DB2
      4. Creating volumes for deploying DB2
        1. Volume configuration guidelines for deploying DB2
      5. Creating VxFS file system for deploying DB2
        1. File system creation guidelines for deploying DB2
      6. Mounting the file system for deploying DB2
      7. Installing DB2 and creating database
    3. Deploying DB2 in an off-host configuration with Storage Foundation
      1. Requirements for an off-host database configuration
    4. Deploying DB2 with High Availability
      1. Tasks for deploying DB2 in an HA configuration
      2. Configuring VCS to make the database highly available
  3. Section III. Configuring Storage Foundation for Database (SFDB) tools
    1. Configuring and managing the Storage Foundation for Databases repository database
      1. About the Storage Foundation for Databases (SFDB) repository
      2. Requirements for Storage Foundation for Databases (SFDB) tools
      3. Storage Foundation for Databases (SFDB) tools availability
      4. Configuring the Storage Foundation for Databases (SFDB) tools repository
        1. Locations for the SFDB repository
      5. Updating the Storage Foundation for Databases (SFDB) repository after adding a node
      6. Updating the Storage Foundation for Databases (SFDB) repository after removing a node
      7. Removing the Storage Foundation for Databases (SFDB) repository
    2. Configuring authentication for Storage Foundation for Databases (SFDB) tools
      1. Configuring vxdbd for SFDB tools authentication
      2. Adding nodes to a cluster that is using authentication for SFDB tools
      3. Authorizing users to run SFDB commands
  4. Section IV. Improving DB2 database performance
    1. About database accelerators
      1. About Arctera InfoScale™ product components database accelerators
    2. Improving database performance with Quick I/O
      1. About Quick I/O
        1. How Quick I/O improves database performance
      2. Tasks for setting up Quick I/O in a database environment
      3. Preallocating space for Quick I/O files using the setext command
      4. Accessing regular VxFS files as Quick I/O files
      5. Converting DB2 containers to Quick I/O files
      6. About sparse files
      7. Displaying Quick I/O status and file attributes
      8. Extending a Quick I/O file
      9. Monitoring tablespace free space with DB2 and extending tablespace containers
      10. Recreating Quick I/O files after restoring a database
      11. Disabling Quick I/O
    3. Improving DB2 database performance with Veritas Concurrent I/O
      1. About Concurrent I/O
        1. How Concurrent I/O works
      2. Tasks for enabling and disabling Concurrent I/O
        1. Enabling Concurrent I/O for DB2
        2. Disabling Concurrent I/O for DB2
  5. Section V. Using point-in-time copies
    1. Understanding point-in-time copy methods
      1. About point-in-time copies
      2. When to use point-in-time copies
      3. About Storage Foundation point-in-time copy technologies
      4. Point-in-time copy solutions supported by SFDB tools
      5. About snapshot modes supported by Storage Foundation for Databases (SFDB) tools
      6. Volume-level snapshots
        1. Persistent FastResync of volume snapshots
        2. Data integrity in volume snapshots
        3. Third-mirror break-off snapshots
      7. Storage Checkpoints
        1. How Storage Checkpoints differ from snapshots
        2. How a Storage Checkpoint works
          1. Copy-on-write
          2. Storage Checkpoint visibility
            1. Storage Checkpoints and 64-bit inode numbers
        3. About Database Rollbacks using Storage Checkpoints
        4. Storage Checkpoints and Rollback process
        5. Storage Checkpoint space management considerations
    2. Considerations for DB2 point-in-time copies
      1. Considerations for DB2 database layouts
      2. Supported DB2 configurations
    3. Administering third-mirror break-off snapshots
      1. Database FlashSnap for cloning
        1. Database FlashSnap advantages
      2. Preparing hosts and storage for Database FlashSnap
        1. Setting up hosts
          1. Database FlashSnap off-host configuration
        2. Creating a snapshot mirror of a volume or volume set used by the database
      3. Creating a clone of a database by using Database FlashSnap
      4. Resynchronizing mirror volumes with primary volumes
      5. Cloning a database on the secondary host
    4. Administering Storage Checkpoints
      1. About Storage Checkpoints
      2. Database Storage Checkpoints for recovery
        1. Advantages and limitations of Database Storage Checkpoints
      3. Creating a Database Storage Checkpoint
      4. Deleting a Database Storage Checkpoint
      5. Mounting a Database Storage Checkpoint
      6. Unmounting a Database Storage Checkpoint
      7. Creating a database clone using a Database Storage Checkpoint
      8. Restoring database from a Database Storage Checkpoint
      9. Gathering data for offline-mode Database Storage Checkpoints
    5. Backing up and restoring with NetBackup in an SFHA environment
      1. About Veritas NetBackup
      2. About using Veritas NetBackup for backup and restore for DB2
      3. Using NetBackup in an SFHA Solutions product environment
        1. Clustering a NetBackup Master Server
        2. Backing up and recovering a VxVM volume using NetBackup
        3. Recovering a VxVM volume using NetBackup
  6. Section VI. Optimizing storage costs for DB2
    1. Understanding storage tiering with SmartTier
      1. About SmartTier
        1. About VxFS multi-volume file systems
        2. About VxVM volume sets
        3. About volume tags
        4. SmartTier file management
        5. SmartTier sub-file object management
      2. SmartTier in a High Availability (HA) environment
    2. SmartTier use cases for DB2
      1. SmartTier use cases for DB2
      2. Relocating old archive logs to tier two storage using SmartTier
      3. Relocating inactive tablespaces or segments to tier two storage
      4. Relocating active indexes to premium storage
      5. Relocating all indexes to premium storage
  7. Section VII. Storage Foundation for Databases administrative reference
    1. Storage Foundation for Databases command reference
      1. vxsfadm command reference
      2. FlashSnap reference
        1. FlashSnap configuration parameters
        2. FlashSnap supported operations
      3. Database Storage Checkpoints reference
        1. Database Storage Checkpoints configuration parameters
        2. Database Storage Checkpoints supported operations
    2. Tuning for Storage Foundation for Databases
      1. Additional documentation
      2. About tuning Veritas Volume Manager (VxVM)
        1. About obtaining volume I/O statistics
      3. About tuning VxFS
        1. How monitoring free space works
          1. About monitoring fragmentation
        2. How tuning VxFS I/O parameters works
        3. About tunable VxFS I/O parameters
        4. About obtaining file I/O statistics using the Quick I/O interface
        5. About I/O statistics data
        6. About I/O statistics
      4. About tuning DB2 databases
        1. DB2_USE_PAGE_CONTAINER_TAG
        2. DB2_PARALLEL_IO
        3. PREFETCHSIZE and EXTENTSIZE
        4. INTRA_PARALLEL
        5. NUM_IOCLEANERS
        6. NUM_IOSERVERS
        7. CHNGPGS_THRESH
        8. Table scans
        9. Asynchronous I/O
        10. Buffer pools
        11. Memory allocation
        12. TEMPORARY tablespaces
        13. DMS containers
        14. Data, indexes, and logs
        15. Database statistics
      5. About tuning AIX Virtual Memory Manager
    3. Troubleshooting SFDB tools
      1. About troubleshooting Storage Foundation for Databases (SFDB) tools
        1. Running scripts for engineering support analysis for SFDB tools
        2. Storage Foundation for Databases (SFDB) tools log files
      2. About the vxdbd daemon
        1. Starting and stopping vxdbd
        2. Configuring listening port for the vxdbd daemon
        3. Limiting vxdbd resource usage
        4. Configuring encryption ciphers for vxdbd
      3. Troubleshooting vxdbd
      4. Resources for troubleshooting SFDB tools
        1. SFDB logs
        2. SFDB error messages
        3. SFDB repository and repository files
      5. Upgrading Storage Foundation for Databases (SFDB) tools from 5.0.x to 9.0 (2184482)

About tunable VxFS I/O parameters

The following are tunable VxFS I/O parameters:

read_pref_io

The preferred read request size. The file system uses this parameter in conjunction with the read_nstream value to determine how much data to read ahead. The default value is 64K.

write_pref_io

The preferred write request size. The file system uses this parameter in conjunction with the write_nstream value to determine how to do flush behind on writes. The default value is 64K.

read_nstream

The number of parallel read requests of size read_pref_io that you can have outstanding at one time. The file system uses the product of read_nstream multiplied by read_pref_io to determine its read ahead size. The default value for read_nstream is 1.

write_nstream

The number of parallel write requests of size write_pref_io that you can have outstanding at one time. The file system uses the product of write_nstream multiplied by write_pref_io to determine when to do flush behind on writes. The default value for write_nstream is 1.
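
For example, the preferred request-size and parallelism parameters above can be inspected and changed on a mounted VxFS file system with the vxtunefs command. The mount point /db01 and the values shown are illustrations only, and the option syntax can differ between releases, so check the vxtunefs(1M) manual page on your system:

# vxtunefs /db01
# vxtunefs -o read_pref_io=65536 /db01
# vxtunefs -o read_nstream=4 /db01

Values set this way apply to the mounted file system; to have them reapplied at mount time, they also need to be recorded in a tunefstab file (see the sketch after the write_throttle description).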

discovered_direct_iosz

Any file I/O requests larger than the discovered_direct_iosz are handled as discovered direct I/O. A discovered direct I/O is unbuffered, similar to direct I/O, but does not require a synchronous commit of the inode when the file is extended or blocks are allocated. For larger I/O requests, the CPU time for copying the data into the page cache and the cost of using memory to buffer the I/O data become more expensive than the cost of doing the disk I/O. For these I/O requests, using discovered direct I/O is more efficient than regular I/O. The default value of this parameter is 256K.

initial_extent_size

Changes the default initial extent size. VxFS determines the size of the first extent to be allocated to the file based on the first write to a new file. Normally, the first extent is the smallest power of 2 that is larger than the size of the first write. If that power of 2 is less than 8K, the first extent allocated is 8K. After the initial extent, the file system increases the size of subsequent extents (see max_seqio_extent_size) with each allocation. Since most applications write to files using a buffer size of 8K or less, the increasing extents start doubling from a small initial extent. initial_extent_size can change the default initial extent size to be larger, so the doubling policy starts from a much larger initial size and the file system does not allocate a set of small extents at the start of the file. Use this parameter only on file systems that will have a very large average file size. On these file systems, it results in fewer extents per file and less fragmentation. initial_extent_size is measured in file system blocks.
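
As a sketch of how the units work: because initial_extent_size is given in file system blocks, on a file system with a 1K block size (an assumed value for this illustration) a setting of 8192 makes the first extent 8 MB. The mount point below is also hypothetical:

# vxtunefs -o initial_extent_size=8192 /db01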

max_direct_iosz

The maximum size of a direct I/O request that will be issued by the file system. If a larger I/O request comes in, then it is broken up into max_direct_iosz chunks. This parameter defines how much memory an I/O request can lock at once, so it should not be set to more than 20 percent of memory.

max_diskq

Limits the maximum disk queue generated by a single file. When the file system is flushing data for a file and the number of pages being flushed exceeds max_diskq, processes will block until the amount of data being flushed decreases. Although this does not limit the actual disk queue, it prevents flushing processes from making the system unresponsive. The default value is 1MB.

max_seqio_extent_size

Increases or decreases the maximum size of an extent. When the file system is following its default allocation policy for sequential writes to a file, it allocates an initial extent that is large enough for the first write to the file. When additional extents are allocated, they are progressively larger (the algorithm tries to double the size of the file with each new extent) so each extent can hold several writes' worth of data. This is done to reduce the total number of extents in anticipation of continued sequential writes. When the file stops being written, any unused space is freed for other files to use. Normally, this allocation stops increasing the size of extents at 2048 blocks, which prevents one file from holding too much unused space. max_seqio_extent_size is measured in file system blocks.

qio_cache_enable

Enables or disables caching on Quick I/O files. The default behavior is to disable caching. To enable caching, set qio_cache_enable to 1. On systems with large memories, the database cannot always use all of the memory as a cache. By enabling file system caching as a second-level cache, performance may be improved. If the database is performing sequential scans of tables, the scans may run faster by enabling file system caching so the file system will perform aggressive read-ahead on the files.
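
For instance, to let the file system act as a second-level cache for Quick I/O files on a hypothetical mount point /db01, and later to restore the default behavior (option syntax may vary by release; see vxtunefs(1M)):

# vxtunefs -o qio_cache_enable=1 /db01
# vxtunefs -o qio_cache_enable=0 /db01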

write_throttle

The write_throttle parameter is useful in special situations where a computer system has a combination of a large amount of memory and slow storage devices. In this configuration, sync operations (such as fsync()) may take so long to complete that the system appears to hang. This behavior occurs because the file system is creating dirty pages (in-memory updates) faster than they can be asynchronously flushed to disk without slowing system performance.

Lowering the value of write_throttle limits the number of dirty pages per file that a file system will generate before flushing the pages to disk. After the number of dirty pages for a file reaches the write_throttle threshold, the file system starts flushing pages to disk even if free memory is still available. The default value of write_throttle typically generates a large number of dirty pages, but maintains fast user writes. Depending on the speed of the storage device, if you lower write_throttle, user write performance may suffer, but the number of dirty pages is limited, so sync operations complete much faster.

Because lowering write_throttle can delay write requests (for example, lowering write_throttle may increase the file disk queue to the max_diskq value, delaying user writes until the disk queue decreases), it is recommended that you avoid changing the value of write_throttle unless your system has a combination of a large amount of physical memory and slow storage devices.
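
The following sketch lowers write_throttle on a mounted file system and records the same setting in a tunefstab file so that it is reapplied when the file system is mounted. The mount point, the device name, the value 1024 (in pages), and the /etc/vx/tunefstab path are assumptions for illustration; see the vxtunefs(1M) and tunefstab(4) manual pages for the exact syntax on your release:

# vxtunefs -o write_throttle=1024 /db01
# cat /etc/vx/tunefstab
/dev/vx/dsk/db2dg/db01vol write_throttle=1024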

If the file system is being used with VxVM, it is recommended that you set the VxFS I/O parameters to default values based on the volume geometry.

If the file system is being used with a hardware disk array or volume manager other than VxVM, align the parameters to match the geometry of the logical disk. With striping or RAID-5, it is common to set read_pref_io to the stripe unit size and read_nstream to the number of columns in the stripe. For striping arrays, use the same values for write_pref_io and write_nstream, but for RAID-5 arrays, set write_pref_io to the full stripe size and write_nstream to 1.
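
As a worked example of these guidelines, consider a striped volume built with a 64K stripe unit and four columns (an illustrative geometry): the stripe-aligned settings would be read_pref_io=65536 and read_nstream=4, with write_pref_io and write_nstream set to the same values. For a RAID-5 volume whose full stripe is 256K, the write parameters would instead be write_pref_io=262144 and write_nstream=1, while the read parameters still follow the stripe unit size and the number of columns.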

For an application to do efficient disk I/O, it should issue read requests that are equal to the product of read_nstream multiplied by read_pref_io. Generally, any multiple or factor of read_nstream multiplied by read_pref_io should be a good size for performance. For writing, the same rule of thumb applies to the write_pref_io and write_nstream parameters. When tuning a file system, the best thing to do is try out the tuning parameters under a real-life workload.
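
For example, with read_pref_io set to 64K and read_nstream set to 4, read requests of 256K (4 x 64K), multiples of 256K, or factors such as 128K or 64K line up well with the file system's read ahead; the same arithmetic applied to write_pref_io and write_nstream gives good write request sizes.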

If an application is doing sequential I/O to large files, it should issue requests larger than the discovered_direct_iosz. This causes the I/O requests to be performed as discovered direct I/O requests, which are unbuffered like direct I/O but do not require synchronous inode updates when extending the file. If the file is too large to fit in the cache, then using unbuffered I/O avoids throwing useful data out of the cache and lessens CPU overhead.
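
For example, with the default discovered_direct_iosz of 256K, an application that reads a large DB2 container sequentially in 1 MB requests goes through the discovered direct I/O path and bypasses the page cache, while the same application issuing 64K requests is buffered and benefits from read ahead instead.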