InfoScale™ 9.0 Storage and Availability Management for DB2 Databases - AIX, Linux

Product(s): InfoScale & Storage Foundation (9.0)
Platform: AIX, Linux

About tuning AIX Virtual Memory Manager

If you are using either Cached Quick I/O or buffered I/O (that is, plain VxFS files without Quick I/O or related mount options specified), it is recommended that you monitor paging activity to the swap device on your database servers. To monitor swap device paging, use the vmstat -I command. Swap device paging information appears in the vmstat -I output under the columns labeled pi and po (pages paged in from and paged out to the swap device, respectively). Any nonzero value in these columns indicates swap device paging activity.

For example:

# /usr/bin/vmstat -I
  kthr       memory                 page                 faults           cpu
-------- --------------- -------------------------- ----------------- -----------
 r  b  p     avm     fre   fi fo  pi  po   fr    sr    in    sy    cs us sy id wa
 5  1  0  443602 1566524  661 20   0   0    7    28  4760 37401  7580 11  7 43 38
 1  1  0  505780 1503791   18  6   0   0    0     0  1465  5176   848  1  1 97  1
 1  1  0  592093 1373498 1464  1   0   0    0     0  4261 10703  7154  5  5 27 62
 3  0  0  682693 1165463 3912  2   0   0    0     0  7984 19117 15672 16 13  1 70
 4  0  0  775730  937562 4650  0   0   0    0     0 10082 24634 20048 22 15  0 63
 6  0  0  864097  715214 4618  1   0   0    0     0  9762 26195 19666 23 16  1 61
 5  0  0  951657  489668 4756  0   0   0    0     0  9926 27601 20116 24 15  1 60
 4  1  0 1037864  266164 4733  5   0   0    0     0  9849 28748 20064 25 15  1 59
 4  0  0 1122539   47155 4476  0   0   0    0     0  9473 29191 19490 26 16  1 57
 5  4  0 1200050     247 4179  4  70 554 5300 27420 10793 31564 22500 30 18  1 52
 6 10  0 1252543      98 2745  0 138 694 4625 12406 16190 30373 31312 35 14  2 49
 7 14  0 1292402     220 2086  0 153 530 3559 17661 21343 32946 40525 43 12  1 44
 7 18  0 1319988     183 1510  2 130 564 2587 14648 21011 28808 39800 38  9  3 49
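
Multi-line output like this is typically produced by sampling at an interval; the interval and count below are arbitrary illustrative values, not taken from this document:

# /usr/bin/vmstat -I 5 12

This form prints one line every 5 seconds for 12 samples. The first line reports averages since boot, so judge paging activity from the later lines.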

In the sample output above, the nonzero pi and po values in the last four intervals show swap device paging. If there is evidence of swap device paging, proper AIX Virtual Memory Manager (VMM) tuning is required to improve database performance. VMM tuning limits the number of memory pages allocated to the file system cache. This prevents the file system cache from stealing memory pages from applications (which causes swap device page-outs) when the VMM is running low on free memory pages.

The command to tune the AIX VMM subsystem is:

# /usr/samples/kernel/vmtune

Changes made by vmtune last only until the next system reboot. The VMM kernel parameters to tune are maxperm, maxclient, and minperm. The maxperm and maxclient parameters specify the maximum amount of memory (as a percentage of total memory) that can be used for file system caching. This maximum should not exceed the amount of memory left unused by the AIX kernel and all active applications, so it can be calculated as:

100 * (T - A) / T

where T is the total number of memory pages in the system and A is the maximum number of memory pages used by all active applications.
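
For example, consider a hypothetical system with 4 GB of RAM, that is, T = 1048576 pages of 4 KB, whose active applications use at most A = 786432 pages (both values are illustrative only):

100 * (1048576 - 786432) / 1048576 = 100 * 262144 / 1048576 = 25

In this case, maxperm and maxclient should be set no higher than 25 percent. On AIX levels where vmtune has been superseded by the vmo command (an assumption about the AIX level; this document describes vmtune), the equivalent persistent change would be:

# vmo -p -o maxperm%=25 -o maxclient%=25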

The minperm parameter should be set to a value that is less than or equal to maxperm, but greater than or equal to 5.

For more information on AIX VMM tuning, see the vmtune(1) manual page and the performance management documentation provided with AIX.

The following is a tunable VxFS I/O parameter:

VMM Buffer Count (-b <value> option)

Sets the virtual memory manager (VMM) buffer count. There are two values for the VMM: a default value based on the amount of memory, and a current value. You can display these two values using vxtunefs -b. Initially, the default value and the current value are the same. The -b <value> option specifies an increase, from zero to 100 percent, in the VMM buffer count from its default. The specified value is saved in the file /etc/vx/vxfssystem to make it persistent across VxFS module loads and system reboots.
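
For example, to check the buffer counts and then raise the current value (the 50 below is an arbitrary illustrative increase, not a recommendation):

# vxtunefs -b
# vxtunefs -b 50

The first command displays the default and current VMM buffer counts; the second sets the current count 50 percent above the default and records the value in /etc/vx/vxfssystem.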

In most instances, the default value gives good performance, but the kernel maintains counters that you can monitor to determine whether threads are waiting for VMM buffers. If there appears to be a performance issue related to VMM, increase the buffer count. If system response time then improves, it is a good indication that the VMM buffer count was a bottleneck.

The following fields displayed by the kdb vmker command can be useful in determining bottlenecks.

THRPGIO buf wait (_waitcnt) value

This field may indicate that there were no VMM buffers available for a pagein or pageout; the thread was blocked waiting for a VMM buffer to become available. The count is the total number of such waits since cold load. This field, together with the pages "paged in" and pages "paged out" counts displayed by the kdb vmstat command, can be used to determine whether there is an adequate number of VMM buffers. The ratio:

waitcnt / (pageins + pageouts)

is an indicator of waits for VMM buffers, but it cannot be exact because pageins + pageouts includes page I/Os to other file systems and to paging space. It is not possible to give a typical value for this ratio because it depends on the amount of memory and on page I/Os to file systems other than VxFS. A value greater than 0.1 may indicate a VMM buffer count bottleneck (a worked example follows the list below). Other relevant fields displayed by kdb vmker are:

  • THRPGIO partial cnt (_partialcnt) value

    This field indicates that page I/O was done in two or more steps because fewer VMM buffers were available than the number of pages requiring I/O.

  • THRPGIO full cnt (_fullcnt) value

    This field indicates that VMM buffers were found for all the pages requiring I/O.
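
To apply the waitcnt ratio described above, divide the _waitcnt value by the sum of the page-in and page-out counts from kdb vmstat. The counter values below are hypothetical, entered by hand after reading them from kdb:

# awk 'BEGIN { waitcnt = 12000; pageio = 90000; printf "%.2f\n", waitcnt / pageio }'
0.13

A result above the 0.1 guideline would suggest trying a larger VMM buffer count with vxtunefs -b.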