InfoScale™ 9.0 Storage and Availability Management for DB2 Databases - AIX, Linux
- Section I. Storage Foundation High Availability (SFHA) management solutions for DB2 databases
- Overview of Storage Foundation for Databases
- About Veritas File System
- Section II. Deploying DB2 with InfoScale products
- Deployment options for DB2 in a Storage Foundation environment
- Deploying DB2 with Storage Foundation
- Deploying DB2 in an off-host configuration with Storage Foundation
- Deploying DB2 with High Availability
- Section III. Configuring Storage Foundation for Database (SFDB) tools
- Configuring and managing the Storage Foundation for Databases repository database
- Configuring the Storage Foundation for Databases (SFDB) tools repository
- Configuring authentication for Storage Foundation for Databases (SFDB) tools
- Section IV. Improving DB2 database performance
- About database accelerators
- Improving database performance with Quick I/O
- About Quick I/O
- Improving DB2 database performance with Veritas Concurrent I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Volume-level snapshots
- Storage Checkpoints
- Considerations for DB2 point-in-time copies
- Administering third-mirror break-off snapshots
- Administering Storage Checkpoints
- Database Storage Checkpoints for recovery
- Backing up and restoring with NetBackup in an SFHA environment
- Section VI. Optimizing storage costs for DB2
- Section VII. Storage Foundation for Databases administrative reference
- Storage Foundation for Databases command reference
- Tuning for Storage Foundation for Databases
- Troubleshooting SFDB tools
Volume configuration guidelines for deploying DB2
Follow these guidelines when selecting volume layouts.
Put the database log files on a file system created on a striped and mirrored (RAID-0+1) volume, separate from the index or data tablespaces. Stripe across multiple devices to create larger volumes if needed. Use mirroring to improve reliability. Do not use a VxVM RAID-5 layout for the database logs.
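As a minimal sketch, such a log volume might be created with the vxassist command; the disk group name db2dg, the volume name, and the 2g size are hypothetical placeholders:

    # Create a striped, mirrored (RAID-0+1) volume for the DB2 log files.
    # db2dg, db2logvol, and the 2g size are placeholders for your environment.
    vxassist -g db2dg make db2logvol 2g layout=mirror-stripe ncol=4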
When normal system availability is acceptable, put the tablespaces on file systems created on striped volumes for most OLTP workloads.
Create striped volumes across at least four disks. Try to stripe across disk controllers.
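For example, a four-column striped volume for tablespace containers could be created along these lines; the disk group, volume name, size, and disk names are hypothetical:

    # Create a volume striped across four disks, naming the disks
    # explicitly so the columns can be placed on separate controllers.
    vxassist -g db2dg make db2datavol 10g layout=stripe ncol=4 \
        db2dg01 db2dg02 db2dg03 db2dg04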
For sequential scans, ensure that the NUM_IOSERVERS database configuration parameter and the DB2_PARALLEL_IO registry variable are tuned to match the number of disk devices in the stripe.
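For instance, with a four-column stripe the two settings might be aligned as follows; the database name mydb and the value 4 are illustrative, and the registry variable takes effect only after the instance is restarted:

    # Match the number of I/O servers to the number of stripe columns.
    db2 update db cfg for mydb using NUM_IOSERVERS 4

    # Tell DB2 to assume 4 disks per container for all tablespaces.
    db2set DB2_PARALLEL_IO=*:4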
For most workloads, use the default 64 KB stripe-unit size for striped volumes.
When system availability is critical, use mirroring for most write-intensive OLTP workloads. Turn on Dirty Region Logging (DRL) to allow fast volume resynchronization in the event of a system crash.
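As an illustration, DRL can be requested when the mirrored volume is created, or added to an existing volume afterward; the names below are hypothetical:

    # Create a striped, mirrored volume with a DRL log from the start.
    vxassist -g db2dg make db2datavol 10g layout=mirror-stripe ncol=4 logtype=drl

    # Or add a DRL log to an existing mirrored volume.
    vxassist -g db2dg addlog db2datavol logtype=drl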
For most decision support system (DSS) workloads, where sequential scans are common, experiment with different striping strategies and stripe-unit sizes. Put the most frequently accessed tables or tables that are accessed together on separate striped volumes to improve the bandwidth of data transfer.
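For example, one such experiment might move a scan-heavy table onto its own volume with a larger stripe unit; the 256 KB value and the names are illustrative starting points, not tuned recommendations:

    # Try a larger stripe unit for sequential-scan-heavy DSS tables.
    vxassist -g db2dg make db2dssvol 20g layout=stripe ncol=4 stripeunit=256k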