Veritas InfoScale™ 7.1 Release Notes - AIX
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.1
- Changes related to Veritas Cluster Server
- Changes in the Veritas Cluster Server Engine
- Changes related to installation and upgrades
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Dynamic Multi-Pathing
- Changes related to Replication
- Changes related to Operating System
- Not supported in this release
- Changes related to Veritas Cluster Server
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Operational issues for VCS
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Issues related to installation and upgrade
- Software Limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Virtualizing shared storage using VIO servers and client partitions
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Limitations related to bundled agents
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation software limitations
- Documentation
Accessing the same LUNs from Client Partitions on different Central Electronics Complex (CEC) modules
This section briefly outlines how to set up shared storage so that it is visible from client partitions on different CEC modules.
With the VIO server and client partitions set up and ready, make sure that you have installed the right level of operating system on the client partitions, and that you have mapped the physical adapters to the client partitions to provide access to the external shared storage.
To create a shareable diskgroup, you need to ensure that the different partitions use the same set of disks. A good way to confirm that the disks seen from multiple partitions are the same is to compare the disks' serial numbers, which are unique.
Run the following commands on the VIO server (in non-root mode), unless otherwise noted.
Get the serial number of the disk of interest:
$ lsdev -dev hdisk20 -vpd
  hdisk20          U787A.001.DNZ06TT-P1-C6-T1-W500507630308037C-L4010401A00000000
                   IBM FC 2107

        Manufacturer................IBM
        Machine Type and Model......2107900
        Serial Number...............7548111101A
        EC Level.....................131
        Device Specific.(Z0)........10
        Device Specific.(Z1)........0100
        ...
Make sure that the other VIO server returns the same serial number. This ensures that both VIO servers are viewing the same physical disk.
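For example, on the second VIO server the same physical disk may appear under a different hdisk number; hdisk9 below is only an illustrative name. Compare the Serial Number field in the output with the value shown above:

$ lsdev -dev hdisk9 -vpd
        ...
        Serial Number...............7548111101A
        ...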
List the virtual SCSI adapters.
$ lsdev -virtual | grep vhost
vhost0          Available   Virtual SCSI Server Adapter
vhost1          Available   Virtual SCSI Server Adapter
Note:
Usually vhost0 is the adapter for the internal disks. In the example above, vhost1 is the virtual SCSI adapter that maps to the external shared storage.
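If you want to confirm which client partition a given virtual SCSI adapter serves before mapping a disk to it, you can list its current mappings with the lsmap command. The output below is only a sketch; the location code, client partition ID, and the (initially empty) list of virtual target devices depend on your configuration:

$ lsmap -vadapter vhost1
SVSA            Physloc                            Client Partition ID
--------------- ---------------------------------- -------------------
vhost1          U9117.MMA.100F6A0-V1-C20           0x00000003

VTD             NO VIRTUAL TARGET DEVICE FOUND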
Prior to mapping hdisk20 (in the example) to a SCSI adapter, change the reservation policy on the disk.
$ chdev -dev hdisk20 -attr reserve_policy=no_reserve
hdisk20 changed
For hdisk20 (in the example) to be available to client partitions, map it to a suitable virtual SCSI adapter.
If you now print the reserve policy on hdisk20, the output resembles:
$ lsdev -dev hdisk20 -attr reserve_policy
value

no_reserve
Next create a virtual device to map hdisk20 to vhost1.
$ mkvdev -vdev hdisk20 -vadapter vhost1 -dev mp1_hdisk5
mp1_hdisk5 Available
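To verify that the mapping was created, you can list the mappings on vhost1 again on the VIO server; the new virtual target device should appear with hdisk20 as its backing device. The LUN value below is only illustrative:

$ lsmap -vadapter vhost1
...
VTD                   mp1_hdisk5
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk20
...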
Finally, on the client partition, run the cfgmgr command to make this disk visible through the client's virtual SCSI adapter.
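A minimal sketch of this step, run as root on each client partition; the hdisk number that the new virtual disk receives on the client (hdisk1 below) is only an example and depends on the devices already present:

# cfgmgr
# lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive
hdisk1 Available  Virtual SCSI Disk Drive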
You can now use this disk (physical hdisk20 on the VIO server, exported as the virtual target device mp1_hdisk5) from the client partitions to create a diskgroup, a shared volume, and eventually a shared file system.
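The following is a minimal sketch of that last step, run as root on the client partitions, assuming that the disk appears there as hdisk1, that CVM and CFS are already configured and running, and that the diskgroup and volume are created on the CVM master node. The diskgroup name (shareddg), volume name (sharedvol), size, and mount point are all illustrative:

# /etc/vx/bin/vxdisksetup -i hdisk1
# vxdg -s init shareddg shareddg01=hdisk1
# vxassist -g shareddg make sharedvol 5g
# mkfs -V vxfs /dev/vx/rdsk/shareddg/sharedvol
# mount -V vxfs -o cluster /dev/vx/dsk/shareddg/sharedvol /mnt/shared

Run the mount command on each node that needs access to the shared file system, or add the mount to the cluster configuration (for example, with cfsmntadm) so that VCS manages it.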
Perform regular VCS operations on the clients, such as managing service groups, resources, and resource attributes.
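For example, the usual VCS command-line checks and operations apply on the client partitions; the service group and system names below are placeholders for your own configuration:

# hastatus -sum
# hagrp -state
# hagrp -online <service_group> -sys <client_partition_node>
# hares -state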