Veritas InfoScale™ 7.4.1 Virtualization Guide - AIX

Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: AIX
  1. Section I. Overview
    1. Storage Foundation and High Availability Solutions in AIX PowerVM virtual environments
      1. Overview of the Veritas InfoScale Products Virtualization Guide
      2. About the AIX PowerVM virtualization technology
      3. About Veritas InfoScale products support for the AIX PowerVM environment
        1. About IBM LPARs with N_Port ID Virtualization (NPIV)
      4. About Veritas Extension for Oracle Disk Manager
      5. Virtualization use cases addressed by Veritas InfoScale products
  2. Section II. Implementation
    1. Setting up Storage Foundation and High Availability Solutions in AIX PowerVM virtual environments
      1. Supported configurations for Virtual I/O servers (VIOS) on AIX
        1. Dynamic Multi-Pathing in the logical partition (LPAR)
        2. Dynamic Multi-Pathing in the Virtual I/O server (VIOS)
        3. Veritas InfoScale products in the logical partition (LPAR)
        4. Storage Foundation Cluster File System High Availability in the logical partition (LPAR)
        5. Dynamic Multi-Pathing in the Virtual I/O server (VIOS) and logical partition (LPAR)
        6. Dynamic Multi-Pathing in the Virtual I/O server (VIOS) and Veritas InfoScale products in the logical partition (LPAR)
        7. Cluster Server in the logical partition (LPAR)
        8. Cluster Server in the management LPAR
        9. Cluster Server in a cluster across logical partitions (LPARs) and physical machines
      2. Support for N_Port ID Virtualization (NPIV) in IBM Virtual I/O Server (VIOS) environments
      3. About setting up logical partitions (LPARs) with Veritas InfoScale products
      4. Configuring IBM PowerVM LPAR guest for disaster recovery
      5. Installing and configuring Storage Foundation and High Availability (SFHA) Solutions in the logical partition (LPAR)
        1. Impact of over-provisioning on Storage Foundation and High Availability
        2. About SmartIO in the AIX virtualized environment
          1. Performing LPM in the SmartIO environment
      6. Installing and configuring storage solutions in the Virtual I/O server (VIOS)
      7. Installing and configuring Cluster Server for logical partition and application availability
        1. How Cluster Server (VCS) manages logical partitions (LPARs)
      8. Enabling Veritas Extension for ODM file access from WPAR with VxFS
  3. Section III. Use cases for AIX PowerVM virtual environments
    1. Application to spindle visibility
      1. About application to spindle visibility using
      2. About discovering LPAR and VIO in Veritas InfoScale Operations Manager
      3. About LPAR storage correlation supported in Veritas InfoScale Operations Manager
      4. Prerequisites for LPAR storage correlation support in Veritas InfoScale Operations Manager
    2. Simplified storage management in VIOS
      1. About simplified management
      2. About Dynamic Multi-Pathing in a Virtual I/O server
      3. About the Volume Manager (VxVM) component in a Virtual I/O server
      4. Configuring Dynamic Multi-Pathing (DMP) on Virtual I/O server
        1. Migrating from other multi-pathing solutions to DMP on Virtual I/O server
        2. Migrating from MPIO to DMP on a Virtual I/O server for a dual-VIOS configuration
        3. Migrating from PowerPath to DMP on a Virtual I/O server for a dual-VIOS configuration
      5. Configuring Dynamic Multi-Pathing (DMP) pseudo devices as virtual SCSI devices
        1. Exporting Dynamic Multi-Pathing (DMP) devices as virtual SCSI disks
        2. Exporting a Logical Volume as a virtual SCSI disk
        3. Exporting a file as a virtual SCSI disk
      6. Extended attributes in VIO client for a virtual SCSI disk
        1. Configuration prerequisites for providing extended attributes on VIO client for virtual SCSI disk
        2. Displaying extended attributes of virtual SCSI disks
      7. Virtual IO client adapter settings for Dynamic Multi-Pathing (DMP) in dual-VIOS configurations
      8. Using DMP to provide multi-pathing for the root volume group (rootvg)
      9. Boot device management on NPIV presented devices
    3. Virtual machine (logical partition) availability
      1. About virtual machine (logical partition) availability
      2. VCS in the management LPAR
      3. Setting up management LPAR
        1. Configuring password-less SSH communication between VCS nodes and HMC
      4. Setting up managed LPARs
        1. Creating an LPAR profile
        2. Bundled agents for managing the LPAR
        3. Configuring VCS service groups to manage the LPAR
      5. Managing logical partition (LPAR) failure scenarios
    4. Simplified management and high availability for IBM Workload Partitions
      1. About IBM Workload Partitions
      2. About using IBM Workload Partitions (WPARs) with Veritas InfoScale products
      3. Implementing Storage Foundation support for WPARs
        1. Using a VxFS file system within a single system WPAR
        2. WPAR with root (/) partition as VxFS
        3. Using VxFS as a shared file system
      4. How Cluster Server (VCS) works with Workload Partitions (WPARs)
        1. About the ContainerInfo attribute
        2. About the ContainerOpts attribute
        3. About the WPAR agent
      5. Configuring VCS in WPARs
        1. Prerequisites for configuring VCS in WPARs
          1. About using custom agents in WPARs
        2. Deciding on the WPAR root location
        3. Creating a WPAR root on local disk
        4. Creating WPAR root on shared storage using NFS
        5. Installing the application
          1. Configuring the service group for the application
            1. Modifying the service group configuration
            2. About configuring failovers
        6. Verifying the WPAR configuration
        7. Maintenance tasks
        8. Troubleshooting information
      6. Configuring AIX WPARs for disaster recovery using VCS
    5. High availability and live migration
      1. About Live Partition Mobility (LPM)
      2. About the partition migration process and simplified management
      3. About Storage Foundation and High Availability (SFHA) Solutions support for Live Partition Mobility
      4. Providing high availability with live migration in a Cluster Server environment
      5. Providing logical partition (LPAR) failover with live migration
      6. Limitations and unsupported LPAR features
        1. Live partition mobility of management LPARs
        2. Live partition mobility of managed LPARs
    6. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
    7. Server consolidation
      1. About server consolidation
      2. About IBM Virtual Ethernet
        1. Shared Ethernet Adapter (SEA)
      3. About IBM LPARs with virtual SCSI devices
      4. Using Storage Foundation in the logical partition (LPAR) with virtual SCSI devices
        1. Using Storage Foundation with virtual SCSI devices
        2. Setting up DMP for vSCSI devices in the logical partition (LPAR)
        3. About disabling DMP for vSCSI devices in the logical partition (LPAR)
        4. Preparing to install or upgrade Storage Foundation with DMP disabled for vSCSI devices in the logical partition (LPAR)
        5. Disabling DMP multi-pathing for vSCSI devices in the logical partition (LPAR) after installation or upgrade
        6. Adding and removing DMP support for vSCSI devices for an array
        7. How DMP handles I/O for vSCSI devices
          1. Setting the vSCSI I/O policy
      5. Using VCS with virtual SCSI devices
    8. Physical to virtual migration (P2V)
      1. About migration from Physical to VIO environment
      2. Migrating from Physical to VIO environment
  4. Section IV. Reference
    1. Appendix A. How to isolate system problems
      1. About VxFS trace events
      2. Tracing file read-write event
      3. Tracing Inode cache event
      4. Tracing Low Memory event
    2. Appendix B. Provisioning data LUNs
      1. Provisioning data LUNs in a mixed VxVM and LVM environment
    3. Appendix C. Where to find more information
      1. Veritas InfoScale documentation
      2. Additional documentation for AIX virtualization
      3. Service and support
      4. About Veritas Services and Operations Readiness Tools (SORT)

Providing logical partition (LPAR) failover with live migration

This section describes how to create a profile file and use the ProfileFile attribute to automate LPAR profile creation on failback after migration.

For more information on managing the LPAR profile on the source system after migration:

See Live partition mobility of managed LPARs.

Live migration of a managed LPAR deletes the LPAR profile and the adapter mappings in the VIO servers from the source physical server. Without the LPAR configuration and the VIOS adapter mappings on that physical server, the LPAR cannot be brought online or failed over to the Cluster Server (VCS) node from which it was migrated.

If an LPAR is to be made highly available, create the LPAR profile file, using the steps below, on every VCS node on which the LPAR is to be made highly available. Because the VIO server names for an LPAR differ on each physical server, and the adapter IDs might also differ, the profile file must be created separately for each VCS node.

When bringing an LPAR online on another node, VCS performs the following actions:

  • Checks if the LPAR configuration exists on that node.

  • If the LPAR configuration does not exist and the ProfileFile attribute is specified, VCS tries to create the LPAR configuration and VIOS mappings from the file that the ProfileFile attribute specifies.

  • If creation of the LPAR configuration and VIOS mappings succeeds, VCS brings the LPAR online.

  • If the ProfileFile attribute is not configured and if the LPAR configuration does not exist on the physical server, the LPAR resource cannot be brought online.

The ProfileFile attribute specifies the path of the LPAR profile file. If the ProfileFile attribute for a VCS node is configured and the RemoveProfileOnOffline attribute is set to 1, VCS performs the following actions on offline or clean:

  • Deletes the LPAR configuration from the physical server.

  • Deletes the adapter mappings from the VIO servers.

For more information on attributes RemoveProfileOnOffline and ProfileFile, see the Cluster Server Bundled Agent Reference Guide.
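As an illustration only, the two attributes might be configured per node in the VCS main.cf configuration roughly as follows. This is a hedged sketch, not product documentation: the resource name (lpar05_res), system names (lpar101, lpar201), and file paths are examples, and the authoritative attribute list for the LPAR agent is in the Cluster Server Bundled Agent Reference Guide.

```
LPAR lpar05_res (
        LPARName = lpar05
        RemoveProfileOnOffline = 1
        ProfileFile @lpar101 = "/configfile/lpar05_on_physical_server_01.cfg"
        ProfileFile @lpar201 = "/configfile/lpar05_on_physical_server_02.cfg"
        )
```

The `@system` form localizes ProfileFile so that each VCS node points at the profile file generated for its own physical server.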

To create the profile file for an LPAR

  1. Run the following command on the HMC:
    $ lssyscfg -r lpar -m physical-server-name --filter \
    lpar_names=managed-lpar-name
  2. From the output of the above command, select the following fields in key-value pairs:
    name,lpar_id,lpar_env,work_group_id,shared_proc_pool_util_auth,\
    allow_perf_collection,power_ctrl_lpar_ids,boot_mode,auto_start,\
    redundant_err_path_reporting,time_ref,lpar_avail_priority,\
    suspend_capable,remote_restart_capable,affinity_group_id

    Delete the remaining attributes and their values.

  3. Obtain the remaining attributes from any profile associated with the managed LPAR. The name of the profile that you want to create is managed-lpar-profile-name.

    Run the following command on the HMC to get the remaining attributes.

    $ lssyscfg -r prof -m physical-server-name --filter \
    lpar_names=managed-lpar-name,profile_names=managed-lpar-profile-name
    

    From the output of the above command, select the following fields in key-value pairs:

    name,all_resources,min_mem,desired_mem,max_mem,mem_mode,\
    mem_expansion,hpt_ratio,proc_mode,min_procs,desired_procs,max_procs,\
    sharing_mode,io_slots,lpar_io_pool_ids,max_virtual_slots,\
    virtual_serial_adapters,virtual_scsi_adapters,virtual_eth_adapters,\
    vtpm_adapters,virtual_fc_adapters,hca_adapters,conn_monitoring,\
    auto_start,power_ctrl_lpar_ids,work_group_id,bsr_arrays,\
    lhea_logical_ports,lhea_capabilities,lpar_proc_compat_mode
  4. Rename the name attribute in the above output to profile_name.
  5. Concatenate the outputs from steps 1 and 3 with a comma and write the result as a single line to a text file. This is the configuration file that VCS requires to create or delete the LPAR configuration. Specify the absolute path of this file in the ProfileFile attribute.
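The field selection and concatenation in steps 2 through 5 can be sketched with a small script. This is a minimal illustration, not part of the product: it assumes the two lssyscfg outputs have already been captured as single-line strings, and it uses CSV-style parsing because lssyscfg quotes values that contain commas.

```python
import csv
import io

def select_fields(line, wanted):
    """Keep only the wanted key=value fields from one line of lssyscfg
    output. Values containing commas are quoted, so split the line with
    CSV rules and restore the quoting on output where needed."""
    fields = next(csv.reader(io.StringIO(line)))
    kept = []
    for field in fields:
        key = field.split("=", 1)[0]
        if key in wanted:
            if "," in field:  # restore lssyscfg-style quoting
                field = '"%s"' % field.replace('"', '""')
            kept.append(field)
    return ",".join(kept)

def rename_first_key(line, old, new):
    """Rename the leading key of a key=value line (name -> profile_name)."""
    key, sep, rest = line.partition("=")
    return new + sep + rest if key == old else line

# Hypothetical captured outputs of the two lssyscfg commands (shortened):
lpar_line = "name=lpar05,lpar_id=15,lpar_env=aixlinux,state=Running,boot_mode=norm"
prof_line = ('name=lpar05,min_mem=512,"virtual_scsi_adapters='
             '304/client/2/vio_server1/4/1,404/client/3/vio_server2/6/1"')

part1 = select_fields(lpar_line, {"name", "lpar_id", "lpar_env", "boot_mode"})
part2 = rename_first_key(
    select_fields(prof_line, {"name", "min_mem", "virtual_scsi_adapters"}),
    "name", "profile_name")
profile = part1 + "," + part2  # the single line to write to the ProfileFile file
```

The resulting single line would then be written to a file whose absolute path is set in the ProfileFile attribute, as step 5 describes.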

Note:

If an error occurs while creating a partition from the LPAR profile file, make sure that all the required attributes are populated in the profile file. For more information on the error, see the LPAR_A.log file.

The following example procedure illustrates the profile file generation for lpar05, which runs on physical_server_01. The LPAR resource that monitors the lpar05 LPAR is lpar05_res. The VCS node that manages lpar05_res on physical_server_01 is lpar101, and the node on physical_server_02 is lpar201.

To generate a profile file for lpar05 on physical_server_01

  1. To get the LPAR details from the HMC, enter:
    $ lssyscfg -r lpar -m physical_server_01 --filter \
    lpar_names=lpar05

    The output of this command is the following:

    name=lpar05,lpar_id=15,lpar_env=aixlinux,state=Running,\
    resource_config=1,os_version=AIX 7.1 7100-00-00-0000,\
    logical_serial_num=06C3A0PF,default_profile=lpar05,\
    curr_profile=lpar05,work_group_id=none,\
    shared_proc_pool_util_auth=0,allow_perf_collection=0,\
    power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,\
    auto_start=0,redundant_err_path_reporting=0,\
    rmc_state=inactive,rmc_ipaddr=10.207.111.93,time_ref=0,\
    lpar_avail_priority=127,desired_lpar_proc_compat_mode=default,\
    curr_lpar_proc_compat_mode=POWER7,suspend_capable=0,\
    remote_restart_capable=0,affinity_group_id=none
  2. Select the output fields as explained in the procedure above.

    See “To create the profile file for an LPAR”.

    The key value pairs are the following:

    name=lpar05,lpar_id=15,lpar_env=aixlinux,work_group_id=none,\
    shared_proc_pool_util_auth=0,allow_perf_collection=0,\
    power_ctrl_lpar_ids=none,boot_mode=norm,auto_start=0,\
    redundant_err_path_reporting=0,time_ref=0,lpar_avail_priority=127,\
    suspend_capable=0,remote_restart_capable=0
  3. To get the profile details from the HMC, enter:
    $ lssyscfg -r prof -m physical_server_01 --filter \
    lpar_names=lpar05,profile_names=lpar05

    The output of this command is the following:

    name=lpar05,lpar_name=lpar05,lpar_id=15,lpar_env=aixlinux,\
    all_resources=0,min_mem=512,desired_mem=2048,max_mem=4096,\
    min_num_huge_pages=null,desired_num_huge_pages=null,\
    max_num_huge_pages=null,mem_mode=ded,mem_expansion=0.0,\
    hpt_ratio=1:64,proc_mode=ded,min_procs=1,desired_procs=1,\
    max_procs=1,sharing_mode=share_idle_procs,\
    affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,\
    max_virtual_slots=1000,\
    "virtual_serial_adapters=0/server/1/any//any/1,\
    1/server/1/any//any/1",\
    "virtual_scsi_adapters=304/client/2/vio_server1/4/1,\
    404/client/3/vio_server2/6/1",\
    "virtual_eth_adapters=10/0/1//0/0/ETHERNET0//all/none,\
    11/0/97//0/0/ETHERNET0//all/none,\
    12/0/98//0/0/ETHERNET0//all/none",vtpm_adapters=none,\
    "virtual_fc_adapters=""504/client/2/vio_server1/8/c050760431670010,\
    c050760431670011/1"",""604/client/3/vio_server2/5/c050760431670012,\
    c050760431670013/1""",hca_adapters=none,boot_mode=norm,\
    conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,\
    work_group_id=none,redundant_err_path_reporting=null,bsr_arrays=0,\
    lhea_logical_ports=none,lhea_capabilities=none,\
    lpar_proc_compat_mode=default,electronic_err_reporting=null
  4. After selecting the fields and renaming name to profile_name, the output is as follows:
    profile_name=lpar05,all_resources=0,min_mem=512,desired_mem=2048,\
    max_mem=4096,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,\
    proc_mode=ded,min_procs=1,desired_procs=1,max_procs=1,\
    sharing_mode=share_idle_procs,affinity_group_id=none,io_slots=none,\
    lpar_io_pool_ids=none,max_virtual_slots=1000,\
    "virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",\
    "virtual_scsi_adapters=304/client/2/vio_server1/4/1,\
    404/client/3/vio_server2/6/1",\
    "virtual_eth_adapters=10/0/1//0/0/ETHERNET0//all/none,\
    11/0/97//0/0/ETHERNET0//all/none,\
    12/0/98//0/0/ETHERNET0//all/none",vtpm_adapters=none,\
    "virtual_fc_adapters=""504/client/2/vio_server1/8/c050760431670010,\
    c050760431670011/1"",""604/client/3/vio_server2/5/c050760431670012,\
    c050760431670013/1""",hca_adapters=none,\
    boot_mode=norm,conn_monitoring=1,auto_start=0,\
    power_ctrl_lpar_ids=none,work_group_id=none,bsr_arrays=0,\
    lhea_logical_ports=none,lhea_capabilities=none,\
    lpar_proc_compat_mode=default
  5. Concatenate these two outputs with a comma, which results in the following:
    name=lpar05,lpar_id=15,lpar_env=aixlinux,work_group_id=none,\
    shared_proc_pool_util_auth=0,allow_perf_collection=0,\
    power_ctrl_lpar_ids=none,boot_mode=norm,auto_start=0,\
    redundant_err_path_reporting=0,time_ref=0,lpar_avail_priority=127,\
    suspend_capable=0,remote_restart_capable=0,profile_name=lpar05,\
    all_resources=0,min_mem=512,desired_mem=2048,max_mem=4096,\
    mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,proc_mode=ded,\
    min_procs=1,desired_procs=1,max_procs=1,sharing_mode=share_idle_procs,\
    affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,\
    max_virtual_slots=1000,"virtual_serial_adapters=0/server/1/any//any/1,\
    1/server/1/any//any/1",\
    "virtual_scsi_adapters=304/client/2/vio_server1/4/1,\
    404/client/3/vio_server2/6/1",\
    "virtual_eth_adapters=10/0/1//0/0/ETHERNET0//all/none,\
    11/0/97//0/0/ETHERNET0//all/none,12/0/98//0/0/ETHERNET0//all/none",\
    vtpm_adapters=none,\
    "virtual_fc_adapters=""504/client/2/vio_server1/8/c050760431670010,\
    c050760431670011/1"",""604/client/3/vio_server2/5/c050760431670012,\
    c050760431670013/1""",hca_adapters=none,boot_mode=norm,\
    conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,\
    work_group_id=none,bsr_arrays=0,lhea_logical_ports=none,\
    lhea_capabilities=none,lpar_proc_compat_mode=default
  6. Write this output to a text file. Assuming that the absolute path of the profile file thus generated on lpar101 is /configfile/lpar05_on_physical_server_01.cfg, run the following commands to configure the profile file in VCS.
    $ hares -local lpar05_res ProfileFile
    $ hares -modify lpar05_res ProfileFile \
    /configfile/lpar05_on_physical_server_01.cfg -sys lpar101
  7. Repeat steps 1 through 6 to create the profile file for lpar05 on physical_server_02.