InfoScale™ Cluster Server 9.0 Bundled Agents Reference Guide - AIX
- Introducing bundled agents
- Storage agents
- DiskGroup agent
- Notes for DiskGroup agent
- Sample configurations for DiskGroup agent
- DiskGroupSnap agent
- Notes for DiskGroupSnap agent
- Sample configurations for DiskGroupSnap agent
- Volume agent
- VolumeSet agent
- Sample configurations for VolumeSet agent
- LVMVG agent
- Notes for LVMVG agent
- Mount agent
- Sample configurations for Mount agent
- SFCache agent
- Network agents
- About the network agents
- IP agent
- NIC agent
- IPMultiNIC agent
- MultiNICA agent
- About the IPMultiNICB and MultiNICB agents
- IPMultiNICB agent
- Sample configurations for IPMultiNICB agent
- MultiNICB agent
- Sample configurations for MultiNICB agent
- DNS agent
- Agent notes for DNS agent
- About using the VCS DNS agent on UNIX with a secure Windows DNS server
- Sample configurations for DNS agent
- File share agents
- NFS agent
- NFSRestart agent
- Share agent
- About the Samba agents
- Notes for configuring the Samba agents
- SambaServer agent
- SambaShare agent
- NetBios agent
- Service and application agents
- Apache HTTP server agent
- Application agent
- Notes for Application agent
- Sample configurations for Application agent
- CoordPoint agent
- LPAR agent
- Notes for LPAR agent
- MemCPUAllocator agent
- MemCPUAllocator agent notes
- Process agent
- Usage notes for Process agent
- Sample configurations for Process agent
- ProcessOnOnly agent
- RestServer agent
- WPAR agent
- Infrastructure and support agents
- Testing agents
- Replication agents
Notes for MultiNICA agent
If all interfaces configured in the Device attribute are down, the MultiNICA agent faults the resource after a two- to three-minute interval. This delay occurs because the agent tests each failed interface several times before it marks the resource OFFLINE. The engine log records a detailed description of the events during a failover.
You can have only one MultiNICA resource for a given set of devices. Devices on different nodes can use different protocols, either IPv4 or IPv6. However, all devices on a single node must use the same protocol. For example, you can have a MultiNICA resource configured as follows:
Option 1:
```
MultiNICA mnic (
    Device@sysa = { en0 = "10.128.8.42",
                    en1 = "10.128.8.42" }
    Device@sysb = { en0 = "10.128.8.43",
                    en1 = "10.128.8.43" }
    )
```
Option 2:
```
MultiNICA mnic (
    Device@sysa = { en1 = "10.209.56.211",
                    en2 = "10.209.56.211" }
    Device@sysb = { en1 = "2200::34",
                    en2 = "2200::34" }
    )
```
The MultiNICA agent supports only one active interface on one IP subnet; the agent does not work with multiple active interfaces on the same subnet.
For example, suppose you have two active interfaces on the same subnet, en0 (10.128.2.5) and en1 (10.128.2.8), and you configure a third interface, en2, as the backup for en1. If en1 fails, the agent does not fail over to en2, because some of its ping tests are redirected through en0 on the same subnet. The redirected replies cause the MultiNICA monitor to return an online status.
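One way to stay within this restriction is to place each active interface on its own IP subnet and pair it with its own backup interface in a separate MultiNICA resource. The following is a minimal sketch, not a tested configuration; the resource names (mnic_a, mnic_b), interfaces en0 through en3, and addresses are hypothetical and must be adapted to your environment:

```
MultiNICA mnic_a (
    Device@sysa = { en0 = "10.128.2.5",
                    en2 = "10.128.2.5" }
    )

MultiNICA mnic_b (
    Device@sysa = { en1 = "10.128.3.8",
                    en3 = "10.128.3.8" }
    )
```

Here en0 (with backup en2) serves the 10.128.2.0 subnet and en1 (with backup en3) serves the 10.128.3.0 subnet, so no two active interfaces share a subnet and the agent's ping tests cannot be answered through the wrong interface.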