NetApp Knowledge Base

Asymmetric Logical Unit Access (ALUA) support on NetApp Storage - Frequently Asked Questions

Applies to

  • ONTAP 9.x
  • FlexPod 
  • Asymmetric Logical Unit Access (ALUA)

Answer

What is Asymmetric Logical Unit Access?

  • ALUA, also known as Target Port Group Support (TPGS), is a set of SCSI concepts and commands that define path prioritization for SCSI devices. ALUA is a formalized way to describe SCSI port status and access characteristics.
  • In short, it describes paths as fast, slow, or down, and the transitions between these states, in a standards-compliant manner. The standard defines how multipath I/O should be managed between hosts and storage devices, and it reduces vendor-specific coding and complexity.
  • Target ports are given an identifier, which is unique to the target (which in a single_image configuration would be the cluster) and are then organized into target port groups. Target port groups are collections of target port identifiers that share the same access characteristics to a LUN.
  • The host uses a MAINTENANCE_IN command to get the list of all the target port groups for a LUN and uses an INQUIRY request to get the target port ID for a specific path. The host then uses this information to organize the paths.
  • By using new target port IDs available with INQUIRY commands and the new REPORT_TARGET_PORT_GROUPS command, it is possible to get the access characteristics for any SCSI target device.
  • The storage system implements four states for a LUN, which map to existing Data ONTAP terms as follows:
    • Active/Optimized: local/fast/primary path
    • Active/Non-Optimized: partner/proxy/slow/secondary path
    • Unavailable: the cluster interconnect is down and the path is not functional
    • Transitioning: the path is transitioning to another state
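The host learns these states by parsing the REPORT_TARGET_PORT_GROUPS response described above. The following is a minimal Python sketch of such a parser; the field offsets follow the SPC-3 descriptor layout, and the sample response buffer is fabricated purely for illustration:

```python
# Illustrative parser for a REPORT TARGET PORT GROUPS response buffer
# (SPC-3 layout). The sample buffer below is fabricated for demonstration.

ALUA_STATES = {
    0x0: "active/optimized",      # Data ONTAP: local/fast/primary path
    0x1: "active/non-optimized",  # Data ONTAP: partner/proxy/slow path
    0x2: "standby",
    0x3: "unavailable",           # e.g. cluster interconnect down
    0xE: "offline",
    0xF: "transitioning",
}

def parse_rtpg(buf: bytes):
    """Return a list of (group_id, state, preferred, [relative port ids])."""
    data_len = int.from_bytes(buf[0:4], "big")  # bytes following the header
    groups, off = [], 4
    while off < 4 + data_len:
        pref = bool(buf[off] & 0x80)                 # PREF bit
        state = ALUA_STATES.get(buf[off] & 0x0F, "reserved")
        group_id = int.from_bytes(buf[off + 2:off + 4], "big")
        port_count = buf[off + 7]
        ports = []
        for i in range(port_count):
            p = off + 8 + 4 * i                      # 4 bytes per port entry
            ports.append(int.from_bytes(buf[p + 2:p + 4], "big"))
        groups.append((group_id, state, pref, ports))
        off += 8 + 4 * port_count
    return groups

# Fabricated two-group response: group 1 is active/optimized and preferred
# with relative ports 1 and 2; group 2 is active/non-optimized with port 3.
resp = bytes([0, 0, 0, 28,
              0x80, 0x03, 0, 1, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 2,
              0x01, 0x03, 0, 2, 0, 0, 0, 1, 0, 0, 0, 3])
for gid, state, pref, ports in parse_rtpg(resp):
    print(gid, state, pref, ports)
```

A real host stack issues the MAINTENANCE_IN command and receives such a buffer from the target; the parsed groups are what it uses to organize paths.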

Why ALUA?

  • Traditionally, NetApp wrote a plug-in for each SCSI multipathing stack it interacted with; these plug-ins used NetApp vendor-unique SCSI commands to identify a path as Primary or Secondary.
  • By supporting ALUA in conjunction with SCSI multipathing stacks that also support ALUA, out-of-the-box support is obtained without writing any new code on the host side. For example, with ALUA support, it is no longer required to define the vid/pid information in /kernel/drv/scsi_vhci.conf in Solaris.
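For illustration, a pre-ALUA Solaris MPxIO configuration required an entry along these lines in /kernel/drv/scsi_vhci.conf (the exact vid/pid string and option value shown here are an assumption; consult the Solaris Host Utilities documentation for the release in use):

```
device-type-scsi-options-list =
    "NETAPP  LUN", "symmetric-option";
symmetric-option = 0x1000000;
```

With ALUA-capable hosts and targets, no such vendor-specific entry is needed.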
  • Data ONTAP implements the implicit ALUA style, not the explicit format. Implicit ALUA makes the target device responsible for all the changes to the target port group states. With implicit access, the device’s controller manages the states of path connections.
  • In this case, the standard recognizes that among multiple paths to a LUN there might be performance differences between the paths. It therefore includes path-specific messages that change a path's characteristics, such as during failover/giveback.
  • With the implicit ALUA style, the host multipathing software can monitor the path states but cannot change them, either automatically or manually. Among the active paths, a path may be designated as preferred (optimized in T10) or non-preferred (non-optimized). If there are active preferred paths, only those paths receive commands, and commands are load-balanced evenly across them.
  • If there are no active preferred paths, then the active non-preferred paths are used in a round-robin fashion. If there are no active non-preferred paths, then the LUN cannot be accessed until the controller activates its standby paths.
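The selection policy just described (round-robin over active preferred paths, falling back to active non-preferred paths only when no preferred path remains) can be sketched in Python. This is a toy model; the path names and data layout are invented for illustration:

```python
import itertools

# Toy model of the host-side ALUA path selection policy: commands go
# round-robin across active preferred (optimized) paths; only if none
# exist do the active non-preferred paths take over.

def make_selector(paths):
    """paths: list of (name, active, preferred). Returns a next-path function."""
    def eligible():
        preferred = [n for n, active, pref in paths if active and pref]
        if preferred:
            return preferred
        return [n for n, active, pref in paths if active and not pref]

    counter = itertools.count()

    def next_path():
        pool = eligible()
        if not pool:
            raise RuntimeError("no active paths: LUN not accessible")
        return pool[next(counter) % len(pool)]

    return next_path

paths = [("fc0", True, True), ("fc1", True, True), ("fc2", True, False)]
pick = make_selector(paths)
print([pick() for _ in range(4)])  # round-robin across the two preferred paths
```

If both preferred paths went inactive (for example during a takeover), the same selector would start returning the non-preferred path instead.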

It is a NetApp Best Practice to use ALUA on hosts that support ALUA.

 

  • Note:
    • Verify that a host supports ALUA before implementing it; enabling ALUA on a host that does not support it can result in system interruption or possible data loss during a cluster failover. All NetApp LUNs presented to an individual host must have ALUA enabled, since the host's MPIO software expects the ALUA setting to be consistent for all LUNs from the same vendor.
    • Traditionally, an administrator would need to manually identify and select the optimal paths for I/O. Utilities such as dotpaths for AIX are used to set path priorities in environments where ALUA is not supported. Using ALUA, the administrator of the host computer does not need to manually intervene in path management any longer, as it will be handled automatically. Running MPIO on the host is still required, but no additional host-specific plug-ins are required.
    • ALUA maximizes I/O by using the optimal path consistently and automatically.
  • Limitations:
    • ALUA can only be enabled on FCP initiator groups on Data ONTAP 7-Mode systems.
    • ALUA is not available on non-clustered storage systems for FCP initiator groups.
    • ALUA is not supported for iSCSI initiator groups on Data ONTAP 7-Mode systems.
  • How to enable ALUA on non-ALUA LUNs
  1. Validate that the host OS, the multipathing software, and the storage controller software all support ALUA. If yes, proceed. For example, ALUA is not supported for VMware ESX prior to vSphere 4.0. Check with the host OS vendor for supportability.
  2. Check the host system for any script that might be managing the paths automatically and disable it.
  3. If using SnapDrive, verify that there are no settings disabling the ALUA set in the configuration file. 
  • ALUA is enabled or disabled on the igroup mapped to a NetApp LUN on the NetApp controller.
  • The default ALUA setting in Data ONTAP varies by version and by igroup type. Check the output of the igroup show -v <igroup name> command to confirm the setting.
  • Enabling ALUA on the igroup will activate ALUA.
  • Certain hosts, such as Windows, Solaris, and AIX, require the system to rediscover its disks in order for ALUA to take effect. It is recommended to reboot the system once the change is made.
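On a Data ONTAP 7-Mode controller, checking and enabling ALUA on an igroup looks like the following (the igroup name is an example; in clustered Data ONTAP and ONTAP 9, ALUA is enabled by default):

```
igroup show -v host1_fcp_igroup       # look for the ALUA: Yes/No line
igroup set host1_fcp_igroup alua yes  # enable ALUA on the igroup
igroup show -v host1_fcp_igroup       # confirm ALUA: Yes, then rediscover or reboot the host
```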
  • The following table lists the minimum version of the operating system and MPIO software which has been tested by NetApp for use with Data ONTAP:
| Operating System | Version | MPIO software | Minimum Data ONTAP |
| --- | --- | --- | --- |
| Windows | 2003 | Data ONTAP DSM 3.4 | 7.3 |
| Windows | 2008, 2008 R2 | Microsoft DSM; Data ONTAP DSM 3.4 | 7.3 |
| Linux | RHEL 5.1+, SLES 10 SP2, SLES 11+ | Device Mapper (DM-MP); Veritas Storage Foundation 5.1 and above | 7.2.4.1 |
| VMware ESX | 4.0+ | Native MPIO | 7.3.1 |
| AIX | 5.3 TL9 SP4 w/ APAR IZ53157; 5.3 TL10 SP1 w/ APAR IZ53158; 6.1 TL2 SP4 w/ APAR IZ53159; 6.1 TL3 SP1 w/ APAR IZ53160 | IBM AIX MPIO; Veritas Storage Foundation 5.1 and above | 7.3 |
| HP-UX | 11iv3 Update 1 (09/2007) | Native Multipathing; Veritas Storage Foundation 5.0.1 | 7.2.5.1 |
| Solaris | 10 Update 3+ | Solaris MPxIO; Veritas Storage Foundation 5.1 and above | 7.2.1 |
 

Refer to the SAN host configuration documentation for additional settings.

Additional Information

For complete configuration listings of tested configurations, including later releases of the operating system and MPIO software, see the following links:

 

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.