Multipathing Overview for XenServer 6.2, 6.5, and 7.0



Overview

This article provides a generic overview of multipathing for Citrix XenServer 6.2, 6.5, and 7.0. For information about multipathing in XenServer 7.1 and later, see the Citrix XenServer Administrator Guide.

Basic Multipath Configuration

Dynamic multipathing support is available for Fibre Channel and iSCSI storage repositories (SRs). By default, multipathing uses round-robin load balancing, so all available paths carry active traffic during normal operation. You can enable or disable storage multipathing in XenCenter using the Multipathing tab on the server’s Properties dialog.

Before you enable multipathing:

  • Verify that multiple targets are available on your storage server.
  • The server must be placed in Maintenance Mode; this ensures that any running virtual machines with virtual disks in the affected storage repository are migrated before the changes are made.
  • Multipathing must be configured on each host in the pool. All cabling and, in the case of iSCSI, all subnet configurations must match the corresponding NICs on each host. For example, all NIC 3s must be configured to use the same subnet.

For more information about multipathing, see the Configuring iSCSI Multipathing Support for XenServer guide and the XenServer Administrator’s Guide.
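As a quick check of the matching-subnet requirement above, you can compare the IP configuration of the storage NICs across the pool with the xe CLI. This is only a sketch; eth3 is an example device name, so substitute your storage NIC:

```shell
# List the IP configuration of a given NIC on every host in the pool.
# The subnets shown for the same device should match across hosts.
xe pif-list device=eth3 params=host-name-label,device,IP,netmask
```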

To Enable or Disable Multipathing Using XenCenter

The following section contains instructions on enabling and disabling multipathing using XenCenter.

  1. In the XenCenter Resources pane, select the server and then put it into Maintenance Mode. There may be a short delay while XenCenter migrates any active virtual machines and unplugs the existing storage. If the server is a pool master, it will be disconnected and may disappear from the Resources pane temporarily while a new pool master is assigned. When the server reappears in the Resources pane, continue to the next step.


  2. On the General tab, click Properties, and then click the Multipathing tab.
  3. To enable multipathing, select the Enable multipathing on this server check box. To disable multipathing, clear the check box.


  4. Click OK to apply the new setting and close the dialog box. There is a short delay while XenCenter saves the new storage configuration.
  5. Take the server out of Maintenance Mode: select the server in the Resources pane, right-click, and then click Exit Maintenance Mode.

To Enable or Disable Multipath Using the xe CLI

  1. Stop all VMs that use virtual disks on the affected SRs so that the SRs can be unplugged. You can do this by suspending the VMs, shutting them down, or migrating them to a different host.

  2. Unplug the PBDs associated with those SRs. For each such SR:

     Find the PBD related to the particular SR:
     # xe sr-list uuid=<sr_uuid> params=all

     Unplug the PBD:
     # xe pbd-unplug uuid=<pbd_uuid>
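If you do not want to scan the full parameter list for the PBD UUIDs, the standard xe pbd-list command can narrow the output to the SR in question; this is a sketch using placeholder UUIDs:

```shell
# List only the PBDs belonging to the SR (one per host in the pool)
xe pbd-list sr-uuid=<sr_uuid>

# Unplug each PBD returned by the previous command
xe pbd-unplug uuid=<pbd_uuid>
```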



  3. Change the multipath setting for the host by setting the host’s other-config:multipathing parameter.

     Run the following commands to enable multipathing:
     # xe host-param-set other-config:multipathing=true uuid=<host_uuid>
     # xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>

     Run the following command to disable multipathing:
     # xe host-param-set other-config:multipathing=false uuid=<host_uuid>



  4. Plug the PBDs back in:

     # xe pbd-plug uuid=<pbd_uuid>

  5. Repeat steps 1 through 4 for each server in the pool.

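Put together, the per-host sequence might look like the following sketch (all UUIDs are placeholders, and the VMs using the SR are assumed to be stopped already):

```shell
# Unplug the SR's PBD on this host
xe pbd-unplug uuid=<pbd_uuid>

# Enable multipathing and select the default device-mapper handler
xe host-param-set other-config:multipathing=true uuid=<host_uuid>
xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>

# Plug the PBD back in
xe pbd-plug uuid=<pbd_uuid>
```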
Advanced Multipath Configuration

Redundant Disk Array Controller (RDAC) Configuration (XenServer 6.2 only)

RDAC configuration is required by many storage vendors. The following configuration applies to the IBM DS4xxx series, but you can try it on other storage arrays that require RDAC.

Important: This configuration may not work for certain storage types and is mostly experimental.

We recommend that you configure and test RDAC multipathing before placing your XenServer hosts into production.

The following section provides an example using IBM storage:

  1. Find out the vendor and product type.
  2. The vendor option is usually the manufacturer, such as IBM.
  3. The product number can be found using several tools or by asking the manufacturer. For example, if you have a QLogic Fibre Channel HBA, you can use the scli tool to query the target and find the product ID.

    # scli -t

    Note: If scli is not in your path, you can find it at the following location: /opt/QLogic_Corporation/QConvergeConsoleCLI/scli
  4. Add the device to the devices section of the /etc/multipath.conf file. Some device sections are already included; add the new device at the beginning or end of the devices section. The new entry should look like the following example:

    devices {
        device {
            vendor "IBM"
            product "1815"
            prio_callout "/sbin/mpath_prio_rdac /dev/%n"
            path_grouping_policy group_by_prio
            failback immediate
            path_checker rdac
            hardware_handler "1 rdac"
            user_friendly_names no
        }
    }

  5. Reload the multipath configuration. See the “Reload Multipath Configuration” section for more information.
  6. Show the multipath information. See the “Show Multipath Information” section for more information.
  7. Enable the mppVhba driver. Some storage devices require the mppVhba driver, which is not enabled by default. Most IBM storage that requires RDAC needs this driver to be enabled.

  Note: You can use wildcards in the product option if the product string is too long. For example, "1723*".

Asymmetric Logical Unit Access (ALUA) for iSCSI Devices

Refer to the blog on XenServer 6.5 and Asymmetric Logical Unit Access (ALUA) for iSCSI Devices for information.

Reload Multipath Configuration

Reload multipath configuration on each XenServer in the pool after any change to the /etc/multipath.conf file. Run the following command:

# service multipathd restart
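Because every pool member needs the same file, one way to propagate a tested change is to copy it to the other hosts and restart the service there. This is an example only; host2 is a placeholder hostname, and root SSH access between hosts is assumed:

```shell
# Copy the edited file to another pool member and restart multipathd there
scp /etc/multipath.conf root@host2:/etc/multipath.conf
ssh root@host2 'service multipathd restart'
```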

Testing and Troubleshooting

Show Multipath Information

Run either of the following commands to show the current multipath topology (-ll additionally queries the path checkers and gives more detail):

# multipath -l

# multipath -ll

Run the following command to increase the verbosity and print all detected paths and multipath devices:

# multipath -v 2


Run the following command for even more detailed output, which also refreshes the information supplied by the previous commands:

# multipath -v 3

Note: In some cases, multipath -ll displays all paths correctly but XenCenter does not show all connected paths. In such scenarios, follow the instructions in the “Refresh Multipath Information in XenCenter” section.

Refresh Multipath Information in XenCenter

If multipath -ll displays all paths correctly but XenCenter shows some paths as not connected, you can refresh the information by running the following script:

# /opt/xensource/sm/mpathcount.py

iSCSI – Testing and Troubleshooting Commands

Check open-iscsi service status:

# service open-iscsi status

Restart open-iscsi service:

# service open-iscsi restart

Discover targets at given IP address:

# iscsiadm -m discovery -t sendtargets -p <ip_address_of_storage>

Log in to the target manually (the -l option stands for “login”):

# iscsiadm -m node -T <iqn> -p <ip-address-of-the-storage> -l

See the disk and iSCSI ID in devices:

# ll /dev/disk/by-id/

You should be able to see the disk in this directory after a successful iSCSI login. The long number is the SCSI ID. You can use it with sr-create or when manually creating a PBD for iSCSI.
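For example, a shared LVM-over-iSCSI SR can be created from that SCSI ID with a command along the following lines; the name-label is illustrative, and all angle-bracket values are placeholders for your environment:

```shell
# <scsi_id> is the long identifier seen under /dev/disk/by-id/
xe sr-create name-label="iSCSI SR" shared=true type=lvmoiscsi \
    device-config:target=<ip-address-of-the-storage> \
    device-config:targetIQN=<iqn> \
    device-config:SCSIid=<scsi_id>
```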

The following command provides a more concise output that includes the SCSI IDs.


# ll /dev/disk/by-scsid/

Log out of the target (the -u option stands for “logout”):


# iscsiadm -m node -T <iqn> -p <ip-address-of-the-storage> -u

iSCSI – Not Every Path Available

You might experience an issue where only one path becomes active and the other paths are not connected. To resolve this issue, you can log in to each path manually at boot, as explained in the following steps.

Assume you already have one active path, on IP address 192.168.30.1.

Discover the other paths, for example:

# iscsiadm -m discovery -t sendtargets -p 192.168.31.1

# iscsiadm -m discovery -t sendtargets -p 192.168.32.1

# iscsiadm -m discovery -t sendtargets -p 192.168.33.1

Test logging in to the other paths, for example:

# iscsiadm -m node -T <iqn> -p 192.168.31.1 -l

# iscsiadm -m node -T <iqn> -p 192.168.32.1 -l

# iscsiadm -m node -T <iqn> -p 192.168.33.1 -l

Now you should be able to see all the paths active in XenCenter or by running the following command:

# multipath -l

Make the logins persistent by adding the commands to /etc/rc.local on every server in the pool. Add the lines without the # prompt shown elsewhere in this article, because lines beginning with # are treated as comments in the script:

sleep 30
iscsiadm -m node -T <iqn> -p 192.168.31.1 -l
iscsiadm -m node -T <iqn> -p 192.168.32.1 -l
iscsiadm -m node -T <iqn> -p 192.168.33.1 -l

Fibre Channel Commands

General Information about HBA and SCSI

To examine basic information about the Fibre Channel HBAs in a machine:

# systool -c fc_host -v

To look at verbose information regarding the SCSI adapters present on a system

# systool -c scsi_host -v

To see what Fibre Channel devices are connected to the Fibre Channel HBA cards

# systool -c fc_remote_ports -v -d

For Fibre Channel transport information

# systool -c fc_transport -v

For information on SCSI disks connected to a system

# systool -c scsi_disk -v

To examine more disk information including which hosts are connected to which disks

# systool -b scsi -v

Use the sg_map command to view more information about the SCSI map

# sg_map -x

To obtain driver information, including version numbers and active parameters

For QLogic HBAs:

# systool -m qla2xxx -v

For Emulex HBAs:

# systool -m lpfc -v

Note: For more information about systool commands, see Red Hat Knowledgebase, Article ID: 9937 Why is the /proc/scsi/qla2xxx/ or /proc/scsi/lpfc/ directory missing in Red Hat Enterprise Linux 5 and what has replaced it?


QLogic HBA

Rescan QLogic HBA for available LUNs:

# echo "- - -" > /sys/class/scsi_host/host<number>/scan

(For more details see CTX120172 – How to Rescan the Qlogic Host Bus Adapter for New Logical Unit Numbers in XenServer)
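If the server has several HBA ports, a small loop can repeat the rescan for every SCSI host entry rather than a single host<number>; this is a sketch with no XenServer-specific assumptions:

```shell
# Rescan all SCSI hosts for new LUNs
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done
```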

Disks should appear in /dev/disk/by-id

# ll /dev/disk/by-id

Query the QLogic HBA for attached instances:

# scli -t

Query the QLogic HBA for LUNs:

# scli -l <hba_instance_number_from_previous_command>

To remove HBA-based FC or iSCSI device entries:

# echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete


Emulex HBA

Utility for Emulex HBA:

# hbacmd

Display help for the command:

# hbacmd -h

Query Emulex HBAs:

# hbacmd listHBAs

List HBA attributes:

# hbacmd hbaattrib <wwpn_from_previous_command>

For example:

# hbacmd hbaattrib 10:00:00:00:c9:20:08:cc

List HBA port attributes:

# hbacmd portattrib <wwpn_from_previous_command>

For example:

# hbacmd portattrib 10:00:00:00:c9:20:08:cc

Note: If hbacmd is not in the path, you can find it at /usr/sbin/hbanyware/hbacmd

FAQ

Q: Do you support third-party handlers?

A: Citrix only supports the default device-mapper (dmp) handler from XenServer 6.5 onwards. In addition to the default device-mapper handler, XenServer 6.2 also supports DMP RDAC and MPP RDAC for LSI-compatible controllers.

Q: To what extent do you support multipathing configurations?

A: Citrix only supports default configurations. However, you may modify the /etc/multipath.conf configuration file according to your requirements or to vendor-specific settings, and then reload the multipath configuration as explained in the “Reload Multipath Configuration” section.

Q: Do you support the PowerPath handler?

A: No.
