PVS vDisk Inconsistency – Replication Status Shows Error "Server Not Reachable" When NIC Teaming is Configured

  • Verify whether NIC Teaming is configured as Active-Active. If so, reconfigure it as Active-Passive.


Open the network team configuration and check whether the team is set to Active-Active.

Verify the NICs configured under Active Adapters and confirm that no Standby Adapters are configured.

Reconfigure the team so that both Active and Standby adapters are configured.

Note that NIC team configuration differs between adapter manufacturers; follow the appropriate steps in your adapter's configuration guide.

Reconfiguring NIC teaming may interrupt the network connection. Take appropriate precautions to avoid production impact.


  • Verify the MTU setting of NIC on all PVS servers

Because the replication status is synchronized via UDP on PVS port 6895, a communication failure over this UDP port will also affect the reported replication status.

Mismatched MTU values on the PVS servers' NICs can also block this UDP communication. For example, if one NIC has an MTU of 1500 (the default) and the other has an MTU of 6000, UDP packets larger than 1500 bytes will be lost due to inconsistent fragmentation: the sender with an MTU of 6000 does not fragment a packet that is larger than 1500 but smaller than 6000 bytes, while the peer with an MTU of 1500 cannot accept it, causing packet loss.

Check the MTU value on all PVS servers with the following command:

netsh interface ipv4 show subinterface

If the MTU values differ between PVS servers, change them to the same value (the default of 1500 is recommended):

netsh interface ipv4 set subinterface "Ethernet" mtu=1500 store=persistent

Please replace Ethernet with the NIC name of your PVS server.
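As an illustration of the consistency check above, here is a minimal Python sketch; the server names and MTU values are hypothetical, and in practice each value would come from running "netsh interface ipv4 show subinterface" on the corresponding server.

```python
from collections import Counter

def find_mtu_mismatches(server_mtus):
    """Given a mapping of PVS server name -> NIC MTU, return the
    servers whose MTU differs from the most common value."""
    baseline, _ = Counter(server_mtus.values()).most_common(1)[0]
    return {name: mtu for name, mtu in server_mtus.items() if mtu != baseline}

# Hypothetical values gathered from each server
mtus = {"PVS01": 1500, "PVS02": 1500, "PVS03": 6000}
print(find_mtu_mismatches(mtus))  # -> {'PVS03': 6000}, reset it to 1500
```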



SEP blocks NIC Teaming in Server 2019

I need a solution

Recently I installed a fresh copy of Windows Server 2019 (OS Build 17763.107) on my IBM System x3650 M5 machine with 4 Broadcom NetXtreme Gigabit adapters. As soon as I created NIC teaming with the LACP option (same on the switch side) and installed SEP version 14.2.3335.1000 for WIN64BIT, I got disconnected after a restart. Further investigation showed that the NIC cards individually looked fine, but the teamed NIC interface was crossed out as if the network cable was unplugged.

I upgraded drivers from Lenovo, installed cumulative updates for Windows, and ran the Symantec troubleshooter (which found zero problems related to the NIC), but nothing seems to work.

Symantec support suggested that some rule was blocking traffic. When we removed the "block any any" rule from the firewall rules, the teamed NIC started up. The same happened when we just disabled the firewall module.

I had Server 2012 R2 installed prior to 2019 on this machine and it never had such a problem. A couple of years ago I tried to upgrade it to 2016, but I encountered the same "Cable unplugged" problem with NIC teaming and didn't troubleshoot it much, since it was only for evaluation purposes.

Any ideas? Maybe any of you encountered the same problem and more importantly: solved it without just uninstalling SEP for good? 😀




Isilon: If the SmartConnect Service IP (SSIP) is assigned to an aggregate interface, the IP address may go missing under certain conditions or move to another node if one of the lagg ports is shut down.

Article Number: 519890 Article Version: 13 Article Type: Break Fix

Isilon, Isilon OneFS

The SmartConnect SSIP or network connectivity can be disrupted on a node if a link aggregation interface in LACP mode is configured and one of the port members of the lagg interface stops participating in the LACP aggregation.

The issue happens when a node is configured with any of the link aggregation interfaces:



And one of its port members is not participating in the lagg interface:


ether 00:07:43:09:3c:77

inet6 fe80::207:43ff:fe09:3c77%lagg0 prefixlen 64 scopeid 0x8 zone 1

inet 10.25.58.xx netmask 0xffffff00 broadcast 10.25.58.xxx zone 1


media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

laggport: cxgb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

>> laggport: cxgb1 flags=0<>

This will cause OneFS to internally set the link aggregation interface to ‘No Carrier’ status, due to a bug in network manager software (Flexnet):

# isi network interface list

LNN Name Status Owners IP Addresses


1 10gige-1 No Carrier – –

1 10gige-2 Up – –

1 10gige-agg-1 No Carrier groupnet0.subnet10g.pool10g

Possible failures causing the issue:

  1. Failed switch port
  2. Incorrect LACP configuration at the switch port
  3. Bad cable/SFP, or another physical issue
  4. The switch connected to a port failed or was rebooted
  5. BXE driver bug reporting not-full-duplex in a port state (KB511208)

Failures 1 to 4 are external to the cluster, and the issue should go away as soon as they are fixed. Failure 5 can be a persistent failure induced by a known OneFS BXE bug (KB 511208).

  1. If the node has the lowest node ID in the pool and the SmartConnect SSIP is configured there, then:
    1. If failure 1, 2, or 3 happens, the SSIP will move to the next-lowest node ID that is clear of any failure
    2. If failure 4 is present, the SSIP will not be available on any node, and DU (data unavailability) is expected until the workaround is implemented, the patch is installed, or the switch is fixed or becomes available again after a reboot.
    3. If failure 5 is present:
      1. If only one port has failed, the SSIP will move to the next available lowest node ID not affected by the issue
      2. [DU] If all nodes in a cluster are BXE nodes and all are affected by the bug, the SSIP will not be available; expect DU until the workaround or patch is applied.
  2. If link aggregation in LACP mode is configured in a subnet-pool whose defined gateway is the default route on the node, then:
    1. If the issue happens while the node is running and the default route is already set, the default route will remain configured and available, and connectivity for already-connected clients should continue working.
    2. [DU] If the node is rebooted with any of the persistent failures present, the default route will not be available after the node comes back up, causing DU until the external issue is fixed, the workaround is applied, or the patch is installed.
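The SSIP placement rules above can be sketched in Python as a simplified model; the node IDs and failure sets below are hypothetical, and this is only an illustration, not the actual Flexnet logic.

```python
def place_ssip(nodes_in_pool, affected):
    """Return the node ID that should host the SmartConnect SSIP:
    the lowest node ID in the pool that is clear of any failure,
    or None if every node is affected (DU expected)."""
    for node_id in sorted(nodes_in_pool):
        if node_id not in affected:
            return node_id
    return None

# Hypothetical pool of 4 nodes where nodes 1 and 2 have a failed lagg port
print(place_ssip([1, 2, 3, 4], affected={1, 2}))  # -> 3
print(place_ssip([1, 2], affected={1, 2}))        # -> None (DU expected)
```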

If any of the described failures is present during an upgrade, then after the rolling reboot a DU is expected due to the case described in cause A->c->ii or cause B->b. Check prior to the upgrade to verify that you are clear of the described failures.


Workaround to immediately restore the link aggregation interface if only one member port is persistently down (failed switch, failed cable/SFP, BXE bug, or other persistent issue)

Step 1:

Identify the failed member port on the link aggregation interface:

# ifconfig

lagg1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500


ether 00:0e:1e:58:20:70

inet6 fe80::20e:1eff:fe58:2070%lagg1 prefixlen 64 scopeid 0x8 zone 1

inet 172.16.240.xxx netmask 0xffff0000 broadcast 172.16.255.xxx zone 1


media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

>> laggport: bxe1 flags=0<>


Step 2:

Manually remove the port member with the command:

ifconfig lagg1 -laggport bxe1

The network should recover within 10-20 seconds of executing the command.

This change will be lost after a reboot.

After the external failure on the port has been identified and fixed, and the port is available again, reconfigure the port back into the link aggregation configuration with the command:

ifconfig lagg1 laggport bxe1

A permanent fix will be available in the following OneFS maintenance releases once they become available:

  • OneFS
  • OneFS

A roll-up patch is now available: (bug 226984) – patch-226984, (bug 226323) – patch-226323

NOTE: This issue affects the following OneFS versions ONLY:

  • OneFS
  • OneFS
  • OneFS
  • OneFS


How to Configure and Verify Link Aggregation Control Protocol (LACP) on NetScaler Appliance

  • Run the following command for each interface to enable LACP on the NetScaler interfaces:

    set interface <Interface_ID> -lacpMode PASSIVE -lacpKey 1

    To create a link aggregation channel using LACP, you need to enable LACP and specify the same LACP key on each interface that you want to be part of the channel.

    When you enable LACP on an interface, the channels are dynamically created. Additionally, when you enable LACP on an interface and set the lacpKey to 1, the interface is automatically bound to the channel LA/1.

    When you bind an interface to a channel, the channel parameters get precedence over the interface parameters and the interface parameters are ignored. If a channel was created dynamically by LACP, you cannot perform add, bind, unbind, or remove operations on the channel. A channel dynamically created by LACP is automatically deleted when you disable LACP on all the member interfaces of the channel.

    Refer to Citrix Documentation for all operations that can be performed with the “interface” command.


    ECS – xDoctor: “One or more network interfaces are down or missing”

    Article Number: 503814 Article Version: 5 Article Type: Break Fix

    Elastic Cloud Storage,ECS Appliance,ECS Appliance Hardware

    xDoctor is reporting the below warning:

    admin@ecs1:~> sudo -i xdoctor --report --archive=2017-09-01_064438 -CEW
    Displaying xDoctor Report (2017-09-01_064438) Filter:['CRITICAL', 'ERROR', 'WARNING'] ...
    Timestamp = 2017-09-01_064438
    Category = platform
    Source = ip show
    Severity = WARNING
    Node =
    Message = One or more network interfaces are down or missing
    Extra = {'': ['slave-0']}

    Connect to the node in question; in this case you can see that the connection to the rabbit switch is down:

    admin@ecs4:~> sudo lldpcli show neighbor
    -------------------------------------------------------------------------------
    LLDP neighbors:
    -------------------------------------------------------------------------------
    Interface: slave-1, via: LLDP, RID: 1, Time: 28 days, 16:42:58
      Chassis:
        ChassisID: mac 44:4c:a8:f5:63:ad
        SysName: hare
        SysDescr: Arista Networks EOS version 4.16.6M running on an Arista Networks DCS-7050SX-64
        MgmtIP:
        Capability: Bridge, on
        Capability: Router, off
      Port:
        PortID: ifname Ethernet12
        PortDescr: MLAG group 4
    -------------------------------------------------------------------------------
    Interface: private, via: LLDP, RID: 2, Time: 28 days, 16:42:44
      Chassis:
        ChassisID: mac 44:4c:a8:d1:77:b9
        SysName: turtle
        SysDescr: Arista Networks EOS version 4.16.6M running on an Arista Networks DCS-7010T-48
        MgmtIP:
        Capability: Bridge, on
        Capability: Router, off
      Port:
        PortID: ifname Ethernet4
        PortDescr: Nile Node04 (Data)
    -------------------------------------------------------------------------------
    admin@ecs4:~>

    Check public interface config:

    admin@ecs4:~> sudo cat /etc/sysconfig/network/ifcfg-public
    BONDING_MASTER=yes
    BONDING_MODULE_OPTS="miimon=100 mode=4 xmit_hash_policy=layer3+4"
    BONDING_SLAVE0=slave-0
    BONDING_SLAVE1=slave-1
    BOOTPROTO=static
    IPADDR=10.x.x.x/22
    MTU=1500
    STARTMODE=auto
    admin@ecs4:~>
    admin@ecs4:~> viprexec -i "grep Mode /proc/net/bonding/public"
    Output from host : Mode: IEEE 802.3ad Dynamic link aggregation
    Output from host : Mode: IEEE 802.3ad Dynamic link aggregation
    Output from host : Mode: IEEE 802.3ad Dynamic link aggregation
    Output from host : Mode: IEEE 802.3ad Dynamic link aggregation
    admin@ecs4:~>

    Check interface link status:

    admin@ecs4:~> viprexec -i 'ip link show | egrep "slave-|public"'
    Output from host : public: command not found
    3: slave-0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    Output from host : public: command not found
    3: slave-0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    Output from host : public: command not found
    4: slave-0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    Output from host : public: command not found
    2: slave-0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master public state DOWN mode DEFAULT group default qlen 1000
    5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
    10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    admin@ecs4:~>
    admin@ecs4:~> sudo ethtool slave-0
    Settings for slave-0:
     Supported ports: [ FIBRE ]
     Supported link modes: 10000baseT/Full
     Supported pause frame use: No
     Supports auto-negotiation: No
     Advertised link modes: 10000baseT/Full
     Advertised pause frame use: No
     Advertised auto-negotiation: No
     Speed: Unknown!
     Duplex: Unknown! (255)
     Port: Other
     PHYAD: 0
     Transceiver: external
     Auto-negotiation: off
     Supports Wake-on: d
     Wake-on: d
     Current message level: 0x00000007 (7)
      drv probe link
     Link detected: no

    Refer to ECS Hardware Guide for details of specific port on the switch.

    The ECS Hardware Guide is available in SolVe as well as at support.emc.com:


    Port 12 on rabbit switch is connected to slave-0 interface of node 4.

    Connect to rabbit with admin credentials from a different node and check interface status:

    admin@ecs1:~> ssh rabbit
    Password:
    Last login: Tue Sep 5 11:13:30 2017 from
    >show interfaces Ethernet12
    Ethernet12 is down, line protocol is notpresent (notconnect)
     Hardware is Ethernet, address is 444c.a8de.8f83 (bia 444c.a8de.8f83)
     Description: MLAG group 4
     Member of Port-Channel4
     Ethernet MTU 9214 bytes , BW 10000000 kbit
     Full-duplex, 10Gb/s, auto negotiation: off, uni-link: n/a
     Loopback Mode : None
     0 link status changes since last clear
     Last clearing of "show interface" counters never
     5 minutes input rate 0 bps (0.0% with framing overhead), 0 packets/sec
     5 minutes output rate 0 bps (0.0% with framing overhead), 0 packets/sec
     0 packets input, 0 bytes
     Received 0 broadcasts, 0 multicast
     0 runts, 0 giants
     0 input errors, 0 CRC, 0 alignment, 0 symbol, 0 input discards
     0 PAUSE input
     0 packets output, 0 bytes
     Sent 0 broadcasts, 0 multicast
     0 output errors, 0 collisions
     0 late collision, 0 deferred, 0 output discards
     0 PAUSE output
    rabbit>

    The interface status above shows that the link is also down and that there has never been any I/O traffic on this interface.

    The SFP was not properly seated during the installation phase.

    The customer was able to re-seat the SFP. After that, the link was automatically detected and came online.

    Otherwise, a CE needs to go onsite for a physical inspection of the SFP module, cable, etc., connecting to the slave-x interface on the node.

    Extract from ECS Hardware Guide:

    Network cabling

    The network cabling diagrams apply to a U-Series, D-Series, or C-Series ECS Appliance in a Dell EMC or customer-provided rack.

    To distinguish between the three switches, each switch has a nickname:
    • Hare: 10 GbE public switch is at the top of the rack in a U- or D-Series or the top switch in a C-Series segment.
    • Rabbit: 10 GbE public switch is located just below the hare in the top of the rack in a U- or D-Series or below the hare switch in a C-Series segment.
    • Turtle: 1 GbE private switch that is located below rabbit in the top of the rack in a U-Series or below the hare switch in a C-Series segment.
    U- and D-Series network cabling

    The following figure shows a simplified network cabling diagram for an eight-node configuration for a U- or D-Series ECS Appliance as configured by Dell EMC or a customer in a supplied rack. Following this figure, other detailed figures and tables provide port, label, and cable color information.

    Switches cabling

    The ECS Hardware and Cabling Guide shows the rabbit and hare switches labeled as switch 1 and switch 2, which can cause confusion when the cabling is being verified.

    See the table below for matching switches and ports, as well as the picture showing the appropriate switch port numbers.

    Switch 1 = Rabbit = Bottom switch

    Switch 2 = Hare = Top switch

    Node ports:

    Slave-0 = P01 = right port – connects to Switch 1 / Rabbit / Bottom switch

    Slave-1 = P02 = left port – connects to Switch 2 / Hare / Top switch

    User-added image


    NIC teaming not working with sep 12.1.6

    I need a solution


    I tried to install SEP 12.1.6 on our new server (Windows Server 2016); the server has 2 NICs configured as a team (mode LACP – dynamic – 2x 10G Broadcom interfaces).

    As soon as I install SEP, the team is not reachable anymore; it seems like the IP configuration has been lost.

    I tried the following workaround from the KB "Broadcom Teaming network adapter fails to acquire IP address after installing Symantec Endpoint Protection 12.1" but it does not fix my problem. When I uninstall the SEP package everything works fine.

    Any ideas to fix the problem?




    Re: VxRail issue


    I guess you’re having a machine



    So, I’d expect network configuration like this:


    So, VMNIC0 maps to port #0 etc..

    vSphere goes into port #3 & #4 (passive/active).

    I would not go with trunking or freehand network uplink modifications, because this may cause trouble later on (e.g., with upgrades or system expansion).

    “Do not enable Link Aggregation on VxRail Switch Ports: Do not use link aggregation, including protocols such as LACP and EtherChannel, on any ports directly connected to VxRail nodes. VxRail Appliances use the vSphere active/standby configuration (NIC teaming) for network redundancy. However, LACP could be enabled on non-system ports, such as additional NIC ports or 1G ports, for user traffic. VxRail uses vSphere Network I/O Control (NIOC) to allocate and control network resources for the four predefined network traffic types required for operation: Management, vSphere vMotion, vSAN and Virtual Machine. The respective NIOC settings for the predefined network traffic types are listed in the tables below for the various VxRail models.”

    hope this will help a bit



    Load Balancing Algorithm Recommended for NetScaler Features Deployed with Hyper-V NIC Teaming

    In a Hyper-V server, one or more physical NICs can be combined into a NIC teaming solution and attached to the NetScaler VPX for bandwidth aggregation and traffic failover, preventing connectivity loss in the event of a network component failure.

    This article explains the various load balancing algorithms and Citrix's recommendation for some of the NetScaler features deployed with Hyper-V NIC teaming.

    Hyper-V NIC Teaming and Load Balancing Algorithms

    Outbound traffic of a NIC team can be distributed among the available links in three ways, using the following load balancing algorithms:

    1. Hyper-V Port
    2. Dynamic
    3. Address hash


    From the deployment guide of NIC teaming (https://gallery.technet.microsoft.com/windows-server-2012-r2-nic-85aa1318), it is important to note the following facts about each algorithm when used with switch-independent NIC teaming.

    Hyper-V port mode

    • Uses a single NIC interface from the NIC team for ingress and egress traffic distribution of a VM.
    • The host makes no source MAC address changes; the peer device always observes packets from a single MAC.
    • This mode limits a single VM to the bandwidth available on a single interface of the team.


    Address Hash mode

    • Creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters.
    • All special packets, including ARP, NS (IPv6 Neighbour Discovery), and ICMP packets, are sent on the primary team member.
    • All traffic sent on NICs other than the primary team member is sent with the source MAC address modified to match the NIC on which it is sent.
    • All traffic sent on the primary team member is sent with the original source MAC address (which may be the team’s source MAC address).


    Dynamic mode

    • Takes the best aspects of the other two modes and combines them into a single mode.
    • Outbound loads are distributed based on a hash of the TCP ports and IP addresses, and loads are also rebalanced in real time, so a given outbound flow may move back and forth between team members.
    • Every VM is affinitized to a team member. All ARP/NS packets are sent on the team member to which the port is affinitized.
    • Packets sent on the affinitized team member have no source MAC address replacement done.
    • Packets sent on a team member other than the affinitized team member will have source MAC address replacement done.
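As an illustration only (not the actual Windows teaming implementation), the flow-to-member pinning used by the Address Hash and Dynamic modes can be sketched as follows; the addresses, ports, and member names are hypothetical.

```python
def pick_team_member(src_ip, dst_ip, src_port, dst_port, members):
    """Assign a flow to a NIC team member by hashing its address
    components, so all packets of one flow use the same member."""
    flow_hash = hash((src_ip, dst_ip, src_port, dst_port))
    return members[flow_hash % len(members)]

team = ["nic0", "nic1", "nic2"]
# The same 4-tuple always maps to the same team member
a = pick_team_member("10.0.0.1", "10.0.0.9", 49152, 443, team)
b = pick_team_member("10.0.0.1", "10.0.0.9", 49152, 443, team)
assert a == b
```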


    Citrix Recommendation

    Citrix recommends that the underlying load-balancing mode be Hyper-V port mode when switch-independent NIC teaming is deployed with NetScaler VPX for the following features:

    1. HA
    2. Cluster
    3. MAC Based Forwarding
    4. MAC mode VServers
    5. Forwarding Sessions configured

    This is because Dynamic mode and Address hash mode do source MAC address replacement for outbound traffic, so the peer device receiving packets sent out of the NIC team interfaces will see the source MAC of the NIC team interface instead of the sender machine's MAC.
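As a toy illustration of why this matters for MAC-based forwarding (hypothetical values, not NetScaler code): a MAC-learning table keyed by source IP collapses to a single entry value when every packet arrives with the team's MAC, so return traffic for all clients is sent to the same MAC.

```python
def mbf_table(packets):
    """Toy MAC-based-forwarding table: remember the source MAC of the
    last packet seen from each source IP, to use for return traffic."""
    table = {}
    for src_ip, src_mac in packets:
        table[src_ip] = src_mac
    return table

# Without MAC replacement, each client IP keeps its own MAC:
print(mbf_table([("10.0.0.1", "aa:aa"), ("10.0.0.2", "bb:bb")]))
# With Dynamic/Address-hash replacement, every packet arrives with the
# team MAC, so return traffic for all clients goes to one MAC:
print(mbf_table([("10.0.0.1", "tt:tt"), ("10.0.0.2", "tt:tt")]))
```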


    LACP Bonding in XenServer – Configuration and Troubleshooting

    In XenServer 6.1, LACP link aggregation was added to existing bonding modes for vSwitch. This article describes dynamic link aggregation (LACP) between a XenServer host and a switch, giving a high-level overview of the 802.3ad protocol, explaining configuration and diagnosis tools.


    LACP Bonds

    Configuring LACP on a switch




    NIC Bonding

    NIC bonding is a technique where two or more network cards (NICs) are configured together in order to logically function as one. It can be used for the following reasons:

    • Redundancy: in case of link failure all traffic should get shifted seamlessly to the remaining NIC(s).
    • Throughput aggregation: it is usually more cost-effective to bundle a few 1Gb NICs than to upgrade from 1Gb to 10Gb.

    Other terms frequently used for NIC bonding are: NIC teaming, network bonding, link aggregation, link bundling or port bundling. However, it is perceived to be more correct to use link aggregation/bundling to describe configurations where set-up is required on both sides, so both endpoints are aware of the aggregation (for example, when configuration is done on both the server side and the switch side).

    Bonding modes in XenServer 6.0

    In XenServer 6.0, only two bonding modes were supported for both network stacks (Linux Bridge and vSwitch):

    • active-backup (active-passive): only one NIC would be actively used for traffic and only in case of its failure would an inactive NIC take over,
    • balance-slb (active-active): traffic load balancing based on the source MAC address of each Virtual Network Interface (VIF) of a guest.

    Both ‘active-backup’ and ‘balance-slb’ do not require configuration on the switch side.

    In addition to the preceding options, a few unsupported bonding modes existed in Linux Bridge, notably link aggregation 802.3ad.

    The default bonding mode, ‘balance-slb’, has the benefit of throughput aggregation and failover, but load balancing can work well only with a sufficient number of guests (or rather VIFs), since traffic from one VIF is never split between multiple NICs. Also, its frequent rebalancing was known to cause issues with some switches (see CTX134947 – Adjusting the bond balance interval in XenServer 6.1.0).

    Bonding modes in XenServer 6.1

    In XenServer 6.1, LACP support was implemented for the vSwitch network stack and LACP link aggregation was added to existing bond modes. LACP is supported only for vSwitch.

    LACP bonds

    Link aggregation

    Link aggregation is defined as a configuration in which a group of ports is aggregated together and treated like a single interface. Its advantages are: throughput aggregation, load balancing and failover.

    Static and dynamic LAG

    On a switch, ports are usually grouped together by assigning them the same LAG (Link Aggregation Group) number.

    There are two types of LAGs:

    • Static LAG: ports have LACP disabled and become automatically active members of the bond. Static LAG is not widely used, as it is often considered obsolete and inferior to dynamic LAG. With static LAG on the switch, the bond mode should be ‘balance-slb’ rather than ‘lacp’. Note that use of static LAG is not supported.
    • Dynamic LAG: Link Aggregation Control Protocol (LACP) is used for switch-server communication, in order to negotiate dynamically which links should be active and which should be in stand-by mode.

    Two names

    IEEE 802.3ad, introduced in 2000, was the original standard describing link aggregation and LACP. In order to resolve a layer ordering discrepancy, it was later formally transferred to 802.1 group, becoming IEEE 802.1AX-2008 (with no technical changes). However, the name 802.3ad remains widely used.

    LACPDU frames

    When using LACP protocol, both sides (the switch and the server) regularly exchange LACPDU frames. They contain all the information necessary to negotiate active and stand-by links, monitor the state of the links and notify the partner about a potential failure, providing a prompt failover.

    Actor and Partner

    Terms “Actor” and “Partner” are frequently seen when using LACP protocol. These are relative definitions: “Actor” means “this device” and “Partner” is “other device”, so the server will describe itself as the Actor with the switch being its Partner, while the switch sees itself as the Actor and refers to the server as the Partner.

    Fallback to balance-slb

    In the current implementation, in case of unsuccessful LACP negotiation, XenServer will automatically revert to ‘balance-slb’ behavior. The server still keeps monitoring the traffic for LACP frames, so if a handshake with the switch is achieved, the bond mode will change to LACP.

    Load balancing

    Outgoing packets are distributed between active links. Both sides of the link — the switch and the server — are responsible for balancing their respective outgoing traffic. In XenServer, load balancing between links can be described as follows.

    • A hashing algorithm assigns a hash number (0-255) to each packet, based on a mix of MAC, IP and TCP/UDP port as required.
    • Each hash is assigned to one of the NICs on the bond, which means packets with the same hash are always sent through the same NIC.
    • If a new hash is found, it is assigned to the NIC that currently has the lowest utilization.
    • Rebalancing occurs at a regular interval — hashes are redistributed between the NICs to ensure all NICs are utilized to approximately the same extent.
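The steps above can be sketched as a toy model in Python; this illustrates the scheme (hash-to-NIC pinning with least-utilized assignment for new hashes), not the actual vSwitch code, and the NIC names are hypothetical.

```python
class BondBalancer:
    """Toy model of vSwitch bond load balancing: each packet hash
    (0-255) is pinned to one NIC; a newly seen hash is assigned to
    the currently least-utilized NIC."""
    def __init__(self, nics):
        self.load = {nic: 0 for nic in nics}   # bytes sent per NIC
        self.table = {}                        # hash -> NIC

    def send(self, pkt_hash, nbytes):
        nic = self.table.get(pkt_hash)
        if nic is None:  # new hash: assign to the least-utilized NIC
            nic = min(self.load, key=self.load.get)
            self.table[pkt_hash] = nic
        self.load[nic] += nbytes
        return nic

bond = BondBalancer(["eth0", "eth1"])
first = bond.send(42, 1000)          # hash 42 gets pinned to a NIC
assert bond.send(42, 500) == first   # same hash -> same NIC
```

A periodic rebalancing pass (not shown) would redistribute hashes so all NICs carry roughly equal load.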

    Hashing algorithm

    The hashing algorithm is chosen independently on both switch and server side, as each device balances its outgoing traffic. Rather than using the same method on both sides, hashing algorithm choice should reflect traffic patterns in each direction.

    On the server side, vSwitch provides two hashing algorithms to choose from:

    • tcpudp_ports (default), based on source and destination IP, TCP/UDP ports and source MAC.
    • src_mac, based on source MAC address — this is the same mechanism as the one used already in ‘balance-slb’ mode (traffic to/from a guest is not split).

    On the switch side, the hashing algorithms depend on the switch brand and can have different names.

    Use of TCP/IP parameters for hashing should provide load balancing for management traffic as well as improve it in the case of a low number of guests, as it allows packets from the same guest to be distributed over different NICs at the same time. However, neither of the hashing algorithms benefits storage traffic: in a typical configuration, large amounts of data are sent using the same source and destination IP and ports, and while the endpoints remain the same, all packets will be assigned the same hash and so will be sent through the same NIC. Consider using multipathing rather than bonding for storage traffic.

    In the case of src_mac hashing, there is a non-negligible probability that two different MAC addresses will be assigned the same hash number; when such a “MAC clash” occurs, the two VIFs will always use the same link. In general, src_mac hashing cannot provide even traffic distribution over the links if the number of VIFs is smaller than the number of NICs.
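The probability of such a clash is a standard birthday-problem calculation over the 256 possible hash values (0-255), assuming uniform and independent hashes; a quick sketch:

```python
def mac_clash_probability(num_vifs, buckets=256):
    """Probability that at least two of num_vifs MAC addresses receive
    the same hash value out of `buckets` possibilities, assuming
    hashes are uniform and independent (birthday problem)."""
    p_all_distinct = 1.0
    for i in range(num_vifs):
        p_all_distinct *= (buckets - i) / buckets
    return 1.0 - p_all_distinct

# Even with 20 VIFs, the clash probability already exceeds 50%
print(round(mac_clash_probability(20), 2))  # -> 0.53
```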


    The following limitations apply:

    • LACP for Linux Bridge is not supported.
    • Citrix supports bonding of up to four NICs in XenServer.
    • Cisco EtherChannel is not supported.
    • Cisco Port Aggregation Protocol (PAgP) is not supported.

    Creating LACP bond using XenCenter

    It is possible to create a LACP bond in XenCenter. The procedure is similar to creating bonds of any other type. Either go to the “Networking” tab and choose “Add Network”, and then Bonded Network, or alternatively, click the Create Bond button on the NICs tab. Two types of LACP bonds should be displayed: LACP with load balancing based on IP and port of source and destination and LACP with load balancing based on source MAC address. Load balancing based on IP and port of source and destination is the default hashing algorithm for LACP and it should work well for most typical configurations.

    Creating LACP bond using XenServer command line

    LACP bond can be created in dom0 command line as follows.

    xe bond-create mode=lacp network-uuid=<network-uuid> pif-uuids=<pif-uuids>

    Hashing algorithm can be specified at the creation time (default is tcpudp_ports):

    xe bond-create mode=lacp properties:hashing-algorithm=<halg> network-uuid=<network-uuid> pif-uuids=<pif-uuids>

    where <halg> is src_mac or tcpudp_ports.

    You can also change the hashing algorithm for an existing bond, as follows:

    xe bond-param-set uuid=<bond-uuid> properties:hashing_algorithm=<halg>

    It is possible to customize the rebalancing interval by changing the bond PIF parameter other-config:bond-rebalance-interval and then re-plugging the PIF. The value should be expressed in milliseconds. For example, the following commands change the rebalancing interval to 30 seconds.

    xe pif-param-set other-config:bond-rebalance-interval=30000 uuid=<pif-uuid>
    xe pif-plug uuid=<pif-uuid>

    The two LACP bond modes will not be displayed if you use a version older than XenCenter 6.1 or if you use Linux Bridge network stack.

    Configuring LACP on a switch

    Contrary to other supported bonding modes, LACP requires set-up on the switch side. The switch must support IEEE standard 802.3ad. As is the case for other bonding modes, the best practice remains to connect the NICs to different switches, in order to provide better redundancy. However, due to the setup required (with the exception of HP's cross-switch LACP feature), all bonded NICs must be connected to the same logical switch — that is, either connected to different units of a single switch stack or connected to the same physical switch (see section Stacked switches in CTX132559 – XenServer Active/Active Bonding — Switch Configuration).

    There is no Hardware Compatibility List (HCL) of switches. IEEE 802.3ad is widely recognized and implemented, so any switch with LACP support that observes this standard should work with XenServer LACP bonds.

    Steps for configuring LACP on a switch

    1. Identify switch ports connected to the NICs to bond.

    2. Using the switch web interface or command line interface, set up the same LAG (Link Aggregation Group) number for all the ports to be bonded.

    3. For all the ports to be bonded set LACP to active (for example LACP ON or mode auto).

    4. If necessary, bring up the LAG/port-channel interface.

    5. If required, configure VLAN settings for the LAG interface — just as it would be done for a standalone port.

    Example: Dell PowerConnect 6248 switch

    Setting LAG 14 on port 1/g20 for Dell PowerConnect 6248 switch:

    DELLPC-1>enable
    DELLPC-1#configure
    DELLPC-1(config)#interface ethernet 1/g20
    DELLPC-1(config-if-1/g20)#channel-group 14 mode auto
    DELLPC-1(config-if-1/g20)#exit
    DELLPC-1(config)#exit

    Bringing up the port-channel interface 14 and configuring VLAN settings (steps 4 and 5):

    DELLPC-1>enable
    DELLPC-1#configure
    DELLPC-1(config)#interface port-channel 14
    DELLPC-1(config-if-ch14)#switchport mode general
    DELLPC-1(config-if-ch14)#switchport general pvid 1
    DELLPC-1(config-if-ch14)#no switchport general acceptable-frame-type tagged-only
    DELLPC-1(config-if-ch14)#switchport general allowed vlan add 1
    DELLPC-1(config-if-ch14)#switchport general allowed vlan add 2-5 tagged

    Cleaning up the preceding settings:

    DELLPC-1(config-if-ch14)#no switchport mode
    DELLPC-1(config-if-ch14)#no switchport general pvid

    Example: Cisco Catalyst 3750G-A8

    Configuration of LACP on ports 23 and 24 on the switch:

    C3750-1#configure terminal
    Enter configuration commands, one per line. End with CNTL/Z.
    C3750-1(config)#interface Port-channel3
    C3750-1(config-if)#switchport trunk encapsulation dot1q
    C3750-1(config-if)#switchport mode trunk
    C3750-1(config-if)#exit
    C3750-1(config)#interface GigabitEthernet1/0/23
    C3750-1(config-if)#switchport mode trunk
    C3750-1(config-if)#switchport trunk encapsulation dot1q
    C3750-1(config-if)#channel-protocol lacp
    C3750-1(config-if)#channel-group 3 mode active
    C3750-1(config-if)#exit
    C3750-1(config)#interface GigabitEthernet1/0/24
    C3750-1(config-if)#switchport mode trunk
    C3750-1(config-if)#switchport trunk encapsulation dot1q
    C3750-1(config-if)#channel-protocol lacp
    C3750-1(config-if)#channel-group 3 mode active
    C3750-1(config-if)#exit
    C3750-1(config)#exit

    De-configuring the LACP bond on the switch side:

    C3750-1#configure terminal
    Enter configuration commands, one per line. End with CNTL/Z.
    C3750-1(config)#no interface port-channel3
    C3750-1(config)#interface GigabitEthernet1/0/23
    C3750-1(config-if)#no channel-protocol
    C3750-1(config-if)#no shutdown
    C3750-1(config-if)#exit
    C3750-1(config)#interface GigabitEthernet1/0/24
    C3750-1(config-if)#no channel-protocol
    C3750-1(config-if)#no shutdown
    C3750-1(config-if)#end


    No connectivity

    Lack of connectivity on the bonded network might be due to setting the LAG on the wrong switch ports; in case of issues, double-check that the wiring is as expected. Another possible cause is mismatched settings between the server and the switch.

    Mismatched settings

    XenServer automatically falls back to ‘balance-slb’ mode if LACP is not configured on the switch. However, if LACP is set up on the switch but not on the server, the behavior depends on the switch implementation: many switches drop all traffic on the aggregated ports if LACP negotiation fails. For this reason, it is safer to create the bond on the server side first.
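    A simple way to detect this fallback from dom0 is to compare the bond mode configured in XAPI with the negotiation status reported by the vSwitch. The sketch below is illustrative only: the two values are hard-coded samples standing in for the output of xe bond-list and the lacp_negotiated line of ovs-appctl bond/show.

    ```shell
    #!/bin/sh
    # Illustrative fallback check. The two values below are hard-coded samples;
    # on a live host they would come from:
    #   xe bond-list params=mode --minimal
    #   ovs-appctl bond/show bond0   (the lacp_negotiated line)
    configured_mode="lacp"
    lacp_negotiated="false"

    if [ "$configured_mode" = "lacp" ] && [ "$lacp_negotiated" != "true" ]; then
        status="fallback"   # bond reverted to balance-slb; check the switch config
    else
        status="ok"
    fi
    echo "bond status: $status"
    ```

    With the sample values shown, the check reports a fallback, which in practice means the switch side needs its LACP configuration verified.
    
    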

    Bond creation taking longer

    Bond creation can take more time for an LACP bond, mainly because of the necessary switch-server negotiation, and a short connectivity blip can be expected before the desired configuration is reached. A set-up delay on the order of a few seconds is normal and should not be a cause for concern.

    Only one link is active/not all links are active

    During LACP negotiation, the switch usually has the last word in the choice of active and stand-by links. This is entirely dependent on the switch-side implementation of the protocol, and the choice might indeed differ from user expectations; for example, only one of three available links might be made active. In such cases, test the setup by temporarily disabling the active link (on the server or switch side) and checking whether failover occurs correctly and the other links are used.

    Warning in the switch logs during LACP configuration

    Some switches can issue a warning during creation of an LACP bond. If the bond eventually works, the warning should not be a cause for concern; it is most likely issued while LACP is not yet configured on the server, or after the server-side LACP bond has reverted to ‘balance-slb’.


    The following commands can help with diagnostics and troubleshooting of LACP bonds. They need to be executed in dom0, either in the XenCenter Console tab or in a root SSH session on the XenServer host.

    Command “xe bond-list”

    The XenServer command xe bond-list returns a list of all bonds in the pool. As with other bonding modes, the output contains the UUIDs of the bond object, the bond PIF, and the slave PIFs. Additionally, for LACP bonds the hashing algorithm is displayed in the properties field.

    # xe bond-list params=all
    uuid ( RO)              : 14ebdc8c-5a21-db3c-2d72-5c66cc56075b
          master ( RO)      : 772829a2-a7f2-0b78-2663-344e67ecb1af
          slaves ( RO)      : e168b979-25e3-0640-82ab-3074f326d26e; 4d63ec5d-6341-f135-efa8-52f22f91b8af
          mode ( RO)        : lacp
          properties (MRO)  : hashing_algorithm: tcpudp_ports
          primary-slave ( RO): 4d63ec5d-6341-f135-efa8-52f22f91b8af
          links-up ( RO)    : 2
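    When scripting, the relevant fields can be pulled out of this output with standard text tools. The following sketch embeds an abbreviated sample (values taken from the listing above) in a here-document; on a live host you would pipe the real xe bond-list output instead.

    ```shell
    #!/bin/sh
    # Abbreviated sample of "xe bond-list params=all" output, embedded for
    # illustration; substitute the live command output on a real host.
    bondlist=$(cat <<'EOF'
    uuid ( RO)              : 14ebdc8c-5a21-db3c-2d72-5c66cc56075b
          mode ( RO)        : lacp
          properties (MRO)  : hashing_algorithm: tcpudp_ports
          links-up ( RO)    : 2
    EOF
    )
    # Extract the hashing algorithm and the number of links that are up.
    halg=$(printf '%s\n' "$bondlist" | sed -n 's/.*hashing_algorithm: //p')
    links=$(printf '%s\n' "$bondlist" | awk -F': ' '/links-up/ {print $2}')
    echo "hashing=$halg links-up=$links"
    ```

    For the sample above this prints the tcpudp_ports algorithm and two links up, matching the listing.
    
    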

    Command “ovs-vsctl list port”

    The vSwitch command ovs-vsctl list port returns parameters for all vSwitch ports (interfaces). The fragment of the output shown here corresponds to an LACP bond “bond0”. Note that the bond_mode field describes the hashing algorithm rather than the XenServer bond type; an LACP bond has the lacp field set to “active”.

    # ovs-vsctl list port
    [..]
    _uuid               : c8dcd7e8-0972-45e0-946b-8630d8a5e7fa
    bond_downdelay      : 200
    bond_fake_iface     : false
    bond_mode           : balance-tcp
    bond_updelay        : 31000
    external_ids        : {}
    fake_bridge         : false
    interfaces          : [49e2c5d4-077e-42fb-95cc-752782e934bc, dc16c7de-3473-4342-b80c-625e195c02b2]
    lacp                : active
    mac                 : "00:24:e8:77:bd:4f"
    name                : "bond0"
    other_config        : {bond-detect-mode=carrier, bond-miimon-interval="100", bond-rebalance-interval="10000"}
    qos                 : []
    statistics          : {}
    status              : {}
    tag                 : []
    trunks              : []
    vlan_mode           : []
    [..]

    Command “ovs-appctl bond/show”

    The command ovs-appctl bond/show returns more real-time information about the bond.

    The bond_mode field is as above and contains the hashing algorithm currently in use. The bond-hash-algorithm field is obsolete and is likely to be removed in future versions of Open vSwitch. The next rebalance value indicates how much time remains before the hashes are redistributed over the active NICs.

    The lacp_negotiated field is highly useful, as it indicates whether LACP negotiation with the switch was successful. It reads “false” if LACP is not set up on the switch or if the negotiation failed to converge for any other reason.

    The second part of the output lists the bonded devices with all hashes currently assigned to them, as well as the recent load per hash. Observing hashes with 0 kB load is normal: once a hash has been assigned, it remains in the output long after the relevant traffic has ceased. Only rebooting the server or re-creating the bond cleans up the hash table.

    # ovs-appctl bond/show bond0
    bond_mode: balance-tcp
    bond-hash-algorithm: balance-tcp
    bond-hash-basis: 0
    updelay: 31000 ms
    downdelay: 200 ms
    next rebalance: 8574 ms
    lacp_negotiated: true

    slave eth0: enabled
            active slave
            may_enable: true
            hash 119: 16 kB load
            hash 120: 0 kB load
            hash 128: 597 kB load
            hash 132: 0 kB load
            hash 157: 5 kB load
            [..]

    slave eth1: enabled
            may_enable: true
            [..]
            hash 52: 304 kB load
            hash 64: 246 kB load
            hash 74: 0 kB load
            hash 82: 0 kB load
            [..]
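    For scripted health checks, the negotiation status and the number of enabled slaves can be extracted directly from the bond/show output. The sketch below runs on an embedded, abbreviated sample; on a real host, substitute the live command output.

    ```shell
    #!/bin/sh
    # Abbreviated sample of "ovs-appctl bond/show bond0" output, embedded for
    # illustration; substitute the live command output on a real host.
    bondshow=$(cat <<'EOF'
    bond_mode: balance-tcp
    next rebalance: 8574 ms
    lacp_negotiated: true
    slave eth0: enabled
    slave eth1: enabled
    EOF
    )
    # lacp_negotiated must read "true"; both slaves should be enabled.
    negotiated=$(printf '%s\n' "$bondshow" | awk '/^lacp_negotiated:/ {print $2}')
    enabled=$(printf '%s\n' "$bondshow" | grep -c '^slave .*: enabled')
    echo "negotiated=$negotiated enabled_slaves=$enabled"
    ```

    A "false" negotiation status here is the quickest indicator that the switch side is not (or no longer) speaking LACP.
    
    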

    Command “ovs-appctl lacp/show”

    The OVS command ovs-appctl lacp/show returns information related to the LACP protocol, such as the negotiation status, the aggregation key, the LACP timeout, and the actor and partner identifiers.

    # ovs-appctl lacp/show bond0
    ---- bond0 ----
            status: active negotiated
            sys_id: 00:24:e8:77:bd:4f
            sys_priority: 65534
            aggregation key: 1
            lacp_time: slow

    slave: eth0: current attached
            port_id: 2
            port_priority: 65535

            actor sys_id: 00:24:e8:77:bd:4f
            actor sys_priority: 65534
            actor port_id: 2
            actor port_priority: 65535
            actor key: 1
            actor state: activity aggregation synchronized collecting distributing

            partner sys_id: 00:1c:23:6d:cd:3e
            partner sys_priority: 1
            partner port_id: 113
            partner port_priority: 1
            partner key: 646
            partner state: activity aggregation synchronized collecting distributing

    slave: eth1: current attached
            port_id: 1
            port_priority: 65535

            actor sys_id: 00:24:e8:77:bd:4f
            actor sys_priority: 65534
            actor port_id: 1
            actor port_priority: 65535
            actor key: 1
            actor state: activity aggregation synchronized collecting distributing

            partner sys_id: 00:1c:23:6d:cd:3e
            partner sys_priority: 1
            partner port_id: 114
            partner port_priority: 1
            partner key: 646
            partner state: activity aggregation synchronized collecting distributing
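    Because all bonded NICs must negotiate with the same logical switch, a quick sanity check is to confirm that every slave reports the same partner sys_id. The sketch below runs against an abbreviated sample of the lacp/show output; on a live host, capture the real output instead.

    ```shell
    #!/bin/sh
    # Abbreviated sample of "ovs-appctl lacp/show bond0" output, embedded for
    # illustration; substitute the live command output on a real host.
    lacpshow=$(cat <<'EOF'
    slave: eth0: current attached
            partner sys_id: 00:1c:23:6d:cd:3e
    slave: eth1: current attached
            partner sys_id: 00:1c:23:6d:cd:3e
    EOF
    )
    # Every slave should report the same partner system ID (same logical switch);
    # more than one distinct ID suggests the NICs are cabled to unrelated switches.
    partners=$(printf '%s\n' "$lacpshow" | awk '/partner sys_id:/ {print $3}' | sort -u | wc -l | tr -d ' ')
    echo "distinct partner switches: $partners"
    ```

    In the sample, both slaves report the same partner, so the count is 1, as expected for a correctly cabled LACP bond.
    
    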

    Command “tcpdump”

    Tools like tcpdump can be used in dom0 in order to monitor the traffic.

    tcpdump -i eth2 -v -l

    (press Ctrl+C to stop).

    With tcpdump, you can observe LACP frames in the output.

    09:45:21.364818 LACPv1, length 110
            Actor Information TLV (0x01), length 20
              System 00:1c:23:6d:cd:3e (oui Unknown), System Priority 1, Key 646, Port 113, Port Priority 1
              State Flags [Activity, Aggregation, Synchronization, Collecting, Distributing]
            Partner Information TLV (0x02), length 20
              System 00:24:e8:77:bd:4f (oui Unknown), System Priority 65534, Key 1, Port 2, Port Priority 65535
              State Flags [Activity, Aggregation, Synchronization, Collecting, Distributing]
            Collector Information TLV (0x03), length 16
              Max Delay 0
            Terminator TLV (0x00), length 0
    09:45:21.894776 LACPv1, length 110
            Actor Information TLV (0x01), length 20
              System 00:24:e8:77:bd:4f (oui Unknown), System Priority 65534, Key 1, Port 2, Port Priority 65535
              State Flags [Activity, Aggregation, Synchronization, Collecting, Distributing]
            Partner Information TLV (0x02), length 20
              System 00:1c:23:6d:cd:3e (oui Unknown), System Priority 1, Key 646, Port 113, Port Priority 1
              State Flags [Activity, Aggregation, Synchronization, Collecting, Distributing]
            Collector Information TLV (0x03), length 16
              Max Delay 0
            Terminator TLV (0x00), length 0
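    Seeing LACPDUs from two distinct system IDs (the server's and the switch's) confirms that both sides are transmitting. LACP frames use the Slow Protocols EtherType (0x8809), so a filter such as tcpdump -i eth2 ether proto 0x8809 captures only LACP traffic. The sketch below counts distinct senders in a condensed sample of the capture above; the one-line-per-frame format is a simplification for illustration.

    ```shell
    #!/bin/sh
    # Condensed one-line-per-frame sample of the capture above (real
    # "tcpdump -v" output spans several lines per frame).
    lacpdump=$(cat <<'EOF'
    09:45:21.364818 LACPv1 System 00:1c:23:6d:cd:3e State [Activity, Aggregation, Synchronization, Collecting, Distributing]
    09:45:21.894776 LACPv1 System 00:24:e8:77:bd:4f State [Activity, Aggregation, Synchronization, Collecting, Distributing]
    EOF
    )
    # Two distinct sender system IDs indicate that both the server and the
    # switch are transmitting LACPDUs.
    senders=$(printf '%s\n' "$lacpdump" | sed -n 's/.*System \([0-9a-f:]*\) .*/\1/p' | sort -u | wc -l | tr -d ' ')
    echo "distinct LACPDU senders: $senders"
    ```

    If only one sender ever appears in a capture, the other side is not participating in LACP and the negotiation cannot succeed.
    
    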

    To see which packets are sent on each link, run multiple instances of tcpdump simultaneously in separate dom0 consoles. For example, if interfaces eth0 and eth1 are bonded:

    tcpdump -i eth0 -v
    tcpdump -i eth1 -v

    File /var/log/messages

    The vSwitch uses the file /var/log/messages to log the shifting of traffic load between the bonded NICs:

    Sep 7 09:27:21 localhost ovs-vswitchd: 00130|bond|INFO|bond bond0: shift 3kB of load (with hash 3) from eth0 to eth1 (now carrying 1068kB and 35kB load, respectively)
    Sep 7 09:27:21 localhost ovs-vswitchd: 00131|bond|INFO|bond bond0: shift 2kB of load (with hash 9) from eth0 to eth1 (now carrying 1065kB and 38kB load, respectively)
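    These log lines can be summarized to gauge how actively the bond is rebalancing. The sketch below counts shift events and totals the kilobytes moved in an embedded sample (the two lines above); on a live host, grep /var/log/messages for the ovs-vswitchd bond entries instead.

    ```shell
    #!/bin/sh
    # Sample log lines embedded for illustration; on a live host use, e.g.:
    #   grep 'ovs-vswitchd.*|bond|' /var/log/messages
    log=$(cat <<'EOF'
    Sep 7 09:27:21 localhost ovs-vswitchd: 00130|bond|INFO|bond bond0: shift 3kB of load (with hash 3) from eth0 to eth1 (now carrying 1068kB and 35kB load, respectively)
    Sep 7 09:27:21 localhost ovs-vswitchd: 00131|bond|INFO|bond bond0: shift 2kB of load (with hash 9) from eth0 to eth1 (now carrying 1065kB and 38kB load, respectively)
    EOF
    )
    # Count rebalance events and sum the kilobytes shifted between NICs.
    shifts=$(printf '%s\n' "$log" | grep -c 'bond bond0: shift')
    moved=$(printf '%s\n' "$log" | sed -n 's/.*shift \([0-9]*\)kB.*/\1/p' | awk '{s+=$1} END {print s}')
    echo "rebalance events: $shifts, total moved: ${moved}kB"
    ```
    
    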

    File /var/log/xensource.log

    File /var/log/xensource.log contains useful network daemon entries, as well as DHCP records.

    Additional Resources

    If you experience any difficulties, contact Citrix Technical Support.

    For a review of all supported bonding modes, refer to CTX134585 – XenServer 6.1.0 Administrator’s Guide.

    Technical specification of the LACP protocol can be found in IEEE standard 802.1AX-2008.

    For further information about XenServer 6.1.0 refer to CTX134582 – XenServer 6.1.0 Release Notes.

    For additional information about LACP bonding in the vSwitch network stack, refer to Open vSwitch documentation and Open vSwitch source code. XenServer 6.1.0 uses Open vSwitch 1.4.2.