Cisco Nexus 9000 Series Fabric Switches ACI Mode Border Leaf Endpoint Learning Vulnerability

A vulnerability within the Endpoint Learning feature of Cisco Nexus 9000 Series Switches running in Application Centric Infrastructure (ACI) mode could allow an unauthenticated, remote attacker to cause a denial of service (DoS) condition on an endpoint device in certain circumstances.

The vulnerability is due to improper endpoint learning when packets are received on a specific port from outside the ACI fabric and destined to an endpoint located on a border leaf when Disable Remote Endpoint Learning has been enabled. This can result in a Remote (XR) entry being created for the impacted endpoint that will become stale if the endpoint migrates to a different port or leaf switch. This results in traffic not reaching the impacted endpoint until the Remote entry can be relearned by another mechanism.

Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:

Security Impact Rating: Medium

CVE: CVE-2019-1977



Failover issues with SGOS on ESX and Cisco

I need a solution

Hi there,

I’m having an issue with failover on two virtualized Symantec (Blue Coat) proxies running on two ESX hosts in two datacenters connected with Cisco switches.

I can see the multicast traffic leaving the proxy and going out over the Cisco switches until the firewall blocks it. The packets should be delivered at Layer 2 to the other switch to reach the other ESX host and the proxy running there.

But on the other host I don’t see any incoming multicast traffic. Hence both proxies feel responsible for the virtual IP, which causes problems with Skype etc.

Did anyone have such an issue before? On ESX we already activated promiscuous mode for that VLAN/subnet, but that didn’t change the issue.

The hardware proxies in the same network see the multicast traffic coming in from the virtual machines and behave accordingly. Because the virtual proxies don’t receive any multicast traffic, each one assumes it is the master, since the other is not sending any updates.

I could understand if there were an issue between the two Cisco switches, so that multicast traffic is not forwarded from one to the other. Another idea is that there is a special setting on the ESX machine I’m not aware of. Any ideas?
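One way to narrow this down is to capture on both ends of the path. A sketch of the checks, as commands to run on each ESXi host (the interface and vSwitch names below are placeholders for your environment):

```
# On the sending proxy's ESXi host: confirm the failover packets leave the uplink
# (vmnic0 is a placeholder uplink name)
tcpdump-uw -ni vmnic0 multicast

# On the receiving ESXi host: check whether the packets ever arrive on the uplink
tcpdump-uw -ni vmnic0 multicast

# Verify the vSwitch security policy actually allows promiscuous traffic
esxcli network vswitch standard policy security get -v vSwitch0
```

If the capture on the second host shows nothing, the frames are being dropped between the switches (IGMP snooping enabled without an IGMP querier is a common cause); if they arrive on the uplink but not in the guest, the problem is on the ESX side.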

Thanks in advance,





Cisco Nexus 9000 Series Fabric Switches ACI Mode Fabric Infrastructure VLAN Unauthorized Access Vulnerability

A vulnerability in the fabric infrastructure VLAN connection establishment of the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode Switch Software could allow an unauthenticated, adjacent attacker to bypass security validations and connect an unauthorized server to the infrastructure VLAN.

The vulnerability is due to insufficient security requirements during the Link Layer Discovery Protocol (LLDP) setup phase of the infrastructure VLAN. An attacker could exploit this vulnerability by sending a malicious LLDP packet on the adjacent subnet to the Cisco Nexus 9000 Series Switch in ACI mode. A successful exploit could allow the attacker to connect an unauthorized server to the infrastructure VLAN, which is highly privileged. With a connection to the infrastructure VLAN, the attacker can make unauthorized connections to Cisco Application Policy Infrastructure Controller (APIC) services or join other host endpoints.

Cisco has released software updates that address this vulnerability. There are workarounds that address this vulnerability.

This advisory is available at the following link:

Security Impact Rating: High

CVE: CVE-2019-1890



arp flux problem

I need a solution

Recently we had the following problem:

when a connection is established from a client on the same network segment (subnet) as the outbound interface of the SBG to the inbound interface, the response comes out of the outbound interface using the IP address of the inbound interface. This is a known Linux behavior called “ARP flux”, and its source is that “The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load-balancing, does this behavior cause problems.”

There should be a way to overcome this by being able to control arp_filter, arp_announce and arp_ignore.
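For reference, on a stock Linux box the ARP flux behavior is tamed with exactly the kernel sysctls named above (whether SGOS exposes them is another question). A sketch of the usual strictness settings, in /etc/sysctl.conf form:

```
# Reply to an ARP request only on the interface that owns the target IP
net.ipv4.conf.all.arp_filter = 1
# When sending ARP requests, prefer a source address local to the outgoing interface
net.ipv4.conf.all.arp_announce = 2
# Answer only if the target IP is configured on the receiving interface
net.ipv4.conf.all.arp_ignore = 1
```

Apply with `sysctl -p`, or set the equivalents under `net.ipv4.conf.<interface>.` to restrict the behavior to a single interface.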




SEP blocks NIC Teaming in Server 2019

I need a solution

Recently I installed a fresh copy of Windows Server 2019 (OS Build 17763.107) on my IBM System x3650 M5 machine with 4 Broadcom NetXtreme Gigabit adapters. As soon as I created a NIC team with the LACP option (same on the switch side) and installed SEP version 14.2.3335.1000 for WIN64BIT, I got disconnected after a restart. Further investigation showed that the NICs individually looked fine, but the teamed NIC interface was crossed out as if the network cable was unplugged.

I upgraded drivers from Lenovo, installed cumulative updates for Windows, and ran the Symantec troubleshooter (which found zero problems related to the NIC), but nothing seemed to work.

Symantec support suggested that some rule was blocking traffic. When we removed the “block any any” traffic rule from the firewall rules, the teamed NIC came up. The same happened when we simply disabled the firewall module.

I had Server 2012 R2 installed on this machine prior to 2019, and it never had such a problem. A couple of years ago I tried to upgrade it to 2016, but I encountered the same “cable unplugged” problem with NIC teaming and didn’t troubleshoot it much, since it was only for evaluation purposes.

Any ideas? Maybe any of you encountered the same problem and more importantly: solved it without just uninstalling SEP for good? 😀





Best Practices for Configuring Provisioning Services Server on a Network

This article provides best practices when configuring Citrix Provisioning, formerly Citrix Provisioning Server, on a network. Use these best practices when troubleshooting issues such as slow performance, image build failures, lost connections to the streaming server, or excessive retries from the target device.

Disabling Spanning Tree or Enabling PortFast

With Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol (RSTP), ports are placed into a blocking state while the switch transmits Bridge Protocol Data Units (BPDUs) and listens to make sure the ports are not part of a loop.

The amount of time it takes to complete this convergence process depends on the size of the switched network, which might allow the Pre-boot Execution Environment (PXE) to time out, preventing the machine from getting an IP address.

Note: This does not apply after the OS is loaded.

To resolve this issue, disable STP on edge ports connected to clients, or enable PortFast or Fast Link, depending on the managed switch brand. The feature name varies by switch manufacturer:

  • PortFast or STP Fast Link

  • Spanning Tree FastLink

  • Fast Port

  • Fast Start
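On Cisco IOS switches, for example, the edge-port setting looks like the following (the interface name is illustrative; other vendors use the option names listed above):

```
! Illustrative Cisco IOS edge-port configuration
interface GigabitEthernet1/0/10
 spanning-tree portfast
!
! Or enable it globally for all access (non-trunk) ports
spanning-tree portfast default
```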

Auto Negotiation

Auto negotiation requires a network device and its switch to negotiate a speed before communication begins. This can cause long starting times and PXE timeouts, especially when starting multiple target devices with different NIC speeds. Citrix recommends hard coding the speed and duplex of all Provisioning Server ports (server and client) on the NIC and on the switch.
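As a sketch of what hard coding looks like on a Cisco IOS switch port (interface and speed values are illustrative; mirror the same speed/duplex settings in the NIC driver properties on the Windows side):

```
! Illustrative: pin speed and duplex instead of auto-negotiating
interface GigabitEthernet1/0/11
 speed 1000
 duplex full
```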

Stream Service Isolation

New advancements in network infrastructure, such as 10 Gb networking, may not require the stream service to be isolated from other traffic. If security is of primary concern, Citrix recommends isolating or segmenting the PVS stream traffic from other production traffic. However, in some cases, isolating the stream traffic can lead to a more complicated networking configuration and actually decrease network performance. For more information on whether the streaming traffic should be isolated, refer to the following article:

Is Isolating the PVS Streaming Traffic Really a Best Practice?

Firewall and Server to Server Communication Ports

Open the following ports in both directions:

  • UDP 6892 and 6904 (For Soap to Soap communication – MAPI and IPC)

  • UDP 6905 (For Soap to Stream Process Manager communication)

  • UDP 6894 (For Soap to Stream Service communication)

  • UDP 6898 (For Soap to Mgmt Daemon communication)

  • UDP 6895 (For Inventory to Inventory communication)

  • UDP 6903 (For Notifier to Notifier Communication)

Note: DisableTaskOffload is still required.
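On a Windows firewall between the servers, the ports above could be opened with rules along these lines (the rule names are arbitrary; the registry value is the documented DisableTaskOffload setting):

```
rem Illustrative inbound rules for the PVS server-to-server UDP ports
netsh advfirewall firewall add rule name="PVS Soap-Soap" dir=in action=allow protocol=UDP localport=6892,6904
netsh advfirewall firewall add rule name="PVS Soap-SPM" dir=in action=allow protocol=UDP localport=6905
netsh advfirewall firewall add rule name="PVS Soap-Stream" dir=in action=allow protocol=UDP localport=6894
netsh advfirewall firewall add rule name="PVS Soap-MgmtDaemon" dir=in action=allow protocol=UDP localport=6898
netsh advfirewall firewall add rule name="PVS Inventory" dir=in action=allow protocol=UDP localport=6895
netsh advfirewall firewall add rule name="PVS Notifier" dir=in action=allow protocol=UDP localport=6903

rem DisableTaskOffload, still required
reg add HKLM\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters /v DisableTaskOffload /t REG_DWORD /d 1
```

Repeat the rules with `dir=out` if outbound filtering is enabled, since the ports must be open in both directions.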



Adding a second endpoint server fails – DLP 15.5

I need a solution

I am testing version 15.5 and I’d like to add another endpoint server, but its status in the Enforce server is “Unknown”.

The single-tier server was created on a W2008 machine, and the second endpoint server is on a W10 computer.

Additionally, the W2008 server is in one VLAN and the W10 computer in another.

What could the problem be? Do I need to do some additional configuration?





Cisco Nexus 5600 and 6000 Series Switches Fibre Channel over Ethernet Denial of Service Vulnerability

A vulnerability in the Fibre Channel over Ethernet (FCoE) protocol implementation in Cisco NX-OS Software could allow an unauthenticated, adjacent attacker to cause a denial of service (DoS) condition on an affected device.

The vulnerability is due to an incorrect allocation of an internal interface index. An adjacent attacker with the ability to submit a crafted FCoE packet that crosses affected interfaces could trigger this vulnerability. A successful exploit could allow the attacker to cause a packet loop and high throughput on the affected interfaces, resulting in a DoS condition.

Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:

Security Impact Rating: Medium

CVE: CVE-2019-1595



Isilon: If the SmartConnect Service IP (SSIP) is assigned to an aggregate interface, the IP address may go missing under certain conditions, or move to another node if one of the lagg ports is shut down.

Article Number: 519890 Article Version: 13 Article Type: Break Fix

Applies to: Isilon, Isilon OneFS

The SmartConnect SSIP or network connectivity on a node can be disrupted if a link aggregation interface in LACP mode is configured and one of the port members of the lagg interface stops participating in the LACP aggregation.

The issue happens when a node is configured with any of the link aggregation interfaces:



And one of its port members is not participating in the lagg interface:


ether 00:07:43:09:3c:77

inet6 fe80::207:43ff:fe09:3c77%lagg0 prefixlen 64 scopeid 0x8 zone 1

inet 10.25.58.xx netmask 0xffffff00 broadcast zone 1


media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

laggport: cxgb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

>> laggport: cxgb1 flags=0<>

This causes OneFS to internally set the link aggregation interface to a ‘No Carrier’ status, due to a bug in the network manager software (Flexnet):

# isi network interface list

LNN  Name          Status      Owners                       IP Addresses
------------------------------------------------------------------------
1    10gige-1      No Carrier  -                            -
1    10gige-2      Up          -                            -
1    10gige-agg-1  No Carrier  groupnet0.subnet10g.pool10g

Possible failures causing the issue:

  1. Failed switch port
  2. Incorrect LACP configuration on the switch port
  3. Bad cable/SFP, or other physical issue
  4. The switch connected to a port failed or was rebooted
  5. BXE driver bug reporting a port state that is not full duplex (KB 511208)

Failures 1 to 4 are external to the cluster, and the issue should go away as soon as they are fixed. Failure 5 can be a persistent failure induced by a known OneFS BXE driver bug (KB 511208).

  1. If the node has the lowest node ID in the pool and the SmartConnect SSIP is configured there, then:
    1. If failure 1, 2, or 3 happens, the SSIP will be moved to the next lowest node ID that is clear of any failure.
    2. If failure 4 is present, the SSIP will not be available on any node, and DU (data unavailability) is expected until the workaround is implemented, the patch is installed, or the switch is fixed or becomes available again after a reboot.
    3. If failure 5 is present:
      1. If only one port has failed, the SSIP will move to the next available lowest node ID not affected by the issue.
      2. [DU] If all nodes in the cluster are BXE nodes and all are affected by the bug, the SSIP will not be available; expect DU until the workaround or patch is applied.
  2. If link aggregation in LACP mode is configured in a subnet pool whose defined gateway is the default route on the node, then:
    1. If the issue happens while the node is running and the default route is already set, the default route remains configured and available, and connectivity for already-connected clients should keep working.
    2. [DU] If the node is rebooted with any of the persistent failures present, then after it comes back up the default route will not be available, causing DU until the external issue is fixed, the workaround is applied, or the patch is installed.

If any of the described failures is present during an upgrade, then after the rolling reboot a DU is expected, due to the case described in cause A->c->ii or cause B->b. A check must be made prior to the upgrade to verify that you are clear of any of the described failures.


Workaround to immediately restore the link aggregation interface if only one member port is persistently down (failed switch, failed cable/SFP, BXE bug, or other persistent issue):

Step 1:

Identify failed member port on link aggregation interface:

# ifconfig

lagg1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500


ether 00:0e:1e:58:20:70

inet6 fe80::20e:1eff:fe58:2070%lagg1 prefixlen 64 scopeid 0x8 zone 1

inet netmask 0xffff0000 broadcast zone 1


media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

>> laggport: bxe1 flags=0<>


Step 2:

Manually remove the port member with the command:

ifconfig lagg1 -laggport bxe1

The network should recover within 10-20 seconds of executing the command.

This change will be lost after a reboot.

After the external failure on the port has been identified and fixed, and the port is available again, reconfigure the port back into the link aggregation configuration with the command:

ifconfig lagg1 laggport bxe1
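To spot the failed member quickly, the flags field in the ifconfig output is enough: a healthy LACP member shows ACTIVE,COLLECTING,DISTRIBUTING, while the failed one shows flags=0<>. A small sketch against a saved copy of the output (the sample below is hypothetical):

```shell
# Save the lagg section of the ifconfig output (hypothetical sample shown)
cat > /tmp/ifconfig.out <<'EOF'
laggproto lacp lagghash l2,l3,l4
laggport: cxgb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: cxgb1 flags=0<>
EOF

# Print member ports that are configured but not participating (empty flags)
awk '/laggport:/ && /flags=0<>/ { print $2 }' /tmp/ifconfig.out
# prints: cxgb1
```

On a live node you would pipe `ifconfig lagg1` straight into the awk filter and feed the result to the `ifconfig lagg1 -laggport` removal command from Step 2.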

A permanent fix will be included in the following OneFS maintenance releases once they become available:

  • OneFS
  • OneFS

A roll-up patch is now available:

  • patch-226984 (bug 226984)
  • patch-226323 (bug 226323)

NOTE: This issue affects the following OneFS versions ONLY:

  • OneFS
  • OneFS
  • OneFS
  • OneFS

