Failover issues with SGOS on ESX and Cisco

I need a solution

Hi there,

I'm having an issue with failover between two virtualized Symantec (Blue Coat) proxies running on two ESX hosts in two datacenters connected with Cisco switches.

I can see the multicast traffic leaving the proxy and going out over the Cisco switches until the firewall blocks it. The packets should be delivered at L2 to the other switch and on to the other ESX host, where the second proxy is running.

But on the other host I don't see any incoming multicast traffic. Hence both proxies consider themselves responsible for the virtual IP, which causes problems with Skype etc.

Has anyone had such an issue before? On ESX we have already activated promiscuous mode for that VLAN/subnet, but that didn't change anything.

The hardware proxies in the same network do see the incoming multicast traffic from the virtual machines and behave accordingly. Since the virtual proxies don't receive any multicast traffic, each one always assumes the master role because it never sees updates from the other.

I could understand if there were an issue between the two Cisco switches so that multicast traffic is not forwarded from one to the other. My other idea is that there is a special setting on the ESX host I'm not aware of. Any ideas?
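In case it helps with the diagnosis, my working assumption is that IGMP snooping on the Cisco switches is dropping the failover multicast because there is no querier in that VLAN. A sketch of what I plan to check on the switch side (the VLAN number is a placeholder):

show ip igmp snooping vlan 100

show ip igmp snooping querier

If no querier shows up for that VLAN, enabling one (or disabling IGMP snooping for that VLAN as a test) should let the failover multicast reach the second switch. On the ESX side, besides promiscuous mode, the port group security settings "MAC address changes" and "Forged transmits" may also need to be set to Accept for a virtual IP that can move between hosts.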

Thanks in advance,

Manfred


Cisco Nexus 9000 Series Fabric Switches ACI Mode Fabric Infrastructure VLAN Unauthorized Access Vulnerability

A vulnerability in the fabric infrastructure VLAN connection establishment of the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode Switch Software could allow an unauthenticated, adjacent attacker to bypass security validations and connect an unauthorized server to the infrastructure VLAN.

The vulnerability is due to insufficient security requirements during the Link Layer Discovery Protocol (LLDP) setup phase of the infrastructure VLAN. An attacker could exploit this vulnerability by sending a malicious LLDP packet on the adjacent subnet to the Cisco Nexus 9000 Series Switch in ACI mode. A successful exploit could allow the attacker to connect an unauthorized server to the infrastructure VLAN, which is highly privileged. With a connection to the infrastructure VLAN, the attacker can make unauthorized connections to Cisco Application Policy Infrastructure Controller (APIC) services or join other host endpoints.

Cisco has released software updates that address this vulnerability. There are workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190703-n9kaci-bypass

Security Impact Rating: High

CVE: CVE-2019-1890


ARP flux problem

I need a solution

Recently we had the following problem:

When a connection is established from a client on the same network segment (subnet) as the outbound interface of the SBG to the inbound interface, the response comes out of the outbound interface using the IP address of the inbound interface. This is a known Linux problem called "ARP flux", and its source is that "The kernel can respond to ARP requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load-balancing does this behavior cause problems."

There should be a way to overcome this by being able to control arp_filter, arp_announce and arp_ignore.
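For reference, on a plain Linux host this behavior is controlled with per-interface sysctls; a minimal sketch of the usual anti-flux settings (whether the SBG exposes these is exactly what I am asking for) looks like this:

sysctl -w net.ipv4.conf.all.arp_filter=1    # answer ARP only if the kernel would route the target IP out of the receiving interface

sysctl -w net.ipv4.conf.all.arp_ignore=1    # reply only if the target IP address is configured on the incoming interface

sysctl -w net.ipv4.conf.all.arp_announce=2  # prefer a source address on the outgoing interface for ARP announcements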


SEP blocks NIC Teaming in Server 2019

I need a solution

Recently I installed a fresh copy of Windows Server 2019 (OS Build 17763.107) on my IBM System x3650 M5 machine with 4 Broadcom NetXtreme Gigabit adapters. As soon as I created NIC teaming with the LACP option (same on the switch side) and installed SEP version 14.2.3335.1000 for WIN64BIT, I got disconnected after a restart. Further investigation showed that the NICs individually looked fine, but the teamed NIC interface was crossed out as if the network cable was unplugged.
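For reference, assuming the built-in Windows NIC teaming (LBFO), the roughly equivalent PowerShell with placeholder NIC names would be:

New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic

Get-NetLbfoTeamMember    # shows per-member and team status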

I upgraded the drivers from Lenovo, installed cumulative updates for Windows, and ran the Symantec troubleshooter (which found zero problems related to the NICs), but nothing seems to work.

Symantec support suggested that some rule was blocking the traffic. When we removed the "block any any" rule from the firewall rules, the teamed NIC came up. The same happened when we simply disabled the firewall module.

I had Server 2012 R2 installed on this machine prior to 2019 and it never had this problem. A couple of years ago I tried to upgrade it to 2016, but I encountered the same "cable unplugged" problem with NIC teaming and didn't troubleshoot it much, since it was only for evaluation purposes.

Any ideas? Maybe some of you have encountered the same problem and, more importantly, solved it without just uninstalling SEP for good? 😀


Best Practices for Configuring Provisioning Services Server on a Network

This article provides best practices when configuring Citrix Provisioning, formerly Citrix Provisioning Server, on a network. Use these best practices when troubleshooting issues such as slow performance, image build failures, lost connections to the streaming server, or excessive retries from the target device.

Disabling Spanning Tree or Enabling PortFast

With Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol, ports are placed into a blocking state while the switch transmits Bridge Protocol Data Units (BPDUs) and listens to ensure the BPDUs are not in a loopback configuration.

The amount of time it takes to complete this convergence process depends on the size of the switched network, which might allow the Pre-boot Execution Environment (PXE) to time out, preventing the machine from getting an IP address.

Note: This does not apply after the OS is loaded.

To resolve this issue, disable STP on edge ports connected to clients, or enable PortFast or Fast Link, depending on the managed switch brand. Refer to the following table (a sample switch configuration follows it):

Switch Manufacturer    Fast Link Option Name

Cisco                  PortFast or STP Fast Link

Dell                   Spanning Tree FastLink

Foundry                Fast Port

3COM                   Fast Start
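As an example, enabling PortFast on a Cisco access port facing a target device looks roughly like this (the interface name is a placeholder; newer IOS versions use "spanning-tree portfast edge"):

interface GigabitEthernet1/0/10

 spanning-tree portfast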

Auto Negotiation

Auto negotiation requires network devices and their switch to negotiate a speed before communication begins. This can cause long start times and PXE timeouts, especially when starting multiple target devices with different NIC speeds. Citrix recommends hard coding the speed and duplex of all Provisioning Server ports (server and client), both on the NIC and on the switch.
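For example, hard coding a Cisco switch port to 1 Gb full duplex, to match the setting on the connected NIC, looks roughly like this (the interface name and speed are placeholders for your environment):

interface GigabitEthernet1/0/11

 speed 1000

 duplex full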

Stream Service Isolation

With newer network infrastructure, such as 10 Gb networking, isolating the stream service from other traffic may no longer be necessary. If security is the primary concern, Citrix recommends isolating or segmenting the PVS stream traffic from other production traffic. However, in some cases, isolating the stream traffic can lead to a more complicated networking configuration and actually decrease network performance. For more information on whether the streaming traffic should be isolated, refer to the following article:

Is Isolating the PVS Streaming Traffic Really a Best Practice?

Firewall and Server to Server Communication Ports

Open the following ports in both directions:

  • UDP 6892 and 6904 (For Soap to Soap communication – MAPI and IPC)

  • UDP 6905 (For Soap to Stream Process Manager communication)

  • UDP 6894 (For Soap to Stream Service communication)

  • UDP 6898 (For Soap to Mgmt Daemon communication)

  • UDP 6895 (For Inventory to Inventory communication)

  • UDP 6903 (For Notifier to Notifier Communication)

Note: DisableTaskOffload is still required.
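As an illustration, the UDP ports listed above and the DisableTaskOffload value can be set from an elevated command prompt on the Provisioning Server roughly as follows (the rule name is arbitrary; verify the port list against your PVS version):

netsh advfirewall firewall add rule name="PVS server-to-server UDP" dir=in action=allow protocol=UDP localport=6892,6894,6895,6898,6903,6904,6905

reg add HKLM\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters /v DisableTaskOffload /t REG_DWORD /d 1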


Adding a second endpoint server fails – DLP 15.5

I need a solution

I am testing version 15.5 and I would like to add another endpoint server, but its status in the Enforce server is "Unknown".

The single-tier server was created on a Windows Server 2008 machine and the second endpoint server on a Windows 10 computer.

Additionally, the Windows Server 2008 machine is in VLAN 10.99.220.xxx and the Windows 10 machine is in VLAN 10.99.116.xxx.

What could the problem be? Do I need to do some additional configuration?
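In case it matters, a quick reachability check from the Enforce server to the new endpoint server could look like this (assuming the default detection server port 8100; adjust it if a different port was chosen when the server was registered):

Test-NetConnection 10.99.116.xxx -Port 8100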


Cisco Nexus 5600 and 6000 Series Switches Fibre Channel over Ethernet Denial of Service Vulnerability

A vulnerability in the Fibre Channel over Ethernet (FCoE) protocol implementation in Cisco NX-OS Software could allow an unauthenticated, adjacent attacker to cause a denial of service (DoS) condition on an affected device.

The vulnerability is due to an incorrect allocation of an internal interface index. An adjacent attacker with the ability to submit a crafted FCoE packet that crosses affected interfaces could trigger this vulnerability. A successful exploit could allow the attacker to cause a packet loop and high throughput on the affected interfaces, resulting in a DoS condition.

Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190306-nexus-fbr-dos

Security Impact Rating: Medium

CVE: CVE-2019-1595


Isilon: If the SmartConnect Service IP (SSIP) is assigned to an aggregate interface, the IP address may go missing under certain conditions or move to another node if one of the lagg ports is shut down.

Article Number: 519890 Article Version: 13 Article Type: Break Fix



Isilon, Isilon OneFS 8.0.0.6, Isilon OneFS 8.0.1.2, Isilon OneFS 8.1.0.2

The SmartConnect SSIP or network connectivity on a node can be disrupted if a link aggregation interface in LACP mode is configured and one of the port members in the lagg interface stops participating in the LACP aggregation.

The issue happens when a node is configured with either of the following link aggregation interfaces:

10gige-agg-1

ext-agg-1

and one of its port members is not participating in the lagg interface:

lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>

ether 00:07:43:09:3c:77

inet6 fe80::207:43ff:fe09:3c77%lagg0 prefixlen 64 scopeid 0x8 zone 1

inet 10.25.58.xx netmask 0xffffff00 broadcast 10.25.58.xxx zone 1

nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

laggport: cxgb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

>> laggport: cxgb1 flags=0<>

This causes OneFS to internally set the link aggregation interface to 'No Carrier' status, due to a bug in the network manager software (Flexnet):

# isi network interface list

LNN Name Status Owners IP Addresses

————————————————————————–

1 10gige-1 No Carrier – –

1 10gige-2 Up – –

1 10gige-agg-1 No Carrier groupnet0.subnet10g.pool10g 10.25.58.46

Possible failures causing the issue:

  1. Failed switch port
  2. Incorrect LACP configuration at switch port
  3. Bad cable/SFP, or other physical issue
  4. A switch connected to a port failed or was rebooted
  5. BXE driver bug reporting not full duplex in a port state (KB511208)

Failures 1 to 4 are external to the cluster, and the issue should go away as soon as they are fixed. Failure 5 can be a persistent failure induced by a known OneFS BXE bug (KB 511208).

  A. If the node is the lowest node ID in the pool, and the SmartConnect SSIP is configured there, then:
    a. If failure 1, 2, or 3 happens, the SSIP is moved to the next lowest node ID that is clear of any failure.
    b. If failure 4 is present, the SSIP will not be available on any node, and DU (data unavailability) is expected until the workaround is implemented, the patch is installed, or the switch is fixed or becomes available again after a reboot.
    c. If failure 5 is present:
      i. If only one port has failed, the SSIP moves to the next available lowest node ID not affected by the issue.
      ii. [DU] If all nodes in the cluster are BXE nodes and all are affected by the bug, the SSIP will not be available; expect DU until the workaround or patch is applied.
  B. If the link aggregation in LACP mode is configured in a subnet/pool whose defined gateway is the default route on the node, then:
    a. If the issue happens while the node is running and the default route is already set, the default route remains configured and available, and connectivity for already connected clients should continue working.
    b. [DU] If the node is rebooted with any of the persistent failures, then after it comes back up the default route will not be available, causing DU until the external issue is fixed, the workaround is applied, or the patch is installed.

If any of the failures is present during an upgrade to 8.0.0.6 or 8.1.0.2, then after the rolling reboot a DU is expected, due to the case described in A->c->ii or B->b above. A check must be made prior to the upgrade to verify that you are clear of any of the described failures.



Workaround


Workaround to immediately restore the link aggregation interface when only one member port is persistently down (failed switch, failed cable/SFP, BXE bug, or other persistent issue)

Step 1:

Identify the failed member port on the link aggregation interface:

# ifconfig

lagg1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500

options=507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO>

ether 00:0e:1e:58:20:70

inet6 fe80::20e:1eff:fe58:2070%lagg1 prefixlen 64 scopeid 0x8 zone 1

inet 172.16.240.xxx netmask 0xffff0000 broadcast 172.16.255.xxx zone 1

nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

>> laggport: bxe1 flags=0<>

laggport: bxe0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>


Step 2:

Manually remove the failed port member with the command:

ifconfig lagg1 -laggport bxe1

The network should recover within 10-20 seconds of executing the command.

This change will be lost after a reboot.

After the external failure on the port has been identified and fixed, and the port is available again, reconfigure the port back into the link aggregation configuration with the command:

ifconfig lagg1 laggport bxe1
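To confirm the recovery, re-check the same two views used above; both lagg ports should again show the ACTIVE,COLLECTING,DISTRIBUTING flags, and the aggregation interface should report Up with its pool IP addresses:

# ifconfig lagg1

# isi network interface list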

A permanent fix will be included in the following OneFS maintenance releases once they become available:

  • OneFS 8.0.0.7
  • OneFS 8.1.0.4

A roll-up patch is now available for:

8.0.0.6 (bug 226984) – patch-226984

8.1.0.2 (bug 226323) – patch-226323

NOTE: This issue affects the following OneFS versions ONLY:

  • OneFS 8.0.0.6
  • OneFS 8.0.1.2
  • OneFS 8.1.0.2
  • OneFS 8.1.1.1


Where does a Citrix ADC appliance fit in the network?

A Citrix ADC appliance resides between the clients and the servers, so that client requests and server responses pass through it. In a typical installation, virtual servers configured on the appliance provide connection points that clients use to access the applications behind the appliance. In this case, the appliance owns public IP addresses that are associated with its virtual servers, while the real servers are isolated in a private network. It is also possible to operate the appliance in a transparent mode as an L2 bridge or L3 router, or even to combine aspects of these and other modes.
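For illustration, a minimal configuration of this kind from the ADC command line might look like the following sketch, where the names and IP addresses are placeholders: 203.0.113.10 is the public VIP that clients connect to, and 10.0.0.10 is a back-end server on the private network.

add service svc_web1 10.0.0.10 HTTP 80

add lb vserver vs_web HTTP 203.0.113.10 80

bind lb vserver vs_web svc_web1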

Physical deployment modes

A Citrix ADC appliance logically residing between clients and servers can be deployed in either of two physical modes: inline and one-arm. In inline mode, multiple network interfaces are connected to different Ethernet segments, and the appliance is placed between the clients and the servers. The appliance has a separate network interface to each client network and a separate network interface to each server network. The appliance and the servers can exist on different subnets in this configuration. It is possible for the servers to be in a public network and the clients to directly access the servers through the appliance, with the appliance transparently applying the L4-L7 features. Usually, virtual servers (described later) are configured to provide an abstraction of the real servers. The following figure shows a typical inline deployment.

Figure 1. Inline Deployment


In one-arm mode, only one network interface of the appliance is connected to an Ethernet segment. The appliance in this case does not isolate the client and server sides of the network, but provides access to applications through configured virtual servers. One-arm mode can simplify network changes needed for Citrix ADC installation in some environments.

For examples of inline (two-arm) and one-arm deployment, see “Understanding Common Network Topologies.”

Citrix ADC as an L2 device

A Citrix ADC appliance functioning as an L2 device is said to operate in L2 mode. In L2 mode, the ADC appliance forwards packets between network interfaces when all of the following conditions are met:

  • The packets are destined to another device’s media access control (MAC) address.
  • The destination MAC address is on a different network interface.
  • The network interface is a member of the same virtual LAN (VLAN).

By default, all network interfaces are members of a pre-defined VLAN, VLAN 1. Address Resolution Protocol (ARP) requests and responses are forwarded to all network interfaces that are members of the same VLAN. To avoid bridging loops, L2 mode must be disabled if another L2 device is working in parallel with the Citrix ADC appliance.
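For example, L2 mode can be checked and toggled from the command line as follows (a sketch):

show ns mode

disable ns mode l2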

For information about how the L2 and L3 modes interact, see Packet forwarding modes.

For information about configuring L2 mode, see the “Enable and disable layer 2 mode” section in Packet forwarding modes.

Citrix ADC as a packet forwarding device

A Citrix ADC appliance can function as a packet forwarding device, and this mode of operation is called L3 mode. With L3 mode enabled, the appliance forwards any received unicast packets that are destined for an IP address that does not belong to the appliance, if there is a route to the destination. The appliance can also route packets between VLANs.
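Similarly, L3 mode is toggled from the command line; for example:

enable ns mode l3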

In both L2 and L3 modes of operation, the appliance generally drops the following packets:

  • Multicast frames
  • Unknown protocol frames destined for an appliance’s MAC address (non-IP and non-ARP)
  • Spanning Tree Protocol frames (unless BridgeBPDUs is ON)

For information about how the L2 and L3 modes interact, see Packet forwarding modes.

For information about configuring the L3 mode, see Packet forwarding modes.
