Failover issues with SGOS on ESX and Cisco

I need a solution

Hi there,

I’m having an issue with failover between two virtualized Symantec (Blue Coat) proxies on two ESX hosts in two datacenters connected by Cisco switches.

I can see the multicast traffic leaving the proxy and travelling out over the Cisco switches until the firewall blocks it. The packets should be delivered at L2 to the other switch so they reach the second ESX host and the proxy running there.

But on the other host I don’t see any incoming multicast traffic. Hence both proxies consider themselves responsible for the virtual IP, which causes problems with Skype and other applications.

Has anyone had such an issue before? On ESX we have already activated promiscuous mode for that VLAN/subnet, but that did not change anything.

The hardware proxies in the same network do see the incoming multicast traffic from the virtual machines and behave accordingly. Since the virtual proxies do not receive any multicast traffic, each always assumes it is the master, because it never sees updates from its peer.

I could understand if there were an issue between the two Cisco switches, so that multicast traffic is not forwarded from one to the other. Another idea is that there is a special setting on the ESX machines I’m not aware of. Any ideas?
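For reference, this is roughly what I plan to check on the switch side, since IGMP snooping without a querier in the VLAN can stop multicast from being forwarded from one switch to the other. This is only a sketch assuming Cisco IOS syntax; VLAN 220 is just a placeholder for our proxy VLAN:

show ip igmp snooping vlan 220
show ip igmp snooping groups vlan 220

If no querier or mrouter port shows up, enabling a snooping querier would be one thing to test:

conf t
ip igmp snooping querier
end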

Thanks in advance,

Manfred



Cisco Nexus 9000 Series Fabric Switches ACI Mode Fabric Infrastructure VLAN Unauthorized Access Vulnerability

A vulnerability in the fabric infrastructure VLAN connection establishment of the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode Switch Software could allow an unauthenticated, adjacent attacker to bypass security validations and connect an unauthorized server to the infrastructure VLAN.

The vulnerability is due to insufficient security requirements during the Link Layer Discovery Protocol (LLDP) setup phase of the infrastructure VLAN. An attacker could exploit this vulnerability by sending a malicious LLDP packet on the adjacent subnet to the Cisco Nexus 9000 Series Switch in ACI mode. A successful exploit could allow the attacker to connect an unauthorized server to the infrastructure VLAN, which is highly privileged. With a connection to the infrastructure VLAN, the attacker can make unauthorized connections to Cisco Application Policy Infrastructure Controller (APIC) services or join other host endpoints.

Cisco has released software updates that address this vulnerability. There are workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190703-n9kaci-bypass

Security Impact Rating: High

CVE: CVE-2019-1890


NetScaler Networking and VLAN Best Practices

The NetScaler uses VLANs to determine which interface should be used for which traffic. In addition, the NetScaler does not participate in Spanning Tree. Without the proper VLAN configuration, the NetScaler cannot determine which interface to use, and in these instances it can function more like a hub than a switch or router (it will try to use all interfaces for each conversation).

Symptoms of VLAN Misconfiguration

This type of issue can manifest itself in many forms, including performance issues, inability to establish connections, randomly disconnected sessions, and in severe situations, network disruptions seemingly unrelated to the NetScaler itself. The NetScaler may also report MAC Moves, muted interfaces, and/or management interface transmit or receive buffer overflows, depending on the exact nature of the interaction with your network.

MAC Moves: (counter nic_tot_bdg_mac_moved) This indicates that the NetScaler is using more than one interface to communicate with the same device (MAC address), because it could not properly determine which interface to use.

Muted interfaces: (counter nic_err_bdg_muted) This indicates that the NetScaler has detected that it is creating a routing loop due to VLAN configuration issues, and as such, it has shut down one or more of the offending interfaces in order to prevent a network outage.

Interface buffer overflows, typically referring to management interfaces: (counter nic_err_tx_overflow) This can be caused if too much traffic is transmitted over a management interface. Management interfaces on the NetScaler are not designed to handle large volumes of traffic; such volumes may result from network and VLAN misconfigurations that trigger the NetScaler to use a management interface for production data traffic. This often occurs because the NetScaler has no way to differentiate traffic on the VLAN / subnet of the NSIP (NSVLAN) from regular production traffic. It is highly recommended that the NSIP be on a separate VLAN and subnet from any production devices such as workstations and servers.

Orphan ACKs (counter tcp_err_orphan_ack): This counter indicates that the NetScaler received an ACK packet that it was not expecting, typically on a different interface than the one the ACK’d traffic originated from. This situation can be caused by VLAN misconfigurations where the NetScaler transmits on a different interface than the one the target device would typically use to communicate with the NetScaler (often seen in conjunction with MAC moves).

High rates of retransmissions / retransmit giveups: (counters: tcp_err_retransmit_giveups, tcp_err_7th_retransmit, various other retransmit counters) The NetScaler will attempt to retransmit a TCP packet a total of 7 times before it gives up and terminates the connection. While this situation can be caused by network conditions, it often occurs as a result of VLAN and interface misconfiguration.

High Availability Split Brain: Split Brain is a condition in which both HA nodes believe they are Primary, leading to duplicate IP addresses and loss of NetScaler functionality. It is caused when the two HA nodes cannot communicate with each other using HA heartbeats on UDP port 3003, from the NSIP, across any interface. This is typically caused by VLAN misconfigurations where the native VLANs on the NetScaler interfaces do not have connectivity between NetScalers.
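These counters can be inspected from the NetScaler CLI and shell. The following is only a minimal sketch; the counter names are those listed above, and the newnslog path may differ on your appliance:

> stat interface 1/1
> shell
# nsconmsg -K /var/nslog/newnslog -d current -g nic_tot_bdg_mac_moved
# nsconmsg -K /var/nslog/newnslog -d current -g nic_err_bdg_muted
# nsconmsg -K /var/nslog/newnslog -d current -g tcp_err_orphan_ack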

Best Practices for VLAN and Network Configurations

  1. Each subnet should be associated with a VLAN.

  2. More than one subnet can be associated with the same VLAN (depending on your network design).

  3. EACH VLAN SHOULD BE ASSOCIATED TO ONLY ONE INTERFACE (for purposes of this discussion, a LA Channel counts as a single interface).

  4. If you require more than one subnet to be associated with an interface, the subnets must be tagged (see the configuration sketch after this list).

  5. Contrary to popular belief, the Mac-Based-Forwarding (MBF) feature on the NetScaler is not designed to mitigate this type of issue. MBF is designed primarily for the DSR (Direct Server Return) mode of the NetScaler, which is rarely used in most environments (it is designed to allow traffic to purposely bypass the NetScaler on the return path from the backend servers). MBF may hide VLAN issues in some instances, but it should not be relied-upon to resolve this type of problem.

  6. Every interface on NetScaler requires a native VLAN (unlike Cisco, where native VLANs are optional), although the TagAll setting on an interface can be used so that no untagged traffic will leave the interface in question.

  7. The native VLAN can be tagged if necessary for your network design (this is the TagAll option for the interface).

  8. The VLAN for the subnet of your NetScaler’s NSIP is a special case. This is called the NSVLAN. The concepts are the same, but the commands to configure it are different, and changes to the NSVLAN require a reboot of the NetScaler to take effect. If you attempt to bind a VLAN to a SNIP that shares the same subnet as the NSIP, you will get “Operation not permitted”, because you have to use the NSVLAN commands instead. See CTX123172 for details. Also, on some firmware versions, you cannot set an NSVLAN if that VLAN number already exists via the “add VLAN” command; simply remove the VLAN and then set the NSVLAN again.

  9. HA Heartbeats always use the Native VLAN of the respective interface (optionally tagged if the TagAll option is set on the interface).

  10. There must be communication between at least one set of Native VLAN(s) on the two nodes of an HA pair (this can be direct or via a router). The native VLANs are used for HA heartbeats. If the NetScalers cannot communicate between native VLANs on any interface, this will lead to HA failovers and possibly a split-brain situation where both NetScalers think they are primary (leading to duplicate IP addresses, amongst other things).

  11. The NetScaler does not participate in spanning tree. As such, it is not possible to use spanning tree to provide for interface redundancy when using a NetScaler. Instead, use a form of Link Aggregation (LACP or manual LAG) for this purpose.

    Note: If you wish to have link aggregation between multiple physical switches, you must have the switches configured as a virtual switch, using a feature such as Cisco’s Switch Stack.

  12. The HA Synchronization and Command Propagation, by default, uses the NSIP/NSVLAN. To separate these out to a different VLAN, you can use the SyncVLAN option of the set HA node command.

  13. There is nothing built into the NetScaler’s default configuration that restricts a management interface (0/1 or 0/2) to management traffic only. This must be enforced by the end user through VLAN configuration. The management interfaces are not designed to handle data traffic, so your network design should take this into account. Management interfaces, located on the NetScaler motherboard, lack various offloading features such as CRC offload, larger packet buffers, and other optimizations, making them much less efficient at handling large amounts of traffic. To separate production data and management traffic, the NSIP should not be on the same subnet/VLAN as your data traffic.

  14. If you want a management interface to carry only management traffic, it is best practice for the default route to be on a subnet other than the subnet of the NSIP (NSVLAN). In many configurations, the default route is relied upon for workstation communications (in an internet scenario). If the default route is on the same subnet as the NSIP, such traffic will use the management interface, which can overload the interface.

  15. In addition to #10: on an SDX, the SVM, XenServer, and all NetScaler instance NSIPs should be on the same VLAN and subnet. There is no “backplane” inside the SDX that allows for communication between the SVM, XenServer, and instances. If they are not on the same VLAN/subnet/interface, traffic between them must leave the physical hardware, be routed on your network, and return. This can lead to obvious connectivity issues between the instances and the SVM and, as such, is not recommended. A common symptom of this is a yellow Instance State indicator in the SVM for the VPX instance in question and the inability to use the SVM to reconfigure that VPX instance.

  16. If some VLANs are bound to subnets and some are not, then during an HA failover, GARP packets will not be sent for any IP addresses on the subnets that are not bound to a VLAN. This can cause dropped connections and connectivity issues during HA failovers, because the NetScaler cannot notify the network of the change of MAC ownership of IP addresses on non-VMAC-configured NetScalers. A symptom of this is that during/after an HA failover, the ip_tot_floating_ip_err counter increments on the former primary NetScaler for more than a few seconds, indicating that the network did not receive or process the GARP packets and is continuing to transmit data to the new secondary NetScaler.
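As a rough illustration of points 4, 8, and 12 above, the following is a minimal configuration sketch. The interface numbers, VLAN IDs, and addresses are placeholders only and not a recommendation for any particular design.

Two tagged subnets on a single interface (point 4):

add ns ip 10.20.0.10 255.255.255.0 -type SNIP
add ns ip 10.30.0.10 255.255.255.0 -type SNIP
add vlan 20
add vlan 30
bind vlan 20 -ifnum 1/1 -tagged
bind vlan 30 -ifnum 1/1 -tagged
bind vlan 20 -IPAddress 10.20.0.10 255.255.255.0
bind vlan 30 -IPAddress 10.30.0.10 255.255.255.0

NSIP subnet on its own VLAN (point 8; takes effect only after a reboot):

set ns config -nsvlan 100 -ifnum 0/1

HA synchronization and propagation on a separate VLAN (point 12):

set ha node -syncvlan 200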


Report of Devices By OU

I need a solution

I run reports in Altiris that show the location of a device by VLAN, but the issue I run into is that our wireless network uses the same VLAN across multiple locations, so those devices end up in a catch-all location called “wireless”.

Now, I can run PowerShell against our AD and pull devices and OSes by OU, but that lacks some of the info I can get in the Altiris reports. Is there a simple report I can pull from Altiris that shows a device’s OU location that I’m not seeing, or has someone created one like that?



Adding a second endpoint server fails – DLP 15.5

I need a solution

I am testing version 15.5 and would like to add another endpoint server, but its status in the Enforce server is “Unknown”.

The single-tier server was created on a Windows Server 2008 machine, and the second endpoint server was installed on a Windows 10 computer.

Additionally, the W2008 machine is in VLAN 10.99.220.xxx and the W10 machine is in VLAN 10.99.116.xxx.

What could be the problem? Do I need to do some additional configuration?



Where does a Citrix ADC appliance fit in the network?

A Citrix ADC appliance resides between the clients and the servers, so that client requests and server responses pass through it. In a typical installation, virtual servers configured on the appliance provide connection points that clients use to access the applications behind the appliance. In this case, the appliance owns public IP addresses that are associated with its virtual servers, while the real servers are isolated in a private network. It is also possible to operate the appliance in a transparent mode as an L2 bridge or L3 router, or even to combine aspects of these and other modes.

Physical deployment modes

A Citrix ADC appliance logically residing between clients and servers can be deployed in either of two physical modes: inline and one-arm. In inline mode, multiple network interfaces are connected to different Ethernet segments, and the appliance is placed between the clients and the servers. The appliance has a separate network interface to each client network and a separate network interface to each server network. The appliance and the servers can exist on different subnets in this configuration. It is possible for the servers to be in a public network and the clients to directly access the servers through the appliance, with the appliance transparently applying the L4-L7 features. Usually, virtual servers (described later) are configured to provide an abstraction of the real servers. The following figure shows a typical inline deployment.

Figure 1. Inline Deployment


In one-arm mode, only one network interface of the appliance is connected to an Ethernet segment. The appliance in this case does not isolate the client and server sides of the network, but provides access to applications through configured virtual servers. One-arm mode can simplify network changes needed for Citrix ADC installation in some environments.

For examples of inline (two-arm) and one-arm deployment, see “Understanding Common Network Topologies.”

Citrix ADC as an L2 device

A Citrix ADC appliance functioning as an L2 device is said to operate in L2 mode. In L2 mode, the ADC appliance forwards packets between network interfaces when all of the following conditions are met:

  • The packets are destined to another device’s media access control (MAC) address.
  • The destination MAC address is on a different network interface.
  • The network interface is a member of the same virtual LAN (VLAN).

By default, all network interfaces are members of a pre-defined VLAN, VLAN 1. Address Resolution Protocol (ARP) requests and responses are forwarded to all network interfaces that are members of the same VLAN. To avoid bridging loops, L2 mode must be disabled if another L2 device is working in parallel with the Citrix ADC appliance.

For information about how the L2 and L3 modes interact, see Packet forwarding modes.

For information about configuring L2 mode, see the “Enable and disable layer 2 mode” section in Packet forwarding modes.

Citrix ADC as a packet forwarding device

A Citrix ADC appliance can function as a packet forwarding device, and this mode of operation is called L3 mode. With L3 mode enabled, the appliance forwards any received unicast packets that are destined for an IP address that does not belong to the appliance, if there is a route to the destination. The appliance can also route packets between VLANs.

In both modes of operation, L2 and L3, the appliance generally drops packets that are:

  • Multicast frames
  • Unknown protocol frames destined for an appliance’s MAC address (non-IP and non-ARP)
  • Spanning Tree protocol BPDUs (unless BridgeBPDUs is ON)

For information about how the L2 and L3 modes interact, see Packet forwarding modes.

For information about configuring the L3 mode, see Packet forwarding modes.
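For orientation, the L2 and L3 packet forwarding modes discussed above are toggled from the Citrix ADC command line. The following is a minimal sketch; note that L3 mode is enabled by default, while L2 mode is disabled by default:

> show ns mode
> enable ns mode L2
> disable ns mode L2
> enable ns mode L3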


High Availability Traffic/Heartbeats are not seen on NetScaler Tagged Channel Network Interfaces

If you are optimizing traffic on a multi-tenant server network with numerous VLANs while isolating management traffic, you might encounter a problem where heartbeat packets are not visible on all interfaces.

This is common on NetScaler high availability pairs that use Link Aggregation on EtherChannel switch ports (Cisco switches in this example). The following output demonstrates the issue:

> show node
1)      Node ID: 0
        IP: 10.187.125.21 (ns01)
        Node State: UP
        Master State: Primary
        Fail-Safe Mode: OFF
        INC State: DISABLED
        Sync State: ENABLED
        Propagation: ENABLED
        Enabled Interfaces : 0/1 LA/1
        Disabled Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/2
        HA MON ON Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/1 0/2 LA/1
        Interfaces on which heartbeats are not seen : LA/1
        Interfaces causing Partial Failure: None
        SSL Card Status: UP
        Hello Interval: 200 msecs
        Dead Interval: 3 secs
        Node in this Master State for: 0:21:42:50 (days:hrs:min:sec)
2)      Node ID: 1
        IP: 10.187.125.22
        Node State: UP
        Master State: Secondary
        Fail-Safe Mode: OFF
        INC State: DISABLED
        Sync State: SUCCESS
        Propagation: ENABLED
        Enabled Interfaces : 0/1 LA/1
        Disabled Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/2
        HA MON ON Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/1 0/2 LA/1
        Interfaces on which heartbeats are not seen : LA/1
        Interfaces causing Partial Failure: None
        SSL Card Status: UP
Local node information:
        Critical Interfaces: 0/1 LA/1
Done

In most situations the heartbeat packets are being dropped because of a VLAN tagging mismatch on the switch. Review the following article for additional information: CTX109843 – How to Configure a NetScaler Appliance Using Link Aggregation to Connect Pairs of Interfaces to the Cisco Switches
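As a rough sketch of what a matching switch-side configuration can look like, assuming Cisco IOS syntax, a two-port EtherChannel towards the NetScaler LA/1 channel, and VLAN 125 carrying the untagged NSIP/heartbeat traffic (all numbers below are placeholders):

interface Port-channel1
 switchport mode trunk
 switchport trunk native vlan 125
 switchport trunk allowed vlan 125,200,300
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 switchport trunk native vlan 125
 switchport trunk allowed vlan 125,200,300
 channel-group 1 mode active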


NetScaler Interface Tagging and Flow of High Availability Packets Examples

This article describes the flow of High Availability packets when various combinations of tagging are implemented in the NetScaler configuration. For additional information on HA traffic not seen on tagged channels refer to CTX201788.

Flow of High Availability Packets

Heartbeats, that is, High Availability packets, are always untagged unless the NSVLAN is configured using the set ns config -nsvlan command, or an interface is configured with the -trunk on option (NetScaler software release 9.2 and earlier) or the -tagall option (NetScaler software release 9.3 and later).

The following scenarios help in describing the flow of the High Availability packets:

Scenario 1

NSVLAN is default (VLAN 1)

interface 1/1 is bound to VLAN 2

Interface 1/2 is bound to VLAN 3

add vlan 2
add vlan 3
bind vlan 2 -ifnum 1/1
bind vlan 3 -ifnum 1/2


High Availability packets flow untagged on the 1/1 and 1/2 interfaces, on the native VLAN of those interfaces (2 and 3, respectively).

Scenario 2

NSVLAN is default (VLAN 1)

interface 1/1 is bound to VLAN 2, which is configured with -trunk ON

Interface 1/2 is bound to VLAN 3, which is configured with -trunk OFF (default)

set interface 1/1 -trunk ON
add vlan 2
add vlan 3
bind vlan 2 -ifnum 1/1
bind vlan 3 -ifnum 1/2


High Availability packets flow on 1/1 tagged with a VLAN ID of 2 (as do all other native packets on this interface), and untagged on the 1/2 interface.

Scenario 3

NSVLAN is VLAN10 (non default)

interface 1/1 is bound to VLAN 2

interface 1/2 is bound to VLAN 3

interface 1/3 is bound to VLAN 10

add vlan 2
add vlan 3
bind vlan 2 -ifnum 1/1
bind vlan 3 -ifnum 1/2
set ns config -nsvlan 10 -ifnum 1/3


High Availability packets flow as tagged (the default for the NSVLAN) on VLAN 10, interface 1/3 only, and do not flow on VLAN 2 or VLAN 3.
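To verify on which interfaces and VLANs the heartbeats actually flow in a given configuration, one option is to check the HA status and take a short capture from the appliance shell. This is only a sketch, assuming the default heartbeat port UDP 3003 and the nstcpdump.sh helper shipped on the appliance:

> show ha node
> shell
# nstcpdump.sh -e udp port 3003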


How to change the Netscaler IP VLAN (NSVLAN) – nsvlan command

Note: This article is only applicable to VPX and MPX instances; for a VPX on SDX, please refer to CTX138822.

By factory default, the NetScaler IP is on the native VLAN 1. The nsvlan command is used to change the VLAN of the NetScaler IP subnet.

The command “set ns config -nsvlan 100 -ifnum 1/1 1/2 -tagged YES” moves the NetScaler IP to VLAN 100 and the NSIP subnet is bound to interface 1/1 and 1/2 (only the specified interfaces will be bound to NSVLAN and the traffic will be tagged). A reboot of the NetScaler is required for the command to take effect.

After using this command, the NetScaler IP subnet is only available on the interface(s) specified with “ifnum”, and the traffic from the NetScaler IP subnet is tagged with the VLAN ID specified with nsvlan. In the preceding example, the NetScaler IP subnet is tagged with an 802.1Q VLAN ID of 100, and it is bound to (and only available on) interfaces 1/1 and 1/2. You must configure the attached switch interface(s) to tag and allow this same VLAN ID on the connected interface(s).

Note: Interfaces 1/1 and 1/2 and VLAN 100 are only an example; the interface numbers can vary depending on the NetScaler hardware, and the VLAN number can be anything between 2 and 4094.
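On the switch side, a matching configuration might look like the following. This is only a sketch assuming Cisco IOS syntax and the example interfaces and VLAN 100 from above:

interface GigabitEthernet1/0/1
 description NetScaler interface 1/1
 switchport mode trunk
 switchport trunk allowed vlan 100
interface GigabitEthernet1/0/2
 description NetScaler interface 1/2
 switchport mode trunk
 switchport trunk allowed vlan 100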

To check the configured nsvlan, execute the following command(s) in the NetScaler CLI:

> show ns runningConfig -withDefaults | grep “ns config”

> show ns runningConfig -withDefaults | grep nsvlan (this command returns no output if the nsvlan is still the default, VLAN 1)

Additional Resources

CTX110540 – FAQ: The -trunk Parameter of a NetScaler Interface

CTX138822 – How to configure the NetScaler VPX VLANs on a new NetScaler SDX Instance


Re: Fail Installation Vxrail Step 38

Hi All

I solved that case: I just set a native VLAN in my switch configuration.

I used :

– VLAN 0 for management and vCenter —> this means untagged traffic

– VLAN 106 for the virtual machine switch

VxRail has 4 ports:

port 0 : Management

Port 1 : Virtual Machine

Port 2 : VSAN

Port 3 : VMotion

Management uses port 0, and on my VxRail I used VLAN 0, which means untagged traffic. I need to connect to my existing network, so I set:

– Switchport mode trunk

– switchport trunk native vlan 130

VLAN 130 is just for management.

Virtual machines use port 1. As my vCenter is also a virtual machine, uses VLAN 0, and its traffic passes through port 1, I just applied the same configuration:

– switchport mode trunk

– switchport trunk native vlan 130

For the rest of the ports, I just set switchport mode trunk on my physical switch.
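To summarize, the switchport configuration ends up looking roughly like this. The interface names are placeholders for the uplinks of the four VxRail ports, VLAN 130 is management and VLAN 106 is the VM network as described above; adjust the allowed VLANs to your environment:

interface GigabitEthernet1/0/1
 description VxRail port 0 - Management (untagged on native VLAN 130)
 switchport mode trunk
 switchport trunk native vlan 130
interface GigabitEthernet1/0/2
 description VxRail port 1 - Virtual Machines / vCenter
 switchport mode trunk
 switchport trunk native vlan 130
 switchport trunk allowed vlan 106,130
interface GigabitEthernet1/0/3
 description VxRail port 2 - vSAN
 switchport mode trunk
interface GigabitEthernet1/0/4
 description VxRail port 3 - vMotion
 switchport mode trunk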
