Failover issues with SGOS on ESX and Cisco

Hi there,

I’m having an issue with failover on two virtualized Symantec (Blue Coat) proxies, running on two ESX hosts in two datacenters connected with Cisco switches.

I can see the multicast traffic leaving the proxy and going out into the world over the Cisco switches, until the firewall blocks it. The packets should be delivered at L2 to the other switch, so they reach the ESX host where the other proxy runs.

But on the other host I don’t see any incoming multicast traffic. As a result, both proxies consider themselves responsible for the virtual IP, which causes problems with Skype and other applications.

Has anyone seen such an issue before? On ESX we have already activated promiscuous mode for that VLAN/subnet, but that didn’t change anything.

The hardware proxies in the same network do see the multicast traffic coming in from the virtual machines and behave accordingly. Since the virtual proxies don’t receive any multicast traffic, each one always assumes it is the master, because the other appears not to be sending any updates.

I could understand if there were an issue between the two Cisco switches, such that multicast traffic is not forwarded from one to the other. Another idea is that there is a special setting on the ESX machine I’m not aware of. Any ideas?

Thanks in advance,

Manfred

Cisco Nexus 9000 Series Fabric Switches ACI Mode Fabric Infrastructure VLAN Unauthorized Access Vulnerability

A vulnerability in the fabric infrastructure VLAN connection establishment of the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode Switch Software could allow an unauthenticated, adjacent attacker to bypass security validations and connect an unauthorized server to the infrastructure VLAN.

The vulnerability is due to insufficient security requirements during the Link Layer Discovery Protocol (LLDP) setup phase of the infrastructure VLAN. An attacker could exploit this vulnerability by sending a malicious LLDP packet on the adjacent subnet to the Cisco Nexus 9000 Series Switch in ACI mode. A successful exploit could allow the attacker to connect an unauthorized server to the infrastructure VLAN, which is highly privileged. With a connection to the infrastructure VLAN, the attacker can make unauthorized connections to Cisco Application Policy Infrastructure Controller (APIC) services or join other host endpoints.

Cisco has released software updates that address this vulnerability. There are workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190703-n9kaci-bypass

Security Impact Rating: High

CVE: CVE-2019-1890

NetScaler Networking and VLAN Best Practices

The NetScaler uses VLANs to determine which interface should be used for which traffic. In addition, NetScaler does not participate in Spanning Tree. Without the proper VLAN configuration, the NetScaler is unable to determine which interface to use and it can function more like a HUB than a switch or router in these instances (it will try to use ALL interfaces for each conversation).

Symptoms of VLAN Misconfiguration

This type of issue can manifest itself in many forms, including performance issues, inability to establish connections, randomly disconnected sessions, and in severe situations, network disruptions seemingly unrelated to the NetScaler itself. The NetScaler may also report MAC Moves, muted interfaces, and/or management interface transmit or receive buffer overflows, depending on the exact nature of the interaction with your network.

MAC Moves: (counter nic_tot_bdg_mac_moved) This indicates that the NetScaler is using more than one interface to communicate with the same device (MAC address), because it could not properly determine which interface to use.

Muted interfaces: (counter nic_err_bdg_muted) This indicates that the NetScaler has detected that it is creating a routing loop due to VLAN configuration issues, and as such, it has shut down one or more of the offending interfaces in order to prevent a network outage.

Interface buffer overflows, typically referring to management interfaces: (counter nic_err_tx_overflow) This can be caused if too much traffic is being transmitted over a management interface. Management interfaces on the NetScaler are not designed to handle large volumes of traffic; network and VLAN misconfigurations can trigger the NetScaler to use a management interface for production data traffic. This often occurs because the NetScaler has no way to differentiate traffic on the VLAN / subnet of the NSIP (NSVLAN) from regular production traffic. It is highly recommended that the NSIP be on a separate VLAN and subnet from any production devices such as workstations and servers.

Orphan ACKs (counter tcp_err_orphan_ack): This counter indicates that the NetScaler received an ACK packet that it was not expecting, typically on a different interface than the one the ACK’d traffic originated from. This situation can be caused by VLAN misconfigurations where the NetScaler transmits on a different interface than the one the target device would typically use to communicate with the NetScaler (often seen in conjunction with MAC moves).

High rates of retransmissions / retransmit giveups: (counters: tcp_err_retransmit_giveups, tcp_err_7th_retransmit, various other retransmit counters) The NetScaler will attempt to retransmit a TCP packet a total of 7 times before it gives up and terminates the connection. While this situation can be caused by network conditions, it often occurs as a result of VLAN and interface misconfiguration.

High Availability Split Brain: Split Brain is a condition where both HA nodes believe they are Primary, leading to duplicate IP addresses and loss of NetScaler functionality. It is caused when the two HA nodes cannot communicate with each other using HA heartbeats on UDP port 3003, sent from the NSIP, across any interface. This is typically caused by VLAN misconfigurations where the native VLANs on the NetScaler interfaces do not have connectivity between the two NetScalers.
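As a hedged illustration (a sketch, not release-specific guidance; counter names and options can vary by firmware), the counters named above can be inspected from the NetScaler shell with the nsconmsg tool:

> shell
# Review MAC moves recorded in the current newnslog
nsconmsg -K /var/nslog/newnslog -d current -g nic_tot_bdg_mac_moved
# Watch muted-interface and orphan-ACK events live
nsconmsg -d current -g nic_err_bdg_muted
nsconmsg -d current -g tcp_err_orphan_ack

A steadily incrementing nic_tot_bdg_mac_moved or nic_err_bdg_muted while the problem is being reproduced is a strong hint that the VLAN layout, rather than the wider network, is at fault.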

Best Practices for VLAN and Network Configurations

  1. Each subnet should be associated with a VLAN.

  2. More than one subnet can be associated with the same VLAN (depending on your network design).

  3. EACH VLAN SHOULD BE ASSOCIATED TO ONLY ONE INTERFACE (for purposes of this discussion, a LA Channel counts as a single interface).

  4. If you require more than one subnet to be associated with an interface, the subnets must be tagged (see the configuration sketch after this list).

  5. Contrary to popular belief, the Mac-Based-Forwarding (MBF) feature on the NetScaler is not designed to mitigate this type of issue. MBF is designed primarily for the DSR (Direct Server Return) mode of the NetScaler, which is rarely used in most environments (it is designed to allow traffic to purposely bypass the NetScaler on the return path from the backend servers). MBF may hide VLAN issues in some instances, but it should not be relied upon to resolve this type of problem.

  6. Every interface on NetScaler requires a native VLAN (unlike Cisco, where native VLANs are optional), although the TagAll setting on an interface can be used so that no untagged traffic will leave the interface in question.

  7. The native VLAN can be tagged if necessary for your network design (this is the TagAll option for the interface).

  8. The VLAN for the subnet of your NetScaler’s NSIP is a special case. This is called the NSVLAN. The concepts are the same, but the commands to configure it are different, and changes to the NSVLAN require a reboot of the NetScaler to take effect. If you attempt to bind a VLAN to a SNIP that shares the same subnet as the NSIP, you will get “Operation not permitted.” This is because you have to use the NSVLAN commands instead. See CTX123172 for details. Also, on some firmware versions, you cannot set an NSVLAN if that VLAN number already exists via the “add VLAN” command. Simply remove the VLAN and then set the NSVLAN again.

  9. HA Heartbeats always use the Native VLAN of the respective interface (optionally tagged if the TagAll option is set on the interface).

  10. There must be communication between at least one set of Native VLAN(s) on the two nodes of an HA pair (this can be direct or via a router). The native VLANs are used for HA heartbeats. If the NetScalers cannot communicate between native VLANs on any interface, this will lead to HA failovers and possibly a split-brain situation where both NetScalers think they are primary (leading to duplicate IP addresses, amongst other things).

  11. The NetScaler does not participate in spanning tree. As such, it is not possible to use spanning tree to provide for interface redundancy when using a NetScaler. Instead, use a form of Link Aggregation (LACP or manual LAG) for this purpose.

    Note: If you wish to have link aggregation between multiple physical switches, you must have the switches configured as a virtual switch, using a feature such as Cisco’s Switch Stack.

  12. The HA Synchronization and Command Propagation, by default, uses the NSIP/NSVLAN. To separate these out to a different VLAN, you can use the SyncVLAN option of the set HA node command.

  13. Nothing in the NetScaler’s default configuration restricts a management interface (0/1 or 0/2) to management traffic only. This must be enforced by the end user through VLAN configuration. The management interfaces are not designed to handle data traffic, so your network design should take this into account. Management interfaces, contained on the NetScaler motherboard, lack various offloading features such as CRC offload, larger packet buffers, and other optimizations, making them much less efficient at handling large amounts of traffic. To separate production data and management traffic, the NSIP should not be on the same subnet/VLAN as your data traffic.

  14. If it is desired to use a management interface to carry management traffic, it is best practice that the default route be on a subnet other than the subnet of the NSIP (NSVLAN). In many configurations, the default route is relied upon for workstation communications (in an Internet scenario). If the default route is on the same subnet as the NSIP, such traffic will use the management interface, which can cause the interface to be overloaded.

  15. In addition to #10: on an SDX, the SVM, XenServer, and all NetScaler instance NSIPs should be on the same VLAN and subnet. There is no “backplane” inside the SDX that allows for communication between the SVM, XenServer, and instances. If they are not on the same VLAN/subnet/interface, traffic between them must leave the physical hardware, be routed on your network, and return. This can lead to obvious connectivity issues between the instances and the SVM and, as such, is not recommended. A common symptom is a yellow Instance State indicator in the SVM for the VPX instance in question and the inability to use the SVM to reconfigure that instance.

  16. In the event that some VLANs are bound to subnets and some are not, GARP packets will not be sent during an HA failover for any IP addresses on the subnets that are not bound to a VLAN. This can cause dropped connections and connectivity issues during HA failovers, as the NetScaler cannot notify the network of the change in MAC ownership of IP addresses on non-VMAC-configured NetScalers. A symptom of this is that, during or after an HA failover, the ip_tot_floating_ip_err counter increments on the former primary NetScaler for more than a few seconds, indicating that the network did not receive or process the GARP packets and is continuing to transmit data to the new secondary NetScaler.
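To make these practices concrete, the following is a minimal configuration sketch in the NetScaler CLI. All VLAN IDs, interface numbers, and addresses are illustrative assumptions, not recommendations; verify the exact syntax against your firmware’s documentation.

Create a VLAN per subnet, with each VLAN bound to exactly one interface, and carry the second subnet tagged (practices 1-4):

> add ns ip 10.0.20.10 255.255.255.0 -type SNIP
> add vlan 20
> bind vlan 20 -ifnum 1/1
> bind vlan 20 -IPAddress 10.0.20.10 255.255.255.0
> add ns ip 10.0.30.10 255.255.255.0 -type SNIP
> add vlan 30
> bind vlan 30 -ifnum 1/1 -tagged
> bind vlan 30 -IPAddress 10.0.30.10 255.255.255.0

Here VLAN 20 is the native (untagged) VLAN of interface 1/1, and VLAN 30 rides the same interface tagged.

Aggregate interfaces with LACP so that the channel counts as a single interface (practices 3 and 11):

> set interface 1/3 -lacpMode ACTIVE -lacpKey 1
> set interface 1/4 -lacpMode ACTIVE -lacpKey 1

This forms channel LA/1, to which VLANs are then bound exactly as to a physical interface.

Configure the NSVLAN for the subnet of the NSIP (practice 8); note that this requires a reboot to take effect:

> set ns config -nsvlan 100 -ifnum 0/1 -tagged NO
> save ns config
> reboot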

Report of Devices By OU

I run reports in Altiris that show the location of a device by VLAN, but the issue I run into is that our wireless network uses the same VLAN across multiple locations, so those devices end up in a catch-all location called wireless.

I can run PowerShell scripts against our AD and pull devices and OSes by OU, but that lacks some of the info I can get in the Altiris reports. Is there a simple report I can pull from Altiris that shows a device’s OU location that I’m not seeing, or has someone created one like that?

CVE-2013-4786 for SDX LOM vulnerability

Steps to mitigate CVE-2013-4786:

Below are the recommendations for the reported vulnerability CVE-2013-4786:

1. Set up SSL on the LOM port to encrypt your username and password.

2. Follow the NetScaler Secure Deployment Guide to isolate all management ports, including the BMC management port, on a management VLAN, as is industry best practice. This reduces the threat to internal employees with access to that VLAN; attackers coming from the Internet cannot get in. The NetScaler has three zones: Internet, Intranet, and Management. Once VLANs are set up, an external attacker would need to break through the NetScaler or another gateway to reach the BMC.

3. Use the latest BMC image for your platform to ensure RAKP+ is in use.

4. Security-conscious customers can easily set a random 16-character password using any number of free password generators. The trusted antivirus company Symantec/Norton has one on its SSL-encrypted website:

https://identitysafe.norton.com/password-generator/

5. Follow the NetScaler Secure Deployment Guide to set up RADIUS-based, centrally controlled user/password and role-based management, which allows quick network-wide changes to passwords, roles, and users. The RADIUS/Active Directory admin can set the passwords for the BMC roles, ensuring that a password generator is used and that passwords expire.

IPMI authentication is local and is separate from the network-based LDAP auth.

The only currently credible defense against breaking IPMI authentication, short of turning off the IPMI port (which is not currently possible), is having truly random 128-bit passwords. The computational capabilities of the LOM do not matter here, since the attacker performs the computation offline and is restricted only by the capabilities of their own computational cluster.

At this point, isolating/air-gapping LOM to a separate management VLAN and setting 16 character random passwords is a good way to prevent attacks.
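As a hedged aside, a random 16-character password of the kind recommended above can also be generated locally rather than through a website. For example, on any machine with OpenSSL installed (an assumption about your tooling), 12 random bytes encode to exactly 16 Base64 characters:

openssl rand -base64 12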

Adding a second endpoint server fails – DLP 15.5

I am testing version 15.5 and would like to add another endpoint server, but its status in the Enforce server is “Unknown”.

The single-tier server was created on a Windows Server 2008 machine, and the second endpoint server on a Windows 10 computer.

Additionally, the W2008 machine is in VLAN 10.99.220.xxx and the W10 machine is in VLAN 10.99.116.xxx.

What could the problem be? Do I need to do some additional configuration?

Where does a Citrix ADC appliance fit in the network?

A Citrix ADC appliance resides between the clients and the servers, so that client requests and server responses pass through it. In a typical installation, virtual servers configured on the appliance provide connection points that clients use to access the applications behind the appliance. In this case, the appliance owns public IP addresses that are associated with its virtual servers, while the real servers are isolated in a private network. It is also possible to operate the appliance in a transparent mode as an L2 bridge or L3 router, or even to combine aspects of these and other modes.

Physical deployment modes

A Citrix ADC appliance logically residing between clients and servers can be deployed in either of two physical modes: inline and one-arm. In inline mode, multiple network interfaces are connected to different Ethernet segments, and the appliance is placed between the clients and the servers. The appliance has a separate network interface to each client network and a separate network interface to each server network. The appliance and the servers can exist on different subnets in this configuration. It is possible for the servers to be in a public network and the clients to directly access the servers through the appliance, with the appliance transparently applying the L4-L7 features. Usually, virtual servers (described later) are configured to provide an abstraction of the real servers. The following figure shows a typical inline deployment.

Figure 1. Inline Deployment

In one-arm mode, only one network interface of the appliance is connected to an Ethernet segment. The appliance in this case does not isolate the client and server sides of the network, but provides access to applications through configured virtual servers. One-arm mode can simplify network changes needed for Citrix ADC installation in some environments.

For examples of inline (two-arm) and one-arm deployment, see “Understanding Common Network Topologies.”

Citrix ADC as an L2 device

A Citrix ADC appliance functioning as an L2 device is said to operate in L2 mode. In L2 mode, the ADC appliance forwards packets between network interfaces when all of the following conditions are met:

  • The packets are destined for another device’s media access control (MAC) address.
  • The destination MAC address is on a different network interface.
  • The network interface is a member of the same virtual LAN (VLAN).

By default, all network interfaces are members of a pre-defined VLAN, VLAN 1. Address Resolution Protocol (ARP) requests and responses are forwarded to all network interfaces that are members of the same VLAN. To avoid bridging loops, L2 mode must be disabled if another L2 device is working in parallel with the Citrix ADC appliance.
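As a minimal sketch (these are standard Citrix ADC CLI commands, but verify against your release’s documentation), L2 mode can be checked and toggled as follows:

> show ns mode
> disable ns mode L2
> enable ns mode L2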

For information about how the L2 and L3 modes interact, see Packet forwarding modes.

For information about configuring L2 mode, see the “Enable and disable layer 2 mode” section in Packet forwarding modes.

Citrix ADC as a packet forwarding device

A Citrix ADC appliance can function as a packet forwarding device, and this mode of operation is called L3 mode. With L3 mode enabled, the appliance forwards any received unicast packets that are destined for an IP address that does not belong to the appliance, if there is a route to the destination. The appliance can also route packets between VLANs.
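As a hedged sketch of the same idea (the addresses are illustrative assumptions), L3 mode is enabled the same way, and a static route tells the appliance how to forward to destinations beyond its connected subnets:

> enable ns mode L3
> add route 192.168.50.0 255.255.255.0 10.0.20.1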

In both modes of operation, L2 and L3, the appliance generally drops packets that are:

  • Multicast frames
  • Unknown protocol frames destined for an appliance’s MAC address (non-IP and non-ARP)
  • Spanning Tree protocol frames (unless BridgeBPDUs is ON)

For information about how the L2 and L3 modes interact, see Packet forwarding modes.

For information about configuring the L3 mode, see Packet forwarding modes.

Related:

  • No Related Posts

Difference between NIC Down and Down(PWR OFF) state on an ADC hardware appliance

Question:

For a physical interface on an MPX or SDX ADC appliance, what is the difference between the DOWN and DOWN(PWR OFF) states, as highlighted below?

Interface 1/5 (Gig Ethernet 10/100/1000 MBits) #1

flags=0x4000 <ENABLED, DOWN, down, autoneg, HEARTBEAT, 802.1q>

MTU=1500, native vlan=1, MAC=00:e0:ed:7a:1b:43, downtime 83h09m57s

Requested: media AUTO, speed AUTO, duplex AUTO, fctl OFF,

throughput 0

LLDP Mode: NONE, LR Priority: 1024

RX: Pkts(0) Bytes(0) Errs(0) Drops(0) Stalls(0)

TX: Pkts(0) Bytes(0) Errs(0) Drops(0) Stalls(0)

NIC: InDisc(0) OutDisc(0) Fctls(0) Stalls(0) Hangs(0) Muted(0)

Bandwidth thresholds are not set.

Interface 1/1 (Gig Ethernet 10/100/1000 MBits) #5

flags=0x4100 <disabled, DOWN(PWR OFF), down, autoneg, HEARTBEAT, 802.1q>

MTU=1500, native vlan=1, MAC=00:e0:ed:7a:1b:47, downtime 83h09m57s

Requested: media AUTO, speed AUTO, duplex AUTO, fctl OFF,

throughput 0

LLDP Mode: NONE, LR Priority: 1024

RX: Pkts(0) Bytes(0) Errs(0) Drops(0) Stalls(0)

TX: Pkts(0) Bytes(0) Errs(0) Drops(0) Stalls(0)

NIC: InDisc(0) OutDisc(0) Fctls(0) Stalls(0) Hangs(0) Muted(0)

Bandwidth thresholds are not set.

Answer:

If an interface is enabled but down, it is displayed in the DOWN state.

If an interface is disabled and down, it is displayed in the DOWN(PWR OFF) state, because disabling the interface also powers it off.
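As an illustrative sketch (standard ADC CLI), it is the administrative disable that moves an interface from DOWN to DOWN(PWR OFF); the state can be toggled and verified as follows:

> disable interface 1/1
> show interface 1/1
> enable interface 1/1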

Cisco Policy Suite Graphite Unauthenticated Read-Only Access Vulnerability

A vulnerability in the Graphite web interface of the Policy and Charging Rules Function (PCRF) of Cisco Policy Suite (CPS) could allow an unauthenticated, remote attacker to access the Graphite web interface. The attacker would need to have access to the internal VLAN where CPS is deployed.

The vulnerability is due to lack of authentication. An attacker could exploit this vulnerability by directly connecting to the Graphite web interface. An exploit could allow the attacker to access various statistics and Key Performance Indicators (KPIs) regarding the Cisco Policy Suite environment.

There are no workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190109-cps-graphite-access

Security Impact Rating: Medium

CVE: CVE-2018-15466

High Availability Traffic/Heartbeats are not seen on NetScaler Tagged Channel Network Interfaces

If you are optimizing traffic on a multi-tenant server network with numerous VLANs while isolating management traffic, you might encounter a problem where heartbeat packets are not visible on all interfaces.

This is common on NetScaler high availability pairs using link aggregation on EtherChannel switch ports (in this example, Cisco switches). The following output demonstrates the issue:

> show node
1) Node ID: 0
   IP: 10.187.125.21 (ns01)
   Node State: UP
   Master State: Primary
   Fail-Safe Mode: OFF
   INC State: DISABLED
   Sync State: ENABLED
   Propagation: ENABLED
   Enabled Interfaces : 0/1 LA/1
   Disabled Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/2
   HA MON ON Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/1 0/2 LA/1
   Interfaces on which heartbeats are not seen : LA/1
   Interfaces causing Partial Failure: None
   SSL Card Status: UP
   Hello Interval: 200 msecs
   Dead Interval: 3 secs
   Node in this Master State for: 0:21:42:50 (days:hrs:min:sec)
2) Node ID: 1
   IP: 10.187.125.22
   Node State: UP
   Master State: Secondary
   Fail-Safe Mode: OFF
   INC State: DISABLED
   Sync State: SUCCESS
   Propagation: ENABLED
   Enabled Interfaces : 0/1 LA/1
   Disabled Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/2
   HA MON ON Interfaces : 1/8 1/7 1/6 1/4 1/3 1/2 0/1 0/2 LA/1
   Interfaces on which heartbeats are not seen : LA/1
   Interfaces causing Partial Failure: None
   SSL Card Status: UP
Local node information:
   Critical Interfaces: 0/1 LA/1
Done

In most situations, the heartbeat packets are stopped by a VLAN tagging mismatch on the switch. Review the following article for additional information: CTX109843 – How to Configure a NetScaler Appliance Using Link Aggregation to Connect Pairs of Interfaces to the Cisco Switches
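As a hedged starting point for checking this (the channel number, VLAN IDs, and exact switch syntax below are assumptions for illustration), compare the channel’s native VLAN and tagging on the NetScaler with the trunk configuration on the switch:

On the NetScaler:

> show channel LA/1
> show vlan

On the Cisco switch:

# show interfaces port-channel 1 trunk
# show running-config interface port-channel 1

The native VLAN of the NetScaler channel must be allowed on the switch trunk (and untagged, unless TagAll is set on the channel); otherwise the UDP port 3003 heartbeats are silently dropped.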
