Pass-Through Authentication Fails when Using Standard Receiver on a PNAgent Services Site

Pass-through authentication for a Services site works only with Receiver Enterprise Edition, also known as the legacy PNAgent (PNA) client. There is a misconception that the Standard Receiver can be installed with command line switches that enable pass-through, and that it can then be used to connect to a Services site with pass-through authentication working. However, this does not work.
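For reference, the installation switches typically used in this attempt look something like the following hypothetical unattended install of Receiver for Windows (the single sign-on switches shown are the usual ones cited):

CitrixReceiver.exe /silent /includeSSON ENABLE_SSON=Yes

Even with single sign-on installed and enabled this way, connecting the Standard Receiver to a PNAgent Services site still does not give pass-through; the Enterprise (legacy PNA) client remains the requirement.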


Cisco IOS Software for Catalyst 2960-L Series Switches and Catalyst CDB-8P Switches 802.1X Authentication Bypass Vulnerability

A vulnerability in the 802.1X feature of Cisco Catalyst 2960-L Series Switches and Cisco Catalyst CDB-8P Switches could allow an unauthenticated, adjacent attacker to forward broadcast traffic before being authenticated on the port.

The vulnerability exists because broadcast traffic that is received on the 802.1X-enabled port is mishandled. An attacker could exploit this vulnerability by sending broadcast traffic on the port before being authenticated. A successful exploit could allow the attacker to send and receive broadcast traffic on the 802.1X-enabled port before authentication.
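For context, the affected feature is standard 802.1X port authentication; a port with it enabled typically looks something like the following generic Cisco IOS sketch (interface name and details are illustrative, not taken from the advisory):

interface GigabitEthernet0/1
 switchport mode access
 ! 802.1X decides whether this port forwards traffic
 authentication port-control auto
 dot1x pae authenticator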

Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-c2960L-DpWA9Re4

Security Impact Rating: Medium

CVE: CVE-2020-3231


Sophos Wireless Access Points: APX offline after initial configuration

Wireless Access Points going offline after initial configuration

This can be caused if you are using a PoE-enabled switch and have manually capped the power on the ports below the maximum power drawn by the access point in use. Below is a list of the APX access points and their maximum power ratings:

  • APX320 – Maximum Power: 11.5W
  • APX530 – Maximum Power: 16.7W
  • APX740 – Maximum Power: 22.4W

In addition to the maximum power ratings that may be configured on the PoE switch, the supported PoE standards are listed below:

  • APX320 – PoE Requirements: 802.3af
  • APX530 – PoE Requirements: 802.3at
  • APX740 – PoE Requirements: 802.3at

This article describes the steps to resolve the issue of access points failing to power up after the initial configuration is sent to them.

Applies to the following Sophos products and versions:

  • Sophos UTM
  • Sophos Firewall
  • Sophos Central Wireless

We recommend testing with a separate PoE injector to confirm whether the problem is the power supplied by the switch. If a separate PoE injector fixes the problem, investigate the power settings on your PoE-enabled switch and ensure each port is set to a value that supports the APX device in use.
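If the switch turns out to be the cause, the per-port power cap is the setting to revisit. As an illustration only (PoE configuration syntax varies by switch vendor, and the interface name here is an example), on a Cisco IOS PoE switch the port feeding an APX740 could be returned to automatic negotiation or capped at or above the 22.4 W the AP can draw:

interface GigabitEthernet1/0/10
 ! remove any manual cap and let the switch negotiate the PoE budget
 power inline auto
 ! alternatively, keep a manual cap but set it at or above the AP maximum (value in milliwatts):
 ! power inline static max 23000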



VDAs will not register due to WMI Repository Corruption

The Broker Agent service won't start and crashes after a power outage on a network switch.

Exception:

System.Management.ManagementException: Invalid class

at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)

at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()

at Citrix.MetaInstaller.CoreUtilities.IsVirtualMachine(VMType type)

at Citrix.MetaInstaller.MetaInstallerApplication.Run(String[] args)

at Citrix.MetaInstaller.MetaInstallerApplication.InstallResultMain(String[] args)


Application: BrokerAgent.exe

Framework Version: v4.0.30319

Description: The process was terminated due to an unhandled exception.

Exception Info: System.Reflection.TargetInvocationException

Stack:

at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)

at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)

at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)

at Citrix.Cds.PluginLoader.PluginProxy.StartPlugin()

at Citrix.Cds.PluginLoader.PluginProxy.StartPlugin()

at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)

at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)

at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)

at System.Threading.ThreadHelper.ThreadStart()

Faulting application name: BrokerAgent.exe, version: 7.1.0.4019, time stamp: 0x5255694d

Faulting module name: KERNELBASE.dll, version: 6.1.7601.18229, time stamp: 0x51fb1677

Exception code: 0xe0434352

Fault offset: 0x000000000000940d

Faulting process id: 0x1988

Faulting application start time: 0x01d019c5969618fd

Faulting application path: C:\Program Files\Citrix\Virtual Desktop Agent\BrokerAgent.exe

Faulting module path: C:\Windows\system32\KERNELBASE.dll

Report Id: d5d91837-85b8-11e4-86a5-b499babae614
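The "Invalid class" ManagementException above is a typical symptom of a corrupted WMI repository. As a hedged first step using standard Windows tooling (this is general WMI guidance, not a Citrix-specific procedure; validate in your environment), the repository can be checked and, if inconsistent, salvaged from an elevated command prompt on the affected VDA:

winmgmt /verifyrepository
rem If the repository is reported as INCONSISTENT, attempt an in-place salvage:
winmgmt /salvagerepository
rem Then reboot the VDA (or restart the Citrix Desktop Service) and re-check VDA registration.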


Failover issues with SGOS on ESX and Cisco


Hi there,

I’m having an issue with failover on two virtualized Symantec (Blue Coat) proxies on two ESX hosts in two datacenters connected with Cisco switches.

I can see the multicast traffic leaving the proxy and going out over the Cisco switches until the firewall blocks it. The packets should be delivered at L2 to the other switch, into the other ESX host, and on to the proxy running there.

But on the other host I don’t see any multicast traffic coming in. Hence both proxies consider themselves responsible for the virtual IP, which causes problems with Skype etc.

Has anyone had such an issue before? On ESX we have already activated promiscuous mode for that VLAN/subnet, but that didn’t change anything.

The hardware proxies in the same network see the multicast traffic coming in from the virtual machines and behave accordingly. As the virtual proxies don’t receive any multicast traffic, each always assumes it is the master because the other one is not sending any updates.

I could understand if there were an issue between the two Cisco switches where multicast traffic is not forwarded from one to the other. Another idea is that there is a special setting on the ESX machine I’m not aware of. Any ideas?
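For reference, the standard things to check on the Cisco side here, assuming IOS and using VLAN 100 only as a placeholder, are whether IGMP snooping is filtering the failover multicast in that VLAN and whether a querier exists:

show ip igmp snooping vlan 100
show ip igmp snooping querier
show ip igmp snooping groups vlan 100

Without a querier or a multicast router port, snooping-enabled switches may stop flooding that multicast towards each other, which would match what I’m seeing.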

Thanks in advance,

Manfred



L2 redundancy of WCCP Transparent Proxy


Hello,

I’m trying to configure link redundancy. I have two LAN switches configured in HSRP mode. The ProxySG is currently connected to the HSRP primary, SW-1.

Is it possible to connect another link from the proxy to the secondary SW-2, so that when SW-1 goes down, the ProxySG continues to do its job using SW-2?

Diagram in attachment.

Thank you.



Cisco Small Business Series Switches Simple Network Management Protocol Denial of Service Vulnerability

A vulnerability in the Simple Network Management Protocol (SNMP) input packet processor of Cisco Small Business Sx200, Sx300, Sx500, and ESW2 Series Managed Switches and Small Business Sx250, Sx350, and Sx550 Series Switches could allow an authenticated, remote attacker to cause the SNMP application of an affected device to cease processing traffic, resulting in CPU utilization reaching 100 percent. Manual intervention may be required before a device resumes normal operations.

The vulnerability is due to improper validation of SNMP protocol data units (PDUs) in SNMP packets. An attacker could exploit this vulnerability by sending a malicious SNMP packet to an affected device. A successful exploit could allow the attacker to cause the device to cease forwarding traffic, which could result in a denial of service (DoS) condition.

Cisco has released firmware updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190515-sb-snmpdos

Security Impact Rating: High

CVE: CVE-2019-1806


Best Practices for Configuring Provisioning Services Server on a Network

This article provides best practices when configuring Citrix Provisioning, formerly Citrix Provisioning Server, on a network. Use these best practices when troubleshooting issues such as slow performance, image build failures, lost connections to the streaming server, or excessive retries from the target device.

Disabling Spanning Tree or Enabling PortFast

With Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol (RSTP), ports are placed into a blocking state while the switch transmits and listens for Bridge Protocol Data Units (BPDUs) to ensure the port is not part of a loop.

The amount of time it takes to complete this convergence process depends on the size of the switched network, which might allow the Pre-boot Execution Environment (PXE) to time out, preventing the machine from getting an IP address.

Note: This does not apply after the OS is loaded.

To resolve this issue, disable STP on edge ports connected to clients, or enable PortFast or Fast Link, depending on the managed switch brand. Refer to the following table; a sample Cisco configuration follows it:

Switch Manufacturer    Fast Link Option Name
Cisco                  PortFast or STP Fast Link
Dell                   Spanning Tree FastLink
Foundry                Fast Port
3COM                   Fast Start
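As an illustration, on a Cisco IOS switch an edge port facing Provisioning Services target devices might be configured along these lines (the interface name is an example):

interface GigabitEthernet0/10
 description PVS target device edge port
 switchport mode access
 ! bring the port to forwarding immediately instead of waiting for STP convergence
 spanning-tree portfast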

Auto Negotiation

Auto-negotiation requires a network device and its switch to negotiate speed and duplex before communication begins. This can cause long start times and PXE timeouts, especially when starting multiple target devices with different NIC speeds. Citrix recommends hard coding all Provisioning Server ports (server and client) on the NIC and on the switch, as in the example below.
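A minimal sketch of hard coding the link on a Cisco IOS switch port (values and interface name are examples; match whatever the server or target device NIC is set to):

interface GigabitEthernet0/10
 ! fix speed and duplex so the port does not wait on auto-negotiation
 speed 1000
 duplex full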

Stream Service Isolation

New advancements in network infrastructure, such as 10 Gb networking, may not require the stream service to be isolated from other traffic. If security is of primary concern, Citrix recommends isolating or segmenting the PVS stream traffic from other production traffic. However, in some cases, isolating the stream traffic can lead to a more complicated networking configuration and actually decrease network performance. For more information on whether the streaming traffic should be isolated, refer to the following article:

Is Isolating the PVS Streaming Traffic Really a Best Practice?

Firewall and Server to Server Communication Ports

Open the following ports in both directions:

  • UDP 6892 and 6904 (For Soap to Soap communication – MAPI and IPC)

  • UDP 6905 (For Soap to Stream Process Manager communication)

  • UDP 6894 (For Soap to Stream Service communication)

  • UDP 6898 (For Soap to Mgmt Daemon communication)

  • UDP 6895 (For Inventory to Inventory communication)

  • UDP 6903 (For Notifier to Notifier Communication)

Note: The DisableTaskOffload registry setting is still required.
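A hedged sketch of what this can look like on the Provisioning servers themselves, using standard Windows tooling (rule names are arbitrary, ports should be adjusted to your configuration, and any hardware firewall between servers needs the same ports opened):

New-NetFirewallRule -DisplayName "Citrix PVS server-to-server (in)" -Direction Inbound -Protocol UDP -LocalPort 6892,6894,6895,6898,6903,6904,6905 -Action Allow
New-NetFirewallRule -DisplayName "Citrix PVS server-to-server (out)" -Direction Outbound -Protocol UDP -RemotePort 6892,6894,6895,6898,6903,6904,6905 -Action Allow
# DisableTaskOffload for the streaming stack, as noted above
reg add "HKLM\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f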


Isilon: If the SmartConnect Service IP (SSIP) is assigned to an aggregate interface, the IP address may go missing under certain conditions or move to another node if one of the lagg ports is shut down.

Article Number: 519890 Article Version: 13 Article Type: Break Fix



Isilon, Isilon OneFS 8.0.0.6, Isilon OneFS 8.0.1.2, Isilon OneFS 8.1.0.2

The SmartConnect SSIP or network connectivity can be disrupted on a node if a link aggregation interface in LACP mode is configured and one of the port members of the lagg interface stops participating in the LACP aggregation.

The issue happens when a node is configured with any of the following link aggregation interfaces:

10gige-agg-1

ext-agg-1

and one of its port members is not participating in the lagg interface:

lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>

ether 00:07:43:09:3c:77

inet6 fe80::207:43ff:fe09:3c77%lagg0 prefixlen 64 scopeid 0x8 zone 1

inet 10.25.58.xx netmask 0xffffff00 broadcast 10.25.58.xxx zone 1

nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

laggport: cxgb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

>> laggport: cxgb1 flags=0<>

This causes OneFS to internally set the link aggregation interface to ‘No Carrier’ status, due to a bug in the network manager software (Flexnet):

# isi network interface list

LNN Name Status Owners IP Addresses

————————————————————————–

1 10gige-1 No Carrier – –

1 10gige-2 Up – –

1 10gige-agg-1 No Carrier groupnet0.subnet10g.pool10g 10.25.58.46

Possible failures causing the issue:

  1. Failed switch port
  2. Incorrect LACP configuration at switch port
  3. Bad cable/SFP, or other physical issue
  4. A switch connected to a port failed or was rebooted
  5. BXE driver bug reporting a port state that is not full duplex (KB 511208)

Failures 1 to 4 are external to the cluster, and the issue should go away as soon as they are fixed. Failure 5 can be a persistent failure induced by a known OneFS BXE bug (KB 511208).

  A. If the node has the lowest node ID in the pool and the SmartConnect SSIP is configured there, then:
    a. If failure 1, 2, or 3 occurs, the SSIP moves to the next lowest node ID that is clear of any failure.
    b. If failure 4 is present, the SSIP will not be available on any node, and data unavailability (DU) is expected until the workaround is implemented, the patch is installed, or the switch is fixed or becomes available again after a reboot.
    c. If failure 5 is present:
      i. If only one port has failed, the SSIP moves to the next available lowest node ID not affected by the issue.
      ii. [DU] If all nodes in the cluster are BXE nodes and all are affected by the bug, the SSIP will not be available; expect DU until the workaround or patch is applied.
  B. If link aggregation in LACP mode is configured in a subnet/pool whose defined gateway is the default route on the node, then:
    a. If the issue happens while the node is running and the default route is already set, the default route remains configured and available, and connectivity for already connected clients should continue working.
    b. [DU] If the node is rebooted with any of the persistent failures present, then after it comes back up the default route will not be available, causing DU until the external issue is fixed, the workaround is applied, or the patch is installed.

If any of these failures is present during an upgrade to 8.0.0.6 or 8.1.0.2, a DU is expected after the rolling reboot due to the case described in cause A->c->ii or cause B->b. A check must be made prior to the upgrade to verify that the cluster is clear of any of the described failures; see the sketch below.
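One way to make that pre-upgrade check is with the same commands shown above, run before starting the upgrade (the grep patterns are only examples):

# isi network interface list | grep -i "no carrier"
# isi_for_array -s 'ifconfig | grep "laggport.*flags=0<>"'

Any output from either command indicates a lagg member is down somewhere in the cluster, and that failure should be cleared before the upgrade proceeds.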



Workaround


Workaround to immediately restore the link aggregation interface if only one member port is persistently down (failed switch, failed cable/SFP, BXE bug, or other persistent issue)

Step 1:

Identify the failed member port on the link aggregation interface:

# ifconfig

lagg1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500

options=507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO>

ether 00:0e:1e:58:20:70

inet6 fe80::20e:1eff:fe58:2070%lagg1 prefixlen 64 scopeid 0x8 zone 1

inet 172.16.240.xxx netmask 0xffff0000 broadcast 172.16.255.xxx zone 1

nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

media: Ethernet autoselect

status: active

laggproto lacp lagghash l2,l3,l4

>> laggport: bxe1 flags=0<>

laggport: bxe0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>


Step 2:

Manually remove the port member with the command:

ifconfig lagg1 -laggport bxe1

The network should recover 10-20 seconds after executing the command.

This change will be lost after a reboot.

After the external failure on the port has been identified and fixed, and the port is available again, reconfigure the port back into the link aggregation configuration with the command:

ifconfig lagg1 laggport bxe1

A permanent fix will be included in the following OneFS maintenance releases once they become available:

  • OneFS 8.0.0.7
  • OneFS 8.1.0.4

A roll-up patch is now available for:

  • 8.0.0.6 (bug 226984) – patch-226984
  • 8.1.0.2 (bug 226323) – patch-226323

NOTE: This issue affects the following OneFS versions ONLY:

  • OneFS 8.0.0.6
  • OneFS 8.0.1.2
  • OneFS 8.1.0.2
  • OneFS 8.1.1.1
