SDWAN doesn’t support “MSS Clamping” in PPPoE Internet Service (before version 11.3.1), which causes some external web pages to fail to load

This is because the SDWAN PPPoE link does not support MSS Clamping before version 11.3.1.

What is MSS Clamping?

1. In a PPPoE link, an additional 8 bytes of PPPoE header are inserted into each frame. This can push the total frame length past the 1500-byte MTU, so TCP packets carrying a full 1460-byte payload must be fragmented.

2. However, in most cases the DF (Don’t Fragment) bit is set in the packet, which forbids fragmentation. The PPPoE router should then reply with an ICMP “Fragmentation Required” message to the originating client/server, which should resend the data in smaller packets.

3. However, the ICMP message may be dropped by a firewall. In such cases, a better solution is for the PPPoE router to modify the MSS value of the TCP connection to fit the PPPoE link’s MTU. This is called MSS Clamping; a generic Linux sketch follows.
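As an illustration of the technique (a generic Linux sketch, not the SD-WAN implementation itself), a Linux PPPoE gateway typically clamps the MSS with iptables. With a PPPoE MTU of 1492 (1500 - 8), the MSS should be at most 1452 (1492 minus 40 bytes of IPv4 and TCP headers); the ppp0 interface name below is an assumption:

# Rewrite the MSS option on forwarded TCP SYN packets to match the path MTU:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

# Or pin an explicit value for a 1492-byte PPPoE MTU (1492 - 40 = 1452):
iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452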



Bluecoat SG300 offline

I need a solution

Hi everyone

I wonder if someone can help me. We have a Bluecoat SG300 in our Middle East office which went down yesterday. It does all the web filtering, so the users have had to disable the proxy server in order to have Internet access – not ideal…

I’ve attached a picture of the front of the unit. The power LED is flashing green-amber while the triangle LED is completely off. Does this indicate a hardware fault or is there a way I can bring it back online? The device cannot even be pinged locally. I was thinking of connecting a laptop to it directly and trying to connect locally but I’m not sure if it’s even booting up fully.

Assistance would be greatly appreciated.

Many thanks



Citrix ADC ICMP Counters

This article lists the newnslog Internet Control Message Protocol (ICMP) counters and gives a brief description of each.

Using the Counters

Log on to the ADC using an SSH client, change to the shell, navigate to the /var/nslog directory, and then use the ‘nsconmsg’ command to see comprehensive statistics using the available counters. For the detailed procedure, refer to the Citrix blog post NetScaler ‘Counters’ Grab-Bag!.
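For example, to display only the ICMP counters from the current newnslog file (this assumes the -g option does a substring match on counter names, as in the nic_tot_rx_packets example later in this document):

#nsconmsg -K newnslog -d current -s disptime=1 -g icmp_ | more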

The newnslog ICMP counters

The following list gives each newnslog ICMP counter with a brief description.

icmp_cur_ratethreshold: Tracks the limit for ICMP packets handled every 10 milliseconds. The default value is 0, which means no limit. This value is configurable using the set rateControl command (see the example below).
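For example, to allow up to 2000 ICMP packets per 10-millisecond interval (the full command name and parameter are assumptions based on the NetScaler CLI reference; verify them on your release):

> set ns rateControl -icmpThreshold 2000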

icmp_tot_rxpkts: Tracks the ICMP packets received.

icmp_tot_rxbytes: Tracks the bytes of ICMP data received.

icmp_tot_txpkts: Tracks the ICMP packets transmitted.

icmp_tot_txbytes: Tracks the bytes of ICMP data transmitted.

icmp_tot_rxEchoReply: Tracks the ICMP ping echo replies received.

icmp_tot_txEchoReply: Tracks the ICMP ping echo replies transmitted.

icmp_tot_rxEcho: Tracks the ICMP ping Echo Request and Echo Reply packets received.

icmp_port_unreach: Tracks the ICMP Port Unreachable error messages received. This error is generated when no service is running on the port.

icmpgen_port_unreach: Tracks the ICMP Port Unreachable error messages generated. This error is generated when no service is running on the port.

icmp_unreach_needfrag: Tracks the ICMP Fragmentation Needed error messages received for packets that must be fragmented but have Don’t Fragment set in the header.

icmp_err_threshold: Increments when the ICMP rate threshold is exceeded. If this counter increases continuously, first verify that the ICMP packets received are genuine; if they are, increase the current rate threshold.

icmp_err_dropped: Tracks the ICMP packets dropped when the rate threshold is exceeded.

icmp_err_badchecksums: Tracks the ICMP Fragmentation Needed error messages received with an ICMP checksum error.

icmp_err_pmtu_dfonfrag: Tracks the ICMP Fragmentation Needed error messages received that were generated by an IP fragment other than the first one.

icmp_err_pmtu_invalbodylen: Tracks the ICMP Fragmentation Needed error messages received that specify an invalid body length.

icmp_err_pmtu_notcpconn: Tracks the ICMP Fragmentation Needed error messages received for TCP packets whose connection state is not maintained on the NetScaler appliance.

icmp_err_pmtu_noudpconn: Tracks the ICMP Fragmentation Needed error messages received for UDP packets whose connection state is not maintained on the NetScaler appliance.

icmp_err_pmtu_invalseqno: Tracks the ICMP Fragmentation Needed error messages received for packets that contain an invalid TCP sequence number.

icmp_err_nextmtu_inval: Tracks the ICMP Fragmentation Needed error messages received in which the Maximum Transmission Unit (MTU) for the next hop is out of range. The valid range for the MTU is 576-1500.

icmp_err_mtulkup: Tracks the number of failed MTU lookups on the destination IP address carried in a Fragmentation Needed ICMP error message.

icmp_err_bignxtmtu: Tracks the ICMP Fragmentation Needed error messages received in which the value for the next MTU is higher than the current MTU.

icmp_err_pmtu_unknownproto: Tracks the ICMP Fragmentation Needed error messages received for a protocol other than TCP or UDP.

icmp_err_pmtu_cksum: Tracks the ICMP Fragmentation Needed error messages received with an IP checksum error.

icmp_err_pmtu_nolink: Tracks the ICMP Fragmentation Needed error messages received on a Protocol Control Block (PCB) with no link. The PCB maintains the state of the connection.

icmp_err_pmtu_disabled: Tracks the ICMP Fragmentation Needed error messages received when PMTU Discovery mode is not enabled.


Built-in IPS signatures

I need a solution

Hello,

Could you please guide me on where to locate the ARP Cache Poison, Port Scan, ICMP Ping Flood, and TCP SYN Flood built-in IPS signatures in the IPS policy of SEPM.

SEPM Version : 14.2.1031.0100

I’m unable to find these signatures as suggested in this article:

https://support.symantec.com/en_US/article.TECH246…

Thanks!


SEP blocks ping on DC

I need a solution

Hello,

I’m using ping (via Nagios) to monitor connectivity with servers.

I just created a rule to allow ICMP from the remote host. I have logs of allowed traffic that show it is working, but something is wrong with the SEP client on the domain controllers.

After a couple of minutes I stop receiving logs for that rule, for about 6-10 minutes. When I check the server connectivity monitoring, I see heavy packet loss on pings from the remote host; over 2 hours it is about 88%.

After I removed the SEP client, the packet loss on ping disappeared.

Any idea what I can do about this?

SEP 14.2


RecoverPoint: How to check MTU size in RecoverPoint

Article Number: 484259 Article Version: 5 Article Type: How To



RecoverPoint, RecoverPoint for Virtual Machines, RecoverPoint CL, RecoverPoint SE, RecoverPoint EX

This document describes two mechanisms that can be used to ensure that the MTU (Maximum Transmission Unit) is consistent along the path, and to calculate the path MTU, which is the minimal MTU that is consistent along the path between two hosts.

The MTU is the largest possible frame on an OSI Model Layer 2 data network.

The MTU size depends on the physical properties of the communications media.

For most Ethernet networks this is set to 1500 bytes by default.

More recently, Ethernet frames with more than 1500 bytes of payload (called jumbo frames) have become popular; they normally carry 9000 bytes of payload.

The problem with increasing the MTU over the default value of 1500 is that all network devices along the path must support the increased MTU value.

This is not always the case, so we must ensure that the MTU is consistent along the entire path; otherwise strange problems can appear, such as:

– replication impact (possible DRU, link resets, etc.)

– inability of clusters to communicate

– clusters unknown across system

To check the Maximum Transmission Unit (MTU) in RecoverPoint, log in to the CLI as a boxmgmt role user.

From the boxmgmt main menu select:

[2] Setup

[1] Modify settings

[7] MTU configuration

[1] View MTU values

Selecting 1 displays a result such as:

|------------|
| MTU values |
|------------|
| WAN | 1500 |
| LAN | 1500 |
|-----|------|


To test the MTU via ping:

The menu path varies by version, but get to the Run Internal Command prompt (menu options 3, then 5, from the boxmgmt main menu in 4.1 and higher).

[5] Run internal command

This is the list of commands you are allowed to use: arp arping date ethtool kps.pl netstat ping ping6 ssh su telnet top uptime

Enter internal command:

For the internal command to test ping, use the following as an example when testing to find the best value for the MTU:

ping -I eth0 -M do -s <packet size> <destination IP>

The -I argument forces the ping traffic out of the specified interface (eth0 in this instance, the WAN port).

The -M do argument sets the Don’t Fragment bit, so the packet cannot be fragmented along the path.

The -s argument specifies the payload size to test.

Example results:

Enter internal command: ping -I eth0 -M do -s 1472 192.168.0.105

PING 192.168.0.105 (192.168.0.105) from 192.168.0.108 eth0: 1472(1500) bytes of data.

1480 bytes from 192.168.0.105: icmp_req=1 ttl=64 time=0.164 ms

1480 bytes from 192.168.0.105: icmp_req=2 ttl=64 time=0.148 ms

1480 bytes from 192.168.0.105: icmp_req=3 ttl=64 time=0.144 ms

1480 bytes from 192.168.0.105: icmp_req=4 ttl=64 time=0.125 ms

1480 bytes from 192.168.0.105: icmp_req=5 ttl=64 time=0.151 ms

1480 bytes from 192.168.0.105: icmp_req=6 ttl=64 time=0.144 ms

Press Ctrl+C to end the test.

If the test is unsuccessful, test lower MTU values. A guideline is to test the values that RecoverPoint presets (see below), subtracting 28 from each value to account for the packet headers. In the above example, a 1500-byte MTU was tested by pinging with a 1472-byte payload to account for the 28-byte overhead (20-byte IP header plus 8-byte ICMP header) of the ping test. If testing 1492 as the MTU, you would use 1464 as your -s size; testing 1430, you would use 1402, and so on.
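If a regular Linux shell is available (the boxmgmt internal command prompt takes one command at a time, so treat this as a sketch for a standard shell), the preset values can be swept in one pass; the interface and destination IP below are taken from the example above:

# Test each preset MTU by pinging with (MTU - 28) bytes of payload:
for mtu in 9000 1500 1492 1430; do
  echo "Testing MTU $mtu (payload $((mtu - 28)))"
  ping -I eth0 -M do -c 3 -s $((mtu - 28)) 192.168.0.105
done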


To change the MTU value:

[2] Setup

[1] Modify settings

[7] MTU configuration

[2] Configure MTU values

Selecting 2 here allows you to pick the interface to modify:

1. WAN

2. LAN

Select the interface type to modify (1 to 2): 1

The current WAN MTU value is 1500. Do you want to modify it? (y/n)? y

1. 9000 bytes (Recommended when using iSCSI)

2. 1500 bytes (Default value)

3. 1492 bytes (Recommended when using PPPoE)

4. 1430 bytes (Recommended when using VPN)

5. Other (For EMC personnel use only)

Select MTU value: (1 to 5) (Default ‘2’):

Enter the correct value based on the ping test results.

|------------|
| MTU values |
|------------|
| WAN | 1500 |
| LAN | 1500 |
|-----|------|

Do you want to apply these configuration settings now (y/n)? y

Changes were received by the RPA, but it may take some time for them to be applied to the entire system.

Note: a custom size may be entered using option 5 (Other) as shown above, but bringing the MTU too low can lead to packet fragmentation.

For more information, search for MTU and packet fragmentation on the Internet.


High Packet CPU caused by ICMP traffic on loopback IP address 127.0.0.2

On the NetScaler, checking the CPU statistics may show output similar to the following for the stat system cpu command on the CLI:

> stat system cpu

CPU statistics
ID   Usage
4    100
5    100
3    100
2    100
1    100

In the above case the NetScaler has 5 packet CPUs, and one row is shown per packet CPU.

Checking the newnslog counters (under the /var/nslog directory) shows far more packets on the loopback interface than on the traffic interfaces:

#nsconmsg -K newnslog -d current -s disptime=1 -g nic_tot_rx_packets | more

1 0 9189639189 nic_tot_rx_packets interface(10/1)

3 0 9129888925 nic_tot_rx_packets interface(10/2)

5 0 86622975465 nic_tot_rx_packets interface(10/3)

7 0 86824396495 nic_tot_rx_packets interface(10/4)

9 0 3525761411 nic_tot_rx_packets interface(0/1)

11 0 681495258986 nic_tot_rx_packets interface(LO/1)

13 0 18319518984 nic_tot_rx_packets interface(LA/1)

15 0 173447330798 nic_tot_rx_packets interface(LA/2)

17 0 195292661485 allnic_tot_rx_packets

If you take a NetScaler packet capture, you can identify that one particular looping packet is causing the high CPU, and that most of the traffic on the NetScaler is the packet shown below:

NSIP -> 127.0.0.2 ICMP 105 Destination unreachable (Port unreachable)

You can also see that the actual traffic on the NetScaler is very low compared to this looped packet.


This is seen when the configured nameServer returns a server failure response and the packet is looped back into the NetScaler.
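To confirm the loop and its trigger, a trace can be taken on the appliance and the DNS configuration checked. The nstrace filter expression below uses the classic expression syntax and is an assumption to verify on your build; show dns nameServer reports whether a configured name server is marked DOWN:

> start nstrace -size 0 -filter "DESTIP == 127.0.0.2"

> stop nstrace

> show dns nameServer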


Unable to disable Application Monitoring?

I do not need a solution (just sharing information)

We’re trying to debug an issue where it looks like SEPC is slowing down SSH connections, ping, and so on. For example, when running ping against a local router, there was an initial delay of 10 seconds before it executed. Viewing the client, it appears as if SEPC is doing some kind of inspection, which we can’t disable.

We have noticed that in the client there is a slider for Application Monitoring. Regardless of the settings for the policy in SEPC, this is always enabled and greyed out.

Could somebody explain the purpose of this slider? We’d like to try disabling it as part of debugging the issue explained above.

Thanks 


How to disable the “OK but failing” email notification while an ICMP health check is running for a particular IP?

I need a solution

Hello.

Can anyone help me with my problem?

I need to perform ICMP echo checks for one IP address, and I want to receive only one email message when the host is down. If I deploy a policy like “interval = 30, threshold = 5”, then when the host is down I receive two messages: “OK but failing” and then “failed”. Can I disable the intermediate “OK but failing” message and receive only the useful messages, OK or Failed? These intermediate emails are not useful, because an ICMP echo can get lost on the Internet and I just receive useless beeps on my phone.

Thanks for any advice.
