Citrix ADC Internet Protocol (IP) Counters

This article lists the newnslog Internet Protocol (IP) counters and briefly describes each one.

Using the Counters

Log on to the NetScaler appliance using an SSH client, switch to the shell, navigate to the /var/nslog directory, and then use the ‘nsconmsg’ command to view comprehensive statistics for the available counters. For the detailed procedure, refer to the Citrix blog post NetScaler ‘Counters’ Grab-Bag!.
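
For example, the following commands (a usage sketch based on the nsconmsg options documented by Citrix; verify the flags against your firmware version) display the current values of all counters whose names match a pattern, where -g filters by counter name and -s disptime=1 adds a timestamp to each line:

  nsconmsg -K newnslog -d current -g ip_tot -s disptime=1
  nsconmsg -K newnslog -d current -g ip_err | more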

The newnslog IP Counters

The following list shows each newnslog IP counter with a brief description.

ip_tot_rxpkts: IP packets received
ip_tot_rxbytes: Bytes of IP data received
ip_tot_txpkts: IP packets transmitted
ip_tot_txbytes: Bytes of IP data transmitted
ip_tot_rxMbits: Megabits of IP data received
ip_tot_txMbits: Megabits of IP data transmitted
ip_tot_routedpkts: Total routed packets
ip_tot_routedMbits: Total routed Mbits
ip_tot_fragments: IP fragments received
ip_tot_addr_lookup_done: IP address lookups performed by the NetScaler appliance. When a packet is received on a non-established session, the NetScaler appliance checks whether the destination IP address is one of the NetScaler-owned IP addresses.
ip_tot_udp_frag_fwd: UDP fragments forwarded to the client or the server
ip_tot_tcp_frag_fwd: TCP fragments forwarded to the client or the server
ip_tot_makefrag_pkts: Fragmented packets created by the NetScaler
ip_tot_reass_attempts: IP packets that the NetScaler appliance attempts to reassemble. If one of the fragments is missing, the entire packet is dropped.
ip_tot_reass_success: Fragmented IP packets successfully reassembled on the NetScaler appliance
ip_tot_l2_mode_drops: Total number of IP packets dropped because L2 mode is disabled
ip_tot_l3_mode_drops: Total number of IP packets dropped because L3 mode is disabled
ip_tot_secondary_pe_drops: Total number of IP packets dropped by the secondary NetScaler appliance
ip_tot_loopback_drops: Total number of loopback IP packets dropped
ip_tot_subnet_bcast_drops: Total number of IP packets dropped because the destination address is a subnet broadcast address
ip_err_badchecksums: Packets received with an IP checksum error
ip_err_reass_failure: Packets received that could not be reassembled. This can occur when there is a checksum failure, an identification field mismatch, or when one of the fragments is missing.
ip_err_reass_len_err: Packets received for which the reassembled data exceeds the Ethernet packet data length of 1500 bytes
ip_err_reass_zerolenfrags: Packets received with a fragment length of 0 bytes
ip_err_reass_dupfrags: Duplicate IP fragments received. This can occur when the acknowledgement was not received within the expected time.
ip_err_reass_ooofrags: Fragments received that are out of order
ip_err_unknown_destination: Packets received in which the destination IP address was not reachable or not owned by the NetScaler appliance
ip_err_bad_transport: Packets received in which the protocol specified in the IP header is unknown to the NetScaler appliance
ip_err_natvip_down: Packets received for which the virtual IP is down. This can occur when all the services bound to the virtual IP are down or the virtual IP is manually disabled.
ip_err_fixheader: Packets received that contain an error in one or more components of the IP header
ip_tot_addr_lookup_failed: IP address lookups performed by the NetScaler appliance that failed because the destination IP address of the packet does not match any of the NetScaler-owned IP addresses
ip_err_hdrsize: Packets received in which an invalid data length is specified, or the value in the length field and the actual data length do not match. The range for the Ethernet packet data length is 0-1500 bytes.
ip_err_packetlen: Total number of packets received by the NetScaler appliance with an invalid IP packet size
ip_err_nsblen: Truncated IP packets received. An overflow in the routers along the path can truncate IP packets.
net_err_noniplen: Truncated non-IP packets received
ip_err_zero_nexthop: Packets received that contain a 0 value in the next hop field. These packets are dropped.
net_err_badlen_txpkts: Packets received with a length greater than the normal maximum transmission unit of 1514 bytes
net_err_badMACAddr_txpkts: IP packets transmitted with a bad MAC address
ip_err_max_clients: Attempts to open a new connection to a service for which the maximum client limit has been exceeded. The default value of 0 applies no limit.
ip_err_unknown_services: Packets received on a port or service that is not configured
ip_err_landattack: Land-attack packets received, in which the source and the destination addresses are the same
ip_err_ttl_expired: Packets for which the time-to-live (TTL) expired during transit. These packets are dropped.


Adding static route in Avamar

Hello All,

I don’t know whether this is the right place to ask, but as a newbie to the Avamar product I’d like someone to answer my question about adding static routes to Avamar (single-node grid). Is it even possible?

We have two networks (network 1 and network 2) that are isolated, but recently we decided to back up all the clients in network 2 to network 1. Since the two networks are segregated, the networking team created a routing interface so that devices in network 1 can talk to devices in network 2 and vice versa. So my question is: is it possible to add a static route on the Avamar (IDPA) so that devices in network 1 can talk to devices in network 2?

Note: the networking team did a ping test from all the routers (16) in network 2 to the Avamar and to the routing interface, and everything is reachable. The Avamar in network 1 can even reach the routing interface that was created, but it cannot reach any of the routers (16) in network 2. There are no firewalls on the routers in network 2. So what changes need to be made on the Avamar/DD? Is it even possible?
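
For reference, on a plain SLES/Linux host I would normally add something like the following (the subnet and gateway below are just placeholders for network 2 and the new routing interface), but I’m not sure whether doing this directly on the Avamar/IDPA node is supported or whether it would persist across reboots:

  ip route add 10.2.0.0/16 via 10.1.0.1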

I hope this makes sense. Let me know if you have any questions. I can provide more details if needed.

Thanks in Advance

PK


In-Band Network Telemetry: Next Frontier in Network Visualization and Analytics, and Why Enterprise Customers Care

By: Gautam Chanda, Global Product Line Manager DC Networking Analytics, HPE

Let’s first answer the important question: Why do we need Network Visualization and Analytics?

Data center networks have reached cloud scale, and deployment of hyper-converged networks is increasing. Telecom networks will deliver faster connectivity everywhere, with the higher bandwidth needed for 5G wireless services. All of these next-generation networks require not only much higher bandwidth but also real-time telemetry to deliver services with a good Quality of Experience (QoE).

A network with detailed real-time visibility enables better reliability and real-time control. Here are the key reasons customers need Network Visualization and Analytics now more than ever:

  • Ability to Pinpoint Traffic Patterns for Dynamic Applications: Data center deployments are increasingly complex, combining network virtualization and overlay/tunnel technologies, SDN/NFV, silicon programmability, multi-tenancy, a growing volume of applications, mobility, hybrid cloud, bare-metal and virtualized servers (VMs/containers), vSwitches, NIC virtualization, orchestration, and more. This creates increasingly complicated traffic patterns, and network operators need greater visibility into those patterns to understand whether their DC network infrastructure is performing optimally.
  • Security Challenges: Complicated IT environments, stricter regulatory compliance requirements, and a growing number of cybersecurity attacks originating both inside and outside the data center all raise security concerns. Defending against these attacks, and understanding the complex traffic patterns that accompany them, is critical.
  • Intent-Based Networking
  • Network Analytics (Visibility, Validation, Optimization & Upgrade, Troubleshooting, Policy Enforcement) is increasingly important for modern DC and Cloud deployments.

Older network management tools such as SNMP are not up to the task in these very high-speed networks as we move from 10G to 25G to 100G and beyond in short order.

The figure below illustrates the need for Network Visualization and Analytics:

Figure 1: The Need for Network Visualization and Analytics

This brings us to In-Band Network Telemetry (INT).

Let’s pause for a minute:

  • Let’s assume you’re interested in the behaviour of your live user-data traffic.
    • What is the best source of information?
  • Well… probably the live user-data traffic itself.
    • Let’s add meta-data to all interesting live user-data traffic.

This is the essence of In-Band Network Telemetry.

The figure below contrasts the traditional approach with INT. In traditional network monitoring, an application polls the host CPU to gather aggregated telemetry every few seconds or minutes, which does not scale well in next-generation networks. In-Band Network Telemetry, by contrast, enables packet-level telemetry by adding key details about packet processing to the data-plane packets themselves, without consuming any host CPU resources:

Figure 2: Traditional vs New Way


In-Band Network Telemetry (INT) is a sophisticated and flexible telemetry feature, usually supported in hardware within the network devices. As explained above, INT allows the data plane to collect and report detailed latency, congestion, and network state information without requiring intervention or work by the control plane. INT-enabled devices insert this valuable metadata in-band, without affecting network performance, and it can then be extracted and interpreted by a collector/sink/network management software such as HPE IMC.

INT enables a number of very useful customer use cases, such as:

  • Network troubleshooting
    • When packets enter/exit networks
    • Which path was taken by individual flows associated with Specific Applications
    • How long packets spend at each hop
    • How long packets spend on each link
    • Which switches are seeing congestion?
    • Microburst detection
  • Real-time control or feedback loops:
    • A collector might use the INT data-plane information to feed control information back to traffic sources, which could in turn use it to adjust traffic engineering or packet forwarding. (Explicit congestion notification schemes are an example of this type of feedback loop.)
  • Network Event Detection:
    • If the collected path state indicates a condition that requires immediate attention or resolution (such as severe congestion or a violation of certain data-plane invariants), the collector could trigger immediate actions in response to the network events, forming a feedback control loop in either a centralized or a fully decentralized fashion (a la TCP).
  • The list goes on.

The figure below shows an end-to-end INT customer use case in a data center:

Figure 3: End To End INT


Figure 3 above shows how In-Band Network Telemetry is used to track, in real time, the path and latency of packets and flows associated with specific applications (a simplified example of the accumulated per-hop metadata follows the list below):

  • Collect the physical path and hop latencies hop-by-hop for every packet.
  • INT can be initiated, transited, or terminated by either a switch or a NIC (Network Interface Card) in a host such as a server.
  • INT metadata is encapsulated and exported to the collector (e.g. HPE IMC).
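
As a simplified illustration (the field names below follow the public INT specification, but the values and device IDs are made up), the metadata stack accumulated by a single packet crossing three switches might look like this when the INT sink exports it to the collector:

  hop 1: switch_id=101, ingress_port=3, egress_port=17, hop_latency=4 us, queue_occupancy=2 KB
  hop 2: switch_id=204, ingress_port=9, egress_port=25, hop_latency=36 us, queue_occupancy=118 KB
  hop 3: switch_id=317, ingress_port=1, egress_port=5, hop_latency=5 us, queue_occupancy=1 KB

The elevated hop latency and queue occupancy at switch 204 would immediately point the collector (for example, HPE IMC) at the congested device.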

Use Cases

  • Case 1a: Real-time fault detection and isolation or alert: Congested/oversubscribed links and devices, imbalanced links (LAG, ECMP), loop.
  • Case 1b: Interactive analysis & troubleshooting: On-demand path visualization; Traffic matrix generation; Triage incidents of congestion.
  • Case 1c: Path Verification of bridging/routing, SLA, and configuration effects.
  • Enhanced visibility for all your Network traffic
  • Network provided telemetry data gathered and added to live data
    • Complement out-of-band OAM tools like SNMP, ping, and traceroute
    • Path / Service chain verification
  • Record the packet’s trip as meta-data within the packet
    • Record path and node (i/f, time, app-data) specific data hop-by-hop and end to end
    • Export telemetry data via Netflow/IPFIX/Kafka to Controller/Apps
  • In-Band Network Telemetry can be implemented without degrading forwarding performance
  • Network ASIC vendors have started to add INT as a built-in function within their newest ASICs

The HPE FlexFabric Network Analytics solution is leading the way toward this next frontier in Network Visualization and Analytics.


The message hop count was exceeded.

Details
Product: BizTalk Server
Event ID: 7435
Source: BizTalk Server 3.0
Version: 3.0.4604.0
Message: The message hop count was exceeded.
   
Explanation
The sent message was dropped by MSMQ because it traversed more hops during routing than the allowed hop count.
   
User Action
Verify that the destination format name is correct. Ensure that the MSMQ site topology is defined correctly.
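
For example, a direct format name for a private queue typically has the form shown below (the server and queue names are placeholders). Because a direct format name delivers straight to the destination computer without intermediate MSMQ routing servers, testing with it is a quick way to rule out a routing loop in the site topology:

  DIRECT=OS:ServerName\PRIVATE$\QueueName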
