Dell 2020 Networking & Solutions Technology Trends


  • Joseph White, Senior Distinguished Engineer
  • Joe Ghalam, Distinguished Engineer
  • Mark Sanders, Distinguished Engineer

Since joining Dell as CTO for Networking & Solutions in June 2019, I have been energized by the opportunities and the extent of technology development at Dell Technologies, as well as the deep partner engagement in R&D. Heading into 2020, our customers require distributed and automated infrastructure platforms that support a wide range of use cases from data center automation to edge and 5G enterprise verticals. Let’s take a closer, more technical look at what’s behind these trends.

Cloud-native software drives intelligent automation and fabrics in data centers

Advances in infrastructure automation are leading to full automation stacks incorporating OS configuration management, DevOps tools, and platform stack installers and managers. These bundles enable a new operational model based on fully-automated, zero-touch provisioning and deployment using remote tools for networking, compute and storage infrastructure. This has become a critical requirement for large deployments, delivering the ability to rapidly deploy and manage equipment with the least amount of operational cost at scale. This is a key enabler for edge use cases.

Network configuration and fault mitigation is rapidly becoming automated. Telemetry data availability and integration with orchestration applications allows the network to be more than one static domain. Using data analysis and fault detection, automatic network configuration and self-healing can become a great differentiating factor in selecting one solution over another.
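As a toy illustration of the telemetry-driven fault detection and self-healing loop described above (the thresholds, port names, and remediation hook are all hypothetical), a minimal sketch in Python:

```python
# Minimal sketch of telemetry-driven self-healing. All names and thresholds
# are illustrative, not taken from any real product.

def detect_faults(telemetry, error_threshold=0.01):
    """Return the ports whose error rate exceeds the threshold."""
    return [port for port, stats in telemetry.items()
            if stats["errors"] / max(stats["packets"], 1) > error_threshold]

def self_heal(telemetry, reroute):
    """Drain faulty ports by handing them to a remediation callback."""
    faulty = detect_faults(telemetry)
    for port in faulty:
        reroute(port)  # e.g. push a config change through the automation stack
    return faulty

# Example: eth2 shows a 5% error rate and is handed to the remediation hook.
telemetry = {
    "eth1": {"packets": 10_000, "errors": 3},
    "eth2": {"packets": 10_000, "errors": 500},
}
drained = []
self_heal(telemetry, drained.append)
```

In a real fabric the telemetry would stream from the switches and the callback would drive the orchestration layer rather than appending to a list.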

The tools for infrastructure lifecycle management, including firmware upgrades, OS updates, capacity management and application support, are becoming an integral part of any infrastructure solution. These trends will accelerate with the help of AI software tools this year and continue to expand to every part of the infrastructure.

Micro-services based NOS design fuels the next wave in Open Networking

Network operating systems (NOS) are evolving into flexible cloud-native microservices designs that address many of the limitations of traditional networking platforms. One of the biggest benefits is the ability to support different hardware platforms and customize the services and protocols for specific deployments. Gone are the days when the only option network operators had was to accept a monolithic, generic OS stack with many features that would never be used. This new architecture is critical for supporting edge platforms with constrained CPU and power with targeted networking missions.

Community-based NOS platforms such as SONiC (Software for Open Networking in the Cloud) have the added benefit of accelerating development through a community. SONiC is gaining momentum as a NOS for both enterprises and service providers due to its disaggregated and modular design. By selecting desired containers and services, SONiC can be deployed in many use cases and fit in platforms of many sizes.

The recent increase in industry involvement and community growth has placed SONiC on an accelerated path to support more use cases and features. The increased development activity will continue through 2020 and beyond. SONiC has also grabbed the attention of other projects and organizations such as ONF and TIP/Disaggregated cell site gateways. These projects are looking into ways to integrate SONiC into their existing and new solutions, driving a new set of open networking use cases.

Merchant silicon extends to cover more complex networking requirements

Programmable packet forwarding pipelines, deep buffers, high radix, high line speeds, and high forwarding capacity merchant silicon switches coupled to a new generation of open network operating systems are enabling effective large scale-out fabric-based architectures for data centers. These capabilities will enhance both data center and edge infrastructure, replacing the need for a chassis design or edge routers with custom ASICs. In 2020, for the first time, we expect to see merchant silicon-based network solutions achieve parity with most of the traditional edge and core networking platforms, providing a scale out design that is better aligned to converged infrastructure and cloud requirements.

Programmable silicon/data plane enabling streaming analytics

Programmable data planes are maturing, with P4 compilers (the community approach) and many other languages available for creating customized data pipelines. There is also a growing number of NOSs that support programmable data plane functionality. These new software tools enable the creation of unique profiles to support specific services and use cases, including edge functionality, network slicing, real-time telemetry and packet visibility. These powerful new capabilities provide control and AI-based mitigation, as well as customized observability at large scale in real time. Developers have access to the data pipeline and will be able to create new services that are not possible in traditional networking. This is going to be one of the key new trends in 2020.

Storage fabrics using distributed NVMe-oF over TCP/IP solutions

NVMe has emerged as the most efficient and lowest-latency technology for storage access. NVMe over Fabrics (NVMe-oF) extends the protocol to work across fabric-based networks (Fibre Channel, RoCE, TCP/IP). TCP/IP and RoCE have a clear cost-effectiveness advantage, with 100GbE offering roughly three times the speed of 32Gb FC at about 1/8th of the cost. Between those two, TCP/IP emerges as the solid choice due to similar performance, better interoperability and routing, and the use of lossless networks only where needed. NVMe-oF/TCP transport provides the connectivity backbone for building efficient, flexible, and massive-scale distributed storage systems. The key to unlocking this potential is service-based automation and discovery that controls storage access connectivity within the proven SAN operational approach, with orchestration frameworks extended across multiple local storage networks through federation of both storage services and fabric services.

Distributed edge emerging as a requirement for Industry vertical solutions

Emerging use cases at the far edge for analytics, surveillance, distributed applications and AI are driving the need for new infrastructure designs. Key constraints are the operating environment, physical location, and physical distribution, giving rise to the need for a comprehensive remote automated operational model. New workload requirements are also driving the design. For example, Gartner predicts that “by 2022, as a result of digital business projects, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud*.” New innovations at the edge include converged compute and networking, programmable data plane processors, converged rack-level design, micro/mini data centers, edge storage and data streaming, distributed APIs and data processing. We are at the start of a new phase of development of custom solutions for specific enterprise verticals that will drive new innovations in infrastructure and automation stacks.

Wireless first designs are driving new infrastructure platforms for enterprises and service providers

There is tremendous growth in wireless spectrum and technologies, including 5G, 4G, shared spectrum (CBRS), private LTE, and WiFi, coupled with a new desire to transition to wireless as the preferred technology for LAN, campus, retail, etc. This is driving the need for wireless platform disaggregation into cloud-native applications for the core, radio access network (RAN) and WiFi that support multiple wireless technologies on shared infrastructure. Disaggregation is starting at the core and moving to the edge, leveraging edge compute with automation in a distributed model. This brings all the benefits of cloud economics, automation and developer access to wireless infrastructure, creating massive new efficiencies and new services.

Smart NICs are evolving to address massively distributed edge requirements

The new generation of powerful Smart NICs extend the model of simple NIC offload and acceleration by adding heavy data plane processing capacity, programmable hardware elements, and integrated switching capabilities. These elements allow many data flow and packet processing functions to live on the smart NIC, including networking, NVMe offload, security, advanced telemetry generation, advanced analytics, custom application assistance, and infrastructure automation. Smart NICs will be a key element in several valuable use cases: distributed network mesh, standalone intelligent infrastructure elements (e.g. radio controllers), autonomous infrastructure, distributed software defined storage, and distributed data processing. Smart NICs will serve as micro-converged infrastructure extending the range of edge compute to new locations and services beyond edge compute.

The age of 400G – higher speeds driving new fundamental network switch architecture

Native 400G switches coupled with 400G optical modules are now available, breaking the 100G speed limit for data center interconnects. This is creating power and thermal challenges, as well as space and layout challenges, and is moving the industry toward co-packaged optics.

In addition, new silicon photonics (400ZR and others) enable long-reach Dense Wavelength Division Multiplexing (DWDM) transport given the availability of merchant optics DSPs. This is going to fundamentally transform networking, data center interconnect and edge aggregation by eliminating the need for stand-alone DWDM optical networks, thereby bringing great efficiencies, automation and software-defined capabilities to the entire networking stack.

Stay tuned—2020 is set to be a year packed with innovation as we strive to deliver customers the technology that will drive their businesses into the future.

Additional Resources

*Gartner Top 10 Strategic Technology Trends for 2020, 21 October 2019, David Cearley, Nick Jones, David Smith, Brian Burke, Arun Chandrasekaran, CK Lu


Firewall rules “host” logic

I need a solution


I’ve already found , but still have a small question left.

I want to create a firewall rule which will allow specific traffic only if the local IP and the local MAC match specific values.

Following the mentioned article, “The hosts that you define on either side of the connection (between the source and the destination)” use an OR condition, and “Selected hosts” use an AND condition.

That’s easy to understand if we are talking about matching only IP addresses, for example (we take any IP from the “Local” block, any IP from the “Remote” block, and connect them with an AND statement).

But in my case, both of my conditions (local IP and local MAC) are on the same side – does that mean that only “OR” is possible? Is there any way I can connect both of these rules with “AND”?
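In code terms, the behaviour being asked for is a single rule whose two match fields combine with AND; a minimal Python sketch (the addresses below are made up):

```python
# Allow traffic only when BOTH fields match; the values are hypothetical.
ALLOWED_IP = "192.168.1.10"
ALLOWED_MAC = "aa:bb:cc:dd:ee:ff"

def allow(local_ip: str, local_mac: str) -> bool:
    # An OR rule would pass if either field matched; the question asks for AND,
    # so both comparisons must succeed for the traffic to be allowed.
    return local_ip == ALLOWED_IP and local_mac == ALLOWED_MAC
```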



7021934: Enabling File Transfer on an IBM i over TCP/IP in Reflection

Verifying Host Services on the IBM System i

Before you can use Reflection to transfer files over TCP/IP, you must verify that Host Servers is installed on the IBM i, and then start the LIPI servers on the IBM System i.

To verify whether Host Servers is available, follow these steps:

  1. Open Reflection and connect to IBM i with a terminal session.
  2. Enter the command DSPSFWRSC.
  3. In the Software Resource list, look for at least one Option 12 (i5/OS – Host Servers).

If Option 12 is listed, continue with the next section. If you do not see Option 12 listed, you will need to install the host servers from the i5/OS installation media before continuing.

Starting the LIPI Servers

The LIPI servers must be started because they are not available by default.

Note: You need *ALLOBJ system administrator privileges on the IBM i to successfully complete these instructions.

On the IBM i, you can start the LIPI servers all at once or individually.

  • To start all i5/OS LIPI servers, use this command:

Note: After executing the above command you may see the error, “Host server daemon jobs unable to communicate using IPX.” You can ignore this error because IPX is not needed.

  • To start the required LIPI servers individually, use these commands:
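The commands themselves are not shown above. Assuming “LIPI servers” maps to the standard i5/OS host servers (an assumption, since the original command text is not preserved here), they are typically started with STRHOSTSVR, either all at once or per server:

```
STRHOSTSVR SERVER(*ALL)        /* start all host servers at once */

STRHOSTSVR SERVER(*CENTRAL)    /* or start the required servers  */
STRHOSTSVR SERVER(*DATABASE)   /* individually                   */
STRHOSTSVR SERVER(*SIGNON)
STRHOSTSVR SERVER(*SVRMAP)
```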




Determining Which LIPI Servers Are Started

To confirm which LIPI servers are started, follow these steps:

  1. On the IBM System i, issue this command:
  2. Select option 3: Work with TCP/IP connection status.
  3. Look for the following items under the Local Port heading to verify that the required LIPI servers are running (names may be displayed truncated):
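Assuming the standard i5/OS NETSTAT menu (the actual command text is not shown in step 1), the sequence would look like:

```
NETSTAT        /* then select option 3, Work with TCP/IP connection status */
```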

For non-SSL:





For SSL:




Japanese and Other Double-byte Systems

The NETSTAT command will not display the LIPI servers in Japanese or other double-byte operating systems. Instead, look for the TCP port numbers of the LIPI servers.

The following table lists the default ports for the required LIPI servers:

LIPI Server        TCP Port Number                           SSL Port Number

Central server
Database server
Signon server
Server mapper      449 (must not be changed from default)

You can use non-default ports for all of the servers except the server mapper, which must be on port 449. This port allows the PC to query the host to determine where the other ports are mapped.

Verifying Prestart Jobs

Before you can make a connection and transfer files, one of the following prestart jobs must be started: QZDAINIT or QZDASOINIT. A prestart job is started when you start the corresponding host server daemon. The prestart job then waits and listens for an incoming connection before going to an active state.

  • QZDAINIT is for an SNA (or 802.2) connection. QZDAINIT is the Server program.
  • QZDASOINIT is for TCP/IP, specifically a TCP/IP or IPX socket connection. QZDASOINIT is the Server program, and QZDASRVSD is the Server Daemon program.

To verify whether a prestart job is running, enter the following command on the i5/OS command line:
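The command itself is not shown here. On a standard i5/OS system, active prestart jobs can usually be found with WRKACTJOB (an assumption on this copy):

```
WRKACTJOB JOB(QZDASOINIT)    /* or QZDAINIT for SNA connections */
```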


Configuring File Transfer in Reflection for IBM

Using Reflection, you can transfer files directly to or from the IBM i host over TCP/IP without starting a terminal session. Follow these steps:

  1. In the Reflection Workspace, open a session document or create a new terminal session document.
  2. On the Ribbon, click the File Transfer Settings icon. The Transfer dialog box is displayed.
  3. On the Protocol tab, select AS/400.
  4. On the AS/400 tab, select TCP/IP from the Transport drop-down menu.
  5. Select LIPI from the Host TP drop-down menu.
  6. In the System name field, enter the name of the IBM i host, if you have not already specified this information.
  7. Click OK.
  8. In the Transfer dialog box, specify the files you wish to transfer.
  9. Click the appropriate Transfer button.

Configuring File Transfer in Reflection for the Web

Using Reflection for the Web, you can transfer files directly to or from the AS/400 host over TCP/IP without starting a terminal session. Follow these steps:

  1. Launch the Administrative WebStation and under Tools, click Session Manager.
  2. Click Add to create a new session.
  3. Select the IBM AS/400 Data Transfer option as the Web-Based Session Type and enter a Session name, such as AS/400 Transfer. Click Continue.
  4. Configure your session, and then click Launch to start your file transfer session.
  5. To configure file transfer, on the Connection menu, click Connection (or Session) Setup. In the dialog box, enter your Host, User ID, and Password.
  6. (Optional) To configure security, click the SSL/TLS (or Security) button.
  7. Click OK to connect to the AS/400.
  8. Click File > Save and Exit to save your file transfer session.

To change your file transfer settings, open your file transfer session and on the Options menu, click To Host or click From Host to configure the settings.


1. If you receive the error, “TCP connection dead,” check the following settings on the IBM System i:

  • Verify that the LIPI servers are running. (This is the most likely cause of the error.)
  • Check whether the default ports are being used. See the table above for default values.

If non-default ports are being used, verify that the Use default ports check box is cleared in the advanced file transfer setup dialog box.

  • Check whether any of the specified ports—default or not—are blocked on the host.

To verify whether files can be transferred to or from the designated host, try a different transfer protocol, such as MPTN.

2. If you receive the error, “TCP – Unable to open Connection”, ensure that a certificate on the IBM System i has been assigned to each of the following:

  • Central server
  • Database server
  • Signon server
  • File server

  • Symptom: Initial SSL/TLS connection to the mainframe is successful, yet the LIPI transfer fails very quickly with the message, “TCP – Unable to open Connection”.
  • Host services were verified, LIPI servers are running for SSL, appropriate prestart jobs are running, and Reflection is configured correctly for LIPI transfers.


DHCP Client Service will not start and shows “Access Denied” error

It’s unclear where this problem comes from, but it appears to be an issue with the way the registry hives are combined when building boot images. You can see this while editing layers, in published images (in App Layering 4) or in edited desktops (Unidesk 2). To fix this, you need to edit the permissions on a registry key.

Give “Full control” permission to these three users:

  • DHCP
  • Network Service
  • Add local admin: MachineName\administrator

for the Registry folder “Tcpip” located at:


Click Advanced, and on that page, check the “Replace all child object permissions with inheritable permissions from this object” box. DHCP and Network Service should already be listed, so just set them to Full Control. You will need to manually add a new record for your local Administrator account, and set that to Full Control as well.

This can be done in the published image or in each layer as you find the problem, but if you preemptively do it in the OS Layer itself, you should find that the fix automatically propagates out to the layers and images (and desktops in Unidesk 2), so you don’t have to repeat it each time you find it elsewhere.


7021470: Configuring TCP/IP and LAN Adapter for an IBM System i

Verify That TCP/IP Is Configured Properly on the System i

TCP/IP is configured properly on the System i when TCP/IP is enabled and you can ping the host. Verify your system as follows:


To ensure that TCP/IP is enabled, enter the following command at the i5/OS command line:

  • If NETSTAT reports that “TCP/IP is not started,” refer to the section below on Installing and Configuring TCP/IP on the System i.
  • If TCP/IP is configured, you may be prompted to verify the settings.
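As the first bullet indicates, the command referenced above is NETSTAT:

```
NETSTAT        /* reports "TCP/IP is not started" if TCP/IP is not enabled */
```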


To check whether you can ping the System i, enter the PING command at the i5/OS command line. An example:


The above command would have the System i ping a system with an IP address of
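A typical invocation (the address below is hypothetical):

```
PING RMTSYS('192.0.2.10')
```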

Configuring TCP/IP on the System i

To install and configure TCP/IP on the System i, enter the following command at the i5/OS command line:


When you enter CFGTCP you will see up to twenty-two different configuration options. Verify and configure the required settings below.

  • TCP/IP Interfaces.

The TCP/IP Interface description is typically:

    • Line Description: ETHERNET
    • Line Type: ELAN
  • TCP/IP Routes.

Check the DFROUTE and MTU entries.

    • DFROUTE indicates the router IP address.
    • Set route’s MTU size to *IFC (recommended).
  • TCP/IP Attributes.

Enter the following command at the AS/400 system prompt:


The administrator must start the TCP/IP transport once configuration is complete.
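Assuming the standard i5/OS command set (an assumption, since the originals are not shown), the TCP/IP attributes are reviewed with CHGTCPA and the transport is started with STRTCP:

```
CHGTCPA    /* press F4 to prompt and review the TCP/IP attributes */
STRTCP     /* start the TCP/IP transport once configured          */
```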

Verify That the LAN Adapter is Installed and Functional

To check whether the LAN adapter is installed and functioning, follow these steps:

  1. Enter the following command at the i5/OS command line:
  2. Find the Ethernet Port or Tokenring Port entry in the listing. If there is no value for Ethernet Port or Tokenring Port, then i5/OS is not automatically reporting the existence of an Ethernet adapter. This indicates either a hardware failure or that no LAN adapter is installed on the system.
  3. Note the value of the Resource Line Description entry: L_____. This typically corresponds to the LAN adapter (Ethernet or Token-Ring) and is directly above the Port resource line.
  4. Select option 5 to display configuration descriptions.
  5. Enter 8 to work with Configuration Status.
  6. Verify that the status is Active.
  7. For an inactive device, enter option 1 (Vary On).
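The command for step 1 is not shown; on a standard i5/OS system the communication resources, including LAN adapters, are typically listed with:

```
WRKHDWRSC TYPE(*CMN)    /* work with communication hardware resources */
```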

Checking the LAN Interface (LINE) Configuration

The way you check a LAN interface configuration depends on whether you have an Ethernet or Token-Ring adapter. You will need to know the Resource Line Description value that you got in step three of the Verify the LAN Adapter is Installed and Functional section above. Steps are included in separate sections below.


If you have an Ethernet adapter follow these steps:

  1. Enter the following command at the i5/OS command line:

Press F4 and you will be prompted for the configuration entries. Do not press Enter. In the Line Description field, the recommended entry is ETHERNET.
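The command being prompted here is not shown; given the F4 prompt and the ETHERNET line description, it is plausibly CRTLINETH (or CHGLINETH for an existing line), for example:

```
CRTLINETH LIND(ETHERNET)    /* press F4 to prompt; do not press Enter */
```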


How to Use Policy Based TCP Profile in NetScaler

Note: Policy based TCP profile is not present in 10.x. It is available only from release 11.0 build 64.x and 11.1 onward.

How to configure policy based TCP profile in NetScaler

Consider the following requirement from a customer deployment: the customer has 3G and 4G subscribers, where all 3G subscribers come in through VLAN-1 and 4G subscribers through VLAN-2. Based on this parameter, we can assign different TCP profiles to these clients.


Using the APPQOE policy feature, we have created two policies based on VLAN IDs. The action configured for each APPQOE policy selects the TCP profile for the subscriber traffic. When a request arrives from a client, policy evaluation takes place and, based on the VLAN ID, the corresponding TCP profile is applied through the configured APPQOE action. For instance, in the configuration below, when 3G traffic comes into NetScaler on VLAN1, the APPQOE policy “appqoe_3G” is hit and the corresponding action “action_3G” applies 3G_profile to the session.


  • add appqoe action action_3G -tcpProfile 3G_profile

  • add appqoe action action_4G -tcpProfile 4G_profile

  • add appqoe policy appqoe_3G -rule "" -action action_3G

  • add appqoe policy appqoe_4G -rule "" -action action_4G

  • bind lb vserver tcpopt_traffic_manager -policyname appqoe_3G -priority 1

  • bind lb vserver tcpopt_traffic_manager -policyname appqoe_4G -priority 2
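The -rule expressions above are blank in this copy. A plausible VLAN-based expression (an assumption; check the exact syntax against your NetScaler policy expression reference) would be:

```
add appqoe policy appqoe_3G -rule "CLIENT.VLAN.ID.EQ(1)" -action action_3G
add appqoe policy appqoe_4G -rule "CLIENT.VLAN.ID.EQ(2)" -action action_4G
```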

Policy based TCP Profiles using configuration utility

Navigate to AppExpert -> AppQoE


APPQOE Policy Examples

Some examples of APPQOE policies keyed on other parameters, such as source IP, HTTP parameters, or subscriber-specific information, are as follows:

TCP/IP specific rule:

add appqoe policy <name> -rule "CLIENT.IP.SRC.EQ(<ip-address>)" -action <action-name>

HTTP specific rule:

add appqoe policy apppol1 -rule "HTTP.REQ.URL.CONTAINS(\"5k.html\")" -action appact1

add appqoe policy apppol2 -rule "HTTP.REQ.URL.CONTAINS(\"500.html\")" -action appact2

Subscriber specific rule:

add appqoe policy apppol1 -rule "SUBSCRIBER.AVP(250).VALUE.CONTAINS(\"hi\")" -action appact1

add appqoe policy apppol2 -rule "SUBSCRIBER.SERVICEPATH.IS_NEXT(\"SF1\")" -action appact2

This feature leverages the flexibility available in APPQOE policies and actions to dynamically select the TCP profile required for the traffic going through NetScaler.



Best practice for load balancing on outbound proxy IP address

I need a solution

We have the following challenge: several thousand users are accessing a web service through a ProxySG (explicit deployment). At some point we will run into a port exhaustion issue.
Apart from increasing the source port range by setting “#(config) tcp-ip inet-lowport xxx” as explained in…, the usual advice is to add more IP addresses to the proxy and split the connections between those IP addresses using reflect_ip(proxyIPaddress1), reflect_ip(proxyIPaddress2), etc.

Now my question is: What is the best way to distribute clients between the outgoing proxy IP addresses? I’d like to use a smarter solution than simple client subnets, as in

client.address= reflect_ip(proxyIPaddress1)
client.address= reflect_ip(proxyIPaddress2)

because there are a lot of subnets, and I’d have to manually calculate how many clients are in each subnet on average to create groups of equal size. Also, of course, subnets are created, deleted, and changed every once in a while. Is there a way to distribute the clients automatically? I thought of creating groups like

condition=clientsForIP1 reflect_ip(proxyIPaddress1)

define condition clientsForIP1

So all clients with an odd IP address are automatically sent via IPaddress1, regardless of how many subnets exist. However, I’m not sure this is really efficient. Do you have any other ideas? Is there a way I can perform mathematical calculations inside CPL?
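One way to think about the odd/even idea more generally (this is only a sketch; CPL itself has no arithmetic, so such logic would have to be approximated with address patterns or generated externally) is a modulo mapping from client address to egress IP:

```python
# Deterministic client-to-egress-IP mapping by address modulo. The egress
# addresses are hypothetical; the point is the even spread and stable mapping.
import ipaddress

EGRESS_IPS = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]

def egress_for(client_ip: str) -> str:
    # Treat the client address as an integer and take it modulo the number
    # of egress IPs: every client maps to exactly one address, and the
    # mapping does not change when subnets are added or removed.
    idx = int(ipaddress.ip_address(client_ip)) % len(EGRESS_IPS)
    return EGRESS_IPS[idx]
```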