Dell 2020 Networking & Solutions Technology Trends


  • Joseph White, Senior Distinguished Engineer
  • Joe Ghalam, Distinguished Engineer
  • Mark Sanders, Distinguished Engineer

Since joining Dell as CTO for Networking & Solutions in June 2019, I have been energized by the opportunities and the extent of technology development at Dell Technologies, as well as the deep partner engagement in R&D. Heading into 2020, our customers require distributed and automated infrastructure platforms that support a wide range of use cases from data center automation to edge and 5G enterprise verticals. Let’s take a closer, more technical look at what’s behind these trends.

Cloud-native software drives intelligent automation and fabrics in data centers

Advances in infrastructure automation are leading to full automation stacks incorporating OS configuration management, DevOps tools, and platform stack installers and managers. These bundles enable a new operational model based on fully automated, zero-touch provisioning and deployment using remote tools for networking, compute, and storage infrastructure. This has become a critical requirement for large deployments, delivering the ability to rapidly deploy and manage equipment at scale with minimal operational cost. It is also a key enabler for edge use cases.

Network configuration and fault mitigation are rapidly becoming automated. Telemetry data availability and integration with orchestration applications allow the network to be more than a single static domain. With data analysis and fault detection, automatic network configuration and self-healing can become a key differentiator in selecting one solution over another.

The tools for infrastructure lifecycle management, including firmware upgrades, OS updates, capacity management and application support, are becoming an integral part of any infrastructure solution. These trends will accelerate with the help of AI software tools this year and continue to expand to every part of the infrastructure.

Microservices-based NOS design fuels the next wave in Open Networking

Network operating systems (NOS) are evolving into flexible cloud-native microservices designs that address many of the limitations of traditional networking platforms. One of the biggest benefits is the ability to support different hardware platforms and customize the services and protocols for specific deployments. Gone are the days when the only option network operators had was to accept a monolithic, generic OS stack with many features that would never be used. This new architecture is critical for supporting edge platforms with constrained CPU and power with targeted networking missions.

Community-based NOS platforms such as SONiC (Software for Open Networking in the Cloud) have the added benefit of accelerating development through a community. SONiC is gaining momentum as a NOS for both enterprises and service providers due to its disaggregated and modular design. By selecting desired containers and services, SONiC can be deployed in many use cases and fit in platforms of many sizes.

Recent increased industry involvement and community growth have placed SONiC on an accelerated path to support more use cases and features. This development activity will continue through 2020 and beyond. SONiC has also grabbed the attention of other projects and organizations such as ONF and TIP/Disaggregated cell site gateways, which are looking into ways to integrate SONiC into their existing and new solutions, driving a new set of open networking use cases.

Merchant silicon extends to cover more complex networking requirements

Merchant silicon switches with programmable packet forwarding pipelines, deep buffers, high radix, high line speeds, and high forwarding capacity, coupled with a new generation of open network operating systems, are enabling effective large scale-out fabric-based architectures for data centers. These capabilities will enhance both data center and edge infrastructure, replacing the need for chassis designs or edge routers built on custom ASICs. In 2020, for the first time, we expect merchant silicon-based network solutions to achieve parity with most traditional edge and core networking platforms, providing a scale-out design that is better aligned to converged infrastructure and cloud requirements.

Programmable silicon/data plane enabling streaming analytics

Programmable data planes are maturing, with P4 compilers (the community-driven approach) and many other available languages for creating customized data pipelines. There is also a growing number of NOSs that support programmable data plane functionality. These new software tools enable the creation of unique profiles to support specific services and use cases, including edge functionality, network slicing, real-time telemetry and packet visibility. These powerful new capabilities provide control and AI-based mitigation, as well as customized observability at large scale in real time. Developers have access to the data pipeline and will be able to create new services that are not possible in traditional networking. This is going to be one of the key new trends in 2020.

Storage fabrics using distributed NVMe-oF over TCP/IP solutions

NVMe has emerged as the most efficient, lowest-latency technology for storage access. NVMe over Fabrics (NVMe-oF) extends the protocol to work across fabric-based networks (Fibre Channel, RoCE, TCP/IP). TCP/IP and RoCE have a clear cost-effectiveness advantage, with 100GbE delivering roughly three times the bandwidth of 32G FC at about 1/8th of the cost. Between those two transports, TCP/IP emerges as the solid choice due to similar performance, better interoperability and routing, and use of lossless networks only where needed. NVMe-oF/TCP transport provides the connectivity backbone for building efficient, flexible, massive-scale distributed storage systems. The key to unlocking this potential is service-based automation and discovery that controls storage access connectivity within the proven SAN operational approach, with orchestration frameworks extended across multiple local storage networks through federation of both storage services and fabric services.

Distributed edge emerging as a requirement for industry vertical solutions

Emerging use cases at the far edge for analytics, surveillance, distributed applications and AI are driving the need for new infrastructure designs. Key constraints are the operating environment, physical location, and physical distribution, giving rise to the need for a comprehensive remote automated operational model. New workload requirements are also driving the design. For example, Gartner predicts that “by 2022, as a result of digital business projects, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud*.” New innovations at the edge include converged compute and networking, programmable data plane processors, converged rack-level design, micro/mini data centers, edge storage and data streaming, distributed APIs and data processing. We are at the start of a new phase of development of custom solutions for specific enterprise verticals that will drive new innovations in infrastructure and automation stacks.

Wireless first designs are driving new infrastructure platforms for enterprises and service providers

There is tremendous growth in wireless spectrum and technologies including 5G, 4G, shared spectrum (CBRS), private LTE, and WiFi, coupled with a new desire to transition to wireless as the preferred technology for LAN, campus, retail, etc. This is driving the need for wireless platform disaggregation into cloud-native applications for core, radio access network (RAN) and WiFi that support multiple wireless technologies on shared infrastructure. Disaggregation is starting at the core and moving to the edge, leveraging edge compute with automation in a distributed model, bringing all the benefits of cloud economics, automation and developer access to wireless infrastructure and creating massive new efficiencies and new services.

Smart NICs are evolving to address massively distributed edge requirements

The new generation of powerful Smart NICs extends the model of simple NIC offload and acceleration by adding heavy data plane processing capacity, programmable hardware elements, and integrated switching capabilities. These elements allow many data flow and packet processing functions to live on the Smart NIC, including networking, NVMe offload, security, advanced telemetry generation, advanced analytics, custom application assistance, and infrastructure automation. Smart NICs will be a key element in several valuable use cases: distributed network mesh, standalone intelligent infrastructure elements (e.g. radio controllers), autonomous infrastructure, distributed software-defined storage, and distributed data processing. Smart NICs will serve as micro-converged infrastructure, extending edge compute to new locations and services.

The age of 400G – higher speeds driving fundamentally new network switch architectures

Native 400G switches coupled with 400G optical modules are now available, breaking the 100G speed limit for data center interconnects. This creates power and thermal challenges, as well as space and layout constraints, and is moving the industry toward co-packaged optics.

In addition, new silicon photonics (ZR400 and others) enable long-reach Dense Wavelength Division Multiplexing (DWDM) transport given the availability of merchant optics DSPs. This is going to fundamentally transform networking, data center interconnect and edge aggregation by eliminating the need for stand-alone DWDM optical networks, thereby bringing great efficiencies, automation and software-defined capabilities to the entire networking stack.

Stay tuned—2020 is set to be a year packed with innovation as we strive to deliver customers the technology that will drive their businesses into the future.

Additional Resources

*Gartner Top 10 Strategic Technology Trends for 2020, 21 October 2019, David Cearley, Nick Jones, David Smith, Brian Burke, Arun Chandrasekaran, CK Lu



PVS vDisk Inconsistency – Replication Status Shows Error "Server Not Reachable" When NIC Teaming is Configured

  • Verify whether NIC Teaming is configured as Active-Active; if so, reconfigure it as Active-Passive.


Open the NIC team configuration and check whether the team is Active-Active.

Verify the NICs configured under Active Adapters and confirm that no Standby Adapters are configured.

Reconfigure the team so that both active and standby adapters are configured.

Please note that NIC team configuration differs between adapter manufacturers; check the manufacturer's configuration guide and follow the appropriate steps to reconfigure.

Reconfiguring NIC teaming may interrupt the network connection. Please take proper precautions to avoid production impact.


  • Verify the MTU setting of the NICs on all PVS servers

Since the replication status is synced via UDP on PVS port 6895, a communication failure over this UDP port will also affect the replication status.

Mismatched MTUs on the NICs of PVS servers will also block this UDP communication between them. For example, if one NIC has an MTU of 1500 (the default) and another has an MTU of 6000, UDP packets larger than 1500 bytes will be lost because of the fragmentation mismatch: the server with the 6000 MTU does not fragment a packet that is larger than 1500 but smaller than 6000 bytes, yet the peer with the 1500 MTU cannot accept it, causing packet loss.

Check the MTU value on all PVS servers with the following command:

netsh interface ipv4 show subinterface

If the MTUs differ across PVS servers, change them to the same value (the default of 1500 is recommended):

netsh interface ipv4 set subinterface "Ethernet" mtu=1500 store=persistent

Please replace Ethernet with the NIC name of your PVS server.



When Accessing VPN, Address Gets Stuck At URL: https://

Modify the f_ndisagent file under the /var/netscaler/gui/vpns folder.

Change:

window.location = "http://" + window.location.hostname + ":8080/vpns/services.html" ;

to:

window.location = "https://" + window.location.hostname + "/vpns/services.html";

After this change the plugin no longer tries to handle the services.html request, so the request goes directly to the Gateway server.

(Note: these changes do not survive a reboot.)

The f_ndisagent file is sometimes picked up from a different location, /netscaler/gui/vpns.

If the file under /var/netscaler/gui/vpns has already been edited and no longer contains 8080 but the issue still occurs, edit the file under /netscaler/gui/vpns as well.
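The edit above can also be scripted with sed. The following is a minimal sketch run against a local demo copy of the line; on the appliance you would point it at the real file(s) after taking a backup:

```shell
# Demo of the f_ndisagent edit using sed, against a scratch copy.
# On the appliance, target /var/netscaler/gui/vpns/f_ndisagent
# (and, if needed, /netscaler/gui/vpns/f_ndisagent) after backing it up.
FILE=./f_ndisagent.demo
cat > "$FILE" <<'EOF'
window.location = "http://" + window.location.hostname + ":8080/vpns/services.html" ;
EOF

# Switch the scheme to https and drop the :8080 port; -i.bak keeps a backup copy.
sed -i.bak 's|"http://" + window.location.hostname + ":8080/vpns|"https://" + window.location.hostname + "/vpns|' "$FILE"

cat "$FILE"
```

Review the modified file before relying on it; the substitution is anchored to the exact original line shown in this article.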



Configure StorageZone Controller for TLS v1.2 Inbound Connections

Due to known vulnerabilities in older SSL/TLS protocols, administrators are looking to limit inbound connections to StorageZones Controllers to TLS v1.2. The following steps provide guidance on setting up your StorageZones Controller to accept TLS v1.2 connections, as well as steps to configure ShareFile clients to communicate over TLS v1.2.

Support is available as of StorageZones Controller v4.0 or higher. Validation was performed with an external-facing NetScaler configured with TLS v1.2 only for in-bound connections to the ContentSwitching vServer.

If protocols earlier than TLS v1.2 are disabled on the StorageZones Controller, all client software components that interact with the StorageZone must also support TLS v1.2. Windows sync clients require Microsoft .NET Framework 4.5.2 and registry updates to support TLS v1.2. Mac sync clients do not support TLS v1.2. See below for details on how to configure Windows sync machines to use TLS v1.2.

Setup – NetScaler Configuration

At the Content Switch Virtual Server, modify SSL Parameters and enable TLS v1.2. You can also disable all other protocols.


ShareFile Windows Client Configuration


  1. .NET 4.5.2 or higher
  2. The following registry key(s) must be applied to your Windows client operating system in order for .NET applications to communicate over TLS v1.2 outbound. A client OS restart is required.

IMPORTANT: The following registry setting allows .NET 4.0 applications to use TLS v1.2. This setting will apply to all .NET 4 applications installed, so please use caution when applying to ensure there will be no impacts to any other applications.



For 64-Bit systems, also include:



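The registry values themselves did not survive in this copy of the article. For reference, the keys Microsoft documents for forcing .NET 4.x applications onto strong crypto (TLS v1.2) are typically the following; verify against current Microsoft and Citrix documentation before applying:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

; For 64-bit systems, also include:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001
```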
Tested Windows Operating Systems

  1. Windows 7 32-bit/64-bit
  2. Windows 8.1 32-bit/64-bit
  3. Windows 10 32-bit/64-bit

Tested Windows Clients

  1. ShareFile Sync Client for Windows
  2. ShareFile Outlook Plugin
  3. ShareFile Desktop App
  4. ShareFile Drive Mapper
  5. ShareFile PowerShell client

Tested ShareFile Mobile Clients

  1. iOS 8/9
  2. Windows 10 Metro
  3. Android 4.4.2, 5.0.2, 6

Tested Web Browsers

  1. IE 10 / 11 / Edge
  2. Chrome
  3. Firefox
  4. Safari

NetScaler Tested

  1. NetScaler 11.0 63.16

Not Supported

  1. ShareFile Sync for Mac
  2. Windows 8.1 Metro
  3. SFCLI



Ghost multicast Server to client negotiation



We are using a Ghost server to image multiple clients using multicast.

Just a question on bandwidth and communication.

When the image is being transferred, my understanding is that the communication is via UDP.

Does anyone know how the client–server communication for setup works?

Also, what determines the max transfer speed? I know this is the slowest client, but how does this client communicate the speed it is happy with to the server? Is this again TCP?

Is there a TCP "management" stream between the client and server checking for variations in speed capabilities?

Any help, or pointers to where I can find this, would be much appreciated.




During the installation of symantec in linux shows me this Error: No drivers are loaded into kernel


[root@localhost paquetelinuxrpm]# sudo ./ -i
Starting to install Symantec Endpoint Protection for Linux
Performing pre-check…
Pre-check succeeded
Begin installing virus protection component
Preparando…                         ################################# [100%]
Performing pre-check…
Pre-check is successful
Actualizando / instalando…
   1:sep-14.2.3335-1000               ################################# [100%]
Virus protection component installed successfully
Begin installing Auto-Protect component
Preparando…                         ################################# [100%]
Performing pre-check…
Pre-check is successful
Actualizando / instalando…
   1:sepap-x64-14.2.3335-1000         ################################# [100%]
Auto-Protect component installed successfully
Begin installing GUI component
Preparando…                         ################################# [100%]
Performing pre-check…
Pre-check is successful
Actualizando / instalando…
   1:sepui-14.2.3335-1000             ################################# [100%]
GUI component installed successfully
Pre-compiled Auto-Protect kernel modules are not loaded yet, need compile them from source code
Build Auto-Protect kernel modules from source code failed with error: 1
Running LiveUpdate to get the latest defintions…
Update was successful
Installation completed
Daemon status:
symcfgd                         [running]
rtvscand                        [running]
smcd                            [running]
Error: No drivers are loaded into kernel.
Auto-Protect starting
Protection status:
Definition:     Waiting for update.
AP:             Malfunctioning
The log files for installation of Symantec Endpoint Protection for Linux are under ~/:




Integrate SSLVA with PaloAlto-VM with a Cisco SW in between


Hi All,

We are trying to deploy the below scenario:

Client Subnet –> SSLv “fail to appliance” –> SW –> PaloAlto-VirtualAppliance –> SW –> SSLv –> Gateway –> Internet

Is the above scenario applicable?

If yes, what are the recommended settings for the switch interfaces and cabling?

If no, why not? What are the restrictions for SSLV deployment?




SEP blocks NIC Teaming in Server 2019


Recently I installed a fresh copy of Windows Server 2019 (OS Build 17763.107) on my IBM System x3650 M5 machine with 4 Broadcom NetXtreme Gigabit adapters. As soon as I created a NIC team with the LACP option (same on the switch side) and installed SEP version 14.2.3335.1000 for WIN64BIT, I got disconnected after a restart. Further investigation showed that the NICs individually looked fine, but the teamed NIC interface was crossed out as if the network cable was unplugged.

I upgraded drivers from Lenovo, installed cumulative updates for Windows, and ran the Symantec troubleshooter (which found zero problems related to the NICs), but nothing seems to work.

Symantec support suggested that some rule was blocking traffic. When we removed the "block any any" rule from the firewall rules, the teamed NIC started up. The same happened when we just disabled the firewall module.

I had Server 2012 R2 installed on this machine prior to 2019 and it never had such a problem. A couple of years ago I tried to upgrade it to 2016, but I encountered the same "Cable unplugged" problem with NIC teaming and didn't troubleshoot it much, since it was only for evaluation purposes.

Any ideas? Maybe some of you have encountered the same problem and, more importantly, solved it without just uninstalling SEP for good? 😀



