Unlock a Competitive Advantage with Secure Storage Solutions

Many organizations consider data to be their most valuable asset, but unfortunately it can also be among the most vulnerable. Without a robust data center cybersecurity strategy, the risks of data loss and data unavailability constantly loom, threatening costly outages, downtime, malware attacks and other nontrivial business consequences. Furthermore, when you consider the additional challenges associated with data management, such as compliance requirements and the potential for human error, there are many reasons for IT teams to lose sleep.

These concerns are particularly daunting at smaller and growing organizations, where there are often limited budgets and resources to address and mitigate problems. Although security is a top priority, it’s also the most frequently cited skills shortfall and concern among mid-market IT leaders. The good news is you don’t have to embark on your IT transformation journey alone—and by implementing and streamlining trusted data center solutions you’re equipped to become more competitive.

Trusted Data Center Leaders experience organization-wide benefits

New research from ESG confirms that organizations in the mid-market segment achieve quantifiable benefits by prioritizing the security of their data center and utilizing modern storage platforms. To understand the competitive advantages afforded to Trusted Data Center Leaders, ESG conducted a survey of 1,650 IT decision makers at organizations with 100-999 employees.

Of the organizations surveyed, only seven percent met the criteria to be classified as a Trusted Data Center Leader, meaning they aligned to the following infrastructure, security and data protection best practices:

  • They refresh and retire data center infrastructure regularly, with an average age of server and storage systems at less than three years.
  • They strongly believe that trusted technologies matter and have an organizational commitment to security features across infrastructure technology.
  • They have successfully implemented secure and reliable infrastructure technology, including capabilities to encrypt sensitive data and replicate most or all data to secondary systems.

Despite comprising a small portion of the survey sample, Leaders experience significant benefits compared to their peers who have not fully committed to the same IT practices:

  • Leaders are 7X more likely to view their application and system uptime as excellent.
  • Leaders experience an average 60 percent reduction in the cost of downtime, saving as much as $20M per year.
  • Leaders are twice as successful at attaining customer satisfaction scores that exceed expectations.
  • 92 percent of Leaders report that their infrastructure investments to maximize uptime and availability and minimize security risk have met or exceeded ROI forecasts.

Improve growth and productivity with secure storage infrastructure

Organizations that lead in data center trust not only prevent data outages but also gain greater control of their IT environment and protect the integrity of their brand. Advanced data protection features make this possible by guarding against data corruption, compromise and loss. Looking specifically at storage infrastructure, Leaders consistently reported a higher presence of data protection features that safeguard on-premises data, such as automatic second-site failover capabilities, multi-system replication, self-encrypting drives, and data snapshots and clones.

If you’re not sure how to incorporate these capabilities into your data center strategy, fear not! Dell Technologies offers the broadest portfolio of trusted infrastructure and data protection solutions, specifically designed for the needs of growing businesses. Our secure storage solutions are engineered to store, manage and safeguard data in each array, ensuring you’re protected as you scale. And, because we understand future technology needs can be difficult to predict, we make it easy to adopt transformative offerings with a breadth of flexible payment solutions, including consumption-based and as-a-service options. For additional support, Dell Technologies Services are available to help you confidently deploy and optimize your infrastructure for continued success.

All-inclusive storage solutions that propel your organization further

Dell EMC Unity XT is our award-winning, midrange unified storage offering that is designed for performance, optimized for efficiency and built for a multi-cloud world. Unity XT helps you simplify your path to IT transformation and unlock the full potential of your data with more IOPS, memory and drives to scale to the needs of your business. Your investment is further protected with a dual-active controller architecture and enterprise-class data services. Implementing Unity XT immediately ensures your data is secure and protected with robust features that include native sync/async replication, snapshots and data-at-rest encryption across the entire array. The no-compromise design also includes Dynamic Pools for faster drive re-build times, as well as support for multiple cloud deployment options.

In addition, the PowerVault ME4 Series is an affordable entry storage array that’s purpose-built for SAN and DAS environments and designed for versatility. It offers a block-only architecture with VMware virtualization integration and includes protective services, like snapshots, replication and self-encrypting drives (SEDs), along with a distributed RAID (ADAPT) software architecture.

Both Unity XT and the PowerVault ME4 Series are fully integrated with the Dell Technologies data protection ecosystem, granting you the flexibility to select the best backup and recovery solutions to meet your data center requirements. Rest assured knowing our built-in security capabilities equip your IT teams to more efficiently manage resources and focus on innovation.

Don’t let security be an afterthought in your IT landscape

Become a Trusted Data Center Leader with innovative and secure solutions from Dell Technologies. For peace of mind that your vital data is protected, wherever it resides and as your business evolves, it’s crucial to choose an IT partner you trust, one with an end-to-end approach to security. Formidable storage solutions are part of our DNA at Dell Technologies, and we have extensive experience helping organizations solve IT challenges across their data centers.

To learn more about the Trusted Data Center research from ESG, read the eBook.


Ease Workload Management With Trusted Simplicity From PowerProtect

When IT simplicity is made real through automation, orchestration and self-service management, you can free your staff to become more responsive to innovative app dev activities. Continuing our Direct2DellEMC series on optimizing your infrastructure for critical business workloads, let’s dive into the latter half of the Power & Simplicity theme presented in our last blog entry, “Pamper Workloads with Powered-up Performance from PowerMax.” In that post, we shared advice on making the right technology investments by centering on the use cases, applications and databases that have the most direct impact on your organization’s strategic differentiation. Now that we have covered the cutting-edge performance of Dell Technologies solutions, today we will focus on optimized management.

Beyond the key tasks of improving performance and eliminating network bottlenecks, your organization trusts your IT staff and the infrastructure that you manage to make their jobs easier. IT solutions that embrace simplicity and help get complexities out of the way can directly accelerate business growth. And modernizing with a technology provider that embraces this philosophy allows your business to run critical workloads with ease, so that your staff can focus on supporting innovation.

Implementing a new infrastructure solution for every request is not feasible given today’s staffing and budget limitations. Digital transformation is necessary, but avoiding additional operational complexity is equally crucial when implementing modernized solutions. How can you keep a focus on simplicity and continue to free your staff’s time? By putting their general operations and workloads on autopilot.

Dell Technologies brings you Power & Simplicity to get IT complexities out of your way and accelerate innovation and growth with the right technology. Our technology can provide you with the trusted simplicity that the digital world demands. Designed for ease of use and reliability, our infrastructure allows staff to keep focus on business innovation instead of IT operations. Plus, our integrated full stack solutions (hyperconverged / converged infrastructure) and validated designs offer your team further simplicity.

One example of the simplicity in our products is Dell EMC PowerProtect Data Manager. Ease of install, management and day-to-day operations are key to simplicity in the world of data protection, where workload administrators each have their own precious cargo. PowerProtect Data Manager enables self-service data management for SQL, SAP HANA & Oracle backups. Data owners can perform backup and recovery operations directly from native applications, while IT can maintain central oversight and governance to ensure compliance.

PowerProtect Data Manager is integrated into PowerProtect X series appliances for further effortlessness, and the platform is also available with the new PowerProtect DD series appliances via individual software license or the revamped version of our Data Protection Suite. PowerProtect Data Manager delivers the performance, scalability and resiliency demanded by your most critical workloads. Enjoy a simplified experience across the entire Dell EMC Data Protection portfolio, with specific solutions for SAP HANA, Oracle, SQL, VMware and Kubernetes containers.

Our advanced offerings across the infrastructure stack – including our Dell EMC data protection portfolio – are available with flexible payment options through Dell Technologies On Demand. If you are not familiar, Dell Technologies On Demand also includes value-added services with ProDeploy, ProSupport and Managed Services, which can be bundled effortlessly and paired with all the financial consumption models.

Dell Technologies has the right technology to provide your IT staff with the ease of use that this data era demands. Our powered-up portfolio of infrastructure solutions enables your business to run its critical workloads with ease, allowing you to focus on business innovations. By delivering the performance, scalability and resiliency required across your infrastructure, we help ensure your IT investments will support the workloads that your business depends on to grow.

For more on this topic, please visit here.


NVIDIA is on a Roll, and at Dell Technologies, We’re In

Drawing on the power of their close relationship, Dell Technologies and NVIDIA are streamlining the path to GPU computing.

Thousands of people will be diving into some of the latest and greatest technologies for artificial intelligence and deep learning during NVIDIA’s GTC Digital conference. The online event provides developers, engineers, researchers and other participants with training, insights and direct access to experts in all things related to GPU computing.

The virtual crowd at GTC Digital will include many from Dell Technologies, including data scientists, software developers, solution architects and other experts in the application of the technologies for AI, data analytics and high performance computing.

At Dell Technologies, we’re investing heavily in servers and solutions that incorporate leading-edge GPUs and software from NVIDIA. In this post, I will offer glimpses of some of the exciting things that the Dell Technologies team is working on with our colleagues at NVIDIA.

NVIDIA EGX servers

Dell Technologies was among the first server companies to work with NVIDIA to certify systems for the NVIDIA EGX platform for edge computing. This cloud-native software allows organizations to harness streaming data from factory floors, manufacturing inspection lines, city streets and more to securely deliver next-generation AI, IoT and 5G-based services at scale and with low latency.

Early adopters of the EGX edge platform — which combines NVIDIA CUDA-X software with NVIDIA-certified GPU servers and devices — include such global powerhouses as Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East, as well as the cities of San Francisco and Las Vegas.

NGC Container Registry

Providing fast access to performance-optimized software, NGC is NVIDIA’s hub of GPU-accelerated containers, software development kits and tools for AI, ML and HPC. NGC hosts containers for top AI and data science software, HPC applications and data analytics applications. These containers make it easier to take advantage of NVIDIA GPUs on-premises and in the cloud. Each is fully optimized and works across a wide variety of Dell Technologies solutions.

NGC also hosts pre-trained models to help data scientists build high-accuracy models faster, and offers industry-specific software development kits that simplify developing end-to-end AI solutions. By taking care of the plumbing, NGC enables people to focus on building lean models, producing optimal solutions and gathering faster insights.
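
To make this concrete, here is a minimal sketch of pulling and running an NGC container with the Docker SDK for Python. The image tag shown is an example only (check the NGC catalog for current tags), and GPU passthrough assumes Docker 19.03+ with the NVIDIA container toolkit installed.

```python
import docker

client = docker.from_env()

# Pull a GPU-optimized TensorFlow image from the NGC registry.
# The tag is an example; browse ngc.nvidia.com for current releases.
image = client.images.pull("nvcr.io/nvidia/tensorflow", tag="20.03-tf1-py3")

# Run nvidia-smi inside the container to confirm the GPUs are visible.
# device_requests requires Docker 19.03+ and a recent docker SDK.
output = client.containers.run(
    image,
    "nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())
```

The same image can then serve as the base layer for your own training or inference containers.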

NGC-Ready systems

Dell Technologies offers NGC-Ready systems for data centers and edge deployments. These systems have passed an extensive suite of tests that validate their ability to deliver high performance when running containers from NGC.

The bar here is pretty high. NGC-Ready system validation includes tests of:

  • Single- and multi-GPU deep learning training using TensorFlow, PyTorch and NVIDIA DeepStream Transfer Learning Toolkit
  • High-volume, low-latency inference using NVIDIA TensorRT, TensorRT Inference Server and DeepStream
  • Data science using RAPIDS
  • Application development using the NVIDIA CUDA Toolkit

Along with passing the NGC-Ready tests, these Dell EMC servers have also demonstrated the ability to support the NVIDIA EGX software platform that uses the industry standards of Trusted Platform Module (TPM) for hardware-based key management and Intelligent Platform Management Interface (IPMI) for remote systems management. NGC-Ready systems aim to create the best experience when it comes to developing and deploying AI software from NGC.

Support for vComputeServer

NVIDIA Virtual Compute Server (vComputeServer) with NGC containers brings GPU virtualization to AI, deep learning and data science for improved security, utilization and manageability. And even better, the software is supported on major hypervisor virtualization platforms, including VMware vSphere. This means your IT team can now use the same management tools across the rest of your data center.

Today, vComputeServer is available in Dell EMC PowerEdge servers. And if you’re not sure which PowerEdge server best fits your accelerated workload needs, we can offer some help in the form of an eBook, “Turbocharge Your Applications.” It includes good/better/best options to help you find your ideal solution for various workloads.

GPU-accelerated servers

Dell Technologies offers a variety of NVIDIA GPUs in the Dell EMC PowerEdge server family. The accelerator-optimized Dell EMC PowerEdge C4140 server, for example, offers a choice of up to eight NVIDIA GPUs in configurations with up to four V100 GPUs, or eight V100 GPUs using NVIDIA’s MaxQ setting (150W each). And this is just one of many PowerEdge servers available with NVIDIA GPU accelerators. For a broader and more detailed overview, check out the “Server accelerators” brochure.

Dell EMC Isilon with NVIDIA DGX reference architecture

In another collaboration, NVIDIA and Dell Technologies have partnered to deliver Dell EMC Isilon with NVIDIA DGX reference architectures. These powerful turnkey solutions make it easy to design, deploy and support AI at scale.

Together, Dell EMC Isilon F800 All-Flash Scale-Out Network-Attached Storage (NAS) and NVIDIA DGX systems address deep learning workloads, effectively reducing the time needed for training multi-petabyte datasets and testing analytical models on AI platforms. This integrated approach to infrastructure simplifies and accelerates the deployment of enterprise-grade AI initiatives within the data center.

Supercomputing collaboration

Dell Technologies is collaborating with NVIDIA, along with Mellanox and Bright Computing, on the Rattler supercomputer in our HPC & AI Innovation Lab in Austin, Texas. The Rattler cluster is designed to showcase extreme scalability by leveraging GPUs with NVIDIA NVLink high-speed interconnect technology. Rattler not only accelerates traffic between GPUs inside servers, but also between servers with Mellanox interconnects. Teams use this system for application‑specific benchmarking and workload characterizations.

The bottom line

NVIDIA and Dell Technologies are delivering some great hardware and software to enable and accelerate AI and deep learning workloads. We’re working closely to help our customers capitalize on all the good things that we are doing together.

We’ll miss seeing you in person at GTC 2020; however, you can check out Dell Technologies sessions online, including:

  • Dell Precision Data Science Workstation for the New Era of AI Productivity [S22535]
  • Tuning GPU Server for DL Performance [S21501]
  • Quantify the Impact of Virtual GPUs Using NVIDIA nVector [S21988]
  • Weakly Supervised Training to Achieve 99% Accuracy for Retail Asset Protection [S21427]
  • How NVIDIA Quadro Virtual Workstations, Virtual PCs and Virtual Apps are Transforming Industries [S21670]


Unlock Real-Time, GPU-Driven Insights With Azure Stack Hub

As you may have seen, Microsoft announced the public preview of GPU capabilities for Azure Stack Hub. What does this mean? Well, GPU support in Azure Stack Hub unlocks a variety of new solution opportunities. For customers running training and inference workloads on Azure or looking to run applications on Azure N-Series virtual machines, this preview will bring those capabilities to Azure Stack Hub. Visualization is another targeted use case, where customers are looking to leverage GPU capabilities to render large amounts of data on specific targets closer to where the data is generated.

To address these scenarios, Dell Technologies, in collaboration with Microsoft, is excited to announce upcoming enhancements to our Dell EMC Integrated System for Microsoft Azure Stack Hub portfolio that will use GPU-accelerated AI and ML capabilities to unlock valuable, actionable information from large on-premises data sets at the intelligent edge, without sacrificing security.

Our GPU configurations are based on the Dell EMC Integrated System for Microsoft Azure Stack Hub dense configuration platform powered by PowerEdge R840 rack servers and will include both NVIDIA V100 and AMD MI25 GPUs in a 2U form factor. This will provide customers with increased performance density and workload flexibility for the growing predictive analytics and AI/ML markets. Our joint customers will be able to choose the appropriate GPU for their workloads to enable AI training, inference and visualization scenarios.

Following our stringent engineered approach, Dell Technologies goes far beyond considering GPUs as just additional hardware components in the Dell EMC Integrated System for Microsoft Azure Stack Hub portfolio. These new configurations, like all Dell EMC Integrated System for Azure Stack Hub offerings, also come with automated lifecycle management capabilities, streamlined operations, and exceptional support.

Dell Technologies has a long history of co-engineering with Microsoft and these new enhancements further strengthen our joint portfolio across hyperconverged infrastructure and hybrid cloud solutions. By working together to deliver innovative services faster and more frequently, we can become a real partner of change for our customers in this Digital Transformation era.

With these new GPU-based configurations at the preview stage, we look forward to working closely with our customers, in partnership with Microsoft, to understand their scenarios and develop the right GPU platform to ensure a successful outcome. If you are interested in sharing your feedback, please contact us to speak with one of our engineering technologists.


Manufacturing & Industrial Automation Lead The Way

I’m always surprised that some people think of manufacturing as stodgy, old school and slow to change – in my view, nothing could be further from the truth! All the evidence shows that the manufacturing industry has consistently led the way from mechanical production, powered by steam in the 18th century, to mass production in the 19th century, followed by 20th century automated production.

The data center merging with the factory floor

Fast forward to today. The fourth industrial revolution is well underway, driven by IoT, edge computing, cloud and big data. And once again, manufacturers are at the forefront of intelligent production, leading the way in adopting technologies like augmented reality, 3D printing, robotics, artificial intelligence, cloud-based supervisory control and data acquisition (SCADA) systems, and programmable automation controllers (PACs). Watch the video below that addresses how manufacturers are changing to embrace Industry 4.0.

In fact, I always visualize the fourth industrial revolution, otherwise known as Industry 4.0, as the data center merging with the factory floor, where you have the perfect blend of information and operational technology working together in tandem. Let’s look at a couple of examples.

Helping monitor and manage industrial equipment

One of our customers, Emerson, a fast-growing Missouri-based company with more than 200 manufacturing locations worldwide, provides automation technology for thousands of chemical, power, and oil & gas organizations around the world. Today, Emerson customers are demanding more than just reliable control valves. They need help performing predictive maintenance on those valves.

To address these needs, Emerson worked with Dell Technologies OEM | Embedded & Edge Solutions to develop and deploy an industrial automation solution that collects IoT data to help its customers better monitor, manage and troubleshoot critical industrial equipment. With our support, Emerson successfully developed a new wireless-valve monitoring solution and brought it to market faster than the competition. This is just the first step in what Emerson sees as a bigger journey to transform services across its entire business. You can read more about our work together here.

Bringing AI to the supply chain to reduce waste and energy

Meanwhile, San Francisco-based Noodle.ai has partnered with us to deliver the world’s first “Enterprise AI” data platform for manufacturing and supply chain projects.

This solution allows customers to anticipate and plan for the variables affecting business operations, including product quality, maintenance, downtime, costs, inventory and flow. Using AI, they can mitigate issues before they happen, solve predictive challenges, reduce waste and material defects, and cut the energy required to create new products.

For example, one end-customer, a $2 billion specialty steel manufacturer, needed to increase profit per mill hour, meet increasing demand for high-quality steel at predictable times, and reduce the amount of energy consumed. Using the “Enterprise AI” data platform, the customer reported $80 million in savings via reduced energy, freight, scrapped product, and raw material input costs.

Helping design innovative and secure voting technology

Yet another customer, Democracy Live, wanted to deliver a secure, flexible, off-the-shelf balloting device that would make voting accessible to persons with disabilities and that could replace outdated, proprietary and expensive voting machines.

After a comprehensive review of vendors and products, Democracy Live asked us to design a standardized voting tablet and software image. Our Dell Latitude solution, complete with Intel processors and pre-loaded with Democracy Live software and the Windows 10 IoT Enterprise operating system, provides strong security and advanced encryption.

And the good news for Democracy Live is that we take all the headaches away by managing the entire integration process, including delivery to end-users. The result? Secure, accessible voting with up to 50 percent savings compared with the cost of proprietary voting machines. Read what Democracy Live has to say about our collaboration here.

Change is constant

Meanwhile, the revolution continues. Did you know that, according to IDC, by the end of this year 60 percent of plant workers at G2000 manufacturers will work alongside robotics, while 50 percent of manufacturing supply chains will have an in-house or outsourced capability for direct-to-consumption shipments and home delivery? More details available here.

Unlock the power of your data

Don’t get left behind! Dell Technologies OEM | Embedded & Edge Solutions is here to help you move through the digital transformation journey, solve your business challenges and work with you to re-design your processes. We can help you use IoT and embedded technologies to connect machines, unlock the power of your data, and improve efficiency and quality on the factory floor.

And don’t forget we offer the broadest range of ruggedized and industrial grade products, designed for the most challenging environments, including servers, edge computing, laptops and tablets. We’d love to hear from you – contact us here and do stay in touch.


Advance AI/DL Initiatives with Dell EMC Isilon, PowerSwitch and NVIDIA DGX Systems

This blog is co-authored by Claudio Fahey, Chief Solutions Architect, Artificial Intelligence and Analytics, Unstructured Data Solutions, Dell Technologies, and Jacci Cenci, Senior Technical Marketing Engineer, NVIDIA.

Over the last few years, Dell Technologies and NVIDIA have been helping our joint customers fast-track their Artificial Intelligence and Deep Learning initiatives. For those looking to leverage a pre-validated hardware and software stack for DL, we offer Dell EMC Ready Solutions for AI: Deep Learning with NVIDIA, which also feature Dell EMC Isilon All-Flash storage. For organizations that prefer to build their own solution, we offer the ultra-dense Dell EMC PowerEdge C-series, with NVIDIA V100 Tensor Core GPUs, which allows scale-out AI solutions from four up to hundreds of GPUs per cluster. We also offer the Dell EMC DSS 8440 server, which supports up to 10 NVIDIA V100 GPUs or 16 NVIDIA T4 Tensor Core GPUs. Our collaboration is built on the philosophy of offering flexibility and informed choice across a broad portfolio that combines the best GPU-accelerated compute, scale-out storage, and networking.

To give organizations even more flexibility in how they deploy AI from sandbox to production with breakthrough performance for large-scale AI, Dell Technologies and NVIDIA have recently collaborated on a new reference architecture for AI and DL workloads that combines the Dell EMC Isilon F800 all-flash scale-out NAS, Dell EMC PowerSwitch S5232F-ON switches, and NVIDIA DGX-2 systems.

Key components of the reference architecture include:

  • Dell EMC Isilon all-flash scale-out NAS storage delivers the scale (up to 58 PB), performance (up to 945 GB/s) and concurrency (up to millions of connections) to eliminate the storage I/O bottleneck and keep the most data-hungry compute layers fed, accelerating AI workloads at scale. A single Isilon cluster may contain an all-flash tier for high performance and an HDD tier for lower cost, and files can be automatically moved across tiers to optimize performance and costs throughout the AI development life cycle.
  • The PowerSwitch S5232F-ON is a 1 RU switch with 32 QSFP28 ports that can provide 40 GbE and 100 GbE connectivity. This series supports RDMA over Converged Ethernet (RoCE), which allows a GPU to communicate with a NIC directly across the PCIe bus, without involving the CPU. Both RoCE v1 and v2 are supported.
  • The NVIDIA DGX-2 system includes fully integrated hardware and software that is purpose-built for AI development and high-performance training at scale. Each DGX-2 system is powered by 16 NVIDIA V100 Tensor Core GPUs that are interconnected using NVIDIA NVSwitch technology, providing an ultra-high-bandwidth, low-latency fabric for inter-GPU communication.

Benchmark Methodology

To validate the new reference architecture, we ran industry-standard image classification benchmarks using a 22 TB dataset to simulate real-world training workloads. We used three DGX-2 systems (48 GPUs total) and eight Isilon F800 nodes connected through a pair of PowerSwitch S5232F-ON switches. Various benchmarks from the TensorFlow Benchmarks repository were executed. This suite of benchmarks performs training of an image classification convolutional neural network (CNN) on labeled images. Essentially, the system learns whether an image contains a cat, dog, car, train, etc. The well-known ILSVRC2012 image dataset (often referred to as ImageNet) was used. This dataset contains around 1.3 million training images in 148 GB. This dataset is commonly used by DL researchers for benchmarking and comparison studies. To approximate the performance of this reference architecture for datasets much larger than 148 GB, the dataset was duplicated 150 times, creating a 22 TB dataset.

To determine whether the network or storage limits performance, we ran identical benchmarks on the original 148 GB dataset. After the first epoch, the entire dataset was cached in the DGX-2 system, and subsequent runs had zero storage I/O. These results are labeled Linux Cache in the next section.
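
For reference, a training run of this kind can be launched from the TensorFlow Benchmarks repository roughly as shown below. This is a hedged sketch rather than our exact test harness: the dataset path is hypothetical, and flags such as batch size and variable-update strategy should be tuned for your own systems.

```python
import subprocess

# tf_cnn_benchmarks.py comes from github.com/tensorflow/benchmarks;
# the dataset mount point below is a hypothetical Isilon NFS path.
cmd = [
    "python", "tf_cnn_benchmarks.py",
    "--model=resnet50",
    "--data_name=imagenet",
    "--data_dir=/mnt/isilon/imagenet",
    "--num_gpus=16",          # one DGX-2 system
    "--batch_size=256",
    "--use_fp16",             # Tensor Core mixed precision
    "--variable_update=replicated",
]
subprocess.run(cmd, check=True)
```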

Benchmark Results

There are a few conclusions that we can make from the benchmark results shown in the figure below.

  • Image throughput and therefore storage throughput scale linearly from 16 to 48 GPUs.
  • There is no significant difference in image throughput when the data comes from Isilon instead of Linux cache.

In the following figure, system metrics captured during three runs of ResNet-50 training on 48 GPUs are shown. There are a few conclusions that we can make from the GPU and CPU metrics.

  • Each GPU had 97% utilization or higher. This indicates that the GPUs were fully utilized.
  • The maximum CPU core utilization on the DGX-2 system was 70%. This occurred with ResNet-50.

The next figure shows the network metrics during the same ResNet-50 training on 48 GPUs. The total storage throughput was 4,501 MB/sec.

Based on the 15 second average network utilization for the RoCE network links, it appears that the links were using less than 80 MB/sec (640 Mbps) during ResNet-50. However, this is extremely misleading. We measured the network utilization with millisecond precision and plotted it in the figure below. This shows periodic spikes of up to 60 Gbps per link per direction. For VGG-16, we measured peaks of 80 Gbps (not shown).

TensorFlow Storage Benchmark

To understand the limits of Isilon when used with TensorFlow, a TensorFlow application was created (TensorFlow Storage Benchmark) that only reads the TFRecord files (the same ones that were used for training). No preprocessing or GPU computation is performed; the only work performed is counting the number of bytes in each TFRecord. The application also has an option to synchronize all readers after each batch of records, forcing them to go at the same speed. This option was enabled to better simulate a DL or ML training workload. The result of this benchmark is shown below.
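
A minimal TF2-style sketch of such a storage-only reader is shown below, assuming a hypothetical Isilon mount point; the actual benchmark adds multi-worker synchronization, which is omitted here.

```python
import tensorflow as tf

# Hypothetical NFS mount point for the Isilon-hosted TFRecord dataset.
files = tf.data.Dataset.list_files("/mnt/isilon/imagenet/train-*")

# Interleave many concurrent file readers so the storage sees
# parallel streams, as a multi-GPU training job would generate.
records = files.interleave(
    tf.data.TFRecordDataset,
    cycle_length=64,
    num_parallel_calls=tf.data.experimental.AUTOTUNE,
)

total_bytes = 0
for batch in records.batch(1024):
    # The only "work" is counting the bytes in each serialized record.
    total_bytes += int(tf.reduce_sum(tf.strings.length(batch)))

print(f"Read {total_bytes / 1e9:.1f} GB")
```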

With this storage-only workload, the maximum read rate obtained from the eight Isilon nodes was 24,772 MB/sec. As Isilon has been demonstrated to scale to 252 nodes, additional throughput can be obtained simply by adding Isilon nodes.

Conclusion

Here are some of the key findings from our testing of the Isilon, PowerSwitch, and NVIDIA DGX-2 system reference architecture:

  • Achieved compelling performance results across industry-standard DL benchmarks from 16 through 48 GPUs without degradation to throughput or performance
  • Linear scalability from 16 to 48 GPUs while keeping the GPUs pegged at >97% utilization
  • The Isilon F800 system can deliver more than 24 GB/sec of synchronous reads, which is typical of DL or ML training workloads

Dell EMC Isilon-based DL solutions deliver the capacity, performance and high concurrency to eliminate the storage I/O bottlenecks for AI. This provides a rock-solid foundation for production-ready, large-scale, enterprise-grade DL solutions with a future-proof scale-out architecture that meets your AI needs of today.

If you are interested in learning more, please be sure to see the Dell EMC Isilon, PowerSwitch and NVIDIA DGX-2 Systems for Deep Learning whitepaper. You’ll find the complete reproducible benchmark methodology, hardware and software configuration, sizing guidance, performance measurement tools, and some useful scripts.

Finally, check out NVIDIA GTC Digital to learn about the latest innovations.


Baker’s Half Dozen — Special Edition

If you’ve got questions about this episode, or a question you’d like Matt to answer in the next episode, comment below or tweet Matt using #BakersHalfDozen.

Amidst this time of social distancing and WFH, we created a quick Special Edition of the #BakersHalfDozen. We talk about virtual coffee breaks, barking dogs, and digital makeup. Be safe and be good to one another!

Episode Special Edition Show Notes:

Item 1: Impact of WFH

Item 1.5: Kids & Pets welcome!


Think HPC Shops Just Have Supercomputers? Think Again

From supporting AI workloads to providing access to hybrid cloud services, today’s HPC shops are expanding the definition of high performance computing.

For IT shops focused on delivering high performance computing services, the pace of evolution has accelerated in recent years. It’s in high gear as HPC shops have taken on roles and responsibilities that go far beyond the operation of supercomputers for limited numbers of scientists and engineers.

In this new data era, many HPC shops now function as multi-cluster, multi-cloud HPC and AI operations. Versatility is the watchword here, as HPC shops expand into the domain of the cloud service provider and the general-purpose enterprise IT shop.

Let’s look at some of the characteristics of these next-gen HPC shops — and do some rethinking along the way.

Think that HPC shops don’t virtualize? Think again.

In years past, HPC workloads have run primarily on bare-metal, unvirtualized servers. Today, these practices are changing, as IT leaders are recognizing the benefits of virtualization for even the most demanding HPC systems and applications.

Here’s a case in point: The Johns Hopkins University Applied Physics Laboratory has implemented a virtualized infrastructure for its weapons simulation program. As a VMware white paper explains, with the virtualization of compute-intensive applications on the VMware vSphere® platform, the lab was able to more than triple the average utilization of its hardware, reduce costs due to more effective resource sharing, and run a more diverse set of applications.

In another example, the HPC team at the University of Pisa virtualizes its Microsoft SQL Server environment with the Microsoft Hyper-V hypervisor. As a Dell Technologies case study notes, a virtualized software-defined storage environment makes it easier for the university’s IT Center to deploy, manage and scale storage for the SQL database.

Think that HPC shops don’t have containers? Think again.

Just as they have embraced virtualization, HPC shops are embracing the use of containers that bundle up software applications, their dependencies and all of the pieces and parts needed to deploy and run HPC and AI jobs. As Dell Technologies data scientist Dr. Lucas Wilson explains in a blog on the advantages of containers, the container approach simplifies the provisioning, distribution and management of the software environments that run on top of the virtualized hardware layer.

Here’s a real-life use case: Data science teams in the Dell Technologies HPC & AI Innovation Lab are leveraging Kubernetes-orchestrated containers to streamline and accelerate the development of deep learning solutions. As data science systems engineering specialist John Lockman explains in a blog on the power of Kubernetes, the lab uses Kubernetes containerization to speed up and streamline the production and distribution of deep learning training workloads to thousands of CPU and accelerator nodes in two supercomputing clusters.
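
As an illustration of the pattern (not the lab's actual tooling), here is a minimal sketch that submits a GPU training job through the official Kubernetes Python client; the image, command and GPU count are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a pod

# One training worker; image, command and GPU count are placeholders.
container = client.V1Container(
    name="trainer",
    image="nvcr.io/nvidia/tensorflow:20.03-tf1-py3",
    command=["python", "train.py", "--epochs", "90"],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "4"}),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="resnet50-train"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[container], restart_policy="Never")
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```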

Think that HPC shops aren’t cloud service providers? Think again.

The use of OpenStack, virtualization and containerization has helped HPC shops pave the road to hybrid cloud environments. In fact, many HPC shops now function as multi-cloud service providers that offer their users access to internal and external clouds, as well as centralized compute resources with multiple storage choices. Via self-service portals, these next-generation HPC shops streamline the path to infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS) and managed services.

The Cloud and Interactive Computing (CIC) group at the Texas Advanced Computing Center, for example, operates multiple national-scale production clouds that provide self-service academic cloud computing capabilities. And on top of the lower-level IaaS offerings, the CIC group develops, deploys and administers higher-level cloud and interactive computing platforms, applications and tools.

Elsewhere in the HPC world, the University of Liverpool provides public cloud bursting to Amazon Web Services. And in San Diego, the Research Data Services team at the San Diego Supercomputer Center (SDSC) offers its users access to both cloud storage and cloud compute options, along with a separate cloud storage system for data that needs to comply with PHI/PII or HIPAA regulations.

Think that HPC has nothing to do with AI? Think again.

HPC and artificial intelligence applications used to live in different domains. Not so anymore. Today’s AI workloads often require the compute performance and storage capacity of HPC clusters.

As I noted in an earlier blog on the convergence trend, HPC shops are in the business of running AI training and inferencing workloads, along with traditional HPC workloads like modeling and simulation. And that makes sense, because these workloads often have similar infrastructure and performance requirements.

Think HPC shops don’t function like a business? Think again.

Whether they are in an enterprise or academic space, many of today’s HPC operations now function like businesses that recover their costs from their users. This is the case for the University of Michigan, where supercomputing investment decisions for campus-wide machines must factor in 100-percent cost recovery, along with exceptional performance, usability and management characteristics. As noted in an article in The Next Platform, we’re talking about an academic supercomputing site that operates under constraints similar to those of any ROI-driven enterprise.

Let’s also consider the Triton Shared Computing Cluster at SDSC at the University of California San Diego. This system is operated under a “condo cluster” model, in which faculty researchers use their contract and grant funding to buy compute nodes for the cluster. Researchers can also rent time on the cluster for temporary and shorter-term needs via a “hotel service” model. Sounds rather business-like, doesn’t it?

Key takeaways

For HPC shops, these are exciting times. We are in a new hybrid world that is blurring the lines between HPC, AI and more conventional IT services. As they navigate this changing world, HPC shops are opening their doors to a wider range of users and offering ever-larger menus of services to make sure people get what they need to do the things they need to do. From modeling complex systems to training machine learning algorithms, from delivering on-premises HPC clusters to enabling access to hybrid cloud services, HPC shops now do it all.

To learn more

For a deeper dive into the changing role of the HPC shop, check out the CIO.com blogs at Dell Technologies and Intel: Innovating to Transform. Learn more about Dell Technologies high performance computing.


Dell Technologies Named a 2020 Gartner Peer Insights Customers’ Choice for HCI

Dell Technologies is pleased to announce that thanks to our fantastic customers, Dell EMC HCI has been recognized as a March 2020 Gartner Peer Insights Customers’ Choice for Hyperconverged Infrastructure (HCI). There is nothing more important to us than our customers, and so our team understandably takes great pride in this distinction.

In its announcement, Gartner explains, “The Gartner Peer Insights Customers’ Choice is a recognition of vendors in this market by verified end-user professionals, taking into account both the number of reviews and the overall user ratings.” To ensure fair evaluation, Gartner maintains rigorous criteria for recognizing vendors with a high customer satisfaction rate.

For this distinction, a vendor must have a minimum of 50 published reviews with an average of 4.6 out of 5 stars or higher in the past year. With more than 80 verified published reviews of Dell EMC VxRail and VxFlex, we feel confident that we are meeting our continued commitment to delight customers with HCI and software-defined storage (SDS) solutions.

Here are some excerpts from your peers that may be of value as you make your IT infrastructure decisions:

  • “They promised us a ‘mythical unicorn’ of IT; VxRail definitely delivered”– Senior Network Analyst II, Education (read full review)
  • “VxRail is a must for any IT leader looking to improve performance and ease operations”– Global Director of IT Infrastructure and Operations, Manufacturing (read full review)
  • “VxRail helped us deploy VDI for 1000+ students & teachers easily. It’s a huge success” – Manager of Technology Service Operations, Education (read full review)
  • “Build your Future with Dell EMC VxFlex”- Vice President of IT, Retail (read review)

Read more reviews for VxRail here and VxFlex here.

As demonstrated above, customers choose Dell EMC HCI and SDS offerings for their ease of use and performance improvements.

We are deeply proud to be honored not only as a March 2020 Customers’ Choice for HCI but also as a January 2020 Customers’ Choice in the Gartner Peer Insights Primary Storage and Distributed File Systems and Object Storage markets. We know that the majority of our customers will have a need for both HCI and storage for the foreseeable future, and Dell Technologies can deliver industry-leading offers in both.

To all our customers who submitted reviews, thank you! These reviews inform our products, shape our services, and help define our customer journey. We look forward to continuing our partnership with you to deliver products and solutions that simplify IT and to build upon the customer experience that has earned us this distinction!

If you have a Dell EMC story to share, we encourage you to join the Gartner Peer Insights crowd and weigh in.

Disclaimer: Listed as “Dell EMC” on Gartner Peer Insights

The GARTNER PEER INSIGHTS CUSTOMERS’ CHOICE badge is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.


Who’s Holding Your Data Wallet?

The volume of data created by today’s enterprise workloads continues to grow exponentially. Data growth, combined with advancements in artificial intelligence, machine learning and containerized application platforms, creates a real challenge in supporting critical business requirements and can place heavy demands on your infrastructure. Adaptability and agility mean having the right resources to service ever-changing needs. Performing at scale while keeping up with data growth to deliver business-critical outcomes comes from a well-architected solution that comprehends all the functional ingredients: networking, storage, compute, virtualization, automated lifecycle management and, most importantly, the applications. It also comes from a close partnership between customers and technology suppliers to understand the business drivers needed to deliver a best-in-class outcome.

Would you ask a stranger to hold your wallet full of cash? Metaphorically speaking, this might be what you’re asking an emerging technology vendor or a startup in the storage space to do if you hand over your key data currency. You might be willing to take a chance on a new pizza delivery service, but I bet you would think differently if someone came to your house to collect all your data.

We respect the innovation that emerging technologies and startups bring. However, when it comes to your most valuable asset – data – it’s important to partner with a vendor with a proven track record of leadership and experience who will be there for you well into the future. One such example is the Dell EMC VxFlex software-defined storage (SDS) platform, which offers customers the kind of predictable, scalable performance required to host their critical application workloads and data storage in a unified fabric.

The VxFlex platform is capable of growing compute or storage independently, or in an HCI configuration with linear incremental performance while sustaining sub-millisecond latency. No matter what deployment model you need today or in the future, VxFlex provides the flexibility and non-disruptive upgrade path to host any combination of workloads, without physical cluster segmentation, that scales modularly by the node or by the rack. Whether you need to support conventional Windows and Linux applications or next generation digital transformation initiatives, VxFlex helps you reduce the risk associated with future infrastructure needs.

VxFlex can handle your most critical and demanding workloads in a full end-to-end lifecycle-managed system, using any combination of hypervisors, bare metal or container technologies to meet or exceed your requirements. A great example of VxFlex at work is the Dell EMC VxFlex solution for Microsoft SQL Server 2019 Big Data Clusters, which deploys a future-proof design that improves business outcomes through better analytics. This solution highlights the use of persistent storage for Kubernetes deployments and performance-sensitive database workloads, using a unified compute, networking and systems management infrastructure that makes it operationally complete. The VxFlex software-defined architecture provides an agile means to blend changing workloads and abstraction models that can adjust as workload demands change.
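
To illustrate the persistent-storage piece, here is a hedged sketch of claiming a VxFlex-backed volume for a database pod via the Kubernetes Python client. The storage class and namespace names are hypothetical and depend on how the VxFlex CSI driver is installed in your cluster.

```python
from kubernetes import client, config

config.load_kube_config()

# Claim a 500 GiB volume from a hypothetical VxFlex-backed storage class.
pvc = client.V1PersistentVolumeClaim(
    api_version="v1",
    kind="PersistentVolumeClaim",
    metadata=client.V1ObjectMeta(name="sql-bdc-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vxflexos",  # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="mssql-cluster", body=pvc  # namespace is a placeholder
)
```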

Dell Technologies is a market leader across every major infrastructure category and enables you to proactively leverage technology for competitive advantage. Dell Technologies gives you the ability to drive your business and not be driven by technology. Learn more about how Dell EMC VxFlex can help you achieve your IT goals.
