Simple, Scalable, Containerized Deep Learning using Nauta

Deep learning is hard. Between organizing, cleaning, and labeling data, selecting the right neural network topology, picking the right hyperparameters, and then waiting – hoping – that the resulting model is accurate enough to put into production, it can seem like an impossible puzzle for your data science team to solve. But the IT side of the puzzle is no less complicated, especially when the environment needs to be multi-user and support distributed model training. From choosing an operating system, to installing libraries, frameworks, dependencies, and development platforms, building the infrastructure to support your company’s deep … READ MORE


Accelerating Insight Using 2nd Generation Intel® Xeon® Scalable Processors with Deep Learning Boost

Artificial Intelligence (AI) techniques are quickly becoming central to businesses’ digital transformation by augmenting, and in many cases supplanting, traditional data analytics techniques. These techniques bring proactive and prescriptive capabilities to a company’s data-driven decision-making process, giving companies that adopt them early a distinct competitive advantage. Those that adopt them late will be left behind. Intel recognizes that AI methods, most notably machine learning and deep learning, are now critical components of company workloads. To address the need to both train and, arguably more importantly, have AI models make decisions faster, Intel has put these workloads … READ MORE


Data & AI: The Crystal Ball into Your Future Success

Years ago, the future was much more opaque. Now, it’s tangible, visible and rising up all around us. It seems to be taking shape in real time, much of which can be attributed to innovation in data and infrastructure, both individually and together.

As innovation in these areas accelerates, capabilities grow rapidly, particularly for enterprises that have reached a point of digital maturity and can ensure access to quality data and accelerated infrastructure at scale. Yet for others, data and analytics initiatives still fall short. As their data continues to expand, they lack the right building blocks to grow and change with it. In fact, a recent McKinsey survey of more than 500 executives found that more than 85% acknowledged they are only somewhat effective at meeting the goals they set for their data and analytics initiatives.

With both growing and mature data sets, the effects of enterprise deep learning and machine learning can be significant – automating processes, identifying trends in historical data and uncovering valuable intelligence that strengthens fast, accurate decision-making – all of which can serve as a virtual crystal ball for refining predictions about the future.

To do this correctly, companies should use their data, AI and analytics capabilities not only to improve their core operations, but also to launch entirely new business models and applications. First, they must solve problems in the way data is generated, collected, organized and acted upon. While the mechanics are important, the ultimate value of data comes not from merely collecting it, but from acting on the insights derived from it.

The key lies in a fundamental mindset shift: evolving your organization into a technology company with a data-first mentality.

In my experience, there are three certainties for every company:

  1. Your data is going to grow faster than you expected.
  2. The use cases for this data are going to change.
  3. The business is always going to expect outcomes to be delivered faster.

The first step in the journey to becoming a technology company is simplifying the infrastructure by moving from legacy data systems to a more nimble, flexible, modernized data architecture that can bridge both structured and unstructured data to deliver deeper insights and performance at scale. Once data is consolidated onto a single, scalable analytics platform, the pace of discovery and learning can be accelerated to drive a more accurate strategic vision for both today and tomorrow.

At Dell EMC, we are dedicated to bringing new and differentiated value and opportunities to our customers globally. We are always looking toward current and future trends and technologies that will help customers better manage and take advantage of their growing data sets with deep learning and machine learning at scale.

Dell EMC Isilon does just that.

An industry-leading scale-out network-attached storage platform designed for demanding enterprise data sets, Isilon simplifies management and gives you access to all your data, scaling from tens of terabytes to tens of petabytes per cluster. We also deliver all-flash performance and file concurrency in the millions, supporting the bandwidth needs of thousands of GPUs running the most complex neural networks available. As a bonus, we accomplish this very economically, with over 80% storage utilization, data compression and automated tiering across flash and disk in a single cluster. Finally, Isilon-based AI increases operational flexibility with multiprotocol support, allowing you to bring analytics to the data to accelerate AI innovation with faster cycles of learning, higher model accuracy and improved GPU utilization.

In an era of change and ongoing data expansion, creating a crystal ball for your business is not a matter of luck or fortune telling. It comes from a focused strategy for doing more with the data you have at hand. By offering innovative new ways to store, manage, protect and use data at scale, Isilon moves customers that much closer to both becoming technology companies and future-proofing their businesses.

To learn more, attend our April 1st webinar event, “Your Future Self is Calling, Will You Pick Up?” with Dell EMC, NVIDIA & Mastercard. We look forward to seeing you there.


Baker’s Half Dozen – Episode 5

Episode 5 Show Notes:

  • Introduction with Matt Baker
  • Item 1 – The State of AI in 2019
  • Item 2 – Machine Learning Algorithms: Wired on how Netflix uses machine learning algorithms to make recommendations; Forbes on how Mastercard uses AI to stop fraud; AI and password security
  • Item 3 – Deep Learning vs. Machine Learning: GAN defined; check it out: thispersondoesnotexist.com
  • Item 4 – Does Deep Learning lack common sense? Deep Blue vs. Kasparov; AlphaGo beats Go; can Deep Learning beat Breakout?
  • Item 5 – Cost of Public Cloud: VMware’s CloudHealth platform
  • Item 6 – Digital Transformation: Dell Technologies: … READ MORE


5 Database Management Predictions for 2019

With the release of 18c and the Autonomous Database, 2018 has been an incredible year for the database.

So what’s happening in 2019? We’ve gathered our database predictions, and in this article we’re sharing five of them.

Download the Full Database Predictions Ebook

1. Database Maintenance Automation Will Accelerate

Many routine database management tasks have already been automated in the last few years. In the coming years, traditional on-premises databases will compete against cloud-native deployments. And increasingly, those cloud-native deployments will be autonomous databases with hands-free database management.

So what does this mean?

DBA responsibilities will evolve toward less involvement with the physical environment and the database itself, and more involvement with managing and making use of the data. As data becomes simpler to manage and easier to use, it will also become more valuable. This will be an exciting time as careers advance and adjust to the changing landscape. You can already see that today in the popularity of jobs such as data scientist and data engineer.

2. Database Security Will Become Ever More Important

Big surprise, right? Or maybe not. Unfortunately, we hear headlines about security breaches all the time. Threats to security will become more common as bad actors realize the value of data and how they can turn it to their own advantage. And when we say more common, we mean it: a recent Oracle Threat Report predicts the number of security events will increase 100-fold by 2025.

It’s simply no longer possible for humans to detect, correlate, analyze, and then address all threats in a timely manner. So what can IT professionals do about this? Many of them are turning to autonomous solutions. Autonomous monitoring and auditing can identify many issues and threats against the database. It can monitor cloud service settings, notify DBAs of changes, and prevent configuration drift by allowing IT pros to restore approved settings at any time.

When you have an Autonomous Database that uses machine learning to detect threats and stop them, it’s just easier to rely on the security experts at Oracle while you explore ways to extract more value from data to help drive better business outcomes.

3. Standards for Database Reliability, Availability, and Performance Will Go Up

Database reliability, availability, and performance have always been important and in 2019, they’ll continue to be so. Autonomous data management will take those capabilities to the next level. For example, the machine learning capabilities of Autonomous Database can automatically patch systems the moment vulnerabilities are discovered. Autonomous data management will improve uptime and also boost security.

This means that standards will get higher. In the past, we’ve sometimes been able to get away with blaming human error. But that excuse doesn’t really pass muster anymore when there’s an autonomous option.

However, even though software patches are applied automatically in the background and all actions are audited, DBAs will still have to monitor the unified audit trail logs and perform actions accordingly if necessary.

4. The Volume of Data Will Continue Exploding

With data growth—well, we’ve all seen the countless charts and graphs detailing the explosion of data from social media and video and IoT and thousands of other sources that weren’t common even 10 years ago.

Data size alone isn’t a major factor in DBA productivity, but the number of database instances and the variety of database brands and versions certainly are.

This is something DBAs will increasingly have to think about: how will they manage all of this data efficiently? It will be a strong factor for moving to the cloud, because most cloud databases can be provisioned in 40 minutes or less, versus weeks using the old on-premises methods.

5. Database Provisioning Will Become Even More Automated

In today’s world, 95 percent of DBAs still manually create and update databases. But automated database provisioning is becoming more popular as it improves with each new iteration. With the performance-tuning dimension that Oracle Autonomous Data Warehouse already brings, and new automatic indexing features for the Autonomous Database, automated database provisioning will become even easier for DBAs.

As data grows and the need for data-driven analytics increases, DBAs will need to help businesses get data faster to meet business demands.

Conclusion

What do you see for data management in 2019, and what are you most excited about?

For us, it’s witnessing how machine learning combined with a modern, automated database is going to revolutionize the way we use data. 2018 has been a groundbreaking year for Oracle, and we’re looking forward to seeing more of the same in 2019.

If you want to try out the world’s most groundbreaking database technology, sign up for a free trial of Oracle Autonomous Database today or read the walkthrough of how Autonomous Data Warehouse works.

And to read through the other database management predictions with quotes from top DBAs, download the full ebook, “Database Management Predictions 2019.”


How Predictive Analytics and a Smarter Service Parts Supply Chain Are Improving Your Service Experience

As we approach the third decade of the 21st century and a new age of Human-Machine Partnerships, Dell Technologies is predicting that 2019 will be The Year of the Data-Driven Digital Ecosystem.

Machine learning (ML) and emerging artificial intelligence (AI) are empowering “data-driven digital ecosystems” that can analyze vast volumes of data for insights that improve outcomes—and that get continually smarter at doing so.

As part of our own digital transformation in Services, we are using these techniques to pioneer new and better ways to serve customers. Our data science teams have identified the enormous potential of AI/ML in multiple business areas. We utilize it in our proactive, predictive support capabilities and it’s playing a significant role in our supply chain. Jeff Clarke predicted that supply chains will get stronger and smarter in 2019 and the Global Service Parts team is delivering on that vision, taking advantage of AI/ML to deliver a better customer repair experience.

Applying Predictive, ML, AI and Operational Research Methodologies to Unlock New Insights

Dell EMC Services has been collecting and analyzing data from our service parts supply chain for years. Today, our Global Service Parts organization manages procurement, inventory, repair, and the recycling of parts for 100+ million products at customer sites under warranty or service agreement in 160+ countries around the world.

Massive amounts of historical and near real-time service parts data―tracking the lifecycle of parts as they move in and out of our 800+ warehouses and to and from customer sites―provide a rich trove of raw material for unlocking new insights.

So what type of actions can we take based on the insights we extract from all that data?

To continue the innovation and evolution of our supply chain, we applied predictive, ML, AI and operational research methodologies in two areas:

  • Sharper planning―for more accurate demand forecasting, with less human effort
  • Smarter repair―through predictive analytics to reduce repair time

Let’s take a look at what each of these means to our business, and most importantly, to our customers.

Sharper Forecasting, with Less Human Effort

The unpredictability of immediate, short- and long-term demand for repair parts makes accurate forecasting an ongoing challenge. To tackle this, our experienced parts planners and data scientists worked together to develop and supervise a data-driven digital ecosystem that uses machine learning to identify and prioritize variables, build predictive models, and generate plans to more precisely pre-position inventory across the globe.

Today, about 35% of our planning is generated autonomously, without human input, greatly reducing the amount of time our expert resources spend on the front end of this process. Once plans are generated, our parts planners have only to review and adjust them before they are approved. We are confident that as the planning tool continues to “learn” from planner modifications and usage patterns, and as AI continues to evolve, we will be able to rely on a fully autonomous planning tool in the next few years, freeing our planners to focus on more complex issues and additional tool development.

Smarter Repair, with Reverse Supply Chain Data

When a repair is needed, of course, we want to make the process as quick and efficient as possible, so we are using data science techniques in this area as well.

We use reverse supply chain data―data that comes from built-in system diagnostics, tech support workflows, hands-on diagnostics, defective part evaluations, and other sources. It feeds predictive analytics that help us identify the likelihood of failures and accelerate repair times.

Our new predictive repair engine combines relevant data and identifies patterns to recommend what parts will be needed before a unit arrives at the repair depot, so a swap-out can be quickly completed. In an initial pilot, we achieved 80% accuracy in identifying the correct part, reducing the movement of parts by 15%, and cutting time-to-repair by 20 minutes. Efficiencies continue to improve, as the technology learns from confirmation of accurate recommendations and correction of inaccurate ones. The repair engine also learns from extensive, post-event failure analysis of parts at the repair depot, improving diagnoses and providing valuable information to product engineers working on next-generation systems.

This predictive repair engine is also making our supply chain greener and more efficient, by helping to reduce waste and shipping, and the need to manufacture and manage as many parts in the first place.

Better and Better Service Experience for Our Customers

Emerging AI technologies, machine learning and other innovative techniques are helping us get smarter and smarter so we can minimize disruption and inconvenience, prevent issues or resolve them faster, and make technology simpler for all of us.


3 Use Cases for AI, Machine Learning and Deep Learning: Healthcare, Digitization and Proactive Support

For us in the Boston area, we watched our Red Sox end their record-setting season by celebrating another World Series title.

Naturally, there has been much buzz about the team, as well as first-year manager Alex Cora and the clubhouse culture he built. However, it takes more than culture change to win, and Cora and the Red Sox front office recognize that. We live in a data-driven world, and that includes the world of baseball.

A recent Boston Globe story featured a good example of how the Red Sox use data-driven insights to make in-game decisions. It isn’t luck when outfielder Mookie Betts snags a fly ball that most observers would think he had no chance to catch. What places Betts in the ideal position is data, retrieved through AI and analytics that capture and analyze each batter’s historical and projected tendencies. The output of that learning is placed on an index card that Betts keeps in his back pocket, so he can move to the optimal position in the field before each at-bat.

Perhaps a Red Sox legend like Ted Williams would dismiss today’s approach to analyzing the other team. However, the data has always been there. Today’s difference is the availability of intelligent analysis through AI and machine learning – modern tools that give managers a modern twist on strategy.

Whether baseball or the business world, organizations are collecting vast amounts of new data points and racing to unlock its value to help make faster and better decisions.

At Dell Technologies, we help our customers deliver new outcomes through AI and ML. At the same time, we as a company are doing what our customers are doing – leveraging AI and ML to help us make better decisions and improve customer experiences and outcomes.

Before proudly sharing a few examples, I invite you to check out Dell Technologies “Unlock the Power of Data,” which was streamed as a virtual event for customers and partners on November 14. During the broadcast, trends in AI were discussed, use cases and examples outlined, and Dell Technologies AI capabilities demonstrated.

Delivering Targeted Healthcare Insights with AI and Machine Learning

The medical industry is well-positioned to be a top beneficiary of the AI/ML evolution, enabling providers to better evaluate patients and personalize treatment options.

In this case, a regional healthcare provider partnered with Dell EMC Consulting to develop and implement a robust analytics research platform that would enable an extensive community of researchers and innovators to work more efficiently with faster and expanded access to critical data.

One such example is a recent collaboration between the healthcare provider’s data scientists and data scientists from Dell EMC Consulting. The teams together delivered new research targeted at the alarmingly high number of seizures that occur in hospitals, most of which are only detectable by brain monitoring with an electroencephalogram (EEG). Delayed diagnosis of such “subclinical seizures” leads to brain damage, lengthens hospitalization, and heightens the risk of in-hospital death or long-term disability.

The learnings from past EEGs would go a long way towards helping hospital physicians provide better diagnosis and treatment. However, there are two key challenges that make curating and mining the information difficult. First, patients’ EEG reports and the corresponding waveform data files are often stored separately and not clearly linked. Equally challenging is the ability to quickly extract useful information from the reports that describe clinically important neurophysiological events.

Using the new research platform and applying advanced AI and machine learning techniques, the joint team developed a highly accurate classifier for pairing the report files with the corresponding data. They also discovered several analysis techniques that are highly accurate in extracting the relevant information needed from the reports. With these two foundations, the team has established a highly effective and efficient data pipeline for clinical operations, quality improvement, and neurophysiological research.

Using Machine Learning to Automate the Offline

Dell’s eCommerce platform is the front door for the full range of customer inquiries from simple browsing to real-time support. However, did you know that Dell manages more than four million offline orders that arrive via fax and email each year? Our global Order Management and Support organization has traditionally executed those orders manually. However, a new solution was needed to improve order accuracy and cycle time.

Leveraging machine learning and the latest in Optical Character Recognition (OCR), Dell Digital developed Robotix — a scalable solution for digitizing offline purchase orders. Robotix improves the customer experience by processing orders faster and reducing pain points, while automating offline quality checks and customizing order entry instructions.

Robotix, currently patent-pending, is already live in North America and expected to automate the majority of global offline orders in its first full year of implementation.

Proactively Avoiding System Failure with SupportAssist

The millions of customer systems connected to Dell EMC around the globe can run trillions of variations of hardware and software configurations. These variations may be further influenced by factors such as geographic location and climate.

Given such a vast scope and size, the ability to predict and validate potential faults may seem like an impossible task. However, through the power of AI and ML, and the capacity of today’s Graphics Processing Units, our internal data scientists have built solutions that implement Deep Learning models to open a world of even more possibilities.

Today, SupportAssist, our automated proactive and predictive technology, is run on almost 50 million customer systems. Through this connected technology, Dell EMC can save customers from the potentially disastrous impact of downtime or data loss by alerting and remediating a potential hard drive failure on average 50 days before the failure occurs. And as our services technology continues to get smarter, customers will be empowered to make faster, better decisions about their IT, and address immediate issues while they plan for what’s next.

Summary

These are just a few of the many, many AI/ML use cases deployed either internally by Dell Technologies or externally by our customers. Yet, while implementations vary, there is a common thread tying them together: the right people and processes, combined with these powerful technologies, enable us to define and execute a vision that brings data and insights to life and makes transformation real.

Related Reading:

Deep Learning: AI startup revs up its business with PowerEdge servers and NVIDIA Tesla GPUs


Neural Network Inference Using Intel® OpenVINO™

Deploying trained neural network models for inference on different platforms is a challenging task. The inference environment is usually different from the training environment, which is typically a data center or a server farm. The inference platform may be power constrained and limited from a software perspective. The model might be trained using one of the many available deep learning frameworks, such as TensorFlow, PyTorch, Keras, Caffe, or MXNet. Intel® OpenVINO™ provides tools to convert trained models into a framework-agnostic representation, including tools to reduce the memory footprint of the model using quantization and graph optimization. It also provides dedicated inference APIs that are optimized for specific hardware platforms, such as Intel® Programmable Acceleration Cards and Intel® Movidius™ Vision Processing Units.

Figure: The Intel® OpenVINO™ toolkit

Components

  1. Model Optimizer

The Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices. It is a Python script that takes a trained TensorFlow/Caffe model as input and produces an Intermediate Representation (IR), which consists of a .xml file containing the model definition and a .bin file containing the model weights.

2. Inference Engine

The Inference Engine is a C++ library with a set of C++ classes to infer input data (images) and get a result. The library provides an API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. Each supported target device has a plugin, which is a DLL/shared library. The engine also supports heterogeneous execution to distribute a workload across devices, for example implementing custom layers on a CPU while executing the rest of the model on an accelerator device.

Workflow

  1. Using the Model Optimizer, convert a trained model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and bias values.
  2. Test the model in the Intermediate Representation format using the Inference Engine in the target environment with the validation application or the sample applications.
  3. Integrate the Inference Engine into your application to deploy the model in the target environment.

Using the Model Optimizer to convert a Keras model to IR

The Model Optimizer doesn’t natively support Keras model files. However, because Keras uses TensorFlow as its backend, a Keras model can be saved as a TensorFlow checkpoint, which can then be loaded into the Model Optimizer. A Keras model can be converted to an IR using the following steps:

  1. Save the Keras model as a TensorFlow checkpoint. Make sure the learning phase is set to 0. Get the name of the output node.

import tensorflow as tf
from keras.applications import ResNet50
from keras import backend as K
from keras.models import Model

K.set_learning_phase(0)  # Set the learning phase to 0 (inference mode)

model = ResNet50(weights='imagenet', input_shape=(224, 224, 3))

config = model.get_config()
weights = model.get_weights()
model = Model.from_config(config)  # ResNet50 is a functional model, so rebuild it with Model
model.set_weights(weights)  # Restore the trained weights into the rebuilt model

output_node = model.output.name.split(':')[0]  # We need this in the next step

graph_file = "resnet50_graph.pb"
ckpt_file = "resnet50.ckpt"

sess = K.get_session()  # The TensorFlow session Keras is using
saver = tf.train.Saver(sharded=True)
tf.train.write_graph(sess.graph_def, '.', graph_file)
saver.save(sess, ckpt_file)

2. Run the TensorFlow freeze_graph utility to generate a frozen graph from the saved checkpoint, passing the output node name obtained in step 1.

tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph --input_graph=./resnet50_graph.pb --input_checkpoint=./resnet50.ckpt --output_node_names=<output_node> --output_graph=resnet50_frozen.pb

3. Use the mo.py script and the frozen graph to generate the IR. The model weights can be quantized to FP16.

python mo.py --input_model=resnet50_frozen.pb --output_dir=./ --input_shape=[1,224,224,3] --data_type=FP16

Inference

The C++ library provides utilities to read an IR, select a plugin depending on the target device, and run the model.

  1. Read the Intermediate Representation – Using the InferenceEngine::CNNNetReader class, read an Intermediate Representation file into a CNNNetwork class. This class represents the network in host memory.
  2. Prepare the input and output formats – After loading the network, specify the input and output precision and layout of the network using the CNNNetwork::getInputInfo() and CNNNetwork::getOutputInfo() methods.
  3. Select Plugin – Select the plugin on which to load your network. Create the plugin with the InferenceEngine::PluginDispatcher load helper class. Pass per-device loading configurations specific to the device and register extensions to it.
  4. Compile and Load – Use the plugin interface wrapper class InferenceEngine::InferencePlugin to call the LoadNetwork() API to compile and load the network on the device. Pass in the per-target load configuration for this compilation and load operation.
  5. Set input data – With the network loaded, you have an ExecutableNetwork object. Use this object to create an InferRequest and set the buffers to use for input and output. You can allocate device memory and copy your data into it directly, or tell the device to use your application’s memory and save a copy.
  6. Execute – With the input and output memory now defined, choose your execution mode:
    • Synchronously – Infer() method. Blocks until inference finishes.
    • Asynchronously – StartAsync() method. Check the status with the Wait() method (zero timeout), block on Wait(), or specify a completion callback.
  7. Get the output – After inference is completed, get the output memory or read the memory you provided earlier. Do this with the InferRequest GetBlob API.
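
Putting these steps together end to end, here is a minimal sketch of synchronous inference written against the classic Inference Engine C++ classes named above. It assumes the ResNet-50 IR generated earlier; the file names, precisions, and the image-filling step left as a comment are placeholder assumptions rather than a definitive implementation:

#include <inference_engine.hpp>
#include <iostream>
#include <string>

using namespace InferenceEngine;

int main() {
    // 1. Read the Intermediate Representation (file names are placeholders)
    CNNNetReader reader;
    reader.ReadNetwork("resnet50_frozen.xml");
    reader.ReadWeights("resnet50_frozen.bin");
    CNNNetwork network = reader.getNetwork();

    // 2. Prepare input and output formats
    std::string input_name = network.getInputsInfo().begin()->first;
    std::string output_name = network.getOutputsInfo().begin()->first;
    network.getInputsInfo().begin()->second->setPrecision(Precision::U8);
    network.getOutputsInfo().begin()->second->setPrecision(Precision::FP32);

    // 3. Select a plugin for the target device
    //    ("HETERO:FPGA,CPU" would target the FPGA with CPU fallback)
    InferencePlugin plugin = PluginDispatcher().getPluginByDevice("CPU");

    // 4. Compile and load the network on the device
    ExecutableNetwork executable = plugin.LoadNetwork(network, {});

    // 5. Set input data
    InferRequest request = executable.CreateInferRequest();
    Blob::Ptr input = request.GetBlob(input_name);
    // ... fill input->buffer() with a preprocessed image here ...

    // 6. Execute synchronously (StartAsync()/Wait() would run it asynchronously)
    request.Infer();

    // 7. Get the output
    Blob::Ptr output = request.GetBlob(output_name);
    const float* scores = output->buffer().as<float*>();
    std::cout << "First class score: " << scores[0] << std::endl;
    return 0;
}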

The classification_sample and classification_sample_async programs perform inference using the steps mentioned above. We use these samples in the next section to perform inference on an Intel® FPGA.

Using the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA for inference

The OpenVINO toolkit supports using the PAC as a target device for running low-power inference. The steps for setting up the card are detailed here. Pre-processing and post-processing are performed on the host, while execution of the model is performed on the card. The toolkit contains bitstreams for different topologies.

  1. Programming the bitstream

aocl program <device_id> <open_vino_install_directory>/a10_dcp_bitstreams/2-0-1_RC_FP16_ResNet50-101.aocx

2. The Hetero plugin can be used with the CPU as a fallback device for layers that are not supported by the FPGA. The -pc flag prints performance details for each layer.

./classification_sample_async -d HETERO:FPGA,CPU -i <path/to/input/image.png> -m <path/to/ir>/resnet50_frozen.xml

Conclusion

The Intel® OpenVINO™ toolkit is a great way to quickly integrate trained models into applications and deploy them in different production environments. The complete documentation for the toolkit can be found at https://software.intel.com/en-us/openvino-toolkit/documentation/featured.
