What’s the Difference Between AI, Machine Learning, and Deep Learning?

Peter Jeffcock

Big Data Product Marketing

AI, machine learning, and deep learning – these terms overlap and are easily confused, so let’s start with some short definitions.

AI means getting a computer to mimic human behavior in some way.

Machine learning is a subset of AI, and it consists of the techniques that enable computers to figure things out from the data and deliver AI applications.

Deep learning, meanwhile, is a subset of machine learning that enables computers to solve more complex problems.

Download your free ebook, “Demystifying Machine Learning.”

Those descriptions are correct, but they are a little concise. So I want to explore each of these areas and provide a little more background.

Difference Between AI, Machine Learning and Deep Learning

What Is AI?

Artificial intelligence as an academic discipline was founded in 1956. The goal then, as now, was to get computers to perform tasks regarded as uniquely human: things that required intelligence. Initially, researchers worked on problems like playing checkers and solving logic problems.

If you looked at the output of one of those checkers playing programs you could see some form of “artificial intelligence” behind those moves, particularly when the computer beat you. Early successes caused the first researchers to exhibit almost boundless enthusiasm for the possibilities of AI, matched only by the extent to which they misjudged just how hard some problems were.

Artificial intelligence, then, refers to the output of a computer. The computer is doing something intelligent, so it’s exhibiting intelligence that is artificial.

The term AI doesn’t say anything about how those problems are solved. There are many different techniques including rule-based or expert systems. And one category of techniques started becoming more widely used in the 1980s: machine learning.

What Is Machine Learning?

The reason that those early researchers found some problems to be much harder is that those problems simply weren’t amenable to the early techniques used for AI. Hard-coded algorithms or fixed, rule-based systems just didn’t work very well for things like image recognition or extracting meaning from text.

The solution turned out to be not just mimicking human behavior (AI) but mimicking how humans learn.

Think about how you learned to read. You didn’t sit down and learn spelling and grammar before picking up your first book. You read simple books, graduating to more complex ones over time. You actually learned the rules (and exceptions) of spelling and grammar from your reading. Put another way, you processed a lot of data and learned from it.

That’s exactly the idea with machine learning. Feed an algorithm (as opposed to your brain) a lot of data and let it figure things out. Feed an algorithm a lot of data on financial transactions, tell it which ones are fraudulent, and let it work out what indicates fraud so it can predict fraud in the future. Or feed it information about your customer base and let it figure out how best to segment them. Find out more about machine learning techniques here.
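To make that concrete, here’s a minimal sketch of the fraud example, assuming the scikit-learn library; the features, data, and labels are all invented for illustration.

```python
# A toy "learn fraud from labeled transactions" example (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Invented features per transaction: amount ($), hour of day, distance (km).
rng = np.random.default_rng(0)
X = rng.random((1000, 3)) * [5000, 24, 100]
y = (X[:, 0] > 4000) & (X[:, 1] < 6)  # toy labels standing in for real ones

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # the algorithm "works out what indicates fraud"

print("accuracy:", model.score(X_test, y_test))
print("fraud?", model.predict([[4500, 2, 80]]))  # score a new transaction
```

The same library covers the customer-segmentation case too: swap the classifier for a clustering algorithm like KMeans and drop the labels.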

As these algorithms developed, they could tackle many problems. But some things that humans found easy (like speech or handwriting recognition) were still hard for machines. However, if machine learning is about mimicking how humans learn, why not go all the way and try to mimic the human brain? That’s the idea behind neural networks.

The idea of using artificial neurons (neurons, connected by synapses, are the major elements in your brain) had been around for a while. And neural networks simulated in software started being used for certain problems. They showed a lot of promise and could solve some complex problems that other algorithms couldn’t tackle.

But machine learning still got stuck on many things that elementary school children tackle with ease: How many dogs are in this picture, or are they really wolves? Walk over there and bring me the ripe banana. What made this character in the book cry so much?

It turned out that the problem was not with the concept of machine learning, or even with the idea of mimicking the human brain. It was just that simple neural networks, with hundreds or even thousands of neurons connected in a relatively simple manner, couldn’t duplicate what the human brain can do. That shouldn’t be a surprise if you think about it: human brains have around 86 billion neurons and very complex interconnectivity.

What is Deep Learning?

Put simply, deep learning is all about using neural networks with more neurons, layers, and interconnectivity. We’re still a long way off from mimicking the human brain in all its complexity, but we’re moving in that direction.
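To show what “more layers” looks like in practice, here’s a minimal sketch, assuming TensorFlow/Keras; the layer sizes are arbitrary.

```python
# A small "deep" network: the same building blocks as early neural nets,
# just with more layers and units stacked between input and output.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),             # e.g. a flattened 28x28 image
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dense(256, activation="relu"),   # the extra hidden layers
    keras.layers.Dense(128, activation="relu"),   # are what make it "deep"
    keras.layers.Dense(10, activation="softmax"), # ten output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # over half a million learned weights, versus a handful
                 # of neurons in the early simulated networks
```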

And when you read about advances in computing from autonomous cars to Go-playing supercomputers to speech recognition, that’s deep learning under the covers. You experience some form of artificial intelligence. Behind the scenes, that AI is powered by some form of deep learning.

Let’s look at a couple of problems to see how deep learning is different from simpler neural networks or other forms of machine learning.

How Deep Learning Works

If I give you images of horses, you recognize them as horses, even if you’ve never seen that image before. And it doesn’t matter if the horse is lying on a sofa, or dressed up for Halloween as a hippo. You can recognize a horse because you know about the various elements that define a horse: shape of its muzzle, number and placement of legs, and so on.

Deep learning can do this. And it’s important for many things including autonomous vehicles. Before a car can determine its next action, it needs to know what’s around it. It must be able to recognize people, bikes, other vehicles, road signs, and more. And do so in challenging visual circumstances. Standard machine learning techniques can’t do that.
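To give a flavor of how such a recognizer is built, here’s an illustrative sketch, assuming TensorFlow/Keras and the public CIFAR-10 dataset, whose ten classes happen to include “horse”. A real perception system for a vehicle would be vastly larger and trained on far more data.

```python
# A small convolutional network trained to recognize CIFAR-10 classes.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    # Convolutional layers learn local visual elements (edges, textures,
    # muzzle- and leg-like shapes) instead of being hand-coded with them.
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # class 7 is "horse"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```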

Take natural language processing, which is used today in chatbots and smartphone voice assistants, to name two. Consider this sentence and work out what the last part should be:

I was born in Italy and, although I lived in Portugal and Brazil most of my life, I still speak fluent ________.

Hopefully you can see that the most likely answer is Italian (though you would also get points for French, Greek, German, Sardinian, Albanian, Occitan, Croatian, Slovene, Ladin, Latin, Friulian, Catalan, Sicilian, Romani, Franco-Provençal and probably several more). But think about what it takes to draw that conclusion.

First you need to know that the missing word is a language. You can do that if you understand “I speak fluent…”. To get Italian you have to go back through the sentence and ignore the red herrings about Portugal and Brazil. “I was born in Italy” implies learning Italian as I grew up (with 93% probability, according to Wikipedia), assuming that you understand the implications of “born”, which go far beyond the day you were delivered. The combination of “although” and “still” makes it clear that I am not talking about Portuguese and brings you back to Italy. So Italian is the likely answer.
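Modern language models make this fill-in-the-blank exercise concrete. As a minimal sketch, assuming the Hugging Face transformers library, a pretrained masked language model scores candidate words for the blank from the surrounding context:

```python
# Ask a pretrained masked language model to fill in the blank.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
sentence = ("I was born in Italy and, although I lived in Portugal and "
            "Brazil most of my life, I still speak fluent [MASK].")
for guess in fill(sentence, top_k=5):
    print(f"{guess['token_str']:12s} {guess['score']:.3f}")
```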

Imagine what’s happening in the neural network in your brain. Facts like “born in Italy” and “although…still” are inputs to other parts of your brain as you work things out. And this concept is carried over to deep neural networks via complex feedback loops.

Conclusion

So hopefully that first definition at the beginning of the article makes more sense now. AI refers to devices exhibiting human-like intelligence in some way. There are many techniques for AI, but one subset of that bigger list is machine learning – let the algorithms learn from the data. Finally, deep learning is a subset of machine learning, using many-layered neural networks to solve the hardest (for computers) problems.

Do You Believe the Artificial Intelligence (AI) Hype?

Traveling between Silicon Valley, Austin and Boston, I am hearing about artificial intelligence and the future of work from so many angles and couldn’t be more excited about the opportunities ahead. The strategies and applications are enticing and the next wave is promising. However, as we rush to embrace AI, I can’t help wondering whether we are maximizing its full potential or just riding from technology wave to technology wave.

For instance, today our walls talk to us – almost everything in our lives talks to us. With a modest investment, smart technology sensors capture data we can use to monitor, measure and manage our energy usage and efficiency (and more). I’ve done this in my own home. Unfortunately, I now have too much data and can only make minor choices about my consumption. The application I used sent me too many notifications, so I turned it off.

Have you experienced this as well?

We are in the early days of AI. However, I am looking forward to how artificial intelligence will automatically filter, analyze and provide the insight needed to enhance both my personal and professional life. As Dell and VMware’s CIO, I weigh these innovations to determine what is real, relevant and worth investing in as part of our digital transformation. We are doing this by identifying tangible, short-term use cases we can pursue that will deliver value today, and by determining the best way to transform our technology, processes and people for the future.

Let me share how we are doing that.

First, we must consider how the numerous innovative waves of technologies intersect and interact together. This is especially true with artificial intelligence and the Internet of Things. To be successful, AI depends on consistent, high quality, real-time data. However, with multiple approaches and systems, it is challenging and time-consuming to find, access and make sense of the data available throughout a company.

Which brings me to my second point.

We must also collaborate across the company. IT isn’t the only organization thinking about AI. Our partners in sales, engineering, marketing and other areas of the business are also exploring the potential of AI. If we’re not collaborative, we will create more complexity and conflicting systems that delay our time to value. To get ahead of this, we are embracing what we call the Dell Digital Way, a cultural shift and approach that leverages Pivotal and agile methodologies, pair programming and the latest technologies like Pivotal Cloud Foundry. Rather than numerous disparate approaches, this enables us to work closely with our business partners to deliver better AI innovations and applications even faster.

I’m looking forward to how AI will automatically filter, analyze and provide the insight needed to enhance both my personal and professional life.

And finally, as CIOs and IT professionals, we must foster an innovative, cross-functional, AI-minded culture throughout the company, so we created the Artificial Intelligence Center of Excellence at Dell. Working together, this team of data scientists, data engineers, IT professionals and others from across Dell Technologies is determining the right platform and governance structure for enhancing data quality. The Center also hosts events and contests for team members to collaborate and crowdsource innovative ways to solve a variety of challenges and goals related to our data and the Internet of Things. This will help us embrace AI faster, inspire our brightest minds and expand their career opportunities.

Machines may not be taking over the world yet, but they are rising. In the near term, AI will be essential for automating and eliminating painful, time-consuming processes like patching, enabling us to focus on more innovative and interesting business opportunities. So while many people talk about how artificial intelligence will eliminate the need for human intervention, we don’t have to worry just yet. There is plenty we need to do as humans, and as IT professionals, to unleash the full potential of AI.

For now, AI is enhancing human potential rather than replacing it.

Watch me address cutting through the AI hype in “7 Artificial Intelligence Revelations from The Economist Innovation Summit,” or listen to the Luminaries Podcast with Dell Technologies’ Jeff Clarke, Vice Chairman of Products & Operations.


Humanity and Artificial Intelligence – Shape Our Future in Harmony, Rethink Our Societies.

Of all the emerging technologies that are set to impact the way we work and the way we live, Artificial Intelligence (AI) is, without doubt, the most challenging. Coupled with Machine Learning (ML), AI will not only make objects smarter or allow machines to recognize patterns and interpret data, it has the potential to change the face of the earth. Some see AI as a blessing, many others focus on the threat it poses to human dominance over machines.

When talking about AI and smart machines, examples spring to mind of a computer beating world chess champion Garry Kasparov in 1997, Google Assistant helping us in our day-to-day tasks, or – more recently – an AI program beating the world’s best professional poker players because it made better use of information that poker players do not share with each other. But the effects of AI will go much deeper. Just imagine what tasks computers can take over from us when they can be programmed to think like us, and how much faster they can be at performing repetitive tasks. What machines are already doing on the production line may well happen in our offices too.

Let AI Do the Work for Us

Recent surveys have shown that business leaders are divided over what the human-machine partnership will bring in terms of productivity. Research conducted by Vanson Bourne on behalf of Dell Technologies shows that 82 per cent of business executives expect their employees and machines to work as ‘integrated teams’ within the next few years. Yet only 49 per cent believe that employees will be more productive as more tasks get automated, and only 42 per cent think we will get more job satisfaction from offloading tasks we don’t want to do to intelligent machines. Employees have their doubts too, as research from The Workforce Institute reveals: although 64 per cent would welcome AI if it simplified or automated time-consuming internal processes or helped balance their workload, six out of ten employees find their employers are not very transparent about what AI will mean for the way they do their jobs – if they still have them, that is, after this next automation wave.

We are only in the first chapters of the book we are writing for our future, but AI and ML are already having a profound influence on all aspects of human life, and we need to ensure that AI is not writing the ending for us. Consider just these examples:

  • In healthcare, deep learning systems can read images and diagnose pneumonia as accurately as radiologists.
  • At CES in Las Vegas, AT&T announced that it’s testing a new ‘structure monitoring solution’, a system to help cities and transportation companies monitor bridges and alert them if structural stability is compromised.
  • At Georgia Tech, a chatbot is emailing assignments and answering questions from students.
  • Intelligent systems are helping HR departments analyze employee sentiment in real time, helping reduce employee attrition.

The list of AI applications is endless, and you will find examples like these in any industry. The big question everyone is asking is whether AI will help us, or whether AI systems will replace human beings in the workplace, making us completely redundant. The answer to this million-dollar question is not so simple. On the one hand, it is clear that a number of jobs are on the line. Just think of truck drivers losing their jobs to convoys of driverless trucks, call center operators being replaced by conversational AI systems, or financial analysts getting the boot from robo-advisors.

Humans in the Loop

On the other hand, AI will also create new jobs. If you have learning systems, someone will need to supervise those learning efforts, programmers will have to find the right algorithms and embed them in systems, and so on. In fact, some analysts see AI as what is called ‘an invention that makes inventions’, creating endless new possibilities. AI will definitely have a direct impact, but it will also spur on new developments that will, in turn, create new applications and new jobs. Enough new jobs for Gartner’s Peter Sondergaard to claim – during last year’s Gartner IT Symposium – that AI will be a net job contributor from 2020 onwards, eliminating 1.8 million jobs while creating 2.3 million new ones.

I also tend to think AI will bring more benefits than troubles, and I strongly believe that humans will be augmented by AI rather than replaced. We need to remember that self-taught AI has trouble with the real world: emotions are key, and multiple options and complex real-life situations are hard for AI to handle. As Kevin Kelly, author of The Inevitable, says: “the chief benefit of AI is that it does NOT think like humans. We’re going to invent many new types of thinking that don’t exist biologically and that are not like human thinking. Therefore, this type of intelligence does not replace human thinking, but augments it.” In fact, AI cannot do without human intervention, and there will always be ‘humans in the loop’, as AI specialist J.J. Kardwell comments: “humans should be focused on teaching machines so that machines can focus on performing jobs that are too big for humans to process.” According to this school of thinking, humans and robots working in harmony will yield the best results.

For Marvin Minsky, what counts in humans is our mind, our spirit – the brain being a machine like any other, even if modeling the brain’s plasticity and dynamism is far from easy. That is a “mechanistic” vision of the human. But, as Jean-François Mattei highlights, the brain is first of all a social and cultural organ, one that adapts to human relationships and to our environment, producing fine, well-adapted decisions linked to our conscience and to our freedom to think and create in innovative ways. Our liberty is unique, and how we exercise and adapt it is precious. As Lucretius asks, “If the chain of causes is governed solely by laws, what meaning can you give to the freedom of the will and human action?”

Creativity Rules

Does this mean we should stick our heads in the sand and carry on as if nothing will change? Of course not. The future will be different and we need to prepare for it. The educational world has the huge task of preparing the workers of tomorrow. The human race has always excelled in creativity, from the paintings in the caves of Lascaux through architecture to modern music. What education should focus on is stimulating that creativity and teaching people to combine it with the power of AI to make our dreams come true. After all, machines cannot replace our feelings, and I am convinced human beings will not turn into emotionless cyborgs. In that sense, I agree with the French philosopher Jean-François Mattei (‘Question de conscience’) that transhumanism should not lead to technological totalitarianism. Instead, AI will help us become less like the machines we are right now, toiling for ten hours a day to get through our to-do lists. It is our job to invent new lifestyles and imagine new societies, using AI to reorganize the way we live and work and to free up more time to connect with and take care of other humans and living species, making our world a better place for everyone.

All in all, I think we should be hopeful that AI and ML are going to help us weather the changes ahead, and we should not fear the machine. I share this belief with Dell Technologies CEO and Chairman Michael Dell: “Computers are machines. The human brain is an organism. Those are different things. Maybe in 15 to 20 years from now, we’ll have computers that have more power than 10 million brains in terms of computational power, but it’s still not an organism.” We must then take an intuitive approach to imagining how the future is formed with artificial intelligence – as Bergson would have it, working intelligently on joining forces, not on one taking over the other. In closing, the notion of ethical conviction should not ignore the dimension of alterity emphasized by Kant.




AI — The Time for Action is Now

The U.S. is already one step ahead of the game — last December, members of the American Congress presented a bill on the ‘Development and Implementation of Artificial Intelligence’. Its aim is to establish a Federal Advisory Committee for AI. The drafters reasoned that understanding AI “is critical to the economic prosperity and social stability of the United States.”

How forward-thinking of them. But they have nothing on the Chinese — the Chinese State Council has stated that it wants China to be the leader in AI by 2025, which implies that they want to knock the U.S. from its pole position. Even the U.K. is eyeing up a lead position. But what about Germany? Ever since the pandemonium of last summer’s election, when the two largest parties frantically pushed for an AI ‘masterplan’ after China’s statement, not much has actually happened.

I didn’t expect a change in pace either, though. I think it’s much more important that politicians have the issue on their radar at all, and that they understand the implications of artificial intelligence.

Here is where opinions are diametrically opposed. Tesla’s Elon Musk and celebrity physicist Stephen Hawking have branded this technology “our biggest existential threat.” Steve Wozniak has attempted to offer a more balanced opinion, while Mark Zuckerberg has praised AI to the high heavens.

Of course, businesses are optimistic about what the future holds for AI, and are already using it for a wide array of applications: from communication, to cognitive searches and predictive analytics, to translation. The next big thing is the autonomous car. The (German) automotive sector, which used to focus on tin and steel, is also undergoing significant changes. Other sectors are following suit. To companies, AI is the game changer that will improve all our lives and revolutionize the economy. The results of our latest study on the working world of 2030 show that the majority of the 3,800 business leaders surveyed already anticipate a close human-machine symbiosis in the coming years. However, the same study also shows a clear split in opinions. Roughly half of the respondents were pessimistic about the effects of AI, while the other half were optimistic.

So what do we do now? The most important question concerns the implications that AI will actually have — will it usher in a bright new future or social disorder? The discussion on job losses is already in full swing.

Apocalyptic scenarios aren’t the only things we should be thinking about, but at the same time, it is worth reflecting on regulation at this early stage. I think that the AI expert Oren Etzioni has the right attitude. Following the example of Isaac Asimov’s laws of robotics, he suggests three simple rules for artificial intelligence systems so that we are equipped for the worst-case scenarios and can prevent any conceivable damage. He says that AI must be strictly regulated, that AI must be discernable from humans, and that AI cannot arbitrarily handle confidential data. These may seem like superficial rules, but they serve as a very good starting point and basis for discussion.

Are these ideas a little too ahead of their time? I don’t think so. If we tackle these issues as early on as possible, then we will be in a much better position to plan the future of artificial intelligence. Isaac Asimov wrote his laws of robotics way back in 1942, and they are still considered exemplary, even today. And if that’s not a good source of motivation, then I don’t know what is.




Design for cognitive experiences, Part 1: The human-to-machine communication model

Discovering better insights, with higher confidence, faster than humanly possible – that is the key to taking a first step in the right direction for a cognitive application: asking not what it could do, but what it should do. With artificial intelligence (AI), we have decades of data and research into human thought processes and communication to use as a blueprint. To simulate human relationships, we begin by observing and better understanding ourselves.


Machine Militaries: The Future of Artificial Intelligence and National Security

With global superpowers forging the path, Artificial Intelligence (AI) is fueling the automated arms race. While still young in its development, AI has transformed the international landscape of security innovation. Russia’s military modernization program triggered heavy investment in the automation of its armed forces. The United States Department of Defense inaugurated the Algorithmic Warfare Cross-Functional Team in April 2017 to advance AI technologies in the software of machine weaponry. China released a “new generation of Artificial Intelligence development plan” detailing the government’s thirteen-year strategy to “become the major Artificial Intelligence innovation center in the world.” It is no secret that AI has changed the course of national security as we know it.

Machine Militaries

In the future, humans will delegate their most dangerous, war-fighting tasks to autonomous technology. We already see machines replacing humans in life-threatening situations – for example, Explosive Ordnance Disposal (EOD) robots detecting and disabling improvised explosive devices (IEDs) – but humans are still in control of the situation, maneuvering the robot and making the decisions.

However, AI will eventually transform military forces. Autonomous weapons, vehicles, aircraft and drones will catalyze a shift from manned to unmanned combat missions. The greater reliance on cyber weapons, combined with their rapid development and availability, will grant smaller nations and non-state actors with less powerful militaries the ability to defend and demand their interests while carrying greater weight in influencing international policies.

AI will also be critical to the future of cyber security. Countries will use AI to assess and monitor vulnerabilities in computer systems, continually detecting threats and reinforcing cyber defenses. Cyber offense will also improve significantly through AI. Nations armed with complex AI cyber weapons, like a machine-learning Advanced Persistent Threat (APT), will quickly hunt for system weaknesses and execute bespoke hacking and attack campaigns.

Ultimately, Artificial Intelligence will usher in an age where military superiority carries no relation to population or economic might.

Artificial Intelligence as National Intelligence

The continued development of Artificial Intelligence will significantly impact the way countries collect, analyze, and even generate intelligence. According to a study by Dell EMC Corporation, the data produced by the digital universe will double every three years. Intelligence analysts scouring this infinite sea of information eventually will be aided, or more likely replaced, by AI machinery.

The capabilities of AI in intelligence collection and analysis will be unmatched by human potential, like the ability to photograph and analyze Earth’s surface daily, examining every square foot of the planet every twenty-four hours.

AI-aided surveillance will also mean the end of guerrilla warfare, as terrorist groups and other threatening organizations struggle to avoid leaving a digital footprint. Machine learning will offer nations the ability to derive and process unstructured sensor data, amplifying the amount of data captured via Signals Intelligence (SIGINT) and Electronic Intelligence (ELINT).

AI will also grant countries the capacity to make hyper-realistic propaganda or execute social engineering schemes, similar to Russia’s social media interference in the 2016 U.S. presidential election. AI-equipped countries will use face and voice mapping, recognition, and editing to create digital puppets. These life-like portrayals of world leaders, diplomats, citizens, and others spouting engineered messages will make news media increasingly indistinguishable from the truth.

Lastly, a far greater reliance on cyber capabilities in national security will make protecting and collecting sensitive intelligence dramatically more critical. Actors could launch intricate cyber attacks aiming to hack, damage, collect, and/or sell government secrets, putting national safety at risk.

Automated Economies

Artificial Intelligence will thoroughly transform economies as projectors of power. In the future, AI technology will accelerate global innovation and productivity. Scientists from all fields will research, develop, test, and accurately assess hypotheses at a fraction of the time consumed today in the identical process. Machine learning algorithms and mechanical simulations will automatically produce designs, optimizing existing products and creating entirely new inventions.

Innovation will flourish and a plethora of highest-quality devices will pour into the markets, making technology more accessible and increasing computer literacy nationwide. However, as AI spreads deeper roots into national economies, it will force other jobs out of existence, making thousands of occupations obsolete. Humans will not be able to compete with their computerized opposition. If the future of AI continues down this path, technology will reduce the demand for low-skill labor and obstruct opportunities to retrain, posing devastating consequences to countries’ economies and societies.


Smarter Is Better: Progress Made and Opportunity Ahead With AI and Machine Learning

Artificial intelligence, machine learning and big data have been making big headlines for the last few years – and for good reason. The amount of data being generated climbs every year thanks to the estimated 20 billion connected devices we’ll see by the year 2020…which is only a little more than two years away. Fast forward to 2050 – we’re looking at one trillion connected devices and “things.”

[Image: illustration of a woman’s head with a circuit board inside]

All that data – combined with advancements in processing and compute power, and cloud computing – has made artificial intelligence (AI) and machine learning innovation possible. These technologies are game-changers for machine automation and productivity. We’re giving machines the ability to learn and think – just like we do as humans.

The devices, cars, and systems we rely on are getting smarter – with intelligence and compute power that can analyze data and patterns to help make informed decisions that lead to tangible business outcomes. Think about the last time you shopped online and that recommendation for an additional purchase was spot on, or your favorite music app served up the right song at the right time. How’d they know? These are simple examples, but what they have in common are systems that quickly make sense of data and patterns to predict what you need and want, when you need and want it.  And for enterprise organizations – that’s a huge competitive advantage when it comes to customer loyalty, engagement and satisfaction.
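As a toy sketch of how one of those spot-on recommendations might be computed, here is item-to-item collaborative filtering in plain NumPy; the ratings matrix and item names are entirely made up.

```python
# Recommend an item by finding items similar to what a user already likes.
import numpy as np

items = ["album A", "album B", "album C", "album D"]
# Rows are users, columns are items; 0 means "hasn't listened yet".
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)

# Cosine similarity between item columns: items liked by the same users
# come out similar, which is why the "right song" can feel spot on.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

user = ratings[0]              # recommend for the first user
scores = sim @ user            # weight every item by taste similarity
scores[user > 0] = -np.inf     # don't re-recommend what they already rated
print("try:", items[int(np.argmax(scores))])
```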

These advancements are more than 50 years in the making, with some of the world’s brightest minds and visionaries in history – from Alan Turing to Ray Kurzweil and Stephen Hawking – pushing the limits (and philosophical debates) on what’s possible with algorithms and code.

And while the thought of singularity and AI-powered humanoids is fantastic science fiction and makes for great TV and movies – what I’m most excited about is the way AI and machine learning are going to bring real business and personal value to our everyday lives right now, and in the future. By the time we hit 2030, we’ll be working alongside machines in ways we once only dreamed about. The way we learn, work, bank, commute – even how we experience the routine health check-up, all changes.

…what I’m most excited about is the way AI and machine learning are going to bring real business and personal value to our everyday lives…

I also spend a lot of time thinking about how we take all those AI smarts and integrate them as part of the products and services that we deliver.

As a 30-year veteran, I’ve had a front-row seat to this evolution – but I haven’t just watched. The collective businesses that make up Dell Technologies have anticipated the trends and played a role in computing and data center innovation that now spans compute and processing, data, AI, machine learning and analytics, delivering an entire end-to-end IT infrastructure from the edge to the core to the cloud that makes digital transformation possible. In fact, our Products and Operations organization is the only one of its kind to look across the entire IT infrastructure and deliver the solutions and services that, combined, are a game-changing force for our customers.

For instance:

  • Our infrastructure solutions are getting smarter about data by learning from data. For example, we’re bringing more autonomous storage capabilities into our SC All Flash portfolio that can quickly make a decision about how and where data should be stored and protected based on its unique characteristics – critical when you think about the huge sets of data AI and machine learning are dependent upon.
  • We recently announced new Ready Bundles for Machine and Deep Learning to make high-performance computing (HPC) more accessible for organizations, delivering faster, better and deeper data insights that advance analytics for business intelligence.
  • We’re collaborating with Toshiba Digital Solutions to build AI for mission-critical use cases, such as preventive maintenance, crime prevention, disaster recovery, cyber security, demand prediction and transportation quality improvement.
  • We’re going all in on IoT with a brand new division dedicated to helping our customers maximize their opportunity in the market with services, solutions and financing options. For instance, did you ever think you could make kale taste better and grow at scale in an urban warehouse through the power of analytics? Well, that’s exactly what AeroFarms is doing with our edge computing solutions and VMware at the heart of their IT operation.
  • We’re bringing AI-driven productivity to PC computing with Dell Precision Optimizer that uses machine learning and cognitive technologies to automatically set up the desktop experience based on what it’s learned about the usage of that workstation and the necessary performance. And it learns over time, automatically adjusting every time a user logs in – driving speed in productivity and worker satisfaction.
  • And, we’re changing how often you have to change that laptop battery with machine learning built right into the battery pack, firing up and down based on peak usage hours and activities.

Where’s all this going?

Dell EMC CTO John Roese recently discussed how AI will do the thinking tasks while we humans can dedicate our brain power to more sophisticated tasks – or even earn ourselves a little more R&R to work better and smarter. We can let the machines multi-task a bit more, and we as humans can start to focus more on how to apply the learnings and data that our machine counterparts present to solve some of the biggest challenges in society – be it a more sustainable way to make plastic and keep it out of the ocean, or how to predict where natural disasters may strike next.

An early sign of AI for good is what Pivotal did with their client the Circular Board – together they launched Alice, the first-ever artificial intelligence platform to connect women entrepreneurs in real time with the resources they need to scale, based on start-up stage, location, industry, revenue and individual needs. As Alice populates, machine learning will allow her to predict founders’ needs and guide them to referrals, events, mentors and even access to capital.

These are just a few of the ways Dell Technologies is driving the AI and machine learning evolution. And we’re just getting started. Every day we are learning new things, and with the help of our machine partners, the opportunities ahead are endless.

And this is just the beginning of the conversation – we’ll be talking a lot more about how customers are using AI and machine learning in powerful yet practical ways…and not just from me, but from our CTOs and Fellows (the leading experts on this stuff :)) who are at the forefront of turning artificial intelligence into business and life intelligence.




Industry Verticals READY for Artificial Intelligence in 2018

Imagine what the world would be like if we could harness the multitude of data generated each day to catalyze positive change. What if we had the ability to predict and stop crimes before they happened, or could apply these same methodologies to save lives with better healthcare? Sound like the plots of many familiar movies? With recent advances in artificial intelligence, these outcomes are not only possible, but an exciting reality!

As we move swiftly into this new year, media, analysts and just about everyone is thinking about what will be ‘the next big thing’ in technology. Looking back at 2017, this was a hallmark year for AI enthusiasm and awareness. More industries and organizations embraced digital transformation and have come to value their data as a critical corporate asset. Now, building off of that momentum, 2018 will be the year that AI adoption reaches critical mass among organizations and professionals!

In speaking with customers over the past year, I’ve learned that many have already begun to experiment with machine and deep learning and artificial intelligence; some proactive customers have put AI-enabled capabilities into production, and nearly all are expected to make investments in the coming months. By the end of the year, we expect most enterprise customers will have one or more AI-enhanced services or products in production, and that the majority of smaller and mid-size companies will be executing AI technology evaluations and pilot programs (with some already in production as well).

When deployed strategically, AI technologies equip organizations to derive actionable data insights across virtually all industries, including energy, transportation, education, research, entertainment, hospitality, and so many more. In 2018, we predict the financial services, retail, healthcare and life sciences, and manufacturing industries will realize quick results with human-machine partnerships. Why is this the case?

Optimizing Financial Data

Financial services companies already possess vast data sets and conduct advanced analytics on business and customer trends. This expertise fosters an ideal culture for evaluating and adopting more powerful methods, as data becomes fuel for deep learning approaches. Artificial intelligence can be used to optimize all facets of financial data reporting, from risk assessments and growth projections to client satisfaction and fraud prevention. The ROI of attracting new customers is easily measured, and in 2018, these organizations can further improve performance by better understanding customer needs and reducing fraud and security breaches. As AI enables automated financial decisions at scale, it will converge with another disruptive technology, blockchain, to secure and validate automated transactions.

Maximizing Profit for Retailers

Retail has many potential uses for AI, such as understanding the target markets of products, improving advertising with personal information, and detecting fraudulent online purchases and theft in brick-and-mortar stores. Next year, these capabilities will increasingly become mainstream for small and mid-sized retailers. Dell EMC retail customers will continue to maximize profits by promoting offerings to the most likely buyers and predictively purchasing commodities by time and location. These AI-enabled successes set the foundation for future advances in retail operations, like implementing fully-automated customer support and autonomous product delivery operations.

Standardizing Efficiency in Healthcare

One of the reasons I love working in technology and for Dell EMC is the ability to help our customers leverage technology to advance human progress. In no field is that truer than in healthcare and life sciences, industries that are often the showcase examples for the power of artificial intelligence.

From the discussions we’re having with our healthcare customers, it is likely that in 2018, initial research projects will evolve into standard operating procedures. There is an abundance of image data for many medical ailments, like tumors, which is being used to train AI models to detect these conditions earlier and more accurately. Early results are so promising that many healthcare providers will regularly use deep learning solutions to support the diagnosis of cancer and other severe conditions this year. I believe the impact of AI in healthcare will be both wide and deep; as healthcare records become progressively digitized and fed into deep learning methods, they will help researchers understand health risks, improve detection and monitoring of conditions, and even predict health issues before they arise. That’s incredible progress! Moving forward, I believe that the era of personalized medicine will be furthered by pairing AI technologies with increasingly comprehensive and blockchain-secured data from IoT and other sources that complement lab results and caregiver observations.

Reducing Cost for Manufacturers

Cost reduction is of utmost importance to the manufacturing industry, whether the cost of components, failures, or maintenance. The ability to predict global supply chain costs and customer demand, and to intercede before problems occur, parallels the advances in the financial services industry I wrote about above. Embedded within manufacturing facilities and complex equipment are sensors that measure item productivity and the environmental conditions that impact reliability and maintenance, like power, temperature, and stressors. Our customers tell me that they are already benefitting from this data and leveraging it to develop AI-powered models that predict failures before they happen and improve customer satisfaction. For example, customers can be notified when firmware updates should be applied or support should be contacted, among many other use cases. Deep learning technologies are so powerful that we use them to bolster the reliability and support of our own offerings.

To me it is clear – artificial intelligence and machine and deep learning will continue to grow as all types of organizations understand the incredible power these technologies offer. That said, although revolutionary, the transformative results promised by artificial intelligence will not come without effort. To unlock the full capabilities of AI, organizations will have to do the heavy lifting of going “all in” on digital transformation, accelerating their computing methods, and embracing data sovereignty. It will be critical for data to be extensively collected, curated, and made available to all applicable use cases.

It’s About Knowing Where to Start

Although artificial intelligence has been around for decades, it’s still difficult to understand and use effectively and requires the right expertise and technologies. Fortunately, Dell EMC is the premier vendor for data storage in the world, and we have decades of experience in managing data, enabling data analytics, and working with customers to design and deploy AI solutions for deeper insights. And, it’s about to get even better!

In 2018, building on extensive work with customers and partners, Dell EMC will make simple AI solutions available to customers in all verticals. As we recently announced, our Ready Bundles for Machine Learning and for Deep Learning bring AI capabilities to the masses, including global companies, research labs, governments, and educational institutions. Our carefully designed, optimized, reliable, and scalable solutions integrate advanced processors, storage and networking technologies, and powerful AI-optimized software. These solutions simplify selection, deployment, adoption, and usage for our customers, thus minimizing the cost, effort, and frustration associated with DIY and public cloud solutions.

AI is complex, but we’re focused on simplifying and accelerating the journey for customers and helping more organizations achieve its promise and potential in 2018. The revolution has begun, and we expect all businesses will use and benefit from AI-powered solutions as we close the decade. Dell EMC Ready Bundles for Machine Learning and Deep Learning will deliver these critical capabilities, empowering customers to innovate, compete, and change the world!

Want to learn more about our thoughts on these topics? Read more about Dell Technologies’ 2018 predictions for artificial intelligence.




Analysis: China’s AI revolution threatens US

A new report from the Washington-based Center for a New American Security raised the alert level of the US defence community over China’s rise as an artificial intelligence (AI) superpower, one that could effectively destroy the American military by 2030.

The meticulous report no doubt will send a chill through the halls of the Pentagon.

The report, ‘Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power’, by Elsa Kania, paints a disturbing picture of China’s AI military modernisation programmes. Kania, as co-founder of the China Cyber and Intelligence Studies Institute, is well suited to write the investigative report using available Chinese-language open-source materials that reveal China’s military thinking and progress on AI.

Kania reported that China’s military is pursuing advances in ‘impactful and disruptive military applications of AI’ and has given them ‘high-level priority within China’s national agenda for military-civil fusion’. The goal is to become the world’s ‘premier innovation centre’ in AI by 2030.

According to the report, the Chinese military believes the advent of AI could fundamentally change the very character of warfare, transforming it from today’s ‘informatised’ warfare to ‘intelligentised’ warfare, in which AI will be critical to military power.

The result of this change would be the start of a major shift in China’s strategic approach, ‘beyond its traditional asymmetric focus on targeting US vulnerabilities to the offset-oriented pursuit of competition to innovate’.

China’s military is seeking ‘leapfrog development’ to achieve a ‘decisive edge’ in terms of ‘trump card weapons’ that provide a critical advantage in ‘strategic frontline technologies’ against the US during a war. The report pointed out that the number of Chinese publications on ‘deep learning’ has exceeded that of the US since 2014, and that China ranks second in AI patent applications, with 15,754 filed as of late 2016.

In July, China released the New-Generation AI Development Plan that articulated its ambition to lead the world in AI by 2030, becoming the premier global AI innovation centre. ‘Under this strategic framework, China will advance a three-dimensional agenda in AI: tackling key problems in research and development, pursuing a range of products and applications, and cultivating and expanding AI industry.’

This will include support for AI technologies that could result in paradigm shifts, including brain-inspired neural network architectures and quantum-accelerated machine learning. ‘The plan calls for building up national resources for indigenous innovation and pursuing continued advances in big data, swarm intelligence and human-machine hybrid intelligence…’

The report noted that Chinese teams dominated the ImageNet Large-Scale Visual Recognition Challenge, an AI computer vision contest, in 2016 and 2017. For the first time, at the 2017 annual meeting of the Association of the Advancement of Artificial Intelligence, China submitted an equal number of accepted papers compared to the US.

Then in November, Yitu Tech, a Chinese facial recognition start-up, took first place in the Facial Recognition Prize Challenge hosted by the Intelligence Advanced Research Projects Activity (IARPA). What the reader might find disturbing is that the Maryland-based IARPA reports to the US Office of the Director of National Intelligence and funds research across a range of technical areas, including mathematics, computer science, physics, chemistry, biology, neuroscience, linguistics, political science and cognitive psychology.

IARPA’s activities would be a natural fit for the Chinese Communist Party as it increases social control and stability through ‘new techniques for policing, censorship and surveillance, such as the installation of millions of surveillance cameras enhanced with AI technology’.

The report highlighted concerns the US should have about cooperative efforts with China. Chinese investments in Silicon Valley AI have fuelled the debate over whether the US Committee on Foreign Investment should expand reviews of Chinese high-tech investments, especially in AI. For example, the report pointed out that the USAF became concerned after Chinese investment in Neurala, an AI start-up known for ‘innovative deep learning technology that can make more reactive robots’. The company is building the ‘Neurala Brain’ with deep-learning neural network software.

Between 2012 and mid-2017, Chinese technology investments amounted to $19 billion in the US with particular focus on AI, robotics and augmented or virtual reality, said the report. In May 2014, Baidu Inc. established its Silicon Valley Artificial Intelligence Laboratory. In June 2014, Qihoo 360 Technology Co, a Chinese cybersecurity company, and Microsoft established a partnership in AI that focused on AI and mobile Internet.

In November 2015, the Chinese Academy of Sciences Institute of Automation (CASIA) and Dell established the Artificial Intelligence and Advanced Computing Joint Laboratory, which is pursuing development of cognitive systems and deep-learning technologies.

In January 2016, BEACON (Bio/computational Evolution in Action CONsortium), a centre located at Michigan State University, received funding from the US government via the National Science Foundation to establish the Joint Research Center of Evolutionary Intelligence and Robotics, headquartered at Shantou Technical University, also in partnership with the Guangdong Provincial Key Laboratory of Digital Signal and Image Processing.

In October 2016, Huawei Technologies devoted $1 million in funding to a new AI research partnership with the University of California, Berkeley. In April 2017 Tencent announced plans to open its first AI research centre in Seattle. That same month, Baidu Inc acquired xPerception, a US start-up specialising in computer vision.

The US is not the only accomplice. In 2011 and 2012, the University of Technology Sydney (UTS) established five research centres with Chinese universities that included centres on intelligent systems, data mining, quantum computation and AI. In 2017, UTS partnered with the China Electronics Technology Group (CETC) focusing on big data, AI and quantum technologies.

In 2014 Chinese drive-system maker Best Motion created a research and development centre at the University of Nottingham to develop high-quality servo drive systems for use in AI and robotics. In 2016 the Torch Innovation Precinct at the University of New South Wales was established as a joint China-Australia science and technology partnership to research military-relevant technologies, such as unmanned systems.

In March of this year the Hangzhou Wahaha Group constructed three AI centres in China and Israel as a collaboration between CASIA and the University of Haifa. In July, China, France and the Netherlands renewed an agreement for a joint Sino-European Laboratory in Computer Science, Automation and Applied Mathematics, in partnership with CASIA with a major focus on AI.

If there was one turning point in Chinese military attitudes towards AI, it came in March 2016, when Google DeepMind’s AlphaGo beat the world Go champion Lee Sedol. Lee’s defeat ‘captured the PLA’s imagination at the highest levels, sparking high-level seminars and symposiums on the topic’, the report said.

There was also a rise in Chinese military analysis of the US Defense Advanced Research Projects Agency’s programme Deep Green, which is a system that supports commanders’ decision-making on the battlefield through advanced predictive capabilities, including the ‘generation of courses of action, evaluation of options and assessment of the impact of decisions’.

As recently as September, the China Institute of Command and Control (CICC) sponsored the first Artificial Intelligence and War-Gaming National Finals, convened at the National Defense University’s Joint Operations College. It involved a ‘human-machine confrontation’ between top teams and an AI system called CASIA-Prophet 1.0, which was victorious over the human teams by a score of 7 to 1.

Chinese military thinkers now anticipate an ‘intelligentisation of warfare that could result in a trend toward battlefield singularity’. Under these conditions, humans would no longer have the capacity to remain directly ‘in the loop’, but would still possess ultimate decision-making authority as a ‘human on the loop’, i.e. exercising supervisory control.

Chinese military strategists want to develop synergies between intelligentised or autonomous systems and directed-energy weapons that will enable ‘light warfare’ involving the fusion of real-time information and ‘zero-hour’ attacks. This will include all forms of military weapons. Chinese AI start-up IFlytek is working with the Chinese military on a voice recognition and synthesis module for intelligence processing for this very reason.

Of particular concern is the Chinese military’s Strategic Support Force (SSF), which seeks to build up advanced cyber warfare capabilities leveraging big data and machine learning. According to the report, the SSF’s Information Engineering University has developed methods to detect and mitigate distributed denial of service (DDoS) attacks through pattern matching, statistical analysis and machine learning, as well as to perform advanced persistent threat detection based on big data analysis.
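To illustrate the general class of technique being described (and not any specific system mentioned in the report), here is a minimal sketch of anomaly-based traffic detection, assuming scikit-learn; every feature and number is invented.

```python
# Flag DDoS-like traffic sources as statistical outliers with an
# unsupervised model trained only on normal traffic (hypothetical data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Invented per-source features: requests/sec, mean packet size (bytes).
normal = np.column_stack([rng.normal(20, 5, 500), rng.normal(800, 150, 500)])
flood = np.column_stack([rng.normal(900, 50, 10), rng.normal(64, 5, 10)])

detector = IsolationForest(contamination=0.02, random_state=1).fit(normal)
print(detector.predict(flood))  # -1 marks anomalous (attack-like) sources
```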

China’s national strategy of military-civil fusion enables China to transfer dual-use technological advances to build up military capabilities while promoting economic growth. The report advised the US government to compete with and counter Chinese AI advances, and suggested the Pentagon consider supporting research to track China’s AI defence innovation ecosystem.

Further, the report recommended reforms to laws designed to constrain ‘illicit and problematic’ technology transfers and changes on how the Committee on Foreign Investment decides what investments and acquisitions are a threat to national security.


3 Keys to Winning the Great Artificial Intelligence (AI) War!

There is a war a-brewin’, but this war will be fought with wits and not brute strength. Ever since Russian President Vladimir Putin’s declaration that “the nation that leads in AI (Artificial Intelligence) will be the ruler of the world,” the press and analysts have created hysteria regarding the ramifications of artificial intelligence on everything from public education to unemployment to healthcare to Skynet.

Note: artificial intelligence (AI) endows applications with the ability to learn and adapt automatically from experience by interacting with their environment. See the blog “Artificial Intelligence is not Fake Intelligence” for a more detailed explanation of artificial intelligence and machine learning.

The Fast Company article “How to Stop Worrying and Love the Great AI War of 2018” projected that the AI battle would ultimately come down to the “AI Big 6”: Alphabet/Google, Amazon, Apple, Facebook, IBM, and Microsoft. However, there are other contenders worthy of consideration, including GE, Tesla, Netflix, Baidu, Tencent, and Alibaba.

But what are the characteristics of organizations that will be the ultimate winners in this Great AI War? What are the behaviors and actions that will distinguish those organizations that capitalize on this AI gold rush while others “fumble the future”?

I believe that the AI winners will have the following characteristics:

  1. Users, not purveyors, of AI technology
  2. Embrace open source for technology agility (independence)
  3. Mastery of Big Data (and no, Big Data is not dead)

Let me state my case.

#1 Users, Not Purveyors, of AI Technology

The Market Capitalization Leaderboard shown in Figure 1 offers important clues as to which organizations will likely be the AI winners. What will set these organizations apart is not the selling of technology, but their ability to leverage AI for “value capture.”

Figure 1: Market Capitalization Leaders as of May 26, 2017.

By the way, I think Kleiner Perkins was lazy in classifying “Industry Segment.” The market leaders are less purveyors of AI technology than they are users of AI technology.

  • Less than 10% of Amazon’s revenue comes from technology (cloud); $12B in cloud revenue out of a total revenue of $136B in 2016. So what Industry Segment are they in?
  • Google had quarterly revenues (Q1, 2016) of $26B of which digital media/advertising (search) represented $23B. Their “other” businesses (including Google Cloud) were only $3B. So what Industry Segment are they in?
  • Apple’s most recent quarterly (Q3, 2016) revenues were $42B out of which the iPhone (personal communications, information and entertainment) and the associated iPhone ecosystem (iTunes, Apple Music, App Store) comprised an aggregated $37.5B.
  • Finally, I’m not aware of any AI or data technologies that Facebook sells to the general market. Facebook generated $9.3B in revenue in Q2, 2017 of which $9.16B came from Ad revenue. So what Industry Segment are they in?

Mastering Value Capture. Just having the technology is not sufficient; it’s how you use the technology to derive and then drive new sources of customer, business, operational, and financial value that matters. Ultimately, the AI war is about “value capture.”

The companies listed in Figure 1 are trying to dominate markets, not technology. For example:

  • Apple (#1) seeks to dominate personal communications
  • Google/Alphabet (#2) seeks to dominate digital media, advertising and personal communications
  • Amazon (#4) seeks to dominate online commerce
  • Facebook (#5) seeks to dominate social media and advertising

Each of these AI leaders seeks to extend their value capture capabilities into new markets, including transportation (autonomous vehicles), healthcare, finance, media, and entertainment.

Other market leaders are also moving aggressively to exploit the power of AI to capture more customer, product, and operational value. JPMorgan (#11) is focused on building an AI platform (see “JPMorgan Takes AI Use to the Next Level”) that will allow JPMC to dominate financial trading. And GE (#16) has made a strategic bet on its Predix platform (see “GE’s Big Bet on Data and Analytics”) as the platform for dominating the Industrial Internet of Things.

Microsoft (#3) is the one exception as Microsoft is a purveyor of technology. But even Microsoft is branching beyond just selling technology into trying to dominate markets such as digital media, entertainment, and social media where their AI “chops” can give them competitive advantages (see “The Jewel of Microsoft’s Earnings”).

#2 Embrace Open Source for Technology Agility (Independence)

AI leaders will exploit open-source business models to establish platform dominance/standardization, and create technology agility and independence. They will develop an enabling technology, and then give it away via open source. This enables them to encourage the growing community of developers, especially those up-and-coming developers in universities and research labs, to build out and create de facto standards around their enabling technologies.

Open Source Leaders. The Global AI winners are significant contributors to the artificial intelligence and machine learning open source communities. This includes developments such as Amazon Machine Learning, Google TensorFlow, Facebook Caffe2, Microsoft Azure ML Studio, Microsoft Distributed Machine Learning Toolkit, Facebook GraphQL, and Facebook Torch.
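Part of what makes these giveaways effective is how low they push the barrier to entry. As a rough illustration (not drawn from any of these projects’ documentation), a few lines of Google’s open-source TensorFlow are enough to define and train a working classifier; the data and network below are arbitrary toy choices:

```python
# Minimal sketch: training a tiny classifier with open-source TensorFlow.
# The data and architecture are toy choices for illustration only.
import numpy as np
import tensorflow as tf

# Synthetic data: label is 1 when the four features sum past a threshold.
x = np.random.rand(200, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]
```

A graduate student who can prototype a model this quickly tends to standardize on the framework that made it possible, which is precisely the de facto standardization these companies are after.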

The leadership role that the “Great AI War” combatants are playing can be seen in many open source projects. For example, Torch is an open source machine learning library and scientific computing framework. The “official maintainers” of Torch are:

  • Research Scientist @ Facebook
  • Senior Software Engineer @ Twitter
  • Research Scientist @ Google DeepMind
  • Research Engineer @ Facebook

Training and Education. Another strategy from the global AI leaders is the creation of community and industry training and education opportunities around their open source technologies. For example, Google is committing $1 billion to train American workers to build new businesses with Google’s AI tools (see “Google Commits $1 Billion in Grants to Train U.S. Workers for High-Tech Jobs”).

Avoiding Technology Lock-in. But equally important is that these AI leaders are seeking to avoid technology and architecture lock-in. They have watched old-school organizations struggle with proprietary software packages that took months, if not years, for upgrades and bug fixes, while paying burdensome annual maintenance fees (at 33% of list price, you are buying the entire software package again every 3 years). In a world where the enabling data and analytic technologies change almost daily, technological and architectural agility (at scale) and independence are mandatory for organizations looking to win the Great AI War.

#3 Mastery of Big Data

Everyone knows about the astounding growth of big data over the last decade as organizations focused on capturing detailed customer, product, operational and market data. Initially fueled by commerce, web and social media data, big data has accelerated with the growth of video, wearables, and the Internet of Things. (See Figure 2).

However, organizations have struggled to monetize this wealth of data. Enter artificial intelligence.

Figure 2: Fueling the Insatiable Appetite for Data

More Data = Better AI. Artificial intelligence can exploit massive data sets to identify patterns on a scale that flummoxes traditional Business Intelligence “slice and dice” and query technologies. Data is the food that feeds AI: the more data the models consume, the smarter the AI gets. For example, Facebook is mastering facial recognition via its DeepFace deep learning application by virtue of owning the world’s largest repository of photos.
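A quick sketch makes the point concrete: train the same simple model on progressively larger slices of a dataset and watch held-out accuracy climb. The synthetic data and logistic regression below are illustrative stand-ins, not Facebook’s DeepFace:

```python
# Illustrative sketch of "more data = better AI": the same model, trained
# on progressively larger slices of data, scores higher on held-out data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n in (100, 1000, 10000):
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>6} examples -> test accuracy "
          f"{clf.score(X_test, y_test):.3f}")
```

The curve eventually flattens, which is exactly why the AI leaders keep hunting for new and richer data sources.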

To illustrate the symbiotic relationship between big data and AI, consider autonomous vehicles (AVs). AVs require enormous quantities of data to feed their machine learning algorithms: it would take tens of thousands of hours of real-world driving data, across a wide variety of driving scenarios, to teach cars how to navigate on their own. To address this data volume problem, AV companies are using the video game “Grand Theft Auto” to generate enough data to train their vehicles (see “GTA is Teaching Self-Driving Cars How to Navigate Better in the Real World”).

Data Lake.  Leading AI organizations are exploiting the data lake concept to not only store the growing wealth of structured and unstructured (internal and publicly-available) data, but to provide an elastic, scalable, self-provisioning data science platform for “collaborative value creation” in building the machine learning and artificial intelligence models (see “Data Lake Business Model Maturity Index” for more details on data lake business model maturation).

Exploiting the Economic Value of Data. Leading AI organizations realize that data and analytics are unlike any traditional corporate assets. Data and analytics are digital assets that never wear out, never deplete, and can be used simultaneously, at near-zero marginal cost, across a virtually unlimited number of business and operational use cases. Understanding the true economic value of the organization’s data can help to prioritize the technology and business investments that accelerate value capture from these data sources (see the University of San Francisco research paper “Determining the Economic Value of Data” for more details).
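A toy calculation shows the asymmetry. A dataset’s curation cost is paid once, while its value compounds with each additional use case it serves; all figures below are invented purely for illustration:

```python
# Toy economics of a data asset (all figures invented): curation cost is
# paid once, marginal cost per additional use case is near zero, so net
# value compounds as the same dataset is reused across use cases.
CURATION_COST = 500_000        # one-time cost to acquire and clean the data
MARGINAL_COST_PER_USE = 5_000  # near zero relative to curation
VALUE_PER_USE_CASE = 250_000   # assumed value captured per use case

for n_use_cases in (1, 3, 10):
    total_cost = CURATION_COST + MARGINAL_COST_PER_USE * n_use_cases
    total_value = VALUE_PER_USE_CASE * n_use_cases
    print(f"{n_use_cases:>2} use cases: "
          f"net value = ${total_value - total_cost:,}")
```

A physical asset, by contrast, depletes or wears out with each additional use; that is the difference the economics of data exploit.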

Conclusion: How to Become an AI Winner

As has been discussed many times in my blog series, and explored in detail in my book, “Big Data MBA: Driving Business Strategies with Data Science,” AI winners will ultimately be those organizations that are the most effective at leveraging data and analytics to power their business models (see Figure 3).

Figure 3: How Effective Is Your Organization at Leveraging Data and Analytics to Power Your Business Models?

Ultimately, AI winners will master three key characteristics:

  • Focus on value capture by identifying, validating and prioritizing the organization’s key business and operational use cases (see “Use Case Identification, Validation and Prioritization”).
  • Avoid technology and architecture lock-in, and create technology independence, via an open source technology strategy.
  • Master Big Data and the data lake to exploit the unique economic value of data and analytic digital assets (see “Data Lake Business Model Maturity Index”).

Finally, let’s have some fun with this blog and think outside the box about some hypothetical scenarios in which companies exploit this AI gold rush:

  • What would be the business model ramifications to GE if they were to open source Predix and offer Predix training to universities and third party developers?
  • What would be the business model ramifications to JPMC if they were to open source their trading platform to universities and third party developers?
  • What would be the business model ramifications if IBM moved out of the technology purveyor business and instead acquired companies in financial services and healthcare where their Watson AI platform could create market dominance?

As the world prepares for the impending great AI war, now is not the time for organizations to be shy or to cling to old, outdated business models.

Fortune Favors the Brave.

Sources

Figure 1: ScoopNest, “2017 global market capitalization leader board: tech is 40% of top 20 companies and 100% of top 5”, and Consultancy UK, “Market capitalisation of world’s 100 biggest companies hits $17.4 trillion”

The post 3 Keys to Winning the Great Artificial Intelligence (AI) War! appeared first on InFocus Blog | Dell EMC Services.

