Neural Networks in Deep Learning

Neural networks are algorithms that are loosely modeled on the way brains work. They are of great interest right now because they can learn how to recognize patterns. A famous example involves a neural network that learns to recognize whether or not an image contains a cat. In this article, I’m providing an introduction to neural networks. We’ll explore what neural networks are, how they work, and how they’re used in today’s rapidly developing machine-learning world.

Before we look at different types of neural networks, we need to start with the basic building blocks. And these aren’t hard. There are just five things you need to figure out:

  • Neurons
  • Inputs
  • Outputs (called activation functions)
  • Weights
  • Biases

I’ll summarize these terms below, or you can take a look at this Oracle blog post on machine learning for a more detailed explanation.

How Neural Networks Work

Neurons are the decision makers. Each neuron has one or more inputs and a single output called an activation function. This output can be used as an input to one or more neurons or as an output for the network as a whole. Some inputs are more important than others and so are weighted accordingly. Neurons themselves will “fire” or change their outputs based on these weighted inputs. How readily they fire depends on their bias. Here’s a simple diagram covering these five elements.

Diagram of Neurons, Inputs, Outputs, Weights, Biases

I haven’t represented the weight and bias in this diagram, but you can think of them as floating point numbers, typically in the range of 0-1. The output or activation function of a neuron doesn’t have to be a simple on/off (though that is the first option) but can take different shapes. In some cases, such as the third and fifth examples above, the output value can go lower than zero. And that’s it!
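
To make these five elements concrete, here is a minimal sketch in Python (my own illustration, not from the original article) of a single neuron: each input is multiplied by its weight, the bias is added, and the result goes through an activation function. The step, sigmoid, and tanh functions below are common choices that roughly match the on/off and S-shaped output profiles described above; the specific numbers are made up.

import math

def step(x):
    """Simple on/off activation: fires (1) only if the weighted input exceeds 0."""
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    """Smooth S-shaped activation with outputs between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """S-shaped activation whose output can go below zero (range -1 to 1)."""
    return math.tanh(x)

def neuron(inputs, weights, bias, activation=sigmoid):
    """One neuron: weight each input, add the bias, apply the activation function."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(weighted_sum)

# Example: two inputs, the first weighted more heavily than the second.
print(neuron([0.9, 0.2], weights=[0.8, 0.1], bias=-0.5))

Swapping the activation argument changes the output profile of the same neuron without touching its weights or bias.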

Examples of How Neural Networks Work

So now that you have the building blocks, let’s put them together to form a simple neural network. Here’s a network that is used to recognize handwritten digits. I took it from this neural network site, which I’d recommend as a great resource if you want to read further about this topic.

A simple neural network example

Here you can see a simple diagram with inputs on the left. Only eight are shown but there would need to be 784 in total, one neuron mapping to each of the 784 pixels in the 28×28 pixel scanned images of handwritten digits that the network processes. On the right-hand side, you see the outputs. We would want one and only one of those neurons to fire each time a new image is processed. And in the middle, we have a hidden layer, so-called because you don’t see it directly. A network like this can be trained to deliver very high accuracy recognizing scanned images of handwritten digits (like the example below, adjusted to cover 28×28 pixels).
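
As a rough sketch of how such a network computes its outputs (my own illustration; the hidden-layer size of 30 and the use of 10 output neurons, one per digit, are assumptions, and the random weights stand in for values that would normally come from training), a forward pass might look like this in Python:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed layer sizes: 784 input pixels -> 30 hidden neurons -> 10 output digits.
W_hidden = rng.normal(size=(30, 784))   # weights into the hidden layer
b_hidden = rng.normal(size=30)          # one bias per hidden neuron
W_output = rng.normal(size=(10, 30))    # weights into the output layer
b_output = rng.normal(size=10)          # one bias per output neuron

def forward(pixels):
    """Forward pass: 784 pixel values -> hidden layer -> 10 output activations."""
    hidden = sigmoid(W_hidden @ pixels + b_hidden)
    output = sigmoid(W_output @ hidden + b_output)
    return output

image = rng.random(784)                 # stand-in for one 28x28 scanned digit
print(forward(image).argmax())          # index of the most strongly firing output neuron

Adding a second hidden layer of the same kind between the two shown here would give the “at least two hidden layers” structure described below as the usual cutoff for deep learning.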

Scanned image of handwritten digits for neural network

But a network like the one shown above would not be considered by most to be deep learning. It’s too simple, with only one hidden layer. The cutoff point is considered to be at least two hidden layers, like the one shown below:

Neural network with two hidden layers

I glossed over what the hidden layer is actually doing, so let’s look at that here. The input layer has neurons that each map to an individual pixel, while the output neurons effectively map to the whole image. The hidden layers map to components of the image. Perhaps they recognize a curve, a diagonal line, or a closed loop.

But importantly, those components in the hidden layers map to specific locations in the original image. They have to. There are hard links from the individual pixels on the left. So a network like the one above would not be able to answer a simple question on the image like the one below: how many horses do you see?

Horses and neural networks

I could show images that had horses anywhere on the picture and you would have no problem determining how many there were. You’d do so by recognizing the elements that make up a horse, no matter where in the picture they occurred. And that’s a very good thing, because the world we live in requires us to recognize objects that are in front of us, or off to the side, fully visible, or partially obscured. To solve problems like this, we need a different kind of network like the one you see below: a convolutional neural network.

Let’s imagine we’re working with images that are 28×28 pixels again, but this time we can’t rely on having one image fixed in the center. Look at the logic of that first hidden layer. All of those neurons are now linked to specific, overlapping areas of the input image (in this case a 5×5 pixel area). Starting with this basic structure, and adding some additional processing, it’s possible to build a neural network that can identify items in a position-independent way. Incidentally, neurons in the visual cortex of animals, including humans, work in a similar way. There are neurons that only trigger on certain parts of the field of view.
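
Here is a minimal sketch of that first hidden layer’s logic (my own illustration, assuming a 28×28 input, a single 5×5 set of shared weights, and a ReLU activation; real convolutional networks use many such filters plus pooling and further processing):

import numpy as np

rng = np.random.default_rng(0)

image = rng.random((28, 28))        # stand-in for a 28x28 pixel input image
kernel = rng.normal(size=(5, 5))    # one shared 5x5 set of weights
bias = 0.1

# Each hidden neuron looks only at one overlapping 5x5 patch of the input,
# and every neuron shares the same weights, so a feature (say, a curve or a
# diagonal line) is detected wherever it appears in the image.
feature_map = np.zeros((24, 24))    # 28 - 5 + 1 = 24 positions in each direction
for row in range(24):
    for col in range(24):
        patch = image[row:row + 5, col:col + 5]
        feature_map[row, col] = np.maximum(0.0, np.sum(patch * kernel) + bias)  # ReLU activation

print(feature_map.shape)

Because every position in the image is scanned with the same weights, a feature is detected wherever it appears, which is the basis of the position-independent recognition described above.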

Convolutional networks are the workhorses of image recognition. But when it comes to natural language processing, they are not so good. Understanding the written or spoken word is quite different from processing independent images. Language is highly contextual, by which I mean that individual words have to be processed in the context of the words around them. (Note that I am not a linguist and apologize for any imprecise usage of terms).

When it comes to processing a sentence, there are at least three different things you have to understand: the first two are the meaning of the individual words, and the syntax or grammar of the sentence (the rules about word order, structure, and so on). If you’ve gotten this far, then you have those things nailed. But consider the sentence below.

I’m baking a rainbow cake and want to add different _________ to the batter.

What’s the missing term? You can only figure something like that out by looking at the earlier part of the sentence. A rainbow has many different colors, so you would need to add different food dyes to the batter (which would also have to be portioned out in some way).

Recurrent Neural Networks

Working out that answer required taking earlier words in the sentence as input to the next word. I’m describing a feedback loop, which is not something you saw earlier. Networks with feedback loops are called recurrent neural networks, and in its simplest form, a feedback loop looks like this.

Note how the output feeds back to the inputs. If you “unroll” this diagram, you get the structure below.

You can see how this kind of structure would enable you to process a sequence of elements (like the words in a sentence) with each one providing input (context if you like) for subsequent elements.
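
As a sketch of what that unrolled loop computes (my own illustration; the vector sizes and the tanh activation are assumptions), each element of the sequence is combined with a state carried over from the previous step, so earlier elements provide context for later ones:

import numpy as np

rng = np.random.default_rng(0)

hidden_size, input_size = 8, 4
W_in = rng.normal(size=(hidden_size, input_size))      # weights for the current input
W_back = rng.normal(size=(hidden_size, hidden_size))   # weights for the fed-back state
b = np.zeros(hidden_size)

def rnn(sequence):
    """Process a sequence one element at a time, feeding each output back in."""
    state = np.zeros(hidden_size)
    for x in sequence:
        # The previous state re-enters here -- this is the feedback loop, unrolled.
        state = np.tanh(W_in @ x + W_back @ state + b)
    return state

sequence = [rng.random(input_size) for _ in range(6)]  # e.g. six word vectors
print(rnn(sequence))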

Of course, this simple structure is not powerful enough to process language, but more complex networks with feedback loops can. And a common kind of recurrent neural network contains elements called LSTM units, which are really good at remembering things (like a key word earlier in a sentence), as well as forgetting them when needed. Below is one example of an LSTM unit.

You can see the similarity with the simple diagram above, but there’s much more going on here. I’m not going to explain it all (there’s a great explanation on this GitHub page), but I’ll point out a couple of things.

Look inside the main rectangular box. The shaded rectangular boxes are entire layers, the symbol inside representing the shape of the activation function (output). The shaded circles with X and + represent multiplication and addition operations respectively. Look at the first combination (a layer with a sigmoid output leading to a multiplication with the output of the previous term). If that layer outputs a value of zero, then the multiplication will effectively zero out that previous term. Put another way, this first combination is the “forgetting” circuitry. For the rest, check out this blog.
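
To make the forgetting circuitry concrete, here is a sketch of just that first combination (my own illustration; the variable names and sizes are assumptions, and a full LSTM unit contains further gates not shown here): a sigmoid layer produces values between 0 and 1, and multiplying the previous term by those values keeps it where the gate is near 1 and erases it where the gate is near 0.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
size = 4

W_f = rng.normal(size=(size, 2 * size))   # weights of the forget-gate layer
b_f = np.zeros(size)

def forget_step(prev_cell_state, prev_output, new_input):
    """First combination in the LSTM diagram: sigmoid layer -> multiplication."""
    gate_input = np.concatenate([prev_output, new_input])
    forget_gate = sigmoid(W_f @ gate_input + b_f)   # values between 0 and 1
    # Multiplying by ~0 erases ("forgets") that part of the previous state;
    # multiplying by ~1 keeps it.
    return forget_gate * prev_cell_state

cell = np.ones(size)                       # stand-in for the previous term / cell state
print(forget_step(cell, rng.random(size), rng.random(size)))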

There’s more to processing language than syntax and understanding individual words. In that earlier example, how did you know a rainbow has many different colors? You’ve seen rainbows before and know what they look like. And that general knowledge of the implications of those words is the third element of processing natural language (and the hardest for a computer). To illustrate this, see the example below from the game of cricket. Those of you who don’t know the game will still be able to process the syntax of the sentences. You will still know what the individual words mean. But lacking the “common sense” of the context for those words, you will have no clue what is going on.

Coming around the wicket, the leg spinner bowled a “wrong-un.” The batsman swept it firmly against the spin to deep backward point. Is the batsman left-handed or right-handed, or can’t you tell from this information?

Conclusion

This is just an overview. There are many other approaches to neural networks that have different strengths and weaknesses or are used to solve different types of problems. But the concepts covered here still apply. Neural networks are all built from the same basic elements: neurons (with biases), inputs (with weights), and outputs (activation functions with specific profiles). These elements are used to construct different or specialized layers and elements (like the LSTM unit above). All of these are combined with feedback loops and other connections to form a network.

In an upcoming blog post, we will look at how neural networks learn and are trained. Because until that happens, they are not ready for work. In the meantime, discover more about Oracle’s data management platforms and how Oracle supports neural networks in Oracle Database.


VPLEX: The backup meta-volume(s) exceeds the 39-character limit causing the health-check to report a warning

Article Number: 523881 Article Version: 3 Article Type: Break Fix



VPLEX Series,VPLEX Local,VPLEX Metro,VPLEX VS2,VPLEX VS6,VPLEX GeoSynchrony 5.4 Service Pack 1,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 1,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 3,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 4

  • VPLEX GUI is not showing Metadata details for a cluster.
  • One of the VPLEX Clusters GUI cannot determine the health state of the Metadata.​

When running a health-check the following warning occurs under the Meta Data section for the command output:

Meta Data:
----------
Cluster    Volume                                              Volume          Oper   Health  Active
Name       Name                                                Type            State  State
---------  --------------------------------------------------  --------------  -----  ------  ------
cluster-1  C1_Logging                                          logging-volume  ok     ok      -
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx                           meta-volume     ok     ok      True
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204956 exceeds the character limit for meta-volume name.
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204821 exceeds the character limit for meta-volume name.
cluster-2  C2_Meta                                             meta-volume     ok     ok      True
cluster-2  C2_Logging                                          logging-volume  ok     ok      -
cluster-2  C2_Meta_backup_2018Jul02_060022                     meta-volume     ok     ok      True
cluster-2  C2_Meta_backup_2018Jul01_060025                     meta-volume     ok     ok      True

This issue may also manifest itself in the Unisphere for VPLEX GUI as missing active or backup meta-volumes. In the example below, cluster-1 shows a yellow bar with 0 meta-volumes listed, while cluster-2 shows a green bar with the number of meta-volumes showing as ‘4’.

Unisphere for VPLEX GUI screenshot showing meta-volume status for cluster-1 and cluster-2

This warning message indicates that the name(s) of the backup meta-volume(s) exceed the predefined character limit of 39 characters. The backup meta-volume names are a combination of the active meta-volume name and a time-stamp suffix as well as dashes and/or underscores. When the active meta-volume name is too long, it may push the backup meta-volume names beyond the character limit.
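
To illustrate the arithmetic (this is just a sketch, not an official VPLEX tool), the backup names in the output above consist of the active name plus a suffix of the form _backup_2018Jul20_204956, which is 24 characters long, so any active name longer than 15 characters pushes its backups past the 39-character limit:

MAX_NAME_LEN = 39
BACKUP_SUFFIX_LEN = len("_backup_2018Jul20_204956")  # 24 characters, as seen in the output above

def backup_name_fits(active_name):
    """Return True if backups of this active meta-volume name stay within the limit."""
    return len(active_name) + BACKUP_SUFFIX_LEN <= MAX_NAME_LEN

print(backup_name_fits("cluster_1_meta_volume_vnx"))  # False: 25 + 24 = 49 characters
print(backup_name_fits("C1_Meta"))                    # True:   7 + 24 = 31 characters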

The active meta-volume has been renamed to a value that pushes the names of the backup meta-volumes beyond the character limit.

To resolve this warning, rename the active meta-volume with a shorter name and then re-run the meta-volume backups. This process is non-disruptive.

1. Determine which backup meta-volumes have names that exceed the character limit by running a health-check.

VPlexcli:/> health-check
Locate the Meta Data section of the health-check, like the one shown below:
Meta Data:
----------
Cluster    Volume                                              Volume          Oper   Health  Active
Name       Name                                                Type            State  State
---------  --------------------------------------------------  --------------  -----  ------  ------
cluster-1  C1_Logging                                          logging-volume  ok     ok      -
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx                           meta-volume     ok     ok      True
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204956 exceeds the character limit for meta-volume name.
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204821 exceeds the character limit for meta-volume name.
cluster-2  C2_Meta                                             meta-volume     ok     ok      True
cluster-2  C2_Logging                                          logging-volume  ok     ok      -
cluster-2  C2_Meta_backup_2018Jul02_060022                     meta-volume     ok     ok      True
cluster-2  C2_Meta_backup_2018Jul01_060025                     meta-volume     ok     ok      True

The health-check output above shows that the meta-volume backups for cluster-1 have names that exceed the 39-character limit. Therefore, the active meta-volume for cluster-1 must be renamed to resolve the warning.

2. View the meta-volumes configured on the VPLEX to find the active meta-volume for each cluster.

VPlexcli:/> ll /clusters/*/system-volumes

/clusters/cluster-1/system-volumes:
Name                                                Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                                    Status       State                            Count      Count     Size
--------------------------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                                      logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
cluster_1_meta_volume_vnx                           meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
3. Navigate to the context for the active meta-volume.
VPlexcli:/> cd /clusters/cluster-1/system-volumes/cluster_1_meta_volume_vnx
VPlexcli:/clusters/cluster-1/system-volumes/cluster_1_meta_volume_vnx>

4. Use the set command from this context to change the name of the active meta-volume.

VPlexcli:/clusters/cluster-1/system-volumes/cluster_1_meta_volume_vnx> set name C1_Meta
VPlexcli:/clusters/cluster-1/system-volumes/C1_Meta>
Note that the name of the context changes to reflect the name change. You can re-run the following command to verify the name change:
VPlexcli:/> ll /clusters/*/system-volumes

/clusters/cluster-1/system-volumes:
Name                                                Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                                    Status       State                            Count      Count     Size
--------------------------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                                      logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                                             meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
5. To rename the meta-volume backups, run the metadatabackup local command one time for each meta-data backup volume (this should be two times on a correctly configured system).

NOTE: This command MUST be run from the cluster where the meta-volume is located. For example, if the meta-volume is on cluster-1, you would need to run this command from the cluster-1 VPlexcli.
VPlexcli:/> metadatabackup local
VPlexcli:/> metadatabackup local

As there are two meta-volume backups, we ran the command two times.

6. Re-run the following command to verify the name change for the meta-volume backups.
VPlexcli:/> ll /clusters/*/system-volumes/

/clusters/cluster-1/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                   logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
C1_Meta_backup_2018Jul20_212714  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
C1_Meta_backup_2018Jul20_212804  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
Note that the meta-volume backup names have now changed.

7. Re-run the health-check to confirm the warning message is no longer present.
VPlexcli:/> health-check
Locate the Meta Data section and verify that the warnings are no longer present.
Meta Data:
----------
Cluster    Volume                           Volume          Oper   Health  Active
Name       Name                             Type            State  State
---------  -------------------------------  --------------  -----  ------  ------
cluster-1  C1_Meta                          meta-volume     ok     ok      True
cluster-1  C1_Logging                       logging-volume  ok     ok      -
cluster-1  C1_Meta_backup_2018Jul20_212804  meta-volume     ok     ok      True
cluster-1  C1_Meta_backup_2018Jul20_212714  meta-volume     ok     ok      True
cluster-2  C2_Meta                          meta-volume     ok     ok      True
cluster-2  C2_Logging                       logging-volume  ok     ok      -
cluster-2  C2_Meta_backup_2018Jul02_060022  meta-volume     ok     ok      True
cluster-2  C2_Meta_backup_2018Jul01_060025  meta-volume     ok     ok      True


Dell EMC and NVIDIA Expand Collaboration to Deliver Flexible Deployment Options for Artificial Intelligence Use Cases

As organizations strive to gain a competitive edge in an increasingly digital global economy, Artificial Intelligence (AI) is garnering a lot of attention. Not surprisingly, AI initiatives are springing up in various business domains, such as manufacturing, customer support, marketing and sales. In fact, Gartner forecasts that AI-derived global business value will reach $3.9 trillion by 2022[i]. As companies scramble to determine how to turn the promise of AI into reality, they are faced with a multitude of complex choices related to software stacks, neural networks and infrastructure components, with significant implications on the …




Self-Driving Storage, Part 1: AI’s Role in Intelligent Storage

Artificial Intelligence (AI) is here! With a rapidly growing number of success stories proving the possibilities and some bloopers too, there is no question that AI and machine learning technology have moved from science fiction to reality. Why now? In essence, I see it as a confluence of two trends: multi-layered recursive learning technologies inspired by a deeper understanding of how the human brain learns, and exponentially cheaper and more powerful computing. Some of the latest advances made by leveraging these trends are truly amazing: machines that take advantage of their own “bodies” to learn, machines …




Don’t Let Senior IT Leaders Get Left Behind on Artificial Intelligence

How you can help your customers to better understand the potential (and challenges) of AI implementations

Over the next decade, Artificial Intelligence (AI) is predicted to have an impact on virtually every product and business process. So it stands to reason that organizations of all sizes in all industry sectors are moving swiftly to better understand how machine learning and deep learning technologies can enhance efficiencies and/or outcomes in their businesses. If your customers haven’t already looked at the opportunities offered by AI implementations, they could quickly find themselves left behind as competitors tap into the clear business benefits of …




What’s the Difference Between AI, Machine Learning, and Deep Learning?

Peter Jeffcock

Big Data Product Marketing

AI, machine learning, and deep learning – these terms overlap and are easily confused, so let’s start with some short definitions.

AI means getting a computer to mimic human behavior in some way.

Machine learning is a subset of AI, and it consists of the techniques that enable computers to figure things out from the data and deliver AI applications.

Deep learning, meanwhile, is a subset of machine learning that enables computers to solve more complex problems.

Download your free ebook, “Demystifying Machine Learning.”

Those descriptions are correct, but they are a little concise. So I want to explore each of these areas and provide a little more background.

Difference Between AI, Machine Learning and Deep Learning

What Is AI?

Artificial intelligence as an academic discipline was founded in 1956. The goal then, as now, was to get computers to perform tasks regarded as uniquely human: things that required intelligence. Initially, researchers worked on problems like playing checkers and solving logic problems.

If you looked at the output of one of those checkers playing programs you could see some form of “artificial intelligence” behind those moves, particularly when the computer beat you. Early successes caused the first researchers to exhibit almost boundless enthusiasm for the possibilities of AI, matched only by the extent to which they misjudged just how hard some problems were.

Artificial intelligence, then, refers to the output of a computer. The computer is doing something intelligent, so it’s exhibiting intelligence that is artificial.

The term AI doesn’t say anything about how those problems are solved. There are many different techniques including rule-based or expert systems. And one category of techniques started becoming more widely used in the 1980s: machine learning.

What Is Machine Learning?

The reason that those early researchers found some problems to be much harder is that those problems simply weren’t amenable to the early techniques used for AI. Hard-coded algorithms or fixed, rule-based systems just didn’t work very well for things like image recognition or extracting meaning from text.

The solution turned out to be not just mimicking human behavior (AI) but mimicking how humans learn.

Think about how you learned to read. You didn’t sit down and learn spelling and grammar before picking up your first book. You read simple books, graduating to more complex ones over time. You actually learned the rules (and exceptions) of spelling and grammar from your reading. Put another way, you processed a lot of data and learned from it.

That’s exactly the idea with machine learning. Feed an algorithm (as opposed to your brain) a lot of data and let it figure things out. Feed an algorithm a lot of data on financial transactions, tell it which ones are fraudulent, and let it work out what indicates fraud so it can predict fraud in the future. Or feed it information about your customer base and let it figure out how best to segment them. Find out more about machine learning techniques here.
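
As a toy illustration of “feed an algorithm data and let it figure things out” (a sketch using scikit-learn; the features and numbers are invented for the example and are not from the original post):

from sklearn.linear_model import LogisticRegression

# Each transaction is described by a few numeric features
# (hypothetical: amount, hour of day, distance from home in km).
transactions = [
    [20.0, 14, 2.0],
    [950.0, 3, 800.0],
    [35.5, 19, 5.0],
    [1200.0, 2, 950.0],
]
is_fraud = [0, 1, 0, 1]   # labels we tell the algorithm about

model = LogisticRegression()
model.fit(transactions, is_fraud)            # let it work out what indicates fraud

# Predict whether a new, unseen transaction looks fraudulent.
print(model.predict([[875.0, 4, 700.0]]))

The point is that nobody hand-codes the rules for what counts as fraud; the model works them out from the labeled examples.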

As these algorithms developed, they could tackle many problems. But some things that humans found easy (like speech or handwriting recognition) were still hard for machines. However, if machine learning is about mimicking how humans learn, why not go all the way and try to mimic the human brain? That’s the idea behind neural networks.

The idea of using artificial neurons (neurons, connected by synapses, are the major elements in your brain) had been around for a while. And neural networks simulated in software started being used for certain problems. They showed a lot of promise and could solve some complex problems that other algorithms couldn’t tackle.

But machine learning still got stuck on many things that elementary school children tackled with ease: how many dogs are in this picture or are they really wolves? Walk over there and bring me the ripe banana. What made this character in the book cry so much?

It turned out that the problem was not with the concept of machine learning. Or even with the idea of mimicking the human brain. It was just that simple neural networks with 100s or even 1000s of neurons, connected in a relatively simple manner, just couldn’t duplicate what the human brain could do. It shouldn’t be a surprise if you think about it; human brains have around 86 billion neurons and very complex interconnectivity.

What Is Deep Learning?

Put simply, deep learning is all about using neural networks with more neurons, layers, and interconnectivity. We’re still a long way off from mimicking the human brain in all its complexity, but we’re moving in that direction.

And when you read about advances in computing from autonomous cars to Go-playing supercomputers to speech recognition, that’s deep learning under the covers. You experience some form of artificial intelligence. Behind the scenes, that AI is powered by some form of deep learning.

Let’s look at a couple of problems to see how deep learning is different from simpler neural networks or other forms of machine learning.

How Deep Learning Works

If I give you images of horses, you recognize them as horses, even if you’ve never seen that image before. And it doesn’t matter if the horse is lying on a sofa, or dressed up for Halloween as a hippo. You can recognize a horse because you know about the various elements that define a horse: shape of its muzzle, number and placement of legs, and so on.

Deep learning can do this. And it’s important for many things including autonomous vehicles. Before a car can determine its next action, it needs to know what’s around it. It must be able to recognize people, bikes, other vehicles, road signs, and more. And do so in challenging visual circumstances. Standard machine learning techniques can’t do that.

Take natural language processing, which is used today in chatbots and smartphone voice assistants, to name two. Consider this sentence and work out what the last part should be:

I was born in Italy and, although I lived in Portugal and Brazil most of my life, I still speak fluent ________.

Hopefully you can see that the most likely answer is Italian (though you would also get points for French, Greek, German, Sardinian, Albanian, Occitan, Croatian, Slovene, Ladin, Latin, Friulian, Catalan, Sicilian, Romani and Franco-Provençal, and probably several more). But think about what it takes to draw that conclusion.

First you need to know that the missing word is a language. You can do that if you understand “I speak fluent…”. To get Italian you have to go back through that sentence and ignore the red herrings about Portugal and Brazil. “I was born in Italy” implies learning Italian as I grew up (with 93% probability according to Wikipedia), assuming that you understand the implications of born, which go far beyond the day you were delivered. The combination of “although” and “still” makes it clear that I am not talking about Portuguese and brings you back to Italy. So Italian is the likely answer.

Imagine what’s happening in the neural network in your brain. Facts like “born in Italy” and “although…still” are inputs to other parts of your brain as you work things out. And this concept is carried over to deep neural networks via complex feedback loops.

Conclusion

So hopefully that first definition at the beginning of the article makes more sense now. AI refers to devices exhibiting human-like intelligence in some way. There are many techniques for AI, but one subset of that bigger list is machine learning – let the algorithms learn from the data. Finally, deep learning is a subset of machine learning, using many-layered neural networks to solve the hardest (for computers) problems.


Data Insight – Decommission

I need a solution

Our implementation of Data Insight was never really done right and we have not used the process/data at all since it’s been in place, so we plan on decommissioning it. My question is: is there anything more to it than just shutting down those servers? Or does something have to be done within DLP to “notify” it that Data Insight is no longer a part of it?



Humanity and Artificial Intelligence – Shape Our Future in Harmony, Rethink Our Societies.

Of all the emerging technologies that are set to impact the way we work and the way we live, Artificial Intelligence (AI) is, without doubt, the most challenging. Coupled with Machine Learning (ML), AI will not only make objects smarter or allow machines to recognize patterns and interpret data, it has the potential to change the face of the earth. Some see AI as a blessing, many others focus on the threat it poses to human dominance over machines.

When talking about AI and smart machines, examples spring to mind of a computer beating the world chess champion Garry Kasparov in 1997, Google Assistant helping us in our day-to-day tasks, or – more recently – an AI program beating the world’s best professional poker players because it made better use of information that poker players do not share with each other. But the effects of AI will go much deeper. Just imagine what tasks computers can take over from us when they can be programmed to think like us, and how much faster they can be at performing repetitive tasks. What machines are already doing on the production line may well happen in our offices too.

Let AI Do the Work for Us

Some recent surveys have shown that business leaders are divided over what the human-machine partnership will bring in terms of productivity. Research conducted by Vanson Bourne on behalf of Dell Technologies shows that 82 per cent of business executives expect their employees and machines to work as ‘integrated teams’ within the next few years. Yet only 49 per cent believe that employees will be more productive as more tasks get automated. And only 42 per cent think we will have more job satisfaction by offloading tasks that we don’t want to do to intelligent machines. Employees too have their doubts, as research from The Workforce Institute reveals: although 64 per cent would welcome AI if it simplified or automated time-consuming internal processes or helped balance their workload, six out of ten employees find their employers are not very transparent about what AI will mean to the way they do their jobs – if they will still have them, that is, after this next automation wave.

We are only in the first chapters of the book that we are writing for our future, but already AI and ML are having a profound influence on all aspects of human life, and we need to ensure that AI is not writing the ending for us. Consider just these examples:

  • In healthcare, deep learning systems can read images and diagnose pneumonia as accurately as radiologists.
  • At CES in Las Vegas, AT&T announced that it’s testing a new ‘structure monitoring solution’, a system to help cities and transportation companies monitor the stability of bridges, alerting them if their stability is compromised.
  • At Georgia Tech, a chatbot is mailing assignments and answering questions from students.
  • Intelligent systems are helping HR departments analyze employee sentiment in real time, thus helping reduce employee attrition.

The list of applications of AI is endless and you will find examples like these in any industry. The big question that everyone is asking is whether AI will help us, or if AI systems will replace human beings in the workplace, making us completely redundant. The answer to this million-dollar question is not so simple. On the one hand, it is clear that a number of jobs are on the line. Just think of truck drivers losing their jobs if we get convoys of driverless trucks on the road, call center operators being replaced by conversational AI systems, or financial analysts getting the boot from robo-advisors.

Humans in the Loop

On the other hand, AI will also create new jobs. If you have learning systems, someone will need to supervise those learning efforts, programmers will have to find the right algorithms and embed them in systems, and so on. In fact, some analysts see AI as what is called ‘an invention that makes inventions’, creating endless new possibilities. AI will definitely have a direct impact, but it will also spur on new developments that will, in turn, create new applications and new jobs. Enough new jobs for Gartner’s Peter Sondergaard to claim – during last year’s Gartner IT Symposium – that AI will be a net job contributor from 2020 onwards, eliminating 1.8 million jobs while creating 2.3 million new ones.

I also tend to think AI will bring more benefits than troubles, and I strongly believe that humans will be augmented by AI rather than replaced. We need to consider that self-taught AI has trouble with the real world: emotions are key, and multiple options and complex real-life situations are hard for AI to handle. As Kevin Kelly, author of The Inevitable, says: “the chief benefit of AI is that it does NOT think like humans. We’re going to invent many new types of thinking that don’t exist biologically and that are not like human thinking. Therefore, this type of intelligence does not replace human thinking, but augments it.” In fact, AI cannot do without human intervention, and there will always be ‘humans in the loop’, as AI specialist J.J. Kardwell comments: “humans should be focused on teaching machines so that machines can focus on performing jobs that are too big for humans to process.” According to this school of thinking, humans and robots working in harmony will yield the best results.

For Marvin Minsky, what counts in humans is our mind, our spirit; the brain is a machine like any other, even if modeling its plasticity and dynamism is not easy. We have a “mechanistic” vision of the human but, as Jean-François Mattei highlights, the brain is first of all a social and cultural organ, one that adapts to human relations and to our environment to reach fine, well-adapted decisions, linked to our conscience and to our freedom to think and to create in innovative ways. Our liberty is unique, and how we exercise and adapt it is precious. As Lucretius asks, “If the chain of causes is governed solely by laws, what meaning can you give to the freedom of the will and human action?”

Creativity Rules

Does this mean we should stick our heads in the sand and carry on as if nothing will change? Of course not. The future will be different and we need to prepare for it. The educational world has the huge task of preparing the workers of tomorrow. The human race has always excelled in creativity, from the paintings in the caves of Lascaux through architecture to modern music. What education should focus on is stimulating that creativity and teaching people to combine it with the power of AI to make our dreams come true. After all, machines cannot replace our feelings. I am convinced human beings will not turn into emotionless cyborgs. In that sense, I agree with the French philosopher Jean-François Mattei (‘Question de conscience’) that transhumanism should not lead to technological totalitarianism. Instead, AI will help us become less like the machines we are right now, toiling for ten hours a day to get through our ‘to do’ lists. It is our job to invent new lifestyles and imagine new societies, with the potential, through AI, to reorganize the way we live and the way we work, and to give us more time to connect with and take care of other humans and living species, making our world a better place for everyone.

All in all, I think we should be hopeful about the prospect that AI and ML are going to help us weather the changes that are ahead of us, and we should not fear the machine. I share this belief with Dell Technologies CEO and Chairman Michael Dell: “Computers are machines. The human brain is an organism. Those are different things. Maybe 15 to 20 years from now, we’ll have computers that have more power than 10 million brains in terms of computational power, but it’s still not an organism.” We must then take an intuitive approach to imagining how the future is formed with artificial intelligence, working, as Bergson would suggest, on joining forces intelligently, not on one taking over the other. In closing, the notion of ethical conviction should not ignore the dimension of alterity emphasized by Kant.




Weaving patterns with artificial intelligence, Part 1: Letter correlation and simple language statistics for AI

AI is more than pattern recognition. It can also build on patterns to generate expression. This is increasingly important in the world of intelligent agents. Learn about generative AI, an important class of techniques to the modern developer. As a first step, consider the patterns in natural language and how these can be modeled to prepare machines to generate their own expressions of familiar language. Discover how to go from basic letter frequency statistics to correlation between letters by using matrix-based models.
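
To give a flavor of the progression the tutorial describes, from letter frequencies to correlations between letters captured in a matrix-based model, here is a small sketch (my own, under assumed conventions; the tutorial itself may structure its code differently):

import string
import numpy as np

ALPHABET = string.ascii_lowercase
INDEX = {letter: i for i, letter in enumerate(ALPHABET)}

def bigram_matrix(text):
    """26x26 matrix: counts[i][j] = how often letter j follows letter i."""
    counts = np.zeros((26, 26))
    letters = [c for c in text.lower() if c in INDEX]
    for first, second in zip(letters, letters[1:]):
        counts[INDEX[first], INDEX[second]] += 1
    return counts

counts = bigram_matrix("the theory of letter correlation in natural language")
print(counts[INDEX['t'], INDEX['h']])   # how often 'h' followed 't' in this sample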
