Humanity and Artificial Intelligence – Shape Our Future in Harmony, Rethink Our Societies.



Of all the emerging technologies that are set to impact the way we work and the way we live, Artificial Intelligence (AI) is, without doubt, the most challenging. Coupled with Machine Learning (ML), AI will not only make objects smarter and allow machines to recognize patterns and interpret data; it has the potential to change the face of the earth. Some see AI as a blessing; many others focus on the threat it poses to human dominance over machines.

When talking about AI and smart machines, examples spring to mind of a computer beating the world chess champion Garry Kasparov in 1997, Google Assistant helping us in our day-to-day tasks, or – more recently – an AI program beating the world’s best professional poker players because it made better use of the information that poker players do not share with each other. But the effects of AI will go much deeper. Just imagine what tasks computers can take over from us when they can be programmed to think like us, and how much faster they can be at performing repetitive tasks. What machines are already doing on the production line may well happen in our offices too.

Let AI Do the Work for Us

Some recent surveys have shown that business leaders are divided over what the human-machine partnership will bring in terms of productivity. Research conducted by Vanson Bourne on behalf of Dell Technologies shows that 82 per cent of business executives expect their employees and machines to work as ‘integrated teams’ within the next few years. Yet only 49 per cent believe that employees will be more productive as more tasks get automated. And only 42 per cent think we will have more job satisfaction by offloading tasks that we don’t want to do to intelligent machines. Employees too have their doubts, as research from The Workforce Institute reveals: although 64 per cent would welcome AI if it simplified or automated time-consuming internal processes or helped balance their workload, six out of ten employees find their employers are not very transparent about what AI will mean to the way they do their jobs – if they will still have them, that is, after this next automation wave.

We are only in the first chapters of the book that we are writing for our future, but AI and ML are already having a profound influence on all aspects of human life, and we need to ensure that AI is not writing the ending for us. Consider just these examples:

  • In healthcare, deep learning systems can read images and diagnose pneumonia as accurately as radiologists.
  • At CES in Las Vegas, AT&T announced that it’s testing a new ‘structure monitoring solution’, a system to help cities and transportation companies monitor the stability of bridges, alerting them if their stability is compromised.
  • At Georgia Tech, a chatbot is emailing assignments and answering questions from students.
  • Intelligent systems are helping HR departments analyze employee sentiment in real time, thus helping reduce employee attrition.

The list of applications of AI is endless and you will find examples like these in any industry. The big question that everyone is asking is whether AI will help us, or whether AI systems will replace human beings in the workplace, making us completely redundant. The answer to this million-dollar question is not so simple. On the one hand, it is clear that a number of jobs are on the line. Just think of truck drivers losing their jobs once convoys of driverless trucks take to the road, call center operators being replaced by conversational AI systems, or financial analysts getting the boot from robo-advisors.

Humans in the Loop

On the other hand, AI will also create new jobs. If you have learning systems, someone will need to supervise those learning efforts, programmers will have to find the right algorithms and embed them in systems, and so on. In fact, some analysts see AI as what is called ‘an invention that makes inventions’, creating endless new possibilities. AI will definitely have a direct impact, but it will also spur on new developments that will, in turn, create new applications and new jobs. Enough new jobs for Gartner’s Peter Sondergaard to claim – during last year’s Gartner IT Symposium – that AI will be a net job contributor from 2020 onwards, eliminating 1.8 million jobs while creating 2.3 million new ones.

I also tend to think AI will bring more benefits than troubles, and I strongly believe that humans will be augmented by AI rather than replaced. We need to consider that self-taught AI has trouble with the real world: emotions are key, and multiple options and complex real-life situations are hard for AI to handle. As Kevin Kelly, author of The Inevitable, says: “the chief benefit of AI is that it does NOT think like humans. We’re going to invent many new types of thinking that don’t exist biologically and that are not like human thinking. Therefore, this type of intelligence does not replace human thinking, but augments it.” In fact, AI cannot do without human intervention and there will always be ‘humans in the loop’, as AI specialist J.J. Kardwell comments: “humans should be focused on teaching machines so that machines can focus on performing jobs that are too big for humans to process.” According to this school of thinking, humans and robots working in harmony will yield the best results.

For Marvin Minsky, what counts in humans is our mind, our spirit – the brain being a machine like any other, even if modeling the plasticity and dynamism of the brain is no easy task. That is a “mechanistic” vision of the human. But, as Jean-François Mattei highlights, the brain is first of all a social and cultural organ, one that adapts to human relations and to our environment to produce fine, well-adapted decisions, linked to our conscience and to our freedom to think and to create in innovative ways. Our liberty is unique, and how we exercise and adapt it is precious. As Lucretius asked, “If the chain of causes is governed solely by laws, what meaning can you give to the freedom of the will and human action?”

Creativity Rules

Does this mean we should stick our heads in the sand and carry on as if nothing will change? Of course not. The future will be different and we need to prepare for it. The educational world has the huge task of preparing the workers of tomorrow. The human race has always excelled in creativity, from the paintings in the caves of Lascaux through architecture to modern music. What education should focus on is stimulating that creativity and teaching people to combine it with the power of AI to make our dreams come true. After all, machines cannot replace our feelings. I am convinced human beings will not turn into emotionless cyborgs. In that sense, I agree with the French philosopher Jean-François Mattei (‘Question de conscience’) that transhumanism should not lead to technological totalitarianism. Instead, AI will help us become less like the machines we are right now, toiling for ten hours a day to get through our ‘to-do’ lists. It is our job to invent new lifestyles and imagine new societies, with the potential, through AI, to reorganize the way we live and the way we work, and to gain more time to connect with and take care of other humans and living species, making our world a better place for everyone.

All in all, I think we should be hopeful at the prospect that AI and ML are going to help us weather the changes that are ahead of us, and we should not fear the machine. I share this belief with Dell Technologies CEO and Chairman Michael Dell: “Computers are machines. The human brain is an organism. Those are different things. Maybe in 15 to 20 years from now, we’ll have computers that have more power than 10 million brains in terms of computational power, but it’s still not an organism.” We must then take an intuitive approach to imagining how the future takes shape with artificial intelligence – working, as Bergson would put it, intelligently on joining forces, not on one taking over the other. In closing, the notion of ethical conviction should not ignore the dimension of alterity emphasized by Kant.




AI — The Time for Action is Now



The U.S. is already one step ahead of the game — last December, members of the American Congress presented a bill on the ‘Development and Implementation of Artificial Intelligence’. Its aim is to establish a Federal Advisory Committee for AI. The drafters reasoned that understanding AI “is critical to the economic prosperity and social stability of the United States.”

How forward-thinking of them. But they have nothing on the Chinese — the Chinese State Council has stated that it wants China to be the leader in AI by 2025, which implies that they want to knock the U.S. from its pole position. Even the U.K. is eyeing up a lead position. But what about Germany? Ever since the pandemonium of last summer’s election, when the two largest parties frantically pushed for an AI ‘masterplan’ after China’s statement, not much has actually happened.

I didn’t expect a change in pace either, though. I think it’s much more important that politicians have the issue on their radar at all, and that they understand the implications of artificial intelligence.

Here is where opinions are diametrically opposed. Tesla’s Elon Musk and celebrity physicist Stephen Hawking have branded this technology “our biggest existential threat.” Steve Wozniak has attempted to offer a more balanced opinion, while Mark Zuckerberg has praised AI to the high heavens.

Of course, businesses are optimistic about what the future holds for AI, and are already using it for a wide array of applications: from communication, to cognitive searches and predictive analytics, to translation. The next big thing is the autonomous car. The (German) automotive sector, which used to focus on tin and steel, is also undergoing significant changes. Other sectors are following suit. To companies, AI is the game changer that will improve all our lives and revolutionize the economy. The results of our latest study on the working world of 2030 show that the majority of the 3,800 business leaders surveyed already anticipate a close human-machine symbiosis in the coming years. However, the same study also shows a clear split in opinions. Roughly half of the respondents were pessimistic about the effects of AI, while the other half were optimistic.

So what do we do now? The most important question concerns the implications that AI will actually have — will it usher in a bright new future or social disorder? The discussion on job losses is already in full swing.

Apocalyptic scenarios aren’t the only things we should be thinking about, but at the same time, it is worth reflecting on regulation at this early stage. I think that the AI expert Oren Etzioni has the right attitude. Following the example of Isaac Asimov’s laws of robotics, he suggests three simple rules for artificial intelligence systems so that we are equipped for the worst-case scenarios and can prevent any conceivable damage. He says that AI must be strictly regulated, that AI must be discernible from humans, and that AI cannot arbitrarily handle confidential data. These may seem like superficial rules, but they serve as a very good starting point and basis for discussion.

Are these ideas a little too ahead of their time? I don’t think so. If we tackle these issues as early on as possible, then we will be in a much better position to plan the future of artificial intelligence. Isaac Asimov wrote his laws of robotics way back in 1942, and they are still considered exemplary, even today. And if that’s not a good source of motivation, then I don’t know what is.




Virtual War – A Revolution in Human Affairs


Stefan J. Banach

War, of any kind, is the ultimate failure of mankind. Yet, in the course of human endeavors, we have found another way in which to wage global war – in this case, Virtual War in Virtual Battle Space. The “Technology Singularity” espoused by Vernor Vinge and Ray Kurzweil is the fundamental source and accelerant for Virtual War.[i] The Vinge and Kurzweil articulation of the “Singularity” of biological and machine intelligence is much closer than most of us understand. The majority of the people in the world are caught up in the inertia of everyday activities, and the emergence of Virtual War is opaque to most of us. To that end, and for clarity: the world is experiencing Virtual War – A Revolution in Human Affairs.

Virtual War transcends the “normal” revolutions in military affairs or traditional security rubrics that are discussed in Pentagon forums, within the defense industrial base and among law enforcement agencies. Virtual War is drastically transforming global human affairs as we know them, and in ways that we do not yet understand. Eric Schmidt got it right when he opined that, “the Internet is the first thing that humanity has built that humanity doesn’t understand, it is the largest experiment in anarchy that we have ever had.”

Carl von Clausewitz defined war in his legendary tome, On War. His definition of war follows:

War is nothing but a duel on an extensive scale. If we would conceive as a unit the countless number of duels which make up a war, we shall do so best by supposing to ourselves two wrestlers. Each strives by physical force to compel the other to submit to his will: his first object is to throw his adversary, and thus to render him incapable of further resistance. War therefore is an act of violence to compel our opponent to fulfil our will.[ii]

As a former wrestler, I believe that Virtual War provides unprecedented opportunities to destroy or control the will of an opposition actor, with an unceasing cadence of global virtual munitions from an increasing number of delivery platforms. Virtual War does not require an act of violence to succeed in controlling the will of an individual, group or population. This is an important distinction and opportunity, relative to war, which occurs in Physical Battle Space.

Virtual War is a global systems approach to achieve social control. Virtual War heuristics include: offensive and defensive cyber capabilities, social media, information operations (e.g. “Fake News”), artificial intelligence, stealth technologies and cloaking techniques. The end game is to control and influence the will of a person, group, or larger population to achieve ideological objectives over time in support of a cause or a specific sponsor.

The Good News. As we are seeing every day, Virtual War is truly transformational and will continue to change all of our lives in ways that are difficult to fathom. Kurzweil notes the following positive aspects of the emerging technological changes in his 2005 publication, The Singularity is Near:

Revolutions in genetics, nanotechnology and robotics will usher in the beginning of the Singularity. Sufficient gains in genetic technology should make it possible to maintain the body indefinitely, reversing aging while curing cancer, heart disease and other illnesses. Much of this will be possible thanks to nanotechnology, the second revolution, which entails the molecule by molecule construction of tools which themselves can “rebuild the physical world.” For example, nanotechnology-based manufacturing devices in the 2020s will be capable of creating almost any physical product from inexpensive raw materials and information. Finally, the revolution in robotics will really be the development of strong Artificial Intelligence (AI), defined as machines which have human-level intelligence or greater. This development will be the most important of the century, “comparable in importance to the development of biology itself.” [iii]

The United States, and indeed the world, is experiencing the birth pains of the coming exponential technological change that Vinge and Kurzweil predicted in the 1990’s, and in 2005. The “2020’s” noted above, from Kurzweil’s manuscript, are just years away. Nevertheless, inane debates about whether the Singularity will occur in five years or fifty years, misses the point completely in the context of history. Billions of years in evolutionary technological advancements serve as a backdrop, and we are behind the power curve, relative to the substantial changes that are right in front of us.

Let the drastically reduced lifespans of commercial companies be a guide in this regard. The average age of an S&P 500 company is currently under 20 years, decreased from 60 years in the 1950s, according to Credit Suisse. The Wall Street firm says the trend is accelerating and blames the disruption on unprecedented technological advancements. In that vein, Andy Serwer, Editor-in-Chief of Yahoo Finance, asked this important question at the 2018 Davos World Economic Forum, “If robots, AI, nanotechnology, machine learning, and 3D printing are going to be doing all the work, what the heck will human beings do nine to five?” This question portends more challenges than simply the re-training and the re-education of a pending massive unemployed work force. The world has seen, since the events on 9/11, that large populations of unemployed or under-employed people are not helpful in terms of maintaining global security and stability. Tangentially, sixteen years of attrition warfare and the banality associated with fighting predominantly in Physical Battle Space are financially unsustainable. The National Security Act of 1947, which is the basis of U.S. National Security, is seventy-one years old and is collapsing under the weight of Virtual War exigencies. As Peter Drucker noted, “The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic.”

The New Normal. Inherent in Virtual War is the unprecedented kinetic maneuvering of one civilian population against another, which has produced hundreds of mass casualty events around the world since 9/11. The civilian vs. civilian terror attacks on 9/11 were planned using the Internet of Things (IoT) – in Virtual Space – prior to the execution of the physical attacks on the respective civilian targets in the United States.[iv] Hundreds of other terror attacks have taken place around the world since 9/11, and were planned and coordinated in Virtual Space before the horrific attacks took place in Physical Space. What will the world’s security paradigm for warfare and law enforcement look like when the Internet of Things (IoT) evolves to the Internet of Everything (IoE), which includes much more powerful nano-biologically enhanced human beings?

Non-lethal Virtual Space activities also occur continuously around the world in social and political domains that target domestic and foreign matters, with the aim to gain and maintain control of a particular narrative and influence an audience to act in a certain ideological manner. The growing liminality which exists today, by way of virtual space activity, is causing a truth crisis, as the velocity of human interaction and the velocity of information are at an all-time high. The average person does not know what to believe given the ubiquity of information and the obvious bias in government and within traditional and non-traditional media sources.

On the socio-economic front, there is a growing divide between the rich and poor, as the middle class struggles with sustainability. There is also a widening chasm between globalists and nationalists. The 1648 Westphalian nation-state model is at odds with a growing number of emergent empowered actors who do not rely on monolithic state entities to govern their behavior in virtual or physical space. All of these variables are interdependently joined and, to varying degrees, are technologically driven fissures in the world today.

The stark changes that we are seeing today were forecast by many people, dating back to John von Neumann and Alan Turing in the 1930s and their foundational work in computer experimentation and nascent artificial intelligence theory.[v] More recently, our national and international leadership is having difficulty framing and naming the emergent exponential technological changes that are impacting us in new ways every day. Today we are seeing the symptoms of technological change in our daily lives, but we are not synthesizing the interrelated patterns of activity, and we do not know how to cope with the exponential change.

Prophetic Warning. Per Kurzweil’s prose, these forecast changes will be tantamount to a technological tsunami – one that, in the context of the evolutionary timeline, is now at our doorstep. Given the accelerating pace of technological advancements, we should expect significant social change. An unprecedented rupture of all the classic learning, leadership, management, strategy development, planning and governance archetypes that exist today is absolutely possible.[vi] This externality will move the world from its current state of complexity to chaos. The end result will be the first of many instances where biological and machine intelligence forever transforms warfare and our existence as we know it. The pending exponential technological advancements will move civilization to a completely new era.[vii] This will be an era where humans will not be able to survive without machine intelligence, augmented synthetic strength and artificial stealth capabilities on the web or in physical space.[viii] Subterranean and extra-terrestrial options will be sought to support life and will be made possible by new technological advancements that were previously unimaginable. The nation-state or actors who can learn the fastest, and optimally frame and reframe their strategies the best, will rule the day in a world that fights predominantly in Virtual Space, and only as necessary in Physical Space, as the latter is too costly on multiple fronts.

Framing and Naming the Virtual War Paradigm. The United States’ leaders, who have the responsibility to win the Nation’s wars, have lost personal mastery of warfare in our time. The U.S. is fighting the wrong war, with the wrong policies, strategies, doctrine, tactics, techniques and procedures. The logic for warfare in our time is askew, as we have seen at previous junctures in history. The value of mechanization versus the horse, and the application of air power on modern battlefields during the 1919-1939 inter-war period, are notable historical examples where new technological advantages were not immediately appreciated by “the experts.”

Asymmetric Warfare, Political Warfare, Gray Zone Operations, Hybrid Warfare, Cyber Warfare, Cognitive Maneuver, et al., are competing heuristics that are bandied about continuously in security think tanks and in the halls of the Pentagon in an attempt to frame and name the paradigm for warfare in this era. Each of the aforementioned warfare nomenclatures and frameworks being used to describe warfare today is propagated in the Virtual Domain and further shaped in the Cognitive and Moral Domains, before being manifested in the Physical Domain environments, which include air, land, sea and outer space. The entity that controls the Virtual Domain and masters Virtual War Campaigning first will indirectly achieve social control, and will win every war it engages in, at pennies on the dollar. The Russian Gerasimov Doctrine and the Chinese 2025 Strategic Plan are both indirect approaches to achieving social control of domestic and foreign populations through the use of Virtual War technology-driven conventions.

The Goal of Virtual War. Social Control is the goal of Virtual War. China and Russia are well suited to this pursuit given their respective repressive governance cultures. Invoking Sun Tzu is appropriate, as he summed up the goal of Social Control in this quote: “The supreme art of war is to subdue the enemy without fighting.” Global “Social Control” is possible for the first time in the history of the world. Burgeoning Social Control capabilities are nested in: global satellite imagery, swarms of civilian and military aerial drones, public camera surveillance systems in our “smarter cities”, iPhone tracking protocols, Fitbit devices, the internet, artificial intelligence, DNA, Social Security numbers, Driver’s License numbers, Credit Reports, online personal health records, and all the associated digital personal and financial contrivances which exist today.

In the years ahead, the Social Control challenge will become acute as every human being will potentially have bio-medical nanotechnologies embedded in them to ensure optimal health. Each of these medical Nano devices will have a digital interface that presents both new opportunities for increased wellness and new vulnerabilities. Humans will be susceptible to the traditional biological viruses which exist today. New technologies will need to be developed to protect us from artificial virus “infections” that could be mass transmitted in targeted societies, using the embedded Nano medical implants that will serve as the host within our bodies.

The synthesis of each of these personal data points, technologies and capabilities into a coherent global Virtual War Campaign Plan is possible now. The United States desperately needs a “Virtual Manhattan Project” to Design a national security policy, strategy and doctrine that will prevent the attainment of a global “Social Control” capability by China, Russia or another empowered set of actors. This should be a civilian led, U.S. government agency and military supported, “Whole of Nation” enduring Design effort; and it should be something our Allies support.

The Merits of Virtual War. Virtual War schemas enable an actor to gain and maintain 24/7 indirect global influence, with people and machines, at a fraction of the investment and with plausible deniability. This approach reduces risk to the mission and risk to “physical assaulting forces”. Virtual War techniques also eliminate the tyranny of global strategic and operational reach, the limiting variables for physical battle space forces. Theater-level access and early-entry conundrums are also expunged, and “Virtual Warriors” are now capable of delivering virtual munitions on demand to advance national policy objectives in scenarios involving China, North Korea, Russia and Iran, which was not possible before. Virtual War will continue to change international law vis-à-vis what constitutes an act of war, and it will also challenge the longstanding cognitive and moral standards for what constitutes a just war.

The New Maginot Line. Metaphorically, the overwhelming propensity for Physical Battle Space Maneuver is the United States’ modern-day Maginot Line. Like the French from 1919-1939, who poured their resources into fixed defenses, the U.S. has spent more money than all of its Allies combined in the Global War on Terrorism (GWOT) since 2001, executing Physical Battle Space Maneuver activities. Like the French prior to WWII, the United States’ leadership is preparing for the last war it won and does not see the Virtual Blitzkrieg that is upon it every day. There is an ongoing complete envelopment of all the significant U.S. national interests by way of Virtual War – or Virtual Battle Space Maneuver – and it is happening right in front of us.

Creating an effective Virtual War acumen requires a systemic reframe of leadership development and the creation of non-standard doctrine, tactics, techniques and procedures for how our national security forces and law enforcement agencies compete in Virtual Space. Virtual Space is the decisive terrain and securing it is the decisive operation. Military operators and law enforcement agents should not proceed in to Physical Battle Space without first gaining and maintaining Virtual Battle Space superiority. All U.S. national security forces and law enforcement agents require a new suite of Virtual Weapons Systems to win the Nation’s wars. What Virtual War non-standard doctrine and weaponry needs to be developed and fielded? When will our operators and agents be taught, armed and trained in their use? These are incredibly important questions that should guide our thinking and strategic focus, if we desire to remain a superpower country going forward.

Reframing U.S. National Security. The U.S. Department of Defense (DoD) is not structured to learn about Virtual War and is incapable of keeping pace with the interactive complexity that is generated on today’s Virtual Battlefields. The DoD of the United States will never win another war – as structured and as led. This is not a pejorative statement intended for any current or past leader, who has served in the DoD. Rather, this observation addresses an a priori reasoning culture that has been created and sustained within the DoD since its founding in 1949. The DoD hierarchical learning system is incapable of keeping pace with the emergent rhizomic learning system that is organically deployed in virtual space by billions of actors each day. The power leadership and the single loop learning “Process Trap” that has engulfed the U.S. security apparatus has made our country non-competitive in global security matters for the long-term.

The future U.S. security apparatus must be inundated with “disruptors” and there needs to be shared leadership between the private and public sectors for the defense of the United States if we have any chance of competing on a global scale. Doing the same thing, and expecting a different result, after sixteen years of attrition warfare more than fulfills Einstein’s definition of insanity. Peter Senge’s Fifth Discipline provides some of the answers to this conundrum. Personal mastery in warfare can only be achieved by adjusting the mental models and structures that are in use today. Enterprise-wide systems thinking and the establishment of a team learning culture, beyond DoD boundaries, are required to create a “Whole of Nation” shared vision for how to win wars now and in the future. Using the full range of the strategy palette and the methodological strategy development techniques espoused by Martin Reeves and Henry Mintzberg is essential. The leading change and design principles authored by John Kotter and Bryan Lawson, and so many others, are also critical and should be considered as we Design the future.

The decisive arm of war and conflict now is the U.S. National Industrial Base. The U.S. citizenry possesses the means to fight and win on Virtual Battlefields. The U.S. citizenry lives and operates, unknowingly, in a Virtual Battlespace that it calls home and work. With a few exceptions, the American people and U.S. industry are on the sidelines, for now. Virtual War requires triple loop learning solutions. If we are to be successful, a completely new set of governing principles must be designed, outside the existing DoD paradigm, to live, fight and prosper in Virtual Battle Space. The national security approach that the U.S. uses in the future cannot be structured or operate the way our defense apparatus does at the moment.

The Arnold Avatar. Enemies of the United States learned after the 1991 Gulf War that it is a fool’s errand to fight the U.S. military in direct combat. The many actors who comprise the system of opposition to the U.S. are much more willing to engage in an algorithm vs. algorithm virtual battle, vice a tank-on-tank battle in physical space. The rivals of the U.S. have used the indirect approach in Physical Battle Space for nearly two decades, employing individual or small groups of assailants in terror attacks that produce tremendous carnage, as seen in Paris, London, Orlando and many other locations.

Virtual War presents a new repertoire of instruments for a friendly or opposition agent to groom targets and networks without immediate detection or reprisal. The Arnold Avatar is one such technique. This approach consists of four scenarios. The first scenario is to deceive your opposition over the course of a series of human-to-human engagements until the target is positioned for elimination, arrest or exploitation. The second pillar in the Arnold Avatar approach is for a single human, or small group of humans, to establish an online identity and influence a targeted population via social media to achieve desired behaviors in physical space. Human-to-machine and machine-to-machine spoofing operations are the other two techniques inherent in the Arnold Avatar, and they should be used on a daily basis in a Virtual Battle Space Maneuver Campaign Plan. (Note Figure #1 below.)

[Figure #1: image not included in this text.]

Holographic Armored Formations. Virtual projections of tank and other armored-vehicle holographic formations, which emit physical, electronic, cyber and social media signatures, will be possible on future battlefields. This is an example where a “Singularity” in warfare will occur, as virtual war and physical war elements are aligned. Holographic armored formations will be detectable and seen by opposition forces. Once the opposition force identifies and engages the tank holographic formation, friendly forces will be able to direction-find and triangulate the opposition force’s electronic and digital signatures and their firing locations. After this is accomplished, friendly forces can destroy the enemy using Physical Battle Space weapon systems. As we have seen, enemy forces and criminal organizations will develop or buy this new technology and co-evolve with U.S. fighting forces and our law enforcement agencies as soon as these “cloaking” capabilities are available.

The Aloha Scenario. This is an excerpt from Wikipedia concerning the recent false alarm on Oahu, an event with serious implications for U.S. leadership as it deals with future tenuous security impasses.

On January 13, 2018, a false ballistic missile alert was issued via the Emergency Alert System and Commercial Mobile Alert System over television, radio, and cellphones in the U.S. state of Hawaii. The alert stated that there was an incoming ballistic missile threat to Hawaii, advised residents to seek shelter, and concluded “This is not a drill”. The message was sent at 8:07 a.m. local time. The governor wanted to warn people by Twitter of the error but he couldn’t remember his password. A second message, sent 38 minutes later, described the first as a “false alarm”. State officials blamed a button pushed in error during an employee shift change at the Hawaii Emergency Management Agency for the first message. Governor David Ige publicly apologized for the erroneous alert, which caused panic and disruptions throughout the state. The Federal Communications Commission and the Hawaii House of Representatives immediately announced investigations into the incident.[ix]

This event occurred with no outside virtual influence on leaders of the government in the State of Hawaii. The technology exists today for an adversarial nation-state or group of technically capable actors to construct malicious virtual avatar scenarios that are capable of spoofing key political leaders at the state and national levels into acting in an irrational manner. Spoofing avatars could be used in a nuclear weapons scenario focused on North Korea, or another country, that causes unintended, disastrous reciprocal action by the United States.

A tangential historical case in point, with far less complexity relative to virtual influence, is Iran Air Flight 655, shot down on July 3, 1988 by the United States Navy guided missile cruiser USS Vincennes. The USS Vincennes entered Iranian territorial waters after one of its helicopters was fired upon from Iranian speedboats operating within Iranian territorial limits. Moments later, 290 civilians died in a scenario which is benign compared to the modern-day virtual scenarios that can be weaponized and transmitted via opposition cyber, social media, and fake news outlets, from dozens of sources at once, to a single decision-maker in a time-compressed, in-extremis environment.[x]

The U.S. National Security Imperative. “Whole of Nation Warfare” is now required to sustain the American way of life. The leadership of the United States must communicate the threat accurately to the American people, and mobilize the massive virtual industrial base, which resides in the U.S., to overcome the extraordinary challenge that is before our country. The “Anti-Fragile” attributes that are required to win in both Virtual Battle Space and in Physical Battle Space are not resident in the schemata which are employed by the U.S. DoD and broader governmental leadership today. This needs to change quickly if we intend to regain personal mastery of warfare in our time and maintain our way of life.

U.S. leaders must perpetually reimagine and redesign the way forward, to see the virtual, cognitive, moral and physical domains as evolving interdependent public-private frames of reference for decades and centuries to come. Faced with a growing network of opposition actors, the current reactionary decision-making infrastructure puts the U.S., and our way of life, at a disadvantage versus more agile adversaries who seek Virtual War. Exploiting a retinue of national, and international, thought leaders to design a lasting national technology-based policy, strategy and doctrine for the United States is the imperative of our time. Do we have the foresight to see, and the will to overcome the unprecedented threats that are confronting our republic and way of life?

End Notes

[i] Vinge, The Coming Technological Singularity Essay, 1993; Kurzweil, The Singularity is Near, 2005.

[ii] Clausewitz: On War. Book 1, Chapter 1.

[iii] Kurzweil, The Singularity is Near, 2005, Chapter 5 & Pages 13-28.

[iv] Banach, From 9/11 to London: The Need for Virtual Battle Space Maneuver Doctrine, Small Wars Journal, June 2017.

[v] Istrail, S., https://link.springer.com/chapter/10.1007/978-3-642-36751-9_2, 2012.

[vi] Kurzweil, The Singularity is Near, 2005, Page 23.

[vii] Ibid, Chapter 2.

[viii] Ibid, Chapter 2.

[ix] 2018 Hawaii False Missile Alert – Wikipedia.

[x]https://en.wikipedia.org/wiki/Iran_Air_Flight_655


Design for cognitive experiences, Part 1: The human-to-machine communication model

Discover better insights with higher confidence, faster than humanly possible. The key to taking a first step in the right cognitive application direction is asking not what it could do but what it should do. With artificial intelligence (AI), we have decades of data and research into human thought processes and communication to use as a blueprint. To simulate human relationships, we begin by observing and better understanding ourselves.


Analysis: China’s AI revolution threatens US

A new report from the Washington-based Center for a New American Security raised the alert level of the US defence community over China’s rise as an artificial intelligence (AI) superpower, one that could effectively destroy the American military by 2030.

The meticulous report no doubt will send a chill through the halls of the Pentagon.

The report, ‘Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power’, by Elsa Kania, paints a disturbing picture of China’s AI military modernisation programmes. Kania, as co-founder of the China Cyber and Intelligence Studies Institute, is well suited to write the investigative report using available Chinese-language open-source materials that reveal China’s military thinking and progress on AI.

Kania reported that China’s military is pursuing advances in ‘impactful and disruptive military applications of AI’ and has given them ‘high-level priority within China’s national agenda for military-civil fusion’. The goal is to become the world’s ‘premier innovation centre’ in AI by 2030.

According to the report, the Chinese military believes the advent of AI could fundamentally change the very character of warfare itself, transforming it from today’s ‘informatised’ warfare to ‘intelligentised’ warfare, in which AI will be critical to military power.

The result of this change would be the start of a major shift in China’s strategic approach, ‘beyond its traditional asymmetric focus on targeting US vulnerabilities to the offset-oriented pursuit of competition to innovate’.

China’s military is seeking ‘leapfrog development’ to achieve a ‘decisive edge’ in terms of ‘trump card weapons’ that provide a critical edge in ‘strategic frontline technologies’ against the US during a war. The report pointed out that the number of Chinese publications on ‘deep learning’ has exceeded that of the US since 2014, and that China ranks second in AI patent applications, with 15,754 in total filed as of late 2016.

In July, China released the New-Generation AI Development Plan that articulated its ambition to lead the world in AI by 2030, becoming the premier global AI innovation centre. ‘Under this strategic framework, China will advance a three-dimensional agenda in AI: tackling key problems in research and development, pursuing a range of products and applications, and cultivating and expanding AI industry.’

This will include support for AI technologies that could result in paradigm shifts, including brain-inspired neural network architectures and quantum-accelerated machine learning. ‘The plan calls for building up national resources for indigenous innovation and pursuing continued advances in big data, swarm intelligence and human-machine hybrid intelligence…’

The report noted that Chinese teams dominated the ImageNet Large-Scale Visual Recognition Challenge, an AI computer vision contest, in 2016 and 2017. For the first time, at the 2017 annual meeting of the Association for the Advancement of Artificial Intelligence, China submitted an equal number of accepted papers compared to the US.

Then in November, Yitu Tech, a Chinese facial recognition start-up, took first place in the Face Recognition Prize Challenge hosted by the Intelligence Advanced Research Projects Activity (IARPA). What the reader might find disturbing is that the Maryland-based IARPA is under the US Office of the Director of National Intelligence, which funds research across a range of technical areas, including mathematics, computer science, physics, chemistry, biology, neuroscience, linguistics, political science and cognitive psychology.

IARPA’s activities would be a natural fit for the Chinese Communist Party as it increases social control and stability through ‘new techniques for policing, censorship and surveillance, such as the installation of millions of surveillance cameras enhanced with AI technology’.

The report highlighted concerns the US should have about cooperative efforts with China. Chinese investments in Silicon Valley AI have fuelled the debate on whether the Committee on Foreign Investment in the United States should expand reviews of Chinese high-tech investments, especially in AI. For example, the report pointed out that the USAF became concerned after Chinese investment in Neurala, an AI start-up known for ‘innovative deep learning technology that can make more reactive robots’. The company is building the ‘Neurala Brain’ with deep-learning neural network software.

Between 2012 and mid-2017, Chinese technology investments amounted to $19 billion in the US with particular focus on AI, robotics and augmented or virtual reality, said the report. In May 2014, Baidu Inc. established its Silicon Valley Artificial Intelligence Laboratory. In June 2014, Qihoo 360 Technology Co, a Chinese cybersecurity company, and Microsoft established a partnership in AI that focused on AI and mobile Internet.

In November 2015, the Chinese Academy of Sciences Institute of Automation (CASIA) and Dell established the Artificial Intelligence and Advanced Computing Joint Laboratory, which is pursuing development of cognitive systems and deep-learning technologies.

In January 2016, BEACON (Bio/computational Evolution in Action CONsortium), a centre located at Michigan State University, received funding from the US government via the National Science Foundation to establish the Joint Research Center of Evolutionary Intelligence and Robotics, headquartered at Shantou Technical University, also in partnership with the Guangdong Provincial Key Laboratory of Digital Signal and Image Processing.

In October 2016, Huawei Technologies devoted $1 million in funding to a new AI research partnership with the University of California, Berkeley. In April 2017 Tencent announced plans to open its first AI research centre in Seattle. That same month, Baidu Inc acquired xPerception, a US start-up specialising in computer vision.

The US is not the only accomplice. In 2011 and 2012, the University of Technology Sydney (UTS) established five research centres with Chinese universities that included centres on intelligent systems, data mining, quantum computation and AI. In 2017, UTS partnered with the China Electronics Technology Group (CETC) focusing on big data, AI and quantum technologies.

In 2014 Chinese drive-system maker Best Motion created a research and development centre at the University of Nottingham to develop high-quality servo drive systems for use in AI and robotics. In 2016 the Torch Innovation Precinct at the University of New South Wales was established as a joint China-Australia science and technology partnership to research military-relevant technologies, such as unmanned systems.

In March of this year the Hangzhou Wahaha Group constructed three AI centres in China and Israel as a collaboration between CASIA and the University of Haifa. In July, China, France and the Netherlands renewed an agreement for a joint Sino-European Laboratory in Computer Science, Automation and Applied Mathematics, in partnership with CASIA with a major focus on AI.

If there was one turning point in Chinese military attitudes towards AI, it was in March 2016, when Google-owned DeepMind’s AlphaGo beat world champion Lee Sedol. Lee’s defeat ‘captured the PLA’s imagination at the highest levels, sparking high-level seminars and symposiums on the topic’, the report said.

There was also a rise in Chinese military analysis of the US Defense Advanced Research Projects Agency’s programme Deep Green, which is a system that supports commanders’ decision-making on the battlefield through advanced predictive capabilities, including the ‘generation of courses of action, evaluation of options and assessment of the impact of decisions’.

As recently as September, the China Institute of Command and Control (CICC) sponsored the first Artificial Intelligence and War-Gaming National Finals, convened at the National Defense University’s Joint Operations College. It involved a ‘human-machine confrontation’ between top teams and an AI system called CASIA-Prophet 1.0, which was victorious over the human teams by a score of 7 to 1.

Chinese military thinkers now anticipate an ‘intelligentisation of warfare’ that could result in a trend toward battlefield singularity. Under these conditions, humans would no longer have the capacity to remain directly ‘in the loop’, but would still possess ultimate decision-making authority – ‘human on the loop’, i.e. ‘exercising supervisory control’.

Chinese military strategists want to develop synergies between intelligentised or autonomous systems and directed-energy weapons that will enable ‘light warfare’ involving the fusion of real-time information and ‘zero-hour’ attacks. This will include all forms of military weapons. Chinese AI start-up IFlytek is working with the Chinese military on a voice recognition and synthesis module for intelligence processing for this very reason.

Of particular concern is the Chinese military’s Strategic Support Force (SSF) that seeks to build up advanced cyber warfare capabilities, leveraging big data and machine learning. According to the report, the SSF’s Information Engineering University has developed methods to detect and mitigate distributed denial of service (DDoS) attacks through pattern matching, statistical analysis and machine learning, as well as to detect advanced persistent threat detection based on big data analysis.
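The report does not describe the SSF’s actual techniques, so the following is only a minimal sketch of what machine-learning-based DDoS detection can look like in general: an unsupervised anomaly detector fit on normal per-source traffic statistics and used to flag flood-like outliers. The feature set, numbers and parameters are illustrative assumptions.

    # Illustrative sketch of ML-based traffic anomaly detection, not the
    # method described in the report. Assumed features per traffic source:
    # [requests/sec, mean packet size in bytes, distinct destination ports].
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "normal" traffic used to fit the detector.
    normal = np.column_stack([
        rng.normal(50, 10, 1000),    # requests per second
        rng.normal(800, 150, 1000),  # mean packet size (bytes)
        rng.normal(3, 1, 1000),      # distinct destination ports
    ])
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A flood-like observation: very high request rate, tiny packets.
    suspect = np.array([[5000.0, 64.0, 1.0]])
    print(detector.predict(suspect))  # -1 flags an anomaly, 1 means normal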

China’s national strategy of military-civil fusion enables China to transfer dual-use technological advances to build up military capabilities while promoting economic growth. The report advised the US government to compete with and counter Chinese AI advances, and suggested the Pentagon consider supporting research to track China’s AI defence innovation ecosystem.

Further, the report recommended reforms to laws designed to constrain ‘illicit and problematic’ technology transfers and changes on how the Committee on Foreign Investment decides what investments and acquisitions are a threat to national security.


I’m a pacifist, so why don’t I support the Campaign to Stop Killer Robots?

The Campaign to Stop Killer Robots has called on the UN to ban the development and use of autonomous weapons: those that can identify, track and attack targets without meaningful human oversight. On Monday, the group released a sensationalist video, supported by some prominent artificial intelligence researchers, depicting a dystopian future in which such machines run wild.

I am gratified that my colleagues are volunteering their efforts to ensure beneficial uses of artificial intelligence (AI) technology. But I am unconvinced of the effectiveness of the campaign beyond a symbolic gesture. Even though I identify myself strongly as a pacifist, I have reservations about signing up to the proposed ban. I am not alone in this predicament.

Apart from the difficulty of pinning down exactly what the ban entails for states that want to follow it – is the ban against autonomy or intelligence? – I wonder about the ban’s ability to deter misuse by rogue state or non-state actors. To the extent that bans on conventional and nuclear weapons have been effective, it is because of the significant natural barriers to entry: the raw materials and equipment needed to make those weapons are hard to obtain, and responsible states can control them to a significant extent by fiat and sanctions. In contrast, AI technology, which ostensibly enables the kind of weapons that this ban is aimed at, is already quite open, and some may argue, admirably so. Misuses of it can thus be as hard to control by fiat and bans – as with cyber warfare, for example.

Consider the hypothetical “killer drones” depicted in the video accompanying the Guardian’s article on the call for the ban. Even today, the face recognition technology supposedly needed by such drones can be easily constructed by anyone with access to the internet: several near-state-of-the-art “pre-trained networks” are available open source. Things will only become easier as we make further technical advances.
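To illustrate how low that barrier really is, here is a minimal sketch of face matching using the open-source face_recognition Python library (built on dlib); the filenames and tolerance value are illustrative assumptions, not a reference to any actual system.

    # Minimal face-matching sketch using the open-source face_recognition
    # library (pip install face_recognition). Filenames are hypothetical.
    import face_recognition

    # Encode a reference face from a known photograph.
    known_image = face_recognition.load_image_file("reference.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Scan a new image (e.g. a single camera frame) for faces.
    frame = face_recognition.load_image_file("camera_frame.jpg")
    locations = face_recognition.face_locations(frame)
    encodings = face_recognition.face_encodings(frame, locations)

    # Compare each detected face against the reference encoding.
    for encoding, location in zip(encodings, locations):
        match = face_recognition.compare_faces([known_encoding], encoding,
                                               tolerance=0.6)
        if match[0]:
            print("Reference face found at", location)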

Given these significantly lower barriers to entry, even if the UN and some constituent states agreed to a ban, it is far from clear that it would stop other rogue state and non-state actors from procuring and deploying such technology for malicious purposes. This would render such bans at best a Pyrrhic victory for the proponents of peace, and at worst entail the ironic and unintended effect of tying the hands of the “good actors” behind their backs, while doing little to stop the bad ones.

As an AI researcher, I am also disturbed by the sensationalisation of the whole issue through dystopian – if high production value – videos such as the one reported in the Guardian article. Using “Daisy Girl”-style campaign ads designed to stoke public fears about AI technologies seems to me to be more an exercise in inflaming rather than informing public opinion.

Given these concerns about the effectiveness of blanket bans, I believe that AI researchers should instead be thinking of more proactive technical solutions to ameliorate potential misuses of AI technologies. As one small example of this alternate strategy, we held a workshop at Arizona State University in early March 2017 titled Challenges of AI: Envisioning and Addressing Adverse Outcomes. The workshop was attended by many leading scientists, technologists and ethicists, and had the aim of coming up with defensive responses for a variety of potential misuses of AI technology, including lethal autonomous weapons.

One recurrent theme of the workshop was using AI technology itself as a defence against the adverse/malicious uses of AI. This could include research into so-called “guardian AI systems” that can provide monitoring and defensive responses. Even if such efforts don’t succeed in completely containing the adverse effects, they could at least better inform the public policy on these issues.

To reiterate, I consider myself a pacifist, and have always supported efforts to control arms and curb wars. If I believed that the proposed ban would be effective and not merely symbolic, and that this campaign would inform rather than inflame the public, I would have gladly supported it.

  • Disclaimer: In the interests of full disclosure, let me state that some of my basic research (on human-aware AI) is supported by US Department of Defense funding agencies (eg Office of Naval Research). However, my research funding sources have no impact on my personal views, and defence funding agencies in the US support a wide spectrum of basic research, including that by researchers involved in the ban campaign.

Subbarao Kambhampati is a professor of computer science at Arizona State University, and the president of the Association for the Advancement of Artificial Intelligence.


“We’ll see societal changes when AI affects people’s lives in really meaningful ways”

Mohammad Rahman, Associate Professor of Management, Purdue Krannert School of Management, spoke to Business Today’s Rajeev Dubey on the evolution of Artificial Intelligence. Edited excerpts.

Are you on the Ayes or the Nays side of the AI debate? Would you concur with Zuckerberg or with Elon Musk?

It’s an interesting question. My thinking on this probably resonates with both of them. I can’t just say I agree with one of them. To be very frank, I don’t know their full thinking. But let me explain my position on this. That might be easier. So, I don’t think we can just sit here and not worry about thinking through the implications and the policies we need with the development of AI and how it is going to transform our society, our workforce, and our mobility. Fifty to 100 years later, a lot of this may be second nature, but we’ve got to get through those 50 years. So, we need to think through what it means. At the same time, I don’t think we should say, ‘OK, timeout, we don’t let this technology work.’ I don’t think that is what Elon Musk is saying, that we just stop. And I don’t think Zuckerberg thinks that we don’t need any sort of regulations in thinking through this process.

If you think about today, the Tesla, Elon Musk’s car, is able to detect crash probabilities and stop the car. Now, none of us want to get involved in a car accident. It’s a black and white decision. We can train AI how to do this: avoiding car accidents. We should. Now, when a car accident happens, how often do people passing by stop or get out to help those affected? It’s very, very judgmental. My response may be different depending on the day, my priorities and where I need to be. How many times do you see someone needing help on the road, who maybe needs help with a flat tire? How many times do you stop? These are judgmental things, and these are simple judgmental things, but there are lots of more complex ones that need to be addressed with AI-related possibilities. If we don’t think through the range of choices and judgements that we have as humans, and somehow that is not part of the design, that is going to be a problem. If you think we are all going to buy one color of iPhone, that is how you can potentially think, but that is not true. There are potentially three or four colors and everyone is buying one of them. It is not the same thing. So judgment is very important.

What’s the state of evolution of AI right now?

To me, AI is when you are able to let the machine make certain decisions that are not so codified. For example, in a crash detection system, it is calculating the probability of a crash and making a decision to stop the car. That is AI. Where is the state of evolution? I think we are still in the world of small worlds with AI. As human beings who deal with a lot of big-world decisions, AI is not ready to step in. AI is specific to the activity such as the crash detection system. That crash detection system is really good at figuring what cars are following each other and what is going to happen when the car in front of you stops. That is precisely what AI is trained for. Learning itself without any kind of training data, that is not out there. We need a lot of data to train AI so that it can deal with situations as they come up based on what it has seen before. That is where we stand today.
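As a toy illustration of the kind of narrow, codified decision Rahman describes, a crash-avoidance rule can be reduced to a time-to-collision check: brake when the projected time to impact falls below a safety margin. The threshold and numbers below are illustrative assumptions, not any manufacturer’s logic.

    # Toy time-to-collision (TTC) check illustrating a narrow, codified
    # driving decision. The threshold is illustrative, not a production value.
    def should_brake(gap_m: float, closing_speed_mps: float,
                     ttc_threshold_s: float = 2.0) -> bool:
        """Brake if we are closing on the car ahead and the projected
        time to impact is below the threshold."""
        if closing_speed_mps <= 0:  # not closing in: no collision course
            return False
        time_to_collision_s = gap_m / closing_speed_mps
        return time_to_collision_s < ttc_threshold_s

    # Example: a 30 m gap closing at 20 m/s gives 1.5 s to impact -> brake.
    print(should_brake(30.0, 20.0))  # True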

In your view, which of the world’s primary AI tool providers – IBM, Google, Microsoft, AWS – has a head start vis-à-vis the others? What do you see as the SWOT of each?

This is an opinion question, and I’m not privy to everything going on inside these companies. But from what we see on the outside, I think IBM is a little ahead. IBM has invested a lot and took a risk up front. On the other hand, AI requires a lot of learning from cues and contextual data, and IBM relies on partners for much of this, such as those in health care and GM’s OnStar system for shopping and planning. Depending on how those partnerships evolve, its competitive advantage could change. In my view, Google has far more contextual information about people than any of these companies – from Gmail to searches to its push into the transaction side of the business – and it has partnered with Walmart so people can order things from Walmart. Google Home now knows what temperature you keep inside your house. As that evolves, companies will ask: how much data do I have, and how will I train my system? There is a big competitive advantage in the richness of the data you hold. So I think Google may be able to catch up. I think Amazon is heading that way, too; Alexa is pushed everywhere. Microsoft is a very good software company, but at this moment it seems to be in a position like IBM’s, having to rely on other companies because it doesn’t have a lot of contextual data. These companies are all going to fight for leadership in AI. And as consumers, we are probably not going to stick to just one of them.

Today, a lot of chatbots, data analytics, big data, etc. are being passed off as AI. What’s your view?

In my view, a chatbot, where you have a structured question and a structured answer, is not AI. Analytics and big data are going to shape how we see the world and make decisions, but just having data analytics is not having AI. AI, to me, is a system that is able to make decisions when facing a decision-making scenario.

When do you expect the real AI revolution to begin?

I think everything has a silent phase, then a buzz phase, and then an application phase. Machine learning and artificial intelligence were really picking up in the ’80s; the field sort of died down, and it is now slowly coming back. I think we are going to see societal changes – or you can call it an evolution – when it affects people’s lives in really meaningful ways. That could start with self-driving cars. Some factories are already automated, as are warehouses like Amazon’s, where automated robots do the work. As that becomes more prevalent and forces us to move away from tasks that people have done for many years, that is when the revolution starts.

Startups seem to be taking to AI faster than established companies. Any reason?

That is natural, because in established companies you always have to worry about current priorities: I have to generate income, I have to make my shareholders happy. Places like Google allow workers to have pet projects, but pet projects are not there for everyone, five days a week; the company would be in trouble. So it is natural to see a lot of start-ups. In a start-up culture, you basically let a thousand flowers bloom, but we don’t see all thousand when we are done; we see the few that survive. To me, this is such an unstructured problem, something we are trying to push to the next horizon, that it ought to be individuals and small companies really trying to do it right.

Do you have any views on the adoption of AI in India?

India is a fairly advanced country when it comes to technology, because of outsourcing and its workforce, so you will certainly see a lot of push there as well. I would not be surprised if many things are rapidly deployed to India. But, as with any other technology, there are contextual differences, and AI is no exception. What works well for us in the U.S. might not work well in China; that’s the lesson Uber learned in China. There are cultural factors. In India, there are some bright spots where AI can take off, but there are also many people trying to survive by doing basic agriculture. In a country with a large population, where not everyone is skilled and employed to their fullest, most productive extent, AI will displace people who are relatively well off in that society right now. I think it can create a lot of chaos in society. Being one of the largest democracies makes it complicated as well.



What’s your view of the use of AI in China, and specifically China’s approach to it? How different is it?


In China, it seems AI is often identified with data tracking and data generation, which is not necessarily AI. I do know that there are a lot of factories in China replacing humans with AI robots, especially for repetitive tasks. Robots are good at repetitive tasks, but dexterity, for instance, is something they are not very good at. My take on China is that, because of regulation, society, and government strategy, things take off faster there. If the government decides to do something, it happens quickly, whereas in a democracy there would be a lot of pushback and discussion. There are two sides to that. I’m sure there will be advancements, and there is a lot of money in the Chinese economy.

Another point is that warfare is increasingly based on AI-driven systems. The next generation of influence in the world – who the megapower is – really depends on innovation around cyber war and AI war, such as unmanned planes that can detect and bomb targets. China is very well positioned to take this on because of its resources.

Related:

Artificial intelligence (AI)…and artificial stupidity (AS)

AI is a complex and mysterious topic, so stay with me as we decode what’s going on. First of all, there are many disparate, misleading, and confusing words being used to describe AI at different levels today. This irritating situation suggests that unemployed English literature majors have found work in the marketing departments of semiconductor makers and software companies. So, let’s untangle the mess that these grandiloquent catachrestic bards have created with simple engineering principles, definitions, and examples.

Consider AI as the initial point of a divergent tree diagram (one-to-many), with branches coming off that point. The first branch on the left is Frozen-AI, and that’s just another word for the expert systems we heard about in the 1980s. Frozen-AI is the knowledge from an expert about some process, reduced to software. A frozen-AI system does not learn; it does not execute the process faster or more efficiently no matter how many times it runs. Take tax return software as an example: you type in the numbers, the software puts the numbers in the proper places on the proper forms and does the math. That’s Frozen-AI.
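
To make "frozen" concrete, here is a minimal Python sketch of an expert system with invented tax brackets (not real tax law): the expert's rules are hard-coded, and the program behaves identically on its millionth run as on its first.

```python
# A minimal sketch of "Frozen-AI": an expert's rules reduced to software.
# The brackets below are invented for illustration, not real tax law.

def tax_owed(taxable_income: float) -> float:
    """Apply fixed, hand-coded rules -- the system never learns or improves."""
    brackets = [          # (upper limit, rate) -- hypothetical values
        (10_000, 0.10),
        (40_000, 0.22),
        (float("inf"), 0.35),
    ]
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income > lower:
            owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return round(owed, 2)

# Run it a million times and it answers identically every time: frozen.
print(tax_owed(55_000))   # 10,000*0.10 + 30,000*0.22 + 15,000*0.35 = 12850.0
```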

The next branch, on the right, is machine learning (ML). From that branch, we get two other branches: shallow-AI, also known as Narrow-AI or ANI (artificial narrow intelligence), and deep learning (DL).


Let’s use an industrial application to show how shallow-AI works: making peanut butter. Assume that we have an AI machine that we need to teach. We turn the dials and flip the switches to grind up one ton of peanuts into powder, and add certain amounts of preservatives, sugar, and oil in the vat. Then we heat it up and stir it. The machine watches (i.e., records) what the operator does and learns how to do it. After we run several batches, the machine knows exactly how to make peanut butter. Shallow-AI systems deal with a few simple variables and only need to see a few examples to master the task.
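
As a hedged sketch of what that looks like in code: a handful of recorded batches (all numbers invented), a few simple variables, and an off-the-shelf regression standing in for the machine's learning.

```python
# A sketch of shallow (narrow) AI: few variables, few examples.
# Numbers are invented; each row is one batch the machine "watched".
from sklearn.linear_model import LinearRegression

# Features per batch: [tons of peanuts, kg sugar, liters oil, stir minutes]
batches = [
    [1.0, 50, 20, 30],
    [1.2, 60, 24, 35],
    [0.8, 40, 16, 25],
    [1.1, 55, 22, 32],
]
quality = [8.5, 8.8, 8.1, 8.7]   # operator's quality score for each batch

model = LinearRegression().fit(batches, quality)

# After a handful of examples, the machine can predict (and later control)
# the outcome of a new batch -- but only for this one narrow task.
print(model.predict([[1.0, 52, 21, 31]]))
```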

DL is a process that contains thousands or millions of variables, and needs thousands or millions of examples to master a task. Let’s take the Frozen-AI tax return software and turn it into a DL machine. After the machine completes your tax return, it says that you will have a cash flow problem in 30 days, because your wife has been writing checks to a divorce lawyer from her private checking account, and she’s about to dump you. The DL machine went out on the internet, looked into your accounts, gathered all the data about your complete financial situation, evaluated it, and then gave you accurate predictions about your future along with your tax return. That’s DL.

Another example of DL is speech recognition with language translation. Google just released its new Pixel Buds, which link to its Pixel smartphone over Bluetooth. The phone then connects to Google Translate software. Two people can converse in different languages at the same time and understand each other: the Pixel setup recognizes each language and translates it into the native language of the listener. Both speech recognition and language translation have thousands of variables and need thousands of examples to become proficient.

The guts of AI is the neural network (NN), also called an artificial neural network (ANN). A physical neural network (PNN) is made up of cells connected to many other cells, like the neurons in our brains. These cells do not hold zeroes and ones like a computer; they contain values between zero and one that can bias the other connected cells. Some 27 different topologies have been identified for neural networks today: feed-forward, recurrent, convolutional, variational, and a bunch of other types that I can’t explain here. It’s best for both of us if you view the diagrams and read about them: <https://semiengineering.com/using-cnns-to-speed-up-systems/>. However, software algorithms running on processors and GPUs, with parallel execution paths, are the basic platform today. These algorithms represent the cells with zeroes and ones, and then calculate the answer.

How do AI algorithms work? Think of it like a spreadsheet (in AI jargon, it’s called a matrix). You bring in your bits (examples) and put them in column 1. Each bit in column 1 is assigned a value, called a weight, and those go in column 2. The weight of each bit is compared to the weights of the four or five bits around it (lots of math involved here), and those integrated weights go in column 3. Those weights are integrated with the ones around them and put in column 4, and on and on. The answer shows up in the middle of column 10. The process looks like a convergent tree diagram (many-to-one) when you are done.
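
As a rough illustration of that column-by-column picture, here is a toy Python sketch with random numbers standing in for trained weights. Real systems differ in almost every detail, but the shape of the computation, repeated matrix math narrowing to one value, is the same.

```python
# A sketch of the "spreadsheet" picture: each column of values is combined
# with its neighbours by matrix math until one answer remains (many-to-one).
# The weights here are random stand-ins for what training would produce.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(16)                 # column 1: the input bits/examples

layer_sizes = [16, 8, 4, 2, 1]     # columns narrow toward a single answer
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    W = rng.normal(size=(n_out, n_in))    # learned weights (random here)
    x = 1 / (1 + np.exp(-(W @ x)))        # weigh, sum, squash to (0, 1)

print(x)   # the single value that "shows up" in the last column
```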

What are the most pressing applications of AI in the military? We’ll start with image analysis and facial recognition. We have thousands of hours of intelligence video and still pictures from unmanned aerial vehicles (UAVs), intelligence aircraft, and satellites, but not enough image analysts to view it all and report their findings. Movidius released its Neural Compute Stick back in April; it plugs into a PC’s USB port and can do both image processing and facial recognition. In August, Intel introduced its new VPU (Vision Processing Unit), which can be designed into the electronics on UAVs and ground vehicles. It can perform four trillion operations per second with very low power consumption. We can use these chips to do image analysis in seconds, without a human in the loop. There are many other chips and algorithms coming to market daily.

Voice recognition and language translation are another major area for AI in military applications. We have thousands of hours of intercepted voice communications from our enemies, but not enough linguists to interpret them and report their findings. We can take the Pixel Buds, the Pixel smartphone, and Google Translate, and hook them to a search algorithm that looks for keywords like tank, bomb, IED (improvised explosive device), highway, etc. in the conversation. Then we can eliminate all the chatter and print out only the most important intelligence findings in seconds, without a linguist in the loop.
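
A minimal sketch of that final filtering stage, with invented keywords and transcript lines, shows how trivial the last step is once the hard AI work (recognition and translation) is done.

```python
# A sketch of the final filtering stage: scan translated transcripts for
# keywords and surface only the lines worth a linguist's time.
# The keywords and transcript lines are invented for illustration.
KEYWORDS = {"tank", "bomb", "ied", "highway", "convoy"}

def flag_lines(transcript: list[str]) -> list[str]:
    return [line for line in transcript
            if KEYWORDS & set(line.lower().split())]

transcript = [
    "the weather is bad near the market",
    "move the tank to the highway before dawn",
    "call your brother about the wedding",
]
for hit in flag_lines(transcript):
    print("FLAGGED:", hit)
```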

Another example is space-time adaptive processing (STAP) and cognitive electronic warfare (CEW). Our enemies are very good at jamming our radar signals on the battlefield now. With AI-based STAP, we can overcome their jamming techniques and find the targets. AI-based CEW is the same problem in reverse: we can better jam or spoof our enemy’s radar.

There are many other areas where AI will infiltrate military systems: autonomous weapons, cyber warfare, better unmanned vehicles, multi-domain warfare, cross-domain warfare, missile defense, etc. And all those platforms will feed information into the mother of all AI machines: the War Algorithm. DepSecDef Bob Work started that process with his memo in April 2017, establishing the “Algorithmic Warfare Cross-Functional Team” (Project Maven). All the data coming from all the platforms and soldiers in the battle is fed into the War Algorithm machine, and that machine will tell the generals what to do next to defeat the enemy. That’s the plan.

Now, we need to explore artificial stupidity (AS). Billions of AI chips and algorithms are being put into everyday appliances and into the cloud. As a hypothetical example, let’s assume you have an Apple, Google, or Amazon personal digital assistant (Siri, Assistant, or Alexa, respectively). You are nearing retirement and want to know what to do to stay healthy and active, so you ask the digital assistant. It goes up to the cloud, looks at trillions of pieces of data from all over the world, and runs them through an AI machine. It comes back and tells you to avoid playing golf! Why? Because a very large percentage of golfers die in their late 60s and 70s. What the assistant found is a very high correlation between death and golf. What we really have here is correlation without causation: old age is killing retired golfers, not golf itself. Read Charles Wheelan’s book, “Naked Statistics,” to understand this kind of AI danger. Einstein is credited with saying (but it was probably Alexandre Dumas or Elbert Hubbard who deserves the recognition): “The difference between genius and stupidity is that genius has its limits.”
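
The golf trap is easy to reproduce. Here is a small simulation (all numbers invented) in which age drives both golfing and dying, so golf and death correlate strongly even though one does not cause the other; holding age fixed makes the "effect" vanish.

```python
# A sketch of correlation without causation: age drives both golfing and
# mortality, so golf and death correlate even though golf causes nothing.
import numpy as np

rng = np.random.default_rng(1)
age = rng.uniform(30, 90, 100_000)

golfs = (age + rng.normal(0, 10, age.size)) > 60   # older people golf more
died = (age + rng.normal(0, 10, age.size)) > 75    # older people die more

corr = np.corrcoef(golfs, died)[0, 1]
print(f"golf/death correlation: {corr:.2f}")       # clearly positive

# Hold age (the confounder) roughly fixed and the "effect" of golf
# disappears: death rates for golfers and non-golfers are about equal.
band = (age > 74) & (age < 76)
print(died[band & golfs].mean(), died[band & ~golfs].mean())
```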

So, just understand that we are in the infancy of AI development, and expect to see a lot of AS coming out of the machines for a while. The opposing view is that the more data you have, the better the AI machine learns. These machines can look at billions of pieces of data (examples) in a few seconds. Statistically, that says AI machines will get very good, very fast. Ultimately, it depends on what you ask the AI machine, and how much valid data you have on that topic.

Now you know a little about AI, how the military will use it, and where we are on the curve. What you really need to consider is how and where AI fits into the kill chain (find, fix, fire, finish, feedback). Image analysis and language translation fit into the find and fix phases. Other implementations of AI will go into the fire, finish, and feedback phases, once AI hardware and software are more stable and reliable. In fact, every military platform, test, study, analysis, review, weapon, budget, program, and strategy fits somewhere in the kill chain. Once you look at all the military news and announcements through the eyes of the kill chain, confusion goes away and everything makes sense. That’s the topic of our next episode.

Related:

My lousy Super Bowl-betting AI shows how humans are indispensable in cybersecurity

Artificial intelligence and machine learning have never been more prominent in the public forum. Earlier this year, CBS’s 60 Minutes featured a segment promising myriad benefits to humanity in fields ranging from medicine to manufacturing. World chess champion Garry Kasparov recently debuted a book on his historic chess match with IBM’s Deep Blue. Industry luminaries continue to opine about the potential threat AI poses to human jobs and even humanity itself.

Much of the conversation focuses on machines replacing humans. But the fact is the future doesn’t have to see humans eclipsed by machines. In my field of cybersecurity, as long as there are human adversaries behind cybercrime and cyber warfare, there will always be a critical need for human beings teamed with technology.

Intellectual honesty required

Over last Christmas break, I wanted to explore the field of machine learning by creating some simple models that would examine some of its strengths and weaknesses – and also demonstrate some of the issues related to sampling and overfitting. Given that we were two months away from the Super Bowl, I built a set of models that would attempt to predict the winner.

One model was trained on team data from the 1996 to 2010 seasons, using input features such as regular-season results, offensive strength, and defensive strength. The model was amazingly effective at predicting the winners for those years, picking all but one of the games correctly. The one miss was the prediction that both the Pittsburgh Steelers and the Arizona Cardinals would win in 2009.

But why am I writing this, then, instead of flying to Vegas to place huge wagers on games? Well, let’s start by checking how the model performed on six more recent games, comparing its picks against the true results.

The effectiveness of this model no longer appears so impressive – in fact, it’s no more effective than flipping a coin! What is it about this model that made it work so well on games from 1996 to 2010, but fall apart in more recent years?

The answer lies in two aspects of how the model was built and how the experiment was run. The model was “over-trained”, meaning it learned the “noise” in the games it was trained on. We also see how different the results can be when testing a model on data it was trained on versus data it has never seen (what we call testing on in-sample versus out-of-sample data, respectively).
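
For the curious, here is a hedged sketch of the same phenomenon on synthetic data (not my actual Super Bowl model): labels that are pure noise, a model that memorizes its training set, and the tell-tale gap between in-sample and out-of-sample accuracy.

```python
# A sketch of the lesson above: with random (noise) labels, a model that
# memorizes its training data looks perfect in-sample and is a coin flip
# out-of-sample. The data here is synthetic, not real NFL statistics.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.random((200, 10))         # 10 made-up "team strength" features
y = rng.integers(0, 2, 200)       # labels are pure noise by construction

train, test = slice(0, 150), slice(150, 200)
model = DecisionTreeClassifier().fit(X[train], y[train])

print("in-sample accuracy:    ", model.score(X[train], y[train]))  # ~1.0
print("out-of-sample accuracy:", model.score(X[test], y[test]))    # ~0.5
```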

A key point of this demonstration is that a very bad model can be presented as having amazing results. The model generally doesn’t “know” what you are asking it – it doesn’t understand the concept of “winning the Super Bowl” – but it can make classification decisions based on a complex set of inputs and their relationships to each other. This is important to understand as we apply machine learning to cybersecurity.

In cybersecurity, models generally don’t understand the concept of “a cyber attack” or “malicious content,” but they can do a remarkable job of fighting them when trained on the massive quantities of data we have on those issues. For example, we can look at the structural elements of all the malware seen over the last 20 years to build effective models for identifying new malware that is similar in structure or built using similar techniques.
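
As a hedged illustration of the idea (the features and values below are invented, not a real detection model), a classifier trained on structural attributes can flag a file it has never seen, purely because its structure resembles known malware.

```python
# A sketch of structural classification: judge files by structural features
# rather than exact signatures. Features and data are invented; real systems
# extract hundreds of attributes from each binary.
from sklearn.ensemble import RandomForestClassifier

# Per file: [section count, entropy of packed sections, imports, is_signed]
X = [
    [4, 6.1, 120, 1],   # benign-looking
    [3, 5.8, 200, 1],   # benign-looking
    [9, 7.9, 4, 0],     # packed, few imports: malware-like
    [8, 7.7, 6, 0],     # packed, few imports: malware-like
]
y = [0, 0, 1, 1]        # 0 = benign, 1 = malicious

clf = RandomForestClassifier(random_state=0).fit(X, y)

# A new, never-seen file with malware-like structure gets flagged --
# and a packed-but-benign file may get flagged too (a false positive).
print(clf.predict([[7, 7.8, 5, 0]]))
```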

The issue with “is it similar to the known” is that it can lead to both false negatives and false positives. A new form of malicious content developed from scratch will be difficult to detect, while benign samples that share the characteristics of malicious content will trigger alarms. For example, a benign executable (such as calc.exe) can be packed using a packer known to be used by cybercriminals to compress and obfuscate malware; many existing detection models will recognize the packer’s work and falsely flag the executable as malicious.

The human advantage

Human-machine teaming is nothing new. We have used machine learning in hurricane forecasting for the last thirty to forty years, and over the last 25 years we’ve improved the accuracy of our hurricane forecasts from within 350 miles to within 100 miles of contact.

Nate Silver’s bestseller The Signal and the Noise (2012) notes an interesting trend: while our weather-forecasting models have improved, combining that technology with human knowledge of how weather systems work has improved forecast accuracy by 25 percent. Such human-machine teaming saves thousands of lives.

The key is recognizing that humans are good at certain things and machines are good at others; the best outcome comes from recognizing the strengths of each and combining them. Machines are good at processing massive quantities of data and performing operations that inherently require scale. Humans have intellect: they can reason about how an attack might play out even if it has never been seen before.

Cybersecurity is also very different from other fields that utilize big data, analytics, and machine learning, because there is an adversary trying to reverse engineer your models and evade your capabilities. We have seen this time and time again in our industry.

Technologies such as spam filters, virus scanners, and sandboxing are still part of protection platforms, but their industry buzz has cooled since criminals began working to evade them. Thunderstorms are not trying to evade the latest machine learning detection technologies – but cybercriminals are.

A major area where we see human-machine teaming playing out is attack reconstruction: having technology assess what has happened inside your environment, then having a human reason through the scenario.

Efforts to orchestrate security incident responses can benefit tremendously when a complex set of actions is required to remediate a cyber incident and some of those actions will have very severe consequences. Having a human in the loop not only helps guide the orchestration steps, but also ensures the required actions are appropriate for the level of risk involved.

Whether it’s threat intelligence analysis, attack reconstruction, or orchestration — human-machine teaming takes the machine assessment of new intelligence and layers upon it the human intellect that only a human can bring.

Doing so can take us to a very new level of outcomes in all aspects of cybersecurity. And, now more than ever, better outcomes are everything in cybersecurity.


Related:

What is artificial intelligence?

Long a staple of science fiction, on screen and on the page, artificial intelligence is becoming more and more real in our world. But what exactly is AI? What does it do? What impact does it have on our lives?

This article will address the questions you probably have about what is commonly called AI. We will also look at what we can expect from this exciting field in the years to come and, more simply, at how we already live alongside AI today. The future is now!

Definition and history of artificial intelligence

The concept of artificial intelligence was first addressed in 1950 by the mathematician Alan Turing, who devised a test to determine whether a machine could be considered “conscious”. The Turing test is still used by scientists today, but its relevance is regularly questioned.


It was not until 1956 that Marvin Minsky proposed a definition of AI: “the construction of computer programs that engage in tasks that are, for the moment, accomplished more satisfactorily by human beings because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning.”

Artificial intelligence is a broad field that touches not only computer science but also mathematics, neuroscience, and even philosophy. AI has fascinated scientists for more than half a century, along with novelists and filmmakers. From the killer cyborg of Terminator to the androids of Blade Runner, via HAL 9000 in 2001: A Space Odyssey, humans seem fascinated by the possibility of replicating their own behavior and communicating with “talking” machines.

Artificial intelligence today

Deep learning, neural networks, personal assistants... these terms, which have entered our lives over the past few years, all relate to facets of artificial intelligence. The scientific advances in the field are breathtaking.

One of the first machines to demonstrate its talents against humans was Deep Blue, the computer that beat world chess champion Garry Kasparov in 1997. Since then, human defeats at the hands of machines have kept coming. The most recent is the victory of Google’s AlphaGo AI over champion Lee Sedol at Go, a game more complex than chess. IBM’s Watson AI even won the game show Jeopardy! in the US. Such feats show that artificial intelligence has a bright future ahead of it.


But intelligent machines are far from confined to board games. Artificial intelligence is already at work in many areas of our daily lives. Watson, for example, has been used in finance and medicine. The military is also interested in AI, seeking to use it for drones and the automated management of weapons. Autonomous cars, too, are getting more and more attention.

On a smaller scale, consider the personal assistants on our smartphones, such as Siri or Google Assistant. These programs keep evolving, learning our habits in order to provide us with relevant information depending on the context. The recent release of connected speakers such as Google Home and Amazon Echo (for the moment in the US only) will bring more and more interaction between humans and their machines in the very near future.

A booming market

With such potential, artificial intelligence is rapidly becoming a lucrative market. Specialized engineers and developers are snapped up for millions of dollars. Many start-ups specializing in artificial intelligence and its various branches have emerged over the last decade.

Silicon Valley is particularly prolific in this market. Web giants such as Google, Apple, Facebook, and Amazon (the famous GAFA, sometimes called GAFAM with the addition of Microsoft) are engaged in a fierce battle to acquire the innovative start-ups that could give them a head start on their rivals. A whole new economy has emerged around assistants, voice recognition, and facial recognition, to name only a few areas. No less than 600 billion dollars was invested in Silicon Valley in 2013 – an impressive figure that is expected to grow by 50% by 2020.


What does the future hold for AI?

While artificial intelligence still falls far short of its portrayal in film, it will nevertheless evolve rapidly over the coming decades. The scientific community is divided on the way forward. Some industry and science leaders, such as Elon Musk, Bill Gates, and Stephen Hawking, are particularly worried about the consequences of losing control over artificial intelligences: cyber warfare, hacking, killer robots, exploding unemployment, and other bleak prospects.

But, like any technology, the impact of AI depends on what we make of it: it also brings prospects of positive development for humanity and the planet. In medicine, for example, it can deliver reliable diagnoses in seconds, without necessarily putting the patient through a battery of painful or uncomfortable tests.

But what happens the day AI surpasses humanity in every area? Will it sound the death knell of the human species, or mark the beginning of a new golden age? We are still far from that point, but the reflection has already begun within the scientific community. The world’s greatest engineers are currently working together to design the future of artificial intelligence, and also its limits. This recalls the laws of robotics imagined by the novelist Isaac Asimov, who foresaw as early as 1942 (!) what might happen in a society where artificial intelligence had become commonplace.

Governments are also betting heavily on AI for the future. The Macron government has thus decided to continue the #FranceIA project initiated by François Hollande, which aims to prepare France’s strategy in the field of artificial intelligence for the years to come.

Artificial intelligence has a bright future ahead of it. Whether it fascinates or frightens us, it seems humanity cannot escape it. Its future – and especially ours – should therefore be prepared now, for a lasting collaboration between humans and machines.

Updated on 15/09/2017 at 15:33
