Experts sound alarm over ‘malicious use’ of AI

Paris — Artificial intelligence (AI) could be deployed by dictators, criminals and terrorists to manipulate elections and use drones in terrorist attacks, more than two dozen experts said Wednesday as they sounded the alarm over misuse of the technology.

In a 100-page analysis, the experts outlined a rapid growth in cybercrime and the use of “bots” to interfere with news gathering and penetrate social media among a host of plausible scenarios in the next five to 10 years.

“Our report focuses on ways in which people could do deliberate harm with AI,” said Sean O hEigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk. “AI may pose new threats, or change the nature of existing threats, across cyber-, physical, and political security,” he told AFP.


The common practice, for example, of “phishing” — sending emails seeded with malware or designed to finagle valuable personal data — could become far more dangerous, the report detailed.

Currently, attempts at phishing are either generic but transparent — such as scammers asking for bank details to deposit an unexpected windfall — or personalised but labour-intensive — gleaning personal data to gain someone’s confidence, a technique known as “spear phishing”.

“Using AI, it might become possible to do spear phishing at scale by automating a lot of the process” and making it harder to spot, O hEigeartaigh noted.

In the political sphere, unscrupulous or autocratic leaders can already use advanced technology to sift through mountains of data collected from omnipresent surveillance networks to spy on their own people.

“Dictators could more quickly identify people who might be planning to subvert a regime, locate them, and put them in prison before they act,” the report said.

Likewise, targeted propaganda along with cheap, highly believable fake videos have become powerful tools for manipulating public opinion “on previously unimaginable scales”.

An indictment handed down by US special prosecutor Robert Mueller last week detailed a vast operation to sow social division in the United States and influence the 2016 presidential election in which so-called “troll farms” manipulated thousands of social network bots, especially on Facebook and Twitter.

Another danger zone on the horizon is the proliferation of drones and robots that could be repurposed to crash autonomous vehicles, deliver missiles, or threaten critical infrastructure to extort ransom.


Autonomous weapons

“Personally, I am particularly worried about autonomous drones being used for terror and automated cyber attacks by both criminals and state groups,” said co-author Miles Brundage, a researcher at Oxford University’s Future of Humanity Institute.

The report details a plausible scenario in which an office-cleaning SweepBot fitted with a bomb infiltrates the German finance ministry by blending in with other machines of the same make. The intruding robot behaves normally — sweeping, cleaning, clearing litter — until its hidden facial recognition software spots the minister and closes in.

“A hidden explosive device was triggered by proximity, killing the minister and wounding nearby staff,” according to the sci-fi storyline. “This report has imagined what the world could look like in the next five to 10 years,” O hEigeartaigh said.

“We live in a world fraught with day-to-day hazards from the misuse of AI, and we need to take ownership of the problems.”

The authors called on policy-makers and companies to make robot-operating software unhackable, to impose security restrictions on some research, and to consider expanding laws and regulations governing AI development.

Giant high-tech companies — leaders in AI — “have lots of incentives to make sure that AI is safe and beneficial,” the report said.

Another area of concern is the expanded use of automated lethal weapons.

Last year, more than 100 robotics and AI entrepreneurs — including Tesla and SpaceX CEO Elon Musk, and British astrophysicist Stephen Hawking — petitioned the United Nations to ban autonomous killer robots, warning that the digital-age weapons could be used by terrorists against civilians.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” after the invention of machine guns and the atomic bomb, they warned in a joint statement, also signed by Google DeepMind co-founder Mustafa Suleyman.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Contributors to the new report — entitled “The Malicious Use of AI: Forecasting, Prevention, and Mitigation” — also include experts from the Electronic Frontier Foundation, the Center for a New American Security, and OpenAI, a leading non-profit research company.

“Whether AI is, all things considered, helpful or harmful in the long run is largely a product of what humans choose to do, not the technology itself,” said Brundage. — AFP



AI in conflict: Cyber war and robot soldiers

How should artificial intelligence be used in conflict? That question was under discussion as this year’s Munich Security Conference kicked off on Friday (February 16).

Leading the debate was Estonian president Kersti Kaljulaid, whose country was the victim of a massive hacking attack widely blamed on Russia.

“I have been really worried as an Estonian – Estonia is a digital state compared to many others – that our capacity to internationally agree and regulate for technological development has been extremely low,” Kaljulaid told euronews. “We haven’t managed to make any progress, for example, even on cyber issues.”

Members of the audience expressed concern about robot soldiers and self-piloted weaponised drones.

It’s a real concern, said Anders Fogh Rasmussen, NATO’s former Secretary General.

“The use of robots and artificial intelligence within the military might make the whole world more unstable. For that reason, I think we should elaborate an international and legally binding treaty to prohibit the production and use of what has been called autonomous lethal weapons”.

In an open letter to the UN last year, robotics experts also called for a ban on developing so-called “killer robots” and warned of a new arms race.

Earlier this week, the US Director of National Intelligence published the annual Worldwide Threat Assessment, which expressed concern about the “potential for surprise” in the cyber realm.


Back to the Future: 2018 Big Data and Data Science Prognostications

“We should study Science Fiction in order to understand what someday could become Science Fact.”

– Dr. Who?  Doc Brown?  Kodos and Kang?

This is the time of year when everyone makes his or her predictions for 2018.  I have my predictions as well, but wanted to do something a bit more fun.  So I thought I’d look backwards at the state of technology more than 60 years ago to gain some insights that we can use to make projections about 2018. That is, what “predictions” made in the 1950s might tell us about 2018.

However, it’s really hard to find predictions about the future made in the 1950s.  There was no internet, social media, or reality TV, so I found the next best proxy…sci-fi movies!  I decided to review the most popular sci-fi movies of the 1950s and provide my perspective as to what these movies might tell us about 2018.  Maybe drink a Tab or Fresca as you read this.

The Day the Earth Stood Still (1951)

In the movie “The Day the Earth Stood Still”, a humanoid alien visitor named Klaatu comes to Earth, accompanied by a powerful eight-foot-tall robot, Gort, to deliver an important message about how we are destroying planet Earth.  And let’s not forget the important chant “Klaatu barada nikto” that stops Gort from destroying Earth when Klaatu gets shot.  I’m sure that catchy phrase is something you chant every morning while showering.  I do!

By the way, Gort had more character than Keanu Reeves in the 2008 remake, which I guess isn’t really that surprising.

2018 Ramifications:  Open source tools are driving rapid changes in data management and analytic tools, and those tools will continue to have names that make no sense.

The incessant march of open source will continue. The economics of open source are just too compelling for organizations not to embrace (see Figure 1).

 

The unusualness of open source project naming (Linux, Hadoop, Ubuntu, Pidgin, GNU) means names from this movie are ripe for the taking.  I can already imagine a new machine learning framework called “Klaatu barada nikto” popping up on the 2018 landscape. By the way, you can have some fun creating your own open source project names:

https://mrsharpoblunto.github.io/foswig.js/

This site will probably save developers months of work coming up with open source project names that have nothing to do with the functionality of their projects.
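
For the curious, generators like foswig.js work by training a character-level Markov chain on a dictionary of existing names and then walking it to produce new ones. Below is a minimal sketch of the same idea in Python; the seed list and chain order are my own illustrative choices, not anything taken from foswig.js itself.

    import random
    from collections import defaultdict

    def build_chain(names, order=2):
        """Map each n-character state to the characters observed after it."""
        chain = defaultdict(list)
        for name in names:
            padded = "^" * order + name.lower() + "$"
            for i in range(len(padded) - order):
                chain[padded[i:i + order]].append(padded[i + order])
        return chain

    def generate(chain, order=2, max_len=10):
        """Random-walk the chain from the start state until the end marker."""
        state, out = "^" * order, []
        while len(out) < max_len:
            nxt = random.choice(chain[state])
            if nxt == "$":
                break
            out.append(nxt)
            state = state[1:] + nxt
        return "".join(out).capitalize()

    seeds = ["linux", "hadoop", "ubuntu", "pidgin", "gnu", "kafka", "spark"]
    print([generate(build_chain(seeds)) for _ in range(5)])

Feed it a bigger dictionary and it emits short, pronounceable blends of its inputs, which is more or less the job description for open source naming.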

Forbidden Planet (1956)

A starship crew goes to investigate the silence of a planet’s colony only to find two survivors and a deadly secret.  Forbidden Planet was the first science fiction film to depict humans traveling in a faster-than-light starship. But more importantly, “Robby the Robot” is one of the first “real” movie robots (where real means more than a “tin can” on legs). Robby displays a distinct personality and plays an integral role in the film.

We also learn that Leslie Nielsen (of Airplane! and Naked Gun fame) can make movies that don’t make us laugh on purpose.

2018 Ramifications:  From the little vacuum that scoots along your floor to the vehicle that knows how to park itself, advances in machine learning are creating “smart” devices of all types, which more and more are performing like “robots.”

Advances in machine learning will continue to improve the state of smart devices – or robots – and their ability to “learn and adapt” in everyday work and home situations.  And we’ll see those advances nowhere more than with autonomous vehicles (which really are nothing more than robots on wheels).

Check out “Hacking the Autonomous Vehicle” for more insights into some of the challenges faced by autonomous vehicles in 2018.  Then play the fun “Moral Machine” game to find out how much Grand Theft Auto has influenced your autonomous vehicle!

Invasion of the Body Snatchers (1956)

A doctor returns to his small town only to find several of his patients suffering the paranoid delusion that their friends or relatives are impostors. He is initially skeptical, especially when the alleged doppelgängers (digital twins?) are able to answer detailed questions about their victim’s lives. He eventually determines that something odd has happened and searches for the cause of this phenomenon.

Heck, sounds no different than most of the Big Data conferences that I go to.  Wait, maybe it’s my presentations that are causing those reactions?!

2018 Ramifications: Leading organizations are realizing that artificial intelligence and machine learning will augment, not replace, human decision making.  We can also avoid having our lives “invaded” by Artificial Intelligence by investing the time to learn new data science, machine learning, and deep learning skills (which might require unlearning outdated skills).

We have to avoid the paranoia about artificial intelligence and machine learning taking jobs.  From the article “Instead Of Destroying Jobs Artificial Intelligence (AI) Is Creating New Jobs In 4 Out Of 5 Companies,” we get the following:

“All the signs are that those predicting the first wave of machine learning applications will be used to augment existing human workforces, rather than make them redundant, are so far on the money.”

AI and ML will provide customer, product, service, and operational insights that augment humans’ decision-making.  From the same article:

 “71% of organizations have proactively initiated reskilling employees with new skills to deal with the impact of AI”

Leading organizations need to re-skill their workforce to exploit the economic potential of these technologies.

War of the Worlds (1953)

H.G. Wells’ classic novel is brought to life in this tale of alien invasion. The residents of a small town in California are excited when a flaming meteor lands in the hills. Their joy turns to H-O-R-R-O-R when they discover that the meteor has passengers who are not very friendly (the understatement of 1953).

By the way, the Tom Cruise and Steven Spielberg remake (2005) should be sent to all neighboring planets to show them how unworthy of conquest Earth must be to have allowed such a horrible movie to be remade.  Let’s pray there’s no remake of “Mars Needs Women!”

2018 Ramifications:  There is a Great AI War forthcoming, and 2018 will be the year when fortifications are built and strategies set.

Russian president Vladimir Putin stated that the nation that leads in AI “will be the ruler of the world.”  To quote Putin, “Artificial intelligence is the future, not only for Russia, but for all humankind.”

China has very clear AI ambitions: to become the world’s leader in AI by 2030. It has many natural ingredients that give it an advantage in this upcoming Great AI War: government funding, a massive population, a strong research community, and a society primed for technological adoption (see Figure 2).

Figure 2: AI Maturity Level, Source: Infosys

 

2018 will see more collaboration between government, business, and universities to ensure that the United States wins this all-important war.

Godzilla, King of the Monsters (1956)

A 400-foot dinosaur-like monster is awoken from undersea hibernation off the Japanese coast by atomic bomb testing, and storms Tokyo.  The movie features an award-winning performance by Raymond Burr and one of history’s worst audio translations.

2018 Ramifications:  We must combat the paranoia that we’ve unleashed an Artificial Intelligence and Machine Learning monster that will destroy us all.

I do believe that AI and ML will lead to a bifurcation of America, but that divide will be defined by those who understand and embrace AI and ML versus the luddites who try to regulate it to the point of irrelevance.

2018 Predictions Summary

2018 will see the continuing march of the economics that drive innovation and market adoption of Big Data, Data Science, Machine Learning and Artificial Intelligence.  It’s a great time to be in the data and analytics business, and 2018 will only reinforce that!

The post Back to the Future: 2018 Big Data and Data Science Prognostications appeared first on InFocus Blog | Dell EMC Services.



Future of Military Technology – Five Developments in the Defence Technologies has Changed the …


Market Research Hub includes the new market research report “The Future of Military Technology – Five Developments in defence technology are changing the nature of modern warfare” in its huge collection of research reports. Defense technology is advancing at quite a rate of knots in 2017, and some remarkable new abilities are available for militaries to purchase. From robotic mules through to drone swarms and rail guns, there are multiple areas of strong innovation in the defense industry. Some of these technologies have the potential to change how warfighting works, and in a new world where the international balance of power is scattered between multiple nations, many are preparing for the concerning prospect of state-versus-state conflict. However, whilst some technologies are game-changing, plenty of others are a black hole for money and resources, producing impractical, complex, expensive and unworkable machines. The key task in this period of rapid development is recognizing the full implications of using a new technology indiscriminately, before it becomes a new, dangerous and counterproductive threat to world security.

Request Free Sample Report- www.marketresearchhub.com/enquiry.php?type=S&repid=1404918

Key Questions Answered

– What technology is absorbing the attention of modern military planners?

– Where is the money going and is it being spent wisely?

– Are all of the new technologies emerging actually practical?

– What can we expect to see in future wars and does this make the world safer or less safe?

Scope

– Examine how military technology is developing and what militaries are spending their money on.

– Learn what trends in warfare are driving the changes being seen.

– See just how useful new technology is and whether or not money is being wasted on certain projects.

– Examine how these changes might alter modern military strategy and just how the global balance of power is changing.

Reasons to buy

– One of the largest levels of military investment is going into procuring equipment that can operate automatically, that doesn’t require human operators, and that can back up units on the ground. This is funneling into all manner of equipment, from automated attack drones and self-driving convoys to automated submarine hunters and many other types of kit. The potential for protecting soldiers is very high, with such equipment taking over some of the highest-risk jobs and working to protect troops on the ground. However, a lot of this technology comes with very difficult obstacles to navigate, including the integration of the equipment into a fighting unit and its protection from cyber-attack; beyond that, there are a wide variety of moral and ethical dilemmas to negotiate too. Furthermore, much of this equipment will require a complete rewriting of military strategy and doctrine, and for the time being there will be relatively incremental steps to introduce this tech, rather than giant leaps forward.

Browse Full Report with TOC- www.marketresearchhub.com/report/the-future-of-military-t…

– Stealth technology can be a complete disruptor across various military equipment types. The ability to avoid detection and attack or defend targets with the element of surprise, or to complete surveillance missions under an enemy’s nose, gives one military a critical edge over another. A significant problem with the technology, however, is that the expense required to acquire it can be staggering. The cost can be so high that it delays development by years in some cases, and keeping the design secret, and therefore still effective against other militaries, requires an enormous amount of effort. There may be much better ways to achieve the same effect, particularly considering some of the new technology options available today; multiple countries are still pursuing stealth options, yet there are some much simpler and less expensive ways to achieve an element of stealth.

– In the context of all the technological opportunities being experimented with in the defense sector, modern military strategy and doctrine will have to change too, right down to the core concepts of how to fight an opponent. Coupled with this are the ethics and morality of how two human opponents should engage each other in a new world where automated machines do both the majority of the work and the killing itself; a great deal will have to change in the coming years should many of the new technologies be adopted. The breakdown of the current structure of global power from a unilateral to a multilateral system is also likely to affect how common warfare is and whether there will be conventional warfare in the future. Technology is producing disruptive change in the way that warfare works, from strategy through to the balance of power between states.

Table of Contents

Executive Summary

Using autonomy and robotics to resupply, protect and attack

Stealth technology is highly desired, but it might not be worth it

Fire systems are becoming smarter, though not all advances are practical

Soldier systems and survivability have never been more important

Military strategy and the global balance of power is changing

Using autonomy and robotics to resupply, protect and attack

Autonomous trucks are still some way away, but leader-follower systems show promise

Robotic mules could provide excellent support opportunities

Delivery drones could vastly improve battlefield logistics

Drones are tried and tested but currently have significant flaws

The US and China are locked in a swarm arms race, quantity versus quality

Sea Hunter: one of the experiments with autonomous sea vehicles

Automated tanks will start to trickle into the market very soon

Currently these machines are far too expensive to be worth purchasing in many cases

Stealth technology is highly desired, but it might not be worth it

The F-35 is the starkest example of a stealth program gone out of control

The Su-35 doesn’t worry about stealth, but focuses on firepower, range and maneuverability, and is much cheaper

……

Enquire about this Report- www.marketresearchhub.com/enquiry.php?type=enquiry&repid=…

About Market Research Hub:

Market Research Hub (MRH) is a next-generation reseller of research reports and analysis. MRH’s expansive collection of market research reports has been carefully curated to help key personnel and decision makers across industry verticals to clearly visualize their operating environment and take strategic steps.

Contact Us:

90 State Street,

Albany, NY 12207,

United States

Toll Free: 800-998-4852 (US-Canada)

Email: press@marketresearchhub.com

Website: www.marketresearchhub.com/

Read Industry News at – www.industrynewsanalysis.com/

This release was published on openPR.



US Is Losing To Russia And China In War For Artificial Intelligence, Report Says

Russia and China want to revolutionize their forces with weaponized artificial intelligence, a field in which the U.S. risks falling behind, according to a new report released Wednesday from former defense officials and field experts.

The report, from government data analysis group Govini and former deputy secretary of defense Robert Work, says America’s two biggest military competitors are rapidly advancing with AI, leaving the U.S. military with the choice of whether it wants to “lead the coming revolution, or fall victim to it.”



“This stark choice will be determined by the degree to which the Department of Defense (DoD) recognizes the revolutionary military potential of AI and advanced autonomous systems,” the report said, according to CNN, which first obtained it.


FEDOR (Final Experimental Demonstration Object Research) is a bipedal robot designed by Russia’s Android Technics and the Russian military research agency Advanced Research Fund. It’s capable of performing a number of complex human tasks, including firing guns and driving cars. Advanced Research Fund/Social Media

Russia’s military has utilized artificial intelligence in cruise missiles and drones. Russia announced progress last week for its plan to send its gun-wielding Final Experimental Demonstration Object Research (FEDOR) robot to space. The advanced robot can also drive cars, exercise and, supposedly, help bring Russia’s spacecraft Federatsiya into orbit by 2020. Russia has also shown off a Ratnik-3 third-generation infantry combat suit.

China has entered the race as well. As Chinese President Xi Jinping revamps the People’s Liberation Army (PLA) to include a new cyber focus, his government announced earlier this year investments of billions of dollars into AI in a bid to outpace the U.S. and Russia.


A man walks past sculptures outside Xianghe Robot Industry Port during a tour arranged by the press center for the 19th Communist Party Congress in Xianghe county in China’s Hebei province on October 22, 2017. China has prioritized preparing for the battlefield of tomorrow. WANG ZHAO/AFP/Getty Images

The direction appeared to pay off, literally, as Chinese start-up Yitu Tech took home a $25,000 prize from a face recognition contest hosted this month by the research wing of the U.S.’s own Defense Intelligence Agency.

“China is no longer in a position of technological inferiority but rather sees itself as close to catching up with and overtaking the United States in AI. As such, the PLA intends to achieve an advantage through changing paradigms in warfare with military innovation, thus seizing the ‘commanding heights’…of future military competition,” the Center for a New American Security’s Elsa Kania wrote in a report published Tuesday.

While the U.S. military has already excelled at wielding artificial intelligence in some of its most powerful weapons, including the F-35 Lightning II jet, the reports from Kania and from Work and Govini urged President Donald Trump to formulate a long-term strategy to boost American development of AI.


The critical human element in the machine age of warfare

In 1983, Stanislav Petrov helped to prevent the accidental outbreak of nuclear war by recognizing that a false alarm in Soviet early warning systems was not a real report of an imminent US attack. In retrospect, it was a remarkable call made under enormous stress, based on a guess and gut instinct. If another officer had been in his place that night—an officer who simply trusted the early warning system—there could have been a very different outcome: worldwide thermonuclear war.

As major militaries progress towards the introduction of artificial intelligence (AI) into intelligence, surveillance, and reconnaissance, and even command systems, Petrov’s decision should serve as a potent reminder of the risks of reliance on complex systems in which errors and malfunctions are not only probable, but probably inevitable. Certainly, the use of big data analytics and machine learning can resolve key problems for militaries that are struggling to process a flood of text and numerical data, video, and imagery. The introduction of algorithms to process data at speed and scale could enable a critical advantage in intelligence and command decision-making. Consequently, the US military is seeking to accelerate its integration of big data and machine learning through Project Maven, and the Chinese military is similarly pursuing research and development that leverage these technologies to enable automated data and information fusion, enhance intelligence analysis, and support command decision-making. Russian President Vladimir Putin, meanwhile, has suggested, “Artificial intelligence is the future, not only for Russia, but for all humankind… Whoever becomes the leader in this sphere will become the ruler of the world.”

To date, such military applications of AI have provoked less debate and concern about current capabilities than fears of “killer robots” that do not yet exist. But even though Terminators aren’t in the immediate future, the trend towards greater reliance upon AI systems could nonetheless result in risks of miscalculation caused by technical error. Although Petrov’s case illustrates the issue in extremis, it also offers a general lesson about the importance of human decision-making in the machine age of warfare.

It is clear that merely having a human notionally “in the loop” is not enough, since the introduction of greater degrees of automation tends to adversely impact human decision-making. In Petrov’s situation, another officer may very well have trusted the early warning system and reported an impending US nuclear strike up the chain of command. Only Petrov’s willingness to question the system—based on his understanding that an actual US strike would not involve just a few missiles, but a massive fusillade—averted catastrophe that day.

Today, however, the human in question might be considerably less willing to question the machine. The known human tendency towards greater reliance on computer-generated or automated recommendations from intelligent decision-support systems can result in compromised decision-making. This dynamic—known as automation bias, or the overreliance on automation that results in complacency—may become more pervasive, as humans accustom themselves to relying more and more upon algorithmic judgment in day-to-day life.

In some cases, the introduction of algorithms could reveal and mitigate human cognitive biases. However, the risks of algorithmic bias have become increasingly apparent. In a societal context, “biased” algorithms have resulted in discrimination; in military applications, the effects could be lethal. In this regard, the use of autonomous weapons necessarily conveys operational risk. Even greater degrees of automation—such as with the introduction of machine learning in systems not directly involved in decisions of lethal force (e.g., early warning and intelligence)—could contribute to a range of risks.

Friendly fire—and worse. As multiple militaries have begun to use AI to enhance their capabilities on the battlefield, several deadly mistakes have shown the risks of automation and semi-autonomous systems, even when human operators are notionally in the loop. In 1988, the USS Vincennes shot down an Iranian passenger jet in the Persian Gulf after the ship’s Aegis radar-and-fire-control system incorrectly identified the civilian airplane as a military fighter jet. In this case, the crew responsible for decision-making failed to recognize this inaccuracy in the system—in part because of the complexities of the user interface—and trusted the Aegis targeting system too much to challenge its determination. Similarly, in 2003, the US Army’s Patriot air defense system, which is highly automated and highly complex, was involved in two incidents of fratricide. In these instances, “naïve” trust in the system and the lack of adequate preparation for its operators resulted in fatal, unintended engagements.

As the US, Chinese, and other militaries seek to leverage AI to support applications that include early warning, automatic target recognition, intelligence analysis, and command decision-making, it is critical that they learn from such prior errors, close calls, and tragedies. In Petrov’s successful intervention, his intuition and willingness to question the system averted a nuclear war. In the case of the USS Vincennes and the Patriot system, human operators placed too much trust in and relied too heavily on complex, automated systems. It is clear that the mitigation of errors associated with highly automated and autonomous systems requires a greater focus on this human dimension.

There continues, however, to be a lack of clarity about issues of human control of weapons that incorporate AI. Former Secretary of Defense Ash Carter has said that the US military will never pursue “true autonomy,” meaning humans will always be in charge of lethal force decisions and have mission-level oversight. Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff, used the phrase “Terminator Conundrum” to describe dilemmas associated with autonomous weapons and has reiterated his support for keeping humans in the loop because he doesn’t “think it’s reasonable to put robots in charge of whether we take a human life.” To date, however, the US military has not established a full, formalized definition of “in the loop” or of what is necessary for the exercise of the “appropriate levels of human judgment” over use of force that was required in the 2012 Defense Department directive on “Autonomy in Weapons Systems.”

The concepts of positive or “meaningful” human control have started to gain traction as ways to characterize the threshold for giving weapon system operators adequate information to make deliberate, conscious, timely decisions. Beyond the moral and legal dimensions of human control over weapons systems, however, lies the difficult question of whether and under what conditions humans can serve as an effective “failsafe” in exercising supervisory weapons control, given the reality of automation bias.

When war is too fast for humans to keep up. The human tendency towards over-reliance on technology is not a new challenge, but today’s advances in machine learning, particularly the use of deep neural networks—and active efforts to leverage these new techniques to enable a range of military capabilities—will intensify the attendant risks.

Moreover, it remains to be seen whether keeping human operators directly involved in decision-making will even be feasible for a number of military missions and functions, and different militaries will likely take divergent approaches to issues of automation and autonomy.

Already, there has been the aforementioned transition to greater degrees of automation in air and missile defense, driven by the inability of humans to react quickly enough to defend against a saturation attack. Similar dynamics may be in play for future cyber operations, because of comparable requirements of speed and scale. Looking to the future potential of AI, certain Chinese military thinkers even anticipate the approach of a battlefield “singularity,” at which human cognition could no longer keep pace with the speed of decision and tempo of combat in future warfare. Perhaps inevitably, keeping a human fully in the loop may become a major liability in a number of contexts. The type and degree of human control that is feasible or appropriate in various conditions will remain a critical issue.

Looking forward, it will be necessary to think beyond binary notions of a human “in the loop” versus “full autonomy” for an AI-controlled system. Instead, efforts will of necessity shift to the challenges of mitigating risks of unintended engagement or accidental escalation by military machines.

Inherently, these issues require a dual focus on the human and technical dimensions of warfare. As militaries incorporate greater degrees of automation into complex systems, it could be necessary to introduce new approaches to training and specialized career tracks for operators. For instance, the Chinese military appears to recognize the importance of strengthening the “levels of thinking and innovation capabilities” of its officers and enlisted personnel, given the greater demands resulting from the introduction of AI-enabled weapons and systems. Those responsible for leveraging autonomous or “intelligent” systems may require a greater degree of technical understanding of the functionality and likely sources of fallibility or dysfunction in the underlying algorithms.

In this context, there is also the critical human challenge of creating an “AI ready culture.” To take advantage of the potential utility of AI, human operators must trust and understand the technology enough to use it effectively, but not so much as to become too reliant upon automated assistance. The decisions made in system design will be a major factor in this regard. For instance, it could be advisable to create redundancies in AI-enabled intelligence, surveillance, and reconnaissance systems such that there are multiple methods to ensure consistency with actual ground truth. Such a safeguard is especially important due to the demonstrated vulnerability of deep neural networks, such as those used for image recognition, to being fooled or spoofed through adversarial examples, a vulnerability that could be deliberately exploited by an opponent. The potential development of “counter-AI” capabilities that might poison data or take advantage of flaws in algorithms will introduce risks that systems could malfunction in ways that may be unpredictable and difficult to detect.
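
To make the “fooled or spoofed” point concrete, consider a toy illustration of the fast-gradient-sign idea behind many adversarial examples, with a linear scorer standing in for a deep network (a deliberate simplification of my own). The mechanism is the point: many tiny, coordinated per-pixel nudges accumulate into a flipped decision.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear "classifier" standing in for an image-recognition network:
    # score > 0 means the system reports a target, score < 0 means clutter.
    w = rng.normal(size=64)           # learned weights for an 8x8 "image"
    x = rng.normal(size=64)           # an input the system scores correctly
    y = 1.0 if w @ x > 0 else -1.0    # the label the system assigns

    # Fast-gradient-sign-style perturbation: move every pixel by epsilon in
    # the direction that most increases the loss -y * (w @ x), i.e. -y * w.
    epsilon = 0.25
    x_adv = x - epsilon * y * np.sign(w)

    print("clean score:    ", float(w @ x))
    print("perturbed score:", float(w @ x_adv))   # usually flips sign

No single pixel changes by more than 0.25, yet the score swings because all 64 changes push the same way; attacks on real deep networks exploit the same accumulation, which is why the redundancy and ground-truth checks suggested above matter.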

In cases in which direct human control may prove infeasible, such as cyber operations, technical solutions to unintended engagements may have to be devised in advance. For instance, it may be advisable to create an analogue to “circuit breakers” that might prevent rapid or uncontrollable escalation beyond expected parameters of operation.
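
The article does not spell out what such a circuit breaker would look like. One plausible minimal reading, sketched below under my own assumptions, is a latch that counts automated actions in a sliding time window and refuses further action, until a human resets it, once the expected operating tempo is exceeded.

    import time

    class EngagementCircuitBreaker:
        """Trip when automated actions exceed an expected rate; once
        tripped, stay open until a human explicitly resets the latch."""

        def __init__(self, max_actions, window_seconds):
            self.max_actions = max_actions
            self.window = window_seconds
            self.timestamps = []
            self.tripped = False

        def permit(self):
            now = time.monotonic()
            # Keep only the actions that fall inside the sliding window.
            self.timestamps = [t for t in self.timestamps if now - t < self.window]
            if self.tripped or len(self.timestamps) >= self.max_actions:
                self.tripped = True
                return False
            self.timestamps.append(now)
            return True

        def human_reset(self):
            self.tripped = False
            self.timestamps.clear()

    breaker = EngagementCircuitBreaker(max_actions=3, window_seconds=60)
    print([breaker.permit() for _ in range(5)])   # [True, True, True, False, False]

The latch is the important design choice: a breaker that quietly re-closes itself would reproduce exactly the automation-bias problem described above.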

While a ban on AI-enabled military capabilities is improbable, and treaties or regulations could be too slow to develop, nations might be able to mitigate the risks that AI-driven systems pose to military and strategic stability through a prudent approach that focuses on pragmatic practices and parameters in the design and operation of automated and autonomous systems, including adequate attention to the human element.


Cyber-War and Google Robots

I’d recently noticed an interesting ‘cyber-phenomenon’ when surfing online news outlets; what follows are last night’s and this morning’s results (Central European Time). Don’t get me wrong, I’m no internet expert, but sometimes even a computer-troglodyte can detect a bad cyber-odor.

What I’d noticed, on and off again over these past months, is a very interesting ability or inability to access information, depending solely on where the cyber-robots believe one is located. For instance, in today’s case, if I route my VPN [virtual private network] through Russia to the English-language pages of two Spanish publications, I am allowed to read at one, but not the other. This is when it gets interesting.

The Local (thelocal.es) runs much more balanced articles on Catalonia’s secession effort to break with Spain, whereas El Pais in English (elpais.com) is goosestepping with ‘the Russians did it’ western intelligence agencies’ information operations (PSYOPS).

El Pais features ‘Russia is interfering in Catalonia’ headlines…


…running highly disputed (elsewhere) allegations of Russia interfering in Spain’s crisis. Furthermore, El Pais is putting out patently false information in the numerous articles it is running on the leadership of the Catalan independence movement, here is an example:


“It is false that we have ignored the Constitution… because there are international treaties that contemplate the right to self-determination, and constitutions are favorable to interpretation under international law”…

Here the Catalan leader Carles Puigdemont is quoted, and he is correct to state [paraphrased] that treaties subscribed to under international law bind the nations that have signed them, and that there are such treaties with language that could be construed to favor Catalan independence; but El Pais falsely interprets this statement, deliberately discrediting Puigdemont’s stance, by adding:

…he [Puigdemont] said, alluding to the United Nations resolution 50/6, which only acknowledges this right for people “under colonial or other forms of alien domination or foreign occupation”

Since a UN resolution is not a treaty, the misrepresentation is clear. By way of example, here is my mail to El Pais correcting this, pointing to a valid, in-force treaty to which Spain is a party and to which Puigdemont could easily be referring:


Dear El Pais

When I lived in Catalonia 7 and 8 years ago, I very much appreciated your newspaper. I cannot say this now, the bias has become palpable. Case in point would be your recent articles emphasizing ’The Right of Self Determination’ is only applicable to colonialism. In fact, this is a very much unsettled question in international law, as demonstrated in article one, paragraph one, of the International Covenant on Civil and Political Rights, where the self determination principle’s language is unequivocal; this is a right of ALL peoples:

Article 1

i) All peoples have the right of self-determination. By virtue of the right they freely determine their political status and freely pursue their economic, social and cultural development.

“The relatively straightforward language of the first paragraph, in particular, is commonly cited as evidence of the universality of the right to self-determination, although its formulation does little to make the scope of the right more precise. Nevertheless, both the reference to “all” peoples and the fact that the article is found in human rights treaties intended to have universal applicability suggest a scope beyond that of decolonization”

https://pesd.princeton.edu/?q=node/254

As a party to a published international law study specific to the Right of Self Determination…

http://www.nomos-shop.de/Cole-West-Right-of-Self-Determination-of-Peoples-its-Application-to-Indigenous-Peoples-USA/productview.aspx?product=1484

…I can inform El Pais that Rajoy’s policy of jailing Catalonia’s elected leadership could be the determining factor in arguing that the Spanish state is delivering to the Catalans what is described as “the offensive right of self determination”, or an earned right of independence, due to disproportionate Spanish state actions, in addition to any as yet unenforced right of self determination specified elsewhere in human rights treaty law.

Beyond this, Spain’s judiciary clearly IS politicized, as claimed by Puigdemont, when comparing the present arrests to cases which are not prosecuted, for instance the secret Catholic militia El Yunque. There are even worse, until now unpublicized, cases where Spain’s institutions, for political reasons, have failed to pursue prosecutions of patent criminal activities of an egregious nature. I invite a read at:

https://ronaldthomaswest.com/2017/11/03/catalonia-paradox/

With sincere regards

Ron West

http://ronaldthomaswest.com

“The history of the great events of this world are scarcely more than a history of crime” -Voltaire

Of course El Pais did not respond to my mail, and I did not expect they would; the point of the mail is to cover any future circumstance of El Pais editors claiming ignorance in any case of ‘journalistic’ denial.

Now, back to the robot: if I happen to be an English-literate person who the robot thinks is located in Russia (because my VPN routes through a Russian server), and I want to read the western press, I’m allowed to read the rank propaganda at El Pais, but not the much more accurate reporting at The Local:


403. That’s an error.

Your client does not have permission to get URL / from this server. That’s all we know.

But if I run my VPN through a western country, in this case Greece, I can load The Local page with no problem.


Now, I seriously doubt The Local’s servers are configured to fence out Russian readers; that simply makes no sense (except in the case of The Local’s server being hacked by western intelligence, a distinct possibility), because The Local is much less biased towards Russia in its coverage of the news in Catalonia and doesn’t print the biased, and in some cases outright false, sort of articles one finds at El Pais. It is even less likely that Russia itself is blocking The Local.
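
This comparison can be reproduced mechanically. Here is a rough Python sketch of the check I’m describing; the proxy endpoints are hypothetical placeholders (substitute your own VPN exits), and the URLs are simply the two publications’ front pages.

    import requests

    # Hypothetical proxy exits; substitute real VPN endpoints of your own.
    PROXIES = {
        "russia": {"https": "http://ru-exit.example.net:8080"},
        "greece": {"https": "http://gr-exit.example.net:8080"},
    }
    SITES = ["https://www.thelocal.es/", "https://elpais.com/"]

    for site in SITES:
        for location, proxy in PROXIES.items():
            try:
                resp = requests.get(site, proxies=proxy, timeout=15)
                print(f"{site} via {location}: HTTP {resp.status_code}")
            except requests.RequestException as exc:
                print(f"{site} via {location}: {type(exc).__name__}")

A 200 from one exit and a 403 from the other, repeated consistently over days, is the pattern described here, and it looks like deliberate geo-fencing rather than transient breakage.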

What we appear to be looking at, and it is highly doubtful there is an otherwise innocent explanation, is cyber war, where Russian but English-literate readers are directed to the western media that is most propagandized; along the lines of ‘look, Mr or Ms Russian citizen, at the evil meddling of Putin in the liberal democracies’ affairs.’

The unanswered question (for myself) is why a Google robot provides the outright lie of ‘something is broken and we don’t know what it is’, when clearly something is broken deliberately and whoever is behind it DOES know what it is. This has gone on for too long, too consistently, for Google not to know what it is. Who gives a Google robot the assignment of announcing to web-surfers located in Russia ‘sorry, you can only read biased propaganda pieces and lies because something is broken’? Central Intelligence Agency? Google, after all, practically sleeps with western intelligence.

It’s when encountering the preceding that one more than wonders at the veracity of claims that Russia is the cyber war boogeyman everywhere one looks…
