5 Graph Analytics Use Cases

According to Ernst and Young, $8.2 billion a year is lost to the marketing, advertising, and media industries through fraudulent impressions, infringed content, and malvertising.

The combination of fake news, trolls, bots and money laundering is skewing the value of information and could be hurting your business.

It’s avoidable.

By using graph technology and the data you already have on hand, you can uncover fraud through detectable patterns and stop the fraudsters.

We collaborated with Sungpack Hong, Director of Research and Advanced Development at Oracle Labs, to demonstrate five examples of real problems and how graph technology and data are being used to combat them.

Get started with data—register for a guided trial to build a data lake

But first, a refresher on graph technology.

What Is Graph Technology?

With graph technology, the basic premise is that you store, manage, and query data in the form of a graph. Your entities become vertices (illustrated by the red dots), and your relationships become edges (represented by the red lines).


By analyzing these fine-grained relationships, you can use graph analysis to detect anomalies with queries and algorithms. We’ll talk about these anomalies later in the article.

The major benefit of graph databases is that they’re naturally indexed by relationships, which provides faster access to data (as compared with a relational database). You can also add data without doing a lot of modeling in advance. These features make graph technology particularly useful for anomaly detection—which is mainly what we’ll be covering in this article for our fraud detection use cases.
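To make the premise concrete, here is a minimal sketch in plain Python of how entities become vertices and relationships become edges. The names and data are purely hypothetical, and a real deployment would use a graph database rather than in-memory dicts:

```python
from collections import defaultdict

# Entities become vertices; relationships become directed, labeled edges.
# Adjacency lists index the data by relationship, so a neighbor lookup
# needs no join, unlike a relational schema.
edges = defaultdict(list)

def add_edge(src, dst, relation):
    edges[src].append((dst, relation))

# Hypothetical data: two people, two accounts, one transfer.
add_edge("alice", "acct_1", "owns")
add_edge("bob", "acct_2", "owns")
add_edge("acct_1", "acct_2", "transfer")

# "What does acct_1 connect to?" is a direct lookup on the vertex.
neighbors = [dst for dst, rel in edges["acct_1"]]
print(neighbors)  # ['acct_2']
```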

How to Find Anomalies with Graph Technology

Gartner 5 Layers of Fraud Detection

If you take a look at Gartner’s 5 Layers of Fraud Protection, you can see that they break fraud-discovery analysis into two categories:

  • Discrete data analysis where you evaluate individual users, actions, and accounts
  • Connected analysis where relationships and integrated behaviors facilitate the fraud

It’s this second category based on connections, patterns, and behaviors that can really benefit from graph modeling and analysis.

Through connected analysis and graph technology, you would:

  • Combine and correlate enterprise information
  • Model the results as a connected graph
  • Apply link and social network analysis for discovery

Now we’ll discuss examples of ways companies can apply this to solve real business problems.

Fraud Detection Use Case #1: Finding Bot Accounts in Social Networks

In the world of social media, marketers want to see what they can discover from trends. For example:

  • If I’m selling this specific brand of shoes, how popular will they be? What are the trends in shoes?
  • If I compare this brand with a competing brand, how do the results mirror actual public opinion?
  • On social media, are people saying positive or negative things about me? About my competitors?

Of course, all of this information can be incredibly valuable. At the same time, it can mean nothing if it’s all inaccurate and skewed by how much other companies are willing to pay for bots.

In this case, we worked with Oracle Marketing Cloud to ensure the information they’re delivering to advertisers is as accurate as possible. We sought to find the fake bot accounts that are distorting popularity.

As an example, there are bots that retweet certain target accounts to make them look more popular.

To determine which accounts are “real,” we created a graph between accounts with retweet counts as the edge weights to see how many times these accounts are retweeting their neighboring accounts. We found that the unnaturally popularized accounts exhibit different characteristics from naturally popular accounts.

Here is the pattern for a naturally popular account:

Naturally Popular Social Media Account

And here is the pattern for an unnaturally popular account:

Unnaturally Popular Social Media Account

When all of these accounts are analyzed, certain accounts show obviously unnatural deviations. And by using graphs and relationships, we can find even more bots by:

  • Finding accounts with a high retweet count
  • Inspecting how other accounts are retweeting them
  • Finding the accounts that also get retweets from only these bots
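The steps above can be sketched in plain Python. This is an illustrative toy, not the production pipeline: the account names and retweet counts are invented, and the flagging threshold is an assumption.

```python
from collections import defaultdict

# Hypothetical retweet edges: (retweeting account, target account, retweet count).
retweets = [
    ("bot1", "puffed", 400), ("bot2", "puffed", 380), ("bot3", "puffed", 420),
    ("fan1", "organic", 2), ("fan2", "organic", 1), ("fan3", "organic", 3),
    ("fan4", "organic", 2), ("fan5", "organic", 1), ("fan6", "organic", 2),
]

# Group edges by target so each account's retweeters are directly visible.
by_target = defaultdict(list)
for src, dst, weight in retweets:
    by_target[dst].append((src, weight))

def looks_unnatural(target, min_avg=50):
    """Flag targets whose average retweets-per-retweeter is implausibly high:
    a handful of accounts each retweeting the target hundreds of times.
    The threshold is an assumption chosen for this toy data."""
    sources = by_target[target]
    avg = sum(w for _, w in sources) / len(sources)
    return avg >= min_avg

suspicious = {t for t in by_target if looks_unnatural(t)}
print(suspicious)  # {'puffed'}

# The retweeters behind a flagged account are bot candidates; any other
# account boosted only by these candidates would be suspect as well.
bot_candidates = {src for t in suspicious for src, _ in by_target[t]}
```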

Fraud Detection Use Case #2: Identifying Sock Puppets in Social Media

In this case, we used graph technology to identify sockpuppet accounts (online identities used for deception; here, different accounts posting the same set of messages) that were working to make certain topics or keywords look more important by making it seem as though they were trending.

Sock Puppet Accounts in Social Media

To discover the bots, we had to augment the graph from Use Case #1. Here we:

  • Added edges between the authors with the same messages
  • Counted the number of repeated messages and filtered to discount accidental unison
  • Applied heuristics to avoid generating n² edges per repeated message

Because we found that the messages were always identical, we were able to create subgraphs from those edges and apply a connected components algorithm.
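A minimal sketch of this subgraph-plus-connected-components step, with invented authors and messages; the star-shaped edge heuristic below stands in for the n²-avoidance heuristics mentioned above.

```python
from collections import defaultdict

# Hypothetical posts: (author, message text).
posts = [
    ("s1", "Buy BrandX now!"), ("s2", "Buy BrandX now!"), ("s3", "Buy BrandX now!"),
    ("s1", "BrandX is trending"), ("s2", "BrandX is trending"),
    ("u9", "nice weather today"),
]

authors_by_msg = defaultdict(set)
for author, msg in posts:
    authors_by_msg[msg].add(author)

# Add edges between authors who post the same message, filtering out
# one-off matches. A star topology (hub to each other author) avoids
# generating n^2 edges per repeated message.
adj = defaultdict(set)
for msg, authors in authors_by_msg.items():
    if len(authors) >= 2:
        hub, *rest = sorted(authors)
        for other in rest:
            adj[hub].add(other)
            adj[other].add(hub)

def components(adj):
    """Iterative DFS over the same-message subgraph."""
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Each multi-account component is a candidate sockpuppet group.
groups = [c for c in components(adj) if len(c) > 1]
print(groups)  # [{'s1', 's2', 's3'}] (set order may vary)
```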

Sock Puppet Groups

As a result of all of the analysis that we ran on a small sampling, we discovered that what we thought were the most popular brands actually weren’t—our original list had been distorted by bots.

See the image below: the “new” most popular brands barely even appear on the “old” most popular brands list, but they are a much truer reflection of what’s actually popular. This is the information you need.

Brand Popularity Skewed by Bots

After one month, we revisited the identified bot accounts just to see what had happened to them. We discovered:

  • 89% were suspended
  • 2.2% were deleted
  • 8.8% were still serving as bots

Fraud Detection Use Case #3: Circular Payment

A common pattern in financial crimes, a circular money transfer essentially involves a criminal sending money to himself or herself—but hiding it behind what look like valid transfers between “normal” accounts. These “normal” accounts are actually fake accounts. They typically share certain information because they are generated from stolen identities (email addresses, home addresses, etc.), and it’s this shared information that makes graph analysis such a good fit for discovering them.

For this use case, you can build a graph from the transactions between entities, as well as from entities that share information, such as email addresses, passwords, and physical addresses. Once we create the graph, all we have to do is write a simple query to find all customers with accounts that share similar information and, of course, who are sending money to each other.
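A toy version of that query in plain Python: a depth-first search for transfer cycles, restricted to accounts that share the same identifying field. The account names, emails, and the choice of shared field are all hypothetical.

```python
# Hypothetical transfers and account details. Fake accounts minted from a
# stolen identity tend to share fields such as an email address.
accounts = {
    "A": {"email": "x@mail.com"},
    "B": {"email": "x@mail.com"},
    "C": {"email": "x@mail.com"},
    "D": {"email": "d@mail.com"},
}
transfers = {"A": ["B"], "B": ["C"], "C": ["A"], "D": []}

def find_cycles_with_shared_info(transfers, accounts, field="email"):
    """Find payment cycles whose accounts all share the same `field` value:
    money routed in a loop between lookalike accounts."""
    cycles = []

    def dfs(start, node, path):
        for nxt in transfers.get(node, []):
            if nxt == start and len(path) > 1:
                # Record each cycle once (from its smallest node), and only
                # when every account on it shares the identifying field.
                if start == min(path) and len({accounts[a][field] for a in path}) == 1:
                    cycles.append(path[:])
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])

    for start in transfers:
        dfs(start, start, [start])
    return cycles

print(find_cycles_with_shared_info(transfers, accounts))  # [['A', 'B', 'C']]
```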

Circular Payments Graph Technology

Fraud Detection Use Case #4: VAT Fraud Detection

Because Europe has so many borders, with different rules about who pays tax to which country when products cross them, VAT (Value Added Tax) fraud detection can get very complicated.

In most cases, the importer should pay the VAT, and if the products are exported to other countries, the exporter should receive a refund. But when there are other companies in between, deliberately obfuscating the process, it becomes much harder to trace. The importing company delays paying the tax for weeks or months. The companies in the middle are paper companies. Eventually, the importing company vanishes without ever paying VAT, yet it is still able to get payment from the exporting company.

VAT Fraud Detection

This can be very difficult to decipher—but not with graph analysis. You can easily create a graph from the transactions: who are the resellers, and who is creating the companies?

In this real-life analysis, Oracle Practice Manager Wojciech Wcislo looked at the flow and how the flow works to identify suspicious companies. He then used an algorithm in Oracle Spatial and Graph to identify the middle man.

The graph view of VAT fraud detection:

Graph View of VAT Fraud Detection

A more complex view:

Complex View of Graph Technology and Anomaly Detection

In that case, you would:

  • Identify importers and exporters via a simple query
  • Aggregate VAT invoice items as edge weights
  • Run the fattest path algorithm

And you will discover common “middle man” nodes where the flows are aggregated.
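As a sketch of that last step, here is a fattest-path search in plain Python: a Dijkstra variant that maximizes the minimum edge weight along a path, so the route carrying the heaviest aggregated VAT flow surfaces. The companies and amounts are invented.

```python
import heapq

# Hypothetical invoice flows: aggregated VAT amounts as edge weights
# (importer -> intermediaries -> exporter).
graph = {
    "importer": {"shell1": 900, "shell2": 100},
    "shell1": {"middleman": 850},
    "shell2": {"middleman": 90},
    "middleman": {"exporter": 920},
    "exporter": {},
}

def fattest_path(graph, src, dst):
    """Dijkstra variant: instead of minimizing summed distance, maximize the
    minimum edge weight (the path's 'fatness') from src to dst."""
    best = {src: float("inf")}   # best fatness found per node
    prev = {}
    heap = [(-float("inf"), src)]
    while heap:
        neg_fat, node = heapq.heappop(heap)
        if node == dst:
            break
        fat = -neg_fat
        for nxt, w in graph[node].items():
            cand = min(fat, w)   # bottleneck of the extended path
            if cand > best.get(nxt, 0):
                best[nxt] = cand
                prev[nxt] = node
                heapq.heappush(heap, (-cand, nxt))
    # Walk the predecessor chain back from dst to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]

path, fatness = fattest_path(graph, "importer", "exporter")
print(path, fatness)  # ['importer', 'shell1', 'middleman', 'exporter'] 850
```

Nodes like "middleman" above, where the fat flows converge, are exactly the aggregation points the analysis is after.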

Fraud Detection Use Case #5: Money Laundering and Financial Fraud

Conceptually, money laundering is pretty simple. Dirty money is passed around to blend it with legitimate funds and then turned into hard assets. This was the kind of process discovered in the Panama Papers analysis.

These tax evasion schemes often rely on false resellers and brokers who are able to apply for tax refunds to avoid payment.

But graphs and graph databases provide relationship models. They let you apply pattern recognition, classification, statistical analysis, and machine learning to these models, which enables more efficient analysis at scale against massive amounts of data.

In this use case, we’ll look more specifically at case correlation. Whenever there are transactions that regulations flag as suspicious, those transactions get a closer look from human investigators. The goal here is to avoid inspecting each individual activity separately and instead to group suspicious activities together through pre-known connections.

Money Laundering and Financial Fraud

To find these correlations through a graph-based approach, we implemented this flow with general graph machinery, using a pattern matching query (path finding) and a connected components algorithm (with filters).

Through this method, the company didn’t have to create its own custom case correlation engine; it could use graph technology, which offers improved flexibility. This flexibility is important because different countries have different rules.
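A minimal sketch of case correlation as filtered connected components, assuming hypothetical case IDs and shared attributes (accounts, devices) as the pre-known connections:

```python
from collections import defaultdict

# Hypothetical suspicious-activity cases, each tagged with the accounts and
# devices it involves. Shared tags are the pre-known connections.
cases = {
    "c1": {"acct9", "dev3"},
    "c2": {"acct9"},
    "c3": {"dev7"},
    "c4": {"dev3", "acct2"},
    "c5": {"acct8"},
}

def correlate(cases):
    """Group cases into connected components over shared attributes, so
    investigators review one correlated group instead of isolated cases."""
    by_attr = defaultdict(set)
    for case, attrs in cases.items():
        for a in attrs:
            by_attr[a].add(case)

    # Link cases that share at least one attribute (star per attribute
    # keeps the edge count down).
    adj = defaultdict(set)
    for linked in by_attr.values():
        hub, *rest = sorted(linked)
        for other in rest:
            adj[hub].add(other)
            adj[other].add(hub)

    seen, groups = set(), []
    for case in cases:
        if case in seen:
            continue
        stack, comp = [case], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups

print(sorted(sorted(g) for g in correlate(cases)))
# [['c1', 'c2', 'c4'], ['c3'], ['c5']]
```

Country-specific rules would enter as filters on which attributes count as connections, without changing the component machinery.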


In today’s world, the scammers are getting ever more inventive. But the technology is too. Graph technology is an excellent way to discover the truth in data, and it is a tool that’s rapidly becoming more popular. If you’d like to learn more, you can find white papers, software downloads, documentation and more on Oracle’s Big Data Spatial and Graph pages.

And if you’re ready to get started with exploring your data now, we offer a free guided trial that enables you to build and experiment with your own data lake.



Tech Tent: The battle with the bots

Is the battle against online propaganda already being lost as AI puts powerful new weapons in the hands of the fake news merchants?

Tech Tent reports this week on the desperate efforts by the social media giants to root out the armies of bots trying to pollute online debate.

Following the mass shooting at a Florida school, conspiracy theorists and other trolls harassing some of the survivors were spreading stories that those who spoke out for gun control were actors.

The Twitter Safety account highlighted action the company was taking, and also revealed that it was using what it called anti-spam and anti-abuse tools, to weed out “malicious automation” – bots that retweet abusive messages thousands of times, amplifying their impact.

In other words, both the social media firm and the troll army it is fighting are deploying what you might describe as autonomous weapons made possible by advances in machine learning. Twitter has also been rooting out bots apparently linked to Russia, following the indictment of 13 Russians believed to have created fake accounts to conduct information warfare against the US.

That means that one of the scenarios described in a report on potential malicious use of artificial intelligence published this week has already come true.

One of the global experts behind the report, Haydn Belfield from Cambridge University’s Centre for the Study of Existential Risk, tells us that what he calls “AI-enabled interference” in the democratic process is one of their major concerns.

“What we’re particularly worried about is undermining institutions of democracy, undermining what enables us to trust our fellow citizens and know what’s happening in the world.”

Advances in machine learning, coupled with software which makes it easy to produce fake speech and video are putting new tools in the hands of those with malicious purposes.

“It’s very cheap and very easy to pump this stuff out and it really undermines the ability to continue a functioning democratic conversation in society.”

Image caption: The threat of AI is real and many of the technologies are already developed, warn 26 leading experts (image copyright Getty Images)

Not smart

But do the bot armies which Twitter is battling really amount to an example of artificial intelligence – however widely defined – and are they really as potent a threat as has been claimed?

Samantha Bradshaw, a researcher from the Computational Propaganda project at the Oxford Internet Institute, is rather more sceptical. She tells us that Twitter is finding it quite easy to spot automation and the bot creators are taking notice.

“We’re seeing a lot of bot developers taking a step back from automation, and instead blending automation with human curation,” she said. That means they will post new comments, along with the automated retweets, to show that a “real person” is behind the account.

While the spotlight has been on Russia when it comes to this wave of computational propaganda and other types of cyber-warfare, one expert tells us we should be more worried about North Korea.

Dmitri Alperovitch is the Russian-born US cyber-security entrepreneur who founded Crowdstrike, the company which first identified Russian involvement in the hacking of America’s Democratic Party. But he tells us that North Korea has spent 15 years building cyber-warfare capabilities, including “breaking into financial institutions and stealing hundreds of millions of dollars,” and hacking Sony Pictures after it made a jokey film about the regime.

What seems extraordinary is that a country that is so impoverished and so closed off from the outside world should be able to pose a serious cyber-warfare threat to the United States, the world’s technology superpower.

“Anyone who can build a nuclear weapon can certainly do cyber,” said Mr Alperovitch, explaining this is a kind of asymmetric warfare, where attack is easier than defence.

Cyber-warfare techniques and artificial intelligence have advanced a long way in recent years. Linking the two fields could bring new threats to our security that we cannot imagine today.


Domestic Clashes Point to Possible International Consequences of Iran’s Cyber Warfare

Published: 23 February 2018

By INU Staff

INU – On Tuesday, a series of clashes between Iranian security forces and the Sufi religious sect known as Gonabadi dervishes resulted in several deaths and hundreds of arrests. To some extent, this development can be seen as an outgrowth of the much broader conflict between the Iranian regime and anti-government protesters, which came to a head in early January. According to Iranian opposition groups like the National Council of Resistance of Iran, relevant demonstrations have continued to the present day, albeit on a smaller scale than the mass uprising which was repressed by security forces after spreading to well over 100 Iranian cities.

The continuation of these demonstrations has occurred in spite of an ongoing crackdown on political activists and other perceived threats to the stability of the clerical regime. This crackdown established connections between the January uprising and the Sufi protests, in that the latter phenomenon reportedly resulted from rumors that an arrest warrant had been issued for Gonabadi leader and liberal activist Ali Tabandeh. And after those protests demonstrated opposition to the regime’s suppression of dissent, the response demonstrated the continuation of the tactics involved in that suppression.

The CHRI reported on Wednesday that security forces had apparently carried out arrests of Gonabadi dervishes throughout the country, following the clashes that took place in the capital city of Tehran. The Gonabadis’ channel on the Telegram messaging app suggested that more than 3,000 individuals had been arrested, whereas Iranian authorities acknowledged only one tenth of this number.

This report is strongly reminiscent of the NCRI’s accounts of the January uprising and subsequent crackdown. Although the government initially claimed that only a few hundred arrests had been carried out in response to the demonstrations, one Iranian Member of Parliament later acknowledged a figure well in excess of 3,000. Meanwhile, the NCRI continues to maintain, as it did then, that the actual figure exceeds 8,000. The higher arrest figures contribute to higher levels of anxiety about further repression tactics including torture and execution, especially in light of explicit threats of execution from the Iranian judiciary, accompanied by at least a dozen accounts of persons being tortured to death while in detention.

The connections between the January uprising and the Gonabadi protests also extend to broader concerns regarding free speech and access to information in the Islamic Republic. This fact was highlighted on Wednesday by IranWire, with the publication of an article about the discussions of the Telegram app that emerged in the wake of the clash with security forces. Some Iranian Telegram users reported problems in accessing the app, leading to speculation that it had been blocked by government authorities once again, just as it had been while the January protests were at their peak.

At that time, restrictive measures varied according to time and place, but they included brief outright bans on the service. As the demonstrations began to subside, there was some talk of those obstructions becoming permanent, as they had with Twitter in the wake of the 2009 Green Movement, where the microblogging platform played a major organizational role. Iranian authorities ultimately decided against a permanent ban on Telegram, reportedly under pressure from businesses who had come to rely on the app for domestic advertising and commerce.

Although the more recent outages rekindled concerns about such permanent obstruction, it was later reported that the source of this problem was on Telegram’s end and was not the result of government restrictions. But the absence of such measures in this case does not mean that Iranian authorities have accepted Telegram as a fixture of Iranian society. Quite to the contrary, IranWire and other outlets have recently reported on a push by hardline officials and commentators to encourage broad-based adoption of alternatives which might allow the regime to obstruct the popular messaging app at a later date, with fewer negative consequences for itself.

Telegram has become popular throughout Iranian society but particularly among the activist community, due in part to its perceived security relative to other social media platforms. Telegram’s executives have also been resistant to pressures from the Iranian government, including demands that local content be moved to servers inside the Islamic Republic. This has allowed anti-government sentiments to spread more easily through that channel than through others, but it has also fueled efforts by Iranian political and religious authorities to portray the service as socially damaging.

These narratives were highlighted on Wednesday by The Iran Project, which reported on recent statements by Mohammad Javad Azari Jahromi, the Iranian Minister of Information and Communications Technology, regarding efforts to promote the use of locally-developed messengers and other applications. “We won’t allow just any content to be published in this space,” he vowed, adding that the economic necessity of digital technology is no reason for Iran to accept foreign-based content.

Jahromi’s remarks made reference to the theoretical national information network, or halalnet, that would effectively cut off Iranian cyberspace from the global internet, allowing in only materials that are approved by the clerical regime. Although touted for years as a thorough solution, this project has also been dismissed by tech experts as impractical and cost-prohibitive. Nevertheless, Jahromi and others continue to insist upon its cultural necessity, as evidenced by the minister’s remarks portraying an open internet as harmful to Iranian children.

The nation’s youth in general is a particular target for restrictions on access to information. This trend is no doubt influenced by the remarkably young average population of the Islamic Republic, and by the fact that social and political differences between the Iranian regime and its people are most prominently on display among the lower age demographics. In fact, according to a recent statement by Rasoul Sanai, the political affairs deputy for the Iranian Revolutionary Guard Corps, as many as 80 percent of the persons arrested in the January uprising were under the age of 30.

Such observations may encourage Tehran to use locally-developed alternatives as a means not only for influencing young Iranians with propaganda but also for spying on that same demographic, for which the internet and social media are especially popular. This threat was the focus of a recent NCRI report, which was picked up by Fox News on Wednesday.

The report noted that the regime’s efforts to spy in cyberspace reached new heights shortly after Telegram was banned during the January protests. Additionally, some of those who were arrested at that time were pressured to “leave the Telegram environment and enter the controlled environment of Mobogram,” a domestic alternative, according to the NCRI. That same report indicates that this and other state-approved social media networks are little more than clones of their internationally popular predecessors, with spyware added.

In its coverage of the same NCRI report, The Sun emphasized the potential international reach of these compromised networks over the long term. By presenting themselves as a means to communicate in Farsi with family inside Iran, these apps could be downloaded by users throughout the world, allowing agents of the Islamic Republic to gain access to contact lists and monitor communications, potentially exposing domestic activists who would be comparatively shielded by Telegram’s security features.

In recent months, it has been widely reported that the Iranian government was stepping up its foreign cyber surveillance and hacking efforts, and that its capabilities had increased to such an extent that these efforts pose a serious threat to Western interests. The Long War Journal highlighted this trend on Wednesday in its commentary on the Worldwide Threat Assessment that was presented by the US Director of National Intelligence last week.

According to that commentary, cyber spying activities were given greater attention, alongside domestic unrest, in comparison with previous years’ assessments. However, it also notes that the document described alleged restraint in these activities over the past year, at least where Western targets are concerned. The Long War Journal points to the lack of a clear rationale for this supposed change, and it offers several hypotheses including the possibility that Iran is “just opportunistically lying low.”

Notably, this is precisely the conclusion suggested by the NCRI in discussion of its report on Iranian spyware. The Sun quoted the organization’s Deputy Director Alireza Jafarzadeh as saying that recent efforts involving Mobogram and other platforms had been focused on the domestic population, but also that this constituted an experimentation phase after which the same tactics might be applied to stepped-up spying efforts targeting Western countries.


The GOP Is Conducting Cyber Warfare Against Political Opponents

Photo Credit: dailykos.com

As speculation builds over the extent of Russian meddling in 2018’s elections, the deceptive and influential tactics revealed in last week’s indictment by Special Counsel Robert Mueller—and newer ones—are already in use by U.S. politicos with pro-corporate, pro-GOP agendas.

The examples run the gamut from the seemingly trite—a Republican Senate candidate in Arizona touts an endorsement from a new website impersonating local newspapers—to more overtly serious: a tweet storm calling for Minnesota Democratic Senator Al Franken to resign, which he did last year after escalating accusations of sexual harassment; or tens of thousands of faked emails calling for the repeal of net neutrality, which the GOP-led Federal Communications Commission recently repealed.

In these examples and others, a new hall of mirrors is emerging that threatens American elections and governance—and it is coming from shadowy domestic operatives, not Russians. Websites mimicking news organizations are endorsing candidates. Online identities are being stolen and used to send partisan messages, with people unaware they are being impersonated for partisan gain. Targets are slow to detect or acknowledge the high-tech ruses used against them. The media is catching on, but it’s typically after the fact—not before crucial decisions are made.

While many progressives were split on whether Franken should have left the Senate, the Republican right was unambiguous in seizing the moment to force the Democrats to lose a popular senator.    

Twitter War

“White nationalist provocateurs, a pair of fake news sites, an army of Twitter bots and other cyber tricks helped derail Democratic Senator Al Franken last year, new research shows,” a report by Newsweek’s Nina Burleigh began, describing new details about how he was targeted. “Analysts have now mapped out how Hooters pinup girl and lad-mag model Leeann Tweeden’s initial accusation against Franken became effective propaganda after right-wing black ops master Roger Stone first hinted at the allegation.”

“A pair of Japan-based websites, created the day before Tweeden came forward, and a swarm of related Twitter bots made the Tweeden story go viral and then weaponized a liberal writer’s criticism of Franken,” Burleigh explained. “The bot army—in tandem with prominent real, live members of the far right who have Twitter followers in the millions, such as Mike Cernovich—spewed thousands of posts, helping the #FrankenFondles hashtag and the “Franken is a groper” meme effectively silence the testimonies of eight former female staffers who defended the Minnesota Democrat before he resigned last year.”

This evidence trail tracing how right-wingers used software to amplify the attacks on Franken was discovered by Mike Farb at UnhackTheVote, an election transparency group. He noted this tactic was also one tool used by Russian propagandists during the 2016 U.S. presidential election.  

What’s new now is not that technologies like bots are being created, but that domestic political operatives are using them in much the same way they have used robo-calls, negative campaign mailers and other attacks to undermine political opponents—before the internet and its social media platforms amplified the speed, intensity and impact of such attacks. 

“Like targeted Facebook ads that Russian troll farms used in the 2016 election, Twitter bots have been around for years and were originally created for sales purposes,” Burleigh wrote. “But since the 2016 election, arguably lost due to the right’s superior utilization of darker online strategies, the left is not known to have created or mobilized its own fake cyber army to amplify its viewpoint.”

Burleigh’s observation may be the most chilling. The evidence that is out there so far does suggest that pro-GOP and pro-corporate forces have been quicker to embrace the latest version of the political dark arts, as seen in the growing list of examples of deceptive and influential online campaigns.

Endorsements That Weren’t

Last week, Politico reported on what, at first, seemed like a silly story—a Republican senatorial candidate from Arizona fell for a fake endorsement that seemed to boost her chances in an upcoming primary.

“It looked as if Arizona Senate candidate Kelli Ward had scored a big endorsement: On Oct. 28, she posted a link on her campaign website and blasted out a Facebook post, quoting extensively from a column in the Arizona Monitor,” Politico reported. “There was just one problem: Despite its reputable sounding name, the Arizona Monitor is not a real news site… The site launched just a few weeks before publishing the endorsement, and its domain registration is hidden, masking the identity of its owner. On its Facebook page, it is classified as a news site, but scant other information is offered.”

The general public doesn’t pay much attention to endorsements early in campaigns. So Ward falling for a faked one might be a typical mistake that inexperienced candidates make—and thus easily forgotten. But Politico’s report said her endorsement was part of a larger and far more disturbing trend: the mass-production of fabricated endorsements by anonymous political operatives clearly pushing a far-right agenda.

“The Arizona Monitor seems to be part of a growing trend of conservative political-messaging sites with names that mimic those of mainstream news organizations and whose favored candidates then tout their stories and endorsements as if they were from independent journalists,” wrote Politico. “It’s a phenomenon that spans the country from northern New England, where the anonymous Maine Examiner wreaked havoc on a recent mayoral election, all the way out to California, where Rep. Devin Nunes launched — as reported by POLITICO— his own so-called news outlet, the California Republican.”

“This basically is an appropriation of credibility,” Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, told Politico. “As the credibility of reputable news outlets is appropriated for partisan purposes, we are going to undermine the capacity of legitimate outlets to signal their trustworthiness.”

Political Identity Theft

Cyber deception also is appearing across the government in the nooks and crannies where White House directives or Congress’ laws are turned into the rules Americans must abide by—or in the Trump era, are repealed.

Here, political identity theft is increasingly becoming a tactic used to push federal agencies to end consumer protections and other regulations that impede profits. Hundreds of thousands of public comments, purportedly made by real Americans, have come in over the electronic transom at five different agencies in recent months, a series of investigative reports found. Except the people who supposedly sent these comments never did.

A recent example concerns the “Fiduciary Rule,” which originated in the Labor Department and was to take effect in July 2019 to try to prevent conflicts of interest among investment advisers targeting retirees.

“The [Wall Street] Journal previously found fraudulent postings under names and email addresses at the Consumer Financial Protection Bureau, Federal Energy Regulatory Commission and Securities and Exchange Commission and the Federal Communications Commission,” it noted.

The highest-profile example concerned the FCC’s so-called net neutrality rules, which previously had barred telecom giants from overcharging the public and smaller businesses for access to online data. A day before the FCC voted in November to gut net neutrality, the Verge reported, “A search of the duplicated text found more than 58,000 results as of press time, with 17,000 of those posted in the last 24 hours alone.”

In other words, a bot-like program was hijacking online identities and impersonating those people to file pro-corporate comments at the FCC. When public officials like New York State Attorney General Eric Schneiderman, a Democrat, sought more information from the FCC, he received no response.

While one can speculate about who specifically coordinated these efforts, only one category of special interest has the means and motives to thwart government regulators: the targeted industries, their professional trade associations and lobbyists, and the biggest corporate players.

No Accountability Coming

These are people and interests that are represented by Republicans in Washington more so than Democrats. But, as Schneiderman learned, the GOP and its political appointees have no inclination to even acknowledge that cyber deception is becoming a new coin of the political realm—while they rule that roost.

Progressives and Democrats might point out that the GOP is the party that obsesses over voter fraud—one person voting many times, which almost never occurs in real life—while Republican-friendly operatives appear to be embracing cyber political identity theft on an unprecedented scale.

What this means for 2018’s elections is uncertain, but it doesn’t bode well. No matter where partisan cyber warfare is coming from—domestically or abroad—its occurrence will undermine public confidence in the results.

The congressional midterms and governors’ races in many states are occurring against a backdrop of a rising blue voter turnout wave. It is in the GOP’s interest, in preserving its power, to do anything that undermines the credibility of electoral outcomes that should favor Democrats.

Cyber political warfare is the latest means for doing so. It’s already begun. 


Steven Rosenfeld covers national political issues for AlterNet, including America’s democracy and voting rights. He is the author of several books on elections, including Democracy Betrayed: How Superdelegates, Redistricting, Party Insiders, and the Electoral College Rigged the 2016 Election, to be published in March 2018 from Hot Books.


As An American Tragedy Unfolds, Russian Agents Sow Discord Online

People gather for a memorial one day after the deadly shooting in Parkland, Fla. Experts warn that Russians are exploiting the tragedy on social media. Carolyn Cole/LA Times via Getty Images

As the news broke of a school shooting in Parkland, Fla., hundreds of Twitter accounts believed to be under Russian sway pivoted.

Many had been tweeting about places like Syria and Ukraine — countries where Russia is seeking to strengthen its influence. Suddenly the accounts shifted to hashtags like #guncontrol, #guncontrolnow, and #gunreformnow. Tweets mentioning Nikolas Cruz, the name of the alleged shooter, spiked.

For Bret Schafer, an analyst with Hamilton 68, a site tracking Russian influence on Twitter, the pattern is becoming all too familiar. Hamilton 68 follows 600 accounts run by the Russian government, Russian trolls, bots, and individuals sympathetic to the Russian point of view. Data collected by the site over the past few months suggests that Russian social media accounts are now regularly seizing on divisive or tragic news to rile up segments of American society.

“The Kremlin doesn’t care about gun control in America, they have no skin in this game,” Schafer says. Accordingly, some accounts tracked by Hamilton 68 spew extreme, pro-gun rhetoric. Others attack the National Rifle Association. “By taking an extreme hyper-partisan position, it just serves to further rip us apart,” Schafer says.

American intelligence services are increasingly concerned about Russian accounts on social media. At a hearing the day before the shooting, Dan Coats, the Director of National Intelligence, warned that cyber warfare, including on social media, was one of his “greatest concerns.”

“Frankly, the United States is under attack,” Coats told the Senate Intelligence Committee. Adversaries “seek to sow division in the United States and weaken U.S. leadership.”

The intelligence community’s annual threat assessment, also out Tuesday, warns that Russia in particular will use social media “to try to exacerbate social and political fissures in the United States.” The report predicts those attacks are likely to target the upcoming 2018 midterm elections.

Schafer says that the Russian accounts his organization tracks now follow a well-worn path. First, he says, they tweet out news and breaking developments. This helps them to gain attention and attract new followers. Then they begin tweeting highly inflammatory material to fan the flames of partisanship.

Finally, Schafer says, the accounts shift to conspiracy theories. “They build this narrative of, ‘You are being lied to by the government, by the media, by everyone else, so don’t trust anyone or anything,'” he says. “It’s not just divisive, there’s an erosion quality to it as well—of eroding trust.”

By Friday morning, new hashtags surged on the network tracked by Hamilton 68. They included #fbicorruption and #falseflag.
