
5 Graph Analytics Use Cases

According to Ernst & Young, the marketing, advertising, and media industries lose $8.2 billion a year to fraudulent impressions, infringed content, and malvertising.

The combination of fake news, trolls, bots and money laundering is skewing the value of information and could be hurting your business.

It’s avoidable.

By using graph technology and the data you already have on hand, you can discover fraud through detectable patterns and stop the fraudsters.

We collaborated with Sungpack Hong, Director of Research and Advanced Development at Oracle Labs, to demonstrate five examples of real problems and how graph technology and data are being used to combat them.


But first, a refresher on graph technology.

What Is Graph Technology?

The basic premise of graph technology is that you store, manage, and query data in the form of a graph. Your entities become vertices (illustrated by the red dots), and your relationships become edges (represented by the red lines).

What Is Graph Technology

By analyzing these fine-grained relationships, you can use graph analysis to detect anomalies with queries and algorithms. We’ll talk about these anomalies later in the article.

The major benefit of graph databases is that they’re naturally indexed by relationships, which provides faster access to data (as compared with a relational database). You can also add data without doing a lot of modeling in advance. These features make graph technology particularly useful for anomaly detection—which is mainly what we’ll be covering in this article for our fraud detection use cases.
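As a rough illustration of the vertex/edge model (this is a generic plain-Python sketch, not Oracle's API; all entity names are invented), an adjacency-list graph makes relationship lookups a dictionary access rather than a join:

```python
from collections import defaultdict

# A minimal property-graph sketch: entities are vertices, relationships are
# labeled edges. Because each vertex keeps its own edge list, following a
# relationship is a dictionary lookup rather than a relational join.
class Graph:
    def __init__(self):
        self.adj = defaultdict(list)   # vertex -> [(neighbor, label), ...]

    def add_edge(self, src, dst, label):
        self.adj[src].append((dst, label))

    def neighbors(self, vertex, label=None):
        return [n for n, lbl in self.adj[vertex] if label is None or lbl == label]

g = Graph()
g.add_edge("acct_1", "acct_2", "SENT_MONEY_TO")
g.add_edge("acct_1", "addr_9", "HAS_ADDRESS")
g.add_edge("acct_3", "addr_9", "HAS_ADDRESS")

print(g.neighbors("acct_1"))                 # ['acct_2', 'addr_9']
print(g.neighbors("acct_1", "HAS_ADDRESS"))  # ['addr_9']
```

Anomaly detection over a real graph engine amounts to traversals and filters over structures like this, just at much larger scale.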

How to Find Anomalies with Graph Technology

Gartner 5 Layers of Fraud Detection

If you take a look at Gartner’s 5 Layers of Fraud Protection, you can see that it breaks fraud-discovery analysis into two categories:

  • Discrete data analysis where you evaluate individual users, actions, and accounts
  • Connected analysis where you evaluate the relationships and integrated behaviors that facilitate the fraud

It’s this second category based on connections, patterns, and behaviors that can really benefit from graph modeling and analysis.

Through connected analysis and graph technology, you would:

  • Combine and correlate enterprise information
  • Model the results as a connected graph
  • Apply link and social network analysis for discovery

Now we’ll discuss examples of ways companies can apply this to solve real business problems.

Fraud Detection Use Case #1: Finding Bot Accounts in Social Networks

In the world of social media, marketers want to see what they can discover from trends. For example:

  • If I’m selling this specific brand of shoes, how popular will they be? What are the trends in shoes?
  • If I compare this brand with a competing brand, how do the results mirror actual public opinion?
  • On social media, are people saying positive or negative things about me? About my competitors?

Of course, all of this information can be incredibly valuable. At the same time, it can mean nothing if it’s all inaccurate and skewed by how much other companies are willing to pay for bots.

In this case, we worked with Oracle Marketing Cloud to ensure the information they’re delivering to advertisers is as accurate as possible. We sought to find the fake bot accounts that are distorting popularity.

As an example, there are bots that retweet certain target accounts to make them look more popular.

To determine which accounts are “real,” we created a graph between accounts with retweet counts as the edge weights to see how many times these accounts are retweeting their neighboring accounts. We found that the unnaturally popularized accounts exhibit different characteristics from naturally popular accounts.

Here is the pattern for a naturally popular account:

Naturally Popular Social Media Account

And here is the pattern for an unnaturally popular account:

Unnaturally Popular Social Media Account

When these accounts are analyzed together, certain accounts show obviously unnatural deviations. And by using graphs and relationships, we can find even more bots by:

  • Finding accounts with a high retweet count
  • Inspecting how other accounts are retweeting them
  • Finding other accounts that are retweeted only by these bots
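Those three steps can be sketched in plain Python over a toy retweet graph. The account names, counts, and thresholds below are all hypothetical; the sketch only illustrates the shape of the analysis:

```python
from collections import defaultdict

# Hypothetical retweet edges: (retweeter, target, retweet count).
retweets = [
    ("bot_a", "puffed_1", 500), ("bot_b", "puffed_1", 450),
    ("bot_a", "puffed_2", 300), ("bot_b", "puffed_2", 280),
    ("bot_a", "puffed_3", 50),  ("bot_b", "puffed_3", 40),
    ("fan_1", "organic_1", 5),  ("fan_2", "organic_1", 3),
    ("fan_3", "organic_1", 4),
]

retweeters_of = defaultdict(dict)        # target -> {retweeter: count}
for src, dst, n in retweets:
    retweeters_of[dst][src] = n

# Step 1: accounts with a high total retweet count.
HIGH = 100
popular = {t for t, srcs in retweeters_of.items() if sum(srcs.values()) >= HIGH}

# Step 2: inspect who retweets them; a popular account is suspect when a
# handful of retweeters supply nearly all of its volume.
def top_share(target, k=2):
    counts = sorted(retweeters_of[target].values(), reverse=True)
    return sum(counts[:k]) / sum(counts)

suspects = {t for t in popular if top_share(t) > 0.9}
bots = {src for t in suspects for src in retweeters_of[t]}

# Step 3: other accounts retweeted *only* by those bots are likely inflated too.
also_inflated = {t for t, srcs in retweeters_of.items()
                 if t not in suspects and set(srcs) <= bots}
```

Here `suspects` captures the unnaturally popularized accounts, and `also_inflated` catches accounts boosted only by the same bot cluster.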

Fraud Detection Use Case #2: Identifying Sock Puppets in Social Media

In this case, we used graph technology to identify sock puppet accounts (online identities used for deception; here, different accounts posting the same set of messages) that were working to make certain topics or keywords look more important by making them seem to be trending.

Sock Puppet Accounts in Social Media

To discover the bots, we had to augment the graph from Use Case #1. Here we:

  • Added edges between authors who posted the same messages
  • Counted the number of repeated messages and filtered out coincidental matches
  • Applied heuristics to avoid generating n² edges per repeated message

Because the messages were always identical, we could build subgraphs from those edges and apply a connected components algorithm.
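As an illustration of that pipeline (hypothetical authors and messages; union-find stands in for a full connected-components implementation), linking authors through the shared message avoids materializing an edge for every author pair:

```python
from collections import defaultdict

# Hypothetical (author, message) posts.
posts = [
    ("u1", "buy brand X now"), ("u2", "buy brand X now"),
    ("u3", "buy brand X now"), ("u4", "great weather today"),
    ("u5", "vote for Y"), ("u6", "vote for Y"),
]

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Group authors through the message itself instead of adding an edge for each
# author pair -- this sidesteps the n^2 edge generation per repeated message.
by_message = defaultdict(list)
for author, msg in posts:
    by_message[msg].append(author)

for authors in by_message.values():
    if len(authors) > 1:                # filter out one-off messages
        for a in authors[1:]:
            union(authors[0], a)

# Connected components over the resulting subgraph = sock puppet groups.
groups = defaultdict(set)
for author, _ in posts:
    groups[find(author)].add(author)
puppet_groups = [g for g in groups.values() if len(g) > 1]
```

Singleton authors (like `u4` above) drop out; only groups repeating the same message survive.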

Sock Puppet Groups

As a result of all of the analysis that we ran on a small sampling, we discovered that what we thought were the most popular brands actually weren’t—our original list had been distorted by bots.

See the image below: the “new” most popular brands barely appear on the “old” most popular brands list, but they are a much truer reflection of what’s actually popular. This is the information you need.

Brand Popularity Skewed by Bots

After one month, we revisited the identified bot accounts just to see what had happened to them. We discovered:

  • 89% were suspended
  • 2.2% were deleted
  • 8.8% were still serving as bots

Fraud Detection Use Case #3: Circular Payment

A common pattern in financial crime, a circular money transfer essentially involves a criminal sending money to himself or herself, disguised as valid transfers between “normal” accounts. These “normal” accounts are actually fake. They typically share certain information because they are generated from stolen identities (email addresses, physical addresses, and so on), and it’s this shared information that makes graph analysis such a good fit for discovering them.

For this use case, you can build a graph from transactions between entities, as well as from entities that share information such as email addresses, passwords, or physical addresses. Once the graph is created, a simple query will find all customers with accounts that share similar information and that are sending money to each other.
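A sketch of that query in plain Python, using made-up accounts and transfers: look for transfer cycles whose member accounts share identity details (here, an email address):

```python
from collections import defaultdict

# Hypothetical transfers and account attributes.
transfers = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
emails = {"A": "x@mail", "B": "x@mail", "C": "y@mail", "D": "z@mail"}

out = defaultdict(list)
for src, dst in transfers:
    out[src].append(dst)

def find_cycles(start, max_len=5):
    """Return simple cycles that begin and end at `start` (iterative DFS)."""
    cycles, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in out[node]:
            if nxt == start:
                cycles.append(path + [start])
            elif nxt not in path and len(path) < max_len:
                stack.append((nxt, path + [nxt]))
    return cycles

cycles = find_cycles("A")
# Flag cycles where distinct accounts reuse the same identity information.
flagged = [c for c in cycles
           if len({emails[a] for a in set(c)}) < len(set(c))]
```

Here the A → B → C → A loop gets flagged because accounts A and B share an email address.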

Circular Payments Graph Technology

Fraud Detection Use Case #4: VAT Fraud Detection

Because Europe has so many borders with different rules about who pays tax to which country when products are crossing borders, VAT (Value Added Tax) fraud detection can get very complicated.

In most cases, the importer should pay the VAT, and if the products are exported to other countries, the exporter should receive a refund. But when there are other companies in between, deliberately obfuscating the process, it can get very complicated. The importing company delays paying the tax for weeks or months. The companies in the middle are paper companies. Eventually, the importing company vanishes without paying VAT, while still collecting payment from the exporting company.

VAT Fraud Detection

This can be very difficult to decipher, but not with graph analysis. You can easily create a graph from the transactions: who are the resellers, and who is creating the companies?

In this real-life analysis, Oracle Practice Manager Wojciech Wcislo examined the flow of transactions to identify suspicious companies. He then used an algorithm in Oracle Spatial and Graph to identify the middleman.

The graph view of VAT fraud detection:

Graph View of VAT Fraud Detection

A more complex view:

Complex View of Graph Technology and Anomaly Detection

In that case, you would:

  • Identify importers and exporters via a simple query
  • Aggregate VAT invoice amounts as edge weights
  • Run the fattest path algorithm

You will then discover the common “middleman” nodes where the flows are aggregated.
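Those steps can be sketched with a widest-path ("fattest path") variant of Dijkstra's algorithm. The companies and amounts below are invented, and this is not Oracle's implementation; the point is that the path maximizing its smallest edge weight funnels through the middleman:

```python
import heapq
from collections import defaultdict

# Hypothetical aggregated invoice flows between companies (edge weights).
flows = [
    ("importer", "shell_1", 900), ("importer", "shell_2", 200),
    ("shell_1", "middleman", 850), ("shell_2", "middleman", 180),
    ("middleman", "exporter", 800),
]
adj = defaultdict(list)
for src, dst, w in flows:
    adj[src].append((dst, w))

def fattest_path(source, target):
    """Dijkstra variant that maximizes the minimum edge weight on the path."""
    best = {source: float("inf")}
    heap = [(-best[source], source, [source])]
    while heap:
        neg_width, node, path = heapq.heappop(heap)
        if node == target:
            return -neg_width, path
        for nxt, w in adj[node]:
            cand = min(-neg_width, w)
            if cand > best.get(nxt, 0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt, path + [nxt]))
    return 0, []

width, path = fattest_path("importer", "exporter")
print(width, path)   # 800 ['importer', 'shell_1', 'middleman', 'exporter']
```

The negated widths turn Python's min-heap into a max-heap, so the widest frontier is always expanded first.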

Fraud Detection Use Case #5: Money Laundering and Financial Fraud

Conceptually, money laundering is pretty simple. Dirty money is passed around to blend it with legitimate funds and then turned into hard assets. This was the kind of process discovered in the Panama Papers analysis.

These tax evasion schemes often rely on false resellers and brokers who are able to apply for tax refunds to avoid payment.

But graphs and graph databases provide relationship models. They let you apply pattern recognition, classification, statistical analysis, and machine learning to these models, which enables more efficient analysis at scale against massive amounts of data.

In this use case, we’ll look more specifically at case correlation. Whenever transactions occur that regulations flag as suspicious, those transactions get a closer look from human investigators. The goal is to avoid inspecting each individual activity separately and instead to group suspicious activities together through already-known connections.

Money Laundering and Financial Fraud

To find these correlations through a graph-based approach, we implemented this flow on general-purpose graph engines, using a pattern-matching query (path finding) and a connected components algorithm (with filters).

Through this method, the company didn’t have to build its own custom case correlation engine; it could use graph technology, which offers better flexibility. That flexibility is important because different countries have different rules.
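The correlation step can be sketched as connected components with filters, again in plain Python with invented alert data: alerts that share any entity (account, device, address) collapse into one case for investigators:

```python
from collections import defaultdict

# Hypothetical suspicious transactions, each referencing the entities involved.
alerts = {
    "tx1": {"acct_A", "device_7"},
    "tx2": {"acct_B", "device_7"},
    "tx3": {"acct_B", "addr_3"},
    "tx4": {"acct_Z"},
}

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Connect each alert to the entities it touches; alerts sharing an entity
# land in the same component and hence the same investigative case.
for tx, entities in alerts.items():
    for e in entities:
        union(tx, e)

cases = defaultdict(list)
for tx in alerts:
    cases[find(tx)].append(tx)
case_groups = sorted(sorted(g) for g in cases.values())
print(case_groups)   # [['tx1', 'tx2', 'tx3'], ['tx4']]
```

Country-specific rules would enter as filters on which entity types are allowed to link alerts together.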


In today’s world, scammers are getting ever more inventive. But so is the technology. Graph technology is an excellent way to discover the truth in data, and it is rapidly becoming more popular. If you’d like to learn more, you can find white papers, software downloads, documentation, and more on Oracle’s Big Data Spatial and Graph pages.

And if you’re ready to get started with exploring your data now, we offer a free guided trial that enables you to build and experiment with your own data lake.




The great Indian data rush

India needs to be relevant in the new world, where Artificial Intelligence (AI) is going to play a central role.

There has been a lot of discussion around how Google, WhatsApp (Facebook), and Amazon are aggressively entering the Indian tech landscape. Is what they are doing ‘capital dumping’, where they pour money into their India operations and in the process wipe out homegrown startups that simply don’t have that kind of capital?

And, what about our data?

Data is the new oil, and are we simply allowing them to plunder our precious resource – in this case data around the user behaviour of our own citizens – and run away with it? Didn’t China create its fabled tech ecosystem by blocking western tech imperialism? We have been an open society so far, should we re-evaluate that?

These are fair questions and the answers to them are not that obvious. The right way to approach this conundrum is to ask ourselves, what is it that we hope to achieve? What is our objective?

The objective is simple. India needs to be relevant in the new world, where Artificial Intelligence (AI) is going to play a central role. This will not only be in core technology, but everywhere technology is used. Which means, in practically every aspect of our lives.

From home security cameras, to how your smartphone interprets your voice to healthcare diagnosis, to cyber warfare and security, AI will be everywhere before you know it.

Not being a player in AI means being irrelevant in technology

China recognises this, and has set itself the goal of becoming a world leader in AI, adding $150 billion to its economy by 2030. And it’s well on its way to achieving that goal.

It also turns out that in order to build AI systems, one needs data. Lots of it. And in the coming decades, the country generating mountains of data will be India. As smartphone penetration tears through the roof, India will likely have twice as many smartphones as the US by 2020. But where is this data captured?

Therein lies the catch. All of this data is whizzing its way, outside India, to the data centres and AI/ML models of Google, Facebook (which owns WhatsApp) and Amazon. Where it is being accessed and crunched by top-notch engineering talent in their AI research teams and research labs.

In addition to the US, Google has opened AI research labs in Toronto, Montreal, Paris, London, and Beijing. Notice, India is not on that list.

These AI labs are breeding grounds for talent and they are the seeds from which sprout new ideas, entrepreneurs, and the next big innovations in the tech economy.

In order to achieve the objective of being an AI powerhouse, India needs to have world-class AI labs. The companies in the best position to open such centres of excellence are Google, Facebook, and Amazon themselves. So, just barring them from operating in the country accomplishes little.

Under normal circumstances, with a fully unencumbered flow of data, these giants will gather their data in India, and harness the power of data wherever their top talent teams are. This is to be expected from normal, shareholder value maximising, public companies.

We need to enact practical policies that will result in Google and Co. investing in creating AI research labs in India. Let’s look at what some of those policies can be:

  • Data generated in India should remain geographically in India: This means Indian data will need to be hosted in data centres residing in India and cannot be freely transferred outside.
  • Data manipulation is done within India: This will require engineering talent to be resident in India that develops core technologies on top of the data repository.
  • Derived intelligence can be shared: This will enable the learning from the data to be shared globally and included in the next-gen products. This is a necessary incentive for companies to invest in research within India as it will make the global products better.
  • Drive government spending to spur AI research: This can be in the form of sponsored research in colleges and as contracts to private companies.

From an enforcement point of view, the government should rely less on policing and more on the honourable conduct of companies in following the law around data. Just like tax compliance, this can be periodically audited, but the expectation is that everyone follows the law.

Providing restrictions on entry or doing business in India for any of these companies is counter-productive. The big valley giants have spent billions in developing their infrastructure, people and processes. When they enter India, their best practices enter with them too.

Consider Amazon in ecommerce. We cannot overstate the massive impact Amazon’s entry has had on the SMB ecosystem in India. By introducing competition, it has made all existing players more aggressive in their offerings, and by bringing its years of experience and best practices to India, Amazon is making the entire SMB supply chain more transparent, trustworthy, and productive. This ultimately benefits everyone in the ecosystem.

Not only does it increase revenues for an SMB, but it also encourages more honest and hardworking people to join the marketplace and start selling online, ultimately driving employment.

Yes, Flipkart has also had a positive impact on the SMBs, but there is no denying that the scale and scope of ecommerce has greatly increased with Amazon entering the fray. You don’t hear of bricks being shipped instead of phones anymore, and that makes for a more professional SMB.

Similarly, Uber and Ola together have transformed urban transport, yet the pace of innovation would have been a lot slower, investment a lot lesser, and total employment smaller, had we restricted the entry of Uber.

India’s objective should be to be a key player in the world of AI, that is built on the foundation of a data-rich economy. And invite the best and brightest in the world to invest in it. Making prudent policies that are fair, non-discriminatory and practical, around how data generated inside India is used, are central to that effort. The time to act is now.

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)



Loose tweets sink democracy: How Americans can stop colluding with the Russians




“Loose lips sink ships.”

During World War II, that slogan was part of a U.S. anti-espionage campaign. It employed iconic, colorful posters with slogans aimed at preventing American intelligence from falling into the enemies’ clutches.

Today, with all the talk of Russia waging cyber-warfare on American elections, we need a new campaign: Democracy dies when you share lies. Know it’s true before you tweet.

The real national scandal isn’t whether Donald Trump’s campaign colluded with the Russian propaganda campaign to influence the 2016 election. If that happened, it’s terrible and it must never happen again. But the real scandal is that we, the American people, cooperated with such spectacular enthusiasm to aid the Russians in spreading lies and distortion.

Twitter in January disclosed that it had discovered more than 50,000 Russia-linked accounts that posted automated material about the 2016 election. The company said it sent notices to 1.4 million people who had interacted with one specific bot-farm with over 3,800 accounts.

The Trumpsters could have colluded seven days a week and twice on Sunday and it wouldn’t have amounted to a pitcher of warm spit with whipped cream on top. Not without hundreds of thousands of Americans sharing and retweeting false claims and propaganda that we now know was targeted at subverting the election and undermining confidence in our democracy.

Iowans now know that we may have been in the cross hairs, when Russian operatives targeted our state’s 2016 presidential caucuses with allegations of fraud. That’s one of the counts included in special counsel Robert Mueller’s indictment of 13 Russian nationals for interfering with the U.S. election.

Even worse, it’s still happening. Media reports have revealed that Russian bots — fake social media accounts controlled by the Russians — are still hard at work. The tragic Florida high school mass shooting was the latest fodder for Russian efforts to sow discord and division.

The bots were busy feeding wacko conspiracy theories suggesting the whole shooting was a hoax. Meanwhile, conspiracy theorists were alleging that students from Marjory Stoneman Douglas High School were actually “crisis actors” deployed by the Democratic Party to push for gun control. An aide to a Florida state representative fell for the ruse and lost his job after he emailed the false allegation to a reporter.

There are lots of theories about why Americans are susceptible to this sort of attack. Google “why are Americans so gullible” and you’ll find blame for everything from political satire like “The Daily Show” to the philosophy of American exceptionalism. I suspect the answer has more to do with Americans’ haste and distraction than anything written by Alexis de Tocqueville or Ayn Rand.

As a still-loyal print newspaper reader, I am drawn to studies that suggest the digital experience erodes our reading comprehension and critical thinking skills. I’d like to see a lot more research before we completely sacrifice our ability to reason at the altar of e-media.

We don’t have time to argue over why it happens. We have to stop it, and soon. The 2018 primary and general elections are mere months away and they will be targets for similar interference.

Sure, we can and should sanction Russia and try to prosecute their operatives. But their tactics can and probably will be adopted by other foreign and domestic operatives.

We can and should pressure social media companies, which are making billions from advertising, to improve their transparency and accountability. We can and should hold our politicians and elected officials accountable if they take advantage of spurious social-media campaigns that try to discredit opponents.

Ultimately, though, this cyber-propaganda war will be won or lost based on whether Americans choose to change their own online behavior. It doesn’t help that we have a president who tweets without regard to either facts or propriety.

Here’s what Trump tweeted Feb. 18 after the announcement of the Russian indictments:

“If it was the GOAL of Russia to create discord, disruption and chaos within the U.S. then, with all of the Committee Hearings, Investigations and Party hatred, they have succeeded beyond their wildest dreams. They are laughing their asses off in Moscow. Get smart America!”


I agree with the last part. We need to get smart. I’ve long advocated for requiring high-school students to learn to be savvier media consumers. Too many Americans today can’t tell the difference between fact and opinion, let alone truth and falsehoods.

Most of all, we need to stop being lazy and careless about our use of social media.

Research before you retweet: It only takes a few seconds to find out who produced the stories we’re sharing and retweeting. If no reputable media outlets are reporting a story, it should be a big, fat clue that it’s either false or unverified. There are many credible fact-checking sites that research political claims. Others debunk urban legends. Kindly call out your friends who are sharing and tweeting garbage.

Know who your friends are: Check out the online profiles and feeds of the people you’re “friending” or following on social media. Do they ever post personal details and photos, or are they blank slates? Do they interact with humans in a normal way, or is all their communication one-way? There are online tools to test whether a specific account is a bot. The Botometer from Indiana University is one option. Block and report bots if they show up in your social feeds.

Back in the 1970s, cartoonist Walt Kelly called out Americans’ wasteful and careless despoiling of the environment with an iconic phrase: “We have met the enemy and he is us.”

Let’s stop being our democracy’s worst enemies. Don’t be Russian to share Putin’s lies.

Kathie Obradovich is the Des Moines Register’s political columnist. Contact her on Twitter: @kobradovich.




Dark future of data wars inevitable unless consumers push back, author warns

The online campaign to influence the 2016 U.S. presidential election is a prelude to a dark future where data will become weaponized by hostile states, unless regulators and consumers push back, says the author of a new book on how to fix the crisis of trust in Silicon Valley.

“There will be major international crises and probably wars built around data,” Andrew Keen says. “There will be a hot data war at some point in the future.”

An internet entrepreneur turned cultural commentator, Mr. Keen was considered a heretic in 2007 when he wrote The Cult of the Amateur, which skewered the unbridled optimism fuelling the early days of Web 2.0 – the shift from static websites to platforms focused on user-generated content.


Far from democratizing the web, Mr. Keen warned a decade ago that sites such as Facebook and YouTube were undermining traditional media outlets, cannibalizing revenues from professional content creators, and allowing anonymous trolls to post content unconstrained by professional standards that could manipulate public opinion and “reinvent” the truth.

Now as tech giants including Facebook, Twitter and PayPal confront revelations contained in U.S. special counsel Robert Mueller’s indictment that they were the platforms of choice for Russian agents using stolen data to interfere in the U.S. presidential election, those early warnings have become the consensus opinion.

Today there is so much agreement about the harmful effects of technology that Mr. Keen says he wants to stop writing about what’s wrong with the internet and start focusing on how to fix it.

The heart of the issue, he argues in his latest book How to Fix the Future, lies in today’s big data economy, where tech companies give away their products for free in exchange for consumer information that advertisers use to create highly targeted messages. It’s a business model built on mass surveillance, with personal data becoming the economy’s most valuable commodity.

And as that data becomes ever more important to state-to-state relations, Mr. Keen says we’re only one major hacking event away from a digital world war.

“We still haven’t had an Exxon Valdez or a Chernobyl on data,” he said in an interview days before a U.S. federal grand jury indicted three Russian companies and 13 of their online operatives for a wide-ranging and well-funded online campaign to sow political discord during the 2016 election in support of Donald Trump. “I think there will be some major hacking event in the not-too-distant future which may involve a foreign power that will wake people up to this.”

Yet such a dystopian future is far from inevitable, he says. The internet’s early optimism, the belief that technology would save the world, was misguided. But so is today’s digital determinism, which says that humans are powerless against algorithms, smart machines and cyberwarfare campaigns of hostile foreign governments.


To fix the future, Mr. Keen argues, we should look to the past. The social and economic upheaval caused by the Industrial Revolution was tamed through a combination of labour strikes, government regulations that improved working conditions, the advent of a social safety net and the adoption of public schools. Mr. Keen believes the most damaging effects of today’s digital revolution can be similarly managed through a combination of regulation, innovation, consumer and worker demands and education.

History lessons are particularly crucial for Silicon Valley’s forward-looking tech titans. Mr. Keen points to the U.S. automotive industry, whose global dominance was undermined by safety and reliability issues until it eventually lost ground to innovative companies in Europe and Asia.

“It’s very important for Silicon Valley to wake up and recognize that there’s no guarantee that they’ll be dominant in 10 or 20 years,” he said.

In Mr. Keen’s vision of the war for the future, the villains are China and Russia, which are using online platforms to create surveillance states that undermine trust between citizens and their government.

The heroes are countries such as Estonia, which is creating a digital ID system for its citizens – one that alerts them each time a government agency accesses their data. The country also launched an “e-residency” program that gives foreign entrepreneurs access to the country’s financial institutions. In the Estonian model, he says, building online trust means replacing anonymity and privacy with a system of open and transparent state surveillance.

Regulation will become increasingly important to reining in big tech, he says. But the U.S., with its chaotic political system and laws that shield social media companies from liability for content posted on their platforms, is ill-equipped to lead the push for reform.


Canadian regulators have likewise taken a largely hands-off approach to social media companies, though earlier this month Bank of Canada deputy governor Carolyn Wilkins called for tougher regulation of tech firms, given their growing power and control over vast troves of personal data.

“Access to and control of user data could make some firms virtually unassailable,” she said.

Facebook also launched a “Canadian Election Integrity” project last year to head off concerns over how its platform could be used to undermine the 2019 Canadian federal election.

But Mr. Keen expects European regulators to carry the fight, particularly European Commissioner for Competition Margrethe Vestager. “She’s the only one willing to take on Apple and force them to pay their taxes,” he says. “She’s the only one who is really looking critically [at] Google.”

Just as the U.S. government’s antitrust case against Microsoft in the 1990s loosened the company’s stranglehold on desktop computing and paved the way for startups such as Google and Facebook, Mr. Keen believes the multibillion-dollar fines Ms. Vestager has slapped on Silicon Valley giants are intended to foster innovation by preventing the big tech companies from using their global dominance to squash smaller competitors.

The most significant reforms will come this May, when the European Union launches the General Data Protection Regulation. The aggressive internet-privacy reforms will, among other things, give users the “right to be forgotten” by allowing consumers to delete the personal data that private companies hold about them.

While critics, including Mr. Keen, say the rules unintentionally favour companies large enough to afford to comply, he still sees the regulations as a good start. “The important thing is that they are beginning to pass some laws around data and the protection of consumer data,” he said.

Mr. Keen won’t predict how long it will be before Silicon Valley is forced to make meaningful changes to adapt to consumer and government pressure. But just as technology changes quickly, so can society’s attitude toward it. Or as one venture capitalist in the book describes the process of social and economic disruption: “it’s nothing, nothing, nothing – and then something dramatic.”



Russian meddling preys on a gullible public

By Hank Waters

In an excellent report published in this newspaper last Sunday, Rudi Keller explained what he learned from several researchers about recent Russian meddling in U.S. affairs using social media. Keller’s primary source was Lt. Col. Jarred Prier, who for years has studied Russian cyber warfare and recently wrote a peer-reviewed report including student protests at the University of Missouri as an example.

Prier says Russian disinformation campaigns seek to sow discord among allies of the U.S. and internally as well. Particularly galling to Prier, a 2003 MU grad, was the successful Russian effort to stoke unfounded fears of a violent white backlash surrounding 2015 student protests and subsequent resignation of then-UM President Tim Wolfe.

Prier found Russian cyber trolls used Twitter to spread untrue accounts of campus violence, including Ku Klux Klan marches and a phony picture of a battered black youth. Incessant repetition on social media caused many to believe the false reports.

The recent indictment by Special Counsel Robert Mueller charges that Russia used its disinformation campaign during the 2016 presidential race to benefit Republican Donald Trump and Democrat Bernie Sanders in order to discredit Democrat Hillary Clinton, whom the Russians considered their main target.

Larger conclusions by Prier and other expert witnesses interviewed by Keller are interesting. Prier says, “They want to force the American public to go over into a corner and argue amongst themselves.”

MU Professor of political science Cooper Drury says the Russian long-term goal is not the victory of any political party but a weaker U.S. If disruption is your goal, says Drury, “then the greater polarization you can get inside a democracy the more successful you will be.”

MU professor of communications Mitchell McKinney says social media helps mask the source of otherwise questionable propaganda, and volume creates believability. Then, he says, the most success comes when these rumors are reported by trusted news organizations.

“These Russian trolls were driving clicks,” says Prier. “Clicks are what keeps the business moving.”

If political polarization in the U.S. is a primary goal, we might think the Russian campaign has been spectacularly successful, but MU professor Drury points out that traditional media once considered neutral are more likely today to take sides. He cites the television networks Fox News and MSNBC, which attract opposed and mutually disdainful audiences.

Prier’s report sounds pessimistic, but MU journalism professor Mike Kearney argues the internet makes it easier for each of us to share and find information “by ourselves.” Prier says it’s up to providers of information, including Twitter, to be more careful.

Obviously, the first line of defense should be the retail consumer of news, but as we see in the new age of easy disinformation, we have not yet fully learned that skill. A gullible public has existed since the first human society appeared. Today the same human frailty persists, frighteningly fueled by the internet and its latest, most insidious tool, Twitter.

Yes, I will say “insidious.” The benefit of sharing innocuous messages is sadly overcome by the pernicious opportunities gained by newly empowered trolls who so easily get in our heads anonymously. Will we learn to be skeptical enough?


The best argument against democracy is a five-minute conversation with the average voter.

—Winston Churchill



The GOP Is Conducting Cyber Warfare Against Political Opponents


As speculation builds over the extent of Russian meddling in 2018’s elections, the deceptive and influential tactics revealed in last week’s indictment by Special Counsel Robert Mueller—and newer ones—are already in use by U.S. politicos with pro-corporate, pro-GOP agendas.

The examples run the gamut from the seemingly trite—a Republican Senate candidate in Arizona touts an endorsement from a new website impersonating local newspapers—to more overtly serious: a tweet storm calling for Minnesota Democratic Senator Al Franken to resign, which he did last year after escalating accusations of sexual harassment; or tens of thousands of faked emails calling for the repeal of net neutrality, which the GOP-led Federal Communications Commission recently repealed.

In these examples and others, a new hall of mirrors is emerging that threatens American elections and governance—and it is coming from shadowy domestic operatives, not Russians. Websites mimicking news organizations are endorsing candidates. Online identities are being stolen and used to send partisan messages, with people unaware they are being impersonated for partisan gain. Targets are slow to detect or acknowledge the high-tech ruses used against them. The media is catching on, but typically after the fact—not before crucial decisions are made.

While many progressives were split on whether Franken should have left the Senate, the Republican right was unambiguous in seizing the moment to force the Democrats to lose a popular senator.    

Twitter War

“White nationalist provocateurs, a pair of fake news sites, an army of Twitter bots and other cyber tricks helped derail Democratic Senator Al Franken last year, new research shows,” a report by Newsweek’s Nina Burleigh began, describing new details about how he was targeted. “Analysts have now mapped out how Hooters pinup girl and lad-mag model Leeann Tweeden’s initial accusation against Franken became effective propaganda after right-wing black ops master Roger Stone first hinted at the allegation.”

“A pair of Japan-based websites, created the day before Tweeden came forward, and a swarm of related Twitter bots made the Tweeden story go viral and then weaponized a liberal writer’s criticism of Franken,” Burleigh explained. “The bot army—in tandem with prominent real, live members of the far right who have Twitter followers in the millions, such as Mike Cernovich—spewed thousands of posts, helping the #FrankenFondles hashtag and the “Franken is a groper” meme effectively silence the testimonies of eight former female staffers who defended the Minnesota Democrat before he resigned last year.”

This evidence trail tracing how right-wingers used software to amplify the attacks on Franken was discovered by Mike Farb at UnhackTheVote, an election transparency group. He noted this tactic was also one tool used by Russian propagandists during the 2016 U.S. presidential election.  

What’s new now is not that technologies like bots are being created, but that domestic political operatives are using them in much the same way they have used robo-calls, negative campaign mailers and other attacks to undermine political opponents—before the internet and its social media platforms amplified the speed, intensity and impact of such attacks. 

“Like targeted Facebook ads that Russian troll farms used in the 2016 election, Twitter bots have been around for years and were originally created for sales purposes,” Burleigh wrote. “But since the 2016 election, arguably lost due to the right’s superior utilization of darker online strategies, the left is not known to have created or mobilized its own fake cyber army to amplify its viewpoint.”

Burleigh’s observation may be the most chilling. The evidence out there so far does suggest that pro-GOP and pro-corporate forces are being quicker to embrace the latest version of the political dark arts—as seen in the growing list of examples of deceptive and influential online campaigns.

Endorsements That Weren’t

Last week, Politico reported on what, at first, seemed like a silly story—a Republican senatorial candidate from Arizona fell for a fake endorsement that seemed to boost her chances in an upcoming primary.

“It looked as if Arizona Senate candidate Kelli Ward had scored a big endorsement: On Oct. 28, she posted a link on her campaign website and blasted out a Facebook post, quoting extensively from a column in the Arizona Monitor,” Politico reported. “There was just one problem: Despite its reputable sounding name, the Arizona Monitor is not a real news site… The site launched just a few weeks before publishing the endorsement, and its domain registration is hidden, masking the identity of its owner. On its Facebook page, it is classified as a news site, but scant other information is offered.”

The general public doesn’t pay much attention to endorsements early in campaigns. So Ward falling for a faked one might be a typical mistake that inexperienced candidates make—and thus easily forgotten. But Politico’s report said her endorsement was part of a larger and far more disturbing trend: the mass-production of fabricated endorsements by anonymous political operatives clearly pushing a far-right agenda.

“The Arizona Monitor seems to be part of a growing trend of conservative political-messaging sites with names that mimic those of mainstream news organizations and whose favored candidates then tout their stories and endorsements as if they were from independent journalists,” wrote Politico. “It’s a phenomenon that spans the country from northern New England, where the anonymous Maine Examiner wreaked havoc on a recent mayoral election, all the way out to California, where Rep. Devin Nunes launched — as reported by POLITICO— his own so-called news outlet, the California Republican.”

“This basically is an appropriation of credibility,” Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, told Politico. “As the credibility of reputable news outlets is appropriated for partisan purposes, we are going to undermine the capacity of legitimate outlets to signal their trustworthiness.”

Political Identity Theft

Cyber deception also is appearing across the government in the nooks and crannies where White House directives or Congress’ laws are turned into the rules Americans must abide by—or in the Trump era, are repealed.

Here, political identity theft is increasingly becoming a tactic used to push federal agencies to end consumer protections and other regulations that impede profits. Hundreds of thousands of public comments, purportedly made by real Americans, have come in over the electronic transom at five different agencies in recent months, a series of investigative reports found. Except the people who supposedly sent these comments never did.

A recent example concerns the “Fiduciary Rule,” which originated in the Labor Department and was to take effect in July 2019, to prevent conflicts of interest among investment advisers targeting retirees.

“The [Wall Street] Journal previously found fraudulent postings under names and email addresses at the Consumer Financial Protection Bureau, Federal Energy Regulatory Commission and Securities and Exchange Commission and the Federal Communications Commission,” it noted.

The highest-profile example concerned the FCC’s so-called net neutrality rule, which had prevented telecom giants from overcharging the public and smaller businesses for access to online data. A day before the FCC voted in November to gut net neutrality, the Verge reported, “A search of the duplicated text found more than 58,000 results as of press time, with 17,000 of those posted in the last 24 hours alone.”

In other words, a bot-like program was hijacking online identities and impersonating those people to file pro-corporate comments at the FCC. When public officials like New York State Attorney General Eric Schneiderman, a Democrat, sought more information from the FCC, he received no response.

While one can speculate about who specifically coordinated these efforts, only one category of special interest has the means and motive to thwart government regulators: the targeted industries, their professional trade associations and lobbyists, and the biggest corporate players.

No Accountability Coming

These are people and interests represented by Republicans in Washington more so than Democrats. But, as Schneiderman learned, the GOP and its political appointees have no inclination even to acknowledge that cyber deception is becoming a new coin of the political realm—while they rule that roost.

Progressives and Democrats might point out that the GOP is the party that obsesses over voter fraud—one person voting many times, which almost never occurs in real life—while Republican-friendly operatives appear to be embracing cyber political identity theft on an unprecedented scale.

What this means for 2018’s elections is uncertain, but it doesn’t bode well. No matter where partisan cyber warfare is coming from—domestically or abroad—its occurrence will undermine public confidence in the results.

The congressional midterms and governors’ races in many states are occurring against a backdrop of a rising blue voter turnout wave. To preserve its power, it is in the GOP’s interest to do anything that undermines the credibility of electoral outcomes that should favor Democrats.

Cyber political warfare is the latest means for doing so. It’s already begun. 


Steven Rosenfeld covers national political issues for AlterNet, including America’s democracy and voting rights. He is the author of several books on elections, including Democracy Betrayed: How Superdelegates, Redistricting, Party Insiders, and the Electoral College Rigged the 2016 Election, to be published in March 2018 from Hot Books.

