5 Graph Analytics Use Cases

According to Ernst & Young, the marketing, advertising, and media industries lose $8.2 billion a year to fraudulent impressions, infringed content, and malvertising.

The combination of fake news, trolls, bots, and money laundering is skewing the value of information, and it could be hurting your business.

It’s avoidable.

By using graph technology and the data you already have on hand, you can spot fraud through detectable patterns and stop the people behind it.

We collaborated with Sungpack Hong, Director of Research and Advanced Development at Oracle Labs, to demonstrate five examples of real problems and how graph technology and data are being used to combat them.

Get started with data—register for a guided trial to build a data lake

But first, a refresher on graph technology.

What Is Graph Technology?

With graph technology, the basic premise is that you store, manage, and query data in the form of a graph. Your entities become vertices (illustrated by the red dots below). Your relationships become edges (represented by the red lines).

[Image: Entities as vertices and relationships as edges]

By analyzing these fine-grained relationships, you can use graph analysis to detect anomalies with queries and algorithms. We’ll talk about these anomalies later in the article.

The major benefit of graph databases is that they're naturally indexed by relationships: following a connection is a direct neighbor lookup rather than a join, which gives faster access to connected data than a relational database. You can also add data without doing a lot of modeling in advance. These features make graph technology particularly useful for anomaly detection, which is the focus of the fraud detection use cases in this article.
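
To make the model concrete, here's a minimal sketch in Python using the open-source NetworkX library (the article's examples use Oracle's graph tooling; the people, accounts, and email address below are invented for illustration):

```python
import networkx as nx

# Entities become vertices; relationships become edges.
G = nx.Graph()
G.add_edge("Alice", "acct_1001", relation="owns")
G.add_edge("Bob", "acct_2002", relation="owns")
G.add_edge("acct_1001", "acct_2002", relation="transfer")
G.add_edge("Alice", "alice@example.com", relation="uses_email")
G.add_edge("Bob", "alice@example.com", relation="uses_email")

# Because the data is organized by relationships, a one-hop question
# ("who uses this email address?") is a direct neighbor lookup.
print(list(G.neighbors("alice@example.com")))  # ['Alice', 'Bob']
```

Notice that two different people sharing one email address falls out of a single neighbor lookup; that same shape of question drives several of the fraud patterns below.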

How to Find Anomalies with Graph Technology

[Image: Gartner's five layers of fraud protection]

If you take a look at Gartner's five layers of fraud protection, you can see that they break fraud analysis into two categories:

  • Discrete data analysis, where you evaluate individual users, actions, and accounts
  • Connected analysis, where you evaluate the relationships and integrated behaviors that facilitate fraud

It’s this second category based on connections, patterns, and behaviors that can really benefit from graph modeling and analysis.

Through connected analysis and graph technology, you would (see the sketch after this list):

  • Combine and correlate enterprise information
  • Model the results as a connected graph
  • Apply link and social network analysis for discovery
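
Here's a hedged sketch of those three steps, again with NetworkX standing in for an enterprise graph engine; the transfer records, account names, and amounts are invented:

```python
import networkx as nx

# 1. Combine and correlate enterprise information (toy transfer records).
transfers = [("acct_A", "acct_B", 500.0), ("acct_B", "acct_C", 480.0),
             ("acct_C", "acct_A", 450.0), ("acct_D", "acct_B", 75.0)]

# 2. Model the results as a connected graph.
G = nx.DiGraph()
for src, dst, amount in transfers:
    G.add_edge(src, dst, weight=amount)

# 3. Apply link and social network analysis for discovery; PageRank is
#    one standard choice for surfacing accounts that concentrate flows.
scores = nx.pagerank(G, weight="weight")
for acct, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(acct, round(score, 3))
```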

Now we’ll discuss examples of ways companies can apply this to solve real business problems.

Fraud Detection Use Case #1: Finding Bot Accounts in Social Networks

In the world of social media, marketers want to see what they can discover from trends. For example:

  • If I’m selling this specific brand of shoes, how popular will they be? What are the trends in shoes?
  • If I compare this brand with a competing brand, how do the results mirror actual public opinion?
  • On social media, are people saying positive or negative things about me? About my competitors?

Of course, all of this information can be incredibly valuable. At the same time, it can mean nothing if it's inaccurate, skewed by the bots that other companies are willing to pay for.

In this case, we worked with Oracle Marketing Cloud to ensure the information they’re delivering to advertisers is as accurate as possible. We sought to find the fake bot accounts that are distorting popularity.

As an example, there are bots that retweet certain target accounts to make them look more popular.

To determine which accounts are real, we created a graph of accounts, with retweet counts as the edge weights, to see how many times accounts retweet their neighbors. We found that unnaturally popularized accounts exhibit different characteristics from naturally popular ones.

Here is the pattern for a naturally popular account:

[Image: Retweet pattern of a naturally popular account]

And here is the pattern for an unnaturally popular account:

[Image: Retweet pattern of an unnaturally popularized account]

When these accounts are analyzed together, certain accounts show obviously unnatural deviations. And by using graphs and relationships, we can find even more bots (see the sketch after this list) by:

  • Finding accounts with a high retweet count
  • Inspecting how other accounts are retweeting them
  • Finding further accounts that get retweets only from these bots
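
A minimal sketch of those three steps, assuming a toy list of (retweeter, retweeted, count) edges; the account names and both thresholds are invented:

```python
import networkx as nx

# Directed edge u -> v with weight w: account u retweeted account v w times.
retweets = [("bot1", "target", 900), ("bot2", "target", 850),
            ("bot3", "target", 910), ("bot1", "target2", 880),
            ("bot2", "target2", 860), ("fan1", "celebrity", 3),
            ("fan2", "celebrity", 2), ("fan3", "celebrity", 4)]
G = nx.DiGraph()
for u, v, w in retweets:
    G.add_edge(u, v, weight=w)

# 1. Find accounts with a high (weighted) retweet count.
HOT = 1000       # invented threshold
hot = {n for n, d in G.in_degree(weight="weight") if d >= HOT}

# 2. Inspect how other accounts retweet them: naturally popular accounts
#    have many light retweeters; popularized ones have a few heavy ones.
HEAVY = 500      # invented threshold
suspect_bots = {u for n in hot for u in G.predecessors(n)
                if G[u][n]["weight"] >= HEAVY}

# 3. Find further accounts whose retweets come only from those bots.
more_fakes = {n for n in G if G.in_degree(n) > 0
              and set(G.predecessors(n)) <= suspect_bots}
print(hot, suspect_bots, more_fakes)
# {'target', 'target2'} {'bot1', 'bot2', 'bot3'} {'target', 'target2'}
```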

Fraud Detection Use Case #2: Identifying Sock Puppets in Social Media

In this case, we used graph technology to identify sock puppet accounts (online identities used for deception; here, different accounts posting the same set of messages) that were working to make certain topics or keywords look more important by making them appear to be trending.

[Image: Sock puppet accounts in social media]

To discover the bots, we had to augment the graph from Use Case #1. Here we:

  • Added edges between authors posting the same messages
  • Counted the number of repeated messages and filtered out low counts to discount accidental unison
  • Applied heuristics to avoid generating O(n²) edges per repeated message

Because the messages were identical across these accounts, we could create subgraphs from those edges and apply a connected components algorithm, as sketched below the figure.

[Image: Sock puppet groups]
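
A minimal sketch of that pipeline, assuming a toy list of (author, message) pairs; the repeat threshold is invented, and the naive pairwise loop below is exactly the O(n²) edge generation the team's heuristics avoid on real data:

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

posts = [("puppet1", "Buy BrandX now!"), ("puppet2", "Buy BrandX now!"),
         ("puppet3", "Buy BrandX now!"), ("puppet1", "BrandX is #1"),
         ("puppet2", "BrandX is #1"), ("alice", "nice weather today")]

# Group authors by identical message text.
by_message = defaultdict(set)
for author, text in posts:
    by_message[text].add(author)

# Add an edge between authors who share a message; weight = shared count.
G = nx.Graph()
for authors in by_message.values():
    for a, b in combinations(sorted(authors), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Filter out low counts to discount accidental unison, then each
# remaining connected component is a candidate sock puppet group.
MIN_SHARED = 2   # invented threshold
G.remove_edges_from([(a, b) for a, b, d in G.edges(data=True)
                     if d["weight"] < MIN_SHARED])
groups = [c for c in nx.connected_components(G) if len(c) > 1]
print(groups)    # e.g. [{'puppet1', 'puppet2'}]
```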

As a result of all of the analysis that we ran on a small sampling, we discovered that what we thought were the most popular brands actually weren’t—our original list had been distorted by bots.

See the image below: the “new” most popular brands barely appear on the “old” most popular brands list, yet they are a much truer reflection of what’s actually popular. This is the information you need.

[Image: Brand popularity rankings before and after removing bots]

After one month, we revisited the identified bot accounts just to see what had happened to them. We discovered:

  • 89% were suspended
  • 2.2% were deleted
  • 8.8% were still serving as bots

Fraud Detection Use Case #3: Circular Payment

A common pattern in financial crimes, a circular money transfer essentially involves criminals sending money to themselves while disguising it as valid transfers between “normal” accounts. These “normal” accounts are actually fake. They typically share certain information because they are generated from stolen identities (email addresses, physical addresses, and so on), and it’s this shared information that makes graph analysis such a good fit for discovering them.

For this use case, you can create a graph from the transactions between entities, plus edges between entities that share information such as email addresses, passwords, or physical addresses. Once the graph is built, all we have to do is write a simple query to find all customers with accounts that share similar information and, of course, that are sending money to each other in a loop. A sketch follows the figure below.

[Image: Circular payment pattern as a graph]
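
A sketch of both halves of that query, with invented account records; NetworkX's cycle enumeration stands in for the graph query described above:

```python
from itertools import combinations
import networkx as nx

# Invented account records and transfers.
accounts = {
    "acct1": {"email": "mule@example.com", "address": "1 Elm St"},
    "acct2": {"email": "mule@example.com", "address": "9 Oak Ave"},
    "acct3": {"email": "other@example.com", "address": "1 Elm St"},
}
transfers = [("acct1", "acct2"), ("acct2", "acct3"), ("acct3", "acct1")]

# Money movement between accounts as a directed graph.
T = nx.DiGraph(transfers)

# Circular transfers: money that eventually returns to its origin.
cycles = list(nx.simple_cycles(T))

# Accounts sharing identifying information (stolen-identity reuse).
shared = [(a, b, field)
          for (a, da), (b, db) in combinations(accounts.items(), 2)
          for field in da if da[field] == db[field]]

print(cycles)  # e.g. [['acct1', 'acct2', 'acct3']]
print(shared)  # [('acct1', 'acct2', 'email'), ('acct1', 'acct3', 'address')]
```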

Fraud Detection Use Case #4: VAT Fraud Detection

Because products in Europe cross so many borders, each with different rules about who pays tax to which country, VAT (Value Added Tax) fraud detection can get very complicated.

In most cases, the importer should pay the VAT, and if the products are exported to other countries, the exporter should receive a refund. But when there are other companies in between, deliberately obfuscating the process, it gets much harder to follow. The importing company delays paying the tax for weeks or months. The companies in the middle are paper companies. Eventually, the importing company vanishes; it never pays the VAT but has still collected payment from the exporting company.

[Image: VAT fraud scheme]

This can be very difficult to untangle, but not with graph analysis. You can easily create a graph from the transactions: who are the resellers, and who is creating the companies?

In this real-life analysis, Oracle Practice Manager Wojciech Wcislo looked at how money flows through the network to identify suspicious companies. He then used an algorithm in Oracle Spatial and Graph to identify the middlemen.

The graph view of VAT fraud detection:

[Image: Graph view of VAT fraud detection]

A more complex view:

[Image: A more complex view of the VAT fraud graph]

In that case, you would (see the sketch after this list):

  • Identify importers and exporters via a simple query
  • Aggregate VAT invoice amounts as edge weights
  • Run a fattest path algorithm

And you will then discover common “middle man” nodes where the flows aggregate.
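
Here's a hedged sketch of a fattest path search, read as the classic widest-path problem (maximize the smallest edge weight along a route); the invoice network, company names, and amounts are invented:

```python
import heapq
import networkx as nx

def fattest_path(G, source, target):
    """Dijkstra variant that maximizes the minimum edge weight along a
    path (the bottleneck), so the heaviest VAT flows stand out."""
    best = {source: float("inf")}     # best bottleneck found per node
    prev = {}
    heap = [(-best[source], source)]  # max-heap via negation
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if b < best.get(u, 0.0):      # stale queue entry
            continue
        for v in G.successors(u):
            cand = min(b, G[u][v]["weight"])
            if cand > best.get(v, 0.0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    path, node = [target], target     # KeyError here means unreachable
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], best[target]

# Invented VAT-invoice flows: importer -> paper companies -> exporter.
G = nx.DiGraph()
for u, v, w in [("importer", "shell1", 900.0), ("shell1", "middleman", 850.0),
                ("importer", "shell2", 120.0), ("shell2", "middleman", 100.0),
                ("middleman", "exporter", 940.0)]:
    G.add_edge(u, v, weight=w)

print(fattest_path(G, "importer", "exporter"))
# (['importer', 'shell1', 'middleman', 'exporter'], 850.0)
```

The node where the fattest paths repeatedly pass through (here, the invented "middleman") is the kind of aggregation point the analysis surfaces.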

Fraud Detection Use Case #5: Money Laundering and Financial Fraud

Conceptually, money laundering is pretty simple. Dirty money is passed around to blend it with legitimate funds and then turned into hard assets. This was the kind of process discovered in the Panama Papers analysis.

These tax evasion schemes often rely on false resellers and brokers who are able to apply for tax refunds to avoid payment.

But graphs and graph databases provide relationship models. They let you apply pattern recognition, classification, statistical analysis, and machine learning to these models, enabling more efficient analysis at scale across massive amounts of data.

In this use case, we’ll look more specifically at case correlation. Whenever regulations dictate that certain transactions are suspicious, those transactions get a closer look from human investigators. The goal is to avoid inspecting each individual activity separately and instead group suspicious activities together through pre-known connections.

[Image: Money laundering case correlation]

To find these correlations with a graph-based approach, we implemented this flow on general-purpose graph engines, using pattern-matching queries (path finding) and a connected components algorithm (with filters), as sketched below.
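
A minimal sketch of that correlation step, with invented alert records; the shared-account rule and same-country filter stand in for whatever pre-known connections and jurisdictional filters apply:

```python
from itertools import combinations
import networkx as nx

# Invented suspicious-activity alerts flagged by regulation rules.
alerts = [
    {"id": "a1", "accounts": {"acct1", "acct2"}, "country": "PL"},
    {"id": "a2", "accounts": {"acct2", "acct3"}, "country": "PL"},
    {"id": "a3", "accounts": {"acct7", "acct8"}, "country": "DE"},
]

G = nx.Graph()
G.add_nodes_from(a["id"] for a in alerts)

# Connect alerts through a pre-known connection (a shared account),
# subject to a filter (here: same country).
for a, b in combinations(alerts, 2):
    if a["accounts"] & b["accounts"] and a["country"] == b["country"]:
        G.add_edge(a["id"], b["id"])

# Each connected component becomes one correlated case for review.
print(list(nx.connected_components(G)))  # [{'a1', 'a2'}, {'a3'}]
```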

With this method, the company didn’t have to create its own custom case correlation engine; it could use graph technology instead, which offers better flexibility. That flexibility is important because different countries have different rules.

Conclusion

In today’s world, scammers are getting ever more inventive. But so is the technology for catching them. Graph technology is an excellent way to discover the truth in data, and it is a tool that’s rapidly becoming more popular. If you’d like to learn more, you can find white papers, software downloads, documentation, and more on Oracle’s Big Data Spatial and Graph pages.

And if you’re ready to get started with exploring your data now, we offer a free guided trial that enables you to build and experiment with your own data lake.





UN secretary-general wants global regulations to combat cyberwars


UN Secretary-General António Guterres on Monday called for the creation of a regulatory body charged with fighting electronic warfare campaigns that target civilians.

While speaking at his alma mater, the University of Lisbon, the UN chief called for a global set of rules to help protect civilians from disinformation campaigns, many of which have revolutionized the way interested parties weaponise information through the internet and social media networks.

State-sponsored computer hackers, including “Fancy Bear” and “Cozy Bear” (both controlled by Russia’s intelligence services), have disrupted multinational firms and public services, as well as political campaigns and, most recently, the opening ceremony of the ongoing Pyeongchang Winter Olympic Games.

“Episodes of cyber warfare between states already exist. What is worse is that there is no regulatory scheme for that type of warfare. It is not clear how the Geneva Convention or international humanitarian law applies in these cases,” Guterres said while speaking at the University of Lisbon. “I am absolutely convinced that unlike the great battles of the past, which opened with a barrage of artillery or aerial bombardment, the next major war will begin with a massive cyber attack to destroy military capacity and to paralyse basic infrastructure, including electric networks.”

Cyber-warfare has moved to the forefront of military planning over the last decade. Since Russia’s GRU military intelligence unit successfully tested its ability to disrupt public services in Estonia and Georgia more than a decade ago, Western military planners have scrambled to counter the advances Moscow has made in developing advanced cyber-warfare strategies.

NATO is in the process of drafting cyberwar principles that will act as a strategic framework guiding the alliance’s response in the event of a crippling cyber attack on its command structure or the deployment of cyberweapons against one of the alliance’s members. NATO command hopes to have a broad plan in place by 2019, but questions remain as the US administration under Donald Trump has continued its lukewarm embrace of the 68-year-old North Atlantic Alliance.

During his speech in Lisbon, Guterres offered to use the UN as a platform for scientists, programmers, and government representatives to develop rules that would help minimise the amount of access certain agents of war would have when trying to make contact with unwitting civilians.

Guterres said he believed it possible for leading computer specialists and like-minded lawmakers to create a set of rules that would “guarantee the more humane character” of a conflict involving information technology and help preserve cyberspace as “an instrument in the service of good”, but warned that time was not on their side, as technological advances far outpace the traditional methods of working out universally accepted rules such as the Geneva Conventions of 1864-1949.


Air Force officer discusses report on Russian meddling at MU

Rudi Keller @CDTCivilWar

During the Soviet era, Air Force Lt. Col. Jarred Prier wrote in his journal article “Commanding the Trend: Social Media as Information Warfare,” Russia used its propaganda tools to plant believable lies in foreign media, intending to sow discord among allies of the United States or weaken it in the eyes of other nations.

Now, as the indictments handed down Friday by Special Counsel Robert Mueller show, Russian disinformation campaigns manipulate opinion here. They have been so successful, Prier said in an interview, that his findings that the Russian cyber warfare team targeted the 2015 turmoil at the University of Missouri will not be believed by a large segment of the public.

 

“There are people who at face value don’t believe what you said because you said Russia did something,” Prier said. “On the opposite side, political left is so willing to believe anything that has to do with Russia right now.”

Prier is currently serving as director of operations for the 20th Bomb Squadron. He has studied the social media propaganda techniques of the Islamic State and Russia and found similar tactics used to serve different strategic goals. He spoke to the Tribune by telephone Wednesday.

Adopting the #PrayForMizzou hashtag in the hours after former UM System President Tim Wolfe resigned, Russian cyber trolls and their robotic repeaters stoked fear of a violent white backlash, Prier found in his peer-reviewed research, published in November 2017 in Strategic Studies Quarterly.

Some of the fear was well-grounded. A threat from inside Missouri posted on Yik Yak led to the arrest of Hunter Park in Rolla. But much of it was baseless, fed by Russian Twitter accounts including one with the handle @FanFan1911 and a user name of “Jermaine,” whose avatar was a photo of a black man. @FanFan1911 tweeted falsely that the Ku Klux Klan was marching on the campus backed by police.

Prier, a 2003 MU graduate, traced the activities of @FanFan1911 and other Russian troll actors while doing master’s degree research at the Air University for the School of Advanced Air and Space Studies. He remembered @FanFan1911 specifically because he called the Twitter user a liar on Nov. 11, 2015.

He’s not 100 percent certain that @FanFan1911 was Russian, he said. But given the way the user’s targets changed (from Europe to the United States, back to Europe during the Syrian refugee crisis, and back to the U.S. during the election) and the way robots were set up to retweet him, the account fits every measure he has available.

“The final discriminator was after Hillary Clinton used basket of deplorables in a speech, all the accounts I had been monitoring changed their names to deplorables-something or other,” Prier said. “It was bizarro world.”

Prier’s findings about how Russians inserted themselves into MU’s problems make up only a small portion of his article, which is a broader look at the social media tactics employed by the Islamic State and Russia to achieve their strategic goals and how U.S. policy makers should consider it a new field of competition.

The title of Prier’s article is an allusion to Giulio Douhet’s seminal 1921 work on air power, “The Command of the Air.” After World War I, Douhet imagined massive fleets of bombers that would reduce cities to rubble, demoralizing inhabitants and forcing their leaders to surrender.

Douhet correctly imagined the extent of future air power but not the result. In his concluding paragraph, Prier puts defense in the social media field on par with protecting infrastructure and information subject to hacking.

“This was not the cyber war we were promised,” Prier wrote. “Predictions of a catastrophic cyberattack dominated policy discussion, but few realized that social media could be used as a weapon against the minds of the population.”

Prier’s work is now being read at the National Intelligence University, where agents are trained.

HOW IT WORKED

On May 21, 2016, about a dozen white supremacists gathered outside the Houston Da’wah Islamic Center, attracted by a Facebook post by a group calling itself Heart of Texas for a protest event to “Stop the Islamization of Texas.” A counter-demonstration, also organized via Facebook by a group calling itself United Muslims of America, drew about 50 counterprotesters for an event to “Save Islamic Knowledge.”

Both events were organized by Russian agents who spent $200 to manipulate behavior on a local level in the United States, the Senate Intelligence Committee revealed Nov. 1, 2017.

“It is an interesting notion to have forces from outside come in and try to manipulate attitudes and public behaviors by inciting different groups to take action,” said Peverill Squire, professor of political science at MU. “It casts modern day politics in a different light.”

In the indictment, Mueller charged that Russia spent $1.25 million per month to influence the 2016 election. The activity began in 2014 and the indictment names the Internet Research Agency, identified by Prier as the likely home of the Twitter trolls he researched, first among 16 defendants.

The short-term result of the Russians’ focus on MU was to sow fear. The long-term damage to MU’s reputation was a false impression that the 2015 protests were violent. The episode served Russia’s strategic goal of reducing the U.S. presence on the world stage by focusing public attention on internal divisions, Prier said.

“They want to force the American public to go over into a corner and argue amongst themselves,” Prier said.

Prier’s analysis is “spot on,” said Cooper Drury, an MU professor of political science who researches foreign policy issues. The Russian long-term goal is not the victory of any political party but a weaker U.S., he said.

“If that is what your goal is, disruption, then the greater polarization you can get inside a democracy the more successful you will be,” Drury said.

The indictment states that Russia used its social media campaigns for the benefit of Donald Trump in the Republican Party and Sen. Bernie Sanders in the Democratic Party. The propaganda worked especially well because it created a false impression that there were vast numbers of people agitating a particular view, Prier said.

“At that time there was a kind of symbiotic relationship between legitimate American conservative thought and these Russian trolls,” he said. “These Russian trolls were driving clicks. Clicks are what keeps the business moving.”

It is the persuasion effect, said Mitchell McKinney, professor of communication at MU. Propaganda easily identified is likely to be discounted as false by most people, he said. Social media helps mask the source and volume creates believability, he said.

“So bombarded at every turn, they insert messages that may seem plausible or in the environment of uncertainty or environment of fear, insert message that might be accepted,” McKinney said.

The most successful are validated when they are reported by trusted news organizations, he said.

Prier’s findings that show the Russians used a network of human and robotic accounts to spread their messages fit what Mike Kearney, an assistant professor of journalism, found as he wrote his doctoral thesis on Twitter use in the 2016 election. He found hundreds of accounts that stopped tweeting as soon as the election was over, Kearney said.

“What doesn’t surprise me is that there is a lot of activity on Twitter that I don’t think is authentic in the way that we would think of it,” Kearney said.

DEFENSE MEASURES

In the fall of 2015, Prier was a major on a fellowship at Georgetown University’s Institute for the Study of Diplomacy, where he studied Islamic State social media. Part of his time was spent working at the State Department, he said.

The protests at MU exploded from a local news story to a major national and international story and a top topic for days on social media sites.

Prier didn’t take a screen grab of @FanFan1911’s tweet about the KKK, which included a picture of a black child with a bruised face and the fake accusation that he had been beaten on campus. He can’t be sure exactly when it was inserted into the stream but he remembers calling @FanFan1911 a liar and tweeting back the source of the photo, a story about a child beaten by police in 2013 in Ohio.

“I was livid because these were people saying things about my university and they were making me mad,” Prier said.

In 2015, the problem posed by ISIS social media was their successful recruiting, Prier said.

The accounts that targeted MU also sent messages amplifying ISIS propaganda, which seemed strange at the time. That is why he returned to them for study at the Air University. He spent hours researching accounts, creating spreadsheets in which he identified the accounts he believed were run by live humans generating the messages and those that were automatic repeater accounts.

“FanFan and about a dozen accounts I saw, they were mostly attack dogs, attacking journalists and trying to build a narrative,” he said.

Prier and the MU faculty interviewed for this article agreed that the best defense for individuals is a healthy skepticism of ideas spread on social media. Prier’s findings about how MU became enmeshed in Russian social media operations were surprising but show how important it is to be careful of ideas from unknown sources, McKinney said.

“I was surprised just on the level of, this was such an immediate or personal issue for all of us at the university,” McKinney said. “Then to see what we had learned or were reading in terms of Russian involvement through social media in our national elections and at the national level, that that sort of targeting events in our country would even be down at the local level.”

The polarization of political life in the U.S. wasn’t created by Russian social media, Drury said. The traditional media, once trusted as a neutral provider of information, now has outlets that openly take sides, he said.

“Democrats don’t like to watch Fox news and Republicans don’t watch MSNBC, unless they want to get their blood pressure up,” Drury said.

Prier’s article seems pessimistic, Kearney said, as though there was no defense against being manipulated.

“But the corollary is that it makes it more easy to share and find information by ourselves,” Kearney said. “It is certainly direction in the progress of free information. It is easy for us to point to the bad, especially when it takes form or takes shape in ways that we didn’t expect.”

That was what he did when he called @FanFan1911 a liar, Prier said. But it was like spitting into a hurricane – it did not calm the tempest.

It is up to all providers of information – platforms like Twitter, outlets such as the Tribune and especially politicians – to be careful, Prier wrote. The platforms could ban robot accounts, which would eliminate trend creation but would hurt advertisers, he wrote.

“Journalists should do a better job of vetting sources rather than just retweeting something,” Prier said. “And the last piece of advice I give is that politicians got to quit using it.”

rkeller@columbiatribune.com

573-815-1709
