The Forgotten Tribe and The Dell Digital Way

As digital transformation hype continues to grow, IT is still an enabling function that exists to deliver business outcomes. As with other support functions like human resources, finance, and legal, it’s very common in IT to refer to functions outside of IT as “the business.” The business is trying to grow margin dollars. The business is trying to increase productivity. The business is trying to reduce customer effort. But who is this nameless, faceless business that IT supports? The face of the business is the forgotten tribe – the tribe of users – the people who actually use your software tools, sometimes for many hours a day.

Our Dell Digital team is working hard to put a face on the business to enable exceptional outcomes even faster and with less rework. We call it the Dell Digital Way – a major cultural shift for us built on people, process, and technology. Heavily inspired by our brothers and sisters at Pivotal, we’re combining elements of design thinking, Agile, SAFe, extreme programming, and IoT. That’s a lot of jargon, so what exactly are we doing? We’re taking the Pivotal methodology, adding in a few dashes of our own, and applying it across everything that we do.

First, we start with user empathy, the hallmark of design thinking, and we are doing it with professionally trained designers – actually spending time to understand not only what our users do but how they do it and what motivates them. Qualitative empathy is critical, but I can’t do it justice compared to the classic TEDx talk by Doug Dietz. What we learned is that users care most about an effortless experience and far less about new bells and whistles. In their hierarchy of needs, users want applications that are first up, then fast, and ultimately easy.

Our human-centered approach brings qualitative and quantitative methods together. We’ve adopted an iterative approach focused on user empathy, with elements of Agile, extreme programming, and SAFe, to release small increments in days or weeks. We always write test cases before developing any user story. Finally, we instrument our applications (not just the website) in the spirit of IoT. The software application itself is the thing, and instrumentation gives us performance and adoption feedback so we can continue to fine-tune our interface configuration as well as optimize the performance of backend calls. Using this quantitative empathy, we can begin the cycle of qualitative empathy over again at the start of the next cycle.
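To make that concrete, here is a minimal sketch of the kind of instrumentation we mean. The telemetry endpoint, event names, and fields are illustrative assumptions, not our actual implementation:

```python
# Minimal sketch of application instrumentation for "quantitative empathy".
# TELEMETRY_URL, event names, and fields are hypothetical placeholders.
import json
import time
import urllib.request
from functools import wraps

TELEMETRY_URL = "https://telemetry.example.com/events"  # hypothetical collector

def emit(event: str, **fields) -> None:
    """Fire-and-forget telemetry; instrumentation must never break the app."""
    payload = json.dumps({"event": event, "ts": time.time(), **fields}).encode()
    try:
        req = urllib.request.Request(
            TELEMETRY_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass  # drop the event rather than degrade the user experience

def instrumented(name: str):
    """Decorator that records latency and success/failure of backend calls."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                emit("backend_call", name=name, ok=True,
                     ms=(time.perf_counter() - start) * 1000)
                return result
            except Exception:
                emit("backend_call", name=name, ok=False,
                     ms=(time.perf_counter() - start) * 1000)
                raise
        return inner
    return wrap

@instrumented("get_case_history")  # hypothetical backend call
def get_case_history(case_id: str):
    ...  # the real service call goes here
```

Adoption events (button clicks, feature usage) emitted the same way tell us which parts of the interface users actually touch, and that data seeds the next round of qualitative empathy.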

A great example of where we’ve applied this approach is our current Salesforce Service Cloud implementation. We started by setting up a lab in our contact center and selected a team of users that represented a small sample of the total user population. Our team of product managers, designers, and engineers spent hours and days observing and building rapport with the users. In parallel, they started configuring (never customizing) the application and doing demos with the users. Prior to configuring each user story, they wrote test cases to ensure story success.

There’s a common misconception that you don’t need UX with SaaS because there’s already a UI. When you decide to go with a SaaS application, you are outsourcing the UI, but the UX is still in your hands. SaaS platforms generally give you enough degrees of freedom to overwhelm users if you don’t make a conscious commitment to design and control complexity throughout the life of the application. If you empathize with your users and apply design and analytics properly, you’ll see this forgotten tribe celebrate their tools instead of wrestling with them, improving the employee experience and ultimately benefiting customers. This is the art of delivering a world-class end-user experience and the outcome we expect from the Dell Digital Way.

The post The Forgotten Tribe and The Dell Digital Way appeared first on InFocus Blog | Dell EMC Services.



Related:

Infopacalypse Now

This is one of the most important topics for the Martial Citizen today. Twisting and maligning information to fit a certain agenda is the new norm in “news.” Please stay truly informed and don’t be fooled!

“What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?” technologist Aviv Ovadya warns.

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in the San Francisco Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web, and the information ecosystem that had developed around it, was wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the big tech platforms — including a few from Facebook who would later go on to drive the company’s News Feed integrity effort.

Read the remainder at BuzzFeed News

Using Machine Learning to Stop Fake News

Given all the brilliant things that are happening today with machine learning and artificial intelligence, I just don’t understand why “fake news” is still an issue. I think the solution is right in front of us; that is, if social media networks are really serious about addressing this problem.

Facebook is one of the biggest culprits in tolerating fake news, and that probably has a lot to do with the “economics of social engagement.” An article titled “Future of Social Media” summarizes the challenge nicely:

“While it’s great that everyone and her brother has access to create content online, offering a more diverse and thriving online market, this also generates stronger competition for your content to break through the clutter and be seen.

In fact, there will be a time in which the amount of content internet users can consume will be outweighed by the amount of content produced. Schaefer calls this “Content Shock” which, unfortunately, is uneconomical.”

Figure 1 shows the area of “Content Shock,” when the ability to create content outstrips the ability for humans to consume it.

Figure 1: Economics of Content and “Content Shock”

The article recommends creating “content that will stand out” in order to draw attention and create engagement. Well, nothing draws attention and creates engagement like “fake news.” For example, here are some fake news articles and the number of Facebook engagements each of them drove[1]:

  • “Pope Francis shocks world, endorses Donald Trump for president” – 960,000 Facebook engagements
  • “WikiLeaks confirms Hillary sold weapons to ISIS … Then drops another bombshell” – 789,000 Facebook engagements
  • “FBI agent suspected in Hillary email leaks found dead in apartment murder-suicide” – 567,000 Facebook engagements

That’s an awful lot of Facebook engagements with news that isn’t true, but the “news” certainly does “stand out” in the crowded content space and it certainly does drive engagement.

Solving the Fake News Problem

So assuming that the social media networks truly are motivated to solve the “fake news” problem, here is how I would do it.

  • Step 1: Leverage crowdsourcing to flag potential fake news articles. Social media networks could create a “Fake News” button that flags potential fake news, like Yahoo Mail does today to flag potential spam (see Figure 2).
Figure 2: Flagging Potential Email Spam in Yahoo Mail

  • Step 2: Human Reviewers would need to review the flagged “fake news” articles to determine which ones are fake and which are not. Maybe the reviewers could even add metadata that captures information such as “degree of fakeness” (i.e., is it an outright lie or just a slight twisting of the facts?) and “severity of fakeness” (i.e., fake news about a celebrity isn’t nearly as severe as fake news about a political candidate; heck, there are certain celebrities whose fame seems to be based entirely upon fake news… the Kardashians?).
  • Step 3: Apply Supervised Machine Learning algorithms to the flagged potential “fake news” articles to find (quantify) correlations and predictors (i.e., combinations of words, phrases, and topics) of “fake news” outcomes, then use the resulting “fake news” models to score each new article’s “level of fakeness” (a minimal sketch follows this list). Remember, Supervised Machine Learning algorithms identify and quantify relationships between potential predictive variables and known outcomes (e.g., spam, fraudulent transaction, machine failure, web click, purchase transaction) gathered from historical (training) data sets, and then apply those models to new data sets.
  • Step 4: Create “Reader Credibility Scores” to rank the credibility of the people flagging fake news articles. It is critical to create reader credibility scores (think FICO scores, or Uber driver and passenger ratings) to measure the integrity of the folks who are flagging potential fake news (as well as those who are promoting it). That will help to identify “trolls”[2] who are just trying to perpetuate the fake stories or cast doubt on real news (a scoring sketch follows Step 5 below).
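As noted in Step 3, scoring fakeness is a classic supervised text-classification problem. Here is a minimal sketch using scikit-learn; the tiny training set and its labels are purely illustrative stand-ins for a reviewer-labeled corpus from Step 2:

```python
# Step 3 sketch: a supervised "fakeness" model trained on articles that human
# reviewers (Step 2) have already labeled. The data below is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Pope Francis shocks world, endorses candidate for president",   # flagged fake
    "Senate passes budget bill after lengthy floor debate",          # verified real
    "FBI agent found dead in apartment murder-suicide",              # flagged fake
    "Quarterly earnings beat analyst expectations on strong sales",  # verified real
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = not fake (reviewer verdicts from Step 2)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),  # words and phrases
    LogisticRegression(),
)
model.fit(articles, labels)

# Score a new article's "level of fakeness" as a probability.
new_article = "WikiLeaks confirms shocking bombshell about candidate"
fakeness = model.predict_proba([new_article])[0, 1]
print(f"level of fakeness: {fakeness:.2f}")
```

In practice the model would be trained on a far larger labeled corpus, and the reviewers’ “degree of fakeness” and “severity of fakeness” metadata could serve as additional targets or features.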

Amazon already supports the flagging of potential “trolls” and “fake reviews” in its customer reviews (see Figure 3).

Figure 3: Flagging Fake Reviews

  • Step 5: Create “Publisher Credibility Scores” that measure the credibility and reliability of each publisher or source of an article. This score would draw on the results of the fakeness analysis (how many fake articles is that publisher responsible for?) but could also include other variables, such as the number of employees working for the publisher and its tenure in the business (e.g., the Wall Street Journal has around 3,600 employees and has been publishing since 1851, versus Liberty Writers News, which has 2 employees and has been publishing only since 2015). Heck, there is even a Wikipedia page, “List of fake news websites,” that lists known fake news sites such as Liberty Writers News, American News, Disclose TV, Drudgereport.com, and World Truth TV.
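As promised under Step 4, here is a minimal sketch of how the credibility scores in Steps 4 and 5 might be maintained: a Laplace-smoothed “batting average” that is updated each time a human reviewer confirms or rejects a flag. The identifiers and numbers are illustrative assumptions:

```python
# Steps 4-5 sketch: credibility scores for flaggers (and, with the same
# machinery, publishers), updated as reviewers confirm or reject each flag.
from collections import defaultdict

flag_stats = defaultdict(lambda: {"confirmed": 0, "total": 0})

def record_flag(reader_id: str, was_correct: bool) -> None:
    stats = flag_stats[reader_id]
    stats["total"] += 1
    if was_correct:
        stats["confirmed"] += 1

def credibility(reader_id: str) -> float:
    """Laplace-smoothed score in (0, 1); new readers start near 0.5."""
    stats = flag_stats[reader_id]
    return (stats["confirmed"] + 1) / (stats["total"] + 2)

# A troll who flags real news as fake sees their score sink...
for _ in range(8):
    record_flag("troll_42", was_correct=False)
# ...while a consistently accurate reader earns trust.
for _ in range(8):
    record_flag("reader_7", was_correct=True)

print(credibility("troll_42"))  # 0.1 -> weight this reader's flags lightly
print(credibility("reader_7"))  # 0.9 -> weight this reader's flags heavily
```

The same scoring works per publisher for Step 5; a publisher’s fakeness-based score could then be blended with the tenure and staffing signals mentioned above.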

Freedom of Speech and Type I/Type II Errors

Machine Learning could certainly help to mitigate and flag fake news, but probably cannot and should not even try to eliminate it entirely. Why? It’s the First Amendment of the Constitution and it’s called Freedom of Speech.

One important consideration as social media organizations look to squelch fake news is not to violate Freedom of Speech. So instead of outright deletion of questionable publications (other than for pornographic, libelous, or hate-crime reasons), it might be better for the social media sites to use some sort of “Degrees of Truth” indicator that could accompany each publication or article. These indicators might look something like Figure 4.

Figure 4: Degrees of Truthfulness Indicators
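A “Degrees of Truth” indicator could then be a simple banding of the model’s fakeness score. A minimal sketch, with purely illustrative bands and labels:

```python
# Sketch: map a fakeness probability to a "Degrees of Truth" indicator
# instead of deleting the article. The bands and labels are illustrative.
def truth_indicator(fakeness: float) -> str:
    if fakeness < 0.25:
        return "No concerns raised"
    if fakeness < 0.50:
        return "Disputed by some reviewers"
    if fakeness < 0.75:
        return "Likely misleading"
    return "Flagged as fake by reviewers"

print(truth_indicator(0.12))  # No concerns raised
print(truth_indicator(0.81))  # Flagged as fake by reviewers
```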

The cost to society of blocking potentially valid news (a false positive) greatly outweighs the cost of letting a few fake news articles get published (a false negative). So one will need to err on the side of allowing some level of fake news to ensure that one is not blocking real (though perhaps controversial) news. See my blog “Understanding Type I and Type II Errors” to learn more about the potential costs and liabilities associated with Type I and Type II errors.
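One way to operationalize that asymmetry is to price each error type explicitly and pick the blocking threshold that minimizes expected cost. A minimal sketch; the 10:1 cost ratio and the sample scores are assumptions for illustration:

```python
# Sketch: choose a blocking threshold when suppressing real news (Type I
# error) is costlier than letting a fake article through (Type II error).
COST_BLOCK_REAL = 10.0  # Type I: valid news wrongly blocked
COST_MISS_FAKE = 1.0    # Type II: fake news slips through

def expected_cost(threshold: float, scored_articles) -> float:
    total = 0.0
    for fakeness, is_fake in scored_articles:
        blocked = fakeness >= threshold
        if blocked and not is_fake:
            total += COST_BLOCK_REAL  # blocked a real article
        elif not blocked and is_fake:
            total += COST_MISS_FAKE   # missed a fake article
    return total

# (model score, reviewer verdict) pairs -- illustrative only
sample = [(0.9, True), (0.6, False), (0.55, True), (0.3, False)]
cost, threshold = min(
    (expected_cost(t / 100, sample), t / 100) for t in range(101)
)
print(cost, threshold)  # the threshold rises until real news is rarely blocked
```

Because blocking real news is priced ten times higher than missing a fake article, the minimizing threshold climbs above the ambiguous middle band of scores, which is exactly the “err on the side of allowing” posture described above.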

Machine Learning to End Fake News

Ending fake news seems like the perfect application of machine learning. Organizations like Yahoo, Google, and Microsoft have been using machine learning for years to catch spam (see the article “Google Says Its AI Catches 99.9 Percent Of Gmail Spam”). And companies like McAfee and Symantec employ machine learning to catch viruses (see the article “Malware Detection with Machine Learning Methods”).

Fake news looks a lot like spam and viruses to me. It should be an easy problem to solve, if one really wants to solve it.

[1] http://www.cnbc.com/2016/12/30/read-all-about-it-the-biggest-fake-news-stories-of-2016.html

[2] A troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages with the intent of provoking readers into an emotional response or of otherwise disrupting normal, on-topic discussion. https://en.wikipedia.org/wiki/Internet_troll

The post Using Machine Learning to Stop Fake News appeared first on InFocus Blog | Dell EMC Services.



Related:

Need Search Template Criteria customization advice

First, I would like to know if it is possible to add a button to 1 (Search Criteria). If not, I will add it to 2 (Search Result) via the Default Search Result toolbar.
And most importantly, from this button’s action, can I obtain the search criteria values? I need to do some custom validation on them. If it is possible, could you advise me, at a high level, how to do it?
