As digital transformation hype continues to grow, IT is still an enabling function that exists to deliver business outcomes. As with other support functions like human resources, finance, and legal, it's very common in IT to refer to functions outside of IT as "the business". The business is trying to grow margin dollars. The business is trying to increase productivity. The business is trying to reduce customer effort. But who is this nameless, faceless business that IT supports? The face of the business is the forgotten tribe – the tribe of users – the people who actually use your software tools, sometimes for many hours a day.
Our Dell Digital team is working hard to put a face on the business to enable exceptional outcomes even faster and with less rework. We call it the Dell Digital Way – a major cultural shift for us built on people, process, and technology. Heavily inspired by our brothers and sisters at Pivotal, we're combining elements of design thinking, Agile, SAFe, extreme programming, and IoT. That's a lot of jargon, so what exactly are we doing? We're taking the Pivotal methodology, adding a few dashes of our own, and applying it across everything that we do.
First we start with user empathy, the hallmark of design thinking, and we are doing it with professionally trained designers – actually spending time to understand not only what our users do but how they do it and what motivates them. Qualitative empathy is critical, but I can't do it justice here; the classic TEDx talk by Doug Dietz says it best. What we learned is that users care most about an effortless experience and far less about new bells and whistles. In their hierarchy of needs, users want applications that are first up, then fast, and ultimately easy.
Our human-centered approach brings qualitative and quantitative methods together. We've adopted an iterative approach focused on user empathy with elements of Agile, extreme programming, and SAFe to release small increments in days or weeks. We always write test cases before developing any user story. Finally, we instrument our applications (not just the website) in the spirit of IoT. The software application itself is the "thing," and instrumentation gives us performance and adoption feedback so we can continue to fine-tune our interface configuration as well as optimize the performance of backend calls. Using this quantitative empathy, we can begin the cycle of qualitative empathy over again at the start of the next cycle.
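The instrumentation idea above can be sketched in a few lines. This is a hypothetical illustration, not Dell's actual telemetry stack: a decorator wraps each feature call and records which feature was used and how long it took, which is exactly the performance-and-adoption signal described. The `METRICS` list stands in for whatever metrics store a real team would use.

```python
import functools
import time

# Illustrative in-memory metric sink; a real system would ship these
# records to a telemetry backend instead of a list.
METRICS = []

def instrumented(feature_name):
    """Record latency and usage for a named application feature."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                METRICS.append({
                    "feature": feature_name,
                    "latency_ms": (time.perf_counter() - start) * 1000,
                })
        return wrapper
    return decorator

@instrumented("case_lookup")
def lookup_case(case_id):
    # Placeholder for a backend call whose latency we want to watch.
    return {"id": case_id, "status": "open"}

lookup_case(42)
print(METRICS[0]["feature"], round(METRICS[0]["latency_ms"], 3))
```

Aggregating these records over time shows both which features users actually adopt and which backend calls are slow enough to break the "up, fast, easy" hierarchy.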
A great example of where we've applied this approach is our current Salesforce Service Cloud implementation. We started by setting up a lab in our contact center and selected a team of users that represented a small sample of the total user population. Our team of product managers, designers, and engineers spent hours and days observing and building rapport with the users. In parallel, they started configuring (never customizing) the application and doing demos with the users. Prior to configuring each user story, they wrote test cases to ensure story success.
There's a common misconception that you don't need UX with SaaS because there's already a UI. When you decide to go with a SaaS application, you are outsourcing the UI, but the UX is still in your hands. SaaS platforms generally give you enough degrees of freedom to overwhelm users if you don't make a conscious commitment to design and control complexity throughout the life of the application. If you empathize with your users and apply design and analytics properly, you'll see this forgotten tribe celebrate their tools instead of wrestling with them, improving the employee experience and ultimately benefiting customers. This is the art of delivering a world-class end-user experience and the outcome we expect with the Dell Digital Way.
This is one of the most important topics for the Martial Citizen today. Twisting and maligning information to fit a certain agenda is the new norm in "news". Please stay truly informed and don't be fooled!
“What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?” technologist Aviv Ovadya warns.
In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”
The web and the information ecosystem that had developed around it were wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn't shake the feeling that it was all building toward something bad – a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms – including a few from Facebook who would later go on to drive the company's News Feed integrity effort.
Read the remainder at BuzzFeed News
Given all the brilliant things that are happening today with machine learning and artificial intelligence, I just don’t understand why “fake news” is still an issue. I think the solution is right in front of us; that is, if social media networks are really serious about addressing this problem.
Facebook is one of the biggest culprits in tolerating fake news, and that probably has a lot to do with the “economics of social engagement.” An article titled “Future of Social Media” summarizes the challenge nicely:
“While it’s great that everyone and her brother has access to create content online, offering a more diverse and thriving online market, this also generates stronger competition for your content to break through the clutter and be seen.
In fact, there will be a time in which the amount of content internet users can consume will be outweighed by the amount of content produced. Schaefer calls this “Content Shock” which, unfortunately, is uneconomical.”
Figure 1 shows the area of “Content Shock,” when the ability to create content outstrips the ability for humans to consume it.
The article recommends that you "create content that will stand out" in order to draw attention and create engagement. Well, nothing draws attention and creates engagement like "fake news". Here are some examples of fake news articles and the number of Facebook engagements each of them drove:
That’s an awful lot of Facebook engagements with news that isn’t true, but the “news” certainly does “stand out” in the crowded content space and it certainly does drive engagement.
Solving the Fake News Problem
So assuming that the social media networks truly are motivated to solve the “fake news” problem, here is how I would do it.
Amazon already supports the flagging of potential “Trolls” and “fake reviews” in their customer reviews (see Figure 3).
Freedom of Speech and Type I/Type II Errors
Machine Learning could certainly help to mitigate and flag fake news, but probably cannot and should not even try to eliminate it entirely. Why? It’s the First Amendment of the Constitution and it’s called Freedom of Speech.
One important consideration as social media organizations look to squelch fake news is to not violate Freedom of Speech. So instead of an outright deletion of questionable publications (other than for pornographic, libelous, or hate crime reasons), it might be better for the social media sites to use some sort of "Degrees of Truth" indicator that could accompany each publication or article. These indicators might look like something in Figure 4.
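A "Degrees of Truth" indicator could be as simple as banding a model's truthfulness score into display labels. The sketch below is purely illustrative: the bands and label wordings are my own assumptions, not a published scale, and the score would come from whatever classifier the platform runs.

```python
# Hypothetical banding of a truthfulness score (0.0 = almost certainly
# fabricated, 1.0 = well corroborated) into a label shown beside a post,
# instead of deleting the post outright.
LABELS = [
    (0.25, "Disputed - failed multiple fact-checks"),
    (0.50, "Unverified - sources not confirmed"),
    (0.75, "Partially corroborated"),
    (1.01, "Corroborated by independent sources"),  # 1.01 so score=1.0 lands here
]

def degree_of_truth(score):
    """Map a [0, 1] truthfulness score to a display label."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    for upper, label in LABELS:
        if score < upper:
            return label

print(degree_of_truth(0.10))
print(degree_of_truth(0.90))
```

The key design point is that nothing is suppressed: every article still publishes, and readers get a graded signal rather than a binary verdict.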
The cost to society of blocking potentially valid news (a false positive) greatly outweighs the cost of letting a few fake news articles get published (a false negative). So one will need to err on the side of allowing some level of fake news to ensure that one is not blocking real (though maybe controversial) news. See my blog "Understanding Type I and Type II Errors" to learn more about the potential costs and liabilities associated with Type I and Type II errors.
Machine Learning to End Fake News
Ending fake news seems like the perfect application of machine learning. Organizations like Yahoo, Google, and Microsoft have been using machine learning for years to catch spam (see the article "Google Says Its AI Catches 99.9 Percent Of Gmail Spam"). And companies like McAfee and Symantec employ machine learning to catch viruses (see the article "Malware Detection with Machine Learning Methods").
Fake news looks a lot like spam and a virus to me. Should be an easy problem to solve, if one really wants to.
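The spam-filter analogy can be shown with the same technique spam filters started from: a naive Bayes classifier over the words in a headline. Everything below is a toy sketch, the training headlines are invented, and a real system would train on large labeled, fact-checked corpora rather than six examples.

```python
import math
from collections import Counter

# Invented toy training set: (headline, label) pairs.
TRAIN = [
    ("shocking secret they dont want you to know", "fake"),
    ("you wont believe this miracle cure", "fake"),
    ("doctors hate this one weird trick", "fake"),
    ("senate passes budget bill after long debate", "real"),
    ("local council approves new transit plan", "real"),
    ("quarterly earnings meet analyst expectations", "real"),
]

def train(examples):
    """Count word and class frequencies per label."""
    word_counts = {"fake": Counter(), "real": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Naive Bayes with Laplace smoothing: pick the higher-scoring label."""
    vocab = set()
    for counter in word_counts.values():
        vocab.update(counter)
    total = sum(class_counts.values())
    scores = {}
    for label, counter in word_counts.items():
        score = math.log(class_counts[label] / total)  # log prior
        denom = sum(counter.values()) + len(vocab)     # smoothed denominator
        for word in text.split():
            score += math.log((counter[word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

wc, cc = train(TRAIN)
print(classify("you wont believe this shocking trick", wc, cc))  # fake
print(classify("council approves budget bill", wc, cc))          # real
```

The same bag-of-words machinery behind Gmail's spam filter transfers directly; the hard part in practice is not the classifier but assembling trustworthy labels at scale.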
 A troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages with the intent of provoking readers into an emotional response or of otherwise disrupting normal, on-topic discussion. https://en.wikipedia.org/wiki/Internet_troll