Schulich Partners With Deloitte To Launch Big Data Analytics Lab

The world is now changing faster than human adaptability can keep pace with. More than ever, it's vital to commit to lifelong learning and strive to keep up with that change.

Enter York University's Schulich School of Business in Toronto, Canada.

In partnership with Deloitte, the school has designed the Cognitive Analytics and Visualization Lab, created to incubate and develop the next generation of business analytic-savvy graduates.

“The analytics lab allows companies to bring their projects to our school,” says Murat Kristal, director of the lab, and director of the Master of Business Analytics program at Schulich, “[as] students work in groups similar to analytics consulting firms.”

Too often, students learning data analytics work only on small, clean data sets. By inviting them to work closely with companies on real-life data issues, Murat explains, they run into bona fide business problems, deepening their understanding of the analytics process.

Collecting data is nothing new. "What makes [it] fashionable now […] is that everyone collects data, and we now have the computing power to analyze [it] in a speedy way, which means every business is going to change."

Executives tell Murat that in the past they made the decisions they did because they fit a traditional methodology, or because they were swayed by a gut feeling, a hunch. "That's no better than tossing a coin," he says, "it's 50/50, with no evidence."

Advancements in the collection and processing of data mean that analytics now offers hard, objective evidence to work from. "[It is] turning the odds in your favor," Murat adds.

The odds may have turned, but as the problems of the past are eradicated, new issues arise. "The biggest problem is not how to analyze the data, but who can analyze the data and understand it [in a business context]," explains Murat.

Within the analytics lab, students are taught how to match the appropriate analytical method to the variety of business problems brought to them. Being able to quickly identify which method will work best could eliminate the slow analytical processes that have held businesses back in the past.

All of the data will be housed in a new building being constructed adjacent to the analytics lab, allowing companies to secure their data on a set of onsite servers. Murat adds that there are also plans to incorporate PhD and executive education into the lab to foster further innovation and research into the available data sets.

“Companies see this as a huge opportunity to experiment with their data,” says Murat. “Any analytics project is a huge undertaking, and it’s a big HR problem getting the people you need to analyze all your data.”

With the Cognitive Analytics and Visualization Lab, that issue is removed as students experiment with the data alongside a professional data scientist and Murat.

The hope is that these small data analytic consulting projects will lead to further innovation in the form of PhD research projects, which could eventually yield a fresh culture of competition among the business community in Canada by offering analytical insights from startups all the way up to major corporations.

A lot has been made of the analytical capacity of tech giants like Facebook, Amazon, and Apple—holding vast swathes of data and the financial power to swiftly analyze it ensures they leave the rest of the competition in the dust.

But rather than leaving smaller players choking on that dust, the Cognitive Analytics and Visualization Lab has the potential to hoover it up, by giving startups and small to medium enterprises (SMEs) more analytical processing power.

“If you think about it, 95% of the economy is made up of SMEs,” says Murat. “[Because of the lab] they will be able to understand and allocate resources in analytics.”

He adds that the lab's potential clients extend to governments and to companies like PwC, EY, Accenture, the Royal Bank of Canada, and Dell.

“They [all] manage huge assets, and when they are managing them, over time they run differently than they are supposed to, as a result of wear and tear,” explains Murat.

"They can now come to us and ask, 'Can you analyze our data?' [and do] the experiments in a low-cost environment."

‘Nong Fah’ leads THAI’s data upgrade

Thai Airways International Plc (THAI) is embracing chatbots and big data analytics to gain insight into customers and boost online sales.

“This year, THAI aims to increase the online revenue proportion from 23% to 25% of total revenue, thanks to the advanced features of data analytics,” said Pariya Chulkaratana, vice-president for e-commerce and ancillary marketing.

Speaking at a seminar hosted by Microsoft, she said the company aims to use data analytics to better understand the customer’s journey and provide a “personalised website”.

The site will offer a predictive one-stop service for flight information, hotel booking and car rental.

Moreover, THAI recently introduced “Nong Fah”, an artificial intelligence (AI) chatbot capable of responding to customer inquiries at www.thaiairways.com.

Nong Fah is available 24/7 to provide instantaneous replies to questions about promotional offers, advance check-in, flight schedule, official merchandise and travel extras.

“Soon we will have a voice-based digital assistant and marketing campaign offered at different channels suited to customer preference, such as Line, SMS or email,” Mrs Pariya said.

The company will also expand its payment channel next month by accepting QR code payment and Samsung Pay. Samsung Pay has 400,000 users in Thailand.

“This will enable the company to reach youngsters and digital lifestyle customers,” Mrs Pariya said. “THAI is embracing digital technology in our organisation, not just to drive the business forward, but to transform it for the future.”

The company’s website saw 30 million visitors in 2017, up 15% year-on-year.

We sent a vulture to IBM’s new developer conference to find an answer to the burning question …

Index At the first IBM Index developer conference in San Francisco, California, on Tuesday, I spent the morning at a Kubernetes workshop learning that when apps on the IBM Cloud Container Service fail to deploy, the reason may not be obvious.

The presenter, IBM cloud program manager Chris Rosen, framed the event as an opportunity to attempt to answer another question that isn’t evident to everyone: Why IBM?

IBM’s Java CTO John Duimovich offered his answer during an interview with The Register. We’ll delve into that discussion in detail in a follow-up article but here’s his sales pitch.

“In the Java space, we’re the experts,” said Duimovich. “We have hardware experts. We’ve actually redesigned instructions on [processor architecture] Power and on [our mainframe] Z over the years to give better Java support. We have our own JVM, OpenJ9, that’s newly open sourced this year. That’s got advanced features that give you the same throughput for half the memory, for example.”

In the cloud, he explained, that translates to workloads that cost half as much to run under memory-based pricing as a standard VM.

“Why IBM?” mused Duimovich. “We are driving a pretty aggressive optimization and ease of use story for cloud Java.”

And there’s more to it than Java and IBM’s embrace of open source.

“The rest of the stuff – big data, analytics, the Watson portfolio – allows you to build a fairly compelling end-to-end cloud native app,” he said. “We have the full picture, and more specifically around Java, we have probably the deepest skills on the planet.”

“Why us?” Duimovich asked. “Why would you pick anyone who doesn’t have a JVM team? Why would you pick someone who doesn’t have a full stack integrated with their cloud as your Java vendor?”

If only the other conference attendees I spoke with shared that certainty.

Trails

With regard to cloud platforms, IBM isn’t top of mind. By the public cloud revenue metrics of IT consultancy Gartner last September, it doesn’t even merit mention by name. Consigned to the “Other” category, Big Blue trails behind Amazon, Microsoft, Alibaba, Google, and Rackspace.

Financial firm Jefferies in a December report was more charitable, ranking IBM third in public cloud revenue behind Amazon and Microsoft.

As for its overall business, IBM, while still mostly profitable, has a lot to prove. In January, the company reported its first quarter of revenue growth after years of shrinking sales. But it’s too soon to tell whether the patient has stabilized.

The humbled services and mainframe giant has placed several high-profile bets on possible future cash cows: its Watson artificial intelligence service, quantum computing, cloud services, and blockchain tech.

Interviewed in a hallway during the conference, John DeFalco, CEO of Custody Cloud, a firm developing applications for law enforcement, had few kind words for IBM.

DeFalco said he was not impressed by IBM's technology stack. He said he'd used some BlueMix services for fintech applications and felt that the company's cloud offering was lacking.

He also expressed skepticism about the corporation’s management. “I would never put a dollar of my money in a company run like IBM has been,” he said.

Nonetheless, he said he bought a ticket to the conference to learn more about blockchain technology, because of its potential for securing and tracking data associated with individuals in police custody. He also expressed interest in TensorFlow.

A developer with IBM’s cloud business who asked not to be named because he was not authorized to speak to the press said he felt generally hopeful about the direction of the company.

He expects IBM will do well with businesses looking for hybrid cloud deployments – these companies want to move some of their applications to the cloud but they also don’t want to move too fast or to give up too much control.

He said he felt Watson was the most promising strategic initiative because it’s not something other companies have.

Wall Street’s Jefferies was not so optimistic in a report issued last July. The finance firm pegged Watson as a money pit, noting “the returns on IBM’s investments aren’t likely to be above the cost of capital.” The firm also expressed doubt about IBM’s ability to compete for AI talent.

A speaker at one of the conference workshops who asked not to be named – as he may want to be invited back – said he wasn’t sure what the answer to the question “Why IBM?” might be. He said he’d been invited at the last minute and wasn’t really that familiar with IBM’s offerings, noting that he used AWS generally and hadn’t seen anything that made him want to switch.

A conference attendee from an open-source database biz, also preferring anonymity, explained his presence by remarking that his firm has an interest in ensuring its product interoperates with open-source cloud application platform Cloud Foundry and IBM offers Cloud Foundry.

Another conference attendee, who asked not to be named, said it wasn’t IBM’s technology in particular that prompted him to attend. Rather, as a front-end developer, it was open source tech that was relevant to his job and interests.

Also, he said, his company had some extra money allocated to conferences that needed to be spent. His ticket to Index helped zero out budgeted funds.

Benjamin Aaron, a data scientist with mobile health app startup Vytality, said he was attending the conference to compare IBM’s deep-learning technology with services available from rivals. The app sends elder care notifications to family and friends, he explained, and he is looking into deep learning as a way to prioritize and filter notifications.

He said he didn't have any feelings about IBM one way or another. "Frankly, the tickets were offered at a discount on Meetup and I figured I might as well," he said. ®

Should You Invest In The Healthcare Sector And MYnd Analytics Inc (NASDAQ:MYND)?

MYnd Analytics Inc (NASDAQ:MYND), a US$11.69M small-cap, is a healthcare company operating in an industry that has experienced tailwinds from higher demand driven by an aging population and the increasing prevalence of diseases and comorbidities. The healthcare tech industry in particular is presented with an array of technology to advance innovation clinically and cost-effectively. Advancements such as implantable devices and treatments, robotic surgery and 3D printing will be key drivers of growth in the industry. Healthcare analysts are forecasting positive double-digit growth of 20.28% for the entire industry in the upcoming year, and single-digit growth of 6.28% over the next couple of years, which is below the growth rate of the US stock market as a whole. An interesting question to explore is whether we can benefit from entering the healthcare tech sector right now. Today, I will analyse the industry outlook and evaluate whether MYnd Analytics is lagging or leading its competitors in the industry.

What’s the catalyst for MYnd Analytics’s sector growth?

[Chart: NasdaqCM:MYND past and future earnings, February 20, 2018]

New R&D methods and big data analytics are creating opportunities for innovations, however, stakeholders have been challenged to keep abreast of this structural shift while under pressure to cut costs. In the past year, the industry delivered growth in the twenties, beating the US market growth of 9.88%. MYnd Analytics leads the pack with its impressive earnings growth of 72.73% over the past year. Furthermore, analysts are expecting this trend of above-industry growth to continue, with MYnd Analytics poised to deliver a 52.86% growth over the next couple of years compared to the industry’s 20.28%. This growth may make MYnd Analytics a more expensive stock relative to its peers.

Is MYnd Analytics and the sector relatively cheap?

[Chart: NasdaqCM:MYND PE and PEG gauge, February 20, 2018]

The healthcare tech sector's PE is currently hovering around 44.74x, higher than the broader US stock market PE of 18.99x. This suggests a somewhat overpriced sector compared to the rest of the market. However, the industry returned a similar 9.37% on equity compared to the market's 10.34%. Since MYnd Analytics's earnings don't seem to reflect its true value, its PE ratio isn't very useful. A loose alternative for gauging MYnd Analytics's value is to assume the stock should be priced relatively in line with its industry.
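As a rough illustration of that "in line with its industry" shortcut, a relative valuation simply applies a benchmark PE to an earnings estimate. The sketch below uses the article's PE figures but a purely hypothetical earnings-per-share number, since MYnd's actual earnings are not given here.

```python
# Minimal relative-valuation sketch. The earnings figure below is a made-up
# placeholder, NOT MYnd Analytics' actual number; the PE ratios come from the
# article (healthcare tech sector 44.74x, broader US market 18.99x).

def implied_value(earnings_per_share: float, benchmark_pe: float) -> float:
    """Price implied by assuming the stock trades in line with a benchmark PE."""
    return earnings_per_share * benchmark_pe

hypothetical_forward_eps = 0.10   # placeholder forward EPS in USD
industry_pe = 44.74               # sector PE from the article
market_pe = 18.99                 # US market PE from the article

print(f"Implied price at industry PE: ${implied_value(hypothetical_forward_eps, industry_pe):.2f}")
print(f"Implied price at market PE:   ${implied_value(hypothetical_forward_eps, market_pe):.2f}")
```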

Next Steps:

MYnd Analytics’s industry-beating future is a positive for investors. If MYnd Analytics has been on your watchlist for a while, now may be the time to enter into the stock, if you like its growth prospects and are not highly concentrated in the healthcare tech industry. However, before you make a decision on the stock, I suggest you look at MYnd Analytics’s fundamentals in order to build a holistic investment thesis.

Big Data Analytics, Governance Align CHI’s 100+ Hospitals

February 20, 2018 – No matter what their scope or scale, healthcare organizations continue to struggle with developing the data analytics and governance competencies to gain much-needed visibility into their opportunities to improve.

Organizations big and small are searching for the skills required to boost their performance and get ahead of financial risks, with success stories on both ends of the size spectrum.

With more than 100 acute care hospitals spread across almost 20 states, Catholic Health Initiatives (CHI) ranks securely among the top ten biggest systems in the nation.

James Reichert, MD, PhD, VP of Analytics and Transformation at CHI (Source: Xtelligent Media)

The sheer size of the $15.9 billion organization, which also includes employed providers, ambulatory clinics, home health facilities, and other environments, sets it apart from many of the country’s other healthcare delivery networks.

With multiple regional division heads responsible for overseeing operations in their territories and a national office that helps data and decisions filter back down to every individual care site, CHI operates on a scale that might seem foreign to the majority of other organizations across the country.

And yet the challenges it faces around using data analytics to improve quality and better serve patients are, fundamentally, the same as the smallest community hospital or rural health clinic.

Communication, collaboration, transparency, and agreement on top priorities are difficult to develop in any organization.

And every type of provider needs to recognize the important role of data analytics and reporting in facilitating the development of these important factors for success.

“Our mission is to provide the best care possible to every patient every time,” said James Reichert, MD, PhD, Vice President of Analytics and Transformation at CHI.

“You can’t improve if you can’t identify where your gaps are, where the opportunities are, where you’re achieving excellence, and how effective your changes are. We recognize that we need data to support what we’re doing.”

Visibility brings a common vision

Clinical, financial, and operational data must be trustworthy, and must be presented without the natural bias that can come from wanting to showcase good performance on key metrics related to safety, quality, and outcomes.

These biases, unconscious though they often were, tended to show up in the reports CHI's central office received from its divisional leaders, Reichert told HealthITAnalytics.com.

“Around 2013, when I joined the national office, we would be getting reports from our component markets on a quarterly basis, and naturally the markets would be putting their best foot forward on those,” he said.

“But because they were putting together their own data and sending it in, they tended to report on their activities in a way that made everyone look like they were doing very well in clinical quality, patient safety, and the patient experience.”

Many of the organizations were indeed performing highly across the majority of domains, he stressed, and there was little sense that any individual market was actively misrepresenting its performance.

“But not everyone can be top of the class at everything,” he pointed out. “The national office started to think they weren’t getting a clear picture of what was really going on in the care environments at the local level.”

The consternation also flowed in the other direction, he added. “The national office would say, ‘We’re going to work on these three initiatives system-wide this year, and we want everyone to improve by such-and-such a percentage.’”

“But the markets would take a look at their data and say, ‘Gee, we’re already pretty good at that – we don’t think that should be our top priority when we have more opportunities and needs somewhere else.’”

Most of the markets were performing well on the majority of top issues, he said. “But if you take a lot of measures together, then everybody’s not doing well in something that they need to work on.”

CHI found that in most cases, a handful of facilities accounted for the majority of the opportunity on any given measure, and decided that a more targeted approach to quality improvement would produce more significant gains in facilities at the lower end of the spectrum without diverting resources unnecessarily in those that were already performing well.

“In order to really understand that in a fair and equitable manner, we needed to create a single source of truth instead of relying on individual components of the system for their interpretation of a measure or a definition,” said Reichert.

“We needed to create transparent reporting across the enterprise with risk-adjusted measures so that everyone is using the same definitions and the same standard metrics so that we could transition to a much longer-term strategic vision.”

Living the mission through better reporting and governance

A more equitable and impactful reporting structure started with choosing the measures that could bring the biggest improvements to the vast organization, Reichert explained.

Nine measures, including metrics around organizational growth, service to vulnerable populations, patient safety, and care quality, now comprise CHI’s “Living Our Mission” initiative.

Standardized frameworks, like the PSI 90 for patient safety and the HCAHPS metrics for patient experience, help CHI compare its activities to its peers by leveraging common definitions and criteria.

“That approach has allowed us to develop focus, credibility, and trust,” said Reichert. “Trust is the most important thing – it’s vital that everyone is using the same analytics and the same reports to understand their performance. They have to trust the data, because then if they do identify a deficiency, they can agree with everyone else that they need to address it.”

To further reduce the chances of discrepancies across different markets, CHI moved from a self-reporting structure to a much more centralized analytics strategy.

“We don’t really want to have a market extracting data from their own facilities and reporting on it, then sending the report to us,” Reichert said. “Instead, we have our markets send all their data directly into a single data warehouse – a centralized repository for clinical and administrative data – and then we do all the value-add to it at the national level.”

“That helps with interoperability and data quality issues, as well,” he continued. “So it doesn’t matter whether the individual market is using Epic or Allscripts or any other electronic health record system: the data simply flows directly out of the source system and into the centralized warehouse, which is more like a data lake.”
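Reichert doesn't describe the pipeline's internals, but the shape of the idea, every market's extract landing in one central repository regardless of source EHR, can be sketched roughly as below. The vendor labels and field mappings are illustrative assumptions, not CHI's actual schema or tooling.

```python
# Rough sketch of EHR-agnostic ingestion into a central repository.
# Source-system names and field mappings are assumptions for illustration only.
from datetime import datetime, timezone

CENTRAL_DATA_LAKE = []  # stands in for the centralized warehouse / data lake

# Each source EHR exports slightly different field names; map them to one shape.
FIELD_MAPS = {
    "epic":       {"mrn": "patient_id", "enc_id": "encounter_id", "val": "value"},
    "allscripts": {"PatientID": "patient_id", "EncounterID": "encounter_id", "Result": "value"},
}

def ingest(source_system: str, record: dict) -> dict:
    """Map a raw source record onto the shared layout and land it centrally."""
    mapping = FIELD_MAPS[source_system]
    landed = {target: record[source] for source, target in mapping.items()}
    landed["source_system"] = source_system
    landed["landed_at"] = datetime.now(timezone.utc).isoformat()
    CENTRAL_DATA_LAKE.append(landed)
    return landed

ingest("epic", {"mrn": "A123", "enc_id": "E9", "val": 7.2})
ingest("allscripts", {"PatientID": "B456", "EncounterID": "E10", "Result": 6.8})
print(CENTRAL_DATA_LAKE)
```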

Not only does this approach ensure the data is as uniform as it can be before undergoing analysis, but it also reduces a potential source of friction between the regional markets and the national headquarters, Reichert said.

“We don’t want our market leaders to spend time questioning whether a metric has been calculated correctly. We don’t want to create that conflict when it isn’t really a necessary step in the process. Our primary goal is to attain alignment with frontline local domain experts, their market leaders, executive leadership, and the board of stewardship of trustees.”

Developing streamlined analytics infrastructure

With so much data from so many sources flowing into a single system, strong data governance is essential for ensuring integrity and concordance.

Extracting data directly from EHRs may prevent many of the interoperability issues involved in synthesizing data from disparate systems, but it doesn’t necessarily solve the problem of harmonizing unstructured data or elements that can be represented in multiple ways.

“Normalization is a key part of our data governance processes,” Reichert said. “For example, for something as common as hemoglobin A1C, we have 53 representations across the enterprise.”

“We need to be able to normalize all of those into one concept so that we can ask the same questions about diabetes control of our locations in Arkansas as we do in Minnesota or Kentucky, no matter what EHR vendor any of those sites are using.”
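A concrete way to picture that normalization step: map every local spelling or code for the lab onto one canonical concept before any cross-site comparison. A minimal sketch, with invented local labels standing in for CHI's actual 53 representations:

```python
# Minimal normalization sketch: collapse locally varying lab names into one
# canonical concept. The local labels below are invented examples.

A1C_SYNONYMS = {
    "hgb a1c", "hemoglobin a1c", "hba1c", "glycated hemoglobin",
    "a1c, whole blood", "hgba1c (poc)",
}
CANONICAL_CONCEPT = "hemoglobin_a1c"

def normalize_lab_name(raw_name: str) -> str:
    """Return the canonical concept if the raw lab name is a known A1C synonym."""
    cleaned = raw_name.strip().lower()
    return CANONICAL_CONCEPT if cleaned in A1C_SYNONYMS else cleaned

results = [
    {"site": "Arkansas", "lab": "HGB A1C", "value": 7.1},
    {"site": "Minnesota", "lab": "Glycated Hemoglobin", "value": 6.4},
    {"site": "Kentucky", "lab": "HbA1c", "value": 8.0},
]
for row in results:
    row["lab"] = normalize_lab_name(row["lab"])

# Every site's result now rolls up under the same concept for diabetes-control reporting.
print(results)
```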

Completing that part of the process requires input from Reichert’s analytics team as well as help from several technology allies.

“On the acute care side, once we get the data into the data lake, we can ship it over to one of our technology partners who can work on the transformations and focus on the data quality necessary to bring that information in for our reporting,” he said.

“For our ambulatory markets, we do a lot more of those transformations ourselves. Our partnership with SAS has helped us automate a lot of that labor-intensive data manipulation, and therefore reduce the errors that occur during the processes.”

Reichert oversees a relatively small team for such a huge organization. With fewer than ten people to perform the vast majority of analytics work required to support these goals, automating and simplifying reporting processes is key.

“With a higher degree of automation, we can schedule these processes to run in the background and produce reports that we can publish across the organization, which has helped us make our very small team much more efficient,” he said.

“Once we stand up these solutions, we can move on to other projects without staying in the weeds day in and day out, working with the data.”
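The article credits the SAS partnership for the automation itself, so the snippet below is only a generic sketch of the pattern Reichert describes: report jobs scheduled to run in the background and re-queue themselves. The job names and intervals are invented for illustration.

```python
# Generic background-report scheduling sketch. Job names and cadence are
# illustrative assumptions, not CHI's actual reporting setup.
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def run_report(report_name: str, interval_seconds: int) -> None:
    """Produce one report, then re-enqueue the job so it keeps running in the background."""
    print(f"[{time.strftime('%H:%M:%S')}] generating {report_name}")
    scheduler.enter(interval_seconds, 1, run_report, (report_name, interval_seconds))

# In practice these would be monthly quality/safety/experience reports;
# short intervals here just make the sketch observable.
scheduler.enter(0, 1, run_report, ("quality_scorecard", 5))
scheduler.enter(0, 1, run_report, ("patient_safety_summary", 7))

# scheduler.run() would block and keep producing reports on schedule;
# call it to try the sketch.
```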

Top down, bottom up reporting

After the data is normalized and analyzed, the results must be reported to the stakeholders that will use it to make decisions.

Operational reporting and analytics dashboards are available directly through a portal, which allows everyone to access and view reports, Reichert said.

“That really helps to democratize the data and build staff engagement for improvements,” he observed. “They won’t be able to get patient-level detail unless they’re authorized for it, but they can see how they’re performing at the facility level and sometimes at the unit level. In the ambulatory space, we can offer data at the provider level.”

Division leaders, who are responsible for between 12 and 17 hospitals each, and other executives also get standardized reports on a monthly basis.

The reports include a summary of their quality, patient safety, and patient experience scores, benchmarked against a year’s worth of data to illustrate any improvements or potential shortfalls.

“If an executive sees that one facility is doing particularly poorly on a certain measure, he or she can follow up with that facility’s president or chief medical officer to try to dig through the analytics to understand what’s contributing to that dip in performance.”

Certain leaders in each of CHI’s markets will receive additional data down to the patient level, which allows individual facilities to break down their opportunities for improvement even further.

“Everyone knows who these people are, so they can approach them for that really granular view of performance in our priority domains,” said Reichert.

The positive impact of access to these granular, tailored data resources has been swift. Between 2015 and 2017, CHI saw a 14 percent reduction in heart failure mortality and a 20 percent reduction in pneumonia deaths.

The incidence of pressure ulcers dropped by 13 percent, while the health system achieved a 33 percent reduction in postoperative hip fractures.

Many of these statistics are continuations of year-over-year gains in quality, which are due in part to the more than 1000 reports generated and shared by the analytics team every month.

Using data to fuel collaboration, not contention

In some respects, CHI is able to leverage data effectively in spite of its size, not because of it.

Bringing together so many moving parts, different opinions, and unique organizational cultures at the facility level would be a daunting task for a much larger analytics team, let alone the small group of data experts in charge of overseeing these reports.

Reichert attributes the organization's success to a governance structure that prioritizes transparency and a clear chain of command.

“Because we are such a large organization, setting up well-defined roles and responsibilities is huge,” he stressed. “If we didn’t have that clarity, we would be constantly going back and forth about what everyone should or shouldn’t be doing.”

No matter what the size of an organization, strong leadership is vital to any initiative that requires change, such as a new health IT implementation, a quality improvement program, or a population health management initiative.

“We use the analogy of a car: the data is the fuel, the analytics are the engine, but the structure of the car is built out of quality leadership,” he said.

“They’re the ones who provide the focus, communication, and education to move us in the right direction. Without that strong framework in place, our drivers – the people out in the markets who are actually making these changes happen – couldn’t get where they want to go.”

Large organizations can sometimes lose their momentum if they do not start with good data governance and follow it up with strong organizational governance, said Reichert.

"When your quality improvement officers meet, they should be talking about what they need to do to move from the 30th percentile to the 60th percentile, not arguing over whether or not the data on their performance from last quarter is accurate," he said.

“Data and reporting and analytics are just the jumping off point. It’s extremely important to get those pieces right and make sure the trust is there, but at the end of the day, you’re not going to meet your goals if you don’t use that data to actually get things done.”

Big Data Architect at Cognizant

Cognizant's Analytics Information Management (AIM) practice is seeking a Big Data Architect to join its growing organization. We are currently looking for a candidate in Short Hills, NJ. Candidates must be willing to travel weekly to different consulting assignments.

Our Analytics Information Management Practice creates solutions that cover the entire lifecycle of information utilization, from ideation through implementation. At the outset, we offer consulting and development services to help our clients define their strategy and solution architecture. Then our teams deliver and manage data warehousing, business analytics and reporting applications that provide tangible business benefits.

Responsibilities

  • Perform architecture design, data modeling, and implementation of Big Data platform and analytic applications for Cognizant clients
  • Analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, bringing these insights and best practices to bear
  • Architect and implement complex big data solutions
  • Stand up and expand data-as-a-service collaboration with partners in the US and elsewhere
  • Develop highly scalable and extensible Big Data platforms which enable collection, storage, modeling, and analysis of massive data sets
  • Drive architecture engagement models and be an ambassador for partnership with delivery and external vendors.
  • Effectively communicate complex technical concepts to non-technical business and executive leaders
  • Lead large and varied technical and project teams
  • Assist with scoping, pricing, architecting, and selling large project engagements

Technical Experience

  • On premises Big Data platforms such as Cloudera, Hortonworks
  • Big Data Analytic frameworks and query tools such as Spark, Storm, Hive, Impala
  • Streaming data tools and techniques such as Kafka, AWS Kinesis, Microsoft Streaming Analytics
  • ETL (Extract-Transform-Load) tools such as Pentaho or Talend
  • Continuous delivery and deployment using Agile Methodologies.
  • Data Warehouse and DataMart design and implementation
  • NoSQL environments such as MongoDB, Cassandra
  • Data modeling of relational and dimensional databases
  • Metadata management, data lineage, data governance, especially as related to Big Data
  • Structured, Unstructured, Semi-Structured Data techniques and processes

Minimum Requirements

  • Over 10 years of engineering and/or software development experience and demonstrable architecture experience in a large organization.
  • At least 5 years of architecture support experience across a combination of these environments: data warehouse, DataMart, business intelligence, and big data
  • 5 years of consulting experience desired
  • Hands-on experience in Big Data Components/Frameworks such as Hadoop, Spark, Storm, HBase, HDFS, Pig, Hive, Scala, Kafka, PyScripts, Unix Shell scripts
  • Experience in architecture and implementation of large and highly complex big data projects
  • History of working successfully with cross-functional engineering teams
  • Demonstrated ability to communicate highly technical concepts in business terms and articulate business value of adopting Big Data technologies

Technical Skills

SNo | Primary Skill | Proficiency Level* | Rqrd./Dsrd.
1 | Hive | NA | Required
2 | Apache Hadoop | NA | Required
3 | Core Java | NA | Required

* Proficiency Legends

Proficiency Level Generic Reference
PL1 The associate has basic awareness and comprehension of the skill and is in the process of acquiring this skill through various channels.
PL2 The associate possesses working knowledge of the skill, and can actively and independently apply this skill in engagements and projects.
PL3 The associate has comprehensive, in-depth and specialized knowledge of the skill. She / he has extensively demonstrated successful application of the skill in engagements or projects.
PL4 The associate can function as a subject matter expert for this skill. The associate is capable of analyzing, evaluating and synthesizing solutions using the skill.

About Cognizant

Cognizant is one of the world’s leading professional services companies, transforming clients’ business, operating and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innovative and efficient businesses. Headquartered in the U.S., Cognizant, a member of the NASDAQ-100, is ranked 205 on the Fortune 500 and is consistently listed among the most admired companies in the world. Learn how Cognizant helps clients lead with digital at http://www.cognizant.com/ or follow us on Twitter:USJobsCognizant.

Cognizant is recognized as a Military Friendly Employer and is a coalition member of the Veteran Jobs Mission. Our Cognizant Veterans Network assists Veterans in building and growing a career at Cognizant that allows them to leverage the leadership, loyalty, integrity, and commitment to excellence instilled in them through participation in military service.

The Analytics Revolution of 2018: Transforming Insights to Action in the Life Sciences Industry

The life sciences industry is undergoing a transformative shift. Under the new paradigm, patients are seeking greater transparency, physicians are losing prescribing autonomy, hospital consolidation is growing, and value-based medicine is paramount. This new reality demands a complete re-imagination of commercial strategies.

Given these changes, 2018 must be the year that analytics fulfills its longstanding promise: to truly change decisions and drive tangible value. Many life sciences companies have already responded to the shifting industry tides by investing in sophisticated data and analytics capabilities. However, while 70 percent of companies rate sales and marketing analytics as “very important” or “extremely important,” only two percent of organizations claim that their analytics efforts have had a “broad, positive impact.”

Based on APT’s work driving value through analytics at leading organizations across many sectors, this article discusses the future of analytics in the life sciences industry—and examples of how peers in other industries have reaped the benefits of being early movers.

Moving from diagnostic to prescriptive

Over the past few years, life sciences companies have made massive investments in commercial analytics. These investments, combined with declining data storage costs and increasing CPU power, have enabled more sophisticated, complex, and precise capabilities than ever. However, this newfound power has fallen short of its promise; instead of changing decisions, its insights often remain diagnostic.

While big data generates a myriad of trends and patterns, it rarely helps brand leaders decide what their next action should be. Imagine your multi-million dollar machine learning investment identified eight new prescriber segments and four potential patient triggers based on past data. Now what? Should you heavy up detailing on the new segments, invest more in non-personal promotion (NPP) to gain access to these groups, or improve your speaker programs to feature key opinion leaders with these profiles? Leading organizations are realizing that they need the people, process, and technology in place to rapidly act on new insights.

In 2018, brand leaders will continue to invest in infrastructure to achieve truly prescriptive analytics. For many teams, rapid experimentation is the critical capability needed to translate insights into action. Business experimentation, or what we call Test & Learn, is based on the idea of a clinical trial: trying an idea with a subset of customers or markets, and comparing the results for that “test” group to results for a “control” group that received no change. This Test & Learn approach can be the final step to close the loop within your existing infrastructure, generate prescriptive recommendations, and drive real business value.
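At its core, the Test & Learn comparison described above is just a test-versus-control measurement: compute the response rate in each group and treat the difference as the initiative's lift. A minimal sketch with fabricated numbers:

```python
# Minimal test-vs-control lift calculation. The response counts are fabricated
# for illustration only.

def lift(test_conversions, test_size, control_conversions, control_size):
    """Absolute and relative lift of the test group over the control group."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    absolute = test_rate - control_rate
    relative = (test_rate / control_rate - 1) if control_rate else float("inf")
    return absolute, relative

absolute, relative = lift(test_conversions=540, test_size=10_000,
                          control_conversions=450, control_size=10_000)
print(f"Absolute lift: {absolute:.2%}, relative lift: {relative:.1%}")
```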

Optimizing the customer journey

The customer journey is not only an increasingly central pillar of commercial strategy, but also an area ripe for business experimentation. Beyond mass marketing, both patients and physicians are looking for messages relevant to their specific situation and stage of brand engagement. While life sciences companies have reacted by investing heavily in new technology to better understand the customer journey, many have not seen it pay off. Instead, the most leading-edge teams are leveraging in-market experiments to optimize their targeting of different segments.

Two industries that have excelled at improving the customer journey through a Test & Learn approach are the financial services and retail industries. Consumer banks, for example, have long had the infrastructure to test and optimize their marketing content, channels, and timing based on key customer segment traits and past behavior. Rather than relying on hypotheses about customer triggers, they actively test these initiatives for nearly real-time insights. Then, they use learnings from the small-scale tests to inform broader pivots throughout the customer journey and maximize results.

Many retailers also rely on continuous testing of customer journey interventions. For example, retailers may use analytics to learn that shoppers with a certain spend history and demographic profile respond best to email promotions. They can then test various combinations of timing, promotion depth, and messaging to tease out the true cause-and-effect impact of each campaign. Only through in-market tests can they answer questions such as: which customer segments should have received a discount with their online purchase? Which ones would have responded better to a loyalty offer? Which will spend with us again even without further promotions?

Life sciences organizations have parallel questions for their physician and patient base; they should learn from the examples set by banks and retailers, and look to experimentation to unlock the promise of analytics.

Scaling causal analytics is a key competitive advantage

The ability to scale up this rapid, data-driven decision-making structure is key. In the era of big data, patterns are everywhere, but companies must ensure they do not confuse correlation and spurious trends with true cause-and-effect. Scale is another area where life sciences companies can look to other industries for best practices. Some leading credit card issuers, for example, run thousands of test vs. control analyses per year because they have invested in the necessary automation, training, and infrastructure. They not only measure and optimize nearly every campaign, but they also run “meta analyses” to identify overarching insights that cut across multiple initiatives.

In the coming years, one-off analyses will no longer be sufficient in the life sciences industry. Brand leaders will increasingly strive to achieve a similar scale of data-driven decision-making, especially as the industry becomes more competitive.

To navigate the sweeping changes of 2018 and beyond, brand leaders should focus on unlocking the full potential of analytics—just as innovative organizations from across retail, banking, and other industries have already done. This year, life sciences companies should focus on translating insights to action, actively testing to de-risk innovations, and ultimately driving growth.

Scott Beauchamp is Vice President at APT, a Mastercard company.

FAA, CMS and GSA Retool to Take Advantage of Big Data

Officials at the Federal Aviation Administration want to make the most strategic, well-informed capital planning decisions possible about airport facilities. But airport facility requirements shift when the airline industry changes, whether through mergers and acquisitions or through changes in the size of aircraft airlines operate and in their underlying business models.

To improve their decision-making, FAA executives such as Elliott Black, director of the FAA’s Office of Airport Planning and Programming, are combing through terabytes of current and historical information that promise new insights for forecasting.

“I love data,” Black says. “By taking an open and honest look at our information, we can identify trends or problems that we weren’t aware of previously.”

Leaders at the FAA and counterparts at agencies such as the Centers for Medicare and Medicaid Services (CMS) and the General Services Administration realize that to effectively harvest insights from their expanding volumes of diverse data, they must re-evaluate their underlying data management and analytics capabilities.

“Agencies that want to take real advantage of Big Data, analytics and artificial intelligence will eventually need to upgrade their older systems,” says Shawn McCarthy, research director for IDC Government Insights.

It’s a good time to make a change. Steady innovation is bringing about new analytics capabilities derived from emerging technologies such as machine learning, as well as enhancements to open-source tools and commercial applications. “We’re seeing a number of new analytical tools out there that make it easier to build customer reports on the fly,” Black says. “This could reduce the workload for our people and enable them to spend more time doing the substantive analyses we need to do.”

For the past 15 years, the FAA has been relying on its System of Airports Reporting to help manage and forecast capital improvement investments for the approximately 3,300 airports across the country that are eligible for federal grants. SOAR centralizes a wealth of information: 35 years of historical funding data, as well as current project activity and capital needs information provided by individual airports, regional offices and state aeronautical agencies.

The FAA’s data management currently consists of government-developed technology with hardwired connections among the database, user interface and reporting modules, making it difficult to slice and dice the data. The agency is upgrading the system with connections created with industry-standard application programming interfaces and commercial technology that will replace the hardwiring. “By better integrating the modules and building in better business analytics, we want to make it easier to perform complex analyses,” Black says.
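Black doesn't detail the new interfaces, so the endpoint below is purely hypothetical. It only illustrates the general idea of replacing a hardwired module connection with an industry-standard API: a reporting module calls a read-only HTTP route instead of reaching into the database directly.

```python
# Hypothetical read-only API sketch; the route and fields are assumptions, not
# the FAA's actual SOAR interfaces.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SAMPLE_PROJECTS = [  # placeholder records, not real SOAR data
    {"airport": "XYZ", "project": "Runway rehabilitation", "fiscal_year": 2018},
]

class SoarStyleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/capital-projects":
            body = json.dumps(SAMPLE_PROJECTS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)  # reporting tools consume JSON instead of raw tables
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SoarStyleHandler).serve_forever()
```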

The systems will include a commercial database management system as well as commercial business analytics and reporting applications. “Our goal is for the airport community to be able to enter their information directly, which will save them time and enhance data consistency,” Black says.

Numerous Infrastructure Choices for Processing Big Data

Innovation isn’t limited to analytics tools; CIOs also have new options for building out IT infrastructure to support the efficient processing of large data sets. For example, organizations can select processing that is optimized for specific database platforms.

“Lenovo servers have long been the reference platform SAP uses for developing HANA,” says Charles King, principal analyst at Pund-IT. “Plus, virtually every x86 server or system vendor has built solutions that can be applied to Big Data problems and workloads. Dell EMC offers tailored solutions for SAP HANA, Oracle Database and Microsoft SQL Server data, as well as open-source data analysis platforms, such as Apache’s Hadoop and Spark.”

In addition, storage vendors are delivering Big Data solutions that capitalize on all-flash and flash-optimized storage arrays, King says. Flash storage delivers much better performance than traditional spinning-disk drives, which can speed up data analysis.

CMS Crunches Numbers, Saves Lives

In the Office of Minority Health at CMS, it’s understood that gleaning new analytical insight from routinely collected data can produce life-changing results for citizens. By sifting through large volumes of payment and demographic data, the office helps health officials better serve the unique needs of minority populations, people with disabilities and those in rural areas.

For example, infant mortality rates for African-Americans are nearly double the nationwide average; Hispanics show disproportionately higher rates of diabetes than the national average; and deaths from opioids are greatest among non-Hispanic whites. “These disparities show why it’s important to disaggregate data to understand the specific challenges facing various populations,” says Director Cara James. “That helps us target limited healthcare resources to the areas of greatest need.”

One outgrowth of this effort is the Mapping Medicare Disparities Tool, which shows outcomes and other data for 18 chronic diseases, such as diabetes or heart disease.

A collection of technologies supports the map. Applications from various vendors, such as Microsoft, extract Medicare fee-for-service data and feed the results into a Microsoft Excel spreadsheet. An open-source JavaScript library and a cloud-based data analysis platform are then used to produce the final visualizations.
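The article names the building blocks (Medicare fee-for-service extracts, Excel, a JavaScript mapping library, a cloud analysis platform) but not the transformation itself. The analytical heart of the tool, disaggregating an outcome by demographic group and geography, might look roughly like this sketch, with invented claim records:

```python
# Sketch of disaggregating outcome rates by demographic group, the core idea
# behind the Mapping Medicare Disparities Tool. Records are invented.
from collections import defaultdict

claims = [
    {"state": "GA", "group": "African-American", "has_condition": True},
    {"state": "GA", "group": "White", "has_condition": False},
    {"state": "GA", "group": "Hispanic", "has_condition": True},
    {"state": "GA", "group": "Hispanic", "has_condition": True},
]

def prevalence_by_group(records):
    """Condition prevalence per (state, demographic group)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["state"], r["group"])
        totals[key] += 1
        positives[key] += r["has_condition"]
    return {key: positives[key] / totals[key] for key in totals}

# These per-group rates are what the mapping front end would then visualize.
print(prevalence_by_group(claims))
```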

“One of the biggest goals of the tool is to increase awareness and understanding of disparities at the state and local levels,” James says. “Local officials can then use that information to inform their decision-making.”

GSA Visualizes Data in the Cloud

As chief data officer for the General Services Administration, Kris Rowley and his team are developing a long-term strategy for an enterprise data management and analytics platform, which relies on Oracle and SAP solutions.

To achieve that goal, Rowley plans to update the reporting tools the agency has implemented. “There’s been rapid development in visualization technology to make information more presentable and help executives more easily grasp insights from the data,” he says.

The agency is moving much of its data to public cloud repositories to capitalize on the computing capabilities available with those models. As they do this, officials want latitude in choosing which analytical tools stakeholders can use. “I want to be able to plug any visualization application into cloud data sets and know there won’t be any migration costs,” Rowley says. “That means getting away from traditional solutions that integrate the reporting tool with where the data is stored.”

The GSA evaluations also take emerging technology into account. “Everything we’re doing will create a foundation for moving to machine learning,” Rowley says. “Machine learning will support the enterprise by empowering the workforce with predictive modeling and the ability to forecast what may happen next.”

3 Ways to Jump-Start Modern Projects

What first steps can federal officials take to more effectively use data analytics and capture quick wins? McCarthy offers these suggestions:

1. IT leaders should focus first on business cases that will have the most impact on citizens. Crime statistics, transportation/traffic flow analysis and economic indicators are all good starting points.

2. Agencies that want to implement advanced capabilities such as Big Data analytics and artificial intelligence may need to upgrade their infrastructure.

3. Technology that incorporates location intelligence can be particularly useful in government environments. Agencies also should consider solutions that include core reporting, dashboarding and ad-hoc visual discovery functionality.

Moneyball for movies: Data science and AI in Hollywood

Data science and related disciplines, such as artificial intelligence and machine learning, have become table stakes in almost all areas of software. From marketing to customer service and beyond, data and analytics are changing existing enterprise processes and enabling new ones.

There are many challenges to using data science effectively, including which problems to consider and collecting the right data at scale. For business people, these challenges are magnified because they intersect multiple domains such as organizational decision-making, innovation and business model reinvention, technology capability, and even privacy and ethics.

This complexity drives most organizations to buy off-the-shelf software products that automate specific processes and include AI baked-in as part of the offering. For instance, most marketing software today includes AI to analyze customer data. It’s much easier to buy data science or AI inside commercial products than to roll your own.

Although standard products are best for most companies, there are exceptions. For example, if your core business model relies on unique sets of data that need specialized analytic techniques, then building custom tools makes sense. But, these situations are exceptions to the general rule that buying technology is usually better than building it yourself.

In 2013, Legendary Entertainment — a top studio that creates movies like Superman Returns, Jurassic World, Straight Outta Compton, and Warcraft — faced this make-vs-buy decision.

The company's then-CEO, Thomas Tull, wanted to apply data and analytics to the domain of Hollywood entertainment, but there were no off-the-shelf tools that would work. Tull's inspiration was professional sports and "Moneyball," as presented in the popular film of the same name. Tull convinced Matthew Marolda, who had built and sold a sports analytics company, to join Legendary as Chief Analytics Officer and create a team to solve the "Moneyball for Hollywood" problem.

To learn more about this intersection of data science and the movie industry, I invited Matt Marolda to take part in Episode 276 of the CXOTalk series of conversations with the world’s leading innovators.

Matthew is a self-described data nerd and a super interesting person. On this episode, he shares details of his team's approach, goals, and the components they created. It's fascinating to learn, for example, that they built their own storage solution because no commercial system at the time could deliver the scale and performance required.

Personally, I enjoyed hearing how Matt and team use social media to target micro-segments, almost to the individual level. It’s a glimpse inside the near-term future of digital marketing.

Watch the video embedded above for our entire CXOTalk conversation and read the complete transcript. Below are edited excerpts from the 45-minute discussion.

Matt, tell us about Legendary Entertainment?

Legendary is a producer of both movies and television shows. The types of movies we produce are large-scale, things like Godzilla, Kong: Skull Island, The Dark Knight series, movies of that scale, which are intended to be large, what people often refer to as tentpole movies that are big, global events all around the world.

What was the original plan when Legendary’s CEO asked you to join?

In hindsight, it’s funny because we didn’t know what we were going to do. [Laughter] We knew, at a high level, what it was. We knew we needed to use data and analytics to inform the process. The first thing I said to our chief creative officer when I joined Legendary, and again these are two roles that could effectively be oil and water, creative and analytics. Those could be things that are opposing forces. What I said to him is the attitude that we’ve had from the beginning from the creative side, which was that analytics, especially in sports, but the same with content, never produced a player, but all it tries to do is put the player in the best position to succeed. That was the attitude from the beginning.

The marketing side was a little bit different. The marketing side was, "How could we use data and analytics to gain a competitive advantage?" On that front, what we realized very quickly was that there was a real opportunity in how we addressed our audiences, meaning the traditional approach, and this is still often the dominant approach for these kinds of movies is what we always call the spray and pray. Meaning, quite literally, spray the population with TV ads and pray they go to the box office. That works in a certain world, but maybe not even the world of today, but at some point, it worked.

What we realized back four, five years ago was that we needed to be much more precise. It’s a game of impressions, meaning how do we deliver the trailer, the TV spot, or the poster to the right people? Which of those things do we deliver, and in what format? Doing that in a very precise and individual way. What we’ve built are tools that enable us to, at individual levels, predict people’s propensity, as we call it, their likelihood to take the action we want, which may be a trailer view. It may be buying a ticket.

That meant we had to use some very sophisticated tools and techniques that we had to build ourselves. We built up a suite of assets and capabilities that are all rooted back in AI. This was all when AI was not cool. [Laughter] This is a time at which AI was Skynet or something. It wasn’t embedded as broadly as it may be becoming now. We had to go down that path and to use machine learning, neural networks, computer vision, because of the scale in which we needed to operate. It was so massive that, without those kinds of tools, you’re almost back to that spray and pray mode where you are quite literally taking broad guesses at large groups of people.

How do you use all that data?

The first step in that process for us is to try to understand people. The best way for us to understand people is with data.

The first day I walked into Legendary, my first question was, “Where’s all the data?” Again, coming from a world that wasn’t connected to Hollywood, I didn’t understand how the dynamics worked, which was, we produce a movie, we deliver it to a distributor, who then hands it to exhibitors. Then the exhibitors or, ultimately, maybe an Apple, Amazon, or whomever, all the transactions, all the customer interactions happen at that level, which is too removed from us.

When the answer came back to me, “Oh, what data do you mean?” I said, “Anything on people,” they came back with an Excel spreadsheet of about 50,000 email addresses. I realized at that point that there was a different challenge we had to face, which was, how do we get data on people?

I’ll put that to the side for a second, but the principle, though, that we were taking wasn’t data, necessarily. It was analytics.

Our bet was not necessarily on getting the best and most precise data on people. It was, how do we build the analytic tools to take whatever data is available to us and use that to do our targeting. That is a recognition of a lot of factors that I think were true then, but even more true now: privacy issues, social platforms and how they share data and what levels of granularity they’ll provide you, regulatory issues, all sorts of things.

The data will shift. What’s available to you one day might not be available the next, or new things will pop up that weren’t there before. We knew we had to have data. That was table stakes, and so we invested a lot of money, millions of dollars, to acquire data on people, on content, unstructured data from social networks, everywhere we could find it.

The real bet for us came at the next level, which was, what can we build on top of that data? What we drove towards was these AI solutions. Meaning, could we take a billion or more email addresses and attach hundreds, if not a thousand or more, attributes to each of them, attributes we created ourselves, sourced from partnerships, or constructed from unstructured data, meaning text, images, and things of that nature? We produced a very robust picture of people.
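
A hypothetical sketch of what such a person-level attribute table might look like, assuming pandas and a few invented columns (age band, past ticket purchases, a crude keyword flag from social text); the real pipeline described above would use far richer NLP and computer-vision features.

```python
# Hypothetical person-level attribute table: structured data joined with a
# crude feature derived from unstructured text. All column names are invented.
import hashlib
import pandas as pd

structured = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "age_band": ["18-24", "35-44"],
    "past_ticket_buyer": [1, 0],
})

social_posts = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "text": ["Godzilla trailer looked amazing", "mostly posts about cooking"],
})

# A real pipeline would run NLP / computer vision over far richer data;
# here we just flag franchise mentions in the text.
social_posts["mentions_franchise"] = (
    social_posts["text"].str.contains("godzilla|kong", case=False).astype(int)
)

people = structured.merge(
    social_posts[["email", "mentions_franchise"]], on="email"
)

# Hash the identifier so the modeling table never carries raw emails.
people["person_id"] = people["email"].map(
    lambda e: hashlib.sha256(e.encode()).hexdigest()
)
print(people.drop(columns=["email"]))
```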

Then once we had that robust picture, we needed to do something with it. It’s inert if we don’t act upon it. The next step is to take, effectively, that big table of data on people (not literally what it is, but a good visual of it) and create audiences from it and make individual predictions. The first step in our process is to use our models, which have many different inputs, to home in on who we think the most likely audience is.

It’s not binary. In fact, we have three major categories that we drill into for specifics. The first category is people we consider to be givens, meaning they’re going to watch the movie no matter what. They’re wearing the Godzilla T-shirt. They’ve watched the Kong movie, from 30 years ago or even ten years ago, dozens of times; that kind of person. There’s a small number of them, but they’re there.

There’s a much larger number of people who will never watch, who are never going to consume this content. That’s fine. We don’t want to spend impressions on them.

Who we care about are the people in the middle of those two groups. We call them the persuadables: the people we can persuade by giving them the right piece of content or the right creative at the right moment through the right channel. Those are trite things now. People talk about that a lot, but we try to be very precise about it.

The first step is to define that persuadable audience exclusive of the givens and the nevers. Then, within the persuadable audience, we will effectively score every single person. In the U.S., for a movie of the scale we typically work on, that could be 40 million or 50 million people. They’ll get a score from zero to 100, with 100 being very likely and zero being very unlikely.
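
To make the idea concrete, here is a minimal sketch of the bucketing and 0-100 scoring step. The cutoffs and the uniform stand-in for model output are invented; the interview only establishes the three groups and the score scale.

```python
# Sketch of splitting an audience into givens / nevers / persuadables and
# scoring the persuadables from 0 to 100. Cutoffs are illustrative only.
import numpy as np

propensity = np.random.default_rng(1).random(1_000_000)  # stand-in model output

GIVEN_CUTOFF = 0.95   # near-certain to watch no matter what
NEVER_CUTOFF = 0.05   # near-certain never to watch

givens = propensity >= GIVEN_CUTOFF
nevers = propensity <= NEVER_CUTOFF
persuadables = ~(givens | nevers)

# Rescale the persuadables' propensities onto the 0-100 score described above.
p = propensity[persuadables]
score = np.round(100 * (p - p.min()) / (p.max() - p.min())).astype(int)

print(f"givens={givens.sum()}  nevers={nevers.sum()}  persuadables={persuadables.sum()}")
print("example scores:", score[:5])
```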

Once we have that, where we can, we deploy media to them specifically and individually. A lot of people use the term onboarding. We might onboard them into, say, a programmatic buy on websites, the publisher placements you would see in the sidebar or across the top of a page. That includes social media. That includes search. That includes video like YouTube. Wherever we can find these people, we’ll reach them, and we’ll launch at the lowest granularity that the platform will accept. Sometimes it’s small audiences. Sometimes it’s individuals, but wherever we can.

Then, once that’s launched, the next thing we’ll do is take very small pieces of those audiences, so not only cutting into small micro-segments but taking even smaller subsegments of them to test. We call it calibration. We’ll launch many, many combinations, hundreds or thousands of combinations of subsegments and creative. That gives us an indication of which segments respond better to which pieces of creative.
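
A rough sketch of that calibration step, assuming simulated response rates; in practice the impression and action counts would come back from the ad platforms, and the segment and creative names here are invented.

```python
# Sketch of the calibration step: test many subsegment x creative cells on
# small spend, then keep the best-responding creative per subsegment.
# Response data here is simulated; in practice it comes from the platforms.
import numpy as np

rng = np.random.default_rng(2)
subsegments = [f"seg_{i}" for i in range(50)]
creatives = ["trailer_A", "trailer_B", "tv_spot", "poster"]

# Impressions served and actions observed in each (subsegment, creative) cell.
impressions = rng.integers(500, 2_000, size=(len(subsegments), len(creatives)))
true_rates = rng.random((len(subsegments), len(creatives))) * 0.05
actions = rng.binomial(impressions, true_rates)

response_rate = actions / impressions
best_creative = {
    seg: creatives[int(np.argmax(response_rate[i]))]
    for i, seg in enumerate(subsegments)
}
print(list(best_creative.items())[:5])
```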

Once we’ve done that, then we start scaling. We start applying more spend, and that leads us to a more global kind of scale. At that level, where we can, and China happens to be the territory where we can do this best, we will try to measure conversion, meaning we will try to see who is buying tickets. Those ticket purchases then feed into our models and enable us to be more honed.

What’s interesting about our approach is that we tend to do the opposite of what a lot of others do. A lot of folks will start narrow and then maybe even get panicked and go broad. We do the opposite. The closer we get to release, the more honed and the more precise we’re trying to get.

Which platforms do you use?

In no particular order: social media, so Facebook, Twitter, Instagram, Snap, all those platforms. It would include the Google platforms, which would be search or YouTube. We’ll also do things programmatically, so we’re able to target people across many different websites. Those are the major categories. We also use analytics to help guide what we would consider nonaddressable media, like television buys and outdoor ads, but it’s the same concept of audience. We’re just deploying it more coarsely.

In certain cases, we can target individuals and track them. That’s rare, but we can do it. In other cases, it’s subsegments that may be hundreds of people, or a thousand or two thousand, something like that. In other cases, for example a television buy, you’re buying against the people who watch that show. We’re predicting who we think is going to watch the show, but we can’t precisely say, “Oh, these are the 700,000 people we want to reach through this show.” We’re taking a bet that they’ll be watching, but we don’t know precisely who they are.


Are you looking at real-time or historical data?

It’s a great question. The pace of these kinds of campaigns is very fast. For any given movie, the vast majority of the spend, and when I say vast I mean 80 or 90 percent, will go out over the course of about four or five weeks. This is when people who are just awake and alive will see these massive media dumps out into the world. We knew that was the phenomenon, and we knew we had to be able to react very quickly within those timescales.

If this were an always-on campaign that ran over the course of years, it would be much different. We try to operate very precisely within that very short window of four to six weeks. I would say our cadence for changes and adjustments is typically within a day or so. It’s not real time in the sense of every minute or every hour, but once a day we’re recalibrating and adjusting.

Tell us about your team.

Going back to the beginning, I was a guy in a room. [Laughter] I had a checkbook, effectively, and we could have done a number of things. We could have built a mosaic of solutions.

What we found was that that didn’t exist, and so we built out a team. Our team is about 70 people. Of the 70, about half are some form of engineer, whether they’re data scientists or computer scientists, and we have people who have all kinds of disciplines.

We accumulated these people, and we built these tools because they didn’t exist, and we couldn’t find that solution. It’s that singular solution that goes from front to back. There were a lot of good point solutions along the way, but they didn’t have the full integration.

The loop you described is a very logical loop, and that’s exactly what we were trying to build toward, but we had a hard time finding the solution that would meet both the speed and the pace at which we were spending, along with the sophistication with which we wanted to spend. To go back to your loop, this data platform that we’ve created will suck in data from whatever sources we’ve started with, the initializing sort of data. Then it will launch the media out into the different platforms.

To your point, as the campaigns run, new data is being constantly created. That comes back into the system, enables us to calibrate and change dynamically, and then re-spend. It’s a virtuous cycle that continues.
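
As a very loose sketch of that loop, the skeleton below uses placeholder functions for each stage (ingest, score, launch, collect, recalibrate) on a daily cadence; none of the function names or internals come from the interview.

```python
# A loose skeleton of the loop: ingest data, score the audience, launch media,
# pull results back in, and recalibrate daily. Every function is a placeholder.

def ingest_initial_data():
    """Pull seed data from partners, social networks, etc."""
    return {"people": [], "results": []}

def score_audience(data):
    """Run propensity models over the data to build audiences."""
    return {"persuadables": []}

def launch_media(audiences):
    """Push audiences out to the ad platforms."""

def collect_results():
    """Pull impressions and conversions back from the platforms."""
    return {"conversions": []}

def recalibrate(data, results):
    """Fold new outcomes back into the data feeding the models."""
    data["results"].append(results)
    return data

data = ingest_initial_data()
for day in range(1, 36):          # roughly a 4-6 week campaign window
    audiences = score_audience(data)
    launch_media(audiences)
    results = collect_results()   # in production, gathered over the day
    data = recalibrate(data, results)
    print(f"day {day}: recalibrated with {len(results['conversions'])} conversions")
```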

We need the right people, for sure. I’ve said humility is an initial starting point for us. We look for people like that.

Of course, we have other things we’re interested in, and so there are specific skill sets we have accumulated. On the data science side, it’s multidisciplinary. The person who runs our data science team has a Ph.D. in astrophysics. That’s a discipline you wouldn’t expect at a Hollywood studio.

Just like that discipline, some people have backgrounds in social sciences like human decision sciences, or they are statisticians or econometricians. That’s one whole category of people we have: the data science folks.

On the software development side, we knew — and we talked about it briefly earlier — that we were going to have these very large data sets. We needed people who had the skills to be able to build these repositories to query and analyze data at remarkable speeds, to be able even to build the infrastructure and the thousands of servers we have running at any given time to support all that, to build the user interfaces that make it all work. Those were skillsets we were very specific and targeted on.

We also needed the other half of our team: people who are experts at applying these kinds of outputs to a campaign. That last group was by no means an afterthought. In fact, we considered all three groups simultaneously, because we knew that if the data science team and the development team built all these amazing tools but they were just shiny toys on a shelf, it was all for naught. We needed a group of people who knew how to translate those tools into action. That creates the whole iterative loop we use to develop further.

What’s coming next?

For us, there are two things we think a lot about. One is the increasing addressability of media channels, the ability to get more precise. That feels around the corner, whether it means addressable TV, where you can send an ad over whatever form of viewing you’re doing, or something else. That’s one thing for sure.

The other thing for us, which is always the holy grail, is conversion measurement. In a lot of industries this is not the case, but for things like ours, conversion measurement is hard. Meaning, can we tell if someone took the action we wanted? As the data becomes stronger and better there, everything gets better. Those are the two things that feel like they’re in the 18-to-24-month range, or maybe a little longer than that, in the one-to-three or one-to-five-year range.

CXOTalk brings together the world’s top business and government leaders for in-depth conversations on digital disruption, AI, innovation, and related topics. Be sure to watch our many episodes! Thanks to Laura Hoang from CredPR for the introduction to Matt Marolda.


Big Data Services Market 2022: Market Overview, Product Scope, Status and Outlook with Forecast

The Global Big Data Services Market Report provides information on the overall market and a price forecast over a five-year period, from 2017 to 2022. The report is intended as a source of guidance for companies and individuals, offering industry chain structure, business strategies, and proposals for new project investments.

Big Data Services Market Segment by Manufacturers includes: Accenture, Deloitte, Hewlett-Packard, IBM, PricewaterhouseCoopers, SAP, Teradata, Alteryx, Atos, Attivio, Chartio, Cirro, ClearStory Data, Cloudera, Continuum Analytics, Datameer, DataStax, Doopex, Dell, Enthought, Hortonworks, MAANA, MapR Technologies, MarkLogic, Microsoft, MongoDB, Mu Sigma, Predixion Software, SAS Institute, and many more.

Browse detailed TOC, tables, figures, charts, and companies mentioned in the Big Data Services Market research report: https://www.absolutereports.com/11578022

A complete analysis of the competitive landscape of the Big Data Services Market is provided in the report. This section includes profiles of the market's key players, covering contact information, gross, capacity, product details, price, and cost for each firm.

The report also investigates the feasibility of new projects, with the aim of informing new entrants about the possibilities in this market. A thorough SWOT analysis and investment analysis is provided, forecasting upcoming opportunities for Big Data Services Market players.

Ask Sample PDF of Big Data Services Market Report: https://www.absolutereports.com/enquiry/request-sample/11578022

By Types, the Big Data Services Market can be Split into: Terabytes, Petabytes, Exabytes.

By Applications, the Big Data Services Market can be Split into: Financial Services and Insurance, Healthcare, Social Networking, Shopping, Others.

Big Data Services Market Segment by Regions includes:

  • United States
  • Europe
  • Japan
  • China
  • India
  • Southeast Asia

Chapters:

Chapter 1, to describe the Big Data Services Market introduction, product type and application, Market overview, Market analysis by States, Market opportunities, Market risk, and Market driving force;

Chapter 2, to analyse the manufacturers of Big Data Services, with profile, main business, news, sales, price, revenue and Market share in 2017;

Chapter 3, to display the competitive situation among the top manufacturers, with sales, revenue and Market share in 2017;

Chapter 4, to show the United States Big Data Services Market by States, covering California, New York, Texas, Illinois and Florida, with sales, price, revenue and Market share of Big Data Services, for each state, from 2012 to 2017;

Chapter 5 and 6, to show the Big Data Services Market by type and application, with sales, price, revenue, Market share and growth rate by type, application, from 2012 to 2017;

And many more…

Get the full report at $3,300 (Single User License) at: https://www.absolutereports.com/purchase/11578022

SOURCE The Financial Consulting https://thefinancialconsulting.com/
