Machine Learning and the Modern Data Lake

In this article, we’re going to talk about machine learning, the modern data lake, and what this means for you.

But first, let’s go back to the first modern Olympic Games, held in Athens in April 1896. A famous photograph of the men’s 100m final shows only one runner in the 4-point stance, crouched down with hands on the ground, right behind the start line. That was Tom Burke, and he won—even though he was actually more of a distance runner.

Power of Data

Today every sprinter uses that starting stance. But back then it was new information and only a few athletes were exploiting that data.

Download your free ebook, “Demystifying Machine Learning.”

Exploiting data is what we’re going to talk about today. But instead of the 4-point stance and gold medals, we’ll be discussing machine learning, data lakes, and how they can help you exploit data about your business, your customers, your partners, and anything else you need to get that competitive edge. Essentially, we’re here today to say—what’s the extra information you need to gain an edge like Tom Burke? And then, how can you make use of it?

Why Machine Learning Is More Than Just Buzz

Machine learning is trendy right now, but why should it matter to you? Research from the McKinsey Global Institute shows that proactive adopters of machine learning simply make more profit than their less proactive peers.

Machine learning drives profits

As responsible data users, we can’t prove causation from this one diagram alone. But there’s enough material out there that says it’s more than just a simple correlation.

We think the cloud could be the best place for your machine learning workloads. We’ll get into this more later.

Real-Life Machine Learning Example

Here’s one example of the value that machine learning has brought to an organization. The UK’s National Health Service offers health care to all residents of the UK and holders of a valid European insurance card. Its Business Services Authority arm set up a Data Analytics Learning Lab with the goal of getting more value out of their existing data with machine learning. They set a long-term goal of delivering ongoing savings of one billion pounds per year by:

  • Improving patient outcomes
  • Optimizing internal processes
  • Reducing fraud

For a small team, they accomplished a lot. In just a few months, they found confirmed annual savings of over £561 million, with additional savings waiting to be confirmed and implemented.

A few machine learning best practices emerge from this example.

  1. They started with their existing data, which helped them gain quick wins and put the project on a sound business footing within the organization. Remember, there’s always time to collect new data and explore new projects after the initial successes.
  2. They moved their data into a separate lab environment. Applying unpredictable, heavy-duty analytics like machine learning could wreak havoc on the service levels of production systems, so a separate lab environment let them experiment as needed without affecting the normal operation of the business.
  3. They started this project on Oracle systems in their own data center, which was appropriate at the time, but they’re now looking at moving to the cloud. If they were starting a similar project today, they would most likely begin in the cloud, because the cloud is the perfect place to provision a lab, store a large amount of data, and spin up analytics workloads that range from lightweight to very compute-intensive, and from short duration to long duration. The cloud is the best place to build a lab and get results while minimizing cost, risk, and commitment.

But machine learning is about much more than healthcare fraud and improving patient outcomes, as important as those may be. Take a look at these machine learning business use cases.

Machine Learning Business Use Cases

Machine learning can help you:

  • Predict customer lifetime value
  • Predict customer churn
  • Segment your customer base for more targeted marketing
  • Find fraud
  • Make recommendations to your customers
  • Identify subtle seasonal patterns in your business

And these are just a few examples. Organizations in all industries are putting machine learning to use.

How Data Lakes Help With Machine Learning

So let’s say you’re sold, and you want to start exploiting your data by using it for machine learning. What else do you need?

The answer is access to all of your data—lots and lots of data. Data lakes are a great place to store, manage, process, and analyze your data. People often mistake data lakes for just a place to store data, but they’re more than that.

Data lakes were originally built on premises with racks of dedicated hardware, and that has some advantages:

  • It keeps data in your data center for regulatory compliance, where applicable
  • It’s close to your enterprise data sources
  • You can install exactly the components you want (although you bear the cost and effort of maintaining them)

But having a data lake in the cloud offers some different advantages:

  • You can scale up or down much more easily and scale storage independently of compute
  • You can use managed services that significantly reduce administration
  • And most compelling of all (to many people), you pay only for what you use

While Oracle offers you both on-premises and cloud data lake solutions, we also offer a third option that we call Cloud at Customer. This is a cloud service where the hardware sits in your data center. Here are the advantages:

  • It keeps data in your own data center for regulatory compliance
  • It’s close to your enterprise data sources
  • But you still get the advantages of the cloud and access to managed cloud services

With Cloud at Customer, Oracle owns and manages the hardware, but you consume the services just like you would in the public cloud. In many ways, you’re getting the best of both worlds.

To summarize, the trend has been to go from only using relational database technology, to adding big data technology, to adding specialized big data services in the cloud.

All of these technologies are important depending on the problem you’re trying to solve—and we offer all of them. But the trend we’re seeing is that first-generation big data technologies like Hadoop are giving way to more modern Spark services in the cloud.

New Cloud Data Lake

How to Use Spark in the Cloud

Here’s an example of how you can take advantage of these specialized Spark services in the cloud.

In the cloud, object storage becomes the persistent storage repository for the data in your data lake. What is object storage? It’s a very simple system for storing any kind of data file with scalability and redundancy. You pay only for the amount of data you have stored, you can add or remove data whenever you want, and the cost per gigabyte is very low.
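
To make that concrete, here’s a minimal sketch of working with object storage through an S3-compatible API (which many cloud object stores expose), written in Python with the boto3 library. The endpoint, bucket name, credentials, and file names are hypothetical placeholders, not a specific vendor API:

    import boto3

    # Hypothetical S3-compatible endpoint and credentials, for illustration.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstorage.example.com",
        aws_access_key_id="MY_ACCESS_KEY",
        aws_secret_access_key="MY_SECRET_KEY",
    )

    # Store a data file; you pay only for what is stored.
    s3.upload_file("sales_2018.csv", "my-data-lake", "raw/sales_2018.csv")

    # Remove it whenever you want, and the charges stop.
    s3.delete_object(Bucket="my-data-lake", Key="raw/sales_2018.csv")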

Then you spin up Spark clusters tailored to the specific processing work. One cluster can be for real-time processing, a second can be a data lab for your data scientists and analysts, and another could be for batch jobs. Each cluster can be configured with the appropriate processing resources and local storage, and each can be scaled up or down as needed. The node storage can be either disk or solid state. When you’re not using a cluster, you can turn it off so that you’re not paying for it. That’s the beauty of a cloud-based data lake.
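
As a sketch of what one of those purpose-built clusters might run, here’s a small PySpark batch job that reads raw files from object storage and writes aggregated results back. It assumes the cluster’s s3a connector is already configured for your object store; the bucket, path, and column names are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("nightly-batch").getOrCreate()

    # Read raw files straight from the data lake's object storage.
    orders = spark.read.csv("s3a://my-data-lake/raw/orders/",
                            header=True, inferSchema=True)

    # Aggregate, then write the results back to object storage.
    daily = orders.groupBy("order_date").sum("amount")
    daily.write.mode("overwrite").parquet("s3a://my-data-lake/curated/daily_totals/")

    # Stop the session so the cluster can be turned off and billing ends.
    spark.stop()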

Data Science in the Cloud

So that’s great. But now let’s look at how you can start using machine learning in the cloud to fulfill your data science goals. The solution pattern below is a simplified depiction of a data lab for data science, and it shows the different services that are used together.

Data Science Lab

First, data is uploaded to cloud storage (object storage). The data engineer or data scientist can do this with open source tools, with Oracle’s free Big Data Connectors, or with the free Oracle Software Appliance, which makes object storage look like a disk drive to your other systems.

The stored data is accessed for machine learning in Apache Spark, and both the raw data and the data generated in Spark can be accessed for data visualization. Oracle provides both open source and value-added machine learning capabilities that run on Spark. With a cloud-based data lab, you can start small with just a few CPUs and quickly prove value for a business case.
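
For a flavor of the open source side of that, here’s a minimal sketch of training a churn model in the lab with Spark’s built-in MLlib. The object storage path, the column names, and the 0/1 "churned" label (stored as a double) are all hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("data-science-lab").getOrCreate()
    df = spark.read.parquet("s3a://my-data-lake/curated/customers/")

    # Assemble raw columns into a feature vector and fit a classifier
    # against the 0/1 "churned" label.
    assembler = VectorAssembler(
        inputCols=["tenure_months", "monthly_spend", "support_tickets"],
        outputCol="features",
    )
    train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)
    model = LogisticRegression(labelCol="churned").fit(train)

    # Write predictions back so visualization tools can read them.
    scores = model.transform(test).select("customer_id", "prediction")
    scores.write.mode("overwrite").parquet("s3a://my-data-lake/lab/churn_scores/")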

If you or someone in your organization would like to try this out for yourself, Oracle offers free cloud credits that you can use to run these cloud services. And we also offer step-by-step guides to walk you through the process and show you how to use some of the features.

Conclusion

In this article, we’ve talked a lot about how to exploit your data. But here are the three key ideas:

  • Data – Harness the unused data in your organization and combine data sources to find new value.
  • Platform – The modern cloud-based platform enables easy-to-manage and cost-effective solutions to your data management and analytics needs.
  • Machine learning – With your data and the Oracle platform combined, you can apply machine learning to make predictions, find fraud, make recommendations and more—the technology is constantly changing and getting more exciting.

Together, these are the keys to successfully exploiting your data. With this, you have the power to use your data to find a competitive edge. Don’t forget to download your free ebook, “Demystifying Machine Learning.” Or, if you’re ready, get started today with a guided trial for building your own data lake and trying machine learning techniques.

This article features content and writing by Wes Prichard and Peter Jeffcock.


The Key to Innovative Companies? A Love of Data

By John Abel

If the business world seems to move much faster these days, that’s because it does. For a long time, change happened slowly. Take economic cycles—in the 1900s it took about 19 years to go from the bottom of a recession to the next peak. Life moved at a steady, controllable pace. Businesses knew who their competitors were, and had plenty of time to adapt to the newcomers.

It was all so simple.

What the Cloud Meant for Data

But then new technological innovations came along, swiftly followed by the cloud. And with it came gargantuan amounts of data to collect and crunch, at a fraction of the former cost. No longer did innovation mean swallowing up budgets and time. Barriers to entering new markets came crashing down, accelerating ambitions and innovation even more. Now, adopting emerging technologies is a necessity, not a choice. Wait for something to hit the mainstream, and your business won’t be looking at the menu; it’ll be on it, and about to be devoured.

Join Oracle at London Tech Week 2018 to learn about trends and technology

If that sounds too dramatic, then just ask the people who once worked for Kodak, Blockbuster, or Woolworths … iconic brands that didn’t innovate. MySpace and Friends Reunited, anyone? Such is the speed of change that even the disruptors get disrupted. We used to be stunned when Amazon, eBay, and Groupon took just a decade to get to $10 billion in annual revenues. Now there are about 30 companies that have gone from zero to $1 billion in four months.

They all have one thing in common.

They are data crazy.

The Fastest-Growing Companies Innovate With Data

These companies have realized that the way to take on their competitors is to spot patterns and anomalies in data faster and more regularly than the next guy. One of the fastest-growing and most innovative companies right now (though in the time it takes you to read this article, it might all have changed!) is Jet2.com—and all they do is use AI and machine learning to predict the price changes of flights, and beat the market.

When I started my career—when I’d buy my clothes at Woolworths and my camera film from Kodak—doing what Jet2.com does would have meant some serious spend. But it cost them barely anything to get started, just some basic data tools and algorithm know-how. And now they’re taking market share at supersonic speed.

Let’s look at China: even as recently as 2011, its big banks dominated the consumer payment space. Now, they don’t. And which payment solution is causing them to worry? WeChat—a messaging app whose capabilities have grown way beyond its original purpose. Out-mastering your enemy is hard enough when you know who they are. But when it could be anyone…

How to Make Use of Your Data

Opportunity or threat, defend or attack; either way, action has to be the answer. But hold on … I’m a food retailer, I’m a car manufacturer, my business is making medicines, or building houses or giving financial advice. What do I know about all this data stuff and how do I innovate with it?

You don’t need to. But you do need a partner who does. A partner who has the technology, the expertise, and the time you don’t. Who has the keys to unlock the treasure trove of secrets lurking in your data. Who has the appetite and know-how to innovate at breakneck speed, and take you with them.

Who? How about the people behind the world’s most popular database?

Learn more about Oracle’s presence at London Technology Week.


What An Autonomous Database Means To Me: 5 Expert Views

Jeffrey Erickson

Director of Content Strategy

This article originally appeared on Forbes on Oracle Voice. http://ora.cl/xt3IY

Larry Ellison, Oracle’s cofounder, executive chairman, and chief technology officer, has said Oracle’s new autonomous database ranks among “the most important things we’ve ever done.” That’s because it deploys, optimizes, patches, and secures itself with no human intervention, bringing new levels of performance, security, and efficiency.

So how is this autonomous database playing in the real world? Oracle Autonomous Data Warehouse Cloud Service has been available since March. I talked with five database users at this year’s Collaborate 18 about their reaction to the technology and where they see this broader autonomous computing push going:

Jim Czuprynski: What Do You Need DBAs For Now?

After plying his trade as a database administrator for 20 years, Jim Czuprynski considers himself a DBA’s DBA. “I know the daily stresses and joys of being a DBA,” he says.

So Czuprynski, an enterprise architect at Vion Corporation and a technology blogger, admits that when he first encountered autonomous database technology, his knee-jerk reaction was to ask, “If a computer can do what I can do, then what do you need me for?”

It didn’t take him long to change his tune. “Oracle is simply taking what they’ve been telling us for years are best practices and having the machines just do it,” he says. The autonomous machines “never get tired and can watch everything all at once.”

For example, he says, “the machines are going to watch your database perform against its application workload over time and then it’s going to react to it, subtly, with machine learning,” to tune and better secure the database. “Why wouldn’t I go for that?” he now asks. “Why wouldn’t I want to just turn that over to something that’s smarter than me?”

That still leaves plenty for Czuprynski to do. “It doesn’t mean I don’t have to think out the data model, it doesn’t mean I don’t have to write better application code against the database” to make the applications faster or more reliable, he says.

In the end, autonomous database technology is about giving time back to data experts to do more important work. “You’re going to have a lot more time to spend with developers and with the people that are running the business,” Czuprynski concludes. And that, he says, is a good thing for a DBA’s career.

Nitin Vengurlekar: Autonomous Is About Business Agility

Nitin Vengurlekar is CTO at a firm that helps its clients get the most out of their data. “To me, autonomous database equates to a faster speed of business,” he says.

Vengurlekar likes cloud services in general because they provide infrastructure on demand, “so I don’t have to wait weeks or months for technicians to set up the environment,” he says.

Autonomous database technology turns that speed up a notch higher, by taking the database tuning and management off his plate while giving him the three things he really wants in technology for his business: “I want it easy, fast, and elastic,” he says.

Vengurlekar gets easy and fast with the Oracle Autonomous Data Warehouse Cloud Service (ADW) because he can “work with data right away without having to assign DBAs to make sure it’s set up,” he says.

“And I want to grow and shrink the data warehouse based on what my needs are,” he says, whether that’s the end of the year or the month, or different holiday seasons. “The ADW does that really well, too.”

For speed, Vengurlekar ran his own benchmarks to understand how Oracle Autonomous Data Warehouse compares to other cloud-based data warehouses: “How many queries can I run in an hour? If I have to pay for the service for an hour, how much can I churn through with it?” He found that Oracle lives up to its claim of running queries many times faster than the other data warehouses he tried.

The speed means, “I can do faster analytics on diverse data sets and have power for things like faster visual analytics and predictive pattern management,” he says.

Russ Lowenthal: Patching Is the Killer App

An autonomous database is worth it for patching alone, says Russ Lowenthal, a director of database security at Oracle who hears from many database users on the subject.

“If I’m a CIO, my number one concern is that a data breach like those in the news lately will happen at my company. That means I’ve got a system out there, and it’s got a known vulnerability and a patch exists for it, but I haven’t applied the patch to it yet,” he says. “That’s my nightmare.”

An autonomous database from Oracle patches itself as soon as a vulnerability is detected and a patch is ready, without taking the system offline. “That immediately takes a huge burden off of the shoulders of CIOs and DBAs,” says Lowenthal.

Patching is hard. To make the case, Lowenthal gives an extreme—but very real—example: “I’ve got one customer with 17,000 Oracle databases. We release patches four times a year. If they patch each database four times per year and each patch takes an hour to apply, that’s 35 full-time workers doing nothing but patching.”
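
The arithmetic checks out: 17,000 databases × 4 patches a year × 1 hour per patch comes to 68,000 hours, or roughly 35 people working 2,000 hours each.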

What CIO wouldn’t want to put those 35 people “on cool stuff like data engineering or data mining and other valuable, business-centric functions?”

Shrinking the human touch points and thus human hours needed for manual tasks like patching goes a long way toward improving security and reducing error. Plus, he adds, “I don’t know any DBA who enjoys patching.”

Michelle Malcher: Security Functions Deployed Without Effort

Michelle Malcher agrees that patching is a “killer app” of autonomous databases but says there’s a lot more to the security story. Another benefit is that services like the Oracle Autonomous Database and Data Warehouse Service run all available security features of the database by default. “And because there are no humans involved, you avoid exploitable mistakes in the setup,” says Malcher, a data security architect at Extreme Scale Solutions.

Having database options like encryption, secure backups, and database vault turned on automatically is a great start to a more secure environment, Malcher says. The security posture is further improved by managing the database with log monitoring and machine learning in Oracle Cloud.

Once you deploy your database in a secure configuration, the system monitors any changes from the initial install. “It will look for data moves that are at strange times or users that are changing permissions and it will note those,” she says. If it raises a false positive, for example, the system learns that behavior as normal. “It learns patterns of behavior over time.”

Dan Vlamis: I Want More Autonomous Technology

Once Dan Vlamis spent time working with Oracle Autonomous Data Warehouse Cloud, he began asking himself why more of his technology isn’t autonomous.

Vlamis’s firm helps companies see into their corporate data by building business intelligence dashboards that are easy on the eyes and tell a good story.

“We found that we could load the client records very quickly and do the analysis without building indexes or partitioning the data or worrying about performance,” Vlamis says of using the autonomous data warehouse cloud service.

“That got me excited about other ways my technology could step in and take over mundane tasks,” he says. For example, he says “when we build a dashboard, we have to figure out what color palette and how many colors we want to use.” He’d like a future autonomous system to figure out what to do by looking at the types of measures and key performance indicators he’s after. “It would know that I need a divergent color scheme or a qualitative color scheme and just say ‘here, I’ll just do it for you,’” he says.

Now that he’s used an autonomous cloud service, he’s hooked: “I love anything that frees me up to do other things in my business.”

Jeff Erickson is editor-at-large for Oracle.


Top 5 Industry Early Adopters of Autonomous Systems

BY MONICA MEHTA

Automation has already transformed industries in which complexity and performance demands must meet the challenges of scarcer resources, narrower profit margins and expanding product volumes. Now the state of the art is beginning to move to autonomous technologies: driverless vehicles, self-tuning databases, adaptive robots and the like.

While automation involves programming a system to perform specific tasks, autonomous systems are programmed to perform automated tasks, accommodate variation, and self-correct or self-learn with little or no human intervention.

Which industries are ahead of the autonomous curve? These five industries stand out.

Information Technology

In the IT industry, the pioneering product is Oracle’s Autonomous Data Warehouse Cloud, a cloud-based database that configures, optimizes and patches itself with minimal human intervention. Oracle Executive Chairman and CTO Larry Ellison says the machine learning technology that underpins the company’s autonomous data warehouse, as well as autonomous integration, developer, mobile and other platform services that will follow, is “as revolutionary as the internet.”

Monica Kumar, vice president, Oracle Cloud Platform, explains why the company’s autonomous database will become so important across industries. “Data is doubling every two years and the way data is exploding presents both an opportunity and a challenge to organizations,” she says. “It could be a gold mine of information, and it could become hard to store, manage and effectively analyze large volumes and types of data in a timely manner.”

Database administrators now spend almost 75 percent of their time maintaining systems instead of focusing on higher-level work, Kumar says. As data becomes even more complex and fragmented, she says, Oracle Autonomous Database will “automate the administrative tasks so DBAs and IT can now focus on getting insights from the data, architecting the applications and data, building security best practices and supporting their business users better.”

Automotive

The self-driving car is the most well-known autonomous machine, developed by early market leaders such as General Motors, Waymo, Ford and Volkswagen. Self-driving vehicles can navigate roadways as well as detect and respond to traffic signals, pedestrians, impediments and other vehicles, using a combination of techniques such as GPS, lasers and odometry.

A 2016 McKinsey report predicts that by 2030, as many as 15 percent of cars sold will be fully autonomous, up to 50 percent electrified and up to 10 percent shared (reducing sales of private-use vehicles). Those advances will lead to a new economy centered on mobility.

Toyota is heralding that shift with its e-Palette, a concept vehicle whose interchangeable interiors allow it to be used for a variety of purposes, including taxi, delivery van, store, office and hotel room. Toyota’s Mobility Services Platform gathers traffic, route-preference and safety data, as well as analyzing usage trends to determine demand for specific services. “We are not offering the vehicles to people just as a moving tool, but as a way of user-friendly mobility that will greatly increase people’s freedom,” says Keiji Yamamoto, Executive Vice President, Connected Company, and Managing Officer at Toyota.

The mobility economy will be about ecommerce, monetization and personalization—all of which need massive amounts of data storage, says Dave Schoonover, Oracle’s global director of automotive industry solutions. “Autonomous vehicles will end up generating hundreds of exabytes of data each year,” Schoonover says. “Companies need to consider where that data is going to go and how to extract value from it.”

Manufacturing

Robots that perform repetitive tasks have been on manufacturing lines for decades. Today’s robots can accommodate variations and change their behavior based on predefined algorithms.

Meanwhile, autonomous robots will transform supply chains, particularly ones with “lower-value, potentially dangerous or high-risk tasks,” as the technology becomes more accessible and reliable, according to a 2017 Deloitte report.

For example, robots with haptic sensors can grasp objects as fragile as eggshells and assemble products with multi-surface parts. Facial recognition software will let robots judge from a person’s face whether or not they’re doing a job correctly. “But that’s just scratching the surface of their autonomous capabilities,” says Mike Saslavsky, Oracle senior director, high tech.

Leading manufacturers are looking to invest in autonomous robots “to achieve end-to-end efficiency, productivity and risk reduction,” according to the Deloitte report. “When considering the level of automation to bring to your organization, it is important to define the optimal labor-to-automation mix to achieve desired benefits.”

Retail

While a robot won’t be checking you out at the drugstore anytime soon, autonomous systems already have a foothold in retail. At select Lowe’s stores, for example, “LoweBots” are helping customers find products and helping employees monitor inventory levels. Kroger, the biggest US supermarket chain, is testing “intelligent shelf” technology that digitally displays product and price information, simplifying a labor-intensive task.

Such systems are semi-autonomous, requiring a degree of human intervention, notes Michael Forhez, Oracle global managing director, consumer markets. People still have to set the price changes and program the shelves to change accordingly. But with machine learning algorithms Oracle is developing, “it’s not beyond reason that these systems could someday autonomously trigger price and promotion differentiation without any human interaction whatsoever,” Forhez says.

In the near future, he says, expect to see retail innovations that “anticipate our needs and deliver what we need before we even know we need it.”

Healthcare

Healthcare providers and their patients are seeing the benefits of semi-autonomous systems as well. For example, chatbots powered by artificial intelligence schedule appointments and provide relevant, easy-to-understand information about medical conditions. Sensors monitor patients remotely and electronically transmit collected information to health professionals.

“The latest digital innovations are creating opportunities for reshaping how healthcare organizations deliver patient services, improve outcomes, enhance clinician satisfaction and manage costs more effectively,” says Michael Walker, global lead of Oracle’s Healthcare & Life Sciences unit.

The Business of Disruption

Widespread adoption of autonomous systems is only now becoming possible because of recent advances in artificial intelligence (AI). “For a long time, people looked at the promise of AI, but it never quite delivered on its promise until very recently,” Ellison says. “With the advent of the latest generation of AI—neural networks combined with machine learning—we are doing things with computers that hitherto were considered unimaginable.”

For companies in the industries cited above, it’s now just a matter of when they will adopt AI and autonomous systems, not if. “If you are not in the business of disruption, you can bet someone’s got their sights on your business and that you will be disrupted,” Oracle’s Forhez says. “You’re going to have to sharpen your focus and do things quickly, because the breadth of experimentation in the field of autonomous technology is already breathtaking.”

This article originally appeared on the Oracle Insights blog on WSJ.com.

WSJ. Custom Studios is a unit of The Wall Street Journal advertising department. The Wall Street Journal news organization was not involved in the creation of this content.


Announcing Oracle APEX 18.1

Ilona Gabinsky

Principal Product Marketing Manager

Today we have guest blogger – Joel Kallman – Senior Director, Software Development

Oracle Application Express (APEX) 18.1 is now generally available! APEX enables you to develop, design and deploy beautiful, responsive, data-driven desktop and mobile applications using only a browser. This release of APEX is a dramatic leap forward in both the ease of integration with remote data sources, and the easy inclusion of robust, high-quality application features.

Keeping up with the rapidly changing industry, APEX now makes it easier than ever to build attractive and scalable applications which integrate data from anywhere – within your Oracle database, from a remote Oracle database, or from any REST Service, all with no coding. And the new APEX 18.1 enables you to quickly add higher-level features which are common to many applications – delivering a rich and powerful end-user experience without writing a line of code.

“Over a half million developers are building Oracle Database applications today using Oracle Application Express (APEX). Oracle APEX is a low code, high productivity app dev tool which combines rich declarative UI components with SQL data access. With the new 18.1 release, Oracle APEX can now integrate data from REST services with data from SQL queries. This new functionality is eagerly awaited by the APEX developer community”, said Andy Mendelsohn, Executive Vice President of Database Server Technologies at Oracle Corporation.

Some of the major improvements to Oracle Application Express 18.1 include:

Application Features

It has always been easy to add components to an APEX application – a chart, a form, a report. But in APEX 18.1, you now have the ability to add higher-level application features to your app, including access control, feedback, activity reporting, email reporting, dynamic user interface selection, and more. In addition to the existing reporting and data visualization components, you can now create an application with a “cards” report interface, a dashboard, and a timeline report. The result? An easily-created powerful and rich application, all without writing a single line of code.

REST Enabled SQL Support

Oracle REST Data Services (ORDS) REST-Enabled SQL Services enable the execution of SQL in remote Oracle databases, over HTTP and REST. You can POST SQL statements to the service, and the service then runs the SQL statements against the Oracle database and returns the results to the client in JSON format.

In APEX 18.1, you can build charts, reports, calendars, trees and even invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL Services. No longer is a database link necessary to include data from remote database objects in your APEX application – it can all be done seamlessly via REST Enabled SQL.
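
As a concrete illustration of that POST mechanism, here’s a minimal sketch in Python using the requests library. The host, schema alias, and credentials are hypothetical, and the exact JSON shape can vary slightly between ORDS versions:

    import requests

    # POST a SQL statement to a REST-enabled schema's /_/sql endpoint.
    response = requests.post(
        "https://example.com/ords/hr/_/sql",
        auth=("hr", "my_password"),
        headers={"Content-Type": "application/sql"},
        data="select employee_id, last_name from employees where rownum <= 5",
    )

    # The service runs the statement in the remote database and returns JSON.
    statements = response.json()["items"]           # one entry per statement
    print(statements[0]["resultSet"]["items"])      # rows as a list of dicts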

Web Source Modules

APEX now offers the ability to declaratively access data services from a variety of REST endpoints, including ordinary REST data feeds, REST Services from Oracle REST Data Services, and Oracle Cloud Applications REST Services. In addition to supporting smart caching rules for remote REST data, APEX also offers the unique ability to directly manipulate the results of REST data sources using industry standard SQL.

REST Workshop

APEX includes a completely rearchitected REST Workshop to assist in the creation of REST services against your Oracle database objects. The REST definitions are managed in a single repository, and the same definitions can be edited via the APEX REST Workshop, SQL Developer, or documented APIs. Users can apply the data management skills they already possess, such as writing SQL and PL/SQL, to define RESTful services for their database. The new REST Workshop also includes the ability to generate Swagger documentation against your REST definitions, all with the click of a button.


Application Builder Improvements

In Oracle Application Express 18.1, wizards have been streamlined with smarter defaults and fewer steps, enabling developers to create components more quickly than ever before. There have also been a number of usability enhancements to Page Designer, including greater use of color and graphics on page elements, and “Sticky Filter,” which maintains a specific filter in the property editor. These features are designed to enhance the overall developer experience and improve development productivity. APEX Spotlight Search provides quick navigation and a unified search experience across the entire APEX interface.

Social Authentication

APEX 18.1 introduces a new native authentication scheme, Social Sign-In. Developers can now easily create APEX applications which can use Oracle Identity Cloud Service, Google, Facebook, generic OpenID Connect and generic OAuth2 as the authentication method, all with no coding.

Charts

The data visualization engine of Oracle Application Express is powered by Oracle JET (JavaScript Extension Toolkit), a modular open source toolkit based on modern JavaScript, CSS3, and HTML5 design and development principles. The charts in APEX are fully HTML5 capable and work in any modern browser, regardless of platform or screen size. These charts provide numerous ways to visualize a data set, including bar, line, area, range, combination, scatter, bubble, polar, radar, pie, funnel, and stock charts. APEX 18.1 features an upgraded Oracle JET 4.2 engine with updated charts and APIs. There are also new chart types, including Gantt, box plot, and pyramid, and better support for multi-series, sparse data sets.

Mobile UI

APEX 18.1 introduces many new UI components to assist in the creation of mobile applications. Three new component types (ListView, Column Toggle, and Reflow Report) can now be used natively with the Universal Theme and are commonly used in mobile applications. Additional mobile-focused enhancements have been made to the APEX Universal Theme, namely mobile page headers and footers, which remain consistently displayed on mobile devices, and floating item label templates, which optimize the information presented on a mobile screen. Lastly, APEX 18.1 also includes declarative support for touch-based dynamic actions (tap and double tap, press, swipe, and pan), supporting the creation of rich and functional mobile applications.


Font APEX

Font APEX is a collection of over 1,000 high-quality icons, many specifically created for use in business applications. Font APEX in APEX 18.1 includes a new set of high-resolution 32 x 32 icons with much greater detail, and the correctly sized font is selected automatically based on where it is used in your APEX application.

Accessibility

APEX 18.1 includes a collection of tests in the APEX Advisor which can be used to identify common accessibility issues in an APEX application, including missing headers and titles, and more. This release also deprecates the separate accessibility modes, as a distinct mode is no longer necessary for applications to be accessible.

Upgrading

If you’re an existing Oracle APEX customer, upgrading to APEX 18.1 is as simple as installing the latest version. The APEX engine will automatically be upgraded and your existing applications will look and run exactly as they did in the earlier versions of APEX.

“We believe that APEX-based PaaS solutions provide a complete platform for extending Oracle’s ERP Cloud. APEX 18.1 introduces two new features that make it a landmark release for our customers. REST Service Consumption gives us the ability to build APEX reports from REST services as if the data were in the local database. This makes embedding data from a REST service directly into an ERP Cloud page much simpler. REST enabled SQL allows us to incorporate data from any Cloud or on-premise Oracle database into our Applications. We can’t wait to introduce APEX 18.1 to our customers!”, said Jon Dixon, co-founder of JMJ Cloud.

Additional Information

Application Express (APEX) is the low code rapid app dev platform which can run in any Oracle Database and is included with every Oracle Database Cloud Service. APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices. To learn more about Oracle Application Express, visit apex.oracle.com. To learn more about Oracle Database Cloud, visit cloud.oracle.com/database.


Forget the Turing Test—give AI the F. Scott Fitzgerald Test instead

Ilona Gabinsky

Principal Product Marketing Manager

Written by Paul Sonderegger – Big data strategist, Oracle

F. Scott Fitzgerald pinned human intelligence on its tolerance of paradox. But what kind of artificial intelligence could pass his test?

In his 1936 essay “The Crack-Up,” Fitzgerald writes that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” For example, he says you should “be able to see that things are hopeless and yet be determined to make them otherwise.”

He confesses he’s lost this ability—and as a result, himself.

Fitzgerald’s point is not that he needs a better model of the world, but that he needs many models and the freedom to switch among them. This is what allows us to forge ahead despite unexpected obstacles, conflicting priorities, or, in Fitzgerald’s case, hitting his forties and feeling like someone changed the rules of the game while he wasn’t looking.

Fitzgerald, having lost his ability to balance opposing ideas, falls into a drab, routinized existence. Every moment, from his morning routine to dinner with friends, becomes a forced act. He mimics the life of a successful literary man without actually living it.

Take a simple example. When a navigation app redirects stop-and-go traffic from the New Jersey Turnpike onto local roads in the town of Leonia, otherwise quiet neighborhoods become overrun with shortcut-seeking app-watchers. A compassionate human might weigh the benefit of a shorter trip against the annoyance to hapless suburbanites. But a naïve AI, focused only on finding the fastest travel time, won’t. Local authorities are now plotting to fine non-residents caught driving through the area during rush hour, even though they’re just following the directions on their smartphones.

Read More


7 Machine Learning Best Practices

Netflix’s famous algorithm challenge awarded a million dollars to the best algorithm for predicting user ratings for films. But did you know that the winning algorithm was never put into production?

Netflix reported that the results of the algorithm just didn’t seem to justify the engineering effort needed to bring them to a production environment. That’s one of the big problems with machine learning.

At your company, you can create the most elegant machine learning model anyone has ever seen. It just won’t matter if you never deploy and operationalize it. That’s no easy feat, which is why we’re presenting you with seven machine learning best practices.

Download your free ebook, “Demystifying Machine Learning.”

At the most recent Data and Analytics Summit, we caught up with Charlie Berger, Senior Director of Product Management for Data Mining and Advanced Analytics, to find out more. This article is based on what he had to say.

Putting your model into practice might take longer than you think. A TDWI report found that 28% of respondents took three to five months to put their model into operational use. And almost 15% needed longer than nine months.

Graph on Machine Learning Operational Use

So what can you do to start deploying your machine learning faster?

We’ve laid out our tips here:

1. Don’t Forget to Actually Get Started

In the following points, we’re going to give you a list of different ways to ensure your machine learning models are used in the best way. But we’re starting out with the most important point of all.

The truth is that at this point in machine learning, many people never get started at all. This happens for many reasons. The technology is complicated, the buy-in perhaps isn’t there, or people are just trying too hard to get everything e-x-a-c-t-l-y right. So here’s Charlie’s recommendation:

Get started, even if you know that you’ll have to rebuild the model once a month. The learning you gain from this will be invaluable.

2. Start with a Business Problem Statement and Establish the Right Success Metrics

Starting with a business problem is a common machine learning best practice. But it’s common precisely because it’s so essential and yet many people de-prioritize it.

Think about this quote, “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

Now be sure that you’re applying it to your machine learning scenarios. Below, we have a list of poorly defined problem statements and examples of ways to define them in a more specific way.

Machine Learning Problem Statements

Think about what your definition of profitability is. For example, we recently talked to a nationwide chain of fast-casual restaurants that wanted to look at increasing their soft drink sales. In that case, we had to consider carefully the implications of defining the basket. Is the transaction a single meal, or six meals for a family? This matters because it affects how you will display the results. You’ll have to think about how to approach the problem and ultimately operationalize it.

Beyond establishing success metrics, you need to establish the right ones. Metrics will help you track progress, but does improving the metric actually improve the end-user experience? For example, your traditional accuracy measures might encompass precision and squared error. But if you’re trying to create a model for airline price optimization, those measures don’t matter if your cost per purchase and overall purchases aren’t going up.
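
Here’s a toy sketch of that gap, with entirely made-up numbers: the model metric (squared error) looks healthy while the business metric barely moves.

    from sklearn.metrics import mean_squared_error

    # Model metric: how close are predicted prices to the (made-up) ideal?
    ideal_prices = [104.0, 99.0, 121.0]
    predicted_prices = [105.0, 98.0, 120.0]
    print("model metric (MSE):", mean_squared_error(ideal_prices, predicted_prices))

    # Business metric: what actually happened after deployment?
    purchases_before, purchases_after = 480, 478
    print("change in overall purchases:", purchases_after - purchases_before)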

3. Don’t Move Your Data – Move the Algorithms

The Achilles’ heel of predictive modeling is that it’s a two-step process. First you build the model, generally on sample data that can run in numbers ranging from the hundreds to the millions. Then, once the predictive model is built, data scientists have to apply it. However, much of that data resides in a database somewhere.

Let’s say you want data on all of the people in the US. There are roughly 330 million people in the US—where does that data reside? Probably in a database somewhere.

Where does your predictive model reside?

What usually happens is that people will take all of their data out of the database so they can run their equations with their model. Then they’ll have to import the results back into the database to make those predictions. And that process takes hours and hours and days and days, thus reducing the efficacy of the models you’ve built.

However, running your equations inside the database has significant advantages. Running the equations through the kernel of the database takes a few seconds, versus the hours it would take to export your data. The database can do all of your math, too, and build the model inside the database. This means one world for the data scientist and the database administrator.

By keeping your data within your database, Hadoop, or object storage, you can build models and score within the database, and use R packages with data-parallel invocations. This allows you to eliminate data duplication and separate analytical servers (by not moving data), and lets you embed data prep, build models, and score them in just hours.
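
To make the “move the algorithm, not the data” idea concrete, here’s a minimal, database-agnostic sketch in Python, using the built-in SQLite module as a stand-in for a production database (this is not a specific vendor’s in-database ML API). The table and the linear model’s coefficients are hypothetical; the point is that the scoring equation is pushed into SQL so the math runs where the data lives:

    import sqlite3

    # Stand-in database with a customers table.
    db = sqlite3.connect(":memory:")
    db.execute("create table customers (id integer, age real, balance real)")
    db.executemany("insert into customers values (?, ?, ?)",
                   [(1, 34, 1200.0), (2, 51, 300.0), (3, 27, 8700.0)])

    # Coefficients of a hypothetical linear scoring model built on a sample.
    intercept, w_age, w_balance = -1.2, 0.03, 0.0004

    # Instead of exporting every row to score it in application code,
    # push the scoring equation into the database as a SQL expression.
    rows = db.execute(
        "select id, ? + ? * age + ? * balance as score from customers",
        (intercept, w_age, w_balance),
    ).fetchall()
    print(rows)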

4. Assemble the Right Data

As James Taylor and Neil Raden wrote in Smart Enough Systems, cataloging everything you have and deciding what data is important is the wrong way to go about things. The right way is to work backward from the solution, define the problem explicitly, and map out the data needed to populate the investigation and models.

And then, it’s time for some collaboration with other teams.

Machine Learning Collaboration Teams

Here’s where you can potentially start to get bogged down. So we will refer to point number 1, which says, “Don’t forget to actually get started.” At the same time, assembling the right data is very important to your success.

To figure out the right data to use to populate your investigation and models, you will want to talk to people in three major areas: the business domain, information technology, and data analysis.

Business domain—these are the people who know the business.

  • Marketing and sales
  • Customer service
  • Operations

Information technology—the people who have access to data.

  • Database administrators

Data analysts—the people who know the data.

  • Statisticians
  • Data miners
  • Data scientists

You need their active participation. Without it, you’ll get comments like:

  • These leads are no good
  • That data is old
  • This model isn’t accurate enough
  • Why didn’t you use this data?

You’ve heard it all before.

5. Create New Derived Variables

You may think, I have all this data already at my fingertips. What more do I need?

But creating new derived variables can help you gain much more insightful information. For example, you might be trying to predict the number of newspapers and magazines sold the next day. Here’s the information you already have:

  • Brick-and-mortar store or kiosk
  • Sell lottery tickets?
  • Amount of the current lottery prize

Sure, you can make a guess based on that information. But if you first compare the amount of the current lottery prize against typical prize amounts, and then feed that derived variable to your model along with the variables you already have, you’ll get a much more accurate answer.
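
Here’s a minimal pandas sketch of that derived variable, with made-up columns and numbers:

    import pandas as pd

    df = pd.DataFrame({
        "store_type": ["kiosk", "store", "kiosk"],
        "sells_lottery": [True, True, False],
        "current_prize": [400_000_000, 400_000_000, 0],
    })

    # A hypothetical "typical" prize, e.g. a historical median.
    typical_prize = 50_000_000

    # The derived variable: how unusual is today's prize?
    df["prize_vs_typical"] = df["current_prize"] / typical_prize
    print(df)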

6. Consider the Issues and Test Before Launch

Ideally, you should be able to A/B test with two or more models when you start out. Not only will you know whether you’re doing it right, but you’ll also feel more confident in the results.

But going beyond thorough testing, you should also have a plan in place for when things go wrong. For example, suppose your metrics start dropping. Several things go into this. You’ll need an alert of some sort to ensure the drop can be investigated ASAP. And when a VP comes into your office wanting to know what happened, you’re going to have to explain it to someone who likely doesn’t have an engineering background.
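
As a sketch of the kind of alert described above (the thresholds and metric values here are hypothetical):

    def check_model_health(daily_metric: float, baseline: float,
                           tolerance: float = 0.10) -> None:
        """Flag the metric when it falls more than `tolerance` below baseline."""
        if daily_metric < baseline * (1 - tolerance):
            # In production this would page someone or open a ticket.
            print(f"ALERT: metric {daily_metric:.3f} is more than "
                  f"{tolerance:.0%} below baseline {baseline:.3f}")

    check_model_health(daily_metric=0.71, baseline=0.83)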

Then, of course, there are the issues you need to plan for before launch. Complying with regulations is one of them. For example, let’s say you’re applying for an auto loan and are denied credit. Under the new GDPR regulations, you have the right to know why. One of the problems with machine learning is that it can seem like a black box where even the engineers and data scientists can’t say why certain decisions were made. However, some vendors can help by ensuring your algorithms provide prediction details.

7. Deploy and Automate Enterprise-Wide

Once you deploy, it’s best to go beyond the data analyst or data scientist.

What we mean by that is: always, always think about how you can distribute predictions and actionable insights throughout the enterprise. It’s where the data is and when it’s available that makes it valuable, not the fact that it exists. You don’t want to be the one sitting in the ivory tower, occasionally sprinkling insights. You want to be everywhere, with everyone asking for more insights—in short, you want to make sure you’re indispensable and extremely valuable.

Given that we all only have so much time, it’s easiest if you can automate this. Create dashboards. Incorporate these insights into enterprise applications. See if you can become a part of customer touch points, like an ATM recognizing that a customer regularly withdraws $100 every Friday night and likes $500 after every payday.

Conclusion

Here are the core ingredients of good machine learning. You need good data, or you’re nowhere. You need to put it somewhere, like a database or object storage. You need deep knowledge of the data and what to do with it, whether that’s creating new derived variables or choosing the right algorithms to make use of them. Then you need to actually put them to work, get great insights, and spread those insights across the organization.

The hardest part of this is launching your machine learning project. We hope that by creating this article, we’ve helped you out with the steps to success. If you have any other questions or you’d like to see our machine learning software, feel free to contact us.

You can also refer back to some of the articles we’ve created on machine learning best practices and the challenges around them. Or, download your free ebook, “Demystifying Machine Learning.”


Big Data Preparation: The Key to Unlocking Value from Your Data

William Trotman

Marketing Director, Big Data & Analytics EMEA

Making a success of big data analytics is a bit like constructing a skyscraper. Foundations need to be laid and the land prepared for construction, or else the building will rest on shaky ground.

Download your free book, “Driving Growth & Innovation with Big Data”

The success of any analytics project depends on the quality and relevance of the data it’s built upon. The issue today is that companies collect an exponentially growing volume and variety of information in many different formats and are struggling to convert it all into usable insight. In short, they’re having trouble preparing their big data and unlocking the value.

Difficulties with Big Data Preparation

For instance, before analysis a business may need to aggregate data from diverse sources, remove or complete empty data fields, de-duplicate records, or transform data into a consistent format.
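
Here’s a minimal pandas sketch of those preparation steps; the file names and columns are hypothetical:

    import pandas as pd

    # Aggregate data from diverse sources.
    crm = pd.read_csv("crm_export.csv")
    web = pd.read_csv("web_signups.csv")
    customers = pd.concat([crm, web], ignore_index=True)

    # Complete empty fields and remove duplicate records.
    customers["country"] = customers["country"].fillna("unknown")
    customers = customers.drop_duplicates(subset="email")

    # Transform data into a consistent format.
    customers["signup_date"] = pd.to_datetime(customers["signup_date"])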

These tasks have traditionally relied on the expertise of the IT department – even as ownership of analytics projects has shifted towards line of business leaders. But as volumes of data grow, preparing data in these ways becomes more laborious. With this mounting demand, IT teams can take weeks to fulfill requests.

Businesses have recognized this and are investing in data preparation technologies. Two-thirds say they have implemented a data preparation or wrangling solution to manage a growing volume of data, and 56% have done so to help them work with multiple data sources, according to research from Forrester.

Today’s data preparation tools aren’t restricted to those with IT expertise, and they allow companies to spread their analytics processes to individual lines of business. Not only does this clear the data bottleneck, but analyses are managed by subject matter experts with a keen eye for the most valuable insights.

How Companies Use Big Data for Business Benefits

As organizations are overwhelmed by the flood of data, it’s also important to unify data from the various sources and ensure they are accessible and consistent across the business. For example, CaixaBank is storing vast pools of data on one consolidated platform – commonly referred to as a data lake – so each of its business units can access, analyze, and digest relevant data as needed.

From here, businesses can start experimenting with the data to explore new ideas. For instance, Telefonica worked with a single view of its data to test a new algorithm designed to create personalized, TV-content-optimized pricing models for customers. After successful testing, Telefonica made the algorithm live and has since seen higher TV viewing rates and improved customer satisfaction, while also reducing customer churn by 20%.

In addition to unlocking the commercial value of data, there is a strong regulatory driver for companies to gain more control and oversight of their data. When the EU’s GDPR comes into effect this month, companies will face harsh penalties if they are not transparent about the way they collect, use, and share customer information.

Conclusion

To reach skyscraper heights and build the businesses of tomorrow, data preparation must rise up the corporate agenda and be a priority for all companies looking to unlock the value of their ever-increasing volumes of data.

From data scientists and analysts, who work closely with company data each day, to business leaders exploring new ways to improve the way they work, Oracle has a set of rich integrated solutions for everybody in your organization.

Read our ebook, “Driving Growth & Innovation With Big Data” to understand how Oracle’s Cloud Platform for Big Data helps companies uncover new benefits across their business.


Options for moving EBS, PeopleSoft applications to cloud – Conversation with Calix, Sherwin …

Ilona Gabinsky

Principal Product Marketing Manager

Today we have guest author – Navita Sood – Marketing Director, Cloud Business Group

Once you have decided you want to move to the cloud, the next step is deciding whether to lift and shift your existing workloads or move to a SaaS application. You can also extend your application with SaaS and then use PaaS offerings to replicate your customizations in the cloud. The choice depends on your environment, your business needs, the external factors affecting your business, how much you want to invest, and how much disruption you are willing to accept. It’s important to do a thorough analysis to optimize your ROI, and to involve all the stakeholders in the decision, because the cloud affects everyone from IT to the business. Lastly, once you choose your path, lay out a detailed execution plan with your vendor or implementation partner to make the move successful.

Recently I moderated a panel discussion at the Collaborate user group conference, where I invited three customers to talk about their unique paths to the cloud and how they implemented them. It was interesting to see how different reasons influenced their choices and affected their business outcomes. On the panel I had Ravi Gade, Sr. Director of Enterprise Applications at Calix; Vivek Puri, Manager – Database, Middleware & Engineered Systems at Sherwin Williams; and Arvind, Chief Solution Architect at Astute Business Solutions.

Calix moved their E-Business Suite workloads to SaaS, Sherwin Williams is lifting and shifting their E-Business Suite workloads to Oracle Cloud at Customer, and Astute moved their PeopleSoft application from AWS to Oracle Cloud. Although the three are in different stages of their cloud journeys, they have already started experiencing the benefits promised by the cloud.

Calix was undergoing a major business transformation: they were moving from a hardware business to software. They wanted a cloud solution that would support their transition. Although their applications were highly customized, they decided not to migrate anything that wasn’t critical for their business. Instead, they preferred to standardize their applications for ease of maintenance. Hence, they opted to move to SaaS. Three things helped them be successful in their journey:

  1. Involving the business from day one
  2. Working with Oracle’s cloud business group in the transformation
  3. Leveraging Oracle’s cloud integration services to integrate their on-premises and cloud applications

Since their transition two years ago, they have saved $2.5M per year with a 40% ROI from moving EBS to ERP Cloud. These savings came from data center costs, application support costs, and the staff needed for application support. They rebuilt only a few critical customizations in ERP Cloud, were up and running in the cloud in just a few months, and broke even in just 18 months. Their IT queue shrank from 500 tickets two years ago to fewer than 20 tickets today.


One of the questions from the audience was about job cuts as an outcome of moving to the cloud. Ravi was very frank in pointing out that in the two years since Calix embarked on its cloud journey, no one there has lost their job. In fact, roles became more prominent as Calix improved its productivity and introduced new revenue models. Calix saw its DBAs transform into data architects who now spend more time on tasks they enjoy, and all employees received training on new cloud services, which has also made them more valuable in the market.

Sherwin-Williams, being more risk-averse, wanted to move to the cloud cautiously. They weren’t ready to move their applications and all their data to the public cloud, so when they were considering an upgrade, they chose to start their cloud journey by migrating to Oracle Cloud at Customer. They began the move only six months ago and were Oracle’s first Cloud at Customer users. In just six months they are seeing the following benefits:

  1. Cloud at Customer served as a testing ground for evaluating the benefits of cloud for their business. One of their biggest concerns was integrating on-premises applications with the cloud, but the fact that everything ran homogeneously almost immediately helped every team build confidence in the cloud.
  2. Their applications were highly customized, with lots of engineered systems they weren’t willing to part with. A lift and shift let them move all those customizations as-is to the cloud, behind their own firewall, without any disruption or changes to business processes.
  3. They were able to future-proof their business and move away from maintaining those applications and the underlying infrastructure.
  4. Financially, it is a very attractive proposition: they are using the same underlying hardware in the cloud, but now pay for only a quarter of the rack, based on their requirements.

Next, they are evaluating a move of their EBS HR to HCM Cloud. For now, their peripheral applications will run on SaaS and their business-critical applications will stay on Cloud at Customer until they gain more confidence in moving everything to the cloud.

Astute, on the other hand, preferred lift and shift to minimize business disruption and preserve the same experience for their business users, while still receiving all the benefits of Oracle Cloud: automation, performance, speed, agility, and cost. They wanted out of the data center business three to four years ago, when they moved to AWS. Once Oracle Cloud became available, they quickly migrated so they could run their Oracle applications on Oracle Cloud, which enabled them to better serve their customers and offer the latest and greatest features available through PaaS and SaaS services. Rightsizing their infrastructure helped them increase utilization and save costs. Currently they are exploring adding chatbots to their PeopleSoft application to improve their customers’ experience.

There is no single best approach to moving to the cloud. Pick the most critical business problem you face today, see how the cloud can solve it, start your journey there, and then build your own path to the cloud. All paths lead to the same goal.



Threat Report: Companies Trust Cloud Security

Ilona Gabinsky

Principal Product Marketing Manager

Today’s guest blogger is Alan Zeichick, principal analyst at Camden Associates.

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough, as 66% of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications, and another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud.

“In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that finding down in more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Cyberattacks Reveal the Pace Gap

Two-thirds of organizations in the study report some type of past interruption due to a security incident, such as losing the ability to provide service, diminished employee productivity, or delays to IT projects. Just over half of the businesses say they’ve experienced a financial hit as a result, including lost shareholder value, the cost of data loss, or the cost of reputational damage.

Oracle’s Jensen says there’s a growing realization of a “pace gap” between how fast organizations can create and deploy new business applications and how fast they can secure those applications to meet the organization’s security and compliance targets. Security is lagging behind, and the gap is exacerbated by agile application development methodologies.

So should businesses slow down their deployment of new software? Jensen laughs at that suggestion. Instead, he calls for improving security training, processes, and technology.

“A priority area that falls down is training the average end users, because they’re the most vulnerable point of attack, and some of the most successful attacks leverage social engineering, such as phishing,” Jensen says.

When it comes to processes, companies must understand the security responsibility they share with their cloud providers.

As the Oracle-KPMG study explains, the line of demarcation between what cloud vendors and customers are responsible for securing differs across software as a service, infrastructure as a service, and platform as a service. With IaaS, for example, service providers “are generally responsible for securing the physical infrastructure up to and including the virtualization layer, with the customer then responsible for protecting the server workload,” the report says. “However, regardless of consumption model—IaaS, PaaS, and SaaS—the customer is generally responsible for data security and user access and identity management.”

Machine Learning and Automation Can Help

Meanwhile, emerging technologies can help close the pace gap by finding and addressing security issues in on-premises data centers, the cloud, and hybrid environments.

The study shows that 38% of organizations use behavioral analysis and anomaly detection tools, which can instantly determine when a user is acting in a suspicious manner. For example, if an employee has never tried to download a customer database to her laptop before but is suddenly doing so at 2:00 a.m.—well, even if she has the authority to do so, something doesn’t appear to be right there.
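
To make that concrete, here is a minimal sketch of the idea in Python, assuming audit events arrive as (user, action, hour-of-day) records. The function names, sample data, and crude never-seen-before rule are illustrative, not taken from any particular product:

    from collections import defaultdict

    def build_baseline(history):
        # Record which hours of the day each user has historically
        # performed each action.
        baseline = defaultdict(set)
        for user, action, hour in history:
            baseline[(user, action)].add(hour)
        return baseline

    def is_anomalous(event, baseline):
        # Flag an event if this user has never performed this action
        # at this hour of the day before.
        user, action, hour = event
        seen_hours = baseline.get((user, action))
        return seen_hours is None or hour not in seen_hours

    # Hypothetical history: Alice logs in during business hours.
    history = [("alice", "login", 8), ("alice", "login", 9), ("alice", "login", 10)]
    baseline = build_baseline(history)

    # A 2 a.m. download of the customer database has no precedent, so it
    # is flagged even though Alice may be authorized to perform it.
    print(is_anomalous(("alice", "download_customer_db", 2), baseline))  # True

Real tools replace the never-seen-before rule with statistical baselines and risk scores, but the principle is the same: compare each event against the user’s own history.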

Machine learning is another effective tool for reacting quickly to threats: ML algorithms can study tremendous quantities of data (such as transaction logs) and identify patterns. The Oracle-KPMG study shows that 47% of organizations are using machine learning for cybersecurity purposes.
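
As a rough illustration of that pattern-learning approach, here is a small sketch using scikit-learn’s IsolationForest trained on synthetic log features. The features and numbers are invented for the example and are not drawn from the Oracle-KPMG study:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Each row is one log entry: [hour of day, bytes transferred, requests/min].
    # Normal traffic clusters around business hours and typical volumes.
    normal_logs = np.column_stack([
        rng.normal(13, 2, 1000),
        rng.normal(5e5, 1e5, 1000),
        rng.normal(30, 5, 1000),
    ])

    # Learn the shape of "normal" so that rare combinations stand out.
    model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
    model.fit(normal_logs)

    # A 2 a.m. bulk transfer at an unusual request rate is labeled -1 (anomaly).
    suspicious = np.array([[2.0, 5e6, 200.0]])
    print(model.predict(suspicious))  # [-1]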

Automation is also key: The more that software can handle routine security tasks, the fewer human errors can creep into system configurations and alert responses. In the study, 84% of companies say they’re committed to increased levels of security automation.
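
As a simple sketch of what such automation can look like, the following audits every service configuration against one fixed policy on each run; the policy keys and configurations are hypothetical examples, not a real product’s settings:

    # Required security settings; in practice this policy would live in
    # version control alongside the infrastructure code.
    POLICY = {
        "tls_min_version": "1.2",
        "public_access": False,
        "encryption_at_rest": True,
    }

    def audit(name, config):
        # Return a human-readable violation for each setting that
        # deviates from the policy.
        return [
            f"{name}: {key} is {config.get(key)!r}, expected {required!r}"
            for key, required in POLICY.items()
            if config.get(key) != required
        ]

    configs = {
        "billing-db": {"tls_min_version": "1.2", "public_access": False,
                       "encryption_at_rest": True},
        "staging-bucket": {"tls_min_version": "1.0", "public_access": True,
                           "encryption_at_rest": True},
    }

    # Running this on every deploy catches configuration drift that a
    # manual review would eventually miss.
    for name, cfg in configs.items():
        for violation in audit(name, cfg):
            print(violation)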

Overall, the future of the cloud is bright when it comes to security. When the majority of organizations rate cloud security as better than their on-premises security, and when 90% of organizations categorize at least half of their cloud data as sensitive, we’re past the tipping point. Organizations must always remain vigilant, but the cloud has earned their trust.

Alan Zeichick is principal analyst at Camden Associates, a tech consultancy in Phoenix, Arizona, specializing in software development, enterprise networking, and cybersecurity. Follow him @zeichick.

