What to Expect from Oracle Autonomous Transaction Processing

Today Larry Ellison announced the general availability of Oracle Autonomous Transaction Processing Cloud Service, the newest member of the Oracle Autonomous Database family, combining the flexibility of cloud with the power of machine learning to deliver data management as a service.

Traditionally, creating a database management system required a team of experts to custom build and manually maintain a complex hardware and software stack. With each system being unique, this approach led to poor economies of scale and a lack of the agility typically needed to give the business a competitive edge.

Try Autonomous Transaction Processing—Sign up for the trial

Autonomous Transaction Processing enables businesses to safely run a complex mix of high-performance transactions, reporting, and batch processing using the most secure, available, performant, and proven platform – Oracle Database on Exadata in the cloud. Unlike manually managed transaction processing databases, Autonomous Transaction Processing provides instant, elastic compute and storage, so only the required resources are provisioned at any given time, greatly decreasing runtime costs.

But What Does the Autonomous in Autonomous Transaction Processing Really Mean?

Self-Driving

Autonomous Transaction Processing is a self-driving database: it eliminates the human labor needed to provision, secure, update, monitor, back up, and troubleshoot a database. Cutting these maintenance tasks reduces costs and frees scarce administrator resources to work on higher-value tasks.

When an Autonomous Transaction Processing database is requested, an Oracle Real Application Clusters (RAC) database is automatically provisioned on Exadata Cloud Infrastructure. This high-availability configuration automatically benefits from many of the performance-enhancing Exadata features, such as smart flash cache, Exafusion communication over a super-fast InfiniBand network, and automatic storage indexes.
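
To make the provisioning step concrete, here is a minimal sketch of requesting an Autonomous Transaction Processing database with the OCI Python SDK. All OCIDs, names, and passwords are placeholders; notice that nothing about the underlying RAC or Exadata configuration appears in the request, because Oracle handles that automatically.

```python
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config
db_client = oci.database.DatabaseClient(config)

# Describe the database we want; the RAC/Exadata plumbing is Oracle's problem
details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    db_name="atpdemo",
    display_name="ATP Demo",
    db_workload="OLTP",            # OLTP = Autonomous Transaction Processing
    cpu_core_count=2,
    data_storage_size_in_tbs=1,
    admin_password="ChangeMe_12345#",
)

response = db_client.create_autonomous_database(details)
print(response.data.lifecycle_state)  # e.g. PROVISIONING
```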

In addition, when it comes time to update Autonomous Transaction Processing, patches are applied in a rolling fashion across the nodes of the cluster, eliminating unnecessary downtime. Oracle automatically applies all clusterware, OS, VM, hypervisor, and firmware patches as well.

In Autonomous Transaction Processing the user does not get OS login privileges or SYSDBA privileges, so even if you want to do the maintenance tasks yourself, you cannot. It is like a car with the hood welded shut so you cannot change the oil or add coolant or perform any other maintenance yourself.

Many customers want to move to the cloud because of the elasticity it can offer. The ability to scale both compute and storage only when needed allows people to truly pay per use. Autonomous Transaction Processing not only allows you to scale compute and storage resources, it allows you to do so independently and online (no application downtime required), as the sketch below shows.
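
Here is the same idea expressed with the OCI Python SDK; this is an illustrative sketch, and the database OCID is a placeholder. Each call changes one dimension while the database remains open.

```python
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)
atp_id = "ocid1.autonomousdatabase.oc1..example"  # placeholder OCID

# Scale compute only; storage is untouched and the database stays online
db_client.update_autonomous_database(
    atp_id,
    oci.database.models.UpdateAutonomousDatabaseDetails(cpu_core_count=8),
)

# Later, scale storage only, independently of compute
db_client.update_autonomous_database(
    atp_id,
    oci.database.models.UpdateAutonomousDatabaseDetails(
        data_storage_size_in_tbs=4),
)
```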

Self-Securing

Autonomous Transaction Processing is also self-securing: it protects itself from both external attacks and malicious internal users. Security patches are automatically applied every quarter, far more frequently than on most manually operated databases, narrowing an unnecessary window of vulnerability. Patching can also occur off-cycle if a zero-day exploit is discovered. Again, these patches are applied in a rolling fashion across the nodes of the cluster, avoiding application downtime.

But patching is just part of the picture. Autonomous Transaction Processing also protects itself with always-on encryption: data is encrypted at rest and during any communication with the database. Customers control their own encryption keys to further improve security.

Autonomous Transaction Processing also secures itself from Oracle cloud administrators using Oracle Database Vault. Database Vault uniquely allows Oracle’s cloud administrators to do their jobs but prevents them from seeing any customer data stored in Autonomous Transaction Processing.

Finally, customers are not given access to either the operating system or the SYSDBA privilege to prevent security breaches from malicious internal users or from stolen administrator credentials via a phishing attack.

Self-Repairing

Autonomous Transaction Processing automatically recovers from any failures without downtime. The service is deployed on our Exadata cloud infrastructure, which has redundancy built-in at every level of the hardware configuration to protect against any server, storage, or network failures.

Autonomous Transaction Processing automatically backs up the database nightly and can restore it from any of the backups in the archive. It can also rewind data to a point in time in the past to back out user errors, using Oracle’s unique Flashback Database capabilities.
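
As an illustration of how a point-in-time restore from those automatic backups can be requested through the OCI Python SDK, here is a minimal sketch; the OCID and timestamp are placeholders, and the service console offers the same operation without any code.

```python
from datetime import datetime, timezone

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Roll the database back to a known-good moment using the automatic backups
restore = oci.database.models.RestoreAutonomousDatabaseDetails(
    timestamp=datetime(2018, 8, 6, 14, 30, tzinfo=timezone.utc)  # placeholder
)
db_client.restore_autonomous_database(
    "ocid1.autonomousdatabase.oc1..example", restore)
```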

Since users don’t have access to the OS, Oracle is on the hook to diagnose any problems that may occur. Machine learning is used to detect and diagnose any anomalies. If the database detects an impending error, it gathers statistics and feeds them to AI diagnostics to determine the root cause. If it’s a known issue, the fix is quickly applied. If it’s a new issue, a service request will be automatically opened with Oracle support.

How Does Autonomous Transaction Processing Differ from the Autonomous Data Warehouse?

Up until now, all of the functionality I have described is shared between both Autonomous Data Warehouse and Autonomous Transaction Processing. Where the two services differ is actually inside the database itself. Although both services use Oracle Database 18c, they have been optimized differently to support two very different but complementary workloads. The primary goal of the Autonomous Data Warehouse is to achieve fast complex analytics, while Autonomous Transaction Processing has been designed to efficiently execute a high volume of simple transactions.

Configuration

The differences between the two services begin with how we configure them. In Autonomous Data Warehouse, the majority of the memory is allocated to the PGA so that parallel joins and complex aggregations can occur in memory rather than spilling to disk. On Autonomous Transaction Processing, by contrast, the majority of the memory is allocated to the SGA to ensure the critical working set can be cached to avoid I/O.

Data Formats

We also store the data differently in each service. In the Autonomous Data Warehouse, data is stored in a columnar format, as that’s the best format for analytic processing. In Autonomous Transaction Processing, data is stored in a row format, which is ideal for transaction processing: it allows quick access and updates to all of the columns in an individual record, since all of the data for a given record is kept together in memory and on storage.

Statistics Gathering

Regardless of which type of autonomous database service you use, optimizer statistics will be automatically maintained. On the Autonomous Data Warehouse, statistics (including histograms) are automatically maintained as part of all bulk-load activities. With Autonomous Transaction Processing, data is added using more traditional insert statements, so statistics are automatically gathered when the volume of data changes significantly enough to make a difference to the statistics.
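
One way to observe this automatic maintenance is to query the data dictionary. The following is a small sketch using the python-oracledb driver; the connection details and wallet paths are placeholders for whatever your downloaded credentials wallet defines.

```python
import oracledb

# Connect using the credentials wallet downloaded from the service console
conn = oracledb.connect(
    user="app_user", password="app_password", dsn="mydb_low",
    config_dir="/opt/wallet", wallet_location="/opt/wallet",
    wallet_password="wallet_password")

with conn.cursor() as cur:
    # LAST_ANALYZED shows when optimizer statistics were last refreshed
    cur.execute("""
        SELECT table_name, num_rows, last_analyzed
          FROM user_tab_statistics
         ORDER BY last_analyzed DESC NULLS LAST""")
    for table_name, num_rows, last_analyzed in cur:
        print(table_name, num_rows, last_analyzed)
```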

Query Optimization

Queries executed on the Autonomous Data Warehouse are automatically parallelized, as they tend to access large volumes of data in order to answer the business question. On Autonomous Transaction Processing, indexes are used instead to access only the specific rows of interest. We also use RDMA on Autonomous Transaction Processing to provide low-latency direct access to data stored in memory on other servers in the cluster.

Resource Management

Both Autonomous Data Warehouse and Autonomous Transaction Processing offer multiple database “services” to make it easy for users to control the priority and parallelism used by each session. The services predefine three priority levels: Low, Medium, and High, and users simply choose the best priority for each aspect of their workload. For each database service, you can define the criteria of a runaway SQL statement; any SQL statement that exceeds these limits, in terms of either elapsed time or I/O, is automatically terminated. On Autonomous Data Warehouse, only one service (LOW) automatically runs SQL statements serially, while on Autonomous Transaction Processing, only one service (PARALLEL) automatically runs SQL statements with parallel execution. A common pattern, shown in the sketch below, is to use the Medium priority service by default and route requests such as reporting and batch to the Low priority service so they don’t interfere with mainstream transaction processing, reserving the High priority level for more important users or actions.
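
Here is a minimal sketch of that pattern with the python-oracledb driver. The service aliases (mydb_medium, mydb_low) and wallet paths are placeholders; the actual aliases come from the tnsnames.ora in your downloaded credentials wallet.

```python
import oracledb

wallet = dict(config_dir="/opt/wallet", wallet_location="/opt/wallet",
              wallet_password="wallet_password")

# Mainstream transaction processing on the Medium priority service
oltp_conn = oracledb.connect(user="app_user", password="app_password",
                             dsn="mydb_medium", **wallet)

# Reporting and batch on the Low priority service, so runaway reports
# can't interfere with transaction processing
report_conn = oracledb.connect(user="app_user", password="app_password",
                               dsn="mydb_low", **wallet)
```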

Can I Use Autonomous Transaction Processing to Develop New Applications?

Autonomous Transaction Processing is the ideal platform for new application development. Developers no longer have to wait on others to provision hardware, install software, and create a database for them. With Autonomous Transaction Processing, developers can easily deploy an Oracle database in a matter of minutes, without worrying about manual tuning or capacity planning.

Autonomous Transaction Processing also has the most advanced SQL and PL/SQL support, accelerating developer productivity by minimizing the amount of application code required to implement complex business logic. It also has a complete set of integrated machine learning algorithms, simplifying the development of applications that perform real-time predictions such as personalized shopping recommendations, customer churn rates, and fraud detection.

Where Can I Get More Information and Get My Hands on Autonomous Transaction Processing?

The first place to visit is the Autonomous Transaction Processing Documentation. There you will find details on exactly what you can expect from the service.

We also have a great program that lets you get started with Oracle Cloud with $300 in free credits, which last much longer than you would expect since the trial service has very low pricing. Using your credits (which will probably last you around 30 days, depending on how you configure Autonomous Transaction Processing), you will be able to get valuable hands-on time to try loading some of your own workloads. Below is a quick video to help you get started.


Autonomous Database: What Does That Mean for You?

We’re living in a new age of the database.

Oracle sparked the first revolution, with the relational database.

Now we’re back again with the second revolution. It’s truly—well, revolutionary. And we’re calling it the Autonomous Database.

Try Autonomous Transaction Processing—Sign up for the trial

Today we’re here to break it down for you, tell you what it is, and why we think you should care.

Why a Self-Driving Database Is the Future

It’s not just a revolution in databases that’s happening right now. We’re seeing the rise of the cloud, an explosion of data, and true promise for valuable machine learning. Everywhere you turn, you face challenges to the old way of doing things—and dazzling potential for the future.

Here’s what most companies are trying to do:

  • Transform to the modern cloud model
  • Ensure data safety
  • And do more with less

At Oracle, we argue that we’re uniquely positioned to help you through these challenges with our new kind of database, and here’s why.

Oracle has invested thousands of engineering years in automating and optimizing the database. We’ve introduced and matured many sophisticated automation capabilities, from memory management to workload monitoring and tuning—all of which are used in the Autonomous Database. We’ve arguably created an unmatched on-premises database, and that’s precisely what puts us in a unique position as we seek to create the cloud’s best database to help you with all of your new goals.

Here’s our dream: an autonomous, self-driving database that will automatically take care of all database and infrastructure management, as well as monitoring and tuning.

In our vision for the future, this Autonomous Database will:

  • Reduce costs and improve productivity by automating the mundane tasks of provisioning, patching, and backing up databases
  • Free up IT teams to focus on tasks that will bring value to the business
  • Be self-securing to protect itself from both external attacks and any malicious internal users
  • Automatically encrypt all data whether it’s at rest or in flight and automatically apply security updates with no downtime
  • Be self-repairing, and automatically recover from any failure
  • Minimize all kinds of downtime, including planned maintenance

Wait a minute though—that’s not a vision for the future.

It’s what’s happening now.

Today, we’re announcing the launch of Autonomous Transaction Processing Cloud Service, which together with the Autonomous Data Warehouse that we released earlier this year, completely changes the way people look at and use a database.

We call this “The Last Upgrade You’ll Ever Do.”

We are so proud of this. The Autonomous Database is the culmination of that database journey we started over 40 years ago. It brings full automation to every layer of database deployment, from optimization to operations to infrastructure.

In both Autonomous Transaction Processing and the Autonomous Data Warehouse, autonomous intelligence drives infrastructure and database operations for you, while also providing the ability to automatically tune internal database structures to optimize your application workload’s SQL as data changes over time.

We’ve made it easy, so easy.

You can create a highly available, mission-critical database deployment with the click of a button. Define your application schemas, load data using familiar tools, and get started processing your business transactions while leaving the mundane, time-consuming database operations to us.

The Autonomous Database—Optimized by Workload

Right now, there are two workloads we’ve optimized for the Autonomous Database. Of course we don’t plan to stop here. But here’s what we have that’s currently available, ready for you to purchase.

Autonomous Transaction Processing is what we’ve launched today, and we’ve been waiting to show it to you. It’s a database solution we created for general-purpose transaction processing and mixed-workload applications. Think of it as your database solution for transaction, batch, IoT, and reporting workloads associated with those use cases.

We released the Autonomous Data Warehouse back in March. ADW is best for all analytic workloads. Think of it as your replacement for a data warehouse, data mart or data lake. It’s great for machine learning too.

Both Autonomous Transaction Processing and Autonomous Data Warehouse share the Autonomous Database platform of Oracle Database 18c on our Exadata Cloud infrastructure. The difference is how the services have been optimized within the database.


How Can You Deploy the Autonomous Database?

So you know which flavor of the Autonomous Database you want. Or perhaps you know you want both. When it comes to deployment, you can choose serverless deployment for either Autonomous Data Warehouse or Autonomous Transaction Processing databases.

Or, very soon, you’ll be able to deploy on dedicated Exadata Cloud Infrastructure for the highest isolation. The complete hardware stack is isolated from other tenants and will provide a unique, fully isolated cloud within the public cloud.

For those who are fans of Cloud at Customer, don’t worry. We plan to provide this option very soon.

Why Should You Want the Autonomous Database?

We could talk about the benefits of the Autonomous Database endlessly—and we will, in a future article. But here are the top benefits, boiled down as succinctly as we could make them.

The Autonomous Database enables:

  1. More IT innovation for less money
  2. More developer innovation for less
  3. Fewer security breaches
  4. High availability due to built-in redundancy
  5. Easy upgrade to cloud
  6. Guaranteed lower cost

But don’t just take our word for it:

Don’t forget to take a look at our reviews on Gartner Peer Insights:

“We have also started with the Oracle Autonomous Data Warehouse Cloud and that was exceptionally impressive from start to finish – very easy to use – uploading of data easy and fast and the compression reduced the amount of storage needed 4 times.”

See the full review.

The people who use Autonomous Database every day, the ones who can see how thoroughly it streamlines their work, are already talking about it, and what they’re saying is good.

How to Get Started With the Autonomous Database

We can’t wait to show you the Autonomous Database too. And that’s why we’ve put together a free trial experience, so you can see if the Autonomous Database is truly as much of a game-changer as we say. (Hint: it is.)

Here’s what we’re giving you:

3338 hours, 2 TB of Exadata storage—what are you waiting for?

You’ll be able to:

  1. Provision an Autonomous Transaction Processing or Autonomous Data Warehouse instance
  2. Connect SQL Developer to the new database instance
  3. Load data files if needed and go!

You have nothing to lose, right?

Experience a new kind of database with Oracle’s Autonomous Database. Sign up today.


What Does Data Science Need to Be Successful?

There are certain advances that have revolutionized the tech world – personal computing, cell technology, and cloud computing are just some of them.

Now that we have the ability to store massive amounts of data in the cloud and then use it with advanced analytics, we can finally start working towards a machine learning future.

Download your free ebook, “Demystifying Machine Learning”


It’s time for data science to shine.

Businesses are seeing the potential too. Data science can have great impact in:

  • Building and enhancing products and services
  • Enabling new and more efficient operations and processes
  • Creating new channels and business models

But unfortunately, for many businesses much of that is still in the future. Despite making big investments in data science teams, many are still not seeing the value they expected. Why?

Data scientists often face difficulty in working efficiently. There are lengthy waits for resources and data. There’s difficulty collaborating with teammates. And there can be long delays of days or weeks to deploy work.

The IT admins face issues too. They often feel a lot of pain because they’re responsible for supporting data science teams.

Developers have difficulty with access to usable machine learning. Business execs don’t see the full ROI. And there’s more.

A big part of the problem is that data science often happens in silos and isn’t well integrated with the rest of the enterprise. There’s a movement to bring technologies, data scientists, and the business together to make enterprise data science truly successful. But to do that, you need a full platform. Here are some questions to think about:

  • What does this platform need?
  • What defines success?
  • What do business execs need to be successful?

To tackle enterprise data science successfully, companies need a data science platform that addresses all of these issues. And that’s why Oracle is excited about our recent acquisition of DataScience.com.

DataScience.com creates one place for your data science tools, projects, and infrastructure. It organizes work, allowing users to easily access data and computing resources. It also enables users to execute end-to-end model development workflows.

Quite simply, it addresses the need to manage data science teams and projects while providing the flexibility to innovate.

What does this mean, exactly? It means you can now:

Make data science more self-service

  • Launch sessions instantly with self-service access to the compute, data, and packages you need to get to work quickly on any size analysis.

Collaborate more efficiently

  • Organize your work via a project-based UI and work together on end-to-end modeling workflows with all of your work backed up by Git.

Get more work done faster

  • Leverage the best of open source machine learning frameworks on a platform tightly integrated with high-performance Oracle Cloud Infrastructure.

Now Oracle can integrate big data and data science tools all in one place, with a single self-service interface that makes enterprise data science possible—opening up more possibilities than ever.

Companies are scrambling to make machine learning solutions work so they can realize the technology’s full potential—and with DataScience.com, we’re many steps closer to that machine learning future we all keep hearing about.

If you have any other questions or you’d like to see our machine learning software, feel free to contact us.

You can also refer back to some of the articles we’ve created on machine learning best practices and challenges concerning that. Or, download your free ebook, “Demystifying Machine Learning.”


7 Data Lab Best Practices for Data Science

Perhaps you’re looking for a better way to perform data experimentation and facilitate data discovery. Or to start using machine learning to uncover more innovation opportunities through data.

The answer to that, of course, is having a data lab. Data labs make data science and experimenting with new data more possible. Complex analytics like machine learning can put a strain on the service levels of production systems. But having a data lab ensures your data scientists can experiment and run analytics as they need to without putting a strain on systems and facing complaints from other teams.

For many, setting up and implementing a data lab is a new project. In fact, you might even be setting up the first data lab ever at your company.

Download your free TDWI report, “Seven Best Practices for Machine Learning on a Data Lake.”

So how can you ensure that your data lab has the best chance of success? In this article, we lay out seven data lab best practices. Keep in mind, these best practices are designed to get you thinking beyond the nitty-gritty details of architecture and implementation, and more along the lines of widespread support and adoption.

Data Lab Best Practice #1: Deliver a Quick Win

Better one quick win in two months than three wins after four months. Your data lab is likely a high-visibility, expensive project. People want proof that it’s working and they want it now.

So don’t be tempted to just play computer science sandbox. Instead, keep a business goal in mind that aligns with that of a key business stakeholder.

You’ll want to show the value of your data lab from both IT and business perspectives to gain as much support as you can.

For IT, demonstrate that you’re minimizing the strain placed on production systems with your lab.

For the business, demonstrate easy ways the company can start saving money or maximizing revenue, now.

If you don’t have any ideas, meet with the business leader of the unit and brainstorm. Here are some questions to use as a jumping-off point:

  • How do I design a service to maximize ad revenue?
  • What is the best combination of data that gives me the segment most likely to accept mobile offers?
  • What data do I need for this? How do I combine it? Where do I get it?

Concentrate on the quick wins, but keep the future improvements and more complicated projects in mind.

Data Lab Best Practice #2: Consider Starting with Existing Data

Remember there’s value in your existing data, even if you’ve been collecting or cleaning new data at the same time. If you already have clean, labeled data available, consider creating a use case around that so you can get started faster.

Sometimes this might revolve around reorganizing your project scope. Let’s say your business unit would ideally like a 360-degree view of the customer for more effective customer promotions. That’s a complicated project that requires a great deal of data.

But Britain’s National Health Service used existing data to help speed their quick wins. They examined payments, other transactions, and customer complaints as examples of fraud to investigate. Stopping fraud or recovering fraudulent claims is often a good quick win.

Once you have a few of those quick wins under your belt, you can start tackling more complicated projects that require more resources or more kinds of data. But especially in the early stages, it’s important to remember that most businesses won’t care about how complex or innovative your machine learning algorithm is.

They want results. And the faster they can get those results, the better.

Data Lab Best Practice #3: Try to Have Many (But Not Too Many) Projects in the Pipeline

We’ve said you should have a few quick wins. We’ve also said you should start with existing data. And now we’re also saying that you should try to have many projects in the pipeline (but not too many – stay balanced).

You should remember that not every data exploration project is going to have viable results that will mean change at the company. And even if an idea does demonstrate change, it might not display enough cost savings and ease of implementation to gain traction.

It’s not going to be enough for you to only demonstrate the many ways your data lab has identified value. The executive team will want to know how many of those ideas were implemented.

That’s why you should have many data projects in the pipeline, to decrease your chances of failure. But try to have some focus too. At the opposite end of the spectrum is chasing after too many ideas and ending up with nothing because you didn’t focus resources.

Data Lab Best Practice #4: Keep Your Executive Support Engaged

We assume you had some sort of executive support to make it this far. But you will need to keep them engaged. That’s related to the previous point – deliver a few quick wins.

But don’t stop there. See which other executives you can get on board. Can you deliver quick wins in another area too? You don’t want to stretch yourself and your resources too thinly. But at the same time, the ideal vision is a company full of executives clamoring for more machine learning projects, with plenty of support for your data lab because it’s seen as such a valued part of the company.

To do this, you can deliver sessions on what machine learning can do. Pitch ideas for how you can help other parts of the business expand.

Yes, this does entail extra work, but if you’re determined to make your data lab a cornerstone of the business, it’s well worth it.

Data Lab Best Practice #5: Operationalize Your Data

You might be tempted to think that your job ends with finding insights. But that’s not the case. You need to push on your executives and other business leaders to put your findings into place. Take a look at the business units or business leaders that you’re doing work for. Do they praise your findings but never implement them? If so, it’s time to have a serious conversation or it’s time to find a new team to collaborate with.

Think about the actionable reports you can create, or the changes you can make to existing apps and processes. Your findings could drive the creation of a brand-new service, app, or product.

Remember, at the end of the day, it’s not about how many insights get uncovered. What your business cares about is how much money is being saved and how much revenue is being created. It’s best if you can point to actual revenue being generated by your skilled team.

Data Lab Best Practice #6: Be Sure You Have a Platform That Scales

Keep in mind, the cloud is the perfect place for an initiative like the data lab. You can provision your lab there, store massive amounts of data, and spin up and spin down flexible analytic workloads as needed. The best part of all? You’ll pay for only what you use, which minimizes your cost and risk.

In addition to having a platform that scales, you’ll also need the resources and talent to execute. If you don’t, you could potentially have a backlog of big data projects from day one. That brings us to Best Practice #7.

Data Lab Best Practice #7: Support Your Data Scientists

A good data scientist is worth his or her weight in gold. Make sure you support your data scientists and set them up for success. Assemble them in talented, diverse teams. Provide them with tools. And make sure that your management tolerates risk. It might take time for your data scientists to find the deep wins that everyone is looking for, so set expectations accordingly, while also ensuring that you can find quick, easy wins to keep everyone happy.

Conclusion

There you have it, our seven best practices for implementing a successful data lab. Data science may not be easy, but having a data lab makes it easier—and we hope this article will help you gain success more easily.

If you’d like to ask us any further questions, feel free to contact us. Or if you’re ready to experiment with working with your data in the cloud, we offer a free guided trial to build and implement a successful data lake.


Why Oracle Thinks Autonomous IT Can Ultimately Win the Cloud War

By Paul Way

SENIOR PRINCIPAL MARKETING DIRECTOR, CLOUD PLATFORM

Cloud providers today spend a lot of time rattling off key differentiators for their platforms, each jostling to sway buyers in a market that experts believe will reach $266 billion in revenue by 2021 (IDC, Worldwide Semiannual Public Cloud Services Spending Guide, July 2017).

For the most part, cloud providers frequently tout performance and cost advantages—certainly two top-of-mind factors in the decision-making process for most buyers. Unfortunately, providers have now flooded the market with a dizzying stream of benchmarks and comparisons, burying decision makers with piles of often conflicting and confusing apples-to-oranges data that provides no clear picture of just how the various platforms actually stack up.

More recently, discussions around differentiation have shifted, with providers hyping their talents in a spate of new, emerging cloud technologies—such as artificial intelligence and machine learning—that some claim could fundamentally alter how companies interact with technology, resulting in lower costs, productivity increases, and tighter security. Early reviews, however, suggest that many providers haven’t yet developed the expertise or the vision to deliver any tangible, meaningful results.

Off in Redwood City, CA, however, Larry Ellison, Oracle executive chairman, CTO, and one of Silicon Valley’s first and arguably most enduring visionaries, is methodically building a new conversation around what his company sees as perhaps the most important development in cloud computing today—autonomous technology. While the concept incorporates each of the aforementioned differentiators (speed, cost, emerging technologies), Ellison sees the concept of autonomy as much more than the sum of those parts. For Oracle, it represents the merging of several critical cloud advancements into an entirely new level of efficiency that finally attaches tangible—perhaps revolutionary—definition to the oft-used but mostly misunderstood notion of “digital transformation.”

Defining Autonomy

Similar to the concept of the autonomous car, Oracle believes big portions of the IT organization will ultimately run themselves. Unlike the automotive industry, though, where manufacturers like Tesla envision building cars without steering wheels, IT organizations will likely never operate completely on their own. Recent developments in Oracle’s product portfolio reveal that the company ultimately imagines several levels of autonomy for IT, similar to the National Highway Traffic Safety Administration’s (NHTSA) levels of autonomous driving.

How Oracle Views Autonomy

For Oracle, the autonomous enterprise goes beyond automation, in which machines respond to an action with an automated reaction. In an automated enterprise, certain tasks become automated, but the full process still requires humans to complete it. Think of cars that can automatically change lanes but still require drivers to get to the final destination. Artificial intelligence and machine learning often make automated tasks possible.

Combining multiple automated tasks can lead to semi-autonomous technology, but true autonomy occurs when humans become removed from the process. That’s exactly where Ellison’s grand vision is headed—in a way only his company can execute. Oracle has arguably the widest and deepest portfolio of products across SaaS, PaaS, and IaaS, and it’s now beginning to embed AI and machine learning throughout. This competitive advantage allows Oracle to extend automation across more and more functions and technologies throughout the enterprise, resulting in semi-autonomous—and now almost completely autonomous—products and processes that require little or no human intervention.

Amid a backdrop of increasing security breaches, numerous high-profile outages, and the subsequent damages to both revenue and brand reputation (Hi, Equifax!), Oracle’s autonomous vision couldn’t come to fruition at a more opportune moment. For Oracle, this emerging focus on autonomous products and the autonomous enterprise isn’t a marketing campaign. In fact, in many ways, it’s the payoff on a bet the company began placing a decade ago and that ultimately pulls together many of the pieces it’s quietly built (and bought) in recent years to make it a serious player in the cloud space.

Building the Pieces

While some providers grabbed early headlines by launching cloud infrastructure businesses, largely around compute and storage, Oracle quietly began its cloud journey a decade ago by rewriting its entire on-premises software portfolio for the cloud. It was a significant, albeit often overlooked, strategic move for Oracle, considering that Ellison has insisted for years that the company that rules the SaaS market (and particularly ERP) will ultimately win the battle for cloud supremacy. Oracle hammered that point home with several important acquisitions in the space (Siebel, PeopleSoft, and, more recently, NetSuite), and now owns leadership positions in a wide range of SaaS categories.

Ellison quickly moved to replicate his SaaS success in the emerging market for PaaS products, including cloud databases, the category that launched his company 40 years ago and that Oracle continues to dominate in overwhelming fashion. PaaS is critical as the cloud market unfolds, with many analysts noting the category stands to see the most significant increase in spending in all of cloud in the coming years. For its part, Oracle has established beachheads in several other key PaaS markets beyond database, including analytics, integration, mobility, security, and systems management. More recently, at OpenWorld 2017, the company launched its second-generation IaaS portfolio with cost and speed comparisons that shocked both audiences and competitors.

From Pieces to a Vision

The slow, methodical burn of Ellison’s strategy was initially viewed by some as an unwillingness to commit to the cloud. But with each innovation the company has unveiled recently, it’s increasingly apparent that Ellison’s insistence that he’d been following a carefully plotted strategy all along (as he’s so masterfully done throughout his career) rings true after all. The strategy has put Oracle in an enviable competitive position. Providers that began offering low-cost infrastructure in the cloud, such as AWS, initially grabbed headlines and market share, but they’ve largely ignored other key layers of the full cloud stack, notably SaaS and PaaS. In much the same way, providers who similarly made waves by launching SaaS offerings (Salesforce, Workday) now find themselves out of the conversation for PaaS and IaaS. Meanwhile, Ellison and Oracle managed to build dominant positions and deep technical expertise throughout SaaS, PaaS, and IaaS, with integrations between layers that deliver cost and efficiency benefits their competitors simply can’t. It’s hardly surprising then that, while Oracle currently holds relatively modest overall cloud market share, its competitors have clearly taken notice by increasing their attacks in an attempt to quell the threat.

With its new focus on autonomy, Oracle appears to be flexing its muscles even further by combining its unique competitive differentiators with what’s become one of the company’s most important initiatives in years—establishing its dominance in a portfolio of emerging technologies that includes machine learning, IoT, blockchain, human interface, and more. Many believe these new technologies could completely redesign how people work and live. Ellison and company have embedded not only automation into each layer but have now begun delivering levels of autonomy within their products that other providers are in no position to match.

Oracle’s push to autonomy began in late 2016 when the company announced it was developing “adaptive intelligence” applications. The first adaptive intelligence app, for CX, launched in 2017, allowing companies to combine their own data with Oracle’s extensive third-party Data Cloud (itself a big competitive differentiator) and then apply the company’s decision science and machine learning technologies to optimize outcomes. Later that year, Oracle launched similar adaptive intelligence capabilities for ERP, HCM, and SCM apps.

The Autonomous Database

More recently, though, Oracle kicked things into high gear when it introduced an entirely new category of database technology—the product that launched the company 40 years ago and continues to have dominant market share across the globe. The new database category, dubbed autonomous databases, was announced in late 2017 with the first product launch (an autonomous data warehouse cloud service) rolling out in the spring of 2018, offering strong indications of just how dramatically Oracle hopes to redefine the relationship between companies and their systems.

Oracle’s autonomous data warehouse cloud represents a radical shift in how cloud databases are run and maintained—not to mention how much they cost. For all intents and purposes, the database runs itself, automatically and continuously patching, tuning, backing up, and upgrading on its own with virtually no downtime (Oracle says downtime will be less than 2.5 minutes per month). And with little human intervention, the product virtually eliminates human error, with dramatic implications for not only security breaches and outages, but costs as well. At Oracle’s last OpenWorld conference in October 2017, Ellison ran live workloads comparing Oracle’s database on its own cloud and AWS, versus AWS databases on its own cloud. Oracle’s speed and cost advantage proved so dramatic that the company now guarantees customers running its database on Oracle Cloud will halve their current AWS bills.

Beyond the Database

But Oracle’s plans go much further. Oracle views autonomy as a means of delivering on what it believes will be the ultimate competitive advantage. Steve Daheb, senior vice president of Oracle’s Cloud Business Group, says: “Keep in mind that developments in cloud technologies are still in the early stages. The majority of enterprise workloads aren’t yet in the public cloud. To this point, companies have largely adopted cloud as a way to reduce costs, which is always important. But as the market develops, we don’t think buying decisions will come down to just that. Very quickly, it’ll be a matter of trust. Who can I trust to support my whole environment, not just their own apps or infrastructure? Who can I trust to make these pieces interact seamlessly? Who’s going to keep us secure? Autonomous technology does all those things and more.” Daheb added the stakes couldn’t be higher: “If you get that decision wrong, you could be bankrupt. It’s that serious.” In the near term, Oracle has announced plans to launch other autonomous services, including Autonomous Database Express Cloud Service and Autonomous NoSQL Database Cloud Service.

Building the kind of autonomy Oracle envisions isn’t simple. It requires a skillset, a level of expertise, and a vision that few—if any—cloud providers have. Oracle’s competitors can certainly build automation into their products—and many already do. But with glaring gaps in their cloud portfolios, they can’t build—and likely can’t execute—a vision as grand as Oracle’s, where automation fuels autonomy across multiple integrated layers throughout a company’s IT operations. At a company the size of Oracle, pulling off the kind of long-game strategy required to get to this stage may well end up being one of the company’s most remarkable achievements. Like chess moves, Oracle’s pieces have come together to suggest an ambitious blueprint for how the company can lead a new era of cloud computing.

Just as Larry Ellison planned all along.



Getting Started with Autonomous Database Security

Automatic encryption and patching are a solid beginning to the cloud database security journey.

By Tom Haunert

“Data is your most critical asset, but could become your biggest liability if not properly secured,” says Vipin Samar, senior vice president of Oracle Database Security, in the video Security for the Autonomous Data Warehouse Cloud. At what point is data properly secured? Oracle Magazine sat down with Samar to talk about data assets and liabilities, appropriate security for databases in the cloud, and more.

Oracle Magazine: How is the cloud changing the database security conversation?

Samar: When organizations make the decision to move to the cloud, their first questions are often about security. Is the cloud secure? Can they limit Oracle administrator access to their data in the cloud? Can they meet their compliance requirements in the cloud? These are typically the top three questions I hear.

Oracle Magazine: Oracle Database Cloud services all run with their data encrypted. Is that enough to keep data safe in the cloud?

Samar: We use encryption by default in Oracle Database Cloud services so that hackers do not get access to the raw data.

Encryption closes one particular part of the attack surface—where the hacker gets access to data blocks directly. But hackers can try many other techniques without access to the data blocks.

Hackers can impersonate users, they can steal an end user’s password, or they can exploit weaknesses in database applications. And they can do more—it’s a long list.

So encryption is one necessary tool, but it doesn’t address all possible security risks.

Oracle Magazine: How can organizations determine whether their databases are secure?

Samar: Many organizations don’t really know how secure their databases are, where their sensitive data is located, or how much data they have.

Oracle recently released the Oracle Database Security Assessment Tool feature of Oracle Database, which lets organizations answer these questions. The tool looks at various security configuration parameters, identifies gaps, and discovers missing security patches. It checks whether security measures such as encryption, auditing, and access control are deployed, and how those controls compare against best practices.


Additionally, it helps them discover where their sensitive data is located and how much data they have. Oracle Database Security Assessment Tool searches database metadata for more than 50 types of sensitive data including personally identifiable information, job data, health data, financial data, and information technology data. This helps businesses to understand the security risks for that data.

It also highlights findings and provides recommendations to assist with regulatory compliance. The findings and recommendations support both the European Union General Data Protection Regulation (EU GDPR) and the Center for Internet Security (CIS) benchmark.

Oracle Magazine: Oracle Autonomous Data Warehouse Cloud is described as the world’s first self-securing database cloud service. What does self-securing mean for this service?

Samar: Self-securing starts with the security of the Oracle Cloud infrastructure and database service. Security patches are automatically applied every quarter or as needed, narrowing the window of vulnerability. Patching includes the full stack: firmware, operating system [OS], clusterware, and database. There are no steps required from the customer side. We take care of the security of the infrastructure including the database, and we automate it—leaving nothing to chance or human error.

Next, we encrypt customer data everywhere: in motion, at rest, and in backups. The encryption keys are managed automatically, without requiring any customer intervention. And encryption cannot be turned off.

Administrator activity on Oracle Autonomous Data Warehouse Cloud is logged centrally and monitored for any abnormal activities. We have enabled database auditing using predefined policies so that customers can view logs for any abnormal access.

Oracle Magazine: What’s needed to protect other attack surfaces?

Samar: Securing databases in the cloud is a shared responsibility, with Oracle securing the infrastructure and network; monitoring the OS and network activity; applying OS and database patches and upgrades; and providing encryption, appropriate separation of duties, and various certifications.

The customer organization still needs to secure its applications, users, and data. It needs to ensure that its applications can thwart attacks targeted at the company, that its users follow security best practices, and that its sensitive data is protected using appropriate controls. In some sense, these requirements are no different from those for an organization’s current on-premises databases, except that Oracle has already handled the security infrastructure part.



What’s the Difference Between AI, Machine Learning, and Deep Learning?

Peter Jeffcock

Big Data Product Marketing

AI, machine learning, and deep learning – these terms overlap and are easily confused, so let’s start with some short definitions.

AI means getting a computer to mimic human behavior in some way.

Machine learning is a subset of AI, and it consists of the techniques that enable computers to figure things out from the data and deliver AI applications.

Deep learning, meanwhile, is a subset of machine learning that enables computers to solve more complex problems.

Download your free ebook, “Demystifying Machine Learning.”

Those descriptions are correct, but they are a little concise. So I want to explore each of these areas and provide a little more background.

Difference Between AI, Machine Learning and Deep Learning

What Is AI?

Artificial intelligence as an academic discipline was founded in 1956. The goal then, as now, was to get computers to perform tasks regarded as uniquely human: things that required intelligence. Initially, researchers worked on problems like playing checkers and solving logic problems.

If you looked at the output of one of those checkers-playing programs, you could see some form of “artificial intelligence” behind those moves, particularly when the computer beat you. Early successes caused the first researchers to exhibit almost boundless enthusiasm for the possibilities of AI, matched only by the extent to which they misjudged just how hard some problems were.

Artificial intelligence, then, refers to the output of a computer. The computer is doing something intelligent, so it’s exhibiting intelligence that is artificial.

The term AI doesn’t say anything about how those problems are solved. There are many different techniques including rule-based or expert systems. And one category of techniques started becoming more widely used in the 1980s: machine learning.

What Is Machine Learning?

The reason that those early researchers found some problems to be much harder is that those problems simply weren’t amenable to the early techniques used for AI. Hard-coded algorithms or fixed, rule-based systems just didn’t work very well for things like image recognition or extracting meaning from text.

The solution turned out to be not just mimicking human behavior (AI) but mimicking how humans learn.

Think about how you learned to read. You didn’t sit down and learn spelling and grammar before picking up your first book. You read simple books, graduating to more complex ones over time. You actually learned the rules (and exceptions) of spelling and grammar from your reading. Put another way, you processed a lot of data and learned from it.

That’s exactly the idea with machine learning. Feed an algorithm (as opposed to your brain) a lot of data and let it figure things out. Feed an algorithm a lot of data on financial transactions, tell it which ones are fraudulent, and let it work out what indicates fraud so it can predict fraud in the future. Or feed it information about your customer base and let it figure out how best to segment them. Find out more about machine learning techniques here.
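
To make that concrete, here is a toy sketch of the fraud example in Python using scikit-learn. The data is randomly generated and purely illustrative; the point is only the shape of the workflow: feed labeled transactions to an algorithm and let it work out what indicates fraud.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake transaction features: amount, hour of day, distance from home
X = rng.random((5000, 3))
# Fake labels: call a transaction fraudulent when it is large, late at
# night, and far from home (a stand-in for the real, unknown pattern)
y = ((X[:, 0] > 0.8) & (X[:, 1] < 0.2) & (X[:, 2] > 0.7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                      # learn what indicates fraud
print("accuracy:", model.score(X_test, y_test))  # predict fraud on new data
```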

As these algorithms developed, they could tackle many problems. But some things that humans found easy (like speech or handwriting recognition) were still hard for machines. However, if machine learning is about mimicking how humans learn, why not go all the way and try to mimic the human brain? That’s the idea behind neural networks.

The idea of using artificial neurons (neurons, connected by synapses, are the major elements in your brain) had been around for a while. And neural networks simulated in software started being used for certain problems. They showed a lot of promise and could solve some complex problems that other algorithms couldn’t tackle.

But machine learning still got stuck on many things that elementary school children tackled with ease: how many dogs are in this picture or are they really wolves? Walk over there and bring me the ripe banana. What made this character in the book cry so much?

It turned out that the problem was not with the concept of machine learning, or even with the idea of mimicking the human brain. It was just that simple neural networks with hundreds or even thousands of neurons, connected in a relatively simple manner, couldn’t duplicate what the human brain could do. That shouldn’t be a surprise if you think about it; human brains have around 86 billion neurons and very complex interconnectivity.

What is Deep Learning?

Put simply, deep learning is all about using neural networks with more neurons, layers, and interconnectivity. We’re still a long way off from mimicking the human brain in all its complexity, but we’re moving in that direction.
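
As a minimal sketch of what “more layers” looks like in code, here is a small Keras model; the layer sizes are arbitrary and chosen purely for illustration. Stacking extra hidden layers is what distinguishes this from the simple neural networks described above.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),            # e.g. a flattened 28x28 image
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dense(256, activation="relu"),  # the extra hidden layers
    keras.layers.Dense(128, activation="relu"),  # are what make it "deep"
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```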

And when you read about advances in computing from autonomous cars to Go-playing supercomputers to speech recognition, that’s deep learning under the covers. You experience some form of artificial intelligence. Behind the scenes, that AI is powered by some form of deep learning.

Let’s look at a couple of problems to see how deep learning is different from simpler neural networks or other forms of machine learning.

How Deep Learning Works

If I give you an image of a horse, you recognize it as a horse, even if you’ve never seen that image before. And it doesn’t matter if the horse is lying on a sofa, or dressed up for Halloween as a hippo. You can recognize a horse because you know about the various elements that define one: the shape of its muzzle, the number and placement of its legs, and so on.

Deep learning can do this. And it’s important for many things including autonomous vehicles. Before a car can determine its next action, it needs to know what’s around it. It must be able to recognize people, bikes, other vehicles, road signs, and more. And do so in challenging visual circumstances. Standard machine learning techniques can’t do that.

Take natural language processing, which is used today in chatbots and smartphone voice assistants, to name two. Consider this sentence and work out what the last part should be:

I was born in Italy and, although I lived in Portugal and Brazil most of my life, I still speak fluent ________.

Hopefully you can see that the most likely answer is Italian (though you would also get points for French, Greek, German, Sardinian, Albanian, Occitan, Croatian, Slovene, Ladin, Latin, Friulian, Catalan, Sicilian, Romani, and Franco-Provencal, and probably several more). But think about what it takes to draw that conclusion.

First you need to know that the missing word is a language. You can do that if you understand “I speak fluent…”. To get Italian you have to go back through that sentence and ignore the red herrings about Portugal and Brazil. “I was born in Italy” implies learning Italian as I grew up (with 93% probability according to Wikipedia), assuming that you understand the implications of born, which go far beyond the day you were delivered. The combination of “although” and “still” makes it clear that I am not talking about Portuguese and brings you back to Italy. So Italian is the likely answer.

Imagine what’s happening in the neural network in your brain. Facts like “born in Italy” and “although…still” are inputs to other parts of your brain as you work things out. And this concept is carried over to deep neural networks via complex feedback loops.

Conclusion

So hopefully that first definition at the beginning of the article makes more sense now. AI refers to devices exhibiting human-like intelligence in some way. There are many techniques for AI, but one subset of that bigger list is machine learning – let the algorithms learn from the data. Finally, deep learning is a subset of machine learning, using many-layered neural networks to solve the hardest (for computers) problems.


The Business Benefits of Data Exchange

“It’s difficult to imagine the power that you’re going to have when so many different sorts of data are available,” predicted Tim Berners-Lee, inventor of the World Wide Web, in 2007.

Ten years on, businesses have never had as much data–and power–as they do now. What’s more, data exchange (sharing and compiling data) between industries is on the rise, meaning this trend is set to continue. The sharing of this intelligence represents an opportunity for some companies to better understand their audiences and improve customer experience, and for others to unlock new revenue streams.

Download your free ebook, “Driving Growth and Innovation Through Big Data.”

A Real Data Exchange Use Case

Operators like Telefonica are using big data to understand television audiences and their usage patterns, allowing the operator to build personalized recommendations for them based on context, time of day, or device. Telefonica’s deep understanding of its customers is also valuable to content providers and media producers who want to tailor content to their audiences.

This means that Telefonica is able to take anonymized television intelligence and share it with advertising agencies and media producers, which helps them better understand the market and the impact of their content on the audience. By taking a proactive approach to data monetization, Telefonica has been able to capture 30% of Spain’s lucrative digital media and advertising market, compared to the 2% telecoms operators contribute, on average, to the advertising value chain[1].

For more information on use cases, read our free guide, “Big Data Use Cases.”

Monetizing Location Insights from Data Exchange

Retailers are also partnering with communications operators, using anonymized and aggregated location insights to improve store location planning and layout and to assess staffing requirements. Europe’s third-largest mobile operator, Turkcell, is using data analytics to deliver location-based services and promotions via SMS, ensuring they’re received when and where they’re most relevant. Data exchange is also enabling advertisers to optimize the location and content of billboards, based on the demographics present in each area.

Looking further afield, insurance companies also see the benefits of this approach. For example, by analyzing a range of data sets, such as GPS data from customers’ cars, Generali can track telematics and accident data to identify driving behaviors and patterns that may have contributed to an accident, improving customer profiling and helping actuaries with fraud detection.

Yet although data monetization represents a market opportunity worth 10% of telecom service providers’ total worldwide revenue, only 0.2% of the industry’s revenue is generated this way today[2]. Businesses are currently sitting on a goldmine of untapped data.

More Benefits of Data Exchange

The exchange of data between industries has a larger potential too: to better serve the way we live. Cities are beginning to analyze geo-data from telecom networks to inform the planning of new developments, transport links, parking sites, and traffic flow. As smart meters become more prevalent, utility companies will be using in-car telematics data to better manage the demand electric vehicles will place on the grid.

Emergency services are also looking at telematics data so they can dispatch teams more quickly in the event of an accident. In one model race demo, emergency response systems are linked to a vehicle so that if a crash does occur, the emergency systems know how serious the crash is and how many people are involved. This ensures those teams arrive with all the information available on the incident.

Success in an ever-growing, data-led market will depend on the willingness of businesses to explore new ways of using data. As the evidence grows, it’s clear that data exchange is a valuable tool for some businesses to grow revenues through new streams and for others to glean new consumer insights. But, as author Richard Bach once said, “Any powerful idea is absolutely fascinating and absolutely useless until we choose to use it.”

From data scientists and analysts, who work closely with company data each day, to business leaders exploring new ways to improve the way they work, Oracle has a set of rich integrated solutions for everybody in your organization.

Read our free ebook, “Driving Growth and Innovation Through Big Data” to learn more about companies that uncover new benefits across their business.

This post is by Amine Mouadden (Director, Big Data Communications & Media, Oracle EMEA). Follow him at linkedin.com/in/mouadden.


[1], [2] External Data Monetization: CSPs Should Cautiously Invest in New Service Offerings to Increase Revenue, Analysys Mason, June 2016.


What Real Database Developers Are Doing With Blockchain, PWA, Docker, And Voice

Jeffrey Erickson

Director of Content Strategy

This article originally appeared on Forbes Oracle Voice http://ora.cl/c8jM2

In the ever-evolving digital economy, fortune favors the curious. It also favors those who know how to make good use of data—the black gold of digital business.

“It’s a great time to be a database developer,” says Jorge Rimblas, a database developer who spoke at the recent ODTUG Kscope technology conference in Orlando, Florida. “But you have to keep learning” to stay on top.

Rimblas, a specialist in Oracle APEX, a popular tool for developing web apps in the Oracle Database, recalls his early years learning to write efficient SQL queries and wrapping his mind around large data models. Today, he’s polishing his abilities with REST services, plumbing JavaScript libraries, and sharing projects on GitHub. “I want to find technologies that challenge me, and help me share the data with more people,” he says.

Below, Rimblas and four fellow database developers talk about the technologies that are changing how they think about managing data, including blockchain, a new mobile development model called Progressive Web Applications, Docker, and voice assistants.

Put Blockchain to Work

When Adrian Png saw a tea company’s stock soar by simply adding “blockchain” to its name, he figured the hype had peaked and it was time to dig in and see what the technology had to offer a database developer.

“As a programmer you don’t just sit comfortable with what you’re doing. You’ve got to look out there to see what’s coming and get yourself ready,” says Png, who chronicles his explorations in his blog Thinking Anew.

Png, formerly database manager for the British Columbia BioLibrary, built a bio library proof of concept using open-source Hyperledger private blockchain technology and Oracle APEX. The “library” holds tissue samples from patients with conditions such as cancer for researchers to study. Any use requires consent from the patient.

Because blockchain gives him a transparent, verifiable, and tamper-proof place to store information, he thought it could work for managing institutional access to data. “As the custodian of patient data, the library stores patients’ data securely in an Oracle Database; Oracle APEX provides the front end and says who I am,” he says, “and then the blockchain decides what I can see.”

With just this first taste of blockchain, Png, who’s now with APEX innovators Insum Solutions, is already thinking differently about designing applications. “When you design a database application, you’re always thinking about securing your data and keeping it private,” he says. But when you build on top of a blockchain service, “you’re writing an application that’s designed to make data transparent and immutable on the blockchain.” He adds, “I wanted to see how blockchain and Oracle APEX gel,” and that went well.
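The tamper-evidence Png describes comes from hash-chaining, which is easy to sketch. Below is a minimal, hypothetical TypeScript illustration of the principle only; it is not Hyperledger’s API, and a real network adds signatures, consensus, and distribution across peers.

```typescript
import { createHash } from "crypto";

// Each block stores the hash of its predecessor, so editing any past
// record invalidates every block that follows it.
interface Block {
  data: string;     // e.g. a consent or access-grant record
  prevHash: string; // hash of the previous block
  hash: string;     // hash of this block's contents
}

const hashBlock = (data: string, prevHash: string): string =>
  createHash("sha256").update(prevHash + data).digest("hex");

const append = (chain: Block[], data: string): void => {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  chain.push({ data, prevHash, hash: hashBlock(data, prevHash) });
};

const isValid = (chain: Block[]): boolean =>
  chain.every((b, i) => {
    const prevHash = i === 0 ? "GENESIS" : chain[i - 1].hash;
    return b.prevHash === prevHash && b.hash === hashBlock(b.data, prevHash);
  });

const ledger: Block[] = [];
append(ledger, "consent granted: patient 42 -> study 7");
append(ledger, "sample 1138 released to study 7");
console.log(isValid(ledger)); // true

ledger[0].data = "consent granted: patient 42 -> study 99"; // tamper
console.log(isValid(ledger)); // false: the edit is detected
```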

Build Progressive Web Apps

This was Vincent Morneau’s fourth year at the ODTUG Kscope conference and, he admits, the fourth year he’d submitted an abstract to teach fellow developers about a cutting-edge technology he knew nothing about. “It forces me to become an expert before I have to present it in front of an audience,” Morneau says.

This year, Morneau spoke on Progressive Web Apps, which allow a web application on a smartphone to deliver a user experience that’s much more like a native app, but without the download barrier that limits native app adoption. “People prefer using apps on their phone rather than web apps in the browser,” he says. “A home screen icon just makes the app more accessible; it works when it’s offline, and it can also push notifications out to users.”

Morneau used Oracle APEX to build and show an app that runs in a browser and will also install on his Android phone. Progressive Web Apps open up a lot of possibilities for developers “because we’re not just building Web apps anymore,” he says. “We’re building apps” that are no longer differentiated from native mobile apps.

The data experts in Morneau’s session had tough questions about how his app cached and moved data from the Oracle Database but otherwise seemed convinced that he was onto something important.
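Those caching questions come down to the service worker at the heart of every PWA. Here is a minimal, hypothetical sketch of the cache-first pattern that keeps an app working offline; the file names are invented, and a real PWA also needs a web manifest for the home screen install.

```typescript
// sw.ts: minimal service worker. Pre-cache the app shell at install
// time, then answer requests from the cache before the network.
declare const self: ServiceWorkerGlobalScope;

const CACHE = "app-shell-v1";
const SHELL = ["/", "/index.html", "/app.js", "/app.css"];

self.addEventListener("install", (event: ExtendableEvent) => {
  // Download and store the app shell before the worker activates.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(SHELL)));
});

self.addEventListener("fetch", (event: FetchEvent) => {
  // Cache-first: serve a cached response, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```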

Join the Docker Faithful

When Martin D’Souza talks to fellow technologists about application development, he often explains his conversion to using Docker: “Once I saw the light on Docker containers, I realized how wrong I had been in my previous development process.”

Docker allows D’Souza to create a container for a resource he needs to build an app, such as a specific version of Java or Oracle REST Data Services (ORDS). Then he can simply point his app at the container. “If the app I’m working on needs a different version of ORDS, I don’t have to install that version on a virtual machine and make sure all the dependencies are installed correctly, I just have to point my app at another container with the right version of ORDS,” he says.

Or, when he’s working with another developer who’s on a different laptop, he can simply send him or her the Docker container and they can begin collaborating. “It has sped things up in so many ways,” says D’Souza. “There’s a whole Docker community out there and I say, ‘I want to do X’ and someone out there has already built it and I download it and I’m up and running.”
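As a hypothetical sketch of that workflow, the snippet below uses the dockerode Node library to start two containers running different ORDS versions side by side, so each app can target the one it needs. The image names, tags, and ports are invented for illustration.

```typescript
import Docker from "dockerode";

const docker = new Docker(); // connects to the local Docker daemon

// Start a container for one specific version of a service and map
// its port, so an app can be pointed at it with just a URL change.
async function startService(image: string, hostPort: number) {
  const container = await docker.createContainer({
    Image: image,
    ExposedPorts: { "8080/tcp": {} },
    HostConfig: {
      PortBindings: { "8080/tcp": [{ HostPort: String(hostPort) }] },
    },
  });
  await container.start();
  return container;
}

async function main() {
  await startService("myrepo/ords:18.2", 8082); // app A points here
  await startService("myrepo/ords:19.1", 8091); // app B points here
}

main().catch(console.error);
```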

Bring Voice to Enterprise Apps

Can voice assistants make interacting with enterprise data easier, and even fun?

Jorge Rimblas says yes. He connected Alexa to his company’s timekeeping and billing application built in Oracle APEX.

“Now I can tell Alexa to pull up a job in my APEX app and add hours. Or tell her I’m done for the day and she’ll say, ‘Yay,’” Rimblas says of the proof of concept he built for his company. Rimblas likes that Alexa’s markup, SSML, adds voice inflection to her responses. “It’s sort of like HTML,” he says, and it allows him to add personality and wit to exchanges with his applications.

But “it took a lot of testing and debugging” before Alexa would return the answers or take the actions he wanted, he warns. “You’re in the logs of the Oracle side to see if you reached the information and on the Amazon side to see why she’s not hearing it.”
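A skill like the one Rimblas describes boils down to intent handlers. Here is a minimal, hypothetical sketch using the ask-sdk-core library for Node; the intent name, slots, and the APEX REST call are invented, and the SSML in the response is the “sort of like HTML” markup he mentions.

```typescript
import * as Alexa from "ask-sdk-core";

// Handles "add hours to a job" requests from the voice model.
const AddHoursHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === "AddHoursIntent"
    );
  },
  async handle(handlerInput) {
    const job = Alexa.getSlotValue(handlerInput.requestEnvelope, "job");
    const hours = Alexa.getSlotValue(handlerInput.requestEnvelope, "hours");
    // A real skill would call the APEX app's REST endpoint here, e.g.:
    // await recordHours(job, hours);
    return handlerInput.responseBuilder
      .speak(
        // SSML controls the inflection of the spoken reply.
        `<amazon:emotion name="excited" intensity="medium">Yay!</amazon:emotion> ` +
          `Logged ${hours} hours on ${job}.`
      )
      .getResponse();
  },
};

// Entry point when the skill is deployed as an AWS Lambda function.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(AddHoursHandler)
  .lambda();
```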

The investment in learning to integrate with a voice assistant is worth it, says fellow database developer Christoph Ruepprich. “There are a gazillion things you can do with a voice assistant,” he says. “It can call up a screen, read back some results, or trigger actions in the application.” It is, he says, a perfect approach for situations where people have their hands full. “If someone is working for an airline and they’re working on an airplane and they need a part, they can say ‘Hey Alexa, do we have this part in stock? Can I have it brought out to me on the shop floor?’” he says. “Or show me the schematics for this or that.”

The growing interest in voice interfaces is also fueling interest in chatbots. Intelligent bot capabilities in Oracle Mobile Cloud Service, for example, provide prebuilt interfaces to voice platforms such as Google Home and Alexa and to messaging platforms such as Facebook Messenger and WeChat.

Ruepprich agrees that his database skills are a great starting point for working with voice assistants. “Because that’s where your bottlenecks are,” he says. “If you don’t write efficient SQL then your great voice app isn’t going to work as well as you want.”
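Ruepprich’s point about efficient SQL is concrete: every spoken request becomes a database round trip, so the query behind it has to be short and index-friendly. Below is a hypothetical sketch using the node-oracledb driver; the table, columns, and connection details are invented.

```typescript
import oracledb from "oracledb";

// Answers "do we have this part in stock?" with a single short query.
// A bind variable (:id) lets the database reuse the parsed statement,
// and filtering on an indexed key avoids a full table scan.
async function partInStock(partId: string): Promise<number> {
  const conn = await oracledb.getConnection({
    user: "app",
    password: process.env.DB_PASSWORD,
    connectString: "dbhost/orclpdb1",
  });
  try {
    const result = await conn.execute(
      `SELECT quantity FROM parts_inventory WHERE part_id = :id`,
      [partId]
    );
    const rows = result.rows as [number][] | undefined;
    return rows?.[0]?.[0] ?? 0;
  } finally {
    await conn.close();
  }
}
```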

Jeff Erickson is an editor-at-large for Oracle.
