Which 2018 OpenWorld Database Sessions Should You Attend?

Oracle has entered a truly exciting time with the development of the Autonomous Database. We’ve continually added new products and new capabilities, like Autonomous Transaction Processing and Autonomous NoSQL Database. Don’t pass up the chance to see what’s new at OpenWorld, happening Oct. 22-25 this year.

There are thousands of database sessions to choose from – everything from how to use machine learning with your data warehouse to sessions about the future of the database.

How do you decide which to attend?

To help you out, we’ve put together some lists that showcase the database and Autonomous Database sessions you don’t want to miss.

Recommended Database Sessions

Here are the showcase sessions we definitely advise you to attend:

Monday, October 22

Unleash the Potential of Your Data with Oracle Autonomous Data Warehouse Cloud [PRO3983]

By Monica Kumar, Vice President, Product Marketing Database and Big Data, Oracle

What if business and IT could unite around data, and both could run and manage it autonomously? You could explore any data and get insights and predictions you never thought possible—in record time. In this session, dive into Oracle Autonomous Data Warehouse Cloud, a comprehensive solution that allows you to connect and integrate any data, store and process it with industry-leading performance, and analyze and visualize the data for actionable business insights. Learn how AI and machine learning identify unique patterns and trends for better predictive insights. Discover how autonomous tuning, patching, upgrading, and more enable error-free operation, reducing data management costs by 50 percent.

Location: Moscone West, Room 3020

Time: 9:00 a.m.–9:45 a.m.

Oracle Autonomous Database Cloud [PKN3947]

By Andrew Mendelsohn, Executive Vice President, Database Server Technologies, Oracle

Oracle Chairman and Chief Technology Officer Larry Ellison describes Oracle Autonomous Database Cloud as “probably the most important thing Oracle has ever done.” In his annual Oracle OpenWorld address, Oracle Executive Vice President Andy Mendelsohn shares the latest updates from the Database Development team along with customer reaction to Oracle Autonomous Database Cloud.

Location: The Exchange @ Moscone South—The Arena

Time: 11:00 a.m.–12:15 p.m.

Keynote: Think Autonomous [KEY3784]

Location: Moscone North, Hall D

Time: 1:45 p.m.–3:00 p.m.

Roadmap Session: Oracle Autonomous OLTP Database Cloud Overview and Roadmap [PRO3978]

By Maria Colgan, Master Product Manager, Oracle and Juan Loaiza, Senior Vice President, Oracle

We are entering a new era in databases with the introduction of Oracle Autonomous OLTP Database Cloud. But what does this mean for your organization, and how can you achieve your key data management goals? This session provides a clear understanding of how the unique Oracle Autonomous OLTP Database Cloud works and illustrates how it can simplify your approach to data management and accelerate your transition to the cloud.

Location: Moscone West, Room 3003

Time: 4:45 p.m.–5:30 p.m.

Tuesday, October 23

Oracle Cloud: Modernize and Innovate on Your Journey to the Cloud [GEN1229]

By Steve Daheb, Vice President, Oracle Cloud, Erik Dvergsnes, Architect, Aker BP, and Michael Morales, CEO/Managing Partner, Quality Metrics Partner

Companies today have three sometimes conflicting mandates: modernize, innovate, AND reduce costs. The right cloud platform can address all three, but migrating isn’t always as easy as it sounds because everyone’s needs are unique, and cookie-cutter approaches just don’t work. Oracle Cloud Platform makes it possible to develop your own unique path to the cloud however you choose—SaaS, PaaS, or IaaS. Learn how Oracle Autonomous Cloud Platform Services automatically repairs, secures, and drives itself, allowing you to reduce cost and risk while at the same time delivering greater insights and innovation for your organization. In this session learn from colleagues who found success building their own unique paths to the cloud.

Location: Moscone West, Room 2002

Time: 12:30 p.m.–1:15 p.m.

Of course, if you haven’t already registered for OpenWorld, sign up today. It’s the best database learning experience of the year.


What Is NoSQL Database?

The database world is changing fast, with new advances that are completely transforming the way organizations use massive amounts of data. Today, we’re announcing the availability of Oracle NoSQL Database Cloud, the latest addition to the Oracle Database portfolio.

Try NoSQL Database today – free

Developers can now focus on application development without dealing with the hassle of managing:

  • Back-end servers
  • Storage expansion
  • Cluster deployments
  • Software installation
  • Patches
  • Upgrades
  • Backup
  • Operating systems
  • High-availability configurations

They can quickly provision throughput as application workload demands change dynamically and create modern applications without the complexities of maintaining a server or storage.

Watch the video to learn more.

What does this mean?

It’s a new era for developers.

Picture this. With NoSQL Database, developers can now easily develop and deploy applications that respond to inquiries incredibly fast, such as:

  • UI personalization
  • Shopping carts
  • Online fraud detection
  • Gaming
  • Online Advertising
  • And much more

NoSQL Database can do this because it is:

  • Modern: A developer-oriented solution, Oracle NoSQL Database is designed for flexibility. The database supports key-value APIs, a simple declarative SQL API, and command-line interfaces, along with flexible data models for data representation, including fixed-schema (relational) and ad-hoc JSON.
  • Open: The database features innovative SQL interoperability between fixed schema and ad-hoc JSON data models. Users also have deployment options to either run the same application in the cloud or on-premises with no platform lock-in.
  • Easy: With an available SDK and support for popular languages including Python, Node.JS, and Java, Oracle offers a no-hassle application development solution to connect easily to the Oracle NoSQL Database Cloud.

What Does NoSQL Database Do?

Oracle NoSQL Database scales to meet dynamic application workload throughput and storage requirements. Users create tables to store their application data and perform CRUD operations.

A NoSQL Database table is similar to a relational table with additional properties like provisioned write units, read units, and storage capacity. Users provision the throughput and storage capacity in each table based on the anticipated workloads. NoSQL Database resources are allocated and scaled accordingly to meet the workload requirements.

Read Units come in two flavors, Absolute Consistency and Eventual Consistency. Use Absolute Consistency when you need the most recently updated data and Eventual Consistency when you can use slightly older data. Users are billed hourly based on the throughput capacity and storage that is provisioned.

Oracle has made it simple to develop applications for NoSQL Database. The application runs as a client that connects to the cloud service; NoSQL Database uses the HTTPS protocol to connect these two independently running processes.

To connect the client to the server, the following snippet of Java code is needed:

URL serviceURL = new URL("http", hostname, port, "/");

NoSQLHandleConfig config = new NoSQLHandleConfig(serviceURL);

The “hostname” is called the Service URI and points to a specific region that identifies where NoSQL Database is running. The first release of NoSQL Database resides in the east coast region of the United States, hence the URI must be “ans.uscom-east-1.oraclecloud.com”.
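
The config is then used to obtain a handle, which is what the table and data operations shown later in this post are issued against. A minimal sketch (assuming the driver exposes a NoSQLHandleFactory entry point, and omitting the credentials setup your tenancy requires) might look like this:

// Assumption: NoSQLHandleFactory is the driver's factory for handles; any
// authentication or credentials configuration is omitted here for brevity.
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);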

How To Get Started with NoSQL Database

The Java driver can be used by application developers who wish to connect directly to the production service. In addition, we provide a simulator of NoSQL Database which can be run locally. The Oracle NoSQL Cloud Simulator is easily installed on a local machine such as a Linux VM running on a laptop. A simple script is available that starts up the Oracle NoSQL Cloud Simulator. The download is available on OTN.

How to Provision a Table with NoSQL Database

As mentioned earlier, users must provision the table they will use to store their data by specifying the maximum amount of write performance the application will require, the maximum amount of read performance the application will require, as well as the size of the table in terms of gigabytes. The write and read amounts are specified as Write Units and Read Units.

  • Write Unit per Month: The throughput of up to 1 kilobyte (KB) of data per second for a write operation over a month, approximately 2.6 million writes.
  • Read Unit per Month: The throughput of up to 1 kilobyte (KB) of data per second for an eventually consistent read operation over a month, approximately 2.6 million reads.

To achieve a throughput of up to 1 kilobyte (KB) of data per second for an absolutely consistent read, the equivalent of two eventually consistent Read Units needs to be provisioned.
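
As a rough worked example (our numbers, not Oracle’s): an application that writes 2 KB records at an average of 10 operations per second needs about 2 × 10 = 20 Write Units, and one that serves 4 KB records to 25 eventually consistent reads per second needs about 4 × 25 = 100 Read Units. The same reads done with Absolute Consistency would require roughly twice that, or 200 Read Units.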

A more detailed discussion of pricing will be in a future blog. Specific pricing and estimates of your monthly usage can be obtained by visiting the Oracle Cloud Cost Estimator.

Once the workload requirements for Write Units, Read Units, and database storage have been determined for your specific application, the following Java code snippet shows how the table would be provisioned:

TableRequest tableRequest = new TableRequest()
    .setStatement("create table if not exists USERS(id integer, " +
        "name JSON, primary key(id))")
    .setTableLimits(new TableLimits(100, 50, 150));

TableResult tres = handle.tableRequest(tableRequest);

In the above example, a table is requested and provisioned with 100 Write Units, 50 Read Units, and 150 GB of storage. These are the initial values that the application will be able to use.

The following Java code example shows how to insert a row containing a small JSON document into the table:

private final static String JSON_DATA = "{" +
    "  \"userdata\": [" +
    "    {" +
    "      \"id\": \"1\"," +
    "      \"name\": \"Julie Sherman\"," +
    "      \"gender\": \"female\"," +
    "      \"Age\": \"25\"" +
    "    }" +
    "  ]" +
    "}";

MapValue value = new MapValue()
    .put("id", 1)
    .putFromJson("name", JSON_DATA, null);

PutRequest putRequest = new PutRequest()
    .setValue(value)
    .setTableName("users");

PutResult putRes = handle.put(putRequest);
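
Reading the row back is symmetrical, and it is also where the Absolute versus Eventual Consistency choice described earlier comes into play. The sketch below is our illustration of how that read might look with the same Java driver; treat the exact method and constant names as assumptions:

GetRequest getRequest = new GetRequest()
    .setKey(new MapValue().put("id", 1))
    .setTableName("users")
    // Absolute Consistency returns the most recently written value and consumes
    // roughly twice the Read Units of an eventually consistent read.
    .setConsistency(Consistency.ABSOLUTE);

GetResult getRes = handle.get(getRequest);
MapValue row = getRes.getValue();   // null if no row matches the key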

Over time your workload may grow, which means the Write Units, Read Units, or storage amount may need to increase as well. In some cases, depending on the workload, these values might instead need to be decreased. The following Java code example changes the TableLimits values to 2000, 100, and 500.

TableRequest tableRequest = new TableRequest()
    .setTableName("users")
    .setTableLimits(new TableLimits(2000, 100, 500))
    .setTimeout(1000);

TableResult res = handle.tableRequest(tableRequest);

A Benefit of a NoSQL Database in the Cloud

One of the most important aspects of the cloud service is the ability to increase or decrease the provisioned throughput based on the workload. For example (see Figure 1), if your application exhibits time-of-day or day-of-week peak throughput patterns, you can raise the provisioned throughput to meet those peaks and lower it when demand tapers off. This means you pay for peak throughput only when it is needed.

A dynamic workload like this may be seen in a shopping site during some peak buying seasons or for any application that has periodic workload changes. As your workload decreases, there is no need to use your Oracle cloud credits to maintain the throughput at the peak demand level. Your application can recognize the decreased demands and scale back, thus saving your company operational expenses. While all this is happening, your latency remains consistent.

Figure 1 – Changes in throughput as workload changes

Summary

Oracle NoSQL Database solves a broad range of problems frequently encountered in modern applications. With support for predictable, low-latency access to key/value data and a rich JSON query experience, it’s ideal for UI personalization, real-time fraud detection, and IoT data management applications.

Developers can quickly change the provisioned throughput based on workload demands. The API for NoSQL Database is simple to use and flexible in terms of the types of data that can be stored. The predictable latencies will appeal to any developer who is creating an application in which users require fast interaction.

As a fully managed service, developers and organizations can concentrate on creating innovative applications without the hassle of maintaining a server, storage, and network infrastructure. More information can be found on the NoSQL Database webpage. To get started and see what NoSQL Database can do for you, start your free trial today.

For more information, download:

Whitepaper: Flexible Data Models. Zero Administration. Automatic Scaling.

Whitepaper: Oracle NoSQL Database Cloud

Whitepaper: Capacity Planning for Oracle NoSQL Database Cloud

Written by Michael Schulman and Michael Brey


What Is Autonomous NoSQL Database?

The database world is changing fast, with new advances that are completely transforming the way organizations use massive amounts of data. Today, we’re announcing the availability of Oracle Autonomous NoSQL Database Cloud, the latest addition to the Oracle Autonomous Database portfolio.

Try Autonomous NoSQL Database today – free

Developers can now focus on application development without dealing with the hassle of managing:

  • Back-end servers
  • Storage expansion
  • Cluster deployments
  • Software installation
  • Patches
  • Upgrades
  • Backup
  • Operating systems
  • High-availability configurations

They can quickly provision throughput as application workload demands change dynamically and create modern applications without the complexities of maintaining a server or storage.

Watch the video to learn more.

What does this mean?

It’s a new era for developers.

Picture this. With Autonomous NoSQL Database, developers can now easily develop and deploy applications that respond to inquiries incredibly fast, such as:

  • UI personalization
  • Shopping carts
  • Online fraud detection
  • Gaming
  • Online Advertising
  • And much more

Autonomous NoSQL Database can do this because it is:

  • Modern: A developer-oriented solution, Oracle Autonomous NoSQL Database is designed for flexibility. The database supports key-value APIs, a simple declarative SQL API, and command-line interfaces, along with flexible data models for data representation, including fixed-schema (relational) and ad-hoc JSON.
  • Open: The database features innovative SQL interoperability between fixed schema and ad-hoc JSON data models. Users also have deployment options to either run the same application in the cloud or on-premises with no platform lock-in.
  • Easy: With an available SDK and support for popular languages including Python, Node.JS, and Java, Oracle offers a no-hassle application development solution to connect easily to the Oracle Autonomous NoSQL Database Cloud.

What Does the Autonomous NoSQL Database Do?

Autonomous NoSQL Database scales to meet dynamic application workload throughput and storage requirements. Users create tables to store their application data and perform CRUD operations.

An Autonomous NoSQL Database table is similar to a relational table with additional properties like provisioned write units, read units, and storage capacity. Users provision the throughput and storage capacity in each table based on the anticipated workloads. Autonomous NoSQL Database resources are allocated and scaled accordingly to meet the workload requirements.

Read Units come in two flavors, Absolute Consistency and Eventual Consistency. Use Absolute Consistency when you need the most recently updated data and Eventual Consistency when you can use slightly older data. Users are billed hourly based on the throughput capacity and storage that is provisioned.

Oracle has made it simple to develop applications for Autonomous NoSQL Database. The application runs as a client that connects to the cloud service; Autonomous NoSQL Database uses the HTTPS protocol to connect these two independently running processes.

To connect the client to the server, the following snippet of Java code is needed:

URL serviceURL = new URL("http", hostname, port, "/");

NoSQLHandleConfig config = new NoSQLHandleConfig(serviceURL);

The “hostname” is called the Service URI and points to a specific region that identifies where Autonomous NoSQL Database is running. The first release of Autonomous NoSQL Database resides in the east coast region of the United States, hence the URI must be “ans.uscom-east-1.oraclecloud.com”.
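
The config is then used to obtain a handle, which is what the table and data operations shown later in this post are issued against. A minimal sketch (assuming the driver exposes a NoSQLHandleFactory entry point, and omitting the credentials setup your tenancy requires) might look like this:

// Assumption: NoSQLHandleFactory is the driver's factory for handles; any
// authentication or credentials configuration is omitted here for brevity.
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);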

How To Get Started with Autonomous NoSQL Database

The Java driver can be used by application developers who wish to connect directly to the production service. In addition, we provide a simulator of Autonomous NoSQL Database which can be run locally. The Oracle NoSQL Cloud Simulator is easily installed on a local machine such as a Linux VM running on a laptop. A simple script is available that starts up the Oracle NoSQL Cloud Simulator. The download is available on OTN.

How to Provision a Table with Autonomous NoSQL Database

As mentioned earlier, users must provision the table they will use to store their data by specifying the maximum amount of write performance the application will require, the maximum amount of read performance the application will require, as well as the size of the table in terms of gigabytes. The write and read amounts are specified as Write Units and Read Units.

  • Write Unit per Month: The throughput of up to 1 kilobyte (KB) of data per second for a write operation over a month, approximately 2.6 million writes.
  • Read Unit per Month: The throughput of up to 1 kilobyte (KB) of data per second for an eventually consistent read operation over a month, approximately 2.6 million reads.

To achieve a throughput of up to 1 kilobyte (KB) of data per second for an absolutely consistent read, the equivalent of two eventually consistent Read Units needs to be provisioned.
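
As a rough worked example (our numbers, not Oracle’s): an application that writes 2 KB records at an average of 10 operations per second needs about 2 × 10 = 20 Write Units, and one that serves 4 KB records to 25 eventually consistent reads per second needs about 4 × 25 = 100 Read Units. The same reads done with Absolute Consistency would require roughly twice that, or 200 Read Units.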

A more detailed discussion of pricing will be in a future blog. Specific pricing and estimates of your monthly usage can be obtained by visiting the Oracle Cloud Cost Estimator.

Once the workload requirements for Write Units, Read Units, and database storage have been determined for your specific application, the following Java code snippet shows how the table would be provisioned:

TableRequest tableRequest = new TableRequest()
    .setStatement("create table if not exists USERS(id integer, " +
        "name JSON, primary key(id))")
    .setTableLimits(new TableLimits(100, 50, 150));

TableResult tres = handle.tableRequest(tableRequest);

In the above example, a table is requested and provisioned with 100 Write Units, 50 Read Units, and 150 GB of storage. These are the initial values that the application will be able to use.

The following Java code example shows how to insert a row containing a small JSON document into the table:

private final static String JSON_DATA = "{" +
    "  \"userdata\": [" +
    "    {" +
    "      \"id\": \"1\"," +
    "      \"name\": \"Julie Sherman\"," +
    "      \"gender\": \"female\"," +
    "      \"Age\": \"25\"" +
    "    }" +
    "  ]" +
    "}";

MapValue value = new MapValue()
    .put("id", 1)
    .putFromJson("name", JSON_DATA, null);

PutRequest putRequest = new PutRequest()
    .setValue(value)
    .setTableName("users");

PutResult putRes = handle.put(putRequest);
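
Reading the row back is symmetrical, and it is also where the Absolute versus Eventual Consistency choice described earlier comes into play. The sketch below is our illustration of how that read might look with the same Java driver; treat the exact method and constant names as assumptions:

GetRequest getRequest = new GetRequest()
    .setKey(new MapValue().put("id", 1))
    .setTableName("users")
    // Absolute Consistency returns the most recently written value and consumes
    // roughly twice the Read Units of an eventually consistent read.
    .setConsistency(Consistency.ABSOLUTE);

GetResult getRes = handle.get(getRequest);
MapValue row = getRes.getValue();   // null if no row matches the key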

Over time your workload may grow, which means the Write Units, Read Units, or storage amount may need to increase as well. In some cases, depending on the workload, these values might instead need to be decreased. The following Java code example changes the TableLimits values to 2000, 100, and 500.

TableRequest tableRequest = new TableRequest()
    .setTableName("users")
    .setTableLimits(new TableLimits(2000, 100, 500))
    .setTimeout(1000);

TableResult res = handle.tableRequest(tableRequest);

A Benefit of a NoSQL Database in the Cloud

One of the most important aspects of the cloud service is the ability to increase or decrease the provisioned throughput based on the workload. For example (see Figure 1), if your application exhibits time-of-day or day-of-week peak throughput patterns, you can raise the provisioned throughput to meet those peaks and lower it when demand tapers off. This means you pay for peak throughput only when it is needed.

A dynamic workload like this may be seen in a shopping site during some peak buying seasons or for any application that has periodic workload changes. As your workload decreases, there is no need to use your Oracle cloud credits to maintain the throughput at the peak demand level. Your application can recognize the decreased demands and scale back, thus saving your company operational expenses. While all this is happening, your latency remains consistent.

Figure 1 – Changes in throughput as workload changes

Summary

Autonomous NoSQL Database solves a broad range of problems frequently encountered in modern applications. With support for predictable, low-latency access to key/value data and a rich JSON query experience, it’s ideal for UI personalization, real-time fraud detection, and IoT data management applications.

Developers can quickly change the provisioned throughput based on workload demands. The API for Autonomous NoSQL Database is simple to use and flexible in terms of the types of data that can be stored. The predictable latencies will appeal to any developer who is creating an application in which users require fast interaction.

As a fully managed service, developers and organizations can concentrate on creating innovative applications without the hassle of maintaining a server, storage, and network infrastructure. More information can be found on the Autonomous NoSQL Database webpage. To get started and see what Autonomous NoSQL Database can do for you, start your free trial today.

For more information, download:

Whitepaper: Flexible Data Models. Zero Administration. Automatic Scaling.

Whitepaper: Oracle Autonomous NoSQL Database Cloud

Whitepaper: Capacity Planning for Oracle Autonomous NoSQL Database Cloud

Written by Michael Schulman and Michael Brey


Data in Action: IoT and the Smart Bearing

The Internet of Things (IoT) represents a big wave of technological change, and organizations in virtually every industry will benefit from this technology. Some 4.9 billion connected objects will be in use this year, up 30 percent from just last year, predicts research firm Gartner. By 2020, it adds, that number will increase to some 25 billion connected objects worldwide.

Businesses in many industries are evaluating the use of IoT technology for remote monitoring and to improve maintenance for mission critical operations. In a recent article, researchers with McKinsey & Company stated that “High uncertainty and low growth rates have forced companies in transportation, energy, manufacturing and other industries to squeeze every asset for maximum value.”

The problem is that reactive maintenance exposes companies to significant risks and is not the transformational solution businesses need to remain or become competitive. Cheaper computational power, data streaming, autonomous data management and advanced analytics with embedded machine learning and visualization are enabling more efficient and effective asset utilization.

What is needed is a predictive maintenance system that relies on an informed approach for each production asset. It should gather data from multiple connected sources, such as temperature and acceleration sensors, so that failure prediction is more reliable.

A key functional component of assets and equipment in many industries, and the topic of extensive analysis and Industry 4.0 (I4) focus, is the mechanical bearing. Bearings are critical components of rotating equipment including engines, fans, pumps and machines of all types. They are responsible for the continuous operation of planes, vehicles, production machinery, wind turbines, air conditioning systems and elevator hoists. The purpose of bearings is to reduce friction between moving parts and to enable continuous operation. The name comes from the notion of ‘bearing’ the load of a rotating shaft or sliding surface.

Bearings are designed for continuous and long use but each one has a finite lifespan and eventually will fail. Bearing failure causes the equipment it supports to cease operation, resulting in impact that ranges from inconvenient (a household fan stops running) to disruptive (a production line goes down) to catastrophic (a vehicle engine fails). Maintenance managers want to minimize risk and avoid unexpected service disruptions, particularly when the economic or human costs are high, but replacing bearings requires equipment downtime and has significant cost. They are constantly trying to achieve a balance between equipment up time and maintenance cost.

Three approaches to bearing maintenance are: (1) run to failure and replace, (2) perform maintenance at scheduled intervals based on observed aggregate historical norms, and (3) use condition monitoring. Bearing condition monitoring is based on wireless sensors embedded in bearings or located in host assemblies. It involves analyzing huge volumes of vibration data, isolating frequencies associated with the bearing geometry, calculating the spectrum view of the data, analyzing the spectrum and then comparing the spectrum to historical data.

Before they fail, bearings emit telltale signs of weakness resulting from excessive wear. These signs include increased vibration and higher operating temperature. The trick is to use data streams to anticipate time to failure and to lower the risk of downtime, while maximizing useful life. Handling that stream, storing all the historical data, and running the machine learning models is all part of the big data story.
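
To make the spectrum-comparison idea concrete, here is a minimal, illustrative sketch (not from the original article; every class name and threshold is ours). It computes a magnitude spectrum from a window of vibration samples with a naive DFT and flags when the energy near a bearing's characteristic defect frequency grows well beyond a historical baseline:

public class BearingSpectrumCheck {

    // Magnitude spectrum of a real-valued signal via a naive DFT (O(n^2)).
    static double[] magnitudeSpectrum(double[] samples) {
        int n = samples.length;
        double[] mags = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            mags[k] = Math.hypot(re, im) / n;
        }
        return mags;
    }

    // Compare energy in a small band around the defect frequency to a historical baseline.
    static boolean anomalous(double[] samples, double sampleRateHz,
                             double defectFreqHz, double baselineEnergy, double factor) {
        double[] mags = magnitudeSpectrum(samples);
        double binWidthHz = sampleRateHz / samples.length;
        int center = (int) Math.round(defectFreqHz / binWidthHz);
        double energy = 0;
        for (int k = Math.max(0, center - 2); k <= Math.min(mags.length - 1, center + 2); k++) {
            energy += mags[k];
        }
        return energy > factor * baselineEnergy; // e.g. flag when the band energy doubles
    }
}

In practice this comparison would use an FFT library, proper windowing, and per-bearing baselines learned from historical data rather than fixed constants, but the shape of the calculation is the same.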

“If you wait too long, you can destroy the shaft and the bearing. But do it too early and you lose money by replacing a bearing that can run longer,” says author Alan S. Brown.

Bearing Failure

Advances in technology have made it possible to establish normal operating conditions by continuously monitoring the performance of each individual bearing, including vibration, temperature, torque and rotational speed, and to then use machine learning to process vast amounts of data. The result is the capability to find hidden patterns that represent potential failure scenarios and to predict the remaining life of the bearing.

The combination of IoT, machine learning and analytics provides a solution for maintenance managers, enabling them to optimize machine life, manage costs and reduce the risk of damaging failure. “Machine learning… comes without the prejudices of engineers who look for problems only when they expect to see them,” Brown notes.

Combining this capability with powerful, visual analytics provides real-time insight into bearing condition and empowers engineers to reduce cost, raise up time and lower risk.

According to Krishna Raman of Frost & Sullivan, “The adoption of the Industrial Internet of Things (IIoT)-based smart bearings, which can self-diagnose impending faults and failures, is expected to significantly increase in aerospace and defense, wind turbines, railway and automotive” segments. Bearing manufacturers are now looking at ways to leverage data and analytics to provide predictability rather than just metal components, and to “…catapult one of the world’s oldest mechanical devices into the digital future.”

Visit our website to learn more about how to apply Oracle Big Data to your IoT strategy.

Guest author, Jake Krakauer (@JakeKrakauer) is the Head of Product Marketing, Oracle Analytics


How Algorithms Can Shape Our Data Future

In 2002, I saw an amazing movie, Minority Report with Tom Cruise. It really made an impression on me—not for the science fiction nature of it, but for the possibilities or the reality of it.

I work in the machine learning, data mining, and applied math area. I work with a lot of very smart people here at Oracle, and we do amazing things. And when I got out of that movie, I thought, “Hmm, that’s pretty possible. I don’t see that as being so farfetched.”

The reason that the movie rings so true is Oracle’s strategy, because at Oracle, our strategy is to move the algorithm, not the data. And when we do that, things like Minority Report, things like the type of new scenarios and possibilities that are written in Patrick Tucker’s book, The Naked Future, all of those things become so much more possible. What happens in a world that anticipates your every move?

At Oracle, we’ve taken a different approach all together. We’ve said data gets bigger and bigger and bigger each year, and data, at some point, becomes so large that it becomes almost immovable, and it makes no sense to move all the data to some other location to calculate a median, or to do a T test, or to run a decision tree, or a logistic regression, or a neural network, or you name it, whatever. What makes much more sense is to bring the algorithms to the data, and that’s what we’ve done.

So, imagine today, you’re on your iPhone, and you wake up in the morning. I’m in the Boston area, and I get the local news, and I get some latest update about something that’s going on in Boston. And I might get the local news, the local weather, might be my stated interests, my favorite sports teams, the Boston Celtics, the New England Patriots, the Red Sox, that kind of stuff, and I might get some national news updates. Traditional stuff, right?

Now, imagine a slightly revised future based on an example in The Naked Future. I wake up and my device tells me, “When you meet your old girlfriend at the coffee shop this morning, act surprised to learn that she’s getting married.”

Huh, that’s interesting.

So, I meet my old girlfriend at the coffee shop and I say, “Oh, by the way, congratulations on getting married,” and she sort of recoils and says, “What do you mean I’m getting married? Who told you that? How did you know?” And, scrambling, I say, “Well, I don’t know, I think I saw it on your Facebook post.” And she says, “I didn’t post anything to anybody anywhere. How did you know? How did you know?” And it becomes kind of confrontational.

And so, if you look at the very near future, the possibilities there, you can see how this could have easily happened. You’re collecting a lot of different data from different locations, different places, and you have maybe the girlfriend changed her address recently, maybe she’s moved in with a boyfriend or moved out of the house into an apartment, or who knows what. Maybe they’ve recently adopted a dog, maybe they’ve had a lot of Facebook pictures of the two of them together, maybe there’s some tweets of, “I’m so in love,” things like that, “Looking forward to spending our life together forever,” things like that. And maybe there’s an online ring purchase off Amazon or a jewelry store, some sort of purchase of a large ring. All these things are quite possible, so it is very real.

So where is this going today? Well, it’s still the basics, right, and in the basics, you must have good data, you must have a place to store your data. This is where I think Oracle can play a role here, but it’s not just the data, it’s the data and the domain knowledge, and that’s the most important thing. You need to know the data, you need to know. It’s not just your bonus amount this year, it’s your bonus amount this year versus last year compared to your peers. It’s the rate of change of the number of opioids that you’re taking compared to what you used to be doing. It’s all these temporal kinds of data and comparative and derive variables that are very specific. It has nothing to do with machine learning algorithms, but they are the most important thing to get you started.

So, there are, of course, machine learning algorithms, and Oracle has, fortunately, great libraries of all of these. We have about 30 machine learning algorithms that run in the database. We have about 30 machine learning algorithms that run in Spark and Hadoop. You gotta have the data, you gotta have the domain knowledge with the data. Those kind of go together in my mind, along with the algorithms. And then what does that generate? That generates models, predictions, and insights, and it makes you feel like that, although that’s a little bit science fiction-y.

But really, from a more practical point of view, it gives you the ability to hit your customer with the right product at the right time, anticipate things, know what’s a healthy outcome, and really have much greater insight into the, I guess, future of your customers.

And so, that’s all important, but the most important thing is to operationalize this, because if you don’t deploy and operationalize your analytical methodology, you just have a list of customers on a piece of paper. You have an interesting report, you have an interesting pie chart, but you need to deploy that, you need to operationalize that. And if you remember what I said in the beginning about how Oracle brings that algorithms to the data, that changes everything.

I have recorded my talk about how algorithms fuel these changes, real world examples, and a whole lot more. Click on the video below to view it.

If you are interested in how to apply machine learning, algorithms, and Big Data strategies to your own business, visit Oracle Big Data.


Design Your Data Lake for Maximum Impact

Data lakes are fast becoming valuable tools for businesses that need to organize large volumes of highly diverse data from multiple sources. However, if you are not a data scientist, a data lake may seem more like an ocean that you are bound to drown in. Making a data lake manageable for everyone requires mindful designs that empower users with the appropriate tools.

A recent webcast conducted by TDWI and Oracle, entitled “How to Design a Data Lake with Business Impact in Mind,” identified the best use cases for using a data lake and then defined how to design one for an enterprise-level business. The presentation recommended keeping data-driven use cases at the forefront, making the data lake a central IT-managed function, blending old and new data, empowering self-service, and establishing a sponsor group to manage the company’s data lake plan with enough staffing and skills to keep it relevant.

“Businesses want to make more fact-based decisions, but they also want to go deeper into the data they have with analytics,” says Philip Russom, a Senior Research Director for Data Management at TDWI. “We see data lakes as a good advantage for companies that want to do this, as the data can be repurposed repeatedly for new analytics and use cases.”

Data lake usage is on the rise, according to TDWI surveys. A 2017 survey revealed that nearly a quarter of the businesses questioned (23 percent) already have a data lake in production, another quarter (24 percent) expect to launch one within 12 months, and only 7 percent said they would not jump into a data lake. A significant number (21 percent) said they would establish a data lake within three years.

In the same survey, respondents were asked about the business benefits of deploying a Hadoop-based data lake. Half (49 percent) rated advanced analytics, including data mining, statistics, and machine learning, as the primary use case, followed by data exploration and discovery. Using the lake as a big data source for analytics was the third most common response.

Use cases for data lakes include investigating new data coming from sensors and machines, streaming, and human language text. More complex uses for data lakes include multiplatform data warehouse environments, omnichannel marketing, and digital supply chain.

The best argument for deploying and using a data lake is the ability to blend old and new data together. This is especially helpful for departments like marketing, finance, and governance, which require insight from multiple sources, old and new. Russom noted that multi-module enterprise resource planning, Internet of Things (IoT), insurance claim workflow, and digital healthcare would all be areas that could benefit from data lake deployments.

When it comes to design, Russom suggests the following:

  • Create a plan, prioritize use cases, and update the plan as the business evolves
  • Choose data platform(s) that support business requirements
  • Get tools that work with the platform and satisfy user requirements
  • Augment your staff with consultants experienced with data lakes
  • Train staff for Hadoop, analytics, lakes, and clouds
  • Start with a business use case that a lake can address with ROI

Bruce Edwards, a Cloud Luminary and Information Management Specialist with Oracle, added that the convergence of cloud, big data, and data science has enabled the explosion of data lake deployments. Having a central vendor that not only understands large-scale data management but can also integrate existing infrastructures into core data lake components is essential.

“What data lake users need is an open, integrated, self-healing, high-performance tool,” Edwards said. “These elements are all needed to allow businesses to begin their data lake journey.”

To experience the entire webcast, download the presentation from our website. If you’re ready to start playing around with a data lake, we can offer you a free trial right here.


Transaction Processing from Oracle is Now Autonomous

Today Larry Ellison announced the general availability of Oracle Autonomous Transaction Processing Cloud Service, the newest member of the Oracle Autonomous Database family, combining the flexibility of cloud with the power of machine learning to deliver data management as a service.

Traditionally, creating a database management system required a team of experts to custom build and manually maintain a complex hardware and software stack. With each system being unique, this approach led to poor economies of scale and a lack of the agility typically needed to give the business a competitive edge.

Try Autonomous Transaction Processing—Sign up for the trial

Autonomous Transaction Processing enables businesses to safely run a complex mix of high-performance transactions, reporting, and batch processing using the most secure, available, performant, and proven platform – Oracle Database on Exadata in the cloud. Unlike manually managed transaction processing databases, Autonomous Transaction Processing provides instant, elastic compute and storage, so only the required resources are provisioned at any given time, greatly decreasing runtime costs.

But What Does the Autonomous in Autonomous Transaction Processing Really Mean?

Self-Driving

Autonomous Transaction Processing is a self-driving database, meaning it eliminates the human labor needed to provision, secure, update, monitor, backup, and troubleshoot a database. This reduction in database maintenance tasks reduces costs and frees scarce administrator resources to work on higher-value tasks.

When an Autonomous Transaction Processing database is requested, an Oracle Real-Application-Cluster (RAC) database is automatically provisioned on Exadata Cloud Infrastructure. This high-availability configuration automatically benefits from many of the performance-enhancing Exadata features such as smart flash cache, Exafusion communication over a super-fast InfiniBand network, and automatic storage indexes.

In addition, when it comes time to update Autonomous Transaction Processing, patches are applied in a rolling fashion across the nodes of the cluster, eliminating unnecessary down time. Oracle automatically applies all clusterware, OS, VM, hypervisor, and firmware patches as well.

In Autonomous Transaction Processing the user does not get OS login privileges or SYSDBA privileges, so even if you want to do the maintenance tasks yourself, you cannot. It is like a car with the hood welded shut so you cannot change the oil or add coolant or perform any other maintenance yourself.

Many customers want to move to the cloud because of the elasticity it can offer. The ability to scale both in terms of compute and storage only when needed, allows people to truly pay per use. Autonomous Transaction Processing not only allows you to scale compute and storage resources, but it also allows you to do it independently online (no application downtime required).

Self-Securing

Autonomous Transaction Processing is also self-securing, as it protects itself from both external attacks and malicious internal users. Security patches are automatically applied every quarter. This is much sooner than most manually operated databases, narrowing an unnecessary window of vulnerability. Patching can also occur off-cycle if a zero-day exploit is discovered. Again, these patches are applied in a rolling fashion across the nodes of the cluster, avoiding application downtime.

But patching is just part of the picture. Autonomous Transaction Processing also protects itself with always-on encryption. This means data is encrypted at rest but also during any communication with the database. Customers control their own encryption keys to further improve security.

Autonomous Transaction Processing also secures itself from Oracle cloud administrators using Oracle Database Vault. Database Vault uniquely allows Oracle’s cloud administrators to do their jobs but prevents them from being able to see any customer data stored in Autonomous Transaction Processing.

Finally, customers are not given access to either the operating system or the SYSDBA privilege to prevent security breaches from malicious internal users or from stolen administrator credentials via a phishing attack.

Self-Repairing

Autonomous Transaction Processing automatically recovers from any failures without downtime. The service is deployed on our Exadata cloud infrastructure, which has redundancy built-in at every level of the hardware configuration to protect against any server, storage, or network failures.

Autonomous Transaction Processing automatically backs up the database nightly and gives the ability to restore the database from any of the backups in the archive. It also has the ability to rewind data to a point in time in the past to back out any user errors using Oracle’s unique Flashback Database capabilities.

Since users don’t have access to the OS, Oracle is on the hook to diagnose any problems that may occur. Machine learning is used to detect and diagnose any anomalies. If the database detects an impending error, it gathers statistics and feeds them to AI diagnostics to determine the root cause. If it’s a known issue, the fix is quickly applied. If it’s a new issue a service request will be automatically opened with Oracle support.

How Does Autonomous Transaction Processing Differ from the Autonomous Data Warehouse?

Up until now, all of the functionality I have described is shared between both Autonomous Data Warehouse and Autonomous Transaction Processing. Where the two services differ is actually inside the database itself. Although both services use Oracle Database 18c, they have been optimized differently to support two very different but complementary workloads. The primary goal of the Autonomous Data Warehouse is to achieve fast complex analytics, while Autonomous Transaction Processing has been designed to efficiently execute a high volume of simple transactions.

Configuration

The differences in the two services begin with how we configure them. In Autonomous Data Warehouse, the majority of the memory is allocated to the PGA to allow parallel joins and complex aggregations to occur in memory rather than spilling to disk. On Autonomous Transaction Processing, by contrast, the majority of the memory is allocated to the SGA to ensure the critical working set can be cached to avoid IO.

Data Formats

We also store the data differently in each service. In the Autonomous Data Warehouse, data is stored in a columnar format, as that’s the best format for analytics processing. In Autonomous Transaction Processing, data is stored in a row format. The row format is ideal for transaction processing, as it allows quick access and updates to all of the columns in an individual record, since all of the data for a given record is stored together in memory and on storage.

Statistics Gathering

Regardless of which type of autonomous database service you use, optimizer statistics will be automatically maintained. On the Autonomous Data Warehouse, statistics (including histograms) are automatically maintained as part of all bulk-load activities. With Autonomous Transaction Processing, data is added using more traditional insert statements, so statistics are automatically gathered when the volume of data changes significantly enough to make a difference to the statistics.

Query Optimization

Queries executed on the Autonomous Data Warehouse are automatically parallelized, as they tend to access large volumes of data in order to answer the business question. On Autonomous Transaction Processing, indexes are used to access only the specific rows of interest. We also use RDMA on Autonomous Transaction Processing to provide low-response-time direct access to data stored in memory on other servers in the cluster.

Resource Management

Both Autonomous Data Warehouse and Autonomous Transaction Processing offer multiple database “services” to make it easy for users to control the priority and parallelism used by each session. The services predefine three priority levels: Low, Medium, and High. Users can just choose the best priority for each aspect of their workload. For each database service you have the ability to define the criteria of a runaway SQL statement; any SQL statement that exceeds these parameters, either in terms of elapsed time or IO, will be automatically terminated. On Autonomous Data Warehouse only one service (LOW) automatically runs SQL statements serially, while on Autonomous Transaction Processing only one service (PARALLEL) automatically runs SQL statements with parallel execution. You can use the Medium priority service by default, which allows the Low priority service to be used for requests such as reporting and batch to prevent them from interfering with mainstream transaction processing. The High priority level can be used for more important users or actions.
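
As a rough sketch of how an application might pick one of these priority services, the JDBC snippet below connects through a medium-priority connect-string alias. The alias name ("myatp_medium"), wallet path, and credentials are placeholders of ours, not values from the service documentation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class PriorityServiceExample {
    public static void main(String[] args) throws Exception {
        // Each priority level is exposed as its own connect-string alias; pick the one
        // that matches the work: high for important users, medium for mainstream
        // transactions, low for reporting and batch. All values below are placeholders.
        String url = "jdbc:oracle:thin:@myatp_medium?TNS_ADMIN=/path/to/wallet";

        Properties props = new Properties();
        props.setProperty("user", "app_user");          // hypothetical credentials
        props.setProperty("password", "app_password");

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement()) {
            stmt.execute("select 1 from dual");          // placeholder workload
        }
    }
}

The point is that priority is chosen per connection, so the same application can route different kinds of work to different services.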

Can I Use Autonomous Transaction Processing to Develop New Applications?

Autonomous Transaction Processing is the ideal platform for new application development. Developers no longer have to wait on others to provision hardware, install software, and create a database for them. With Autonomous Transaction Processing, developers can easily deploy an Oracle database in a matter of minutes, without worrying about manual tuning or capacity planning.

Autonomous Transaction Processing also has the most advanced SQL and PL/SQL support accelerating developer productivity by minimizing the amount of application code required to implement complex business logic. It also has a complete set of integrated Machine Learning algorithms, simplifying the development of applications that perform real-time predictions such as personalized shopping recommendations, customer churn rates, and fraud detection.

Where Can I Get More Information and Get My Hands on Autonomous Transaction Processing?

The first place to visit is the Autonomous Transaction Processing Documentation. There you will find details on exactly what you can expect from the service.

We also have a great program that lets you get started with Oracle Cloud with $300 in free credits, which last much longer than you would expect since the trial service has very low pricing. Using your credits (which will probably last you around 30 days, depending on how you configure Autonomous Transaction Processing) you will be able to get valuable hands-on time to try loading some of your own workloads. Below is a quick video to help you get started.


What to Expect from Oracle Autonomous Transaction Processing

Today Larry Ellison announced the general availability of Oracle Autonomous Transaction Processing Cloud Service, the newest member of the Oracle Autonomous Database family, combining the flexibility of cloud with the power of machine learning to deliver data management as a service.

Traditionally, creating a database management system required a team of experts to custom build and manually maintain a complex hardware and software stack. With each system being unique, this approach led to poor economies of scale and a lack of the agility typically needed to give the business a competitive edge.

Try Autonomous Transaction Processing—Sign up for the trial

Autonomous Transaction Processing enables businesses to safely run a complex mix of high-performance transactions, reporting, and batch processing using the most secure, available, performant, and proven platform – Oracle Database on Exadata in the cloud. Unlike manually managed transaction processing databases, Autonomous Transaction Processing provides instant, elastic compute and storage, so only the required resources are provisioned at any given time, greatly decreasing runtime costs.

But What Does the Autonomous in Autonomous Transaction Processing Really Mean?

Self-Driving

Autonomous Transaction Processing is a self-driving database, meaning it eliminates the human labor needed to provision, secure, update, monitor, backup, and troubleshoot a database. This reduction in database maintenance tasks reduces costs and frees scarce administrator resources to work on higher-value tasks.

When an Autonomous Transaction Processing database is requested, an Oracle Real-Application-Cluster (RAC) database is automatically provisioned on Exadata Cloud Infrastructure. This high-availability configuration automatically benefits from many of the performance-enhancing Exadata features such as smart flash cache, Exafusion communication over a super-fast InfiniBand network, and automatic storage indexes.

In addition, when it comes time to update Autonomous Transaction Processing, patches are applied in a rolling fashion across the nodes of the cluster, eliminating unnecessary down time. Oracle automatically applies all clusterware, OS, VM, hypervisor, and firmware patches as well.

In Autonomous Transaction Processing the user does not get OS login privileges or SYSDBA privileges, so even if you want to do the maintenance tasks yourself, you cannot. It is like a car with the hood welded shut so you cannot change the oil or add coolant or perform any other maintenance yourself.

Many customers want to move to the cloud because of the elasticity it can offer. The ability to scale both in terms of compute and storage only when needed, allows people to truly pay per use. Autonomous Transaction Processing not only allows you to scale compute and storage resources, but it also allows you to do it independently online (no application downtime required).

Self-Securing

Autonomous Transaction Processing is also self-securing, as it protects itself from both external attacks and malicious internal users. Security patches are automatically applied every quarter. This is much sooner than most manually operated databases, narrowing an unnecessary window of vulnerability. Patching can also occur off-cycle if a zero-day exploit is discovered. Again, these patches are applied in a rolling fashion across the nodes of the cluster, avoiding application downtime.

But patching is just part of the picture. Autonomous Transaction Processing also protects itself with always-on encryption. This means data is encrypted at rest but also during any communication with the database. Customers control their own encryption keys to further improve security.

Autonomous Transaction Processing also secures itself from Oracle cloud administrators using Oracle Database Vault. Database Vault uniquely allows Oracle’s cloud administrators to do their jobs but prevents them from being able to see any customer data store in Autonomous Transaction Processing.

Finally, customers are not given access to either the operating system or the SYSDBA privilege to prevent security breaches from malicious internal users or from stolen administrator credentials via a phishing attack.

Self-Repairing

Autonomous Transaction Processing automatically recovers from any failures without downtime. The service is deployed on our Exadata cloud infrastructure, which has redundancy built-in at every level of the hardware configuration to protect against any server, storage, or network failures.

Autonomous Transaction Processing automatically backs up the database nightly and gives the ability to restore the database from any of the backups in the archive. It also has the ability to rewind data to a point in time in the past to back out any user errors using Oracle’s unique Flashback Database capabilities.

Since users don’t have access to the OS, Oracle is on the hook to diagnose any problems that may occur. Machine learning is used to detect and diagnose any anomalies. If the database detects an impending error, it gathers statistics and feeds them to AI diagnostics to determine the root cause. If it’s a known issue, the fix is quickly applied. If it’s a new issue a service request will be automatically opened with Oracle support.

How Does Autonomous Transaction Processing Differ from the Autonomous Data Warehouse?

Up until now, all of the functionality I have described is shared between both Autonomous Data Warehouse and Autonomous Transaction Processing. Where the two services differ is actually inside the database itself. Although both services use Oracle Database 18c, they have been optimized differently to support two very different but complementary workloads. The primary goal of the Autonomous Data Warehouse is to achieve fast complex analytics, while Autonomous Transaction Processing has been designed to efficiently execute a high volume of simple transactions.

Configuration

The differences in the two services begin with how we configure them. In Autonomous Data Warehouse, the majority of the memory is allocated to the PGA to allow parallel joins and complex aggregations to occur in-memory, rather than spilling to disk. While on Autonomous Transaction Processing, the majority of the memory is allocated to the SGA to ensure the critical working set can be cached to avoid IO.

Data Formats

We also store the data differently in each service. In the Autonomous Data Warehouse, data is stored in a columnar format as that’s the best format for analytics processing. While in Autonomous Transaction Processing, data is stored in a row format. The row format is ideal for transaction processing, as it allows quick access and updates to all of the columns in an individual record since all of the data for a given record is stored together in-memory and on-storage.

Statistics Gathering

Regardless of which type of autonomous database service you use, optimizer statistics will be automatically maintained. On the Autonomous Data Warehouse, statistics (including histograms) are automatically maintained as part of all bulk-load activities. With Autonomous Transaction Processing, data is added using more traditional insert statements, so statistics are automatically gathered when the volume of data changes significantly enough to make a difference to the statistics.

Query Optimization

Queries executed on the Autonomous Data Warehouse are automatically parallelized, as they tend to access large volumes of data in order to answer the business question. While indexes are used on Autonomous Transaction Processing to access only the specific rows of interest. We also use RDMA on Autonomous Transaction Processing to provide low response time direct access to data stored in-memory on other servers in the cluster.

Resource Management

Both Autonomous Data Warehouse and Autonomous Transaction Processing offer multiple database “services” to make it easy for users to control the priority and parallelism used by each session. The services predefine three priority levels: Low, Medium, and High. Users can just choose the best priority for each aspect of their workload. For each database service you have the ability to define the criteria of a runaway SQL statement. Any SQL statement that excesses these parameters either in terms of elapse time or IO will be automatically terminated. On Autonomous Data Warehouse only one service (LOW) automatically runs SQL statements serially. While on Autonomous Transaction Processing, only one service (PARALLEL) automatically runs SQL statements with parallel execution. You can also use the Medium priority service by default which allows the Low priority service to be used for requests such as reporting and batch to prevent them from interfering with mainstream transaction processing. The High priority level can be used for more important users or actions.

Can I Use Autonomous Transaction Processing to Develop New Applications?

Autonomous Transaction Processing is the ideal platform for new application development. Developers no longer have to wait on others to provision hardware, install software, and create a database for them. With Autonomous Transaction Processing, developers can easily deploy an Oracle database in a matter of minutes, without worrying about manual tuning or capacity planning.

Autonomous Transaction Processing also has the most advanced SQL and PL/SQL support, accelerating developer productivity by minimizing the amount of application code required to implement complex business logic. It also includes a complete set of integrated machine learning algorithms, simplifying the development of applications that make real-time predictions such as personalized shopping recommendations, customer churn estimates, and fraud detection.
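As a small illustration of what that looks like from an application, here is a sketch that pushes a piece of business logic into a single PL/SQL call instead of several round trips. The credentials, service alias, and the ORDERS table (with STATUS and ORDER_DATE columns) are assumptions made purely for the example.

```python
# Minimal sketch: run business logic as one PL/SQL block from Python.
# Credentials, the TNS alias, and the ORDERS table are illustrative placeholders.
import cx_Oracle

conn = cx_Oracle.connect("app_user", "YourPassword#1234", "mydb_medium")
cur = conn.cursor()

expired = cur.var(int)
cur.execute("""
    BEGIN
        UPDATE orders
           SET status = 'EXPIRED'
         WHERE status = 'PENDING'
           AND order_date < SYSDATE - :days;
        :expired := SQL%ROWCOUNT;
        COMMIT;
    END;""", days=30, expired=expired)
print("Orders expired:", expired.getvalue())
conn.close()
```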

Where Can I Get More Information and Get My Hands on Autonomous Transaction Processing?

The first place to visit is the Autonomous Transaction Processing Documentation. There you will find details on exactly what you can expect from the service.

We also have a great program that lets you get started with Oracle Cloud with $300 in free credits, which last much longer than you might expect because the trial service has very low pricing. Using your credits (which will probably last around 30 days, depending on how you configure Autonomous Transaction Processing), you will be able to get valuable hands-on time loading some of your own workloads. Below is a quick video to help you get started.


Autonomous Database: What Does That Mean for You?

We’re living in a new age of the database.

Oracle sparked the first revolution, with the relational database.

Now we’re back again with the second revolution. It’s truly, well, revolutionary. And we’re calling it the Autonomous Database.

Try Autonomous Transaction Processing—Sign up for the trial

Today we’re here to break it down for you, tell you what it is, and why we think you should care.

Why a Self-Driving Database Is the Future

It’s not just a revolution in databases that’s happening right now. We’re seeing the rise of the cloud, an explosion of data, and true promise for valuable machine learning. Everywhere you turn, you face challenges to the old way of doing things—and dazzling potential for the future.

Here’s what most companies are trying to do:

  • Transform to the modern cloud model
  • Ensure data safety
  • And do more with less

At Oracle, we argue that we’re uniquely positioned to help you through these challenges with our new kind of database, and here’s why.

Oracle has invested thousands of engineering years in automating and optimizing the database. We’ve introduced and matured many sophisticated automation capabilities, from memory management to workload monitoring and tuning, all of which are used in the Autonomous Database. We’ve arguably created an unmatched on-premises database, and that’s precisely what puts us in a unique position as we work to create the cloud’s best database to help you with all of your new goals.

Here’s our dream: an autonomous, self-driving database that will automatically take care of all database and infrastructure management, as well as monitoring and tuning.

In our vision for the future, this Autonomous Database will:

  • Reduce costs and improve productivity by automating the mundane tasks of provisioning, patching, and backing up databases
  • Free up IT teams to focus on tasks that will bring value to the business
  • Be self-securing to protect itself from both external attacks and any malicious internal users
  • Automatically encrypt all data whether it’s at rest or in flight and automatically apply security updates with no downtime
  • Be self-repairing, and automatically recover from any failure
  • Minimize all kinds of downtime, including planned maintenance

Wait a minute though—that’s not a vision for the future.

It’s what’s happening now.

Today, we’re announcing the launch of Autonomous Transaction Processing Cloud Service, which, together with the Autonomous Data Warehouse we released earlier this year, completely changes the way people look at and use a database.

We call this, “The Last Upgrade You’ll Ever Do.”

We are so proud of this. The Autonomous Database is the culmination of that database journey we started over 40 years ago. It brings full automation to every layer of database deployment, from optimization to operations to infrastructure.

In both Autonomous Transaction Processing and the Autonomous Data Warehouse, autonomous intelligence drives infrastructure and database operations for you, while also providing the ability to automatically tune internal database structures to optimize your application workload’s SQL as data changes over time.

We’ve made it easy, so easy.

You can create a highly available, mission-critical database deployment with the click of a button. Define your application schemas, load data using familiar tools, and get started processing your business transactions while leaving the mundane, time-consuming database operations to us.
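If you prefer automation to console buttons, the same provisioning step can be scripted. The sketch below uses the OCI Python SDK; the compartment OCID, names, and password are placeholders, and the exact model fields shown are assumptions you should verify against the SDK documentation for your version.

```python
# Minimal sketch: provision an Autonomous Transaction Processing database with the
# OCI Python SDK. All identifiers and the password below are placeholders, and the
# field names are assumptions to verify against your SDK version's documentation.
import oci

config = oci.config.from_file()          # reads ~/.oci/config
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",
    db_name="atpdemo",
    display_name="ATP demo",
    admin_password="StrongPassword#1234",
    cpu_core_count=1,
    data_storage_size_in_tbs=1,
    db_workload="OLTP",                  # "DW" for Autonomous Data Warehouse
)
response = db_client.create_autonomous_database(details)
print(response.data.lifecycle_state)     # e.g. PROVISIONING
```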

The Autonomous Database—Optimized by Workload

Right now, there are two workloads we’ve optimized for the Autonomous Database. Of course we don’t plan to stop here. But here’s what we have that’s currently available, ready for you to purchase.

Autonomous Transaction Processing is what we’ve launched today, and we’ve been waiting to show it to you. It’s a database we created for general-purpose transaction processing and mixed-workload applications. Think of it as your database for transactional, batch, IoT, and reporting workloads associated with those use cases.

We released the Autonomous Data Warehouse back in March. ADW is best for all analytic workloads. Think of it as your replacement for a data warehouse, data mart or data lake. It’s great for machine learning too.

Both Autonomous Transaction Processing and Autonomous Data Warehouse share the Autonomous Database platform of Oracle Database 18c on our Exadata Cloud infrastructure. The difference is how the services have been optimized within the database.

Here’s the breakdown:

How Can You Deploy the Autonomous Database?

So you know which flavor of the Autonomous Database you want. Or perhaps you know you want both. When it comes to deployment, today you can provision serverless Autonomous Data Warehouse or Autonomous Transaction Processing databases, where Oracle manages the underlying Exadata infrastructure for you.

Or, very soon, you’ll be able to deploy on dedicated Exadata Cloud Infrastructure for the highest isolation. The complete hardware stack is isolated from other tenants and will provide a unique, fully isolated cloud within the public cloud.

For those who are fans of Cloud at Customer, don’t worry. We plan to provide this option very soon.

Why Should You Want the Autonomous Database?

We could talk about the benefits of the Autonomous Database endlessly, and we will, in a future article. But here are the top benefits, boiled down as succinctly as we could make them.

The Autonomous Database enables:

  1. More IT innovation for less money
  2. More developer innovation for less
  3. Fewer security breaches
  4. High availability due to built-in redundancy
  5. Easy upgrade to cloud
  6. Guaranteed lower cost

But don’t just take our word for it:

Don’t forget to take a look at our reviews on Gartner Peer Insights:

“We have also started with the Oracle Autonomous Data Warehouse Cloud and that was exceptionally impressive from start to finish – very easy to use – uploading of data easy and fast and the compression reduced the amount of storage needed 4 times.”

See the full review.

The people who are using Autonomous Database every day, the ones who see how thoroughly it streamlines their work, are already talking about it, and what they’re saying is good.

How to Get Started With the Autonomous Database

We can’t wait to show you the Autonomous Database too. That’s why we’ve put together a free trial experience, so you can see whether the Autonomous Database is truly as much of a game changer as we say. (Hint: it is.)

Here’s what we’re giving you:

3338 hours, 2 TB of Exadata storage—what are you waiting for?

You’ll be able to:

  1. Provision an Autonomous Transaction Processing or Autonomous Data Warehouse instance
  2. Connect SQL Developer to the new database instance
  3. Load data files if needed and go! (See the loading sketch below.)
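For step 3, one common pattern is to stage a file in object storage and let the database pull it in with the DBMS_CLOUD package. Here is a minimal sketch in which the credential name, bucket URL, auth token, and target SALES table are all placeholders, and the target table must already exist with columns matching the file.

```python
# Minimal sketch: load a CSV file from object storage into an existing table using
# DBMS_CLOUD. Credential name, URLs, token, and the SALES table are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("admin", "YourPassword#1234", "mydb_high")
cur = conn.cursor()

# One-time step: store an auth token so the database can read from your bucket.
cur.execute("""
    BEGIN
        DBMS_CLOUD.CREATE_CREDENTIAL(
            credential_name => 'OBJ_STORE_CRED',
            username        => 'cloud_user@example.com',
            password        => 'auth-token-goes-here');
    END;""")

# Copy the file into the existing SALES table.
cur.execute("""
    BEGIN
        DBMS_CLOUD.COPY_DATA(
            table_name      => 'SALES',
            credential_name => 'OBJ_STORE_CRED',
            file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/mybucket/o/sales.csv',
            format          => '{"type" : "csv", "skipheaders" : "1"}');
    END;""")
conn.close()
```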

You have nothing to lose, right?

Experience a new kind of database with Oracle’s Autonomous Database. Sign up today.


What Does Data Science Need to Be Successful?

There are certain advances that have revolutionized the tech world: personal computing, mobile technology, and cloud computing are just some of them.

Now that we have the ability to store massive amounts of data in the cloud and then use it with advanced analytics, we can finally start working towards a machine learning future.

Download your free ebook, “Demystifying Machine Learning”


It’s time for data science to shine. Here are some stats:

Businesses are seeing the potential too. Data science can have great impact in:

  • Building and enhancing products and services
  • Enabling new and more efficient operations and processes
  • Creating new channels and business models

But unfortunately, for many businesses much of that is still in the future. Despite making big investments in data science teams, many are still not seeing the value they expected. Why?

Data scientists often face difficulty in working efficiently. There are lengthy waits for resources and data. There’s difficulty collaborating with teammates. And there can be long delays of days or weeks to deploy work.

The IT admins face issues too. They often feel a lot of pain because they’re responsible for supporting data science teams.

Developers have difficulty with access to usable machine learning. Business execs don’t see the full ROI. And there’s more.

A big part of the problem is that data science often happens in silos and isn’t well integrated with the rest of the enterprise. There’s a movement to bring technologies, data scientists, and the business together to make enterprise data science truly successful. But to do that, you need a full platform. Here are some questions to think about:

  • What does this platform need?
  • What defines success?
  • What do business execs need to be successful?

To tackle enterprise data science successfully, companies need a data science platform that addresses all of these issues. And that’s why Oracle is excited about our recent acquisition of DataScience.com.

DataScience.com creates one place for your data science tools, projects, and infrastructure. It organizes work, allowing users to easily access data and computing resources. It also enables users to execute end-to-end model development workflows.

Quite simply, it addresses the need to manage data science teams and projects while providing the flexibility to innovate.

What does this mean, exactly? It means you can now:

Make data science more self-service

  • Launch sessions instantly with self-service access to the compute, data, and packages you need to get to work quickly on any size analysis.

Collaborate more efficiently

  • Organize your work via a project-based UI and work together on end-to-end modeling workflows with all of your work backed up by Git.

Get more work done faster

  • Leverage the best of open source machine learning frameworks on a platform tightly integrated with high-performance Oracle Cloud Infrastructure.

Now Oracle can integrate big data and data science tools all in one place, with a single self-service interface that makes enterprise data science possible and opens up more possibilities than ever.

Companies are scrambling to make machine learning solutions work so they can realize their full potential, and with DataScience.com we’re many steps closer to that machine learning future we all keep hearing about.

If you have any other questions or you’d like to see our machine learning software, feel free to contact us.

You can also refer back to some of the articles we’ve created on machine learning best practices and challenges. Or, download your free ebook, “Demystifying Machine Learning.”
