How To Update The License Type For Your Autonomous Database

You can now quickly and easily change the license type for your Autonomous Database from BYOL to a new cloud subscription, or vice-versa. It’s as easy as 1-2-3. So assuming you have already created an autonomous database instance, how do you go about changing your licensing? Let me show you!

Step 1 – Accessing the management console

The first stage involves signing into your cloud account using your tenancy name and cloud account credentials, then navigating to the Autonomous Database console. Here you will see a list of the Autonomous Databases that have been provisioned. If the list is long, you can filter it by the state of the database using the Filters drop-down menu. You can also sort by workload type.

Let’s now change the type of license for the autonomous database instance “pp1atpsep2”. If we click on the blue text of the instance name which is in the first column of the table, this will take us to the instance management console page as shown below:

Notice in the above image that, on the right-hand side, the console shows the current license type set to “Bring Your Own License”, which is often referred to as a BYOL license.

Step 2 – Selecting “Update License Type” from the Actions menu

Now click on the “Actions” button in the row of menu buttons as shown below:

Select the menu option to change the type of license

Step 3 – Change the “License Type”

The pop-up form shows the current type of license associated with our autonomous database instance “pp1atpsep2”, which in this case is set to BYOL.

For more information about BYOL licensing, please visit the BYOL FAQ page here.

In this case, we are going to switch the license type to using a new cloud subscription, as shown below:

That’s it!

All that’s left to do is click on the blue Update button and the new licensing model will be applied to your Autonomous Database instance. At this point your Autonomous Database status will be in “Updating” mode, as shown below.

Autonomous Database mode switches to updating while changes are applied

However, the database is still up and accessible. There is no downtime. When the update is complete the status will return to “Available” and the console will show that the license type has changed to “License Included” as shown below.
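By the way, if you prefer scripting to clicking through the console, the same change can be made through the Oracle Cloud APIs. Below is a minimal sketch using the OCI Python SDK; it assumes the SDK is installed and configured (~/.oci/config), and the database OCID is a placeholder you would replace with your own:

import oci

# Reads credentials from ~/.oci/config by default
config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Switch the license model; use "BRING_YOUR_OWN_LICENSE" to go the other way
details = oci.database.models.UpdateAutonomousDatabaseDetails(
    license_model="LICENSE_INCLUDED"
)
db_client.update_autonomous_database(
    autonomous_database_id="ocid1.autonomousdatabase.oc1..example",  # placeholder OCID
    update_autonomous_database_details=details,
)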

Summary

Congratulations, you have successfully switched your Autonomous Database from a BYOL license to a new cloud subscription license with absolutely no downtime or impact on your users.

In this post I have shown you how to quickly and easily change the type of license associated with your autonomous database. An animation of the complete end-to-end process is shown below:

Featured neon “Change” image courtesy of Wikipedia


Installing and Connecting Anaconda on Windows to Autonomous Transaction Processing

Introduction

In the previous blog we provisioned and connected to an Autonomous Transaction Processing instance. Autonomous Transaction Processing supports a complex mix of high-performance transactions, reporting, batch, IoT, and machine learning in a single database, allowing much simpler application development and deployment and enabling real-time analytics, personalization, and fraud detection.

In this blog you will install the Oracle Client libraries, install the Microsoft Visual Studio redistributable, install Anaconda, and run a few simple commands in a Jupyter Notebook.

Step 1: Download the Oracle Instant Client

In order to connect and run applications from your PC to remote Oracle databases, such as Autonomous Transaction Processing, Oracle client libraries must be installed on your computer. Oracle Instant Client enables applications to connect to a local or remote Oracle Database for development and production deployment. The Instant Client libraries provide the necessary network connectivity, as well as basic and high-end data processing features, to make full use of any Oracle database. It underlies the Oracle APIs of popular languages and environments including Node.js, Python and PHP, as well as providing access for OCI, OCCI, JDBC, ODBC and Pro*C applications. Tools included in Instant Client, such as SQL*Plus and Oracle Data Pump, provide quick and convenient data access.

Let us start with Oracle Instant Client for Microsoft Windows (x64) 64-bit. You can find it here. If you happen to run another operating system, you can find the relevant Oracle Instant Client libraries here.

  • Accept License Agreement and select Basic Lite Package.

  • This will require signing into OTN with your SSO account. If you do not have an account, you will need to create one.

  • Download the file and then proceed to the directory where the file was downloaded. Unzip the file into a directory. Open Command Prompt and navigate to the directory.

  • Add this directory to your path in Windows:
    • In Search, search for and then select: Advanced Systems Settings (Control Panel)
    • Click Environment Variables at the bottom of screen
    • In the System Variables double click Path
    • In the screen that opens up select NEW
    • Add the full path to the Instant Client directory (C:\instantclient_18_5)

Step 2: Installing Microsoft Visual Studio Redistributable

  • Oracle Client libraries for Windows require the presence of the correct Visual Studio redistributable. Follow the link below to install:

https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads#bookmark-vs2013

  • Select the correct architecture

  • Double Click the downloaded file and proceed with the installation

  • This completes the installation of the prerequisites

Step 3: Installing Anaconda/Python/Jupyter

Anaconda/Jupyter is a popular IDE. It is very sensitive to other installed Python versions and the PATHs associated with previous installations on your computer. If you have other versions of Python installed, remove them, as the PATHs and projects associated with them may prevent this installation from working.

  • Download the software from www.anaconda.com/download
  • Select the Python 3.7 version download highlighted below, making sure you select the one for your architecture (32- or 64-bit)

  • Go to the folder where the file was downloaded and Double Click it. This brings up the Anaconda installation page, go ahead and Click Next.

  • Click I agree on the next screen

  • In the next screen Select Just me and Click Next

  • Install in the following directory: C:\Anaconda3. You must create the directory if it does not exist (the installer will not create it). Click Next

  • Make sure you select Register Anaconda as my default Python 3.7. Leave Add Anaconda to your PATH environment variable unselected. Click Install

  • The installation will take a few minutes. Once complete Click Next

  • You will get a prompt to install Microsoft VS Code. Skip this step.

  • Deselect both options in the next screen and Click Next.

  • You must add the new install directory to your PATH. Add C:\Anaconda3 and C:\Anaconda3\Scripts to your PATH:

In Windows 10:

  • In Search, search for and then select: Advanced System Settings (control panel)
  • Click Environment Variables at bottom of screen
  • In the System variables double click Path
  • In the screen that opens up select NEW
  • Add the full path to the Anaconda directory (C:\Anaconda3)
  • Add the full path to the Anaconda scripts directory (C:\Anaconda3\Scripts)

Hooray!!! Anaconda and Python are now installed.
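As a quick, optional sanity check (assuming the C:\Anaconda3 install directory used above), open a new Command Prompt so the updated PATH takes effect, start python, and confirm the interpreter is the Anaconda one:

import sys

print(sys.version)     # should report Python 3.7 from Anaconda
print(sys.executable)  # should point somewhere under C:\Anaconda3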

Step 4: Using Anaconda/Jupyter/Python with Autonomous Transaction Processing

Before running any Python apps that access the database, the correct packages must be loaded into the Python environment. Open a Command Prompt window and navigate to the directory where you installed Anaconda (C:\Anaconda3), then run the following commands in order. pip is a package management system used to install and manage software packages written in Python. We will use pip to install the packages:

pip install --upgrade pip

pip install keyring

pip install cx_oracle

pip install sql

pip install ipython-sql

pip install python-sql
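Optionally, you can sanity-check that cx_Oracle installed correctly and can locate the Instant Client libraries from Step 1. From the same Command Prompt, start python and run the following (a quick check, not part of the lab itself):

import cx_Oracle

print(cx_Oracle.version)          # version of the cx_Oracle driver
print(cx_Oracle.clientversion())  # Instant Client version; fails if the PATH from Step 1 is wrong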

  • To start Anaconda/Jupyter, go to the Windows Start menu and select Anaconda Navigator under Anaconda3. Once inside Anaconda Navigator, select Jupyter

  • A new browser page will open up running Jupyter. Select New and then Python 3, highlighted below:

  • A new Python Notebook will open up. Python is an interpreted language, so we must load the libraries we need every time an environment is started. Libraries are loaded with the import command; we will use three libraries. Run the following commands as shown below: copy the three lines below and paste them directly into the box next to the In[]: prompt, then select Run.

import cx_Oracle

import keyring

import os

  • Run a simple command to display your PATH. Run the following command (copy and paste into the box and select Run): print(os.environ["PATH"])

  • Now let us set the TNS_ADMIN variable. TNS_ADMIN is the location of the unzipped wallet files. Instructions on how to create a wallet can be found in the previous blog post. Below we set and then check the variable (the first command sets it, the second displays it back). Run the following commands (copy and paste into the box and select Run):

os.environ['TNS_ADMIN'] = 'c:\\wallets'

print(os.environ["TNS_ADMIN"])

  • Let’s make some external calls to the Autonomous Transaction Processing database. For that we need to load another library. Run the command below, which loads the library needed to call external SQL databases (ignore warning/error messages, and make sure to include the %):

%load_ext sql

  • Next let us connect to the Autonomous Transaction Processing database using a user name, password and service. Use your admin account and password created when the ATP database was created. The format of the command is:

%sql oracle+cx_oracle://user:password@service

Once connected you will get the message ‘Connected: admin@None’
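For instance, with the admin user and a database named demoatp (the password and database name below are placeholders for your own values), the command would look like:

%sql oracle+cx_oracle://admin:YourPassword@demoatp_high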

  • To run a query once connected, use the oracle+cx_oracle connection call followed by the SQL statement (notice there is no ; at the end of the statement). The SQL below is the same one we ran in previous labs; copy the statement below, paste it in the box, and click Run.

%sql oracle+cx_oracle://user:password@service

SELECT channel_desc,
       TO_CHAR(SUM(amount_sold),'9,999,999,999') SALES$,
       RANK() OVER (ORDER BY SUM(amount_sold)) AS default_rank,
       RANK() OVER (ORDER BY SUM(amount_sold) DESC NULLS LAST) AS custom_rank
FROM sh.sales, sh.products, sh.customers, sh.times, sh.channels, sh.countries
WHERE sales.prod_id = products.prod_id
  AND sales.cust_id = customers.cust_id
  AND customers.country_id = countries.country_id
  AND sales.time_id = times.time_id
  AND sales.channel_id = channels.channel_id
  AND times.calendar_month_desc IN ('2000-09', '2000-10')
  AND country_iso_code = 'US'
GROUP BY channel_desc

Awesome. Now you are connected to Autonomous Transaction Processing using Anaconda.
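As an aside, once the wallet and TNS_ADMIN are set up as above, you can also connect with the cx_Oracle API directly, without the %sql magic. A minimal sketch (the credentials and service name are placeholders for your own values):

import os
import cx_Oracle

os.environ['TNS_ADMIN'] = 'c:\\wallets'  # unzipped wallet directory, as set earlier

connection = cx_Oracle.connect('admin', 'YourPassword', 'demoatp_high')  # placeholders
cursor = connection.cursor()
cursor.execute("SELECT sysdate FROM dual")
print(cursor.fetchone())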

Written by Philip Li & Sai Valluri


Oracle DBA: Job, Career, or Profession?

Today we have guest blogger – Jim Czuprynski – Oracle ACE Director.

I recently returned from COLLABORATE19 in San Antonio, TX where I presented on numerous Oracle technology topics including APEX, Oracle Database 18c and 19c, and Oracle Autonomous Database. But the presentation I enjoyed giving most was named An Airline Pilot, A Urologist, and a DBA Walk Into a Bar: Thinking Like a Professional. It gave me an opportunity to translate nearly four decades of experience in Information Technology, consulting, writing, and public speaking into some practical advice on what it takes to be successful in our industry.

The Future Will Be Amazing. If We Get It Right.

One of my favorite movies is Tomorrowland. Casey, the optimistic hero of the story, likes to say that “There are these two wolves: one represents light and hope, and the other symbolizes darkness and despair. Which one wins?”

As I talked about the massive challenges our world faces today – climate change, globalization and ever-increasing technological changes – we’re definitely standing in between those two wolves every day. In his book Thank You For Being Late, Thomas Friedman suggests there are three “M’s” that will shape our civilization’s future: Moore’s Law, the Marketplace, and Mother Nature … who, by the way, always bats last, and she always bats 1.000.

Yet there is immense promise in technology too. The ITER project in France is quite close to solving our future energy needs via hydrogen fusion in the next few years. The Internet of Things and Big Data offer us a deeper view into how people interact with technology and each other than we’ve ever had. Alternative energy sources like wind and solar power are cheaper than ever, and appear to be on track to transform our reliance on fossil fuels.

As IT professionals, we are among those who are providing a lever with which to move the world into this new paradigm. It’s important that we embrace our responsibility to human civilization to empower these changes because we’re not just doers – we’re also dreamers and thinkers.

Urologists Are the Happiest Surgeons.

During outpatient surgery last summer, I asked my anesthesiologist who were the happiest surgeons. She immediately replied, “Urologists.” I must’ve looked puzzled, because she said, “Think about what they’re working on; they have to maintain a great sense of humor. And they better get it right the first time! But when the outcome is right, the patient is thrilled and thankful.”

She then explained that the least happy are vascular surgeons, because they are constantly repairing tiny blood vessels in patients’ bodies that have been damaged through years of abuse – usually smoking and Type II diabetes – and they already know they’ll be seeing those same patients again in a matter of months, because less than 10% of patients change their lifestyles after the surgery to take advantage of the hard work their surgeons have done.

Her observations struck me as a description of the difficulties most Oracle DBAs face daily. Just like professional surgeons, we’re often forced to pick up the pieces when our development team hasn’t deployed well-written code, but what keeps us going is that perfect outcome, even if our “patient” – the end user – never calls back to tell us that her report is now running 10X faster than it was before.

Aviate. Navigate. Communicate.

I’ve been fascinated by modern aviation my whole life, and I enjoy flying immensely. My cousin is a commercial airline pilot who is also rated to provide check rides for other Boeing 737 pilots. He loves to tell stories about what happens when things go dramatically wrong during a flight, especially during takeoff. He recounted that during one dramatic near-failure, “So then we reached for the emergency checklist with all the critical steps printed in red.” When I asked why the red lettering, he looked at me sternly and said, “Because anyone who skipped those steps crashed.”

My cousin constantly spoke of the Pilot’s Mantra: Aviate, Navigate, Communicate. In other words, the pilot’s first job is to fly the airplane; all other concerns – including where it’s headed, or whatever the tower or a flight attendant is saying in his headset – are secondary or tertiary to keeping the plane airborne, in trim, and headed for an eventual safe landing.

Much like an airline pilot, a professional Oracle DBA is expected to keep every database running all the time and performing within expected norms, and – most importantly – to never lose one byte of data. Therefore, it’s crucial for us to take check rides when required, especially when we’re learning to fly a new aircraft like Oracle Database 18c or 19c. And it’s our responsibility to know not only how the database works, but why it works the way it does. Otherwise, we’re likely to make a crucial mistake when something unexpected occurs and we mistakenly diverge from the checklist’s lines written in red.

A Professional … Professions.

From my perspective, the best DBAs I’ve known treat their role as what it truly should be: neither a job nor a career, but a profession. That requires an almost tireless devotion to our craft, a constant inquisitiveness as to how a new feature might solve a business problem, and a realization that our role continues to expand as exponentially as the quantity and diversity of data have.

We may not be called database administrator in the future – frankly, I believe the title Enterprise Data Architect (EDA) is a better name for what we will be doing – but either way, the key to becoming the best DBA / EDA we can be is to insist on challenging ourselves to constantly expand our knowledge base and experience, even if that requires after-hours experimentation on our own dime.

Most importantly, we need to focus on what our future customer base really needs most. I believe we need to recognize that base will be our organizations’ application developers and business units, so we’ll also have to come up to speed on how we can best help our developers achieve maximum productivity throughout the application development process. That means we need to develop an understanding of how technology like Application Express (APEX) can enable rapid application development while providing security and code reusability.

A Two-Way Street, Mentorship Is.

So how do we expand the circle of excellence that our profession demands? I suggest that we take inspiration from the most recent episode of the Star Wars movie series. (Spoiler alert.)

The mentorship that Luke Skywalker eventually agrees to grant to Rey as his acolyte takes time to develop. Rey has all sorts of assumptions about her power, which Luke punctures scornfully at the start, but over time they grow to respect one another, realizing that each other’s faults and frailties actually make them stronger than they could possibly imagine. And just in case you are wondering when the sacred responsibility of mentorship ends, recall what Yoda says to Luke with a deep laugh near the end of Episode 8: “Missed you I have, young Skywalker.” In other words, it never ends. Even when you’re a Force Ghost.

Oh, BTW: Which Wolf Wins?

And finally, in case you haven’t seen Tomorrowland and are still curious about which wolf wins, it’s quite simple: The one you feed. It’s time to turn our efforts as IT professionals towards envisioning a future brighter than anything we can possibly imagine.


Provisioning and Connecting to Autonomous Transaction Processing

Introduction

Autonomous Transaction Processing is built on the self-driving, self-securing, and self-repairing Oracle Autonomous Database that uses machine learning to automate database specific features and deliver outstanding performance. This environment is delivered as a fully managed cloud service running on optimized Oracle Exadata Infrastructure.

This blog walks you through the steps to get started using the Oracle Autonomous Transaction Processing Database. You will learn how to sign in to Oracle Cloud, provision a new database, and connect to it using SQL Developer. All in a matter of minutes, without any specialized skills.

Get Access to Autonomous Transaction Processing

After your trial is created you will receive a Welcome email with console details and a temporary password to access your account. You will have the option of choosing the REGION inside the Cloud Console.

For this lab, if you are in North America, choose the Phoenix data center. For EMEA and APAC regions, please choose the data center in Frankfurt.

Download and Setup the Required Software

You will need SQL Developer installed on your computer. The minimum SQL Developer version that is required to connect to an Oracle Autonomous Transaction Processing database is SQL Developer 17.4.

Windows 64-bit

Install SQL Developer 18.1 using “Windows 64-bit with JDK 8 included” from this link, http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html

Other platforms

  1. Install JDK 8u161 from this link, http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
  2. Install SQL Developer 18.1 for your platform, from this link, http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html

STEP 1: Sign in to Oracle Cloud Infrastructure console

  • Go to cloud.oracle.com, click Sign In to sign in with your Oracle Cloud account.

  • Enter your Cloud Account Name and click My Services.

  • Enter your Oracle Cloud username and password, and click Sign In.

  • Once you are logged in, you are taken to the cloud services dashboard where you can see all the services available to you.

Step 2: Create an Autonomous Transaction Processing instance

  • Click Autonomous Transaction Processing in the left side menu under services

  • Select Region at the top right corner and click on Create Autonomous Database

  • This will bring up Create Autonomous Database screen where you specify the configurations of the instance
    • Select Autonomous Transaction Processing
    • Choose a Display Name, Database Name, CPU, Core Count and Storage

  • Scroll down further to Administrator Credentials. Create a password and click on Create Autonomous Database

  • It will take a few minutes to provision the Autonomous Transaction Processing Database

  • You can view the Autonomous Transaction Processing Database that you created in your compartment along with your other databases.

Step 3: Download the secure connection wallet for your provisioned instance

  • Click on the Autonomous Transaction Processing database that you just created
  • Click on DB Connection

  • This opens up the Database Connection pop-up window. Click on Download to download the client credentials file

  • Choose a Password and download the client credentials file

  • The client credentials file will be a .zip file. The client credentials zip file contains the encryption wallet, Java keystore and other relevant files to make a secure TLS 1.2 connection to your database from client applications. Store this file in a secure location, for example your home directory on your machine.

Step 4: Connect to Autonomous Transaction Processing instance using SQL Developer

  • Launch SQL Developer and add a connection by clicking on the little green cross at the top left corner. A pop-up will open where you create the database connection.

Enter the following in New database connection

Connection Name: Name for your connection

Username: admin

Password: The password that you created earlier

Connection Type: Cloud Wallet

Role: Default

Configuration File: Click on Browse and select the wallet file you downloaded

Service: ‘databasename_high’ – the database name followed by the suffix low, medium or high. These suffixes determine the degree of parallelism used and are most relevant for a DSS workload.
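To make these suffixes concrete: the wallet’s tnsnames.ora contains one alias per service level (databasename_low, databasename_medium, databasename_high). An abbreviated, illustrative entry is shown below; real entries also include the full generated service name and the server certificate DN:

databasename_high = (description=
  (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=…)))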

  • Test your connection and save. The Status bar at the bottom left will show Success if it is a successful connection.
  • Click on Connect.

Hooray!!! You now have a secure connection to your Oracle Autonomous Transaction Processing database.

In the last few minutes you signed into your Oracle Cloud trial account, created your first Autonomous Transaction Processing Cloud instance, downloaded a wallet, and securely connected to your database.

For more detailed instructions on how to create a compartment you can visit the Oracle Learning Library page here

Note: Oracle Cloud Infrastructure allows logical isolation of users within a tenant through Compartments. This allows multiple users and business units to share a tenant account while being isolated from each other.

If you have chosen a compartment you do not have privileges on, you will not be able to see or provision instances in it.

More information about Compartments and Policies is provided in the OCI Identity and Access Management documentation here.

Written by Philip Li & Sai Valluri


See How Easily You Can Make a Database Clone

There are lots of times when, as a DBA, you are going to get asked to make a copy of an existing database. Until now, creating these non-production environments has been a challenging and time-consuming process for the technical teams, especially the data warehouse DBAs! What everyone has been waiting for is a way to just “right-click” and deploy an exact copy of an existing data warehouse instance (you can also do this programmatically using the Cloud command line APIs, but that’s not the focus of this post). Well, the wait is over….

The new cloning feature of autonomous data warehouse comes to the rescue….in a few mouse clicks it is possible to make exact copies of your data warehouse, either including or excluding the actual data depending on your precise needs. Let’s see how this works…

First step is to log in to your cloud console and go to your Autonomous Database overview screen. In the screen below we have set the “Workload” type filter to only show our Autonomous Data Warehouse instances…

ADB Landing Page with list of ADB instances

As you can see we have an existing application database called “nodeappDB”. Let’s assume that we have had a request to create a new training instance of this environment so we can run a training event for some key business users. To do this we will use the new cloning feature to make a copy of our existing “nodeappDB” instance. Here we go…

Click on the three little vertical dots on the right-hand side. In the pop-up menu there is a new menu option, “Create Clone”, as shown here:

Selecting the pop-up menu from the list of ADB instances

which gives us access to the “Create Clone” feature on the pop-up menu…

Selecting the create clone menu option

Step 1) Now up pops our familiar “Create Autonomous Database” form, except this time it says “Create Autonomous Database Clone”, and the first decision we have to make is the type of clone that we want to create! Fortunately, there are two simple options:

Select the type of clone to create

Full Clone – this creates a new data warehouse instance complete with all our data and metadata (i.e. the definition of all the database objects such as tables, views etc).

Metadata Clone – this creates a new data warehouse that contains only our source data warehouse’s metadata without the data (i.e. the new autonomous database instance will only contain the definitions of our existing database objects such as tables, views etc).

Since we need to create a training environment, the obvious choice is a “Full Clone”, because the business users will get more from their training workshop if our instance contains a realistic data set. If we were creating a new development-type or testing-type instance, then a metadata-only clone would probably be sufficient. So with that done, let’s move to…

Step 2) If we need to, we can change the compartment (if you have no idea what a compartment is, there is more information about compartments and how to use them here). For example, you could have a specific compartment set up for “training” which contains all your Autonomous Database training instances. You can think of compartments as a way of organising and grouping your autonomous database instances. In this example let’s put this new “clone” in the same compartment (LABS) as our existing “nodeappDB” instance:

Setting the compartment for the new cloned instance

…with that done, now let’s move on to….

Step 3) Now we can set the Display Name and Database Name for our new instance, as shown below. As of today, a clone does not keep any relationship to its source instance, so it might be a good idea to adopt some sort of naming convention to identify development vs. testing vs. training instances, just to make your life easier in the long run!

Setting the display and database names

Step 4) The next step is to set the CPU and storage resources for our new “clone”, i.e. the number of cores and the amount of storage. Note that if you specified a “Full Clone” in Step 1, then obviously the minimum storage that you can specify here is the actual space used (rounded to the next TB) by your “source” database instance. However, the great thing here is that you can set the resources you need specifically for your clone. In this case our source instance, “nodeappDB”, was configured with 16 OCPUs, but as we are creating a “training” instance we can allocate fewer resources, i.e. let’s just go with 4 OCPUs….

Set the CPU and storage requirements

Step 5) Next we need to set a new administrator password for our cloned database. All the usual password requirements apply to ensure our new instance remains safe and secure. A quick refresher if you are unsure of the rules – the database checks for the following requirements when you create or modify passwords:

  • The password must be between 12 and 30 characters long and must include at least one uppercase letter, one lowercase letter, and one numeric character. (Note: the password limit is shown as 60 characters in some help tooltip popups; limit passwords to a maximum of 30 characters.)
  • The password cannot contain the username.
  • The password cannot be one of the last four passwords used for the same username.
  • The password cannot contain the double quote (“) character.
  • The password must not be the same as a password set less than 24 hours ago.

which brings us to the final step…

Step 6) The final step is to set the type of license we want to use with our new autonomous database. The options are the same as when you create a completely new autonomous database:

  • Bring your existing database software licenses (see here for more details).
  • Subscribe to new database software licenses and the database cloud service.

Select the type of license

That’s it, we are all done! All we have to do is click the big blue “Create Autonomous Database Clone” button at the bottom of the form to start the provisioning process, at which point the create-form will disappear and the following page will be displayed…

New clone in provisioning state

….and in a couple of minutes our new autonomous database will be ready for use.

New autonomous database is now ready for use

So with a few mouse-clicks we have deployed an exact copy of our production “nodeappDB” autonomous database, ready for our training workshop with our business users.
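And, as mentioned at the top of this post, you can do all of the above programmatically via the Cloud APIs. Below is a minimal sketch using the OCI Python SDK; all the OCIDs, names, and the password are placeholders you would replace with your own:

import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
db_client = oci.database.DatabaseClient(config)

clone_details = oci.database.models.CreateAutonomousDatabaseCloneDetails(
    source="DATABASE",
    source_id="ocid1.autonomousdatabase.oc1..sourceexample",  # the instance to clone
    clone_type="FULL",                  # or "METADATA" for a metadata-only clone
    compartment_id="ocid1.compartment.oc1..example",
    db_name="nodeappDBtrain",
    display_name="nodeappDB training clone",
    cpu_core_count=4,
    data_storage_size_in_tbs=1,
    admin_password="ALongPassword#1234",
)
db_client.create_autonomous_database(clone_details)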

Don’t forget that you can stop this newly created training instance until you are ready to run the training workshop with your business users. When the time arrives, it’s a quick click on the “Start” button on the management console and you are up and running in a couple of minutes.

If the above screenshots are a little too fuzzy then you can download a PDF containing all the steps here.

But we are not quite finished if you are a DBA reading this blog post because…

If you are a technical user, a data warehouse DBA, or a Cloud DBA, then there are a couple of additional areas you will want to consider after creating your newly cloned data warehouse:

  • What about all the optimizer statistics from my original data warehouse instance?
  • What about the resource management rules within my newly cloned instance?

What about the Optimizer Statistics for your cloned data warehouse?

Essentially, it doesn’t matter which type of clone you decide to create (Full Clone or Metadata Clone): the optimizer statistics are copied from the source data warehouse to your newly cloned data warehouse. For a Full Clone, where all the data from your source data warehouse is copied to your newly cloned instance, you are ready to roll straight away! With a Metadata Clone, the first data load into a table will cause the optimizer to update the statistics based on the newly loaded data.

What about our resource management rules?

During the cloning process for your new data warehouse instance (Full Clone and Metadata Clone), any resource management rules in the source data warehouse that have been changed by the cloud DBA/administrator will be carried over to our newly cloned data warehouse. For more information on setting resource management rules, see Manage Runaway SQL Statements on Autonomous Data Warehouse.

So, what happens next?

Now that your newly cloned data warehouse instance is available you are ready to start connecting your data warehouse and business intelligence tools. If you are new to connecting different tools to autonomous data warehouse then take a look at our guide to Connecting to Autonomous Database.

Click here for more information on Autonomous Database.


Six Retail Dashboards for Data Visualizations

Retail is rapidly transforming. Consumers expect an omni-channel experience that empowers them to shop from anywhere in the world. To capture this global demand, retailers have developed ecommerce platforms to complement their traditional brick-and-mortar stores. Ecommerce is truly revolutionizing the way retailers can collect data about the customer journey and identify key buying behaviors.

As described in Beyond Marketing by Deloitte, analysts are taking advantage of “greater volumes of diverse data—in an environment that a company controls—mak[ing] it possible to develop a deeper understanding of customers and individual preferences and behaviors.” Savvy retailers will leverage the influx of data to generate insights on how to further innovate products to the tastes and preferences of their target audience. But how can they do this?

Try a Data Warehouse to Improve Your Analytics Capabilities

Retailers are looking to leverage data to find innovative answers to their questions. While analysts are looking for those data insights, leadership wants immediate insights delivered in a clear and concise dashboard to understand the business. Oracle Autonomous Database delivers analytical insights, allowing retail analysts to immediately visualize their global operations on the fly with no specialized skills. In this blog, we’ll create sales dashboards for a global retailer to help them better understand and guide the business with data.

Analyzing Retail Dashboards

Retail analysts can create dashboards to track KPIs including sales, inventory turnover, return rates, cost of customer acquisition, and retention. These dashboards make it easy to monitor KPIs with understandable graphics that can be shared with executive management to drive business decisions.

In this blog, we focus on retail dashboards that break down sales and revenue by:

  • Product
  • Region
  • Customer segment

In each dashboard, we will identify and isolate areas of interest. With the introduction of more data, you can continuously update the dashboard and create data-driven insights to guide the business.

Understanding the Retail Data Set

We used modified sales and profit data from a global retailer to simulate the data that retail analysts can incorporate into their own sales dashboards. In our Sample Order Lines Excel file (shown below), we track data elements such as Order ID, Customer ID, Customer Segment, Product Category, Order Discount, Profit, and Order Tracking.

Data Visualization Desktop, a tool that comes free with ADW, allows users to continuously update their dashboards by easily uploading each month’s sales data. By introducing more data, we can understand how the business is changing over time and adapt accordingly.

For more on how to continuously update your dashboard please see: Loading Data into Autonomous Data Warehouse Using Oracle Data Visualization Desktop

We looked at the following questions:

  1. What is the current overview of sales and profit broken down by product?
  2. Which regional offices have the best sales performance?
  3. Which geographic regions are hotbeds of activity?
  4. How are different products and regions linked together by sales?
  5. Which market segments are the most profitable?
  6. Which specific products are driving profitability?

Here is the view of the data in Excel:

Here’s a quick snapshot of the data loaded into Data Visualization Desktop:

What is the current overview of sales and profit broken down by product?

This is a sales dashboard summary which shows the overall revenue and profit from different product segments. Here are some quick insights:

  • Using tiles (top left), we see: $1.3M profit out of $8.5M total sales making a 15.3% profit margin.
  • We use pie charts to break out total sales and profit by product category (top right). Technology products not only contribute the most to sales of any single product category (40.88%) but also make up the most profit of any single category (56.15%), meaning that technology products are the highest grossing product line. Under the pie charts is a pivot table showing the actual figures.
  • A line graph shows that every product category has been growing, with technology products growing the fastest (bottom left). There was a spike in technology sales that started in August and peaked in November 2018.
  • Sales are broken down by both product and customer segment so that we can understand more about the buying habits for different customers (bottom right).

For an even more detailed segment analysis, we also broke out the corporate customer segment (below) to compare with the overall business (above).

Which regional offices have the best sales performance?

In this visualization, we’re looking at the performance of different regional offices and how they’re collectively trending. We overlaid a horizontal stacked graph and a donut chart on a scatterplot. Using a dot to represent each city, the scatterplot analysis compares profit (x-axis) vs sales (y-axis) using larger dots to represent larger customer populations in each city.

For example, the dot in yellow (far right) represents São Paulo, Brazil, with 127 customers generating $200,193 in sales and $44,169 in profit. As the most profitable city, São Paulo has a profit margin of 22%, averaging total purchases of $1,576 per customer. On the scatterplot, cities that make at least $10,000 of profit are indicated left of the dotted line.

The horizontal stacked graph (top left) breaks down sales by continent so you can see which regions are leading in sales. The donut chart (bottom right) shows the total amount of sales from all the regions ($9M) and shows each region as a percent. Here are the leading regions by sales:

  • America (38.64%)
  • Europe (28.81%)
  • Asia (18.05%)

To learn more, we use the “keep selected” option to dynamically look at a specific region like Europe (shown below). We can see that Europe accounts for just under $2.5M in sales, with the largest portion coming from Northern Europe. The scatterplot also dynamically changes to only show cities in Europe. Now you can identify that the most profitable European city is Belfast ($27,729) and the city with the most sales is St. Petersburg, Russia ($127,521). This allows us to identify and replicate the success of offices like Belfast and St. Petersburg in the other regions as well.

Which geographic regions are hotbeds of activity?

Analysts need to identify which markets to immediately focus on. Using a heat map, we can see which regions have the most sales (shown in color) and regions without sales (gray). This particular global retailer’s sales are primarily in developed markets:

1. America ($1.5M+)

2. United Kingdom ($887K)

3. Australia ($695K)

We can investigate further to pinpoint the exact cities (below) in the UK. We can see that the sales are originating from multiple cities including:
  • Belfast
  • Leeds
  • Manchester
  • Sheffield
Using a heat map can not only help identify how easily customers can access storefront locations but also show where to expand operations based on demand.

How are different products and regions linked together by sales?

It’s often hard to see how different factors like sales, product, and geography are interrelated. Using a network map, we see how product categories (technology, furniture, office supplies) are linked to continents, which are sublinked to countries. The thickness of the line connecting one node on the network to another is based on sales, and deeper shades of green represent more profit. We hover over the line connecting Africa to Southern Africa (above) to see the total sales ($242K) and profit ($34K) from Southern Africa.

Another way to focus on a specific region is to hover over a specific node and use the “keep selected” option (below). In this example, we only identify nodes linked to Europe. By doing this, we can see that a majority of the sales and profits from Europe are coming from technology products ($1,030K sales, $213K profit) and originating from Northern Europe ($974K sales, $162K profit), specifically the UK ($880K sales, $162K profit). Analysts can identify the regional sources of sales/profit while seeing a macro view of how products and regions are linked.

Which market segments are the most profitable?

It’s critical to understand which customer groups are growing the fastest and generating the most sales and profit. We use a stacked bar (left) and scatterplot (right) to break down profitability by market segment in FY18. We categorize buyer types into:

  • Consumer
  • Corporate
  • Home office
  • Small business

In the stacked bar, we can see that sales have been growing from Q2 to Q4, but the primary market segments driving sales growth are corporate (61% growth since Q1) and small business (53% growth since Q1). The combined growth of the corporate and small business segments led to a $191K increase in sales since Q1. Although these two segments made up over 63% of total sales in FY18Q4, we can also see that sales from the home office segment more than doubled from FY18Q3 to FY18Q4.

In the scatterplot (right), we can see the changes in profit ratio of each market segment over time. The profit ratio formula divides net profit for a reporting period by net sales for the same period; for example, the overall business above made $1.3M of profit on $8.5M of sales, a profit ratio of 15.3%. The fastest growing and most profitable market segments in FY18 (top right quadrant) are:

  • Corporate
  • Small business

We can also isolate the profitability of the corporate customer segment (below). By generating insights about the target market segments, companies are able to focus their product development and marketing efforts.

Which specific products are driving profitability?

Retailers are often managing a portfolio of hundreds, if not thousands, of products. This complexity makes it challenging to track and identify the profitability of individual products. However, we can easily visualize how profitability has changed over time and compare it to specific products. We use a combo graph (top left) to indicate changes to sales and profit ratios over time.

Generally, we can see that every year sales (and profits) increase from Q1 to Q4 then drop off with the start of the next Q1. We use a waterfall graph to track how profits have gradually changed over time (bottom left). From 2013 to the end of 2018, there was a net gain of $167K in profit.

Analysts identify high-performing products to expand and unprofitable products to cut. On the right, we track sales and profit ratios for individual products. We can see that the products that generate the most sales are:

  1. Telephones/communication tools ($1,380K)
  2. Office machines ($1,077K)
  3. Chairs ($1,046K)

The products with the highest profit ratio are:

  1. Binders (35 percent)
  2. Envelopes (32.4 percent)
  3. Labels (31.6 percent)
This means that for every binder sold, 35 percent of the sale was pure profit. We also found that products such as bookcases (-5.2 percent), tablets (-5.3 percent), and scissors/rulers (-8.3 percent) had negative profit ratios, which means there was a loss on each sale. We can also isolate the sales performance of the top five products (below).

Summary

Data visualization dashboards powered by Autonomous Data Warehouse allow major global retailers to easily understand the state of their business and make judgments on how to adapt to dynamic market environments.

Oracle Autonomous Database allows users to easily create secure data marts in the cloud and generate powerful business insights – without specialized skills. It took us fewer than five minutes to provision a database and upload data for analysis.

Now you can also leverage the Autonomous Data Warehouse through a cloud trial:

Sign up for your free Autonomous Data Warehouse trial today

Please visit the blogs below for a step-by-step guide on how to start your free cloud trial: upload your data into OCI Object Store, create an Object Store Authentication Token, create a Database Credential for the user, and load data using the Data Import Wizard in SQL Developer:

Feedback and questions are welcome. Tell us about the dashboards you’ve created!


Oracle Move – Successfully Migrate Your Data to the Oracle Cloud

The Oracle Database is enterprise-proven—it supports all types of workloads and various deployments as well as platforms on-premises and in the cloud. The question is, how can you migrate your Oracle Database into the cloud successfully, easily, and with virtually no downtime?

The answer to this question is Oracle Move: www.oracle.com/goto/move

Oracle Move provides the information and tools you need to determine the best strategy and methods to migrate your on-premises database to the Oracle Cloud and then helps you execute on the plan.

The Oracle Database in the Oracle Cloud Infrastructure (OCI) is available in several services and configurations, each one tailored to your business needs. Oracle Move therefore not only supports a variety of source database configurations, but also considers the managed Oracle Cloud database services you can choose from, so you are provided with the most efficient path into the Oracle Cloud. For example, Oracle Move supports the migration of an Oracle Database into the Exadata Cloud Service, Exadata Cloud at Customer, and of course the Oracle Autonomous Data Warehouse and Oracle Autonomous Transaction Processing services. In other words, Oracle Move is your one-stop solution.

Moving your databases to the Oracle Cloud starts from where you are today, taking into consideration your source database, whether it’s an Oracle Database or another database hosted on a 3rd-party cloud. Oracle Move offers a simple advisor that recommends the optimal migration solution based on your selection of the target and source database deployments.

Oracle Move offers simplicity and efficiency. Oracle’s automated tools make it seamless to move your database to the Oracle Cloud with virtually no downtime. Because the same technology and standards are used on-premises and in the Oracle Cloud, you can apply the same products and skills to manage your cloud-based Oracle Databases as you would on any other platform.

Oracle Move is flexible. You can directly migrate your Oracle Database to the Oracle Cloud from various source databases into different target cloud deployments depending on your requirements and business needs. Oracle Move provides a well-defined set of tools, giving you the flexibility to choose the method that best applies to your needs.

Oracle Move is cost effective. The same flexibility that lets you directly migrate your Oracle Database to the Oracle Cloud is applied to finding the most cost effective solution for the purpose and duration of the migration.

Oracle Move is highly available and scalable. The tight integration of all migration tools with the Oracle Database lets you maintain control and gain better efficiency when moving your databases into the Oracle Cloud, while the Maximum Availability Architecture (MAA)-approved tools, as well as Oracle’s soon-to-be-released Zero Downtime Migration (ZDM)-based migration, ensure that your migration is handled as smoothly as possible. https://www.oracle.com/database/technologies/rac/zdm.html

The Oracle Zero Downtime Migration tool follows a simple single-button approach, is fully MAA-compliant, and scales to your fleet needs. Oracle ZDM leverages Oracle Data Guard to provide an automated migration of your on-premises database to the Oracle Cloud; all of this with a five-step process that analyzes both the source and target, prepares both databases, migrates your data, provides monitoring, and then performs a controlled switchover of the application to your newly migrated database in the Oracle Cloud.

Oracle Move provides all the resources for Migrating OVEr to the Oracle Cloud. For more information, visit www.oracle.com/goto/move to find your best path to the Oracle Cloud.


Loading Data into Autonomous Data Warehouse Using Data Visualization Desktop

Depending on the size and volume of the data and your skill set, there are a number of ways to get data into Autonomous Data Warehouse. In this previous post we outlined the steps using SQL Developer and the use of object storage for very large volumes of data.

Try a Data Warehouse to Improve Your Visualization Capabilities

In this post, we provide another simple and easy choice within just one tool, Data Visualization Desktop, to upload data from a spreadsheet and immediately analyze the data.

Set Up Local Data Visualization Desktop Environment in Windows

STEP 1: Installing Oracle Data Visualization Desktop on a Windows Desktop

  • Download the latest version of Oracle Data Visualization Desktop (DVD) from here.
    • After saving the installer .ZIP to your desktop, unzip the file and click on the DVDesktop.exe installer to follow the guided steps.

Data Visualization Desktop Installer

Data Visualization Desktop Installer

STEP 2: Securing Your Client Connection to Autonomous Data Warehouse

You want to secure your data from the desktop all the way from the client application to the server where your data is stored. You can now store password credentials for connecting to databases in a client-side Oracle wallet, a secure software container used to store authentication and signing credentials. This wallet usage can simplify large-scale deployments that rely on password credentials for connecting to databases. When this feature is configured, application code, batch jobs, and scripts no longer need embedded user names and passwords. This reduces risk because such passwords are no longer exposed, and password management policies are more easily enforced without changing application code whenever user names or passwords change.

Download the connection wallet as shown in this lab. Go to the directory where you saved your Connection Wallet file and unzip it.

Connection Wallet

  • You will need the following two files to create the secure connection.
    • cwallet.sso
    • tnsnames.ora

Create a Connection to Your Autonomous Data Warehouse from Data Visualization Desktop

STEP 3: Create Connection

  • Start Oracle Data Visualization Desktop. When Oracle Data Visualization Desktop opens, click on the ‘Create’ button and ‘Connection’.

Getting Started With Data Visualization Desktop

  • In the Create Connection Dialog, select the highlighted option for ‘Oracle Autonomous Data Warehouse’ and progress through the wizard.

Create Connection for Autonomous Data Warehouse

  • Go back to the directory where you saved and extracted your wallet file, and open ‘tnsnames.ora’. Search for the wallet connection information (in our example “adwfinance_high”) that you will use to connect.

Enter the following in the Create Connection dialog:

Connection Name: Type in ‘SALES_HISTORY’

Host: (Copy from tnsnames.ora) e.g. adb.us-phoenix-1.oraclecloud.com

Port: (Copy from tnsnames.ora) e.g. 1522

Client Credentials: Click ‘Select’ and select the file “cwallet.sso” from your unzipped wallet in Step 2

Username: Insert the username created in previous labs. Same as your SQL Developer credentials.

Password: Insert the password created in previous labs. Same as your SQL Developer credentials.

Service Name: (Copy from tnsnames.ora) e.g. tuak89quycc88vqkzengdw1high.adwc.oraclecloud.com

  • After completing the fields, click on the ‘Save’ button.

    Note: If you are running an older version of Data Visualization Desktop, you may not see an option to select Client Credentials. Update your Data Visualization Desktop or read about connections in older versions in the Data Visualization Desktop User’s Guide.

Below is a sample screen shot for “Create Connection”

Create Connection for Autonomous Data Warehouse

Now that the connection is set up let us take a sample data set and review it. We chose a file titled “Sample Order Lines Oct.xls” for this exercise.

Sample Order Lines

Open Data Visualization Desktop and then

  • Click on Create on the right hand top corner
  • Click on Data Flow

Getting Started With Data Visualization Desktop

You will be taken to a new screen where you will see an option to upload a file.

  • Select the file and hit add.

Upload Data to Data Visualization Desktop

Once your data is loaded, right click the sample order data and select Save Data.

Save Data for Autonomous Data Warehouse

Name the data flow Oct_Data_Upload

The next step is to set up a Database Connection

Set Up Database Connection for Autonomous Data Warehouse

Connect to the database labeled DVD. Name the table something that you find convenient. In this case we chose Oct_Data_Upload

Connect to Database labeled Data Visualization Desktop

Click Run Data Flow to upload your data into the Autonomous Database. Name the data flow.

Data Flow for Autonomous Data Warehouse

Congrats, you’ve pushed data from Data Visualization Desktop into the Autonomous Database:

Push Data from Data Visualization Desktop to Autonomous Data Warehouse

Now your data is ready to use in the most secure, scalable and fully managed data warehouse: Autonomous Data Warehouse.
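Once the table is in the Autonomous Database it is also queryable from any other client. Here is a minimal cx_Oracle sketch; the wallet path, credentials, and service name follow the examples above and are placeholders for your own values:

import os
import cx_Oracle

os.environ['TNS_ADMIN'] = 'c:\\wallets'  # directory containing the unzipped wallet

conn = cx_Oracle.connect('your_user', 'your_password', 'adwfinance_high')  # placeholders
cur = conn.cursor()
cur.execute('SELECT COUNT(*) FROM Oct_Data_Upload')
print(cur.fetchone()[0])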

Next let us preview the data:

Preview Data in Autonomous Data Warehouse

Click on Create Project at the top right corner.

This will take you to a canvas where you can create rich visualizations. Here we create a simple chart for Quantity Ordered by Order Date.

Canvas for Rich Visualizations

You can add multiple canvases and create visualizations within each canvas and save them.

Canvas Visualizations

Go and gain powerful insights from your data in the Autonomous Data Warehouse using Data Visualization Desktop. You can find some additional labs on using Data Visualization Desktop with Autonomous Data Warehouse here. And don’t forget to sign up for your free trial of Autonomous Data Warehouse right here.

Related:

  • No Related Posts

Loading Data into Autonomous Data Warehouse Using Oracle Data Visualization Desktop

Depending on the size and volume of the data and your skill set, there are a number of ways to get data into Autonomous Data Warehouse. In this previous post we outlined the steps using SQL Developer and the use of object storage for very large volumes of data.

In this post, we provide another simple and easy choice within just one tool, Oracle Data Visualization Desktop. This single tool will enable you to upload data from a spreadsheet to the powerful Oracle Autonomous Data Warehouse and immediately analyze the data!!

If you don’t have Oracle Data Visualization Desktop already installed, follow these easy directions below. Oracle Data Visualization Desktop provides powerful personal data exploration and visualization in a simple per-user desktop download. Oracle Data Visualization Desktop is the perfect tool for quick exploration of sample data from multiple sources or for rapid analyses and investigation of your own local data sets. Oracle Data Visualization Desktop comes free with Oracle Autonomous Data Warehouse and can be deployed on both Windows and Mac. You can find more information related to Oracle Data Visualization Desktop here

Set Up Local Oracle Data Visualization Desktop Environment

STEP 1: Installing Oracle Data Visualization Desktop on a Windows Desktop

  • Download the latest version of Oracle Data Visualization Desktop (DVD) from here After saving the installer .ZIP to your desktop, unzip the file and click on the DVDesktop.exe installer to follow the guided steps.

Data Visualization Desktop Installer

Data Visualization Desktop Installer

STEP 2: Securing Your Client Connection to Autonomous Data Warehouse

You want to secure your data from the desktop all the way from the client application to the server where your data is stored. You can now store password credentials for connecting to databases in a client-side Oracle wallet, a secure software container used to store authentication and signing credentials. This wallet usage can simplify large-scale deployments that rely on password credentials for connecting to databases. When this feature is configured, application code, batch jobs, and scripts no longer need embedded user names and passwords. This reduces risk because such passwords are no longer exposed, and password management policies are more easily enforced without changing application code whenever user names or passwords change.

Download the connection wallet as shown in this lab, go to the directory where you saved the wallet file, and unzip it.

Connection Wallet

  • You will need the following two files to create the secure connection.
    • cwallet.sso
    • tnsnames.ora
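
As an aside, the same wallet also lets your own scripts connect without embedding any host, port, or certificate details in code. Below is a minimal sketch using Python with the cx_Oracle driver; the wallet path, username, password, and service alias ‘adwfinance_high’ are illustrative, and you may need to edit the sqlnet.ora file inside the wallet so its wallet directory points at the folder where you unzipped it:

    # Minimal sketch: connect to Autonomous Data Warehouse through the
    # downloaded wallet (assumes cx_Oracle and the Oracle client libraries
    # are installed; paths and credentials below are examples only).
    import os
    import cx_Oracle

    # Point the Oracle client at the unzipped wallet directory so it can
    # find tnsnames.ora and the auto-login wallet (cwallet.sso).
    os.environ["TNS_ADMIN"] = r"C:\wallets\adwfinance"

    # Connect by TNS alias only; the wallet supplies the network and TLS details.
    connection = cx_Oracle.connect("ADMIN", "your_password", "adwfinance_high")
    cursor = connection.cursor()
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())
    connection.close()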

Create a Connection to Your Autonomous Data Warehouse from Data Visualization Desktop

STEP 3: Create Connection

  • Start Oracle Data Visualization Desktop. When it opens, click on the ‘Create’ button and then ‘Connection’.

Getting Started With Data Visualization Desktop

  • In the Create Connection Dialog, select the highlighted option for ‘Oracle Autonomous Data Warehouse’ and progress through the wizard.

  • Go back to the directory where you unzipped your wallet and open the file ‘tnsnames.ora’. Search for the connection entry (in our example “adwfinance_high“) that you will use to connect.

Connection Info
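
For reference, a tnsnames.ora entry looks roughly like the sketch below (simplified: real entries also carry security settings such as the server certificate DN). The host, port, and service name here are the same example values used in the field list that follows:

    adwfinance_high = (description=
        (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
        (connect_data=(service_name=tuak89quycc88vqkzengdw1high.adwc.oraclecloud.com)))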

In the Create Connection dialog, fill in the fields as follows:

  • Connection Name: type in ‘SALES_HISTORY’
  • Host: copy from tnsnames.ora, e.g. adb.us-phoenix-1.oraclecloud.com
  • Port: copy from tnsnames.ora, e.g. 1522
  • Client Credentials: click ‘Select’ and choose the file “cwallet.sso” from the wallet you unzipped in Step 2
  • Username: the username created in the previous labs (same as your SQL Developer credentials)
  • Password: the password created in the previous labs (same as your SQL Developer credentials)
  • Service Name: copy from tnsnames.ora, e.g. tuak89quycc88vqkzengdw1high.adwc.oraclecloud.com

  • After completing the fields, click on the ‘Save’ button.

    Note: If you are running an older version of Data Visualization Desktop, you may not see an option to select Client Credentials. Update your Data Visualization Desktop or read about connections in older versions in the Data Visualization Desktop User’s Guide.

Below is a sample screenshot of the “Create Connection” dialog:

Create Connection for Autonomous Data Warehouse

Now that the connection is set up, let’s take a sample data set and review it. For this exercise, we chose a file titled “Sample Order Lines Oct.xls”.

Sample Order Lines

Open Data Visualization Desktop and then:

  • Click on Create at the top right corner
  • Click on Data Flow

Getting Started With Data Visualization Desktop

You will be taken to a new screen with an option to upload a file.

  • Select the file and click Add.

Upload Data to Data Visualization Desktop

Once your data is loaded, right-click the sample order data and select Save Data.

Save Data for Autonomous Data Warehouse

Name the data flow Oct_Data_Upload.

The next step is to set up a database connection.

Set Up Database Connection for Autonomous Data Warehouse

Connect to the database labeled DVD. Name the table something convenient; in this case, we chose Oct_Data_Upload.

Connect to Database labeled Data Visualization Desktop

Click Run Data Flow to upload your data into the Autonomous Database, and name the data flow.

Data Flow for Autonomous Data Warehouse

Congrats, you’ve pushed data from Data Visualization Desktop into the Autonomous Database:

Push Data from Data Visualization Desktop to Autonomous Data Warehouse

Now your data is ready to use in the most secure, scalable and fully managed data warehouse: Autonomous Data Warehouse.
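
If you want to sanity-check the upload outside of Data Visualization Desktop, you can also query the new table from a script. Here is a rough sketch, again with Python and cx_Oracle: the table name matches the one chosen above, but the column names ORDER_DATE and QUANTITY_ORDERED are assumptions based on the sample spreadsheet, so adjust them to your own data:

    # Sketch: verify the uploaded table by running the same aggregation
    # as the chart built below (Quantity Ordered by Order Date).
    import os
    import cx_Oracle

    os.environ["TNS_ADMIN"] = r"C:\wallets\adwfinance"   # unzipped wallet
    connection = cx_Oracle.connect("ADMIN", "your_password", "adwfinance_high")
    cursor = connection.cursor()
    cursor.execute("""
        SELECT order_date, SUM(quantity_ordered)
        FROM oct_data_upload
        GROUP BY order_date
        ORDER BY order_date""")
    for row in cursor.fetchmany(5):   # print the first few rows
        print(row)
    connection.close()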

Next, let’s preview the data:

Preview Data in Autonomous Data Warehouse

Click on Create Project at the top right corner.

This will take you to a canvas where you can create rich visualizations. Here we create a simple chart for Quantity Ordered by Order Date.

Canvas for Rich Visualizations

You can add multiple canvases and create visualizations within each canvas and save them.

Canvas Visualizations

Now go gain powerful insights from your data in the Autonomous Data Warehouse using Data Visualization Desktop. You can find additional labs on using Data Visualization Desktop with Autonomous Data Warehouse here. And don’t forget to sign up for your free trial of Autonomous Data Warehouse right here.


Cloud Database Security—What Is There to Know?

Public Cloud Resident Data Is Sensitive

Here are some interesting facts that we’ve recently uncovered about cloud database security in the Oracle and KPMG Cloud Threat Report 2019:

  • 73% of respondents feel the public cloud is more secure than what they can deliver in their own data center and are moving to the cloud
  • 71% of organizations indicated that a majority of this cloud data is sensitive, up from 50% last year
  • 30% cited the inability of existing network security controls to provide visibility into cloud-resident server workloads as a cloud security challenge
  • 92% of organizations are concerned about employees following cloud policies designed to protect this data

Download the Full Security Report

Concerns About Cloud Databases

This tells us that organizations everywhere are seeing the merits of cloud databases and are making the move, but there are still many concerns about how secure those databases are. There are simply too many headlines out there about data breaches for organizations and their employees to be complacent.

At the same time, the business benefits of moving to the cloud are so clear, and many cloud database offerings are so strong, that it’s no longer really a matter of “if” but “when” companies will start their cloud journey.

But the cyber security skills gap is a real problem. Some companies are turning to managed service providers, strategic partners, increased training, and of course accelerated recruiting.

What’s most exciting for us, of course, is the ability to help address these vulnerabilities automatically. Machine learning, automation, the speed at which security processes can be executed, and a powerful database like the Autonomous Database with its self-securing abilities: all of these result in minimal downtime for customers. After all, it costs less to prevent these problems than to fix them.

In this article, we’re highlighting how users are turning to automation to remedy chronic patching problems. But be sure to download the full report to learn more about other emerging cyber security challenges, the risks businesses face as they embrace cloud services, and ways to educate lines of business about the real security risks the cloud can present (among the many benefits, of course).

The Importance of Patching in the Cloud

People know about the importance of patching. The value of penetration testing to find vulnerabilities, and of expedited patching to close them, is very well understood.

But it’s never-ending.

And there are many reasons why organizations delay getting around to it, even when they know they should. The chart below shows why organizations have delayed applying a patch to a production system.

These reasons range from downtime impacting the ability to meet SLAs (46%), to software compatibility (45%), to a lack of approvals from SecOps, IT operations, or developers (40%).

What concerns do you have about applying a patch to a production system?

We also asked about the patching and server configuration challenges that organizations have experienced in the past 24 months:

Have you experienced any of the following patching and server configuration challenges in the last 24 months?

Automation in Databases Is the Future

And that’s why so many organizations are turning to automated patching. In fact, the use of automated patching is already widespread.

See our chart below: 43% have already implemented automated patch management, and another 46% plan to implement it in the next 12 to 24 months.

Deploy a solution that automates patch management for production environments

For many organizations, this is a no-brainer: they choose to automate patching to gain greater operational efficiencies (48%), to reduce the window in which vulnerabilities can be exploited (29%), or to meet agreed-upon performance SLAs (17%).

Deploy an automated management solution

How Can the Autonomous Database Help Your Security?

This is just one of the reasons why the Oracle Autonomous Database is so exciting. After all, it’s self-driving, self-securing, and self-repairing.

Those self-securing abilities mean that the Autonomous Database automatically encrypts all data, applies security updates with no downtime (including patching!), and protects against both external attacks and malicious internal users. To learn more, watch the video to see how Oracle integrates automation to help deliver a self-securing data management platform.

In a recent security report, KuppingerCole wrote, “On the whole, with the Autonomous Data Warehouse Cloud, Oracle has created a unique offering for the most demanding database customers that combines enterprise-grade performance and scalability with the highest level of security and compliance for sensitive corporate data by eliminating human factor and replacing it with industry best practices powered by the company’s decade-long expertise and machine learning.” Oracle can offer so much precisely because of its decades of work in database security.

But to truly understand how the Autonomous Database can change the way you approach cloud databases and security, try the Autonomous Database today by signing up for a free database trial.
