Oracle Database 19c Now Available on Oracle Exadata

Oracle Database 19c is now available on Oracle Exadata!

Initially released on LiveSQL.oracle.com, Oracle Database 19c is the final, and therefore ‘long term support’, release (previously referred to as a ‘terminal release’) of the Oracle Database 12c and 18c family of products. ‘Long term support’ means that Oracle Database 19c comes with four years of Premier Support (to the end of January 2023) and at least three years of Extended Support (to the end of January 2026). This extended support window is critical to many of our customers as they plan their upgrade strategy from prior releases. For the latest Oracle Support schedule, see Document ID 742060.1 on My Oracle Support (login required).

Download the full Oracle Database 19c White Paper

Aims of Oracle Database 19c

Oracle Database 19c is the release that the majority of customers will target their upgrades towards, and Oracle has made stability the core aim of this release. In Oracle Database 19c, developers have focused on fixing known issues rather than adding new functionality, an effort amounting to hundreds of man-years’ worth of testing, with thousands of servers running tests 24 hours a day. This focus on stability goes further than just the core database; it also covers all aspects of the technology stack, from the installer to the utilities and tools that make up the product offering. This approach, plus the changes we’ve made to the patching process, will greatly reduce the burden of patching in the coming years for our customers.

All That’s Happened Before with Oracle Database

Before we discuss some of the changes in Oracle Database 19c, it’s important to remember that Oracle Database has been the cornerstone of enterprise systems for the last 40 years. Over that time, we’ve added a multitude of features under the guidance of our customer community; from row-level locking and scalable read consistency, to the ability to logically break tables up into smaller partitions and scan billions of rows a second using parallel query. Many of these features and their implementation are industry leading, and in many instances remain unique to Oracle Database.

Data is of little value to enterprise users when it’s not accessible, and Oracle Database has ensured that it always is. Sometimes that’s as simple as making sure the database is consistent on restart after an unexpected server outage; for disaster recovery, Oracle Database can provide synchronous (or asynchronous) replication of data over large distances whilst making it available for reporting and backups. Oracle Real Application Clusters (RAC) has meant that Oracle Database is found in nearly every mission-critical system where any server outage could have serious implications. RAC enables customers to scale their Oracle Databases to extraordinary levels of throughput and concurrency without having to change their applications.

Oracle Database is widely acknowledged as one of the most secure repositories for data in the industry. No other database solution has the breadth of capabilities or depth of implementation, whether it’s our implementation of simple access controls or the classification of data down to the row level. We encrypt data throughout its life cycle, be it at rest or in flight, and we do it in the database itself, ensuring that malicious access is minimized.

And More Recent Improvements of Oracle Database

Oracle Database 18c and the previously released Oracle Database 12c family introduced hundreds of new features and improvements. Some of the more significant include:

  • Multitenant: Oracle’s strategic container architecture for the cloud introduced the concept of a pluggable database (PDB), enabling users to plug and unplug databases and move them between containers, either locally or in the cloud. This architecture enables massive consolidation, with the ability to efficiently share memory and processor resources and manage many databases as one (e.g. for backups, patching and upgrades).
  • JSON Support: Provides developers with a more flexible approach to defining their persisted schema-less data model. As well as simply storing JSON in the database, developers can use SQL and all of Oracle’s advanced analytical capabilities to query it (see the SQL sketch after this list). To ease the burden of processing large JSON data collections, Oracle Database also enables parallel scans and updates. For developers who prefer to use a simple NoSQL API, Oracle Database provides SODA (Simple Oracle Document Access) APIs for C, Java, PL/SQL, Python, Node.js and REST.
  • Database In-Memory: Enables users to perform lightning-fast analytics against their operational databases without being forced to acquire new hardware or make compromises in processing their data. Oracle Database features a dual in-memory model where OLTP data is held both as rows, enabling it to be efficiently updated, and in a columnar form, enabling it to be scanned and aggregated much faster. Reports that used to take hours can now be executed in seconds. In addition, Oracle can store JSON documents in the in-memory column store for lightning-fast analysis of semi-structured data.
  • Sharding: Provides OLTP scalability and fault isolation for customers that want to scale outside of the confines of a typical SMP server. It also supports use cases where data needs to be placed in a specific geographic location for performance or regulatory reasons. Oracle Sharding provides superior run-time performance and simpler life-cycle management compared to home-grown deployments that take a similar approach to scalability. Users can automatically scale up the shards to reflect increases in workload, making Oracle one of the most capable and flexible approaches to web-scale workloads for the enterprise today.
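
To make the JSON support above concrete, here is a minimal sketch of storing and querying JSON documents with plain SQL. The orders table and its fields are invented for illustration:

    -- A table of JSON documents; the IS JSON constraint enforces well-formedness
    CREATE TABLE orders (
      id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      doc VARCHAR2(4000) CHECK (doc IS JSON)
    );

    INSERT INTO orders (doc)
      VALUES ('{"customer":"Acme","total":149.90,"items":[{"sku":"X1","qty":2}]}');

    -- Ordinary SQL over the documents; no special API required
    SELECT JSON_VALUE(o.doc, '$.customer') AS customer,
           JSON_VALUE(o.doc, '$.total' RETURNING NUMBER) AS total
    FROM   orders o
    WHERE  JSON_EXISTS(o.doc, '$.items[*]?(@.qty > 1)');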

New in Oracle Database 19c

While stability is the focus of Oracle Database 19c, that’s not to say there aren’t a few new features and enhancements worth mentioning, such as:

  • Automatic Indexing: Without the relevant experience, optimizing database performance can be a challenge for many customers. Figuring out which columns in a table require an index to benefit not just a single query but potentially thousands of variants requires a deep understanding of the data model, the performance-related features of Oracle Database, and the underlying hardware. In Oracle Database 19c, we’re introducing Automatic Indexing, which continually evaluates the executing SQL and the underlying tables to determine which indexes to create and which ones to potentially remove. It does this via an expert system that verifies the improvement an index might make and, after its creation, validates the assumptions made. It then uses reinforcement learning to ensure it doesn’t make the same mistake again. Most importantly, Oracle Database 19c is able to adapt over time as the data model and access paths change.
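
Switching the feature on is a one-liner through the DBMS_AUTO_INDEX package; a minimal sketch:

    -- Let the database create visible, usable indexes automatically
    -- (other modes: 'REPORT ONLY' records recommendations only; 'OFF' disables)
    EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

    -- Review what the expert system has created, verified or dropped
    SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;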

  • Active Data Guard DML Redirect: A popular feature of Active Data Guard is its ability to make use of standby databases for reporting and backups. In most disaster recovery solutions, the standby does nothing but continually apply the redo shipped from the primary database. While the ability to ‘sweat’ the standby is a big improvement in fully utilizing an enterprise’s resources, many reporting applications need to persist some data, even if it is simply a user’s preferences. In Oracle Database 19c, we now allow users to write to the standby. These writes are transparently redirected to the primary database and written there first (to ensure consistency), and the changes are then shipped back to the standby. This approach allows applications to use the standby for moderate write workloads without requiring any application changes.
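
At the session level, the redirect is enabled with a single statement on the standby. A sketch, where the user_prefs table is hypothetical:

    -- Run on the Active Data Guard standby; can also be enabled database-wide
    -- via the ADG_REDIRECT_DML initialization parameter
    ALTER SESSION ENABLE ADG_REDIRECT_DML;

    -- This DML executes on the standby but is transparently applied on the
    -- primary first; the change then ships back through the redo stream
    INSERT INTO user_prefs (user_id, pref) VALUES (42, 'dark_mode');
    COMMIT;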

  • Hybrid Partitioned Tables: Breaking larger tables into smaller chunks, or partitions, makes them easier to manage and can improve performance by focusing operations on only the pieces of data to which they apply. Oracle Database supports multiple models for partitioning data, as well as online operations for partition management. But as enterprise data continues to inexorably increase in size and complexity, and regulatory requirements mandate that it always remain online, we need to look at new models for managing it. With Hybrid Partitioned Tables, DBAs can break data into manageable partitions as before, but they can now also select which partitions should be held in the database for fast querying and updating, and which can be made read-only and stored in external partitions. These external partitions can be held on-premises in standard file systems or on low-cost HDFS. DBAs can also choose to place the data in cloud-based object stores, thereby ‘stretching’ tables to the cloud.
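
A hedged sketch of the DDL, with invented table, directory and file names; recent data stays in an internal partition while older data lives in a read-only external file:

    CREATE TABLE sales_hybrid (
      sale_year NUMBER,
      amount    NUMBER
    )
    EXTERNAL PARTITION ATTRIBUTES (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY sales_dir
      ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
      REJECT LIMIT UNLIMITED
    )
    PARTITION BY RANGE (sale_year) (
      PARTITION p_2015 VALUES LESS THAN (2016)
        EXTERNAL LOCATION ('sales_2015.csv'),   -- read only, outside the database
      PARTITION p_2019 VALUES LESS THAN (2020)  -- internal, fully updatable
    );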

  • JSON Enhancements: There are a number of incremental enhancements to JSON support in Oracle Database 19c, from the simplification of SQL functions to the ability to partially update a JSON document (see the sketch after this list).
  • Memoptimized Rowstore: This feature enables fast data inserts into Oracle Database 19c from applications, such as Internet of Things (IoT) workloads, that ingest small, high-volume transactions with a minimal amount of transactional overhead. Insert operations that use the fast-ingest functionality temporarily buffer the data in the large pool before writing it to disk in bulk in a deferred, asynchronous manner.
  • Quarantine SQL Statements: Runaway SQL statements terminated by Resource Manager due to excessive consumption of processor and I/O resources can now be automatically quarantined. This prevents those runaway SQL statements from executing again, and thereby protects Oracle Database 19c from a common source of performance degradation.
  • Real-Time Statistics: Modern query optimizers require detailed statistics about the structure and makeup of the data in tables to make the ‘optimal’ decision on how to execute complex queries. The problem is that statistics collection can be resource intensive and take a considerable amount of time, and for modern ‘always on’ applications, finding a window to run a batch collection job is difficult. In Oracle Database 19c, statistics can now be collected in real time as operations insert, update or delete data. Now there’s no need for customers to compromise between the quality of the statistics that the optimizer depends upon and finding the right time for statistics maintenance.
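
The partial update mentioned above uses the new JSON_MERGEPATCH function; a minimal sketch against the hypothetical orders table from earlier:

    -- Change only the status field; the rest of the document is untouched
    UPDATE orders o
    SET    o.doc = JSON_MERGEPATCH(o.doc, '{"status":"shipped"}')
    WHERE  o.id = 1;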

For the complete list of new features in Oracle Database 19c, check out the latest documentation set or try the new Database Feature Application Guide here.

For the latest Oracle Database 19c availability on other platforms, both on-premises and in Oracle Cloud (including Autonomous Database Cloud Services), check out Document ID 742060.1 on My Oracle Support (login required).

Written by Dominic Giles, Master Product Manager

Data Warehouse and Visualizations for Credit Risk Analysis

Most people are dependent on credit to finance vehicles, real estate, student loans, or start small businesses. For financial institutions, assessing credit risk data is critical to determining whether to extend that credit. In this blog, we’ll demonstrate how incorporating data from disparate data sources (in this case, from four data sets) allows you to better understand the primary credit risk factors and optimize financial models.

What’s the best way to make that easy? By using Autonomous Data Warehouse, which gives financial institutions the flexibility to dynamically test and modify analytical models without specialized skills. We’ll demonstrate how Autonomous Data Warehouse makes analyzing credit risk simpler.

Try a Data Warehouse to Improve Your Analytics Capabilities

Analyzing Credit Risk

For many financial institutions, one key performance measure comes to mind more than any other: credit risk. A person’s credit risk score is based on financial health factors including available credit, debt, payment history, and length of credit history. Financial factors not built into the credit score include income, bank balance, and employment status, but all of these can potentially be used to improve the credit risk model, which ultimately drives more revenue. In this blog, let’s review the different data sets that we will use to effectively analyze credit risk.

Understanding the Data Sets

By using data visualizations, data analysts can learn about and effectively segment the market. In this project we are connecting multiple data sources:

  • AI_EXPLAIN_OUTPUT_MAX_CC_SPENT_AMOUNT
  • CREDIT_SCORING_100K_V
  • CREDIT_SCORE_NEW_PREDICTIONS
  • N1_LIFT_TABLE

Data analysts generate insights by sifting through significant amounts of data that can be used in conjunction with one another. However, data from different departments can often be siloed, making it harder for an analyst to incorporate potentially valuable predictive data into the model. For example, data elements in credit risk analysis include employment history from HR, purchase history from sales, and core financial health reports from finance. By combining these data sources into a single cohesive system, analysts can create more accurate models. Financial institutions can not only reduce costs by strategically identifying their target market segment, but also better monetize their data by continuously tailoring financial products while improving service delivery.

We looked at the following questions:

  1. How are weights assigned to individual financial factors to create a model that predicts the credit risk?
  2. What is the distribution of our target market based on our credit risk model?
  3. What kinds of loans is our target market segment interested in?
  4. How is the rate of homeownership correlated with wealth brackets based on the type of loans our target market is interested in (housing loans)?
  5. What combination of attributes identifies a risk-free customer?
  6. How effective was the targeted marketing campaign based on our segmentation analysis?

To get started, we downloaded the CREDIT_SCORING_100K_V dataset. This is one of the four datasets we will be using in this project. Here’s how the different attributes are displayed in Excel.

Let’s view the data in Oracle Data Visualization Desktop now. There are multiple ways to upload data to Oracle Cloud for analysis using Oracle Autonomous Data Warehouse. For this example, we uploaded the Credit Scoring 100K data set and reviewed the data in Data Visualization Desktop.

Here’s a quick snapshot of the data from Data Visualization Desktop:

1. How are weights assigned to individual financial factors to create a model that predicts the credit risk?

In the pivot table (on the left), we see different factors that help to determine the potential value of a customer, including credit scores, wealth, education, income, debt, and other financial measures. Each factor is given a weight based on significance and is ranked. When we plot this data on a horizontal bar graph visualization, we can see all the financial factors ordered from most to least important. This way we can see at a glance that a factor like wealth (IV: .54) is more than ten times as important as family size (IV: .04).
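
For readers wondering what the IV numbers are: IV is the Information Value measure commonly used in credit scoring. Binning a factor and writing g_i and b_i for the share of good and bad outcomes falling into bin i, it is conventionally computed as:

    IV = \sum_i (g_i - b_i) \, \ln(g_i / b_i)

On this scale, wealth at 0.54 carries far more predictive signal than family size at 0.04.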

2. What is the distribution of our target market based on our credit risk model?

This shows the probability of good credit for various demographic factors. Adjust the filters above (when you’re in Data Visualization Desktop) to gain an understanding of what is likely to result in good credit. Each row is a person, so we can see that in our model most people have a 52.85 or 55.26 percent probability of good credit. From this data, we can perform statistical analysis on the standard deviation to understand the target group of clients with more than a 50 percent probability of good credit.
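
If you prefer to compute the segment statistics directly in the database, a sketch along these lines would work against the predictions data set. We are assuming a probability column named PROB_GOOD_CREDIT; substitute the actual column name:

    -- Size, mean and spread of the segment with >50% probability of good credit
    SELECT COUNT(*)                 AS customers,
           AVG(prob_good_credit)    AS avg_probability,
           STDDEV(prob_good_credit) AS stddev_probability
    FROM   credit_score_new_predictions
    WHERE  prob_good_credit > 0.5;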

3. What kinds of loans is our target market segment interested in?

In this visualization, we set up a pivot table to target people with a high probability of good credit as our target segment. Then we filter their credit history by delay, duly now, duly past, not taken, and risky.

From this, we can construct a treemap visualization to see the loan types of this target market segment. We see that the most common type of loan is need-based, followed by housing, auto, and education loans. More than half of the loans are either need-based or housing loans.

4. How is the rate of homeownership correlated with wealth brackets based on the type of loans our target market is interested in (housing loans)?

In this visualization, we use a scatterplot to correlate the credit scores, age, and wealth (on the left). We also use pie charts to understand the rate of home ownership among different income brackets (on the right). In the scatterplot, we see that credit scores are correlated to wealth but not correlated to age. In the pie chart, homeowners are shown in green. Out of those surveyed, 22.5 percent of respondents were homeowners while 35.4 percent were tenants. When broken out by wealth, the rate of homeownership increases as you move up the income bracket.

5. What combination of attributes identifies a risk-free customer?

The network map uses lines to link variables such as the probability of good credit, family size, and residential status. Each data point is a node, and each line represents a relationship between two data points. In this visualization, we’ve filtered to show only individuals with more than a 50 percent probability of good credit. Drilling down further into the simplified network, we can isolate a node showing that homeowners with 2-3 children are a demographic that often has a high probability of good credit (see below). We can continue the analysis by looking at individual customer IDs and execute a marketing campaign to acquire low-risk customers. By targeting high-value customers, we optimize a limited marketing budget and increase the effectiveness of our sales promotion.

6. How effective was the targeted marketing campaign based on our segmentation analysis?

In this line graph, we use cumulative lift to measure how much better the prediction results are compared to a baseline. In this model, the red line acts as the baseline and the yellow line represents actual results. As an example, suppose you normally have a 5 percent response rate, but your most recent marketing campaign achieved an astonishing 20 percent response rate. The lift for that model would be 20/5, or 4. Since lift is computed using actual outcomes, analysts can compare how well a campaign performed against data from previous campaigns.

Summary

Oracle Autonomous Database allows users to easily create data marts in the cloud with no specialized DBA skills and generate powerful business insights. It took us fewer than ten minutes to provision a database and upload data for analysis.

Analysts are always looking for ways to create a more accurate credit risk model with data. They ask for analytical capabilities to discover innovative answers to their questions. While analysts are looking for those data insights, leadership wants insights delivered in a clear and concise format to understand the business. IT can’t deal with difficult-to-manage legacy approaches requiring expensive teams with highly specialized skills. And that’s where the Autonomous Data Warehouse comes into play.

Now you can also leverage the autonomous data warehouse through a cloud trial:

Sign up for your free Autonomous Data Warehouse trial today

Please visit the blogs below for a step-by-step guide on how to start your free cloud trial: upload your data into OCI Object Store, create an Object Store Authentication Token, create a Database Credential for your user, and load data using the Data Import Wizard in SQL Developer:

Feedback and questions welcome.

Written by Sai Valluri and Philip Li

5 Database Management Predictions for 2019

With the release of 18c and the Autonomous Database, 2018 has been an incredible year for the database.

So what’s happening in 2019? We’ve gathered together our database predictions and in this article, we’re sharing five.

Download the Full Database Predictions Ebook

1. Database Maintenance Automation Will Accelerate

Many routine database management tasks have already been automated in the last few years. In future years, traditional, on-premises databases will be competing against cloud-native deployments. And increasingly, those cloud-native deployments will be autonomous databases with hands-free database management.

So what does this mean?

Responsibilities will evolve toward less involvement with the physical environment and the actual database, and more involvement with managing and making use of the data. As it gets simpler to manage data, the data itself will become more valuable because it is easier to use. This will be an exciting time as careers advance and adjust to the changing landscape. You can already see that today, with the popularity of jobs such as data scientist and data engineer.

2. Database Security Will Become Ever More Important

Big surprise, right? Or maybe not. Unfortunately, we hear headlines about security breaches all the time. Threats to security will become more common as malicious actors realize the value of data and how they can turn it to their own advantage. And when we say more common, we mean it. A recent Oracle Threat Report predicts the number of security events will increase 100-fold by 2025.

It’s simply no longer possible for humans to detect, correlate, analyze, and then address all threats in a timely manner. So what can IT professionals do about this? Many of them are turning to autonomous solutions. Autonomous monitoring and auditing can identify many issues and threats against the database. It can monitor cloud service settings, notify DBAs of changes, and prevent configuration drift by allowing IT pros to restore approved settings at any time.

When you have an Autonomous Database that uses machine learning to detect threats and stop them, it’s just easier to rely on the security experts at Oracle while you explore ways to extract more value from data to help drive better business outcomes.

3. Standards for Database Reliability, Availability, and Performance Will Go Up

Database reliability, availability, and performance have always been important and in 2019, they’ll continue to be so. Autonomous data management will take those capabilities to the next level. For example, the machine learning capabilities of Autonomous Database can automatically patch systems the moment vulnerabilities are discovered. Autonomous data management will improve uptime and also boost security.

This means that standards will get higher. In the past, we’ve sometimes been able to get away with blaming human error. But that excuse doesn’t really pass muster anymore when there’s an autonomous option.

However, even though software patches are applied automatically in the background and all actions are audited, DBAs will still have to monitor the unified audit trail logs and perform actions accordingly if necessary.

4. The Volume of Data Will Continue Exploding

With data growth—well, we’ve all seen the countless charts and graphs detailing the explosion of data from social media and video and IoT and thousands of other sources that weren’t common even 10 years ago.

The size of the data itself isn’t a major factor when considering the productivity of DBAs, but it does matter when you think about the number of instances and the variety of database brands and versions.

This is something that, increasingly, DBAs are going to have to think about: how will they manage all of this data in an efficient way? It’ll be a strong factor for moving to the cloud, because most cloud databases can be provisioned in 40 minutes or less, versus weeks using the old on-premises methods.

5. Database Provisioning Will Become Even More Automated

In today’s world, 95 percent of DBAs still manually create and update databases. But automated database provisioning is becoming more popular as it improves with each new iteration. With the performance-tuning dimension that Oracle Autonomous Data Warehouse already brings, and new automatic indexing features for the Autonomous Database, automated database provisioning will become even easier for DBAs.

As data grows and the need for data-driven analytics increases, DBAs will need to help businesses get data faster to meet business demands.

Conclusion

What do you see for data management in 2019, and what are you most excited about?

For us, it’s witnessing how machine learning combined with a modern, automated database is going to revolutionize the way we use data. 2018 has been a groundbreaking year for Oracle, and we’re looking forward to seeing more of the same in 2019.

If you want to try out the world’s most groundbreaking database technology, sign up for a free trial of Oracle Autonomous Database today or read the walkthrough of how Autonomous Data Warehouse works.

And to read through the other database management predictions with quotes from top DBAs, download the full ebook, “Database Management Predictions 2019.”

Which OpenWorld Europe Sessions Should You Attend?

Line of business leaders – don’t let your valuable data go stale. Learn about new ways you can manage it and gain value during OpenWorld Europe, happening January 16 and 17 this year.

Oracle has entered a truly exciting time with the development of the Autonomous Database. We’ve continually added new products and new capabilities, like Autonomous Transaction Processing and Autonomous Data Warehouse. These new products will help you in your digital transformation as the world revolutionizes the way data is utilized.

Here are the data management and Autonomous Database sessions you don’t want to miss.

Oracle Code Keynote: Cloud-Native Data Management [SOL1843-LON]

The rise of the cloud brought many changes to the way developers build applications today. Containers, serverless, and microservices are just a few of the technologies and methodologies that are now unthinkable to leave out of a modern cloud-native architecture. Yet the next big step is still to come. Developers know how to build scalable and distributed cloud-native applications, but they still have to rely on traditional and fragmented data stores to serve their applications with the data they need to process. Penny Avril will unveil the next evolution of a cloud-native data management platform that not only can store and analyze all your data, but is also capable of tuning, securing, and healing itself, so that developers can continue to focus on building the next revolutionary application.

Speaker: Penny Avril, Vice President, Server Technology Division, Oracle

Wednesday, January 16, 09:00 AM – 10:20 AM | Arena 2 (Level 3) – ExCeL London

Oracle Autonomous Database [SOL1682-LON]

Oracle Chairman and Chief Technology Officer Larry Ellison describes the Oracle Autonomous Database as “probably the most important thing Oracle has ever done.” In his annual Oracle OpenWorld address, Oracle Executive Vice President Andy Mendelsohn shares the latest updates from the Database Development team along with customer reaction to Oracle Autonomous Database.

The definition of Digital Transformation continues to evolve. Many people think of Digital Transformation as — to be simplistic — the integration of digital technology into all areas of a business. But Digital Transformation has the potential to be so much more: it’s a necessary disruptor. Digital Transformation isn’t just about technology . . . it’s part vision, perspective, strategy and precision. Companies will experience digital transformation across the enterprise — from customers, to employees, to partners alike. Learn how successful companies evolve not only to respond to customer, employee, and partner needs, but also to focus on strategies and technologies that span digital transformation. Hear from June Manley of Data Intensity on digital transformation.

Speakers: Andrew Mendelsohn, Executive Vice President, Database Server Technologies, Oracle; June Manley, CMO, Data Intensity, LLC; Eric Grancher, Head of Database Department, CERN; Manuel Martin Marquez, Data Analytics Scientist, CERN

Wednesday, January 16, 12:55 PM – 02:15 PM | Arena 1 (Level 3) – ExCeL London

The Changing Role of the DBA [SES1683-LON]

The advent of the cloud and the introduction of Oracle Autonomous Database Cloud presents opportunities for every organization, but what’s the future role for the DBA? In this session explore how the role of the DBA will continue to evolve, and get advice on key skills required to be a successful DBA in the world of the cloud.

Speaker: Penny Avril, Vice President, Server Technology Division, Oracle

Wednesday, January 16, 02:25 PM – 03:00 PM | Arena 1 (Level 3) – ExCeL London

Unleash the Potential of Data to Drive a Smarter Business [SES1221-LON]

Organizations are under tremendous pressure to lower cost, reduce risk, and accelerate innovation. In this session, learn how Oracle Autonomous Database Cloud is helping customers achieve these objectives by leveraging the most valuable currency of the company: data. With its self-driving, self-repairing, and self-securing capabilities using machine learning, all stakeholders, including executives, business users, and data analysts, can gain insights for smarter business decisions, and IT can deploy applications in minutes for faster innovation. Learn why Larry Ellison calls Oracle Autonomous Database Cloud “the most important thing we have done in a long time.” See how it is revolutionizing data management and empowering lines of business, DBAs, and data scientists to do more with data.

Speaker: Monica Kumar, Vice President, Product Marketing Database and Big Data, Oracle

Thursday, January 17, 12:10 PM – 12:45 PM | Arena 2 (Level 3) – ExCeL London

6 Benefits of a Cloud Data Warehouse

Sometimes it seems like cloud technology is all that anyone in the tech world is talking about.

But not all companies have adopted a data warehouse in the cloud. We’ve written this article to help answer some questions:

  • Do you even want a data warehouse in the cloud?
  • What can you expect from it and what are the benefits?

Let’s explore these key topics one by one.

Sign up for your free data warehouse trial

Question 1 – Do you even want a data warehouse in the cloud?

Of course you do! Look at how fast your data warehouse is growing. Look at the growing number of requests building up for new data warehouse projects, new data discovery sandboxes, new departmental marts, faster query response times, etc. Every IT department is looking for a silver bullet that can magically help them meet the growing demands for data access coming from their business units. That silver bullet would be the cloud.

Question 2 – What can you expect from a cloud data warehouse and what are the key benefits?

There are many, but we’ve identified the top six benefits for you.

Data Warehouse Cloud Benefit #1: Lower Costs With Elasticity

The biggest reason most people move to a data warehouse in the cloud is cost. Storing data on-premises, in your own data center, can get very expensive. And expanding your data footprint often makes it harder to support all of your ever-expanding analytical needs.

Why? Well, with an on-premises data warehouse, you can’t independently scale compute and storage – at least not that quickly or easily. Typically, if you need more storage, the compute comes with it, and you end up having to pay for both.

In addition, you need to purchase as much compute as you need for peak times. So if you’re a retail company worried about how much compute you need to handle Black Friday, well, tough luck—you’re stuck with that much compute for the whole year.

Fortunately, it doesn’t have to be that way.

With the best kind of data warehouse, your system can instantly and flexibly scale to deliver as much or as little compute as necessary, whenever you need it. And because compute and storage are separate, you only need to purchase what’s essential. Lastly, you also don’t have as many upfront costs—hardware, server rooms, networking, extra staff, etc.

Data Warehouse Cloud Benefit #2: Quick to Deploy

In the past, IT teams had to estimate how much storage and compute power would be necessary for their line of business teams—sometimes three years in advance. Getting this information incorrect would mean buying hardware they didn’t need, or facing complaints if there was a lack of storage.

Today, this complicated, detailed planning-and-estimation process isn’t necessary. With the cloud, business users can build their own data warehouse, data mart, or sandbox in only minutes, at any time (night or day). Having a data warehouse in the cloud allows organizations to pay for only the resources they need—when they need it.

In addition, Oracle’s cloud makes it quicker and easier to roll out new data warehouse projects such as data discovery sandboxes. IT and business teams can develop and/or prototype new services and products without spending large sums of money on infrastructure.

Data Warehouse Cloud Benefit #3: Grow Your Capabilities

Having a data warehouse in the cloud improves the overall value of the data warehouse. It means that business intelligence and other applications can deliver faster, smarter insights to the business since the availability, scalability and performance are better.

As Penny Avril, VP of Product Management said in a previous article about autonomous capabilities for data warehouses, “The value of the business is driven by data, and by the usage of the data. For many companies, the data is the only real capital they have. Oracle is making it easier for the C-level to manage and use that data. That should help the bottom line.”

With a data warehouse in the cloud, you can engage in the full spectrum of data warehousing from business analytics, data integration, IoT, and more as a complete, integrated solution.

Data Warehouse Cloud Benefit #4: Self-Service Data Warehousing

Self-service is only truly possible if you have a self-driving database. Just as the cloud data warehouse has many benefits, a self-driving, autonomous data warehouse offers even more. Essentially, you no longer have to worry about managing the data warehouse at all.

And that means you can benefit from fully automated management, patching, and upgrades. It means that, as a business user, you don’t need IT to spin up a new data mart for you. You simply log into the cloud and provision a new data warehouse yourself, in minutes.

Data is more available and accessible than ever before.

This allows IT teams to focus attention and resources on more strategic aspects of providing value to the business. But this doesn’t mean that DBAs will be out of work—they still have to manage how applications connect to the data warehouse and how developers use the in-database features and functions within their application code.

Data Warehouse Cloud Benefit #5: More Secure Data

In the past, people were convinced that on-premises data warehouses were more secure. But in the same way that they now trust digital copies more than physical paper copies, some are beginning to see a data warehouse in the cloud as more secure than an on-premises system.

But obviously, it all depends on the database company. So choose a company whose business model relies on data security and encryption. Preferably, that company should have over four decades of experience, with entire departments dedicated to protecting your most valuable asset … Hmmm, who could that be?

Just as an aside, with our self-driving database, the Autonomous Data Warehouse, we have strong data encryption switched on by default to ensure your data is fully protected.

Data Warehouse Cloud Benefit #6: The Cloud Itself

A self-driving database makes everything easier: it takes care of much of the dull but highly valuable work that most people don’t want to do. A self-driving database will help you gain even more ability and capability in the cloud.

For many customers, adopting a data warehouse is just one step on a multi-step journey. You need to make sure that your cloud provider offers a complete path to the cloud that encompasses integrated IaaS, PaaS, and SaaS solutions.

You can simplify your IT infrastructure and minimize capital investments by utilizing your cloud’s services for infrastructure, data management, applications, and business intelligence.

When it comes to choosing a cloud, make sure the one you pick allows for flexible deployment models, enabling you to seamlessly migrate your IT workloads from an on-premises data center to the cloud and back again.

Conclusion

The benefits of having a data warehouse in the cloud are many. But don’t just stop there; think about the benefits of a self-driving data warehouse too—and how much more you could accomplish.

If you’re ready to get started, sign up for a free Autonomous Data Warehouse trial. You’ll be able to:

  • Deploy a new data warehouse in minutes
  • Quickly run sample queries against billion-row tables in seconds
  • Work with Oracle Machine Learning SQL notebooks to build and run machine learning models
  • Use Oracle Analytics Cloud to create interactive, guided data visualizations

Get started today:

Step 1 – Sign up for a new, free trial account

Step 2 – Get started with our free Oracle Learning Library workshop

Step 3 – Learn about loading the data warehouse for business analytics

Written by Sherry Tiao and Keith Laker

The Next Big Things for Oracle Cloud Platform

The pace of innovation never slows and Oracle is with you all the way! In the Next Big Things session at OpenWorld 2018, the Oracle Cloud Platform team showcased its coolest user experiences, latest development environments, and new blockchain applications.

Every technology wave provides great opportunities.

Oracle is ready… When will they show up on your roadmap?

Here you can watch highlights (2 minutes) or the entire session (40 minutes).

In this post, I’ll highlight our five demonstrations.

Immersive Experiences

The state-of-the-art in user experiences continues to be immersive experiences delivered as augmented reality, virtual reality, and mixed reality. All these models depend on various forms of AI to deliver natural human interactions.

These appealing experiences are expected ASAP in commercial and consumer applications, and Oracle is enabling developers to layer them in without having to become experts in esoteric fields of AI research.

Oracle offers four areas for immersive experiences: content management, mobile applications, digital twins for Internet of Things applications, and analytic visualizations.

In the first demonstration, using Oracle Content Experience, a magazine advertisement was brought to life with augmented reality and engaged a consumer in a buying experience. Using a mobile phone pointed at the advertisement, the product appeared three-dimensionally above the page. The product was manipulated in a virtual fashion – both on the magazine page and then onto the floor! And during the interaction experience, marketing information displayed around the virtual product. Totally cool!

Digital Assistants

As consumers, we are familiar with various digital assistants and conversational chatbots. Typically, there is natural language interaction combined with enterprise information providing a productive experience – and if questions become unanswerable, there is a seamless handoff to a human agent.

However, the more immersive spin is to personalize the textual experience with a human face. For our demonstrations, we featured a partner, Quantum Capture, who specializes in human avatars as the interaction paradigm. Their extremely clever technology enables a conversational chatbot experience face-to-face with a fully animated gesturing avatar. The next time you check into a hotel, your experience may be enhanced with the personification of a digital assistant!

AI and Data Science

AI and Data Science are clearly the next catalyst for innovation. 61% of surveyed corporations declare that AI is their most significant data initiative with $5T of potential derived business value.

Data Scientists conduct experiments that rely on large data sets, running on high performance infrastructure, and using specialized programming languages.

At the heart of data science experiments are the building, training, and managing of models. Using our best-in-class product suite, Oracle Data Science, this demonstration illustrated the complete lifecycle of how data scientists perfect their models – including team collaboration, and training on Oracle’s high performance AI infrastructure.

Blockchain Applications

Blockchain is still an exciting transformational opportunity that brings trust and transparency to where there was none before.

Oracle offers a standards-based blockchain platform that can run your consortium or private blockchains at enterprise scale.

This year, Oracle announced a new family of ready-to-use blockchain applications, integrated with Oracle IoT Cloud, that coordinate track-and-trace information across supply chain trading partners.

This strikingly visual demonstration illustrated tracking a shipment in a complex supply chain with trading partners running on different applications. At the final destination, the product was rejected for improper handling. While visually tracing the shipment, it was discovered that an IoT truck sensor had detected a handling anomaly in transit.

Visual Development

Better tools are needed to accelerate development for professional and non-professional developers alike.

Eliminating application backlog is best accomplished by simplifying the development process. Popularly known as low-code and no-code, visual development tools are state of the art.

To be effective, you need a common platform that is polyglot and component-based. In this demonstration, Oracle Visual Builder showed the ease of building a function-rich mobile application from homegrown and community components using simple drag and drop.

So, all in all, it was an exciting session. Visit Oracle.com and Cloud.oracle for more information.

We look forward to seeing you this year!

How 4 Customers Use Autonomous Data Warehouse & Analytics

Effective organizations want access to their data, fast, and they want it readily available for analytics. That’s what makes the Autonomous Data Warehouse such a great fit for businesses. It abstracts away the complexities of managing and maintaining a data warehouse while still making it easy for business analysts to sift through and analyze potentially millions of records.

Sign up today for your free data warehouse trial

This enables businesses to spend more time and resources on answering questions about how the business is performing and what to do next, and less time on routine maintenance and upgrades.

How Customers Use a Data Warehouse with Analytics

Here we’ve gathered together four customers who use the Autonomous Data Warehouse with their analytics. Watch what they have to say about their experience, and learn how a self-driving data warehouse helps them deliver more business value.

Drop Tank Fuels Growth with Autonomous Data Warehouse

The Autonomous Data Warehouse enables Drop Tank to stand up a data warehouse in about an hour, and then start pulling in useful information in around four hours. This enables them to see information and act upon it very quickly.

With a data warehouse that automatically scales, Drop Tank can run a promotion and even if there’s 500 times the amount of transaction volume, the system can recognize that, make some tuning adjustments, secure systems, and deliver what Drop Tank needs without needing to hire people to manage that.

They’ve also found value in Oracle’s universal credit model. Drop Tank CEO David VanWiggeren said, “If we decide we want to spin up the Analytics Cloud, try it for a day or two and turn it off, we can do that. That’s incredibly flexible and very valuable to a company like us.”

With Autonomous Data Warehouse, Drop Tank can now monetize their data and use it to drive a loyalty program to further delight their customers.

Data Intensity and Reporting With Autonomous Data Warehouse

Data Intensity decided to use Oracle Autonomous Data Warehouse to solve a problem they had around finance and financial reporting. Their finance team was spending around 60 percent of their time getting data out of systems, and only the remaining 40 percent generating value back to the business.

They chose Autonomous Data Warehouse because it was quick, easy, solved a lot of problems for them, and suited their agile development. In addition, they’ve really appreciated the flexibility of a data warehouse in the cloud, and being able to scale up and scale down the solution as needed for financial reporting periods.

Their CFO is especially delighted. With the Autonomous Data Warehouse and Oracle Analytics Cloud together, he can get the data he needs when he needs it – even during important meetings.

Since implementing Autonomous Data Warehouse, Data Intensity has had an initial savings of nearly a quarter of a million dollars and they’re running on 10 times less hardware than they were previously. They also have 10 times the number of users accessing the system as they used to, and all of them are driving value rather than just spending their time getting data out of the system.

Looker: Analytics at the Speed of Thought

At Looker, they were seeing demand for a fully managed experience where people didn’t have to worry about the hardware component. Because of the Autonomous Data Warehouse, users can focus on the analytics from day 1 and have interactive question-answer sessions in real time.

Now, Looker can feel confident that they’re fulfilling their growth plans while providing analytics to the entire organization as they keep adding new users.

DX Marketing: Advanced Analytics in Autonomous Data Warehouse

DX Marketing wanted to build a data management platform that non-technical people could build themselves. Having an Autonomous Data Warehouse makes things easier for the end user. And using Oracle Advanced Analytics with Autonomous Data Warehouse means that everything runs in the database. There’s no external system pulling data down, processing it, and putting it back, which avoids any network latency.

Four Companies, Four Success Stories with Autonomous Data Warehouse

With Autonomous Data Warehouse, we’ve built a data warehouse that essentially runs itself. These are the only five questions you need to answer before setting up your data warehouse:

  • How many CPUs do you want?
  • How much storage do you need?
  • What’s your password?
  • What’s the database name?
  • What’s a brief description?

It’s really that simple. To get started today and see how you can stop worrying about data management and start thinking about how to take your analytics to the next level, sign up for a free data warehouse trial. It’s easy, it’s fast, and we have a step-by-step how-to guide right here.

Data Warehouse and Visualizations for Flight Analytics

Everyone who flies has experienced a flight delay at some point. Delays have negative impacts: for passengers, there is nothing worse than being trapped in an airport, and for airlines, delays mean lost revenue.

Try a Data Warehouse to Improve Your Analytics Capabilities

Analysts are always looking for answers to reduce flight delays. They want them now, not six months from now. They are asking for analytical capabilities to discover innovative answers to their questions. While analysts are looking for those data insights, leadership wants insights delivered in a clear and concise format to understand the business. IT can’t deal with difficult-to-manage legacy approaches requiring expensive teams with highly specialized skills.

Analyzing Flight Delays

For many airlines and passengers, one key performance measure comes to mind more than any other: flight delays. A delay is any period of time by which a flight is late. It’s the difference between the time scheduled on your boarding pass and when you actually board the plane.

In this blog, we will target primary risk factors leading to delays and cancellations from the month of January 2018 to understand the five most common types of delays: carrier delay, weather delay, National Air System (NAS) delay, security delay, and late aircraft delay.

Understanding the Data Set

Using data visualizations to identify patterns and analyze flight delays increases the amount of time that planes are in the air, with minimal effort, by locating the weak links in the chain. For example, the data showed that in January 2018 alone, domestic airlines collectively suffered 97,760 delays and 17,169 cancellations of scheduled flights. We continued to drill into this raw data set to understand delays by these categories: classification, airport, airline, state, and day of the month. Airlines not only reduce costs by strategically identifying and mitigating major delays, but also monetize their data by targeting key areas where they can most easily improve their service delivery for customers.

We looked at the following questions:

  • What could I be delayed by and how long will my delay take?
  • Which are the best and worst days to fly based on expected delays?
  • Which state has the most flight delays?
  • Which airlines operated the most flights and had the most delays?
  • Which airports had the most departures and experienced the most delays?

To get started, we downloaded the public-domain Airline On-Time Performance dataset from Bureau of Transportation Statistics for the month of January 2018. This dataset has 570,119 rows.

There are multiple ways to upload data to Oracle Cloud for analysis using Oracle Autonomous Data Warehouse. For this example, we uploaded the data pulled from the Bureau of Transportation Statistics and reviewed the data in Data Visualization Desktop.

Here’s a quick snapshot of the data from Data Visualization Desktop:

Observations

1. What could I be delayed by and how long will my delay take?

There are many reasons why you can experience flight delays. This graph showcases carrier, weather, National Air System (NAS), security, and late aircraft delays. We can see that on average the longest delays are caused by late aircraft (over 25 minutes), while the shortest are security delays (less than a minute).
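
As an aside, if you load the same BTS file into Autonomous Data Warehouse, the averages behind this graph can be reproduced with one aggregate query. The delay columns follow the BTS data set’s naming; the table name is our assumption:

    -- Average delay, in minutes, by cause for January 2018
    SELECT AVG(carrier_delay)       AS avg_carrier,
           AVG(weather_delay)       AS avg_weather,
           AVG(nas_delay)           AS avg_nas,
           AVG(security_delay)      AS avg_security,
           AVG(late_aircraft_delay) AS avg_late_aircraft
    FROM   ontime_jan2018;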

Challenge: Download the data from the Bureau of Transportation Statistics for the month of December 2017 and do a similar analysis.

2. Which are the best and worst days to fly based on expected delays?

For January 2018, the worst days to travel were the 12th and the 17th, both of which had an aggregate of over 7,000 hours of flight delays. Our initial hypothesis was that there would be a strong correlation between flight delays and the number of flights; our expectation was that a reduction in the number of flights meant reduced strain on capacity, leading to a proportionate reduction in delays. However, when we overlaid the number of flights, we found that it remained relatively stable throughout January 2018. This means that the flight delays experienced were independent of the number of flights. The best days to fly were the 27th and 31st. On average, flights took off early!

Challenge: Overlay pricing data to identify fluctuations in price depending on the day of the month.

3. Which state has the most flight delays?

Florida, followed by Illinois and California, had the highest total flight delays in the month of January 2018. In Data Visualization Desktop, you can hover over the states to see the exact amount of departure delay.

Challenge: Drill down to a specific day of the month and overlay with weather to show how weather is affecting flight delays.

*For visualization #4, we will use the “My Calculations” tool to determine the total flights operated by each airline. We do this by taking a count of the flights operated by each unique carrier (seen above). In visualization #5, we apply “My Calculations” to determine the total flights departing from each airport.

4. Which airlines operated the most flights and had the most delays?

In this visualization, we see the total amount of flights operated by each airline and corresponding delays.

Challenge: Based on the data above, which airlines have a disproportionate amount of delays?

5. Which airports had the most departures and experienced the most delays?

In this visualization, we see the total flights departing from each origin airport and the corresponding delays. The airports with the most departing flights are Hartsfield–Jackson Atlanta International Airport (ATL), followed by O’Hare International Airport (ORD) and Dallas/Fort Worth International Airport (DFW).

Airports that experienced the most net delays were: O’Hare International Airport (ORD), Hartsfield–Jackson Atlanta International Airport (ATL), and Dallas/Fort Worth International Airport (DFW). In just the month of January, the net delays from O’Hare International Airport totaled just over 397,000 minutes of delay which equates to 276 days.

Challenge: Delays are not only caused by the origin airport but also by the destination airport. Try replicating our results but with destination airports. What observations can you draw from comparing delays from origin and destination airports?

Summary

Oracle Autonomous Database allows users to easily create data marts in the cloud with no specialized DBA skills and generate powerful business insights. It took us less than ten minutes to provision a database and upload data for analysis.

Now you can also leverage the autonomous data warehouse through a cloud trial:

Sign up for your free Autonomous Data Warehouse trial today

Please visit the blogs below for a step-by-step guide on how to start your free cloud trial: upload your data into OCI Object Store, create an Object Store Authentication Token, create a Database Credential for your user, and load data using the Data Import Wizard in SQL Developer:

Feedback and questions welcome. Tell us about the delays you’ve personally experienced!

Written by Sai Valluri and Philip Li

Data Warehouse 101: Setting up Object Store

In the previous posts we discussed how to set up a trial account, provision Oracle Autonomous Data Warehouse, and connect using SQL Developer.

Get Started With a Free Data Warehouse Trial

The next step is to load data. There are multiple ways of uploading data for use in Oracle Autonomous Data Warehouse. Let’s explore how to set up OCI Object Store and load data into OCI Object Store.

Here are step-by-step instructions on how to set up OCI Object Store, load data, and create auth token and database credential for users.

  • From the Autonomous Data Warehouse console, pull out the left side menu from the top-left corner and select Object Storage. To revisit signing-in and navigating to ADW, visit our introduction to data warehouses.

To learn more about OCI Object Storage, refer to its documentation.

  • You should now be on the Object Storage page. Choose the root compartment in the Compartment dropdown if it is not already chosen.

Create a Bucket for the Object Storage

In OCI Object Storage, a bucket is the terminology for a container of multiple files.

  • Click the Create Bucket button:

  • Name your bucket ADWCLab and click the Create Bucket button.

Upload Files to Your OCI Object Store Bucket

  • Click on your bucket name to open it:

  • Click on the Upload Object button:

  • Using the browse button or drag-and-drop, select the file you downloaded earlier and click Upload Object:

  • Repeat this for all files you downloaded for this lab.
  • The end result should look like this with all files listed under Objects:

Construct the URLs of the Files on Your OCI Object Storage

  • Construct the base URL that points to the location of your files staged in OCI Object Storage. The URL is structured as follows; the values for you to specify are shown in angle brackets:

https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<tenant_name>/<bucket_name>/

  • The simplest way to find this information is to look at the details of your recently uploaded files.

  • In the example below, the region name is us-phoenix-1, the tenant name is labs, and the bucket name is ADWCLab. This is all the information you need to construct the Swift storage URL above.
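
Plugging those example values into the template gives:

    https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/labs/ADWCLab/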

  • Save the base URL you constructed to a note. We will use the base URL in the following steps.
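Putting the example values together, the assembled base URL would look like the following (substitute your own region, tenant, and bucket names):

https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/labs/ADWCLab/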

Creating an Object Store Auth Token

To load data from Oracle Cloud Infrastructure (OCI) Object Storage, you will need an OCI user with the appropriate privileges to read (or upload) data in the Object Store. The communication between the database and the object store relies on the Swift protocol and the OCI user Auth Token.

  • Go back to the Autonomous Data Warehouse Console in your browser. From the pull-out menu on the top left, under Identity, click Users.

  • Click the user’s name to view the details. Also, remember the username, as you will need it in the next step. This username could also be an email address.

  • On the left side of the page, click Auth Tokens.

  • Click Generate Token.

  • Enter a friendly description for the token and click Generate Token.

  • The new Auth Token is displayed. Click Copy to copy the Auth Token to the clipboard. You probably want to save this in a temporary notepad document for the next few minutes (you’ll use it in the next step).

    Note: You can’t retrieve the Auth Token again after closing the dialog box.

Create a Database Credential for Your User

In order to access data in the Object Store, you have to enable your database user to authenticate with the Object Store using your OCI object store account and Auth Token. You do this by creating a private CREDENTIAL object for your user that stores this information encrypted in your Autonomous Data Warehouse. This information is usable only by your user schema.

  • Connected as your user in SQL Developer, copy and paste the code snippet below into a SQL Developer worksheet.

Specify the credentials for your Oracle Cloud Infrastructure Object Storage service: The username will be your OCI username (usually your email address, not your database username) and the password is the OCI Object Store Auth Token you generated in the previous step. In this example, the credential object named OBJ_STORE_CRED is created. You reference this credential name in the following steps.
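A minimal sketch of this snippet is shown below, using the documented DBMS_CLOUD.CREATE_CREDENTIAL procedure; the username and Auth Token values are placeholders for your own:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',       -- referenced by name in later steps
    username        => '<your_oci_username>',  -- your OCI username, usually an email address
    password        => '<your_auth_token>'     -- the Auth Token generated in the previous step
  );
END;
/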

  • Run the script.

  • Now you are ready to load data from the Object Store.
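As an optional sanity check (not part of the original walkthrough), you can verify that the database can reach your staged files by querying the documented DBMS_CLOUD.LIST_OBJECTS function with your credential and the base URL you saved earlier:

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
         'OBJ_STORE_CRED',
         'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/labs/ADWCLab/');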

Loading Data Using the Data Import Wizard in SQL Developer

  • Click ‘Tables’ in your user schema object tree, then right-click to open the context-sensitive menu and select ‘Import Data’:

When you are satisfied with the data preview, click NEXT.

Note: If you see an object not found error here, your user may not be set up properly to have data access to the object store. Please contact your Cloud Administrator.

  • On the Import Method page, you can click on Load Options to see some of the available options. For this exercise, leave the options at their defaults. Enter CHANNELS_CLOUD as the table name and click NEXT to advance to the next page of the wizard.

  • On the Column Definition page, you can control how the fields of the file map to columns in the table. You can also adjust certain properties such as the Data Type of each column. This data needs no adjustment, so we can simply proceed by clicking Next.

  • The last screen before the final data load lets you test with a larger row count than the sample shown at the beginning of the wizard, to check whether your earlier choices suit the full data load. Note that no data is actually loaded into your database during these tests. Click TEST and review the Test Results log: the data that would be loaded, any errors, and the external table definition generated from your inputs.

When done with your investigation, click NEXT.

  • The final screen reflects all your choices made in the Wizard. Click FINISH when you are ready to load the data into the table.
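If you prefer a scripted route over the wizard, the documented DBMS_CLOUD.COPY_DATA procedure can load a staged file into an existing table directly. A minimal sketch, assuming the target table already exists and that a file named channels.csv (an illustrative name) is staged in your bucket:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS_CLOUD',  -- target table; must already exist
    credential_name => 'OBJ_STORE_CRED',  -- credential created earlier
    file_uri_list   => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/labs/ADWCLab/channels.csv',
    format          => json_object('type' value 'CSV', 'skipheaders' value '1')
  );
END;
/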

In the next series of posts, we will explore different industries, review industry data sets, query the data, and analyze industry problems with the help of visualizations:

Data Warehouse and Visualizations for Flight Analytics

Data Warehouse and Visualizations for Credit Risk Analysis

Written by Sai Valluri and Philip Li


Data Warehouse 101: Provisioning

How to Get Started With Autonomous Data Warehouse

Our previous post Data Warehouse 101: Introduction outlined the benefits of the Autonomous Data Warehouse: it’s simple, fast, elastic, secure, and best of all it’s incredibly easy to spin up an environment and start a new project. If you read through the last post, you already know how to sign up for a data warehouse trial account and download SQL Developer and Data Visualization Desktop, both of which come free with the Autonomous Data Warehouse.

Sign up for a Free Data Warehouse Trial Today

This post will focus on the steps to get started using the Oracle Autonomous Data Warehouse. We will provision a new Autonomous Data Warehouse instance and connect to the database using Oracle SQL Developer.

How to Use Autonomous Data Warehouse with Oracle Cloud Infrastructure

STEP 1: Sign in to Oracle Cloud

  • Go to cloud.oracle.com. Click Sign In to sign in with your Oracle Cloud account.
  • Enter your Cloud Account Name and click My Services.

  • Enter your Oracle Cloud username and password, and click Sign In.

STEP 2: Create an Autonomous Data Warehouse Instance

  • Once you are logged in, you are taken to the cloud services dashboard where you can see all the services available to you. Click Create Instance.

Note: You may also access your Autonomous Data Warehouse service via the pull-out menu on the top left of the page, or by using Customize Dashboard to add the service to your dashboard.

  • Click Create on the Autonomous Data Warehouse tile. If it does not appear in your Featured Services, click on All Services and find it there.

  • Select the root compartment, or another compartment of your choice where you will create your new Autonomous Data Warehouse instance. If you want to create a new Compartment or learn more, click here.

    Note: Avoid using the ManagedCompartmentforPaaS compartment, as this is an Oracle default used for Oracle Platform Services.

  • Click the Create Autonomous Data Warehouse button to start the instance creation process.

  • This will bring up the Create Autonomous Data Warehouse screen where you will specify the configurations of the instance. Select the root compartment, or another compartment of your choice.

  • Specify a memorable display name for the instance. Also specify your database’s name; for this lab, use ADWFINANCE.

  • Next, select the number of CPUs and storage size. Here, we use 4 CPUs and 1 TB of storage.

  • Then, specify an ADMIN password for the instance, and a confirmation of it. Make a note of this password.

  • For this lab, we will select Subscribe To A New Database License. If your organization already owns Oracle Database licenses, you may bring those licenses to your cloud service.
  • Make sure everything is filled out correctly, then click Create Autonomous Data Warehouse.

  • Your instance will begin provisioning. Once the state goes from Provisioning to Available, click on your display name to see its details.

  • You have now created your first Autonomous Data Warehouse instance. Have a look at your instance’s details here, including its name, database version, CPU count, and storage size.

Because Autonomous Data Warehouse only accepts secure connections to the database, you need to download a wallet file containing your credentials first. The wallet can be downloaded either from the instance’s details page, or from the Autonomous Data Warehouse service console.

STEP 4: Download the Connection Wallet

  • In your database’s instance details page, click DB Connection.

  • Under Download a Connection Wallet, click Download.

  • Specify a password of your choice for the wallet. You will need this password when connecting to the database via SQL Developer later; it is also used as the JKS keystore password for JDBC applications that use JKS for security. Click Download to download the wallet file to your client machine.

    Note: If you are prevented from downloading your Connection Wallet, it may be due to your browser’s pop-up blocker. Please disable it or create an exception for Oracle Cloud domains.

Connecting to the database using SQL Developer

Start SQL Developer and create a connection for your database using the default administrator account ‘ADMIN’ by following these steps.

STEP 5: Connect to the database using SQL Developer

  • Click the New Connection icon in the Connections toolbox on the top left of the SQL Developer homepage.

  • Fill in the connection details as below:
    • Connection Name: admin_high
    • Username: admin
    • Password: The password you specified during provisioning your instance
    • Connection Type: Cloud Wallet
    • Configuration File: Enter the full path for the wallet file you downloaded before, or click the Browse button to point to the location of the file.
    • Service: There are 3 pre-configured database services for each database. Pick <databasename>_high for this lab. For example, if the database you created was named adwfinance, select adwfinance_high as the service.

Note: SQL Developer versions prior to 18.3 ask for a Keystore Password. Here, you would enter the password you specified when downloading the wallet from ADW.

  • Test your connection by clicking the Test button. If it succeeds, save your connection information by clicking Save, then connect to your database by clicking the Connect button. An entry for the new connection appears under Connections.
  • If you are behind a VPN or firewall and this test fails, make sure you have SQL Developer 18.3 or higher. This version and above allows you to select the “Use HTTP Proxy Host” option for a Cloud Wallet type connection. While creating your new ADW connection here, provide your proxy’s Host and Port. If you are unsure where to find this, check your computer’s connection settings or contact your Network Administrator.
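Once connected, a simple query (an optional check, not part of the original steps) confirms the session is working:

SELECT sysdate FROM dual;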

Watch a video demonstration of provisioning a new autonomous data warehouse and connecting using SQL Developer:

NOTE: The display name for the Autonomous Data Warehouse is ADW Finance Mart and the database name is ADWFINANCE. These names are for illustration only; you can choose your own.

In the next post, Data Warehouse 101: Setting up Object Store, we will start exploring a data set and show how to load and analyze it.

Written by Sai Valluri and Philip Li
