How 4 Customers Use Autonomous Data Warehouse & Analytics

Effective organizations want access to their data, fast, and they want it readily available for analytics. That’s what makes the Autonomous Data Warehouse such a great fit for businesses. It abstracts away the complexities of managing and maintaining a data warehouse while still making it easy for business analysts to sift through and analyze potentially millions of records.

Sign up today for your free data warehouse trial

This enables businesses to spend more time and resources on answering questions about how the business is performing and what to do next, and less time on routine maintenance and upgrades.

How Customers Use a Data Warehouse with Analytics

Here we’ve gathered together four customers who use the Autonomous Data Warehouse with their analytics. Watch what they have to say about their experience, and learn how a self-driving data warehouse helps them deliver more business value.

Drop Tank Fuels Growth with Autonomous Data Warehouse

The Autonomous Data Warehouse enables Drop Tank to stand up a data warehouse in about an hour, and then start pulling in useful information in around four hours. This enables them to see information and act upon it very quickly.

With a data warehouse that automatically scales, Drop Tank can run a promotion, and even if transaction volume spikes to 500 times its normal level, the system can recognize that, make tuning adjustments, keep systems secure, and deliver what Drop Tank needs without hiring people to manage it.

They’ve also found value in Oracle’s universal credit model. Drop Tank CEO David VanWiggeren said, “If we decide we want to spin up the Analytics Cloud, try it for a day or two and turn it off, we can do that. That’s incredibly flexible and very valuable to a company like us.”

With Autonomous Data Warehouse, Drop Tank can now monetize their data and use it to drive a loyalty program to further delight their customers.

Data Intensity and Reporting With Autonomous Data Warehouse

Data Intensity decided to use Oracle Autonomous Data Warehouse to solve a problem they had around finance and financial reporting. Their finance team was spending around 60 percent of their time getting data out of systems, and only the remaining 40 percent on generating value back to the business.

They chose Autonomous Data Warehouse because it was quick, easy, solved a lot of problems for them, and suited their agile development. In addition, they’ve really appreciated the flexibility of a data warehouse in the cloud, and being able to scale up and scale down the solution as needed for financial reporting periods.

Their CFO is especially delighted. With the Autonomous Data Warehouse and Oracle Analytics Cloud together, he can get the data he needs when he needs it – even during important meetings.

Since implementing Autonomous Data Warehouse, Data Intensity has had an initial savings of nearly a quarter of a million dollars and they’re running on 10 times less hardware than they were previously. They also have 10 times the number of users accessing the system as they used to, and all of them are driving value rather than just spending their time getting data out of the system.

Looker: Analytics at the Speed of Thought

At Looker, they were seeing demand for a fully managed experience where people didn’t have to worry about the hardware component. Because of the Autonomous Data Warehouse, users can focus on the analytics from day 1 and have interactive question-answer sessions in real time.

Now, Looker can feel confident that they can keep pace with their growth while providing analytics to the entire organization as they keep adding new users.

DX Marketing: Advanced Analytics in Autonomous Data Warehouse

DX Marketing wanted to build a data management platform that non-technical people could build themselves. Having an Autonomous Data Warehouse makes things easier for the end user. And using Oracle Advanced Analytics with Autonomous Data Warehouse means that everything runs in the database. There’s no external system pulling data down, processing it, and putting it back, which eliminates network latency.

Four Companies, Four Success Stories with Autonomous Data Warehouse

With Autonomous Data Warehouse, we’ve built a data warehouse that essentially runs itself. These are the only five questions you need to answer before setting up your data warehouse:

  • How many CPUs do you want?
  • How much storage do you need?
  • What’s your password?
  • What’s the database name?
  • What’s a brief description?

It’s really that simple. To get started today and see how you can stop worrying about data management and start thinking about how to take your analytics to the next level, sign up for a free data warehouse trial. It’s easy, it’s fast, and we have a step-by-step how-to guide right here.


Data Warehouse and Visualizations for Flight Analytics

Everyone who flies has experienced a flight delay at some point. Delays have negative impacts: for passengers, there is nothing worse than being trapped in an airport, and for airlines, it is lost revenue.

Try a Data Warehouse to Improve Your Analytics Capabilities

Analysts are always looking for answers to reduce flight delays. They want them now, not six months from now. They are asking for analytical capabilities to discover innovative answers to their questions. While analysts are looking for those data insights, leadership wants insights delivered in a clear and concise format to understand the business. IT can’t deal with difficult-to-manage legacy approaches that require expensive teams with highly specialized skills.

Analyzing Flight Delays

For many airlines and passengers, one key performance measure comes to mind more than any other: flight delays. A delay is any period of time by which a flight is late. It’s the difference between the time scheduled on your boarding pass and when you actually board the plane.

In this blog, we will examine the primary risk factors leading to delays and cancellations in January 2018 to understand the five most common types of delays: carrier delay, weather delay, National Air System (NAS) delay, security delay, and late aircraft delay.

Understanding the Data Set

Using data visualizations to identify patterns and analyze flight delays helps locate the weak links in the chain and, with minimal effort, increase the amount of time that planes are in the air. For example, the data showed that in January 2018 alone, domestic airlines collectively suffered 97,760 delays and 17,169 cancellations of scheduled flights. We continued to drill into this raw data set to understand delays by these categories: classification, airport, airline, state, and day of the month. Airlines not only reduce costs by strategically identifying and mitigating major delays, but also monetize their data by targeting key areas where they can most easily improve their service delivery for customers.

We looked at the following questions:

  • What could I be delayed by and how long will my delay take?
  • Which are the best and worst days to fly based on expected delays?
  • Which state has the most flight delays?
  • Which airlines operated the most flights and had the most delays?
  • Which airports had the most departures and experienced the most delays?

To get started, we downloaded the public-domain Airline On-Time Performance dataset from the Bureau of Transportation Statistics for the month of January 2018. This dataset has 570,119 rows.

There are multiple ways to upload data to Oracle Cloud for analysis using Oracle Autonomous Data Warehouse. For this example, we uploaded the data pulled from the Bureau of Transportation Statistics and reviewed the data in Data Visualization Desktop.

Here’s a quick snapshot of the data from Data Visualization Desktop:

Observations

1. What could I be delayed by and how long will my delay take?

There are many reasons why you can experience flight delays. This graph showcases carrier, weather, National Air System (NAS), security, and late aircraft delays. We can see that on average the longest delays are caused by late aircraft delays (over 25 minutes) while the shortest delays are caused by security delays (less than a minute).
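
You could reproduce this comparison with a query like the one below. This is a minimal sketch, not the authors’ original workflow: it assumes the January 2018 file was loaded into a hypothetical table named FLIGHTS_JAN2018 that keeps the BTS delay columns (CARRIER_DELAY, WEATHER_DELAY, NAS_DELAY, SECURITY_DELAY, LATE_AIRCRAFT_DELAY), with missing values treated as zero minutes:

    -- Average delay minutes per delay type across all January 2018 flights
    SELECT AVG(NVL(carrier_delay, 0))       AS avg_carrier_delay,
           AVG(NVL(weather_delay, 0))       AS avg_weather_delay,
           AVG(NVL(nas_delay, 0))           AS avg_nas_delay,
           AVG(NVL(security_delay, 0))      AS avg_security_delay,
           AVG(NVL(late_aircraft_delay, 0)) AS avg_late_aircraft_delay
    FROM   flights_jan2018;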

Challenge: Download the data from the Bureau of Transportation Statistics for the month of December 2017 and do a similar analysis.

2. Which are the best and worst days to fly based on expected delays?

For January 2018, the worst days to travel were the 12th and the 17th, when both days had an aggregate of over 7,000 hours of flight delays. Our initial hypothesis was that there would be a strong correlation between flight delays and the number of flights. Our expectation was that a reduction in the number of flights meant reduced strain on capacity, leading to a proportionate reduction in delays. However, when we overlaid the number of flights, we found that the number of flights remained relatively stable throughout January 2018. This means that the flight delays experienced were independent of the number of flights. The best days to fly were the 27th and 31st. On average, flights took off early!
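
The same question can be asked directly in SQL. The sketch below again assumes the hypothetical FLIGHTS_JAN2018 table, with a FL_DATE column for the flight date and a DEP_DELAY column for departure delay in minutes; it returns the flight count alongside the total delay for each day so the two measures can be compared:

    -- Total flights and total departure-delay minutes per day of the month
    SELECT fl_date,
           COUNT(*)               AS total_flights,
           SUM(NVL(dep_delay, 0)) AS total_delay_minutes
    FROM   flights_jan2018
    GROUP  BY fl_date
    ORDER  BY total_delay_minutes DESC;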

Challenge: Overlay pricing data to identify fluctuations in price depending on the day of the month.

3. Which state has the most flight delays?

Florida, followed by Illinois and California, had the highest total flight delays in January 2018. In Data Visualization Desktop, you can hover over the states to see the exact amount of departure delays.

Challenge: Drill down to a specific day of the month and overlay with weather to show how weather is affecting flight delays.

For visualization #4, we will use the “My Calculations” tool to determine the total flights operated by each airline. We do this by taking a count of the flights operated by each unique carrier. In visualization #5, we apply “My Calculations” to determine the total flights departing from each airport.

4. Which airlines operated the most flights and had the most delays?

In this visualization, we see the total number of flights operated by each airline and the corresponding delays. Southwest (WN) operated the most flights at 109,676, followed by American Airlines (AA) at 73,598 and Delta (DL) at 71,254. The top three airlines with the most delays are SkyWest (OO), followed by Southwest (WN) and Delta (DL).

Challenge: Based on the data above, which airlines have a disproportionate amount of delays?

5. Which airports had the most departures and experienced the most delays?

In this visualization, we see the total flights departing from each origin airport and the corresponding delays. The airports with the most departing flights are Hartsfield–Jackson Atlanta International Airport (ATL), followed by O’Hare International Airport (ORD) and Dallas/Fort Worth International Airport (DFW).

The airports that experienced the most net delays were O’Hare International Airport (ORD), Hartsfield–Jackson Atlanta International Airport (ATL), and Dallas/Fort Worth International Airport (DFW). In just the month of January, the net delays from O’Hare International Airport totaled just over 397,000 minutes, which equates to about 276 days.

Challenge: Delays are not only caused by the origin airport but also by the destination airport. Try replicating our results but with destination airports. What observations can you draw from comparing delays from origin and destination airports?

Summary

Oracle Autonomous Database allows users to easily create data marts in the cloud with no specialized DBA skills and to generate powerful business insights. It took us less than ten minutes to provision a database and upload data for analysis.

Now you can also leverage the autonomous data warehouse through a cloud trial:

Sign up for your free Autonomous Data Warehouse trial today

Please visit the blogs below for a step-by-step guide on how to start your free cloud trial: upload your data into OCI Object Store, create an Object Store Authentication Token, create a Database Credential for your user, and load data using the Data Import Wizard in SQL Developer.

Feedback and questions are welcome. Tell us about the delays you’ve personally experienced!

Written by Sai Valluri and Philip Li


Data Warehouse 101: Setting up Object Store

In the previous posts we discussed how to set up a trial account, provision Oracle Autonomous Data Warehouse, and connect using SQL Developer.

Get Started With a Free Data Warehouse Trial

The next step is to load data. There are multiple ways of uploading data for use in Oracle Autonomous Data Warehouse. Let’s explore how to set up OCI Object Store and load data into OCI Object Store.

Here are step-by-step instructions on how to set up OCI Object Store, load data, and create auth token and database credential for users.

  • From the Autonomous Data Warehouse console, pull out the left side menu from the top-left corner and select Object Storage. To revisit signing-in and navigating to ADW, visit our introduction to data warehouses.

To learn more about OCI Object Storage, refer to its documentation.

  • You should now be on the Object Storage page. Choose the root compartment in the Compartment dropdown if it is not already chosen.

Create a Bucket for the Object Storage

In OCI Object Storage, a bucket is a container that holds multiple files.

  • Click the Create Bucket button:

  • Name your bucket ADWCLab and click the Create Bucket button.

Upload Files to Your OCI Object Store Bucket

  • Click on your bucket name to open it:

  • Click on the Upload Object button:

  • Using the browse button or drag-and-drop, select the file you downloaded earlier and click Upload Object:

  • Repeat this for all files you downloaded for this lab.
  • The end result should look like this with all files listed under Objects:

Construct the URLs of the Files on Your OCI Object Storage

  • Construct the base URL that points to the location of your files staged in OCI Object Storage. The URL is structured as follows; the values for you to specify are shown in angle brackets:

https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<tenant_name>/<bucket_name>/

  • The simplest way to find this information is to look at the details of your recently uploaded files.

  • In this example, the region name is us-phoenix-1, the tenant name is labs, and the bucket name is ADWCLab. This is all of the information you need to construct the Swift storage URL above; see the assembled example below.

  • Save the base URL you constructed to a note. We will use the base URL in the following steps.
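
For instance, plugging the example values above into the template gives the following base URL:

    https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/labs/ADWCLab/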

Creating an Object Store Auth Token

To load data from Oracle Cloud Infrastructure (OCI) Object Storage, you will need an OCI user with the appropriate privileges to read (or upload) data to the Object Store. The communication between the database and the object store relies on the Swift protocol and the OCI user Auth Token.

  • Go back to the Autonomous Data Warehouse Console in your browser. From the pull-out menu on the top left, under Identity, click Users.

  • Click the user’s name to view the details. Also, remember the username as you will need that in the next step. This username could also be an email address.

  • On the left side of the page, click Auth Tokens.

  • Click Generate Token.

  • Enter a friendly description for the token and click Generate Token.

  • The new Auth Token is displayed. Click Copy to copy the Auth Token to the clipboard. You probably want to save this in a temporary notepad document for the next few minutes (you’ll use it in the next step).

    Note: You can’t retrieve the Auth Token again after closing the dialog box.

Create a Database Credential for Your User

In order to access data in the Object Store, you have to enable your database user to authenticate itself with the Object Store using your OCI object store account and Auth Token. You do this by creating a private CREDENTIAL object for your user that stores this information encrypted in your Autonomous Data Warehouse. This information is only usable by your user schema.

  • Connected as your user in SQL Developer, copy and paste the credential-creation code snippet into a SQL Developer worksheet.

Specify the credentials for your Oracle Cloud Infrastructure Object Storage service: the username will be your OCI username (usually your email address, not your database username) and the password is the OCI Object Store Auth Token you generated in the previous step. In this example, a credential object named OBJ_STORE_CRED is created. You reference this credential name in the following steps.
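
The snippet itself is not reproduced in this post, but a minimal sketch using the DBMS_CLOUD.CREATE_CREDENTIAL procedure would look like the following; the username and Auth Token values are placeholders you replace with your own:

    BEGIN
      -- Creates the OBJ_STORE_CRED credential referenced in the following steps
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'OBJ_STORE_CRED',
        username        => 'your.name@example.com',           -- your OCI username, not the database user
        password        => '<your Object Store Auth Token>'   -- the token generated in the previous step
      );
    END;
    /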

  • Run the script.

  • Now you are ready to load data from the Object Store.

Loading Data Using the Data Import Wizard in SQL Developer

  • Click ‘Tables’ in your user schema object tree. Clicking the right mouse button opens the context-sensitive menu in SQL Developer; select ‘Import Data’:

When you are satisfied with the data preview, click NEXT.

Note: If you see an object not found error here, your user may not be set up properly to have data access to the object store. Please contact your Cloud Administrator.

  • On the Import Method page, you can click on Load Options to see some of the available options. For this exercise, leave the options at their defaults. Enter CHANNELS_CLOUD as the table name and click NEXT to advance to the next page of the wizard.

  • On the Column Definition page, you can control how the fields of the file map to columns in the table. You can also adjust certain properties such as the Data Type of each column. This data needs no adjustment, so we can simply proceed by clicking Next.

  • The last screen before the final data load enables you to test a larger row count than the sample shown at the beginning of the wizard, to see whether your earlier choices are suitable for your data load. Note that we are not actually loading any data into your database during these tests. Click TEST and look at the Test Results log, the data you would load, any errors, and what the external table definition looks like based on your inputs.

When done with your investigation, click NEXT.

  • The final screen reflects all your choices made in the Wizard. Click FINISH when you are ready to load the data into the table.

In the next post, Data Warehouse and Visualizations for Flight Analytics, we will review the data set, query the data and analyze a few problems with the help of visualizations.

Written by Sai Valluri and Philip Li


Data Warehouse 101: Provisioning

How to Get Started With Autonomous Data Warehouse

Our previous post Data Warehouse 101: Introduction outlined the benefits of the Autonomous Data Warehouse–it’s simple, fast, elastic, secure, and best of all it’s incredibly easy to spin up an environment and start a new project. If you read through the last post, you already know how to sign up for a data warehouse trial account and download SQL Developer and Data Visualization Desktop, both of which come free with the Autonomous Data Warehouse.

Sign up for a Free Data Warehouse Trial Today

This post will focus on the steps to get started using the Oracle Autonomous Data Warehouse. We will provision a new Autonomous Data Warehouse instance and connect to the database using Oracle SQL Developer.

How to Use Autonomous Data Warehouse with Oracle Cloud Infrastructure

STEP 1: Sign in to Oracle Cloud

  • Go to cloud.oracle.com. Click Sign In to sign in with your Oracle Cloud account.
  • Enter your Cloud Account Name and click My Services.

  • Enter your Oracle Cloud username and password, and click Sign In.

STEP 2: Create an Autonomous Data Warehouse Instance

  • Once you are logged in, you are taken to the cloud services dashboard where you can see all the services available to you. Click Create Instance.

Note: You may also access your Autonomous Data Warehouse service via the pull out menu on the top left of the page, or by using Customize Dashboard to add the service to your dashboard.

  • Click Create on the Autonomous Data Warehouse tile. If it does not appear in your Featured Services, click on All Services and find it there.

  • Select the root compartment, or another compartment of your choice where you will create your new Autonomous Data Warehouse instance. If you want to create a new Compartment or learn more, click here.

    Note – Avoid the use of the ManagedCompartmentforPaaS compartment as this is an Oracle default used for Oracle Platform Services.

  • Click on Create Autonomous Data Warehouse button to start the instance creation process.

  • This will bring up the Create Autonomous Data Warehouse screen where you will specify the configurations of the instance. Select the root compartment, or another compartment of your choice.

  • Specify a memorable display name for the instance. Also specify your database’s name; for this lab, use ADWFINANCE.

  • Next, select the number of CPUs and storage size. Here, we use 4 CPUs and 1 TB of storage.

  • Then, specify an ADMIN password for the instance, and a confirmation of it. Make a note of this password.

  • For this lab, we will select Subscribe To A New Database License. If your organization already owns Oracle Database licenses, you may bring those licenses to your cloud service.
  • Make sure everything is filled out correctly, then proceed to click on Create Autonomous Data Warehouse.

  • Your instance will begin provisioning. Once the state goes from Provisioning to Available, click on your display name to see its details.

  • You have now created your first Autonomous Data Warehouse instance. Have a look at your instance’s details here, including its name, database version, CPU count, and storage size.

Because Autonomous Data Warehouse only accepts secure connections to the database, you need to download a wallet file containing your credentials first. The wallet can be downloaded either from the instance’s details page, or from the Autonomous Data Warehouse service console.

STEP 4: Download the Connection Wallet

  • In your database’s instance details page, click DB Connection.

  • Under Download a Connection Wallet, click Download.

  • Specify a password of your choice for the wallet. You will need this password when connecting to the database via SQL Developer later, and it is also used as the JKS keystore password for JDBC applications that use JKS for security. Click Download to download the wallet file to your client machine.

    Note: If you are prevented from downloading your Connection Wallet, it may be due to your browser’s pop-up blocker. Please disable it or create an exception for Oracle Cloud domains.

Connecting to the database using SQL Developer

Start SQL Developer and create a connection for your database using the default administrator account ‘ADMIN’ by following these steps.

STEP 5: Connect to the database using SQL Developer

  • Click the New Connection icon in the Connections toolbox on the top left of the SQL Developer homepage.

  • Fill in the connection details as below:
    • Connection Name: admin_high
    • Username: admin
    • Password: The password you specified during provisioning your instance
    • Connection Type: Cloud Wallet
    • Configuration File: Enter the full path for the wallet file you downloaded before, or click the Browse button to point to the location of the file.
    • Service: There are 3 pre-configured database services for each database. Pick <databasename>_high for this lab. For example, if the database you created was named adwfinance, select adwfinance_high as the service.

Note: SQL Developer versions prior to 18.3 ask for a Keystore Password. Here, you would enter the password you specified when downloading the wallet from ADW.

  • Test your connection by clicking the Test button. If it succeeds, save your connection information by clicking Save, then connect to your database by clicking the Connect button. An entry for the new connection appears under Connections.
  • If you are behind a VPN or Firewall and this Test fails, make sure you have SQL Developer 18.3 or higher. This version and above will allow you to select the “Use HTTP Proxy Host” option for a Cloud Wallet type connection. While creating your new ADW connection here, provide your proxy’s Host and Port. If you are unsure where to find this, you may look at your computer’s connection settings or contact your Network Administrator.

Watch a video demonstration of provisioning a new autonomous data warehouse and connecting using SQL Developer:

NOTE: The display name for the Autonomous Data Warehouse is ADW Finance Mart and the database name is ADWFINANCE. These names are for illustration only; you can choose your own.

In the next post, Data Warehouse 101: Setting up Object Store, we will start exploring a data set and show how to load and analyze it.

Written by Sai Valluri and Philip Li


Data Warehouse 101: Introduction

What Is a Data Warehouse?

A data warehouse is a relational database that is designed for queries and analytics rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analytics workloads from transaction workloads and enables an organization to consolidate data from several sources.

So what’s an Oracle Autonomous Data Warehouse?

With an Autonomous Data Warehouse, you no longer need specialized database administration skills. You could be a marketing manager, financial analyst or HRIS administrator, and start an analytics project in minutes without involving IT. Anyone can easily explore data for deeper business insights and transform analytics into visual stories to guide executive decision making.

Sign Up for a Free Data Warehouse Trial

What’s Inside an Autonomous Data Warehouse?

Oracle Autonomous Data Warehouse is built on the market-leading Oracle database and comes with fully automated features that offload manual IT administration tasks and deliver outstanding query performance. This environment is delivered as a fully managed cloud service running on optimized high-end Oracle hardware systems. You don’t need to spend time thinking about how you should store your data, when or how to back it up or how to tune your queries.

We take care of everything for you. This video explains the key features in Oracle’s Autonomous Data Warehouse:

This ongoing series of blogs provides you with a detailed, step-by-step guide on Oracle’s Autonomous Data Warehouse. With Oracle Autonomous Data Warehouse, we make it quick and easy for you to create a secure, fully managed data warehouse service in the Oracle Cloud which allows you to start loading and analyzing your data immediately.

We will provide you with step-by-step instructions on how to sign up for a cloud trial, provision an Autonomous Data Warehouse instance, connect to it with SQL Developer, load data through the OCI Object Store, and analyze it with Data Visualization Desktop.

We’ll play with publicly available data sets and analyze them with the service, demonstrating the types of powerful visualizations you can create yourself.

How to Set up a Data Warehouse Trial

Let’s get started by setting up a free data warehouse trial:

  • You can quickly and easily sign up for a free trial account that provides:
    • $300 of free credits good for up to 3500 hours of Oracle Cloud usage
    • Credits can be used on all eligible Cloud Platform and Infrastructure services for the next 30 days
    • Your credit card will only be used for verification purposes and will not be charged unless you ‘Upgrade to Paid’ in My Services

Click on the image below to go to the trial sign-up page which will allow you to request your free cloud account:

Once your trial account is created, you will receive a “Welcome to Oracle Cloud” email that contains your cloud account password along with links to useful collateral. To sign into the Oracle Cloud, click here.

Logging in to Oracle Cloud, Selecting a Data Center for your Workshop

  • When you log in to the Cloud Console for Autonomous Data Warehouse, you will have the option of choosing the REGION for your new data warehouse instance.

To ensure you get the very best experience possible during this workshop, we recommend choosing our Phoenix data center when creating instances of Autonomous Data Warehouse in the North America data region, and our Frankfurt data center when creating instances in EMEA and APAC.

For example, the image below shows how you would select the us-phoenix-1 REGION.

If you do not see the US-Phoenix-1 region (or any other region), choose the menu item “Manage Regions”. Subscribe to the region you want (e.g. US-Phoenix-1). It will become available for selection after you refresh your browser.

Lab Prerequisites – Required Software

  • This workshop requires two desktop tools to be installed on your computer to complete the exercises in this lab.

1. SQL Developer

To download and install SQL Developer, please follow this link and select the operating system for your computer. This page also has instructions on how to install SQL Developer on Windows, Mac OS X, and Linux.

If you already have SQL Developer installed on your computer then please check the version. The minimum version that is required to connect to an Oracle Autonomous Data Warehouse Cloud is SQL Developer 17.4.

2. Data Visualization Desktop

Oracle Data Visualization Desktop makes it easy to visualize your data so you can focus on exploring interesting data patterns. Choose from a variety of visualizations to look at data in a specific way. Data Visualization Desktop comes included with Autonomous Data Warehouse.

To download and install Data Visualization Desktop, please follow this link and select the operating system for your computer. This page also has instructions on how to install DVD on Windows and Mac OS X.

If you already have Data Visualization Desktop installed on your computer then please check the version. The minimum version that is required to connect to an Oracle Autonomous Data Warehouse Cloud is 12c 12.2.5.0.0.

In the next post, Data Warehouse 101: Provisioning, we will provision an autonomous data warehouse.

Written by Sai Valluri and Philip Li


How USC’s Person Data Integration Project Went Enterprise-Wide

Like other institutions, USC has many different legacy and modern systems that keep operations running on a daily basis. In some cases, we have the same kind of data in different systems; in other cases, we have different data in different systems. The majority of our student data is in the Student Information System, which is over 30 years old, whereas the majority of our staff data is now in cloud software with APIs. At the same time, the majority of our financial data is in another system, and the faculty research data is in yet another system. You get the idea.

Naturally, all of these different systems make it hard to combine the data to make better decisions. A plethora of ways to extract data from these systems, along with different time intervals, makes it easier to make mistakes each time a report needs to be created. Even though many groups on campus need to extract and combine the same kind of data, each group needs to do the extraction work individually, which wastes time and resources across the entire organization.

There is no single system that will be able to accommodate all of the different functions USC needs to operate; USC simply does too much: Academics, Athletics, Healthcare, Research, Construction, HR, etc. Because of this, the solution is to systematically integrate the data from the different systems one time, so every department on campus can access the same data while the different systems continue to operate.

The Person Entity Project

At a high level, the Person Entity (PE) Project is essentially a database of everything applicable to a Person. This includes every kind of person – pre-applicant, applicant, admit, student, alum, donor, faculty, staff, etc. It aims to be a centralized, high-integrity database, encompassing data from multiple systems, that can supply data to every department at USC. Such complete, high-quality data encompassing every Trojan’s entire academic and professional life at USC can then motivate powerful decision-making in recruiting, admissions, financial aid, advancement, advising, and many other domains.

As one of the original members of the team that started the PE as a skunk works project, I have seen some of the challenges of transforming a small project to an enterprise-backed service.

Where It All Began

The project came about shortly after Dr. Douglas Shook became the USC Registrar. He wanted the data at USC to be more easily accessible to the academic units on campus as well as to the university for its business processes. The initial business processes for getting and using data were filled with manual text-file FTPs, Excel files emailed back and forth, nightly data dumps, and so on; providing delayed, incomplete, costly data was the norm.

The odds were against us because there had been numerous attempts in the past to transition out of our custom-coded legacy student information system that died mid-project. Though our project was not entirely the same as the failed ones, it was similar enough that if we were able to move the data into a relational database and make it more accessible to other systems, the eventual transition would be a lot easier. There was not a lot of documentation on the student information system, and there was 30 years’ worth of custom code and fields in the system. We were also using a message broker for the first time to connect to a legacy system like ours.

Working on this project was rewarding and worthwhile because we were able to not only overcome technical hurdles, but get some early wins for the project such as providing data for a widely used applicant portal and providing updated data faster than the existing method. We steadily grew our list of data consumers and now our service provides data for the entire University—schools and administrative units.

If you are currently working on a small experimental project in a large organization, here are some key drivers to keep in mind as you embark on your journey.

1. Create your own success criteria.

Good success criteria for a small project are the small proofs of concept that can be completed for the people on campus who are interested in your services or have not had all of their needs met by one of the enterprise services. The initial ROI will definitely not be as high as other projects, so don’t measure things by the monetary amount. For us, the success criteria were things like successfully moving data from one system to the other and reducing the number of hours a process used to take to get to the same result. We also had a timeline of the “firsts” that the project had. For instance, first successful push of data, first database tables created, first triggers, first procedures, first production database, first integration with another system, first data client, etc. We could show the sponsors that we were making progress, and we had a growing list of people who were excited about the problems we could solve for them. By keeping the lines of communication open about our progress, we were given the time and funding we needed to keep working on the project.

2. Minimize spending to maximize your budget.

Don’t be afraid to ask others in your organization for help. In my organization, we piggybacked on other internal organizations’ licenses to lock in lower prices on the renewals versus signing on as a new customer. Another way to cut costs is to use student workers or interns to do research and work. Student workers bring a different set of skills along with challenges, but have been instrumental in our project. In our case, the project initially started with only one full time member and three to four student workers; I was one of them. We did the data modeling at first, some requirements gathering, and the coding as well. Though our work was far from perfect and we needed some guidance from full-time employees, the entire team was able to gain traction on the project by accomplishing tasks with the help of the students at a very low cost.

I would also suggest doing a cost-benefit analysis for software purchases. For example, we decided to purchase a professional data modeling tool even though we could have continued using Visio, because the monetary and time cost of fiddling with Visio would be higher than the license fee in the long run. Lastly, find others willing to partially sponsor the project with equipment. For example, you can ask other groups if they can give you a slice of their virtual machine while you are just getting started developing.

3. Work in bite-sized chunks.

It’s easy to be overwhelmed if your project scope is large. That’s why it’s extremely useful to do proofs of concept and pilot projects. For us, the project was so large that if we had tried to plan everything out at the very beginning, we would have been too overwhelmed to even start. The goal of the project was to migrate all of the data over to the Oracle relational database. We needed to divide the system into different data domains and start with one or two. We chose to do Admissions and Person first. This is a little unconventional, I think, but we just deployed some tables and created some procedures to populate them so we could get started before we completely finished the model. As a small project it is usually okay to be wrong, because you can just start over!

4. Balance selling/promoting the project with working on the project.

The project sponsor is an important member of the team on this point. If there is no interest in your project, it will die; but if there is too much interest and too many expectations, you will fall short of those expectations and people will lose faith in the project. Both are important, but too much of one could be fatal. Our sponsor pitched the project as the solution to the problems USC was trying to solve, explained how much faster and easier it would be, and got people excited about using it and lining up to be the next customer. We started partnering with other groups on campus and doing proofs of concept together.

5. Base your design choices on mass adoption and impact.

When working on the project, you need to think about what the best design choice is once the project is in production. This is easier said than done because the future is unknown. Essentially, don’t knowingly make design choices that will cripple your project in the future once it is in production. Think of the potential benefits to the organization if what you are working on is implemented at scale. It is a balancing act between getting things out and getting things perfect; find the balance that your organization finds acceptable. I think it is different for each feature and, in our case, for each data field. In other words, prioritize the features so you can make sure to pay special attention to the high-priority ones.

6. Document everything.

Documentation increases the speed of onboarding new members and helps you remember the rationale behind the decisions you made at the time. This will save you a lot of time down the road. Now that we are on year five of the project, sometimes we will come back to old code and tables and wonder why we designed them the way we did; this is not ideal, and we should have done a better job at documentation. This is less of an issue with the newer portions of the project, but in the beginning we did not do much documentation and this has slowed us down recently. Members of the team could leave during a critical stage of the project, and without documentation it is very hard to progress at the same rate. Another thing is to at least version your documentation, so even if you don’t make it a habit to update it after things change in production, you will at least know when it was last updated.

7. Embrace change.

There’s always a lot of change to be expected when working on a small project. Decisions about the future of the project can be made almost any time, especially since there are very few services that are being provided in the beginning. The decisions could be made without you in the room as well. Funding and resources can be reallocated which can severely impact your project. We were lucky in that we didn’t have any of these happen to us but I think that is because we were able to overcome the major technical roadblocks in the beginning.

Now that we are an enterprise-backed service, things have definitely changed, and though I am proud that the project has progressed so far, I am a little nostalgic of the time when we were accomplishing milestones of what felt like every week. You should definitely enjoy the time when the project is just starting and small because it will never get smaller after it gains momentum.

The Person Entity project team now has around three full-time members, including one member strictly focusing on data quality. The project is now part of a larger data team, with a portfolio of data systems to manage such as Tableau and Cognos. We provide some or all of the data and dashboards for the different schools and administrative units on campus, such as Financial Aid, Registrar dashboards, and Admissions. We now have integrations with cVent, Campaign!, the mandatory education modules, and myUSC, with plenty more on the way. We are still far from completing our task of extracting all the data from our legacy student information system, and we will steadily continue to work on it, as well as moving our entire infrastructure to the cloud.

Guest Author, Stanley Su is currently a data architect on the Enterprise Data and Analytics team at USC. Stan was one of the original members on the Person Entity project, an enterprise data layer with integrated data from multiple systems at USC, and is the current lead on the project. During the Fall semester, he also TAs for a database class at the Marshall School of Business. Stan is interested in using technology to increase business efficiency and reduce repetitive tasks at work.


Two Paths Towards Data-Driven Transformation

In recent years, I have worked with organizations in many different industries—banking, pensions, higher education, real estate, research, and pharma—as well as clients from different countries in Europe and North America. Out of these companies, only a few have succeeded in a true data-driven transformation. Most struggle just to get incremental changes through.

In my experience, I’ve seen two paths taken towards a successful data-driven transformation: the evolutionary path and the radical path. I’ll explain what each entails below.

The Evolutionary Path

Many organizations are working with an existing business model that they need to maintain. These companies may be applying data as a way to optimize that business model. This path is a safe one to go down, but change may not actually happen. If you’re in this situation, it’s crucial that you follow these five steps:

1. Define where applying data will benefit the most and go for it.

Apply a data strategy tool and find out where your most urgent business needs can be met with the lowest degree of complexity. A good place to deploy the first data science project is often in the marketing or customer oriented functions in your company. The benefits in these departments are quick and obvious, so you can gain momentum for the rest of the organization.

2. Make sure you invest in enough data science capability.

It is crucial that you gain critical mass quickly by either hiring enough data scientists or engaging with external suppliers. The most important first hires are data science business translators who can maximize impact straight away. It can be difficult to identify the specific technical skillset to hire for before you have executed your first projects or experienced commercial success. Eventually, you’ll need to hire the right technical staff and build rapidly while you have momentum.

3. Conduct small experiments in your organization where potential benefits are greatest.

Keep the emphasis on fast learning, success, and failure. Don’t invest too heavily in long-term projects, systems, and backend work before you have a proven impact. Use an agile approach and make sure to adapt along the way.

4. Develop a capture team that enables you to implement the results from experiments as soon as possible.

It is a waste of opportunities and delegitimizing to your data science strategy if you don’t implement. It is crucial that your data science strategy gets the credit in your organization—and to do so you have to show results. Enable sufficient resources to deploy the good results of the experiments in the business lines. Experiments are great, but real impact is even greater.

5. Measure the impact on sales.

Apply control groups where the data-driven approach outperforms the traditional approach. I have seen good examples of this in banking and telecom. If you are able to show on a weekly basis that the data-driven approach outperforms the traditional approach by 186%, the organization will soon be convinced.

The risk of taking the evolutionary path is that you end up not moving fast enough. Your traditional business model will prevail if you don’t inject change rapidly.

The Radical Path

The radical path is challenging, but when successfully executed, it is also the most rewarding. This is the path that can get you to potentially disrupt your industry. However, you also risk disrupting yourself. If you are not a startup, you probably need to earn money while you identify your new future. Therefore, you need a strategy to keep your existing business successful while reinventing your business model from scratch. If this is where you’re at, you should take the following steps:

1. Build an organization independent from the mother company.

In order to develop a radically new business model while maintaining the old one, you should consider setting up an independent organization that is able to do all the things that you don’t do in the mother company. An example of this is Leo Innovation Lab, an independent unit established by Leo Pharma. They act completely on their own, don’t depend on legacy systems, and are in business to disrupt the pharma industry. They develop completely new data-driven business models and also digital products.

2. Focus on how data delivers impact and value before you focus on backend infrastructure and models.

Great ideas should always be tested in the market. I have met several tech startups who forget the importance of sales. They receive initial funding and have a great idea, but they run out of money. It’s a delicate balance. On the one hand, you need to identify interest from clients and consumers before you develop the technical solution, and on the other hand you need to be able to deliver as soon as the clients are there. Get internal or external resources to enable you to understand how to develop your business model in the best possible way.

3. Avoid being locked in by systems, programs, and licenses.

If you can, aim for open-source software and take advantage of the cloud. One of the benefits of living like a startup is that you are not bound by legacy IT infrastructure. This makes you flexible and agile. You can save your time and resources and instead deploy simple solutions and grow them in smaller pieces. When you are not relying on one system to fix everything, you can easily change tools and policies down the line without going through bureaucratic processes. You also don’t get stuck having to wait six weeks for a new template in an inflexible system handled by a central IT department, when you can build it instantly in a much cheaper and better-performing Software-as-a-Service solution.

4. Make sure that you cover the entire data science skills value chain.

Your people value chain should be comprehensive to maximize your chances of commercial success. The value chain contains four main categories:

Back-end developer/IT infrastructure skills: This skillset includes identifying, extracting, and processing the data needed to solve your business issues.

Programming skills: This skillset includes building quantitative models such as machine learning models, and development so models can be deployed in production.

Front-end developer skills: This skillset includes creating visualizations and dashboards to make the analytics easily applicable to the people in the organization.

Business data transformation skills: This skillset includes developing data strategies, identifying value propositions, change management, and business development around data science–a crucial part to making sure data generates financial value to your organization.

5. Keep an eye on your business KPIs.

Don’t waste time on developing a business plan. Instead, think, execute, test, revise, and capture benefits early on in every development project. Identify and focus on the most important KPI, and get rid of any that don’t have traction or generate value. In some companies it might be turnover, customers, market penetration, and funding. In others, it might be users, paying users, and conversion rates. Whatever it is, choose the most important things for your own business and make sure to communicate the progress to your main stakeholders.

Conclusion

In order to determine the most opportune path for your business, you should ask yourself some key questions. Will your existing business model survive in the long run through a redevelopment? If yes, you can go down the evolutionary path. However, if you risk getting fundamentally disrupted, you should go for the radical path while keeping the mothership on track. The other question relates to your clients. Will you risk disrupting your clients’ market by developing new products and business models with data? If so, you will not be able to keep them on board and continue down the evolutionary path. Instead, you will end up staying in the status quo, in which case you should also aim for the radical path.

Guest Author, Kristian Mørk Puggaard is Managing Partner in Copenhagen and Stockholm-based DAMVAD Analytics where he advises organizations on developing and executing data science strategies with a business impact. Kristian has worked for large MNCs such as The Boeing Company, Novo Nordisk, ATP Group and Skanska as well as for governments, investors, and leading universities across Europe. Kristian holds a M.Sc. in Political Science from Aarhus University and Institut d’Études Politiques de Paris and an Executive Certificate in Global Management from INSEAD.


Disruptive Effects of Cloud Native Machine Learning Systems and Tools

In 1970, E. F. Codd proposed the relational database. Fast forward to today, and completely different cloud native architectures for application development have emerged that take advantage of native cloud properties. One cloud native paradigm is serverless. With serverless architecture, the entire stack is run in a manner that is inherently distributed and event-driven. This architecture can also be referred to as Functions as a Service (FaaS). Functions written in Python, Go, Java, or another language run in response to events, and resources are elastically provisioned. Additionally, instead of relational databases, which expect consistency, serverless architectures are designed around “eventual consistency.”

With cloud-native machine learning, the core foundation of the cloud allows an enterprise to leverage the operational expertise of the cloud provider and the system automation of the machine learning system. Tasks that do not add value to an organization’s ML and AI strategy, such as feature engineering, ETL (extract, transform, load), and model selection, are automated away. This is accomplished by removing tasks that add little or no value to creating the final prediction model or that could be better accomplished by machines. Some examples of this are:

  • Model selection (i.e., random forest vs decision tree vs logistic regression)
  • Hyperparameter tuning
  • Splitting test, train, and validation data
  • Cross-validation folding

In a nutshell, there is a big demand for user-friendly machine learning systems that simplify the often unneeded complexity of training machine learning models.

Cloud Native Machine Learning Tools

There are a few emerging descriptions of cloud native machine learning tools. Cloud native machine learning tools are defined as tools that have their origins in the cloud. These tools take advantage of the inherent features of the cloud such as elasticity, scalability, and economies of scale. Two of the most popular descriptions are “managed machine learning” and “automated machine learning.”

In the case of managed machine learning systems, the scalability of training a model and managing the entire system is automated. A good example would be a production Hadoop or Spark system that is utilizing a cluster of thousands of machines. Errors in machine learning training and inference will need to be debugged by Hadoop experts. Similarly, the cluster may need to be optimized to correctly scale up and down the resources needed for a training job to optimize cost. In a sophisticated, cloud-native machine learning system, both the training scalability and serving out the inference model can be automatically scaled. The model can also automatically A/B test different versions in production and then automatically switch to the model that performs best according to customer needs. Data scientists in the organization can then focus on solving enterprise needs instead of implementing and maintaining machine learning infrastructure.

Automated machine learning (AutoML) goes one step further. It can completely automate training a machine learning model and serving it out in production. It accomplishes this by training models from labeled columns (say, images) and automatically evaluating the best model. Next, an AutoML system registers an API that allows for predictions against that trained model. Finally, the model will have many diagnostic reports available that allow a user to debug the created model—all without writing a single line of code.

Tools like this drive AI adoption in the enterprise by empowering and democratizing AI to all employees. Often, important business decisions are siloed away in the hands of a group of people who are the ones with the technical skills to generate models. With AutoML systems, it puts that same ability directly into the hands of decision makers who create AI solutions with the same ease that they use a spreadsheet.

Conclusion

High-level AI and ML systems that are directly accessible by non-technical users have arrived. As William Gibson said, “the future is here—it just isn’t evenly distributed.” Any company can immediately benefit from using AutoML and managed ML systems to solve business problems. Even if your company isn’t using them, your competitors most likely are.

Guest author, Noah Gift is a lecturer and consultant in both the UC Davis Graduate School of Management MSBA program and the Graduate Data Science program, MSDS, at Northwestern. As the founder of Pragmatic AI Labs, he also provides machine learning, cloud architecture, and CTO-level consulting to startups and other companies. His most recent book is Pragmatic AI: An Introduction to Cloud-Based Machine Learning (Pearson, 2018).


6 Steps to Data-Driven Transformation

We’re now well into the Fourth Industrial Revolution. The First Industrial Revolution was about steam and railroads, the Second was about electricity, and the Third was brought about by the Internet. AI, the basis of the Fourth Industrial Revolution, will completely change the way business is done and companies are run in the next five to ten years, just as the Internet has done in the last ten. The transformation will be bigger than any previous revolution has brought about.

Even if you feel ready to turn your organization into a data- and model-driven enterprise, you may be unsure where to start. The following six steps are derived from my work with enterprises across various industries that have transformed successfully, and can guide you in your own transformation journey.

1. Set a Data Strategy

You already sit on a lot of hidden information about your customers, clients, and business that can help you transform your organization and take it to the next level if—and only if—you treat your data as a strategic asset informing all your business decisions.

When I mention this concept to business leaders, their immediate response is often, “Hey, this means I’ll have to realign the entire organization. How would that work? How can I align all my 100,000 people with a single data strategy?” But setting data strategy is different from goal-setting. With goal-setting, we start at the top. Everything must orient to the goals top executives have set for the entire organization for the year. Data strategy, however, can be different for each sub-team and still contribute to the solution of your top business problems. These different strategies don’t need to involve a single set of constraints.

2. Democratize Your Data

The second step involves democratizing your data throughout the organization. This is important because everyone, from the barista to the CEO, makes business decisions on a daily basis. We know that data-driven decisions are better decisions, so why wouldn’t you choose to provide people with access to the data they need to make better decisions?

Let’s be practical, however. We live in a world of constraints and regulations. Not all organizations can completely democratize their data, particularly in industries such as banking, insurance, and healthcare, where a data leak would be catastrophic for privacy reasons and would introduce direct business risk and liability. You also don’t want to share all your data with the entire organization in case proprietary information leaks out and costs you your competitive advantage.

So how can we democratize data intelligently? The answer is to figure out how to provide relevant data to relevant decision-makers so they can enhance their decision-making. Look at people’s roles, identify what decisions they make on a daily basis, and then provide them with the data that will support these decisions. Providing the right data to the right people will enhance their capacity to make the right decisions at the right time.
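
As a concrete illustration, “the right data to the right people” can start as nothing more than an explicit mapping from roles to the datasets that support their daily decisions. The roles and dataset names below are hypothetical.

    # Hypothetical sketch: map roles to the datasets that support their daily
    # decisions, and check access before serving data. Names are illustrative only.
    ACCESS_POLICY = {
        "store_manager":   {"daily_sales", "inventory_levels"},
        "marketing_lead":  {"campaign_performance", "customer_segments"},
        "finance_analyst": {"daily_sales", "general_ledger"},
    }

    def can_access(role: str, dataset: str) -> bool:
        """Return True if the given role is allowed to read the dataset."""
        return dataset in ACCESS_POLICY.get(role, set())

    print(can_access("store_manager", "daily_sales"))      # True
    print(can_access("store_manager", "general_ledger"))   # False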

3. Build a Data-Driven Culture

Step three is about creating a data science and analytics culture within your organization. Leaders must incentivize employees to cultivate the habit of looking at data at the moment they make a decision, which I call “the point of action.” This is tightly linked to the corporate culture you build. I often suggest that executives get creative and set up competitions and rewards for employees who champion data.

A second component of this principle requires you to bridge the gap between technical and non-technical teams so they can work together seamlessly to realize and operationalize machine intelligence. This is key to increasing ROI. Too often, these teams don’t understand each other or know how to work together, and that is a major problem to face and overcome.

One remedy is educating both teams about each other’s roles and functions. The second is a smart, highly collaborative, embedded organizational structure that requires the two teams to interact during the normal course of business. The third is creating a semi-technical role that bridges the two sides of the business.

4. Accelerate Speed to Insight

The idea behind this principle is to democratize information and insight about your business throughout the organization. If you provide high-speed, dynamic insight to decision-makers, they will get into the habit of making data-driven decisions. A data-driven organization, after all, is one that cultivates a culture of looking at data to make every business decision. To get there, it’s important to use your data to generate as much insight as possible.

One of the simplest and best ways to unleash insight throughout the organization is to use dynamic dashboard tools that provide insight into and beyond the data. Many organizations do not emphasize the importance and usefulness of such solutions. Static summaries and reports are no longer dynamic enough to inform decision-making.

5. Measure the Value of Data Science

The fifth step of data-driven transformation is about taking action: measure the value and impact of data science and machine learning on your business and make this metric one of your key performance indicators (KPIs). In doing this, prioritize the data science investments with the highest potential ROI. A typical chief information officer or chief data officer at a Fortune 50 or Fortune 200 company receives between 2,000 and 2,500 requests a year for different data products. People within the organization think all of these should be acted upon, which is rarely feasible.

How should you prioritize? Look at an investment’s feasibility and impact. Feasibility refers to whether you have the data or not. Is the data clean and labeled? Do you have the talent, resources, and processes to get the project started? Impact refers to financial contribution. If you’re going to invest in this project, will it genuinely revolutionize your business over time? Will it add millions of dollars, or will it add $10,000?
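
A lightweight way to apply these two dimensions is to score each candidate project on feasibility and impact and rank the portfolio by the product of the two. The projects and scores below are invented for illustration.

    # Hypothetical sketch: rank candidate data science projects by
    # feasibility x impact so the most achievable, highest-value work is piloted first.
    projects = [
        {"name": "churn prediction",   "feasibility": 4, "impact": 5},
        {"name": "invoice OCR",        "feasibility": 5, "impact": 2},
        {"name": "demand forecasting", "feasibility": 2, "impact": 5},
    ]

    for p in projects:
        p["priority"] = p["feasibility"] * p["impact"]

    for p in sorted(projects, key=lambda p: p["priority"], reverse=True):
        print(f'{p["name"]:<20} feasibility={p["feasibility"]} '
              f'impact={p["impact"]} priority={p["priority"]}')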

Think about these two dimensions before you submit a request to your CIO for a project you think might be a good use case. Particularly when starting the journey, you don’t want everyone to submit hundreds of use cases. You want to grab one with high feasibility and impact that will be able to transform your organization quickly.

Start by piloting a project. If the magnitude of change looks promising, pour more money into it: invest more and hire more. Then operationalize it throughout the organization.

6. Implement a Data Governance Framework

This final step is all about the environment in which your data sits. Your data assets must be secure and private. This is a priority, and all large corporations should have thoroughly established data governance, security, and privacy by now. In my experience, however, many of the companies I work with are still far behind the curve. While the importance of safeguards should go without saying, it still needs to be said: many organizations haven’t yet instituted them.

Organizations must start by gaining high visibility into their data flows, from the point source to the final destination within the enterprise. This entails visualizing and quantifying the various data routes, and understanding the different data types and the tools that the data interacts with. Only then can organizations apply the policies necessary to ensure governance from the outset. Approaching governance and security in this way helps organizations not only manage their data effectively, but also capitalize on it with confidence, knowing that quality and security are uncompromised.
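
One way to start building that visibility is to record each hop a dataset takes as an edge in a directed graph and then enumerate the full routes from source to destination, so every hop can be checked against policy. The system names below are hypothetical.

    # Hypothetical sketch: represent data flows as a directed graph from source
    # systems to end destinations, then list every route for governance review.
    from collections import defaultdict

    flows = [
        ("pos_terminals", "staging_db"),
        ("staging_db", "data_warehouse"),
        ("data_warehouse", "finance_dashboard"),
        ("data_warehouse", "ml_feature_store"),
    ]

    graph = defaultdict(list)
    for src, dst in flows:
        graph[src].append(dst)

    def routes(node, path=()):
        """Yield every path from `node` to a terminal system."""
        path = path + (node,)
        if not graph[node]:
            yield path
        for nxt in graph[node]:
            yield from routes(nxt, path)

    for route in routes("pos_terminals"):
        print(" -> ".join(route))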

Move Towards Data-Driven Maturity

Initially, applying these six steps may appear daunting. No doubt it will take you a while to start thinking about maximizing the use and protection of data in every decision you make. Nonetheless, it can be done.

As an executive, you are where positive transformation starts; from you, it trickles throughout the organization. Before long, you will begin to see more people understanding and living by these principles, and your organization will be on its way to data-driven maturity.

Guest author Nir Kaldero is dedicated to bringing the benefits of data science and machine intelligence into business. As the head of data science at Galvanize, Inc., he has trained numerous C-suite executives from Fortune 200 companies in how to transform their companies into data-driven organizations by applying the technology behind the Fourth Industrial Revolution.

For more advice on the six principles of data-driven transformation, check out Nir Kaldero’s book, Data Science for Executives, available on Amazon.

Oracle Autonomous Database at OpenWorld 2018

OpenWorld 2018 brought together more than 2,500 sessions, with customer and partner speakers from around the world talking about how they’ve found success. It’s the event of the year you don’t want to miss, but if you did, here’s a quick recap of the Oracle Autonomous Database news.

Autonomous Database Keynote

At the OpenWorld keynote, Oracle Executive Chairman and CTO Larry Ellison spoke about his vision for a second-generation cloud that is purpose-built for enterprise—and is more technologically advanced and secure than any other cloud on the market.

While many first-generation clouds are built on technology that’s already decades old, Oracle’s Gen 2 Cloud is built with the technology of today. Its unique architecture and capabilities deliver unmatched security, performance, and cost savings. It’s also the technology that’s used to build Oracle Autonomous Database, the industry’s first and only self-driving database.

He debuted a new ad, too. Sit back, relax—and take Oracle Autonomous Database for a spin.

Larry Ellison also talked about new deployment options, including dedicated Exadata Cloud Infrastructure and Cloud at Customer.

Dedicated Exadata

Ellison shared benchmark test results that highlighted the performance gap between Oracle and Amazon.

Database Performance Metrics

Larry Ellison said on stage, “We’re still 80 times faster than Amazon’s data warehouse.”

Autonomous Database Performance Metrics

And Andrew Mendelsohn, executive vice president of Oracle Database, said, “Oracle Autonomous Database has redefined data management. Our customers see significant advantages in using our cloud database services to take the complexity out of running a business-critical database while delivering unprecedented cost savings, security, and availability.”

Autonomous Database Truly Elastic

Autonomous Database in the Media

During OpenWorld, Monica Kumar, Oracle’s VP of Product Marketing for Database, also gave an interview with John Furrier of CUBEConversation about Autonomous Database. She highlighted how machine learning makes data management smarter in the cloud, and how companies can use that to get more value from their data with Oracle Autonomous Database.

Autonomous Database Customers and Awards

At the Oracle Excellence Awards, we presented awards to an extraordinary set of customers and partners who are using Oracle solutions to accelerate innovation and drive business transformation.

We heard from customers that are using Oracle Autonomous Database to increase agility, lower costs, and reduce IT complexity. These companies used Autonomous Database to speed access to data for business analysts and data scientists, and to take the load off their database administrators. Many of them are focusing on automation to improve their future capabilities.

For a leading oil and gas company, for example, a 24/7 business means that patching is difficult. A self-patching system like Autonomous Database makes their processes drastically simpler, and they save costs, too, because they don’t pay for compute they aren’t using. They told us, “The elasticity is brilliant.”

A large newspaper company in Argentina is using Autonomous Database to improve their access to analytics. They moved to Oracle Autonomous Database so their line of business could control the analytics better, and get faster access to it.

Here was the full lineup of winners:

Data Warehouse and Big Data Leader of the Year

  • Cheolki Kim of Hyundai Home Shopping
  • Conny Björling of Skanska AB
  • Vicente Alencar Junior of Nextel
  • Benjamin Arnulf of The Hertz Corporation

Cloud Architect of the Year

  • Steven Chang of Kingold Group Co., Ltd
  • Erik Dvergsnes of AkerBP
  • Leonardo Simoes of UHG United Health Group
  • Dave Magnell of Sabre

CDO of the Year

  • V. Kalyana Rama of CONCOR
  • Luis Esteban of CaixaBank
  • Pablo Giudici of AGEA – Grupo Clarin
  • Steve Chamberlin of QMP Health

Conclusion

We’re committed to making Oracle Autonomous Database the best cloud database on the market. So far, we’re succeeding. In the next few weeks, we’ll continue to highlight Oracle Autonomous Database, Autonomous Data Warehouse, and what DBAs should know about our exciting product lineup.
