Oracle Autonomous Data Warehouse: The world’s first and only self-securing database cloud service

Part 3 of a 3-Post Series: The world’s first and only self-securing database cloud service

Just thinking about a data breach will make any IT or security professional sweat, and for good reason. We know that the cost of a data breach goes far beyond a dollar figure: the loss of data impacts brand, customers, partnerships, and more. And when you add managing data in the cloud to the mix, security can become even more complicated.

A recent security report from Oracle found that confusion about cloud and tenant ‘shared responsibility security models’ (SRSM) has come at a serious cost. Over a third of organizations participating in this year’s research shared that such confusion has led to the introduction of malware (34%), and a similar number of respondents (32%) noted it has exposed them to increased audit risk.

This lack of a clear understanding of the shared responsibility security model has also put data at risk, with 30% of organizations reporting that, as a result, unauthorized individuals accessed data. Additionally, 29% of respondents reported an unpatched or misconfigured system was compromised as a result of confusion, highlighting the fact that public-facing cloud infrastructure is constantly subject to botnet attacks exploiting improperly configured public services.

Remember, your cloud environment is only as secure as you design it. For example, according to recent statistics, as many as 7% of all S3 buckets are completely publicly accessible without any authentication, and 35% are unencrypted. And if the incidents of the past six months or so are any indication, these aren’t low-value data stores: according to recent Risk Based Security research, in the first half of 2019 alone, 3,813 breaches were reported, exposing more than 4.1 billion records.

Before we go much further, it’s important that you take a look at Part 1 and Part 2 of our three-part series about the Oracle Autonomous Data Warehouse. But, if you’re skipping right to security, we don’t blame you — it’s an important and timely topic.

“Data is your most critical asset, but could become your biggest liability if not properly secured,” says Vipin Samar, senior vice president of Oracle Database Security. So, what makes working with the Oracle Autonomous Data Warehouse solution so unique? Based on my observations, it’s the security aspect. The security is built into the DNA of the entire architecture. I’ll explain how.

Oracle Autonomous Data Warehouse stores all data in encrypted format. Only authenticated users and applications can access the data when they connect to the database. From there, all connections to Autonomous Data Warehouse use certificate-based authentication and Secure Sockets Layer (SSL) encryption. This ensures that there is no unauthorized access to Autonomous Data Warehouse and that communications between the client and server are fully encrypted and cannot be intercepted or altered. So even if there is a malicious attack, a man-in-the-middle attack for example, the fully encrypted communications cannot be read, keeping Autonomous Data Warehouse operating safely.
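
To see what this looks like from the client side, here is a minimal Python sketch using the open source python-oracledb driver, connecting over mutual TLS with the credential wallet downloaded from the console. The service name, paths, and passwords are placeholders, not values from this post:

```python
import oracledb  # python-oracledb, Oracle's open source driver

# Hypothetical values: replace with your own service name and wallet paths.
connection = oracledb.connect(
    user="ADMIN",
    password="<admin-password>",
    dsn="myadw_low",                       # TNS alias from the wallet's tnsnames.ora
    config_dir="/opt/oracle/wallet",       # directory containing tnsnames.ora
    wallet_location="/opt/oracle/wallet",  # certificate wallet for mutual TLS
    wallet_password="<wallet-password>",
)

# The session is authenticated with certificates and encrypted end to end;
# a quick sanity check that the encrypted channel works:
with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())
```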

Here’s the other really cool part: You do not need to do any manual configuration to encrypt your data and the connections to your database. Autonomous Data Warehouse does this for you – autonomously. Why is this important? Because some cloud providers don’t actually encrypt your storage repositories or buckets. As mentioned earlier, statistics by security firm Skyhigh Networks indicate that 35% of all S3 buckets are unencrypted. And that lack of security has already impacted major organizations.

Beyond autonomous encryption, Autonomous Data Warehouse uses strong password complexity rules for all users based on Oracle Cloud security standards. Believe it or not, password policies are still a problem for a lot of organizations! As a result, you see breaches that could have been prevented if users updated their passwords more frequently and created ones that are more complex. Strong password complexity rules ensure that your most critical data points never waver from a strict security policy.

You can further restrict connections by specifying a network Access Control List (ACL). With a network ACL in place, the Autonomous Data Warehouse database accepts connections only from addresses on the list and rejects all other clients. This means that malicious access attempts and even spoofing attacks won’t get through. Network ACLs let you granularly lock down which devices have access to the ADW database.
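
For Autonomous Data Warehouse, the ACL is a property of the database instance and can be managed from the console or programmatically. Here is a hedged sketch using the OCI Python SDK; the OCID and addresses are placeholders, and `whitelisted_ips` reflects the SDK field for the instance ACL as I understand it:

```python
import oci

# Reads credentials from ~/.oci/config (DEFAULT profile).
config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Placeholder addresses: only these IPs/CIDR blocks may connect.
details = oci.database.models.UpdateAutonomousDatabaseDetails(
    whitelisted_ips=["203.0.113.10", "198.51.100.0/24"]
)

db_client.update_autonomous_database(
    autonomous_database_id="ocid1.autonomousdatabase.oc1..example",  # placeholder OCID
    update_autonomous_database_details=details,
)
```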

Aside from implementing security best practices around your data, Oracle Autonomous Data Warehouse does something else that’s unique. It self-secures your data warehouse, which if you ask me, is pretty invaluable.

The World’s First and Only Self-Securing Autonomous Warehouse

We discuss this more in Part 1 and Part 2 of our Autonomous Data Warehouse blog series. But it’s important to note that there are a lot of powerful autonomous processes that the service will take care of for you.

  • The Oracle Autonomous Data Warehouse is self-driving, meaning it’s a fully managed data warehouse cloud service that takes care of network configuration, storage, and database patching and upgrades for you. No customer DBA required.
  • The Oracle Autonomous Data Warehouse is also self-repairing, with high availability built into every component and completely automated backups. Protection from downtime is purpose-built into the core of the design. You get your nights and weekends back knowing you’ve got a data platform that’s actively working to keep your database operating.

Oracle Autonomous Data Warehouse Is Self-Securing

Autonomous Data Warehouse is the very first self-securing data warehouse cloud service of its kind. Self-securing starts with the security of the Oracle Cloud infrastructure and database service. Within the Autonomous Data Warehouse ecosystem, which is built on Oracle Cloud Infrastructure, security patches are automatically applied as needed, narrowing the window of vulnerability and mitigating the risk of an unpatched system.

Furthermore, patching covers the full stack: firmware, operating system (OS), clusterware, and database. No steps are required from the customer side. Gone are the days of manually tracking patch releases or chasing down multiple patches across different layers of the stack. It is exactly what the term implies: self-securing.

The Oracle Autonomous Data Warehouse self-securing service takes care of the security health of the infrastructure, including the database, automating the entire process and leaving nothing to chance or human error. From there, the ecosystem encrypts customer data everywhere: in motion, at rest, and in backups. Encryption keys are managed automatically, again without requiring any customer intervention. And, unlike some other data solutions on the market, encryption is on by default and cannot be turned off. In the age of rampant data breaches, your data is simply too important to be left unencrypted.

Finally, administrator activity on Oracle Autonomous Data Warehouse Cloud is logged centrally and monitored for abnormal activity. Yes, you heard that correctly: the Autonomous Data Warehouse service will scan for and evaluate abnormal behavior and anomalous user access. This means that Autonomous Data Warehouse enables database auditing using predefined policies so that customers can view logs of any abnormal access.

As proactively and intelligently secure as the Autonomous Data Warehouse is, customers should still employ security best practices around the workloads and data they’re deploying. According to Vipin Samar, senior vice president of Oracle Database Security, “Securing databases in the cloud is a shared responsibility, with Oracle securing the infrastructure and network; monitoring the OS and network activity; applying OS and database patches and upgrades; and providing encryption, appropriate separation of duties, and various certifications.”

Samar goes on, adding, “The customer organization still needs to secure its applications, users, and data. It needs to ensure that its applications can thwart attacks targeted at the company, that its users follow security best practices, and that its sensitive data is protected using appropriate controls. In some sense, these requirements are no different from those for an organization’s current on-premises databases, except that Oracle has already handled the security infrastructure part.”

Automatically Secure, Autonomously Intelligent

The scale, speed, and ferocity of modern threats will probably keep business leaders and technologists on edge for the foreseeable future. However, the automated security technologies included as part of the Oracle Autonomous Data Warehouse solution and cloud-based identity management can help organizations manage the risks.

Attacks against your data and infrastructure can come in many forms. Malicious actors like nation states, advanced persistent threats, organized crime, and even accidental (or disgruntled) insiders can all have major repercussions for your business. They could attack your infrastructure, operating systems, applications, users, and certainly your databases.

As data sets grow and become even more valuable, now is the time to take a step back and really understand your databases and how you leverage data.

Believe it or not, in a data-driven world, many organizations still don’t really know how secure their databases are, where their sensitive data is located, or how much data they actually have. If you’re in that boat, don’t try to navigate the sea of data on your own.

For example, Oracle recently released the Oracle Database Security Assessment Tool feature of Oracle Autonomous Database, which lets organizations answer these questions. The tool looks at various security configuration parameters, identifies gaps, and discovers missing security patches. It checks whether security measures such as encryption, auditing, and access control are deployed, and how those controls compare against best practices.

The Assessment Tool helps organizations discover where their sensitive data is located and how much data they have. Oracle Database Security Assessment Tool searches database metadata for more than 50 types of sensitive data, including personally identifiable information, job data, health data, financial data, and information technology data. This helps businesses to understand the security risks for that data.

Finally, for those global organizations, the assessment tool also highlights findings and provides recommendations to assist with regulatory compliance. The findings and recommendations support both the European Union General Data Protection Regulation (EU GDPR) and the Center for Internet Security (CIS) benchmark.

When it comes to keeping your data (and reputation) secure, a great way to start your data security journey is by asking the right questions, knowing how your data is being used, and leveraging smart, autonomous solutions to revolutionize the way you manage data and approach a digital market. Remember, getting started means having a good awareness of your own data requirements. This means knowing things like:

  • Is my data growing and is it becoming more complex to manage?
  • Have I experienced security ‘scares’ where better policies were required?
  • Is there a general lack of intelligence around the current data systems I’m using?
  • Am I losing competitive advantages because I’m not leveraging my data analytics properly?
  • Do I have a good data visualization solution?
  • Am I looking to grow and distribute where my data lives?

These are just a few of the questions you can ask to help identify the right kind of data-driven architecture. With Oracle Autonomous Data Warehouse, you take a lot of the guesswork out of the equation. The self-driving, self-repairing, and even self-securing features are all designed to help you get the absolute most out of your data warehouse.

As I wrote about in post one, data is the lifeblood of your business. So is security. It’s simply too important to take any shortcuts. Most of all, don’t let a legacy architecture drag you down. When environments become complex and fragmented, they’re not only harder to manage, they also pose greater security risk. In fact, 85% of the time that a breach occurs, there’s a patch available that could have prevented it. Solutions like Oracle Autonomous Data Warehouse and its underlying self-securing architecture remove these kinds of threats and let you focus on what’s truly valuable: your users, your business, and your data.



Structured vs. Unstructured Data

What is the difference between structured and unstructured data—and should you care? For many businesses and organizations, such distinctions may feel like they belong solely to the IT department dealing with big data. And while there is some truth to that, it’s worthwhile for everyone to understand the difference. Once you grasp the definition of structured data and unstructured data (along with where that data lives and how to process it), you can see how both can be used to improve any data-driven process.

And these days, nearly any workflow in any department is data-driven.

Sales, marketing, communications, operations, human resources: all of these produce data. Even the smallest of small businesses, say, a brick-and-mortar store with physical inventory and a local customer base, produces structured and unstructured data from things like email, credit card transactions, inventory purchases, and social media. Taking advantage of this data starts with understanding the two types and how they work together.

What Is Structured Data?

Structured data is data that uses a predefined and expected format. This can come from many different sources, but the common factor is that the fields are fixed, as is the way that it is stored (hence, structured). This predetermined data model enables easy entry, querying, and analysis. Here are two examples to illustrate this point.

First, consider transactional data from an online purchase. In this data, each record will have a timestamp, purchase amount, associated account information (or guest account), item(s) purchased, payment information, and confirmation number. Because each field has a defined purpose, the data is easy to query manually (the equivalent of hitting CTRL+F on an Excel spreadsheet) and easy for machine learning algorithms to mine for patterns and, in many cases, for anomalies outside of those patterns.
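
As an illustrative sketch (the field names are hypothetical, not from a specific schema), the fixed shape of such a record is easy to express in code, which is exactly what makes it easy to store, query, and validate:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class PurchaseRecord:
    """One row of structured transactional data: every field is
    predefined, typed, and expected on every record."""
    timestamp: datetime
    purchase_amount: float
    account_id: str          # or a guest-account marker
    items: List[str]
    payment_method: str
    confirmation_number: str

record = PurchaseRecord(
    timestamp=datetime(2019, 10, 1, 14, 32),
    purchase_amount=59.99,
    account_id="A-1042",
    items=["SKU-778"],
    payment_method="credit_card",
    confirmation_number="CN-99813",
)
print(record)
```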

Another example is data coming from a medical device. Something as simple as a hospital EKG meter represents structured data down to two key fields: the electrical activity of a person’s heart and the associated timestamp. Those two fields are predefined and would easily fit into a relational or tabular database; machine learning algorithms could identify patterns and anomalies with just a few minutes’ worth of records.

Despite the vast difference in technical complexity between these examples, both show that structured data comes down to established, expected elements. Timestamps arrive in a defined format; the device won’t (or can’t) transmit a timestamp described in words, because that is outside the structure. A predefined format allows for easy scalability and processing, even when handled manually.

Structured data can be used for anything as long as the source defines the structure. Some of the most common uses in business include CRM forms, online transactions, stock data, corporate network monitoring data, and website forms.

What Is Unstructured Data?

Structured data comes with definition. Thus, unstructured data is the opposite of that. Rather than predefined fields in a purposeful format, unstructured data can come in all shapes and sizes. Though typically text (like an open text field in a form), unstructured data can come in many forms to be stored as objects: images, audio, video, document files, and other file formats. The common point with all types of unstructured data comes back to the idea of lacking definition. Unstructured data is more commonly available (more on that below) and fields may not have the same character or space limits as structured data. Given the wide range of formats comprising unstructured data, it’s not surprising that this type typically makes up about 80% of an organization’s data.

Let’s look at some examples of unstructured data.

First, a company’s social posts are a specific example of unstructured data. The metrics behind each social media post—likes, shares, views, hashtags, and so on—are structured, in that they are predefined and purposeful for each post. The actual posts, though, are unstructured. The posts archive into a repository, but searching or relating the posts with metrics or other insights requires effort. There is no way of knowing what each post specifically contains without actually examining it, whether it’s customer service or promotion or an organizational news update. Compare that to structured data, where the purpose of fields (e.g., dates, names, geospatial coordinates) is clear.

A second example comes from media files. Something like a podcast has no structure to its content. Searching for the podcast’s MP3 file is not easy by default; metadata such as file name, timestamp, and manually assigned tags may help the search, but the audio file itself lacks context without further analysis or relationships.

Another example comes from video files. Video assets are everywhere these days, from short clips on social media to larger files containing full webinars or discussions. As with podcast MP3 files, the content of this data lacks specificity outside of its metadata. You simply can’t search a database for a specific video file based on its actual content.

How Do They Work Together?

In today’s data-driven business world, structured and unstructured data tend to go hand in hand. In most instances, using both is a good way to develop insight. Let’s go back to the example of a company’s social media posts, specifically posts with some form of media attachment. How can an organization develop insights on marketing engagement?

First, use structured data to sort social media posts by highest engagement, then filter out hashtags that aren’t related to marketing (for example, removing any high-engagement posts with a hashtag related to customer service). From there, the related unstructured data can be examined—the actual social media post content—looking at messaging, type of media, tone, and other elements that may give insight as to why the post generated engagement.
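
Here is a minimal pandas sketch of that structured first pass; the column names and tag values are made up for illustration. The rows that survive the filter point you to the unstructured post content worth examining:

```python
import pandas as pd

posts = pd.DataFrame({
    "post_id": [1, 2, 3],
    "engagement": [980, 2310, 1750],                  # structured metrics
    "hashtags": [["sale"], ["custserv"], ["launch"]], # structured tags
    "content": ["Big fall sale!", "Sorry for the outage.", "New product demo"],
})

# Sort by engagement, then drop posts tagged as customer service.
top = posts.sort_values("engagement", ascending=False)
marketing = top[top["hashtags"].apply(lambda tags: "custserv" not in tags)]

# What's left is the unstructured content to examine for messaging and tone.
print(marketing[["post_id", "engagement", "content"]])
```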

This may sound like a lot of manual labor, and several years ago it was. However, advances in machine learning and artificial intelligence are enabling new levels of automation. For example, if audio files are run through natural-language processing to create speech-to-text output, the text can then be analyzed for keyword patterns or positive/negative messaging. These insights are expedited by cutting-edge tools, which are becoming increasingly important because big data keeps getting bigger and the majority of it is unstructured.
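
Once speech-to-text output exists, even a simple keyword pass can surface patterns. A toy sketch in plain Python, with an invented transcript and keyword list:

```python
from collections import Counter
import re

transcript = (
    "Thanks for joining. Today we cover our new pricing, "
    "great feedback from customers, and a pricing promotion."
)

positive = {"great", "thanks", "love"}
words = re.findall(r"[a-z']+", transcript.lower())

counts = Counter(words)
print("Top words:", counts.most_common(3))
print("Positive hits:", sum(counts[w] for w in positive))
```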

Where Data Comes From and Where It Goes

In today’s business world, data comes in from multiple sources. Let’s look at a mid-size company with a standard ecommerce setup. In this case, data likely comes from the following areas:

  • Customer transactions
  • Customer account data
  • Customer feedback forms
  • Inventory purchasing
  • Logistical tracking
  • Social media engagement
  • Marketing outreach engagement
  • Internal HR data
  • Search engine crawling for keywords
  • And much more

In fact, the amount of data pulled by any company these days is staggering. You don’t have to be one of the world’s biggest corporations to be part of the big data revolution. But how you handle that data is key to being able to utilize it. The best solution in many cases is a data lake.

Data lakes are repositories that receive both structured and unstructured data. The ability to consolidate multiple data inputs into a single source makes data lakes an essential part of any big data infrastructure. When data goes into a data lake, it is stored as raw data without an imposed schema, making it easily scalable and flexible. When the data is read and processed, it is given structure and schema as needed, balancing both volume and efficiency.
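
The schema-on-read idea is easy to demonstrate in miniature: raw records land in the lake untouched, and structure is applied only when someone reads them. A small sketch with hypothetical field names:

```python
import json
import pandas as pd

# Raw records land in the lake as-is; no schema is enforced on write.
raw_lines = [
    '{"user": "a17", "event": "view", "ts": "2019-10-01T10:00:00"}',
    '{"user": "b42", "event": "purchase", "ts": "2019-10-01T10:05:00", "amount": 19.99}',
]

# Schema-on-read: structure is imposed only when the data is processed.
df = pd.DataFrame([json.loads(line) for line in raw_lines])
df["ts"] = pd.to_datetime(df["ts"])

print(df.dtypes)  # each consumer can apply the schema it needs
```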

Storage efficiency is key because scalability and flexibility make it possible to include more data sources and to apply more cutting-edge tools such as machine learning. This means that the foundation for receiving structured and unstructured data needs to be built for the present and the future, and the industry consensus points to moving data to the cloud.

Want to dig deeper? The following links might help:

What is big data?

What is machine learning?

What is a data lake?

And for more about how you can benefit from Oracle Big Data, visit Oracle’s Big Data page and don’t forget to subscribe to the Oracle Big Data blog to get the latest posts sent to your inbox.


Why You Should Use Your Database for Machine Learning

When it comes to taking a trained machine learning model and putting it into operational use, TDWI found that for 14 percent of respondents, that step alone took more than 9 months.

But for more than 20 percent, that same step took only a few days to a few weeks.

Operationalizing Machine Learning

When it comes to successful machine learning, there are several things you can do to ensure that you’ll be part of the fast 20 percent, not the slower 14 percent. One of them, of course, involves using your database for machine learning.

What Machine Learning Can Do

You’ve probably heard about some applications of machine learning in the news, like computers creating art and music through machine learning.

But what really excites people in the business world is machine learning’s ability to use data to find patterns and trends.

Machine learning is uniquely suited for this because it applies algorithms to massive amounts of data, enabling computers to learn how to explore that data and find hidden information.

This allows businesses to do things like:

  • Improve retail sales with a recommendation algorithm
  • Anticipate equipment breakdown with a prediction algorithm
  • Detect fraud through anomaly detection

Machine learning can do this in ways that traditional analytics just can’t.

Machine Learning in Databases

Machine learning in the database means faster time to results

You might not know that some databases come with machine learning inside them.

What this means is that you don’t have to go out and acquire a data science platform, and you don’t have to learn how to use Hadoop, and you don’t have to mess with data lakes when you’re just starting out.

Actually, if you’re using a database that comes with machine learning inside it, you already have everything you need to get started.

Now, this isn’t how every database provider does things, and it’s not how every machine learning platform does things. But it makes things easier for you in so many ways. Here are a few examples:

  • Do more than ever with your existing data, while maintaining control of data by using your database as a single source of truth and eliminating movement of data out of your database.
  • Experiment with machine learning products that come with the database and are optimized to start running. With the right products, you can even perform experimentation without needing data scientist skills, although they do make it easier.
  • Use machine learning in the database to develop solutions for everything from fraud detection to predicting customer behavior to identifying selling opportunities.
  • Make it easy for teams to use the machine learning models you’ve created, and operationalize your machine learning more easily than ever.

So, what is it about machine learning in the database that makes this possible, and what are the benefits?

Before I explain the benefits of using existing data in your database, I want to explain the data science process just in case you need a refresher. It will make some of the benefits of machine learning in the database much clearer.

The Data Science Process for Machine Learning

Here’s a simplified data science process for developing machine learning models. It starts with identifying the data needed for the model. If you’re using a separate platform for machine learning, then that data has to be extracted from the source and loaded into that platform.

When you are working in the database, the assumption is that the data is already there so you don’t need to extract it, which avoids an often time-consuming step.

The middle three steps are the core work of understanding and preparing the data, building the model, then testing and evaluating the model. These steps may be repeated multiple times.

When a satisfactory result is achieved, then the model needs to be deployed so it can be used with new data and used by people and applications as needed. This can be one of the most challenging steps when working with an open-source machine-learning platform. What hardware is it going to run on? How will it access data? Does the code need to be converted into a different language? Does it need to be accessed from an API?

But when you build your machine learning in the database, you also run it in the database. There’s no need to convert code. Just call the model from a SQL statement. Additionally, we provide a way to expose that model as a REST API if needed.
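
For example, Oracle SQL exposes in-database models through scoring functions such as PREDICTION and PREDICTION_PROBABILITY. Here is a hedged sketch from Python that assumes a classification model named CHURN_MODEL already exists in the schema and that a CUSTOMERS table supplies its input columns:

```python
import oracledb

# Placeholder credentials and DSN; substitute your own.
connection = oracledb.connect(user="ml_user", password="<password>", dsn="mydb_low")

# The model is scored where the data lives; no code conversion, no data movement.
sql = """
    SELECT customer_id,
           PREDICTION(churn_model USING *)             AS predicted_class,
           PREDICTION_PROBABILITY(churn_model USING *) AS probability
      FROM customers
     FETCH FIRST 10 ROWS ONLY
"""

with connection.cursor() as cursor:
    for customer_id, label, prob in cursor.execute(sql):
        print(customer_id, label, round(prob, 3))
```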

In this process, you can see that you can significantly simplify 40 percent of the activities needed to build and deploy the model. That’s where you can really start to maximize the value of your existing data.

Okay, so now that we have that out of the way, let’s talk about the benefits of existing data to you.

Three Benefits of Machine Learning in the Database

Database Machine Learning Benefit #1: You Get Simplicity

You know your data. For data scientists or anyone else, working with data in the database versus data in the data lake is like being a kid in a candy shop. The data is clean, it’s managed, and you can often just jump ahead and apply analytical techniques.

If you’re using a database with machine learning that your company is familiar with, then you already have people who work with it and know it. Instead of hiring five people who are each an expert in one of the five software platforms you think you need in your machine learning workflow, just hire one or two who are well-versed in your existing ecosystem, or tap in-house talent. And this way, you can ensure everyone is on the same page because they’re on the same platform.

And don’t forget that your database provider has likely optimized its database to work best with its machine learning offering.

We talk a lot about making machine learning easier, but it’s still hard—so believe me, you want anything that can simplify your process.

Database Machine Learning Benefit #2: You Get Time

At Oracle, we approach machine learning differently from other companies. About 20 years ago, we saw that moving the ever-growing volumes of data to where you could run your algorithms was going to get more and more difficult.

It can often take hours or days just to move the data to another platform where the algorithms reside. And doing that introduces all this complexity, like potential data loss. But this is actually still standard for most other companies.

Here at Oracle, we said it makes much more sense to move the algorithms to the database, where they can use its power to run very quickly, and where you can easily reach data in other databases, a native feature of Oracle Database.

So, pairing machine learning with the database just makes a lot of sense. It’s so much faster. You save time, and you save a lot of effort.

Database Machine Learning Benefit #3: You Get Results

The fact is that although it may take a lot of time and effort to build a machine learning model, train it, gain results, and analyze the results, machine learning doesn’t usually matter to businesses until the model is deployed into production; until ordinary people at your company can make use of it.

For example, you can build a beautiful lead scoring model for your marketing team and be delighted with the quality of the leads it surfaces. But until that lead scoring model is integrated with your marketing team’s systems, until they are actually using it as part of their workflow, that machine learning project isn’t valuable.

Now, this step of operationalization and deployment into production is what we discussed at the very beginning of this article, with that survey by TDWI. Remember, it took 20 percent of respondents under a month. But it took 14 percent over 9 months.

Operationalization can be simple or it can be very complex, because deployment and integration into applications like business intelligence (BI) dashboards, call centers, ATMs, websites, and mobile devices can be a very big challenge for IT.

Errors can be introduced at multiple stages. It’s a complicated process, and the model deployment phase can be very time-consuming and expensive.

What you want is an operationalized model with business benefits you can show to your executive team: proof that you have actually improved the business through machine learning, with results anyone can point to with pride.

If your model has been in the database all along, you don’t have to do as much complicated deployment work. That makes the process just that much easier, with results that are gained that much faster.

Conclusion

So what does machine learning with existing data in the database mean for your business? It means that with the right database and machine learning provider, you can minimize the number of steps you need to take for more efficient, faster, easier-to-operationalize machine learning.

Here’s what you get:

  • More simplicity for you and your employees, since you’re starting with tools and data you’re familiar with
  • More time, with algorithms in the database that ensure minimized data movement and more speed, which saves time and costs
  • More results faster, with models in the database that are easier to deploy and operationalize

All of this equals faster and easier time to impact, which is what everyone wants when it comes to a project like this.

To learn how you can benefit from Oracle Big Data, visit Oracle.com/Big-Data, and don’t forget to subscribe to the Oracle Big Data blog and get the latest posts sent to your inbox.

Written by Sherry Tiao and Wes Prichard


Keeping your Data Safe, Part (3): Assessing Cloud Databases

The first step in securing a database is understanding the current state of that database. You need to know how it is configured, who the users are, and what kind of data the system contains. Oracle Data Safe helps with that, allowing you to easily analyze your database’s configuration, survey your database users for risk, discover what types of sensitive data are in your database, and see how much of that data is being stored.

Today, I’d like to focus on the assessment capabilities of Data Safe; we’ll cover sensitive data discovery another day. Data Safe offers two types of assessment: security and user.

Security assessment looks at configuration, security control usage, and how you are managing users, including privilege and role assignment. Use security assessment to identify configurations that may be introducing unnecessary risk into your environment: things like weak password policies, unnecessary access to sensitive database objects, and access control exemptions. Each security assessment finding delivers details of what was found, remarks on why this is important, and (if appropriate) references to applicable security frameworks like CIS, STIG, and EU GDPR. Below you see an example of a finding: in this case, the DATAPUMP_EXP_FULL_DATABASE role has been granted to several people, and along with that role comes an indirect grant of the EXEMPT REDACTION POLICY privilege. This finding is advisory in nature, just letting you know that with this grant you are nullifying the effectiveness of Oracle Data Redaction policies for these users.
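
You can cross-check a finding like this one yourself with the standard data dictionary. A hedged sketch that lists who holds the DATAPUMP_EXP_FULL_DATABASE role, assuming placeholder connection details and sufficient privileges to query DBA_ROLE_PRIVS:

```python
import oracledb

# Placeholder credentials and DSN; substitute your own.
connection = oracledb.connect(user="security_admin", password="<password>", dsn="mydb_low")

with connection.cursor() as cursor:
    cursor.execute("""
        SELECT grantee, admin_option
          FROM dba_role_privs
         WHERE granted_role = 'DATAPUMP_EXP_FULL_DATABASE'
    """)
    for grantee, admin_option in cursor:
        # Each grantee indirectly receives EXEMPT REDACTION POLICY.
        print(grantee, admin_option)
```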

User assessment focuses on database accounts and drills into the level of risk those accounts present to the system – in other words, if a user’s account is compromised, how much damage could the compromise do? From the user assessment screen, you can see who the users are – drill into a user to see who created them, their account status, when they last logged on, and what roles and privileges they are assigned. You can also see when they last changed their password, and by clicking View Activity you can drill down into what each user has done in the database. We’re particularly excited about this Data Safe capability because it’s really the first time we’ve presented this type of view in any of our products. You’re probably already aware that the number one cause of database breaches is compromised accounts – so doesn’t it make sense to start approaching risk from the standpoint of those accounts? You can expect to see this area within Data Safe evolve rapidly over the coming year as we improve our ability to help you assess risk in your user accounts.

If you are operating a database in the Oracle Cloud, and aren’t already using Data Safe, you should make configuring the service and assessing your databases a priority. Data Safe is included with your database service at no additional cost, and is one of your best tools to ensure your data is protected in the cloud.

For more information about how Data Safe can secure your users and data in the cloud, see our Data Safe White Paper or new database security eBook (3rd Edition) with new Data Safe chapter. And if you didn’t catch them, read Part 1 or Part 2 of our 5-part blog series on Data Safe.


A Guided Tour of Oracle Data Safe

Users are eager to leverage the many benefits of the Oracle Cloud, but they also need easy access to tools to help them secure their data. Fortunately, Oracle Database users can leverage a rich set of security controls which support deployments both on-premises and in the cloud. However, deployment and ongoing support of these solutions is often a “do-it-yourself” exercise for users.

Oracle Data Safe provides users with a set of five essential data security features in a single, integrated cloud service which is easy to use and needs no on-premises deployment.

In this post, we take a look at each of these features.

• Security Assessment

• User Assessment

• Activity Auditing

• Data Discovery

• Data Masking

Even with a managed database service like Oracle Autonomous Database, users have considerable latitude in how they configure the service to allow their users to access it. Data Safe’s database security assessment feature highlights configuration decisions that negatively impact security, helping to identify any gaps that could represent a vulnerability. Data Safe’s security assessment performs a comprehensive check of database configurations, examining areas like user accounts, privilege and role grants, authorization controls, fine-grained controls, auditing, encryption, and configuration parameters. It identifies gaps compared against organizational best practices and delivers actionable reports with prioritized recommendations as well as mappings to common compliance mandates like EU GDPR, DISA STIGs, and CIS benchmarks.

Data Safe implements a unique new capability that allows security administrators to evaluate the risk represented by various database users. The user risk assessment feature performs an evaluation of database users, looking at both static and dynamic characteristics of the user’s profile, in order to identify the highest risk users. User risk is presented graphically, allowing administrators to very quickly determine which users may be over privileged or require application of a compensating control such as auditing.

Database auditing is perhaps the most critical control for database security and regulatory compliance. Data Safe’s user activity auditing feature allows administrators to select from a variety of predefined audit policies and enable them in the database with a single mouse click. They can then start collecting audit records from their cloud databases, which are stored and securely retained in the Data Safe service. Data Safe users can access interactive reports for user activity tracing or forensics purposes, as well as summary reports for routine collection and reporting. These reports can be downloaded as PDFs to help with organizations’ compliance programs. Administrators can also select from a number of predefined alert policies so they are immediately notified of unusual activities that may indicate compromise.
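
Under the hood, this corresponds to Oracle unified auditing, which Data Safe drives for you from its console. As a hedged illustration of what enabling one predefined policy and reading the trail looks like in SQL (connection details are placeholders):

```python
import oracledb

# Placeholder credentials; the session needs audit administration privileges.
connection = oracledb.connect(user="audit_admin", password="<password>", dsn="mydb_low")

with connection.cursor() as cursor:
    # ORA_LOGON_FAILURES is one of Oracle's predefined unified audit policies.
    cursor.execute("AUDIT POLICY ora_logon_failures")

    # Review recent failed logons from the unified audit trail.
    cursor.execute("""
        SELECT event_timestamp, dbusername, action_name
          FROM unified_audit_trail
         WHERE unified_audit_policies LIKE '%ORA_LOGON_FAILURES%'
         ORDER BY event_timestamp DESC
         FETCH FIRST 20 ROWS ONLY
    """)
    for row in cursor:
        print(row)
```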

The types of data contained within the database, and their sensitivity, helps determine which controls should be used to protect that data. Data Safe includes a sensitive data discovery feature that allows security administrators to quickly answer the critical questions of “what types of sensitive data do I have?” and “how much of it do I have?” Data Safe’s sensitive data discovery feature provides automated discovery of over 125 sensitive data types across categories including personally identifiable information, financial information, health information, job-related information, and education information. Sensitive data discovery helps users to understand the value of the data and enables them to prioritize their defenses.

Masking sensitive data removes security risk from test and development systems and minimizes the amount of sensitive data stored by the enterprise. Data Safe’s data masking feature provides the ability to quickly mask sensitive application data with a library of over 50 predefined masking formats. Default masking formats are automatically suggested based on the type of sensitive data discovered using the sensitive data discovery feature. Data masking can be used to transform columns of sensitive information such as birth dates and credit card numbers, and can also support more complex use cases such as conditional and compound masking.
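
To make the idea concrete, here is a toy masking pass in plain Python. It is not how Data Safe implements masking; it only illustrates the kind of transformation a masking format performs, preserving the shape of a value while discarding the sensitive digits:

```python
import random
import re

def mask_credit_card(number: str) -> str:
    """Keep the format and the last four digits; randomize the rest."""
    digits = re.sub(r"\D", "", number)
    masked = "".join(str(random.randint(0, 9)) for _ in digits[:-4]) + digits[-4:]
    # Reapply the original grouping (dashes, spaces, etc.).
    out, i = [], 0
    for ch in number:
        if ch.isdigit():
            out.append(masked[i])
            i += 1
        else:
            out.append(ch)
    return "".join(out)

print(mask_credit_card("4111-1111-1111-1111"))  # e.g. 7302-9954-0187-1111
```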

The security technologies and capabilities available with the Oracle Database enable customers to maintain a highly secure database environment. With Oracle Data Safe, these critical functionalities are now available with a simple click-and-secure interface, with no deep security expertise required. Delivered as a service with no deployment required, Data Safe helps customers reduce their operational costs associated with securing databases and helps all customers, big or small, to keep their data safe. With Data Safe, security is now truly the reason to move to the cloud.

Be sure to join us next week at the same time as we begin our drill-down into the first of these five features: the security assessment. In the meantime if you’d like more information about Data Safe, visit us here. And if you didn’t catch the first blog of this series, read part 1 for the full product announcement.


Big Data Highlights from Oracle OpenWorld 2019

Oracle OpenWorld 2019 has come and gone, and while the big news came from Larry Ellison’s keynotes, there was certainly plenty to talk about in the world of big data—not just the application of big data, but the evolution of Oracle’s product offerings in this space as well. Let’s take a look at some of the discussions and announcements that highlighted the big data space during Oracle OpenWorld 2019.

Taking the Stage with Big Data

Greg Pavlik kicked off OpenWorld on its first day with the session Innovating with Big Data and Data Science on Oracle Cloud Infrastructure. Greg discussed the emerging shift with businesses moving to get ahead of the competition regarding big data and AI. Specifically, he noted that 61% of businesses see AI as their most important initiative, but this was essentially a double-edged sword: while AI clearly can drive innovation, integrating AI the wrong way can create double the work as you undo and redo configurations. With big data continuing to get bigger (as Greg noted, 80% of time is still spent preparing, searching, and governing data), executing AI in a smart forward-thinking fashion is the key to success. How much success? Projections show up to $6 trillion will be realized from AI in the next five years alone. An AI future built on cloud architecture provides the scalability, security, and accessibility to make that happen.

The team of Savita Raina (director, product marketing), Aali Masood (senior director, product marketing), and Amro Shihadah (CEO/COO of IdenTV) took the stage for Build AI Powered Enterprise with Oracle Cloud. During this hour, the trio broke down how AI is used in the modern world, from consumer recommendations to virtual assistants to self-driving cars. That requires enterprise-level infrastructure to truly fulfill the promise of AI, and the trio dug deeper into enterprise needs and concerns, along with the path to achieving a successful strategy. Specifically, they focused on how Oracle delivers an end-to-end big data and AI solution to maximize opportunities using Oracle Cloud as the foundation.

See the deck from this presentation here.

Masood also joined Peter Jeffcock (senior principal product marketing director for big data, cloud GTM database) and Aleksandr Pozhivilk (DBA team lead of Wargaming.net) to present A Unified Platform for All Data. The discussion examined how big data is getting bigger—and in that, more is definitely better to track metrics and develop quantifiable insights. But that puts the onus on data management, meaning that a DIY solution simply isn’t sustainable. Autonomous data management optimizes processes for everyone involved: app developers, data scientists, IT managers, and business analysts. In an enterprise data ecosystem, all of these interested parties are supported with cloud access, machine learning, and unified data sources. The presentation examined this up close from a workflow view, and then finished with Pozhivilk providing a real-world perspective thanks to Wargaming.net, which supports 180 million registered users for its family of multiplayer games.

See the deck from this presentation here.

Data science got its own time in the spotlight as Elena Sunshine (senior principal product manager, data science) took the stage for A Deep Dive on the Oracle Data Science Service. Elena reviewed the transition from DataScience.com to Oracle before looking at how the platform, technically known as Oracle Cloud Infrastructure Data Science, is designed for the modern “expert” data scientist. Combining the best of the Python open source ecosystem with the power and scale of Oracle Cloud Infrastructure, data scientists can build, train, and deploy AI/machine learning models with the OCI Data Science service. The service is built for teams of data scientists to collaborate more effectively on solving data science challenges in the enterprise, with model lifecycle management, explainability, auditability, and reproducibility, which Elena then showed via a live demonstration.

See the deck from this presentation here.

Product marketing’s Sherry Tiao (senior manager) and Wes Prichard (senior director) welcomed a special guest during Use Machine Learning to Maximize the Value of Your Existing Data. First, the duo discussed the possibilities of machine learning in a database, including increased efficiency due to processing within the environment and better results through operationalizing machine learning models. They were then joined by Ray Owens, CEO of DX Marketing. DX Marketing based their workflow on Oracle Cloud: unifying databases in Autonomous Data Warehouse, processing records with Oracle Machine Learning, and creating projections with Oracle Analytics Cloud.

See the deck from this presentation here.

A Closer Look at Oracle’s Products

Products are always the star of the show, and OpenWorld put big data in the spotlight. From that perspective, OpenWorld news provided plenty of excitement as the following products featured major announcements:

Data Catalog (coming soon): A single collaborative environment for data professionals to collect, organize, find, access, understand, enrich, and activate technical, business, and operational metadata to support self-service data discovery, advanced analytics, and governance for trusted data assets in Oracle Cloud and beyond.

Oracle Cloud Infrastructure Data Science (coming soon): To expedite collaboration, this secure enterprise-grade platform comes with project-driven UI that enables teams to easily work together on end-to-end modeling workflows with self-service access to data and resources. The platform also supports the latest open source tools, version control, and tight integration with Oracle Cloud Infrastructure and the Oracle Big Data platform.

Big Data Service (coming soon): A new and improved cloud service based on Cloudera Hadoop, Oracle Big Data Service delivers the flexibility of native Oracle Cloud Infrastructure shapes (start very small, support very large clusters). Users will enable big data processing for data integration pipelines and leverage data lakes for true information lifecycle management.

Oracle Cloud SQL: Oracle Cloud SQL enables users to scale out queries against Object Stores and Big Data Service. Automatic and transparent, it delivers user flexibility through scalability and a pay-for-what-you-use model.

On the Exhibition Level, attendees got to experience Oracle’s latest and greatest up close with a demo showcase. Oracle Cloud Infrastructure Data Science, Oracle Data Catalog, Oracle Data Flow (Spark), and the Oracle Big Data platform were all available on the exhibition level in Moscone South. This ongoing showcase allowed for a hands-on experience that gave a glimpse as to how Oracle’s platform combines simplicity in user experience with extremely powerful data management and insight capabilities.

More Big Data Updates

Now that Oracle OpenWorld 2019 is in the rearview mirror, the coming weeks and months will be filled with more updates related to all of these announcements. Stay on top of it all by visiting Oracle’s Big Data page, and don’t forget to subscribe to the Oracle Big Data blog to get the latest posts sent to your inbox.


Larry Ellison Details 4 Database Innovations

“Just one more thing,” Oracle Chairman and CTO Larry Ellison said, as he completed his keynote address at Oracle OpenWorld September 16. After detailing a raft of innovations for the company’s Generation 2 Cloud Infrastructure—complete with trademark brash challenges to industry rivals—Ellison, with a nod to his friend and former Apple CEO Steve Jobs, ended with a kicker: Starting immediately, Oracle would offer a free version of its revolutionary autonomous database for people to build, learn, and explore on Oracle Cloud.

“So, everybody should leave now, open your laptop, log on, and give it a try,” Ellison said to close out his keynote.

Developers who sign up for a cloud account under the new “Always Free” program will receive access to Oracle Autonomous Database and the essential Oracle Cloud Infrastructure building blocks to create applications on it, including virtual machines, object storage, and data egress. Free version users also get access to a host of free tools for building those applications, such as Oracle Application Express, he said, as well as drivers for popular programming languages, REST services for publishing data, and even a popular notebook for doing data science.

That means developers can prototype, build, and try their next big idea for free. Students can learn on the most modern cloud—using the same database that their future employers like banks, biotechs, and global manufacturers use to run their businesses. Schools can build courses with real-world labs. Enterprises can prototype for free with no time constraints. “And it runs on Exadata,” he added. “You get our best stuff.”

And as long as people use the service, they can keep their free account: “It will never expire. It will never go away,” Ellison said.

The new Always Free offering should add fuel to the fire of growth for Oracle Autonomous Database—the company’s self-driving database that deploys, tunes, patches, upgrades, and secures itself, “leaving no room for pilot error,” said Ellison. And it does it all while continuing to run, instead of requiring the database to be taken down for patching and upgrades, as one would do on a first-generation cloud like Amazon Web Services, he said.

During the talk, Ellison made clear that the company intends to continue to evolve its Autonomous Database and widen its appeal to people across the spectrum of database users. Here are four recently announced database innovations:

1. Evolving Exadata

Ellison announced a major upgrade to Exadata engineered systems, which are designed to run Oracle Database for peak performance and reliability. Exadata machines underpin the autonomous database. The new Oracle Exadata X8M offers new direct memory access and persistent memory, giving companies improved data access across workloads that demand lower latency, such as high-frequency trading and IoT data streams. The “M” in Exadata X8M is for “memory,” said Ellison. “We could have X8PM for ‘persistent memory,’ but we didn’t have enough room on the cabinet door.” The new Exadata machines are, not surprisingly, “faster than the X7 that came before,” said Ellison. “More CPUs, more cores, more memory. Everything runs faster.”

2. Appealing to Citizen Data Scientists

While Oracle Database has long provided a library of machine learning algorithms for data analytics, Oracle Cloud databases now offer AutoML, a feature that allows both experienced data scientists and non-experts to quickly build, test, and deploy machine learning models in the database without writing a line of code.

3. Adding More for Developers

Oracle announced a new blockchain table type where rows are cryptographically chained to provide a secured ledger. This should make it easier to use and more functional than existing blockchain implementations because the blockchain tables can participate in transactions and queries with other tables. In addition to native blockchain tables, Oracle also added a binary JSON datatype for increased performance. “And it’s all in one system—in one autonomous system, in one highly available, secure, autonomous system” for developers, Ellison said.
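
As a hedged sketch of both features, based on the syntax Oracle documents for Database 21c (table and column names are placeholders):

```python
import oracledb

# Placeholder credentials and DSN; substitute your own.
connection = oracledb.connect(user="dev_user", password="<password>", dsn="mydb_low")

with connection.cursor() as cursor:
    # Rows in a blockchain table are cryptographically chained and cannot
    # be modified, yet the table still joins and queries like any other.
    cursor.execute("""
        CREATE BLOCKCHAIN TABLE payments_ledger (
            payment_id NUMBER,
            amount     NUMBER,
            paid_on    DATE
        )
        NO DROP UNTIL 31 DAYS IDLE
        NO DELETE LOCKED
        HASHING USING "SHA2_512" VERSION "v1"
    """)

    # The native binary JSON datatype announced alongside it.
    cursor.execute("CREATE TABLE orders (order_doc JSON)")
```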

4. Keeping Data Safe (On Both Sides of the Cloud Relationship)

Oracle Cloud Infrastructure now offers an added set of features, called Data Safe, that guides users of all Oracle cloud databases as they manage their side of the cloud service relationship. Data Safe does this by helping customers set and keep proper configurations, watch for risky users, do data audits, and spot and mask sensitive data. If it determines there’s suspicious behavior or that a customer starts drifting from a standard configuration, “We’ll let you know about it,” Ellison said.

Data Safe is meant to complement the security features already in Oracle Autonomous Database, such as always-on encryption and its self-patching capability. “Oracle Data Safe is available today,” Ellison said, “at no additional cost in the Oracle Cloud.”


Paradigm Shift: Oracle Sees Autonomous Database Driving Innovation Today

By Denise Doyle, Senior Marketing Manager, Autonomous Database

Recently, Gartner Research Director Adam Ronthal and Oracle EVP of Systems Technology Juan Loaiza spoke in a webcast about the impact that the Autonomous Database is having on the IT landscape, specifically how intelligent, autonomous technologies are revolutionizing database management.

The webcast touched on two core methodologies cloud vendors use and the impact each could have on the cost and effort of implementing a cloud solution. It also covered how customers should evaluate “spending” on cloud services and understand the true cost of implementing them.

If you want to hear more about Oracle’s thoughts on autonomous and the cloud, register for the on-demand webcast.

See why Gartner recognizes Oracle as a Leader in the October 2018 Magic Quadrant for Operational Database Management Systems.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Oracle Data Safe: Five Ways to Help Protect Your Digital Assets

Data is one of your most valuable assets. If you don’t protect it properly, this same data can become your biggest liability. Just ask any of the companies who have been in the news after they experienced a large breach. Not only did they lose highly sensitive personal, financial, health, and IP data; the breaches also often damaged their brands and resulted in significant remediation expenses and fines.

With today’s cyber attackers using advanced, automated hacking tools, typical organizations with limited expertise, time, or tools do not stand a chance against this asymmetric warfare. The question for them becomes not if they will be breached, but when.

Without technology and automation, most organizations are sitting ducks. We need to rethink how to defend databases, the repository of most sensitive assets.

As breach awareness has gone up, our customers are increasingly asking about security as they move their databases to the cloud. First and foremost, they are concerned with the security of the underlying OS, VMs, and networking infrastructure. But they are also asking about protection and isolation from the cloud service providers as well.

As customers hear about our cloud security, along with online security patching, strict separation of duties for our administrators, and always-on encryption options for cloud databases, those concerns are alleviated.

As we double-click into their remaining concerns, the following issues bubble up:

  • Are my databases configured securely? Are there any gaps?
  • Where is my sensitive data? Is it properly secured?
  • Who are my risky users? What are they doing? What could they do, given their privileges?
  • Can I meet my compliance requirements?

Customers want to protect their systems 24x7x365 because a single hit could lead to a total loss. But protection is not straightforward without automation and unification.

In response to customer concerns, we created Oracle Data Safe – a modern, unified, and automated security service – to help defend customers’ databases on Oracle Cloud. Data Safe is designed to detect gaps in their defensive posture, give visibility into security issues with data, users, and applications, and provide recommendations on how to contain security risks.

Five Primary Features of Oracle Data Safe

At a high-level, Data Safe provides:

  • Database Security and Compliance Assessment: Data Safe helps ensure your databases are securely configured. It identifies drifts from best practices, offers recommendations for remediation, and helps you comply with regulations such as EU GDPR, DISA STIGs, and CIS Benchmarks. It categorizes and prioritizes these risks so that you can decide which ones to address first.
  • User Risk Assessment: Data Safe can create reports on your users, roles, and privileges, highlighting critical users you should closely monitor/control. It can further analyze static and dynamic user profiles highlighting last login times and IP addresses. As hackers typically target users, it is critical to understand the gaps they might exploit.
  • User Activity Auditing and Reporting: Data Safe can track database user activity and raise alerts on risky actions – a must-have for many regulations. You can select from default audit policies for regular and privileged users, and use any of the many out-of-the-box audit reports covering various database activities. You can retain the audit data for up to a year for forensics in case something goes wrong.
  • Sensitive Data Discovery: Today, most customers do not know what sensitive data they have or where it is located. Data Safe helps you discover the amount and location of 125+ different types of sensitive data across hundreds of columns spanning multiple databases, and customers can easily add their own custom sensitive types. Once you know how much sensitive data you have and where it resides, it is far easier to assess the risk and protect that data.
  • Data Masking: Data Safe can mask data while maintaining complex data relationships (see the sketch after this list). It minimizes the amount of personal data held outside production, allowing internal test, development, and analytics teams to operate with reduced risk in an environment where sensitive data has been removed.
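
To make the masking idea concrete, here is a minimal, purely illustrative Python sketch of deterministic masking – the general technique that keeps a masked value consistent everywhere it appears. This is not Data Safe’s implementation; the key, function name, and SSN format are assumptions for the example.

```python
import hashlib
import hmac

SECRET_KEY = b"example-masking-key"  # hypothetical per-run secret

def mask_ssn(ssn: str) -> str:
    """Deterministically map a real SSN to a synthetic one."""
    digest = hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()
    digits = "".join(ch for ch in digest if ch.isdigit())[:9].ljust(9, "0")
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

# The same input always masks to the same output, so a value used as a
# join key across tables still matches after masking.
assert mask_ssn("123-45-6789") == mask_ssn("123-45-6789")
print(mask_ssn("123-45-6789"))
```

Because the mapping is deterministic for a given key, referential integrity between masked tables is preserved without storing a lookup table.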

Oracle Data Safe gives you 360-degree insight into the security of all of your databases, covering risks in security configuration, data, and users, along with open alerts. This unification makes it simpler to understand a database’s security posture and risk profile.
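
As a flavor of what activity alerting involves, here is an illustrative sketch – not Data Safe’s engine – of the kind of rule that raises an alert on risky audit records. The action list, record layout, and business-hours window are invented for the example.

```python
from datetime import datetime

# Hypothetical rule: privileged actions outside business hours are risky.
RISKY_ACTIONS = {"DROP TABLE", "GRANT", "ALTER USER"}

def is_risky(record: dict) -> bool:
    """record is an invented audit row: {'user', 'action', 'time'}."""
    after_hours = not 9 <= record["time"].hour < 18
    return record["action"] in RISKY_ACTIONS and after_hours

audit_trail = [
    {"user": "ADMIN", "action": "GRANT", "time": datetime(2019, 9, 17, 2, 30)},
    {"user": "APP", "action": "SELECT", "time": datetime(2019, 9, 17, 10, 0)},
]
print([r for r in audit_trail if is_risky(r)])  # flags the 2:30 AM GRANT
```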

[Screenshot: Oracle Data Safe Console]

Data Safe gives businesses insight into their data by discovering where sensitive data resides, which sensitive categories and types are present, and how much of it there is.
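
For intuition, here is a small sketch of the pattern-matching idea behind discovery, using two hypothetical sensitive-type definitions. Data Safe itself ships the 125+ predefined types mentioned above plus your custom ones; this only illustrates the concept.

```python
import re

# Hypothetical sensitive types: a name plus a value pattern.
SENSITIVE_TYPES = {
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "EMAIL": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def classify_column(sample_values, threshold=0.8):
    """Return the sensitive types that match most sampled values."""
    hits = []
    for name, pattern in SENSITIVE_TYPES.items():
        matches = sum(1 for value in sample_values if pattern.match(value))
        if sample_values and matches / len(sample_values) >= threshold:
            hits.append(name)
    return hits

print(classify_column(["123-45-6789", "987-65-4321"]))  # ['US_SSN']
```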

[Screenshot: Identifying sensitive data with Data Safe]

We believe that keeping data secure by default is absolutely critical to defending against asymmetric cyber warfare. No “ifs, ands, or buts” about it. Organizations should not have to choose between security and performance, security and simplicity, or, for that matter, security and cost.

To this end, we have unified and simplified proven security technologies from our on-premises portfolio and made Oracle Data Safe available with all Database as a Service offerings in the Oracle Cloud, including Oracle Autonomous Database.

Data Safe scales from organizations with just one database to enterprises with hundreds. There is no expensive setup to worry about and no specialized staff to train: Data Safe provides defaults based on best practices and allows customization. With Data Safe, there are no compromises!

In today’s world of rapidly evolving threats to the security of your data, Data Safe makes modern, unified, and automated security available to every single database customer on Oracle Cloud.

In the past, you may have been hesitant about moving to the cloud due to security concerns, but security is now the reason to move to the cloud. Learn how Oracle Data Safe can help your organization today. Be sure to tune in next Tuesday for the next blog in our series on Data Safe.

Enabling the Autonomous Enterprise

This post was contributed by Senior Principal Product Marketing Director, Ron Craig.

Background – data overflow

The ability of enterprises to generate data is increasingly outpacing their ability to realize real value from it. As a result, opportunities for innovation driven by customer, market, and competitive intelligence are being left on the table. And because only a subset of this avalanche of data is being put to good use, it is entirely possible that inadequate data is leading to bad decisions.

A key source of this problem is that human productivity simply hasn’t kept pace with the technologies we have developed to improve our business processes. IDC has predicted that by 2025, 6 billion consumers will have one data interaction every 18 seconds. At that point, the volume of global data will be 175ZB (175,000,000,000,000,000,000,000 bytes), and ~30% of it will be real-time data – a 6X increase over 2018. The exponential increase in effort required to clean, arrange, secure, maintain, and process the data from all those customers means less effort can be dedicated to insights. As a consequence, enterprises are not truly seeing the benefits of their own success in becoming data companies.

Abstraction as a game-changer

So what’s needed in response? Sometimes it’s good to look to other fields for inspiration, and the semiconductor industry provides some useful insights. Since its early days, that industry has had to deal with user productivity struggling to keep pace with advances in technology, and it has surmounted those issues with innovations that address the productivity limitations head-on.

Digital designs – the creations behind everything from the silicon chip that runs a microwave oven’s timer to the supercomputers that forecast the weather – are at their essence built from logical components known as gates. These logic gates perform routine Boolean operations, allowing decisions or calculations to be made from sets of inputs and propagated in real time. Chip designers working at the gate level could be expected to produce verified designs (effectively combinations of connected gates) at a rate of ~50 gates per day – a productivity level that has remained pretty constant over time.

The processors in today’s high-end cellphones may contain around 100 million gates, so a team of 100 chip designers working at the gate level would need roughly 80 years to put such a chip together (100,000,000 gates ÷ (100 designers × 50 gates per day) = 20,000 working days, or about 80 years at ~250 working days per year). In reality, such chips are now often developed in two years or less, thanks to innovations introduced into the chip design flow over the last twenty years. For the purposes of this blog, and because it provides a useful analogy, the innovation we’ll focus on is the hardware description language (HDL). An HDL effectively works like software, allowing the chip designer to describe logic in terms of what it does rather than how it’s built, freeing the designer from the details of how that logic is implemented in hardware. HDL-based design goes hand in hand with automated synthesis algorithms, which translate those higher-level descriptions into equivalent gates performing the same function, ready to be realized in silicon.
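
To see why this abstraction matters, here is the analogy in code, with Python standing in for an HDL (purely illustrative): the same one-bit adder described once at the gate level and once behaviorally.

```python
# Gate level: the designer wires individual Boolean operations by hand.
def full_adder_gates(a, b, carry_in):
    a_xor_b = a ^ b                              # XOR gate
    total = a_xor_b ^ carry_in                   # XOR gate
    carry_out = (a & b) | (a_xor_b & carry_in)   # AND/OR gates
    return total, carry_out

# Behavioral level: describe what the logic does; in the HDL world,
# synthesis derives the gates automatically.
def full_adder_behavioral(a, b, carry_in):
    s = a + b + carry_in
    return s & 1, s >> 1

# Both descriptions compute the same function.
assert full_adder_gates(1, 1, 0) == full_adder_behavioral(1, 1, 0) == (0, 1)
```

The behavioral version is shorter and easier to review, and it leaves the gate-level details to automation – exactly the productivity trade the text describes.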

As a result of these innovations, the semiconductor industry has enabled designers to keep up with the capacity of silicon chips by allowing them to be less and less concerned about the lower level implementation details of the chips they are designing, and put their focus on what those chips actually do. Chip designers take care of the ‘what’, where they can bring their creativity and experience to bear, and automation takes care of the ‘how’ in a reliable and repeatable fashion.

Oracle Autonomous Database – Automating a path to innovation

The semiconductor industry’s experience provides a useful blueprint for the data industry: automation is the key to unlocking the potential of today’s data, just as design automation allowed chip designers to fully exploit the capacity of silicon. Across a range of industries, corporations differentiate themselves by what they do with the data they generate and collect, not by the effort they expend to manage and secure it. To have maximum impact, database experts need to focus on what their data is telling them (the ‘what’) and rely on automation to keep that data available and secure (the ‘how’).

95% of respondents to a recent Oracle user survey said they are having difficulty keeping up with the growth in their data, and the majority of data managers perform multiple major database updates per year. Beyond simply keeping the database up and running, the survey found that significant manual effort still goes to general troubleshooting and tuning, backup/recovery tasks, and provisioning to handle usage peaks and troughs.

Data security also stands out as an area that can benefit significantly from automation – not only because automation reduces manual effort, but because it reduces risk. In an age when managers of on-premises databases must continuously juggle the urgency of patch installation against the cost of the downtime needed to install those patches, it comes as no surprise that a recent Verizon survey noted that 85% of successful data breaches exploited vulnerabilities for which patches had been available for up to a year before the attack. It makes perfect sense to instead let Oracle Autonomous Database apply security patches automatically, with no downtime.
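
As a small illustration of the bookkeeping that disappears, here is a hypothetical sketch – assuming the python-oracledb driver, DBA privileges, and placeholder credentials – of the kind of patch-inventory check on-premises DBAs run by hand.

```python
import oracledb  # assumes the python-oracledb driver is installed

# Placeholder credentials and DSN; a real check would use your own.
conn = oracledb.connect(user="system", password="<password>", dsn="onprem_db")
cur = conn.cursor()
# DBA_REGISTRY_SQLPATCH records each patch applied to the database.
cur.execute(
    "SELECT patch_id, status, action_time "
    "FROM dba_registry_sqlpatch ORDER BY action_time DESC"
)
for patch_id, status, applied in cur:
    print(patch_id, status, applied)
cur.close()
conn.close()
```

With Autonomous Database there is nothing equivalent to schedule or verify: patching happens automatically, with no downtime window.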

Taken together, these automated capabilities reduce administrative costs by 80%, meaning the Autonomous Enterprise that takes advantage of them can dedicate significantly more effort to innovation.

Coming back to our semiconductor analogy: innovations in how design is done didn’t make chip designers less necessary; rather, they made designers significantly more productive and enabled them to make more innovative use of advances in technology. We expect Oracle Autonomous Database to have the same impact for DBAs and data managers in the Autonomous Enterprise.

Learn more at Oracle Open World 2019

To learn more about enterprises that have already become autonomous, visit the sessions below at the 2019 Oracle Open World event:

Drop Tank: A Cloud Journey Case Study, Tuesday September 17, 11:15AM – 12:00PM

Oracle Autonomous Data Warehouse: Customer Panel, Tuesday September 17, 1:45PM – 2:30PM

Oracle Autonomous Transaction Processing Dedicated Deployment: The End User’s Experience, Tuesday September 17, 5:15PM – 6:00PM

Managing One Of the Largest IoT Systems in the World With Autonomous Technologies, Wednesday September 18, 9:00AM – 9:45AM
