The best of all worlds –> Dell EMC Ready Bundle for Oracle (Part II)

In my last blog, I talked about the small and medium configurations. In this blog, I am going to talk about the large configuration of Ready Bundle for Oracle (RBO). Part III of this blog series will cover the different backup options available in this product: to back up the small configuration of RBO (called commercial backups), we use Data Domain 6300, while Data Domain 9300 (DD9300) is used for backing up the large configuration (called enterprise backups). The large configuration is exciting because it caters mainly to enterprises that want to run Oracle environments at a larger scale, with higher IOPS and bandwidth along with lower CPU utilization and latency. The large configuration of Ready Bundle for Oracle hosts 50 mixed-workload databases on a vSphere cluster with two ESXi 6 hosts on PowerEdge R940 servers. A VMAX 250F array with two V-Brick blocks, 2 x 1 TB mirrored cache, and 105 TBe of SSDs is used as the storage array for the VM OS and the Oracle RAC databases. Figure 1 depicts the large-configuration architecture.


Fig. 1: Architecture of large configuration

The large configuration uses PowerEdge R940 servers, which are designed for larger workloads, to maximize enterprise application performance. The PowerEdge R940 servers are configured with 112 cores—88 more physical cores than the PowerEdge R740 servers in the small configuration and 40 more physical cores than the PowerEdge R940 servers in the medium configuration. Each PowerEdge R940 server is configured with 3,072 GB of memory—2,688 GB more than the R740 server in the small configuration and 1,920 GB more than the R940 server in the medium configuration. Figure 2 presents a comparative analysis of the small and medium configurations (discussed in my previous blog) and the large configuration of RBO.
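As a quick sanity check, the deltas quoted above pin down the per-server counts for each configuration (a back-of-the-envelope sketch; all figures come from the paragraph above):

```python
# Derive the small- and medium-configuration core/memory counts
# from the large-configuration figures and the stated deltas.
large_cores = 112
small_cores = large_cores - 88    # R740 in the small configuration -> 24 cores
medium_cores = large_cores - 40   # R940 in the medium configuration -> 72 cores

large_mem_gb = 3072
small_mem_gb = large_mem_gb - 2688    # -> 384 GB
medium_mem_gb = large_mem_gb - 1920   # -> 1,152 GB

print(small_cores, medium_cores, small_mem_gb, medium_mem_gb)
```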


Fig. 2: Comparative Analysis of small, medium and large configurations of Ready Bundle for Oracle

This comparative analysis will help customers select the appropriate configuration based on their requirements. There is also a tool called VMAX Sizer (available through your local support representative) that helps customers ascertain the appropriate configuration for their database/data center infrastructure. Now, I would like to discuss the performance results achieved during the stress-testing of the large configuration of RBO. Let me provide some background on the in-house database stress-testing performed by Dell EMC engineers. The engineers ran 10 OLTP production RAC databases, 30 OLTP development RAC databases, and 10 OLAP RAC databases in parallel, for a total mixed workload of 50 RAC databases. SLOB was used to create a 60/40 read/write OLTP workload, and the Swingbench Sales History benchmark was used to generate the OLAP workload. By adding the OLAP databases, they subjected the VMAX 250F array to large sequential reads on top of the random OLTP I/O. This load is reflected by the 2 TB database size and by the db_32k_cache_size and db_file_multiblock_read_count settings, which enable larger database I/Os to improve large sequential read performance. The complete architecture and database configuration of this use case are depicted in Figures 3 and 4.
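To illustrate how those two settings enable larger I/Os: the maximum size of a single multiblock read is roughly the block size times db_file_multiblock_read_count. The sketch below assumes an example read count of 32, since the actual value used in the test is not stated:

```python
# Illustration of how db_file_multiblock_read_count scales the size of a
# single sequential read. 32 is an assumed example value, not the tested one.
block_size_bytes = 32 * 1024          # 32 KB blocks served by db_32k_cache_size
multiblock_read_count = 32            # assumed example setting

max_read_bytes = block_size_bytes * multiblock_read_count
print(max_read_bytes // (1024 * 1024), "MB per multiblock read")
```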


Fig. 3: Architecture of the use case for large configuration


Fig. 4: Oracle RAC configuration for the use case in large configuration of RBO

The large configuration included two PowerEdge R940 servers and a VMAX 250F array. Here is a review of the results for this use case, in which the workload of 10 production OLTP databases, 30 development OLTP databases, and 10 OLAP databases ran in parallel:

  • 100 vCPUs and 2,640 GB vMem per R940 compute server were used to generate a workload of over 189,968 IOPS on the VMAX array, as depicted in Figure 5.
  • Due to the processing power of the PowerEdge servers, the CPU utilization was only 20 percent, leaving room for more databases or for failover of VMs from one ESXi server to another.
  • The VMAX array, with inline deduplication and compression, achieved a 5X flash space saving, using only 8,280 GB of capacity for the 30 development databases.
  • Database IOPS performance was also excellent. The 189,000 IOPS workload (highlighted in Figure 5) was serviced at sub-millisecond latency.
  • The 10 OLAP databases generated a total of 3.88 GB/s of throughput.
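The bullets above can be cross-checked with a little arithmetic (a rough sketch; the per-database figure assumes the 30 development databases are equally sized, which the text does not state):

```python
# Rough arithmetic on the reported results (figures from the bullets above).
total_iops = 189_968
hosts = 2
iops_per_host = total_iops / hosts           # IOPS generated per R940 server

physical_gb = 8_280                          # flash consumed by 30 dev databases
reduction_ratio = 5                          # reported 5X savings
logical_gb = physical_gb * reduction_ratio   # implied logical data set size
gb_per_dev_db = logical_gb / 30              # assumed equal-size databases

print(iops_per_host, logical_gb, gb_per_dev_db)
```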


Fig. 5: Aggregate IOPS numbers for the use case of large configuration

Figure 6 below illustrates the complete dashboard of configuration and performance numbers (including the results discussed above) that was generated after the stress-testing for this use case in the large configuration of RBO.


Fig. 6: Stress-testing dashboard of the use case for large configuration

The test results show that this configuration of RBO offers twice the number of supported IOPS compared to the small and medium configurations. The salient feature of this large configuration is that it supported just over 189,968 IOPS at sub-millisecond latencies for all OLTP databases. It supported a total bandwidth of 3.88 GB/sec using two PowerEdge R940 servers and a VMAX 250F with more flash drives. Under this database workload, a mere 20 percent of the CPU capacity was utilized, while the VMAX array accelerated most of the writes. Like the small and medium configurations, storage efficiency continued with 5X space savings across the 30 development OLTP databases, thanks to the inline compression of the VMAX 250F. In addition, the large configuration still had plenty of unused resources to support even greater workloads. More use cases and configurations, for the large as well as the small and medium configurations, can be viewed in the validation guide, which explains RBO in its entirety. In the next blog (Part III), I will discuss the backup and recovery solutions for RBO.



Oracle’s Ellison Teases ‘Highly Automated’ Security Technology

Jessica Lyons Hardcastle
October 2, 2017

6:54 am PT

SAN FRANCISCO — Oracle CTO Larry Ellison teased a “highly automated” security technology that will integrate with Oracle’s new fully autonomous database during his Sunday night keynote at Oracle OpenWorld 2017.

“The biggest threat in cybersecurity is data theft,” he said. “And the safest place to store your data is on an autonomous Oracle database.”


The security product — which Ellison promised to reveal more details about in his Tuesday keynote — will work with the database to prevent data theft. He said that automation, which reduces human error, will play a key role in both technologies.

“We do everything we possibly can to avoid human intervention,” Ellison said. “It’s our computers versus their computers in cyber warfare. And we’re going to have to have a lot more automation if we’re going to defend our data.”

While the security system will only be partially automated, compared to the fully autonomous database, “make no mistake: we are headed toward full autonomy in cybersecurity as well,” Ellison said.

Oracle first announced the fancy new database during its first quarter 2018 earnings call last month. “Based on machine learning, this new version of Oracle is a totally automated, self-driving system that does not require a human being either to manage the database or tune the database,” Ellison said on the conference call. He added that it is better and cheaper than Amazon Web Services’ database.

During the keynote, Ellison said the Oracle 18c database will be available for warehousing applications in December. The online transaction processing (OLTP) version will ship in June 2018.

The database will run on Oracle’s bare metal cloud infrastructure. The company guarantees less than 30 minutes per year of planned or unplanned downtime. And it’s “truly elastic in its use of computer resources,” Ellison said, adding that it can allocate more processors as needed.

“I know it’s called the Amazon Elastic Compute Cloud, but it’s just not elastic,” Ellison said. “In other words, Amazon’s database Redshift cannot automatically increase the number of processors for a bigger workload and then decrease the number of processors.”

The database debut at Oracle OpenWorld was expected. The security announcement, however, was a surprise.

Automated Security System

The new technology is designed to detect threats when they first occur and then direct the database to automatically remediate the problem while the database is running, Ellison said.

“The cybersecurity system identifies a threat, and then it says ‘OK, we need to combat this threat.’ It might mean we have to patch the database, and the database has to be able to immediately patch itself, not wait for a human being to schedule downtime in a month or two. We have to automate our cyber defenses and be able to defend yourself without having to take all your computer systems offline or shut down your database.”

The security system and automated database share the same underlying technology, Ellison added. “And that technology is every bit as revolutionary as the Internet itself and it’s called machine learning.”

Right on cue, Ellison’s presentation, seemingly activated by a hand-held clicker, skipped ahead. He had to ask an assistant to move the slides back.

“All my button does is notify a human being,” he joked. “It’s not automation at all. It’s fake automation. If it was real automation, that wouldn’t happen.”

A human, not a machine, advanced the slides for the rest of Ellison’s talk.


Oracle unveils world’s first autonomous database

Speaking today at the opening keynote for Oracle Open World 2017, Founder, Executive Chairman and Chief Technology Officer, Larry Ellison outlined Oracle’s next major software release.

“The biggest threat by far in cybersecurity is data theft,” Ellison said. “Preventing data theft is all about securing your data wherever you choose to store your data. We’re going to make the case the safest place for you to store your data is an Oracle database, particularly an autonomous database,” he said, setting the scene for his talk.

An autonomous database, Ellison explained, is highly automated. It will do everything it possibly can to avoid human intervention.

Relating back to security, Ellison stated “it’s our computers vs. their computers in cyber-warfare. We have to have a lot better computer systems and automation if we’re going to defend our data.”

“Oracle has new cybersecurity technology to automate threat detection and then immediately remediate the problem. This new database technology means an automated database can instantly patch itself while running. There’s no delay waiting for a human process, or a downtime window.”

This is the autonomous database, and it’s Oracle’s vision for Oracle 18C which will be released for different database workloads from December 2017 through 2018.

Oracle Data Warehouse edition will be available in December 2017, Oracle OLTP database cloud service in June 2018, and Oracle NoSQL database cloud services by the end of 2018. Each edition will have autonomous functionality by default, and will all be available on-premises, in Oracle’s cloud, and on public cloud services, to meet customer requirements.

The big thing about an autonomous database, Ellison explained, is the automation of cybersecurity and the automation of database operations working together. “The database has to be able to patch itself immediately while running when instructed by the cybersecurity system – no delay, it has to happen immediately, and has to happen without human intervention.”

The technology driving autonomy is “as revolutionary as the Internet,” Ellison said. “It’s called machine learning, a branch of Artificial Intelligence, or AI.”

“For years and years, AI did not live up to its promise. There is a new type of AI, machine learning. The applications are revolutionary. It powers self-driving cars, facial recognition better than humans can do, and new applications like an autonomous database and automated cybersecurity.”


Machine learning is when computers learn from patterns in data and make predictions. It relies on large amounts of accurate training data. Higher volumes of accurate data increase learning, and increased learning means more accurate solutions to problems.

An example of machine learning is anomaly detection: separating normal from abnormal patterns in data, like distinguishing cancerous cells from normal cells, or detecting that the CFO is logging in from a computer in Ukraine, which is out of the ordinary.
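As an illustration of the anomaly-detection idea, here is a minimal z-score sketch; the data and threshold are invented for the example and have nothing to do with Oracle's actual implementation:

```python
# Toy anomaly detector: flag values that sit far outside the normal pattern.
# Data and threshold are illustrative assumptions only.
from statistics import mean, stdev

def is_anomalous(value, history, z_threshold=3.0):
    """Flag a value more than z_threshold standard deviations from history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# Typical login hours for a user (24h clock); a 3 a.m. login stands out.
usual_login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
print(is_anomalous(3, usual_login_hours))   # anomalous
print(is_anomalous(9, usual_login_hours))   # normal
```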

In Oracle’s case, the new autonomous database will be driven by machine learning absorbed from vast amounts of event logs including infrastructure logs – network, server, storage, VM and OS, platform logs – database, Java, analytics, and application logs – ERP, CX, HCM and more.

This event log training data will enable new machine learning applications across security, like detecting and correcting anomalous events be they logins or unusual SQL queries, and across database operations, such as classifying normal query patterns and automatically tuning the database.

On this point, Ellison states the Oracle 18C autonomous database will continuously tune the database based on information it gathers. “This is a big deal,” Ellison said, “Nobody else does this.”

“Oracle 18C is the world’s first 100% self-driving autonomous database,” he said. “The world’s first and only.”

“This is the most important thing we have done in a long, long time.”

Ellison continued to state the autonomous database will have total automation, based on machine learning. It will require no human labour to manage the database whatsoever. The database will automatically provision, upgrade, patch and tune itself while running; it will perform automated real-time security patching with no downtime window required and will have no human error.

The Oracle cloud SLA will guarantee 99.995% reliability and availability – “we will minimise costly planned plus unplanned downtime to less than 30 minutes per year,” Ellison stated.
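The two numbers in that SLA line up: at 99.995% availability, the allowed downtime over a year is indeed under 30 minutes, as a quick check shows:

```python
# Check the "less than 30 minutes per year" claim against 99.995% availability.
availability = 0.99995
minutes_per_year = 365.25 * 24 * 60          # ~525,960 minutes

max_downtime_min = (1 - availability) * minutes_per_year
print(round(max_downtime_min, 1), "minutes of downtime per year")
```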

In one of many jabs at Amazon’s AWS hosting services, Ellison stated “we guarantee your Amazon bill is cut in half, and the lower labour costs are even bigger savings. We will write that in your contract.”

Ellison states Oracle 18C will perform fully automated database provisioning and management, even for mission-critical scale-out clusters with data centre disaster protection. The user must define policies but the system automatically manages itself thereafter. It will perform automatic provisioning, backup, upgrades, patching and tuning while running, and without any human administration needed, further cutting errors and malicious behaviour.

While this autonomy is the headline feature in Oracle 18C, Ellison noted it also includes a range of OLTP and analytics improvements, including 4x faster in-memory OLTP, 5x faster RAC for high-contention OLTP, 2x faster streaming ingress for IoT workloads, NVRAM ready row and column store, 2x faster in-memory column store, 100x faster in-memory analytics for external data, 100x faster approximate query processing, and more features.

Poking fun at Amazon’s AWS hosting, Ellison said, “You get all this, but the shock is you must be willing to pay a lot less.”

Ellison went on to demonstrate the greater performance of running Oracle in Oracle’s cloud vs. on Amazon’s AWS cloud, and an even greater performance disparity compared to Amazon’s RedShift. This performance improvement translates to real cost savings, which Ellison says is further compounded by the reduction of human labour to run the database.

Ellison stated Oracle cloud is far more reliable than Amazon, with no exceptions in the fine print, noting that Amazon's contracted uptime excludes planned downtime, downtime for adding compute and storage capacity, downtime for Amazon planned maintenance, downtime for database upgrades and patching, downtime for regional outages, and downtime for software bugs. "They may as well have excluded a few more things and just made their uptime guarantee 100%," Ellison joked. By contrast, he stated Oracle's uptime guarantee is 99.995%, including mission-critical workloads and without exclusions, and that Oracle is guaranteed to cost no more than half of what Amazon charges for a similar workload.

The minimum configuration for Oracle cloud is one OCPU (Oracle CPU), roughly defined as one physical core of an Intel Xeon CPU with hyper-threading enabled, along with 1 TB of storage, for $300 per month.

For database administrators, the automation features of Oracle 18C will allow them to turn their mind to innovation. “Database administrators don’t lack things to do,” Ellison said. Instead, with Oracle 18C administrators can spend less time on infrastructure, patching, upgrades, ensuring availability and tuning, and more time on database design, data analytics, data policies and securing data.

“Oracle 18C is the only self-administering database on the planet,” Ellison said. “It’s fully automated and fault-tolerant, with real-time security patching, constantly tunes itself to get the best performance, and it’s fully automated and will go out and get additional compute resources the moment they’re needed but not a moment sooner, so you only pay for what you use.”


Managing Mixed Application Workloads on Converged Infrastructure


Solution Summary

VCE, the Converged Platform Division (CPD) of EMC, just released a paper titled VCE Solutions for Enterprise Mixed Workload on Vblock System 540. In this solution guide, we show how the Vblock converged infrastructure (CI) platform, using all-flash XtremIO storage, provides a revolutionary new platform for modernizing the deployment and management of mixed-workload and mixed-application environments. The CPD, together with the Global Solutions Organization, brought together a team with expertise in deploying Vblock systems as well as deep Oracle, Microsoft, and SAP workload knowledge. The goal of the team was to build, test, and document a near-real-life mixed-application solution using a Vblock System 540 powered by XtremIO all-flash storage.


The business application landscape for the testing environment consisted of:

                • A high frequency online transaction processing (OLTP) Oracle application
                • A simulated stock-trading OLTP application for SQL Server
                • SAP ERP with an Oracle data store simulating a sell-from-stock application
                • An Oracle decision support system (DSS) workload
                • An online analytical processing (OLAP) workload accessing two SQL Server analysis and reporting databases
                • Ten development/test database copies for each of the Oracle and SQL Server OLTP databases, and five development/test copies of the SAP/Oracle system

The combined test results, when the Oracle, Microsoft, and SAP mixed workloads were run simultaneously, produced a demand on the XtremIO array of ~230K predominantly 8 KB IOPS, together with an average throughput of 3.8 GB/s (primary I/O sizes of 64 KB and 128 KB), with an 88 percent read and 12 percent write ratio. Average response times were recorded at 866 μs overall: 829 μs for reads and 1,152 μs for writes.
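The reported average is consistent with a read/write-weighted mean of the per-operation latencies (a quick check against the numbers above):

```python
# Verify the overall average latency against the read/write mix reported above.
read_ratio, write_ratio = 0.88, 0.12
read_latency_us, write_latency_us = 829, 1152

avg_latency_us = read_ratio * read_latency_us + write_ratio * write_latency_us
print(round(avg_latency_us), "us, close to the reported 866 us")
```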

[Figure: mixed-workload test results]

IT decision makers who are evaluating new options for data center management to help provide better service with lower TCO should research VCE CI platforms that use all-flash technology. We invite you to read the full report to understand our methodology and results or contact your local EMC representatives to discuss if converged platforms are the right choice for your next data center modernization project.

Thanks for reading,

Phil Hummel @GotDisk



Hi Tom,

I guess we could say that the problem with the database and application world is that, like most worlds, it isn’t perfect. As such, you are correct in saying that an application that behaves as an OLTP type during the day shifts at night to batch/DSS-like behavior. This happens due to natural business cycles, where the operational systems’ data needs to produce summary reports, run backups, or be extracted, cleansed, and sent to other systems such as data warehouses (DW), business intelligence (BI), OLAP, etc. The result is that the border between OLTP and DSS softens. When I ask DBAs today whether their database is OLTP or DSS, the answer is most often: both!

There is another trend that softens the boundary: in-memory databases (IMDBs). Because IMDBs are very fast, they are often positioned as a single repository of data for both OLTP and DSS alike. I’m not a great believer in this trend, since a true BI or DW indeed shows DSS workload behavior (large data scans, sequential nature, and so on); however, their *content* is not the same as the operational systems’. It is often an aggregation and summary of data from multiple operational systems and requires some cleansing first (often data in different operational systems isn’t inserted in the same way, may include missing records, may be too detailed, etc.).

Regarding your question of treating transaction logs as DSS, my take is that although, as you said, the logs are typically sequential writes (and reads for archiving), we tend to use the terms DSS or OLTP for a whole database or application, and more often for the nature of how the data is being accessed, not so much the transaction logs. Still, you’re right if you only consider the access pattern per storage group.

Regarding eNAS on VMAX3/AF, we don’t necessarily distinguish between FC, iSCSI (block storage), or eNAS (file storage) as far as application type and leave it to the customer to decide how they want to access their data. You can find a white paper on VMAX3 with SQL Server and eNAS here.

To actually go back to what you asked about – “I was trying to identify examples of host apps/services that fall under the workload type of DSS.” I’m not enough of an expert in this space to provide a list; however, if you’re interested, one approach is looking at Gartner’s “Magic Quadrant” (example here; just click on the image to magnify. The full report goes into more detail, though it costs money). Or google the subject (here is one result I found). Since I didn’t read either of these two links closely, I bring them up only as examples and am not advocating their content. However, note how they usually point to an application and not a database. If you dig deeper, some of these applications use the same databases OLTP systems may use, and others have their own.





Hi Tom,

Actually, OLTP and DSS typically refer to a type of workload, not a type of database.

OLTP (online transaction processing) is often categorized as a ~70/30 read/write mix with a lot of random read activity. Its measuring criteria are likely to be IOPS and I/O response time.

DSS (decision support system) is often categorized as a read-only (or mostly read) workload, with large sequential reads or writes (‘batch’ and ETL). Its measuring criterion is likely to be GB/sec (bandwidth).
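These rules of thumb can be sketched as a toy classifier; the thresholds below are illustrative assumptions, not formal definitions:

```python
# Toy classifier for the OLTP vs. DSS rules of thumb described above.
# Thresholds are illustrative only.
def classify_workload(read_pct, avg_io_kb, sequential):
    """Label a workload using the rough criteria from the answer above."""
    if read_pct >= 90 and avg_io_kb >= 64 and sequential:
        return "DSS"       # mostly-read, large sequential I/O -> bandwidth-bound
    if avg_io_kb <= 16 and not sequential:
        return "OLTP"      # small random I/O -> IOPS/latency-bound
    return "mixed"

print(classify_workload(read_pct=70, avg_io_kb=8, sequential=False))
print(classify_workload(read_pct=95, avg_io_kb=128, sequential=True))
```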

The databases in your example above (Oracle and Sybase) can both run either an OLTP-type workload or a DSS one (though in the Sybase/SAP case it will be different database engines: ASE, IQ, or HANA).

Back to VMAX: it does fantastically for both types of workloads (and others). You can find an Oracle example for VMAX AF with OLTP and DSS type workloads in the following white paper:




Your actual view on IT Architecture for where to put the logics: inside or outside the database

Hi Tom and team

This is the first question I have ever posted here; until now I was using AskTom as a silent reader only – and I want to say thank You a thousand times for keeping this site up for so long, helping many people find explanations and solution patterns for similar problems.

Now my question: what is Tom’s and the team’s current view, in terms of best practices, on where to put the “logics”? Because the “mainstream” of IT architecture today says: take all logic OUT of the db and be agnostic about where the data goes – a view which I do NOT share, and which some people may find provocative – so be it =)

I know from many of Tom’s posts that You are in favor of putting it INto PL/SQL stored procedures, which I have worked with successfully for many years as well. I would recommend PL/SQL for DWH as the first option in any case; for OLTP there might be some other acceptable variants … for simple reasons of efficiency.

Now there is all this talk about REST, DBaaS, microservices, etc., You name it, which strives at any cost to separate the “layers” of data, logic, and representation – to the degree that they want to be deliberately ignorant of where the data is stored at all; it could even be non-db, anything. They don’t care. And they think they are smart doing so … and they oppose stored procedures in any form, turning the database into a data dump.

Is my impression right that this kind of hip ignorance towards databases comes with considerable hidden costs? Actually, I can see it for real in a number of projects … for whom it pays off is clearly a question of which side of the table You are sitting on.

I am open to any comments!


Performance basics for row and column access control

IBM DB2 for i version 7.2 introduces a new database security capability, row and column access control (RCAC). RCAC provides the ability to control data access at the row and column level. Although RCAC is specified through SQL statements and controls all access to the enabled tables, performance is a consideration when using it. This article discusses the basic factors of RCAC performance and provides examples of the performance effects on OLTP workloads.