How Big Data Powers the Internet of Things

The Internet of Things (IoT) may sound like a futuristic term, but it’s already here and increasingly woven into our everyday lives. The concept is simpler than you may think: If you have a smart TV, fridge, doorbell, or any other connected device, that’s part of the IoT. If you’ve used an app on your phone to navigate through your day-to-day tasks, then that’s also part of the IoT. With the IoT, the future is now, but how does this connected world really work? More importantly, how can businesses get on board so they’re not left behind the competition?

The answer to both questions is big data. Big data powers the IoT, and as connectivity evolves to 5G networks, Wi-Fi capabilities expand, and the population of smartphone users keeps growing, the “big” in big data gets even bigger. Let’s look at two examples of how businesses outside the tech industry can still be part of the IoT.

  • Example 1: The region’s most popular theme park has released its own app. It does more than provide a map, schedule, and menu items (though those are important); it also uses GPS pings to identify app users standing in line, letting it display predicted wait times for rides based on crowd density, and even reserve spots or trigger attractions based on proximity (see the sketch after this list).
  • Example 2: The retail experience offers multiple avenues for gathering data. Rewards accounts immediately link transactional data to individuals, as does their activity on the store’s app. Retailers also glean data from other sources, such as social media crawlers and demographic data licensed from third parties. All of this feeds into an individual app experience through recommendations, sales, and personalized reward opportunities.
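
How might GPS-ping density translate into a predicted wait time? Here is a minimal sketch; the queue zone, coordinates, and throughput figure are all invented for illustration, not drawn from any real park app.

```python
from dataclasses import dataclass

@dataclass
class Ping:
    """One GPS ping from the park app (fields are hypothetical)."""
    user_id: str
    lat: float
    lon: float

def estimate_wait_minutes(pings, queue_zone, riders_per_minute):
    """Count distinct users pinging inside the queue zone, then divide by throughput."""
    lat_min, lat_max, lon_min, lon_max = queue_zone
    in_line = {p.user_id for p in pings
               if lat_min <= p.lat <= lat_max and lon_min <= p.lon <= lon_max}
    return len(in_line) / riders_per_minute

# Invented queue zone (bounding box) and ride throughput for one attraction.
zone = (37.8010, 37.8012, -122.4190, -122.4185)
pings = [Ping("u1", 37.80110, -122.4188),
         Ping("u2", 37.80111, -122.4187),
         Ping("u2", 37.80112, -122.4187)]  # repeat pings from one user count once
print(f"Predicted wait: {estimate_wait_minutes(pings, zone, riders_per_minute=0.5):.0f} min")
```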

These examples show how combining IoT connectivity with the continuous transmission of big data can improve the experience for businesses and their customers alike. But how does it actually work? Let’s take a closer look.

The Connection Between Big Data and IoT

To understand exactly how big data and the IoT work together, we need to examine several pieces in the overall workflow:

  1. A company installs devices whose sensors collect and transmit data.
  2. That big data (sometimes petabytes of it) is then collected, often in a repository called a data lake. Both structured data from prepared sources (user profiles, transactional information, etc.) and unstructured data from other sources (social media archives, emails and call center notes, security camera images, licensed data, etc.) reside in the data lake; a minimal sketch of this ingestion step follows the list.
  3. Reports, charts, and other outputs are generated, sometimes by AI-driven analytics platforms such as Oracle Analytics.
  4. User devices feed further metrics (settings, preferences, scheduling, metadata, and other measurable transmissions) back into the data lake, driving even heavier volumes of big data.
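
To make steps 1 and 2 concrete, here is a minimal, hypothetical sketch of a single sensor reading landing in a data-lake-style raw zone. The path and field names are illustrative only.

```python
import json
import time
from pathlib import Path

RAW_ZONE = Path("data_lake/raw")  # hypothetical landing area for unprocessed records

def ingest(reading: dict) -> None:
    """Step 2: append one sensor reading, untouched, to the data lake's raw zone."""
    RAW_ZONE.mkdir(parents=True, exist_ok=True)
    record = {"ingested_at": time.time(), **reading}
    with open(RAW_ZONE / "readings.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Step 1: a device's sensor produces a reading and transmits it.
ingest({"device_id": "sensor-42", "temperature_c": 21.7})
```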

Big data and IoT devices have a symbiotic relationship, and if an AI system is responsible for processing that data and making decisions, that adds another variable to the equation. Because big data storage is both the repository and the source of data, the more IoT devices that get connected, or the more complex the AI model, the greater the spotlight on big data hardware. Performance and processing depend on that hardware’s capacity to pull what’s needed, which is why it pays to invest in efficient hardware and optimized infrastructure design.

What Does This Mean for Business?

Let’s go back to the two examples above, the theme park and the retailer. Their uses of big data and connectivity directly affect whether people convert into customers.

Theme park example: One of the biggest reasons people avoid theme parks is the lines. But real-time data showing the status of lines, plus aggregate data that can show average wait times at specific points of the day (similar to the way Google Maps projects drive times for certain hours), makes the whole venue more accessible. It lets people maximize their time and plan around their needs, whether that means small children or just limited patience, and that in turn converts customers and builds relationships.

Retailer example: The best-rated retailer apps are the ones that provide both savings and convenience. To achieve this, combining unstructured data (like social media mentions or demographic data) with structured data (a user’s browsing history on the app) can generate smart recommendations, and even entice users with algorithm-generated coupons. For example, if a city is having a heat wave, backend analysis can detect a spike in regional fan searches, cross-reference it with a user’s browsing history, generate an in-app coupon for a specific product, and notify the user that it’s available for in-store pickup. Data and connectivity thus work together to bring the user back into the store to purchase more items at lower prices.
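
A toy version of that heat-wave scenario might look like the following. All data, names, and thresholds here are invented for illustration; a production system would rely on far richer signals and models.

```python
# Invented aggregates: regional search counts (the unstructured side) vs. a baseline.
regional_searches = {"fan": 950, "umbrella": 120}
baseline_searches = {"fan": 300, "umbrella": 110}
# Invented structured data: one user's browsing history in the app.
user_history = ["box fan", "desk fan", "sunscreen"]

def trending(term, spike_ratio=2.0):
    """A term is trending if regional searches spike well above baseline."""
    return regional_searches.get(term, 0) >= spike_ratio * baseline_searches.get(term, 1)

def coupon_for(history):
    """Match a trending term against the user's browsing history."""
    for term in regional_searches:
        if trending(term) and any(term in item for item in history):
            return f"15% off {term}s - ready for in-store pickup"
    return None

print(coupon_for(user_history))  # -> 15% off fans - ready for in-store pickup
```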

Note that in both cases, investing in big data collection and device interconnectivity pays off: it delivers the customer experience people now expect while evolving with current capabilities. From a technology perspective, this means establishing the avenues for identifying data, collecting it, and then processing and outputting it in a format that benefits both the business and the consumer.

The Path Forward

For businesses looking to explore the opportunities an IoT paradigm makes possible, there are two major areas to consider. First, ask how your business can use interconnectivity and metrics to improve your customer experience. The benefit might even be indirect, such as a system that optimizes inter-departmental communication to ultimately streamline processes for waiting customers. Second, consider the current state of your IT infrastructure. Adapting to the needs of IoT and big data requires scalability and processing speed beyond traditional hardware capabilities.

Such a decision may feel like it belongs solely to the IT department, but it is really a business decision. These opportunities deliver immediate dividends while establishing a company as forward-thinking and technology-savvy, enhancing its reputation and customer loyalty and building the technological foundation for future improvements. Creating an IoT experience requires up-front resources, but the investment is close to a necessity these days. Given how much connectivity has become part of our daily lives, failing to support big data and IoT is a sure way to fall behind the competition in today’s dynamic and connected business landscape.

To learn more about big data, IoT, and analytics, check out the following links:

To learn how you can benefit from Oracle Big Data, visit Oracle.com/Big-Data, and don’t forget to subscribe to the Oracle Big Data blog to get the latest posts sent to your inbox.

Building Upon More Secure Databases with Oracle Autonomous Database

Author: Maywun Wong

My kids love Legos, those building blocks that click together to make taller and stronger structures. Many of their sets built into a building, a ship, or a plane, and my kids would then add other Lego pieces to strengthen the model so they could play with it more. With each addition, the toy gained new capabilities and grew stronger.

The same can be said for the strength of the Oracle database. Over the past 40 years, as Oracle has released new databases that support increasingly large organizations with more complex security requirements, each generation of Oracle Database has inherited and built on the features of its predecessor.

Today, Oracle is a leader in the database market; we have built on that database legacy and expanded to the cloud, applications, and infrastructure. We continue to offer Oracle Database in many forms, including on-premises, Oracle Database Cloud Service, and all of the forms of Exadata (on-premises, Cloud at Customer, and Cloud Service), all with strong security built into their technology. Autonomous Database, in both its data warehouse and transaction processing forms, inherits many of the strengths of the rest of the Oracle database family and, as its name suggests, adds autonomous capabilities that extend to its security.

I had the opportunity to sit down with Russ Lowenthal, Director of Product Management, to discuss security on the Autonomous Database in our recent podcast “How Can a Database Secure Itself?” During the episode, he outlines the self-securing capabilities of the Autonomous Database and how Oracle has built upon its non-Autonomous Database, bringing it closer to a Software as a Service model. He shares what differentiates the security of the Autonomous Database from the non-Autonomous Database, including self-patching, automatic encryption, and the shared responsibility model between Oracle and our customers. Combined with many other security capabilities unique to Autonomous Database, these help build the next generation of databases for our customers.

I encourage you to download the latest podcast in the Autonomous Database podcast series, where you can learn more about the self-securing capabilities of the Autonomous Database as well as many other leading features which make Oracle’s latest offering unique.

How Hockey Is Embracing Big Data and Analytics

The idea of applying analytics to professional sports has been around since the sabermetrics movement emerged in baseball in the 1970s. It hit the mainstream with the phenomenon of Moneyball: the technique as used by the Oakland A’s and the ensuing book and film. However, all of the underlying statistics were tracked manually, which made sense for a sport like baseball, given its rhythmic pace and defined start and stop points.

Today, professional soccer and basketball have also started using analytics, particularly as tracking technology has moved data collection from manual entry to automation. But when people discuss analytics in hockey, the potential seems limited at first glance; after all, players skate at speeds up to 35 MPH and the puck flies up to 100 MPH. To the uninitiated, the game can appear too chaotic to quantify. Yet while teams began applying new standards of advanced statistics over the past decade, the convergence of technology and sports is about to usher in the big data era for the NHL. The result is a revolution in the potential of sports analytics, and it goes live for the 2019-20 NHL season.

How It Works

For the past decade, the NHL has employed a team of statisticians using a tool called the Hockey Information Tracking System to log things like a player’s time on ice, face-off wins, and other stats. The key word here is team: statistics were tracked manually, by multiple people, in a proprietary data entry application.

Automated data tracking technology, though, has been in development for several years and was first publicly deployed during the 2019 NHL All-Star Game in San Jose. Specially manufactured pucks contain tracking chips, and similar chips are placed in player shoulder pads. Combined with a network of sensors placed around the rink, the world’s fastest sport suddenly has big data collected during every moment of the game: position, speed, direction, distance, and other key metrics.

How many metrics make up hockey’s version of big data? Consider that each chip delivers data points roughly 200 times per second across all the categories above. Now combine that with five skaters and a goalie per team (12 tracked players plus the puck, for 13 chips in all) and 60 minutes per game, and that’s a lot of data points, especially because skater data must be reconciled between who’s on the ice and who’s on the bench. Crunching the numbers shows that 60 minutes of gameplay creates roughly 9,360,000 logged events per category in an NHL game that ends in regulation. That’s some pretty big data, and that’s before it’s parsed into standard delineations for individual player shifts (e.g., consolidated numbers for total distance skated).
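
Spelled out, the arithmetic behind that estimate looks like this:

```python
# Reproducing the article's per-category estimate for a regulation game.
POINTS_PER_SECOND = 200              # reported sampling rate per tracking chip
GAME_SECONDS = 60 * 60               # 60 minutes of regulation play
CHIPS = 2 * (5 + 1) + 1              # five skaters + one goalie per team, plus the puck

print(f"{POINTS_PER_SECOND * GAME_SECONDS * CHIPS:,}")  # 9,360,000
```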

Collecting this raw data is an achievement in itself, but the NHL is leaning into the potential of emerging technology by processing this data through artificial intelligence. The AI is designed to flag specific hockey plays as it crunches through the data: for example, whether a goal was scored on a 2-on-1 rush or a goalie was out of position. As with all AI models, the more data processed and the more plays and details flagged, all without the human error and bias built into manually tracking such a fast sport, the more robust the model becomes.
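
As a heavily simplified illustration of the idea (not the NHL’s actual model), flagging a 2-on-1 rush from positional data could start with something as crude as counting attackers and defenders past the blue line. The coordinate system and threshold below are invented.

```python
from dataclasses import dataclass

@dataclass
class PlayerPos:
    team: str   # "home" or "away"
    x: float    # feet along the rink length (coordinate system is invented)

def is_two_on_one(frame, attacking_team, blue_line_x=125.0):
    """Flag a frame where exactly two attackers face one defender past the blue line."""
    attackers = sum(1 for p in frame if p.team == attacking_team and p.x > blue_line_x)
    defenders = sum(1 for p in frame if p.team != attacking_team and p.x > blue_line_x)
    return attackers == 2 and defenders == 1

frame = [PlayerPos("home", 140), PlayerPos("home", 150), PlayerPos("away", 145),
         PlayerPos("away", 90), PlayerPos("home", 60)]
print(is_two_on_one(frame, "home"))  # True
```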

A New Level of Hockey Analytics

The first analytics revolution in hockey sought to provide context to traditional statistics such as goals and assists. This was done by applying manually tracked statistics, such as shots on goal and face-off location, in different ways, all in an effort to create deeper insight into player and team effectiveness. With completely new categories of statistics tracked in real time, the upcoming analytics revolution is poised to shatter what was previously possible. Data-driven insights can now quantify, or disprove, assumptions about players, strategies, and coaching techniques. For example, players naturally fatigue over the course of three periods, but how much does that fatigue affect top speed and reaction time? And is there a formula for distance skated or shift length that maximizes late-game speed? Detail on this level was previously unheard of, but now it will all be part of a standard real-time data package.
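
A hypothetical sketch of that kind of fatigue analysis, using invented shift data, could be as simple as averaging a skater’s top speed by period:

```python
# Invented shift records for one skater: (period, top_speed_mph).
shifts = [(1, 22.5), (1, 21.8), (2, 21.0), (2, 20.6), (3, 19.4), (3, 19.9)]

def avg_top_speed_by_period(records):
    by_period = {}
    for period, speed in records:
        by_period.setdefault(period, []).append(speed)
    return {p: sum(v) / len(v) for p, v in sorted(by_period.items())}

print(avg_top_speed_by_period(shifts))
# {1: 22.15, 2: 20.8, 3: 19.65} - a fatigue curve a coach could act on
```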

That’s a lot of data points, and a lot of people who could use that data in different ways. It’s still to be determined who will have access to what; the common assumption is that the NHL will follow the model set by other professional leagues and tier the data, so fans get some numbers, media gets another tier, and the league keeps the full complement. Who truly benefits from big data in hockey? Let’s take a closer look.

Teams: In pro sports, every single decision is about gaining an advantage. We’re well past the days when roster and coaching strategies were based solely on gut feel; in sports, as in business, results can now be quantified. With heavy volumes of data, team statisticians and data scientists can do more to tweak strategies, uncover hidden free-agent gems, and maximize player usage. Companies use big data analytics for insights that power data-driven decisions, and this is the same thing: it’s all about using results to get better results.

Media: Last year’s All-Star Game provided a hint of how real-time big data can change a TV broadcast. For live game action, player statistics such as speed can be displayed during play to give viewers more data. Replays can be completely changed, as new information about distance, speed, and space provides context into how and why plays happened.

Revenue Streams: The primary role of big data in hockey focuses on evaluating player and team performance, whether for internal strategy or in the media. However, an indirect benefit of this data comes from new revenue streams. Licensing the data, sponsoring data milestones on broadcasts (e.g., “Tonight’s fastest skater presented by Hertz”), even seeing how the data supports new gambling lines: all of that has been discussed by the NHL internally. So while implementing big data and analytics involves an investment, it also provides a means to new revenue.

Fans: The more information fans get, the better. Whether that’s during a game broadcast, via an app with real-time data while sitting in an arena, or aggregated on sports websites after the fact, this data—and the ensuing embrace of new analytics models—provides fans with greater understanding of the how, why, and when of hockey. That understanding ultimately brings them closer to the game, which is what any professional sports team wants in building fan relations.

As with all technology advances, this represents just the beginning of the NHL’s big data era. In coming years, analytics models will improve, more tracking possibilities will open up, and media will have a better understanding of how this can all be optimally integrated into their presentations. Players themselves may even change, adapting training and nutrition to insights provided by big data. The scope of this revolution can be judged by the way big data has impacted other industries, such as the way manufacturing processes are now optimized for quality and procurement, or the way healthcare uses data to streamline the patient experience. For the NHL, this all starts with the first face-off of the 2019-20 season—a face-off featuring a microchip in a puck.

New to the concepts of big data and analytics? Subscribe to Oracle’s Big Data Blog or check out the following links to learn more:

Five Different Ways Businesses Use Big Data

Can your business use big data?

Yes, absolutely.

How you use big data depends on a number of things. Big data is all about insight: the sheer volume of numbers and metrics provides enough scope and scale to paint a clear picture of, well, whatever it’s applied to. Processes, customer behavior, logistical issues: all of these can be identified, drilled into, and segmented with big data. Then, coupled with tools like analytics and machine learning, your business has all the capabilities for data-driven decisions that elevate and accelerate your goals.

So what can you do with all that? Let’s take a look at five very different real-world scenarios. The following examples show how flexible—and powerful—big data can be, in practically any situation.

Health Care: Simplifying Logistics

As medical records become electronic, big data’s ability to streamline processes extends to both health care management and patients. On the management side, big data can reveal many critical variables that affect staffing and logistics. For example, it’s a given that cold and flu season will necessitate more patient visits, but identifying variables such as weather, proximity to holiday travel, the percentage of patients who have received flu shots, and other individual factors can provide a bigger picture.

The result allows health care facilities to appropriately manage everything from staff size to time allocated for booking appointments to stocking flu shots and other seasonal supplies. This ultimately benefits patients, who gain more transparency and accessibility in fulfilling their needs. At the same time, big data allows an organization’s data scientists to develop models for things like patient reminders or identifying who is at risk from (or could benefit from) new medical research.
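
As a toy illustration of combining those variables, consider the following sketch. The weights and inputs are invented, not a real staffing model.

```python
def project_visits(baseline, flu_season, post_holiday_travel, flu_shot_rate):
    """Scale a baseline weekly visit count by invented seasonal factors."""
    multiplier = 1.0
    if flu_season:
        multiplier += 0.30              # seasonal uptick
    if post_holiday_travel:
        multiplier += 0.10              # travel-driven exposure
    multiplier -= 0.25 * flu_shot_rate  # vaccination dampens visit volume
    return round(baseline * multiplier)

print(project_visits(baseline=400, flu_season=True,
                     post_holiday_travel=True, flu_shot_rate=0.6))  # -> 500
```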

Banking: Minimizing Fraud

Fraudulent activity is the nemesis of the banking industry. When fraud happens, it takes up valuable time and resources from all parties: the victims, the bank’s staff, and the merchant that processed the fraudulent purchase. It also damages trust, which is perhaps the most important element of banking. The longer fraud goes undetected, the more people get hurt and the more resources get drained. Big data, however, is the most significant innovation in fraud prevention in decades.

For the banking industry, big data means countless bytes of information (transactions, metrics, payments, etc.) that provide detail into user behavior. At scale, this is a blueprint for how money is used. Coupled with machine learning and analytics, patterns can be identified, and as the models improve, anomalies become much easier to spot. This allows banks to catch fraudulent behavior as soon as it starts, minimizing the chance that it spreads and damages more accounts.
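
One classic, minimal approach is to flag transactions whose amounts deviate sharply from a user’s history. Real systems use far richer features and models, but a sketch might look like this:

```python
import statistics

def is_anomalous(amount, history, z_threshold=3.0):
    """Flag a transaction whose amount is a z-score outlier against the user's history."""
    if len(history) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

history = [42.0, 18.5, 60.0, 35.0, 27.5, 49.0]
print(is_anomalous(44.0, history))    # False - in line with past spending
print(is_anomalous(2500.0, history))  # True - worth an immediate review
```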

Manufacturing: Identifying Bottlenecks

The manufacturing process has many moving parts in a workflow, from parts procurement to final quality control. Each of those steps comes with numerous variables: for example, procurement may stall with vendor inventory problems or shipping delays, and assembly may have an issue with tool or machine failure. By applying digitally tracked metrics to all of these steps and ingesting large volumes of records, big data can act as the foundation for identifying potential sources of bottlenecks.

This can work both directly and indirectly. As an example of direct improvement, big data can show whether a certain inventory provider is consistently late in shipping or is the source of quality failures; in that case, big data is the flag that leads to an eventual vendor change. As an example of indirect improvement, big data can help procurement teams identify ways to maximize vendor discounts, freeing up budget to apply at other levels (e.g., new assembly machines or more quality control staff).
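
A minimal sketch of the direct case, aggregating invented shipment records to surface a consistently late vendor:

```python
from collections import defaultdict

# Invented shipment records: (vendor, days_late); negative means early.
shipments = [("Acme Parts", 3), ("Acme Parts", 5), ("Acme Parts", 4),
             ("Globex Supply", 0), ("Globex Supply", -1), ("Globex Supply", 1)]

delays = defaultdict(list)
for vendor, days_late in shipments:
    delays[vendor].append(days_late)

averages = {v: sum(d) / len(d) for v, d in delays.items()}
late_vendors = [v for v, avg in averages.items() if avg > 2]

print(averages)      # {'Acme Parts': 4.0, 'Globex Supply': 0.0}
print(late_vendors)  # ['Acme Parts'] - a data-backed case for a vendor change
```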

Software: Identifying User Behavior

When software is released, be it a video game or a workplace application, the development team’s goal is to have all of its features properly and regularly used. This, of course, isn’t always the case. But the how and why of feature usage can be explained using big data. Metrics can identify which features are used, not just activated, and how long users remain engaged. They can also tell you whether any bugs or failures were triggered, and what else was active at the time.

Analytics tools can then break this data down into more isolated segments to create definitive views. For example, perhaps a crash always happens within one of a piece of software’s features, but only when another feature is concurrently active. Big data collects these situational metrics to build a roadmap for future iteration, whether that means fixing buggy features or retiring them for lack of user interest.
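
A minimal sketch of that kind of analysis, using invented session logs: count which feature pairs co-occur in crash reports and surface the most frequent combination.

```python
from collections import Counter
from itertools import combinations

# Invented crash logs: the set of features active when each crash occurred.
crash_sessions = [
    {"export", "dark_mode"},
    {"export", "dark_mode", "autosave"},
    {"export", "dark_mode"},
    {"autosave"},
]

pair_counts = Counter()
for features in crash_sessions:
    for pair in combinations(sorted(features), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))
# [(('dark_mode', 'export'), 3)] - a feature interaction worth investigating
```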

Government: Optimizing Resources

All branches of government deal with massive amounts of data. Stereotypical jokes about government bureaucracy hold a certain level of truth, but in the digital world, all that paperwork has moved online. This actually turns a negative into a positive: all that paperwork laid the foundation for the metrics to be tracked in the digital space. With big data, that information suddenly becomes dynamic and fluid, and in many cases more accurate, as clerical errors are minimized.

This leads to an overhaul of resource usage in many ways. Big data can lead to the development of automated processes, which optimize human resources to more appropriate uses. Big data can also provide insight into things like traffic patterns and utility usage, identifying problems and creating a path to infrastructure improvement.

Big Data: The Future of Everything

The above five examples stem from vastly different businesses and industries, but they all have one thing in common: they show how data can identify problems in almost any circumstance. As device technology and data communications evolve, the volume of data keeps growing, which means big data will only get bigger. At the same time, analytics tools and machine learning/artificial intelligence are growing more powerful as well.

The amount of connectivity in our world is only going to increase, and the importance of big data for any organization in any industry is only going to grow. The lesson? Regardless of what you do or how you do it, there’s a way to integrate big data into your processes and workflows. In fact, doing so isn’t just a good idea; it may be the best way to future-proof your business. Because if you’re not integrating big data into your organization, chances are your competition is already way ahead of you.

Want to learn more? The following links will get you up to speed on big data and related technologies:

Also, don’t forget to subscribe to the Oracle Big Data blog and get the latest posts sent to your inbox.

Continuous Availability and Extreme Scalability with Oracle Key Vault

Taylor Lewis, Product Marketing Manager

Oracle is proud to announce the general availability (GA) release of Oracle Key Vault 18. Oracle Key Vault enables organizations to quickly deploy encryption and other security solutions by centrally managing encryption keys, Oracle Wallets, Java Keystores, and credential files.

Oracle Key Vault 18 introduces new multi-master clustering functionality, providing unprecedented improvements in the availability and scalability of key management operations, while significantly reducing the operational burden. Organizations can now group up to 16 nodes to form a multi-master cluster that can be deployed across geographically distributed data centers.

Databases can connect to any node in the Oracle Key Vault cluster to get encryption keys. Any updates to keys or changes to authorization rules are quickly replicated to all other Oracle Key Vault nodes. If the Oracle Key Vault connection fails or an Oracle Key Vault node goes down for any reason, the database servers transparently fail over to nearby active Oracle Key Vault nodes.
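
As a generic illustration of that failover pattern (this is not Oracle’s client code, and the endpoints are invented), a client can simply walk an ordered list of nodes and use the first one that responds:

```python
# Invented node addresses; a real deployment would use its own endpoints.
NODES = ["kv-node-a.example.com", "kv-node-b.example.com", "kv-node-c.example.com"]

def fetch_key(key_id, request_fn):
    """Try each node in order; raise only if every node is unreachable."""
    last_error = None
    for node in NODES:
        try:
            return request_fn(node, key_id)     # any healthy node can serve the key
        except ConnectionError as error:
            last_error = error                  # node down: fail over to the next one
    raise RuntimeError("no key management node reachable") from last_error
```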

Oracle Key Vault provides key management for Oracle Database 11g Release 2 and later releases, including Oracle Database 19c, and is the only enterprise-grade key management solution tightly integrated with Oracle databases including support for Oracle Transparent Data Encryption (TDE). Please visit the Oracle Cloud Security blog to learn more about the Oracle Key Vault Release.

What’s the Connection Between Big Data and AI?

When people talk about big data, are they simply referring to numbers and metrics?

Yes.

And no.

Technically, big data is simply bits and bytes—literally, a massive amount (petabytes or more) of data. But to dismiss big data as mere ones and zeroes misses the point. Big data may physically be a collection of numbers, but when placed against proper context, those numbers take on a life of their own.

This is particularly true in the realm of artificial intelligence (AI). AI and big data are intrinsically connected; without big data, AI simply couldn’t learn. The team in charge of Oracle’s Cloud Business Group (CBG) Product Marketing likens big data to the human experience: on Oracle’s Practical Path to AI podcast episode “Connecting the Dots Between Big Data and AI,” team members compare the AI learning process to the way humans learn.

The short version: the human brain ingests countless experiences every moment. Everything taken in by the senses is technically a piece of information, or data: a note of music, a word in a book, a drop of rain, and so on. Infant brains learn from the moment they start taking in sensory information, and the more they encounter, the more they are able to assimilate and process, then respond to in new and informed ways.

AI works similarly: the more data an AI model encounters, the more intelligent it can become. Over time, as more and more data is processed through the AI model, its outputs become increasingly meaningful. In that sense, AI models are trained by big data, just as human brains are trained by the data accumulated through lived experience.

And while this may all seem scary at first, there’s a definite public shift toward trusting AI-driven software. Oracle’s CBG team discusses this further on the podcast episode, and it all goes back to the idea of human experiences. In the digital realm, people now have the ability to document, review, rank, and track those experiences. That knowledge becomes data points in big data, which are fed into AI models that begin validating or invalidating the experiences. With enough of a sample size, a determination can be made based on “a power of collective knowledge” that grows along with the network.

However, that doesn’t mean that AI is the authority on everything, even with all the data in the world.

To hear more about this topic—and why human judgment is still a very real and very necessary part of, well, everything—listen to the entire podcast episode Connecting the Dots Between Big Data and AI and be sure to visit Oracle’s Big Data site to stay on top of the latest developments in the field of big data.

Guest author Michael Chen is a senior manager, product marketing with Oracle Analytics.

Check out the 19c New Features Learning Paths

Curious about Oracle Database 19c features? Well, if you liked the 18c learning path series, including the Apply Oracle 18c Database New Features learning path, then you are in luck.

The Database User Assistance group is happy to announce two new learning paths for 19c:

The learning paths provide a set of detailed tutorials that will help you explore new features of the database from your laptop. To get started, download the database here.

Let us know what you think!

Microsoft and Oracle to Interconnect Microsoft Azure and Oracle Cloud


Microsoft Corp. and Oracle Corp. on Wednesday announced a cloud interoperability partnership enabling customers to migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud. Enterprises can now seamlessly connect Azure services to Oracle Cloud services. Many enterprises already run their business on a combination of Microsoft and Oracle and have invested heavily in both companies’ solutions for years. Now, for the first time, organizations can develop and leverage Microsoft and Oracle cloud services simultaneously, enabling easier migration of on-premises applications, the use of a broader range of tools, and the ability to take advantage of existing investments across both clouds.

As a result of this expanded partnership, the companies are today making available a new set of capabilities:

  • Connect Azure and Oracle Cloud seamlessly, allowing customers to extend their on-premises data centers to both clouds. This direct interconnect is available starting today between Ashburn (North America) and Azure US East, with plans to expand to additional regions in the future.

  • Unified identity and access management, via a single sign-on experience and automated user provisioning, to manage resources across Azure and Oracle Cloud. Also available in early preview today, Oracle applications can use Azure Active Directory as the identity provider and for conditional access.

  • Supported deployment of custom applications and packaged Oracle applications (JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft, Oracle Retail, Hyperion) on Azure with Oracle databases (RAC, Exadata, Autonomous Database) deployed in Oracle Cloud. The same Oracle applications will also be certified to run on Azure with Oracle databases in Oracle Cloud.

  • A collaborative support model to help IT organizations deploy these new capabilities while enabling them to leverage existing customer support relationships and processes.

  • Oracle Database will continue to be certified to run in Azure on various operating systems, including Windows Server and Oracle Linux.

More information about specific cross-cloud capabilities, use cases, business advantages, and more can be found here: https://blogs.oracle.com/cloud-infrastructure/oracle-microsoft-azure-alliance
