Easy, Economical Cloud DR to AWS with RecoverPoint for Virtual Machines

The most recent RecoverPoint for Virtual Machines release, v5.2.1, adds the capability to protect VMs directly to AWS S3 object storage using proprietary snap-based replication, with an RPO that can be measured in minutes. This blog recaps the capabilities that Cloud DR 18.4 unlocks for RecoverPoint for Virtual Machines. RecoverPoint for Virtual Machines works with Cloud DR to protect your VMs by replicating them to the AWS cloud. Replicated data is compressed, encrypted, and stored as incremental snapshots on Amazon S3 object storage. You can set parameters around the snap-replication policies for reliable and repeatable Disaster … READ MORE


Data Domain Cloud Tier: The Right Choice for Long Term Retention

The Data Domain Family has been the number one choice in the Purpose-Built Backup Appliance market. Data Domain provides best-in-class deduplication and scale for your data protection needs, including the ability to tier data to the cloud of your choice via Data Domain Cloud Tier. With Data Domain Cloud Tier you can send your data directly from the DD appliance to any of the validated and supported cloud object storage providers (public, private, or hybrid) for long-term retention needs. Data Domain Cloud Tier (DD Cloud Tier) is a function of … READ MORE




Data Warehouse 101: Setting up Object Store

In the previous posts we discussed how to set up a trial account, provision Oracle Autonomous Data Warehouse, and connect using SQL Developer.

Get Started With a Free Data Warehouse Trial

The next step is to load data. There are multiple ways of uploading data for use in Oracle Autonomous Data Warehouse. Let's explore how to set up OCI Object Store and load data into it.

Here are step-by-step instructions on how to set up OCI Object Store, load data, and create an auth token and database credential for users.

  • From the Autonomous Data Warehouse console, pull out the left side menu from the top-left corner and select Object Storage. To revisit signing-in and navigating to ADW, visit our introduction to data warehouses.

To learn more about OCI Object Storage, refer to its documentation.

  • You should now be on the Object Storage page. Choose the root compartment in the Compartment dropdown if it is not already chosen.

Create a Bucket for the Object Storage

In OCI Object Storage, a bucket is a container that holds multiple files.

  • Click the Create Bucket button:

  • Name your bucket ADWCLab and click the Create Bucket button.

Upload Files to Your OCI Object Store Bucket

  • Click on your bucket name to open it:

  • Click on the Upload Object button:

  • Using the browse button or drag-and-drop, select the file you downloaded earlier and click Upload Object:

  • Repeat this for all files you downloaded for this lab.
  • The end result should look like this with all files listed under Objects:
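These console steps are all you need, but if you prefer to script the bucket creation and uploads, a rough sketch using the OCI Python SDK might look like the following. It assumes the oci package and a configured ~/.oci/config file; the file name in the loop is a placeholder for your downloaded lab files:

import oci

# Load the default OCI config (~/.oci/config) and build an Object Storage client
config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

# Every tenancy has a unique Object Storage namespace
namespace = object_storage.get_namespace().data

# Create the ADWCLab bucket in the root compartment (the tenancy OCID)
object_storage.create_bucket(
    namespace,
    oci.object_storage.models.CreateBucketDetails(
        name="ADWCLab",
        compartment_id=config["tenancy"],
    ),
)

# Upload each downloaded lab file as an object in the bucket
for file_name in ["your_lab_file.csv"]:  # placeholder; list your files here
    with open(file_name, "rb") as f:
        object_storage.put_object(namespace, "ADWCLab", file_name, f)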

Construct the URLs of the Files on Your OCI Object Storage

  • Construct the base URL that points to the location of your files staged in OCI Object Storage. The URL is structured as follows; the values you need to supply are shown in angle brackets:

https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<tenant_name>/<bucket_name>/

  • The simplest way to find this information is to look at the details of your recently uploaded files.

  • In the example below, the region name is us-phoenix-1, the tenant name is labs, and the bucket name is ADWCLab. This is all the information you need to construct the Swift storage URL above.

  • Save the base URL you constructed to a note. We will use the base URL in the following steps.
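If you would rather compute the base URL than assemble it by hand, here is a minimal Python sketch using the example region, tenant, and bucket names from above; substitute your own values:

region_name = "us-phoenix-1"  # example region from above
tenant_name = "labs"          # example tenant name; use your tenancy
bucket_name = "ADWCLab"       # the bucket created earlier

base_url = (
    f"https://swiftobjectstorage.{region_name}.oraclecloud.com"
    f"/v1/{tenant_name}/{bucket_name}/"
)
print(base_url)
# -> https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/labs/ADWCLab/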

Creating an Object Store Auth Token

To load data from Oracle Cloud Infrastructure (OCI) Object Storage, you will need an OCI user with the appropriate privileges to read (or upload) data in the Object Store. The communication between the database and the Object Store relies on the Swift protocol and the OCI user Auth Token.

  • Go back to the Autonomous Data Warehouse Console in your browser. From the pull-out menu on the top left, under Identity, click Users.

  • Click the user’s name to view the details. Also, remember the username as you will need that in the next step. This username could also be an email address.

  • On the left side of the page, click Auth Tokens.

  • Click Generate Token.

  • Enter a friendly description for the token and click Generate Token.

  • The new Auth Token is displayed. Click Copy to copy the Auth Token to the clipboard. You probably want to save this in a temporary notepad document for the next few minutes (you’ll use it in the next step).

    Note: You can’t retrieve the Auth Token again after closing the dialog box.
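The console flow above is the simplest path; if you manage users programmatically, a rough equivalent with the OCI Python SDK (again assuming a configured ~/.oci/config, with your own user OCID) is:

import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

# Generate a new auth token for the user; the description is free text
token = identity.create_auth_token(
    oci.identity.models.CreateAuthTokenDetails(description="ADW lab token"),
    user_id=config["user"],  # the OCID of the user in your OCI config
).data

# As in the console, the token value is only returned at creation time
print(token.token)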

Create a Database Credential for Your User

To access data in the Object Store, your database user must authenticate with the Object Store using your OCI Object Store account and Auth Token. You do this by creating a private CREDENTIAL object for your user; the information is stored encrypted in your Autonomous Data Warehouse and is usable only by your user schema.

  • Connected as your user in SQL Developer, copy and paste the code snippet below into a SQL Developer worksheet.

Specify the credentials for your Oracle Cloud Infrastructure Object Storage service: The username will be your OCI username (usually your email address, not your database username) and the password is the OCI Object Store Auth Token you generated in the previous step. In this example, the credential object named OBJ_STORE_CRED is created. You reference this credential name in the following steps.
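A minimal version of the snippet, based on the documented DBMS_CLOUD.CREATE_CREDENTIAL procedure and the credential name used in this lab, looks like this (substitute your own OCI username and Auth Token):

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'your.name@example.com',        -- your OCI username
    password        => '<auth_token_from_previous_step>' -- the Auth Token
  );
END;
/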

  • Run the script.

  • Now you are ready to load data from the Object Store.

Loading Data Using the Data Import Wizard in SQL Developer

  • Right-click 'Tables' in your user schema object tree to open the context-sensitive menu in SQL Developer, then select 'Import Data':

When you are satisfied with the data preview, click NEXT.

Note: If you see an object not found error here, your user may not be set up properly to have data access to the object store. Please contact your Cloud Administrator.

  • On the Import Method page, you can click on Load Options to see some of the available options. For this exercise, leave the options at their defaults. Enter CHANNELS_CLOUD as the table name and click NEXT to advance to the next page of the wizard.

  • On the Column Definition page, you can control how the fields of the file map to columns in the table. You can also adjust certain properties such as the Data Type of each column. This data needs no adjustment, so simply proceed by clicking NEXT.

  • The last screen before the final data load lets you test with a larger row count than the sample used at the beginning of the wizard, to check whether your earlier choices suit the full data load. Note that no data is actually loaded into your database during these tests. Click TEST and review the Test Results log: the data that would be loaded, any errors, and the external table definition generated from your inputs.

When done with your investigation, click NEXT.

  • The final screen reflects all your choices made in the Wizard. Click FINISH when you are ready to load the data into the table.

In the next series of posts, we will explore different industries, review industry data sets, query the data, and analyze industry problems with the help of visualizations:

Data Warehouse and Visualizations for Flight Analytics

Data Warehouse and Visualizations for Credit Risk Analysis

Written by Sai Valluri and Philip Li


Making the Most of Data – and the Opportunity it Represents

Dell EMC is leading the field in Distributed File Systems and Object Storage for a reason… Digital transformation is often discussed in terms of processes, services and applications. But what’s really driving it is data: management of it, access to it, and the ability to analyse it and put it to use. And of course, it’s growing at a phenomenal rate – especially when it’s unstructured. According to Gartner, by 2022, more than 80% of enterprise data will be stored in scale-out storage systems in enterprise and cloud data centers, up from 40% in 2018¹. Organizations … READ MORE




Re: Cloudboost support for Oracle Cloud Storage Classic

Hello,

Do you have any plans to support Oracle Cloud Storage Classic as an Object Storage repository for CloudBoost?

It supports S3 and OpenStack Swift connections.

I tried to connect using generic OpenStack Swift with CloudBoost 18.1 and 2.2.3, but I was not successful. Here are some of my tests:

admin@mag-fs> diagnostics blobstore-cli "--provider openstack-swift --endpoint https://XXXXXXXXX.br.storage.oraclecloud.com/v1/Storage-XXXXXXXXX --identity Storage-XXXXXXXXX:ti.suporte.infra@XXXXXXXXX.com.br --credential XXXXXXXXX" validate

Running BSV CLI with arguments: --provider openstack-swift --endpoint https://XXXXXXXXX.br.storage.oraclecloud.com/v1/Storage-XXXXXXXXX --identity Storage-XXXXXXXXX:ti.suporte.infra@XXXXXXXXX.com.br --credential XXXXXXXXX validate

Code:

255

Output:

Failed in command: blobstore.cli.commands.Validate@3e10dc6, org.jclouds.http.HttpResponseException: command: POST https://XXXXXXXXX.br.storage.oraclecloud.com/v1/Storage-XXXXXXXXX/tokens HTTP/1.1 failed with response: HTTP/1.1 400 Bad Request; content: [<html><body>Account or Container PUT or POST call cannot contain a message body</body></html>]

Error:

admin@mag-fs> diagnostics blobstore-cli "--provider openstack-swift --endpoint https://XXXXXXXXX.br.storage.oraclecloud.com/v1/Storage-XXXXXXXXX/teste --identity Storage-XXXXXXXXX:ti.suporte.infra@XXXXXXXXX.com.br --credential XXXXXXXXX" validate

Running BSV CLI with arguments: --provider openstack-swift --endpoint https://XXXXXXXXX.br.storage.oraclecloud.com/v1/Storage-XXXXXXXXX/teste --identity Storage-XXXXXXXXX:ti.suporte.infra@XXXXXXXXX.com.br --credential XXXXXXXXX validate

Code:

13

Output:

Error:

admin@mag-fs> diagnostics blobstore-cli "--verbose --provider openstack-swift --endpoint https://XXXXXXXXX.br.storage.oraclecloud.com/auth/v1.0 --identity Storage-XXXXXXXXX:ti.suporte.infra@XXXXXXXXX.com.br --credential XXXXXXXXX" validate

Running BSV CLI with arguments: --verbose --provider openstack-swift --endpoint https://XXXXXXXXX.br.storage.oraclecloud.com/auth/v1.0 --identity Storage-XXXXXXXXX:ti.suporte.infra@XXXXXXXXX.com.br --credential XXXXXXXXX validate

Code:

255

Output:

Failed in command: blobstore.cli.commands.Validate@7e22550a, org.jclouds.rest.ResourceNotFoundException: org.jclouds.http.HttpResponseException: command: POST https://XXXXXXXXX.br.storage.oraclecloud.com/auth/v1.0/tokens HTTP/1.1 failed with response: HTTP/1.1 404 Not Found; content: [<html><body>Sorry, but the content requested does not seem to be available. Try again later. If you still see this message, then contact Oracle Support.</body></html>]

Error:

Thanks and regards



Magic Three-peat: Gartner Recognizes Dell EMC as a Leader in Distributed File and Object Storage for Third Year in a Row

For the third straight year, Dell EMC has been recognized by Gartner as a leader in the 2018 Magic Quadrant for Distributed File Systems and Object Storage. We feel the report evaluates Distributed File and Object Storage vendors that help enterprises manage the rapid growth in unstructured data. Per Gartner, by 2022, more than 80% of enterprise data will be stored in scale-out storage systems in enterprise and cloud data centers, up from 40% in 2018. Around the world, organizations are realizing that data is their most valuable asset. As a leader in this space, Dell … READ MORE




ECS Appliance: ECS Guidance on 500 error rate response in ECS

Article Number: 504612 Article Version: 4 Article Type: How To



ECS Appliance, ECS Appliance Hardware, Elastic Cloud Storage

When performing normal Create, Read, Update, Delete (CRUD) operations against the ECS object storage system, there may be occasions when 500 errors are experienced.

The official definition of a 500 error is

“The server encountered an unexpected condition which prevented it from fulfilling the request.”

Currently, Dell EMC advises that a 500 failure rate of up to 0.1% falls within acceptable parameters. Any value greater than this requires investigation, and an SR can be opened with our support team.

Retry mechanism from customer application

When an application encounters this specific error it should retry the request. For better flow control we recommend a backoff algorithm: use progressively longer waits between retries for consecutive error responses. For example, you might wait 2 seconds before the first retry, 3 seconds before the second, 4 seconds before the third, and so on. Alternatively, use an exponential backoff: 1 second before the first retry, 4 seconds before the second, 16 seconds before the third, and so on. Ultimately, the application development team should decide on the retry policy. However, if the request has not succeeded after a minute, the problem might be a hard limit rather than the request rate; for example, you may have reached the maximum number of pipelines allowed. Set the maximum number of retries so that retrying stops after about one minute.
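A minimal Python sketch of this retry pattern follows. It is generic rather than ECS-specific: the request function is a placeholder for whatever client call your application makes, returning an HTTP status code.

import time

def send_with_retry(do_request, max_wait_seconds=60):
    """Retry a request on HTTP 500, with exponential backoff capped at ~1 minute."""
    wait, elapsed = 1, 0
    while True:
        status = do_request()  # placeholder: issue the CRUD request
        if status != 500:
            return status      # success, or an error that should not be retried
        if elapsed + wait > max_wait_seconds:
            raise RuntimeError("still failing after ~1 minute; suspect a hard limit")
        time.sleep(wait)       # progressively longer waits between retries
        elapsed += wait
        wait *= 2              # exponential backoff: 1s, 2s, 4s, 8s, ...

# Example: a request that returns 500 twice, then succeeds
attempts = iter([500, 500, 200])
print(send_with_retry(lambda: next(attempts)))  # -> 200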


Dell EMC Azure Stack "Storage-as-a-Service" with Isilon

By Karthik Angamuthu | Dell EMC | Sr. Product Manager for Azure Stack



Dell EMC Azure Stack

Azure Stack is designed to help organizations deliver Azure services from their own data centers by allowing end users to 'develop once and deploy anywhere' (public Azure or on premises). Customers can now take full advantage of cloud for applications that otherwise could not live in the cloud, whether due to regulations, data sensitivity, edge use cases, or the location of the data.

Dell EMC co-engineers this solution with Microsoft, adding value through automated deployment, patching, and updates, along with integration of key solutions to meet our customers' holistic needs. One such value-add is enabling our Azure Stack customers to expand file storage.

Why should you care? – Because of the storage limitation!

Storage Limitation

Azure Stack storage is a set of cloud storage services, including blobs, tables, and queues, that is consistent with public Azure storage services and built on Storage Spaces Direct (S2D). It is important to note that while 'file storage' is supported in public Azure, it is currently not supported on Azure Stack. This means that any tenant workload that needs to access files via SMB/NFS has to use external NAS storage residing outside the stack. Additionally, since Azure Stack is built on a hyper-converged architecture, the capacity of S2D storage is inherently limited.

What is the solution? – Isilon. Though there are many external file storage options available, Isilon is an excellent option for the following reasons.

Isilon

Isilon is a scale-out network-attached storage (NAS) platform offered by Dell EMC for high-volume storage, backup, and archiving of unstructured data. Isilon offers extremely high, highly cost-effective scalability, along with enterprise-class features such as:

  • Performance monitoring with InsightIQ,
  • Tiering of data with SmartPools,
  • Quota management with SmartQuotas,
  • Data protection with SnapshotIQ,
  • Data replication with SyncIQ,
  • High availability and load balancing with SmartConnect,
  • Deduplication with SmartDedupe,
  • Data retention with SmartLock,
  • Stringent compliance and governance needs through WORM capability and
  • Last but not least, seamless tiering of cold or frozen data to public Azure storage with CloudPools.

Isilon supports SMB, NFS, HTTPS object storage, and HDFS, among other protocols. And, there are three different Isilon platform families to meet the performance needs of the data, all running Isilon OneFS:

  1. The all-flash F-series, focusing on extreme performance and scalability for unstructured data applications and workloads
  2. The hybrid H-series, which seek to balance performance and capacity
  3. The archive A-series, for both active and deep archive storage

Configuring your file share

OK, how do you set things up so that tenant workloads can access external file shares? Simple: if you have a NAS such as Isilon sitting outside of the stack, your tenants can directly map the file shares via SMB/NFS as long as network connectivity is in place. It does, however, require careful consideration, planning, and administration.

Your feedback

Whether you are an existing Isilon customer purchasing Azure Stack or a new Azure Stack customer that needs file storage for documents, a backup vault, or analytics workloads with HDFS, you asked us for a simple, scalable storage solution that is easy to manage with Azure Stack. We heard you!

Multi-tenancy and Administration complexity

Isilon supports multi-tenancy such that each Azure Stack tenant can access specific sub-folders of Isilon storage under a single namespace. You can set up access zones, network segmentation with groupnets and subnets, and authentication providers (AD/LDAP, etc.), along with quotas and policies for each tenant on Isilon. While the initial setup may not be complex, ongoing administration can be non-trivial: for example, when Isilon and Azure Stack are managed independently and the cloud admin needs to onboard or offboard a tenant, or a tenant needs more capacity or a change to backup configurations. You have to manage this workflow carefully against SLAs and must plan for the overhead.

An ideal way to reduce this overhead is to give cloud admins the autonomy to manage tenants' storage: cloud admins should be able to simply manage storage capacity and storage services for their tenants, while tenants get the flexibility to self-manage their storage space and users within their respective orgs. Needless to say, this must be done with a consistent Azure user experience.

That is what we have done with our VConnect plug-in for Isilon and Azure Stack.

How does it work?

Once Isilon is set up and multi-tenancy is enabled by Isilon admins, Azure Stack admins can deploy our VConnect plug-in for Isilon and enable it through plans and offers for their tenants. Tenants can then subscribe to the storage services under that plan and self-manage storage capacity, users, and access.

[Diagram: Azure Stack and Isilon setup]

Step-by-Step Instructions

A detailed step-by-step guide to setting up and using the VConnect plug-in for Isilon is shown in the screenshot series below.

[Screenshots: Isilon setup, steps 1 through 6]

Opportunities

Whether you are a service provider or an enterprise customer, you now have the ability to offer Storage-as-a-Service to your customers, with enterprise-class storage features such as high availability, data protection, security, and more, using Azure Stack and Isilon.

How to order

This solution is currently available for purchase through our solution partner CloudAssert. You can reach out to your Dell EMC Azure Stack sales specialist to buy this.

For inquiries, questions and comments, please reach out to Dell EMC Azure Stack product manager Karthik Angamuthu at Karthik_Angamuthu@dell.com


DELL EMC Data Domain Cloud Verify Tool


The following notes are intended for anyone who will be configuring, installing, and supporting Data Domain Cloud Tier.

An understanding of these technical notes requires an understanding of the following:

  • Data Domain protection technology and connectivity

  • Local and Wide Area IP network design

  • Cloud-based object storage concepts

  • Data Domain concepts and components

The cloud verify tool was introduced in DD OS 6.1.1.5 and later, and requires admin or limited-admin privileges to run.

This is a very useful tool that helps identify technical issues with connectivity, access levels, endpoint verification, and S3 API validation.



The steps involved are as follows:



  • Cloud Enablement Check
  • Connectivity Check
  • Account Validation
  • S3 API Validation
  • Cleaning Up

Let us see what the above steps do.

Cloud enablement check: Verifies that DD Cloud Tier is enabled on the Data Domain system, and the appropriate license, passphrase, and configuration are set.

Connectivity check: Verifies the existence of the correct certificate, and tests the connection to the cloud provider endpoint.

User account validation: Creates a test cloud profile and bucket based on the specified configuration values.

Cloud provider validation: Verifies the cloud provider supports the S3 operations required for DD Cloud Tier.

The S3 API operations validated are:



PUT – Bucket create, Write object

LIST – Bucket list, Object list

GET – Object read

DELETE – Object delete

BULK DELETE

Cleaning up: This deletes the temporary bucket created on the object store. All test data, including the test bucket, created by this command is automatically deleted when the cloud provider verification is complete.

How to run this tool: This tool is built into the Data Domain OS; you do not need to download anything to run it.



From GUI

Go to Data Management > File System > Cloud Units



Click '+Add' and enter the required details: the desired cloud unit name, cloud provider, storage region, access and secret keys, and a proxy if desired.



Once all the information is entered, click Verify under Cloud Verification.






Command Line Interface

We've introduced a new command, cloud provider verify. Provide the required information and the verification is performed accordingly.

# cloud provider verify

This operation will perform test data movement after creating a temporary profile and bucket.

Do you want to continue? (yes|no) [yes]: yes
Enter provider name (aws|azure|virtustream|ecs|s3_flexible): aws
Enter the access key: xxxxxxxxxxxxxxxxx
Enter the secret key: xxxxxxxxxxxxxxxxxxxxxxxxxxx
Enter the region (us-east-1|us-west-1|us-west-2|eu-west-1|ap-northeast-1|ap-southeast-1|ap-southeast-2|sa-east-1|ap-south-1|ap-northeast-2|eu-central-1): us-east-1

Verifying cloud provider …
This process may take a few minutes.

Cloud Enablement Check:
  Checking Cloud feature enabled: PASSED
  Checking Cloud volume: PASSED

Connectivity Check:
  Validating certificate: PASSED
  Checking network access: PASSED

Account Validation:
  Creating temporary profile: PASSED
  Creating temporary bucket: PASSED

S3 API Validation:
  Validating Put Bucket: PASSED
  Validating List Bucket: PASSED
  Validating Put Object: PASSED
  Validating Get Object: PASSED
  Validating List Object: PASSED
  Validating Delete Object: PASSED
  Validating Bulk Delete: PASSED

Cleaning Up:
  Deleting temporary bucket: PASSED
  Deleting temporary profile: PASSED

Provider verification passed.



If the tool reports a failure, the logs can be reviewed at:

/ddvar/log/debug/verify_logs/

You may want to consult the following resources for more information:

  • DD OS 6.1 CLI reference Guide

https://support.emc.com/docu85240_Data_Domain_Operating_System_6.1_Command_Reference_Guide.pdf?language=en_US

  • DD OS 6.1.1.5 Combined release Notes

https://support.emc.com/docu87488_Data_Domain_Operating_System_6.1.1.5_Combined_Release_Notes.pdf?language=en_US

Hope it helps!!
