Dell Technologies Named a Leader in the Gartner 2020 Data Center Backup & Recovery Solutions Magic Quadrant

A Constant in a Sea of Change

Heraclitus, an ancient Greek philosopher, once said, “The only thing that is constant is change.” These words perhaps ring truer now than at any other time in our lives. The recent health, economic and social watershed events of 2020 bring into stark reality how the fabric of our lives can change in an instant.

Despite these dizzying changes, one thing has not changed – the criticality of data. In fact, data has never been more critical than it is right now. That’s why we are humbled to be recognized by Gartner as a Leader in the 2020 Data Center Backup & Recovery Solutions Magic Quadrant – a distinction we have earned in every Gartner Magic Quadrant for data center backup and recovery solutions since 1999.¹

Our 21-year position as a Leader in the Magic Quadrant is, we believe, a continuing affirmation of the trust our customers place in us, not merely as a provider of leading data protection solutions, but as a partner committed to their long-term success.

That commitment begins with taking a customer-first approach in everything we do – from investing in our people, to investing in our communities, to investing in innovation.

And it all begins with data. Unlocking the next great discovery begins with creating, curating, protecting and securing the vast amounts of data being born across core, edge and multi-cloud computing environments.

Safeguarding data so that it is highly available, efficiently protected and secure from cyber threats wherever it lives is therefore a mission we are deeply committed to.

Three key areas where we are helping our customers today are:

  1. Cloud Data Protection – delivering solutions to protect any workload across any cloud while providing the lowest cost to protect in the cloud. Today, we have over 1,000 customers protecting over 2.7 Exabytes of data in the cloud.²
  2. VMware data protection – delivering deep integration with VMware to simplify and automate data protection for virtual and containerized workloads for increased agility.
  3. Cyber Recovery – secure infrastructure that optimizes cyber resiliency to ensure rapid recovery from destructive cyberattacks.

And going forward, we have even more ambitious plans to continue to drive innovation across our entire data protection portfolio. With our PowerProtect Data Manager platform, we plan to increase IT simplicity, increase data protection efficiencies and increase automation of data protection and recovery operations, so that our customers can focus on delivering the innovation and change needed to help them transform, modernize and compete more effectively in the data era.

¹ As Dell EMC, 2017-2019; As EMC, 2005 – 2016; As Legato, 2004; As Legato Systems and EMC, 2003; As Legato Networker and EMC Data Manager (EDM), 2001; As Legato and EMC, 1999-2000

² Based on information provided by public cloud providers, Feb 2020

Related:

Webinar Series: Data Protection Solutions

Dell Technologies continues to modernize and simplify our trusted data protection portfolio. Dell EMC NetWorker is proven technology that offers comprehensive, flexible and scalable data protection. As businesses evolve, they need assurance that their existing and modern workloads are protected. We invite you to join us for a special webinar on Wednesday, July 29, 12:00 PM – 1:00 PM EST to learn more about the Dell EMC NetWorker Advantage, to help you maximize your investment and plan for the future. In this session, we will present NetWorker’s latest innovations, including centralized monitoring and reporting of backups … READ MORE

Related:

Performing FLR for a Hyper-V VM backup with NMM

This article aims to describe File Level Recovery (FLR) for Hyper-V VM backups done with NMM. FLR can be used to recover individual files/folders from a VM backup without having to restore the entire VM.

FLR can be performed using a browser from any Windows host on the network that has access to the NetWorker server and the Hyper-V server. The following are the prerequisites for performing FLR:

1. The user used for the FLR login needs to be a member of the NMC console’s administrators group:

FLR1.png

2. The user does not need any other NetWorker user group membership; for example, membership of the ‘Operators’ group is not required. At the OS level, however, the user needs write permissions to perform the restore, so membership of either the ‘Backup Operators’ group or the local ‘Administrators’ group will suffice (a PowerShell sketch follows this list).

3. The host from which the restore is performed needs to have a client resource created on the NetWorker server.

4. The host from which FLR is performed does not require NMM or the NetWorker client (NWC) to be installed.
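
For the OS-level write permission in prerequisite 2, a minimal PowerShell sketch is shown below. It assumes Windows Server 2016 or later (for the LocalAccounts cmdlets), and the account name BRONTE\flruser is a hypothetical example:

# Check who is currently in the local 'Backup Operators' group
Get-LocalGroupMember -Group "Backup Operators"

# Add the FLR user (hypothetical account) so it has write permissions for the restore
Add-LocalGroupMember -Group "Backup Operators" -Member "BRONTE\flruser"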

Procedure:

1. Log in to a Windows server with a user that meets the above criteria and open a browser with the following URL:

https://<networker-server>:11000



FLR2.png



2. The GUI detects Hyper-V clients that have been backed up to the NetWorker server. Select the Hyper-V client. In our case it is the Hyper-V cluster name:



FLR3.png



3. The GUI will list all the VM backups found for the Hyper-V client. Select the VM to perform FLR on and click ‘Next’.



FLR4.png



4. The GUI will display the backups found for the selected VM. Select the desired backup and click ‘Next’



FLR5.png



5. The GUI displays all disk partitions in the VM. Note that if the VM contains a ‘Dynamic’ disk, it will not be shown here, as dynamic disks are not supported for FLR or GLR.

FLR6.png

6. Select the files/folders to restore and drag and drop them to the bottom pane. After selecting the required files, click ‘Next’.



FLR7.png

7. Here you get a choice to restore to a browser location or to a VM. We will keep the default of ‘Recover to a browser download location’. Select ‘Next’.



FLR8.png

8. Click the green check mark.

FLR9.png

9. Click ‘Downloads’.

FLR10.png

10. Save to the desired download path and then unzip the ZIP file to access the recovered files.

FLR11.png
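
If you prefer the command line, the downloaded archive can also be extracted with PowerShell (Windows PowerShell 5.0 or later); the paths below are hypothetical examples:

# Extract the recovered files from the FLR download (both paths are examples only)
Expand-Archive -Path "C:\Users\flruser\Downloads\FLR_Recover.zip" -DestinationPath "C:\Restores\Ex1" -Force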

This concludes this article on how to perform FLR for Hyper-V backups done with NMM.

Related:

Performing NMM Hyper-V Granular Level Restore (GLR)

This article aims to cover the procedure for Granular Level Restore (GLR) for a Hyper-V VM backed up with NMM 18. The same procedure can be used to perform GLR in NMM 9.

When there is a requirement to recover files or folders from an NMM VM backup without restoring the entire VM, the GLR feature can be used. GLR mounts the backup to a mount point on the host where the restore is performed, and this mount point can be browsed either with Windows Explorer or the NMM GUI to recover the desired files. The files can be copied to any desired location.

Pre-requisites for GLR:



  1. GLR can be done on any host that is part of the same domain as the Hyper-V host. It can be done from any node in the Hyper-V cluster or from a host outside the cluster. The GLR host need not have the Hyper-V role installed.
  2. The GLR host should ideally have the same OS as the source Hyper-V host.
  3. The GLR host needs to have the same or a higher version of NMM than the one installed on the source Hyper-V host.
  4. The user performing GLR needs to be a member of the NetWorker server’s ‘Operators’ security group to allow access to the Hyper-V server’s index. This user should also be a member of the local ‘Administrators’ group on the GLR host.

To perform GLR, log in to the host with user credentials that meet the above requirements:



1. Start the NetWorker User for Microsoft GUI and, from the ‘Client’ drop-down, select the Hyper-V client (the cluster name for a clustered setup and the Hyper-V host name for a standalone setup).



GLR1.png



2. Select ‘Granular Level Recovery’



GLR2.png





3. Select the VM for GLR at the desired browse time. Right-click the VM and select ‘Mount’.



GLR3.png



4. On the ‘Granular Level Recovery Mount Point Location’ page, review the default mount point location and, if needed, specify a different path. Mounting the backup at this mount point does not consume additional storage on the disk where the mount point resides, so the default path should be fine. Click ‘OK’.



GLR5.png



5. Switch to the ‘Monitor’ view to check the progress of the mount. When the mount completes, the success is logged in this view:



GLR6.png



6. Change to the ‘Recover’ view to navigate to the disks, partitions and file systems on the VM. Mark the desired files/folders and click ‘Recover…’. The data is recovered to the default path:

C:\Program Files\EMC NetWorker\nsr\tmp\HyperVGlrRestore



GLR7.png

7. The files can also be recovered using File Explorer/Windows Explorer. Because the backup is mounted under the NetWorker installation’s nsr\tmp folder, this path can be navigated and files can be copied and pasted from there, as shown below:



GLR8.png
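
The same copy can also be scripted from PowerShell instead of File Explorer; the mount sub-folder, source folder and destination below are hypothetical and depend on the VM’s disks:

# Default GLR mount location (see step 4); sub-folders vary per backup
$mount = 'C:\Program Files\EMC NetWorker\nsr\tmp'
# Copy a folder out of the mounted backup to a restore location (example paths only)
Copy-Item -Path "$mount\HyperVGlrMountPoint\Volume1\Reports" -Destination '\\fileserver\restores\Ex1' -Recurse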



This concludes the NMM Hyper-V GLR procedure.

Related:

Configuring NMM 18 to back up Hyper-V Server 2016 Virtual Machines

This article aims to provide simple and detailed instructions on configuring NMM 18 to back up Hyper-V Server 2016 VMs.

Hyper-V 2016 introduced a feature called RCT (Resilient Change Tracking). RCT is a native change tracking mechanism built into Hyper-V that keeps track of the changed blocks in a virtual hard disk. It maintains both an in-memory bitmap and an on-disk bitmap, which preserves change tracking even through power failures, when a memory-only bitmap would be lost.

RCT backups do not use the VSS framework and are more scalable and reliable than VSS-based backups. NMM 18.1 introduced support for RCT, and when backing up Hyper-V Server 2016 VMs with NMM 18.1 it is recommended to use RCT. Support for backing up using VSS is still available.

Refer to the NetWorker compatibility guide for the latest compatibility information between NMM and Hyper-V at https://elabnavigator.emc.com/eln/modernHomeAutomatedTiles?page=NetWorker

Pre-requisites for backups using RCT:

1. NMM 18.1 or higher. NMM 9.x does not support RCT.

2. Hyper-V server 2016. Hyper-V server 2012 does not support RCT.

3. VM configuration version higher than 6.2 (a PowerShell sketch to check this follows this list).
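
Prerequisite 3 can be checked quickly with the built-in Hyper-V PowerShell module on the host. This is only a sketch; the VM name Ex1 is taken from the examples later in this article, and note that upgrading a configuration version cannot be undone:

# List the configuration version of every VM on this host
Get-VM | Select-Object Name, Version

# If a VM is too old for RCT, upgrade it (the VM must be powered off first)
Stop-VM -Name Ex1
Update-VMVersion -Name Ex1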

I have used NMM 18.1 on Hyper-V Server 2016 with NetWorker server 9.2.1.5 for this article.

Below are the steps:

1. The first step is to install the NetWorker client software and NMM on each of the Hyper-V nodes. No instructions are provided for this step, as it is a straightforward process.

2. Configure the Client resources for the backup. Start the Client Wizard from the NMC GUI.



Picture1.png

3. To back up virtual machines on a Hyper-V cluster residing on a CSV volume or an SMB share, use the cluster name in the client configuration wizard. For a standalone Hyper-V server, use the Hyper-V server hostname or FQDN.



Picture2.png

4. Select ‘Hyper-V Server’ under applications:

Picture3.png

5. On the next page, keep the defaults



Picture4.png

6. Select the virtual machines to back up. To exclude any virtual machine from the backup, deselect the virtual machine and select ‘Use Exclude Component List’ at the bottom of the screen.



Picture5.png

7. Specify the backup options.



Picture6.png

The account used for backups in ‘Remote Username’ should have the following group memberships:

a. It should be a domain user, i.e. a member of the ‘Domain Users’ AD group.

b. It should be a member of the local ‘Administrators’ group on each Hyper-V host in the cluster (a PowerShell sketch for this follows the cluster access commands below).

c. It should have FULL access to the Hyper-V cluster. Run the following PowerShell command to grant this access:

Grant-ClusterAccess -Cluster <clustername> -User <username> -Full

Example:

Grant-ClusterAccess -Cluster thor -User bronte\nmmadmin -Full

To verify that the account has FULL cluster access, run the following command:

Get-ClusterAccess

Picture7.png
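
For requirement (b), the account can be added to the local ‘Administrators’ group on every node from a single session. This sketch assumes PowerShell remoting is enabled on the nodes and reuses the example cluster and account names from above:

# Add the backup account to local 'Administrators' on each node of the cluster
$nodes = (Get-ClusterNode -Cluster thor).Name
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Add-LocalGroupMember -Group 'Administrators' -Member 'bronte\nmmadmin'
}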

Note: While the above permissions are sufficient for backups, additional permissions are required when restoring SMB VMs to the original VM location, because the above permissions do not provide admin-level read rights to the SMB share. There are two ways to work around this:

a. When performing a restore, log in to the Hyper-V host with an account that has FULL rights to the SMB share.

b. If performing the restore with the same account as the backup account, first delete or rename the original VM folder, or grant this user account full rights to the SMB share (a scripted sketch follows).
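
For workaround (b), the share and folder permissions can be granted from PowerShell on the file server; the share name VMShare and the folder path are hypothetical, and NTFS permissions on the underlying folder usually need to be adjusted as well:

# Grant the backup account full access on the SMB share (share name is an example)
Grant-SmbShareAccess -Name 'VMShare' -AccountName 'bronte\nmmadmin' -AccessRight Full -Force

# Grant matching NTFS permissions on the underlying folder (path is an example)
icacls 'D:\VMShare' /grant 'bronte\nmmadmin:(OI)(CI)F'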

8. Review summary and click ‘Create’:

Picture8.png

The client wizard will proceed to create a client resource for the Hyper-V cluster and a client resource for every node in the cluster.

Picture9.png

9. Once the client resources are created, proceed to create a group and a workflow in an existing policy, or create a new policy.

a. Create a group and add the cluster client to it. Note that only the cluster client will be used for the backup.

Picture10.png

b. Create a new workflow and add the Hyper-V group to this workflow



Picture11.png

c. Create a new ‘Action’ by clicking the ‘Add’ button in the above window. Specify the schedule and backup levels (only the ‘Full’ and ‘Incr’ levels are supported).



Picture12.png

d. Specify storage node, pool, retention as appropriate:



Picture13.png

e. Set the ‘Retries’ to ‘0’ and the ‘Inactivity timeout’ to ‘0’



Picture14.png

f. Review summary and click ‘Configure’ (not shown in the screen shot)



Picture15.png

10. Add the backup user to NetWorker’s ‘Operators’ group. From the NMC GUI, click ‘Server’, select ‘User Groups’, and then open the properties of the ‘Operators’ group. In the ‘Users’ box, add entries similar to the following:

user=nmmadmin,host=hypv2016n1

user=nmmadmin,host=hypv2016n2

There is one entry for each node in the cluster.

Picture16.png

This completes the backup configuration for a Hyper-V 2016 federated backup using RCT. Note that when using RCT backups, SMB and CSV VMs can be backed up together, whereas when using VSS, SMB and CSV VMs had to be backed up separately.



Also, for SMB VMs, because RCT does not use VSS, there is no need to install the “File Server VSS Agent Service” on the file server, and the backup user does not need any additional rights to back up VMs on an SMB share.



Save sets created for a FULL level backup

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 3:08:11 PM 15 GB 2370271269 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\E9D2B2FD-B4B7-4A4A-87DC-0DD89D89FAC6

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 3:08:12 PM 58 MB 2353494053 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\ConfigFiles

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 3:13:46 PM 8 KB 2303162745 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1

For each VM, the following save sets are created:

  1. One save set for each disk in the VM. This is represented by a long hex number in the save set name (e.g. APPLICATIONS:\Microsoft Hyper-V\Ex1\E9D2B2FD-B4B7-4A4A-87DC-0DD89D89FAC6).
  2. One save set for ‘ConfigFiles’.
  3. One metadata save set. This save set ends with the VM name (e.g. APPLICATIONS:\Microsoft Hyper-V\Ex1).

For a VM with one disk, there are 3 save sets. For a VM with 2 disks, there are 4 save sets.

Save sets created for an Incremental backup

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 9:04:36 PM 15 GB 2068302767 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\E9D2B2FD-B4B7-4A4A-87DC-0DD89D89FAC6

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 9:04:37 PM 58 MB 2051525551 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\ConfigFiles

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 9:05:52 PM 8 KB 2001193984 cb incr APPLICATIONS:\Microsoft Hyper-V\Ex1

Since a synthetic full is performed on Data Domain, all save sets are registered at level full in the media database, except for the metadata save set (from the above example, “APPLICATIONS:\Microsoft Hyper-V\Ex1”), which reflects the actual backup level for that backup.
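
The save set listings above can be reproduced on the NetWorker server with the mminfo command. This is only a sketch: it uses the client name from this example, a pool name inferred from the volume label, and report attributes that can be adjusted as needed:

# List Hyper-V save sets for the example client, ordered by save time
mminfo -avot -q "client=thor.bronte.local,pool=HyperVPool" -r "volume,savetime,level,sumsize,name"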

Related:

Copying NetWorker backups to a hosted Data Domain from an on-premises Data Domain

We are in the process of getting a hosted Data Domain from a service provider.

This Data Domain will become the primary target for backups.

The plan is to add this new Data Domain to EMC NetWorker and then copy data from the on-premises Data Domain to the one on the service provider’s side.

There is a pre-existing link from our organisation to the service provider, which was set up for some other project, and we will be leveraging the same 10 GB link.

I have a few questions regarding the copying process:

– Which ports do I need to open from the on-premises NetWorker server/storage nodes/clients to the new hosted Data Domain?

– What is the best way of copying data: NetWorker clone, MTree replication, or some other way?

– With MTree replication, how does NetWorker know about the backups in the replicated MTree?

– What sort of speeds can I expect with each of these methods? I need to give an estimated time frame for the project.

– We have different devices set up on the on-premises Data Domain for different policies. Should we do the same on the service provider’s hosted Data Domain?

– Any other best practices to keep in mind?

Related:

NetWorker Module for Microsoft 18.2 Product Documentation Index

NetWorker SharePoint BLOB Backup and Recovery by using NetWorker Module for Microsoft and Metalogix StoragePoint

These technical notes contain supplemental information about backup and recovery of SharePoint Binary Large Objects (BLOB) by using NetWorker Module for Microsoft (NMM) and Metalogix StoragePoint.



NetWorker Module for Microsoft: Performing Exchange Server Granular Level Recovery (GLR) by using EMC NetWorker Module for Microsoft with Ontrack PowerControls

These technical notes contain supplemental information about using NetWorker Module for Microsoft with Ontrack PowerControls to perform Granular Level Recovery (GLR) of deleted Microsoft Exchange Server mailboxes, public folders, and public folder mailboxes.



NetWorker Configuring SQL VDI AlwaysOn Availability Group backups in Multi-homed (Backup LAN) Network by using NetWorker Module for Microsoft

These technical notes provide the information that you need to perform Microsoft SQL Server AlwaysOn Availability Group (AAG) backups by using EMC® NetWorker® Module for Microsoft (NMM) release 9.0 in a multihomed environment. NMM supports SQL Server AAG backup from the SQL standalone server.



NetWorker Performing Backup and Recovery of SharePoint Server by using NetWorker Module for Microsoft SQL VDI Solution

These technical notes describe the procedures to perform backup and recovery of a SharePoint Server by using the SQL Server Virtual Device Interface (VDI) technology and the SharePoint VSS Writer with NetWorker Module for Microsoft (NMM).

Related:

NetWorker 18.2



NetWorker 18.2 Administration Guide

Describes how to configure and maintain the NetWorker software.

NetWorker 18.2 Cluster Integration Guide

Describes how to install and administer the NetWorker software on cluster servers and clients.

NetWorker 18.2 Command Reference Guide

Provides reference information for NetWorker commands and options.



NetWorker 18.2 Snapshot Management Integration Guide

Describes how to catalog and manage snapshot copies of production data that are created by using mirror technologies on EMC storage arrays.



NetWorker 18.2 Snapshot Management for NAS Integration Guide

Describes how to catalog and manage snapshot copies of production data that are created by using replication technologies on NAS devices.



NetWorker 18.2 Performance Optimization Planning Guide

Contains basic performance sizing, planning, and optimizing information for NetWorker environments.



NetWorker 18.2 Error Message Guide

Provides information on common NetWorker error messages.



NetWorker 18.2 Security Configuration Guide

Provides an overview of security configuration settings available in NetWorker, secure deployment, and physical security controls needed to ensure the secure operation of the product.



Related:


Re: Is smtpmail distributed as standard with Networker? (specifically 7.6.2.1)

Bit of an odd scenario, so apologies in advance!

I’m investigating one of our servers which has a few scheduled tasks running on it, attempting to make use of an ‘smtpmail.exe’. The only reference to any smtpmail executable I could find with the same argument syntax is the one in EMC NetWorker (as seen in this thread), and there just happens to be an installation of EMC NetWorker on this particular server, with the ‘Legato/nsr/bin’ directory in the PATH, which seems a pretty good indicator that the intention was for these tasks to be using smtpmail from NetWorker.

However, there is no smtpmail executable in the bin directory, and never having heard of EMC NetWorker until today, I don’t know if it’s something that is actually part of EMC NetWorker and has somehow been removed from this particular installation, or if it’s something non-standard and the above thread is just a bit of a red herring.

Thanks in advance for any insights!
