Performing FLR for a Hyper-V VM backup with NMM

This article describes File Level Recovery (FLR) for Hyper-V VM backups performed with NMM. FLR can be used to recover individual files or folders from a VM backup without restoring the entire VM.

FLR can be performed using a browser from any Windows host on the network that has access to the NetWorker server and the Hyper-V server. The following are the prerequisites for performing FLR:

1. The user account used for the FLR login must be a member of the NMC console administrators group:

FLR1.png

2. The user does not need any other NetWorker user group membership, i.e. no membership of the ‘Operators’ group or any other group is required. At the OS level, the user needs write permissions to perform the restore, so membership of either the local ‘Backup Operators’ or ‘Administrators’ group will suffice.



3. The host from which the restore is performed must have a client resource created on the NetWorker server.



4. The host from which FLR is performed does not require NMM or the NetWorker client (NWC) to be installed.

Procedure:

1. Log in to a Windows server with a user account that meets the above criteria and launch a browser with the following URL:

https://<networker-server>:11000
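If the page does not load, it can help to confirm that the FLR web port is reachable from the Windows host before troubleshooting further. A minimal sketch using Python's standard library (the host name `networker-server` and port 11000 are taken from the URL above; the function itself is generic):

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_port_reachable("networker-server", 11000)
```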



FLR2.png



2. The GUI detects Hyper-V clients that have been backed up to the NetWorker server. Select the Hyper-V client. In our case, it is the Hyper-V cluster name:



FLR3.png



3. The GUI lists all the VM backups found for the Hyper-V client. Select the VM to perform FLR on and click ‘Next’.



FLR4.png



4. The GUI displays the backups found for the selected VM. Select the desired backup and click ‘Next’.



FLR5.png



5. The GUI displays all disk partitions in the VM. Note that if the VM contains a dynamic disk, it will not be shown here, as dynamic disks are not supported for FLR or GLR.

FLR6.png

6. Select the files/folders to restore by dragging and dropping them to the bottom pane. After selecting the required files, click ‘Next’.



FLR7.png

7. Here you can choose to restore to a browser download location or restore to a VM. We will keep the default of ‘Recover to a browser download location’. Click ‘Next’.



FLR8.png

8. Click the green check mark.

FLR9.png

9. Click ‘Downloads’.

FLR10.png

10. Save the zip file to the desired download path, then unzip it to access the recovered files.
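The final unzip step can also be scripted. A minimal sketch using Python's standard library (the paths here are placeholders, not NMM defaults):

```python
import zipfile
from pathlib import Path

def extract_recovered_files(zip_path: str, dest_dir: str) -> list:
    """Unzip an FLR download and return the names of the extracted entries."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```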

FLR11.png

This concludes this article on how to perform FLR for Hyper-V backups done with NMM.


Performing NMM Hyper-V Granular Level Restore (GLR)

This article covers the procedure for Granular Level Restore (GLR) of Hyper-V VMs backed up with NMM 18. The same procedure can be used to perform GLR with NMM 9.

When files or folders need to be recovered from an NMM VM backup without restoring the entire VM, the GLR feature can be used. GLR mounts the backup at a mount point on the host where the restore is performed; this mount point can be browsed with either Windows Explorer or the NMM GUI to recover the desired files. The files can be copied to any desired location.
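Once the backup is mounted, the copy itself is an ordinary file operation. A minimal Python sketch, with placeholder mount point and paths (not NMM defaults):

```python
import shutil
from pathlib import Path

def copy_from_mount(mount_point: str, relative_path: str, dest_dir: str) -> Path:
    """Copy one file from the GLR mount point to a destination folder."""
    src = Path(mount_point) / relative_path
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, str(dest)))
```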

Pre-requisites for GLR:



  1. GLR can be done on any host that is part of the same domain as the Hyper-V host. It can be done from any node in the Hyper-V cluster or a host outside the cluster. The GLR host does not need the Hyper-V role installed.
  2. The GLR host should ideally run the same OS as the source Hyper-V host.
  3. The GLR host needs the same or a higher version of NMM than the one installed on the source Hyper-V host.
  4. The user performing GLR needs to be a member of the NetWorker server’s ‘Operators’ security group to allow access to the Hyper-V server’s index. This user should also be a member of the local ‘Administrators’ group on the GLR host.
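Prerequisite 3 amounts to a dotted-version comparison, which can be sketched as follows. This is an illustrative helper, not an NMM API:

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string such as '18.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def glr_host_version_ok(glr_host_nmm: str, source_nmm: str) -> bool:
    """The GLR host must run the same or a higher NMM version than the source host."""
    return version_tuple(glr_host_nmm) >= version_tuple(source_nmm)
```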

To perform GLR, log in to the host with user credentials that meet the above requirements:



1. Start the NetWorker User for Microsoft GUI and, from the ‘Client’ drop-down, select the Hyper-V client (the cluster name for a clustered setup, or the Hyper-V host name for a standalone setup).



GLR1.png



2. Select ‘Granular Level Recovery’



GLR2.png





3. Select the VM for GLR at the desired browse time. Right-click the VM and select ‘Mount’.



GLR3.png



4. On the ‘Granular Level Recovery Mount Point Location’ page, review the default mount point location and, if needed, specify a different path. Mounting the backup at this mount point does not consume additional storage on the disk hosting the mount point, so the default path should be fine. Click ‘OK’.



GLR5.png



5. Switch to the ‘Monitor’ view to check the progress of the mount. When complete, the success is logged in this view:



GLR6.png



6. Change to the ‘Recover’ view to navigate the disks, partitions, and file systems of the VM. Mark the desired files/folders and click ‘Recover...’. The data is recovered to the default path:



C:\Program Files\EMC NetWorker\nsr\tmp\HyperVGlrRestore



GLR7.png

7. The files can also be recovered using File Explorer/Windows Explorer. As the backup is mounted under the ..\nsr\tmp folder, this path can be browsed and files copied and pasted from here, as shown below:



GLR8.png



This concludes the NMM Hyper-V GLR procedure.

Configuring NMM 18 to back up Hyper-V Server 2016 Virtual Machines

This article provides simple, detailed instructions on configuring NMM 18 to back up Hyper-V Server 2016 VMs.

Hyper-V 2016 introduced a feature called Resilient Change Tracking (RCT), a native change-tracking system built into Hyper-V. It keeps track of the changed blocks in a virtual hard disk, maintaining both an in-memory bitmap and an on-disk bitmap. This preserves change tracking even through power failures, when a memory-only bitmap would be lost.

RCT backups do not use the VSS framework, and RCT is more scalable and reliable than VSS. NMM 18.1 introduced support for RCT. When backing up Hyper-V Server 2016 VMs with NMM 18.1, it is recommended to use RCT; backing up using VSS is still supported.

Refer to the NetWorker compatibility guide for the latest compatibility information between NMM and Hyper-V at https://elabnavigator.emc.com/eln/modernHomeAutomatedTiles?page=NetWorker

Prerequisites for backups using RCT:

1. NMM 18.1 or higher. NMM 9.x does not support RCT.

2. Hyper-V Server 2016. Hyper-V Server 2012 does not support RCT.

3. A VM configuration version higher than 6.2.
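The three prerequisites can be expressed as a single check. This is an illustrative sketch only (the inputs are values you would gather manually, e.g. the VM configuration version shown by `Get-VM`), not an NMM function:

```python
def rct_supported(nmm_version: tuple, hyperv_year: int, vm_config_version: float) -> bool:
    """RCT prerequisites: NMM 18.1+, Hyper-V Server 2016+, VM config version above 6.2."""
    return (
        nmm_version >= (18, 1)
        and hyperv_year >= 2016
        and vm_config_version > 6.2
    )
```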

I used NMM 18.1 on Hyper-V Server 2016 with NetWorker server 9.2.1.5 for this article.

Below are the steps:

1. The first step is to install the NetWorker client software and NMM on each of the Hyper-V nodes. No instructions are provided for this step, as it is a straightforward process.

2. Configure the Client resources for the backup. Start the Client Wizard from the NMC GUI.



Picture1.png

3. To back up virtual machines on a Hyper-V cluster residing on CSV volumes or an SMB share, use the cluster name in the client configuration wizard. For a standalone Hyper-V server, use the Hyper-V server hostname or FQDN.



Picture2.png

4. Select ‘Hyper-V Server’ under applications:

Picture3.png

5. On the next page, keep the defaults.



Picture4.png

6. Select the virtual machines to back up. To exclude a virtual machine from the backup, deselect it and select ‘Use Exclude Component List’ at the bottom of the screen.



Picture5.png

7. Specify the backup options.



Picture6.png

The account used for backups in ‘Remote Username’ should have the following group memberships:

a. It should be a domain user, i.e. a member of the ‘Domain Users’ AD group.

b. It should be a member of the local ‘Administrators’ group on each Hyper-V host in the cluster.

c. It should have full access to the Hyper-V cluster. Run the following PowerShell command to grant this access:

Grant-ClusterAccess -Cluster <clustername> -User <username> -Full

Example:

Grant-ClusterAccess -Cluster thor -User bronte\nmmadmin -Full

To verify that the account has full cluster access, run the following command:



Get-ClusterAccess

Picture7.png

Note: While the above permissions are sufficient for backups, restoring SMB VMs to their original location requires additional permissions, because the above permissions do not grant admin-level read rights to the SMB share. There are two ways to work around this:

a. When performing a restore, log in to the Hyper-V host with an account that has full rights to the SMB share.

b. If performing the restore with the same account used for backups, first delete or rename the original VM folder, or grant this user account full rights to the SMB share.

8. Review the summary and click ‘Create’:

Picture8.png

The client wizard proceeds to create a client resource for the Hyper-V cluster and a client resource for every node in the cluster.

Picture9.png

9. Once the client resources are created, create a group and workflow in an existing policy, or create a new policy.

a. Create a group and add the cluster client to it. Note that only the cluster client is used for the backup.

Picture10.png

b. Create a new workflow and add the Hyper-V group to this workflow.



Picture11.png

c. Create a new ‘Action’ by clicking the ‘Add’ button in the above window. Specify the schedule and backup levels (only levels ‘Full’ and ‘Incr’ are supported).



Picture12.png

d. Specify the storage node, pool, and retention as appropriate:



Picture13.png

e. Set ‘Retries’ to ‘0’ and ‘Inactivity timeout’ to ‘0’.



Picture14.png

f. Review the summary and click ‘Configure’ (not shown in the screenshot).



Picture15.png

10. Add the backup user to the NetWorker server’s ‘Operators’ group. From the NMC GUI, click ‘Server’, select ‘User Groups’, and open the properties of the ‘Operators’ group. In the ‘Users’ box, add entries similar to the following:

user=nmmadmin,host=hypv2016n1

user=nmmadmin,host=hypv2016n2

There is one entry for each node in the cluster.
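Because these entries follow a fixed pattern, they can be generated rather than typed for a large cluster. A small sketch (the user and node names come from the example above):

```python
def operators_group_entries(user: str, cluster_nodes: list) -> list:
    """Build one 'user=<name>,host=<node>' entry per cluster node for the
    NetWorker 'Operators' user group."""
    return [f"user={user},host={node}" for node in cluster_nodes]
```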

Picture16.png

This completes the backup configuration for a Hyper-V 2016 federated backup using RCT. Note that with RCT backups, SMB and CSV VMs can be backed up together; with VSS, SMB and CSV VMs had to be backed up separately.



Also, for SMB VMs, because RCT does not use VSS, there is no need to install the ‘File Server VSS Agent Service’ on the file server. The backup user also does not need any additional rights to back up VMs on an SMB share.



Save sets created for a FULL level backup:

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 3:08:11 PM 15 GB 2370271269 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\E9D2B2FD-B4B7-4A4A-87DC-0DD89D89FAC6

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 3:08:12 PM 58 MB 2353494053 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\ConfigFiles

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 3:13:46 PM 8 KB 2303162745 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1

For each VM, the following save sets are created:

  1. One save set for each disk in the VM, represented by a long hex identifier in the save set name (e.g. APPLICATIONS:\Microsoft Hyper-V\Ex1\E9D2B2FD-B4B7-4A4A-87DC-0DD89D89FAC6)
  2. One save set for ‘ConfigFiles’
  3. One metadata save set, whose name ends with the VM name (e.g. APPLICATIONS:\Microsoft Hyper-V\Ex1)

For a VM with one disk, there are three save sets; for a VM with two disks, there are four.
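The counting rule above (one save set per disk, plus ‘ConfigFiles’, plus metadata) can be stated as a one-line formula:

```python
def expected_save_set_count(disk_count: int) -> int:
    """One save set per virtual disk, plus one for ConfigFiles and one for metadata."""
    return disk_count + 2
```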

Save sets created for an incremental backup:

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 9:04:36 PM 15 GB 2068302767 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\E9D2B2FD-B4B7-4A4A-87DC-0DD89D89FAC6

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 9:04:37 PM 58 MB 2051525551 cb full APPLICATIONS:\Microsoft Hyper-V\Ex1\ConfigFiles

HyperVPool.003 Data Domain thor.bronte.local 1/22/2019 9:05:52 PM 8 KB 2001193984 cb incr APPLICATIONS:\Microsoft Hyper-V\Ex1

Since a synthetic FULL is performed on Data Domain, all save sets are registered at level full in the media database, except for the metadata save set (in the above example, “APPLICATIONS:\Microsoft Hyper-V\Ex1”), which reflects the actual backup level of that backup.


Copying NetWorker backups to a hosted Data Domain from an on-premises Data Domain

We are in the process of getting a hosted Data Domain from a service provider.

This Data Domain will become the primary target for backups.

The plan is to add this new Data Domain to EMC NetWorker and then copy data from the on-premises Data Domain to the one on the service provider side.

There is a pre-existing link from our organisation to the service provider, set up for another project, and we will be leveraging the same 10 GB link.

I have a few questions regarding the copying process:

– Which ports do I need to open from the on-premises NetWorker server/storage nodes/clients to the new hosted Data Domain?

– What is the best way of copying the data: NetWorker clone, MTree replication, or some other way?

– With MTree replication, how does NetWorker know about the backups in the replicated MTree?

– What sort of speeds can I expect with each method? I need to give an estimated time frame for the project.

– We have different devices set up on the on-premises Data Domain for different policies. Should we do the same on the service provider’s hosted Data Domain?

– Any other best practices to keep in mind?


NetWorker Module for Microsoft 18.2 Product Documentation Index

NetWorker SharePoint BLOB Backup and Recovery by using NetWorker Module for Microsoft and Metalogix StoragePoint

These technical notes contain supplemental information about backup and recovery of SharePoint Binary Large Objects (BLOB) by using NetWorker Module for Microsoft (NMM) and Metalogix StoragePoint.



NetWorker Module for Microsoft: Performing Exchange Server Granular Level Recovery (GLR) by using EMC NetWorker Module for Microsoft with Ontrack PowerControls

These technical notes contain supplemental information about using NetWorker Module for Microsoft with Ontrack PowerControls to perform Granular Level Recovery (GLR) of deleted Microsoft Exchange Server mailboxes, public folders, and public folder mailboxes.



NetWorker Configuring SQL VDI AlwaysOn Availability Group backups in Multi-homed (Backup LAN) Network by using NetWorker Module for Microsoft

These technical notes provide the information that you need to perform Microsoft SQL Server AlwaysOn Availability Group (AAG) backups by using EMC® NetWorker® Module for Microsoft (NMM) release 9.0 in a multihomed environment. NMM supports SQL Server AAG backup from a standalone SQL Server.



NetWorker Performing Backup and Recovery of SharePoint Server by using NetWorker Module for Microsoft SQL VDI Solution

These technical notes describe the procedures to perform backup and recovery of a SharePoint Server by using the SQL Server Virtual Device Interface (VDI) technology and the SharePoint VSS Writer with NetWorker Module for Microsoft (NMM).


NetWorker 18.2



NetWorker 18.2 Administration Guide

Describes how to configure and maintain the NetWorker software.

NetWorker 18.2 Cluster Integration Guide

Describes how to install and administer the NetWorker software on cluster servers and clients.

NetWorker 18.2 Command Reference Guide

Provides reference information for NetWorker commands and options.



NetWorker 18.2 Snapshot Management Integration Guide

Describes how to catalog and manage snapshot copies of production data that are created by using mirror technologies on EMC storage arrays.



NetWorker 18.2 Snapshot Management for NAS Integration Guide

Describes how to catalog and manage snapshot copies of production data that are created by using replication technologies on NAS devices.



NetWorker 18.2 Performance Optimization Planning Guide

Contains basic performance sizing, planning, and optimizing information for NetWorker environments.



NetWorker 18.2 Error Message Guide

Provides information on common NetWorker error messages.



NetWorker 18.2 Security Configuration Guide

Provides an overview of security configuration settings available in NetWorker, secure deployment, and physical security controls needed to ensure the secure operation of the product.




Re: Is smtpmail distributed as standard with NetWorker? (specifically 7.6.2.1)

Bit of an odd scenario, so apologies in advance!

I’m investigating one of our servers which has a few scheduled tasks running on it, attempting to make use of an ‘smtpmail.exe’. The only reference to any smtpmail executable I could find with the same argument syntax is the one in EMC NetWorker (as seen in this thread), and there just happens to be an installation of EMC NetWorker on this particular server, with the ‘Legato/nsr/bin’ directory in the PATH. That seems a pretty good indicator that the intention was for these tasks to use smtpmail from NetWorker.

However, there is no smtpmail executable in the bin directory, and never having heard of EMC NetWorker until today, I don’t know whether it’s something that is actually part of EMC NetWorker and has somehow been removed from this particular installation, or whether it’s something non-standard and the above thread is just a bit of a red herring.

Thanks in advance for any insights!



Unable to restore VM from tape device

Hi folks,

I am trying to do a complete VM restore from a backup created via the vProxy. The save set was written to a Data Domain and later cloned to a tape device. The tape will be the source for the restore.

I am on NetWorker v18.1 on Windows Server 2012 R2. The tape library is a TS4300 with LTO8 drives connected via SAN to the (physical) NetWorker server. There is no additional storage node.

So I started the recovery wizard within the NMC and selected everything I needed. On the last page, I selected the tape as the source and chose my one and only Data Domain pool as the staging pool.

After a few seconds I got the failure:

155752:nsrvproxy_recover: Unable to find a usable device for recovery of saveset 3976587711: Unable to find enabled Data Domain device for volume ‘000061L8’. An attempt will be made to perform automatic clone resurrection

165713:nsrvproxy_recover: Attempting to automatically resurrect clone 3976587711 with 7 days retention to DD volume in pool DATA



I searched the web and support.emc.com without any success. There were two KBs: one mentioned checking DNS, which is correct; the second suggested mounting a Boost device from the selected pool, which I also checked.

Has anyone an advice?

Thanks

beacon

Related:

  • No Related Posts

Re: Troubles with NDMP-DSA Backup NW9.1.1 & NetApp Filer over dedicated Interface

Hello,

NetWorker Server 9.1.1

Storage Nodes 9.1.1 with a remote Data Domain device for NDMP backups

We added storage nodes to our environment that have a direct 10G Ethernet connection to the NetApp filers. There is one interface connected to the production network and one interface directly connected to the NetApp filer with a private network address of 10.11.11.12.

The NetApp filer is configured with IP address 10.11.11.11.

We used the wizard to configure the NDMP-DSA Backup. Client Direct is disabled.

Backup command: nsrndmp_save -M -P <StorageNode> -T dump

Additional Information:

BUTYPE=dump

DIRECT=Y

HIST=Y

EXTRACT_ACL=y

UPDATE=Y

USE_TBB_IF_AVAILABLE=Y

In the above configuration, backups work and move data over the production network.

Our goal is to use the direct connection to optimize the backup. Therefore we added the hostname for the storage node with the private IP address on the NetApp filer, but the backups still moved over the production network.

In the next step we defined a “virtual” hostname of storagenode-fs with IP 10.11.11.12 and edited the backup command to use -P storagenode-fs, but this did not work either.

Does anyone have any ideas on this situation?

Regards,

Patric

Related: