VxRail: VDP backup fails on VM with error “There were no valid disks selected for backup. Only VM configuration files were backed up”

Article Number: 524449 Article Version: 3 Article Type: Break Fix



VxRail Appliance Family

VDP shows error: “There were no valid disks selected for backup. Only VM configuration files were backed up” when trying to back up a VM

VDP logs show:

2018-08-07T15:37:32.460-09:00 avvcbimage Warning <0000>: [IMG1002] Disk 2000 ([MARVIN-Virtual-SAN-Datastore-xxxxx/ABC.vmdk) has been intentionally excluded from backup

The VM disk mode was set to “Independent – Persistent”. This disk mode does not allow snapshots to be taken of the disk and is therefore unsupported by VDP. You can check this in the VM's settings.


1. Power off the affected VM (You may need to take a clone as a backup before proceeding).

2. Once powered off, change the disk mode on the affected VM to “Dependent”.

3. Power back on the VM, and retry the backups from the VDP plugin.
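
If you prefer to script the change, here is a minimal PowerCLI sketch of the same steps (the vCenter address and VM name are hypothetical, and you may prefer a guest OS shutdown over Stop-VM):

Connect-VIServer -Server vcenter01.example.com
$vm = Get-VM -Name "AffectedVM"
Stop-VM -VM $vm -Confirm:$false   # power off the VM
Get-HardDisk -VM $vm | Where-Object { $_.Persistence -eq "IndependentPersistent" } | Set-HardDisk -Persistence "Persistent" -Confirm:$false   # switch independent disks back to dependent mode
Start-VM -VM $vm   # power the VM back on, then retry the VDP backup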

Refer to AvamarKB-468794

Related:

  • No Related Posts

7023339: Windows Replications Fail Due to VSS Errors

This document (7023339) is provided subject to the disclaimer at the end of this document.

Environment

PlateSpin Forge 11.x and up

PlateSpin Migrate 12.x and up

PlateSpin Protect 11.x and up

Situation

A replication of a Windows source workload fails with an error message related to VSS or BlockBasedVolumeWrapper.

Resolution

Ensure each drive on the source workload being replicated has at least 10% – 15% free space of the total volume size (Ex: if C: is 100 GB in size, there should be at least 10 GB – 15 GB of free space).
The service Volume Shadow Copy in Services on the source workload should have its startup type set to Manual. The Volume Shadow Copy service should not be disabled.
No other application on the source workload that uses VSS should be running at the time of the replication.
If there is high disk utilization during the time of the replication, the snapshot creation can run into an error. If there is high disk utilization, please schedule the replication to run during a time when the disk is not being used heavily.
In Windows Explorer on the source workload, right-click each drive and choose Configure Shadow Copies.
Look at the settings of each drive. Make sure each drive has the no limit option selected and that the drive selected to store the snapshots has sufficient free space (10%–15% of the total volume size).
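The same storage settings can also be inspected and adjusted from the command line with vssadmin (an example for the C: drive; adjust the drive letters and size limit to your environment):
vssadmin list shadowstorage
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=UNBOUNDED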
Remove all existing shadow copies of each drive on the source. This can be done through the diskshadow utility.
Open a command prompt by right-clicking its icon and choosing Run as administrator, then run this command:
diskshadow
The diskshadow command line utility will open. Run this command to remove all existing VSS snapshots.
delete shadows all
See this Microsoft document for more information about diskshadow.
Test the creation of shadow copies using “vssadmin create shadow” for each drive; these snapshots can be removed using diskshadow after the snapshots have been taken.
See this Microsoft link for more information about vssadmin.
Test the creation and removal of VSS snapshots using PlateSpin.Athens.SnapshotExecution.exe. See TID 7017929.
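Putting the vssadmin and diskshadow steps above together, a minimal test for the C: drive could look like the following (note that both vssadmin create shadow and diskshadow are available on Windows Server editions):
vssadmin create shadow /for=C:
diskshadow
delete shadows all
exit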

Cause

PlateSpin uses the native Windows service Volume Shadow Copy (VSS) to create snapshots of the volumes being replicated. If the snapshots cannot be taken, the job will not complete successfully.

Additional Information

Consult with the system administrator of the source workload before making any changes this TID recommends.

There may be errors related to the creation of VSS snapshots on the source workload’s Event Viewer application and system logs. When submitting a service request, please export those logs as .evtx files and upload them to the service request along with the diagnostics of the failed replication job.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

Re: EMC Unity and Veeam Direct SAN Access

Actually, I think you can't do that. Here are the Dell EMC storage systems supported by Veeam 9.5 for backup based on storage snapshots.

Backup from Storage Snapshots —Veeam Availability Suite

Supported storage:

For Dell EMC:

  • Dell EMC VNX
  • Dell EMC VNX2
  • Dell EMC VNXe

You will only need to do this with Direct SAN access; Veeam will create the snapshot in VMware and read the data directly from the storage. It is not necessary to put the LUNs in read-only mode: just present them to the Veeam backup server after the Veeam application is installed, and Veeam will mark them as read-only automatically.

I hope this information helps you.

Francisco

Related:

Re: Failover of a VM from a CG

Hi there,

Recovery works with a granularity of a consistency group. This means that all VMs in a CG are recovered when performing a recover production, and all VMs in a CG are failed over upon failover.

When performing a test (and every recovery activity starts with a test), it's possible to prevent the power-on of the replica VMs (there's a checkbox in the test wizard for that) so that one or more replica VMs in a CG can be powered on manually as needed.

To your final question, recover production is a different recovery action than a failover. Recover production is a restore, used when there's a need to bring production back up as fast as possible on the original production site (rather than bringing it up on the replica site as production, as in failover). Recover production temporarily reverses replication; once recovery is complete, the original replication direction is resumed and the production VM is started up. The replication roles don't change as they do in failover.

Hope that helps,

Idan

Related:

DELL EMC NetWorker Block Based Backups (BBB)












Before I get into what Block Based Backup is, I would like to highlight the challenges our customers run into while backing up large file servers and how BBB can help. Data growth on servers holding millions of files, the resulting difficulty backing them up, long backup windows, and missed SLAs are some very common examples.

Incremental backups do not solve this problem: they also take a long time, because the backup software still has to walk the whole file system.

This puts production data at risk for multi-million-file systems. Another aspect to consider is that the index database or catalogs become huge and slow when protecting millions of files with traditional methods. The result is overlapping backups, poor performance and business impact, and ultimately a very poor RPO (recovery point objective).



Why is the backup of millions of files slow?






Effect (millions of files)



  • Slow backups
  • Huge space required for the internal database of backup software
  • Long recovery window
  • Data loss: the point we can recover to (RPO) is far in the past
  • Backups overlap user activities/business
  • Users have poor performance during business hours




How does NetWorker BBB resolve these challenges?



Block-based backup (BBB) is a technology where the backup application scans a volume or a disk in a file system and backs up all the blocks that are in use in the file system. Unlike the traditional file system backup, block-based backup supports high-performance backups with a predictable backup window.



BBB is supported on Linux and Windows; however, you need to verify compatibility for specific distributions and versions. For example, RHEL and CentOS are supported, but Oracle Linux isn't supported yet.



Block-based backups use the following technologies:



  • The Volume Shadow Copy Service (VSS) snapshot capability on Windows and Logical Volume Manager (LVM) and Veritas Volume Manager (VxVM) on Linux to create consistent copies of the source volume for backups.
  • The Virtual Hard Disk (VHDx), which is sparse, to back up data to the target device.



Block-based backups support only the following Client Direct enabled devices as target devices:



  • Advanced File Type Devices (AFTDs)
  • Data Domain devices (DDBOOST)
  • Cloud Boost devices

The block-based incremental backups use the Change Block Tracking (CBT) driver to identify the changed blocks and back up only the changed blocks.

Block-based full and incremental backups are fast backups with reduced backup times because the backup process backs up only the occupied disk blocks and changed disk blocks respectively. Block-based backups can coexist with traditional backups.

Block-based backups provide instant access to the backups. The block-based backups enable you to mount the backups by using the same file systems that you used to back up the data.

Block-based backups provide the following capabilities:

  • Mounting of a backup as a file system.
  • Mounting of an incremental backup.
  • Sparse backup support.
  • Backups to disk-like devices.
  • Backups of operating system-deduplicated file systems as source volumes on Windows.
  • Forever virtual full backups to Data Domain.
  • Data Domain retention lock.
  • 38 incremental backups to AFTD and Cloud Boost devices.
  • Synthetic full backups to AFTD and Cloud Boost devices.
  • Backups of volumes up to 63 TB each.
  • NetWorker-supported devices as secondary devices for backups.
  • Recoveries from Data Domain without using CIFS share.
  • Recovery of multiple save sets in a single operation.
  • Setting parallel save streams if the target or destination is Data Domain.

For backup and recovery types, please consult the NetWorker Administration Guide.

Supported OS: Windows and Linux. Only 64-bit architecture.


For compatibility, please check the online compatibility matrix (CompGuideApp). One example: Oracle Linux is not supported for BBB.

  • Windows:

– New Technology File System (NTFS)

– Resilient File System (ReFS)

  • Linux:

– Third extended file system (ext3)

– Fourth extended file system (ext4)

Block Based Backups (BBB) do not support the WINDOWS ROLES AND FEATURES save set.

For more details on supported configurations and limitations, please refer to the NetWorker Administration Guide.

Media Support

  • AFTD (Advanced File Type Devices)
  • Data Domain CIFS and NFS
  • DDBoost devices
  • CloudBoost

There is one important point you must consider before configuring BBB.

Note

For block-based backups to succeed, ensure that you meet the following requirements:

  • Create a separate pool.
  • The pool must contain only one backup device.
  • Perform all backups of a client to the same backup device.

If you want to make a local AFTD a Client Direct-enabled device, specify either the CIFS path or the NFS path in the Device access information field of the Create device properties dialog box.
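
For example (the host and share names below are hypothetical), the Device access information field might contain a CIFS path such as \\nwstorage01\aftd01 or an NFS path such as nwstorage01:/export/aftd01.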


Backup and Recover Architecture


Let us look a little deeper at how it works.



NetWorker reads the whole disk space as blocks (speeds up backup!)

  • does NOT open/close files (speeds up backup!)
  • does NOT write info about files into internal index databases (speeds up backup!)

NetWorker uses its own CBT (Change Block Tracking) mechanism to track changes on the filesystem between backups. It works as follows:



  • NetWorker keeps a CBT map of all blocks on the disk
  • The CBT map is a set of bits
  • Every bit corresponds to a single block on disk
  • After a backup, all bits in the CBT map are set to "0"
  • If a block on the disk is changed, NetWorker knows about it and sets the corresponding bit in the CBT map to "1"

The CBT map is kept by NetWorker in memory; during an incremental backup NetWorker reads only the changed blocks, i.e. those whose bits are set to 1.
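
As a rough illustration of the map's size (the tracking granularity below is an assumption for the example, not a documented NetWorker value): if changes were tracked in 64 KiB blocks, a 1 TiB volume would correspond to roughly 16.8 million blocks, so the in-memory CBT map would need about 16.8 million bits, or around 2 MiB of memory.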

NetWorker writes backups in native Windows format

Quick recovery

  • whole disk recovery
  • single file recovery

How is CBT implemented in NetWorker?




The results are amazing; I have tested this in a lab and in a few customer environments, and the results were more than satisfactory.

Some FAQ’s around BBB

  • Do we support the FAT32 file system?

No; backups fall back to the traditional method for FAT32 volumes.

  • Does BBB support Distributed File System, DFS?

    No.
  • Can the traditional backup and BBB co-exist?

Yes. You can back up a volume separately using both the traditional file system backup and BBB. However, because of the inherent behavior of the backups, you cannot anchor or chain save sets taken with the traditional file system backup to save sets taken with BBB, or vice versa.

  • In case of backup or recovery failure, what logs do I need to see for initial checks or reporting?

You can see the daemon.raw file for all the status related messages during backup or recovery.

You can see the following files to get a brief overview of the backup or recovery:

In case of a backup failure, see the savegroup log files in the following location:

<NetWorker Installation Location>\logs\sg\<Group Name>

In case of a recovery failure, see the log files in the following location:

<NetWorker Installation Location>\logs\BBB\<Client Name>\<SSID>

Alternatively, NMC can be used for both the backup and recovery related logs.



  • Do I need to restart the machine after installation? What if I do not reboot?

No. Reboot is not required for performing level full and level incremental backups

Note: First backup after a reboot is performed at level full and subsequent backups shall be at level incremental.



  • How big can the source volume be?

In NetWorker 9.x and above releases, the source volume can be up to 63 TB.



  • How much space is consumed as part of the backup on the target device?

The size of the backed up data on the target device would be approximately 10% more than that of the data on the source device.



  • Are index entries created for the file system that is being backed up? What does the index DB (CFI) store?

No; only an index entry with the save set name is created, for compatibility, and it is stored in the index DB. The index entry is created under the 'BBB' namespace.

  • How do I browse or perform a granular level recovery?

File-level recovery can be performed by directly mounting the save set, which is in VHDx format. Once the save set is mounted, it can be browsed like any regular file system. NMC can be used to search for a save set and mount it for browsing and FLR, or the CLI can be used to perform this.
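
As a generic Windows illustration only (this is not the documented NetWorker FLR procedure, and the file path below is hypothetical), a VHDx that is accessible as a file can be attached read-only with diskpart:

diskpart
select vdisk file="D:\restores\saveset.vhdx"
attach vdisk readonly

For the supported mount-and-browse workflow against NetWorker save sets, follow the NMC or CLI procedure in the NetWorker Administration Guide.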



  • Can BBB be performed at file/directory level?

No, you can perform BBB at volume level only. Volume mount points are supported.



  • Can I perform client initiated block based backup?

Yes, you can perform client initiated backups of individual and multiple volumes. A new command line option -z is provided for performing client initiated block-based backups.



  • How would the backup policy work? How many incremental backups can be taken before performing a level full again?

The respective group policy is enforced while performing a block-based backup. If the target device is an AFTD, you can perform only 38 block-based incremental backups; if the count exceeds 38, the backup shifts to a level full backup.

If the target device is Data Domain, forever-incremental backups can be performed. These are treated as VSF backups, where VSF means Virtual Synthetic Full, a feature of DD Boost.



  • How do I know the current incremental level?

You can determine the current incremental level from:

    1. A message that is logged for each backup.
    2. The BBB_LEVEL attribute of the mminfo -S output.

  • How does BBB incremental backup work?

    The filter driver tracks the changed blocks from the previous instance of a backup, and stores the information in system memory. During the incremental backup, the changed blocks are queried and backed up.


  • Note: Storing the changed blocks’ information in system memory does not have any performance impact. Restarting the machine results in a block-based full level backup because the changed blocks are not persistent


  • What are the scenarios in which the level incremental backups shift to level full backup?
    • You restart the client machine for any reason.
    • When a new volume is added to the machine
    • When there is a failure during an incremental backup
    • When there is a change in the disk geometry, say the change in volume size either due to volume shrink or volume expand operations.

      Note: EMC recommends performing a full level backup on a volume after defragmenting it.
    • When 38 incremental backups have been performed after a level full backup to an AFTD device.
    • When an immediate previous incremental backup is deleted.

    • What if I perform only level full backups to de-dupe targets like DD?

    The data on the target device is deduplicated. Although the backup is triggered at full level, the actual amount of data sent and stored is very small, so the impact is minimal.



    • How do I list only block based backup enabled save sets?

    mminfo -avot -q "ssattr=*BlockBasedBackup"

    Note: The query option can be mixed with other query specs
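
    For example (the client name below is hypothetical), the BBB filter can be combined with other query attributes in the same -q expression:

    mminfo -avot -q "client=host01,ssattr=*BlockBasedBackup"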



    To list the block-based virtual full backup save sets, run the following command:

    mminfo -avot -q “ssattr=*BlockBased Virtual Full”

    To list the block-based synthetic full backup save sets, run the following command:

    mminfo -avot -q “ssattr=*Synthetic full”



    • How do I list the CFI of Block based backup enabled save sets?

    The following command lists the CFI of block-based-backup-enabled save sets:

    nsrinfo -n bbb <ClientName>

    Note: the -n option specifies the namespace, and bbb is the namespace value for block-based-backup-enabled save sets.

    Have a look at NetWorker Administration Guide for more details on backups and restores.

    • Can we clone the data to Tapes?

    Yes, data can be cloned to tape; however, for a recovery the data has to be cloned back to a DD or AFTD device first.

    • Does BBB support staging?

      No
    • Is BBB I18N compliant?

    Yes

    • IPv6 Support?

    Yes

    • Does BBB support Data Domain devices over FC interface?

    Yes

    • Can the customer apply compress and encrypt ASM as the global directive?

    No, it is not supported.



    • Does BBB support volume managers and dynamic disks?

    Yes



    • What is the impact if my filesystem is highly fragmented?

    There is no impact. BBB reads the used blocks in a single pass and is agnostic to the fragmentation of the underlying file system.

    • What will be backed up if I run defragmentation?

    De-fragmentation results in a huge number of changed blocks. This increases the incremental backup size even though the data has not changed much. Therefore, Dell EMC recommends performing a full level backup after defragmentation.

    Note: There is no impact if you run the de-fragmentation before performing a full level backup.



    • Will there be any impact if I have antivirus?

    No

    • Can BBB co-exist with SnapImage?

    No, BBB replaces SnapImage on Windows platforms.



    • Can I use VSS:*=OFF with BBB?

    No

    Hope it helps

Related:

Re: Veeam Agent for Windows and EMC Data Domain issue

Hi all

We have an EMC Data Domain 2500 with DDBoost and Veeam 9.5 U3. We configured the Veeam Agent on a Windows server to store its backup to the Data Domain DDBoost repository via Veeam server integration. Everything looks fine: the Agent is able to store the data on the Data Domain, so all authentication and traffic are working.

Now we had an issue with the server and needed to do a bare metal restore of the system. I inserted / mounted the ISO for this server and connected to the Data Domain via server and credentials, BUT it fails!

https://ibb.co/fmzTUJ

https://ibb.co/eTk2pJ

https://ibb.co/kvPHOd

https://ibb.co/hiGWid

https://ibb.co/foJNpJ

When I copy the files to a local USB disk and try that, the Veeam Agent is not able to find the files; I am only able to see the files when I connect to the server and browse the Data Domain repository. Same thing when I connect directly to the Data Domain via CIFS!

Is a Data Domain based backup / restore not supported for the Veeam Agent? It works fine when I do a virtual machine restore with Data Domain!!

Cheers and thanks

Related:


CBT + DDBoost + BBBackups not equal to “almost zero”

Hello,

Just wondering about the space used on a Data Domain when you run several backups of the same server in a short period of time.

For a while now (since 7 June, to be exact) we have seen an increase in the space used on our DD that is not related to data growth.

(Screenshot: DD space used)

I discovered that on the dates where capacity increased, several backups had been launched manually.

What is “bizarre” is that the capacity used on the DD seems to increase in proportion to the number of backups… (which would be “acceptable” in the case of a traditional file-level backup).

(Screenshot: capacity report)

If you look at the number of backup versions we have for the server “Server one” (as an example), you can see that on 22 June we had 5 backups of this server (don’t ask me why 5…) and every backup has a Data Domain copy (which is confirmed by column AE of the Excel report).

(Screenshot: Server one backup versions)

I was expecting that with CBT + dedup and all the other optimizations, we would see just a few blocks added, not the full disk.

(We are using Networker 9.2.1.4)

Questions:

1) Is CBT an option that has to be set up on every VM (I found an article on that)? We are running vSphere 6.5 with vProxy.

2) Even if CBT is not activated, should a Block Based Backup do that for you?

3) Will an incremental backup not take only modified blocks?

4) And what is the role of DDBoost in that “party”?

Any explanation or answers are welcome, because EMC doesn’t want to give me a clear explanation of how things really work here…

Related:

Machine Creation Services (MCS) Storage Considerations

Overview

There are many factors that must be taken into account when deciding on storage solutions, configurations and capacities. This article does not serve as an exhaustive guide, but it aims to provide insight into the many consumers of storage that can quickly deplete the available resources, leading to sometimes complicated issues such as broken catalogs and hung MCS processes. The article will point to relevant documents and summarize key points from each.

Suggested Reading


Capacity Considerations

Disks

The largest consumer of space in most MCS deployments will likely be the Diff Disks created for each VM. Each VM created by MCS is given at minimum 2 disks upon creation.

  • Disk0 = Delta or Differencing (Diff) Disk (contains the OS as copied from the Master Base Image)
  • Disk1 = Identity Disk (16MB – contains Active Directory data for each VM)
As the product has evolved, additional disks may be added to satisfy certain use cases and feature consumption. For example:
  • Personal vDisk (a feature which provides end users with the ability to install applications, without admin intervention, on a separate disk attached to the VM)
  • AppDisk (Feature which provides the ability to attach application only disks to VMs primarily for Server OS Catalogs)
  • New MCS Storage Optimization feature which creates a write cache style disk for each VM
  • MCS added the ability to use full clones as opposed to the Delta disk scenario described above.
Hypervisor features may also enter into the equation. For example:
  • XenServer IntelliCache feature (creates a read disk on local storage for each XenServer host to save on IOPS against the master image, which may be held on the shared storage location).

    Hypervisor Overhead

    Different hypervisors will utilize specific files that create overhead on a per VM basis. Hypervisors may also use storage for management and general logging operations.
    Space must be calculated to include overhead for :
    • Log files
    • Hypervisor specific files, for example:
      • VMWare adds additional files to the VM storage folder. See : VMWare Best Practices
      • “Calculate your total virtual machine size requirements – Each virtual machine will require more space than just that used by its virtual disks. Consider a virtual machine with a 20GB OS virtual disk and 16GB of memory allocated. This virtual machine will require 20GB for the virtual disk, 16GB for the virtual machine swap file (size of allocated memory), and 100MB for log files, (total virtual disk size + configured memory + 100MB) or 36.1GB total.”
    • Snapshots (XenServer) Snapshots (VMWare)

    Process overhead

    Not all processes are the same. Creating a catalog, adding a machine, and updating a catalog all have unique storage implications. For example:
    • Initial catalog creation requires a copy of the base disk to be copied to each storage location.
    • Adding a machine to a catalog does not require copying of the base disk to each storage location.
    • Catalog creation varies based upon features selected so a catalog which employs PvD or AppDisks will need more space than a simple pooled random catalog.
    • Catalog update process creates an additional base disk on each storage location.
    • Catalog updates also experience a temporary storage peak wherein each VM in the catalog has 2 Diff disks for a certain amount of time.
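    For example (hypothetical numbers), a pooled catalog of 500 VMs whose diff disks average 10 GB each could temporarily require roughly an additional 5 TB while the old and new diff disks coexist during an image update.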
    Additional Considerations

    • RAM sizing – may affect the size of certain hypervisor files and disks including I/O optimization disks, write cache, and snapshot files, etc…
    • Thin / Thick provisioning – NFS storage is preferred due to the thin provisioning capabilities

    Calculators and Tools
    • Equations have been developed; none will be exhaustive or cover every possible scenario, but it may help to see what others have come up with. See this discussion for an example (a worked calculation follows this list):
      • (number of defined storage locations in the selected hosting connection × number of maximum expected updates until a complete reboot of the machine catalog plus 12 h wait time × actual image size × number of machine catalogs using this image) + (number of maximum concurrent users (CCU) × 15% of the image size) + (number of VMs × 16 MB) (+ for VMware: number of VMs × VM swap file size × 2)
    • Hardware and Storage Calculator – This link is a bit outdated but useful for older versions of XA/XD and as an educational tool.
    • Citrix Project Accelerator – Robust planning tool that takes many factors into consideration including storage.
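    As a worked example of the equation above (all numbers are hypothetical): with 2 storage locations, 2 expected updates before a full catalog reboot, a 40 GB image used by 1 catalog, 100 concurrent users, 100 VMs, and (on VMware) a 4 GB swap file per VM, the estimate is (2 × 2 × 40 GB × 1) + (100 × 6 GB) + (100 × 16 MB) + (100 × 4 GB × 2) = 160 GB + 600 GB + ~1.6 GB + 800 GB, or roughly 1.6 TB.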


    Reference articles and blogs

    Takeaways

    • Storage must be properly planned out in any VDI environment considering all the consumers of that storage solution.
    • It is best to over-shoot any calculations you have made since it is much easier to provision the extra storage up front than it is to repair a site that has been affected by storage related issues.

    Related:

Dell EMC Unity: File Async Replication Session Failed

Article Number: 488484 Article Version: 4 Article Type: Break Fix



Unity 300,Unity 300F,Unity 400,Unity 400F,Unity 500,Unity 500F,Unity 600,Unity 600F,Unity All Flash,Unity Family,Unity Hybrid,Unity Hybrid flash

While cleaning up file-side asynchronous replication sessions, the NAS server replication session from which the file systems were being replicated failed. When attempting to 'delete' the NAS server replication session, the following error message and code were received:

Failed: File resource replication sessions need to be deleted before nas server session. (Error Code:0x6500148)

File resource replication sessions need to be deleted before nas server session. (Error Code:0x6500148)

Deleted ‘NAS Server’ Replication session, while underlying ‘file system’ replication sessions were still operational

File resource replication sessions need to be deleted before nas server session.

Dell EMC Unity: Asynchronous Replication (Thunderbird code) Limitation:

All file systems are included for replication when establishing the replication session for the NAS Server, though you can remove individual file system replication sessions afterwards

Related: