Re: Isilon cloud Tier Implement

First,

Backing up an archive isn't a good idea. Why? You're paying AWS to geo-protect this S3 bucket, right? So the data is already protected there. S3 buckets also offer native object versioning, and we're talking about inactive content, with a set of keys tied to the bucket that shouldn't be handed out to anyone.

What I would suggest is this:

1. Back up the whole thing with a full before you archive anything.

2. Now that the data is all in the backup set, run your CloudPools jobs.

3. Now run your incremental backups. The metadata of a file that was archived off won't have changed, aside from the offline extended file attribute being set, so there shouldn't be anything new to back up. If someone were to inadvertently delete a stub and you need to restore it from a backup, you're just restoring the ~8KB stub (1 block), which contains a pointer to the offline data.

If you want your backups to do pass-through reads, that's possible in most HSM systems, and is likely possible here too; but consider that you'll be paying egress costs to read that data back out of AWS every time. If that doesn't sound ludicrous at first glance, just wait until you get your bill.
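To put a rough number on it, here is a back-of-the-envelope sketch (the ~$0.09/GB rate is an assumed, illustrative figure for S3 internet egress, not a quote; check current AWS pricing):

```python
# Back-of-the-envelope egress cost for pass-through reads of archived data.
# The per-GB rate below is an assumption for illustration, not a quote.
EGRESS_USD_PER_GB = 0.09  # assumed S3 internet egress rate

def egress_cost_usd(data_gb: float, reads_per_month: int) -> float:
    """Monthly egress cost if backups re-read the archived data."""
    return data_gb * reads_per_month * EGRESS_USD_PER_GB

# Re-reading a 10 TB archive once a week (~4 reads/month):
print(f"${egress_cost_usd(10 * 1024, 4):,.2f}/month")  # → $3,686.40/month
```

Restoring the ~8KB stub instead, as suggested above, costs effectively nothing in egress.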

Look up these options in the CLI guide.

isilon3-1# isi cloud settings view

        Default Accessibility: cached
        Default Cache Expiration: 1D
        Default Compression Enabled: No
        Default Data Retention: 1W
        Default Encryption Enabled: No
        Default Full Backup Retention: 5Y
        Default Incremental Backup Retention: 5Y
        Default Read Ahead: partial
        Default Writeback Frequency: 9H
        Default Archive Snapshot Files: Yes

Anyway, hope this helps. In summary, backing up archives is usually a waste. The data is in the archive in the first place because it's likely no longer relevant; a copy should already be in your older backup sets from before it was archived (because, by definition, it isn't changing); and re-reading it will cost you a boatload of money.

~Chris Klosterman

Principal SE, Datadobi

chris.klosterman@datadobi.com

Related:


DELL EMC NetWorker Block Based Backups (BBB)












Before I get into what Block Based Backup is, I would like to highlight the challenges our customers run into while backing up large file servers, and how BBB can help. Data growth, with servers holding millions of files, brings problems backing them up: long backup windows, missed SLAs, and so on are very common examples.

Incremental backups do not solve this problem, as they also take a long time: the backup software still has to walk the whole file system.

This imposes a business risk on production data for multi-million-file systems. Another aspect is that the index database or catalog becomes huge and slow when protecting millions of files with traditional methods. Overlapping backups, poor performance, and business impact are what we get. This results in a very poor RPO (recovery point objective).



Why is the backup of millions of files slow?






Effect (millions of files)



  • Slow backups
  • Huge space required for the internal database of backup software
  • Long recovery window
  • Data loss -> far point to which we can recovery (RPO)
  • Backups overlap user activities/business
  • Users have poor performance during business hours




How does NetWorker BBB resolve these challenges?



Block-based backup (BBB) is a technology where the backup application scans a volume or a disk in a file system and backs up all the blocks that are in use in the file system. Unlike the traditional file system backup, block-based backup supports high-performance backups with a predictable backup window.



BBB is supported on Linux and Windows; however, you need to verify compatibility of specific distributions and versions. For example, RHEL and CentOS are supported, while Oracle Linux isn't supported yet.



Block-based backups use the following technologies:



  • The Volume Shadow Copy Service (VSS) snapshot capability on Windows and Logical Volume Manager (LVM) and Veritas Volume Manager (VxVM) on Linux to create consistent copies of the source volume for backups.
  • The Virtual Hard Disk (VHDx), which is sparse, to back up data to the target device.



Block-based backups support only the following Client Direct enabled devices as target devices:



  • Advanced File Type Devices (AFTDs)
  • Data Domain devices (DDBOOST)
  • Cloud Boost devices

The block-based incremental backups use the Change Block Tracking (CBT) driver to identify the changed blocks and back up only the changed blocks.

Block-based full and incremental backups are fast backups with reduced backup times because the backup process backs up only the occupied disk blocks and changed disk blocks respectively. Block-based backups can coexist with traditional backups.

Block-based backups provide instant access to the backups. The block-based backups enable you to mount the backups by using the same file systems that you used to back up the data.

Block-based backups provide the following capabilities:

  • Mounting of a backup as a file system.
  • Mounting of an incremental backup.
  • Sparse backup support.
  • Backups to disk-like devices.
  • Backups of operating system-deduplicated file systems as source volumes on Windows.
  • Forever virtual full backups to Data Domain.
  • Data Domain retention lock.
  • 38 incremental backups to AFTD and Cloud Boost devices.
  • Synthetic full backups to AFTD and Cloud Boost devices.
  • Backups of volumes up to 63 TB each.
  • NetWorker-supported devices as secondary devices for backups.
  • Recoveries from Data Domain without using CIFS share.
  • Recovery of multiple save sets in a single operation.
  • Setting parallel save streams if the target or destination is Data Domain.

For backup and recovery types, please consult NetWorker administration guide.

Supported OS: Windows and Linux. Only 64-bit architecture.


For compatibility, please check the online compatibility matrix (CompGuideApp).

One example: Oracle Linux is not supported for BBB.

  • Windows:

– New Technology File System (NTFS)

– Resilient File System (ReFS)

  • Linux:

– Third extended file system (ext3)

– Fourth extended file system (ext4)

Block Based Backups (BBB) do not support the WINDOWS ROLES AND FEATURES save set.

For more details on supported configurations and limitations, please refer to the NetWorker administration guide.

Media Support

  • AFTD (Advanced File Type Devices)
  • Data Domain CIFS and NFS
  • DDBoost devices
  • CloudBoost

There is one important requirement you must consider before configuring BBB.

Note

For block-based backups to succeed, ensure that you meet the following requirements:

  • Create a separate pool.
  • The pool must contain only one backup device.
  • Perform all backups of a client to the same backup device.

If you want to make a local AFTD a Client Direct-enabled device, specify either the CIFS path or the NFS path in the Device access information field of the Create device properties dialog box.


Backup and Recovery Architecture

[Figure: BBB backup architecture]

[Figure: BBB recovery architecture]

Let us look a little deeper at how it works.



NetWorker:

  • reads the whole disk space as blocks (speeds up backup!)
  • does NOT open/close files (speeds up backup!)
  • does NOT write info about files into internal index databases (speeds up backup!)

NetWorker uses its own CBT (Change Block Tracking) mechanism to track changes on the file system between backups. It works as follows:



  • NetWorker has a CBT map of all blocks on disk
  • The CBT map is a set of bits
  • Every bit corresponds to a single block on disk
  • After a backup, all bits in the CBT map are set to "0"
  • If a block on disk is changed, NetWorker knows about it and sets the corresponding bit in the CBT map to "1"

The CBT map is kept in memory by NetWorker; during an incremental backup, NetWorker reads only the changed blocks, i.e. those whose bits are set to 1.
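The bitmap mechanics described above can be sketched in a few lines (a toy model for illustration only, not NetWorker's actual CBT driver):

```python
class CBTMap:
    """Toy change-block-tracking map: one bit per disk block."""

    def __init__(self, num_blocks: int):
        self.bits = [0] * num_blocks  # after a backup, every bit is 0

    def on_block_write(self, block_no: int) -> None:
        # The filter driver sets the bit when its block changes.
        self.bits[block_no] = 1

    def incremental_backup(self) -> list:
        # Read only the blocks whose bit is 1, then reset the map.
        changed = [b for b, bit in enumerate(self.bits) if bit == 1]
        self.bits = [0] * len(self.bits)
        return changed

cbt = CBTMap(num_blocks=8)
cbt.on_block_write(2)
cbt.on_block_write(5)
print(cbt.incremental_backup())  # → [2, 5]  (only changed blocks are read)
print(cbt.incremental_backup())  # → []      (nothing changed since)
```

Because the real map lives in memory, a reboot loses it, which is why the first backup after a restart falls back to a full.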

NetWorker writes backups in a native Windows format (VHDx)

Quick recovery

  • whole disk recovery
  • single file recovery

How is CBT implemented in NetWorker?



[Figures: CBT driver implementation diagrams]

The results are impressive. I have tested BBB in a lab and in a few customer environments, and the results were more than satisfactory.

Some FAQ’s around BBB

  • Do we support the FAT32 file system?

No; backups fall back to the traditional method for FAT32 volumes.

  • Does BBB support Distributed File System, DFS?

    No.
  • Can the traditional backup and BBB co-exist?

Yes. You can back up a volume separately using the traditional file system backup and BBB. However, because of the inherent behavior of the backups, you cannot anchor or chain save sets created by the traditional file system backup to save sets created by BBB, or vice versa.

  • In case of backup or recovery failure, what logs do I need to see for initial checks or reporting?

You can see the daemon.raw file for all the status related messages during backup or recovery.

You can see the following files to get a brief overview of the backup or recovery:

In case of a backup failure, see the savegroup log files in the following location:

<NetWorker Installation Location>\logs\sg\<Group Name>

In case of a recovery failure, see the log files in the following location:

<NetWorker Installation Location>\logs\BBB\<Client Name>\<SSID>

Alternatively, NMC can be used for both the backup and recovery related logs.



  • Do I need to restart the machine after installation? What if I do not reboot?

No, a reboot is not required for performing level full and level incremental backups.

Note: The first backup after a reboot is performed at level full; subsequent backups will be at level incremental.



  • How big can the source volume be?

In NetWorker 9.x and above releases, the source volume can be up to 63 TB.



  • How much space is consumed as part of the backup on the target device?

The size of the backed-up data on the target device will be approximately 10% more than the data on the source device.



  • Are index entries for the file system that is being backed up created? What does Index DB (CFI) store?

No file-level index entries are created. An index entry with the save set name is created for compatibility and is stored in the index DB, under the 'BBB' namespace.

  • How do I browse or perform a granular level recovery?

File-level recovery can be performed by directly mounting the save set, which is in VHDx format. Once the save set is mounted, it can be browsed like any regular file system. NMC can be used to search for the save set and mount it for browsing and FLR, or the CLI can be used to perform this.



  • Can BBB be performed at file/directory level?

No, you can perform BBB at volume level only. Volume mount points are supported.



  • Can I perform client initiated block based backup?

Yes, you can perform client initiated backups of individual and multiple volumes. A new command line option -z is provided for performing client initiated block-based backups.



  • How would the backup policy work? How many incremental backups can be taken before performing a level full again?

The respective group policy is enforced while performing a block-based backup. If the target device is an AFTD, you can perform only 38 block-based incremental backups; if the count exceeds 38, the backup shifts to a level full backup. If the target device is Data Domain, you can perform forever-incremental backups; VSF (Virtual Synthetic Full, a feature of DDBoost) treats them as synthetic full backups.
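The level policy described above can be condensed into a small decision function (a sketch of the documented behaviour, not NetWorker code; the 38-incremental cap for AFTD comes from the text):

```python
AFTD_INCR_LIMIT = 38  # per the text: 38 incrementals, then a forced full

def next_level(target: str, incrs_since_full: int, rebooted: bool = False) -> str:
    """Effective level of the next block-based backup (sketch)."""
    if rebooted:
        return "full"  # CBT data lives in memory and is lost on restart
    if target == "AFTD" and incrs_since_full >= AFTD_INCR_LIMIT:
        return "full"  # AFTD chains are capped at 38 incrementals
    return "incr"      # Data Domain allows forever-incrementals (VSF)

print(next_level("AFTD", 38))                      # → full
print(next_level("DataDomain", 500))               # → incr
print(next_level("DataDomain", 5, rebooted=True))  # → full
```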



  • How do I know the current incremental level?

You can determine the current incremental level from:

    1. A message that is logged for each backup.
    2. The BBB_LEVEL attribute of the mminfo -S output.

  • How does BBB incremental backup work?

    The filter driver tracks the changed blocks from the previous instance of a backup, and stores the information in system memory. During the incremental backup, the changed blocks are queried and backed up.


  • Note: Storing the changed blocks' information in system memory does not have any performance impact. Restarting the machine results in a block-based full-level backup, because the changed-block information is not persistent.


  • What are the scenarios in which the level incremental backups shift to level full backup?
    • You restart the client machine for any reason.
    • A new volume is added to the machine.
    • There is a failure during an incremental backup.
    • There is a change in the disk geometry, for example a change in volume size due to a volume shrink or expand operation.

      Note: EMC recommends performing a full-level backup on a volume after defragmenting it.
    • 38 incremental backups have been performed after a level full backup to an AFTD device.
    • The immediately preceding incremental backup is deleted.

    • What if I perform only level full backups to dedupe targets like DD?

    The data on the target device is deduplicated. Though the backup is triggered at full level, the actual data sent is very small, so the impact is minimal.
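A minimal content-hash sketch illustrates why repeated fulls of mostly-unchanged data stay cheap on a deduplicating target (illustrative only; Data Domain's variable-length segmentation is far more sophisticated):

```python
import hashlib

store = set()  # physical store: fingerprints of unique blocks

def dedup_write(blocks) -> int:
    """Write blocks to the dedupe store; return how many were physically stored."""
    written = 0
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:
            store.add(fp)
            written += 1
    return written

# 1000 logical blocks, only 7 distinct contents:
volume = [bytes([i % 7]) * 4096 for i in range(1000)]
print(dedup_write(volume))  # → 7  (first "full" stores 7 physical blocks)
volume[0] = b"x" * 4096     # change a single block
print(dedup_write(volume))  # → 1  (second "full" stores only the new block)
```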



    • How do I list only block-based-backup-enabled save sets?

    mminfo -avot -q "ssattr=*BlockBasedBackup"

    Note: The query option can be mixed with other query specs.



    To list the block-based virtual full backup save sets, run the following command:

    mminfo -avot -q "ssattr=*BlockBased Virtual Full"

    To list the block-based synthetic full backup save sets, run the following command:

    mminfo -avot -q "ssattr=*Synthetic full"



    • How do I list the CFI of block-based-backup-enabled save sets?

    The following command lists the CFI of block-based-backup-enabled save sets:

    nsrinfo -n bbb <ClientName>

    Note: The -n option specifies the namespace; BBB is the namespace value for block-based-enabled save sets.

    Have a look at the NetWorker Administration Guide for more details on backups and restores.

    • Can we clone the data to Tapes?

    Yes, data can be cloned to tape; however, in case of a recovery the data has to be re-cloned to a DD or AFTD first.

    • Does BBB support staging?

      No
    • Is BBB I18N compliant?

    Yes

    • IPv6 Support?

    Yes

    • Does BBB support Data Domain devices over FC interface?

    Yes

    • Can the customer apply compress and encrypt ASM as the global directive?

    No, it is not supported.



    • Does BBB support volume managers and dynamic disks?

    Yes



    • What is the impact if my file system is highly fragmented?

    There is no impact. BBB reads the used blocks in a single pass and is agnostic to the fragmentation of the underlying file system.

    • What will be backed up if I run defragmentation?

    Defragmentation results in a huge number of changed blocks. This increases the incremental backup size even though the data has not changed much. Therefore, Dell EMC recommends performing full-level backups after defragmentation.

    Note: There is no impact if you run the defragmentation before performing a full-level backup.



    • Will there be any impact if I have antivirus?

    No

    • Can BBB co-exist with SnapImage?

    No, BBB replaces SnapImage on Windows platforms.



    • Can I use VSS:*=OFF with BBB?

    No

    Hope it helps

Related:

CBT + DDBoost + BBBackups not equal to “almost zero”

Hello,

Just wondering about the space used on a Data Domain when you do several backups of the same server in a short period of time.

For a while now (exactly since the 7th of June) we have experienced an increase in the space used on our DD that is not in relation to data growth.

[Screenshot: DD space used]

I discovered that on the dates where capacity increased, we had several manual launches of some backups.

What is "bizarre" is that the capacity used on the DD seems to increase in proportion to the number of backups… (which would be "acceptable" in the case of traditional file-level backup)

[Screenshot: capacity report]

If you look at the number of backup versions we have for the server "Server one" (as an example), you can see that on the 22nd of June we had 5 backups of this server (don't ask me why 5…) and every backup has a Data Domain copy (which is confirmed by column AE of the Excel report).

[Screenshot: "Server one" backup versions]

I was expecting that with CBT + dedup and all the other optimizations, we would have just a few blocks added, not the full disk.

(We are using Networker 9.2.1.4)

Questions:

1) Is CBT an option to be set up on every VM (I found an article on that)? We are running vSphere 6.5 with vProxy.

2) Even if it is not activated, should a Block Based Backup do that for you?

3) Will an incremental backup not take only modified blocks?

4) And what is the role of DDBoost in that "party"?

Any explanations or answers are welcome, because EMC doesn't want to give me a clear explanation of how things are really working here…

Related:

Re: Virtual Synthetic Full vs. 1 month _and_ 6 years retention

Hello,

We are happily using Virtual Synthetic Full backups for huge file servers where we need to do daily backups (with weekly fulls) / 1 month retention.

It's very useful, because we don't need to (re-)read e.g. the ~1T drive every week; DD+NW does the trick, so only incremental backups are executed on the client every day.

The problem arises when we need to keep monthly full backups with 6 years retention.

We have 2 savegroups (nw8):

daily backups: 1 month retention, schedule: synth full on saturday, incremental every other days, skip on 1st saturday every month

monthly backup: 6 years retention, schedule: synth full on 1st saturday every month, skip rest of the month

The problem: when we do the monthly (6y) backup, the first step is an incremental backup – this is how virtual synthetic full works – however this save set is stored with 6y retention, which prevents the expiration of the backups made in the previous week.

# mminfo -ot -q "client=…,name=G:,savetime>01/10/2018,savetime<02/05/2018" -r "savetime(17),ssretent,ssflags,sumsize,level,name"

date time        retent     ssflags size     lvl  name
01/27/2018 18:23 02/27/2018 vF      1162 GB  full G:
01/28/2018 18:23 02/28/2018 vF      1779 MB  incr G:
01/29/2018 18:23 02/28/2018 vF        54 GB  incr G:
01/30/2018 18:24 02/28/2018 vF        81 GB  incr G:
01/31/2018 18:23 02/28/2018 vF        65 GB  incr G:
02/01/2018 18:24 03/01/2018 vF        62 GB  incr G:
02/02/2018 18:23 03/02/2018 vF        71 GB  incr G:
02/03/2018 18:21 02/03/2024 vF        78 GB  incr G:
02/03/2018 18:21 02/03/2024 vF      1164 GB  full G:
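The chain dependency shown above can be modelled in a few lines (a simplified sketch, not NetWorker's actual expiry logic): an incremental is restorable only together with the earlier members of its chain, so nothing can expire while a longer-retained member still depends on it.

```python
from datetime import date

def expirable(chain, today):
    """A save set can expire only if no later chain member is still retained."""
    result = []
    for i, (name, retention) in enumerate(chain):
        later_retained = any(r > today for _, r in chain[i + 1:])
        result.append((name, retention <= today and not later_retained))
    return result

# A shortened version of the chain from the mminfo output above:
chain = [
    ("full 01/27", date(2018, 2, 27)),
    ("incr 01/28", date(2018, 2, 28)),
    ("incr 02/02", date(2018, 3, 2)),
    ("incr 02/03", date(2024, 2, 3)),  # stored with 6-year retention
]
for name, ok in expirable(chain, today=date(2018, 6, 1)):
    print(name, "expirable" if ok else "held")  # every member prints "held"
```

The 6y incremental at the end keeps every earlier member of the chain held long past its nominal retention, which is exactly the reported problem.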

I've opened an RFE to ask EMC to remove the incremental backup if the synthesis was successful (I think we don't need it; the data is already stored within the synthesized full).

A good workaround is to make a regular full for the long retention backup, but it’s not nice.

Do you have any other ideas?

Related:


NMM 9 Exchange backup levels Explained.

After upgrading to NMM 9.x from NMM 8.x or NMM 3.x, one element that has caused some confusion is the backup levels. This is true for NMM application backups that use VSS, namely Exchange and Hyper-V.

This article will attempt to explain the backup levels performed with NMM for Exchange. Prior to NMM 9, NMM Exchange backups could be done at two levels, full and incremental. While this is still true with NMM 9, due to the 'synthetic full' feature (when the backup device is Data Domain), the incremental backups are recorded as 'full' within the NetWorker server media database. When one looks at the backup information via 'mminfo' or the NMC GUI, all Exchange backups are registered as full. This has caused confusion among customers, who believe their incremental backups do not work.

So how does one know which backups were actual incrementals? Let's look at an example of the backup history of one database.

mminfo -avot -A "blockbasedbackup=yes" -q "savetime >= today" | findstr -i userdb01



volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:26:31 PM 906 MB 1271643754 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles

volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:26:50 PM 173 MB 1238089338 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles

volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:36:30 PM 906 MB 1003208895 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles

volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:36:35 PM 173 MB 986431684 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles

Note the switch -A "blockbasedbackup=yes". This lists only the 'LogFiles' and 'DatabaseFiles' save sets, as these save sets are created using BBB (Block Based Backup). In the output you will see that all the backups are registered at level 'full'.

So how does one know which of the above backups was performed at level ‘FULL’ and which one was performed at level ‘Incremental’?

The command below will list BBB (block based backups) performed at level 0, aka FULL.

mminfo -avot -A "bbb_level=0" -q "savetime >= today" | findstr -i userdb01

volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:26:31 PM 906 MB 1271643754 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles

volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:26:50 PM 173 MB 1238089338 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles

volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:36:35 PM 173 MB 986431684 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles

Similarly if you need to query for ‘incremental’ backups, use the following command:

mminfo -avot -A "bbb_level=1" -q "savetime >= today" | findstr -i userdb01



volume.001 Data Domain maplemail01.maple.local 4/9/2018 3:36:30 PM 906 MB 1003208895 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles

**Note that in the above only the 'DatabaseFiles' save set is listed in the query, but not 'LogFiles'. This is because during 'incremental' backups only 'LogFiles' are backed up, and they are backed up at level 'full'; the save set for 'DatabaseFiles' is generated at level 'incr'. The 'DatabaseFiles' save set is '0 KB' in size, i.e. no data from the database file was backed up.

Logging in nsrnmmsv.raw file:



Looking at the nsrnmmsv.raw file on the Exchange host under the default path of "C:\Program Files\EMC NetWorker\nsr\applogs", the following messages are logged for a level full backup:

30 4/9/2018 3:27:09 PM 1 5 0 11328 2872 0 maplemail01.maple.local nsrnmmsv NSR notice nsrnmmsv: APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles level=full, 906 MB 00:00:38 1 file

0 4/9/2018 3:27:19 PM 1 5 0 21576 2872 0 maplemail01.maple.local nsrnmmsv NSR notice nsrnmmsv: APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles level=full, 173 MB 00:00:29 1 file

For an incremental backup, below is what's logged:

0 4/9/2018 3:36:53 PM 1 5 0 18448 20344 0 maplemail01.maple.local nsrnmmsv NSR notice nsrnmmsv: APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles level=full, 173 MB 00:00:18 1 file

0 4/9/2018 3:37:33 PM 1 5 0 17720 20344 0 maplemail01.maple.local nsrnmmsv NSR notice nsrnmmsv: APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles level=full, 0 KB 00:01:03 0 files

*** Notice ‘0 KB’ for the ‘DatabaseFiles’ save set.



NMC GUI:



Full level backup details

[Screenshot: full-level backup details]

Incremental backup details. Notice the 'Total Amount' field: it's almost the same size. The 'Total Amount' reflects the total size of all databases backed up, not the actual amount of data that was backed up.

[Screenshot: incremental backup details]

So how do we know from the NMC GUI whether the backup was 'incremental' or 'full'? To get this information from the GUI, click 'Show Action Logs' and then 'Get Full Log', as below:

[Screenshot: action log showing the backup level]

How is it recorded within Exchange?

get-mailboxdatabase -status -identity mapleuserdb01 | ft name,lastfullbackup,lastincrementalbackup

Creating a new session for implicit remoting of “Get-MailboxDatabase” command…

Name LastFullBackup LastIncrementalBackup

---- -------------- ---------------------

mapleuserdb01 4/9/2018 3:34:44 PM

***Notice that within Exchange the backup is not registered as an 'Incremental' backup, but as a 'Full' backup.

AFTD device as the backup device:

If the backup device is an AFTD, then no 'synthetic full' is created on the backup device. The incremental backups are registered as 'incr':

C:\Users\Administrator>mminfo -avot -q "client=lucky,savetime >= yesterday" | findstr -i userdb01

mapledc.maple.local.004 adv_file maplemail01.maple.local 4/10/2018 4:33:49 PM 906 MB 382541742 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles
mapledc.maple.local.004 adv_file maplemail01.maple.local 4/10/2018 4:34:03 PM 177 MB 332210108 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles
mapledc.maple.local.004 adv_file maplemail01.maple.local 4/10/2018 4:35:18 PM 53 KB 231546886 cb full APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01
mapledc.maple.local.004 adv_file maplemail01.maple.local 4/11/2018 1:28:23 PM 171 MB 4157490616 cb incr APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\DatabaseFiles
mapledc.maple.local.004 adv_file maplemail01.maple.local 4/11/2018 1:28:28 PM 179 MB 4140713405 cb incr APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01\LogFiles
mapledc.maple.local.004 adv_file maplemail01.maple.local 4/11/2018 1:29:08 PM 53 KB 3989718500 cb incr APPLICATIONS:\Microsoft Exchange 2016\maple\userdb01

Hope this article provides clarity on this topic.

Related:

Re: When doing an incremental file system backup, NetWorker only writes to the Data Domain as if it were a full backup.


The file server cluster consists of two machines: server1 and server2.

The backup is done using the virtual name of the cluster.

The servers:

OS: WIN2012 R2

Networker: 8.2.9

Client networker: 8.2.9

DD: 5.6

As the file server is very large, more than 3 TB in size, and I don't have room on the DD to store the space occupied by a full backup every day, I changed the configuration to back up only one folder, but the problem continued.

I selected a small folder of 213 MB, and every time I run the incremental backup, it copies the full 213 MB to the DD.

I already created a new client and a new group, but it did not work.

I already added the line "save -c <client name>" to the backup command in the client configuration, but the problem continued.

I already tried forcing an incremental backup via the group configuration, but the problem continued.


Related:

DB2 LUW enable incremental backup – when to perform first FULL backup?

Hi,
I'm planning to start using incremental backups on our DB2 database.
The procedure for enabling incremental backup is clear: set TRACKMOD ON and perform a FULL DB backup after that. But the documentation is not very clear about the timing of the FULL backup. It says the full backup should be performed after TRACKMOD is set, but can I let the applications connect and start working with the DB first and then start a FULL ONLINE backup? Or do I have to wait for the backup to complete before starting the applications?
I'm asking this because we have a 24/7 service depending on the DB, and since the backup takes about an hour, we cannot afford to be down that long.
Thanks !
BR,
Domen

Related: