Avamar: Exchange Plugin – Why the backup change ratio jumps from 0.5% to over 20% in one day after a DB failover in a DAG cluster

Article Number: 499650 Article Version: 3 Article Type: Break Fix



Avamar Plug-in for Exchange VSS

Case Scenario:

We observe that Exchange DAG backups show a very large change ratio after a failover of a database in a DAG environment (moving one DB from one Exchange node to another).

The change-ratio percentage spikes from the usual daily value of ~0.5% up to ~20%-30%, and depending on which DB is failed over we even see a change ratio of ~60%.

The DB is exactly the same before and after the failover; the only difference is that after the failover it is backed up from a different node of the DAG cluster.

This behavior gives the impression that the backup data is being sent to the Avamar storage twice. The expectation is that only the usual ~0.5% of data should be sent, which reflects the average daily change ratio for all the DBs in this DAG cluster environment.

The result of this inconsistency is that Avamar capacity is unnecessarily over-utilized.

Explanation:

This is an expected behavior.

The logical structure of the Exchange database is the same, but Avamar uses the *physical* layout of the database when performing de-duplication.

In DAG environments, Exchange performs log shipping from the active copy of the database to the passive copy and replays the transactions against this passive copy.

Each Exchange server is responsible for applying the transaction logs against its copy of the database and there is no guarantee that it will use the same physical database layout as the other servers in the DAG.

Here is an example to better explain this concept:

  • Imagine we have two copies of the same picture and we cut each one into a puzzle using a different pattern.
  • While the picture looks the same once assembled, the physical layout of the puzzle is completely different.

Since Avamar only looks at the shape of the pieces, it has no way to know that the data is the same logical data; therefore the data is sent to the server again.
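To illustrate the concept (this is a simplified sketch, not Avamar's actual chunking or hashing algorithm), the Python snippet below fingerprints fixed-size chunks of two byte streams that contain the same logical records in a different physical order. Nearly every chunk of the second copy looks new to the fingerprint store, which is why the change ratio spikes even though no logical data changed. All names and values (records, copy_a, copy_b, the chunk size) are made up for the demo.

  # Simplified illustration of chunk-level de-duplication that
  # fingerprints fixed-size blocks of the *physical* byte stream.
  import hashlib

  CHUNK_SIZE = 8  # tiny chunks for the demo; real products use much larger chunks

  def fingerprints(data: bytes, chunk_size: int = CHUNK_SIZE) -> set:
      """Return the set of chunk hashes for a byte stream."""
      return {
          hashlib.sha1(data[i:i + chunk_size]).hexdigest()
          for i in range(0, len(data), chunk_size)
      }

  # Two copies of the "same" logical records, laid out differently on disk
  # (as when each DAG node replays logs into its own copy of the database).
  records = [b"record-%03d" % n for n in range(100)]
  copy_a = b"".join(records)            # physical layout on node A
  copy_b = b"".join(reversed(records))  # same records, different physical order on node B

  seen = fingerprints(copy_a)            # chunks already stored on the backup server
  new_chunks = fingerprints(copy_b) - seen  # chunks the next backup would have to send

  print(f"chunks in copy A: {len(seen)}")
  print(f"chunks in copy B not already stored: {len(new_chunks)}")
  # Almost every chunk of copy B is "new" even though the logical content
  # is identical, so the apparent change ratio jumps after the failover.

The same effect occurs at database-page granularity in a DAG: the passive copy's physical layout differs from the active copy's, so its chunks hash differently and are sent again.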

The DB failover was performed in the DAG cluster.

As per the explanation above, this is the normal behavior of the product.
