How to fix Event ID 455 ESENT error on Windows 10

ESENT is a built-in database engine on your PC that Windows components such as File Explorer and Windows Search rely on to store and query data on your Windows 10 computer. If you’re encountering the Event ID 455 ESENT error on your Windows 10 device, this post is intended to help you. Below, we provide the potential solutions you can try to mitigate this issue.

When this error occurs, you’ll see the following error description in the event log:

svchost (15692,R,98) TILEREPOSITORYS-1-5-18: Error -1023 (0xfffffc01) occurred while opening logfile

C:\WINDOWS\system32\config\systemprofile\AppData\Local\TileDataLayer\Database\EDB.log.

Fix Event ID 455 ESENT error

If you’re faced with this Event ID 455 ESENT error on your Windows 10 PC, you can try either of our two recommended solutions presented below to resolve the issue.

  1. Create Database folder in TileDataLayer folder via File Explorer
  2. Create Database folder in TileDataLayer folder via Command Prompt

Let’s take a look at the steps involved in each of the listed solutions.

1] Create a Database folder in TileDataLayer folder via File Explorer

To create a Database folder in TileDataLayer folder via File Explorer, do the following:

  • Press Windows key + R to invoke the Run dialog.
  • In the Run dialog, copy and paste the directory path (assuming the C drive is housing your Windows 10 installation) below and hit Enter.
C:\Windows\system32\config\systemprofile\AppData\Local
  • Now, right-click on the open space and then click New >Folder to create a folder in that location.
  • Next, rename the new folder as TileDataLayer.
  • Now, double-click the newly created TileDataLayer folder to open it.
  • Again right-click on the space within the open folder and then click New >Folder to create a new folder.
  • Rename the new folder as Database.
  • Exit File Explorer
  • Reboot your computer.

After rebooting, the Event ID 455 ESENT error should be fixed.
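The folder creation in the steps above can also be scripted. A minimal Python sketch (the commented path assumes a standard C:\Windows installation and an elevated session; the function name is just for illustration):

```python
from pathlib import Path

def ensure_tile_database(local_appdata: Path) -> Path:
    """Create the TileDataLayer\\Database folders under the given Local app-data folder."""
    target = local_appdata / "TileDataLayer" / "Database"
    # parents=True creates both folders in one call; exist_ok=True
    # makes the script safe to re-run if the folders already exist.
    target.mkdir(parents=True, exist_ok=True)
    return target

# On a standard installation (run elevated):
# ensure_tile_database(Path(r"C:\Windows\system32\config\systemprofile\AppData\Local"))
```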

Alternatively, you can achieve the same result using the Command Prompt instead of File Explorer. Continue below to see how.

2] Create a Database folder in TileDataLayer folder via Command Prompt

To create a Database folder in TileDataLayer folder via Command Prompt, do the following:

  • Press Windows key + R to invoke the Run dialog.
  • In the Run dialog box, type cmd and then press CTRL + SHIFT + ENTER to open Command Prompt in admin/elevated mode.
  • In the Command Prompt window, type the commands below one at a time, hitting Enter after each line to execute them sequentially on your computer.
cd config\systemprofile\AppData\Local
mkdir TileDataLayer
cd TileDataLayer
mkdir Database
  • Once the task completes, exit the CMD prompt.
  • Reboot your computer.

After rebooting, the Event ID 455 ESENT error should be fixed.

ESENT

ESENT is an embeddable, transactional database engine. It first shipped with Microsoft Windows 2000 and has been available for developers to use since then. You can use ESENT for applications that need reliable, high-performance, low-overhead storage of structured or semi-structured data. The ESENT engine can help with data needs ranging from something as simple as a hash table that is too large to store in memory to something more complex such as an application with tables, columns, and indexes.

Active Directory, Windows Desktop Search, Windows Mail, Live Mesh, and Windows Update currently rely on ESENT for data storage. And Microsoft Exchange stores all of its mailbox data (a large server typically has dozens of terabytes of data) using a slightly modified version of the ESENT code.

Features

Significant technical features of ESENT include:

  • ACID transactions with savepoints, lazy commits, and robust crash recovery.
  • Snapshot isolation.
  • Record-level locking (multi-versioning provides non-blocking reads).
  • Highly concurrent database access.
  • Flexible meta-data (tens of thousands of columns, tables, and indexes are possible).
  • Indexing support for integer, floating-point, ASCII, Unicode, and binary columns.
  • Sophisticated index types, including conditional, tuple, and multi-valued.
  • Columns that can be up to 2GB with a maximum database size of 16TB.

Benefits

  • No additional download needed. ManagedEsent uses the native esent.dll that already comes as part of every version of Microsoft Windows.
  • No administration required. ESENT automatically manages log files, database recovery, and even the database cache size.

Note: The ESENT database file cannot be shared between multiple processes simultaneously. ESENT works best for applications with simple, predefined queries; if you have an application with complex, ad-hoc queries, a storage solution that provides a query layer will work better for you.

Related:

SourceOne Email Management – Index sets marked with “Unperformed Transaction” or “Missing Items” status due to transaction file with .xvlts extension stuck in index DropDir folder and multiple copies of the files found in Intermediary folder[4]

Article Number: 521257 Article Version: 5 Article Type: Break Fix



SourceOne Email Management, SourceOne

One or more SourceOne index transaction files with the XVLTS extension are stuck in a processing loop within the index DropDir folder, and multiple copies of the same files are created within the “Intermediary” sub-folder.

If the issue is not identified in time, it may have the following impact:

  • Thousands of copies of the XVLTS file, all the same size and belonging to the same index set, may end up in the Intermediary folder. Here is an example of how those moved files within Intermediary may look:
[Screenshot: duplicated XVLTS files in the Intermediary folder]

In the above example, ES1Mixed is the archive folder name, followed by YYYYMM and the index set number. After the index set number, an incremental number is appended with each new copy because the original XVLTS file is already present in the folder. All copies will have the same size.

  • If multiple transaction files have the issue, they may cause a backlog of index transaction files within the DropDir folders, because the maximum number of index processes that can run in the SourceOne environment are busy processing the problem files.

Event messages similar to the following may be found within the ExAsIdxObj.exe.log:

Unable to remove '\HostNameES1_MsgCenterUnpack_AreaEs1Mixed201710201805140308275B8F6E32246377675963F2E4B99AFF166449CD5FC4E695D200.EMCMF.MD'. OsError: 67|IndexRun.cpp(2062)|Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName(0x86042B76) Unknown error (0x80030043)|IndexThread.cpp(3038)|Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName[\HostNameEs1_IndexEs1Mixed20171001] Aborting index run!!!!!|IndexRun.cpp(1211)|Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostNameStopAncillaryRun \HostNameEs1_IndexEs1Mixed20171001|IdxAncillaryDB.cpp(295)|Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostNameMarking local idx state as missmsg E:ExIndexTempEs1Mixed_201710_001Index|CIdxState.cpp(279)|Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostNameEs1Mixed_201710_001] Not copying index to network due to previous fatal error. (0x86042B86)|IndexThread.cpp(3279)|Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName
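Entries like the one above pack several messages and source-location markers into a single line, separated by “|”. A small sketch to pull out the source-file markers (the structure is inferred from the sample above; the fragment used is illustrative):

```python
import re

# A fragment of an ExAsIdxObj.exe.log entry like the one quoted above.
entry = ("Aborting index run!!!!!|IndexRun.cpp(1211)|"
         "Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1")

# Source-location markers have the form <File>.cpp(<line>).
locations = re.findall(r"(\w+\.cpp)\((\d+)\)", entry)
```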

This problem is caused by a software defect.

There are two conditions identified which may cause this issue:

1. Index transaction files contain transactions that reference an EMCMF file location within Unpack_Area that is inaccessible or may have changed (for example, if the message center path was changed).

2. The index transaction file contains transaction entries where the path to the EMCMF files is corrupt, or the XVLTS file itself has been corrupted due to an environment issue.

This issue is resolved in patch and Service Pack versions of EMC SourceOne Email Management later than 7.2 SP6 Hotfix 2 (7.2.6.6175). Dell EMC SourceOne patch and Service Pack kits are available for download via http://support.emc.com.

Workaround:

  1. Stop the “EMC SourceOne Index” service on all SourceOne native archive servers with the index role.
  2. Check Task Manager on the index server hosts and make sure no “ExAsIdxObj.exe” or “ExAsElasticIdxObj.exe” processes are running. The SourceOne index service from step 1 waits for ExAsIdxObj.exe or ExAsElasticIdxObj.exe to stop before the service stops. The executable for the index service is ExAsIndex.exe.
  3. Once the index service from step 1 stops, navigate to the index file share and go to the DropDir\Intermediary folder.
  4. Based on the file listing within the Intermediary folder, make a list of the index sets impacted. For example, based on the screenshot provided above, the transaction files in question belong to the “es1mixed_201710_001” and “es1mixed_201804_001” index sets.
  5. Create a folder within the Intermediary folder that you will use in the next step to back up files from the index DropDir folder.
  6. Using the list created in step 4, identify any files starting with the same names, YYYYMM, and index set number, and move those files to the backup folder created in step 5.
  7. Start the SourceOne index service on all indexing hosts where the service was stopped in step 1.
  8. The index sets identified above need to be rebuilt, since some of their transaction files didn’t get processed. The SourceOne Administration Console can be used to submit index sets to be rebuilt. For detailed instructions on rebuilding index sets, follow the SourceOne Email Management Administration Guide.
  9. On successful rebuild, the status of the index sets should change from “Unperformed Transaction” or “Missing Items” to “Available”.
  10. If the index sets rebuild successfully, the files located within the Intermediary folder and the backup folder (from step 5) related to the index sets in question can be deleted.
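Step 6 above can be scripted. A hedged Python sketch, assuming the copies keep the <archive>_<YYYYMM>_<setnum> prefix and the XVLTS extension described earlier (the paths and function name here are hypothetical):

```python
import shutil
from pathlib import Path

def quarantine_xvlts(dropdir: Path, backup: Path, impacted: list[str]) -> list[str]:
    """Move XVLTS transaction files belonging to the impacted index sets
    out of the DropDir into a backup folder; return the moved file names."""
    backup.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in list(dropdir.iterdir()):  # snapshot, since we move files while iterating
        name = f.name.lower()
        # Match files named <archive>_<YYYYMM>_<setnum>... with the XVLTS extension.
        if f.is_file() and ".xvlts" in name and any(
                name.startswith(s.lower()) for s in impacted):
            shutil.move(str(f), str(backup / f.name))
            moved.append(f.name)
    return sorted(moved)
```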

Related:

Avamar: Exchange Plugin – Why backup change ratio jumps from 0.5% to over 20% in one day if we failover the DB in a DAG cluster

Article Number: 499650 Article Version: 3 Article Type: Break Fix



Avamar Plug-in for Exchange VSS

Case Scenario:

We observe that the Exchange DAG backup change ratio spikes when we perform a failover of a database in a DAG environment (moving one DB from one Exchange node to another).

The value spikes from the usual daily value of ~0.5% up to ~20-30%, and, depending on which DB is failed over, we even see a ~60% change ratio.

The DB is exactly the same before and after the failover; the only difference is that after the failover it is backed up from a different node of the DAG cluster.

This behavior gives the impression that the backup data is being sent twice to the Avamar storage; the expectation is that it should only send the usual ~0.5% of data, which reflects the average daily change ratio for all the DBs in this DAG cluster environment.

The result of this inconsistency is that Avamar capacity is unnecessarily over-utilized.

Explanation:

This is an expected behavior.

The logical structure of the Exchange database is the same but Avamar uses the *physical* layout of the database when performing the de-duplication.

In DAG environments, Exchange performs log shipping from the active copy of the database to the passive copy and replays the transactions against this passive copy.

Each Exchange server is responsible for applying the transaction logs against its copy of the database and there is no guarantee that it will use the same physical database layout as the other servers in the DAG.

Here is an example to better explain this concept:

  • Imagine we have two copies of the same picture and we cut each one into a puzzle using a different pattern.
  • While the picture looks the same once assembled, the physical layout of the puzzle is completely different.

Since Avamar only looks at the shape of the pieces, it has no way to know that the data is the same logical data; the data is therefore sent to the server again after the DB failover is performed in the DAG cluster.

As per the explanation above, this is the normal behavior of the product.
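The puzzle analogy can be made concrete with a toy chunk-hash comparison (this uses fixed-size chunking for simplicity; Avamar's actual chunking and hashing are more sophisticated):

```python
import hashlib

def chunk_hashes(data: bytes, size: int = 64) -> set:
    """Hash fixed-size chunks of a byte string (a crude stand-in for dedup chunking)."""
    return {hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)}

logical = bytes(range(256)) * 16     # the "picture": identical logical content
layout_a = logical                   # physical layout on one DAG node
layout_b = b"\x00" + logical         # same content, shifted by one byte on the other node

# Identical layouts share every chunk; a one-byte shift shares none,
# so a chunk-based dedup engine treats the copy as entirely new data.
shared = chunk_hashes(layout_a) & chunk_hashes(layout_b)
```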

Related:

Re: Vipr SRM – Status or Replication report

I am working on a report that monitors data received or transmitted on an array that is the target of replication, i.e. MB/s received by the target array for a replicated device. I have set thresholds to indicate status: green means data is being transmitted (any value greater than zero), and red means data was not transferred (value = 0). My problem is that when a device is deleted there is no numeric value, so it is NULL until that device is considered a non-active metric in SRM. In my report that condition is represented by the last known value and never changes. My report polling is Last Day, 1 Hr, Last.

I want to be able to show that in my report by using a threshold so it is investigated. I am using the Status report type. Does anyone know of a way to set a threshold on NULL?

Or has anyone created a report that they use to show replication status for VMAX, RecoverPoint, NetApp…

Thanks

Lisa

Related:


7022994: New Command Control rules inconsistently processed, intermittently failing

This document (7022994) is provided subject to the disclaimer at the end of this document.

Environment

Privileged Account Manager

Situation

New Command Control rules work on the primary node but not on the secondary node.
Recently created rules are inconsistently processed by the Primary and Backup Manager(s), intermittently failing.
Failed attempts are those processed by the backup manager; stopping services on the backup manager results in requests being successfully processed by the online primary manager.
Existing rules remain unaffected and are processed correctly by both managers.

Resolution

The simplest solution is to promote the existing Primary modules so the replication thread pushes the latest configuration to all the Secondaries:
  1. Please verify the Backup Manager’s 29120 port is reachable from the Primary Manager:

    telnet <backup> 29120

  2. Re-promote primary package modules:

    Note: This should force replication to happen from the primary manager to all backup managers.
  • Navigate to the Primary Manager’s packages in the Hosts Console.
  • Select all the packages that display ‘Primary’ status.
  • Click ‘Promote Manager’ from the left pane.

  • If the issue persists, please restart the PAM service on both the primary and backup manager(s) and wait a few minutes.

Cause

Replication of the Command Control and Auth modules fails because of network issues from the Primary to the Secondary (backup) managers on port 29120. This is a very rare issue which may occur when there is no connectivity between the Primary and Secondary servers at the time the replication thread runs.
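The port check in step 1 can also be done without telnet. A minimal Python sketch (the host name in the comment is a placeholder):

```python
import socket

def port_reachable(host: str, port: int = 29120, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("backup-manager.example.com")  # hypothetical backup manager
```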

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

Avamar Client : #SQL Backup Label number

Article Number: 503823 Article Version: 4 Article Type: How To

Avamar Client

1 – Database backups are created by backing up each individual database as a partial backup. “Partial” here means a piece: the backup of each database is part of an entire backup. A partial backup is just that – it’s a piece of a backup that doesn’t stand on its own.

As we can see below, each labelnum represents a partial backup.

<file internal="false" labelnum="1372" fullname="SQLSQLDB/db2/i-2.stream0" acnt="/clients/sql.moja.com" saveas="i-1.stream0" />

<file internal="false" labelnum="1371" fullname="SQLSQLDB/db2/f-0.stream0" acnt="/clients/sql.moja.com" />

<file internal="false" labelnum="1370" fullname="SQLSQLDB/db4/i-2.stream0" acnt="/clients/sql.moja.com" saveas="i-1.stream0" />

<file internal="false" labelnum="1369" fullname="SQLSQLDB/db3/i-2.stream0" acnt="/clients/sql.moja.com" saveas="i-1.stream0" />

<file internal="false" labelnum="1368" fullname="SQLSQLDB/db4/f-0.stream0" acnt="/clients/sql.moja.com" />

<file internal="false" labelnum="1367" fullname="SQLSQLDB/db3/f-0.stream0" acnt="/clients/sql.moja.com" />

2 – Later, a “snapview” is called to tie those partial backups together into a restorable set.

So the backup above will become only one number in the Avamar Administrator.

The snapview process is how Avamar ties partial backups together into a restorable set. After the backup of each database is complete, a final avtar run collects the root hashes of the partial backups and ties them together. This process of tying the backups together is a snapview.

Streams:

Avtar considers each stream a partial backup; for this reason, there is one labelnum per stream.

In the example below, the configuration uses 5 streams, so for larger databases we see 5 streams and therefore 5 label numbers, because each stream is a partial backup.



<flag type="integer" pidnum="3006" value="5" name="max-parallel" />

<file internal="false" labelnum="259344" fullname="sqlinstraveleef-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259343" fullname="sqlinstraveleef-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259342" fullname="sqlinstraveleef-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259340" fullname="sqlinstraveleef-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259341" fullname="sqlinstraveleef-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259367" fullname="sqlinsadminsysf-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259368" fullname="sqlinsadminsysf-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259369" fullname="sqlinsadminsysf-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259365" fullname="sqlinsadminsysf-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259366" fullname="sqlinsadminsysf-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259362" fullname="sqlinsCoref-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259364" fullname="sqlinsCoref-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259363" fullname="sqlinsCoref-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259360" fullname="sqlinsCoref-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259361" fullname="sqlinsCoref-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259349" fullname="sqlinsLCS_SQLf-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259347" fullname="sqlinsLCS_SQLf-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259348" fullname="sqlinstLCS_SQLf-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259346" fullname="sqlinsLCS_SQLf-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259345" fullname="sqlinsLCS_SQLf-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

Related:


Avamar f_cache2 size discrepancy between servers

Working with a customer on a performance issue, we ran across something that I am not sure is related, but it did cause me to wonder.

The customer has two servers – one is their “active” file server at their primary location, and the second is an “alternate version” of the active server that is kept in sync via a product called “DoubleTake”. So at any given time, each server effectively has the same amount of data, files, directories, etc. on it.

At present, we are backing up the active file server to an Avamar/DD at the primary location, and we are also backing up the “alternate” file server to an Avamar/DD at the secondary location. FWIW, the performance issue we are seeing is that the “alternate” file server backs up twice as fast as the active file server does – or more accurately, since the amount of changed data is not actually that much, the “alternate” file server is processing the 20 million files that need to be backed up twice as fast as the active file server is. So we are looking at various aspects of what might be causing one server to scan through all those files faster than the other.

Which brings me to the f_cache2 files – which I’m not sure have anything to do with this, but they did look odd to me.

On the active file server, from what I can tell from the Avamar session logs and elsewhere, the f_cache2 file is just under 9GB, and there are around “980 pages in all backups in cache”.

On the alternate file server, from what I can tell from the Avamar session logs and elsewhere, the f_cache2 file is just under 47GB, and there are around “550 pages in all backups in cache”.

Can someone help me understand how one f_cache2 file is 5 times the size of the other but seems to contain only about half the number of “pages”, which as far as I understood were all the same size?

The only thing I can think of is that we did have some issues in the past on the active file server that caused the Avamar cache files to be recreated a couple of times – so those files have not been around and active for the same number of backups as the cache files on the alternate server (which as far as I know have not been recreated and have just been growing since the agent was installed on that server).

All comments/feedback appreciated – thanks.
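A quick back-of-envelope check (using only the sizes and page counts quoted above, and assuming “pages” really are meant to be fixed-size) shows the implied bytes-per-page differ by roughly an order of magnitude between the two servers, which is consistent with the caches having very different histories:

```python
# Figures quoted above; "pages" as reported by the Avamar session logs.
active_bytes, active_pages = 9 * 2**30, 980    # active server: ~9 GB, ~980 pages
alt_bytes, alt_pages = 47 * 2**30, 550         # alternate server: ~47 GB, ~550 pages

per_page_active = active_bytes / active_pages  # ~9.4 MiB per page
per_page_alt = alt_bytes / alt_pages           # ~87.5 MiB per page
ratio = per_page_alt / per_page_active         # ~9.3x difference
```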

Related: