Unable to back up a VHD with Unitrends. Error: "Snapshot chain is too long. Operation failed."

When Unitrends creates a snapshot, it copies the snapshot to Unitrends storage and then deletes it. If the delete operation fails, the snapshot chain grows by one each time Unitrends takes a new snapshot. The chain has a maximum length of 30 snapshots; after the 30th, no more can be created, and the error "Snapshot chain is too long" appears in the Unitrends logs. At that point you will also be unable to create a snapshot from XenCenter or with an xe command.
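As a quick check before the limit is reached, the chain length can be inspected from the CLI. A minimal sketch, assuming a XenServer/Citrix Hypervisor host with the xe CLI available; the VM name "MyVM" is a placeholder:

```shell
#!/bin/sh
# Count the snapshots in a VM's chain so growth can be spotted before the
# 30-snapshot limit is hit. count_snapshots parses "xe snapshot-list"
# style output from stdin and counts the uuid lines, one per snapshot.

count_snapshots() {
  grep -c '^uuid'
}

# On a real host (hypothetical VM name "MyVM"):
#   xe snapshot-list snapshot-of=$(xe vm-list name-label=MyVM --minimal) | count_snapshots
```

If the count is approaching 30, delete stale snapshots (after confirming they are no longer needed) before the backup fails.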

Related:

Quiesced Snapshots Supportability

Quiesced snapshots take advantage of the Windows Volume Shadow Copy Service (VSS) to generate application-consistent point-in-time snapshots. The VSS framework helps VSS-aware applications (for example, Microsoft Exchange or Microsoft SQL Server) flush data to disk and prepare for the snapshot before it is taken.

Quiesced snapshots are therefore safer to restore, but they can have a greater performance impact on the system while being taken. They may also fail under load, so more than one attempt may be required.

XenServer supports quiesced snapshots on:

• Windows Server 2012 R2 Server Core

• Windows Server 2012 R2

• Windows Server 2012

• Windows Server 2008 R2

• Windows Server 2008 (32/64-bit)

• Windows Server 2003 (32/64-bit)

Windows 8.1, Windows 8, Windows 7, Windows 2000, and Windows Vista are not supported.

Note: VSS and quiesced snapshots are deprecated starting with Citrix Hypervisor 8.1. See:

https://docs.citrix.com/en-us/citrix-hypervisor/whats-new/removed-features.html
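Since quiesced snapshots may fail under load, a simple retry loop helps. A sketch of that pattern; the xe invocation shown in the comment applies to pre-8.1 hosts only, and "MyVM" is a placeholder:

```shell
#!/bin/sh
# retry_snapshot runs the supplied snapshot command up to 3 times, pausing
# between attempts, and returns non-zero only if every attempt fails.

retry_snapshot() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ] && return 1
    sleep 1   # back off briefly before retrying
  done
  return 0
}

# On a real (pre-8.1) host you might run:
#   retry_snapshot xe vm-snapshot-with-quiesce vm=MyVM new-name-label=quiesced-backup
```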

Related:

Citrix Hypervisor Export Running VM – Export snapshot to file through CLI

Find the VM that you want to backup: xe vm-list

Snapshot the VM, see https://docs.citrix.com/en-us/xencenter/7-1/vms-snapshots-take.html

Find the snapshot to export by listing snapshots and their corresponding VMs: xe snapshot-list params=uuid,name-label,snapshot-of

Set the snapshot to be exportable: xe snapshot-param-set is-a-template=false uuid=<snapshot uuid>

Export the snapshot to a file: xe vm-export uuid=<snapshot uuid> filename=<filename>

Later, it can be imported: xe vm-import filename=<path> preserve=true force=true sr-uuid=<uuid> (xe sr-list to find the sr uuid)
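The steps above can be combined into one script. A minimal sketch: the VM name, snapshot label, and export path are placeholders, and the xe calls (shown commented) should be run on the pool master:

```shell
#!/bin/sh
# Snapshot a running VM, mark the snapshot exportable, export it to an
# .xva file, then remove the snapshot so the chain does not keep growing.
set -e

VM=MyVM                                            # placeholder VM name
SNAP_NAME="backup-$(date +%Y%m%d)"
EXPORT_FILE="/mnt/backup/${VM}-${SNAP_NAME}.xva"   # placeholder path

# On a real host:
#   SNAP_UUID=$(xe vm-snapshot vm="$VM" new-name-label="$SNAP_NAME")
#   xe snapshot-param-set is-a-template=false uuid="$SNAP_UUID"
#   xe vm-export uuid="$SNAP_UUID" filename="$EXPORT_FILE"
#   xe snapshot-uninstall uuid="$SNAP_UUID" force=true
echo "Export target: $EXPORT_FILE"
```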

Related:

OneFS Shadow Stores

The recent series of articles on SmartDedupe have generated several questions from the field around shadow stores. So this seemed like an ideal topic to explore in a bit more depth over the course of the next couple of articles.

A shadow store is a class of system file that contains blocks which can be referenced by different files, thereby providing a mechanism that allows multiple files to share common data. Shadow stores were first introduced in OneFS 7.0, initially supporting Isilon file clones, and indeed there are many overlaps between cloning and deduplicating files. As we will see, a variant of the shadow store is also used as a container for file packing in OneFS SFSE (Small File Storage Efficiency), often used in archive workflows such as healthcare's PACS.

Architecturally, each shadow store can contain up to 256 blocks, with each block able to be referenced by 32,000 files. If this reference limit is exceeded, a new shadow store is created. Additionally, shadow stores do not reference other shadow stores. All blocks within a shadow store must be either sparse or point at an actual data block. And snapshots of shadow stores are not allowed, since shadow stores have no hard links.
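Those limits put a ceiling on what a single shadow store can hold. A quick back-of-the-envelope calculation, assuming the standard 8 KB OneFS block size:

```shell
#!/bin/sh
# A shadow store addresses up to 256 blocks of 8 KB (2 MB of unique data),
# and each block can be referenced by up to 32,000 files.
blocks=256
block_kb=8
max_refs=32000

unique_kb=$((blocks * block_kb))              # unique data per store
shared_kb=$((blocks * block_kb * max_refs))   # logical data it can back
echo "unique: ${unique_kb} KB; shared: $((shared_kb / 1024 / 1024)) GB"
```

So a fully referenced shadow store can back roughly 62 GB of logical file data from just 2 MB of unique blocks.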

Shadow stores contain the physical addresses and protection for data blocks, just like normal file data. However, a fundamental difference between a shadow store and a regular file is that the former doesn't contain all the metadata typically associated with traditional file inodes. In particular, time-based attributes (creation time, modification time, etc.) are explicitly not maintained.

Consider the shadow store information for a regular, undeduped file (file.orig):

# isi get -DDD file.orig | grep -i shadow

* Shadow refs: 0

zero=36 shadow=0 ditto=0 prealloc=0 block=28

A second copy of this file (file.dup) is then created and then deduplicated:

# isi get -DDD file.* | grep -i shadow

* Shadow refs: 28

zero=36 shadow=28 ditto=0 prealloc=0 block=0

* Shadow refs: 28

zero=36 shadow=28 ditto=0 prealloc=0 block=0

As we can see, the block count of the original file has now become zero and the shadow count for both the original file and its copy is incremented to '28'. Additionally, if another file copy is added and deduplicated, the same shadow store info and count is reported for all three files. It's worth noting that even if the duplicate file(s) are removed, the original file will still retain the shadow store layout.
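One way to read the layout line, on the assumption that the counters (zero, shadow, ditto, prealloc, block) partition the file's 8 KB blocks:

```shell
#!/bin/sh
# Sum the per-category block counters from the "isi get -DDD" layout line
# for the deduped file above: the data now lives in zero blocks or shadow
# references, and block=0 means no unshared data blocks remain.
layout="zero=36 shadow=28 ditto=0 prealloc=0 block=0"

total=0
for kv in $layout; do
  total=$((total + ${kv#*=}))   # strip "name=" and add the count
done
echo "$total blocks total; unshared data blocks: ${layout##*block=}"
```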

Each shadow store has a unique identifier called a shadow inode number (SIN). But, before we get into more detail, here’s a table of useful terms and their descriptions:

Inode: Data structure that keeps track of all data and metadata (attributes, metatree blocks, etc.) for files and directories in OneFS.

LIN: Logical Inode Number; uniquely identifies each regular file in the filesystem.

LBN: Logical Block Number; identifies the block offset for each block in a file.

IFM Tree or Metatree: Encapsulates the on-disk and in-memory format of the inode. File data blocks are indexed by LBN in the IFM B-tree, or file metatree. This B-tree stores protection group (PG) records keyed by the first LBN. To retrieve the record for a particular LBN, the first key before the requested LBN is read. The retrieved record may or may not contain actual data block pointers.

IDI: Isi Data Integrity checksum. IDI checkcodes help avoid data integrity issues which can occur when hardware provides the wrong data, for example. Hence IDI is focused on the path to and from the drive, and checkcodes are implemented per OneFS block.

Protection Group (PG): Encompasses the data and redundancy associated with a particular region of file data. The file data space is broken up into sections of 16 x 8KB blocks called stripe units. These correspond to the N in N+M notation; there are N+M stripe units in a protection group.

Protection Group Record: Record containing block addresses for a data stripe. There are five types of PG records: sparse, ditto, classic, shadow, and mixed. The IFM B-tree uses the B-tree flag bits, the record size, and an inline field to identify the five types of records.

BSIN: Base Shadow Store, containing cloned or deduped data.

CSIN: Container Shadow Store, containing packed data (containerized files).

SIN: Shadow Inode Number; a LIN for a shadow store, containing blocks that are referenced by different files.

Shadow Extent: Contains a Shadow Inode Number (SIN), an offset, and a count. Shadow extents are not included in the FEC calculation, since protection is provided by the shadow store.

Blocks in a shadow store are identified with a SIN and LBN (logical block number).

# isi get -DD /ifs/data/file.dup | fgrep -A 4 -i "protection group"

PROTECTION GROUPS

lbn 0: 4+2/2

4000:0001:0067:0009@0#64

0,0,0:8192#32

A SIN is essentially a LIN that is dedicated to a shadow store file, and SINs are allocated from a subset of the LIN range. Just as every standard file is uniquely identified by a LIN, every shadow store is uniquely identified by a SIN. It is easy to tell if you are dealing with a shadow store because the SIN will begin with 4000. For example, in the output above:

4000:0001:0067:0009
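That convention makes shadow references easy to pick out programmatically. A small sketch; is_sin simply tests for the 4000 prefix:

```shell
#!/bin/sh
# is_sin reports whether an identifier refers to a shadow store, using the
# fact that SINs are allocated from the LIN range beginning with 4000.

is_sin() {
  case "$1" in
    4000:*) return 0 ;;
    *)      return 1 ;;
  esac
}

# On a cluster, shadow references could be pulled from metatree output with
# something like:
#   isi get -DD /ifs/data/file.dup | grep -o '4000:[0-9a-f:]*'
```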

Correspondingly, in the protection group (PG) they are represented as:

  • SIN
  • Block size
  • LBN
  • Run

The referencing protection group will not contain valid IDI data (the IDI checkcodes live with the shadow store itself). FEC parity, if required, will be computed assuming a zero block.

When a file references data in a shadow store, it contains meta-tree records that point to the shadow store. This meta-tree record contains a shadow reference, which comprises a SIN and LBN pair that uniquely identifies a block in a shadow store.

A set of extension blocks within the shadow store holds the reference count for each shadow store data block. The reference count for a block is adjusted each time a reference is created or deleted from any other file to that block. If a shadow store block's reference count drops to zero, it is marked as deleted, and the ShadowStoreDelete job, which runs periodically, deallocates the block.

Be aware that shadow stores are not directly exposed in the filesystem namespace. However, shadow stores and relevant statistics can be viewed using the ‘isi dedupe stats’, ‘isi_sstore list’ and ‘isi_sstore stats’ command line utilities.

Cloning

In OneFS, files can easily be cloned using the 'cp -c' command line utility. Shadow store(s) are created during the file cloning process, where the ownership of the data blocks is transferred from the source to the shadow store.

[Figure: shadow_store_1.png]



In some instances, data may be copied directly from the source to the newly created shadow stores. Cloning uses logical references to shadow stores: the source file's protection group(s) are moved to a shadow store, and the PG is then referenced by both the source file and the destination clone. After cloning, both the source and destination data blocks refer to an offset in a shadow store.
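In practice, the effect of a clone is easy to observe. A sketch with placeholder paths; shadow_refs just pulls the "Shadow refs" count out of isi get output:

```shell
#!/bin/sh
# shadow_refs extracts the "Shadow refs" count from "isi get -DDD" output
# on stdin; after "cp -c", both source and clone should report a non-zero
# count while sharing the same shadow store blocks.

shadow_refs() {
  awk -F: '/Shadow refs/ { gsub(/ /, "", $2); print $2; exit }'
}

# On a cluster node (placeholder paths):
#   cp -c /ifs/data/file.orig /ifs/data/file.clone
#   isi get -DDD /ifs/data/file.clone | shadow_refs
```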

Dedupe

As we have seen in the recent blog articles, shadow stores are also used for SmartDedupe. The principal difference with dedupe, as compared to cloning, is the process by which duplicate blocks are detected.

[Figure: shadow_store_2.png]

The deduplication job also has to spend more effort to ensure that contiguous file blocks are generally stored in adjacent blocks in the shadow store. If not, both read and degraded read performance may be impacted.

Small File Storage Efficiency

A class of specialized shadow stores is also used as containers for storage efficiency, allowing the packing of small files into larger structures that can be FEC protected.

[Figure: shadow_store_3.png]

These shadow stores differ from regular shadow stores in that they are deployed as single-reference stores. Additionally, container shadow stores are also optimized to isolate fragmentation, support tiering, and live in a separate subset of ID space from regular shadow stores.

SIN Cache

OneFS provides a SIN cache, which helps facilitate shadow store allocations. It provides a mechanism to create a shadow store on demand when required, and then cache that shadow store in memory on the local node so that it can be shared with subsequent allocators. The SIN cache segregates stores by disk pool, protection policy and whether or not the store is a container.

Related:

How to increase disk space for MCS catalogs with VMware

When you need to expand the system drive on MCS-provisioned servers whose master image (gold image) uses eager-zeroed thick provisioning, VMware will not allow the expansion because of the existing snapshots for that VM.

Also, removing any snapshot from the master image causes problems for Citrix Studio, which requires access to the entire snapshot chain for the master image; with a snapshot missing, the catalog becomes unstable in Studio whenever it looks for the missing snapshot.

There is also the fact that the disk space is set in the catalog's provisioning scheme at creation time, and it cannot be modified afterwards.

Related:

OneFS: 7.2.1.0 – 7.2.1.5: cannot perform a snapshot restore of an MS Office file from the Previous Versions tab. Error: Write-Protected

Article Number: 503048 Article Version: 3 Article Type: Break Fix



Isilon OneFS 7.2,Isilon NL-Series,Isilon 108NL,Isilon NL400

OneFS: 7.2.1.0 – 7.2.1.5: When attempting to restore an MS Office file (e.g., Excel or Word) from a snapshot using the Previous Versions tab, the error "Write-Protected" appears. As a result, you are unable to restore from the Isilon snapshot.


Cannot perform a snapshot restore of an MS Office file from the Previous Versions tab. Error: Write-Protected

This issue is addressed in OneFS version 7.2.1.6. The bug number is 187005.

We recommend you upgrade to 7.2.1.6 to avoid the bug in the future.

Workaround

Currently the only known workaround is to log into an Isilon node directly, as the root user, and copy the files/folders from the .snapshot directory to the folder of your choice. You will need to ensure the snapshot directory is visible, as set in your snapshot settings.

For example:

Folder to restore: /ifs/.snapshot/DataDailyBackup_35_2017-07-25-_22-30/data/FolderA

Isilon1-1# cd /ifs/.snapshot/DataDailyBackup_35_2017-07-25-_22-30/data

Isilon1-1# pwd

/ifs/.snapshot/DataDailyBackup_35_2017-07-25-_22-30/data

Isilon1-1# cp -Rp FolderA /ifs/data

FolderA has now been copied from the .snapshot directory to the current location.
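The copy step generalizes into a small helper. A sketch: the snapshot path and folder name below are placeholders for your own, and cp -Rp preserves permissions and timestamps:

```shell
#!/bin/sh
# restore_from_snapshot copies a folder out of a (read-only) .snapshot
# directory into a writable target, preserving modes and times.

restore_from_snapshot() {
  snapdir=$1   # e.g. /ifs/.snapshot/DataDailyBackup_35_2017-07-25-_22-30/data
  folder=$2    # e.g. FolderA
  target=$3    # e.g. /ifs/data
  cp -Rp "$snapdir/$folder" "$target/"
}
```

Run as root on a node, e.g. restore_from_snapshot /ifs/.snapshot/DataDailyBackup_35_2017-07-25-_22-30/data FolderA /ifs/data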

Related:

Re: Need explanation on Parent and Child Storage Group in VMAX All Flash Array

One example, from the Oracle space, is to create a child SG (storage group) for the Oracle data files (e.g. data_sg) and another for the redo logs (e.g. redo_sg). Aggregate them under a parent SG (e.g. database_sg). Then use the parent for operations that relate to the whole database:

1. Masking view.

2. Remote replications with SRDF (create a consistency group that includes both data and logs).

3. Local database snapshots using SnapVX (a snapshot containing both data and log is considered ‘restartable’ and you can use it to instantiate new copies of the database, or as a ‘savepoint’ before patch update or for any other reason).

However, if you need to recover production database, you only want to restore data_sg and not redo_sg (in case the online redo logs survived, as they contain the latest transactions). Therefore, although the snapshots are made with the parent (database_sg), you can restore just the child (data_sg), and proceed with database recovery – all the way to the current redo logs.

Another advantage is separate performance monitoring of the database data files and logs. Using the child SGs, you can monitor each separately for performance KPIs without the redo log metrics being mixed in with the data file metrics.

Finally, with the reintroduction of Service Levels (SLs) into the code, you can apply them differently to the child SGs if you so wanted (e.g., Silver for data_sg, Gold for redo_sg, etc.).
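The layout described above can be sketched with SYMCLI. Everything here is a hedged sketch: the SID, SG names, and snapshot name are placeholders, and the exact flags should be verified against your Solutions Enabler release. RUN=echo keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a parent/child (cascaded) storage group layout.
# Set RUN= (empty) on a real management host to actually execute.
RUN=echo
SID=1234   # placeholder array ID

$RUN symsg -sid $SID create data_sg
$RUN symsg -sid $SID create redo_sg
$RUN symsg -sid $SID create database_sg
$RUN symsg -sid $SID -sg database_sg add sg data_sg
$RUN symsg -sid $SID -sg database_sg add sg redo_sg

# Snapshot the whole database via the parent SG...
$RUN symsnapvx -sid $SID -sg database_sg -name daily establish
# ...but restore only the data files via the child SG:
$RUN symsnapvx -sid $SID -sg data_sg -snapshot_name daily restore
```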
