OneFS: How to recover individual files from snapshots using OneFS SyncIQ

Article Number: 514023 Article Version: 3 Article Type: How To



Isilon OneFS, Isilon SyncIQ

This KB article explains how to restore or recover data from snapshots using SyncIQ.

A Snapshot is a copy of the files/folders within the location selected. The contents of each snapshot reflect the state of the file system at the time the snapshot was created. It is easy to navigate through each snapshot as if it were still active. Your directories/folders and files will appear as they were at the time that the snapshot was created. You can easily recover your own files, before snapshot expiration, simply by copying an earlier version from the snapshot to the original directory or to an alternate location.

Note: It is good practice to copy files to a temporary directory rather than overwriting current files/folders. This gives users the option to keep the snapshot copy, the current copy, or both.
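For a single file or a handful of files, SyncIQ is not required at all; a plain copy out of the snapshot directory is usually enough. As a minimal sketch (the snapshot name, paths, and filename below are hypothetical), an earlier version of a file can be copied from the snapshot into a temporary directory:

# mkdir -p /ifs/recovery_tmp

# cp -Rp /ifs/data/mydir/.snapshot/weekly_snap/lost_file.txt /ifs/recovery_tmp/

The SyncIQ procedure below is useful when a whole directory tree, or a selected subset of it, needs to be recovered.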

Example:

Assume we have the folder /ifs/original_folder, and that this folder contained the following subfolders:

folder1 folder2 folder3 folder4

The subfolders folder1, folder2, and folder3 were deleted from HEAD, but they still exist in the snapshot with ID 59, as shown below:

# ls /ifs/original_folder

folder4

# isi snapshot snapshots view snapshot

ID: 59

Name: snapshot

Path: /ifs/original_folder

Has Locks: No

Schedule: -

Alias Target ID: -

Alias Target Name: -

Created: 2017-11-15T11:04:58

Expires: -

Size: 10.0k

Shadow Bytes: 0

% Reserve: 0.00%

% Filesystem: 0.00%

State: active

Riptide-1# ls /ifs/original_folder/.snapshot/snapshot

folder1 folder2 folder3 folder4

We will recover only the subfolder “folder2” from snapshot 59 to the path /ifs/recoverd_folder, as below:

1- Create a policy with source /ifs/original_folder that includes only the subfolder “folder2”:

# isi sync policies create --name=recover --source-root-path=/ifs/original_folder --source-include-directories=/ifs/original_folder/folder2 --target-host=localhost --target-path=/ifs/recoverd_folder --action=sync

2- Start a sync job for the recover policy, but from snapshot ID 59:

In OneFS 7.x

# isi_classic sync pol start recover --use_snap 59

In OneFS 8.x

# isi sync jobs start --policy-name=recover --source-snapshot=59

3- Confirm that the sync job finished

# isi sync reports list

Policy Name Job ID Start Time End Time Action State

-----------------------------------------------------------------------------

recover 1 2017-11-15T11:38:02 2017-11-15T11:38:07 run finished

-----------------------------------------------------------------------------

Total: 1

4- Check the recovered folder; it will contain only the subfolder “folder2”.

# ls /ifs/recoverd_folder

folder2

Related:

CmnClntErrorInstances folder


Recently deployed 14.2 across Windows Server platforms; 2008 non-R2, 2008 R2, 2012, 2016.

Now seeing the following folder accumulating a large number of files on most systems; it does not seem to be OS version specific.

C:\ProgramData\Symantec\Symantec Endpoint Protection\14.2.770.0000.105\Data\CmnClntErrorInstances

In some cases several GB in total.

Is there some way to prevent this and/or restrict the number of files retained (by size, count or date)?

I have executed SymDiag and nothing untoward is indicated.

Any assistance greatly appreciated.


Related:

  • No Related Posts

File Dynamics 6.1 Introduces Security Notify Policies

buckgashler

This month we released File Dynamics 6.1 – a product and version number that might need a little bit of explanation. As I mentioned in a previous article, File Dynamics is a new data governance product built from the Identity-Driven policy management technology of Micro Focus Storage Manager for Active Directory, but greatly enhanced to …


The post File Dynamics 6.1 Introduces Security Notify Policies appeared first on Cool Solutions.

Related:

  • No Related Posts

ShareFile Error “The folder structure that you tried to download is too deep to be supported by most operating systems….. your folders and files.”

The Microsoft Windows API defines the maximum file path limit as 260 characters for a fully specified path and filename. This includes everything from the beginning of the directory path to the file extension. While there are some exceptions to this limit, ShareFile typically enforces a file path limit on files uploaded or downloaded via our various apps and tools.

File Path errors refer to the length of the file path and file name rather than the size of the file. If you encounter this error when attempting to upload or download a file using one of our apps, you may need to navigate to a deeper folder within the structure you are trying to download. By downloading at a level one or two folders deeper than the root folder, you may still be able to download the majority of your data without having to recreate the folder structure on your own computer.

Additionally, you may want to consider renaming lengthy file names that occupy the majority of the limit.
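If you have shell access to a local copy of the data (a hedged sketch only; the path below is hypothetical and the length threshold can be adjusted), the longest paths can be listed ahead of time so you know which names to shorten:

# find /path/to/local/copy | awk 'length($0) > 240 { print length($0), $0 }' | sort -rn | head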

Related:

Using Copy VM to copy VMs from one server to another takes too long

Required knowledge:

  • Basic level use of command line (CLI)
  • Basic level usage of VI
  • Basic level usage of SSH client
  • Hosts should have at least 4GB of dom0 memory
  1. Log into the master host via CLI

  2. Make copy of sparse_dd.conf file: cp /etc/sparse_dd.conf /etc/sparse_dd.conf.old

  3. Use VI to edit the file: vi /etc/sparse_dd.conf (alternatively, see the sed sketch after this list)

  4. Uncomment the line:

    # encryption-mode = never

    so it reads

    encryption-mode = never

  5. Save conf file

  6. Reboot host
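For administrators comfortable with the command line, steps 2 through 4 can be combined into a single command. This is only a sketch, assuming the commented line appears in /etc/sparse_dd.conf exactly as shown above; the -i.old option writes a backup copy, which covers step 2:

# sed -i.old 's/^# *encryption-mode = never/encryption-mode = never/' /etc/sparse_dd.conf

A reboot of the host is still required for the change to take effect.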


This procedure will have to be done on whichever host will be the master. If the environment changes often and this behavior is desired across the pool, please complete it on all hosts.

**This type of encryption is intended for high-security environments. If yours is such an environment, either do not make this change, or make sure you change it back once the copy procedure is done. Use at your own risk.**

Related:

NetScaler GSLB Static Proximity Does Not Work After Upgrading to 11.0/11.1 Firmware

To resolve this issue, delete the nslocation.* files from the /var/netscaler/locdb/ directory and then re-run the configuration to add the location file:

root@NS-Cumulus1# cd /var/netscaler/locdb/

root@NS-Cumulus1# ls

GeoIPCountryWhois.csv GeoLite2-City-Locations-en.csv IP2LOCATION-LITE-DB1.CSV nslocation.ck nslocation.db

root@NS-Cumulus1# rm nslocation.*

> add locationfile /var/netscaler/locdb/GeoIPCountryWhois.csv -format geoip-country

Related:

How to Create a Folder and Modify Folder Options

Create a folder to organize the files stored on your account. You have granular control of who can access files stored in a given folder, including the ability to control download and upload permissions.

If you’re looking for information on Folder Options, click here.

Permission Requirements

In order to create a subfolder, you must have upload permissions in the parent folder. To create a folder:

  1. Access the green Action Button and select Create Folder.
  2. Enter a folder name and, optionally, a description; a drop-down menu is available to add users.
  3. If you would like to allow other users to access this folder with specific permissions, click the checkbox for Add People to Folder. Leave this box unchecked if you do not wish to add users at this time, or if you plan to add users at a later date.
  4. Note that ShareFile does not allow duplicate folder names at the root of the account or within the same parent folder.
  5. Click Create Folder.
  6. To create a subfolder, repeat the above steps. If you wish to create another folder at the same level as the previous folder, you will need to navigate back to the original folder and repeat these steps.

Share Your Folder with Others

Click here for information on how to share your folder with other users.


Create Folders in Bulk

The Bulk Folder Upload is designed for customers who want each of their clients to have their own folder within their account. The Bulk Folder Upload will add your client users to your ShareFile account, provide them with login information, and create folders for each client to access.

Click here to download the Bulk Folder Upload template. Please enter the following information in the provided columns:

  1. EmailAddress
  2. FirstName
  3. LastName
  4. Company
  5. Password (if left blank, the client will receive a randomly generated password)
  6. FolderName
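As an illustration only (the values below are hypothetical and shown in comma-separated form for readability), a completed row might look like this, with the column titles left exactly as provided in the template:

EmailAddress,FirstName,LastName,Company,Password,FolderName

jdoe@example.com,Jane,Doe,Example Corp,,Example Corp Files

Leaving the Password field empty, as in this example, means the client will receive a randomly generated password.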

When filling out the spreadsheet, do not change the spreadsheet name or any of the column titles; doing so will cause an error in the upload.

Send the completed spreadsheet to ShareFile Customer Support with the following information:

  • Which root-level folder the new client folders will be created under
  • Who will be the Owner of the new Folders (either you or another Employee)
  • Which permissions and settings the client users should have. These include:
    • Ability to change their passwords
    • Add the user to the company Shared Address Book
    • Users can download from their folders
    • Users can upload from their folders
    • Can delete
    • Users are Folder Administrators
    • Users can receive download notifications
    • Users can receive upload notifications


You will also need to let ShareFile Customer Care know if the Welcome Email should be customized and if you want this sent out to all your new users at one time. Alternatively, you may send out the Welcome Email manually through the Manage Users link in your account.

You may submit the above request directly by clicking here: https://www.sharefile.com/support

Folder Creator vs Folder Owner

When a folder is created by a user, the creator will be listed as the Creator of the folder when viewing the folder as an individual item. Once you have navigated within that folder, you can view the current folder owner in the Folder Access pane at the bottom of the page. If a user created a folder, but has been removed from the account, that user will still be listed as the Folder Creator. However, Folder Owner will be changed when the deleted user’s folders and files have been reassigned to another user.

(The Creator Column denotes the original folder creator)


(The current owner of the folder is denoted in the Folder Access pane)




Available Folder Options

Folder Options can be accessed in the More Options menu when viewing a folder.




File Retention Policy

The File Retention Policy determines how long files are retained in a specific folder. You can set a default file retention policy for all newly created folders if you are an administrator for the account. To set a policy, click into the root level folder you would like to set the policy on. You have the option to have files deleted 1 day, 7 days, 14 days, 30 days, 60 days, 90 days, 6 months, 1 year or 2 years after they are uploaded.

This applies to all files in the root level folder, as well as all files within the subfolders.

Account-wide default settings can be configured by an account Admin in the Advanced Preferences menu. When changing the account-wide setting, the new setting will only apply to newly created folders and not previous folders in your account.

Will I be notified before my policy deletes my files?

Before files or folders are removed, a warning message is shown in the web application.


Retention Policy FAQ

If you set both a folder expiration date and a file retention policy, the most restrictive policy will take effect.

For example, if the folder expiration date is set to one week from today’s date and the file retention policy is set to 30 days, then the folder and all its contents will be deleted under the one-week policy.

When moving files and folders, they will inherit the new parent/root level folder’s policy.

For example, if you move a file from your File Box into a folder with a retention policy of 90 days, then the file will inherit an expiration date of 90 days from its upload date.

When setting a new policy or changing a policy on an existing folder, there will be an automatic 7-day warning.

For example, if you uploaded a file six months ago and today set a file retention policy for 30 days, then the file will be set to delete in 7 days to avoid any accidental deletions.

Will the File Retention Policy delete all subfolders?

No; the files contained within the subfolders will be removed, but the empty folders will remain. To have folders automatically removed, try using a Folder Expiration Date, described further down in this article.

What happens to files removed by retention policy?

Files and folders deleted by a retention policy are permanently removed. Files removed by retention policy cannot be restored from the Recycle Bin.

It is possible to customize the retention policy of the Personal Folders section of your account via the Edit Folder Options link. Any changes made to the File Retention policy of your Personal Folders will supersede account-wide File Retention policy settings. If you do not want your users to have this ability, please contact ShareFile Customer Support to have this setting disabled.



Folder Expiration Date

Items deleted via Expiration Policy cannot be restored from the Recycle Bin. To set a specific date on which a folder and all files contained within it are deleted:

  1. Access the folder you wish to delete.
  2. Access More Options beside the folder name and select Advanced Folder Settings.
  3. Under Folder Expiration Date, use the calendar or date format to specify the expiration date.
  4. Save.



Sort Files in a Folder

Files can be sorted by clicking on any header within the folder. The options are by Title, Mb, Uploaded date, or Creator. Folder Admins can change the default sort order in the Advanced Folder Options menu. Account Administrators can set account-wide sorting defaults in the Admin Settings section of their account, in Advanced Preferences.

Related:

File Count Per Directory

Got asked the following question from the field recently:

“I have a customer with hundreds of thousands of files per directory in a small number of directories on their cluster. What’s least impactful command to count the number of files per directory?”



Unfortunately, there’s no command currently available that will provide that count instantaneously. Something will have to perform a treewalk to gather these statistics. That said, there are a couple of approaches to this, each with its pros and cons:

  • If the cluster has a SmartQuotas license, an advisory directory quota can be configured on the directories whose file counts need to be checked. As mentioned, the first job run will require walking the directory tree, but fast, low-impact reports will be available after this first pass (see the sketch after this list).

  • Another approach is using traditional UNIX commands, either from the OneFS CLI or, less desirably, from a UNIX client NFS session.
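As a rough sketch of the SmartQuotas approach (treat the syntax as an assumption and verify it against the isi quota quotas create help output for your OneFS release; the path is hypothetical), a directory quota without any thresholds is enough to have the file count tracked and reported:

# isi quota quotas create /ifs/data/big_dir directory

# isi quota quotas view /ifs/data/big_dir directory

Once the initial QuotaScan job has completed, the quota report includes the file count for the directory without another treewalk.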



The two following commands will both take time to run:

# ls -f /path/to/directory | wc -l

# find /path/to/directory -type f | wc -l

It’s worth noting that counting files with ls will probably yield faster results if the ‘-l’ flag is omitted and the ‘-f’ flag used instead. This is because ‘-l’ resolves UIDs and GIDs to display users/groups, which creates more work and thereby slows the listing. In contrast, ‘-f’ allows the ‘ls’ command to avoid sorting the output, which should be faster and reduce memory consumption when listing extremely large numbers of files.

That said, there really is no quick way to walk a file system and count the files – especially since both ‘ls’ and ‘find’ are single threaded commands. Running either of these in the background with output redirected to a file is probably the best approach.

Depending on the arguments for either the ‘ls’ or ‘find’ command, you can gather a comprehensive set of context info and metadata on a single pass.

# find /path/to/scan -ls > output.file

It will take quite a while for the command to complete, but once you have the output stashed in a file you can pull all sorts of useful data from it.
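As a hedged illustration (assuming the default ‘find -ls’ output, where the third column is the mode string and the seventh column is the size in bytes), the scan can be pushed to the background and the resulting file summarized later with standard tools:

# nohup find /path/to/scan -ls > output.file 2>/dev/null &

# awk '{ entries++; bytes += $7 } END { printf "%d entries, %.1f GB\n", entries, bytes/1e9 }' output.file

# awk '$3 ~ /^-/' output.file | wc -l

The first awk command totals the entries and their reported size; the second counts only regular files (mode strings beginning with a dash).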

Assuming a latency of around 20ms per file, it will take about 33 minutes to parse a directory containing 100,000 files (100,000 × 20 ms ≈ 2,000 seconds). This estimate is conservative, since there are typically multiple protocol operations that need to be done for each file, and they add up because ‘ls’ is not multi-threaded.

  • If possible, ensure the directories of interest are stored on a file pool that has at least one of the metadata mirrors on SSD (metadata-read).



  • Windows Explorer can also enumerate the files in a directory tree surprisingly quickly. All you get is a file count, but it can work pretty well.

  • If the directory you wish to know the file count for just happens to be /ifs, you can run the LinCount job, which will tell you how many LINs there are in the file system.

The LinCount job (relatively) quickly scans the file system and returns the total count of LINs (logical inodes). The LIN count is equivalent to the total file and directory count on a cluster. The job runs at LOW priority by default and is the fastest method of determining object count on OneFS, assuming no other job has already run to completion.



To kick off the LinCount job, the following command can be run from the OneFS command line interface (CLI):



# isi job start lincount



The output from this will be along the lines of “Added job [52]”.



Note that the number in square brackets is the job ID.



To view results, run the following from the CLI:



# isi job reports view [job ID]



For example:



# isi job reports view 52

LinCount[52] phase 1 (2018-09-17T09:33:33)

------------------------------------------

Elapsed time 1 seconds

Errors 0

Job mode LinCount

LINs traversed 1722

SINs traversed 0



The “LINs traversed” metric indicates that 1722 files and directories were found.



Be aware that the LinCount job output will also include snapshot revisions of LINs in its count.



Alternatively, if another treewalk job has run against the directory you wish to know the count for, you might be in luck.



Some other considerations regarding the scenario presented in the original question:



Hundreds of thousands is an extremely large number of files to store in a single directory. To reduce the directory enumeration time, where possible, divide the files up into multiple subdirectories.
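As a purely illustrative sketch (hypothetical path; test it on a copy first, since it moves files), existing files can be bucketed into subdirectories based on the first two characters of their names:

# cd /ifs/data/big_dir

# for f in *; do [ -f "$f" ] || continue; d=$(printf '%s' "$f" | cut -c1-2); mkdir -p "sub_$d"; mv -- "$f" "sub_$d/"; done

Spawning a couple of processes per file makes this slow on very large directories, but it only needs to be done once, and subsequent directory enumerations become much cheaper.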



When it comes to NFS, the behavior will partially depend on whether the client is doing READDIRPLUS operations or plain READDIR. READDIRPLUS is useful if the client is going to need the metadata. However, if all you’re trying to do is list the filenames, it actually makes that operation much slower.



If you only read the filenames in the directory, and you don’t attempt to stat any associated metadata, then this requires a relatively small amount of I/O to pull the names from the meta-tree, and should be fairly fast.



If this has already been done recently, some or all of the blocks are likely to already be in L2 cache. As such, a subsequent operation won’t need to read from hard disk and will be substantially faster.



NFS is more complicated regarding what it will and won’t cache on the client side, particularly with the attribute cache and the timeouts that are associated with it.



Here are the options from fastest to slowest:

  • If NFS is using READDIR, as opposed to READDIRPLUS, and the ‘ls’ command is invoked with the appropriate arguments to prevent it polling metadata or sorting the output, execution will be relatively swift.

  • If ‘ls’ polls the metadata (or if NFS uses READDIRPLUS) but doesn’t sort the results, output will begin almost immediately, but the listing will take longer to complete overall.

  • If ‘ls’ sorts the output, nothing will be displayed until the command has read everything and sorted it; the output is then returned in a deluge at the end.

Related:

Folder Permissions

The folder permissions detailed below allow for user-specific folder functionality. Each user on the folder can have their own permission set. To change folder permissions for an individual folder, navigate to that folder and access the People on this Folder menu. Use the checkboxes to change permissions as needed. To manage folder permissions without navigating to them individually, click here.

Advisory – When using the “Apply changes to subfolders” checkbox on a folder containing more than 30 subfolders, the attempt to save and apply permissions may time out. If this is the case, consider using Distribution Groups to easily manage permissions for large groups of users.

Download permission

With download permission, a user has the ability to download any document in the folder to their computer or mobile device.

Download alerts

With download alerts, users will be notified via email that files have been downloaded from the folder. A user must be granted admin permission on the folder to be granted download alerts, as it will allow them to identify other users on the folder when they download documents.

Upload permission

Granting a user upload permission gives the user the ability to upload files or folders to the folder. With this permission, the user is also able to create subfolders within this folder. Any subfolders created will automatically inherit the parent folder’s permissions. The user creating the subfolder will not be able to manage users on the newly created subfolder unless they were also granted admin permission on the parent folder.

Upload alerts

With upload alerts, users will be notified via email that files have been uploaded to the folder. In order to be granted this permission, the user must also be granted the download permission. Users with download permission will also be able to grant themselves this permission through a checkbox on the folder.

Delete permission

The delete permission grants a user the ability to delete files within the folder that they did not upload. Note that by default, all users are able to delete files that they uploaded to the folder. This can be turned off for an account by an administrator’s request to ShareFile Customer Support.

Admin permission

A user granted admin permission has the ability to manage folder access on this folder and can add or remove users. They will also be able to edit some folder options. Note that the user listed as Owner (usually the creator of the folder) may not be removed by any user.

View permission

Granting a user view permission allows them to view a document without downloading it. Grant a user only view permission if you want them to view the document with a watermark. If you grant a user download permission on the folder, they will automatically be granted view permission as well. Only VDR accounts and StorageZones accounts with View-Only Sharing have the ability to grant View permissions.


Manage Folder Access Permissions via Manage Users

This feature can only be utilized by a Super User. Master Admin users must also be a Super User to utilize this feature. To change folder permissions from the Manage Users menu, or quickly view a user’s folder permissions without having to navigate to each individual folder one at a time:

  1. Navigate to the People > Manage Users menu.
  2. Use the Search or Browse function to locate the user you wish to view.
  3. Under Basic User Permissions, click Configure folder access permissions for this user.
  4. Use the Folder Tree menu to quickly edit your user’s existing folder permissions, or add a user to a folder if they do not already have access.


FAQ

Why can my client see the People on this Folder tab?

Clients can view Folder Access Lists if the account preference for Folder Access List is set to Yes. This setting can be configured in Advanced Preferences.

Related:

EMC SourceOne Email Management File Restore Activity fails with “Permission Error (0x86044710).”

Article Number: 483073 Article Version: 2 Article Type: Break Fix



SourceOne for File Systems

When trying to restore files that were previously shortcut, using the File Restore activity, the following file permission-related error is shown:

Unable to verify that the user has required file system permissions to restore the file. Try restoring to an alternate location. (0x86044710)

Function: CExFileSystem::iWriteFileProperties

Error encountered checking permissions (0x80070423)

Function: CoExFileProvider::StoreDoc

Failed to restore file to original location

(000000002FA2F179A9A64B3E18708A1301F6951BEA98F76600). (0x86044701) The specified property FSC_TargetLocation does not exist

(0x86044002)

The issue was related to Windows file permissions not being applied properly to the files that needed to be restored. Although the Windows NTFS file/folder permissions stated that a particular end user and the SourceOne account had full permission on those files, Windows was not actually applying them.

Also, when running a File Restore activity, the following security rights need to be configured for the SourceOne service account:

  • The Local Administrators group has the following rights:

    • Backup files and directories

    • Manage auditing and security log

    • Restore files and directories

    • Take ownership of files or other objects

  1. Remove and re-add the file permissions, disabling the option “Include inheritable permissions from this object’s parent” when managing NTFS permissions on the problem folder on the file server (a command-line sketch follows below).
  2. Re-add, in the same manner, any other identified users that lost file/folder permissions.
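As a command-line sketch of step 1 (the path and account below are hypothetical; verify the exact rights against your own environment before applying them), inheritance can be disabled and an explicit grant re-applied with icacls:

icacls "D:\Shares\ProblemFolder" /inheritance:d

icacls "D:\Shares\ProblemFolder" /grant "DOMAIN\s1_service:(OI)(CI)F" /T

The /inheritance:d switch converts the inherited entries into explicit ones, which mirrors disabling “Include inheritable permissions from this object’s parent,” and /T applies the grant to the existing files and subfolders.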

Related: