App Layering: “Welcome to Emergency Mode” usually means the Repository logical volume is damaged

Shut down the ELM. Make a snapshot now. Then power back on and log in as root, using your normal “root” password.

This error means there is a fatal problem in the Layering Service layer repository store. Be aware that it may not be possible to recover from this; either way, your recovery efforts need to be focused on that area.

Your first instinct might be that the boot disk partitions need an fsck, as in CTX221751. In reality, that is not the case. App Layering uses XFS as the filesystem for both the boot partitions and the repository store, and when you attempt to fsck an XFS filesystem, fsck returns success without doing anything. XFS is a self-repairing, journaled filesystem that should never need this kind of repair.
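If you want to see this for yourself (purely illustrative, not a required recovery step), fsck’s dry-run flag shows which checker it would invoke without actually running anything:

fsck -N /dev/sda1

On an XFS partition this resolves to fsck.xfs, which simply exits successfully without examining the filesystem at all.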

Although there is a tool called “xfs_repair”, it cannot be run on a mounted filesystem. So if you really believe that you need to run xfs_repair on /dev/sda1 or /dev/sda2 (the boot and root partitions on the boot disk), you will need to boot up another Linux machine and attach the boot disk from your ELM manually to that machine in order to do that. That’s beyond the scope of this article, and has never yet been necessary in App Layering, so we will not go into details here.

The Layering Service layer repository is a “logical volume” built using the Linux Logical Volume Manager (LVM) tools. This is how we allow you to expand the layer repository: we simply take any extra space or blank disks you provide, initialize them for use in the LVM, expand the volume group (VG), and expand the Logical Volume (LV) itself. Your VG could be composed of multiple Physical Volumes (PV) with your data spanned across the disks. If your VG is damaged in a way that LVM cannot recover from, you may not be able to get access to the data.
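For context, the expansion the ELM performs is conceptually the standard LVM sequence below (a rough sketch only, assuming a new blank disk shows up as /dev/sdc; the ELM runs the equivalent steps for you, so you should never need to type these yourself):

pvcreate /dev/sdc
vgextend unidesk_vg /dev/sdc
lvextend -l +100%FREE /dev/unidesk_vg/xfs_lv
xfs_growfs /mnt/repository

pvcreate initializes the disk as a PV, vgextend adds it to the VG, lvextend grows the LV into the new free extents, and xfs_growfs grows the XFS filesystem to match.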

General troubleshooting guidance for the ELM’s LVM can be found in the article App Layering: How to troubleshoot the layer repository disk as well.

Having SAN-level snapshots, backups or even clones of the ELM can help guard against complete data-loss in situations like this.

Getting the complete history of storage operations is critical at this point. You need to know when the repository was expanded and how, to determine what path forward you have. It’s not possible to lay out all the ways you can have LVM problems. Instead, we will walk through one common scenario: a disk added to the LVM is deleted.

Imagine you start with the initial 300GB disk. You then expand it with a 3000GB disk. Then you decide you didn’t want 3000GB, so you delete the disk, and add a 300GB disk and expand into that. You think you now have a 600GB volume. The ELM thinks you have a 3600GB volume. The additional expansion could succeed as long as LVM never tried to access data in the 3000GB gap in the middle.

This can get a lot more complicated, too, because you can expand your original disk as well, at any time. So you could start with 300GB, add a 200GB disk, expand the initial disk to 400GB, add another 200GB disk, and expand the first 200GB disk to 300GB. From the user perspective, you have a single 900GB volume, but in LVM, there are 5 separate segments spread across three disks in chronological order. While we could probably recover from deleting the third disk with the fourth segment, we probably cannot recover from deleting the second disk with the second and fifth segments.
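You can see exactly how your LV is segmented across PVs with lvdisplay’s --maps option; each “Logical extents … to …” stanza in the output is one segment, listed with the physical volume it lives on:

lvdisplay -m /dev/unidesk_vg/xfs_lv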

In all cases, your only hope for full recovery is if the missing disk has not had any data written to it. If you add a disk and immediately delete it, then you have some pretty good hope for recovery. If you add a disk, use it for a month, and then delete it, you are very likely to have corrupted or completely missing layer files.

In the ELM, there is only ever one VG, named unidesk_vg, into which all PVs are concatenated. That VG contains one LV, called xfs_lv, and accessible as /dev/unidesk_vg/xfs_lv. This is true no matter what platform the ELM is based on. The disk device names may change (/dev/sdb versus /dev/xvdb), but the VG and LV names are consistent.

Note: LVM stores its configuration in /etc/lvm. The current configuration can be found in /etc/lvm/backup/unidesk_vg, and previous copies of the configuration (copies are made before each LV operation – see the “description” line) are stored in /etc/lvm/archive. While reading those files is well beyond the scope of this article, it’s possible to piece together the history of LVM operations in the ELM by reading through the archive files in chronological order.
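A quick way to get that timeline without reading every file is to pull just the “description” line (which records the command each backup was taken before) out of the archive directory:

grep description /etc/lvm/archive/unidesk_vg_*.vg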

/etc/lvm/archive:
total 16
-rw-------. 1 root root  913 Feb 6 12:46 unidesk_vg_00000-1214181995.vg
-rw-------. 1 root root  924 Feb 6 12:46 unidesk_vg_00001-58662273.vg
-rw-------  1 root root 1360 May 4 11:01 unidesk_vg_00002-1911836168.vg
-rw-------  1 root root 1612 May 4 11:01 unidesk_vg_00003-1711355933.vg

/etc/lvm/backup:
total 4
-rw------- 1 root root 1789 May 4 11:01 unidesk_vg

There are three basic tools for Linux LVM: pvdisplay (show the physical volumes in your LVM), vgdisplay (show your “volume groups” built up from your PVs), and lvdisplay (show the “logical volumes” carved out of your VGs). Use those to determine the UUID of the missing disk. In this example, we’re going to simply give LVM back the disk it’s missing, using the UUID that the old disk had. This will allow LVM to bring the VG back up, but will leave you with a hole in the middle of your VG.

First, run pvdisplay to see your present and missing PVs.

WARNING: Device for PV w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05 not found or rejected by a filter.
  --- Physical volume ---
  PV Name               /dev/xvdb
  VG Name               unidesk_vg
  PV Size               300.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              76799
  Free PE               0
  Allocated PE          76799
  PV UUID               KzGOla-iLmf-Mog0-YYOd-9EWn-S1ug-fjW0nx

  --- Physical volume ---
  PV Name               [unknown]
  VG Name               unidesk_vg
  PV Size               100.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              25599
  Free PE               0
  Allocated PE          25599
  PV UUID               w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05

Then run vgdisplay and lvdisplay to ensure that the LV and VG size agree and match the sum of the PVs listed. We only care about the deleted PV disk, but now is your best opportunity to see if you have more serious problems.

WARNING: Device for PV w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05 not found or rejected by a filter.
  --- Logical volume ---
  LV Path                /dev/unidesk_vg/xfs_lv
  LV Name                xfs_lv
  VG Name                unidesk_vg
  LV UUID                Iechln-zjD7-W2gf-55mH-d1Ou-aqDa-QhZWQd
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-02-06 12:46:11 -0500
  LV Status              NOT available
  LV Size                399.99 GiB
  Current LE             102398
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto

WARNING: Device for PV w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05 not found or rejected by a filter.
  --- Volume group ---
  VG Name                unidesk_vg
  System ID
  Format                 lvm2
  Metadata Areas         1
  Metadata Sequence No   4
  VG Access              read/write
  VG Status              resizable
  MAX LV                 0
  Cur LV                 1
  Open LV                0
  Max PV                 0
  Cur PV                 2
  Act PV                 1
  VG Size                399.99 GiB
  PE Size                4.00 MiB
  Total PE               102398
  Alloc PE / Size        102398 / 399.99 GiB
  Free  PE / Size        0 / 0
  VG UUID                GU8esp-euNA-qMDO-UH9Z-V0LB-Xzvs-5YsUPG

The three important pieces of information to determine are the UUID of the missing PV, its size, and that its PV Name really is “unknown”. The PV Name normally tells you the device that the PV is currently found on. The device can change; PVs are known by their UUIDs, not their physical location. But it’s important to make sure that the PV you’re about to create is listed as being attached to “unknown”.
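If you prefer a compact view, pvs can print exactly these three pieces of information in one table; the missing PV shows up with a device name of [unknown]:

pvs -o pv_name,pv_uuid,pv_size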

Now attach a new, correctly-sized disk to your virtual machine. If you really can’t be sure for some reason, overestimate. Extra space at the end is unused, but coming up short is likely a disaster. Always remember that you have a snapshot.

Then get Linux to recognize the new SCSI disk you just attached by rebooting. Since there are no other processes running, a reboot is the safest way to get the new, blank disk available. Depending on your hypervisor, you may need to power-off and power back on to get the disk recognized.

Use “fdisk -l” to identify the device path for the new, empty disk. Note that PV disks have no partition table, so you need to make some intelligent guesses about which disk is which. Run “pvdisplay” again to make sure you’re not considering any disks that are already in use. In this case, /dev/xvdc does not appear in the pvdisplay output.

Disk /dev/xvdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/xvda: 32.2 GB, 32214351872 bytes, 62918656 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b3644

    Device Boot      Start        End     Blocks  Id System
/dev/xvda1   *        2048    1026047     512000  83 Linux
/dev/xvda2         1026048   41986047   20480000  83 Linux
/dev/xvda3        41986048   58763263    8388608  82 Linux swap / Solaris

Disk /dev/xvdb: 322.1 GB, 322122547200 bytes, 629145600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Then run this command (substituting in the correct disk instead of /dev/xvdc) to create the new PV with the old UUID.

pvcreate --uuid=w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05 /dev/xvdc --restorefile /etc/lvm/backup/unidesk_vg

Success looks like this:

Couldn't find device with uuid w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05.
WARNING: Device for PV w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05 not found or rejected by a filter.
Physical volume "/dev/xvdc" successfully created.

If you see a “Can’t find uuid” error like this, then you have mistyped the UUID. Double-check the ID and re-enter the command. (See if you can figure out where I substituted a capital i for a lower-case L.)

Couldn't find device with uuid w3F3ad-tmK8-DfPL-eWlN-aeNg-KLbW-ZkLr05.
Can't find uuid w3F3ad-tmK8-DfPL-eWIN-aeNg-KLbW-ZkLr05 in backup file /etc/lvm/backup/unidesk_vg
Run `pvcreate --help' for more information.

Once you have confirmation that the PV is created, reboot. The LVM should start up, and the system will be functional, including the management console. However, your layer disk data may be corrupted. Make an immediate backup of the system and then test to see how bad the damage might be.

Log in as root again, and perform the following to attempt to repair the XFS filesystem. This may or may not actually fix the problem with the large void in the middle of the volume, but it is important to get this done before you start using the repository.

# umount /mnt/repository

# xfs_repair /dev/unidesk_vg/xfs_lv
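If you would rather preview the damage before letting xfs_repair change anything, it also has a no-modify mode; running it first (while the filesystem is still unmounted) only reports the problems it finds:

# xfs_repair -n /dev/unidesk_vg/xfs_lv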

xfs_repair may produce a lot of output. I have no guidance for interpreting that output. Reboot after the repair.


7022921: XFS metadata corruption and invalid checksums on SAP HANA servers

This is only seen on newer SANs which support TRIM / DISCARD, with support for one or both of the WRITE_SAME and UNMAP functions.

There are several symptoms seen with the same underlying problem.

1. XFS metadata corruption.

kernel: Metadata corruption detected at xfs_agf_read_verify+0x78/0x140 [xfs], xfs_agf block 0x707a601
kernel: XFS (dm-15): Unmount and run xfs_repair
kernel: XFS (dm-15): First 64 bytes of corrupted metadata buffer:
kernel: c000002f73357380: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
kernel: c000002f73357390: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

2. SAP checksum error during backup caused by an underlying OS I/O issue

Seeing messages in indexserver trace files.

FaultProtectionImpl.cpp(01620) : NOTE: full crash dump will be written to /usr/sap/<SID>/HDB00/sapqsgdb01/trace/DB_<SID>/indexserver_sapqsgdb01.30003.crashdump.20180428-100239.091877.trc

Seeing quite a few corrupt .dmp files inside SAP working directories.

indexserver.perspage.20180428-092540580.0x000000000001b918L.0x0000700000087dc8P.0.corrupt.dmp

nameserver.perspage.20180404-152641740.0x000000000001b42eL.0x0000800000070050P.0.corrupt.dmp

HANA Studio shows a checksum error while trying to back up SYSTEMDB.

[447] backup could not be completed, Wrong checksum. Calculated xxxxxxxxx with checksum algorithm 3 (CRC32)

3. Multipath and SCSI errors seen after fstrim.service is run. This can overload the SAN with I/O requests, especially when multiple SLES 12 SP2 servers are running fstrim.service at the same time, all connected to the same back-end SAN.

kernel: sd 3:0:3:37: Cancelling outstanding commands.
kernel: sd 4:0:0:8: [sdac] tag#4 Command (42) failed: transaction cancelled (200:600) flags: 0 fcp_rsp: 0, resid=0, scsi_status: 0
multipathd[44370]: 360050768018000222000000000008efa: sdlf - tur checker timed out
multipathd[44370]: checker failed path 67:464 in map 360050768018000222000000000008efa
multipathd[44370]: 360050768018000222000000000008efa: remaining active paths: 6
kernel: device-mapper: multipath: Failing path 67:464.
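Not part of the original document, but on a systemd-based SLES 12 SP2 system you can check whether and when fstrim is being run with the standard systemd tooling:

systemctl status fstrim.timer
systemctl list-timers fstrim.timer
journalctl -u fstrim.service --since "7 days ago"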


How to resolve “Failed to probe partitions from virtual disk” error while importing an OS Layer

There may be other reasons you could get this error, such as the OS disk using EFI or GPT partition tables; Unidesk currently requires MBR partition tables.

But if you get this error:

Failed to attach the disk /mnt/repository/Unidesk/OsImport Disks/Windows 10.vhd. Failed to probe partitions from virtual disk

Collect logs from Citrix Enterprise Layer Manager (ELM) as described in https://support.citrix.com/article/CTX223723. When you export logs, the Citrix software creates a gzipped tar file (.tgz) containing the log files. On extracting the .tgz log file you will find camlogfile.
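If you prefer the command line, you can extract the bundle on any Linux box and search for the error directly (the archive name will vary):

tar -xzf <exported-logs>.tgz
grep -rn "FailedProbeVirtualDiskPartitions" .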


Open camlogfile in Notepad and see if you have a block like this:

2016-07-15 12:53:39,717 ERROR DefaultPool3 AttachDiskJobStep: Disk attach failed, detaching /dev/nbd255 Message: MessageId=FailedProbeVirtualDiskPartitions, DefaultTitle=, CategoryData={[ExternalToolFailure { Call = “/bin/nice”, Args = “-n-10 /sbin/partprobe /dev/nbd255”, Output = “”, Error = “Warning: Error fsyncing/closing /dev/nbd255: Input/output error

Warning: Error fsyncing/closing /dev/nbd255: Input/output error

Error: Can’t have a partition outside the disk!

Warning: Error fsyncing/closing /dev/nbd255: Input/output error”, ErrorCode = 1 }]}

The important bit is that “Can’t have a partition outside the disk!” line.

For reasons we don’t yet understand, sometimes the ELM’s way of mounting up a disk comes up about 1MB short. If you mount the same disk to a VM through a hypervisor, the size is correct, but in the ELM, the size is reported just a hair too small. The partition then ends just slightly after the end of the disk, which is illegal.
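If you want to confirm the mismatch yourself on a generic Linux machine (an illustrative sketch, not an official Citrix procedure; it assumes qemu-nbd is installed and /dev/nbd0 is free), attach the VHD and compare the partition table against the device size:

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 "/path/to/your-os-image.vhd"
blockdev --getsz /dev/nbd0
fdisk -l /dev/nbd0
qemu-nbd --disconnect /dev/nbd0

blockdev --getsz reports the device size in 512-byte sectors; if the partition’s “End” sector in the fdisk output is larger than that number, you have reproduced the “partition outside the disk” condition.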


Isi Get & Set

One of the lesser publicized but highly versatile tools in OneFS is the ‘isi get’ command line utility. It can often prove invaluable for generating a vast array of useful information about OneFS filesystem objects. In its most basic form, the command outputs the following information:

  • Protection policy
  • Protection level
  • Layout strategy
  • Write caching strategy
  • File name

For example:

# isi get /ifs/data/file2.txt

POLICY LEVEL PERFORMANCE COAL FILE

default 4+2/2 concurrency on file2.txt

Here’s what each of these categories represents:

POLICY: Indicates the requested protection for the object, in this case a text file. This policy field is displayed in one of three colors:

Requested Protection Policy   Description
Green                         Fully protected
Yellow                        Degraded protection under a mirroring policy
Red                           Under-protection using FEC parity protection



LEVEL: Displays the current actual on-disk protection of the object. This can be either FEC parity protection or mirroring. For example:

Protection Level   Description
+1n                Tolerate failure of 1 drive OR 1 node (Not Recommended)
+2d:1n             Tolerate failure of 2 drives OR 1 node
+2n                Tolerate failure of 2 drives OR 2 nodes
+3d:1n             Tolerate failure of 3 drives OR 1 node
+3d:1n1d           Tolerate failure of 3 drives OR 1 node AND 1 drive
+3n                Tolerate failure of 3 drives OR 3 nodes
+4d:1n             Tolerate failure of 4 drives OR 1 node
+4d:2n             Tolerate failure of 4 drives OR 2 nodes
+4n                Tolerate failure of 4 nodes
2x to 8x           Mirrored over 2 to 8 nodes, depending on configuration



PERFORMANCE: Indicates the on-disk layout strategy, for example:

Concurrency
  Description:     Optimizes for the current load on the cluster, featuring many simultaneous clients. Recommended for mixed workloads.
  On-disk layout:  Stripes data across the minimum number of drives required to achieve the configured data protection level.
  Caching:         Moderate prefetching

Streaming
  Description:     Optimizes for streaming of a single file, for example fast reading by a single client.
  On-disk layout:  Stripes data across a larger number of drives.
  Caching:         Aggressive prefetching

Random
  Description:     Optimizes for unpredictable access to a file. Performs almost no cache prefetching.
  On-disk layout:  Stripes data across the minimum number of drives required to achieve the configured data protection level.
  Caching:         Little to no prefetching



COAL: Indicates whether the Coalescer, OneFS’s NVRAM-based write cache, is enabled. The coalescer provides failure-safe buffering to ensure that writes are efficient and read-modify-write operations are avoided.

The isi get command also provides a number of additional options to generate more detailed information output. As such, the basic command syntax for isi get is as follows:

isi get {{[-a] [-d] [-g] [-s] [{-D | -DD | -DDC}] [-R] <path>}

| {[-g] [-s] [{-D | -DD | -DDC}] [-R] -L <lin>}}

Here’s the description for the various flags and options available for the command:

Command Option   Description
-a               Displays the hidden “.” and “..” entries of each directory.
-d               Displays the attributes of a directory instead of the contents.
-g               Displays detailed information, including snapshot governance lists.
-s               Displays the protection status using words instead of colors.
-D               Displays more detailed information.
-DD              Includes information about protection groups and security descriptor owners and groups.
-DDC             Includes cyclic redundancy check (CRC) information.
-L <LIN>         Displays information about the specified file or directory. Specify as a file or directory LIN.
-R               Displays information about the subdirectories and files of the specified directories.

The following command shows the detailed properties of a directory, /ifs/data (note that the output has been truncated slightly to aid readability):



# isi get -D data   (1)

POLICY W LEVEL PERFORMANCE COAL ENCODING FILE IADDRS

default 4x/2 concurrency (2) on N/A ./ <1,36,268734976:512>, <1,37,67406848:512>, <2,37,269256704:512>, <3,37,336369152:512> ct: 1459203780 rt: 0

*************************************************

* IFS inode: [ 1,36,268734976:512, 1,37,67406848:512, 2,37,269256704:512, 3,37,336369152:512 ]   (3)

*************************************************

* Inode Version: 6

* Dir Version: 2

* Inode Revision: 6

* Inode Mirror Count: 4

* Recovered Flag: 0

* Restripe State: 0

* Link Count: 3

* Size: 54

* Mode: 040777

* Flags: 0xe0

* Stubbed: False

* Physical Blocks: 0

* LIN: 1:0000:0004   (4)

* Logical Size: None

* Shadow refs: 0

* Do not dedupe: 0

* Last Modified: 1461091982.785802190

* Last Inode Change: 1461091982.785802190

* Create Time: 1459203780.720209076

* Rename Time: 0

* Write Caching: Enabled   (5)

* Parent Lin 2

* Parent Hash: 763857

* Snapshot IDs: None

* Last Paint ID: 47

* Domain IDs: None

* LIN needs repair: False

* Manually Manage:

* Access False

* Protection True

* Protection Policy: default

* Target Protection: 4x

* Disk pools: policy any pool group ID -> data target x410_136tb_1.6tb-ssd_256gb:32(32), metadata target x410_136tb_1.6tb-ssd_256gb:32(32)   (6)

* SSD Strategy: metadata-write   (7)

* SSD Status: complete

* Layout drive count: 0

* Access pattern: 0

* Data Width Device List:

* Meta Width Device List:

*

* File Data (78 bytes):

* Metatree Depth: 1

* Dynamic Attributes (40 bytes):

ATTRIBUTE OFFSET SIZE

New file attribute 0 23

Isilon flags v2 23 3

Disk pool policy ID 26 5

Last snapshot paint time 31 9

*************************************************

* NEW FILE ATTRIBUTES   (8)

* Access attributes: active

* Write Cache: on

* Access Pattern: concurrency

* At_r: 0

* Protection attributes: active

* Protection Policy: default

* Disk pools: policy any pool group ID

* SSD Strategy: metadata-write

*

*************************************************

Here is what some of these lines indicate:



(1) The OneFS command to display the file system properties of a directory or file.

(2) The directory’s data access pattern is set to concurrency.

(3) Inode on-disk locations.

(4) Primary LIN.

(5) Write caching (the Coalescer) is turned on.

(6) Indicates the disk pools that the data and metadata are targeted to.

(7) The SSD strategy is set to metadata-write.

(8) Files that are added to the directory are governed by these settings, most of which can be changed by applying a file pool policy to the directory.

From the WebUI, a subset of the ‘isi get -D’ output is also available from the OneFS File Explorer. This can be accessed by browsing to File System > File System Explorer and clicking on ‘View Property Details’ for the file system object of interest.



A question that is frequently asked is how to find where a file’s inodes live on the cluster. The ‘isi get -D’ command output makes this fairly straightforward to answer. Take the file /ifs/data/file1, for example:

# isi get -D /ifs/data/file1 | grep -i "IFS inode"

* IFS inode: [ 1,9,8388971520:512, 2,9,2934243840:512, 3,8,9568206336:512 ]



This shows the three inode locations for the file in the *,*,*:512 notation. Let’s take the first of these:



1,9,8388971520:512



From this, we can deduce the following:



  • The inode is on node 1, drive 9 (logical drive number).
  • The logical inode number is 8388971520.
  • It’s an inode block that’s 512 bytes in size (Note: OneFS data blocks are 8kB in size).



Another example of where isi get can be useful is in mapping between a file system object’s pathname and its LIN (logical inode number). This might be for translating a LIN returned by an audit logfile or job engine report into a valid filename, or finding an open file from vnodes output, etc.
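The path-to-LIN direction is a simple one-liner, using the same output fields shown earlier:

# isi get -DD /ifs/data/file1 | grep -i "LIN:"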



For example, say you wish to know which configuration file is being used by the cluster’s DNS service:



First, inspect the busy_vnodes output and filter for DNS:



# sysctl efs.bam.busy_vnodes | grep -i dns

vnode 0xfffff8031f28baa0 (lin 1:0066:0007) is fd 19 of pid 4812: isi_dnsiq_d

This, among other things, provides the LIN for the isi_dnsiq_d process. The output can be further refined to just the LIN address as such:



# sysctl efs.bam.busy_vnodes | grep -i dns | awk '{print $4}' | sed -E 's/)//'

1:0066:0007



This LIN address can then be fed into ‘isi get’ using the ‘-L’ flag, and a valid name and path for the file will be output:



# isi get -L `sysctl efs.bam.busy_vnodes | grep -i dns | grep -v "(lin 0)" | awk '{print $4}' | sed -E 's/)//'`

A valid path for LIN 0x100660007 is /ifs/.ifsvar/modules/flexnet/flx_config.xml



This confirms that the XML configuration file in use by isi_dnsiq_d is flx_config.xml.



So, to recap, the ‘isi get’ command provides information about an individual or set of file system objects.

OneFS also provides the complementary ‘isi set’ command, which allows configuration of OneFS-specific file attributes. This command works similarly to the UNIX ‘chmod’ command, but on OneFS-centric attributes such as protection, caching, and encoding. As with ‘isi get’, files can be specified by path or LIN. Here are some examples of the command in action.



For example, the following syntax will recursively configure a protection policy of +2d:1n on /ifs/data/testdir1 and its contents:



# isi set -R -p +2:1 /ifs/data/testdir1



To enable write caching coalescer on testdir1 and its contents, run:



# isi set -R -c on /ifs/data/testdir1



With the addition of the -n flag, no changes will actually be made. Instead, the list of files and directories that would have write caching enabled is returned:



# isi set -R -n -c on /ifs/data/testdir2



The following command will configure ISO-8859-1 filename encoding on testdir3 and contents:



# isi set -R -e ISO-8859-1 /ifs/data/testdir3



To configure streaming layout on the file ‘test1’, run:



# isi set -l streaming test1



The following syntax will set a metadata-write SSD strategy on testdir1 and its contents:



# isi set -R -s metadata-write /ifs/data/testdir1



To perform a file restripe operation on file2:



# isi set -r file2



To configure write caching on file3 via its LIN address, rather than file name:



# isi set -c on -L `isi get -DD file3 | grep -i LIN: | awk '{print $3}'`

1:0054:00f6

The following table describes in more detail the various flags and options available for the isi set command:

Command Option

Description

-f

Suppresses warnings on failures to change a file.

-F

Includes the /ifs/.ifsvar directory content and any of its subdirectories. Without -F, the /ifs/.ifsvar directory content and any of its subdirectories are skipped. This setting allows the specification of potentially dangerous, unsupported protection policies.

-L

Specifies file arguments by LIN instead of path.

-n

Displays the list of files that would be changed without taking any action.

-v

Displays each file as it is reached.

-r

Runs a restripe.

-R

Sets protection recursively on files.

-p <policy>

Specifies protection policies in the following forms: +M Where M is the number of node failures that can be tolerated without loss of data.

+M must be a number from 1 through 4.

+D:M Where D indicates the number of drive failures and M indicates number of node failures that can be tolerated without loss of data. D must be a number from 1 through 4 and M must be any value that divides into D evenly. For example, +2:2 and +4:2 are valid, but +1:2 and +3:2 are not.

Nx Where N is the number of independent mirrored copies of the data that will be stored. N must be a number, with 1 through 8 being valid choices.

-w <width>

Specifies the number of nodes across which a file is striped. Typically, w = N + M, but width can also mean the total number of nodes that are used. You can set a maximum width policy of 32, but the actual protection is still subject to the limitations on N and M.

-c {on | off}

Specifies whether write-caching (coalescing) is enabled.

-g <restripe goal>

Specifies the restripe goal. The following values are valid:

  • repair
  • reprotect
  • rebalance
  • retune

-e <encoding>

Specifies the encoding of the filename.

-d <@r drives>

Specifies the minimum number of drives that the file is spread across.

-a <value>

Specifies the file access pattern optimization setting, i.e. default, streaming, random, or custom.

-l <value>

Specifies the file layout optimization setting. This is equivalent to setting both the -a and -d flags. Values are concurrency, streaming, or random

–diskpool <id | name>

Sets the preferred diskpool for a file.

-A {on | off}

Specifies whether file access and protections settings should be managed manually.

-P {on | off}

Specifies whether the file inherits values from the applicable file pool policy.

-s <value>

Sets the SSD strategy for a file. The following values are valid:

avoid: Writes all associated file data and metadata to HDDs only. The data and metadata of the file are stored so that SSD storage is avoided, unless doing so would result in an out-of-space condition.

metadata: Writes both file data and metadata to HDDs. One mirror of the metadata for the file is on SSD storage if possible, but the strategy for data is to avoid SSD storage.

metadata-write: Writes file data to HDDs and metadata to SSDs, when available. All copies of metadata for the file are on SSD storage if possible, and the strategy for data is to avoid SSD storage.

data: Uses SSD node pools for both data and metadata. Both the metadata for the file and user data (one copy if using mirrored protection, all blocks if FEC) are on SSD storage if possible.

<file> {<path> | <lin>} Specifies a file by path or LIN.


Error: The Citrix Desktop Service was refused a connection to the delivery controller ” (IP Address ‘xxx.xxx.xxx.xxx’)

Try to determine which files are taking up disk space on the Identity Disk.

For access to the junction linked to the Identity Disk volume at C:\Program Files\Citrix\PvsVm\Service\PersistedData, you will need to run the command prompt under the context of the Local System account via the PsExec tool.

The PsExec tool is available for download at this location

http://docs.microsoft.com/en-us/sysinternals/downloads/psexec

Follow these steps to access the Identity disk volume on the VDA:

1. Open an elevated command prompt (Run as administrator)

2. Execute the command under the context of the Local System account via PsExec:

PSEXEC -i -s cmd.exe

This gives you access to the junction linked to the Identity Disk volume.

3. Navigate to the root of the junction “PersistedData”, and execute the following command:

DIR /O:S /S > C:\{location}\Out.txt

4. Open Out.txt using Notepad or another text editor

5. Check the files taking up the disk space.

6. Move the unwanted files to an alternate location or delete them

Note: You may see .gpf files, which shouldn’t be deleted. BrokerAgent.exe writes changed farm policies to %ProgramData%\Citrix\PvsAgent\LocallyPersistedData\BrokerAgentInfo\<GUID>.gpf. BrokerAgent.exe then triggers a policy evaluation via CitrixCseClient.dll.

Related:

  • No Related Posts


Re: The configuration of this client does not support browsing on Red Hat clients

That’s what I thought.

So when you choose the folders in Backup and Restore, you get an error, correct?

I am the same way – NONE of my RH servers qualify for FLR (see below for the limitations).

So I have to back up my Linux VMs as a VM once a week (so I can rebuild if needed), and daily I do backups as a physical with the client installed on the server.

File-level restore limitations

The following limitations apply to file-level restore as described in “Restoring specific folders or files” on page 86:

The following virtual disk configurations are not supported:

• Unformatted disks

• Dynamic disks

• GUID Partition Table (GPT) disks

• Ext4 filesystems

• FAT16 filesystems

• FAT32 filesystems

• Extended partitions (that is, any virtual disk with more than one partition, or when two or more virtual disks are mapped to a single partition)

• Encrypted partitions

• Compressed partitions

Symbolic links cannot be restored or browsed

You cannot restore more than 5,000 folders or files in the same restore operation

The following limitations apply to logical volumes managed by Logical Volume Manager (LVM):

• One Physical Volume (.vmdk) must be mapped to exactly one logical volume

• Only ext2 and ext3 formatting is supported
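Not from the guide, but a quick way to check a Red Hat VM against the filesystem and LVM limitations above from inside the guest (assuming the standard util-linux, parted and LVM tools are installed):

lsblk -f
lvs -o lv_name,devices
parted -l | grep -i "Partition Table"

lsblk -f shows each filesystem type (ext2/ext3 are eligible, ext4 and FAT are not), lvs -o lv_name,devices shows whether each logical volume maps to exactly one physical device, and parted -l reports the partition table type (msdos is fine, gpt is excluded).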
