How To Resize Citrix PVS vDisk Using DiskPart Utility

Note:

  • If you have subversions or differential disks of the PVS vDisk, merge them into a base disk, then delete the merged versions from the PVS console.
  • Before performing any activity, back up the old VHD or vDisk so that it can be restored if the disk becomes corrupted.
  • The commands in this post must be run while the VHD file is closed and not in use: it cannot be attached in Disk Management, and it cannot be attached to a running VM.

1. Change the vDisk to Private mode.

2. Open Command Prompt or PowerShell and launch the built-in DiskPart utility, which can resize a vDisk on any Windows machine:

C:\Windows\system32>diskpart

3. In DiskPart, enter “select vdisk file=<path to the vDisk VHD>” to select the vDisk.

4. Enter “list vdisk” to check that the vDisk has been selected.

5. Expand the vDisk using “expand vdisk maximum=XXX”, where XXX is the new size in MB.

6. In DiskPart, enter “attach vdisk” to attach the vDisk, then type “list disk” to confirm that it has been attached.

7. Type “list volume” to see the available volumes.

8. Type “select volume 3”, where 3 is the number of the volume you want to resize, and then enter “extend” to extend it.


9. The vDisk has now been extended. Type “list volume” to verify the status, then type “detach vdisk” to detach the vDisk.
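Putting the steps together, a complete DiskPart session looks roughly like the sketch below. The VHD path, new size, and volume number are placeholders; substitute your own values:

C:\Windows\system32>diskpart
DISKPART> select vdisk file="D:\Store\MyvDisk.vhd"
DISKPART> list vdisk
DISKPART> expand vdisk maximum=51200
DISKPART> attach vdisk
DISKPART> list disk
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend
DISKPART> list volume
DISKPART> detach vdisk
DISKPART> exit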


Exclusions – Wildcards


Hi,

It’s been ages since I last needed to look at this. Microsoft have recommendations for exclusions for DFSR:

<drive>:\System Volume Information\DFSR

$db_normal$
FileIDTable_2
SimilarityTable_2

<drive>:\System Volume Information\DFSR\database_<guid>

$db_dirty$
Dfsr.db
Fsr.chk
> *.log <
> Fsr*.jrs <
Tmp.edb

<drive>:\System Volume Information\DFSR\config

> *.xml <

<drive>:\<replicated folder>\DfsrPrivate\Staging*

> *.frx <

These are mostly easy. What about the ones inside the > < brackets? Also, what if the System Volume Information folder is on different drives on different servers?

Thanks



Manual Deletion of Data from NetWorker and Data Domain


One should be careful and pay attention before deleting any data manually from a data protection solution.

There may be a lot of information about the save sets that are eligible for deletion. However, we need to verify a few other items before we start removing them.

Begin by verifying that there are save sets that have expired or are eligible to be removed.

Use the following command for this: mminfo -avot -q ssrecycle,family=disk

If there are eligible save sets that have not been removed, you need to check a few items before starting to remove the data:



1. Make sure that none of the volumes with expired save sets are unmounted. In the NMC, select Devices | Devices. A mounted volume shows a name under the “Volume name” column; if that column is blank, the volume is not mounted.

2. Make sure that none of the volumes with expired save sets are read-only. In the NMC, select Media | Disk Volumes, right-click the volume, and select “Properties”. If needed, select “Change Mode” so that the volume no longer shows as ‘Read Only’.

3. Make sure that none of the volumes with expired save sets are disabled. In the NMC, select “Devices” and check each of the devices involved; they should show ‘Yes’ under the ‘Enabled’ column.

4. Make sure that the volume is not flagged as ‘scan needed’. If it is, the NetWorker commands for removing save sets will not delete any of the old save sets from the device until the flag is changed to “scan is not needed” or the volume is scanned using the “scanner -m <Device>” command.

Here is a quick way to check the flags and change them, to verify that no scans are needed on the volumes:

1. At a command prompt run the command: “mminfo -mV” . This will show the current status for all of the volumes.

2. If any volumes are marked as “scan needed”, do the following:

2a. Make sure that the volume or the device is unmounted.

2b. Change the status using: NMC | Media | Disk Volumes | right-click volume | select ‘Mark Scan Needed’ | select “Scan is NOT needed”.

2c. Remount the volume.

2d. Note: If the volume does not reset to “scan is not needed”, or if it returns to “scan needed” shortly after the flag is reset, it is recommended to clear the flag by running the “scanner -m <Device>” command.

3. If the volumes do not show that they need to be scanned then you should search first for any eligible save sets with the following command: mminfo -avot -q ssrecycle,family=disk

4. This will show all of the save sets that should be eligible for removal by the automated removal command from the disk storage. Make a note of the volume names in the first column as these will be needed for the next command to remove the save sets:

5. Run “nsrstage -C -V <volumeName>” to remove the old save sets that are eligible for removal.

If you have save sets that you want to remove but that have not expired, you will need to expire them first using the following steps:

1. Pick a date or range of dates for which the save sets will need to be changed for removal using this command:

mminfo -avot -q "savetime<DD/MM/YYYY,savetime>DD/MM/YYYY"

2. Once you have selected the time interval covering the save sets you want to make eligible for recycling, follow these steps:



First, gather the save set ID numbers:

i. mminfo -avot -q "savetime<DD/MM/YYYY,savetime>DD/MM/YYYY" -r ssid,cloneid -xc/ > Expire.txt *(see Note)

ii. Change the save set flags to indicate that the save set is recyclable with this command: “nsrmm -o recyclable -S <ssid>”.

Here is a Windows cmd loop that will run this command on every entry in the file: [for /f %A in (Expire.txt) do @nsrmm -o recyclable -y -S %A]

iii. Now run the “nsrstage -C -V <VolumeName>” command to clean up the volumes that now have save sets eligible for removal from NetWorker.
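Taken together, a minimal sketch of the expire-and-remove pass, run from a Windows command prompt on the NetWorker server, looks like this (the date range and volume name are placeholders):

mminfo -avot -q "savetime<DD/MM/YYYY,savetime>DD/MM/YYYY" -r ssid,cloneid -xc/ > Expire.txt
for /f %A in (Expire.txt) do @nsrmm -o recyclable -y -S %A
nsrstage -C -V <VolumeName>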

To change the retention times of save sets, you will need to edit the properties of the clients and/or the clones.

Please remember that when you change the retention time of a client it does not automatically change the retention times of any save sets that have already been backed up. They will need to be changed manually.

Please see https://support.emc.com/kb/196072 for more help with this process.

If you have a VBA in your system, you may want to cross-sync the devices so that all systems agree on the save sets held on each system.

This can be performed with the following command:

nsrim -X -S -h <VBA_hostname>

If the command “nsrstage -C -V <VolumeName>” or nsrim -X has not removed the save sets, then you should contact Technical Support for further help.

Now go to the Data Domain and run the cleanup. You may want to contact the Data Domain Support if you are not familiar with the cleanup.

All of the above steps will take time to clean up the system. You may want to do this in smaller sections to clean up space.

Please do not skip steps as this may cause other problems in the system later.

If you still have problems or questions after performing these steps and need help from Technical Support, make sure that you have:

1. Kept notes of the steps that you have taken and the results of each step.

2. Gathered the nsrget file for the NetWorker server.

3. If Data Domain is involved, gather the sfs_dump file. https://support.emc.com/kb/466061

Hope it helps


Error Windows API: There is not enough space on the disk. Error number 0xE00000070

When the vDisk store is located on a PVS failover cluster, an NFS share, or another type of remote share, it cannot be presented to PVS as a volume mount point inside a local drive, partition, or Windows folder.

A mounted volume will work for streaming and storing vDisks and vDisk versions, but it is not supported for merging vDisk versions. During the merge, PVS calls a Windows API function that checks the available free space. For a mounted volume, the free space reported is that of the volume where the mount-point shortcut exists, as opposed to the free space of the mounted volume itself. This causes PVS to read an incorrect free-space value for the mounted volume, and the merge process fails.
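As a quick illustration, you can ask Windows for the free space of the drive holding the mount-point folder and of the mount point itself, and compare the two. The store path below is hypothetical:

C:\>fsutil volume diskfree C:\
C:\>fsutil volume diskfree C:\Store\vDiskMount

fsutil is a standard Windows tool; whether the two numbers differ in your environment depends on how the volume is mounted.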


You cannot change the drive label on a layer or a published image

You cannot relabel the C: drive in App Layering, whether it’s in a published image or in a Packaging Machine. We use the drive labels to identify our disks, and we will actively block you from changing it. In a published machine, the drive label will be ULayeredImage. In a packaging machine, it will be UDiskBoot.

You can see exactly when we refuse the volume rename via a line like this in C:\Program Files\Unidesk\Uniservice\Log\Log0.txt:

[06/11/2018][14:32:20:786] Log Detail Data Length 1 Offset 0x52764 SET_VOLUME_INFORMATION irp denied Status 0xae00

Note, however, that we only intercept IRPs when our software is running. There are two circumstances where our drivers and software are not running: when you publish with Elastic Layering set to None, and when you edit an OS layer version. Editing the OS layer is just booting a modified clone of the original Gold VM’s boot disk. You edit that in place, finalize, and we copy the whole disk back. That way you can put things in the OS layer that might not play well with our filter driver. You can relabel the C: drive on the OS layer, and you can do it in a published image with Elastic Layering off. Everywhere else, we’ll intercept the IRP and simply deny it.

Now, in those circumstances, with our software not running, our software also doesn’t care if you relabel the disk. However, you should still not attempt to relabel the disk when editing the OS layer. It’s possible we would fail a Finalize operation on your modified OS layer, since we can’t find the right disk. But even if we allow you to finalize it, the change won’t carry into the published image. The volume labels in the published image are set by the ELM, and you must not change them, even if you figure out a way. We use those volume labels to find our disks, and if you manage to modify the label on a published image, we will have problems.
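As an illustration, attempting to relabel the system volume from an elevated prompt on a machine where our software is running should simply be refused; the label value below is hypothetical:

C:\>label C: NEWLABEL

The rename fails, and a SET_VOLUME_INFORMATION “irp denied” line like the one above appears in Log0.txt.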


Re: Best Practice for VTL Migration Networker

“By the “replicate”, I said that the data of the previous VTL were copied to the new DXi.”

Here, I am assuming that this is an *exact* replica or duplicate. Not a volume that was cloned by NetWorker.

If the volumes (i.e. virtual tapes) were replicated and “loaded” in the new VTL, then the volumes would be identical to the original copies, and NetWorker should therefore be able to load the tape, read the tape label, and use the tape. I equate this to removing a physical tape from one physical autochanger and loading it into another autochanger.

“For the “Label Design”, I wanted to know if NetWorker re-write labels.”

If the virtual tape volumes are identical to the original, then NetWorker would not see any difference and would not know that this is a replicated volume. So NetWorker should be able to load and mount this replicated volume.

Note: The NetWorker volume label is different from the volume bar code.

The bar code is an optical, machine-readable representation of data. This code is usually imprinted on a piece of paper and placed on the outside of the tape cartridge. The autochanger’s bar code reader is used to read this bar code label. The bar code can be changed by replacing the bar code label. For a virtual tape, you’ll need to look at the VTL manual.



The NetWorker volume label is information written in the first 64 KB of the tape itself. NetWorker uses it to identify the tape, and it contains information such as the volume name, volume ID, and pool. This information is read when the volume is mounted and then verified against the NetWorker media database. If the label matches what is in the media database, the mount completes and the volume is ready for use. This label can only be changed by relabeling the volume.
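If you want to check what the media database holds for a replicated volume after mounting it, an mminfo query works; the volume name below is hypothetical:

mminfo -q "volume=DXI.001" -r "volume,pool,ssid"

If the query returns the expected save sets, the on-tape label and the media database agree and the volume should be usable.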


Organizational Groups missing from Cubes


Does anyone know which cubes require processing to show the Organizational Groups I’ve created within the Computers cube? After my upgrade from 7.6, when I use either Organizational Group – Full Path or Organizational Group – Name, the only entries are the ones scraped from AD. The other view I have, which contains additional groups I want to filter on, is no longer there post-upgrade. What do I need to do to get these additional groups showing up in my cubes again for filtering?

My former saved views execute, since the Computers cube has been processed; however, the results are blank with the message “The current configuration has no query results”.



App Layering: How to troubleshoot the layer repository disk

The Enterprise Layer Manager is a Linux CentOS system. Initially, it contains a 30GB boot disk and a 300GB Layer Repository disk. Both are XFS filesystems.

The boot disk is a standard MBR-partitioned disk with 3 partitions: /dev/sda1 is the boot/initrd partition, /dev/sda2 is the 8GB swap partition, and /dev/sda3 is the root filesystem. Standard Linux disk troubleshooting techniques will work fine.

/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)

/dev/sda3 on / type xfs (rw,relatime,attr2,inode64,noquota)

The Layer Repository disk is more complicated. It is a logical volume in a Linux LVM volume group. We put it in LVM to allow us to expand it more conveniently. You can expand the repository disk, or add additional blank disks to the ELM, and expand the LVM and the filesystem on it live. The actual filesystem is mounted here:

/dev/mapper/unidesk_vg-xfs_lv on /mnt/repository type xfs (rw,relatime,attr2,inode64,noquota)
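As a minimal sketch of growing the repository live, assuming you have added a new blank disk to the ELM and it appears as /dev/sdc (the device name is hypothetical), the sequence would be:

pvcreate /dev/sdc
vgextend unidesk_vg /dev/sdc
lvextend -l +100%FREE /dev/unidesk_vg/xfs_lv
xfs_growfs /mnt/repository

pvcreate initializes the new disk for LVM, vgextend adds it to the unidesk_vg volume group, lvextend grows the xfs_lv logical volume into the new free extents, and xfs_growfs expands the mounted XFS filesystem to match.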

To get a list of all disks currently involved in the LVM, run the command “pvdisplay”. PV in this context stands for “physical volume”. It will report data like this for each physical volume currently used in the LVM. Note that /dev/sda, the boot disk, is not listed because it is not an LVM disk.

[root@localhost ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               unidesk_vg
  PV Size               300.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              76799
  Free PE               0
  Allocated PE          76799
  PV UUID               1LVMFX-od7x-eegR-RWrc-QoCa-7ZYh-UcaKk3

Each physical volume is assigned to a single Volume Group. You can get a list of volume groups with the command “vgdisplay”, like this:

[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               unidesk_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               300.00 GiB
  PE Size               4.00 MiB
  Total PE              76799
  Alloc PE / Size       76799 / 300.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               J5ALWe-AVK0-izRN-RvlJ-we7s-1vwf-trPECf

Each Volume Group can be divided into separate Logical Volumes. A logical volume is the equivalent of an attached virtual disk. You can partition it or just put a filesystem on the raw disk. You can see a list of logical volumes defined across all VGs with the “lvdisplay” command.

[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/unidesk_vg/xfs_lv
  LV Name                xfs_lv
  VG Name                unidesk_vg
  LV UUID                hLn1sC-nrXj-Cdxa-opQp-IhNF-yu4O-DFfK34
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-03-16 13:56:24 -0400
  LV Status              available
  # open                 1
  LV Size                300.00 GiB
  Current LE             76799
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Normally in App Layering, there is exactly one VG, called “unidesk_vg”, and exactly one LV, called “xfs_lv”, which is available at /dev/unidesk_vg/xfs_lv. This is in fact just a symlink to /dev/dm-0 (device-mapper drive 0), so any reference you see to /dev/dm-0 is actually referring to your LVM. There is a single XFS filesystem written to the raw disk of the Logical Volume, and it is mounted at /mnt/repository.
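You can confirm the symlink yourself; the timestamp shown here is illustrative:

[root@localhost ~]# ls -l /dev/unidesk_vg/xfs_lv
lrwxrwxrwx. 1 root root 7 Mar 16 13:56 /dev/unidesk_vg/xfs_lv -> ../dm-0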

Troubleshooting guidance for LVM is well beyond the scope of this article; however, all normal LVM functions are fully supported in our Linux image, and all normal LVM limitations and problems can apply here as well. If /mnt/repository does not mount correctly, consider investigating it as an LVM problem and search the web for troubleshooting guides.

The Layer Repository itself is an XFS filesystem, and the standard XFS tools are installed. Most likely, the only one you will ever need is xfs_repair. XFS is a journalled filesystem with remarkable resilience, but when you do have an error, you will need to run “xfs_repair /dev/dm-0”. This is complicated by the fact that the Unidesk services will always maintain an open file-handle within that mounted filesystem, and xfs_repair cannot work on a mounted filesystem.

[root@localhost ~]# xfs_repair /dev/dm-0
xfs_repair: /dev/dm-0 contains a mounted filesystem
xfs_repair: /dev/dm-0 contains a mounted and writable filesystem

What you might have to do is switch to single-user mode by running “init S” while logged in as root, so that the App Layering services are not running, then unmount the volume and run the repair on it. Thus:

[root@localhost ~]# init S
(enter the root password for access)
[root@localhost ~]# umount /mnt/repository
[root@localhost ~]# xfs_repair /dev/dm-0
[root@localhost ~]# reboot


Error: The Citrix Desktop Service was refused a connection to the delivery controller ” (IP Address ‘xxx.xxx.xxx.xxx’)

Try to determine which files are taking up disk space on the Identity Disk.

To access the junction linked to the Identity Disk volume at C:\Program Files\Citrix\PvsVm\Service\PersistedData, you will need to run the command prompt under the context of the Local System account via the PsExec tool.

The PsExec tool is available for download at this location

http://docs.microsoft.com/en-us/sysinternals/downloads/psexec

Follow these steps to access the Identity disk volume on the VDA:

1. Open an elevated command prompt (Run as administrator).

2. Execute the command under the context of the Local System account via PsExec:

PSEXEC -i -s cmd.exe

This is to access the junction linked to the Identity Disk volume.

3. Navigate to the root of the junction “PersistedData”, and execute the following command:

DIR /O:S /S > C:\{location}\Out.txt

4. Open Out.txt using Notepad or another text editor.

5. Check which files are taking up the disk space.

6. Move the unwanted files to an alternate location or delete them

Note: You may see .gpf files, which shouldn’t be deleted. BrokerAgent.exe writes changed farm policies to %ProgramData%\Citrix\PvsAgent\LocallyPersistedData\BrokerAgentInfo\<GUID>.gpf, and then triggers a policy evaluation via CitrixCseClient.dll.
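Putting steps 1 through 3 together, a minimal transcript might look like this; the output location C:\Temp is a placeholder:

C:\>psexec -i -s cmd.exe
C:\Windows\system32>cd /d "C:\Program Files\Citrix\PvsVm\Service\PersistedData"
C:\Program Files\Citrix\PvsVm\Service\PersistedData>dir /O:S /S > C:\Temp\Out.txt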
