How To Resize a Citrix PVS vDisk Using the DiskPart Utility

Note:

  • If you have subversions or differential disks of the PVS vDisk, merge them to a base disk, then delete the merged versions from the PVS console.
  • Before performing any activity, back up the old VHD or vDisk so that it can be restored if the disk becomes corrupted.
  • The commands in this post must be run while the VHD file is closed and not in use. It cannot be attached with Disk Management and it cannot be attached to a running VM.

1. Change the vDisk to Private mode.

2. Open Command Prompt or PowerShell on any Windows machine and launch the built-in DiskPart utility:

C:\Windows\system32>diskpart

3. In DiskPart, enter "select vdisk file=<path to the .vhd>" to select the vDisk.

DIsk1

4. Enter "list vdisk" to check whether the vDisk has been added.

disk2

5. Expand the vDisk with the command "expand vdisk maximum=XXX", where XXX is the new size in MB.

disk3

6. In DiskPart, enter "attach vdisk" to attach the vDisk. Type "list disk" to check that it has been attached.

Disk4

7. Type "list volume" to see the available volumes.

DIsk5

8. Type "select volume 3" (use the volume number of the vDisk you want to resize) and then enter "extend" to extend the volume.

Note: Substitute the volume number that corresponds to the vDisk you want to resize.

Disk6


9. The vDisk has now been extended. Type "list volume" to verify the status. After checking the status, type "detach vdisk" to detach the vDisk. A consolidated example of the full DiskPart session is shown after the screenshots below.

Disk7

disk8
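For reference, here is the whole procedure as a single DiskPart session. This is only an illustrative sketch: the file path, the new size and the volume number are placeholders that must be replaced with your own values, and the parenthetical notes are annotations, not part of the commands.

C:\Windows\system32>diskpart

DISKPART> select vdisk file="D:\Store\MyvDisk.vhd"   (path to your vDisk)
DISKPART> list vdisk                                 (confirm the vDisk is selected)
DISKPART> expand vdisk maximum=61440                 (new size in MB, e.g. 60 GB)
DISKPART> attach vdisk
DISKPART> list disk                                  (confirm the vDisk is attached)
DISKPART> list volume                                (note the volume number of the vDisk)
DISKPART> select volume 3                            (use your own volume number)
DISKPART> extend
DISKPART> list volume                                (verify the new size)
DISKPART> detach vdisk
DISKPART> exit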

Related:

  • No Related Posts

XenServer 7.1 all storage repositories filling up with duplicated MCS base disks

Storage space is insufficient; new base disks/diff disks fail to be created.

1. Symptom

User-added image
2. Stop the "DisusedImageCleanup" task from the XenDesktop side:

1) Clean up the data in the XenDesktop database table [DesktopUpdateManagerSchema].[PendingImageDeletes].

2) In XenDesktop PowerShell, run Get-ProvTask -Active $true to find the active task, then run Stop-ProvTask -TaskId XXXXXXXXX to stop it.
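As an illustration only, step 1) amounts to clearing the pending-delete rows in the Site database. The statement below is an assumption based on the table name given above, not an official Citrix procedure; back up the database before running anything against it:

DELETE FROM [DesktopUpdateManagerSchema].[PendingImageDeletes];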

3. Identify the real base disks of each machine catalog:

Check the XenDesktop database table [DesktopUpdateManagerSchema].[ProvisioningSchemeVMImageLocation].

Note: DiskID is the base disk UUID in XenServer, and StorageID is the storage repository UUID.
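For example, a query along the following lines lists the DiskID and StorageID recorded for each provisioning scheme (illustrative only; it assumes just the two columns named in the note above):

SELECT DiskID, StorageID FROM [DesktopUpdateManagerSchema].[ProvisioningSchemeVMImageLocation];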

4. Delete the orphaned base disks via the CLI:

# xe vdi-list name-label="XXXX-basedisk" params=uuid,sr-uuid

User-added image

# vhd-util scan -f -m "VHD-*" -l VG_XenStorage-<UUID of SR> -p | grep <basedisk parent VHD uuid>

This checks the VHD chain; make sure no VM is still using the base disk before deleting it.


# lvremove /dev/VG_XenStorage-<UUID of SR>/VHD-<orphan basedisk UUID> --config global{metadata_read_only=0}

This deletes the orphan base disk.

Note: For VMs still using an old base disk, restart them from Citrix Studio to update their diff disks.

Related:

  • No Related Posts

Provisioning Server 7.6: Error- “vDisk is locked. 0xffff8017”

Scenario 1: Basic Troubleshooting

To resolve this issue, complete the following procedure:

  1. Shutdown all target devices streaming the vDisk.
  2. Refresh the console.
  3. If any target devices still show as connected to the vDisk, mark them down from the PVS console.
  4. Right-click the vDisk and choose Manage Locks.
  5. Make sure all devices are selected and click Remove Locks.
  6. Check the console on any other PVS servers in the farm and ensure that they do not show the vDisk as locked.

Scenario 2: Core Troubleshooting

User-added image

As a fix, disconnect everyone who is still connected and running from that vDisk. This means there is an outage involved in this procedure for users of that image.

1. Users that are still connected should save their work and log off. This is going to be a complete outage for anyone that uses that particular image.

2. Go to your DDC and put the Desktop Group in maintenance mode. This will prevent the DDC from attempting to start up VMs and potentially lock up the vDisk while working on it. Then Force Shutdown on all the VMs. Verify in hypervisor console they are all shutdown.

3. RDP to a single PVS server and, in the PVS Console, go to the Store and right-click your vDisk. Verify there is no gold lock next to the vDisk; if there is, clear all the locks. Then click "Unassign from Selected Device(s)…"

User-added image

4. Make sure all your VMs are checked and click Unassign

User-added image
5. If you have maintenance versions, you should preferably merge them at this point. Use the “Merged Base – Last base + all updates from that base” option so you get a nice single .vhd file.

User-added image

6. Once you have verified you have a single .vhd file you can rename it if you want. Copy that .vhd file and the associated .pvp file to all your other PVS stores. Get all your PVS servers in sync and check the replication status. They should all have blue dots:

User-added image

7. Now go back to your Store view and right click on your vDisk. You should now see an option to Delete. Click it.

User-added image

8. MAKE SURE you DO NOT check the "Delete the associated VHD files" check box. Just click "Yes". This only deletes the vDisk from the PVS database; it will not touch anything in your Store. Do this on all your PVS servers.

User-added image

9. Now right click on “Store” and click “Add or Import Existing vDisk…”

User-added image

10. Click Search to search your Store for vDisks. Check only the new .vhd you created in steps 5 and 6 above, then click Add once it is no longer grayed out.

User-added image

11. The vDisk is always imported in Private mode. Switch it to Standard mode. Also check the cache type, the Enable Active Directory machine account password management setting, and KMS on the Microsoft Volume Licensing tab, because these settings will likely not carry over.

12. Now go to your Device Collection. In this example, we have 20+ devices that need this particular vDisk golden image. Rather than modify each one, we will set the vDisk on the first VM only.

User-added image

13. Now right click that VM you just set and click “Copy Device Properties…”

User-added image

14. Hit “Clear All”, then check “vDisk Assignment” only, then hit Copy.

User-added image

15. Now just highlight all your other VMs, right click in the highlighted area, and click Paste. Instantly all your VMs will be assigned that vDisk.

User-added image

16. Now boot up a couple of VMs and verify the "vDisk is locked. 0xffff8017" error is gone. Then disable maintenance mode on your DDC and you're back in business.

17. You can delete all those old .vhd, .avhd, and .pvp files from old versions of your image if you like or archive them somewhere.

Related:

  • No Related Posts

Managing Provisioning Services VDisk Versions with VhdUtil Tool

The VhdUtil commands should be run from a command prompt in the C:\Program Files\Citrix\Provisioning Services directory (an elevated prompt is not required; the commands cannot be run in PowerShell). You may also copy the VhdUtil utility to whatever folder you would like to run the command from. All VhdUtil commands are case sensitive.


VhdMerge

Used to perform a manual full merge to a new base image outside of the PVS console.

Example command to merge the disk chain from "MyVDisk.3.avhd" to base "MyVDisk.vhd" into a new disk "E:\vDiskStore\MyVDiskMerged.vhd":

rundll32 VhdUtil.dll,VhdMerge E:\vDiskStore\MyVDisk.3.avhd MyVDisk.vhd E:\vDiskStore\MyVDiskMerged.vhd > E:\vDiskStore\MergeLog.txt

User-added image

The steps of the merging process are logged to the file E:\vDiskStore\MergeLog.txt.

User-added image

This process does not delete the old vDisk or its versions from the store or the PVS console. They will have to be manually deleted.

Import the new vDisk to the PVS console.


VhdRename

Used to rename vDisk VHD chains that have existing versions. For renaming specifically, make a copy of the original vDisk and rename the copy. Renaming is a critical action, as it modifies the disk chain. We found that if the network does not have a time service, slow remote storage might produce timestamps that are inconsistent with the hosting server.

Example command to rename the entire disk chain of vDisk "MyVDisk" in the E:\vDiskStore folder (e.g. from version 3 down to the base) to "MyNewVDisk":

rundll32 VhdUtil.dll,VhdRename E:\vDiskStore\MyVDisk.3.avhd MyNewVDisk > E:\vDiskStore\RenameLog.txt

User-added image

The old vDisk chain information for each version and the renaming process are logged to the file E:\vDiskStore\RenameLog.txt.

User-added image

User-added image

You can then run the VhdDump command to get the information about the renamed vDisk.
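The exact VhdDump arguments are not shown in this article. Assuming it follows the same pattern as the other VhdUtil commands (a path to the top of the chain plus an optional output redirect), a hypothetical call might look like this:

rundll32 VhdUtil.dll,VhdDump E:\vDiskStore\MyNewVDisk.3.avhd > E:\vDiskStore\DumpLog.txt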

A manifest (XML) file with the new vDisk name will be created in the store.

Import the new vDisk into the PVS console, unassign the old vDisk from all targets and assign the new vDisk to them.

Delete the old vDisk but leave the .vhd files.

User-added image

If you try to delete the associated .vhd files, the error “Management Interface: vDisk properties were lost” will be displayed.

User-added image

Related:

  • No Related Posts

How to Create Machine Catalog using MCS in Azure Resource Manager

Pre-requisites

  • Access to the XenApp and XenDesktop Service of Citrix Cloud.
  • An Azure Subscription.
  • An Azure Active Directory (Azure AD) user account in the directory associated with your subscription, which is also co-administrator of the subscription.
  • An ARM virtual network and subnet in your preferred region with connectivity to an AD controller and Citrix Cloud Connector.
  • “Microsoft Azure” host connection
  • To create an MCS machine catalog, XenDesktop requires a master image that will be used as a template for all the machines in that catalog.

User-added image

Creating Master Image from Virtual Machine deployed in Azure Resource Manager

Create a virtual machine (VM) in Azure using the Azure Resource Manager gallery image with either the Server OS or Desktop OS (based on whether you want to create Server OS catalog or Desktop OS catalog).

Refer to Citrix Documentation – install Citrix VDA software on the VM for more information.

Install on the VM the applications that you want to publish using this master image. Shut down the VM from the Azure portal once you have finished installing the applications, and make sure that the power status for the VM in the Azure portal is Stopped (deallocated).

User-added image
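If you prefer to deallocate the VM and verify its power state from PowerShell instead of the portal, a minimal sketch using the AzureRM module is shown below. The module choice, resource group name and VM name are assumptions for illustration only.

# Stop and deallocate the master image VM (names are placeholders)
Stop-AzureRmVM -ResourceGroupName "MasterImageRG" -Name "MasterImageVM" -Force

# Confirm the power state shows "VM deallocated"
(Get-AzureRmVM -ResourceGroupName "MasterImageRG" -Name "MasterImageVM" -Status).Statuses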

When creating an MCS catalog, we need to use the .vhd file that represents the OS disk associated with this VM as the master image for the catalog. If you have experience with the Microsoft Azure Classic connection type in XenDesktop, you would have captured a specialized image of the VM at this stage. For the Microsoft Azure (Resource Manager) connection type, however, you do not capture the VM image; you only shut down the VM and use the VHD associated with it as the master image.

Create MCS Catalog

This information is a supplement to the guidance in the Create a Machine Catalog article. After creating master image, you are all set to create MCS catalog. Please follow the steps as described below to create MCS catalog.

  1. Launch the Studio from your Citrix Cloud client portal and navigate to Machine Catalogs in the left hand pane.

  2. Right click Machine Catalogs and click on Create Machine Catalog to launch the machine creation wizard.

  3. Click Next on the Introduction page.

    User-added image

  4. On the Operating System page, select Server OS or Desktop OS depending on the type of catalog you want to create, and click Next.

    User-added image

  5. On the Machine Management page, select Citrix Machine Creation Services (MCS) as the deployment technology, select the Microsoft Azure hosting resource, and click Next.

    User-added image

Master Image Selection – This page provides a tree view that you can navigate to select the master image VHD. At the topmost level are all the resource groups in your subscription, except those that represent MCS catalogs created by XenDesktop. When you select and expand a particular resource group, it shows a list of all the storage accounts in that resource group. If there are no storage accounts in that resource group, there will not be any child items under it. If you have manually created a number of resource groups and storage accounts to host your manually created VMs, the master image page shows all of those resource groups, storage accounts, containers and VHDs, even though not all of those VHDs are master images that you want to use for provisioning. Select the storage account that has your master image. When you expand the storage account, it shows a list of the containers inside it. Expand the container that holds the master image VHD and select the VHD that you want to use as the master image for the catalog.

User-added image

You need to know the VHD path in order to select it. If you have stood up a VM in Azure and prepared it to be used as a master image and you want to know the VHD path, follow the steps below:

  1. Select the resource group that has your master image VM.

  2. Select the master image VM and click Settings

  3. Click Disks, then click OS Disks and copy the disk path.

    User-added image
    User-added image

  4. The OS disk path is structured as https://<storage account name>.blob.core.windows.net/<container name>/<image name>.vhd

  5. You can use the disk path obtained in the step above to navigate the tree view to select image.

Note: If you don't shut down the master image VM and you select the corresponding VHD to create a catalog, the catalog creation will fail. So if you are selecting a VHD that is attached to a running VM instance, make sure the VM is in the Stopped (deallocated) state.

  1. Storage type selection – XenDesktop supports Locally Redundant Standard or Premium storage for provisioning VMs in Azure. Your master image VHD can be hosted in any type of storage account, but for the VMs to be provisioned in Azure, XenDesktop will create new storage accounts based on the storage type you selected.

     User-added image

  2. XenDesktop provisions a maximum of 40 VMs in a single storage account due to IOPS limitations in Azure. For example, if you create a 100-VM catalog, you will find 3 storage accounts created, with 40, 40 and 20 VMs respectively.

  3. VM instance size selection – XenDesktop shows only those VM instance sizes that are supported for the storage type selected in the previous step. Enter the number of VMs, select the VM instance size of your choice, and click Next.

    User-added image

  4. Network Card Selection – Select network card and the associated network. Only one network card is supported.

    User-added image

  5. Select resource location domain and enter machine naming scheme.

    User-added image

  6. Enter credentials for your resource location Active Directory.

    User-added image

  7. Review the catalog summary, enter the catalog name and click Finish to start provisioning.

    User-added image

  8. Once the provisioning is complete, you will find a new resource group created in your Azure subscription, which hosts all the VMs, storage accounts and network adapters for the catalog you provisioned. The default power state for the VMs after provisioning is Stopped (deallocated).

    User-added image

Once the provisioning is complete, you will find a new resource group created in your subscription that contains the VM RDSDesk-01 (as per the naming scheme we provided), the NIC corresponding to that VM, and a storage account that XenDesktop created to host the OS disk and the identity disk for the VM. The VM is hosted on the same network as the hosting resource selected during catalog creation, and the default power state of the VM is Stopped (deallocated).

The resource group created by XenDesktop during MCS provisioning has the following naming convention:

citrix-xd-<ProvisioningSchemeUid>-<xxxxx>

To find out which resource group in the Azure portal corresponds to the catalog you created from studio, follow the steps below.

  1. Connect to your XenApp and XenDesktop service using the Remote PowerShell SDK. Please visit this link to find out how to interact with your Citrix Cloud environment using the Remote PowerShell SDK.
  2. Run command Get-ProvScheme -ProvisioningSchemeName <Catalog Name>
  3. Note down the ‘ProvisioningSchemeUid’ from the output of the above command.
  4. Go to the Azure portal and search for the resource group name that contains ‘ProvisioningSchemeUid’ you obtained in step 3.
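As a sketch, steps 2 to 4 can be combined in the Remote PowerShell SDK session; the catalog name below is a placeholder.

# Get the provisioning scheme for the catalog and note its ProvisioningSchemeUid
$ps = Get-ProvScheme -ProvisioningSchemeName "MyAzureCatalog"
$ps.ProvisioningSchemeUid

# The matching Azure resource group name begins with citrix-xd-<ProvisioningSchemeUid>-
"citrix-xd-$($ps.ProvisioningSchemeUid)-"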
  • Note:

    As a best practice, you should always create a copy of your master image and use the copied image as input to the provisioning process. In the future, if you want to update the catalog, you can start the master image VM, make the necessary changes, shut it down, and again create a copy of the image, which becomes your update image. This lets you use the master image VM to create multiple image updates.

    Remember to shut down the master image VM from the Azure portal before starting to create the catalog. The master image is copied into the catalog's storage account once provisioning starts, so make sure it is not in use by any VM; otherwise the image copy will fail and, eventually, provisioning will fail.

  • Make sure you have sufficient cores and NIC quota in your subscription to provision VMs; these are the two quotas you are most likely to run out of. You may not be able to check your subscription quota limits.
  • If your master image VM is provisioned in a Premium storage account, then just shutting down the VM from the portal isn't enough; you also need to detach the disk from the VM to use it as a master image for provisioning. In Azure Resource Manager you cannot detach the OS disk while the VM still exists, so you need to delete the VM from the portal. This deletes only the VM and keeps the OS disk in the storage account. The NIC corresponding to the VM also needs to be deleted separately.
User-added image

Related:

Understanding Write Cache in Provisioning Services Server

This article provides information about write cache usage in a Citrix Provisioning, formerly Provisioning Services (PVS), Server.

Write Cache in Provisioning Services Server

In PVS, the term "write cache" is used to describe all the cache modes. The write cache includes data written by the target device. When the vDisk is running in a caching mode, data written by the target is not written back to the base vDisk; instead, it is written to a write cache file in one of the locations described below.

When the vDisk mode is Private/Maintenance mode, all data is written back to the vDisk file on the PVS server. When a target device boots a vDisk in Standard (shared) mode, the write cache settings are checked to determine the cache location. Regardless of the cache type, the data written to the write cache is deleted on boot, so a rebooted or newly started target has a clean cache containing nothing from previous sessions.

If the PVS target is set to Cache on device RAM with overflow on hard disk or Cache on device hard disk, and the PVS target software either does not find an appropriate hard disk partition or the partition is not formatted with NTFS, the target will fail over to Cache on server. By default, the PVS target software redirects the system page file to the same disk as the write cache, so pagefile.sys allocates space on the cache drive unless it is manually redirected to a separate volume.

For RAM cache without a local disk, you should consider setting the system page file to zero because all writes, including system page file writes, will go to the RAM cache unless redirected manually. PVS does not redirect the page file in the case of RAM cache.



Cache on device Hard Disk

Requirements

  • Local HD in every device using the vDisk.
  • The local HD must contain a basic volume pre-formatted with a Windows NTFS file system with at least 512MB of free space.

The cache on local HD is stored in a file called .vdiskcache on a secondary local hard drive. It gets created as an invisible file in the root folder of the secondary local HD. The cache file size grows, as needed, but never gets larger than the original vDisk, and frequently not larger than the free space on the original vDisk. It is slower than RAM cache or RAM Cache with overflow to local hard disk, but faster than server cache and works in an HA environment. Citrix recommends that you do not use this cache type because of incompatibilities with Microsoft Windows ASLR which could cause intermittent crashes and stability issues. This cache is being replaced by RAM Cache with overflow to the hard drive.

Cache in device RAM

Requirement

  • An appropriate amount of physical memory on the machine.

The cache is stored in client RAM. The maximum size of the cache is fixed by a setting in the vDisk properties screen. RAM cache is faster than other cache types and works in an HA environment. The RAM is allocated at boot and never changes. The RAM allocated can’t be used by the OS. If the workload has exhausted the RAM cache size, the system may become unusable and even crash. It is important to pre-calculate workload requirements and set the appropriate RAM size. Cache in device RAM does not require a local hard drive.

Cache on device RAM with overflow on Hard Disk

Requirement

  • Provisioning Service 7.1 hotfix 2 or later.
  • Local HD in every target device using the vDisk.
  • The local HD must contain Basic Volume pre-formatted with a Windows NTFS file system with at least 512 MB of free space. By default, Citrix sets this to 6 GB but recommends 10 GB or larger depending on workload.
  • The default RAM is 64 MB RAM, Citrix recommends at least 256 MB of RAM for a Desktop OS and 1 GB for Server OS if RAM cache is being used.
  • If you decide not to use RAM cache you may set it to 0 and only the local hard disk will be used to cache.

Cache on device RAM with overflow on hard disk is the newest of the write cache types. Citrix recommends using this cache type for PVS; it combines the performance of RAM with the stability of the hard disk cache. The cache uses non-paged pool memory for the best performance. When RAM utilization reaches its threshold, the oldest RAM cache data is written to the local hard drive. The local hard disk cache uses a file that it creates called vdiskdif.vhdx.

Things to note about this cache type:

  • This write cache type is only available for Windows 7/2008 R2 and later.
  • This cache type addresses interoperability issues with Microsoft Windows ASLR.



Cache on Server

Requirements

  • Enough space allocated to the location where the server cache will be stored.

Server cache is stored in a file on the server, or on a share, SAN, or other location. The file size grows as needed, but never gets larger than the original vDisk, and frequently not larger than the free space on the original vDisk. It is slower than RAM cache because all reads/writes have to go to the server and be read from a file. The cache gets deleted when the device reboots; that is, on every boot the device reverts to the base image, and changes remain only during a single boot session. Server cache works in an HA environment if all server cache locations resolve to the same physical storage location. This cache type is not recommended for a production environment.

Additional Resources

Selecting the Write Cache Destination for Standard vDisk Images

Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow Feature

Related:

Failed to mount Elastic Layers, “A virtual disk support provider for the specified file was not found.”

There may be multiple causes for this. The error really just means that Windows is being blocked by policy from allowing the disk to be attached. One identified cause is a specific GPO:

Computer Configuration/Administrative Templates/System/Device Installation/Device Installation Restrictions – Prevent installation of devices not described by other policy settings.

If you disable that setting, you should be able to mount the VHD manually using Disk Management or DiskPart, and Elastic Layers should start attaching properly at logon. Be careful, of course, to find where that GPO is set. If it's in your domain policies, the setting might also be captured in the Platform Layer and not get cleared by the updated GPO before a user logs in.
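As a quick test after changing the policy, you can try attaching a layer VHD by hand with DiskPart. The UNC path below is a placeholder for your own Elastic Layer share, and the readonly flag is used so the test does not modify the shared layer file:

C:\Windows\system32>diskpart

DISKPART> select vdisk file="\\FileServer\ElasticLayers\SomeLayer.vhd"
DISKPART> attach vdisk readonly
DISKPART> list volume          (the layer volume should now appear)
DISKPART> detach vdisk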

Related:

Hotfix XS71ECU1018 – For XenServer 7.1 Cumulative Update 1

Who Should Install This Hotfix?

This is a hotfix for customers running XenServer 7.1 Cumulative Update 1.

Note: This hotfix is available only to customers on the Customer Success Services program.

Information About this Hotfix

Component Details

Prerequisite: None
Post-update tasks*: Restart the XAPI Toolstack
Content live patchable**: No
Revision History: Published on July 6, 2018

** Available to Enterprise Customers.

Issues Resolved In This Hotfix

This hotfix resolves the following issues:

  • Copying a virtual disk between SRs using unbuffered I/O can cause a performance regression due to I/O requests getting split into 64 KiB and 512 byte writes.
  • On HA-enabled pools, when a task is initiated after a XenServer host has failed, VMs running on the host can take longer (about 10 minutes) to restart. This issue occurs when a task is assigned to the host after it has failed, but before XAPI is aware of the host failure. In such cases, the task doesn’t get cancelled even when XAPI is notified about the failure, causing delays in restarting the VMs.
  • Changing the name or description of a snapshot Virtual Disk Image (VDI) can cause a VM snapshot to lose track of its virtual disks (VDIs). This makes it impossible to revert the VM to the snapshot.
  • When you add a host to a pool as a pool member, the performance alerts stop working. If this happens you can use the xe CLI to configure the performance alerts on the pool member after it has joined the pool. For more information, see the Citrix XenServer Administrator’s Guide.
  • xe edit-bootloader script is not consistent when handling partitions with UUIDs that end with a letter. The bootloader script will be successful if the partition UUID ends with a number. However, if the partition UUID ends with a letter, the script may not locate the bootloader config.
  • After installing a XenServer hotfix that recognizes an extended set of CPU features, XenCenter can incorrectly raise an alert that says some CPU features have disappeared.
  • In XenServer deployments with multiple VLANs containing different MTU values, restarting a VM can reset all the MTU values to the lowest value present on the network bridge.
  • When you copy a Virtual Disk Image (VDI) into a thinly provisioned SR (NFS, EXT or SMB) that does not have enough space for the VDI, the copy fails and does not report an error.

This hotfix also includes the following previously released hotfix:

Installing the Hotfix

Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to apply this hotfix. As with any software update, back up your data before applying this update. Citrix recommends updating all hosts within a pool sequentially. Upgrading of hosts should be scheduled to minimize the amount of time the pool runs in a “mixed state” where some hosts are upgraded and some are not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.

Note: The attachment to this article is a zip file. It contains the hotfix update package only. Click the following link to download the source code for any modified open source components XS71ECU1018-sources.iso. The source code is not necessary for hotfix installation: it is provided to fulfill licensing obligations.

Installing the Hotfix by using XenCenter

Choose an Installation Mechanism

There are three mechanisms to install a hotfix:

  1. Automated Updates
  2. Download update from Citrix
  3. Select update or Supplemental pack from disk

The Automated Updates feature is available for XenServer Enterprise Edition customers, or to those who have access to XenServer through their XenApp/XenDesktop entitlement. For information about installing a hotfix using the Automated Updates feature, see the section Applying Automated Updates in the XenServer Installation Guide.

For information about installing a hotfix using the Download update from Citrix option, see the section Applying an Update to a Pool in the XenServer Installation Guide.

The following section contains instructions on option (3) installing a hotfix that you have downloaded to disk:

  1. Download the hotfix to a known location on a computer that has XenCenter installed.
  2. Unzip the hotfix zip file and extract the .iso file
  3. In XenCenter, on the Tools menu, select Install Update. This displays the Install Update wizard.
  4. Read the information displayed on the Before You Start page and click Next to start the wizard.
  5. Click Browse to locate the iso file, select XS71ECU1018.iso and then click Open.
  6. Click Next.
  7. Select the pool or hosts you wish to apply the hotfix to, and then click Next.
  8. The Install Update wizard performs a number of update prechecks, including the space available on the hosts, to ensure that the pool is in a valid configuration state. The wizard also checks whether the hosts need to be rebooted after the update is applied and displays the result.
  9. Follow the on-screen recommendations to resolve any update prechecks that have failed. If you want XenCenter to automatically resolve all failed prechecks, click Resolve All. When the prechecks have been resolved, click Next.

  10. Choose the Update Mode. Review the information displayed on the screen and select an appropriate mode.
  11. Note: If you click Cancel at this stage, the Install Update wizard reverts the changes and removes the update file from the host.

  12. Click Install update to proceed with the installation. The Install Update wizard shows the progress of the update, displaying the major operations that XenCenter performs while updating each host in the pool.
  13. When the update is applied, click Finish to close the wizard.
  14. If you chose to carry out the post-update tasks, do so now.

Installing the Hotfix by using the xe Command Line Interface

  1. Download the hotfix file to a known location.
  2. Extract the .iso file from the zip.
  3. Upload the .iso file to the Pool Master by entering the following commands:

    (Where -s is the Pool Master’s IP address or DNS name.)

    xe -s <server> -u <username> -pw <password> update-upload file-name=<filename>XS71ECU1018.iso

    XenServer assigns the update file a UUID which this command prints. Note the UUID.

    62e77e22-5904-11e8-b50f-bb160817e4cd

  4. Apply the update to all hosts in the pool, specifying the UUID of the update:

    xe update-pool-apply uuid=62e77e22-5904-11e8-b50f-bb160817e4cd

    Alternatively, if you need to update and restart hosts in a rolling manner, you can apply the update file to an individual host by running the following:

    xe update-apply host=<name_of_host> uuid=62e77e22-5904-11e8-b50f-bb160817e4cd

  5. Verify that the update was applied by using the update-list command.

    xe update-list -s <server> -u root -pw <password> name-label=XS71ECU1018

    If the update is successful, the hosts field contains the UUIDs of the hosts to which this patch was successfully applied. This should be a complete list of all hosts in the pool.

  6. The hotfix is applied to all hosts in the pool, but it will not take effect until the XAPI service is restarted on all hosts. On the console of each host in the pool beginning with the master, enter the following command to restart the XAPI service:

    xe-toolstack-restart

    Note: When this command is run on the Pool Master, XenCenter will lose connection to the pool. Wait for 30 seconds after losing connection, and then reconnect manually.

Files

Hotfix File

Component Details

Hotfix Filename: XS71ECU1018.iso
Hotfix File sha256: 336b855b391536207523016bfb7ef9e6f05634ec2efd77af85ecc308460c1dc3
Hotfix Source Filename: XS71ECU1018-sources.iso
Hotfix Source File sha256: d096035c169372c5ed236175dc1c70977071355c15c53260373b19ea8332eb71
Hotfix Zip Filename: XS71ECU1018.zip
Hotfix Zip File sha256: 9bc66fe63918f930a50bc838cec9b45d22521851ca20d70285779c3496ee9dbc
Size of the Zip file: 48.9 MB

Files Updated

squeezed-0.13.2-1.el7.centos.x86_64.rpm
v6d-citrix-10.0.6-1.el7.centos.x86_64.rpm
vhd-tool-0.11.3-1.el7.centos.x86_64.rpm
xapi-core-1.14.38-1.x86_64.rpm
xapi-tests-1.14.38-1.x86_64.rpm
xapi-xe-1.14.38-1.x86_64.rpm
xcp-networkd-0.13.6-1.el7.centos.x86_64.rpm
xenopsd-0.17.11-1.el7.centos.x86_64.rpm
xenopsd-xc-0.17.11-1.el7.centos.x86_64.rpm
xenopsd-xenlight-0.17.11-1.el7.centos.x86_64.rpm

More Information

For more information, see the XenServer Documentation.

If you experience any difficulties, contact Citrix Technical Support.

Related:

Can’t find the volume UDiskP… It will be reattached on the next login.

This means that the layer has been detached. The App Layering services actually have no idea why that volume is missing. It might be that the file server disconnected from Windows, or that something manually detached the layer VHD. App Layering is not part of the process of attaching or maintaining layer VHDs in any way. We send the request to Windows to attach the VHD, and from then on, it’s whatever Microsoft wants.

When a user who has Elastic Assignments logs in, our Layering Service (ULayer.exe) creates a connection to the designated file share using the user's credentials. We identify the layer VHD files that are required, and call Disk Management to attach those disks. When the volume identifiers become available, we inform our filter drivers to begin using the new disks.

From then on, Windows is responsible for persisting the connection to the VHD and/or the file share itself. Unfortunately, Windows does not log file share connection events or attached VHD events, so the only indication you may have is from UlayerSvc.log.

Continuing the example: eventually Windows loses the share, or maybe Windows just loses the VHD. Windows might or might not retry or have some timeouts before it gives up. All of this is completely determined by Microsoft; we do not (and cannot) set any parameters to affect this behavior.

When Windows finally decides the VHD is no longer attached, the volume identifier disappears from Windows’ list of volumes. That is the first that our software is aware of anything at all. We check every 60 seconds, so the disconnect could have happened some time ago. We log that the volume is no longer visible, we tell the filter driver to stop using it, and we note in the log that the next time someone logs in who has this layer assigned, we will attempt to reattach it and reuse it.

That whole process has nothing to do with whether the attached VHD is an app layer or some other kind of VHD entirely. It's all up to Windows' native fault tolerance in this situation. We have no idea that there's even a problem until the volume disappears.

However, this just means that the contents of that layer will be unavailable until the next login. The machine will continue operation, any other layers will continue functioning, and when a user (even the same user) that has that particular layer assigned logs in again, we will attach the missing layers. This process, other than temporarily losing access to the contents of the Elastic Layer, should be completely nondisruptive.

Note that if the volume that disappears is the User Layer, then you will not see this log entry, and Windows itself will become substantially unresponsive, being unable to find anything, including things on the boot disk which is still attached.

So what can you do about it? Unfortunately, since Windows does not generally log VHD or file server failure events, it can be very difficult to figure out why the disk is disappearing. App Layering and ULayer don’t know anything about the cause, we just know about when it happened. Start recording the times when it happens, in case there is some external event that you can correlate to the lost volumes. See if there are any other events in the Windows System and Application event logs approximately when ulayersvc.log reports the volume disappearing.

There is some suggestion that the disconnects can be the result of an overeager antivirus, but we do not have any information about why or how to stop it. If the problem goes away completely if you stop including your AV layer in the published image, it might be worth a conversation with your AV vendor about the circumstances where they might disconnect a VHD from a machine they’re protecting.

In some circumstances (and this might be why you looked for this page in the first place), we have seen layers disconnected in the logs, but when the user (or any user) attempts to log in again, at the point where we should be reattaching the VHD, the entire machine hangs. Something is blocking our ability to reattach the VHD. So far, all we know is that this too appears to be correlated to antivirus software. If we ever figure out why, and how to stop it, we'll certainly record it here.

Related:

Hotfix XS71ECU1011 – For XenServer 7.1 Cumulative Update 1

Who Should Install This Hotfix?

This is a hotfix for customers running XenServer 7.1 Cumulative Update 1.

This hotfix does not apply to XenServer 7.1. You must apply Cumulative Update 1 before you can apply this hotfix.

Note: XenServer 7.1 Cumulative Update 1 and its subsequent hotfixes are available only to customers on the Customer Success Services program.

Information About this Hotfix

Component Details

Prerequisite: None
Post-update tasks*: Restart the XAPI Toolstack
Content live patchable**: No
Revision History: Published on Feb 16, 2018

** Available to Enterprise Customers.

Issues Resolved In This Hotfix

This hotfix addresses the following issues:

  • When you copy a Virtual Disk Image (VDI) into a thinly-provisioned SR (NFS, EXT or SMB) that does not have enough space for the VDI, the copy fails and does not report an error.
  • On Nutanix hosts, the host’s memory-overhead is miscalculated after first boot. This is because toolstack (XAPI) calculates the available host RAM on startup assuming no domains other than the XenServer Control Domain are running. On first boot this is true but on subsequent boots, the Nutanix Controller VM (CVM) is started before toolstack.
  • When migrating VMs that have Dynamic Memory Configuration enabled, the VM's shutdown operation can unexpectedly fail. This is caused by memory consumption shrinking before shutdown, which makes the operation take longer than expected.
  • On rare occasions, after a toolstack restart, XenServer cannot recognize the host a VM is located on. This is caused by failed VM migration misleading the xenopsd daemon, which toolstack queries on startup.
  • If the disable_pv_vnc key in a paravirtualized VM’s other_config field is set to 1, the xenconsoled daemon stores all console output from this VM. This can cause the xenconsoled daemon to consume all memory in control domain (dom0).
  • The v6d license daemon should grant a grace license when XenServer temporarily cannot reach the license server. For some license types, it did not grant such a license.

Installing the Hotfix

Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to apply this hotfix. As with any software update, back up your data before applying this update. Citrix recommends updating all hosts within a pool sequentially. Upgrading of hosts should be scheduled to minimize the amount of time the pool runs in a “mixed state” where some hosts are upgraded and some are not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.

Note: The attachment to this article is a zip file. It contains the hotfix update package only. Click the following link to download the source code for any modified open source components XS71ECU1011-sources.iso. The source code is not necessary for hotfix installation: it is provided to fulfill licensing obligations.

Installing the Hotfix by using XenCenter

Before installing this hotfix, we recommend that you update your version of XenCenter to the latest available version for XenServer 7.1 CU 1.

Choose an Installation Mechanism

There are three mechanisms to install a hotfix:

  1. Automated Updates
  2. Download update from Citrix
  3. Select update or Supplemental pack from disk

The Automated Updates feature is available for XenServer Enterprise Edition customers, or to those who have access to XenServer through their XenApp/XenDesktop entitlement. For information about installing a hotfix using the Automated Updates feature, see the section Applying Automated Updates in the XenServer 7.1 Cumulative Update 1 Installation Guide.

For information about installing a hotfix using the Download update from Citrix option, see the section Applying an Update to a Pool in the XenServer 7.1 Cumulative Update 1 Installation Guide.

The following section contains instructions on option (3) installing a hotfix that you have downloaded to disk:

  1. Download the hotfix to a known location on a computer that has XenCenter installed.
  2. Unzip the hotfix zip file and extract the .iso file
  3. In XenCenter, on the Tools menu, select Install Update. This displays the Install Update wizard.
  4. Read the information displayed on the Before You Start page and click Next to start the wizard.
  5. Click Browse to locate the iso file, select XS71ECU1011.iso and then click Open.
  6. Click Next.
  7. Select the pool or hosts you wish to apply the hotfix to, and then click Next.
  8. The Install Update wizard performs a number of update prechecks, including the space available on the hosts, to ensure that the pool is in a valid configuration state. The wizard also checks whether the hosts need to be rebooted after the update is applied and displays the result.
  9. Follow the on-screen recommendations to resolve any update prechecks that have failed. If you want XenCenter to automatically resolve all failed prechecks, click Resolve All. When the prechecks have been resolved, click Next.

  10. Choose the Update Mode. Review the information displayed on the screen and select an appropriate mode.
  11. Note: If you click Cancel at this stage, the Install Update wizard reverts the changes and removes the update file from the host.

  12. Click Install update to proceed with the installation. The Install Update wizard shows the progress of the update, displaying the major operations that XenCenter performs while updating each host in the pool.
  13. When the update is applied, click Finish to close the wizard.
  14. If you chose to carry out the post-update tasks, do so now.

Installing the Hotfix by using the xe Command Line Interface

  1. Download the hotfix file to a known location.
  2. Extract the .iso file from the zip.
  3. Upload the .iso file to the Pool Master by entering the following commands:

    (Where -s is the Pool Master’s IP address or DNS name.)

    xe -s <server> -u <username> -pw <password> update-upload file-name=<filename>XS71ECU1011.iso

    XenServer assigns the update file a UUID which this command prints. Note the UUID.

    72d0ce58-f5fc-11e7-850f-a7b856340c1f

  4. Apply the update to all hosts in the pool, specifying the UUID of the update:

    xe update-pool-apply uuid=<UUID_of_file>

    Alternatively, if you want to update and restart hosts in a rolling manner, you can apply the update file to an individual host by running the following:

    xe update-apply host=<name_of_host> uuid=<UUID_of_file>

  5. Verify that the update was applied by using the update-list command.

    xe update-list -s <server> -u root -pw <password> name-label=XS71ECU1011

    If the update is successful, the hosts field contains the UUIDs of the hosts to which this hotfix was successfully applied. This should be a complete list of all hosts in the pool.

  6. If the hotfix is applied successfully, carry out any specified post-update task on each host, starting with the master.
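The post-update task for this hotfix is restarting the XAPI Toolstack (see the table above). As with hotfix XS71ECU1018, enter the following on the console of each host in the pool, beginning with the Pool Master:

    xe-toolstack-restart

Note: When this command is run on the Pool Master, XenCenter will lose connection to the pool. Wait for 30 seconds after losing connection, and then reconnect manually.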

Files

Hotfix File

Component Details

Hotfix Filename: XS71ECU1011.iso
Hotfix File sha256: 7494873ca316a80c896eae1cb7738dfc0ffde5be009039a2674140c9aee3cdcb
Hotfix Source Filename: XS71ECU1011-sources.iso
Hotfix Source File sha256: 6a3569eae01d86c7182409da351722a93e89a16cd0b95c608357a081637f2f93
Hotfix Zip Filename: XS71ECU1011.zip
Hotfix Zip File sha256: 0f54cd49612d55951113b4b83297713508631cbc8a7124d34adf8b51f9cad530
Size of the Zip file: 44.6 MB

Files Updated

vhd-tool-0.11.1-1.el7.centos.x86_64.rpm
xapi-tests-1.14.36-1.x86_64.rpm
xenopsd-xenlight-0.17.11-1.el7.centos.x86_64.rpm
xapi-xe-1.14.36-1.x86_64.rpm
v6d-citrix-10.0.6-1.el7.centos.x86_64.rpm
xenopsd-xc-0.17.11-1.el7.centos.x86_64.rpm
squeezed-0.13.2-1.el7.centos.x86_64.rpm
xapi-core-1.14.36-1.x86_64.rpm
xenopsd-0.17.11-1.el7.centos.x86_64.rpm

More Information

For more information, see the XenServer 7.1 Documentation.

If you experience any difficulties, contact Citrix Technical Support.

Related: