How to Create Machine Catalog using MCS in Azure Resource Manager

Pre-requisites

  • Access to the XenApp and XenDesktop Service of Citrix Cloud.
  • An Azure subscription.
  • An Azure Active Directory (Azure AD) user account in the directory associated with your subscription, which is also a co-administrator of the subscription.
  • An ARM virtual network and subnet in your preferred region with connectivity to an AD controller and a Citrix Cloud Connector.
  • A “Microsoft Azure” host connection.
  • A master image: to create an MCS machine catalog, XenDesktop requires a master image that is used as the template for all the machines in that catalog.


Creating Master Image from Virtual Machine deployed in Azure Resource Manager

Create a virtual machine (VM) in Azure from an Azure Resource Manager gallery image with either a Server OS or a Desktop OS, depending on whether you want to create a Server OS or a Desktop OS catalog.

Install the Citrix VDA software on the VM; refer to the Citrix documentation for more information.

Install the applications that you want to publish using this master image on the VM. Shut down the VM from the Azure portal once you have finished installing applications, and make sure that the power status for the VM in the Azure portal is Stopped (deallocated).


When creating an MCS catalog, use the .vhd file that represents the OS disk of this VM as the master image for the catalog. If you have used the Microsoft Azure Classic connection type in XenDesktop, you would have captured a specialized image of the VM at this stage. For the Microsoft Azure connection type, you do not capture a VM image; you only shut down the VM and use the VHD associated with it as the master image.

Create MCS Catalog

This information is a supplement to the guidance in the Create a Machine Catalog article. After creating the master image, you are ready to create the MCS catalog. Follow the steps described below.

  1. Launch Studio from your Citrix Cloud client portal and navigate to Machine Catalogs in the left-hand pane.

  2. Right-click Machine Catalogs and click Create Machine Catalog to launch the machine creation wizard.

  3. Click Next on the Introduction page.


  4. On the Operating System page, select Server OS or Desktop OS based on the type of catalog you want to create, and click Next.


  5. On the Machine Management page, select Citrix Machine Creation Services (MCS) as the deployment technology, select the Microsoft Azure hosting resource, and click Next.


Master Image Selection – This page provides a tree view that you can navigate to select the master image VHD. At the topmost level are all the resource groups in your subscription, except those that represent MCS catalogs created by XenDesktop. When you select and expand a particular resource group, it shows a list of all the storage accounts in that resource group; if there are no storage accounts in a resource group, it has no child items. If you have manually created a number of resource groups and storage accounts to host your manually created VMs, the master image page shows all of those resource groups, storage accounts, containers, and VHDs, even though not all of those VHDs are master images that you want to use for provisioning. Select the storage account that has your master image; when you expand it, it shows the list of containers inside the storage account. Expand the container that has the master image VHD and select the VHD that you want to use as the master image for the catalog.


You need to know the VHD path in order to select it. If you have stood up a VM in Azure and prepared it to be used as a master image, find its VHD path as follows:

  1. Select the resource group that has your master image VM.

  2. Select the master image VM and click Settings.

  3. Click Disks, then click OS Disks, and copy the disk path.


  4. The OS disk path is structured as https://<storage account name>.blob.core.windows.net/<container name>/<image name>.vhd

  5. You can use the disk path obtained in the step above to navigate the tree view and select the image.
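If you prefer PowerShell, the OS disk URI can also be read from the VM object. A minimal sketch, assuming the AzureRM module is installed and using hypothetical resource group and VM names:

# Read the OS disk VHD URI of an unmanaged-disk VM (names are hypothetical).
(Get-AzureRmVM -ResourceGroupName "MasterImageRG" -Name "MasterImageVM").StorageProfile.OsDisk.Vhd.Uri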

Note: If you don’t shut down the master image VM before selecting the corresponding VHD to create a catalog, the catalog creation will fail. So, if you are selecting a VHD that is attached to a VM instance, make sure the VM is in the Stopped (deallocated) state.
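You can confirm the power state with a quick PowerShell check; a minimal sketch, again with hypothetical resource group and VM names:

# Expect the power state code "PowerState/deallocated" before using the VHD.
$vm = Get-AzureRmVM -ResourceGroupName "MasterImageRG" -Name "MasterImageVM" -Status
$vm.Statuses | Where-Object { $_.Code -like "PowerState/*" }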

  1. Storage type selection – XenDesktop supports Locally Redundant Standard or Premium storage for provisioning VMs in Azure. Your master image VHD can be hosted in any type of storage account, but for the VMs to be provisioned in Azure, XenDesktop creates new storage accounts based on the storage type you select.

  2. XenDesktop provisions a maximum of 40 VMs in a single storage account due to IOPS limitations in Azure. For example, for a 100-VM catalog you will find 3 storage accounts created, with the VMs distributed 40, 40, and 20.

  3. VM instance size selection – XenDesktop shows only those VM instance sizes that are supported for the storage type selected in the previous step. Enter the number of VMs, select the VM instance size of your choice, and click Next.


  4. Network Card Selection – Select the network card and the associated network. Only one network card is supported.


  5. Select the resource location domain and enter the machine naming scheme.


  6. Enter credentials for your resource location Active Directory.


  7. Review the catalog summary, enter the catalog name, and click Finish to start provisioning.


  8. Once the provisioning is complete, you will find a new resource group created in your Azure subscription that hosts all the VMs, storage accounts, and network adapters for the catalog you provisioned. The default power state for the VMs after provisioning is Stopped (deallocated).


Once the provisioning is complete, you will find a new resource group in your subscription that contains the VM RDSDesk-01 (named per the naming scheme we provided), the NIC corresponding to that VM, and a storage account that XenDesktop created to host the OS disk and the identity disk for the VM. The VM is hosted on the same network as the hosting resource selected during catalog creation, and its default power state is Stopped (deallocated).

The resource group created by XenDesktop during MCS provisioning has the following naming convention:

citrix-xd-<ProvisioningSchemeUid>-<xxxxx>

To find out which resource group in the Azure portal corresponds to the catalog you created from Studio, follow the steps below.

  1. Connect to your XenApp and XenDesktop service using the Remote PowerShell SDK. Please visit this link to find out how to interact with your Citrix Cloud environment using the Remote PowerShell SDK.
  2. Run the command Get-ProvScheme -ProvisioningSchemeName <Catalog Name>
  3. Note down the ‘ProvisioningSchemeUid’ from the output of the above command.
  4. Go to the Azure portal and search for the resource group name that contains the ‘ProvisioningSchemeUid’ you obtained in step 3.
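The lookup can also be scripted end to end; a minimal sketch, assuming a Remote PowerShell SDK session is already established, the AzureRM module is installed, and using a hypothetical catalog name:

asnp Citrix*
$uid = (Get-ProvScheme -ProvisioningSchemeName "My Azure Catalog").ProvisioningSchemeUid
# MCS-created resource groups are named citrix-xd-<ProvisioningSchemeUid>-<xxxxx>
Get-AzureRmResourceGroup | Where-Object { $_.ResourceGroupName -like "citrix-xd-$uid*" }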
  • Note: As a best practice, always create a copy of your master image and use the copy as input to the provisioning process. When you later want to update the catalog, you can start the master image VM, make the necessary changes, shut it down, and again create a copy of the image to serve as your update image. This lets you use one master image VM for multiple image updates.

    Remember to shut down the master image VM from the Azure portal before starting to create the catalog. The master image is copied into the catalog’s storage account once provisioning starts, so make sure it is not in use by any VM; otherwise the image copy will fail and, eventually, so will the provisioning.

  • Make sure you have sufficient core and NIC quota in your subscription to provision the VMs; these are the two quotas you are most likely to run out of. You may not be able to check your subscription quota limits.
  • If your master image VM is provisioned in a Premium storage account, just shutting down the VM from the portal isn’t enough; you also need to detach the disk from the VM to use it as a master image in provisioning. In Azure Resource Manager you cannot detach the disk while the VM still exists, so you need to delete the VM from the portal; this deletes only the VM and keeps the OS disk in the storage account. The NIC corresponding to the VM also needs to be deleted separately.


Understanding Write Cache in Provisioning Services Server

This article provides information about write cache usage in a Citrix Provisioning, formerly Provisioning Services (PVS), Server.

Write Cache in Provisioning Services Server

In PVS, the term “write cache” is used to describe all the cache modes. The write cache includes data written by the target device. When data is written to a PVS vDisk in a caching mode, it is not written back to the base vDisk; instead, it is written to a write cache file in one of the locations described below.

When the vDisk is in private/maintenance mode, all data is written back to the vDisk file on the PVS server. When a target device boots to a vDisk in standard/shared mode, the write cache information is checked to determine the cache location. Regardless of the cache type, the data written to the write cache is deleted on boot, so a target that reboots or starts up has a clean cache containing nothing from previous sessions.

If the PVS target is using Cache on device RAM with overflow on hard disk or Cache on device hard disk, and the PVS target software either does not find an appropriate hard disk partition or the partition is not formatted with NTFS, it fails over to Cache on server. By default, the PVS target software redirects the system page file to the same disk as the write cache, so pagefile.sys allocates space on the cache drive unless it is manually redirected to a separate volume.

For RAM cache without a local disk, you should consider setting the system page file to zero because all writes, including system page file writes, will go to the RAM cache unless redirected manually. PVS does not redirect the page file in the case of RAM cache.



Cache on device Hard Disk

Requirements

  • Local HD in every device using the vDisk.
  • The local HD must contain a basic volume pre-formatted with a Windows NTFS file system with at least 512 MB of free space.

The cache on local HD is stored in a file called .vdiskcache on a secondary local hard drive. It is created as an invisible file in the root folder of the secondary local HD. The cache file grows as needed, but never gets larger than the original vDisk, and frequently not larger than the free space on the original vDisk. It is slower than RAM cache or RAM cache with overflow to local hard disk, but faster than server cache, and it works in an HA environment. Citrix recommends that you do not use this cache type because of incompatibilities with Microsoft Windows ASLR, which can cause intermittent crashes and stability issues. This cache type is being replaced by RAM cache with overflow to the hard drive.

Cache in device RAM

Requirement

  • An appropriate amount of physical memory on the machine.

The cache is stored in client RAM. The maximum size of the cache is fixed by a setting in the vDisk properties screen. RAM cache is faster than the other cache types and works in an HA environment. The RAM is allocated at boot and never changes, and the allocated RAM cannot be used by the OS. If the workload exhausts the RAM cache, the system may become unusable and even crash, so it is important to pre-calculate workload requirements and set an appropriate RAM size. Cache in device RAM does not require a local hard drive.

Cache on device RAM with overflow on Hard Disk

Requirement

  • Provisioning Services 7.1 hotfix 2 or later.
  • Local HD in every target device using the vDisk.
  • The local HD must contain a basic volume pre-formatted with a Windows NTFS file system with at least 512 MB of free space. By default, Citrix sets this to 6 GB, but recommends 10 GB or larger depending on the workload.
  • The default RAM cache size is 64 MB; Citrix recommends at least 256 MB for a Desktop OS and 1 GB for a Server OS if RAM cache is being used.
  • If you decide not to use RAM cache, you may set it to 0 so that only the local hard disk is used for caching.

Cache on device RAM with overflow on hard disk is the newest of the write cache types, and Citrix recommends it for PVS: it combines the speed of RAM with the stability of hard disk cache. The cache uses non-paged pool memory for the best performance. When RAM utilization reaches its threshold, the oldest RAM cache data is written to the local hard drive. The local hard disk cache uses a file it creates called vdiskdif.vhdx.
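To confirm that the overflow file is actually present on a target, a minimal check; the write-cache drive letter D: is an assumption:

# The overflow file is hidden, so use -Force to list it (file name per the text above).
Get-ChildItem -Path 'D:\' -Filter 'vdiskdif.vhdx' -Force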

Things to note about this cache type:

  • This write cache type is only available for Windows 7/2008 R2 and later.
  • This cache type addresses interoperability issues with Microsoft Windows ASLR.



Cache on Server

Requirements

  • Enough space allocated to the location where the server cache will be stored.

Server cache is stored in a file on the server, or on a share, SAN, or other location. The file size grows as needed, but never gets larger than the original vDisk, and frequently not larger than the free space on the original vDisk. It is slower than RAM cache because all reads/writes have to go to the server and be read from a file. The cache is deleted when the device reboots; that is, on every boot the device reverts to the base image, and changes remain only during a single boot session. Server cache works in an HA environment if all server cache locations resolve to the same physical storage location. This cache type is not recommended for a production environment.

Additional Resources

Selecting the Write Cache Destination for Standard vDisk Images

Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow Feature


Failed to mount Elastic Layers, “A virtual disk support provider for the specified file was not found.”

There may be multiple causes for this. The error really just means that Windows is being blocked by policy from allowing the disk to be attached. One identified cause is a specific GPO:

Computer Configuration/Administrative Templates/System/Device Installation/Device Installation Restrictions – Prevent installation of devices not described by other policy settings.

If you disable that setting, you should be able to mount the VHD manually using DiskMgmt or DiskPart, and Elastic Layers should start attaching properly at logon. Be careful, of course, to find where that GPO is set: if it’s in your domain policies, the setting might be captured in the Platform Layer and not get cleared by the updated GPO before a user logs in.
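To test quickly, you can try attaching a layer VHD by hand and inspect the registry value that, to the best of our knowledge, backs this GPO; the registry value name and the VHD path below are assumptions:

# 1 in DenyUnspecified means the device-installation restriction is in effect (assumed mapping).
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DeviceInstall\Restrictions' -Name 'DenyUnspecified' -ErrorAction SilentlyContinue

# Manual attach of a hypothetical layer VHD; this fails with the same error while the policy blocks it.
Mount-DiskImage -ImagePath 'C:\Layers\TestLayer.vhd'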


Hotfix XS71ECU1018 – For XenServer 7.1 Cumulative Update 1

Who Should Install This Hotfix?

This is a hotfix for customers running XenServer 7.1 Cumulative Update 1.

Note: This hotfix is available only to customers on the Customer Success Services program.

Information About this Hotfix

Prerequisite: None
Post-update tasks*: Restart the XAPI Toolstack
Content live patchable**: No
Revision History: Published on July 6, 2018

** Available to Enterprise Customers.

Issues Resolved In This Hotfix

This hotfix resolves the following issues:

  • Copying a virtual disk between SRs using unbuffered I/O can cause a performance regression due to I/O requests getting split into 64 KiB and 512 byte writes.
  • On HA-enabled pools, when a task is initiated after a XenServer host has failed, VMs running on the host can take longer (about 10 minutes) to restart. This issue occurs when a task is assigned to the host after it has failed, but before XAPI is aware of the host failure. In such cases, the task doesn’t get cancelled even when XAPI is notified about the failure, causing delays in restarting the VMs.
  • Changing the name or description of a snapshot Virtual Disk Image (VDI) can cause a VM snapshot to lose track of its virtual disks (VDIs). This makes it impossible to revert the VM to the snapshot.
  • When you add a host to a pool as a pool member, the performance alerts stop working. If this happens you can use the xe CLI to configure the performance alerts on the pool member after it has joined the pool. For more information, see the Citrix XenServer Administrator’s Guide.
  • The xe edit-bootloader script is not consistent when handling partitions whose UUIDs end with a letter. The script succeeds if the partition UUID ends with a number; however, if the UUID ends with a letter, the script may not locate the bootloader config.
  • After installing a XenServer hotfix that recognizes an extended set of CPU features, XenCenter can incorrectly raise an alert that says some CPU features have disappeared.
  • In XenServer deployments with multiple VLANs containing different MTU values, restarting a VM can reset all the MTU values to the lowest value present on the network bridge.
  • When you copy a Virtual Disk Image (VDI) into a thinly provisioned SR (NFS, EXT or SMB) that does not have enough space for the VDI, the copy fails and does not report an error.

This hotfix also includes the following previously released hotfix:

Installing the Hotfix

Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to apply this hotfix. As with any software update, back up your data before applying this update. Citrix recommends updating all hosts within a pool sequentially. Upgrading of hosts should be scheduled to minimize the amount of time the pool runs in a “mixed state” where some hosts are upgraded and some are not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.

Note: The attachment to this article is a zip file. It contains the hotfix update package only. Click the following link to download the source code for any modified open source components XS71ECU1018-sources.iso. The source code is not necessary for hotfix installation: it is provided to fulfill licensing obligations.

Installing the Hotfix by using XenCenter

Choose an Installation Mechanism

There are three mechanisms to install a hotfix:

  1. Automated Updates
  2. Download update from Citrix
  3. Select update or Supplemental pack from disk

The Automated Updates feature is available for XenServer Enterprise Edition customers, or to those who have access to XenServer through their XenApp/XenDesktop entitlement. For information about installing a hotfix using the Automated Updates feature, see the section Applying Automated Updates in the XenServer Installation Guide.

For information about installing a hotfix using the Download update from Citrix option, see the section Applying an Update to a Pool in the XenServer Installation Guide.

The following section contains instructions on option (3) installing a hotfix that you have downloaded to disk:

  1. Download the hotfix to a known location on a computer that has XenCenter installed.
  2. Unzip the hotfix zip file and extract the .iso file
  3. In XenCenter, on the Tools menu, select Install Update. This displays the Install Update wizard.
  4. Read the information displayed on the Before You Start page and click Next to start the wizard.
  5. Click Browse to locate the iso file, select XS71ECU1018.iso and then click Open.
  6. Click Next.
  7. Select the pool or hosts you wish to apply the hotfix to, and then click Next.
  8. The Install Update wizard performs a number of update prechecks, including the space available on the hosts, to ensure that the pool is in a valid configuration state. The wizard also checks whether the hosts need to be rebooted after the update is applied and displays the result.
  9. Follow the on-screen recommendations to resolve any update prechecks that have failed. If you want XenCenter to automatically resolve all failed prechecks, click Resolve All. When the prechecks have been resolved, click Next.

  10. Choose the Update Mode. Review the information displayed on the screen and select an appropriate mode.

    Note: If you click Cancel at this stage, the Install Update wizard reverts the changes and removes the update file from the host.

  11. Click Install update to proceed with the installation. The Install Update wizard shows the progress of the update, displaying the major operations that XenCenter performs while updating each host in the pool.
  12. When the update is applied, click Finish to close the wizard.
  13. If you chose to carry out the post-update tasks, do so now.

Installing the Hotfix by using the xe Command Line Interface

  1. Download the hotfix file to a known location.
  2. Extract the .iso file from the zip.
  3. Upload the .iso file to the Pool Master by entering the following command:

    (Where -s is the Pool Master’s IP address or DNS name.)

    xe -s <server> -u <username> -pw <password> update-upload file-name=<path_to_iso>/XS71ECU1018.iso

    XenServer assigns the update file a UUID which this command prints. Note the UUID.

    62e77e22-5904-11e8-b50f-bb160817e4cd

  4. Apply the update to all hosts in the pool, specifying the UUID of the update:

    xe update-pool-apply uuid=62e77e22-5904-11e8-b50f-bb160817e4cd

    Alternatively, if you need to update and restart hosts in a rolling manner, you can apply the update file to an individual host by running the following:

    xe update-apply host=<name_of_host> uuid=62e77e22-5904-11e8-b50f-bb160817e4cd

  5. Verify that the update was applied by using the update-list command.

    xe update-list -s <server> -u root -pw <password> name-label=XS71ECU1018

    If the update is successful, the hosts field contains the UUIDs of the hosts to which this patch was successfully applied. This should be a complete list of all hosts in the pool.

  6. The hotfix is applied to all hosts in the pool, but it will not take effect until the XAPI service is restarted on all hosts. On the console of each host in the pool beginning with the master, enter the following command to restart the XAPI service:

    xe-toolstack-restart

    Note: When this command is run on the Pool Master, XenCenter will lose connection to the pool. Wait for 30 seconds after losing connection, and then reconnect manually.

Files

Hotfix File

Hotfix Filename: XS71ECU1018.iso
Hotfix File sha256: 336b855b391536207523016bfb7ef9e6f05634ec2efd77af85ecc308460c1dc3
Hotfix Source Filename: XS71ECU1018-sources.iso
Hotfix Source File sha256: d096035c169372c5ed236175dc1c70977071355c15c53260373b19ea8332eb71
Hotfix Zip Filename: XS71ECU1018.zip
Hotfix Zip File sha256: 9bc66fe63918f930a50bc838cec9b45d22521851ca20d70285779c3496ee9dbc
Size of the Zip file: 48.9 MB
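Before installing, you can verify the downloaded zip against the published sha256; a minimal sketch, with a hypothetical download path:

# Returns True when the local file matches the published hash.
$expected = '9bc66fe63918f930a50bc838cec9b45d22521851ca20d70285779c3496ee9dbc'
(Get-FileHash -Algorithm SHA256 'C:\Downloads\XS71ECU1018.zip').Hash -eq $expected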

Files Updated

squeezed-0.13.2-1.el7.centos.x86_64.rpm
v6d-citrix-10.0.6-1.el7.centos.x86_64.rpm
vhd-tool-0.11.3-1.el7.centos.x86_64.rpm
xapi-core-1.14.38-1.x86_64.rpm
xapi-tests-1.14.38-1.x86_64.rpm
xapi-xe-1.14.38-1.x86_64.rpm
xcp-networkd-0.13.6-1.el7.centos.x86_64.rpm
xenopsd-0.17.11-1.el7.centos.x86_64.rpm
xenopsd-xc-0.17.11-1.el7.centos.x86_64.rpm
xenopsd-xenlight-0.17.11-1.el7.centos.x86_64.rpm

More Information

For more information, see the XenServer Documentation.

If you experience any difficulties, contact Citrix Technical Support.


Can’t find the volume UDiskP… It will be reattached on the next login.

This means that the layer has been detached. The App Layering services actually have no idea why that volume is missing. It might be that the file server disconnected from Windows, or that something manually detached the layer VHD. App Layering is not part of the process of attaching or maintaining layer VHDs in any way. We send the request to Windows to attach the VHD, and from then on, it’s whatever Microsoft wants.

When a user who has Elastic Assignments logs in, our Layering Service (ULayer.exe) creates a connection to the designated file share using the user’s credentials. We identify the layer VHD files that are required and call Disk Management to attach those disks. When the volume identifiers become available, we inform our filter drivers to begin using the new disks.

From then on, Windows is responsible for persisting the connection to the VHD and/or the file share itself. Unfortunately, Windows does not log file share connection events or attached VHD events, so the only indication you may have is from UlayerSvc.log.

Following the example, eventually Windows loses the share, or maybe Windows just loses the VHD. Windows might or might not retry or have some timeouts before it gives up. All of this is completely determined by Microsoft; we do not (and cannot) set any parameters to affect this behavior.

When Windows finally decides the VHD is no longer attached, the volume identifier disappears from Windows’ list of volumes. That is the first that our software is aware of anything at all. We check every 60 seconds, so the disconnect could have happened some time ago. We log that the volume is no longer visible, we tell the filter driver to stop using it, and we note in the log that the next time someone logs in who has this layer assigned, we will attempt to reattach it and reuse it.

That whole process has nothing to do with whether the attached VHD is an app layer or a VHD or something completely different. It’s all up to Windows’ native fault tolerance in this situation. We have no idea that there’s even a problem until the volume disappears.

However, this just means that the contents of that layer will be unavailable until the next login. The machine will continue operation, any other layers will continue functioning, and when a user (even the same user) that has that particular layer assigned logs in again, we will attach the missing layers. This process, other than temporarily losing access to the contents of the Elastic Layer, should be completely nondisruptive.

Note that if the volume that disappears is the User Layer, then you will not see this log entry, and Windows itself will become substantially unresponsive, being unable to find anything, including things on the boot disk which is still attached.

So what can you do about it? Unfortunately, since Windows does not generally log VHD or file server failure events, it can be very difficult to figure out why the disk is disappearing. App Layering and ULayer don’t know anything about the cause; we just know when it happened. Start recording the times when it happens, in case there is some external event that you can correlate with the lost volumes. See if there are any other events in the Windows System and Application event logs around the time ulayersvc.log reports the volume disappearing.
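A minimal event-log query around a known disappearance time; the timestamp below is hypothetical:

# Pull System and Application events in a ±5 minute window around the logged time.
$t = Get-Date '2018-07-06 09:14:00'
Get-WinEvent -FilterHashtable @{ LogName = @('System','Application'); StartTime = $t.AddMinutes(-5); EndTime = $t.AddMinutes(5) } |
    Sort-Object TimeCreated |
    Format-Table TimeCreated, ProviderName, Id, LevelDisplayName -AutoSize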

There is some suggestion that the disconnects can be the result of an overeager antivirus, but we do not have any information about why, or how to stop it. If the problem goes away completely when you stop including your AV layer in the published image, it might be worth a conversation with your AV vendor about the circumstances under which they might disconnect a VHD from a machine they’re protecting.

In some circumstances (and this might be why you looked for this page in the first place), we have seen layers disconnected in the logs, but when the user (or any user) attempts to log in again, at the point where we should be reattaching the VHD, the entire machine hangs. Something is blocking our ability to reattach the VHD. So far, all we know is that this too appears to be correlated to antiviruses. If we ever figure out why, and how to stop them, we will certainly record it here.


Hotfix XS71ECU1011 – For XenServer 7.1 Cumulative Update 1

Who Should Install This Hotfix?

This is a hotfix for customers running XenServer 7.1 Cumulative Update 1.

This hotfix does not apply to XenServer 7.1. You must apply Cumulative Update 1 before you can apply this hotfix.

Note: XenServer 7.1 Cumulative Update 1 and its subsequent hotfixes are available only to customers on the Customer Success Services program.

Information About this Hotfix

Prerequisite: None
Post-update tasks*: Restart the XAPI Toolstack
Content live patchable**: No
Revision History: Published on Feb 16, 2018
** Available to Enterprise Customers.

Issues Resolved In This Hotfix

This hotfix addresses the following issues:

  • When you copy a Virtual Disk Image (VDI) into a thinly-provisioned SR (NFS, EXT or SMB) that does not have enough space for the VDI, the copy fails and does not report an error.
  • On Nutanix hosts, the host’s memory-overhead is miscalculated after first boot. This is because toolstack (XAPI) calculates the available host RAM on startup assuming no domains other than the XenServer Control Domain are running. On first boot this is true but on subsequent boots, the Nutanix Controller VM (CVM) is started before toolstack.
  • When migrating VMs that have Dynamic Memory Configuration enabled, the VM’s shutdown operation can unexpectedly fail. This is caused by memory consumption shrinking before shutdown, an operation that takes longer than expected.
  • On rare occasions, after a toolstack restart, XenServer cannot recognize the host a VM is located on. This is caused by failed VM migration misleading the xenopsd daemon, which toolstack queries on startup.
  • If the disable_pv_vnc key in a paravirtualized VM’s other_config field is set to 1, the xenconsoled daemon stores all console output from this VM. This can cause the xenconsoled daemon to consume all memory in control domain (dom0).
  • The v6d license daemon is expected to grant a grace license when XenServer temporarily cannot reach the license server. For some license types, it did not grant such a license.

Installing the Hotfix

Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to apply this hotfix. As with any software update, back up your data before applying this update. Citrix recommends updating all hosts within a pool sequentially. Upgrading of hosts should be scheduled to minimize the amount of time the pool runs in a “mixed state” where some hosts are upgraded and some are not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.

Note: The attachment to this article is a zip file. It contains the hotfix update package only. Click the following link to download the source code for any modified open source components XS71ECU1011-sources.iso. The source code is not necessary for hotfix installation: it is provided to fulfill licensing obligations.

Installing the Hotfix by using XenCenter

Before installing this hotfix, we recommend that you update your version of XenCenter to the latest available version for XenServer 7.1 CU 1.

Choose an Installation Mechanism

There are three mechanisms to install a hotfix:

  1. Automated Updates
  2. Download update from Citrix
  3. Select update or Supplemental pack from disk

The Automated Updates feature is available for XenServer Enterprise Edition customers, or to those who have access to XenServer through their XenApp/XenDesktop entitlement. For information about installing a hotfix using the Automated Updates feature, see the section Applying Automated Updates in the XenServer 7.1 Cumulative Update 1 Installation Guide.

For information about installing a hotfix using the Download update from Citrix option, see the section Applying an Update to a Pool in the XenServer 7.1 Cumulative Update 1 Installation Guide.

The following section contains instructions on option (3) installing a hotfix that you have downloaded to disk:

  1. Download the hotfix to a known location on a computer that has XenCenter installed.
  2. Unzip the hotfix zip file and extract the .iso file
  3. In XenCenter, on the Tools menu, select Install Update. This displays the Install Update wizard.
  4. Read the information displayed on the Before You Start page and click Next to start the wizard.
  5. Click Browse to locate the iso file, select XS71ECU1011.iso and then click Open.
  6. Click Next.
  7. Select the pool or hosts you wish to apply the hotfix to, and then click Next.
  8. The Install Update wizard performs a number of update prechecks, including the space available on the hosts, to ensure that the pool is in a valid configuration state. The wizard also checks whether the hosts need to be rebooted after the update is applied and displays the result.
  9. Follow the on-screen recommendations to resolve any update prechecks that have failed. If you want XenCenter to automatically resolve all failed prechecks, click Resolve All. When the prechecks have been resolved, click Next.

  10. Choose the Update Mode. Review the information displayed on the screen and select an appropriate mode.

    Note: If you click Cancel at this stage, the Install Update wizard reverts the changes and removes the update file from the host.

  11. Click Install update to proceed with the installation. The Install Update wizard shows the progress of the update, displaying the major operations that XenCenter performs while updating each host in the pool.
  12. When the update is applied, click Finish to close the wizard.
  13. If you chose to carry out the post-update tasks, do so now.

Installing the Hotfix by using the xe Command Line Interface

  1. Download the hotfix file to a known location.
  2. Extract the .iso file from the zip.
  3. Upload the .iso file to the Pool Master by entering the following command:

    (Where -s is the Pool Master’s IP address or DNS name.)

    xe -s <server> -u <username> -pw <password> update-upload file-name=<path_to_iso>/XS71ECU1011.iso

    XenServer assigns the update file a UUID which this command prints. Note the UUID.

    72d0ce58-f5fc-11e7-850f-a7b856340c1f

  4. Apply the update to all hosts in the pool, specifying the UUID of the update:

    xe update-pool-apply uuid=<UUID_of_file>

    Alternatively, if you want to update and restart hosts in a rolling manner, you can apply the update file to an individual host by running the following:

    xe update-apply host=<name_of_host> uuid=<UUID_of_file>

  5. Verify that the update was applied by using the update-list command.

    xe update-list -s <server> -u root -pw <password> name-label=XS71ECU1011

    If the update is successful, the hosts field contains the UUIDs of the hosts to which this hotfix was successfully applied. This should be a complete list of all hosts in the pool.

  6. If the hotfix is applied successfully, carry out any specified post-update task on each host, starting with the master.

Files

Hotfix File

Hotfix Filename: XS71ECU1011.iso
Hotfix File sha256: 7494873ca316a80c896eae1cb7738dfc0ffde5be009039a2674140c9aee3cdcb
Hotfix Source Filename: XS71ECU1011-sources.iso
Hotfix Source File sha256: 6a3569eae01d86c7182409da351722a93e89a16cd0b95c608357a081637f2f93
Hotfix Zip Filename: XS71ECU1011.zip
Hotfix Zip File sha256: 0f54cd49612d55951113b4b83297713508631cbc8a7124d34adf8b51f9cad530
Size of the Zip file: 44.6 MB

Files Updated

vhd-tool-0.11.1-1.el7.centos.x86_64.rpm
xapi-tests-1.14.36-1.x86_64.rpm
xenopsd-xenlight-0.17.11-1.el7.centos.x86_64.rpm
xapi-xe-1.14.36-1.x86_64.rpm
v6d-citrix-10.0.6-1.el7.centos.x86_64.rpm
xenopsd-xc-0.17.11-1.el7.centos.x86_64.rpm
squeezed-0.13.2-1.el7.centos.x86_64.rpm
xapi-core-1.14.36-1.x86_64.rpm
xenopsd-0.17.11-1.el7.centos.x86_64.rpm

More Information

For more information, see the XenServer 7.1 Documentation.

If you experience any difficulties, contact Citrix Technical Support.


Re: Hyper-V Disaster Recovery with DELLEMC Unity

VMware uses Storage Replication Adapters, plugins that can specifically use storage-based replication to secure the virtual machines and handle the failover.

When looking at Hyper-V, the only materials that I could find were these:

https://docs.microsoft.com/en-us/azure/site-recovery/hyper-v-vmm-disaster-recovery

Look at page 38 of this guide:

https://www.emc.com/collateral/white-papers/h12557-storage-ms-hyper-v-virtualization-wp.pdf

That basically makes it sound like Hyper-V Replica handles the replication on a per-guest/VHD basis. Alternatively, you might want to look at Veeam or Zerto for VM-level recovery, but that still isn’t storage-level replication.

You could also look at EMC Storage Integrator (ESI):

https://www.emc.com/data-center-management/storage-integrator-for-windows-suite.htm#!interoperability

I’m not saying that using storage-level replication won’t work. Maybe it will, but I can’t find anything that specifically says it will, or that seems to make it easy like it is with VMware.

~Chris


P2V Options when Client is SEE Encrypted?

I need a solution

Hi All,

I have a work tablet (Win 10 Pro, UEFI) that is SEE Encrypted. It is managed by the corporate SEE Server. I have local admin rights on the tablet.

I would like to find a possible solution to convert this into a VM (physical to virtual, P2V) and put it onto a much faster laptop.

I tried Disk2VHD.exe and converted the disk into a .VHDX file (Virtual Hard Disk), put it on a host machine with Hyper-V installed, created a new VM, and selected this as the virtual disk. However, if I select Secure Boot, Hyper-V cannot find the MBR. If I uncheck Secure Boot, I get the SEE pre-boot authentication window, but as soon as it passes, I get a \windows\winload.efi error, probably since the MBR is encrypted.

I installed VMware Standalone Converter, however its service is not starting up correctly.

Are there any other alternatives that can successfully turn this into a VM?

Thanks!



How To Use Azure Hybrid Use Benefit In XenApp and XenDesktop

There are two ways to utilize HUB.

  1. Check to see whether you have an Enterprise Agreement (EA) Azure subscription. If you have one, you can use images from the Azure Marketplace that are pre-configured with HUB.


  2. If you don’t have an Enterprise Agreement Azure subscription, you first need to prepare a Windows Server VHD with a base server build in your on-premises hypervisor and upload the VHD to an Azure storage account to stand up the virtual machine (VM) in Azure.

Prepare the master image to use Azure Hybrid Use Benefit in XenDesktop using the pre-configured HUB images in the Azure Marketplace

In your EA Azure subscription, deploy a Server OS VM using a HUB image from the Azure Marketplace. Install the VDA software and the applications and tools that you want in your master image. Stop (deallocate) the VM from the Azure portal. To confirm that the VM you deployed is utilizing HUB, run the following PowerShell command and check that the output contains LicenseType = Windows_Server.

Get-AzureRmVM -ResourceGroupName "xdResourceGroup" -Name "xdVDA"
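If you only want the license type, a one-line variant of the same check (same example names):

# Prints Windows_Server when HUB is in effect for the VM.
(Get-AzureRmVM -ResourceGroupName "xdResourceGroup" -Name "xdVDA").LicenseType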

As a best practice, create a copy of the OS disk VHD in the container newvm in the same storage account manualvmstorage; a minimal copy sketch follows below. You are then ready with a master image that you can use as input to create the MCS catalog.
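The copy can be done with the Azure storage cmdlets; the access key is a placeholder and the source blob name is hypothetical:

# Copy the OS disk VHD from the "vhds" container into a "newvm" container in the same storage account.
$ctx = New-AzureStorageContext -StorageAccountName "manualvmstorage" -StorageAccountKey "<access key>"
New-AzureStorageContainer -Name "newvm" -Context $ctx
Start-AzureStorageBlobCopy -Context $ctx -SrcContainer "vhds" -SrcBlob "ubvm-osdisk.vhd" -DestContext $ctx -DestContainer "newvm"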


Prepare the master image to use Azure Hybrid Use Benefit in XenDesktop using the on-premises image

If you already have a XenDesktop deployment in your on-premises datacenter, or you don’t have an EA Azure subscription but have on-premises Windows Server licenses that include Software Assurance and therefore want to use an on-premises master image in Azure, follow the steps described below.

If you don’t have an existing master image, deploy a Windows Server VM in your on-premises hypervisor. Install the VDA software, applications, the Azure VM agent, and Azure PowerShell. MCS provisioning requires a specialized image, so you don’t need to run Sysprep on the on-premises VM to create a generalized image. Once you have finished configuring the master image, upload the corresponding VHD to an Azure storage account using the following PowerShell command.

Add-AzureRmVhd -ResourceGroupName "xdResourceGroup" -LocalFilePath "C:\Image\xdVDA.vhd" -Destination "https://xdstorageaccount.blob.core.windows.net/vhds/xdVDA.vhd"

Create Azure MCS catalog to utilize HUB

Ensure that you establish the Azure host connection using Citrix Cloud’s XenApp and XenDesktop service, or XenDesktop 7.12, where support for Azure HUB MCS catalogs is available.

Launch Studio from your Citrix Cloud client portal (or from the Studio console for XenDesktop 7.12) and navigate to Machine Catalogs in the left-hand pane. Right-click Machine Catalogs and click Create Machine Catalog to launch the machine creation wizard.

Click Next on the Introduction page.


On the Operating System page, select Server OS and click Next.

Note: HUB is available only for Windows Server OS.

On the Machine Management page, select Citrix Machine Creation Services (MCS) as the deployment technology, select the Microsoft Azure hosting resource, and click Next.


Master Image Selection – Select the master image VHD prepared from the HUB image in the Azure Marketplace, or the on-premises image uploaded to Azure.


Storage type and License type selection – In previous versions of XenDesktop, this page was used to select the storage type only; it is now updated to select both the storage and the license type. When you select Yes for the license type, you are telling XenDesktop that the master image selected in the previous step is either a HUB image from the Azure Marketplace or an on-premises Windows Server image with Software Assurance. This choice enables HUB for the VDAs provisioned in Azure.

VM instance size selection – XenDesktop shows only those VM instance sizes that are supported for the storage type selected in the previous step. Enter the number of VMs you want to provision, select the VM instance size of your choice, and click Next.

Azure Write Back Cache – Write back cache is now available for Azure MCS catalogs. Refer to the “Configure cache for temporary data” section in the Citrix documentation to learn more about write back cache. Enabling write back cache is optional; if you don’t want it, disable it by clearing the two check boxes on this page.

Network Interface Card Selection – Select the network card and the associated network. Only one network interface is supported.

Select the resource location domain and enter the machine naming scheme.

If you are using Citrix Cloud, enter the credentials for your resource location Active Directory and click Next. On the Summary page, review the catalog summary; you will find the on-premises license set to “Yes”. Enter the catalog name and click Finish to start provisioning.

Once the catalog provisioning is complete, ensure that the provisioned VDAs are utilizing Azure HUB: the VDAs provisioned for this catalog show the license type Windows_Server. A verification sketch follows below.
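One way to verify, assuming you have identified the resource group that MCS created for this catalog (the resource group name below is a placeholder):

# HUB-enabled VDAs report LicenseType = Windows_Server.
Get-AzureRmVM -ResourceGroupName "citrix-xd-<ProvisioningSchemeUid>-xxxxx" | Select-Object Name, LicenseType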

Notes:

  • If you are using PowerShell scripts to provision the MCS catalog, you need to update the CustomProperties details in your script to pass the LicenseType parameter with the value Windows_Server. Check the New-ProvScheme command in the PowerShell output generated by Studio when the MCS catalog was provisioned from Studio; the output shows the custom property that you need to pass for creating a HUB catalog. Once you update your script to use this custom property, you can use the PowerShell script to create a HUB MCS catalog.

  • New-ProvScheme -AdminAddress "xa-controller.xenapp.local:80" -CleanOnBoot -CustomProperties "<CustomProperties xmlns=`"http://schemas.citrix.com/2014/xd/machinecreation`" xmlns:xsi=`"http://www.w3.org/2001/XMLSchema-instance`"><Property xsi:type=`"StringProperty`" Name=`"StorageAccountType`" Value=`"Standard_LRS`" /><Property xsi:type=`"StringProperty`" Name=`"LicenseType`" Value=`"Windows_Server`" /></CustomProperties>" -HostingUnitName "ARMHu1" -IdentityPoolName "Azure HUB Catalog" -InitialBatchSizeHint 2 -LoggingId "bfd0936f-fe3a-4d0a-b3bb-19ca5b309131" -MasterImageVM "XDHyp:\HostingUnits\ARMHu1\image.folder\PreFlightTesting.resourcegroup\manualvmstorage.storageaccount\newvm.container\AZ-HUB-Sr1220161221180543.vhd.vhd" -NetworkMapping @{"0"="XDHyp:\HostingUnits\ARMHu1\virtualprivatecloud.folder\PreFlightTesting.resourcegroup\virtualNetwork.virtualprivatecloud\Subnet.network"} -ProvisioningSchemeName "Azure HUB Catalog" -RunAsynchronously -Scope @() -SecurityGroup @() -ServiceOffering "XDHyp:\HostingUnits\ARMHu1\serviceoffering.folder\Standard_D2_v2.serviceoffering" -UseWriteBackCache -WriteBackCacheDiskSize 127 -WriteBackCacheMemorySize 256
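  • For readability, the CustomProperties payload from the command above can be built as a here-string before calling New-ProvScheme; the following is just a restatement of the values shown above:

# CustomProperties XML that enables HUB (values as in the New-ProvScheme command above).
$customProperties = @"
<CustomProperties xmlns="http://schemas.citrix.com/2014/xd/machinecreation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Property xsi:type="StringProperty" Name="StorageAccountType" Value="Standard_LRS" />
  <Property xsi:type="StringProperty" Name="LicenseType" Value="Windows_Server" />
</CustomProperties>
"@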

  • Remember that only pre-configured HUB images from the Azure Marketplace can be used as the master image. If you use a Windows Server image from the Azure Marketplace that is not a HUB image and prepare it as the master image to provision a HUB MCS catalog, the provisioning will fail with the following error.

    DesktopStudio_ErrorId : ProvisioningTaskError

    ErrorCategory : NotSpecified

    ErrorID : FailedToCreateImagePreparationVm

    TaskErrorInformation : Terminated

    InternalErrorMessage : Error: creating virtual machine failed. Exception=Microsoft.Rest.Azure.CloudException: Long running operation failed with status ‘Failed’.

  • To confirm whether the above failure is really due to the HUB image issue, you can try to deploy a VM in Azure using PowerShell with the same master image VHD. You will notice the VM deployment fails with the following error.

    New-AzureRmVM : Long running operation failed with status ‘Failed’.

    ErrorCode: InternalDiskManagementError

    ErrorMessage: An internal disk management error occurred.

  • Azure Hybrid Use Benefit can only be used for Windows Server OS; it is not supported for Windows Desktop OS.

  • If you deploy an MCS catalog using a HUB image and Yes for the license type on the Storage and License Types page, but your update image is a non-HUB Azure Marketplace image, the catalog update will fail during the image preparation process. Make sure you prepare your update image as a HUB image.

  • If you deploy an MCS catalog using a non-HUB image and No for the license type on the Storage and License Types page, but your update image is a HUB Azure Marketplace image, the catalog update will succeed, but the VDAs will not utilize HUB. For the VDAs to utilize HUB, you need to deploy the first MCS catalog by selecting Yes for the license type.

  • If you have an existing MCS catalog created with a non-HUB image and No for the license type on the Storage and License Types page, it is not possible to migrate that catalog to use HUB. You need to create a new catalog with a HUB master image and Yes for the license type.

  • If you add machines to an existing HUB catalog, the added machines will also have the license type set to Windows_Server.



How To Update and Rollback XenDesktop Azure Resource Manager Catalog

Master Image prepared from VM deployed in Azure

Stand up a VM in Azure using an Azure gallery image, install the VDA software and the necessary tools and applications on it, and use the associated OS disk VHD as the master image.

In this case, it is best to keep the master image VM in the Stopped (deallocated) state while you create the catalog, power it on when you want to make changes to your image, and then use the updated image for the catalog update.

For the scenario described below, I deployed a VM EL-Sr16-RDS in Azure, installed the VDA software and applications on it, and shut down the VM from the portal. From the Azure portal, I can check the OS disk path for this VM.

https://rdshwestusstorage1.blob.core.windows.net/vhds/EL-Sr16-RDS20161013223432.vhd

The path tells me that this VM is hosted in the rdshwestusstorage1 storage account and the vhds container. Using PowerShell, I created a copy of the OS disk VHD into another container, basevhd, in the same storage account. The PowerShell commands used for the image copy are provided below for reference.

Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId SubscriptionID # Provide Azure SubscriptionID

# VHD blob to copy #
$blobName = "EL-Sr16-RDS20161013223432.vhd"

# Source Storage Account Information #
$sourceStorageAccountName = "rdshwestusstorage1"
$sourceKey = "<access key>" # Get the storage account access keys from Azure Portal storage account settings
$sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceKey
$sourceContainer = "vhds"

# Destination Storage Account Information #
$destinationStorageAccountName = "rdshwestusstorage1"
$destinationKey = "<access key>" # Get the storage account access keys from Azure Portal storage account settings
$destinationContext = New-AzureStorageContext -StorageAccountName $destinationStorageAccountName -StorageAccountKey $destinationKey

# Create the destination container #
$destinationContainerName = "basevhd"
New-AzureStorageContainer -Name $destinationContainerName -Context $destinationContext

# Copy the blob #
$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName -DestContext $destinationContext `
    -SrcBlob $blobName -Context $sourceContext -SrcContainer $sourceContainer

# Retrieve the current status of the copy operation #
$status = $blobCopy | Get-AzureStorageBlobCopyState

# Print out status #
$status

# Loop until copy complete #
While($status.Status -eq "Pending"){
    $status = $blobCopy | Get-AzureStorageBlobCopyState
    Start-Sleep 10
    # Print out status #
    $status
}

I created the catalog EL Sr16 RDS Catalog1 using the image EL-Sr16-RDS20161013223432.vhd in the basevhd container, created the Delivery Group EL Sr16 RDS Catalog1 DG1, and published applications. Now I want to update this catalog. To do so, I will power on and log on to the master image VM EL-Sr16-RDS, install updates, and make the required changes. After that, I will shut down the VM from the portal. Again, as a best practice, I will create a copy of this updated image in another container named updatevhd in the same storage account. Now we are ready for the catalog update.

Update Catalog

  1. Select Machine Catalogs in the Studio navigation pane.

  2. Select the catalog and then select Update Machines from the Actions pane.

  3. Click Next on the overview page.


  4. On the Master Image page, select the image you want to use to update the catalog.

  5. Notice that the master image attached to the VM is present in the vhds container, whereas the image used to create the catalog is present in the basevhd container and the image used to update the catalog is present in the updatevhd container of the same storage account.

  6. Click Next to navigate to the next page in the wizard.

  7. You are required to confirm that, if the selected image is attached to a running VM, you have shut down the VM from the portal. At present there is no mechanism in XenDesktop to automatically detect whether the image is standalone, or the power state of the VM it is attached to.

  8. Click Close on the pop-up message.

  9. On the Rollout Strategy page, choose when the machines in the Machine Catalog will be updated with the new master image: on the next shutdown or immediately.

  10. Verify the information on the Summary page and click Finish.

  11. Wait for the catalog update to finish. Each machine restarts automatically once it is updated. You can use the following PowerShell commands from the Controller to find out the machine reboot status.

    asnp citrix*

    Get-BrokerRebootCycle

  12. Ensure that the BrokerRebootCycle output shows the state Completed and that MachinesCompleted equals the number of machines selected for the update; see the sketch after this list.

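A minimal polling sketch for that check, run from the Controller; the 30-second interval is arbitrary:

# Wait for the most recent reboot cycle to finish, then report its counts.
asnp citrix*
do {
    Start-Sleep -Seconds 30
    $cycle = Get-BrokerRebootCycle | Sort-Object StartTime | Select-Object -Last 1
} while ($cycle.State -ne 'Completed')
$cycle | Select-Object State, MachinesCompleted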

Log on to the VDA and confirm that the image is updated with the latest changes.

Master Image prepared from VM deployed in On-Premises Hypervisor

If you already have a XenDesktop on-premises deployment, you will want to use the on-premises image in Azure. You can use a VM in your on-premises hypervisor, install the VDA software, the Azure VM agent, and the necessary tools and applications on it, and upload the associated VHD into an Azure storage account to use it as the master image.

For this case, I used a VM XS-VDA1 on an on-prem XenServer and installed the VDA software, the Azure VM agent, and the required applications. I then exported the VM as a VHD using XenCenter and uploaded the VHD to a storage account in Azure, after which I created the MCS catalog. Now, to update the catalog, I started the VM XS-VDA1 hosted on the XenServer hypervisor, installed updates, shut down the VM, and again exported the updated VM as a VHD to my local machine.

Upload image to Azure

Add-AzureRmVhd -ResourceGroupName OnPremVHDStore -Destination "https://xsvhdwestusstorage.blob.core.windows.net/updatevhd/XS-VDA1-Updated.vhd" -LocalFilePath "D:\XS-VDA\XS-VDA1-Updated.vhd"

To update the catalog, follow the same steps as discussed above.


Create new image for Catalog Update

A new image can be created from scratch by standing up a new VM in Azure or in an on-prem hypervisor and using that image for the catalog update.

Note – In Azure, this scenario is supported only if your catalog, both before and after the update, is using a VDA from XenDesktop 7.12. If you are using a VDA older than XD 7.12, you will observe VDA registration failures after the image update and you will not be able to log on to the VDA. This issue is fixed in XenDesktop 7.12, so if you are using an earlier VDA version you need to maintain a copy of your original master VDA and apply changes on top of that.

Create new image in Azure for Catalog Update

Stand up a new VM in Azure using an Azure gallery image. Install the VDA software and the required tools and applications, shut down the VM from the portal, and, as a best practice, create a copy of the VHD associated with the VM. Use this copy as input for the catalog update process.

In this scenario, I used a VM EL-Sr16-RDS2 in Azure and installed the VDA and the required tools and applications. After that, using the PowerShell script provided earlier in this blog, I created a copy of the image in the updatevhd container in the same storage account. Now you are ready for the catalog update; follow the same steps to complete it.


Create new image in On-Premises hypervisor for Catalog Update

If you are managing images in the on-prem hypervisor, you can use a new VM in the on-prem hypervisor: install the VDA software, the Azure VM agent, and the required tools and applications, shut down the VM, and upload the associated VHD to Azure. Use this uploaded image as input for the catalog update process. Remember, as mentioned earlier, this scenario is supported only when your existing and new catalogs are using a VDA from XD 7.12. There is no need to stand up a VM from this uploaded VHD; you can use it directly for creating or updating the catalog.

I used a VM XS-VDA2 on the on-prem XenServer and installed the VDA software, the Azure VM agent, and the required applications. After that, I exported the VM as a VHD using XenCenter and uploaded the VHD to the Azure storage account. Now you are ready for the catalog update; follow the same steps to complete it.

Rollback Catalog Update

After you roll out a new image, you can roll back the catalog update if there are issues with the updated image. When you roll back, machines in the catalog are reverted to the last working image. Any new features that require the newer image will no longer be available.

The database keeps a historical record of the master images used with each Machine Catalog; this history is used for catalog rollback. So do not delete, move, or rename master images; otherwise, you will not be able to revert a catalog to use them.

Follow the steps below to roll back the update.

  1. Select Machine Catalogs in the Studio navigation pane.

  2. Select the catalog and then select Rollback machine update in the Actions pane.

  3. On the Rollout Strategy page specify when to apply the earlier master image to machines.

  4. On the Summary page review details and submit request.

Maintaining image copies for successful rollback

Let’s see how the catalog rollback behaves when you don’t create a copy of the master image.

  1. We will use the VHD associated with the VM EL-Sr16-RDS to create the first catalog.


  2. XenDesktop records the following image path in the database. The same path is used for image rollback.

    XDHyp:\HostingUnits\ARM-EL-HU1\image.folder\XDARMVDAStore.resourcegroup\rdshwestusstorage1.storageaccount\vhds.container\EL-Sr16-RDS20161013223432.vhd.vhd

  3. To update the catalog, we make changes to the same VM, shut down the VM, and point to the same VHD for the catalog update. Note that the state of the master image has changed from that of the first image due to the update.


  4. XenDesktop records the following image path for the update image.

    XDHyp:\HostingUnits\ARM-EL-HU1\image.folder\XDARMVDAStore.resourcegroup\rdshwestusstorage1.storageaccount\vhds.container\EL-Sr16-RDS20161013223432.vhd.vhd

  5. Now we want to roll back the update. The rollback points to the same image path.


  6. The image at this path was already modified during the update, so even though the image rollback finishes successfully, you will notice that the VDA after the rollback is the same as the VDA after the update. Unfortunately, XenDesktop does not recognize this and does not show any error, so you will be under the impression that the rollback completed successfully when in reality nothing has changed.

  7. You can mitigate this limitation by manually creating copies of the image, as we did by storing the first catalog’s master image in the basevhd container and the update image in the updatevhd container.

  8. In this case, XenDesktop stores the following image paths.

    Create Catalog – XDHyp:\HostingUnits\ARM-EL-HU1\image.folder\XDARMVDAStore.resourcegroup\rdshwestusstorage1.storageaccount\basevhd.container\EL-Sr16-RDS20161013223432.vhd.vhd

    Update Catalog – XDHyp:\HostingUnits\ARM-EL-HU1\image.folder\XDARMVDAStore.resourcegroup\rdshwestusstorage1.storageaccount\updatevhd.container\EL-Sr16-RDS20161013223432.vhd.vhd

  9. Rollback Catalog – In this case, the rollback uses the image path recorded when the catalog was created, that is, the copy in the basevhd container.

Note that since we maintained copies of the image, we have an image in its original state available for the rollback. If you follow this recommended method, you will be able to successfully create, update, and roll back the catalog.

Note:

  1. If you are maintaining the master image VM in Azure, always create a copy of the associated master VHD and use that copy to create the catalog. For a catalog update, after you make changes to your master image, again create a copy of the associated VHD and use that copy to update the catalog. If you follow this method, you can maintain one master image VM in Azure and use it for multiple image updates, rollbacks, and so on.
  2. Do not rename, delete, or move the master image; otherwise, you will not be able to roll back the update if required.
  3. The BrokerRebootCycle after a catalog update/rollback happens only for machines that are added to a delivery group. If you have only created the catalog but not added machines to a delivery group, the machines will not be automatically rebooted after the update/rollback; you will have to manually reboot them from the Studio console for the new image to take effect.
  4. The rollback is applied only to machines that need to be reverted. For machines that have not been updated with the new/updated master image (for example, machines with users who have not logged off), users do not receive notification messages and are not forced to log off.
  5. During the broker reboot cycle after an image update, machines eligible for the reboot are divided into two groups. The reboot cycle is started on all machines in the first group, then waits for at least one machine to register. If no machine registers within the configured timeout, the cycle is abandoned. This is by design and is intended to avoid taking all the machines in a delivery group out of service due to a bad update.
