Citrix Profile Management: VHDX-based Outlook cache and Outlook search index on a user basis.


Behaviour prior to Profile Management 7.18

There have been two Outlook-related performance challenges with Citrix Profile Management.

To best explain, we’ll take a user who has an existing Profile Management 7.17 profile and launches published Outlook (or opens Outlook in a desktop session). Outlook has been configured to use an .OST cache file. The file is typically located at:

C:\Users\<user>\AppData\Local\Microsoft\Outlook\<email address>.OST

In Outlook cached mode, the OST file can be very large, so there is a significant overhead if the OST file is part of Profile Management logon/logoff synchronization.

Actions in Outlook, such as sending and receiving mail, result in changes to the Outlook search index. This data is stored in the Windows search index database, Windows.edb.

This is a machine-based file, which means it holds search index data for all users logging on to the machine. If a user launches published Outlook (or opens Outlook in a desktop session) on a VDA machine that they haven’t accessed previously, or haven’t accessed for a while, the Outlook search index has to be rebuilt in Windows.edb. Outlook searches have to wait until re-indexing finishes. The Windows search index database is typically located at: C:\ProgramData\Microsoft\Search\Data\Applications\Windows\Windows.edb

We can see this design in the image below:

User-added image

Behavior using VHDX-based Outlook cache and Outlook search index on a user basis

The Profile Management 7.18 release introduced a feature to address these Outlook-related performance challenges.

To explain, we’ll use the same user as above, but logging on to a VDA running version 7.18 for the first time.

During the Profile Management logon, the user’s Outlook-related search index is split out of the Windows search index database (Windows.edb) and written to a VHDX file created at:

<PathToUserStore>\<User_information>\VHD\<Platform>\OutlookSearchIndex.vhdx

The remote VHDX file is then mounted locally in the user’s local profile at:

C:\Users\<user>\AppData\Roaming\Citrix\Search.vhdx

From this point, the user has their own profile-based version of the Outlook search index database. Within the Search.vhdx mount-point folder, the Outlook search index database is named <user SID>.edb.

The Outlook .OST file is converted into a VHDX file and stored at:

<PathToUserStore>\<User_information>\VHD\<Platform>\OutlookOST.vhdx

The remote VHDX file is then mounted locally in the user’s local profile at (default):

C:\Users\<user>\AppData\Local\Microsoft\Outlook.vhdx

During the Outlook session, changes to the Outlook search index and Outlook .OST are made directly to their respective VHDX files over SMB.

The feature also requires additional registry settings to be created. Read the Registry Changes section for further information.

When the user logs off the session, both VHDX files are unmounted from the local profile. Because the VHDX files were mounted over SMB, no synchronization is required at Profile Management logoff. The additional registry settings are synchronized to the user’s profile store at logoff (or earlier, if the active write back registry feature is enabled).
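The path layout described above can be sketched in code. The following Python snippet is illustrative only: the user-store path, user name, and platform string are hypothetical placeholders, and the file names follow the locations quoted in this section.

```python
from pathlib import PureWindowsPath

def outlook_vhdx_paths(user_store: str, user: str, platform: str) -> dict:
    """Compose the remote VHDX locations in the user store and the
    local mount points used for the Outlook OST and search index."""
    store = PureWindowsPath(user_store) / "VHD" / platform
    profile = PureWindowsPath(r"C:\Users") / user / "AppData"
    return {
        "search_index_store": str(store / "OutlookSearchIndex.vhdx"),
        "ost_store": str(store / "OutlookOST.vhdx"),
        "search_index_mount": str(profile / "Roaming" / "Citrix" / "Search.vhdx"),
        "ost_mount": str(profile / "Local" / "Microsoft" / "Outlook.vhdx"),
    }

# hypothetical user store path and platform decoration
paths = outlook_vhdx_paths(r"\\fileserver\store\jdoe.DOMAIN", "jdoe", "Win10RS5x64")
```

`PureWindowsPath` builds Windows-style paths regardless of the OS the script runs on, which keeps the sketch portable.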

Feature Enhancements introduced in Citrix Virtual Apps and Desktops 1808

The Profile Management 1808 release extends the Outlook search index feature to support groups explicitly excluded through the Profile Management Excluded Groups policy, as well as groups implicitly excluded through the Profile Management Processed Groups policy (see Feature Limitations for further information).

This release also introduced support for Windows 10 1803.

Feature Limitations


How to Create Machine Catalog using MCS in Azure Resource Manager

Pre-requisites

  • Access to the XenApp and XenDesktop Service of Citrix Cloud.
  • An Azure Subscription.
  • An Azure Active Directory (Azure AD) user account in the directory associated with your subscription, which is also co-administrator of the subscription.
  • An ARM virtual network and subnet in your preferred region with connectivity to an AD controller and Citrix Cloud Connector.
  • “Microsoft Azure” host connection
  • To create an MCS machine catalog, XenDesktop requires a master image that will be used as a template for all the machines in that catalog.

User-added image

Creating Master Image from Virtual Machine deployed in Azure Resource Manager

Create a virtual machine (VM) in Azure using an Azure Resource Manager gallery image with either a Server OS or a Desktop OS (based on whether you want to create a Server OS catalog or a Desktop OS catalog).

Refer to Citrix Documentation – install Citrix VDA software on the VM for more information.

Install the applications that you want to publish using this master image on the VM. Shut down the VM from the Azure Portal once you have finished installing applications. Make sure that the power status for the VM in the Azure Portal is Stopped (deallocated).

User-added image

When creating an MCS catalog, the .vhd file that represents the OS disk associated with this VM is used as the master image for the catalog. If you have experience with the Microsoft Azure Classic connection type in XenDesktop, you would have captured a specialized image of the VM at this stage; with the Microsoft Azure connection type you don’t have to capture the VM image. You only shut down the VM and use the VHD associated with it as the master image.

Create MCS Catalog

This information is a supplement to the guidance in the Create a Machine Catalog article. After creating the master image, you are all set to create the MCS catalog. Follow the steps described below.

  1. Launch Studio from your Citrix Cloud client portal and navigate to Machine Catalogs in the left-hand pane.

  2. Right-click Machine Catalogs and click Create Machine Catalog to launch the machine creation wizard.

  3. Click Next on the Introduction page.

    User-added image

  4. On the Operating System page, select Server OS or Desktop OS based on the type of catalog you want to create and click Next.

    User-added image

  5. On the Machine Management page, select Citrix Machine Creation Services (MCS) as the deployment technology, select the Microsoft Azure hosting resource, and click Next.

    User-added image

Master Image Selection – This page provides a tree view that you can navigate to select the master image VHD. At the topmost level are all the resource groups in your subscription, except those representing MCS catalogs created by XenDesktop. When you select and expand a particular resource group, it shows the list of all storage accounts for Azure Unmanaged Disks in that resource group. If there are no storage accounts in that resource group, there will not be any child items under it. If you have manually created a number of resource groups and storage accounts to host your manually created VMs, the master image page will show all those resource groups, storage accounts, containers, and VHDs, even though not all of those VHDs are master images that you want to use for provisioning. Select the storage account that has your master image. When you expand the storage account, it shows the list of containers inside it. Expand the container that has the master image VHD and select the VHD that you want to use as the master image for the catalog.

User-added image

In the case of an Azure Unmanaged Disk, you need to know the VHD path in order to select it. If you have stood up a VM in Azure, prepared it to be used as a master image, and want to know the VHD path, follow the steps below:

  1. Select the resource group that has your master image VM.

  2. Select the master image VM and click Settings.

  3. Click Disks, then click OS Disks and copy the disk path.

    User-added image
    User-added image

  4. The OS disk path is structured as https://<storage account name>.blob.core.windows.net/<container name>/<image name>.vhd

  5. You can use the disk path obtained in the step above to navigate the tree view and select the image.
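Given such a URL, the storage account, container, and blob name can be pulled apart programmatically. A minimal Python sketch (the example URL is hypothetical):

```python
from urllib.parse import urlparse

def parse_os_disk_url(url):
    """Split an unmanaged-disk URL of the form
    https://<storage account name>.blob.core.windows.net/<container>/<image>.vhd
    into (storage_account, container, blob_name)."""
    parsed = urlparse(url)
    account = parsed.netloc.split(".")[0]
    container, blob = parsed.path.lstrip("/").split("/", 1)
    return account, container, blob

# hypothetical example URL
parts = parse_os_disk_url("https://mystorageacct.blob.core.windows.net/vhds/master-image.vhd")
# -> ('mystorageacct', 'vhds', 'master-image.vhd')
```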

In the case of an Azure Managed Disk, it will be available directly under the resource group that you created, or as part of the virtual machine’s resource group, as shown below:

Note: If you don’t shut down the master image VM and you select the corresponding VHD to create a catalog, the catalog creation will fail. If you are selecting a VHD that is attached to a VM instance, make sure the VM is in the Stopped (deallocated) state.

  1. Storage type selection – XenDesktop supports Locally Redundant Standard or Premium storage for provisioning VMs in Azure. Your master image VHD can be hosted in any type of storage account, but for the VMs to be provisioned in Azure, XenDesktop creates new storage accounts based on the storage type you select.

    User-added image

  2. XenDesktop will provision a maximum of 40 VMs in a single storage account due to IOPS limitations in Azure. For example, if you want to create a 100-VM catalog, you will find 3 storage accounts created, with 40, 40, and 20 VMs respectively.

  3. VM instance size selection – XenDesktop shows only those VM instance sizes that are supported for the storage type selected in the previous step. Enter the number of VMs, select the VM instance size of your choice, and click Next.

    User-added image

  4. Network card selection – Select the network card and the associated network. Only one network card is supported.

    User-added image

  5. Select the resource location domain and enter the machine naming scheme.

    User-added image

  6. Enter credentials for your resource location Active Directory.

    User-added image

  7. Review the catalog summary, enter the catalog name and click Finish to start provisioning.

    User-added image

  8. Once the provisioning is complete, you will find a new resource group created in your Azure subscription, which hosts all the VMs, storage accounts, and network adapters for the catalog you provisioned. The default power state for the VMs after provisioning is Stopped (deallocated).

    User-added image
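The 40-VMs-per-storage-account distribution described in step 2 above can be sketched as a simple calculation (Python, illustrative only):

```python
def storage_account_distribution(total_vms, per_account=40):
    """Split a catalog across storage accounts, at most 40 VMs each."""
    counts = []
    remaining = total_vms
    while remaining > 0:
        counts.append(min(per_account, remaining))
        remaining -= per_account
    return counts

print(storage_account_distribution(100))  # -> [40, 40, 20]
```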

Once the provisioning is complete, you will find a new resource group created in your subscription that has the VM RDSDesk-01 (per the naming scheme we provided), the NIC corresponding to that VM, and a storage account that XenDesktop created to host the OS disk and the identity disk for the VM. The VM is hosted on the same network as the hosting resource selected during catalog creation, and its default power state is Stopped (deallocated).

The resource group created by XenDesktop during MCS provisioning has the following naming convention:

citrix-xd-<ProvisioningSchemeUid>-<xxxxx>

To find out which resource group in the Azure portal corresponds to the catalog you created from studio, follow the steps below.

  1. Connect to your XenApp and XenDesktop service using the Remote PowerShell SDK. Visit this link to find out how to interact with your Citrix Cloud environment using the Remote PowerShell SDK.
  2. Run the command Get-ProvScheme -ProvisioningSchemeName <Catalog Name>.
  3. Note down the ‘ProvisioningSchemeUid’ from the output of the above command.
  4. Go to the Azure portal and search for the resource group name that contains the ‘ProvisioningSchemeUid’ you obtained in step 3.
  • Note:

    As a best practice, you should always create a copy of your master image and use the copied image as input to the provisioning process. In the future, if you want to update the catalog, you can start the master image VM, make the necessary changes, shut it down, and again create a copy of the image, which becomes your update image. This lets you use the master image VM to create multiple image updates.

    Remember to shut down the master image VM from the Azure portal before starting to create the catalog. The master image is copied into the catalog’s storage account once provisioning starts, so make sure it is not in use by any VM; otherwise the image copy will fail and, eventually, provisioning will fail.

  • Make sure you have sufficient cores and NIC quota in your subscription to provision VMs; these are the two quotas you are most likely to run out of. You may not be able to check your subscription quota limits.
  • If your master image VM is provisioned in a Premium storage account, just shutting down the VM from the portal isn’t enough. You also need to detach the disk from the VM to use it as a master image in provisioning. However, in Azure Resource Manager you cannot detach the disk while the VM still exists, so you need to delete the VM from the portal; this deletes only the VM and keeps the OS disk in the storage account. The NIC corresponding to the VM also needs to be deleted separately.
User-added image
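Assuming resource group names follow the citrix-xd-<ProvisioningSchemeUid>-<xxxxx> convention above, matching a catalog’s ProvisioningSchemeUid to its resource group can be sketched as follows (Python; the group names and Uid below are hypothetical):

```python
import re

def find_catalog_resource_groups(groups, provisioning_scheme_uid):
    """Return resource groups matching citrix-xd-<ProvisioningSchemeUid>-<xxxxx>."""
    pattern = re.compile(
        r"^citrix-xd-%s-.+$" % re.escape(provisioning_scheme_uid), re.IGNORECASE)
    return [g for g in groups if pattern.match(g)]

# hypothetical resource group names and ProvisioningSchemeUid
groups = [
    "my-infra-rg",
    "citrix-xd-8a7d52f1-0000-4b2c-9f1e-123456789abc-abcde",
]
matches = find_catalog_resource_groups(groups, "8a7d52f1-0000-4b2c-9f1e-123456789abc")
```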


Citrix Provisioning does not support VHD vDisks on 4Kn storage – Console Error: 0x00000057 – Invalid Parameter

There are two possible solutions:

1. Upgrade your existing VHD vDisks to the VHDX file format.

– This can be done by running the Citrix Provisioning vDisk Creation Wizard and selecting the VHDX format.

– Alternatively, you can leverage Hyper-V to properly convert the format from VHD to VHDX.

https://docs.microsoft.com/en-us/powershell/module/hyper-v/convert-vhd?view=win10-ps

2. Use storage disks with 512-byte sectors.

Guidelines with regard to vDisk sizing and storage disk compatibility according to Microsoft

1. VHD (512 logical sector size) must be on storage with 512 logical sector size. No RMW.

2. VHD (512 logical sector size) does not support storage with 4096 logical sector size. PVS has not implemented RMW for performance reasons.

3. VHDX (512 logical sector size) can be on storage with 512 logical sector size. No RMW.

4. VHDX (512 logical sector size) can be on storage with 4096 logical sector size (PVS implements RMW). Citrix recommends VHDX with a 4096 logical sector size for optimal performance.

5. VHDX (4096 logical sector size) can be on storage with 512 or 4096 logical sector size. No RMW.

*Read-Modify-Write (RMW) results in degraded overall disk performance.
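The five rules above can be condensed into a small lookup. The sketch below (Python, illustrative only) returns whether a vDisk/storage pairing is supported and whether RMW applies:

```python
def vdisk_storage_support(fmt, disk_sector, storage_sector):
    """Return (supported, rmw_needed) for a vDisk format and logical sector
    size on storage with the given logical sector size, per the rules above."""
    if fmt == "VHD":
        # VHD is 512-byte only; PVS does not implement RMW for VHD
        return (storage_sector == 512, False)
    if fmt == "VHDX":
        if disk_sector == 512 and storage_sector == 4096:
            return (True, True)  # the one pairing where PVS performs RMW
        return (True, False)
    raise ValueError("unknown vDisk format: %s" % fmt)

print(vdisk_storage_support("VHD", 512, 4096))   # -> (False, False)
print(vdisk_storage_support("VHDX", 512, 4096))  # -> (True, True)
```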


Published Image Has Digits Appended After the Image Name

In our example there was a failed publish image task on March 11th. This left the image vhd in the Layered Images folder.

Looking at the ELM logs export, one can see the below:

1) The “Camlogfile” shows the error “File already exists” as the new image is being written to disk. The new name is then saved with the additional characters.


2019-03-26 10:40:45,484 INFO DefaultPool2 JobStepInterceptor: JobId: 38928393 JobStep: CreateDiskJobStep -> Execute

2019-03-26 10:40:45,522 WARN DefaultPool2 VirtualDiskService: Exception trying to create file: Could not create file "/mnt/repository/Unidesk/Layered Images/ELM Testing.vhd". File already exists.

2019-03-26 10:40:45,531 INFO DefaultPool2 VirtualDiskService: Created placeholder file ELM Testing_130.vhd under /mnt/repository/Unidesk/Layered Images

2) “ls-al.txt” shows the existing vhd with the same name. This causes the additional characters, ‘_130’, to be added to the name of the newly published image, as noted in camlogfile. Any other image with the same name will have different, random characters added.


/mnt/repository/Unidesk/Layered Images: total 5332420

-rw-r--r-- 1 root root 7572733440 Mar 11 09:20 ELM Testing.vhd
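The renaming behavior seen in the log can be sketched as follows: if the target file name already exists, a numeric suffix is appended until a free name is found. This Python sketch only mirrors the observed behavior; the ELM’s actual suffix-selection logic is not documented here, and the demo runs against a temporary directory.

```python
import os
import random
import tempfile

def placeholder_name(directory, base, ext=".vhd"):
    """If <base><ext> already exists in directory, append a random numeric
    suffix until a free name is found (mirrors the '_130' seen in the log)."""
    candidate = base + ext
    while os.path.exists(os.path.join(directory, candidate)):
        candidate = "%s_%d%s" % (base, random.randint(1, 999), ext)
    return candidate

# demo: pre-create the original name so a suffixed name is generated
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "ELM Testing.vhd"), "w").close()
    new_name = placeholder_name(d, "ELM Testing")
```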


How to Attach Additional Disk to NetScaler MAS

  • In the hypervisor, attach an additional disk of the required disk size to NetScaler MAS virtual machine.

    For example, for a NetScaler MAS virtual machine with a 120 GB disk, if you want to increase its disk space to 200 GB, you need to attach a disk of 200 GB instead of 80 GB. The newly attached 200 GB disk is used to store the database data and NetScaler MAS log files; the existing 120 GB disk continues to store core files, operating system log files, and so on. The new disk does not have to be formatted. When adding the disk, make sure it is added under the primary controller (IDE Controller 0). Do not create a new controller for the disk.

    Note: For best performance, use a fast HDD or an SSD.


    Space not freed up after force-canceling a task

    When a task is force-canceled, no clean-up is performed, so temporary disks can be left in the Layering Service repository. You need to delete these manually, by logging in as root or through any file manager such as WinSCP or FileZilla.

    Files can be left in one of these two folders:

    /mnt/repository/Unidesk/Packaging Disks

    /mnt/repository/Unidesk/Layered Images

    If no tasks are running, then there should be no files in either folder. If there are tasks running, please allow them to complete or be canceled before proceeding.

    Here is an example of “ls -l” with two force-canceled layer edit tasks, one for a layer named TEST APP and one for a layer named TEST. Since those tasks have already been force-canceled, you can delete all 4 files and get 23 GB back in your repository.

    /mnt/repository/Unidesk/Packaging Disks

    total 23158606

    -rw-r--r-- 1 root root 10752833024 Feb 6 15:53 TEST APPBoot.vhd

    -rw-r--r-- 1 root root 138452480 Feb 6 15:53 TEST APPPkg.vhd

    -rw-r--r-- 1 root root 12619631104 Feb 6 12:09 TESTBoot.vhd

    -rw-r--r-- 1 root root 203496448 Feb 6 12:09 TESTPkg.vhd

    Similarly, here are four temporary files for publishing operations which were not cleaned up properly. Again, if there are no running tasks, you can safely delete these files.

    /mnt/repository/Unidesk/Layered Images:

    total 142595488

    -rw-r--r-- 1 root root 37246370304 Dec 19 18:33 Windows 10 Standard V2_237.vhd

    -rw-r--r-- 1 root root 37246370304 Dec 19 17:58 Windows 10 Standard V2_50.vhd

    -rw-r--r-- 1 root root 37147780096 Dec 5 13:07 Windows 10 Standard V2.vhd

    -rw-r--r-- 1 root root 38611949568 Oct 16 09:31 Windows 10 Test V2.vhd

    If you happen to delete any of these files while a task is running, the task will fail. But subsequent tasks will run normally.
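A cautious clean-up of the two repository folders can be scripted. The sketch below (Python) only lists candidate files and refuses to return anything while tasks are running, per the warning above; the demo runs against a temporary directory standing in for the real repository.

```python
import os
import tempfile

# repository folders named in this article
REPO_DIRS = [
    "/mnt/repository/Unidesk/Packaging Disks",
    "/mnt/repository/Unidesk/Layered Images",
]

def leftover_files(dirs, tasks_running):
    """Return leftover virtual disk files that are safe to delete.
    Returns nothing while tasks are running, because deleting a disk
    that a running task is using makes that task fail."""
    if tasks_running:
        return []
    found = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if name.lower().endswith((".vhd", ".vhdx")):
                found.append(os.path.join(d, name))
    return found

# demo against a temporary directory standing in for the repository
with tempfile.TemporaryDirectory() as repo:
    open(os.path.join(repo, "LeftoverBoot.vhd"), "w").close()
    safe = leftover_files([repo], tasks_running=False)
    blocked = leftover_files([repo], tasks_running=True)
```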


    Can't create App Layering image for MCS in Azure, hangs during Windows Setup

    To deploy an MCS image to Azure, you decide to use the Azure connector in App Layering. This produces a virtual disk that is primed to run through Windows Setup. You attach it to a VM, power it on, and find that Windows Setup never completes. If you look at the console screen shot in the debugging page in Azure, you see an image like this:


    That is a black screen that says Getting Ready, with a CMD (command prompt) window titled ERROR HANDLER at the path C:\Windows\system32\oobe. This means Windows Setup failed. You can download the VHD and open it up to read Windows\setupact.log, Windows\panther\setupact.log, and Windows\logs\CBS\cbs.log, but that won’t help too much.


    Performing command line Image level restores of Hyper-V VMs with NMM

    NMM offers the option of a flat file restore of a virtual machine (full VM restore) from the command line. There is no support for performing a flat file restore from the GUI. Below are some scenarios where a flat file restore can be useful:

    1. Image level recovery from the GUI is failing. A flat file restore may not give you a different result, but because it eliminates the use of the VSS framework during restore, it simplifies the restore. A flat file restore can be used in this case as a troubleshooting mechanism.
    2. A flat file restore can be performed from a non-Hyper-V host that is not part of the same domain as the Hyper-V host. The host needs NMM to be installed.
    3. Because a flat file restore does not interact with Hyper-V, it can be used to recover VMs without any dependency on the Hyper-V version; for example, a backup from a Hyper-V 2016 host can potentially be recovered to a Hyper-V 2012 host.



    After performing the flat file restore, the VM will need to be imported in Hyper-V Manager.

    Pre-requisites for performing flat file restore:



    1. The user performing the restore needs to be a member of the NetWorker Operators group.
    2. The user needs to be either a member of the local administrators group or have sufficient rights to perform recovers on the host.
    3. The recovery host OS should ideally be the same version as the source Hyper-V host.





    Procedure:

    Step 1. Identify the backup that you need to restore with the mminfo command. For example: (If the command is run from the client, add the ‘-s <nwserver>’ switch.)



    mminfo -avot -q client=thor.bronte.local -r "client,nsavetime,savetime(25),sumsize,sumflags,level,name"

    thor.bronte.local 1548449822 1/25/2019 3:57:02 PM 58 MB cb full APPLICATIONS:\Microsoft Hyper-V\Ad1\ConfigFiles

    thor.bronte.local 1548449823 1/25/2019 3:57:03 PM 7967 MB cb full APPLICATIONS:\Microsoft Hyper-V\Ad1\DF9EBEC6-63A2-43FC-9180-17C5BC720DCF

    thor.bronte.local 1548449828 1/25/2019 3:57:08 PM 5120 KB cb full APPLICATIONS:\Microsoft Hyper-V\Ad1\D4BF4D6E-F82D-4875-9E10-800DD93F05A6

    thor.bronte.local 1548450034 1/25/2019 4:00:34 PM 10 KB cb incr APPLICATIONS:\Microsoft Hyper-V\Ad1

    Choose the ‘nsavetime’ of the metadata/cover save set, e.g. "APPLICATIONS:\Microsoft Hyper-V\Ad1" in the output above.
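Picking the cover save set out of the mminfo output can be automated: it is the entry whose save-set name carries no trailing component. A Python sketch using the example entries above:

```python
def cover_saveset(entries):
    """Given (saveset_name, nsavetime) pairs from mminfo output, return the
    nsavetime of the metadata/cover save set: the entry whose name has the
    fewest path components (no trailing GUID or ConfigFiles component)."""
    name, nsavetime = min(entries, key=lambda e: len(e[0].split("\\")))
    return nsavetime

# the save sets from the mminfo example above
entries = [
    ("APPLICATIONS:\\Microsoft Hyper-V\\Ad1\\ConfigFiles", 1548449822),
    ("APPLICATIONS:\\Microsoft Hyper-V\\Ad1\\DF9EBEC6-63A2-43FC-9180-17C5BC720DCF", 1548449823),
    ("APPLICATIONS:\\Microsoft Hyper-V\\Ad1\\D4BF4D6E-F82D-4875-9E10-800DD93F05A6", 1548449828),
    ("APPLICATIONS:\\Microsoft Hyper-V\\Ad1", 1548450034),
]
print(cover_saveset(entries))  # -> 1548450034
```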

    Issue the restore command as below:



    nsrnmmrc -s ad2016 -c thor.bronte.local -A NSR_SNAP_TYPE=rct -x c:\restore -t 1548450034 "APPLICATIONS:\Microsoft Hyper-V\Ad1\\"

    The above command requires the NSR_SNAP_TYPE parameter. Because the backup was done using RCT, that value needs to be specified with this parameter.



    Also note that this command is sensitive to the order of operands; use the order of operands as shown above. The correct command:

    nsrnmmrc -s ad2016 -c thor.bronte.local -A NSR_SNAP_TYPE=rct -x c:\restore -t 1548450034 "APPLICATIONS:\Microsoft Hyper-V\Ad1\\"

    Example of an incorrect command (the same command as above, except the order of the operands is changed, i.e. -A is used after -x):



    nsrnmmrc -s ad2016 -c thor.bronte.local -x c:\restore -t 1548450034 -A NSR_SNAP_TYPE=rct "APPLICATIONS:\Microsoft Hyper-V\Ad1\\"





    This command fails with the following error:



    Recovery process has failed to locate correct logical savetime for requested selections — error 0x80004005.

    Also, the trailing ‘\’ in the save set name is required; it’s not a typo. It is mandatory, otherwise the restore will fail.

    The operand ‘-x’ specifies the destination directory for the restore. Make sure this has enough space for the restore; the restore command does not check for this ahead of time.
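Since the restore command does not pre-check destination space, verifying it yourself first is cheap. A Python sketch (the '.' path stands in for the -x destination directory):

```python
import shutil

def has_free_space(dest, required_bytes):
    """Check destination free space before running nsrnmmrc -x <dest>;
    the restore command itself does not pre-check this."""
    return shutil.disk_usage(dest).free >= required_bytes

enough = has_free_space(".", 1024)  # '.' stands in for the -x destination
```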



    When the restore is successful, it is reported as below:



    CLI-Restore1.png







    For RCT backups, a flat file restore restores the vhd/vhdx files of the VM. It does not restore the configuration files (the VMCX and VMRS files), so the VM cannot be imported into Hyper-V Manager. However, a VM can be created and the recovered vhd/vhdx file associated with it to rebuild the VM. Because the configuration files are not recovered, VM settings such as memory, CPUs, and network are not recovered.

    When the backup type is VSS, a flat file restore restores all the files, including the configuration files, and these can be used to import the VM.



    This concludes the procedure for performing a flat file restore of a VM from an NMM Hyper-V backup.


    NMM Hyper-V Image Level Restore

    This article describes the procedure to perform an Image Level Restore (full VM restore) of Hyper-V VMs backed up with NMM 18.1. Image level restore is used to recover from loss or corruption of a virtual machine. This article demonstrates the restore procedure for Hyper-V 2016 server backups with NMM 18.1.

    The following are pre-requisites for performing this restore successfully. The restore can be done from the source Hyper-V node, from any node in the Hyper-V cluster, or from a non-Hyper-V host:



    1. If the restore is done from a host that is not part of the Hyper-V cluster, ensure it has the same OS as the source Hyper-V cluster node; e.g., if the source node is Hyper-V 2016, do the restore from a Windows Server 2016 host rather than a Windows Server 2012 host.



    2. The restore host (the host from where the restore is performed) needs to be in the same domain as the Hyper-V host.



    3. The restore host needs to have a client resource on the NetWorker server.



    4. The user needs to be a member of the ‘Operators’ group on the NetWorker server, or be listed under the ‘Remote access’ attribute of the source Hyper-V client.



    5. The user should be a domain user that is a member of the local administrators group on the target Hyper-V node and on the host from where the restore is initiated. Note that for SMB VM restores to the original location, you will need additional rights on the SMB share, including rights to delete any user’s files and folders on the share and rights to create files and folders on the share. If the restore is done to an empty folder on the share, these rights may not be required. If the restore fails due to lack of permissions, the error message will indicate so.



    Procedure:



    1. Start the NMM GUI and select the cluster client. For a standalone server, select the host name of the source Hyper-V client.

    image-restore1.png

    2. Select ‘Hyper-V Recover Session’ -> ‘Image Recovery’.



    image-restore2.png



    3. Browse for the desired backup time and select the desired VM.

    image-restore3.png



    4. Click ‘Recover…’ and additional recovery choices are shown below:



    image-restore4.png

    The default is to recover to the original location using the current cluster owner Hyper-V node. In the example above, the host to recover to is ‘Hypv2016N2’; NMM picked this node by default because it was the cluster owner node at the time of restore.

    Note: Restore to the original location will overwrite the source virtual machine.



    5. When you click Next, the restore summary is shown. Notice the warning; proceed if you need to replace the existing VM with a backup copy.



    image-restore5.png



    Relocating the Virtual Machine during Restore.



    1. Select the VM from the desired backup time, click ‘Recover…’, and choose ‘Recover the virtual machine to an alternate location’. (Note: NMM does not support relocation of SMB VMs.)



    image-restore6.png

    2. After you click Next, specify the new path for the VM location. Here we are moving the VM from its original path on ‘Volume2’ to ‘Volume1’. Change the path for the ‘VHD’ files and also the path for the ‘Configuration’ files for the VM, then click ‘Next’.

    image-restore7.png

    3. The message below indicates ‘Restore and registration completed successfully.’ Note that when the VM is relocated to a different path, the VM files (VHDs and configuration files) in the original path are not deleted. If they are no longer required after relocating the VM to a new path with the restore, delete these files manually.



    image-restore8.png

    If the VM is not seen as a highly available VM in Failover Cluster Manager (this happens when a VM is relocated to a different path during restore), follow the steps below to configure the VM as a highly available VM:



    Step 1. Start the Failover Cluster Manager. Right-click ‘Roles’ and select ‘Configure Role.’



    image-restore9.png



    Step 2. Click ‘Next’ on the informational page:



    image-restore10.png



    Step 3. Select ‘Virtual Machine’ under ‘Select Role’ and click ‘Next’.



    image-restore11.png



    Step 4. Pick the virtual machine just recovered and click ‘Next’.



    image-restore12.png



    Step 5. Failover Cluster configures the VM as highly available. Note that if the VM was recovered to a local path rather than a CSV or SMB path, the VM cannot be configured as highly available.



    image-restore13.png



    image-restore14.png




    This concludes the procedure for Image Level Restore of Hyper-V.

    For Hyper-V Granular Level Restore refer to article:
