XenApp 7.15 : Cannot update Machine Catalogue “PBM error occurred during PreProcessReconfigureSpec: pbm.fault.PBMFault”

After updating a machine catalog that was created using Machine Creation Services (MCS), virtual machines hosted on vSAN 6 or later might fail to start. The following error message appears in the VMware console:

“A general system error occurred: PBM error occurred during PreProcessReconfigureSpec: pbm.fault.PBMFault; Error when trying to run pre-provision validation; Invalid entity.”

Related:


App Layering: Machine Time on a Published Image is Wrong at First Boot

You can always manually set the time once the machine starts, but that might be a pain to remember to do every time you publish a new image.

The initial clock time in Windows on a physical machine comes from a battery-powered clock on the motherboard (called TOD for Time Of Day), which is set to the local time in Windows’ current timezone. In a virtual machine, the virtualized TOD clock is set by the hypervisor at bootup. Since hypervisors normally know the time in GMT rather than a local timezone, your hypervisor has to know what “local time” is for your Windows instance in your virtual machine before it powers on. If the hypervisor doesn’t know the conversion factor for the VM’s local timezone, the initial time can be off by hours. Hypervisors learn the machine’s local time zone pretty quickly, but it means that the first boot for any VM is usually wrong.

In a published App Layering image, unless your template is derived from a VM that was originally a full Windows machine set to the correct timezone, the first boot usually has bad clock time. However, if your Platform Layer was added to the domain, your published VM should also have the correct information for how to sync its clock with the Domain Controller.

So make sure your Platform Layer was joined to the domain, so it can immediately correct the clock discrepancy.

Otherwise, consider setting this registry key so that Windows will treat the motherboard clock as being in UTC rather than the local timezone:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]

“RealTimeIsUniversal”=dword:00000001
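
If you prefer to script the change, here is a minimal PowerShell sketch (run elevated; it assumes the standard TimeZoneInformation key shown above):

# Tell Windows to treat the hardware clock as UTC rather than local time.
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\TimeZoneInformation' -Name 'RealTimeIsUniversal' -PropertyType DWord -Value 1 -Force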

Some hypervisors store the local timezone offset for a VM as a virtual motherboard resource. When Windows is running, every time it updates the clock time, it sets the motherboard resource to be the correct time. This is how your hypervisor finds out what the timezone offset for this VM is: because Windows is always writing local time to the motherboard, all your hypervisor has to do is compare the motherboard resource for the TOD clock to the hypervisor’s own clock. That timezone offset is an attribute of the VM itself, not part of Windows and not part of the virtual disk.

Note that Nutanix does not currently notice and record the time zone offset of a VM. You would need to set it manually. See this thread, for instance:

https://next.nutanix.com/installation-configuration-23/windows-vm-time-issues-22562

It may be worthwhile to generate a new template for your Connector, by having (or building) a Windows VM that has booted in the correct time zone. If you have a template you want to continue using, for instance, convert it to a VM, attach a bootable Windows disk (or boot from PVS or something like that – it’s just important that Windows run on this machine), power the machine on, and set the clock correctly. When you adjust the clock, Windows writes it to the motherboard, and your hypervisor records the offset in the virtual machine parameters. Then you can shut the machine down, remove any extra disks, and convert it back to a template.

You can also just take a working Windows machine with a correct local time, shut it down, clone it, remove any extra disks, and convert that to a VM template. This is one good reason to make a template out of a clone of your original Gold VM that you imported your OS Layer from: it already has all the virtual hardware parameters you want, including the local clock offset. Now that your template includes the current timezone offset, your hypervisor will be able to set the initial motherboard TOD clock correctly, meaning Windows has the correct time immediately and doesn’t need to wait for a jump when AD comes in to set the clock.

Configure your Connector to use this template so that newly published images will be correct. If you are using PVS, you should also use this template to build your Target Machines so that the virtual hardware of your Target Machines matches the hardware your layers were built from, including the local timezone offset.

Note that it’s also possible to have your hypervisor’s internal clock wrong. Also, your PVS server will try to set the machine’s clock based on the PVS server’s local clock. If any of these are wrong, you will need to get them synched as well.

Related:

FAQ: Personal vDisk in XenDesktop

Q: Can multiple PvDs be associated to a device/user?

A: There can only be one PvD per Virtual Machine. The PvD is assigned to a Virtual Machine when the catalog of desktops is built. The pool type for a PvD catalog is pooled static, in which the desktop is assigned to the user on first use.

Q: Is the PvD a 1-1 mapping per user?

A: Actually, it is a 1:1 mapping to a Virtual Machine in a catalog, which is then assigned to the user on first use. A PvD is attached to a Virtual Machine assigned to the user. The administrator can move a PvD to a new virtual machine in a recovery situation.

Q: If you create a pooled catalog with PvD, doesn’t that mean the user is always assigned to that Virtual Machine, defeating one of the benefits of a pooled catalog?

A: The base image is still shared and updated across the pool. However, once the user makes an initial connection to a Virtual Machine, the Virtual Machine is kept assigned to the user.

Note: The PvD must attach early in the boot process, long before the user is known, in order to maximize application compatibility for services, devices, and so on.

Q: How does the pooled with personal vDisk catalog affect idle pool?

A: After the user connects, this user is kept assigned to the Virtual Machine.

Because the PvD must attach early in the boot process, long before the user is known (to maximize application compatibility for services, devices, and so on), the assignment cannot be undone. So for hypervisor resource management, use power management instead of idle pool management to handle idle Virtual Machine workloads.

Q: What Operating Systems are supported for PvD?

A: Windows 7 x86, Windows 7 x64, and Windows 10 up to v1607.

Q: Does Citrix 7.15 LTSR support Windows 10 1703 Semi-Annual Servicing Channel (SAC)?

A: Yes, XenApp and XenDesktop 7.15 LTSR supports Windows 10 1703 SAC. Reference Citrix product documentation for more information.

Q: Is PvD only for Desktop Operating Systems or will it also work with Server Operating Systems?

A: It is only supported on Desktop Operating Systems.

Design and Deploy

Q: What kinds of risks are there for BSODs with PvDs?

A: PvD is architected to be compatible with a wide range of Windows software, including software that loads drivers. However, drivers that load in phase 0, or software that alters the networking stack of the machine (through the installation of additional miniports or intermediate or protocol drivers), might cause PvD to not operate as expected. You must install these types of software in the base Virtual Machine image.

Related:

Error: “Preparation of the Master VM Image failed” when Creating MCS Catalog in XenApp or XenDesktop

This article contains information about cases where creating a Machine Creation Services (MCS) catalog in XenApp or XenDesktop fails with the error “Preparation of the Master VM Image failed”.

Background

As part of the process of creating a machine catalog using MCS, the contents of the shared base disk are updated and manipulated in a process referred to as Image Preparation. Under some conditions, this Preparation step can fail completely without generating any information that can be used to derive the exact cause. Possible reasons for this are:

  1. The Machine Image does not have the correct version of the Citrix VDA software installed.

  2. The virtual machine used to perform the preparation never starts or takes so long to start that the operation is cancelled with the following symptoms:

  • Blue screen on boot.

  • No operating system found.

  • Slow start/execution of the virtual machine. This particularly affects Cloud based deployments where machine templates might have to be copied between different storage locations.

Triaging Issues

The issue detailed under Point 1 in the Background section should be easy to diagnose using the following procedure:

  1. Start the original master Virtual Machine (VM).

  2. Ensure that the correct version of the Citrix VDA software has been installed using XenDesktopVdaSetup.exe.

Notes

  • It is not sufficient to install individual MSI installers, as the complete VDA functionality is provided by several distinct components which XenDesktopVdaSetup will install as required.

  • If a VDA from XenDesktop 5.x is installed (for example, to support Windows XP or Vista), it does not support image preparation. In that case, select the option in Studio to tell the system that an older VDA is in use; this instructs the services to skip the image preparation operation.

For the issues detailed in Point 2, more diagnosis is required as explained in this section:

Caution! Refer to the Disclaimer at the end of this article before using Registry Editor.

Note: The preparation step will be performed by a VM named Preparation – <catalog name> – <unique id>. The Hypervisor management console can be used to check that this VM manages to start and progress past the point of starting the guest operating system. If this does not occur, check that the snapshot used to create the catalog was made of a fully functioning VM.

MCS supports only a single virtual disk, so ensure that your master image uses only a single disk.

If the VM manages to start Windows, information will be required from inside the running VM. Because the machine will be forcibly stopped if the preparation step does not complete in the expected time, it is first necessary to prevent this from happening. To do this, run a PowerShell console from the Studio management console and issue the following command:

Set-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown -Value $True -AdminAddress <your Controller Address>

This is a global configuration option and will affect all catalog creation operations, so ensure that no other creation operations are being run whilst you are triaging your issues. It also requires the issuing user to be a Full Administrator of the XenDesktop site.

After this property is set, the preparation VM will no longer shut down automatically or be forced down when the allowed time expires.
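
To confirm the flag is in place, you can read the provisioning configuration back. A hedged sketch, assuming Get-ProvServiceConfigurationData (the read counterpart of the Set/Remove cmdlets used in this article) returns the stored name/value pairs:

# List the provisioning configuration and pick out the no-auto-shutdown flag.
Get-ProvServiceConfigurationData -AdminAddress <your Controller Address> | Where-Object { $_.Name -eq 'ImageManagementPrep_NoAutoShutdown' }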

  1. To obtain additional information from the preparation operation, enable diagnostic logging for the Machine Creation components in the master image. To do this, set the following DWORD registry value to 1 in the master image and create a new snapshot of the master image (a scripted example appears after this procedure).

    HKLM\Software\Citrix\MachineIdentityServiceAgent\LOGGING

    After this is set, the image preparation operation will create log files in C: on the preparation VM. The log files are “image-prep.log” and “PvsVmAgentLog.txt”.

  2. When the preparation VM is running, use the Hypervisor management console to log into the VM.

  3. Check that image-prep.log exists in C: and open it.

  4. Check for errors in the log file. These might be sufficient for the problem to be resolved directly, otherwise the details must be used in reporting an issue to Citrix Support.

  5. Because the preparation VM is created with its network adapters disconnected (to isolate it from the rest of the Active Directory domain), it is not possible to copy the log file to an external destination. Screenshots of the VM console are the best way to obtain a report. Ensure that all parts of the log file are captured.
  6. When finished with the investigations, remove the global setting as follows:

    Remove-ProvServiceConfigurationData -Name ImageManagementPrep_NoAutoShutdown -AdminAddress <your Controller Address>

  7. Set the registry key on the master image back to 0 and create a new snapshot to provision.
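
For reference, a minimal PowerShell sketch of the registry change in steps 1 and 7 (run elevated inside the master VM; the key path is the one given in step 1):

# Enable MCS image-preparation logging (step 1); change -Value to 0 to revert it later (step 7).
New-Item -Path 'HKLM:\Software\Citrix\MachineIdentityServiceAgent' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\Software\Citrix\MachineIdentityServiceAgent' -Name 'LOGGING' -PropertyType DWord -Value 1 -Force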

Related:


VxRail: Migration of VMs from host fails with “The target host does not support the virtual machine’s current hardware requirements”

Article Number: 524619 Article Version: 3 Article Type: Break Fix



VxRail Appliance Family

When attempting to migrate VMs from a certain host in the cluster, the following compatibility error is encountered in the migration wizard:

The target host does not support the virtual machine’s current hardware requirements.

To resolve CPU incompatibilities, use a cluster with Enhanced vMotion Compatibility (EVC) enabled. See KB article 1003212.

com.vmware.vim.vmfeature.cpuid.ssbd

com.vmware.vim.vmfeature.cpuid.stibp

com.vmware.vim.vmfeature.cpuid.ibrs

com.vmware.vim.vmfeature.cpuid.ibpb

The CPU features listed above are related to the new Intel Spectre/Meltdown Hypervisor-Assisted Guest Mitigation fixes.

CPU features can be compared on the affected host and the other hosts via the following:

Host-01 (affected):
$ cat /etc/vmware/config | wc -l
57

Host-02:
$ cat /etc/vmware/config | wc -l
53

Hence, when VMs start on the affected host, they acquire extra CPU feature requirements that won’t be met when migrating to other hosts.

In order to remove these CPU requirements, you will need to refresh the EVC baseline by disabling EVC and then re-enabling it. This will update /etc/vmware/config on all hosts in the cluster.
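
If you prefer to script the refresh, a hedged PowerCLI sketch (the cluster name and EVC baseline are placeholders; choose the baseline that matches the oldest CPUs in your cluster):

# Disable EVC, then re-enable it with the appropriate baseline to rebuild /etc/vmware/config.
Get-Cluster -Name 'VxRail-Cluster' | Set-Cluster -EVCMode $null -Confirm:$false
Get-Cluster -Name 'VxRail-Cluster' | Set-Cluster -EVCMode 'intel-broadwell' -Confirm:$false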

Related:

RecoverPoint for Virtual Machines: Install fails at 53% creating virtual repository

Article Number: 525663 Article Version: 2 Article Type: Break Fix



RecoverPoint for Virtual Machines 5.2,RecoverPoint for Virtual Machines 5.2 P1

Issue:

RP4VMs setup/cluster install fails at 53% in Deployer.

Symptoms:

clusterLogic.log shows

2018/09/19 20:16:59.336 [pool-6-thread-189] (BaseInstallationServerAdapter.java:285) ERROR – errorType: OPERATION_FAILED , userMSG: Operation failed. Failed to create JAM profile.

2018/09/19 20:16:59.337 [pool-6-thread-189] (BaseInstallationServerAdapter.java:292) ERROR – Transaction failed. transactionId=318, timeoutInSeconds=900, errorMSG=Operation failed. Failed to create JAM profile. , errorType=OPERATION_FAILED, value=null

2018/09/19 20:16:59.337 [pool-6-thread-189] (BaseRpaCall.java:52) ERROR – CreateVirtualRepositoryCall failed.

java.lang.RuntimeException: Operation failed. Failed to create JAM profile.

server.log shows

2018-09-19 20:11:53,631 [CommandWorker-14] (CreateJirafPolicyAction.java:47) ERROR – Failed creating new JCD policy: java.lang.RuntimeException: RP Filter metadata not found, please verify that RP Filter is installed.

2018-09-19 20:11:53,631 [CommandWorker-14] (BaseAction.java:46) ERROR – CreateJirafPolicyAction Failed.

java.lang.RuntimeException: RP Filter metadata not found, please verify that RP Filter is installed.

at com.emc.recoverpoint.connectors.vi.infra.pbm.PbmCreatePolicyCommand.waitAndGetRPFilterMetadata(PbmCreatePolicyCommand.java:129) ~[vi_connector_commons.jar:?]

at com.emc.recoverpoint.connectors.vi.infra.pbm.PbmCreatePolicyCommand.addRPRules(PbmCreatePolicyCommand.java:74) ~[vi_connector_commons.jar:?]

at com.emc.recoverpoint.connectors.vi.infra.pbm.PbmCreatePolicyCommand.perform(PbmCreatePolicyCommand.java:66) ~[vi_connector_commons.jar:?]

at com.emc.recoverpoint.connectors.vi.infra.pbm.PbmCreatePolicyCommand.perform(PbmCreatePolicyCommand.java:18) ~[vi_connector_commons.jar:?]

at com.emc.recoverpoint.connectors.vi.infra.pbm.BasePBMCommand.call(BasePBMCommand.java:27) ~[vi_connector_commons.jar:?]

at com.emc.recoverpoint.connectors.vi.infra.PBMProxy.pbmCreateCreateDummyJCDPolicy(PBMProxy.java:90) ~[vi_connector_commons.jar:?]

at com.emc.recoverpoint.connectors.actions.create.CreateJirafPolicyAction.createPolicy(CreateJirafPolicyAction.java:44) ~[vsphere_actions.jar:?]

at com.emc.recoverpoint.connectors.actions.create.CreateRPPolicyAction.perform(CreateRPPolicyAction.java:35) ~[vsphere_actions.jar:?]

at com.emc.recoverpoint.connectors.actions.create.CreateRPPolicyAction.perform(CreateRPPolicyAction.java:14) ~[vsphere_actions.jar:?]

at com.emc.recoverpoint.connectors.actions.infra.BaseAction.call(BaseAction.java:30) [vsphere_actions.jar:?]

at com.kashya.installation.server.commands.vsphere.CreateVirtualRepositoryCommand.createJirafPolicyId(CreateVirtualRepositoryCommand.java:61) [com.kashya.recoverpoint.installation.server.jar:?]

at com.kashya.installation.server.commands.vsphere.CreateVirtualRepositoryCommand.preExecute(CreateVirtualRepositoryCommand.java:39) [com.kashya.recoverpoint.installation.server.jar:?]

at com.kashya.installation.server.commands.Command.runNormal(Command.java:108) [com.kashya.recoverpoint.installation.server.jar:?]

at com.kashya.installation.server.commands.Command.run(Command.java:48) [com.kashya.recoverpoint.installation.server.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_172]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_172]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]

at com.kashya.installation.server.ThreadPoolFactory$1.run(ThreadPoolFactory.java:43) [com.kashya.recoverpoint.installation.server.jar:?]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]

2018-09-19 20:11:53,632 [CommandWorker-14] (CreateVirtualRepositoryCommand.java:41) WARN – Failed to create JAM profile, calling troubleshoot action…

com.kashya.installation.server.exceptions.CommandFailedException: Operation failed. Failed to create JAM profile.

There are VASA (Storage) IOFilter providers that are seen as offline/disconnected by the VC.

RP4VM Install

Workaround:

1. In vSphere web client:

a. For VC 6.0 – Select the relevant VC, then open the “Manage” tab followed by the “Storage Providers” tab.

b. For VC 6.5 – Select the relevant VC, then choose the “Configure” tab and select “Storage Providers” from the submenu on the left side.

2. Unregister IOFILTER providers whose status isn’t “online”.

3. After all of the providers from step 2 are gone from the list, resync the providers. After the resync has finished, all the IOFilter providers should appear as “online”.

Resolution:

Dell EMC engineering is currently investigating this issue. A permanent fix is still in progress. Contact the Dell EMC Customer Support Center or your service representative for assistance and reference this solution ID.

Related:


RecoverPoint for Virtual Machines: SC production RPAs are in reboot regulation

Article Number: 502032 Article Version: 3 Article Type: Break Fix



RecoverPoint for Virtual Machines

Issue (impact on customer): SC production RPAs are in reboot regulation

Impacted Configuration & Settings [Replication Mode, Splitter Type, Compression, CDP/CLR/Multi, FC/IP/iSCSI, Policies etc.]: RP4VM; failed to register a new VM with the VC too many times.

Impact on RP: Control keeps on crashing

Affected versions: 5.0.1

Root cause:

When RecoverPoint protects a VM, it registers a new VM in the VC to create a shadow VM. In this case the registration fails, and after 5 retry attempts the RPA control process gives up and generates a message reporting the failure with the VM name. The problem is that the memory allocated for the registration was deleted before the message was generated; the resulting assertion (an attempt to access a variable in memory that no longer exists) makes control crash repeatedly until the RPA enters reboot regulation.

The customer deleted a production VM that was protected by a RecoverPoint for Virtual Machines cluster without unprotecting it first.

Or

Any other change that may cause the RPA to fail to register VMs.

Fixed in: 5.1, 5.0.1.2

Workaround:

Perform the following step-by-step procedure:

1. Find out why the VM keeps failing to register with the VC and resolve the error on the VC side to prevent it from recurring.

2. Unprotect and re-protect the VMs in the same CG.

3. If steps 1 and 2 do not succeed, disable and re-enable the CG.

Related:

Avamar Client for VMware: How to enable Instant Access (IA) restore for multiple VMs with DataDomain 6.0 and Avamar version 7.4

Article Number: 499593 Article Version: 3 Article Type: How To



Avamar Client for VMware

Instant Access (IA) is similar to restoring an image backup to a new virtual machine, except that the restored virtual machine can be booted directly from the Data Domain system. This reduces the amount of time required to restore an entire virtual machine.

When used with Data Domain systems earlier than release 6.0, only one instant access is permitted at a time in order to minimize operational impact to the Data Domain system. For Data Domain systems at release 6.0 or greater, 32 instant access processes are permitted at the same time. This requirement is documented in the Avamar 7.4 and Service Packs for VMware User Guide.


Here is the procedure to enable multiple IA restores on the Avamar server:

1. Edit /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml on the Avamar server, and change the following value to ‘true’:

<entry key="ddr_can_modify_ir_limit" value="true" />

2. Restart the MCS service to make the change take effect.

dpnctl stop mcs

dpnctl start mcs

dpnctl start sched

3. In the Avamar GUI, under Server, modify the DD IA limit to 32 and save the change.




Other Best Practices for Instant Access Restore

  • It is recommended to change the NFS.MaxVolumes value on the ESXi server to a value greater than 32.

This is because each IA creates an NFS volume on the server. Within vCenter, the setting is under ‘Configuration’ -> ‘Advanced Settings’ for an ESXi server. By default, the value is set to 8. (A PowerCLI sketch for changing it follows this list.)

  • Perform a post-restore migration and clean-up after a system restore.

This is because Avamar resources need to be released for subsequent IA operations.

This also protects the virtual machines’ data. vMotion can be used to migrate a virtual machine from one datastore to another.
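
As mentioned in the first bullet, each IA mount consumes an NFS volume on the ESXi host. A hedged PowerCLI sketch for raising the limit (the host name and target value are placeholders; pick a value greater than the number of concurrent IA mounts you expect):

# Raise the NFS volume limit on the ESXi host (the default is 8).
$esx = Get-VMHost -Name 'esxi-host.example.com'
Get-AdvancedSetting -Entity $esx -Name 'NFS.MaxVolumes' | Set-AdvancedSetting -Value 64 -Confirm:$false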

Related:

RecoverPoint for Virtual Machines: Cannot Enable Image Access Due to a Virtual Network Device Not Being Visible to an ESXi Host

Article Number: 494877 Article Version: 3 Article Type: Break Fix



RecoverPoint for Virtual Machines 5.0,RecoverPoint for Virtual Machines 5.0 P1

A user attempting to perform an Image Access test is unable to do so. An error is returned stating that one of the configured networks is not visible on the ESXi host.

While validating that the Virtual Machine networks shown in the settings are correct, this system check fails. Image Access is then prevented, since restoring the existing settings may not be possible in this situation.

The following will be present in the vRPA management log:

MgmtAction: setGroupSettings_internal() returning: : (mgmtRS=VectorSet((mgmtRC=MRC_RPVE_IMAGE_ACCESS_NIC_NOT_VISIBLE,additionalInfo=NoOption)),bGood=0)

There is a caching mechanism within the RecoverPoint management process that causes the Virtual Machine to appear as if it’s under the wrong ESXi host.

Workaround:

Configure all ESXi hosts to see the same virtual network, and configure the RecoverPoint for Virtual Machines shadow VMs to use only that specific network.

Related:

RecoverPoint for Virtual Machines: Failover is unsuccessful when shadow VM is storage vMotioned

Article Number: 488925 Article Version: 4 Article Type: Break Fix



RecoverPoint for Virtual Machines 4.3,RecoverPoint for Virtual Machines 4.3 P1,RecoverPoint for Virtual Machines 4.3 SP1,RecoverPoint for Virtual Machines 4.3 SP1 P1,RecoverPoint for Virtual Machines 4.3 SP1 P2

Impact:

A storage vMotion was performed on a copy VM to a different DataStore.

In order to do so, both the shadow VM and the replica VM must be migrated.

The first migration is performed successfully, but the second one is blocked by VMware.

Symptoms found in the logs:

The second migration is blocked due to an error:

“Relocate virtual machine <vm name> Cannot complete the operation because the file or folder ds:///vmfs/volumes/<first vmUid>/<first vm name>/vmware.log already exists”

Root cause:

Storage vMotion is performed by VMware; the second migration fails due to a conflict with an already existing file.

Affected versions:

4.3, 4.3.1, 4.3.1.1, 4.3.1.2, 5.0

Change:

In RP4VMs, migrating protected VMs (with shadow-based RP4VM solutions) to a different DataStore.

Workaround:

In order to Storage vMotion the shadow and replica VMs, the following steps would need to be performed:

1. Storage vMotion the shadow VM – only the configuration file (VMX) should be migrated; the replica VMDK(s) should be kept on the current DataStore.

By default, the configuration file and all VMDKs are migrated as part of the storage vMotion process, so it is vital to click “Advanced” on the “Select DataStore” step of the “Migrate Virtual Machine” wizard and select a different DataStore specifically for the configuration file while keeping the VMDKs on the current DataStore.

2. Perform Test a Copy (enable image access) to bring up the replica VM.

3. Storage vMotion the replica VM (configuration file and all VMDKs) to the new DataStore.

If a full migration was performed in step 1 (i.e., both the VMX configuration file and the VMDKs), the second migration of the replica VM will be denied. To solve this, perform a full migration of the shadow VM back to the initial DataStore and then follow the storage vMotion steps exactly as described above.

Finally, the floppy image (.flp) or .iso file used by the shadow VM needs to be changed as well:

1. Check whether the target DataStore has a “RecoverPoint” folder containing a RecoverPointVM.flp or RecoverPointVM.iso file.

If it does not, copy the RecoverPoint folder from the current DataStore.

2. Edit the settings of the shadow VM and point the floppy image to the RecoverPointVM.flp file on the target DataStore.

3. Reset the shadow VM.

Remark: RecoverPointVM.iso is relevant to version 4.3.1 only. RecoverPointVM.flp is relevant to 4.3.1.1 and higher versions.

Resolution:

The storage vMotion procedure is described above.

Related: