VxRail: Migration of VMs from host fails with “The target host does not support the virtual machine’s current hardware requirements”

Article Number: 524619 Article Version: 3 Article Type: Break Fix



VxRail Appliance Family

When attempting to migrate VMs from a certain host in the cluster, the following compatibility error is encountered in the migration wizard:

The target host does not support the virtual machine’s current hardware requirements.

To resolve CPU incompatibilities, use a cluster with Enhanced vMotion Compatibility (EVC) enabled. See KB article 1003212.

com.vmware.vim.vmfeature.cpuid.ssbd

com.vmware.vim.vmfeature.cpuid.stibp

com.vmware.vim.vmfeature.cpuid.ibrs

com.vmware.vim.vmfeature.cpuid.ibpb

The CPU features listed above relate to the new Intel Spectre/Meltdown Hypervisor-Assisted Guest Mitigation fixes.

CPU features can be compared on the affected host and the other hosts via the following:

Host-01 (affected):
$ cat /etc/vmware/config | wc -l
57

Host-02:
$ cat /etc/vmware/config | wc -l
53
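
Counting lines only shows that the affected host carries extra entries. To identify exactly which feature masks differ, the entries themselves can be compared. Below is a minimal sketch, assuming shell access to both hosts; the exact key names vary by ESXi build, so it matches broadly on “cpuid”:

# On Host-01 (affected), extract and sort the CPU feature entries:
grep -i cpuid /etc/vmware/config | sort > /tmp/host01-cpuid.txt

# Repeat on Host-02, copy both files to one host, and compare them; the affected
# host should show the extra ssbd/stibp/ibrs/ibpb entries from the error message:
diff /tmp/host01-cpuid.txt /tmp/host02-cpuid.txt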

Hence, when VMs are powered on on the affected host, they acquire extra CPU feature requirements that won’t be met when migrating to the other hosts.

To remove these CPU requirements, refresh the EVC baseline by disabling EVC and then re-enabling it. This updates /etc/vmware/config on all hosts in the cluster.
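
After disabling and re-enabling EVC, the same checks as above can be repeated on each host to confirm the cluster is back in sync (a quick sanity check, not an official verification step):

# Run on every host after the EVC refresh; the line counts and the cpuid entries
# should now match across all hosts in the cluster:
cat /etc/vmware/config | wc -l
grep -i cpuid /etc/vmware/config | sort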

Re: Hyper-V Cluster 2012 R2 (12 Node), Unity 550F (with offloading iSCSI) -> Disk Role Change over 70 Seconds!

Hi,

We have already opened a ticket with EMC, but since we cannot use our shiny new Unity at all at the moment, I would like to address the problem to the community as well.

Current situation: we have a VNX connected to a Hyper-V 2012 cluster via iSCSI (10 Gb); MS DSM was used, not PowerPath. The combination has worked for 4 years: no drops, no problems, rock solid!

Now that we have our new Unity, the first thing we noticed is that LUN 0 does not show up as disk -1 the way it does on our two VNX arrays.

We configured the iSCSI part exactly like we did with the VNX; everything looks alike.

BUT if you move the Hyper-V disk role to another node in the cluster, it takes a long time until the role shows green again in the Failover Cluster Management console, and even then there is no I/O traffic until around 80 seconds later.

So the VM “stops” for 80 seconds (as if it were frozen!). For that reason the storage is currently not usable: the impact on production is such that it would be irresponsible to use the “expensive” Unity for this.

With the same disk role change on a VNX 5600, the traffic only “breaks” for a fraction of a second and then carries on, like a breeze.

Has anyone had a similar experience with a Unity 550F or 300 (firmware 4.4)? The two Unity arrays are connected to each other in an active/active failover scenario (for that reason they need a direct FC connection). One special characteristic of the Unity 550F is that its iSCSI runs on Ethernet interfaces with offloading. Maybe there are also questions regarding PowerPath and “iSCSI with offloading” that could fit the problem description.

Maybe someone has a clue where to start the search, or has heard something from EMC (I read a PowerPath error description that sounded similar).

Regards and Thanks

Re: Problems with EMC Networker 9.2 and Hyper-V

Hello,

I am running EMC Networker 9.2 and am constantly having trouble when running Hyper-V backups.

My topology is as follows:

– storage node FC attached to DD, using DD Boost

– Hyper-V standalone cluster that I wish to back up to the SN above

Although the DD is accessible only over FC, all other Standalone Hyper-V servers have client direct and block based backups enabled.

When I try enabling Client Direct and block based backups for my new Hyper-V server, I get the error below:

Unable to perform backup for save-set ID ” with DIRECT_ACCESS=Yes, check device and pool configuration to enable Client direct



Filesystem backups run just fine with Client Direct disabled, but I am not able to make Hyper-V backups complete successfully with Client Direct enabled or disabled.

I am also certain that the policy is correctly configured and that there is end to end connectivity between client, SN and server.

Please advise.

Thanks,

Bogdan

Avamar Plug-in for Hyper-V VSS: avhypervvss Error : Failed to get the current hyper-v node name.

Article Number: 499677 Article Version: 3 Article Type: Break Fix



Avamar Client, Avamar Plug-in for Hyper-V VSS

Hyper-V restore between different clusters fails with:

2017-05-08 17:14:09 avhypervvss Info <13827>: Could not get the current owner name for VMNAME. Will try to restore to the node that owned this VM at backup.

2017-05-08 17:14:09 avhypervvss Info <13830>: The cluster node that owned VMNAME at backup is not running.

2017-05-08 17:14:09 avhypervvss Info <13831>: Attempting to restore the VMNAME to the current node.

2017-05-08 17:14:09 avhypervvss Error <0000>: Failed to get the current hyper-v node name.

In this scenario, the customer takes a backup on Cluster A and wants to restore it with manual provisioning on Cluster B.

The flag --federated was present in C:\Program Files\avs\var\avhypervvss.cmd.

Remove --federated from C:\Program Files\avs\var\avhypervvss.cmd; the --federated flag must only be present in the avhypervvss.cmd file under the cluster var folder.

Catalog creation fails with error: Creation of Image Preparation VM Failed, message Could not retrieve the network with reference [dvportgroup-xx] as it no longer exists

If you encounter this error message, you will have to re-create the host connection and the catalog.

Please follow the steps below to recreate them.

1. Browse to the Hosting tab in Studio and select Add new Host Connection and Resources.

2. It will give you an option to add new resources to an existing connection or to create a new connection. Select New Connection and complete the wizard by selecting the hypervisor host/cluster and the corresponding network.

3. Once the host connection is created, Test the connection and make sure that there are no errors.

4. Now browse to the Delivery Group, place the machines in maintenance mode and shut down all the machines in that Delivery Group.

5. Once the machines are down, select all the machines and right-click on them. You will find an option to Remove from Delivery Group. Select that option; all the machines will be removed from that Delivery Group.

6. Now browse to Machine Catalogs, select all the machines in the Machine Catalog, and click Delete. You will see an option to delete the machines and leave the machine accounts in Active Directory.

7. Once all the machines are deleted from the catalog, right-click on the catalog and delete it.

8. Now select the option to create a new Machine Catalog and select the new host connection that you created. At the penultimate step of the wizard, you will see an option to select the machine accounts for the new VMs. Select the accounts that were previously linked to the machines and finish the wizard.

9. After the new machines are created, browse to Delivery Group and right-click on the one in which you need to add the VMs. You will find an option to Add Machines to the Delivery Group. Select that option and add the newly created VMs.

How to Use host-cpu-tune to Fine tune XenServer 6.2.0 Performance

Pinning Strategies

  • No Pinning (default): When no pinning is in effect, the Xen hypervisor is free to schedule a domain’s vCPUs on any pCPUs.
    • Pros: Greater flexibility and better overall utilization of available pCPUs.
    • Cons: Possibly longer memory access times, particularly on NUMA-based hosts. Possibly lower I/O throughput and slower control plane operations when pCPUs are overcommitted.
    • Explanation: When vCPUs are free to run on any pCPU, they may allocate memory in various regions of the host’s memory address space. At a later stage, a vCPU may run on a different NUMA node and require access to that previously allocated data. This makes poor use of pCPU caches and incurs higher access times to that data. Another aspect is the impact on I/O throughput and control plane operations. When more vCPUs require execution time than there are pCPUs available, the Xen hypervisor might not be able to schedule dom0’s vCPUs when they need to run. This has a negative effect on all operations that depend on dom0, including I/O throughput and control plane operations.
  • Exclusive Pinning: When exclusive pinning is in effect, the Xen hypervisor pins dom0 vCPUs to pCPUs in a one-to-one mapping. That is, dom0 vCPU 0 runs on pCPU 0, dom0 vCPU 1 runs on pCPU 1, and so on. Any VM running on that host is pinned to the remaining set of pCPUs.
    • Pros: Possibly shorter memory access times, particularly on NUMA-based hosts. Possibly higher I/O throughput and faster control plane operations when pCPUs are overcommitted.
    • Cons: Lower flexibility and possible poor utilization of available pCPUs.
    • Explanation: If exclusive pinning is on and VMs are running CPU-intensive applications, they might under-perform by not being able to run on pCPUs allocated to dom0 (even when dom0 is not actively using them).

Note: The exclusive pinning functionality provided by host-cpu-tune will honor specific VM vCPU affinity configured using the VM parameter vCPU-params:mask. For more information, refer to the VM Parameters section in the appendix of the XenServer 6.2.0 Administrator’s Guide.

Using host-cpu-tune

The tool can be found in /usr/lib/xen/bin/host-cpu-tune. When executed with no parameters, it displays help:

[root@host ~]# /usr/lib/xen/bin/host-cpu-tune

Usage: /usr/lib/xen/bin/host-cpu-tune { show | advise | set <dom0_vcpus> <pinning> [--force] }

show Shows current running configuration

advise Advise on a configuration for current host

set Set host’s configuration for next reboot

<dom0_vcpus> specifies how many vCPUs to give dom0

<pinning> specifies the host’s pinning strategy

allowed values are ‘nopin’ or ‘xpin’

[--force] forces xpin even if VMs conflict

Examples: /usr/lib/xen/bin/host-cpu-tune show

/usr/lib/xen/bin/host-cpu-tune advise

/usr/lib/xen/bin/host-cpu-tune set 4 nopin

/usr/lib/xen/bin/host-cpu-tune set 8 xpin

/usr/lib/xen/bin/host-cpu-tune set 8 xpin --force

[root@host ~]#

Recommendations

The recommendation is based on the total number of pCPUs in the host, as follows:

  • Fewer than 4 pCPUs: the same number of dom0 vCPUs as pCPUs, and no pinning
  • Fewer than 24 pCPUs: 4 vCPUs for dom0, and no pinning
  • Fewer than 32 pCPUs: 6 vCPUs for dom0, and no pinning
  • Fewer than 48 pCPUs: 8 vCPUs for dom0, and no pinning
  • 48 or more pCPUs: 8 vCPUs for dom0, and exclusive pinning
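
As an illustration, the following sketch walks through applying a recommendation on a large host (48 or more pCPUs); the vCPU count passed to set should come from the advise output for your own host:

# Check how many pCPUs the host has (reported as nr_cpus by xl info):
xl info | grep nr_cpus

# Ask the tool for its recommendation, apply it, and reboot for it to take effect:
/usr/lib/xen/bin/host-cpu-tune advise
/usr/lib/xen/bin/host-cpu-tune set 8 xpin
reboot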

The utility works in three distinct modes:

  1. Show: This mode displays the current dom0 vCPU count and infers the current pinning strategy.

    Note: This functionality will only examine the current state of the host. If configurations are changed (for example, with the set command) and the host has not yet been rebooted, the output may be inaccurate.

  2. Advise: This recommends a dom0 vCPU count and a pinning strategy for this host.

    Note: This functionality takes into account the number of pCPUs available in the host and makes a recommendation based on heuristics determined by Citrix. System administrators are encouraged to experiment with different settings and find the one that best suits their workloads.

  3. Set: This functionality changes the host configuration to the specified number of dom0 vCPUs and pinning strategy.

    Note: This functionality may change parameters in the host boot configuration files. It is highly recommended to reboot the host as soon as possible after using this command.

    Warning: Setting zero vCPUs to dom0 (with set 0 nopin) will cause the host not to boot.

Resetting to Default

The host-cpu-tune tool uses the same heuristics as the XenServer Installer to determine the number of dom0 vCPUs. The installer, however, never activates exclusive pinning because of race conditions with Rolling Pool Upgrades (RPUs). During RPU, VMs with manual pinning settings can fail to start if exclusive pinning is activated on a newly upgraded host.

To reset the dom0 vCPU pinning strategy to default:

  1. Run the following command to find out the number of recommended dom0 vCPUs:

    [root@host ~]# /usr/lib/xen/bin/host-cpu-tune advise

  2. Configure the host accordingly, without any pinning:
    • [root@host ~]# /usr/lib/xen/bin/host-cpu-tune set <count> nopin
    • Where <count> is the recommended number of dom0 vCPUs indicated by the advise command.
  3. Reboot the host. The host will now have the same settings as it did when XenServer 6.2.0 was installed.

Usage in XenServer Pools

Settings configured with this tool only affect a single host. If the intent is to configure an entire pool, this tool must be used on each host separately.

When one or more hosts in the pool are configured with exclusive pinning, migrating VMs between hosts may change the VM's pinning characteristics. For example, if a VM is manually pinned with the vCPU-params:mask parameter, migrating it to a host configured with exclusive pinning may fail. This could happen if one or more of that VM's vCPUs are pinned to a pCPU index exclusively allocated to dom0 on the destination host.
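
Before migrating, it can help to check whether a VM carries a manual vCPU pin at all. A hedged sketch using xe, grepping for the mask rather than naming the exact parameter key:

# List the VM's parameters and look for a vCPU mask; an empty result means no
# manual pinning is set, so the migration restriction above does not apply:
xe vm-param-list uuid=<vm-uuid> | grep -i mask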

Additional commands to obtain information concerning CPU topology:

xenpm get-cpu-topology

xl vcpu-list
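
For example, after a reboot with exclusive pinning set, the following can confirm that dom0’s vCPUs are each restricted to a single pCPU (output columns may vary by release):

# Show the physical CPU/core/socket/node layout:
xenpm get-cpu-topology

# Show dom0's vCPU-to-pCPU affinity; with xpin each dom0 vCPU should show an
# affinity restricted to one pCPU rather than all pCPUs:
xl vcpu-list Domain-0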

Updates to XenVIF Windows I/O driver – For XenServer 7.0 and later

Who Should Read This Article?

This information is for customers using XenServer 7.0 and later who are entitled to receive automatic Windows I/O driver updates on their Windows VMs.

Latest version

The following version of XenVIF is the latest that is available through Windows Automatic Updates:

Version: 8.2.1.170
Release Date: 06 Sep 2018
Applicable Windows versions: All supported Windows VMs
Catalogue Link: XenVIF 8.2.1.170 in Microsoft Update Catalog
Fixed Issues:
  • General improvements

For information about how to install these drivers on your Windows VM, see How to get Windows I/O driver updates on XenServer 7.0 and later.

Version history

Note: History is only available for versions released since the start of 2018.

Version: 8.2.1.155
Release Date: 28 Mar 2018
Applicable Windows versions: All supported Windows VMs
Catalogue Link: XenVIF 8.2.1.155 in Microsoft Update Catalog
Fixed Issues:
  • General improvements
Included in:
  • Hotfix XS71ECU1019
