VxRail: Migration of VMs from host fails with “The target host does not support the virtual machine’s current hardware requirements”

Article Number: 524619 Article Version: 3 Article Type: Break Fix



VxRail Appliance Family

When attempting to migrate VMs from a certain host in the cluster, the following compatibility error is encountered in the migration wizard:

The target host does not support the virtual machine’s current hardware requirements.

To resolve CPU incompatibilities, use a cluster with Enhanced vMotion Compatibility (EVC) enabled. See KB article 1003212.

com.vmware.vim.vmfeature.cpuid.ssbd

com.vmware.vim.vmfeature.cpuid.stibp

com.vmware.vim.vmfeature.cpuid.ibrs

com.vmware.vim.vmfeature.cpuid.ibpb

The feature constraints listed above relate to the new Intel Spectre/Meltdown Hypervisor-Assisted Guest Mitigation fixes.

The CPU feature entries can be compared between the affected host and the other hosts as follows:

Host-01 (affected): $ cat /etc/vmware/config | wc -l
57

Host-02: $ cat /etc/vmware/config | wc -l
53

Hence, when VMs are powered on on the affected host, they acquire extra CPU feature requirements that cannot be met when migrating to the other hosts.

To remove these CPU feature requirements, refresh the EVC baseline by disabling EVC and then re-enabling it. This updates /etc/vmware/config on all hosts in the cluster.
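To confirm that the hosts have converged before and after the EVC refresh, the feature mask entries written to /etc/vmware/config can be compared directly. This is a minimal sketch, assuming SSH access to both hosts and that the Spectre/Meltdown entries appear as featMask lines (verify the exact key names in your environment):

# On the affected host and on a healthy host, extract and sort the feature mask entries:
$ grep -i featMask /etc/vmware/config | sort > /tmp/$(hostname)-features.txt

# Copy both files to one location and diff them; extra cpuid.ssbd/stibp/ibrs/ibpb lines should stand out:
$ diff /tmp/host-01-features.txt /tmp/host-02-features.txt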

Related:


How to Use host-cpu-tune to Fine tune XenServer 6.2.0 Performance

Pinning Strategies

  • No Pinning (default): When no pinning is in effect, the Xen hypervisor is free to schedule a domain’s vCPUs on any pCPUs.
    • Pros: Greater flexibility and better overall utilization of available pCPUs.
    • Cons: Possible longer memory access times, particularly on NUMA-based hosts. Possible lower I/O throughput and slower control plane operations when pCPUs are overcommitted.
    • Explanation: When vCPUs are free to run on any pCPU, they may allocate memory in various regions of the host’s memory address space. At a later stage, a vCPU may run on a different NUMA node and require access to that previously allocated data. This makes poor use of pCPU caches and incurs higher access times to that data. Another aspect is the impact on I/O throughput and control plane operations. When more vCPUs require execution time than there are pCPUs available, the Xen hypervisor may not be able to schedule dom0’s vCPUs when they need to run. This has a negative effect on all operations that depend on dom0, including I/O throughput and control plane operations.
  • Exclusive Pinning: When exclusive pinning is in effect, the Xen hypervisor pins dom0’s vCPUs to pCPUs in a one-to-one mapping. That is, dom0 vCPU 0 runs on pCPU 0, dom0 vCPU 1 runs on pCPU 1, and so on. Any VM running on that host is pinned to the remaining set of pCPUs.
    • Pros: Possible shorter memory access times, particularly on NUMA-based hosts. Possible higher I/O throughput and more responsive control plane operations when pCPUs are overcommitted.
    • Cons: Lower flexibility and possible poor utilization of available pCPUs.
    • Explanation: If exclusive pinning is on and VMs are running CPU-intensive applications, they might under-perform by not being able to run on pCPUs allocated to dom0 (even when dom0 is not actively using them).

Note: The exclusive pinning functionality provided by host-cpu-tune will honor specific VM vCPU affinity configured using the VM parameter vCPU-params:mask. For more information, refer to the VM Parameters section in the appendix of the XenServer 6.2.0 Administrator’s Guide.

Using host-cpu-tune

The tool can be found in /usr/lib/xen/bin/host-cpu-tune. When executed with no parameters, it displays help:

[root@host ~]# /usr/lib/xen/bin/host-cpu-tune

Usage: /usr/lib/xen/bin/host-cpu-tune { show | advise | set <dom0_vcpus> <pinning> [--force] }

show Shows current running configuration

advise Advise on a configuration for current host

set Set host’s configuration for next reboot

<dom0_vcpus> specifies how many vCPUs to give dom0

<pinning> specifies the host’s pinning strategy

allowed values are 'nopin' or 'xpin'

[--force] forces xpin even if VMs conflict

Examples: /usr/lib/xen/bin/host-cpu-tune show

/usr/lib/xen/bin/host-cpu-tune advise

/usr/lib/xen/bin/host-cpu-tune set 4 nopin

/usr/lib/xen/bin/host-cpu-tune set 8 xpin

/usr/lib/xen/bin/host-cpu-tune set 8 xpin --force

[root@host ~]#

Recommendations

The advise command considers the total number of pCPUs in the host and recommends as follows:

# num of pCPUs < 4 ===> same num of vCPUs for dom0 and no pinning

# < 24 ===> 4 vCPUs for dom0 and no pinning

# < 32 ===> 6 vCPUs for dom0 and no pinning

# < 48 ===> 8 vCPUs for dom0 and no pinning

# >= 48 ===> 8 vCPUs for dom0 and excl pinning
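For reference, the same heuristic can be expressed as a small shell check run from dom0. This is an illustrative sketch only; host-cpu-tune advise is the supported way to obtain the recommendation, and the sketch assumes that xl info reports the host pCPU count as nr_cpus:

# Count physical CPUs as seen by the hypervisor (dom0's /proc/cpuinfo only shows dom0 vCPUs):
pcpus=$(xl info | awk '/^nr_cpus/ {print $3}')
if   [ "$pcpus" -lt 4 ];  then echo "$pcpus dom0 vCPUs, nopin"
elif [ "$pcpus" -lt 24 ]; then echo "4 dom0 vCPUs, nopin"
elif [ "$pcpus" -lt 32 ]; then echo "6 dom0 vCPUs, nopin"
elif [ "$pcpus" -lt 48 ]; then echo "8 dom0 vCPUs, nopin"
else                           echo "8 dom0 vCPUs, xpin"
fi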

The utility works in three distinct modes:

  1. Show: This mode displays the current dom0 vCPU count and infers the current pinning strategy.

    Note: This functionality will only examine the current state of the host. If configurations are changed (for example, with the set command) and the host has not yet been rebooted, the output may be inaccurate.

  2. Advise: This recommends a dom0 vCPU count and a pinning strategy for this host.

    Note: This functionality takes into account the number of pCPUs available in the host and makes a recommendation based on heuristics determined by Citrix. System administrators are encouraged to experiment with different settings and find the one that best suits their workloads.

  3. Set: This functionality changes the host configuration to the specified number of dom0 vCPUs and pinning strategy.

    Note: This functionality may change parameters in the host boot configuration files. It is highly recommended to reboot the host as soon as possible after using this command.

    Warning: Setting zero vCPUs to dom0 (with set 0 nopin) will cause the host not to boot.

Resetting to Default

The host-cpu-tune tool uses the same heuristics as the XenServer Installer to determine the number of dom0 vCPUs. The installer, however, never activates exclusive pinning because of race conditions with Rolling Pool Upgrades (RPUs). During RPU, VMs with manual pinning settings can fail to start if exclusive pinning is activated on a newly upgraded host.

To reset the dom0 vCPU pinning strategy to default:

  1. Run the following command to find out the number of recommended dom0 vCPUs:

    [root@host ~]# /usr/lib/xen/bin/host-cpu-tune advise

  2. Configure the host accordingly, without any pinning:
    • [root@host ~]# /usr/lib/xen/bin/host-cpu-tune set <count> nopin
    • Where <count> is the recommended number of dom0 vCPUs indicated by the advise command.
  3. Reboot the host. The host will now have the same settings as it did when XenServer 6.2.0 was installed.

Usage in XenServer Pools

Settings configured with this tool only affect a single host. If the intent is to configure an entire pool, this tool must be used on each host separately.
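For pools with many members, the per-host invocation can be scripted. A minimal sketch, assuming root SSH access from a management machine; the hostnames and settings below are placeholders:

# Apply the same dom0 vCPU count and pinning strategy to every pool member,
# then reboot each host during a suitable maintenance window:
for h in xenhost-01 xenhost-02 xenhost-03; do
    ssh root@"$h" /usr/lib/xen/bin/host-cpu-tune set 4 nopin
done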

When one or more hosts in the pool are configured with exclusive pinning, migrating VMs between hosts may change a VM's pinning characteristics. For example, if a VM is manually pinned with the vCPU-params:mask parameter, migrating it to a host configured with exclusive pinning may fail. This can happen if one or more of that VM's vCPUs are pinned to a pCPU index exclusively allocated to dom0 on the destination host.

Additional commands to obtain information concerning CPU topology:

xenpm get-cpu-topology

xl vcpu-list
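Both commands are run directly from dom0; a brief sketch of typical usage (output columns vary between Xen releases):

# Show how logical CPUs map to cores, sockets and NUMA nodes:
[root@host ~]# xenpm get-cpu-topology

# List each domain's vCPUs, the pCPU each is currently running on, and its allowed pCPU affinity:
[root@host ~]# xl vcpu-list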

Related:

How to Create a Platform Layer in App Layering 4.x

The first thing that’s important to understand is that a platform layer is the highest priority layer in a published image. So a setting or file in the platform layer will be the last applied when the file system and registry are merged during image creation. However, one thing to note is that, just like regular application layers, any changes made to the local SAM database will not be captured in the platform layer.

The general flow for creating a platform layer is as follows (PVS, MCS, and Horizon View are covered in more specific detail near the end of this article):

  1. Install Special Drivers like NVIDIA
  2. Install the Broker Agent
  3. Join the Domain. This is done to configure the registry as it would be on a domain-joined machine. Each broker is actually responsible for joining the desktops/XenApp servers to the domain.
  4. Install any SSO software you may use. We do this because both the broker agents and SSO software modify Windows Logon Providers and we don’t want the platform layer to just overwrite the SSO settings from another layer.
  5. If the platform layer will be used cross platform, meaning for example that you package on vSphere but you will be deploying to XenServer, then you install the cross platform hypervisor tools as well. Your primary hypervisor tools should be installed in your OS layer.
  6. If you are using PVS, make sure you install the PVS tools last, as required for PVS with or without App Layering

If you are using Receiver and Workspace Environment Management (WEM) in your XenDesktop or XenApp environment, include those in the Platform Layer as well. Receiver has an SSO component, and WEM can be affected by the scrubbing we do between hypervisors if it is not installed in the Platform Layer.

Some items also need to be handled via GPO/GPP because of the SAM database limitation with layering. Since the domain join happens here in the Platform Layer, you will not automatically have Domain Admins in the local Administrators group or Domain Users in the local Users group. The easiest way to deal with this is to create a Group Policy Preference (GPP) that adds those groups, or whatever groups you want to use for administrators and users on your machines.

The Citrix VDA also adds two services into local groups. These can also be added via GPP.

The NT Service\CitrixTelemetryService account is added to the local Performance Log Users group.

The NT Service\BrokerAgent account is added to the local Performance Monitor Users group.

If you want to allow direct access via RDP to VDAs, add a Domain group to the local Direct Access Users group.


If you are using Citrix App-V integration, the VDA will create a local user (CtxAppVCOMAdmin) and give that local user access to a DCOM object (Citrix.VirtApp.VDA.Com.AppVObject).

The way to handle this is to create the local user in your OS layer (before you create the platform layer), give it a password, and document the password. Then, after you install the VDA:

Open Component Services.

Go to DCOM Config.

Open the Properties of Citrix.VirtApp.VDA.Com.AppVObject, click the Identity tab, and change the password to the one you set in the OS layer.

A word on optimizations:

Since the platform layer is the highest priority layer, you might think it would be the best layer in which to include optimizations. However, with Windows 10, any optimization that removes “Windows Apps” will only work if run in the OS layer, because the apps are integrated with the Windows Store and the Store can only be modified in the OS layer. Citrix has recently developed an excellent Optimizer (CTXOE). This tool is highly recommended for optimizations, as it applies them and can also reverse most of them.

One last, very helpful note: during the process of rebooting the platform layer after you join the domain, log on once with a network user account, then reboot, log on as administrator, and delete the profile that was created. When the first network user logs on, some system files must be updated; by following this procedure you will significantly speed up logons, because those files will no longer need to be modified.


Here are the general best practices for creating a Platform layer for the most popular methods of provisioning:

PVS

  1. Install NVIDIA Drivers if using (Should configure the packaging machine with your NVIDIA profile before installing)
  2. Install the VDA
  3. Join the Domain
  4. Log on as a network user, reboot, logon as admin, delete network user profile
  5. Install hypervisor tools if using cross platform
  6. Install any SSO software, Citrix Receiver (if it wasn’t installed with the VDA), and WEM if you are using it.
  7. Reboot
  8. Install PVS Tools
  9. Finalize

MCS

  1. Install NVIDIA Drivers if using (Should configure the packaging machine with your NVIDIA profile before installing)
  2. Install the VDA
  3. Join the Domain
  4. Log on as a network user, reboot, logon as admin, delete network user profile
  5. Install hypervisor tools if using cross platform
  6. Install any SSO software, Citrix Receiver (if it wasn’t installed with the VDA), and WEM if you are using it.
  7. Reboot
  8. Finalize

Horizon View

  1. Install NVIDIA Drivers if using (Should also configure the packaging machine with your NVIDIA profile before installing)
  2. Install the View Agent
  3. Join the Domain
  4. Log on as a network user, reboot, logon as admin, delete network user profile
  5. Install hypervisor tools if using cross platform
  6. Install any SSO Software
  7. Reboot
  8. Finalize

Related:

How to Convert VMware Virtual Machines to XenServer Virtual Machines

Citrix Hypervisor, formerly XenServer, is powered by the Xen Project hypervisor.

The first method of converting Open Virtualization Format (OVF) packages exported directly from VMware is preferred because it is the quickest, most efficient, and allows you to convert multiple virtual drives at the same time. The second method of converting VMDK files should be used as an alternative because it only allows converting one drive at a time.

Note: For best results, copy the OVF template and the VMDK file to the computer that XenConvert is installed on for conversion.

Requirements

  • Administrator access to the VMware virtual machine being converted

  • Administrator access to XenServer and XenCenter

  • Basic knowledge of OVF

You must also have basic knowledge of VMware, XenServer, and XenCenter, and a Windows computer to run the XenConvert utility.

Note: This guide is for XenServer 5.6 and previous versions. For information on XenServer 6 and later, refer to the Knowledge Center article CTX133505 – How to Export VMware Virtual Machine to OVF Package.

Refer to the Citrix XenConvert Guide for XenConvert Supported Operating Systems and CTX121652 – Overview of the Open Virtualization Format for more information on OVF packages.

Click here to download Citrix XenConvert Application Software

Note: Citrix XenServer Conversion Manager is the preferred supported conversion tool for XenServer 6.1 and newer.

Related:

How to license Citrix Hypervisor versions 5.6 and higher

Free Citrix Hypervisor 5.6-6.1

Free Citrix Hypervisor 6.2

Installation of license for Citrix Hypervisor 6.2

Retail Citrix Hypervisor


Free Citrix Hypervisor 5.6 – 6.1 Activation using License Manager in XenCenter

When a server is installed with the free Citrix Hypervisor, you have up to 30 days of use before you must activate the server. Follow the procedure below to activate Citrix Hypervisor using License Manager in XenCenter:

  1. Launch XenCenter.
  2. Click Tools and select License Manager.
  3. The License Manager pop up box is displayed with a list of products and servers. Select the License server to activate your free Citrix Hypervisor 5.6.
  4. From the dropdown box available at the bottom of the page, select Request Activation Key.
  5. On the Citrix Hypervisor activation web page, enter your valid contact information.
    • The license file will be sent to the email address you entered.
  6. Select the Download Agreement check box and click Submit.
Please Note: A confirmation message will be displayed, informing you that your license file will be emailed to you within 5 minutes.
  7. Open your email; it will contain a license file sent from Xenserver.activations@citrix.com. Save the license file in a location that is accessible from the XenCenter console.
  8. In XenCenter, click License Manager and select Apply Activation Key from the dropdown box available at the bottom of the page.
  9. Browse to the downloaded license file, select it, and click Open.
  10. A pop-up box is displayed that shows the expiry date of the hypervisor. The Citrix Hypervisor will be ready to use for a 30-day trial period.

Free Citrix Hypervisor 6.2

With the release of Citrix Hypervisor 6.2, Citrix has unlocked all features in the free version and removed the need for a license. To obtain the free version, follow the below steps:

  1. Go to www.xenserver.org
  2. Select the Software link at the top of the page
  3. The next page provides all downloads applicable to Citrix Hypervisor; select the desired media
  4. Save the media to the desktop and proceed with the installation.


Installation of license for Citrix Hypervisor 6.2 Free version

There is no license to install for the free version, and therefore no license is needed. To view the system and verify that no license is applied, follow the steps below:

  1. Open XenCenter
  2. Navigate to the Tools menu and click License Manager
  3. This will show the license manager server option as Unsupported

Retail Citrix Hypervisor editions

There are two types of Citrix Hypervisor editions that use retail licensing: Citrix Hypervisor Standard Edition and Citrix Hypervisor Premium Edition. Licenses for all Citrix Hypervisor editions have to be added to a separate Citrix License Server. The license files are maintained and controlled using the Citrix License Administration Console (LAC).

Each host in a resource pool must be individually licensed. (For example, if you are supporting four hypervisor hosts in a resource pool, you must configure the license type to use on each of the four hosts separately.) As a result, license settings are configured on each host in the pool. However, in XenCenter, you can select multiple hosts at once in the License Manager and apply the same settings to them.


Tasks required to License Citrix Hypervisor retail editions

Follow the below tasks to license Citrix Hypervisor retail editions:

  1. Create a Citrix License Server. This Citrix Hypervisor release requires Citrix License Server version 11.6.1 or higher: http://support.citrix.com/proddocs/topic/licensing-1110/lic-install.html

  2. Download and add the Citrix Hypervisor license file to the Citrix License Server: CTX130884 - How to Download the Citrix Hypervisor License File from My Account Portal / CTX126338 - How to Add Allocated License Files to the License Administration Console.

  3. Configure each Citrix Hypervisor host to use the Citrix License Server that is hosting the license you allocated for it: CTX130884 - How to Download the Citrix Hypervisor License File from My Account Portal / CTX126338 - How to Add Allocated License Files to the License Administration Console.

Retail Licensing for Citrix Hypervisor activation using License Manager in XenCenter

Follow the below procedure to activate Citrix Hypervisor using License Manager in XenCenter:

  1. Open XenCenter. Click Tools and select License Manager.
  2. The License Manager pop up box is displayed. Select the required hosts (you can select more than one host) and click Assign License.
  3. The Apply License dialog box is displayed. Under the License Edition section, select the edition of your hypervisor (for example, if you have Citrix Hypervisor Premium Edition, select the radio button next to it).
  4. Under the License Server section, enter the name of the license server in the Name field (by default it contains the text “Local host”; delete it and enter the server name) and the server’s port number in the Port Number field.
Note: If you have changed the port on the Citrix License Server, specify the changed port number in the Port Number field. If you have not changed the port, leave the default value 27000 as is. 27000 is the default port number used by Citrix products.
  5. Click OK.
  6. The licensing file will be associated with Citrix Hypervisor and the server is ready to use.
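The same assignment can also be scripted from the host CLI. A hedged sketch using the xe host-apply-edition command (verify the exact parameter and edition names against your Citrix Hypervisor version; the license server address and host UUID below are placeholders):

# Point a host at the Citrix License Server and apply an edition (example values):
xe host-apply-edition edition=premium-per-socket \
    license-server-address=licsrv.example.com license-server-port=27000 \
    host-uuid=<host-uuid>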

Related:

VxRail: Virtual Machines need to be power cycled for branch target injection mitigation (Spectre v2) to take effect

Article Number: 519601 Article Version: 8 Article Type: How To



VxRail Appliance Family,VxRail Appliance Series,VxRail Software

VxRail releases 4.0.402 and 4.5.152 or higher address Hypervisor-Assisted Guest Mitigation for branch target injection (Spectre v2). Patching the VMware vSphere hypervisor and updating the CPU microcode allows guest operating systems to use hardware support for branch target injection mitigation.

Please follow the sequence below for the security fix to take effect for all Virtual Machines:

  1. Fully upgrade the current VxRail cluster to a version that contains the security fix
  2. Log in to the vSphere Web Client with an administrative account. For each service Virtual Machine, perform the Power -> Shut Down Guest OS action. Please note that if the vCenter Server Appliance (VCSA) or vCenter Server Platform Services Controller (PSC) is shut down, the vSphere Web Client will be down, so take note of the ESXi host that the VCSA or PSC resides on and use the vSphere Host Client or the steps at the end of this article to power them back on.
  3. Wait until the VM is powered off completely, then power it on to finish the power cycle operation
  4. Repeat above steps for all service Virtual Machines [in the following order]:
    1. VxRail Manager
    2. vRealize Log Insight [if deployed]
    3. vCenter Server Platform Service Controller (PSC) [if Internal]
    4. vCenter Server Appliance (VCSA) [if Internal]

For technical details, please refer to VMware KB 52085

Hypervisor-Assisted Guest Mitigation for Branch Target injection

https://kb.vmware.com/s/article/52085

When the VCSA or PSC is shut down, the vSphere Web Client becomes inaccessible, so you will not be able to power them back on using the vSphere Web Client. Use the vSphere Host Client to power them back on. Alternatively, the following steps can be followed.
  1. In vCenter, click the VCSA VM and select Summary tab -> Related Objects -> Host. This shows the ESXi host running the VM.
  2. Enable SSH on the ESXi host: Select ESXi Host -> Configure -> Security Profile -> Services -> Edit -> Enable SSH
  3. Login to ESXi Host using root credentials.
  4. Issue the following commands:
  • vim-cmd vmsvc/getallvms | grep "VMware vCenter Server"
This returns the "vmid" (the first number on the line); in this example the vmid is 1, so a value of 1 would be used.
  • vim-cmd vmsvc/power.getstate <vmid>
  • vim-cmd vmsvc/power.off <vmid>
Wait a few minutes and then power the VM back on.
  • vim-cmd vmsvc/power.on <vmid>

Note: The VCSA could take 30 to 60 minutes to restart, depending on the size of the configuration.

Repeat these steps for the PSC.
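For reference, the lookup and power cycle can be combined into a short shell sequence run on the ESXi host. This is a minimal sketch, assuming the VM's display name contains "VMware vCenter Server" exactly as shown by getallvms:

# Find the vmid of the VCSA, power it off, wait a few minutes, then power it back on:
vmid=$(vim-cmd vmsvc/getallvms | awk '/VMware vCenter Server/ {print $1; exit}')
vim-cmd vmsvc/power.getstate "$vmid"
vim-cmd vmsvc/power.off "$vmid"
sleep 300   # wait a few minutes before powering back on, as described above
vim-cmd vmsvc/power.on "$vmid"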

For other self-deployed Virtual Machines, the customer can decide on the proper time to perform the power cycle depending on the workload, but we suggest taking the action as early as possible. The customer is also responsible for Operating System-Specific Mitigations, that is, applying the latest OS patches to self-deployed VMs.

Related:


7023078: Security Vulnerability: “L1 Terminal Fault” (L1TF) - Hypervisor Information (CVE-2018-3620, CVE-2018-3646, XSA-273).

Full mitigation for this issue requires a combination of hardware and software changes. Depending on the guest type, software changes may be required at both the Hypervisor and guest level.

Updated Intel microcode (provided through your hardware / BIOS vendor or by SUSE) introduces a new feature called “flush_l1d”. Hypervisors and bare-metal kernels use this feature to flush the L1 data cache during operations which may be susceptible to data leakage (e.g. when switching between VMs in Hypervisor environments).

Software mitigations exist for the Linux Kernel and for Hypervisors. These mitigations include support for new CPU features, passing these features to guests, and support for enabling/disabling/tuning the mitigations. Recommended mitigations vary depending on the environment.

For the Linux kernel (on both bare metal and virtual machines) L1TF mitigation is controlled through the “l1tf” kernel boot parameter. For complete information on this parameter, see TID 7023077.

KVM

For KVM host environments, mitigation can be achieved through L1D cache flushes, and/or disabling Extended Page Tables (EPT) and Simultaneous MultiThreading (SMT).

The L1D cache flush behavior is controlled through the “kvm-intel.vmentry_l1d_flush” kernel command line option:

kvm-intel.vmentry_l1d_flush=always

The L1D cache is flushed on every VMENTER.

kvm-intel.vmentry_l1d_flush=cond

The L1D cache is flushed on VMENTER only when there can be a leak of host memory between VMEXIT and VMENTER. This could still leak some host data, such as the address space layout.

kvm-intel.vmentry_l1d_flush=never

Disables the L1D cache flush mitigation.

The default setting here is “cond”.

The l1tf “full” setting overrides the settings of this configuration variable.
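The effective setting can be checked at runtime on a patched kernel; the sysfs paths below exist on kernels carrying the L1TF mitigations, and the GRUB example is illustrative for a SUSE system using grub2:

# Overall L1TF mitigation status as reported by the kernel:
cat /sys/devices/system/cpu/vulnerabilities/l1tf

# Current kvm-intel L1D flush mode (always / cond / never):
cat /sys/module/kvm_intel/parameters/vmentry_l1d_flush

# To make a different mode persistent, add it to the kernel command line and rebuild the GRUB config:
#   GRUB_CMDLINE_LINUX_DEFAULT="... kvm-intel.vmentry_l1d_flush=always"   (in /etc/default/grub)
#   grub2-mkconfig -o /boot/grub2/grub.cfg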


L1TF can be used to bypass Extended Page Tables (EPT). To mitigate this risk, it is possible to disable EPT and use shadow pages instead. This mitigation is available through the “kvm-intel.enable_ept” option:
kvm-intel.enable_ept=0

The Extended Page tables support is switched off.
As shadow pages are much less performant than EPT, SUSE recommends leaving EPT enabled and using L1D cache flushes and SMT tuning for full mitigation.
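Whether EPT is currently enabled can be verified through the module parameter (reported as Y or N on current kernels):

# Check the running value of the kvm-intel EPT setting:
cat /sys/module/kvm_intel/parameters/ept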


To eliminate the risk of untrusted processes or guests exploiting this vulnerability on a sibling hyper-thread, Simultaneous MultiThreading (SMT) can be disabled completely.

SMT can be controlled through kernel boot command line parameters, or on-the-fly through sysfs:

On the kernel boot command line:

nosmt

SMT is disabled, but can be later reenabled in the system.

nosmt=force

SMT is disabled, and can not be reenabled in the system.

If this option is not passed, SMT is enabled. Any SMT option used with the “l1tf” kernel parameter overrides this “nosmt” option.


SMT can also be controlled through sysfs:

/sys/devices/system/cpu/smt/control

This file allows reading the current control state and disabling or (re)enabling SMT.

Possible states are:

on

SMT is supported and enabled.

off

SMT is supported, but disabled. Only primary SMT threads can be onlined.

forceoff

SMT is supported, but disabled. Further control is not possible.

notsupported

SMT is not supported.

Potential values that can be written into this file:

on

off

forceoff

/sys/devices/system/cpu/smt/active

This file contains the state of SMT, if it is enabled and active, where active means that multiple threads run on 1 core.
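For example, SMT can be checked and switched off on a running system through these files (the change takes effect immediately; use the boot parameters above to make it persistent):

# Is SMT currently active (1) or not (0)?
cat /sys/devices/system/cpu/smt/active

# Disable SMT now; sibling hyper-threads are offlined:
echo off > /sys/devices/system/cpu/smt/control
cat /sys/devices/system/cpu/smt/control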

Xen

For Xen hypervisor environments, mitigation is enabled by default and varies based on guest type. Manual adjustment of the “smt=” parameter is recommended, but the remaining parameters are best left at default values. A description of all relevant parameters is provided in the event any changes are necessary.

PV guests achieve mitigation at the Xen Hypervisor level. If a PV guest attempts to write an L1TF-vulnerable PTE, the hypervisor will force shadow mode and prevent the vulnerability. PV guests which fail to switch to shadow mode (e.g. due to a memory shortage at the hypervisor level) are intentionally crashed.

pv-l1tf=[ <bool>, dom0=<bool>, domu=<bool> ]

By default, pv-l1tf is enabled for DomU environments and, for stability and performance reasons, disabled for Dom0.

HVM guests achieve mitigation through a combination of L1D flushes, and disabling SMT.

spec-ctrl=l1d-flush=<bool>

This parameter determines whether or not the Xen hypervisor performs L1D flushes on VMEntry. Regardless of this setting, this feature is virtualized and passed to HVM guests for in-guest mitigation.

smt=<bool>
This parameter can be used to enable/disable SMT from the hypervisor. Xen environments hosting any untrusted HVM guests, or guests not under the full control of the host admin, should either disable SMT (through the BIOS or the smt=<bool> parameter), or ensure HVM guests use shadow mode (hap=0) in order to fully mitigate L1TF. It is also possible to reduce the risk of L1TF through the use of CPU pinning, custom CPU pools, and/or soft-offlining of some hyper-threads.
These approaches are beyond the scope of this TID, but are documented in the standard Xen documentation.
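On a SUSE Xen host these options belong on the hypervisor command line rather than on the Linux kernel line. An illustrative sketch for disabling SMT, assuming grub2 and SUSE's GRUB_CMDLINE_XEN_DEFAULT variable:

# In /etc/default/grub, append the option to the Xen line, then rebuild the GRUB config and reboot:
#   GRUB_CMDLINE_XEN_DEFAULT="... smt=false"
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot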

WARNING – The combination of Meltdown mitigation (KPTI) and shadow mode on hardware which supports PCID can result in a severe performance degradation.

NOTE – Efforts are ongoing to implement scheduling improvements that allow hyper-thread siblings to be restricted to threads from a single guest. This will reduce the exposure of L1TF, and the requirement to disable SMT in many environments.

Related:

NVP-vProxy: vSphere vCenter virtual machine updates are not automatically populated in the VMware View

Article Number: 501545 Article Version: 3 Article Type: Break Fix



NetWorker 9.1,NetWorker 9.2,NetWorker 9.1.1

The NetWorker VMware Protection integration is configured with the vProxy Appliance. The VMware Administrator adds or removes a Virtual machine to the vCenter inventory. The NetWorker Administrator does not see the virtual machine details updated in the VMware View or in the NSR Protection Group for their active NetWorker Management Console (NMC) session. For example, if a virtual machine is created in the VMware vCenter inventory, it will not show in the VMware View or NetWorker Protection Group details.

If the NMC session is left open for 24 hours, it will not automatically update the virtual machine details. If the NMC VMware View is manually refreshed or re-opened the virtual machine details will be updated properly and the VMware View will reflect the changes.

The NMC is working as designed. When the NMC is opened, it runs the nsrvim process for all the configured vCenter servers to update the NetWorker resource database (nsrdb) and populate the VMware View details. Once populated, the VMware View (and the NSR Protection Group) is not automatically re-populated unless the user selects “Refresh” for the VMware View details.

A virtual machine was created or deleted in the VMware vCenter inventory after opening the NMC or refreshing the VMware View.

In the NMC select the VMware View and then the “Refresh” option. The VMware View refresh will run the nsrvim process for all the configured vCenter servers and update the VMware View with the changes.

The VMware vCenter environment details are stored in the NetWorker resource database (nsrdb) under the NSR Hypervisor resource. The NSR Hypervisor resource contains the “environment” variable, which stores the VMware vCenter information in XML format. This information is used by several NetWorker components to obtain details about the configured VMware vCenter servers. The nsrvim process is responsible for connecting to the configured NSR Hypervisor resources and updating the vCenter information in the nsrdb.

The nsrvim execution is called by several processes in the NetWorker environment to ensure the most recent information is available in the NSR Hypervisor attribute. The manual execution of the nsrvim, or execution via another process, will not automatically update the NMC VMware View. The only nsrvim execution that will update the NMC VMware View is via the opening of the NMC or VMware View refresh.
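To inspect what NetWorker currently has stored for the configured vCenter servers, the NSR Hypervisor resources can be viewed from the server's resource database with nsradmin. A hedged example (the server name networker_server is a placeholder, and attribute names may vary slightly by release):

# Query the NetWorker server's resource database for hypervisor resources:
nsradmin -s networker_server
nsradmin> print type: NSR Hypervisor
nsradmin> quit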

Related:

Required amount of Free Memory for XenServer with vGPU

Question:

When XenServer has vGPU enabled, what is the amount of memory that needs to be left free and unassigned in the XenServer host for Xen Hypervisor utilization?

Answer:

When using the vGPU functionality on a XenServer host, for every GB of memory assigned to a VM, 32MB of host memory is used by Xen for PV-IOMMU. So, for every 1GB of memory assigned to a VM with vGPU, 32MB of memory should be left free for Xen Hypervisor use. For example, if you have a XenServer host with 5 Virtual Machines and each VM has 10GB of memory assigned, you need 5 * 10 * 32MB = 1600MB of memory left unassigned and available on the XenServer host for the Xen Hypervisor.
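The arithmetic can be sanity-checked with a one-liner; the VM count and per-VM memory below are the example values from the text:

# Reserve = number of vGPU VMs x GB assigned per VM x 32MB per GB
vms=5; gb_per_vm=10
echo "$(( vms * gb_per_vm * 32 )) MB must remain unassigned for Xen"   # prints 1600 MB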

If you do not allocate free memory when using vGPU-enabled VMs, you may encounter an unexpected out-of-memory condition in Xen (even though XenCenter indicates that free memory is available). This can trigger a GPU or VM failure, and in rare cases, a host crash. The out-of-memory issue occurs due to poor reporting of an inefficient memory structure in Xen which is used to support vGPU workloads.

Note: Customers running XenServer 7.1 Cumulative Update 1 can resolve the out of memory issue by installing Hotfix XS71ECU1023.

Related:
