How to Use host-cpu-tune to Fine-Tune XenServer 6.2.0 Performance

Pinning Strategies

  • No Pinning (default): When no pinning is in effect, the Xen hypervisor is free to schedule a domain’s vCPUs on any pCPUs.
    • Pros: Greater flexibility and better overall utilization of available pCPUs.
    • Cons: Possibly longer memory access times, particularly on NUMA-based hosts. Possibly lower I/O throughput and slower control plane operations when pCPUs are overcommitted.
    • Explanation: When vCPUs are free to run on any pCPU, they may allocate memory in various regions of the host’s memory address space. At a later stage, a vCPU may run on a different NUMA node and require access to that previously allocated data. This makes poor use of pCPU caches and incurs higher access times to that data. Another aspect is the impact on I/O throughput and control plane operations. When more vCPUs require execution time than there are pCPUs available, the Xen hypervisor might not be able to schedule dom0’s vCPUs when they need to run. This negatively affects all operations that depend on dom0, including I/O throughput and control plane operations.
  • Exclusive Pinning: When exclusive pinning is in effect, the Xen hypervisor pins dom0 vCPUs to pCPUs in a one-to-one mapping. That is, dom0 vCPU 0 runs on pCPU 0, dom0 vCPU 1 runs on pCPU 1, and so on. Any VM running on that host is pinned to the remaining set of pCPUs.
    • Pros: Possibly shorter memory access times, particularly on NUMA-based hosts. Possibly higher I/O throughput and faster control plane operations when pCPUs are overcommitted.
    • Cons: Lower flexibility and possibly poor utilization of available pCPUs.
    • Explanation: If exclusive pinning is on and VMs are running CPU-intensive applications, they might under-perform by not being able to run on pCPUs allocated to dom0 (even when dom0 is not actively using them). The example after this list shows how to inspect the current vCPU-to-pCPU affinities on a host.
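
To verify which pCPUs each vCPU is currently allowed to run on, the xl vcpu-list command (also listed under the additional commands at the end of this article) can be run in dom0. The exact output columns may vary between Xen releases; "all" in the affinity column indicates no pinning, while a one-to-one mapping for dom0 vCPUs with VM vCPUs restricted to the remaining pCPUs indicates exclusive pinning:

[root@host ~]# xl vcpu-list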

Note: The exclusive pinning functionality provided by host-cpu-tune will honor specific VM vCPU affinities configured using the VM parameter VCPUs-params:mask. For more information, refer to the VM Parameters section in the appendix of the XenServer 6.2.0 Administrator’s Guide.
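
For example, to manually pin a specific VM’s vCPUs to pCPUs 4 to 7, the mask can be set with the xe CLI (the UUID below is a placeholder for illustration):

[root@host ~]# xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=4,5,6,7

host-cpu-tune leaves such a manually configured mask in place when applying exclusive pinning.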

Using host-cpu-tune

The tool is located at /usr/lib/xen/bin/host-cpu-tune. When executed with no parameters, it displays its usage help:

[root@host ~]# /usr/lib/xen/bin/host-cpu-tune
Usage: /usr/lib/xen/bin/host-cpu-tune { show | advise | set <dom0_vcpus> <pinning> [--force] }
       show      Shows current running configuration
       advise    Advise on a configuration for current host
       set       Set host's configuration for next reboot
                 <dom0_vcpus> specifies how many vCPUs to give dom0
                 <pinning>    specifies the host's pinning strategy
                              allowed values are 'nopin' or 'xpin'
                 [--force]    forces xpin even if VMs conflict
Examples: /usr/lib/xen/bin/host-cpu-tune show
          /usr/lib/xen/bin/host-cpu-tune advise
          /usr/lib/xen/bin/host-cpu-tune set 4 nopin
          /usr/lib/xen/bin/host-cpu-tune set 8 xpin
          /usr/lib/xen/bin/host-cpu-tune set 8 xpin --force
[root@host ~]#

Recommendations

The tool takes into account the total number of pCPUs in the host and advises as follows:

# num of pCPUs <  4  ===> same num of vCPUs for dom0 and no pinning
#              < 24  ===> 4 vCPUs for dom0 and no pinning
#              < 32  ===> 6 vCPUs for dom0 and no pinning
#              < 48  ===> 8 vCPUs for dom0 and no pinning
#              >= 48 ===> 8 vCPUs for dom0 and exclusive pinning
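
These thresholds can be expressed as a short shell sketch. This is illustrative only, not the actual host-cpu-tune implementation, and it assumes that xl info reports the host’s physical CPU count in its nr_cpus field:

#!/bin/bash
# Mirror the advise heuristic above (illustrative sketch only).
pcpus=$(xl info | awk '/^nr_cpus/ {print $3}')
if   [ "$pcpus" -lt 4  ]; then echo "set $pcpus nopin"
elif [ "$pcpus" -lt 24 ]; then echo "set 4 nopin"
elif [ "$pcpus" -lt 32 ]; then echo "set 6 nopin"
elif [ "$pcpus" -lt 48 ]; then echo "set 8 nopin"
else                           echo "set 8 xpin"
fi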

The utility works in three distinct modes:

  1. Show: This mode displays the current dom0 vCPU count and infers the current pinning strategy.

    Note: This functionality will only examine the current state of the host. If configurations are changed (for example, with the set command) and the host has not yet been rebooted, the output may be inaccurate.

  2. Advise: This recommends a dom0 vCPU count and a pinning strategy for this host.

    Note: This functionality takes into account the number of pCPUs available in the host and makes a recommendation based on heuristics determined by Citrix. System administrators are encouraged to experiment with different settings and find the one that best suits their workloads.

  3. Set: This functionality changes the host configuration to use the specified number of dom0 vCPUs and pinning strategy. A typical workflow is shown after this list.

    Note: This functionality may change parameters in the host boot configuration files. It is highly recommended to reboot the host as soon as possible after using this command.

    Warning: Assigning zero vCPUs to dom0 (with set 0 nopin) will prevent the host from booting.
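
Putting the three modes together, a typical tuning session might look like the following. The vCPU count and pinning strategy below are examples; use the values that advise reports for the host:

[root@host ~]# /usr/lib/xen/bin/host-cpu-tune show
[root@host ~]# /usr/lib/xen/bin/host-cpu-tune advise
[root@host ~]# /usr/lib/xen/bin/host-cpu-tune set 8 xpin
[root@host ~]# reboot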

Resetting to Default

The host-cpu-tune tool uses the same heuristics as the XenServer Installer to determine the number of dom0 vCPUs. The installer, however, never activates exclusive pinning because of race conditions with Rolling Pool Upgrades (RPUs). During RPU, VMs with manual pinning settings can fail to start if exclusive pinning is activated on a newly upgraded host.

To reset the dom0 vCPU pinning strategy to default:

  1. Run the following command to find out the number of recommended dom0 vCPUs:

    [root@host ~]# /usr/lib/xen/bin/host-cpu-tune advise

  2. Configure the host accordingly, without any pinning, where <count> is the recommended number of dom0 vCPUs indicated by the advise command:

    [root@host ~]# /usr/lib/xen/bin/host-cpu-tune set <count> nopin

  3. Reboot the host. The host will now have the same settings as it did when XenServer 6.2.0 was installed.

Usage in XenServer Pools

Settings configured with this tool only affect a single host. If the intent is to configure an entire pool, this tool must be used on each host separately.

When one or more hosts in the pool are configured with exclusive pinning, migrating VMs between hosts may change a VM's pinning characteristics. For example, if a VM is manually pinned with the VCPUs-params:mask parameter, migrating it to a host configured with exclusive pinning may fail. This can happen if one or more of that VM's vCPUs are pinned to a pCPU index exclusively allocated to dom0 on the destination host.
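
Before migrating a VM to a host configured with exclusive pinning, it can therefore be worth checking whether the VM carries a manual mask. One way to do this with the xe CLI (the UUID below is a placeholder):

[root@host ~]# xe vm-param-get uuid=<vm-uuid> param-name=VCPUs-params param-key=mask

If the returned mask references pCPU indices that are exclusively allocated to dom0 on the destination host, the migration may fail as described above.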

Additional commands to obtain information concerning CPU topology:

[root@host ~]# xenpm get-cpu-topology
[root@host ~]# xl vcpu-list
