Task failure, FaultType: SystemError – Message: A general system error occurred: The system returned an error. Communication with the virtual machine might have been interrupted.

When performing any of a number of tasks in Unidesk – rebuilding desktops, deleting desktops, performing backups – you may get a task failure that says:

FaultType: SystemError

Message: A general system error occurred: The system returned an error. Communication with the virtual machine might have been interrupted.

If you look in vSphere at the same time, you will see a task that also fails with the same information. This doesn’t tell us exactly how the VM is broken, but it tells us pretty clearly that the VM is broken in vSphere. Unidesk can’t fix it. In fact, we probably can’t even delete it, which means we can’t do a repair to a new CP. Certainly, any plain rebuild (including a Repair operation) will fail, because vSphere itself won’t reconfigure the machine.

o You may be able to migrate this VM to a new host (do not change the datastore) to rehabilitate it.

o You may be able to remove it from the vSphere inventory, add it back, and then in Unidesk, System -> Synchronize Infrastructure to get us to use the new VM ID. Adding a machine to inventory does a lot of sanity checking on the VMX file.

o If you have a Unidesk backup, you may be able to remove the VM from the vSphere inventory, delete the boot image from the datastore, and use the Unidesk Repair function to recreate the VM.

o You may just have to open a case with VMware and get them to either fix or destroy that VM.

KB #33931

Related:

  • No Related Posts

Avamar Plug-in for Hyper-V VSS : Hyper-V backups failing with “avtar Error : Specified source path does not exist”

Article Number: 501887 Article Version: 4 Article Type: Break Fix



Avamar Plug-in for Hyper-V VSS

Hyper-V backups may fail with errors similar to the following:

avhypervvss Error <13117>: Unable to successfully process backup workorder MOD-1490821241871#101, targets

\\?\Volume{abcd-1234-xxxx-xxxx-xxxxxxx}\dellemc\Virtual Machines\12345-abcd-6789-efgh-XXXX.xml

avhypervvss Error <17120>: Failed to process federated backup workorder.

avhypervvss Error <0000>: Federated backup did not complete successfully.

_________________________________________________________________________________________________

avtar Error <7553>: Specified source path “\\?\Volume{abcd-1234-xxxx-xxxx-xxxxxxx}\dellemc\Virtual Machines\12345-abcd-6789-efgh-XXXX.xml” does not exist; ignored

avtar Error <7553>: Specified source path “\\?\Volume{abcd-1234-xxxx-xxxx-xxxxxxx}\dellemc\Virtual Hard Disks\Avamar-vm.vhdx” does not exist; ignored

avtar Error <7553>: Specified source path “\\?\Volume{abcd-1234-xxxx-xxxx-xxxxxxx}\dellemc\Virtual Hard Disks\Avamar-vm-AutoRecovery.avhdx” does not exist; ignored



avtar Error <8012>: Error in path list: All paths are invalid.. Correct path list before retrying.

The affected VMs are configured with a Volume GUID instead of a drive letter. This can be confirmed either via PowerShell or from the Properties page of the VM in Hyper-V Manager.

Example:

\\?\Volume{abcd-1234-xxxx-xxxx-xxxxxxx}\dellemc\Virtual Machines

The Hyper-V VSS plug-in does not support mountvol GUID paths for VM configuration, because restores would fail if the same GUID is not found (for example, when drives are replaced).

Fix the VM configuration paths to point to a drive letter, e.g. L:\Virtual Machines or G:\Virtual Machines.
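To find which VMs need their paths fixed, the unsupported GUID-style path can be recognized programmatically. A minimal sketch in Python (the example paths here are hypothetical; in practice you would feed in the configuration paths reported by Hyper-V):

```python
import re

# A mountvol GUID path looks like \\?\Volume{GUID}\..., whereas a
# supported path starts with a drive letter such as L:\...
GUID_PATH = re.compile(r'^\\\\\?\\Volume\{[0-9a-fA-F-]+\}\\')

def uses_volume_guid(path: str) -> bool:
    """Return True if the VM configuration path uses a mountvol GUID
    instead of a drive letter (unsupported by the Hyper-V VSS plug-in)."""
    return bool(GUID_PATH.match(path))

print(uses_volume_guid(r'\\?\Volume{abcd-1234}\dellemc\Virtual Machines'))  # True
print(uses_volume_guid(r'L:\Virtual Machines'))  # False
```

Any VM for which the check returns True should have its configuration and virtual hard disk paths moved to a drive-letter location before the next backup attempt.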


VMware asks whether “I moved it” or “I copied it” for a Unidesk desktop VM

vSphere may freeze a Unidesk VM (or refuse to reconfigure it or power it on or even delete it), and instead prompts you with a question about it having been moved or copied.

Virtual Machine Message

This virtual machine might have been moved or copied. In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied. If you don’t know, answer “I copied it.”

o Cancel

o I moved it

o I copied it

It turns out that the answer doesn’t matter. Technically, the answers mean “there is a second copy of this VM, and therefore I need to regenerate all the automatic GUIDs and addresses” or “this is still the only copy of this VM, so I can re-use the auto-generated information I already have.” Since this is happening spontaneously on an individual machine, and there is no copy operation, you can safely select “I moved it”. However, it is always safe to select “I copied it”, as the message suggests.

We don’t know why this happens in particular. It appears to be an ESX host locking issue in which two hosts claim a lock on the VMX file, and the “I moved it” question is part of the process by which the correct ESX host breaks the lock and claims the VM. It may be specific to particular kinds of storage, because we have seen it more often on SSD SANs, possibly involving VAAI.

The only workaround we’re aware of, other than manually answering the question, is to edit the VMX file to pre-answer it:

http://kb.vmware.com/kb/1027096
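Per the VMware KB above, the question can be pre-answered with a `uuid.action` setting in the VM’s configuration file. A minimal sketch (edit the .vmx only while the VM is powered off; "keep" preserves the existing UUID, equivalent to answering "I moved it", while "create" generates a new UUID, equivalent to "I copied it"):

```
uuid.action = "keep"
```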

Also, note that when a question is pending in vSphere for a VM, VMware will fail tasks for that desktop with the error “Another task is already in progress.” You may find that Unidesk tasks fail with the message, “Failed to detach disks from the desktop and reattach to the CachePoint Appliance,” but if you look at the vSphere task list, you find that the vSphere task is failing with, “Another task is already in progress.” Check to see if there is a question pending, and answer it, and the rebuild should succeed on the next attempt.


FAQ: Personal vDisk in XenDesktop

Q: Can multiple PvDs be associated to a device/user?

A: There can only be one PvD per Virtual Machine. The PvD is assigned to a Virtual Machine when building the catalog of desktops. The pool type for a PvD catalog is pooled static, in which the desktop is assigned to the user on first use.

Q: Is the PvD a 1-1 mapping per user?

A: It is a 1:1 mapping to a Virtual Machine in a catalog; that Virtual Machine is then assigned to the user on first use, so the PvD is attached to the Virtual Machine assigned to the user. The administrator can move a PvD to a new virtual machine in a recovery situation.

Q: If you create a pooled catalog with PvD, doesn’t that mean the user is always assigned to that Virtual Machine, defeating one of the benefits of a pooled catalog?

A: The base image is still shared and updated across the pool. However, once the user makes an initial connection to a Virtual Machine, the Virtual Machine is kept assigned to the user.

Note: The connection must be made early in the startup stage, long before the user is known, in order to maximize application compatibility for services, devices, etc.

Q: How does the pooled with personal vDisk catalog affect idle pool?

A: After the user connects, this user is kept assigned to the Virtual Machine.

Because the connection must be made early in the startup stage, long before the user is known (to maximize application compatibility for services, devices, etc.), you would use power management rather than idle pool management to handle idle Virtual Machine workloads on the hypervisor.

Q: What Operating Systems are supported for PvD?

A: Windows 7 x86, Windows 7 x64, and Windows 10 up to v1703.

Q: Is PvD only for Desktop Operating Systems or will it also work with Server Operating Systems?

A: It is only supported on Desktop Operating Systems.

Design and Deploy

Q: What kinds of risks are there for BSODs with PvDs?

A: PvD is architected to be compatible with a wide range of Windows software, including software that loads drivers. However, drivers that load in phase 0, or software that alters the networking stack of the machine (through the installation of additional miniports, intermediate drivers, or protocol drivers), might cause PvD to not operate as expected. You must install these types of software in the base Virtual Machine image.


7022687: Open Enterprise Server 2018 upgrade fails with blank screen when virtual machine hardware is set to old level

This document (7022687) is provided subject to the disclaimer at the end of this document.

Environment

VMware

Open Enterprise Server 2018 (OES 2018) Linux

Situation

When booting from the VMware-connected OES 2018 ISO file in the virtual drive, the system boots properly, the “Upgrade” option is selected, and various items continue to load on the console, but then nothing is shown except a black screen. In some cases, the virtual machine (VM) will not boot at all.

When checked, the VMware hardware compatibility level was set to four (4).

Resolution

Change the virtual machine (VM) hardware compatibility level to 5.1 (or later) and, if needed, ensure that the display resolution is at least 1024×768 to resolve the issue.

Screenshot before the VMware configuration change:

Screenshot after the VMware configuration change:

Additional Information

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


ICO – Resize fails – reason is_scsi

Hello

In ICO 2.4.0.2, resize does not work for some of the VMs. We tried with different VMs, including newly created ones, and everything is OK; the issue appears only for a few. It is possible that something was changed at the VMware level. Any idea what could be the cause?

Thank you

Error in BPM log:

wle_javascrip I Error Handler – going to throw following error exception: CTJCO0815EResize of virtual machine(s) failed 713771df-d3e7-4ad7-b4e5-9cc4dcfdabad

Compute log:

2018-01-12 10:22:50.061 16409 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: ‘is_scsi’
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py”, line 134, in _dispatch_and_reply
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py”, line 177, in _dispatch
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py”, line 123, in _do_dispatch
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/exception.py”, line 88, in wrapped
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher payload)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py”, line 68, in __exit__
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/exception.py”, line 71, in wrapped
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 305, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher LOG.warning(msg, e, instance_uuid=instance_uuid)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py”, line 68, in __exit__
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 282, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 358, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher function(self, context, *args, **kwargs)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 270, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher migration.instance_uuid, exc_info=True)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py”, line 68, in __exit__
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 257, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 334, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher e, sys.exc_info())
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py”, line 68, in __exit__
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 321, in decorated_function
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 3756, in resize_instance
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher self.instance_events.clear_events_for_instance(instance)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib64/python2.6/contextlib.py”, line 34, in __exit__
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher self.gen.throw(type, value, traceback)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 5952, in _error_out_instance_on_exception
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher self._set_instance_error_state(context, instance_uuid)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py”, line 68, in __exit__
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 5925, in _error_out_instance_on_exception
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher yield
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 3707, in resize_instance
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher hot_resize, new_mem = self._check_hot_resize_enabled(instance, instance_type)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/compute/manager.py”, line 3771, in _check_hot_resize_enabled
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher resize = self.driver.hot_resize_enabled(instance, new_instance_type, old_instance_type)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py”, line 581, in hot_resize_enabled
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher resize = self._vmops.hot_resize_enabled(instance)
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher File “/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py”, line 2121, in hot_resize_enabled
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher resize[‘disk_resizable’] = root_disk[‘is_scsi’]
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher KeyError: ‘is_scsi’
2018-01-12 10:22:50.061 16409 TRACE oslo.messaging.rpc.dispatcher
2018-01-12 10:22:50.063 16409 ERROR oslo.messaging._drivers.common [-] Returning exception ‘is_scsi’ to caller
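The final frame of the traceback shows the immediate failure: the VMware driver’s hot-resize check indexes the root-disk metadata with `root_disk['is_scsi']`, and for the affected VMs that key is absent (consistent with the disk configuration having been changed outside ICO at the VMware level). A hedged sketch of the fragile pattern and a defensive alternative, using illustrative dictionaries rather than the actual Nova data structures:

```python
# Illustrative root-disk metadata as the driver might see it. A VM whose
# disk was reconfigured outside ICO may be missing the 'is_scsi' key,
# which is exactly what raises the KeyError in the traceback.
root_disk_ok = {'path': '[ds1] vm/vm.vmdk', 'is_scsi': True}
root_disk_broken = {'path': '[ds1] vm/vm.vmdk'}  # no 'is_scsi' key

def disk_resizable(root_disk: dict) -> bool:
    # Fragile form: root_disk['is_scsi'] raises KeyError when the key is
    # absent. Using .get() with a safe default instead lets the resize
    # path fall back to a cold resize rather than erroring the VM out.
    return bool(root_disk.get('is_scsi', False))

print(disk_resizable(root_disk_ok))      # True
print(disk_resizable(root_disk_broken))  # False
```

This only illustrates why the exception occurs; the practical remedy is to correct the VM’s disk configuration (or open a support case) rather than patch the driver.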
