“Power Control – Restart” works but indicates failure?

I do not need a solution (just sharing information)

When using the builtin “Power Control – Restart” task, the task instance reports as Failed in the client job’s run history. This is a false negative – in reality, the computer does reboot and the client job does continue executing.

This seems to be affected by the Fast Startup feature in Windows 10, whereby the computer basically hibernates instead of shutting down.

Just curious to hear — does this affect anyone else?




Error: “the system cannot find the path specified” after boot disk is created

I need a solution

Dear experts:

I am trying the latest Ghost Standard Tools in order to create an image of a UEFI Windows 10 64-bit system.

I have created an automation boot disk (USB device), and the boot process works fine but I get an error “the system cannot find the path specified”, and Ghost is not launched.

The question is: is Ghost copied to the USB when the boot disk is created, or does the boot process try to launch Ghost from the server where the tools are installed?

How can I troubleshoot this error message?

Thank you very much,




“Downloading Disk Image”/ Red X. Status code 1 when restoring.

I need a solution

I have over 400 PCs that I used GSS 2.5.1 on almost flawlessly. Then 25% of them were upgraded to Windows 10, I had to upgrade to GSS 3.x, and everything that was simple became very complicated.

PCs not renaming to their current name after reimaging is just one of the problems.

This particular issue is about a PC that needs to receive a new image, but I cannot make it happen.

Some prelim info:

1. This PC has the Altiris agent and the automation folder installed.

2. This image pushed to an identical PC without problem, on the same subnet. (All PCs are on the same subnet).

3. Both the problem PC and the other PC had the same previous image, before pushing this new image.

4. Because 1 identical PC was able to boot and reimage without problem, I know the boot disk and PXE boot are correct.

The problem:

1. I drag the restore job to the appropriate PC object in console.

2. Eventually, the Status (in console) shows “Downloading disk image…”

3. When I check the “Status Detail” I see:

      Status                          Time                    Status Code    Module

       i  Booting to Automation       {current time stamp}    blank          Deploy Server

       i  Downloading disk image…     {current time stamp}    blank          Deploy Server

       X                              {current time stamp}    1              User Defined

       i  Downloading disk image…     {current time stamp}    blank          Deploy Server

       X                              {current time stamp}    1              User Defined

      (the last two rows keep repeating)

This continues every ~16 seconds, and nothing ever happens. I let this go from last Friday morning, and it was still going today (Monday morning) before I stopped it.

To recap:

The PCs are exactly the same HW.

The PCs had the same previous image, so they are exactly the same SW.

The PCs are on the same network/subnet.

One PC connected and reimaged fine. This PC gets stuck in the above “Downloading disk image…” / red “X” status 1 loop.

Any advice or suggestions?

Thank you,




ScaleIO: Cannot install Windows 2012 on VxRack Node hardware

Article Number: 493737 Article Version: 3 Article Type: Break Fix

ScaleIO 1.32.1, ScaleIO 1.32.2, ScaleIO 1.32.3, ScaleIO 1.32.4, ScaleIO 1.32.5, ScaleIO 1.32.6, ScaleIO 2.0, ScaleIO 2.0.0, ScaleIO 2.0.1

Issue Description

Unable to install Windows 2012 R2 on VxRack Node hardware


Customer is following the SolVe Desktop procedures to install Windows 2012 R2 on VxRack Node hardware and is failing at the “Windows Setup” screen.


Upon choosing which partition to install Windows to in the “Windows Setup” screen and hitting the “Next” button, an error will pop up:

“Windows cannot be installed to this disk. This computer’s hardware may not support booting to this disk. Ensure that the disk’s controller is enabled in the computer’s BIOS menu.”



Cannot install Windows OS to the VxRack Node hardware, upon which ScaleIO will be installed.

Windows will not recognize the UEFI disk as bootable.


If the SolVe Desktop procedures have been followed precisely, the workaround is to make the boot disk use legacy mode.

In the BIOS of the VxRack node, change the following settings:

  1. Advanced -> CSM Configuration -> Storage -> Legacy
  2. Boot -> Boot Mode -> Legacy
  3. Boot -> Boot Option #1 -> Hard Disk:(Bus 03 Dev 00)PCI Raid Adapter

    This can be manipulated under “Boot -> Hard Disk Drive BBS Priorities”

If step #3 shows the SATADOM, make sure that the virtual drive has already been created. If it has, reboot the host one more time so that the hard drive shows correctly in step #3.

This will allow Windows to see the disk as bootable and then successfully install Windows 2012 on the hard drive.

Impacted versions


v 2.0.x

Fixed in version



VxRack Node: storcli commands not effective

Article Number: 500119 Article Version: 3 Article Type: Break Fix

VxRack Systems,VxRack Node,VxRack Node Software,VxRack Node Hardware

Issue Description

storcli commands to set VD attributes return successfully but the changes are not effective


Using storcli commands to set VD attributes. This can happen after a disk failure or after replacing a disk.


Some storcli commands to set VD attributes would return successfully, but upon checking, the VD attributes are not updated.

In some cases when there is a failed disk, the attributes of other VDs might change and storcli commands can’t set them.


Reduced performance.

This problem can occur in two situations:

  • RAID controller firmware has been upgraded, but the host has not been rebooted yet
  • There is preserved cache for the failed disk, which prevents the storcli commands from taking effect


Use “storcli /c0 show all” to check the version of the currently running firmware. If it is not the expected version, reboot the host in order to load the correct firmware.

If the firmware is already running at expected version, use the following command to check preservedcache:

storcli /c0 show preservedcache 

If there is preservedcache, this command will show it, for example:

Controller = 0
Status = Success
Description = None

-----------
VD State
-----------
 4 Missing
-----------

And the VD number in the output will most likely not be present in “storcli /c0/vall show”, as it’s usually a failed disk.

Use the following command to delete the preserved cache (replace ‘#’ with the number in the “VD” column of the above output; based on the above example, use ‘4’).

storcli /c0/v# delete preservedcache [force] 

After that use storcli commands to set VD attributes again.
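The check-then-delete sequence above can be scripted. Below is a minimal Python sketch that parses preservedcache output in the format of the sample shown above (an assumption; a real deployment should verify the format against its own storcli version) and prints the corresponding delete commands:

```python
import re

def parse_preserved_cache(output: str) -> list[int]:
    """Extract VD numbers with preserved cache from the text output of
    'storcli /c0 show preservedcache' (format assumed from the example)."""
    vds = []
    in_table = False
    for raw in output.splitlines():
        line = raw.strip()
        if line.startswith("VD State"):
            in_table = True       # rows after this header list the VDs
            continue
        m = re.match(r"^(\d+)\s+\w+", line)
        if in_table and m:
            vds.append(int(m.group(1)))
    return vds

sample = """Controller = 0
Status = Success
Description = None

-----------
VD State
-----------
 4 Missing
-----------
"""

for vd in parse_preserved_cache(sample):
    # The [force] flag is optional; see the storcli documentation.
    print(f"storcli /c0/v{vd} delete preservedcache")
```

Running this against the sample output prints the delete command for VD 4.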

Impacted versions

This is not a ScaleIO software issue; it can affect all servers with LSI RAID controllers and all versions of ScaleIO.

Fixed in version

N/A. This is not a ScaleIO issue.


Configuring PVS for High Availability with UEFI Booting and PXE service


Requirements and configuration:

  1. PVS 7.8 or above installed on all servers
  2. PXE service configured to run on multiple PVS servers
  3. Option 11 configured on the DHCP server for multiple PVS servers (or option 17 configured with a Round Robin DNS entry)
  4. vDisk stores configured with multiple PVS servers serving those stores

Additional information:

We can split a PVS target booting into 4 tasks.

Task 1

PXE client on target device getting an IP address and scope options.

IP address will come from DHCP server.

For the scope options there are two possibilities:

  • Scope options for PXE are defined on dhcp server
Options 66 and 67 specify the server name and file name for tftp retrieval of the pxe bootstrap file
  • PXE server (option 60, doesn’t need to be configured)
PXE server responds to DHCP request with PXE information, giving its own server name, and the appropriate file name for tftp retrieval of the pxe bootstrap file

Task 2

PXE client retrieves boot file via TFTP

Options 66 & 67:

  • PXE client retrieves the boot file from the TFTP server specified in the scope options. This TFTP address can be load balanced and configured for HA with NetScaler.
Round robin DNS can also be used for load balancing, but not for HA, as there is no recovery if one TFTP server is offline

PXE server:

  • The PXE server which responded first is used by PXE client.
PXE server specifies itself as the source tftp server, and provides the appropriate file name
  • In PVS 7.8 and above, the PXE service can provide the appropriate boot file, either the gen1/BIOS boot file (ardbp32.bin) or the gen2/UEFI file (pvsnbpx64.efi), depending on the PXE client request

Task 3

PXE client executes boot file which handles further booting, and the boot file contacts PVS login server.


  • Ardbp32.bin has been preconfigured with the addresses of PVS login servers


  • pvsnbpx64.efi is a signed file and cannot be preconfigured with PVS login servers.
  • Instead it will retrieve the location of PVS login servers from DHCP scope options, using either option 11, or option 17.
  • Option 17 can be used to specify a single PVS login server in the format: pvs:[]:17:6910
There is no HA for the login server in this scenario, since only a single IP address is used
  • Option 17 can be used to specify a DNS name, which is round robin list of all PVS servers in the format: pvs:[DNSRRENTRY]:17:6910
As the DNS entry resolves to multiple PVS servers, and non-responsive PVS login servers will be skipped over by the bootstrap, this is HA appropriate.
  • Option 11 can be used to specify a list of up to 32 PVS login servers.
As multiple login servers are specified, and non-responsive PVS login servers will be skipped over by the bootstrap, this is HA appropriate.
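For reference, the option 17 value format described above can be generated programmatically. This is a minimal sketch; the server names and IPs are placeholders, not values from any real deployment:

```python
def pvs_option17(server: str, port: int = 6910) -> str:
    """Build a DHCP option 17 value for the PVS UEFI bootstrap, following
    the format shown above: pvs:[<server>]:17:<port>. The bracketed field
    may be a single IP (no HA) or a round-robin DNS name (HA-appropriate)."""
    return f"pvs:[{server}]:17:{port}"

# Single login server (no HA):
print(pvs_option17("192.0.2.10"))              # pvs:[192.0.2.10]:17:6910
# Round-robin DNS entry resolving to all PVS servers (HA-appropriate):
print(pvs_option17("pvs-servers.example.local"))
```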

Task 4

PVS login server finds the vdisk assigned to the target device and tells the target device to switch to a PVS streaming server

  • Provided multiple PVS servers are configured to stream the vdisk, this step is highly available
  • If a PVS server is offline, a target device will not be instructed to stream from it


Boot failure 0xc000000f: Windows failed to load because the NLS data is missing, or corrupt.

This means that an NLS file (international code page) that Windows wants is not on the boot image, which means App Layering didn’t recognize it during our boot analysis. When you import an OS layer, the ELM reads the list of drivers and services to determine what needs to be on the boot image and what can wait until layers are available. Unfortunately, NLS files are not recognized. We automatically include only US and Western Europe code pages.

However, there is a way to include a file on the boot disk that our drivers do not automatically detect. You can do this on the Gold VM and reimport, but it’s usually easier to add a version to your existing OS layer and fix it there.

In the OS Layer, edit “C:\Program Files\Unidesk\Uniservice\bootfile.txt”. This is a list of all files in this VM that we identified as being critical system boot-time files. Add the files you need, save the file, and finalize the OS layer.

Unfortunately, App Layering does not in general know all the files that are necessary for a particular language. We have successfully tested this procedure with only a handful of regionalized Windows 7 systems: Greek, Russian, Slovenian, Japanese, and a few others. So you will need to do the analysis to determine what you need.

But a good first step is to edit bootfile.txt, search down to the other NLS files, and add lines like this (these are the two additional NLS files necessary for Japanese, for instance):



Note the slashes – they go forwards. Then finalize the OS layer and test.
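If you prefer to script the edit, here is a minimal Python sketch. The bootfile.txt path is the one named above; the Japanese code-page file names (c_932.nls for the ANSI/OEM page, c_10001.nls for the Mac page) are illustrative assumptions, so substitute whatever files your own analysis identifies:

```python
from pathlib import Path

def add_boot_files(bootfile: Path, entries: list[str]) -> None:
    """Append each entry to bootfile.txt unless it is already listed
    (comparison is case-insensitive, since Windows paths are)."""
    existing = {line.strip().lower() for line in bootfile.read_text().splitlines()}
    missing = [e for e in entries if e.lower() not in existing]
    if missing:
        with bootfile.open("a") as f:
            f.writelines(e + "\n" for e in missing)

bootfile = Path(r"C:\Program Files\Unidesk\Uniservice\bootfile.txt")
if bootfile.exists():  # only meaningful on an actual OS layer VM
    add_boot_files(bootfile, [
        "/Windows/System32/c_932.nls",    # assumed: Japanese ANSI/OEM page
        "/Windows/System32/c_10001.nls",  # assumed: Japanese Mac page
    ])
```

Note the forward slashes in the entries, matching the convention noted above.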

If you are unsure what code pages you are currently using, you can get that information from the registry. Look for the three values in this key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage

The three values are ACP, OEMCP and MACCP (the ANSI, OEM and Mac code pages, respectively).

You would need to include the appropriate NLS file for each of those entries. However, if you expect to need to support additional languages, remember to add them as well. A list of NLS tables for many languages is here; that is how I derived the above files (you will want the ANSI, MAC and OEM pages).
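Assuming the standard c_&lt;codepage&gt;.nls naming convention for Windows code-page table files, mapping those registry values to bootfile.txt entries might look like this (the Japanese values are just an example):

```python
def nls_file_for(codepage: str) -> str:
    """Map a code-page value (as read from the ACP/OEMCP/MACCP registry
    values) to its table file, using the standard c_<n>.nls naming."""
    return f"/Windows/System32/c_{int(codepage)}.nls"

# Example values for a Japanese system: ANSI/OEM page 932, Mac page 10001.
needed = sorted({nls_file_for(cp) for cp in ("932", "932", "10001")})
for entry in needed:
    print(entry)
```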


It’s not clear that the NLS files alone will be sufficient in all cases. You may end up with an iterative process of identifying what else might be required. However, if you determine that other files are necessary, you can always add those to bootfile.txt as well.

Do not be tempted to include all Windows files in bootfile.txt without a clear reason. Unidesk provides only 1GB of space for files in the boot image, and the NLS files alone add up to at least 10MB. However, we have seen at least one case where the end user was unable to determine what additional language was required, and they wound up adding all of the NLS files anyway. Their boot image did not run out of space, and they were able to boot with it. So, as a last-ditch effort, it’s possible to just add all of them.

NOTE: It has been reported that “C:/Windows/System/vgaoem.fon” might be required and might not be present, so if this KB doesn’t work, try adding that file as well if it is not already listed in bootfile.txt.

(Supplementary information for Unidesk V2 and V3 customers: you may not be able to fix this in a new version of the OS layer. If you have never gotten your OS layer to work, you might need to fix it in the Gold VM and reimport the OS layer. Or, if the damage was introduced in a newer version of the OS layer, delete that new version, recreate it, and make sure you add the appropriate files to bootfile.txt before finalizing.)


FAILURE – WRITE_DMA timed out and g_vfs_done errors in the Messages log of NetScaler VPX running on Hyper-V

The following errors will show up in the offline analysis under hard drive analysis, or in the /var/log/messages file.

Jan 31 21:05:59 <kern.crit> VPXHOSTNAME kernel: ad0: FAILURE – WRITE_DMA timed out LBA=11882751

Jan 31 21:05:59 <kern.crit> VPXHOSTNAME kernel: ad0: FAILURE – WRITE_DMA timed out LBA=11882783

Jan 31 22:03:59 <kern.crit> VPXHOSTNAME kernel: ad0: FAILURE – WRITE_DMA timed out LBA=27859423

Feb 1 18:04:45 <kern.crit> VPXHOSTNAME kernel: ad0: FAILURE – WRITE_DMA timed out LBA=27689727

Feb2017_09_07/var/log/messages:Jan 31 21:05:59 <kern.crit> VPXHOSTNAME kernel: g_vfs_done():ad0s1e[WRITE(offset=98304, length=16384)]error = 5

Feb2017_09_07/var/log/messages:Jan 31 21:05:59 <kern.crit> VPXHOSTNAME kernel: g_vfs_done():ad0s1e[WRITE(offset=114688, length=16384)]error = 5

Feb2017_09_07/var/log/messages:Jan 31 22:03:59 <kern.crit> VPXHOSTNAME kernel: g_vfs_done():ad0s1e[WRITE(offset=8180154368, length=4096)]error = 5

Feb2017_09_07/var/log/messages:Feb 1 18:04:45 <kern.crit> VPXHOSTNAME kernel: g_vfs_done():ad0s1e[WRITE(offset=8093270016, length=16384)]error = 5

These errors could also possibly cause the VPX to reboot unexpectedly.
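When triaging, it can help to pull the failing LBAs out of the log to see how widely the errors are spread across the disk. A minimal sketch (the sample lines mirror the format above, with a placeholder hostname):

```python
import re

# Matches the WRITE_DMA timeout lines shown in /var/log/messages above.
DMA_RE = re.compile(r"WRITE_DMA timed out LBA=(\d+)")

def failed_lbas(log_text: str) -> list[int]:
    """Return every LBA reported in a WRITE_DMA timeout message."""
    return [int(m.group(1)) for m in DMA_RE.finditer(log_text)]

sample = (
    "Jan 31 21:05:59 <kern.crit> VPXHOST kernel: ad0: FAILURE - WRITE_DMA timed out LBA=11882751\n"
    "Jan 31 22:03:59 <kern.crit> VPXHOST kernel: ad0: FAILURE - WRITE_DMA timed out LBA=27859423\n"
)
print(failed_lbas(sample))  # [11882751, 27859423]
```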


Avamar Client for Windows: Bare Metal Recovery (BMR) failed with error Firmware mismatch between source (UEFI) and target (BIOS) machines

Article Number: 499637 Article Version: 3 Article Type: Break Fix

Avamar Client for Windows

A BMR restore fails with the error ‘Firmware mismatch between source (UEFI) and target (BIOS) machines’.


The source (backup) contains UEFI boot firmware; however, the target (current machine) is configured with BIOS boot firmware.

Avamar detects this mismatch between source and target boot firmware, so the PreRestore fails with error code 0x8007065e and the BMR restore job is terminated.

As a workaround, shut down the target VM, change the VM’s boot firmware from BIOS to EFI, power it on, and retry the BMR restore.



Need Help in replacing Boot drive


Our support contract has just expired and we will be renewing it in a few days, but we got a boot drive replacement alert, and our third-party hardware support is replacing the boot drive.

We need to replace the boot drive on an NL400 node, and I have the document describing the procedure. Can someone brief me on the list of commands that need to be run on the cluster?

I’m only aware that the FRU package needs to be installed and a drive firmware update needs to be done. Where do the firmware packages need to be placed? The document just says to drag and drop them onto the cluster.