“Power Control – Restart” works but indicates failure?

I do not need a solution (just sharing information)

When using the builtin “Power Control – Restart” task, the task instance reports as Failed in the client job’s run history. This is a false negative – in reality, the computer does reboot and the client job does continue executing.

This seems to be affected by the Fast Startup feature in Windows 10, whereby the computer basically hibernates instead of shutting down.
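If anyone wants to rule Fast Startup in or out, it can be checked and turned off from an elevated command prompt (a rough sketch; the registry value below is the standard Fast Startup switch, but verify it on your own build before rolling anything out):

rem Check whether Fast Startup is enabled (1 = enabled, 0 = disabled)
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled

rem Disable Fast Startup only
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f

rem Or disable hibernation entirely, which also disables Fast Startup
powercfg /hibernate off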

Just curious to hear — does this affect anyone else?


Related:

Error: “the system cannot find the path specified” after boot disk is created

I need a solution

Dear experts:

I am trying the latest Ghost Standard Tools in order to create an image of a UEFI Windows 10 64-bit system.

I have created an automation boot disk (USB device), and the boot process works fine but I get an error “the system cannot find the path specified”, and Ghost is not launched.

The question is: is Ghost copied to the USB in the process of making the boot disk, or does the boot process try to launch Ghost from the server where the tools are installed?

How can I troubleshoot this error message?
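For reference, from the WinPE command prompt on the booted USB it is possible to see what the startup script tries to launch and whether the Ghost executable is actually on the media (a rough sketch; startnet.cmd is the standard WinPE startup script, while the drive letters and file names are only guesses):

rem Show what WinPE runs at startup and which path it points to
type X:\Windows\System32\startnet.cmd

rem Search the boot media for the Ghost executable (drive letters vary under WinPE)
dir /s C:\ghost*.exe
dir /s D:\ghost*.exe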

Thank you very much,

Mr.Clinker


Related:

“Downloading Disk Image”/ Red X. Status code 1 when restoring.

I need a solution

I have over 400 PCs that I used GSS 2.5.1 on almost flawlessly. Then 25% of them were upgraded to Windows 10, I had to upgrade to GSS 3.x, and everything that was simple became very complicated.

PCs not wanting to rename to their current name after reimaging is just one of the problems.

This particular issue is about a PC that needs to receive a new image, but I cannot make it happen.

Some prelim info:

1. This PC has the Altiris agent and the automation folder installed.

2. This image was pushed to an identical PC without problem, on the same subnet. (All PCs are on the same subnet.)

3. Both the problem PC and the other PC had the same previous image, before pushing this new image.

4. Because 1 identical PC was able to boot and reimage without problem, I know the boot disk and PXE boot are correct.

The problem:

1. I drag the restore job to the appropriate PC object in console.

2. Eventually, the Status (in console) shows “Downloading disk image…”

3. When I check the “Status Detail” I see:

Status                        Time                     Status Code    Module
 i  Booting to Automation     {current time stamp}     (blank)        Deploy Server
 i  Downloading disk image…   {current time stamp}     (blank)        Deploy Server
 X                            {current time stamp}     1              User Defined
 i  Downloading disk image…   {current time stamp}     (blank)        Deploy Server
 X                            {current time stamp}     1              User Defined
(the last two rows keep repeating)

This continues every ~16 seconds, and nothing ever happens. I let it run from last Friday morning, and it was still looping this morning (Monday) before I stopped it.

To recap:

The PCs are exactly the same HW.

The PCs had the same previous image, so they are exactly the same SW.

The PCs are on the same network/subnet.

One PC connected and reimaged fine. This PC gets stuck in the above “Downloading disk image…” / red “X” / status 1 loop.
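For what it’s worth, a basic check from the automation environment’s command prompt on the stuck PC would be to confirm it can actually reach the Deployment Server and its eXpress share (a rough sketch; the server name and image path below are placeholders):

rem Confirm the Deployment Server answers (placeholder name)
ping DEPLOYSERVER

rem Map the eXpress share and confirm the image file is visible (placeholder path)
net use Z: \\DEPLOYSERVER\eXpress
dir Z:\Images\MyImage.gho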

Any advice or suggestions?

Thank you,


Related:

  • No Related Posts

ScaleIO: Cannot install Windows 2012 on VxRack Node hardware

Article Number: 493737 Article Version: 3 Article Type: Break Fix



ScaleIO 1.32.1,ScaleIO 1.32.2,ScaleIO 1.32.3,ScaleIO 1.32.4,ScaleIO 1.32.5,ScaleIO 1.32.6,ScaleIO 2.0,ScaleIO 2.0.0,ScaleIO 2.0.0.1,ScaleIO 2.0.0.2,ScaleIO 2.0.0.3,ScaleIO 2.0.1,ScaleIO 2.0.1.1

Issue Description

Unable to install Windows 2012 R2 on VxRack Node hardware

Scenario

Customer is following the SolVe Desktop procedures to install Windows 2012 R2 on VxRack Node hardware and is failing at the “Windows Setup” screen.

Symptoms

Upon choosing which partition to install Windows to in the “Windows Setup” screen and hitting the “Next” button, an error will pop up:

“Windows cannot be installed to this disk. This computer’s hardware may not support booting to this disk. Ensure that the disk’s controller is enabled in the computer’s BIOS menu.”


Impact

Cannot install Windows OS to the VxRack Node hardware, upon which ScaleIO will be installed.

Windows will not recognize the UEFI disk as bootable.

Workaround

If the SolVe Desktop procedures have been followed precisely, the workaround is to make the boot disk use legacy mode.

In the BIOS of the VxRack node, change the following settings:

  1. Advanced -> CSM Configuration -> Storage -> Legacy
  2. Boot -> Boot Mode -> Legacy
  3. Boot -> Boot Option #1 -> Hard Disk:(Bus 03 Dev 00)PCI Raid Adapter

    This can be manipulated under “Boot -> Hard Disk Drive BBS Priorities”

If step #3 shows the SATADOM, make sure that the virtual drive has already been created. If it has, reboot the host one more time so that the hard drive shows up correctly in step #3.

This will allow Windows to see the disk as bootable and then successfully install Windows 2012 on the hard drive.
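If it is unclear whether Windows Setup can see the RAID virtual drive at all, a quick sanity check (a hedged sketch, not part of the official SolVe procedure) is to open a command prompt inside Setup with Shift+F10 and list the disks:

rem From the Shift+F10 command prompt inside Windows Setup
echo list disk > X:\listdisk.txt
diskpart /s X:\listdisk.txt

rem The RAID virtual drive should appear in the output; the Gpt column shows
rem whether it is currently GPT- or MBR-initialized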

Impacted versions

v1.32.x

v2.0.x

Fixed in version

N/A

Related:

VxRack Node: storcli commands not effective

Article Number: 500119 Article Version: 3 Article Type: Break Fix



VxRack Systems,VxRack Node,VxRack Node Software,VxRack Node Hardware

Issue Description

storcli commands to set VD attributes return successfully, but the changes are not effective

Scenario

Using storcli commands to set VD attributes. This can happen after a disk failure or after replacing a disk.

Symptoms

Some storcli commands to set VD attributes would return successfully, but upon checking, the VD attributes are not updated.

In some cases when there is a failed disk, the attributes of other VDs might change and storcli commands can’t set them.

Impact

Reduced performance.

This problem can occur in two situations:

  • RAID controller firmware has been upgraded, but the host has not been rebooted yet
  • There is preservedcache for the failed disk, which prevents the storcli commands from being effective

Workaround

Use “storcli /c0 show all” to check the version of the currently running firmware. If it does not match the expected (upgraded) version, reboot the host in order to load the correct firmware.

If the firmware is already running at expected version, use the following command to check preservedcache:

storcli /c0 show preservedcache 

If there is preservedcache, this command will show it, for example:

Controller = 0
Status = Success
Description = None

---------
VD State
---------
 4 Missing
---------

And the VD number in the output will most likely not be present in “storcli /c0/vall show”, as it’s usually a failed disk.

Use the following command to delete the preservedcache (replace ‘#’ with the number in the “VD” column of the above command’s output; based on the above example, use ‘4’).

storcli /c0/v# delete preservedcache [force] 

After that use storcli commands to set VD attributes again.
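Putting the workaround together, a minimal command sequence might look like this (a sketch only; controller /c0 and VD 4 come from the example above, so adjust them to your environment):

# Confirm the running controller firmware version (check the "FW Version" field)
storcli /c0 show all

# Check whether any preserved cache was left behind by a failed disk
storcli /c0 show preservedcache

# If a VD (e.g. 4) is listed, discard its preserved cache (add "force" if prompted)
storcli /c0/v4 delete preservedcache

# Then re-apply the VD attribute changes and verify them
storcli /c0/vall show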

Impacted versions

This is not a ScaleIO software issue and can affect all servers with LSI raid controllers and all versions of ScaleIO.

Fixed in version

N/A. This is not a ScaleIO issue.

Related:

Configuring PVS for High Availability with UEFI Booting and PXE service


Requirements and configuration:

  1. PVS 7.8 or above installed on all servers
  2. PXE service configured to run on multiple PVS servers
  3. Option 11 configured on the DHCP server for multiple PVS servers (or option 17 configured with a round-robin DNS entry)
  4. vDisk stores configured with multiple PVS servers serving those stores

Additional information:

We can split PVS target booting into four tasks.

Task 1

PXE client on target device getting an IP address and scope options.

IP address will come from DHCP server.

For the scope options there are two possibilities:

  • Scope options for PXE are defined on the DHCP server
Options 66 and 67 specify the server name and file name for TFTP retrieval of the PXE bootstrap file
  • PXE server (option 60 does not need to be configured)
The PXE server responds to the DHCP request with PXE information, giving its own server name and the appropriate file name for TFTP retrieval of the PXE bootstrap file

Task 2

PXE client retrieves boot file via TFTP

Options 66 & 67:

  • The PXE client retrieves the boot file from the TFTP server specified in the scope options; this TFTP address can be load balanced and configured for HA with NetScaler.
Round-robin DNS can also be used for load balancing, but not for HA, as there is no recovery if one TFTP server is offline

PXE server:

  • The PXE server which responded first is used by the PXE client.
The PXE server specifies itself as the source TFTP server, and provides the appropriate file name
  • In PVS 7.8 and above, the PXE service can provide the appropriate boot file, either the Gen1/BIOS boot file (ardbp32.bin) or the Gen2/UEFI boot file (pvsnbpx64.efi), depending on the PXE client request

Task 3

The PXE client executes the boot file, which handles further booting and contacts a PVS login server.

Gen1/BIOS:

  • ardbp32.bin has been preconfigured with the addresses of the PVS login servers

Gen2/UEFI:

  • pvsnbpx64.efi is a signed file and cannot be preconfigured with PVS login servers.
  • Instead, it retrieves the location of the PVS login servers from DHCP scope options, using either option 11 or option 17.
  • Option 17 can be used to specify a single PVS login server in the format: pvs:[192.168.0.1]:17:6910
There is no HA for the login server in this scenario, since only a single IP address is used
  • Option 17 can be used to specify a DNS name that is a round-robin list of all PVS servers, in the format: pvs:[DNSRRENTRY]:17:6910
As the DNS entry resolves to multiple PVS servers, and non-responsive PVS login servers will be skipped over by the bootstrap, this is HA appropriate.
  • Option 11 can be used to specify a list of up to 32 PVS login servers.
As multiple login servers are specified, and non-responsive PVS login servers will be skipped over by the bootstrap, this is HA appropriate (see the configuration sketch after this list).
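For reference, setting these options on a Microsoft DHCP server could look like the following (a hedged sketch; the scope, addresses and DNS name are placeholders, and the exact option string should be taken from the PVS documentation):

rem Option 11: list of PVS login servers (placeholder addresses)
netsh dhcp server scope 192.168.0.0 set optionvalue 011 IPADDRESS 192.168.0.10 192.168.0.11

rem Option 17: root path pointing at a round-robin DNS entry (placeholder name)
netsh dhcp server scope 192.168.0.0 set optionvalue 017 STRING "pvs:[DNSRRENTRY]:17:6910"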

Task 4

The PVS login server finds the vDisk assigned to the target device and tells the target device to switch to a PVS streaming server

  • Provided multiple PVS servers are configured to stream the vDisk, this will be highly available
  • If a PVS server is offline, a target device will not be instructed to stream from it

Related:

7023375: NVMe drive label changes after reboot

This document (7023375) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 15

Situation

After installation of SLES 15, NVMe drive labels change after each reboot. For example, in one boot instance nvme0n1 could be an NVMe drive with 800GB capacity and nvme1n1 another NVMe drive with 1.6TB capacity. When the system is rebooted, nvme0n1 could be the 1.6TB drive and nvme1n1 the 800GB drive.

Steps to reproduce:
1- Attach 3-4 NVMe drives to the server.
2- Install and boot into the OS.
3- Check the major and minor numbers of the NVMe devices and note them to compare with the next reboot.
4- Reboot the server and compare the major and minor numbers to the previous boot.
5- Repeat step 4 four or five times and observe the device label change.
Note: This behavior was not seen in SLES 12 SP3.

Resolution

This is the expected behavior for SLE 15, even though it differs from SLES 12 SP3. The device labels can change across reboots, and there is no relation between NVMe controller numbering and NVMe block device numbering.

As an alternative, using UUIDs is recommended in the Storage Administrator Guide as well, see: https://www.suse.com/documentation/sles-12/singlehtml/stor_admin/stor_admin.html#cha.uuid
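For example, a persistent mount can reference the filesystem UUID or the stable /dev/disk/by-id name instead of the nvmeXnY kernel name (a generic sketch; device names and the UUID below are placeholders):

# Find the UUID of the filesystem on the NVMe namespace
blkid /dev/nvme0n1p1

# List stable, hardware-based names for the NVMe drives
ls -l /dev/disk/by-id/ | grep nvme

# /etc/fstab entry using the UUID instead of the kernel device name
# UUID=1234abcd-5678-90ef-1234-567890abcdef  /data  xfs  defaults  0 0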

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

Avamar Client for Windows: Bare Metal Recovery (BMR) failed with error Firmware mismatch between source (UEFI) and target (BIOS) machines

Article Number: 499637 Article Version: 3 Article Type: Break Fix



Avamar Client for Windows

BMR restore fails with the error ‘Firmware mismatch between source (UEFI) and target (BIOS) machines’


The source (backup) contains UEFI boot firmware; however, the target (current machine) is configured with BIOS boot firmware.

Avamar detects this mismatch between the source and target boot firmware, so the PreRestore step fails with error code 0x8007065e and the BMR restore job is terminated.


As a workaround, shut down the target VM, change its boot firmware from BIOS to EFI, power it on, and retry the BMR restore.
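For example, on a VMware VM the firmware type can be changed in the VM’s boot options (vSphere: VM Options > Boot Options > Firmware) or via the firmware entry in the .vmx file; a quick check from an ESXi shell might look like this (a hedged sketch; VMware is an assumption here and the datastore path is a placeholder):

# Show the VM's current firmware setting (absent or "bios" means legacy BIOS)
grep -i firmware "/vmfs/volumes/datastore1/TargetVM/TargetVM.vmx"

# After switching the VM to EFI the file should contain:
# firmware = "efi"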



Related:

Need Help in replacing Boot drive

Hi,

Our support contract has just expired and we will be renewing it in a few days, but we received a boot drive replacement alert and our third-party hardware support is replacing the boot drive.

We need to replace the boot drive on an NL400 node, and I have the document regarding it. Can someone brief me on the list of commands that need to be run on the cluster?

I’m only aware that the FRU package needs to be installed and a drive firmware update needs to be done. Where do the firmware packages need to be placed? The document just says to drag and drop them onto the cluster.

Related:

> 16 unix groups – server_param maxgroups – what are example valid OS ?

Some operating systems allow sending more than 16 groups.

VNX has a parameter called maxgroups, which allows support for more than 16 groups (up to 128), but a reboot is required.

[nasadmin@localhost ~]$ server_param server_2 -facility security -info maxgroups -v

server_2 : name = maxgroups

facility_name = security

default_value = 16

current_value = 16

configured_value = 16

user_action = reboot DataMover

change_effective = reboot DataMover

range = (16,128)

description = Define the max number of extra group in Unix credential

[nasadmin@localhost ~]$ server_param server_2 -facility security -modify maxgroups -value 64

server_2 : done

Warning 17716815750: server_2 : You must reboot server_2 for maxgroups changes to take effect.
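As a side note, an easy way to check whether a given Unix account is even affected by the default 16-group limit is to count its supplementary groups on a client (a generic sketch; "someuser" is a placeholder):

# Count the groups the account belongs to; more than 16 exceeds the default maxgroups value
id -G someuser | wc -w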


Does anyone have an example of such an OS?

Related: