PXE doesn’t start

I need a solution

Hi all,

I have read all the tutorials and watched all the videos, but I haven’t found a solution to my problem. I have no idea what to do in my case.

I have the GSS Suite on a Server 2012 R2 VM on ESXi, and on the same ESXi host a Windows 7 Professional client VM. When I try to boot via PXE,

I get the following screen, and after that my Win7 boots normally but does not boot via PXE. Is there anyone who has had the same problem?

Thanks for your help


VxRail: PTAgent upgrade failure, ESXi error “Can not delete non-empty group: dellptagent”

Article Number: 516314 Article Version: 6 Article Type: Break Fix



VxRail 460 and 470 Nodes,VxRail E Series Nodes,VxRail P Series Nodes,VxRail S Series Nodes,VxRail V Series Nodes,VxRail Software 4.0,VxRail Software 4.5

The VxRail upgrade process fails when upgrading PTAgent from an older version (1.4 and below) to a newer one (1.6 and above).

Error message

[LiveInstallationError]

Error in running [‘/etc/init.d/DellPTAgent’, ‘start’, ‘upgrade’]:

Return code: 1

Output: ERROR: ld.so: object ‘/lib/libMallocArenaFix.so’ from LD_PRELOAD cannot be preloaded: ignored.

ERROR: ld.so: object ‘/lib/libMallocArenaFix.so’ from LD_PRELOAD cannot be preloaded: ignored.

ERROR: ld.so: object ‘/lib/libMallocArenaFix.so’ from LD_PRELOAD cannot be preloaded: ignored.

Errors:

Can not delete non-empty group: dellptagent

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.

Please refer to the log file for more details.

Dell ptAgent upgrade failed on target: <hostname> failed due to Bad script return code:1

PTAgent cannot be removed without ESXi requiring a reboot: earlier versions of PTAgent (lower than 1.6) had a problem handling process signals, so ESXi is unable to stop the agent no matter what signal is sent or what method is used to kill the process. Rebooting ESXi is required to clear the defunct process so the upgrade can proceed.

PTAgent 1.6 (and above) fixes this issue, but the upgrade from 1.4 to 1.6 cannot complete without human intervention once the issue is encountered.
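
A hedged way to confirm the stuck agent on an affected host from the ESXi shell is sketched below; the VIB and process names are assumptions based on the error output above and may differ on your build:

esxcli software vib list | grep -i ptagent # confirm the installed Dell PTAgent VIB version
ps | grep -i dellptagent # a process that cannot be killed indicates the stuck agent
/etc/init.d/DellPTAgent status # query the agent service (script path taken from the error text; the status action is an assumption)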


Impacted VxRail versions (Dell platform only):

  • 4.0.x: VxRail 4.0.310 and below
  • 4.5.x: VxRail 4.5.101 and below

This issue is fixed in recent VxRail releases, but upgrades from earlier VxRail releases are greatly impacted. Customers are strongly advised to contact Dell EMC Technical Support to upgrade to PTAgent 1.7-4, which is included in the following VxRail releases:

  • VxRail 4.0.500 for customers who stay on vSphere 6.0
  • VxRail 4.5.211 or above for customers who choose vSphere 6.5

Manual workaround if experiencing the PTAgent upgrade failure

  • Enter maintenance mode and reboot the host mentioned in the error message (see the sketch after this list).
  • Wait until the host is available and showing the proper state in vCenter, then click the Retry button in VxRail Manager to retry the upgrade.
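
The same two steps, sketched from the ESXi shell over SSH (assuming shell access is enabled); they can equally be performed from the vSphere Web Client:

esxcli system maintenanceMode set --enable true # put the host into maintenance mode
reboot # reboot to discard the unfinished update, as the error text requests
# once the host is back and healthy in vCenter, click Retry in VxRail Manager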


Re: Sharing a vm proxy node between different Avamar servers

Hi all,

Some background info:

We are planning to have different Avamar servers pointing to their own respective DD Boost account on the same DataDomain hardware.

The idea is that each Avamar server is dedicated to a single business group.

We can’t make use of Avamar subdomains because Avamar forces all subdomains on the same Avamar server to point to the same DD boost account.

But we need to use different DD Boost accounts because we are going for the “secure multi-tenancy” architecture.

We also need DataDomain’s per-account storage hard-quota feature that Avamar subdomains cannot provide.

I asked about this topic in a previous thread.

Question for this discussion:

Since we will be using multiple Avamar servers, I would like to know if vm proxy nodes can be shared between them?

For example, if we deploy a single Avamar vm proxy node inside an ESXi cluster (let’s say the cluster has 2 ESXi hosts and one shared datastore), can all Avamar servers use this same vm proxy node to backup their own VMs from the ESXi cluster?

If different Avamar servers cannot share vm proxy nodes, then each Avamar server would have to deploy its own vm proxy node in the same ESXi cluster, which would be an inefficient use of resources.

If that is indeed the case, based on our overall architecture, what would be the best workaround to mitigate this issue?

Thanks all!

RLeon


Re: Recoverpoint v3.5 – ‘Failover options’ greyed out

Hello All,

I am trying to test my failover capabilities through RPA. After enabling ‘Image access’ (physical) and mounting the replica LUN to the DR ESXi 5.5 hosts, I am not able to perform ‘Failover Actions – Failover to remote replica’ (as shown in the picture below). This option is grayed out. I don’t have SRM, and my policy settings for ‘Stretch cluster/VMware SRM support’ etc. are set to NONE. Without the failover actions, I am not able to replicate my changes back to the Prod_Test site. I need suggestions. Am I missing any policy or configuration?

Also, as seen in the picture below, the storage status on the DR_Test (remote) side is showing ‘Enabling logged access’. Does this mean that it is still processing the enable, and that is why the option is grayed out? I am, however, able to mount the replica LUN, add the replicated VM to the inventory, boot it up and browse the replicated VM at my DR site.

Thanks in advance!

RPA_Failover_Options.jpg

Regards,

Vilas


Install VAAI 1.2.3 on ESX 6.0 could cause ESX server to become unresponsive

Article Number: 504703 Article Version: 4 Article Type: Break Fix



Isilon VAAI,Isilon OneFS,VMware ESX Server

After installing the VAAI 1.2.3 plugin on an ESX 6.0 server, the log file /var/log/isi-nas-vib.log can grow without bound, eventually filling up the /var partition, which can cause the ESX server to become unresponsive. The log file continues to be recreated after it is removed.

VAAI 1.2.3 plugin was built with debug enabled by default

WORKAROUND

Uninstall the VAAI 1.2.3 plugin until a fix is provided (a hedged removal sketch follows below).
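
A hedged sketch of the uninstall from the ESXi shell; the exact VIB name is an assumption, so list it first to confirm:

esxcli software vib list | grep -i isi # find the exact name of the Isilon VAAI VIB
esxcli software vib remove -n <vib-name> # remove it, substituting the name found above; a reboot may be required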

OR

Set up a cron job on the ESX 6.0 server to periodically remove the log file:

1. SSH into the ESX server and log in as the root user.

2. Edit /etc/rc.local.d/local.sh and add the following lines toward the end, before “exit 0”:

/bin/echo '*/15 * * * * /bin/rm /var/log/isi-nas-vib.log' >> /var/spool/cron/crontabs/root   # append a cron entry that deletes the log every 15 minutes

/bin/kill -HUP $(cat /var/run/crond.pid)   # stop the running crond so the crontab change can take effect

/usr/lib/vmware/busybox/bin/busybox crond   # restart crond to pick up the new entry

3. The reason for the above step is that if the ESX server reboots, the workaround will persist after the reboot. At this point, however, the workaround has not yet been applied to the running ESX server.

4. Manually run the above three commands to implement the workaround in the current ESX server session.

5. Monitor /var/log/syslog.log and make sure that every 15 minutes you see an entry such as:

2017-10-03T15:15:01Z crond[35429]: crond: USER root pid 38236 cmd /bin/rm /var/log/isi-nas-vib.log
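
A minimal verification sketch (paths as used above) to confirm the workaround is in place and working:

grep isi-nas-vib /var/spool/cron/crontabs/root # the cron entry should be present
grep isi-nas-vib /var/log/syslog.log | tail -3 # the rm job should be firing every 15 minutes
ls -lh /var/log/isi-nas-vib.log # the log should stay small (it may briefly not exist right after a run)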


IsilonSD Edge cluster installation problems

Hi,

I am trying to deploy an IsilonSD Edge 3 node cluster, on my Vxrail environment that I have running. I have already successfully deployed the Isilon Management Server, added an Edge license, registered successfully with the VxRail vcenter and uploaded the OneFS image.

When I try to create the cluster however, I put in all the relevant details and it fails while trying to create the cluster VMs, with the following error –

“Unable to create IsilonSD virtual machine: IsilonSD-isilononefs-1. Error. java.net.UnknownHostException: Vxrailnode1.vxrail.local.com”

When I check the logs on vSphere, this is the error I see:

“vSAN datastore does not have capacity.

Possible Causes:

This might be because no disk is configured for vsan, local disks configured for vsan service become inaccessible or flash disks configured for vsan service become inaccessible.

Action: check is vsan storage configuration is correct and if the local disks and flash disks configured vsan service are accessible.”

Now, I understand this is a self-explanatory message. However, the fact is that I have over 15TB of total space, of which 12TB is free, and I’m trying to create a cluster of 2TB capacity.

When I log into one of the ESXi hosts to check for any errors, there aren’t any. I looked into the Isilon management server logs and this is what I see when I try to build the cluster –



2018-11-22 00:05:13,442 [play-akka.actor.default-dispatcher-74] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Creating cluster: isilononefs

2018-11-22 00:05:13,453 [play-akka.actor.default-dispatcher-74] ERROR com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Invalid vSphere connection – try reconnect com.vmware.vim25.NotAuthenticated

2018-11-22 00:05:13,453 [play-akka.actor.default-dispatcher-74] INFO com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Creating new vSphere connection for administrator@vsphere.local on 192.168.x.x

2018-11-22 00:05:13,455 [play-akka.actor.default-dispatcher-74] ERROR com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Invalid vSphere connection – try reconnect com.vmware.vim25.NotAuthenticated

2018-11-22 00:05:13,498 [play-akka.actor.default-dispatcher-74] INFO com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Successfully created vSphere connection for administrator@vsphere.local on 192.168.x.x Key: fbc58e741eb097122758b0bcabed7caabde3b16b



And then later in the logs,



2018-11-22 00:05:13,530 [pool-22-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,530 [pool-24-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,530 [pool-25-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,531 [pool-26-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,538 [pool-21-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Attempting to deploy the default OVF Template onto datastore VxRail-Virtual-SAN.

2018-11-22 00:05:13,567 [pool-26-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Attempting to deploy the default OVF Template onto datastore VxRail-Virtual-SAN.

2018-11-22 00:05:13,607 [pool-21-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.vijava.ImportLocalOvfVApp – Taking lock on virtual machine: IsilonSD-isilononefs-6

And finally the time when it fails,



2018-11-22 00:05:15,612 [pool-25-thread-1] ERROR com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Error creating OneFS VM – java.net.UnknownHostException: Vxrail-node-01.vxrail.local.com

2018-11-22 00:05:15,625 [pool-21-thread-1] ERROR com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Error creating OneFS VM – java.net.UnknownHostException: Vxrail-node-02.vxrail.local.com

2018-11-22 00:05:15,625 [play-akka.actor.default-dispatcher-74] ERROR com.emc.isi.vi.core.virtual.vsphere.SimpleExecutor – java.util.concurrent.ExecutionException: com.emc.isi.vi.common.error.VSException: Unable to create IsilonSD virtual machine: IsilonSD-isilononefs-6. Error: java.net.UnknownHostException: Vxrail-node-02.vxrail.local.com

2018-11-22 00:05:15,671 [pool-26-thread-1] ERROR com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Error creating OneFS VM – java.net.UnknownHostException: Vxrail-node-03.vxrail.local.com





My Vxrail version is 4.5.229

Esxi version – 6.5

IsilonSD Edge MS version 1.1.0

OneFS version – 8.1.2.0

Any clue on what could be going wrong here? Any help on this would be really appreciated.

Thanks,

Dipbosse


Re: Very Slow Unity VSA 4.4

Hey there,

I am trying out the Unity community edition in a lab environment with two Dell R810 hosts with a RAID 5 of SSDs.

I read the manual, checked out the how-to videos, and it was really easy to get it up and running.

But the performance is soooooo poor, oh man.

The Unity is installed on the server’s SSDs and I mapped two 500 GB virtual drives into the Unity for a pool.

I presented an NFS and also an iSCSI datastore to the ESXi host where the Unity is installed, too.

The result is performance of about ~25 MB/s when I copy a vmdk file to the Unity’s datastore.

Meanwhile, the CPU usage is about 50-80%, and if I start another copy job it peaks at 100%.

Can anybody help me? We’d like to use Unitys in production environments, but with this result….. never.

I only post because I like the workflow of this storage system and want to give it a chance.

Any idea is welcome


XtremIO: Cannot modify cluster parameter using CLI command “modify-clusters-parameters esx-device-connectivity-mode=apd”

Article Number: 525024 Article Version: 3 Article Type: Break Fix



XtremIO Family,XtremIO X1,XtremIO HW Gen2 400GB,XtremIO HW Gen2 400GB Encrypt Disable,XtremIO HW Gen2 400GB Exp Encrypt Disable,XtremIO HW Gen2 400GB Expandable,XtremIO HW Gen2 800GB Encrypt Capbl,XtremIO HW Gen2 800GB Encrypt Disable

When running the CLI command “modify-clusters-parameters esx-device-connectivity-mode=apd” on an XMS 6.0/6.1 that manages clusters at versions 4.0.10-33 through 4.0.25-27, the command fails with the error “*** XMS Completion Code: Format string requests exactly 5 items from array, but array has 6 items. (A ‘*’ at the end would avoid this failure)”.

Here is the example of the error:

xmcli (tech)> modify-clusters-parameters esx-device-connectivity-mode=apd

*** XMS Completion Code: Format string requests exactly 5 items from array, but array has 6 items. (A ‘*’ at the end would avoid this failure)

This error occurs when the XMS is at version 6.0/6.1 and manages clusters at versions 4.0.10-33 through 4.0.25-27. It is caused by a software issue in the current 6.0/6.1 XMS and has been fixed in the 6.2 XMS, which will be officially released in Sep. 2018.

1. The command for modifying the connectivity mode was introduced in cluster version 4.0.10-33 and allows presetting how an XtremIO cluster responds before a volume becomes unavailable to the ESX server. If running cluster XIOS 4.0.10-33 or later, Dell EMC recommends setting the host connectivity mode for ESX servers to APD (All Paths Down) (KB483391). Switching the connectivity mode from PDL to APD is normally suggested by upgrade engineers when an NDU is scheduled. For more information about how to prepare for an XtremIO upgrade, refer to Preparing for an XtremIO upgrade.

2. If the XMS is still at version 4.x, ensure the connectivity mode is changed to APD prior to the XMS and cluster upgrade.

3. If the XMS is already at 6.x (6.0/6.1), change the connectivity mode to APD once the XMS is upgraded to 6.2 when it is released, then proceed with the cluster upgrade (see the sketch below).
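
A sketch of the intended XMCLI workflow once the XMS is at 6.2. Only the modify command appears in this article; the show command name is an assumption and should be confirmed against the XMCLI guide for your XMS version:

xmcli (tech)> show-clusters-parameters # verify the current esx-device-connectivity-mode (command name assumed)
xmcli (tech)> modify-clusters-parameters esx-device-connectivity-mode=apd # switch the cluster(s) to APD before proceeding with the cluster upgrade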


VxRail: During upgrade from 4.0.x to 4.5.x, instant cloning in VMware Horizon may fail

Article Number: 524627 Article Version: 3 Article Type: Break Fix



VxRail Software 4.0

The issue occurs if:

  1. The cluster is using instant cloning in VMware Horizon
  2. The attached vCenter Server is at 6.5 and the hosts are at 6.0 with a build lower than 6921384. The likely scenario is a cluster being upgraded directly from 4.0.3xx to 4.5.x.

Symptoms:

  • Adding instant-clone desktop fails.
  • On vCenter Server, the VirtualMachine.createForkChild.label task fails with the error: The resource ‘<number>’ is in use.

This issue occurs because of an internal design incompatibility in port binding between ESXi 6.0 and vCenter Server 6.5 when instant clone deployment is used. Currently, instant clone works only when the versions of ESXi and vCenter Server are the same (i.e., ESXi 6.0 with vCenter Server 6.0, or ESXi 6.5 with vCenter Server 6.5).

See VMware KB 2150925

Option 1: Upgrade the ESXi hosts to match the code version of the vCenter Server, or revert the vCenter code to match that of ESXi.

Option 2: The ESXi build that addresses this issue was included in VxRail release 4.0.400 and above. As such, the cluster can be upgraded from 4.0.3xx to 4.0.4xx and then to 4.5.x to avoid the issue (a version-check sketch follows below).
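
To confirm whether a host falls under the affected combination, the ESXi build can be compared against the 6921384 threshold cited above. A minimal check sketch, run over SSH on each host (the example output is illustrative only):

vmware -vl # prints the ESXi version and build number on this host
# illustrative output: VMware ESXi 6.0.0 build-5572656 (affected when the 6.0 build is below 6921384)
# the vCenter Server version can be confirmed in the vSphere Web Client under Help > About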
