XenApp 7.15 : Cannot update Machine Catalogue “PBM error occurred during PreProcessReconfigureSpec: pbm.fault.PBMFault”

After updating a machine catalog that was created using Machine Creation Services (MCS), virtual machines hosted on vSAN 6 or later might fail to start. The following error message appears in the VMware console:

“A general system error occurred: PBM error occurred during PreProcessReconfigureSpec: pbm.fault.PBMFault; Error when trying to run pre-provision validation; Invalid entity.”

Related:

Re: Fail Installation Vxrail Step 38

Hi All

I solved this case by setting a native VLAN in my switch configuration.

I used:

– VLAN 0 for management and vCenter (this should be untagged traffic)

– VLAN 106 for the virtual machine switch

VxRail has 4 ports:

Port 0: Management

Port 1: Virtual Machine

Port 2: vSAN

Port 3: vMotion

Management uses port 0, and on my VxRail I used VLAN 0, which means untagged traffic. I needed to connect to my existing network, so I set:

– switchport mode trunk

– switchport trunk native vlan 130

VLAN 130 is just for management.

Virtual machines use port 1. Since my vCenter is also a virtual machine on VLAN 0 and its traffic passes through port 1, I gave that port the same configuration:

– switchport mode trunk

– switchport trunk native vlan 130

For the rest of the ports I just set switchport mode trunk on my physical switch. A consolidated sketch of this configuration is below.
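To pull the pieces together, here is a minimal Cisco IOS-style sketch of the two node-facing switch ports described above. The interface names are hypothetical placeholders for whichever physical ports are cabled to VxRail ports 0 and 1:

! Interface names are hypothetical - substitute the ports cabled to VxRail ports 0 and 1.
interface GigabitEthernet1/0/1
 description VxRail port 0 - management, untagged on the node
 switchport mode trunk
 switchport trunk native vlan 130
!
interface GigabitEthernet1/0/2
 description VxRail port 1 - virtual machines, vCenter VM also untagged
 switchport mode trunk
 switchport trunk native vlan 130
!
! Untagged frames land in VLAN 130 via the native-VLAN setting; tagged
! VM traffic (for example VLAN 106) is carried over the trunk unchanged.

With this in place, the untagged (VLAN 0) management and vCenter traffic is mapped into VLAN 130 by the native VLAN setting, which matches the behaviour described above.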

Related:

IsilonSD Edge cluster installation problems

Hi,

I am trying to deploy an IsilonSD Edge 3-node cluster on my running VxRail environment. I have already successfully deployed the Isilon Management Server, added an Edge license, registered successfully with the VxRail vCenter, and uploaded the OneFS image.

When I try to create the cluster, however, I put in all the relevant details and it fails while trying to create the cluster VMs, with the following error:

“Unable to create IsilonSD virtual machine: IsilonSD-isilononefs-1. Error. java.net.UnknownHostException: Vxrailnode1.vxrail.local.com”

When I check the logs on vSphere, this is the error I see:

“vSAN datastore does not have capacity.

Possible Causes:

This might be because no disk is configured for vsan, local disks configured for vsan service become inaccessible or flash disks configured for vsan service become inaccessible.

Action: check is vsan storage configuration is correct and if the local disks and flash disks configured vsan service are accessible.”

Now, I understand this is a self-explanatory message. However, the fact is that I have over 15 TB of total space, of which 12 TB is free, and I'm trying to create a cluster of 2 TB capacity.

When I log into one of the ESXi hosts to check for any errors, there aren't any. I looked into the Isilon Management Server logs, and this is what I see when I try to build the cluster:



2018-11-22 00:05:13,442 [play-akka.actor.default-dispatcher-74] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Creating cluster: isilononefs

2018-11-22 00:05:13,453 [play-akka.actor.default-dispatcher-74] ERROR com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Invalid vSphere connection – try reconnect com.vmware.vim25.NotAuthenticated

2018-11-22 00:05:13,453 [play-akka.actor.default-dispatcher-74] INFO com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Creating new vSphere connection for administrator@vsphere.local on 192.168.x.x

2018-11-22 00:05:13,455 [play-akka.actor.default-dispatcher-74] ERROR com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Invalid vSphere connection – try reconnect com.vmware.vim25.NotAuthenticated

2018-11-22 00:05:13,498 [play-akka.actor.default-dispatcher-74] INFO com.emc.isi.vi.core.virtual.vsphere.VSSessionObject – Successfully created vSphere connection for administrator@vsphere.local on 192.168.x.x Key: fbc58e741eb097122758b0bcabed7caabde3b16b



And then later in the logs,



2018-11-22 00:05:13,530 [pool-22-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,530 [pool-24-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,530 [pool-25-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,531 [pool-26-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Trying to get datastore for VxRail-Virtual-SAN

2018-11-22 00:05:13,538 [pool-21-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Attempting to deploy the default OVF Template onto datastore VxRail-Virtual-SAN.

2018-11-22 00:05:13,567 [pool-26-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Attempting to deploy the default OVF Template onto datastore VxRail-Virtual-SAN.

2018-11-22 00:05:13,607 [pool-21-thread-1] INFO com.emc.isi.vi.core.virtual.vsphere.vijava.ImportLocalOvfVApp – Taking lock on virtual machine: IsilonSD-isilononefs-6

And finally, when it fails:



2018-11-22 00:05:15,612 [pool-25-thread-1] ERROR com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Error creating OneFS VM – java.net.UnknownHostException: Vxrail-node-01.vxrail.local.com

2018-11-22 00:05:15,625 [pool-21-thread-1] ERROR com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Error creating OneFS VM – java.net.UnknownHostException: Vxrail-node-02.vxrail.local.com

2018-11-22 00:05:15,625 [play-akka.actor.default-dispatcher-74] ERROR com.emc.isi.vi.core.virtual.vsphere.SimpleExecutor – java.util.concurrent.ExecutionException: com.emc.isi.vi.common.error.VSException: Unable to create IsilonSD virtual machine: IsilonSD-isilononefs-6. Error: java.net.UnknownHostException: Vxrail-node-02.vxrail.local.com

2018-11-22 00:05:15,671 [pool-26-thread-1] ERROR com.emc.isi.vi.core.virtual.vsphere.VSManageVM – Error creating OneFS VM – java.net.UnknownHostException: Vxrail-node-03.vxrail.local.com





My VxRail version is 4.5.229

ESXi version: 6.5

IsilonSD Edge MS version: 1.1.0

OneFS version: 8.1.2.0

Any clue on what could be going wrong here? Any help on this would be really appreciated.

Thanks,

Dipbosse

Related:

RecoverPoint for VMs: At least one Journal volume that was supposed to be provisioned – failed

Article Number: 479698 Article Version: 3 Article Type: Break Fix



RecoverPoint for Virtual Machines 4.3, RecoverPoint for Virtual Machines 4.3 P1, RecoverPoint for Virtual Machines 4.3 SP1

Journal Provisioning fails with WARNING: At least one Journal volume that was supposed to be provisioned – failed.

Symptoms found in the logs:

The ActionLogic goes into an endless loop of trying to provision the journals and the new CG remains in an error state.

The user sees a GW saying that journal provisioning failed, with a “NullPointerException” as the reason:

WARNING: At least one Journal volume that was supposed to be provisioned – failed.

Failure string is: Pair((creationID=Option(13162470750099402752)),[java.lang.NullPointerException])”

Impacted Configuration & Settings:

Relevant for vSCSI splitters and vSAN datastores

In vSAN, it is not possible to create objects larger than 254 GB: vSAN enforces a limit of 254 GB on the capacity that can be placed directly in a single namespace (root folder).

When a total capacity of 254 GB (across all repositories and journals) in the vSAN datastore is reached, the customer cannot create any more JUKe devices, and the operation fails with a NoDiskSpace error. For example, if 240 GB of journals already exist on the datastore, provisioning one more 20 GB journal would push the total past 254 GB and fail.

RecoverPoint provisions these JUKe devices as flat files, so the capacity they consume is part of the parent namespace. Since RecoverPoint does not use OSFS but places the journal/repository devices’ data directly in a single vSAN namespace, any attempt to create more than 254 GB of repositories/journals in a given vSAN cluster fails with an error.

Provisioning Journal volumes

Workaround:

There is no Workaround for this issue.

The limit of 254 GB total for repositories and journals across all RP clusters on the target vSAN datastore cannot be exceeded.

Also, it is not possible to create a single journal device larger than 254 GB on a vSAN datastore.

Permanent Fix:

4.3.1.1 (splitting large journal devices into smaller files will be fixed in 5.0)

Affected versions:

RecoverPoint 4.3, 4.3.0.1, 4.3.1

Related:

Fully Resilient FCIP Design CISCO MDS 9250i

[Attached image: FCIP.PNG]

Hi

I have a question about the VSAN configuration for a fully resilient FCIP design, because I am a little confused about the backup paths (the blue and orange lines). As the image shows, I am going to connect VSAN 60, profile 60, and FCIP 60 on Ge2 (Site A) to VSAN 70, profile 70, and FCIP 70 on Ge2 (Site B) over the blue line. Is that correct? I mean, can I do that, or do I need to set the same VSAN number (60 and 60) on both ends to connect them? The same question applies to the orange line. I understand that the profile and FCIP numbers can be different, but I don't know whether the VSANs can be different. (A sketch of how these pieces fit together on one switch is below.)

I found the best-practice guide for FCIP, and in its diagram the red and blue VSANs are different for the backup paths; take a look at that image.

I will appreciate your help on this.

[Attached image: Capture.PNG]
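For reference, here is a minimal MDS-style sketch of how a VSAN, an FCIP profile, and an FCIP interface are tied together on one side of the link (the IP addresses and the VSAN name are hypothetical placeholders):

feature fcip
!
! FCIP profile 60: the local Gigabit Ethernet endpoint (address is a placeholder)
fcip profile 60
  ip address 192.0.2.1
!
! FCIP interface 60: the tunnel itself, pointing at the remote site's endpoint
interface fcip60
  use-profile 60
  peer-info ipaddr 192.0.2.2
  no shutdown
!
! Bind the tunnel to a VSAN (name is a placeholder)
vsan database
  vsan 60 name PATH-BLUE
  vsan 60 interface fcip60

The same structure would be repeated at the other site with its own profile and interface numbers.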

Best Regards

Related:

Re: VxRail issue

Ken,

I guess you have a machine like this:

(https://www.dellemc.com/resources/en-us/asset/technical-guides-support-information/products/converged-infrastructure/h15…)

[Image: VxRail-P.png]

So, I'd expect a network configuration like this:

[Image: 4x10-GbE.png]

So, VMNIC0 maps to port #0, etc.

vSphere goes into port #3 & #4 (passive/active).

I would not go with trunking or freehand network uplink modifications, because this may cause trouble later on (e.g., with upgrades or system expansion).

“Do not enable Link Aggregation on VxRail Switch Ports. Do not use link aggregation, including protocols such as LACP and EtherChannel, on any ports directly connected to VxRail nodes. VxRail Appliances use the vSphere active/standby configuration (NIC teaming) for network redundancy. However, LACP could be enabled on non-system ports, such as additional NIC ports or 1G ports, for user traffic.

VxRail uses vSphere Network I/O Control (NIOC) to allocate and control network resources for the four predefined network traffic types required for operation: Management, vSphere vMotion, vSAN and Virtual Machine. The respective NIOC settings for the predefined network traffic types are listed in the tables below for the various VxRail models.”
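As a concrete illustration of the no-link-aggregation guidance in that quote, a node-facing switch port simply carries no channel-group statement (the interface name is hypothetical):

! Hypothetical node-facing port: note there is no channel-group statement,
! so no LACP/EtherChannel bundle is formed.
interface GigabitEthernet1/0/5
 description VxRail node-facing port

Redundancy then comes from the vSphere active/standby NIC teaming on the node itself, not from the switch.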

hope this will help a bit

krichot

Related:

vSAN Ready Nodes on PowerEdge MX



Dell EMC vSAN Ready Nodes are coming to the new Dell EMC PowerEdge MX architecture. This combination makes a lot of exciting things possible. The best of two worlds comes together this week as we announce new Dell EMC vSAN Ready Nodes specifically for the Dell EMC PowerEdge MX modular architecture. This combination sweetens the pot for organizations that want to capitalize on the power of vSAN on their terms. In this post, I will highlight some of the cool things about this combination. But, first, let’s set the stage with a little background. PowerEdge MX …




Related:

Re: IsilonSD Edge with vSAN

Hello,

I’m looking for help with deploying IsilonSD Edge on vSphere/vSAN. There doesn’t seem to be much documentation about deploying a cluster in this scenario.

I have 3 hosts with 18 spinning disks and 6 SSDs.

1) Do you deploy vSphere/vSAN in the normal manner first, using all the disks to create one datastore?

2) Does Isilon share the physical disks with vSAN, or do you create VMDKs for the Isilon nodes on the vSAN datastore?

3) What storage policy do you set on the VMDKs for the nodes located on vSAN? Is the redundancy dealt with by the Isilon software?

There is so little documentation on the configuration of the vSphere cluster that the Isilon sits on.

Thanks in advance!

Related:

ViPR Controller 3.0: Zoning error during LUN creation

Article Number: 487584 Article Version: 2 Article Type: Break Fix



ViPR Controller, ViPR Controller 2.4 SP1, ViPR Controller 3.0

The customer was attempting to provision using Create Block Volume for a Host via the ViPR Controller UI and received the following error:

Error 22003: Failed to create zones for export group because of: Encountered an error while adding zones: MDS Device last prompt is incorrect (found MDS_CONFIG but expected MDS_CONFIG_ZONE)

The switches were running firmware version 6.2(11).

No entries existed in the ViPR Controller ExportGroup or ExportMask column families when reviewing the slither DB outputs.

This issue was caused by using an account for MDS discovery that had Read-Only permissions.

To resolve this issue, use an account that has privileges to provision (configure zones, zonesets, and VSANs) and to show the FCNS database.
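As a rough illustration, provisioning requires the account to be able to run MDS commands along these lines (the zone name, VSAN number, and pWWN below are hypothetical):

configure terminal
! Create or modify a zone (zone name, VSAN, and pWWN are placeholders)
zone name ViPR_Host1_Array1 vsan 10
  member pwwn 10:00:00:00:c9:aa:bb:cc
! Create and activate a zoneset containing that zone
zoneset name ViPR_ZS_vsan10 vsan 10
  member ViPR_Host1_Array1
zoneset activate name ViPR_ZS_vsan10 vsan 10
end
! Read access ViPR also needs:
show fcns database

With a read-only account the zone command is rejected, so the CLI session never reaches the zone configuration prompt; this matches the “found MDS_CONFIG but expected MDS_CONFIG_ZONE” error above.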

For more information please review the technical documentation at the following link:

http://www.emc.com/techpubs/vipr/add_switch_cisco-2.htm

Related:

  • No Related Posts