How to run an on-demand backup of a VM residing under the "containerclients" domain using the CLI.

I want to run an on-demand backup of the VM under the domain "containerclients" using the CLI. It keeps giving me the error "client is not a member of the group".

admin@Backupserver:~/>: mccli client backup-group-dataset --name="/VCenter/ContainerClients/VMClient" --group-name="/VCenter/VMwareBackupGroup"

1,22241,Client is not a member of group.

admin@Backupserver:~/>:



Whereas I can run it from the GUI by selecting the VM separately; when starting the backup it asks for the group name in which the container folder is added. So it seems the GUI can collect that information but the CLI cannot.

I do not want to run it by adding the client to some temporary group. I want to know how to run it directly, the way we run an on-demand backup for an agent-based client, since this is possible from the GUI.


PowerPath integration in Unisphere for PowerMax

In the Unisphere for PowerMax 9.0 release we have increased integration with PowerPath hosts that are discovered on the arrays we monitor in Unisphere. With a new PowerMax array with an FX/Pro (premium) license you are entitled to up to 75 ESXi host licenses. These PowerPath licenses are now automatically deployed for VMware, Windows and Linux hosts with PowerPath 6.3 and later.

With over 25 years of engineering and development behind it, PowerPath is a family of multipathing software products that automate and optimize I/O performance in physical and virtual environments. PowerPath intelligently manages I/O path loads and failover to increase application availability and reduce latency. Proprietary optimization for Dell EMC PowerMax and VMAX increases I/O performance 2X-3X over native multipathing.

We have heard from customers that they want to see more integration with our existing software products, both to help their ongoing automation drive and to provide enhanced troubleshooting capabilities from our side.

PowerPath Hosts Landing Page:



In Unisphere, on the left-hand navigation, you will find it under Hosts > PowerPath Hosts.

[Screenshot: PowerPath Hosts landing page in Unisphere]

This view gives you a single-pane-of-glass view of all your PowerPath hosts connected to arrays that this specific Unisphere instance can see.

This view presents a lot of useful information if you are in the midst of a troubleshooting scenario. By selecting the lcvt1032 host you are presented with much more detail in the right-hand pane: the PowerPath version and patch level; OS version, revision and patch level; server hardware, manufacturer and serial number; PowerPath license type; and connectivity type (FC, iSCSI or FCoE).

The objects Initiators, Hosts/Host Groups, Masking Views, Storage Groups and Volumes are displayed in blue and can be selected to drill down to the specific object. At the bottom of the list is the number of VMs on this ESXi server. Selecting the VM count highlighted in blue takes you to a list of the VMs, as we see here.



[Screenshot: list of VMs on the selected ESXi host]

So let's say we need to quickly find out what the initiators are for lcvt1032 because we are investigating an internal ticket for a performance issue.

[Screenshot: initiators for host lcvt1032]

Now, to drill down on your performance problem from this view, you can select Volumes to see which volumes are assigned to this host.

[Screenshot: volumes assigned to the host]

When you select an individual volume, the right-hand pane shows a wealth of useful information, including whether the volume is mounted. In this case it is not, so that may indicate the root of the problem. If the device were mounted, below the mount status would be the name of the host process using the device (for ESXi it is the VM name). The host process/VM name allows you to figure out what application is using the device. At the bottom of the device information is the date when the device last received I/O from any host. This allows you to find devices that have not been used for a while, and the combination of host name and process/VM name allows you to identify who owns the dormant device.

I hope I’ve displayed some of the power of the troubleshooting you can potentially do from the PowerPath landing page when you are investigating a potential PowerPath issue.

Auto Initiator Group (IG) creation using PowerPath Host Name:

On the main landing page we also have a new option called Create Host which allows auto IG creation:

[Screenshot: Create Host option on the PowerPath Hosts landing page]

On 5978 HyperMax code, Unisphere/Solutions Enabler 9.0 allows the hostname to be provided when creating an IG. Unisphere/SE picks the host HBA WWNs from the host registration data provided by PowerPath and adds them to the IG for you. This significantly simplifies the overall provisioning flow when adding a new PowerPath host.

The initiator array switch that controls auto IG creation is disabled by default, but you can turn it on by selecting it in the settings section of Unisphere.

[Screenshot: auto IG creation switch in Unisphere settings]

Auto IG creation works in tandem with PowerPath host registration; if the PowerPath host registration switch is disabled, then auto IG creation will also be disabled.

As of now, Linux and AIX PowerPath will automatically detect devices added or deleted on a certain path following masking or zoning changes (ESXi and Windows already do this). Once PowerPath devices have changed, it will run a host scan and the OS will discover the new devices.

This significantly helps the overall provisioning flow when adding a new PowerPath host. It achieves this by automatically pulling the WWNs registered with that hostname, provided the correct settings are enabled as outlined above. This reduces some of the configuration work that the user has to perform when setting up a host.

Device in Use Details:



Another really powerful feature is the device-in-use detail. Using Unisphere or SE you can easily see whether a device is mounted and active by looking at the volume level. In Unisphere you would navigate down to the volume level:



[Screenshot: volume-level view in Unisphere]

On selecting a specific device, the right-hand pane shows more detailed information. We can see here that the device mounted state is true and the device is used by a process called consist_lun_x (for ESXi it would be the VM name). Therefore we can see if a device was used lately and who the owner is (host, mount status, application or VM name). The same information can also be viewed in SE with symdev list -sid xxxx -ppi.

[Screenshot: symdev list -sid xxxx -ppi output in Solutions Enabler]

The use case we were looking to resolve here is that SAs often get requests for array resources from application owners who want to future-proof themselves for projects. From an application owner's perspective this makes perfect sense, but for an SA whose job is to manage array usage and consumption it is not best practice. In some cases the people who requested the capacity leave or transfer and the capacity they owned goes unused. Up until this feature there was no good way from the storage side to track it down.

This information is updated every 24 hours, and if the SA feels that application owners are taking liberties by not mounting volumes or using the storage, they can have a frank discussion with the application owner!

PowerPath Metrics:



Another new feature is that we have exposed some PowerPath metrics in our Performance section. These metrics are % PowerPath Observed Relative RT (PowerPath reported RT divided by array RT), PowerPath Average Response Time (ms), and PowerPath Observed Delta RT (ms) (PowerPath reported RT minus the array RT).

[Screenshot: PowerPath metrics in the All Metrics selection]

These metrics can be accessed in the All Metrics section by using the radio button slider. These new metrics are available at the SG and thin volume object type level. Here is an example showing the differences in the three measurements versus array response time.

[Screenshot: PowerPath response-time metrics compared with array response time]

In this example the array response time is .41 ms, while the average I/O response time as measured by PowerPath from the host to the array is .60 ms. This .20 ms difference (observed delta) and 147% relative delta (% PowerPath Observed Relative RT) can help identify where the slowdown is occurring, and here it is not at the array level. A spike in % PowerPath Observed Relative RT may indicate a SAN slowdown or a possible slow-drain problem occurring outside of the array.
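To make the arithmetic behind the two derived metrics explicit (using the rounded values shown above, so the results differ very slightly from the displayed figures):

PowerPath Observed Delta RT = PowerPath reported RT - array RT = 0.60 ms - 0.41 ms ≈ 0.2 ms

% PowerPath Observed Relative RT = PowerPath reported RT / array RT = 0.60 / 0.41 ≈ 146-147%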

I hope you found these use cases helpful and if you have any questions please let me know.

I would like to thank some learned PowerPath colleagues in preparing this content, Owen Crowley in QE, Eric Don in Engineering and Robert Lonadier in PM.


RecoverPoint for Virtual Machines: SC production RPAs are in reboot regulation

Article Number: 502032 Article Version: 3 Article Type: Break Fix



RecoverPoint for Virtual Machines

Issue (impact on customer): SC production RPAs are in reboot regulation

Impacted Configuration & Settings [Replication Mode, Splitter Type, Compression, CDP/CLR/Multi, FC/IP/iSCSI, Policies etc.]: RP4VM, failed to register a new VM to the VC too many times.

Impact on RP: Control keeps on crashing

Affected versions: 5.0.1

Root cause:

When we protect a VM, we register a new VM in the VC to create a shadow VM. In this case the registration fails, and after 5 retry attempts RPA control gives up and generates a message reporting the failure with the VM name. The problem is that we deleted the memory allocated for the registration before generating the message, so we hit an assertion (trying to access a variable in memory that no longer exists) and control crashes repeatedly until the RPA enters reboot regulation.

The customer deleted a production VM that was protected by a RecoverPoint for Virtual Machines cluster without un-protecting it first.

Or

Any other changes that may cause failure for RPA to register VMs.

Fixed at: 5.1, 5.0.1.2

Workaround:

Perform step-by-step procedure:

1. Find out why the VM keeps failing to register to the VC and try to solve the error at the VC to prevent this from recurring.

2. Unprotect and Re-protect the VMs on the same CG.

3. If step 2 does not succeed, disable and then re-enable the CG.


RecoverPoint for Virtual Machines: Unable to protect vAppliance due to error stating “Create an appliance Linux root password” must be configured

Article Number: 499643 Article Version: 4 Article Type: Break Fix



RecoverPoint for Virtual Machines

Issue (impact on customer):

Unable to finish protecting vAppliance due to error stating “Create an appliance Linux root password” must be configured

Impacted Configuration & Settings:

RP4VM replicating vAppliances

Impact on RP:

RP cannot finish replicating vAppliance until the vApp options are populated on the replica VM.

Symptoms found in the logs:

When protecting a vAppliance, the following error will show in the vSphere GUI and stop the protection process:

“Property ‘Create an appliance Linux root password’ must be configured for the VM to power on.”

OR

“Property ‘Initial root password’ must be configured for the VM to power on.”

Root cause:

RP does not support replicating vApp properties, which stops the copy vAppliance from powering on.

No change

Workaround:

– Deploy the OVF of the vAppliance to be replicated at the DR side and configure the vApp options.

– When protecting the vAppliance have it replicate to an existing VM and select the VM that was deployed in the previous step.

OR

– Protect the vAppliance allowing RP to auto provision the copy.

– When the error stating the replica VM cannot power on occurs, manually select the VM, edit its settings, and apply the vApp options as they are configured on production.

– Allow replication to complete, then enter image access on the DR copy to confirm everything is working correctly.

Resolution

Fixed at:

N/A


Re: Dell EMC Unity Laptop Demo Install error

Hello,

I am trying to install the Unity laptop demo on my Windows 7 VM running on my Mac with Fusion 8.1.1. It appears that the Anywhere installer thinks there are multiple instances running and quits. See the attached screenshot. This seems to be an issue within the Fusion VM, since I can install it on my home PC.

Checking Anywhere’s support website, there is an article that indicates there is a .tmp file that might need to be deleted in my home directory, but I am unable to find it.

Has anyone tried to install this demo on a Windows VM?

Thanks,

Alan Kobuke


Re: Isilon avscan Statistics, Interpretation of efs.bam.av.stats

IHAC with a 5 node X410 cluster. 5 ICAP servers are configured and connected.



nuea-snisi05-4# isi antivirus servers list

Url Description Enabled

—————————————————-

icap://dcew-ificap01.gfk.com McAfee ICAP VM Yes

icap://dcew-ificap02.gfk.com McAfee ICAP VM Yes

icap://nuew-ificap01.gfk.com McAfee ICAP VM Yes

icap://nuew-ificap02.gfk.com McAfee ICAP VM Yes

icap://nuew-ificap03.gfk.com McAfee ICAP VM Yes

—————————————————-

Total: 5



Checking the AV statistics, I found the following (after clearing the stats and running again for about an hour):

nuea-snisi05-4# isi_sysctl_cluster efs.bam.av.stats

efs.bam.av.stats=

scanned: 475521 scan_on_open: 0

scan_on_read: 0 scan_on_close: 475521

manual_scan: 0

current_wi_count: 2 max_wi_count: 6562

timeout: 163963 success: 204994

quarantine: 0 repair: 0

truncate: 0 infected: 0

skipped: 224167 failed: 46354

Can anybody help with more information on the shown statistic values?

scanned is the same as scan_on_close. That is the way it should be.

Is skipped the number of files that are scanned but filtered out? There are a lot of file extensions filtered on this cluster.

Scanned minus skipped is 251354. That is about the same as the sum of success + failed.
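Working from the numbers above: 475521 - 224167 = 251354 (scanned minus skipped), and 204994 + 46354 = 251348 (success plus failed), so the two figures do line up to within a handful of files.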

But what does timeout mean? That is a pretty high number.

The same for failed; that is about 10%.

Customer claims to have latency issues. Just trying to investigate if the av-engine is part of the problem.


RecoverPoint for Virtual Machines: Failover is unsuccessful when shadow VM is storage vMotioned

Article Number: 488925 Article Version: 4 Article Type: Break Fix



RecoverPoint for Virtual Machines 4.3,RecoverPoint for Virtual Machines 4.3 P1,RecoverPoint for Virtual Machines 4.3 SP1,RecoverPoint for Virtual Machines 4.3 SP1 P1,RecoverPoint for Virtual Machines 4.3 SP1 P2

Impact:

There was a storage vMotion done on a copy VM to a different DataStore.

In order to do so, both shadow VM and replica VM must be migrated.

The first migration is successfully performed but the second one is blocked by VMware.

Symptoms found in the logs:

The second migration is blocked due to an error:

“Relocate virtual machine <vm name> Cannot complete the operation because the file or folder ds:///vmfs/volumes/<first vmUid>/<first vm name>/vmware.log already exists”

Root cause:

Storage vMotion is performed by VMware; the second migration fails due to a conflict with an already existing file.

Affected versions:

4.3, 4.3.1, 4.3.1.1, 4.3.1.2, 5.0

Change:

In RP4VMs, migrating protected VMs (with shadow based RP4VM solutions) to a different DataStore.

Workaround:

In order to Storage vMotion the shadow and replica VMs, the following steps would need to be performed:

1. Storage vMotion the shadow VM – only the configuration file (VMX) should be migrated, the replica VMDK(s) should be kept on the current DataStore.

By default, the configuration file and all VMDKs are migrated as part of the storage vMotion process, so it is vital to click "Advanced" on the "Select DataStore" step of the "Migrate Virtual Machine" wizard and select a different DataStore specifically for the configuration file while keeping the VMDKs on the current DataStore.

2. Perform Test a Copy (enable image access) to bring up the replica VM

3. Storage vMotion the replica VM – (Configuration file and all VMDKs) to the new DataStore

If a full migration was performed in step 1 (i.e. of both the VMX configuration file and the VMDKs), the second migration of the replica VM will be denied. To solve this, perform a full migration of the shadow VM back to the initial DataStore and then follow the storage vMotion steps exactly as described in the procedure above.

Finally, the floppy image/ISO file used by the shadow VM needs to be changed as well:

1. Check if the target DataStore has a "RecoverPoint" folder with a RecoverPointVM.flp/RecoverPointVM.iso file.

If it does not, copy the RecoverPoint folder from the current DataStore

2. Edit the settings of the shadow VM and point the floppy image to the RecoverPointVM.flp file on the target DataStore.

3. Reset the shadow VM

Remark: RecoverPointVM.iso is relevant to version 4.3.1 only. RecoverPointVM.flp is relevant to 4.3.1.1 and higher versions.

Resolution

Fixed at:

Storage vMotion procedure is described above.


RecoverPoint for Virtual Machines: Virtual Machine CPU / RAM resources insufficient on failover

Article Number: 486282 Article Version: 3 Article Type: Break Fix



RecoverPoint for Virtual Machines 4.3,RecoverPoint for Virtual Machines 4.3 P1,RecoverPoint for Virtual Machines 4.3 SP1,RecoverPoint for Virtual Machines 4.3 SP1 P1,RecoverPoint for Virtual Machines 4.3 SP1 P2

Issue:

When a customer is failing over a Virtual Machine (VM) from the production VM to the copy or shadow VM (e.g. during disable image access, failover, etc.), the shadow VM is loaded with the CPU/RAM (memory) reservation of the production or copy VM, which may prevent the shadow VM from powering on.

Also, when switching back from the shadow VM to the production or copy VM (during image access, failover, etc.), the production or copy VM is loaded with the memory reservation of the shadow VM (the default memory reservation of the shadow VM is 64 MB).

Symptoms found in logs:

If the shadow VM can’t power on because of the CPU reservation (as seen in BZ 131448), the following warning will be shown in the Virtual Center:

“insufficient capacity on each physical CPU”

If the shadow VM can't power on due to the RAM/memory reservation (as seen in BZ 130370), search the VM configuration in the connectors view log in RP (file /home/kos/connectors/logs/vi_connector_view.log):

isShadowVM=true virtualMemoryReservationInMB=(different from 64)
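A hedged one-liner for this search, assuming the log path given above and that both fields appear on the same log line:

grep "isShadowVM=true" /home/kos/connectors/logs/vi_connector_view.log | grep -v "virtualMemoryReservationInMB=64"

Any line printed refers to a shadow VM whose memory reservation is not the expected 64 MB.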

Cause:

Due to a bug in the VMS API with VMware, the shadow VM is loaded with the wrong reservations.

Affected versions:

4.3, 4.3.0.1, 4.3.1

Customer uses RP4VMs to failover a VM or put a CG into image access.

Workaround:

Manually change the shadow/copy/production VM’s RAM / memory and CPU resources.

Fixed/Resolved:

4.3.1.2


7023355: ‘execstack -c ‘, or link it with ‘-z noexecstack’ message in log files

This document (7023355) is provided subject to the disclaimer at the end of this document.

Environment


eDirectory

Identity Manager

iManager

Situation

Messages in ndsd.log

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/novell/lib64/libnpkit.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
NetIQ JClient 2.08.0403-2.8.403. (c) 2013 NetIQ Corporation and its affiliates. All Rights Reserved.
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/novell/eDirectory/lib64/libdhutilj.so.3.0.500 which might have disabled stack guard. The VM will try to fix the stack guard now.
It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
Message in Catalina.out
NetIQ JClient 4.00.0130-4.0.130. (c) 2013 NetIQ Corporation and its affiliates. All Rights Reserved.
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /var/opt/novell/iManager/nps/WEB-INF/bin/linux/libnpkiapi.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.

Resolution

Analysis of this message has determined that the process isn't affected and the message is cosmetic in nature.
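If you nevertheless want to inspect the flag the warning refers to, the execstack utility can query it; a hedged example against one of the libraries named in the messages above (-q prints X when the executable-stack flag is set and - when it is not):

execstack -q /opt/novell/lib64/libnpkit.so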

Cause

Messages will be generated when a Java process tries to load a native library.

This behavior was introduced by Oracle from Java 1.7 onward.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


How to troubleshoot common SR-IOV related issues

Common error scenario #1: A VM fails to have SR-IOV network assigned.

  1. Please make sure there are enough available VFs to be assigned to the VM by checking "remaining-capacity". This can be confirmed from XenCenter or the xe CLI.
  • XenCenter

[Screenshot: remaining capacity shown in XenCenter]

  • xe CLI
  1) Get the UUID of the SR-IOV network:

xe network-sriov-list

  2) Check the remaining-capacity of the network:

xe network-sriov-param-list uuid=<SR-IOV_Network_UUID>

Sample output

[Screenshot: sample xe network-sriov-param-list output]

  2. If the NIC legacy driver has been updated in Dom0, please double-check whether the following parameters have changed in the /etc/modprobe.d/<driver_name>.conf configuration file. If any value is incorrect, manually change it back to the value you require (see the sketch after the parameter table below).

param name | default max_vfs | sysfs interface support
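As a hedged illustration only (the driver name and value are hypothetical; use the parameter name documented for your NIC driver), a VF setting in the modprobe configuration might look like this:

cat /etc/modprobe.d/ixgbe.conf

options ixgbe max_vfs=8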


Common error scenario #2: When trying to assign SR-IOV Network to a VM from XenCenter, there is no SR-IOV Network option in “Add Virtual Interface” Wizard for the VM.

If the VM was newly created in XenServer 7.6, please check whether the VM has <restriction field="allow-network-sriov" value="1"/> tagged in its recommendations, using the xe vm-param-list uuid=<vm_uuid> command.

If there is no such tag, the OS of the VM does not support the SR-IOV feature and you should not assign an SR-IOV network to that VM.
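A hedged shortcut to pull just this field (xe vm-param-get is standard xe syntax; the UUID is a placeholder):

xe vm-param-get uuid=<vm_uuid> param-name=recommendations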

Output sample for a guest OS that supports SR-IOV Network:

recommendations ( RO): <restrictions><restriction field="memory-static-max" max="1649267441664"/><restriction field="vcpus-max" max="32"/><restriction field="has-vendor-device" value="false"/><restriction field="allow-gpu-passthrough" value="1"/><restriction field="allow-vgpu" value="1"/><restriction field="allow-network-sriov" value="1"/><restriction max="255" property="number-of-vbds"/><restriction max="7" property="number-of-vifs"/></restrictions>

Please note: if a VM was exported from a previous XenServer version, it will not have the above restriction field even if its guest OS supports the SR-IOV feature. You will need to assign the SR-IOV network to that VM by xe CLI, as sketched below.
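A minimal sketch of doing this from the xe CLI (the UUIDs and the device number are placeholders; xe vif-create is the standard command for adding a virtual interface, here pointed at the SR-IOV network's UUID):

xe vif-create vm-uuid=<vm_uuid> network-uuid=<SR-IOV_Network_UUID> device=<next_free_vif_index>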


Common error scenario #3: SR-IOV Network is assigned to a VM by xe CLI, while inside the VM, the SR-IOV Network is not visible.

In this case, please make sure

  1. The guest OS supports SR-IOV. Please refer to scenario #2 to check whether the OS supports the SR-IOV feature.
  2. The NIC driver has been installed on the guest already.

Example for Windows Server OS:

[Screenshot: VF NIC driver installed on a Windows Server guest]

Example for Linux Server:

ethtool -i eth1

driver: ixgbevf

version: 3.2.2-k-rh7.4

firmware-version:

expansion-rom-version:

bus-info: 0000:00:05.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: no

supports-register-dump: yes

supports-priv-flags: no

  3. Whether the VM was running when you assigned the SR-IOV network to it. For Windows Server OS, the SR-IOV network will not be visible until the VM has been shut down and rebooted.

Please understand that you can assign an SR-IOV network to any VM by xe CLI, so please be sure you are assigning it to a guest that supports this feature. Even if an SR-IOV network appears to work on a non-supported guest, Citrix does NOT support this use case if any problem occurs.


Common error scenario #4: VM fails to start after SR-IOV Network has been assigned.

In this case, you need to check kern.log and xensource.log with the keyword "sriov" to find more information.
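For example (a hedged sketch assuming the default Dom0 log locations):

grep -i sriov /var/log/kern.log /var/log/xensource.log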

Reference:

For your reference, here are the messages output in a working scenario.

xensource.log output when VF is attached and active for the VM.
broadman xenopsd-xc: [debug|localhost.localdomain|16 |Async.VM.start R:34ed6712dab6|xenops] Device.NetSriovVf.add domid=90 devid=2 pci=0000:03:10.1 vlan=none mac=fa:75:91:01:17:41 carrier=true rate=none other_config=[] extra_private_keys=[net-sriov-vf-id=2; xenopsd-backend=classic] extra_xenserver_keys=[static-ip-setting/mac=fa:75:91:01:17:41; static-ip-setting/error-code=0; static-ip-setting/error-msg=; static-ip-setting/enabled=0; static-ip-setting/enabled6=0; mac=fa:75:91:01:17:41]



broadman xenopsd-xc: [debug|localhost.localdomain|16 |Async.VM.start R:34ed6712dab6|xenops] adding device B0[/local/domain/0/xenserver/backend/net-sriov-vf/90/2] F90[/local/domain/90/xenserver/device/net-sriov-vf/2] H[/xapi/8f93c150-0fe5-a053-8748-47a22d1ef023/hotplug/90/net-sriov-vf/2]

xensource.log output which indicates SR-IOV is not active on the VM yet.

broadman xenopsd-xc: [debug|localhost.localdomain|4361 |org.xen.xapi.xenops.classic events D:de40a7d84a1f|xenops] VM = 8f93c150-0fe5-a053-8748-47a22d1ef023; domid = 34; Device is not active: kind = net-sriov-vf; id = 2; active devices = [ ]

Common error scenario #5: In an SR-IOV network enabled use case, a host may not be able to join an existing pool.

If you have enabled an SR-IOV network on a host and want that host to join an existing pool, the join will fail. Only a host that does not have SR-IOV configured can join an existing pool. Please refer to the table below for more details.
host (Network SR-IOV enabled) joining a pool (Network SR-IOV enabled): joining the existing pool will be blocked, since the new host is considered a not-clean host.

host (Network SR-IOV enabled) joining a pool (Network SR-IOV not configured): joining the existing pool will be blocked, since the new host is considered a not-clean host.

host (Network SR-IOV not configured) joining a pool (Network SR-IOV enabled): the host can join the pool, but with the following notes:

1. The SR-IOV network of the newly joined host will be enabled AUTOMATICALLY if and only if the NIC on the host and the NIC on the pool master have the same NIC type and the same NIC position; this is checked by XAPI.

2. If the PCI type at the same NIC position differs between the pool master and the newly joined host, then the SR-IOV network will not be enabled for that NIC on the joined host, even though the host can still join the pool.

3. Different types of SR-IOV physical PIFs can NOT be put into one network.

4. In the pool, the user can enable an SR-IOV network for the newly joined host by xe CLI if the SR-IOV PIF has the same type as the pool master's PIF in that network, even if they are in different positions (see the sketch below).

host (Network SR-IOV not configured) joining a pool (Network SR-IOV not configured): the host can join the existing pool.
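For note 4, a hedged sketch of enabling SR-IOV for the new host's PIF from the xe CLI (UUIDs and names are placeholders; the network-sriov-create command belongs to the same network-sriov command family used in scenario #1, so verify it against your XenServer version):

xe pif-list host-name-label=<new_host_name> device=<NIC_device>

xe network-sriov-create network-uuid=<SR-IOV_Network_UUID> pif-uuid=<PIF_UUID_on_new_host>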

Common error scenario #6: A VM is configured with both an SR-IOV VF and a vGPU assigned, but when the VM is started, it has no SR-IOV VF assigned.

If a VM is configured with both an SR-IOV VF and a vGPU assigned, the selection of a host to start the VM will respect the vGPU's host selection only. After host selection based on vGPU resources, XAPI only asserts on the selected host based on the SR-IOV network's remaining capacity. This is because a vGPU is a more expensive and rarer resource than an SR-IOV VF.
