Re: RecoverPoint v3.5 – ‘Failover options’ greyed out

Hello All,

I am trying to test my failover capabilities through RPA. After enabling ‘Image access’ (physical) and mounting the replica LUN to the DR ESXi 5.5 hosts, I am not able to use ‘Failover Actions – Failover to remote replica’ (as shown in the picture below); this option is grayed out. I don’t have SRM, and my policy settings for ‘Stretch cluster/VMware SRM support’ etc. are set to NONE. Without the failover actions, I am not able to replicate my changes back to the Prod_Test site. Am I missing a policy or some configuration? Any suggestions are welcome.

Also, as seen in the picture below, the storage status on the DR_Test (remote) side is showing ‘Enabling logged access’. Does this mean that enabling is still in progress, and that is why the option is grayed out? I am nevertheless able to mount the replica LUN, add the replicated VM to the inventory, boot it up and browse the replicated VM at my DR site.

Thanks in advance!





SEP 14.x does not allow user registry hives to dismount after logoff

I need a solution

I’ve had this issue for quite some time and surprised no one else has noticed this bug.

After about a day of running SEP, when I look in Regedit under HKEY_USERS I’ll see a hive for everyone who has previously logged into the Windows Server 2016/XenApp 1808 VMs. If these users attempt to return to the affected VM, they are denied login until their hive is dismounted. The bug is able to survive a reboot.

This issue seems to manifest when the Symantec registry key LaunchSMCGui is set to zero.

I used to mitigate the problem temporarily by running SMC -Stop and SMC -Start, but this no longer works in 14.2 MP1. SEP 14.2 would also cause my XenApp VMs to BSOD a lot.
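To see which hives are stuck under HKEY_USERS, the loaded subkeys can be filtered for per-user profile SIDs. The sketch below is illustrative, not part of SEP: the SID pattern and the winreg enumeration shown in the comment are standard Windows conventions, and the helper itself is a hypothetical name.

```python
import re

# Per-user profile hives under HKEY_USERS have SIDs of the form
# S-1-5-21-<domain>-<RID> (optionally with a _Classes companion).
# Well-known system SIDs (S-1-5-18/19/20) and .DEFAULT are always
# loaded and must be ignored.
USER_SID_RE = re.compile(r"^S-1-5-21-\d+-\d+-\d+-\d+(_Classes)?$")

def leftover_user_hives(hku_subkeys):
    """Return the HKEY_USERS subkeys that look like per-user profile
    hives -- candidates for hives left mounted after logoff."""
    return [k for k in hku_subkeys if USER_SID_RE.match(k)]

# On Windows the subkey list would come from the stdlib winreg module:
#   import winreg
#   with winreg.OpenKey(winreg.HKEY_USERS, "") as hku:
#       n = winreg.QueryInfoKey(hku)[0]
#       keys = [winreg.EnumKey(hku, i) for i in range(n)]
```

Running this periodically makes it easy to spot hives that survive logoff before users are denied login.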



How to run an on-demand backup of a VM residing under the “containerclients” domain using the CLI

I want to run an on-demand backup of a VM under the domain “containerclients” using the CLI. It keeps giving me the error “Client is not a member of group.”

admin@Backupserver:~/>: mccli client backup-group-dataset --name="/VCenter/ContainerClients/VMClient" --group-name="/VCenter/VMwareBackupGroup"

1,22241,Client is not a member of group.


Whereas I can run it from the GUI by selecting the VM separately; when starting the backup it asks for the group name in which the container folder is added. So it seems the GUI can collect that information, but the CLI cannot.

I do not want to run it by adding the client to some temporary group. I want to know how to run it directly, as we run on-demand backups for agent-based clients, since this is possible from the GUI.
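When scripting mccli, it helps to parse its status lines rather than eyeball them. The sketch below only assumes the “status,code,message” shape seen in the sample output above (e.g. `1,22241,Client is not a member of group.`) — the field meanings are inferred from that single sample, and the function name is hypothetical.

```python
def parse_mccli_error(line):
    """Parse an mccli status line of the assumed shape
    '<status>,<code>,<message>' into a dict, or return None
    if the line does not match."""
    parts = line.strip().split(",", 2)
    if len(parts) != 3 or not parts[1].isdigit():
        return None
    return {"status": parts[0], "code": int(parts[1]), "message": parts[2]}

# In a wrapper script you would capture the command output, e.g.:
#   import subprocess
#   out = subprocess.run(["mccli", "client", "backup-group-dataset", ...],
#                        capture_output=True, text=True).stdout
# and feed each line to parse_mccli_error() to branch on code 22241.
```

A script can then detect error 22241 and report the membership problem explicitly instead of failing silently.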


PowerPath integration in Unisphere for PowerMax

In the Unisphere for PowerMax 9.0 release we have increased integration with PowerPath hosts that are discovered on the arrays monitored in Unisphere. With a new PowerMax array with an FX/Pro (premium) license you are entitled to up to 75 ESXi host licenses. These PowerPath licenses are now automatically deployed for VMware, Windows and Linux hosts running PowerPath 6.3 and later.

With over 25 years of engineering and development behind it, PowerPath is a family of multipathing software products that automate and optimize I/O performance in physical and virtual environments. PowerPath intelligently manages I/O path loads and failover to increase application availability and reduce latency. Proprietary optimization for Dell EMC PowerMax and VMAX increases I/O performance 2x–3x over native multipathing.

We have heard from customers that they want to see more integration across our existing software products, both to help their ongoing automation drive and to provide enhanced troubleshooting capabilities.

PowerPath Hosts Landing Page:

In Unisphere, on the left-hand navigation, you will find it under Hosts > PowerPath Hosts.


This view gives you a single pane of glass view of all your PowerPath hosts that are connected on arrays that this specific Unisphere instance can see.

In this view you are presented with a lot of useful information if you are in the midst of a troubleshooting scenario. Selecting the lcvt1032 host brings up much more detail in the right-hand pane: the PowerPath version and patch level; OS version, revision and patch level; server hardware, manufacturer and serial number; PowerPath license type; and connectivity type (FC, iSCSI or FCoE).

The objects Initiators, Hosts/Host Groups, Masking Views, Storage Groups and Volumes are displayed in blue and can be selected to drill down to the specific object. At the bottom of the list is the number of VMs on this ESXi server; selecting the VM count highlighted in blue brings you to a list of the VMs, as we see here.


So let’s say we need to quickly find out what the initiators are for lcvt1032, as we are investigating an internal ticket for a performance issue.


Now, to drill down on your performance problem in this view, you can select Volumes to see which volumes are assigned to this host.


When you select an individual volume, the right-hand pane shows a wealth of good information, including whether the volume is mounted. In this case it is not, which may indicate the root of the problem. If the device were mounted, below the mount status would be the name of the host process using the device (for ESXi it is the VM name). The host process/VM name allows you to figure out what application is using the device. At the bottom of the device information is the date when the device last received I/O from any host. This allows you to find devices that have not been used for a while, and the combination of host name and process/VM name allows you to identify who owns the dormant device.

I hope I’ve displayed some of the power of the troubleshooting you can potentially do from the PowerPath landing page when you are investigating a potential PowerPath issue.

Auto Initiator Group (IG) creation using PowerPath Host Name:

On the main landing page we also have a new option called Create Host, which allows Auto IG creation:


On 5978 HyperMax code, Unisphere/Solutions Enabler 9.0 allows a hostname to be provided when creating an IG. Unisphere/SE picks the host HBA WWNs from the host registration data provided by PowerPath and adds them to the IG for you. This significantly helps the overall provisioning flow when adding a new PowerPath host.

The initiator array switch that controls AUTO IG creation is disabled by default, but you can turn it on by selecting it in the settings section of Unisphere.


Auto IG Creation works in tandem with PowerPath host registration: if the PowerPath host registration switch is disabled, Auto IG Creation will also be disabled.

As of now, Linux and AIX PowerPath will automatically detect devices added or deleted on a given path following masking or zoning changes (ESXi and Windows already do this). Once PowerPath devices have changed, a host scan is run and the OS discovers the new devices.

Auto IG creation achieves this by automatically pulling the WWNs registered with the given hostname, provided the correct settings are enabled as outlined above. This reduces the configuration work the user has to perform when setting up a host.

Device in Use Details:

Another really powerful feature is the Device in Use details. Using Unisphere or SE you can easily see whether a device is mounted and active by looking at the volume level. In Unisphere you would navigate down to the volume level:


On selecting a specific device, the right-hand pane shows more detailed information. We can see here that the device mounted state is true and the device is used by a process called consist_lun_x (for ESXi it would be the VM name). Therefore we can see whether a device was used lately and who the owner is (host, mount status, application or VM name). The same information can also be viewed in SE with symdev list -sid xxxx -ppi.


The use case we were looking to solve here is that SAs often get requests for array resources from application owners who want to future-proof themselves for projects. From an application owner’s perspective this makes perfect sense, but for an SA whose job is to manage array usage and consumption it is not best practice. In some cases the people who own that capacity leave or transfer, and the capacity goes unused. Until this feature there was no good way from the storage side to track it down.

This information is updated every 24 hours and if the SA feels that application owners are taking liberties by not mounting volumes or using storage they can have a frank discussion with the application owner!

PowerPath Metrics:

Another new feature is that we’ve exposed some PowerPath metrics in our Performance section. These metrics are % PowerPath Observed Relative RT (PowerPath-reported RT divided by array RT), PowerPath Average Response Time (ms), and PowerPath Observed Delta RT (ms) (PowerPath-reported RT minus array RT).


These metrics can be accessed in the all metrics section by using the radio button slider. These new metrics are available at the SG and Thin volume object type level. Here’s an example showing the differences in the three measurements vs array response time.


In this example the array response time is 0.41 ms, while the average I/O response time as measured by PowerPath from the host to the array is 0.60 ms. This roughly 0.2 ms observed delta and 147% relative value (% PowerPath Observed Relative RT) can help identify where the slowdown is occurring – not at the array level here! A spike in % PowerPath Observed Relative RT may indicate a SAN slowdown or a possible slow-drain problem occurring outside the array.
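The two derived metrics follow directly from the definitions above: delta RT is host-measured RT minus array RT, and relative RT is their ratio as a percentage. A minimal sketch (the figures in the screenshot appear rounded for display, so the raw math comes out slightly under the quoted 0.20 ms / 147%):

```python
def powerpath_deltas(host_rt_ms, array_rt_ms):
    """Compute PowerPath Observed Delta RT (ms) and
    % PowerPath Observed Relative RT from a host-side and an
    array-side average response time, per the definitions above."""
    delta_ms = host_rt_ms - array_rt_ms
    relative_pct = 100.0 * host_rt_ms / array_rt_ms
    return delta_ms, relative_pct

# Example with the values from the text (0.60 ms host, 0.41 ms array):
delta, relative = powerpath_deltas(0.60, 0.41)
```

A large delta with a normal array RT points the investigation at the SAN or host side rather than the array.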

I hope you found these use cases helpful and if you have any questions please let me know.

I would like to thank some learned PowerPath colleagues in preparing this content, Owen Crowley in QE, Eric Don in Engineering and Robert Lonadier in PM.


RecoverPoint for Virtual Machines: SC production RPAs are in reboot regulation

Article Number: 502032 Article Version: 3 Article Type: Break Fix

RecoverPoint for Virtual Machines

Issue (impact on customer): SC production RPAs are in reboot regulation

Impacted Configuration & Settings [Replication Mode,Splitter Type,Compression,CDP/CLR/Multi,FC/IP/iSCSI, Policies etc.]: RP4VM, failed to register new VM to the VC too many times.

Impact on RP: Control keeps on crashing

Affected versions: 5.0.1

Root cause:

When we register a new VM in the VC (upon performing protect) to create a shadow VM, the registration can fail; after 5 retry attempts, RPA control gives up and generates a failure message containing the VM name. The problem is that the memory allocated for the registration was deleted before the message was generated, causing an assertion (accessing a variable in memory that no longer exists), and control crashes repeatedly until the RPA enters reboot regulation.
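The retry-then-report flow described above can be sketched as follows. This is illustrative Python, not RecoverPoint source: the point is the ordering fixed in 5.1 — the failure message must be built while the registration context (VM name, last error) is still valid, not after it has been freed.

```python
def register_with_retries(register, vm_name, attempts=5):
    """Try the VC registration up to `attempts` times; on final
    failure, build the error message *before* discarding the
    registration context.  (In the buggy flow, the context was
    freed first and building the message hit dead memory.)"""
    last_error = None
    for _ in range(attempts):
        try:
            return register(vm_name)
        except RuntimeError as exc:
            last_error = exc
    # vm_name and last_error are still live here -- safe to report.
    raise RuntimeError(f"failed to register shadow VM {vm_name!r}: {last_error}")
```

The same ordering rule applies in any language: report first, release resources second.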

The customer deleted a production VM that was protected by a RecoverPoint for Virtual Machines cluster without un-protecting it first.


Any other changes that may cause failure for RPA to register VMs.

Fixed at: 5.1


Perform step-by-step procedure:

1. Find out why the VM keeps failing to register to the VC and resolve the error at the VC to prevent this from re-occurring.

2. Unprotect and re-protect the VMs in the same CG.

3. If step 2 doesn’t succeed, disable and re-enable the CG.


RecoverPoint for Virtual Machines: Unable to protect vAppliance due to error stating “Create an appliance Linux root password” must be configured

Article Number: 499643 Article Version: 4 Article Type: Break Fix

RecoverPoint for Virtual Machines

Issue (impact on customer):

Unable to finish protecting vAppliance due to error stating “Create an appliance Linux root password” must be configured

Impacted Configuration & Settings:

RP4VM replicating vAppliances

Impact on RP:

RP cannot finish replicating vAppliance until the vApp options are populated on the replica VM.

Symptoms found in the logs:

When protecting a vAppliance, the following error shows in the vSphere GUI and stops the protection process:

“Property ‘Create an appliance Linux root password’ must be configured for the VM to power on.”


“Property ‘Initial root password’ must be configured for the VM to power on.”

Root cause:

RP does not support replicating vApp properties, which stops the copy of the vAppliance from powering on.

No change


– Deploy the OVF of the vAppliance to be replicated at the DR side and configure the vApp options.

– When protecting the vAppliance have it replicate to an existing VM and select the VM that was deployed in the previous step.


– Protect the vAppliance allowing RP to auto provision the copy.

– When the error stating the replica VM cannot power on occurs, manually select the VM, edit its settings, and apply the vApp options as they are configured on production.

– Allow for replication to complete, enter image access on the DR copy to confirm everything is working correctly.

Resolution / Fixed at:



Re: Dell EMC Unity Laptop Demo Install error


I am trying to install the Unity laptop demo on my Windows 7 VM running on my Mac with Fusion 8.1.1. It appears that the Anywhere installer thinks there are multiple instances running and quits; see the attached screenshot. This seems to be an issue within the Fusion VM, since I can install it on my home PC.

Checking Anywhere’s support website, there is an article indicating that a .tmp file in my home directory might need to be deleted, but I am unable to find it.

Has anyone tried to install this demo on a Windows VM?


Alan Kobuke


Re: Isilon avscan Statistics, Interpretation of efs.bam.av.stats

IHAC with a 5 node X410 cluster. 5 ICAP servers are configured and connected.

nuea-snisi05-4# isi antivirus servers list

Url Description Enabled


icap:// McAfee ICAP VM Yes

icap:// McAfee ICAP VM Yes

icap:// McAfee ICAP VM Yes

icap:// McAfee ICAP VM Yes

icap:// McAfee ICAP VM Yes


Total: 5

Checking the AV statistics I found the following (after clearing the stats and running again for about an hour):

nuea-snisi05-4# isi_sysctl_cluster efs.bam.av.stats


scanned: 475521 scan_on_open: 0

scan_on_read: 0 scan_on_close: 475521

manual_scan: 0

current_wi_count: 2 max_wi_count: 6562

timeout: 163963 success: 204994

quarantine: 0 repair: 0

truncate: 0 infected: 0

skipped: 224167 failed: 46354

Can anybody help with more information on the shown statistic values?

scanned is the same as scan_on_close. That is the way it should be.

Is skipped the number of files that are scanned but filtered out? There are a lot of file extensions filtered on this cluster.

scanned – skipped is 251354. That is about the same as the sum of success + failed.

But what does timeout mean? That is a pretty high number.

The same for failed, which is about 10%.

Customer claims to have latency issues. Just trying to investigate if the av-engine is part of the problem.
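The counter arithmetic discussed above can be checked mechanically. A minimal sketch using the values from the `isi_sysctl_cluster efs.bam.av.stats` output (the helper and its field names are mine, not an Isilon tool):

```python
# Counter values from the efs.bam.av.stats output above.
stats = {
    "scanned": 475521, "scan_on_close": 475521, "skipped": 224167,
    "success": 204994, "failed": 46354, "timeout": 163963,
}

def check_av_counters(s):
    """Sanity-check the relationships noted above: all scans came
    from scan_on_close, and scanned - skipped should roughly equal
    success + failed."""
    dispatched = s["scanned"] - s["skipped"]
    completed = s["success"] + s["failed"]
    return {
        "all_on_close": s["scanned"] == s["scan_on_close"],
        "dispatched": dispatched,          # files actually sent for scanning
        "completed": completed,            # scans that finished either way
        "failed_pct_of_scanned": 100.0 * s["failed"] / s["scanned"],
    }
```

With these numbers, dispatched (251354) and completed (251348) agree to within a handful of in-flight scans, and failed is about 9.7% of scanned — consistent with the "about 10%" observation, and high enough to be worth chasing alongside the timeouts.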


RecoverPoint for Virtual Machines: Failover is unsuccessful when shadow VM is storage vMotioned

Article Number: 488925 Article Version: 4 Article Type: Break Fix

RecoverPoint for Virtual Machines 4.3,RecoverPoint for Virtual Machines 4.3 P1,RecoverPoint for Virtual Machines 4.3 SP1,RecoverPoint for Virtual Machines 4.3 SP1 P1,RecoverPoint for Virtual Machines 4.3 SP1 P2


There was a storage vMotion done on a copy VM to a different DataStore.

In order to do so, both shadow VM and replica VM must be migrated.

The first migration is successfully performed but the second one is blocked by VMware.

Symptoms found in the logs:

The second migration is blocked due to an error:

“Relocate virtual machine <vm name> Cannot complete the operation because the file or folder ds:///vmfs/volumes/<first vmUid>/<first vm name>/vmware.log already exists”

Root cause:

Storage vMotion is performed by VMware; the second migration fails due to a conflict with an already existing file.

Affected versions:

4.3, 4.3.1, 5.0


In RP4VMs, migrating protected VMs (with shadow based RP4VM solutions) to a different DataStore.


In order to Storage vMotion the shadow and replica VMs, the following steps would need to be performed:

1. Storage vMotion the shadow VM – only the configuration file (VMX) should be migrated, the replica VMDK(s) should be kept on the current DataStore.

By default, the configuration file and all VMDKs are migrated as part of the storage vMotion process, so it is vital to click “Advanced” on the “Select DataStore” step of the “Migrate Virtual Machine” wizard and select a different DataStore specifically for the configuration file while keeping the VMDKs on the current DataStore.

2. Perform Test a Copy (enable image access) to bring up the replica VM

3. Storage vMotion the replica VM – (Configuration file and all VMDKs) to the new DataStore

If a full migration was performed in step 1 (i.e. both the VMX configuration file and the VMDKs), the second migration of the replica VM will be denied. To solve this, perform a full migration of the shadow VM back to the initial DataStore and then follow the storage vMotion steps exactly as described in the procedure above.

Finally, the floppy image/ISO file used by the shadow VM needs to be changed as well:

1. Check whether the target DataStore has a “RecoverPoint” folder with a RecoverPointVM.flp/RecoverPointVM.iso file.

If it does not, copy the RecoverPoint folder from the current DataStore.

2. Edit the settings of the shadow VM and point the floppy image to the RecoverPointVM.flp file on the target DataStore.

3. Reset the shadow VM

Remark: RecoverPointVM.iso is relevant to version 4.3.1 only. RecoverPointVM.flp is relevant to and higher versions.
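The file-placement rule in the procedure above can be sketched as a small planner: for the shadow VM only the VMX moves to the target DataStore while the VMDKs stay put, and for the replica VM everything moves. This is an illustrative helper of my own, not a VMware API call — in practice it corresponds to the “Advanced” per-file selection in the Migrate wizard.

```python
def plan_svmotion(vm_role, current_ds, target_ds, vmdk_names):
    """Return a mapping of each file to the DataStore it should land
    on, per the shadow/replica migration rule described above."""
    if vm_role == "shadow":
        # Step 1: only the configuration file (VMX) moves.
        return {"vmx": target_ds, **{d: current_ds for d in vmdk_names}}
    if vm_role == "replica":
        # Step 3: configuration file and all VMDKs move.
        return {"vmx": target_ds, **{d: target_ds for d in vmdk_names}}
    raise ValueError(f"unknown vm_role: {vm_role!r}")
```

Comparing the planner's output against what the wizard would actually do is a cheap way to catch the "full migration in step 1" mistake before VMware blocks the second migration.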

Resolution / Fixed at:

The storage vMotion procedure is described above.


RecoverPoint for Virtual Machines: Virtual Machine CPU / RAM resources insufficient on failover

Article Number: 486282 Article Version: 3 Article Type: Break Fix

RecoverPoint for Virtual Machines 4.3,RecoverPoint for Virtual Machines 4.3 P1,RecoverPoint for Virtual Machines 4.3 SP1,RecoverPoint for Virtual Machines 4.3 SP1 P1,RecoverPoint for Virtual Machines 4.3 SP1 P2


When a customer fails over a virtual machine (VM) from the production VM to the copy or shadow VM (e.g. during disable image access, failover, etc.), the shadow VM is loaded with the CPU/RAM (memory) reservation of the production or copy VM, which may prevent the shadow VM from powering on.

Also, when switching back from the shadow VM to the production or copy VM (during image access, failover, etc.), the production or copy VM is loaded with the memory reservation of the shadow VM (default memory reservation of the shadow VM is 64 MB).

Symptoms found in logs:

If the shadow VM can’t power on because of the CPU reservation (as seen in BZ 131448), the following warning will be shown in the Virtual Center:

“insufficient capacity on each physical CPU”

If the shadow VM can’t power on due to the RAM/memory reservation (as seen in BZ 130370), search for the VM configuration in the RP connectors view log (files/home/kos/connectors/logs/vi_connector_view.log):

isShadowVM=true
virtualMemoryReservationInMB=(different from 64)


Due to a bug in the VMS API with VMware, the shadow VM is loaded with the wrong reservations.

Affected versions:

4.3, 4.3.1

Customer uses RP4VMs to failover a VM or put a CG into image access.


Manually change the shadow/copy/production VM’s RAM / memory and CPU resources.
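The log check described in the symptoms section (a shadow VM whose memory reservation differs from the 64 MB default) can be expressed as a one-line predicate. This is an illustrative sketch of the condition, not RecoverPoint code:

```python
SHADOW_DEFAULT_MEM_RES_MB = 64  # default shadow VM memory reservation

def reservation_mismatch(is_shadow_vm, mem_reservation_mb):
    """Flag the KB condition: a shadow VM whose memory reservation
    differs from the 64 MB default, i.e. it inherited the
    production/copy VM's reservation and may fail to power on."""
    return is_shadow_vm and mem_reservation_mb != SHADOW_DEFAULT_MEM_RES_MB
```

This mirrors the vi_connector_view.log pattern `isShadowVM=true` with `virtualMemoryReservationInMB` different from 64; when it fires, apply the manual resource fix above.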