How to run an on-demand backup of a VM residing under the “containerclients” domain using the CLI.

I want to run an on-demand backup of a VM under the “containerclients” domain using the CLI. It keeps giving me the error “client is not a member of the group”.

admin@Backupserver:~/>: mccli client backup-group-dataset --name="/VCenter/ContainerClients/VMClient" --group-name="/VCenter/VMwareBackupGroup"

1,22241,Client is not a member of group.

admin@Backupserver:~/>:



Whereas I can run it from the GUI by selecting the VM separately; when starting the backup, the GUI asks for the group name in which the container folder is added. So it seems the GUI can collect this information, but the CLI cannot.

I do not want to run it by adding the client to some temporary group. I want to know how to run it directly, the way we run an on-demand backup for an agent-based client, since this is possible from the GUI.
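For reference, a direct single-client on-demand backup is normally issued with the agent-style pattern sketched below. This is a sketch to test, not a confirmed fix for container clients: subcommand and option support varies by Avamar/MCS version (check mccli client --help first), and the dataset name is a placeholder.

admin@Backupserver:~/>: mccli client backup-dataset --name=/VCenter/ContainerClients/VMClient --dataset=/VCenter/VMwareDataset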


Avamar Client for Windows: Avamar backup fails with “avtar Error : Out of memory for cache file” on Windows clients

Article Number: 524280 Article Version: 3 Article Type: Break Fix



Avamar Plug-in for Oracle, Avamar Client for Windows, Avamar Client for Windows 7.2.101-31



In this scenario we see the same issue presented in KB 495969; however, that solution does not apply, due to an environment issue on the Windows client.

  • KB 495969 – Avamar backup fails with “Not Enough Space” and “Out of Memory for cache file”

The issue can affect any plug-in, as in this case, with the error presented in the following ways:

  • For FS backups:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\p_cache.dat'
avtar Error <18866>: Out of memory for cache file 'C:\Program Files\avs\var\p_cache.dat' size 805306912
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
  • For VSS backups:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\p_cache.dat'
avtar Error <18866>: Out of memory for cache file 'C:\Program Files\avs\var\p_cache.dat' size 1610613280
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
  • For Oracle backup:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat'
avtar Error <18866>: Out of memory for cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat' size 100663840
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space

or this variant:

avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat'
avtar Error <18864>: Out of restricted memory for cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat' size 100663840
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
avoracle Error <7934>: Snapup of <oracle-db> aborted due to rman terminated abnormally - check the logs
  • With the RMAN log reporting this:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 06/14/2018 22:17:40
RMAN-03009: failure of backup command on c0 channel at 06/14/2018 22:17:15
ORA-04030: out of process memory when trying to allocate 1049112 bytes (KSFQ heap,KSFQ Buffers)
Recovery Manager complete.

Initially it was thought that the cache file could not grow in size due to an incorrect "hashcachemax" value.

The client had plenty of free RAM (48GB total), so we increased the flag's value from -16 to -8 (a negative value -N caps the cache file at total RAM divided by N, so -16 allows a 3GB maximum and -8 a 6GB maximum).

But the issue persisted, and disk space was not a problem either; there were plenty of GBs of free space.

Further investigation with a test binary from the engineering team showed that the Windows OS was not releasing enough unused, contiguous memory to allocate and load the entire hash cache file into memory for the backup operation.

A test binary that allocated the memory in smaller pieces was also tried, to see whether the OS would then allow the full p_cache.dat file to be loaded into memory, but that did not help either; the operating system still refused to load the file into memory.

The root cause is hidden somewhere in the OS; however, in this case Microsoft was not engaged for further investigation on their side.

Instead, we found a way to work around the issue by making the cache file smaller; see the details in the resolution section below.

To work around this issue, we set the hash cache file to a smaller size so that the OS would have no trouble allocating it in memory.

In this case it was noticed that the OS also had problems allocating smaller sizes, such as 200+ MB, so we decided to resize p_cache.dat to just 100MB using the following flag:

--hashcachemax=100

This way the hash cache file would never grow beyond 100MB and would overwrite the old entries.

After adding that flag, the cache file must be recycled by renaming or deleting p_cache.dat (renaming is the preferred option).
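As a minimal sketch of those two steps, assuming the default Windows install path and that client-side flags are supplied through the avtar.cmd flag file (both assumptions; adjust to your environment):

1. Append the flag, on its own line, to C:\Program Files\avs\var\avtar.cmd:

--hashcachemax=100

2. Recycle the existing cache file from a command prompt (renaming preferred over deleting):

ren "C:\Program Files\avs\var\p_cache.dat" p_cache.dat.old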

After the first backup, which is expected to take longer than usual (it rebuilds the cache file), the issue should be resolved.

  • The demand-paging cache is not recommended in this scenario: since the backups are directed to GSAN storage, the monolithic paging cache was used.
  • Demand-paging was designed to benefit backups being sent to Data Domain storage.


Re: Can we change the control port settings on isilon node?

Sorry, Control Port?

Isilon doesn’t have management ports or management interfaces; all management is done in-band. Because you’re likely talking about doing NDMP-based backups to a Data Domain, I assume you mean the network interfaces for your 3-way NDMP. So all that’s necessary to accomplish what you’re describing is to:

#1 Ensure that your backup application (NetBackup, NetWorker, etc.) is using a SmartConnect zone name to talk to the Isilon cluster.

#2 Ensure that there are enough unused IP addresses for the new interfaces in the static SmartConnect zone on the Isilon cluster.

#3 Add the network interfaces for the new nodes to that SmartConnect zone/pool (assuming they are cabled and the switch ports are configured correctly). See the sketch below.
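On OneFS 8.x this can be done from the CLI. A sketch only: the groupnet/subnet/pool path, node numbers, interface names, and IP range are all placeholders, so confirm the exact pool path with isi network pools list first.

# Step #2: add spare IP addresses to the static SmartConnect pool
isi network pools modify groupnet0.subnet0.pool0 --add-ranges=10.1.1.50-10.1.1.59

# Step #3: add the new nodes' interfaces to the same pool (format is <lnn>:<interface>)
isi network pools modify groupnet0.subnet0.pool0 --add-ifaces=5:10gige-1,6:10gige-1

# Verify the pool contents
isi network pools view groupnet0.subnet0.pool0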

That’s it. I hope that helps!

~Chris


Re: Unable to obtain backup status from vProxy after 5 attempts

Hello guys,

Some backups are failing with the following error :

164541 10/01/2017 03:37:45 AM nsrvproxy_save NSR error "vmname": Unable to obtain backup status from vProxy after 5 attempts, failing the backup: Received an HTTP code: 404, libCURL message: "Operation timed out after 1800618 milliseconds with 0 out of -1 bytes received", vProxy message: "Error received from vProxy = -404: Resource not found: c92f57e4-8c25-4cc4-a081-562b8c39e4dd", url: "https://vproxy:9090/api/v1/BackupVmSessions/c92f57e4-8c25-4cc4-a081-562b8c39e4dd", body: " ".

156203 10/01/2017 03:37:51 AM nsrvproxy_save NSR warning "vmname": Unable to download vProxy Session log into file '/nsr/logs/policy/WF/Policy/358869-vmname-2017-10-1-3-37-45.log': Possible wrong vProxy hostname or port, resource path not found. Received an HTTP code: 404, libCURL message: "Operation timed out after 1800618 milliseconds with 0 out of -1 bytes received", url: "https://vproxy:9090/api/v1/BackupVmSessions/c92f57e4-8c25-4cc4-a081-562b8c39e4dd/log".

164546 10/01/2017 03:37:51 AM nsrvproxy_save NSR error vmname: Backup failed.

146000 10/01/2017 03:37:51 AM nsrvproxy_save SYSTEM critical The size of proxied data written has not been set.

142169 10/01/2017 03:37:51 AM nsrvproxy_save NSR warning Save-set ID '399523354' (client 'vcenter': save-set 'vm:5034b93a-c988-b3ef-a823-8a650f5a5421:vcenter') is aborted.

0 10/01/2017 03:37:51 AM nsrmmd NSR error 10/01/17 03:37:51 ddirasm: save set vm:5034b93a-c988-b3ef-a823-8a650f5a5421:vcenter for client vcenter was aborted and removed from volume

It looks like a timeout issue, but the vProxy is not overloaded and is handling other backups successfully.

In this config, I have:

3 vProxies for around 150 VMs

Max 30 VMs per group

Parallelism set to 15

Timeout set to 60

1-hour delay between each VM group

Any idea what could cause this?

thanks

Julien


Unisphere for VMAX 8.4.0: Procedure to move Unisphere and Solutions Enabler to a different server.

Article Number: 501142 Article Version: 3 Article Type: How To



Unisphere for VMAX 8.4.0

Customer wishes to decommission an existing Unisphere/SE management server and move to a new server.

Ensure that the versions of SE and Unisphere you are running are the same on both servers.

Solutions Enabler side:

Take a backup/output of the following SE files and command outputs:

  1. > stordaemon list (output)
  2. > symcfg list (output)
  3. > syminq (output)
  4. > powermt display dev=all (output, if using PP)
  5. Copy /var/symapi/config/ (full directory)
  6. Copy /var/symapi/db/symapi_db.bin (file)
  7. Export any dgs/cgs you might have:

> symdg exportall -f <dgexportfile>

> symcg exportall -f <cgexportfilename>

  8. If using GNS (check the stordaemon list output to see if storgnsd is enabled), it is recommended to take a backup of that database as well:

> /usr/symcli/daemons>./storgnsd backup -sid xxxxxxxxxxx (leading zeros and 8 char SID)
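Taken together, the SE-side capture above can be scripted. A sketch assuming a Linux host with default SYMAPI paths, the SYMCLI binaries in the PATH, and /tmp/se_backup as a staging directory (all placeholders):

mkdir -p /tmp/se_backup
stordaemon list                 > /tmp/se_backup/stordaemon_list.txt
symcfg list                     > /tmp/se_backup/symcfg_list.txt
syminq                          > /tmp/se_backup/syminq.txt
powermt display dev=all         > /tmp/se_backup/powermt.txt    # only if PowerPath is in use
cp -r /var/symapi/config        /tmp/se_backup/config
cp /var/symapi/db/symapi_db.bin /tmp/se_backup/
symdg exportall -f /tmp/se_backup/dgexportfile
symcg exportall -f /tmp/se_backup/cgexportfilename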


Unisphere side:

Back up all your performance databases in Unisphere via performance/ settings/ databases.

Hover over the performance icon/ export settings. Enter a password and confirm.

Install SE and Unisphere on the new host.

Copy the backed-up /var/symapi/db/symapi_db.bin file (and the /var/symapi/config/ directory captured earlier) to the new host.

symdg importall -f <dgexportfile>

symcg importall -f <cgexportfilename>

The rest of the outputs above are just a precaution and can be ignored.

Restore your performance databases in Unisphere via performance/ settings/ databases.

Hover over the performance icon/ import settings.


Unisphere for VMAX: Backups fail with Compress Data is Corrupt errors

Article Number: 501444 Article Version: 3 Article Type: Break Fix



Unisphere for VMAX 8.3,Unisphere for VMAX

Cannot perform Unisphere for VMAX backup for array.

The disk space was low on the backup partition.

None

Clear some of the older backup files so that there is room on the partition.
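A sketch of the cleanup on a Linux host; <install_path>, <SID>, and <OLD_DATE> are placeholders matching the log entries below, and you should confirm a backup is no longer needed before deleting it:

df -h <install_path>/SMAS/backup                         # confirm the partition is low on space
ls -lt <install_path>/SMAS/backup/SPA                    # list backup directories, newest first
rm -rf <install_path>/SMAS/backup/SPA/<SID>_<OLD_DATE>   # remove only backups no longer needed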

The smas.log has the following entry:

<date> <time> ERROR [stderr] (EJB default - 5) psql:<install_path>/SMAS/backup/SPA/<SID>_<DATE>/<SID>_historical_backup.sql:133: ERROR: compressed data is corrupt

The postgresql.log file has the following entries:

<date> <time> GMT:::1(61658):spa@spa:[11792]:ERROR: compressed data is corrupt
<date> <time> GMT:::1(61658):spa@spa:[11792]:STATEMENT: COPY( SELECT <arguments> FROM dwf_storagegroup ) TO '<install_path>\SMAS\backup\SPA\<SID>_<DATE>\<SID>_dwf_storagegroup.sql' DELIMITER ',';


Unisphere for VMAX: After upgrading to Unisphere for VMAX 8.4, unable to restore the performance DB

Article Number: 500595 Article Version: 3 Article Type: Break Fix



Unisphere for VMAX 8.3,Unisphere for VMAX 8.4

After upgrading to Unisphere for VMAX 8.4, unable to restore the performance DB

Backups of the performance DBs were successful prior to upgrading.

SMAS log:

SpaHibernateUtil:filterMessage – SPA system time updated for array 000xxxxx category dbtaskstatus status DBTASK_BACKUP_SUCCESS

SMAS log:

2017-05-24 16:28:25,053 WARN [em.spa.SPA] (default task-158) SpaHibernateUtil:filterMessage – SPA system time updated for array 000xxxxxx category dbtaskstatus status DBTASK_RESTORE_FAILED

2017-05-24 16:28:25,056 WARN [em.spa.SPA] (default task-158) SpaHibernateUtil:filterMessage – SPA system time updated for array 000xxxxxx category dbstatus status DBSTATUS_INVALID

2017-05-24 16:28:25,058 ERROR [em.spa.SPM] (default task-158) SPADBManager:SPA Error: restoreDatabase – Exception restoring the database spadb6 for array 000xxxxxxx com.emc.em.spa.database.SpaDatabaseException: Unable to restore the database

In the Settings > Databases screen “Load Status” shows “Last Restore Failed”

GUI: [screenshot of the Settings > Databases screen showing "Last Restore Failed"]

Upgrade process

In one instance, the database showing “invalid” was deleted (database only) and the array re-registered for performance. Historical performance data is gone at that point and is only available via the backup file. The performance DB file was then restored successfully, indicating there were no issues with the backup file itself.

To ensure there is a good backup DB copy, the recommended approach is to provide support with the .dat backup file(s) associated with the Invalid state in the GUI (per SID) and have them examined. An EMC Grab/report is advised, as well as a screenshot.


Re: Problems with EMC Networker 9.2 and Hyper-V

Hello,

I am running EMC NetWorker 9.2 and am constantly having trouble running Hyper-V backups.

My topology is as follows:

– storage node FC attached to DD, using DD Boost

– Hyper-V standalone cluster that I wish to back up to the SN above

Although the DD is accessible only over FC, all other standalone Hyper-V servers have Client Direct and block based backups enabled.

When I try enabling Client Direct and block based backups for my new Hyper-V server, I get the error below:

Unable to perform backup for save-set ID ” with DIRECT_ACCESS=Yes, check device and pool configuration to enable Client direct



Filesystem backups run just fine with Client Direct disabled, but I am not able to get Hyper-V backups to complete successfully with Client Direct either enabled or disabled.

I am also certain that the policy is correctly configured and that there is end to end connectivity between client, SN and server.

Please advise.

Thanks,

Bogdan


Data Protection Advisor 6.3 : DPA All Jobs report does not match HP DP Session report

Article Number: 502367 Article Version: 3 Article Type: Break Fix



Data Protection Advisor,Data Protection Advisor 6.4,Data Protection Advisor 6.3,Data Protection Advisor 6.2,Data Protection Advisor 6.1,Data Protection Advisor 6.0

When running the HP DataProtector List of Sessions report, the status of a session will appear as Completed/Failures:

Backup Backup_Group_Name Completed/Failures incr

Within the Data Protection Advisor (DPA) All Jobs report the backups attached to this Session appear as successful:

Backup_Server Media_Server Backup_Group_Name Client Filesystem Domain_Name Backup_Set Session success

Therefore it appears that DPA reports do not match the output of HP DataProtector reports.

The DPA All Jobs report is not designed to match the output of the HP DataProtector List of Sessions report.

DPA is working as designed: the All Jobs report provides information on the status of backup jobs, not sessions. Within HP DataProtector, the backup jobs run for a session do not inherit the status from the session but have their own separate status.

For HP DataProtector, DPA collects several details, but for the purposes of this article we are interested in the following three:

  • Sessions
  • Backup Jobs
  • Backup Errors

Sessions in HP DataProtector are similar to groups in other backup applications. Given this, DPA collects information on sessions, and reports within DPA such as Backup Group Status return these details.

DPA also collects information on backup jobs from HP DataProtector. Backup jobs have their own status (success or failed), separate from the session status within HP DataProtector. Therefore the DPA All Jobs report returns information on the backup job status and not the session status.

Lastly, DPA collects information on backup errors from HP DataProtector as well. This information is seen when running DPA reports such as Backup Job Error Details.


Avamar Plug-in for Oracle: Oracle Avamar rman CLI archivelog backup does not run concurrently with database backups when a taskfile is in use

Article Number: 500862 Article Version: 3 Article Type: Break Fix



Avamar Plug-in for Oracle

Oracle Avamar rman CLI archivelog backup does not run concurrently with database backups when a taskfile is in use.

The RMAN script that performs the database backup uses the same taskfile as the archivelog backup script.

RMAN script that performs database backups:

run {

allocate channel t1 type sbt PARMS 'SBT_LIBRARY=/usr/local/avamar/lib/libobk_avamar64.so' SEND '"--flagfile=/home/oracle/flags.txt" "--bindir=/usr/local/avamar/bin" "--cacheprefix=t1" "--vardir=/var/avamar" "--logfile=/var/avamar/t1.log" "--taskfile=/home/oracle/taskflag.txt"' trace 5;

RMAN script that performs archivelog backups:

run {

allocate channel t1 type sbt PARMS 'SBT_LIBRARY=/usr/local/avamar/lib/libobk_avamar64.so' SEND '"--flagfile=/home/oracle/flags.txt" "--bindir=/usr/local/avamar/bin" "--cacheprefix=t1arch" "--vardir=/var/avamar" "--logfile=/var/avamar/t1arch.log" "--taskfile=/home/oracle/taskflag.txt"' trace 5;

Note: the taskfile is a file that contains information about the number of channels and whether the operation is a backup or a restore.

Concurrent operations should use different taskfile names.

  • Create a different file name for the taskfile used for archivelog backups.

Example:


RMAN script that performs database backups:

run {

allocate channel t1 type sbt PARMS 'SBT_LIBRARY=/usr/local/avamar/lib/libobk_avamar64.so' SEND '"--flagfile=/home/oracle/flags.txt" "--bindir=/usr/local/avamar/bin" "--cacheprefix=t1" "--vardir=/var/avamar" "--logfile=/var/avamar/t1.log" "--taskfile=/home/oracle/taskflag.txt"' trace 5;


RMAN script that performs archivelog backups:

allocate channel t1 type sbt PARMS 'SBT_LIBRARY=/usr/local/avamar/lib/libobk_avamar64.so' SEND '"--flagfile=/home/oracle/flags.txt" "--bindir=/usr/local/avamar/bin" "--cacheprefix=t1arch" "--vardir=/var/avamar" "--logfile=/var/avamar/t1arch.log" "--taskfile=/home/oracle/taskarch.txt"' trace 5;
