SDX backup fails with error "Backup Operation failed".

When an SDX appliance is backed up, the operation fails with the error "Backup Operation failed".

The following logs were observed:

mps_service.log:

Monday, 9 Jul 18 07:42:14.792 +0000 [Debug] [TCPServerConnection (default[#773])] Request from 10.2.25.38:60001 - POST - SSLCertPresent=false - type=https - /nitro/v1/config/backup

Monday, 9 Jul 18 07:42:14.792 +0000 [Debug] [TCPServerConnection (default[#773])] Json Web Content: {"backup":{"backup_password":"***********"}}

Monday, 9 Jul 18 07:42:14.793 +0000 [Debug] [TCPServerConnection (default[#773])] [GUI] Incoming request in ServiceProcessor for type "backup"

Monday, 9 Jul 18 07:42:14.857 +0000 [Debug] [TCPServerConnection (default[#773])] Sending Message to CONFIG /tmp/mps/ipc_sockets/mps_config_sock:{ "errorcode": 0, "message": "Done", "is_user_part_of_default_group": false, "skip_auth_scope": true, "message_id": -1, "resrc_driven": true, "login_session_id": "##7E875767D046786C75A8842FC9719788233249813DCCF55880198BF4CB52", "username": "xxxx", "tenant_name": "Owner", "mps_ip_address": "xxxxx", "client_ip_address": "xxxxxx", "client_protocol": "https", "client_port": 60001, "mpsSessionId": "", "source": "service", "target": "CONFIG", "version": "v1", "messageType": "config", "client_type": "GUI", "resourceType": "backup", "orignal_resourceType": "backup", "resourceName": "", "operation": "add", "asynchronous": false, "params": { "pageno": 0, "pagesize": 0, "detailview": true, "compression": false, "count": false, "total_count": 0, "action": "", "type": "", "onerror": "EXIT", "is_db_driven": false, "order_by": "", "asc": false, "duration": "", "duration_summary": 0, "report_start_time": "0", "report_end_time": "0" }, "additionalInfo": { "Referer": "https://cs-2-cgprod2-mgmt/admin_ui/svm/html/configuration.html", "cert_present": "false", "rand_key": "f84d69cc3a32b24", "request_source": "NITRO_WEB_APPLICATION", "sessionId": "##7E875767D046786C75A8842FC9719788233249813DCCF55880198BF4CB52" }, "backup": [ { "backup_password": "***********" } ] }

Monday, 9 Jul 18 07:43:04.909 +0000 [Debug] [TCPServerConnection (default[#773])] Received Message from CONFIG:{ "errorcode": 30030, "message": "Backup Operation failed", "severity": "ERROR", "resourceName": "backup"

mps_config.log:

Monday, 9 Jul 18 00:30:51.477 +0000 [Information] [default[#1]] Backup:ns

{ "errorcode": 0, "message": "Done", "is_user_part_of_default_group": false, "skip_auth_scope": true, "message_id": -1, "resrc_driven": true, "login_session_id": "##7E875767D046786C75A8842FC9719788233249813DCCF55880198BF4CB52", "username": "dever", "tenant_name": "Owner", "mps_ip_address": "xxxxxx", "client_ip_address": "xxxxx", "client_protocol": "https", "client_port": 60001, "mpsSessionId": "", "source": "service", "target": "CONFIG", "version": "v1", "messageType": "config", "client_type": "GUI", "resourceType": "backup", "orignal_resourceType": "backup", "resourceName": "", "operation": "add", "asynchronous": false, "params": { "pageno": 0, "pagesize": 0, "detailview": true, "compression": false, "count": false, "total_count": 0, "action": "", "type": "", "onerror": "EXIT", "is_db_driven": false, "order_by": "", "asc": false, "duration": "", "duration_summary": 0, "report_start_time": "0", "report_end_time": "0" }, "additionalInfo": { "Referer": "https://cs-2-cgprod2-mgmt/admin_ui/svm/html/configuration.html", "cert_present": "false", "rand_key": "f84d69cc3a32b24", "request_source": "NITRO_WEB_APPLICATION", "sessionId": "##7E875767D046786C75A8842FC9719788233249813DCCF55880198BF4CB52" }, "backup": [ { "backup_password": "***********" } ] }

Monday, 9 Jul 18 07:42:14.858 +0000 [Debug] [Main] Incoming request in ConfigProcessor for type "backup"

Monday, 9 Jul 18 07:42:14.858 +0000 [Debug] [Main] File to be parsed: /var/mps/policy/mps_policy_backup.xml

Monday, 9 Jul 18 07:42:14.858 +0000 [Debug] [Main] Policy Name: mps_policy_backup, Policy Type: Backup

Monday, 9 Jul 18 07:42:14.859 +0000 [Debug] [Main] exclude are: device_backup

Monday, 9 Jul 18 07:42:14.859 +0000 [Debug] [Main] Type of policy rule is: backup

Monday, 9 Jul 18 07:42:14.859 +0000 [Debug] [Main] Backups to keep: 3

Monday, 9 Jul 18 07:42:14.860 +0000 [Debug] [Main] Total Backups to keep: 3

Monday, 9 Jul 18 07:42:14.907 +0000 [Debug] [Main] Starting to take the backup of mpsdb

Monday, 9 Jul 18 07:42:14.913 +0000 [Debug] [Main] Dumping table: backup_external_storage

Monday, 9 Jul 18 07:42:14.913 +0000 [Debug] [Main] Dumping table: backup_policy

Monday, 9 Jul 18 07:42:15.441 +0000 [Debug] [Main] Backup complete. pg_dump:

Monday, 9 Jul 18 07:42:15.450 +0000 [Debug] [Main] Copy file:ns_ssl_keys: TO :/var/mps/backup/Backup_10.1.83.32_11.1_58.13_09Jul2018_07_42_14/backup//var/mps/tenants/root/

Monday, 9 Jul 18 07:42:15.456 +0000 [Debug] [Main] Copy file:ns_ssl_certs: TO :/var/mps/backup/Backup_10.1.83.32_11.1_58.13_09Jul2018_07_42_14/backup//var/mps/tenants/root/

Monday, 9 Jul 18 07:42:15.462 +0000 [Debug] [Main] Copy file:ns_ssl_csrs: TO :/var/mps/backup/Backup_10.1.83.32_11.1_58.13_09Jul2018_07_42_14/backup//var/mps/tenants/root/

Monday, 9 Jul 18 07:42:15.463 +0000 [Debug] [Main] Copy file:ns_diff_reports: TO :/var/mps/backup/Backup_10.1.83.32_11.1_58.13_09Jul2018_07_42_14/backup//var/mps/tenants/root/

Monday, 9 Jul 18 07:42:15.469 +0000 [Debug] [Main] Match failed:device_backup

Monday, 9 Jul 18 07:43:04.789 +0000 [Information] [Main] Backup:ns

Monday, 9 Jul 18 07:43:04.909 +0000 [Error] [Main] Exception in mps_policy_backup

{ "errorcode": 30030, "message": "Backup Operation failed", "severity": "ERROR", "resourceName": "backup" }

Monday, 9 Jul 18 08:04:25.773 +0000 [Debug] [Config[#8]] Table copied: backup_external_storage

Monday, 9 Jul 18 08:04:25.774 +0000 [Debug] [Config[#8]] Table copied: backup_policy

The /var utilization was also normal:

bash-2.05b# df -kh

Filesystem     Size    Used   Avail  Capacity  Mounted on
/dev/md0       542M    375M    156M       71%  /
devfs          1.0k    1.0k      0B      100%  /dev
procfs         4.0k    4.0k      0B      100%  /proc
/dev/ad0s1a    1.4G    624M    707M       47%  /flash
/dev/ad0s1e    110G     12G     89G       12%  /var
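
Because the error itself is generic, a quick manual check of the backup destination is a reasonable next step. The sketch below is only illustrative; the /var/mps/backup path and the retention of 3 backups are taken from the logs above, and the exact output will differ per appliance:

# Confirm free space and free inodes on /var, where the SDX backups are written
df -h /var
df -i /var

# List the existing backups; the policy above keeps only 3
ls -lt /var/mps/backup/
du -sh /var/mps/backup/*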


NVP-vProxy: one or more VMs in a policy fail to back up with the message: no matching devices for save of client `vCenter_Server_Name'; check storage nodes, devices or pool

Article Number: 502312 Article Version: 3 Article Type: Break Fix



NetWorker 9.1, NetWorker 9.2

The NetWorker VMware Protection integration is configured with the vProxy Appliance. One or more virtual machine backups are failing with the error noted below:

no matching devices for save of client `[VCENTER_SERVER]'; check storage nodes, devices or pool

The vCenter Server has a NetWorker Client resource configured in the NetWorker Management Console (NMC) Protection tab. The vCenter Server NetWorker Client “Storage Nodes” attribute, under the “Globals (2 of 2)” tab, has the default blank settings.

The specific cause is unknown. Some of the virtual machines require the vCenter Server NetWorker Client “Storage Nodes” attribute to be configured with the appropriate Storage nodes.

To work around the backup failures, the vCenter Server NetWorker Client “Storage Nodes” attribute should be updated:

  1. Open the NetWorker Management Console.
  2. Select the Protection Tab.
  3. In the left-hand tree view, select "Clients".
  4. Select the vCenter server, right-click, and select "Modify Client Properties".
  5. Go to the "Globals (2 of 2)" tab.
  6. Add the names of the storage nodes that host the target backup devices for the vProxy backups, one per line, and include nsrserverhost (the default for the NetWorker server) as the last entry:
For Example:
NW_SNode1.yourdomain.com

NW_SNode2.yourdomain.com

nsrserverhost

This modification explicitly makes the required devices, on the storage nodes that the vProxy backups use, available to the vCenter Server client.
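
For administrators who prefer the command line, the same attribute can typically be set with nsradmin as well; the following is only a sketch, with the server and client names as placeholders:

nsradmin -s networker_server
nsradmin> . type: NSR client; name: vcenter.yourdomain.com    # select the vCenter client resource
nsradmin> print                                               # review the current attributes
nsradmin> update storage nodes: NW_SNode1.yourdomain.com, NW_SNode2.yourdomain.com, nsrserverhost
nsradmin> quit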


NetWorker 9.0: When the retention policy is set to greater than 21 years (making clretent go past the year 2038), the policy fails with "mm/dd/yyyy hh:mm:ss Action backup traditional 'backup' has initialized as 'backup action job' with job id "

Article Number: 499678 Article Version: 3 Article Type: Break Fix



NetWorker 9.0

Retention is set to 50 years for the Action (second page, “Backup options”).

The client properties have a retention policy that is also set to 50 years.

The policy failure is near-instantaneous (within 8 seconds).

There are no other details in Show Messages.

After the upgrade from NetWorker 8.2.3 to NetWorker 9.0.1.8, the policies with these 50-year settings fail with:

"mm/dd/yyyy hh:mm:ss Action backup traditional 'backup' has initialized as 'backup action job' with job id <Number>"

The NetWorker nsrjobd process was failing to interpret retention times that fall beyond the 32-bit Unix time limit in January 2038.
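
For context, a signed 32-bit time value tops out at 2147483647 seconds after the Unix epoch, which GNU date can confirm (on BSD systems use date -r instead of -d):

date -u -d @2147483647
# Tue Jan 19 03:14:07 UTC 2038
# A 50-year retention set in 2018 expires in 2068, well past this limit.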

Reconfigured the policy's retention period so that it expires before 2038.

Reconfigured any clients whose retention policy was defined to extend beyond 2038.


CBT + DDBoost + BBBackups not equal to “almost zero”

Hello,

Just wondering about the space used on a Data Domain when you run several backups of the same server in a short period of time.

For a while now (since 7 June, to be exact) we have seen an increase in the space used on our DD that is not related to data growth.

DD space used.png

I discovered that on the dates where the capacity increased, there had been several manual launches of backups.

What is "bizarre" is that the capacity used on the DD seems to increase in proportion to the number of backups… (which would be "acceptable" in the case of a traditional file-level backup).

Capacity report V2.png

If you look at the number of backup versions we have for the server "Server one" (as an example), you can see that on 22 June we had 5 backups of this server (don't ask me why 5…), and every backup has a Data Domain copy (which is confirmed by column AE of the Excel report).

Server one.png

I was expecting that with CBT, dedup and all the other optimizations, only a few changed blocks would be added, not the full disk.

(We are using NetWorker 9.2.1.4.)

Questions:

1) Is CBT an option that should be set up on every VM (I found an article on that)? We are running vSphere 6.5 with vProxy. (A quick way to check this per VM is sketched after the questions.)

2) Even if CBT is not activated, shouldn't a block-based backup do that for you?

3) Shouldn't an incremental backup take only the modified blocks?

4) And what is the role of DDBoost in this "party"?
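
(Regarding question 1: one simple way to verify whether CBT is already enabled for a given VM is to look at its .vmx settings. This is only a sketch; the datastore and VM names are placeholders.)

# Run from an ESXi shell with access to the datastore
grep -i ctkEnabled "/vmfs/volumes/<datastore>/<vm>/<vm>.vmx"
# Expected when CBT is on:
#   ctkEnabled = "TRUE"
#   scsi0:0.ctkEnabled = "TRUE"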

Any explanation or answers are welcome, because EMC doesn't want to give me a clear explanation of how things really work here…


Re: NDMP-Backup Error >> Failed to propagate handle; TimeOut after inactive

Hello Community,

After switching to a new backup server and a platform change from Linux to Windows, we get errors in certain processes when backing up NDMP file systems:

suppressed 138 bytes of output.

.144324:nsrndmp_save: Adding attribute *policy workflow name = eNAS-VDM-016

.144324:nsrndmp_save: Adding attribute *policy action name = backup

.06/18/18 07:52:22.821430 NDMP Service Debug: The process id for NDMP service is 0x5a670b0

42909:nsrndmp_save: Performing DAR Backup..

83320:nsrndmp_save: Performing incremental backup, BASE_DATE = 44478769945

42794:nsrndmp_save: Performing backup to Non-NDMP type of device

174908:nsrdsa_save: Saving the backup data in the pool 'dd3 enas'.

175019:nsrdsa_save: Received the media management binding information on the host 'bkpmgmnt01.sis.net'.

174910:nsrdsa_save: Connected to the nsrmmd process on the host 'bkpmgmnt01.sis.net'.

175295:nsrdsa_save: Successfully connected to the Data Domain device.

129292:nsrdsa_save: Successfully established Client direct save session for save-set ID '2854701209' (eNAS1-DM-01:/root_vdm_9/VDM-16_fs2) with Data Domain volume 'enas_001'.

42658:nsrdsa_save: DSA savetime = 1529301142

85183:nsrndmp_save: DSA is listening for an NDMP data connection on: 10.109.130.100, port = 8912

42952:nsrndmp_save: eNAS1-DM-01:/root_vdm_9/VDM-16_fs2 NDMP save running on 'bkpmgmnt01.sis.net'

84118:nsrndmp_save: Failed to propagate handle 0000000000000000 to C:\Program Files\EMC NetWorker\nsrbin\nsrndmp_2fh.exe child process: Das Handle ist ungültig. [The handle is invalid.] (Win32 error 0x6)

84118:nsrndmp_save: Failed to propagate handle 0000000000000000 to C:\Program Files\EMC NetWorker\nsrbin\nsrndmp_2fh.exe child process: Das Handle ist ungültig. [The handle is invalid.] (Win32 error 0x6)

accept connection: accepted a connection

42953:nsrdsa_save: Performing Non-Immediate save

42923:nsrndmp_save: NDMP Service Error: Medium error

42923:nsrndmp_save: NDMP Service Warning: Write failed on archive volume 1

42617:nsrndmp_save: NDMP Service Log: server_archive: emctar vol 1, 93 files, 0 bytes read, 327680 bytes written

42738:nsrndmp_save: Data server halted: Error during the backup.

7136:nsrndmp_save: (interrupted), exiting

— Job Indications —

Termination request was sent to job 576172 as requested; Reason given: Inactive

eNAS1-DM-01:/root_vdm_9/VDM-16_fs2: retried 1 times.

eNAS1-DM-01:/root_vdm_9/VDM-16_fs2 aborted, inactivity timeout has been reached.



Strangely, these messages do not occur on all file systems, but rather randomly.

Does anyone recognize this error message and know where the problem lies? The evaluation of the Celerra logs has so far revealed nothing.
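
(As a first sanity check, and purely as an illustration, it can help to confirm that the NDMP control port on the Data Mover is still reachable; 10000 is the standard NDMP port, and the hostname below is taken from the log. Run this from any Unix/Linux host with nc, or use an equivalent port test from the Windows backup server.)

nc -vz eNAS1-DM-01 10000    # expect "succeeded"/"open" if the NDMP service is reachable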

Best regards,

Cykes



Re: Can I restore a backup from a physical cartridge that doesn't have a barcode sticker and no information about the backed-up data

To recover data that is on this tape, the tape volume and its contents must first be cataloged into the NetWorker Media database.

  1. Review the output file out.txt and find the client whose data you want to recover.
  2. Ensure that the client name resolves to an IP address.
  3. At this point, I strongly urge you to make a bootstrap backup before proceeding further.
  4. In NMC, create the NetWorker client, using the same name as reported in the output file.
  5. Now populate the NetWorker media database by following the same steps 1-7 from above.
  6. To scan the entire tape, use: scanner -m (drive) >out2.txt 2>&1
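
If the tape is not already sitting in a drive, load it first. A rough sketch, with the slot and device path as placeholders (inquire can help identify the device path):

inquire                                  # list SCSI devices and their device paths
nsrjb -lnv -S <slot> -f <device_path>    # load the tape into the drive without mounting it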

After the scanner command completes:

  1. Eject the tape using: nsrjb -uv -f (drive)
  2. Go back into the drive properties and deselect the Read-Only flag.
  3. Use the following to verify that NetWorker has information on the volume:
    1. mminfo -avot (NetWorker volume name)

At this point, mminfo should list all the backups on that tape volume. However, they will be in recoverable mode only.

Depending on what you want to recover, you can either perform a save set recovery (for file system backups only), or you may still have to make the save set browsable by using the scanner -i command.

For example, if you want to catalog save set 123456789 so that you can then browse and select the files you want to recover, or if this backup was done using one of the NetWorker modules, load the tape back into a tape drive and run:

mminfo -avot -q ssid=123456789 -r mediafile,mediarec

scanner -i -f (mediafile#) -r (mediarec#) -S 123456789 (drive)



If successful, the following will show that the save set is now browsable:



mminfo -avot -q ssid=123456789



And you can now proceed with the recovery.
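
As an illustration only (the server name, client name and ssid below are placeholders), the recovery itself can then be run either per save set or interactively:

recover -s <networker_server> -S 123456789        # save-set recovery
recover -s <networker_server> -c <client_name>    # interactive recovery: browse and mark files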


Re: Avamar managed file replication failing with error Replication failed: could not open connection to dest DDR

My issue was related to an Avamar backup image that was written to the DD but was corrupted. So when Avamar instructs the DD to replicate that backup image/file, it errors out with that code.

It is difficult to narrow down the exact backup image that is corrupt, but the approach is roughly this: comb through the logs to find which client's backups are being replicated at the time of failure, note the name, remove that client from the replication scope, and rerun the replication. If it fails again, repeat the procedure and keep a running list of clients that fail until the replication job succeeds. Then create a separate replication job containing just the failed clients, and narrow the date range of the replication scope until you find the backup image that is causing the issue. Likely, if multiple servers are affected, the failures will trace back to the same date/time. Once you have identified the Avamar backup images, you can delete them from Avamar. I'm not entirely sure whether that also removes the image from the DD, so if the backup image is sizable, you may want to engage EMC support to dig into the array and remove the file.


Re: Migrate Networker 8.2 on Solaris to RHEL

A pretty tough task – I doubt that you will receive a straight answer.

This is how I would proceed:

– On the target server create a fake hosts table resolving all your current NW client names.

– Copy the NW databases to the new system – to a different directory!! (see the copy sketch at the end of this post)

Forget the /nsr/index directory right now – it is nice to have but not mandatory.

– Copy the NW software to the target system

– Disconnect the network of the new host completely

– Rename the target server so that it will have the exact name as before

– Install the same NW version in the same directory as on the old system

– Copying the resource files will most likely not work for all of them – just think about the different device names

– Start NW and wait until it has started successfully.

It will take a few minutes.

It will take much longer if it cannot resolve the hostnames.

– Make sure to get the most important resources to work.

– Finalize the NW configuration

– Make sure you can run local backups and restores

Now you may restore/move the client file index directories

TEST – TEST – TEST !!!

– If your system runs fine, you may now try the upgrade to NW 9.x

TEST – TEST – TEST !!!

– Now you may shutdown the old server and

– Connect the new server to join the network

Quite a lot of things to do. Good luck.
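
For the "copy the NW databases" step above, a minimal sketch of what that copy might look like (the staging directory on the new host is an assumption; stop NetWorker on the source before copying):

rsync -a /nsr/res /nsr/mm newhost:/nsr_migrate/    # resource files and media database
rsync -a /nsr/index newhost:/nsr_migrate/          # client file indexes; optional at this stage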


Change default location of PCT captures


I would like to change where Personality Captures (PCT) are stored in my Symantec Management Console to a server that doesn't get backed up. PCTs are large and only needed for a couple of days. We are rolling out a large deployment and transferring data/settings from users' PCs to new machines. Can I change the save location to something other than "..\NSCap\bin\Deployment\Packages\PCT"?

