Re: Help on how to backup the media database and client indexes to a single tape for DR purposes

I’m running NetWorker v8.2.4 on Windows Server 2008 R2. Our external DR environment has the same physical infrastructure, the same OS version, and the same hostnames for the clients and the NetWorker server.

Does this DR site have the same networking setup?

If your production environment depends on DNS for name resolution, are the DR site’s DNS servers populated with the same host information as production?

If DNS is not set up at the DR site, does each DR NetWorker host have its local hosts file populated with the needed hostnames and IP addresses?
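For example, the hosts file on each DR host would contain entries like the following (the addresses and names here are placeholders for illustration):

```
# C:\Windows\System32\drivers\etc\hosts (Windows) or /etc/hosts (UNIX)
# Placeholder addresses and names -- substitute your own environment.
192.168.10.5    nwserver.example.com    nwserver
192.168.10.21   client01.example.com    client01
192.168.10.22   client02.example.com    client02
```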

Many people performing DR tests have seen the NetWorker server application start very slowly, or even appear hung, because hostname lookup was not set up at the DR site. The lookups happen because when nsrd starts it reads the nsrdb database, and afterwards nsrmmdbd opens and reads the media database. In both cases, whenever a hostname (or FQDN) is found, a name lookup is performed for its IP address; if the lookup fails, it is retried a few times before finally giving up.

This behavior can also happen in production sites if the name lookup service is unavailable when NetWorker server is started.

The more hosts that are not resolvable, the longer the NetWorker server startup will take.

During previous DR tests in this external environment, we used the scanner command to scan the tapes and rebuild the media database and indexes, but this takes two days or more to complete.



In a pure DR scenario, after installing NetWorker on the DR server, you need to locate the volume that holds the most recent NetWorker bootstrap backup; if you also know the ssid of that bootstrap, even better. Then all you have to do is create a NetWorker device that can read that volume, load the volume into that device, and (prior to NetWorker 9) run mmrecov. After that, stop NetWorker, make an online backup copy of res.R and mm, replace res with res.R, then restart NetWorker to complete the recovery. Note: after the recovery is complete, you should also scan that same bootstrap tape.

If you only know which volume has the most recent bootstrap, load the tape and use "scanner -m" to catalog it. "mminfo -avot (volume)" will then tell you what is on the tape, and you can find the bootstrap information in that output.

If you do not know which volume has the most recent bootstrap, then unfortunately you will need to scan volumes one at a time; you can stop as soon as the bootstrap is located.

After the bootstrap is recovered, you can then recover some or all of the client file indexes using "nsrck -L7".
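Put together, a minimal recovery sequence would look like this sketch (pre-NetWorker 9 commands; the device and volume names are placeholders, and your environment will differ):

```
scanner -m \\.\Tape0          # catalog the loaded tape (if the bootstrap volume is unknown)
mminfo -avot BOOTSTRAP.001    # list the tape contents and note the bootstrap ssid
mmrecov                       # recover the bootstrap (media database and res.R)
                              # then stop NetWorker, copy res.R and mm aside,
                              # replace res with res.R, and restart NetWorker
nsrck -L7                     # recover the client file indexes
```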

The EMC NetWorker Disaster Recovery Guide suggests I should back up the media database and client indexes to a single tape…

Ideally that is convenient for DR purposes: all you need to recall for recovery is a minimal set of tapes. This can be achieved by creating a pool to hold the index and bootstrap backups. To create a set of volumes that will contain the bootstrap and indexes:

– label a new tape with the pool you will use to hold the bootstrap

– load that new tape so that it will be used

– create a NetWorker group containing all the clients whose client file indexes you want to back up, or just the clients used for the DR testing

– back up the bootstrap and indexes with: savegrp -O -l full -b (pool) (group)

"savegrp -O" backs up the client file index databases first, then finally the bootstrap.
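As a command sketch (the pool, device, volume, and group names below are placeholders, and the nsrjb step applies only if a jukebox is used):

```
nsrmm -l -b DRBootstrap -f \\.\Tape0 DRBOOT.001   # label a new tape into the bootstrap pool
nsrjb -l DRBOOT.001                               # load it so this tape will be used
savegrp -O -l full -b DRBootstrap DRGroup         # back up the indexes, then the bootstrap
mminfo -B                                         # note the bootstrap ssid for the DR kit
```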

In a pure DR scenario, you would need to rebuild your NetWorker server, and possibly your networking infrastructure. For your DR testing, I would recommend creating a text file containing all the hostnames and IP addresses in your NetWorker environment. Save that file to a memory stick and bring it along to the DR site.
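One way to build that list is to extract it from a hosts-style file; a minimal sketch (the file names and entries here are placeholders):

```shell
# Build a portable hostname/IP reference list for the DR kit from a
# hosts-style file. Entries below are placeholders for illustration.
cat > prod-hosts.txt <<'EOF'
192.168.10.5    nwserver.example.com    nwserver
192.168.10.21   client01.example.com    client01
# comments are skipped
192.168.10.22   client02.example.com    client02
EOF
# Keep only "IP FQDN" pairs, sorted by hostname.
awk '!/^#/ && NF >= 2 { print $1, $2 }' prod-hosts.txt | sort -k2 > dr-host-list.txt
cat dr-host-list.txt
```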


Re: Does Avamar overwrite previous backup with new data?

There are two basic questions:

1. Is the new client handled in Avamar as the original, i.e. does it have the same client ID?

2. How long is the retention policy?

If the client has the same ID, you should be able to view the backups made from the original client; if not, you have to pick the correct client in the Management Console and restore the data from the old client to the new one.

If all backups have expired, it becomes a question of Avamar settings: by default Avamar should not recycle the last backup of any client, so at least one backup should remain. If all backups have been recycled, there is no way to restore the data, because garbage collection deletes chunks that have no link to a valid backup.

Regards

Lukas



Avamar Client : #SQL Backup Label number

Article Number: 503823 Article Version: 4 Article Type: How To



Avamar Client

This article explains how label numbers work for Avamar SQL backups.

1 – Database backups are created by backing up each individual database as a partial backup. "Partial" here means a piece: the backup of each database is part of an entire backup. A partial backup is just that – a piece of a backup that does not stand on its own.

As we can see below, each labelnum represents a partial backup.

<file internal="false" labelnum="1372" fullname="SQLSQLDB/db2/i-2.stream0" acnt="/clients/sql.moja.com" saveas="i-1.stream0" />

<file internal="false" labelnum="1371" fullname="SQLSQLDB/db2/f-0.stream0" acnt="/clients/sql.moja.com" />

<file internal="false" labelnum="1370" fullname="SQLSQLDB/db4/i-2.stream0" acnt="/clients/sql.moja.com" saveas="i-1.stream0" />

<file internal="false" labelnum="1369" fullname="SQLSQLDB/db3/i-2.stream0" acnt="/clients/sql.moja.com" saveas="i-1.stream0" />

<file internal="false" labelnum="1368" fullname="SQLSQLDB/db4/f-0.stream0" acnt="/clients/sql.moja.com" />

<file internal="false" labelnum="1367" fullname="SQLSQLDB/db3/f-0.stream0" acnt="/clients/sql.moja.com" />

2 – Later, a "snapview" is called to tie those partial backups together into a restorable set.

So in Avamar Administrator, the whole backup above appears as only one backup number.

The snapview process is how Avamar ties partial backups together into a restorable set. After the backup of each database is complete, a final avtar runs that collects the root hashes of the partial backups and ties them together. This process of tying the backups together is a snapview.

Streams:

Avtar considers each stream a partial backup; for this reason, there is one labelnum per stream.

In the example below, the configuration allows 5 parallel streams (max-parallel = 5). For larger databases all 5 streams are used, so we see 5 label numbers, because each stream is a partial backup.



<flag type="integer" pidnum="3006" value="5" name="max-parallel" />

<file internal="false" labelnum="259344" fullname="sqlinstraveleef-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259343" fullname="sqlinstraveleef-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259342" fullname="sqlinstraveleef-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259340" fullname="sqlinstraveleef-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259341" fullname="sqlinstraveleef-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259367" fullname="sqlinsadminsysf-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259368" fullname="sqlinsadminsysf-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259369" fullname="sqlinsadminsysf-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259365" fullname="sqlinsadminsysf-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259366" fullname="sqlinsadminsysf-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259362" fullname="sqlinsCoref-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259364" fullname="sqlinsCoref-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259363" fullname="sqlinsCoref-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259360" fullname="sqlinsCoref-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259361" fullname="sqlinsCoref-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259349" fullname="sqlinsLCS_SQLf-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259347" fullname="sqlinsLCS_SQLf-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259348" fullname="sqlinstLCS_SQLf-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259346" fullname="sqlinsLCS_SQLf-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />

<file internal="false" labelnum="259345" fullname="sqlinsLCS_SQLf-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />
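Since each labelnum line corresponds to one partial backup (one stream), counting those lines in a saved log excerpt gives the number of partials. A quick sketch, assuming a few of the lines above were saved to a hypothetical file avtar-excerpt.log:

```shell
# Count partial backups (one labelnum per stream) in a saved avtar log excerpt.
cat > avtar-excerpt.log <<'EOF'
<file internal="false" labelnum="259344" fullname="sqlinstraveleef-0.stream0" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />
<file internal="false" labelnum="259343" fullname="sqlinstraveleef-0.stream2" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />
<file internal="false" labelnum="259342" fullname="sqlinstraveleef-0.stream1" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />
<file internal="false" labelnum="259340" fullname="sqlinstraveleef-0.stream3" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />
<file internal="false" labelnum="259341" fullname="sqlinstraveleef-0.stream4" acnt="/SQLServers/Production/Cluster/sql.labemc.local" />
EOF
grep -c 'labelnum=' avtar-excerpt.log   # prints 5
```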



Re: Avamar backup retention question

I don’t set mine by yearly, monthly, weekly or daily –

I just do dailies,

but a yearly backup is the first backup of the year,

monthly (I think) is the first of the month,

weekly the first of the week,

daily – well, daily.

So you can say keep yearlies for 7 years (that is only 7 backups).

If you go to Backup and Restore and choose the Manage tab,

then retrieve for more than one year,

you can see the Retention Tags.

All of mine have D; some have DW, DM, or even DWMY.

Read the manual about retention settings – it explains it really well.

You might not have room for 6 months of daily backups,

so you might keep monthly for 6 months, weekly for 90 days, and daily for 60 days.
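The tag scheme described above can be illustrated with a toy sketch (this is not Avamar code; it assumes GNU date and a week starting on Monday):

```shell
# Toy illustration of D/W/M/Y retention tags: the first backup of the
# week/month/year carries the extra W/M/Y tag on top of the daily D.
tag_for_date() {
  d="$1"                                               # YYYY-MM-DD
  tags="D"
  [ "$(date -d "$d" +%u)" = "1" ] && tags="${tags}W"   # Monday: first of week
  [ "$(date -d "$d" +%d)" = "01" ] && tags="${tags}M"  # first of month
  [ "$(date -d "$d" +%j)" = "001" ] && tags="${tags}Y" # first of year
  echo "$tags"
}
tag_for_date 2024-01-01   # prints DWMY
tag_for_date 2024-06-15   # prints D
```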


Re: “Performance testing” guidelines for DDBEA backups of Oracle?

Not sure if that’s the best title, but here’s what I’m looking for:

Customer wants to run separate Oracle backup jobs using DDBEA over a period of time, and their key “ask” is that they get “true” results relative to how long it takes to run the backup without any “previous knowledge” of any other backups or backup data that may have run before.

So – what has to be done at the RMAN, Data Domain and possibly DDBEA levels for the customer to be able to run multiple Oracle backup jobs (that is, different DB sizes, using different numbers of channels, and things like that) but NOT see any “skew” due to any other backup job that may have run before?

One other “performance” question – is there any “formula” for determining how many channels to allocate in order to “optimize” an Oracle backup job? In my case, the customer has seen a backup run using 4 channels which “split” the overall database info into “9 pieces”, and he is wondering whether that means it would run better if he used 9 channels instead?

One other thing – if either of these is an "Oracle question" as opposed to a DDBEA question in any way, please comment accordingly so I know that is the case.

All comments/feedback appreciated – thanks.

Cal C.

