Re: Troubles with NDMP-DSA Backup NW9.1.1 & NetApp Filer over dedicated Interface

Hello,

NetWorker Server 9.1.1

Storage Nodes 9.1.1 with RemoteDevice DD for NDMP backups

We added storage nodes to our environment that have a direct 10 Gb Ethernet connection to the NetApp filers. One interface is connected to the production network and one interface is directly connected to the NetApp filer with the private network address 10.11.11.12.

NetApp Filer is configured with IP address 10.11.11.11.

We used the wizard to configure the NDMP-DSA Backup. Client Direct is disabled.

Backup command: nsrndmp_save -M -P <StorageNode> -T dump

Additional Information:

BUTYPE=dump

DIRECT=Y

HIST=Y

EXTRACT_ACL=y

UPDATE=Y

USE_TBB_IF_AVAILABLE=Y

With the above configuration, backups work and move data over the production network.

Our goal is to use the direct connection to optimize backups. Therefore, we added the hostname of the storage node with the private IP address on the NetApp filer, but the backups still moved over the production network.

In the next step we defined a "virtual" hostname of storagenode-fs with IP 10.11.11.12 and edited the backup command to use -P storagenode-fs, but this did not work either.
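For clarity, here is a sketch of what we set up; the storagenode-fs name is our own label and the save set path is a placeholder:

Entry added to /etc/hosts on the NetApp filer:

10.11.11.12 storagenode-fs

Modified backup command:

nsrndmp_save -M -P storagenode-fs -T dump /vol/vol1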

Anyone got some ideas on this situation?

Regards,

Patric


Storage Node Network connectivity to Data Domain best practices

I am looking for some advice on best practices for connecting NetWorker storage nodes in an environment where clients have backup IPs in several different VLANs. Basically, our storage nodes contact NDMP clients over their backup network at layer 2 on different VLANs and need to send the backup data to Data Domain on a separate VLAN.

To depict this, here is how we are currently backing up:

NDMPClient1-Backup-VLAN1 ----> Storage Node-Backup-VLAN1 (VLAN5) ----> Data Domain over VLAN5

NDMPClient2-Backup-VLAN2 ----> Storage Node-Backup-VLAN2 (VLAN5) ----> Data Domain over VLAN5

NDMPClient3-Backup-VLAN3 ----> Storage Node-Backup-VLAN3 (VLAN5) ----> Data Domain over VLAN5

NDMPClient4-Backup-VLAN4 ----> Storage Node-Backup-VLAN4 (VLAN5) ----> Data Domain over VLAN5

So for every NDMP client backup VLAN, we defined an interface on the storage nodes in the same VLAN.

And for storage node to Data Domain connectivity, we have a separate backup VLAN at layer 2.

Since this is a three-way NDMP backup, the traffic flows from clients to storage nodes on one network and from storage nodes to Data Domain on a different path.

Is this a good model, or is there another model we can adopt for better backup/restore performance?

Thanks in advance


Isilon NDMP backup related snapshot management/deletion

We are doing NDMP-DSA backups of an EMC Isilon using EMC NetWorker to a Data Domain.

We use the following variables under the "Application Information" section of the Isilon client properties in NetWorker:

BACKUP_OPTIONS=0x00000100

BACKUP_MODE=SNAPSHOT

BUTYPE=tar

DIRECT=Y

HIST=D

UPDATE=Y

USE_TBB_IF_AVAILABLE=Y

NSR_DSA_NODE=server.domain.local

We are seeing a lot of snapshots left behind by the NDMP backups (some get deleted every night and some are left behind).

Is that normal, and if not, what should I check?
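One quick check, assuming OneFS 8.x command syntax and that the NDMP-created snapshots carry "NDMP" in their names (verify the naming pattern on your cluster):

# isi snapshot snapshots list | grep -i ndmp

Comparing the creation times in that list with the backup logs should show whether the leftover snapshots correlate with failed or aborted NDMP sessions.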


DELL EMC NetWorker NAS backups with NDMP











Before we jump into NDMP and how NetWorker handles it, let us first understand some basics about NAS and the different types of backups.



Network Attached Storage (NAS)








A NAS system is a dedicated high-performance file server with the storage system. It provides file-level data access and sharing. File sharing refers to storing and accessing files over a network. NAS uses network and file sharing protocols, including TCP/IP for data transfer and CIFS and NFS for remote file services. Using NFS or CIFS, remote clients gain access over TCP/IP to all or a portion of a file system that is on a file server. The owner of a file can set the required type of access, such as read-only or read-write for a particular user or group of users and control changes to the file. When multiple users try to access a shared file at the same time, a protection scheme is required to maintain data integrity and at the same time make this sharing possible.



File Copy Backup

The simplest method for backup uses file copy, such as an operating system’s copy application. In this type of copy, the metadata includes the names and characteristics of all files so that the level of granularity for recovery is at the file level. The performance of a file copy backup is directly affected by the number of files, sizes, and the general characteristics of the file system being backed up.
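For illustration only, a file copy backup amounts to a recursive, attribute-preserving copy; the paths are hypothetical and GNU cp is assumed:

cp -a /export/projects /backup/projects-$(date +%F)

The -a flag preserves ownership, permissions, and timestamps. Because files are copied individually, recovery granularity is at the file level, and runtime scales with the number of files.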

Raw Device Backup

Backup of data can also occur at the raw device level. That means that the file system has to be unmounted so that the copy can take place. The backup application can then use "dump" applications, such as UNIX's dd, to perform a copy from the raw device to the backup device. This type of backup is usually faster than a file copy but affects restore granularity.
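A minimal sketch of such a raw-device copy with dd; the device paths are hypothetical, and the file system must be unmounted first:

umount /data
dd if=/dev/sdb1 of=/dev/nst0 bs=1M

The restore unit is then the whole device image rather than individual files, which is the granularity trade-off mentioned above.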

NAS Head Systems

The use of NAS heads imposes a new set of considerations on the backup and recovery strategy in NAS environments. NAS heads use a proprietary operating system and file system structure supporting multiple file-sharing protocols. In application server-based backup, the NAS head retrieves data from storage over the network and transfers it to the backup client running on the application server. The backup client sends this data to a storage node, which in turn writes the data to the backup device. This results in overloading the network with the backup data and the use of production server resources to move backup data.

Serverless Backup

In the server-less backup, the network share is mounted directly on the storage node. This avoids overloading the network during the backup process and eliminates the need to use resources on the production server. In this scenario, the storage node, which is also a backup client, reads the data from the NAS head and writes it to the backup device without involving the application server. Compared to the previous solution, this eliminates one network hop.

With the adoption of NAS devices by the industry, several challenges were noticed.

Proprietary Operating Systems

Most NAS devices run on proprietary operating systems designed for very specific functionality and therefore do not generally support "Open System" management software applications for control. Data storage formats also differ between storage arrays.

Network File Systems

Security structures differ on the two most common network file systems, NFS and CIFS. Backups implemented via one of these protocols would not effectively back up the security attributes of data on the NAS device that was accessed via the other protocol. For example, a CIFS LAN backup, when restored, would not be able to restore NFS file attributes, and vice versa. Dual-accessed file systems (NFS and CIFS) gave rise to the concern that if the file system was corrupted and there was no formal, independent methodology for recovering it, the permissions and rights of the file system could be compromised on recovery, since neither protocol understands the other's schema. Therefore, when pre-NDMP backups were performed, the image on tape was that of the specific protocol used to perform the backup.

Network Data Management Protocol (NDMP)

These NAS backup challenges are addressed by NDMP, which is both a mechanism and a protocol used on a network infrastructure to control backup, recovery, and the transfer of other data between NDMP-enabled primary and secondary storage devices. TCP/IP is the transport protocol. XDR is the data encoding language through which all data is read from and copied back to disparate operating systems and hardware platforms without losing data integrity; the NFS file system and Microsoft both use XDR to describe their data formats. By enabling this standard on a NAS device, the proprietary operating system ensures that the data storage format conforms to the XDR standard, which allows the data to be backed up and restored without loss of file system structure with respect to different rights and permission structures, as in the case of dual-accessed file systems.

Some History and facts about NDMP

Co-invented by NetApp and PDC Software (a.k.a. Intelliguard, now Legato/EMC) in the early 1990s, with the first commercial deployments of NDMP-enabled systems as early as 1996.

Since its inception, the protocol has gone through multiple versions, designed and standardized by the NDMP consortium (www.ndmp.org) and providing varying degrees of functionality and interoperability.

The current version of the NDMP protocol is version 4; it is supported by all enterprise NAS and DMA vendors.

NDMP version 5 has been in the works for a number of years but has not become standardized.

Some of the proposed NDMP v.5 features are already supported independently by a few NAS and DMA vendors (e.g. Token-Based Backup, Checkpoint Restartable Backup, Snapshot Management Extension, and Cluster-Aware Backup).

NDMP is purpose-built for NAS backup and recovery, so it is most efficient for this task and removes the technical barriers of traditional methods.

With NDMP, the NAS performs its own backup and recovery. The DMA only sends the commands to the NAS and maintains the device configuration and catalog.

Overall, NDMP provides the following benefits:

  • Reduces complexity
  • Provides interoperability
  • Allows the NAS device to be "backup ready"
  • Allows faster backups
  • Allows NAS and DMA vendors to focus on core competency and compatibility
  • It is a cooperative, open-standard initiative

Additionally, the ability to backup the filesystem from a block-level representation can provide a significant performance benefit, particularly in the case of dense file systems.



NDMP Standard

NDMP (Network Data Management Protocol) is an open protocol used to control data backup and recovery communications between primary and secondary storage in a heterogeneous network environment. Compliance with NDMP standards ensures that the data layout format, interface to storage, management software controls, and tape format are common irrespective of the device and software being used. Refer to the NDMP organization for more details on the protocol and implementation, at http://www.ndmp.org



NDMP Operation on NAS Devices

When NDMP is implemented on a NAS device, the device responds to backup software requests for backup and recovery functions. Unlike traditional backup methods, NDMP backups use the LAN only for metadata; the actual backup data is transferred directly to the local backup device by the NAS device.

NDMP Components in a NetWorker Environment



Three main components support NDMP data operations with the NetWorker software:

the NDMP Data Server, the NDMP Tape Server, and the DMA (Data Management Application).


Control Station: the management control host interface into the entire NAS system.

Data Mover: the host machine that owns all the NAS resources.



NDMP Configuration Models

There are several NDMP configuration models: Direct-NDMP and NDMP-DSA (three-way) backups.

Each of the models targets specific user needs and applications. In all the scenarios, the backup server and the NAS device are NDMP-compliant. The backup application controls the backup/restore process and handles file and scheduling information.

Supported NDMP Backup Types

[Figure: table of supported NDMP backup types]

NDMP Optional Features

Depending upon the backup software vendor, there are two additional NDMP features that are supported:

• Direct Access Recovery or DAR

• Dynamic Drive Sharing or DDS

Direct Access Recovery (DAR)

DAR is the ability to keep track of the tape position of individual files in NDMP backups so that the tape server can seek directly to a file during restore. Without DAR support, a single-file restore requires reading through the entire save set. Another form of DAR is Directory DAR, or DDAR, an improved version that supports directory-level DAR by restoring all the content under a particular directory.
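As a hedged illustration, a single-file NDMP recover from the command line might look like this; the hostnames, save set ID, and paths are placeholders, and the flags should be verified against the NetWorker NDMP guide for your release:

nsrndmp_recover -s nwserver -c nas01 -S 1234567890 -m /ifs/restored /ifs/data/projects/report.txt

With DAR available, the tape server can seek directly to the file's position; without it, the whole save set has to be read sequentially.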

Dynamic Drive Sharing (DDS)

DDS enables tape drives within individual tape libraries to be shared between multiple NAS devices and/or storage nodes in a SAN. By allowing storage nodes to write data to all available drives, more drives can be assigned per backup group than in an environment where drives are dedicated to specific servers. As a result, DDS maximizes library utilization, enables backups and recoveries to be completed sooner, and increases library ROI.

NDMP key installation binaries with NetWorker

[Figure: NDMP installation binaries shipped with NetWorker]

Direct-NDMP

The DMA (NetWorker) controls the NDMP connection and manages the metadata.

The NAS backs up the data directly, over Fibre Channel, to a locally attached NDMP TAPE device.



  • Direct-NDMP writes and reads to/from TAPE in serial fashion, one save set at a time.

  • Direct-NDMP does not support multiplexing of save sets on the same volume.

  • Client parallelism must not exceed the total number of NDMP devices available to the NAS.

  • Direct-NDMP has the advantage of backing up data directly to tape from the NAS.

  • Typically fast, but cannot take advantage of tape multiplexing.

  • May be a better choice for best performance when a few large file systems require backup and throughput is more relevant than multiplexing.



How it works

  • nsrndmp_save runs on the NetWorker Server. It is invoked by ‘workflow’ or manually via command line.

  • nsrndmp_save establishes a TCP connection with the NAS on port 10000 and sends backup parameters to the NAS. The NAS backs up directly to NDMP TAPE over FC SAN.

  • nsrndmp_save spawns ‘nsrndmp_2fh’ which receives and sorts File History (FH) messages from the NAS for building the backup index.

  • Once all File History is received, ‘nsrndmp_2fh’ exits. ‘nsrndmp_save’ then spawns ‘nsrdmpix’ which converts the FH to NetWorker index format and passes it to ‘nsrindexd’ for index commit on the NetWorker Server.

Once both the backup and index processing are completed (not necessarily at the same time) ‘nsrndmp_save’ reports the backup status and exits.

If either the backup or index generation fails, ‘nsrndmp_save’ reports the backup has failed.

NOTE: If the backup fails, the index generation fails. But if only index generation fails, the backup is likely successful and recoverable.

A separate ‘nsrndmp_save’ process is issued and running for each save set. Same for ‘nsrndmp_2fh’ and ‘nsrdmpix’.
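For illustration, a manual Direct-NDMP invocation might look like the following; the hostnames and save set path are placeholders to be checked against your release:

nsrndmp_save -s nwserver -c nas01 -T dump /vol/vol1

Note the absence of -M: without it, nsrndmp_save runs a Direct-NDMP backup and the NAS writes straight to its locally attached NDMP tape device.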







NDMP-DSA (Data Server Agent)

NetWorker controls the NDMP connection and manages the metadata. NetWorker is also responsible for the data backup to a NetWorker device (non-NDMP).

The NAS sends the data over TCP/IP to NetWorker for backup to the Server’s or Storage Node’s configured device.

NDMP-DSA supports any and all NetWorker device types (Tape, AFTD, DD-Boost).

NDMP-DSA is sometimes also called “Three-Way NDMP”.



  • NDMP-DSA writes and reads data over the TCP/IP network to the NetWorker server or Storage Node.

  • Client parallelism can be set based on the capabilities of the NAS as opposed to the number of available devices.

  • NDMP-DSA has the advantage of leveraging NetWorker's tape multiplexing capability to back up more than one save set to the same tape at the same time.

  • With NDMP-DSA the data has to be sent over the TCP/IP network to NetWorker which may cause throughput contention.

  • NDMP-DSA may be a better choice for best performance when many small file systems require backup and greater parallelism and multiplexing are more relevant than network bandwidth.



How it works



  • ‘nsrndmp_save’ starts via ‘savegrp’ on the NetWorker Server. It can also be started manually from a command line.

  • ‘nsrndmp_save’ connects to the NAS over TCP port 10000, authenticates, and passes the backup parameters to the NAS.

  • ‘nsrndmp_save’ communicates with the Storage Node (Server or Remote) via ‘nsrexecd’ and spawns the ‘nsrdsa_save’ process.

  • ‘nsrndmp_save’ passes the DSA hostname information to the NAS so it can connect to the ‘nsrdsa_save’ process and start the backup.

  • ‘nsrdsa_save’ communicates with the NAS over any of the available NetWorker TCP Service ports configured on the Server or Storage Node.
  • ‘nsrdsa_save’ receives the backup data from the NAS and passes it to the ‘nsrmmd’ for backup to the NetWorker device.

  • If “Client Direct” is set in the NDMP Client, ‘nsrdsa_save’ communicates with the backup device directly.

  • In parallel, ‘nsrndmp_save’ spawns the ‘nsrndmp_2fh’ process on the NetWorker Server which receives FH messages for index generation.

  • After ‘nsrndmp_2fh’ is done, ‘nsrndmp_save’ spawns ‘nsrdmpix’ for index processing to ‘nsrindexd’.

  • Once the backup is done, the NAS closes the DSA connection and the ‘nsrdsa_save’ process exits. If/when index processing is complete, ‘nsrndmp_save’ exits and reports the status of the backup.

If either the backup or index generation fails, ‘nsrndmp_save’ reports the backup has failed.

NOTE: If the backup fails, the index generation fails. But if only index generation fails, the backup is likely successful and recoverable.



  • A separate ‘nsrdsa_save’ process is issued and running for each save set being backed up concurrently.
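As shown in the first post above, a manual NDMP-DSA invocation names the DSA host with -M and -P; the hostnames and path here are placeholders:

nsrndmp_save -s nwserver -c nas01 -M -P storagenode01 -T dump /vol/vol1

-M selects DSA mode, and -P points at the Storage Node that will run the ‘nsrdsa_save’ process and receive the data stream from the NAS.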



NDMP-DSA Client Direct



NDMP “Client Direct” applies solely to the communication between the ‘nsrdsa_save’ process and the backup device (AFTD or DD-Boost). Contrary to core NetWorker Clients, NDMP Clients cannot communicate directly with the NDMP-DSA backup device (i.e. the NAS always has to send the data to the ‘nsrdsa_save’ on the NW host). Thus, the DSA host is always the acting client insofar as NDMP “Client Direct” is concerned. With “Client Direct” set, the ‘nsrdsa_save’ communicates with the backup device directly. This is called “Immediate Save”. Without the “Client Direct” set, the ‘nsrdsa_save’ communicates with the ‘nsrmmd’ process rather than the device directly. This is called “Non-Immediate Save”.
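One way to verify the setting is with nsradmin; this is a sketch, and the attribute name should be confirmed for your NetWorker release:

nsradmin> . type: NSR client; name: nas01
nsradmin> show name; client direct
nsradmin> print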




More information can be found in the NetWorker NDMP guide.



https://support.emc.com/docu89901_NetWorker_18.1_Network_Data_Management_Protocol_(NDMP)_User_Guide.pdf?language=en_US



NDMP Index Processing (Technical Notes)



https://support.emc.com/docu39128_NDMP-Index-Processing:-A-Performance-Case-Study-Technical-Notes.pdf?language=en_US



Tip: Interested in setting up an Isilon simulator in the lab? Here is the info if you are interested.



Download VMware Player.



Download the Isilon simulator (162 MB) from the following link:



https://download.emc.com/downloads/DL89604_Isilon_OneFS_8.1.0.4_Simulator.zip?language=en_US&source=Coveo



Each node requires 2 GB of memory.



Remember, an Isilon cluster should have at least 3 nodes; the maximum is 144.



Follow the video below and have fun.



Technical Demo: EMC Isilon OneFS Simulator | ID.TV – YouTube





Post-install, if you want to explore the GUI:



Open a browser and enter one of the external IP addresses with port 8080 (e.g. https://<external-ip>:8080).

Have Fun and I hope it helps!!




Re: NetWorker 9.2.1.1 – nsrndmp_save: Failed to propagate handle

Have you ever run NDMP backups before?

Do not forget about fundamental settings:

– specific NDMP devices (unless you use NetWorker DSA)

– specific pools

– NDMP-specific client settings, like:
  – the NDMP option
  – remote user & password
  – the save set must be specified (‘All’ cannot be resolved)

just to name the most important ones …


Re: Can we change the control port settings on an Isilon node?

Sorry, Control Port?

Isilon doesn’t have management ports or management interfaces; all management is done in-band. Because you’re likely talking about doing NDMP-based backups to a Data Domain, I assume that you’re talking about the network interfaces for your three-way NDMP. So all that’s necessary to accomplish what you’re describing is to:

#1 Ensure that your backup application (NetBackup, NetWorker, etc.) is using a SmartConnect zone name to talk to the Isilon cluster.

#2 Ensure that there are enough unused IP addresses for the new interfaces in the static SmartConnect zone on the Isilon cluster.

#3 Add the network interfaces for the new nodes to that SmartConnect zone/pool (assuming they are cabled and the switch ports are configured correctly).

That’s it. I hope that helps!

~Chris


GUI policy manager shows old version for NDMP clients after the client was upgraded

Article Number: 498815 Article Version: 3 Article Type: Break Fix



Avamar Client for Linux, UNIX, Mac OS X; Avamar Plug-in for NDMP

After an upgrade of a Linux/Unix/NDMP client, the previous version is displayed in the activity window (client version column) and the policy window (version column).


The client plug-in was upgraded, but the information had not been sent to the grid yet. The client needs to re-register with the grid to update its version information.

This does not limit Avamar's ability to successfully complete backups.

Run avregister from the client side as root.

More information on avregister can be found in the NDMP Accelerator User Guide.

There are different guides for Dell EMC, NetApp and Oracle-ZFS NAS devices.

For example, for our 7.3 release, the guides would be…

docu69628_Avamar-7.3-NDMP-Accelerator-for-NAS-Systems-User-Guide.pdf

docu69629_Avamar-7.3-NDMP-Accelerator-for-NetApp-Filers-User-Guide.pdf

docu69630_Avamar-7.3-NDMP-Accelerator-for-Oracle-ZFS-User-Guide.pdf

It is also discussed in the Backup Clients User Guide for Linux/Unix-based clients.

As an example, for our 7.3 release, the guide would be:

docu69621_Avamar-7.3-Backup-Clients-User-Guide.pdf

You can get these documents from the Dell EMC support web page.

support.emc.com

==================================================================================================

# avregister

=== Client Registration and Activation

This script will register and activate the client with the Administrator server.

Enter the Administrator server address (DNS text name or numeric IP address, DNS name preferred): emc.com

Enter the Avamar server domain [clients]:

avagents will be restarted for all clients

NOTE: all clients will be registered to the Administrator server given (Avamar Grid)

and the same Avamar server domain given.

==================================================================================================

You may need to use the Avamar Admin GUI > Policy > client tab > select the client, and un-check the activated box.


You can start here and un-check the activated box for each client you will run avregister for, or wait till it complains that it can't register a client that is already activated.

On an NDMP accelerator, you only have to run avregister once for all clients on the box.

All clients will be registered to the same domain.

After this, refresh the admin GUI and you should see the new version.


Avamar Isilon – browsing the Isilon hangs after the browse password is entered and does not show /ifs

Article Number: 498454 Article Version: 3 Article Type: Break Fix



Avamar Plug-in for NDMP 7.3.100-233

Fresh configuration.

When browsing the Isilon in the Avamar GUI, after the browse password is entered, the /ifs directory is not displayed.

No errors or warnings.

Running ‘avsetupndmp’ on the Accelerator node and selecting edit, the client fails to authenticate the NDMP user.

Password configuration.

Fresh install.

Verify the NDMP user and the av-browse-admin user on the Isilon.

You can use curl to test from the Accelerator to the Isilon:

curl -vk -u "<username>" -X GET 'https://<node_IP>:8080/namespace/<access_point>'

For example, to get the content listing of /ifs:

curl -vk -u "root" -X GET 'https://10.11.1.1:8080/namespace/ifs'

Verify that the NDMP protocol is enabled on the Isilon:

On Isilon GUI > Data Protection > Protocols > NDMP
> make sure the NDMP service is checked
> save settings.
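If you prefer the CLI, the service can also be enabled there; this is OneFS-style syntax from memory, so verify it against your release:

# isi services ndmpd enable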

Telnet to the Isilon as root and run the following commands.

# isi ndmp settings global view

This should show NDMP port 10000 and dma: generic.

# isi ndmp user ls

Output should be similar to the following…

NDMP User

———

ndmp

ebadmin

The error described above occurred because the NDMP user did not have permission to run NDMP jobs.

To add the NDMP user, an authorized NDMP user should run this command on the Isilon:

# isi ndmp user create ndmp --password PASSWORD

Where PASSWORD is the password you want to use.


On the Accelerator node, if you exited ‘avsetupndmp’, start again, accepting the current settings.

At the menu, edit the client.

Accept the defaults up to the request for the NDMP password.

Enter the password for the NDMP user just created on the Isilon.

If you left the ‘avsetupndmp’ program running and waiting for the password, enter the NDMP password as prompted.

Once completed, go back to the Avamar Admin GUI > Backup & Restore

Browse the client.

Now /ifs should show up and allow you to browse down into it.


Re: What do you recommend when migrating NetApp to Isilon? Is there a best tool and best practice to achieve this?

OK, so shameless plug here, but it's certainly the appropriate place to do it. I work for Datadobi, and our software DobiMigrate was purpose-built for this. It is API-integrated with NetApp 7-Mode, CDOT, Isilon, and others. It'll detect all of your vFilers and SVMs, detect the qtree security styles, NFS exports, and SMB shares. It'll copy all of the data, shares, and exports over to Isilon. With Isilon it is SmartConnect zone aware, and as a result each of the proxies that copy data can talk to up to 5 Isilon nodes at the same time. As a result it's crazy fast and scales out.

You can read more here:

Accelerating your Journey to the data lake with DobiMiner from Datadobi

Or from our site here:

https://www.datadobi.com

Anyway, certainly reach out if you'd like to see a demo or get some more information, or ask your Dell EMC account team. Also, FWIW, regarding the suggestion of isi_vol_copy: it might work for your 7-Mode systems, but it does not support CDOT. It's an NDMP-based dump, so there are a lot of other issues that come with that decision.

I’ll of course leave it to others to comment on their experiences with the same type of migration.

~Chris Klosterman

Principal SE, Datadobi.

chris.klosterman@datadobi.com


Re: NDMP Exclude issue using NetBackup 8.1 and Isilon 8.1.0.4

SET EXCLUDE=Folder Space

is the correct syntax.

Maybe the directory name has more than one space character, or non-ASCII spaces:



$ ls -d1 Folder*
Folder  Space
Folder Space 
Folder Space
$ ls -d1 Folder* | od -c
0000000   F   o   l   d   e   r           S   p   a   c   e  \n   F   o
0000020   l   d   e   r       S   p   a   c   e      \n   F   o   l   d
0000040   e   r      **   S   p   a   c   e  \n
0000052





The first example has two spaces in the middle, the second has a space at the end, and the third has a ‘non-breaking space’ Unicode character in the middle. That character takes two bytes in UTF-8 encoding, and the second byte is represented by ** in the output of od -c (\n marks the end of each name).
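A quick way to hunt for such names, assuming GNU grep with PCRE support (-P) is available where the directory is mounted:

ls -d1 * | grep -nP ' {2,}| $|\xC2\xA0'

This flags names containing doubled spaces, a trailing space, or a UTF-8 non-breaking space (bytes 0xC2 0xA0).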





While we are at it, one more thing, a big caveat related to NDMP and include/exclude settings on Isilon: it is possible, but dangerous, to define multiple NDMP jobs for the same base directory (FILESYSTEM) with different include (FILES) or EXCLUDE patterns. Be warned that this will lead to unexpected results, because OneFS NDMP only keeps one single backup timestamp per NDMP FILESYSTEM base directory and NDMP level.

FILESYSTEM=/ifs/test

FILES=a b c

FILESYSTEM=/ifs/test

FILES=d e f

Say the a b c job runs at 1am and the d e f job at 2am; the timestamp (for the given NDMP level) for /ifs/test will finally be set to 2am.

The next day, when the a b c job runs with an incremental level (>=1), it will back up only data changes that happened after 2am (23h ago), and miss any changes from the hour between 1am (24h ago) and 2am.



fwiw

— Peter




