Storage Node Network Connectivity to Data Domain Best Practices

I am looking for some advice on best practices for connecting NetWorker storage nodes in an environment where clients have backup IPs in several different VLANs. Our storage nodes contact the NDMP clients over their backup network at layer 2 on different VLANs, and then need to send the backup data to the Data Domain on a separate VLAN.

To depict this, here is how we are currently backing up:

NDMPClient1 (backup VLAN 1) ----> Storage Node (VLAN 1 / VLAN 5) ----> Data Domain (VLAN 5)

NDMPClient2 (backup VLAN 2) ----> Storage Node (VLAN 2 / VLAN 5) ----> Data Domain (VLAN 5)

NDMPClient3 (backup VLAN 3) ----> Storage Node (VLAN 3 / VLAN 5) ----> Data Domain (VLAN 5)

NDMPClient4 (backup VLAN 4) ----> Storage Node (VLAN 4 / VLAN 5) ----> Data Domain (VLAN 5)

So for every NDMP client backup VLAN, we defined an interface on the storage nodes in the same VLAN.

And from the storage nodes to the Data Domain, connectivity is over a separate backup VLAN at layer 2.

Since this is a three-way NDMP backup, the traffic flows from the clients to the storage nodes on one network, and from the storage nodes to the Data Domain over a different path.
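To make the name-resolution side of this concrete: each client has to resolve the storage node to the address on its own backup VLAN, and the storage node has to resolve the Data Domain to its VLAN 5 address. A minimal hosts-file sketch (all hostnames and addresses below are hypothetical examples, not from our environment):

  • 10.1.1.10 storagenode-b1 # on NDMPClient1: the storage node's VLAN 1 interface
  • 10.5.5.20 datadomain-b5 # on the storage node: the Data Domain's VLAN 5 interface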

Is this a good model, or is there another model we could adopt for better backup/restore performance?

Thanks in advance


How to Configure the NetScaler Appliances in a High Availability Setup to Communicate in a Two-Arm Configuration with Different 802.1q VLAN Tags on Each Arm

As a workaround for the NSVLAN limitation, use the -trunk ON functionality available in the set interface command. As explained in CTX115575 – FAQ: The “trunk” or “tagall” Option of NetScaler Appliance, the -trunk option allows you to tag all frames coming from an interface.

The default native VLAN that all interfaces are initially associated with is VLAN 1, but in the preceding scenario VLAN 1 is not allowed across the switch. To modify the native VLAN for an interface, you must bind the interface to a VLAN without the -tagged option. By doing this you are essentially taking VLAN 1 off the interface and making another VLAN ID the native VLAN for that interface. In the preceding example, the native VLAN for interface 1/1 is VLAN 12 and the native VLAN for interface 1/2 is VLAN 9.

Once the -trunk option is set to ON, all HA frames are tagged with the ID of the untagged (native) VLAN that the interface is associated with. For the preceding topology requirement, the following configuration is required:

  • add vlan 9
  • add vlan 12
  • bind vlan 9 -ifnum 1/2
  • bind vlan 12 -ifnum 1/1
  • set int 1/1 -trunk ON
  • set int 1/2 -trunk ON

Note: The preceding workaround tags the HA frames with a different native VLAN on each interface. If these two interfaces need additional tagged VLANs for different subnets to the backend servers, you can bind these VLANs to the appropriate interface with the -tagged option, so that the traffic goes out on the appropriate VLAN with the appropriate tag.
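For example, to carry a hypothetical backend subnet on tagged VLAN 100 over interface 1/1 in addition to its native VLAN 12 (VLAN 100 is an assumed value for illustration, not part of the original scenario):

  • add vlan 100
  • bind vlan 100 -ifnum 1/1 -tagged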


Re: setting up ROUTE for new interface on VNX 5200

Thanks, Rainer, for picking this up for us. We recently installed a new eNAS, and when we created the interfaces used for the CIFS filers, we found that the eNAS automatically creates a route for each one, just as kevlee mentioned in this topic.

For example, we create:

an interface with IP 192.168.100.43/24 on VLAN 43 automatically creates a route to 192.168.100.0/24 via 192.168.100.43;

an interface with IP 192.168.101.43/24 on VLAN 143 automatically creates a route to 192.168.101.0/24 via 192.168.101.43;

an interface with IP 192.168.102.43/24 on VLAN 243 automatically creates a route to 192.168.102.0/24 via 192.168.102.43.

Our DNS/AD server is 192.168.150.50, on VLAN 50.

The physical link between the eNAS and the Ethernet switch runs 802.1Q trunking. Each VLAN's gateway is on the switch side.

I suppose we should tell the eNAS the gateway IP of each VLAN, but we can't, as the automatically created routes are already there (192.168.10x.0/24 via 192.168.10x.43).

Per your advice we defined a default route per Data Mover. In that case the eNAS knows how to forward the traffic out, but each packet is tagged with a VLAN ID, and the switch/firewall drops the packets because of the VLAN ID.
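For reference, a per-Data-Mover default route of the kind suggested would be added with server_route; a minimal sketch, assuming the Data Mover is server_2 and using the VLAN 43 gateway discussed below:

  • server_route server_2 -add default 192.168.100.1
  • server_route server_2 -list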

In theory, when host x (VLAN x) needs to talk to host y (VLAN y),

the traffic flow is: host x -> gateway-vlan x -> gateway-vlan y -> host y

If we define 192.168.100.1 as the gateway (0.0.0.0/0 via 192.168.100.1), we have no issue with communication from 192.168.100.43, but what about 192.168.101.43 and 192.168.102.43? Those packets will be dropped because of the VLAN ID.

Either something is wrong with our deployment/understanding, or we should be able to define a gateway for each interface, such as 192.168.100.1 for VLAN 43, 192.168.101.1 for VLAN 143, and 192.168.102.1 for VLAN 243.
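What we would like to be able to express, sketched with the same server_route syntax (server_2 is again an assumption), is something like a host route that sends traffic for the DNS/AD server out via a specific VLAN gateway:

  • server_route server_2 -add host 192.168.150.50 192.168.100.1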

I know each physical Data Mover has a default/global CIFS server for antivirus, etc., and I think the default route should be for that default/global CIFS server. But what about the other CIFS servers on the VDMs? Is there a way to define a gateway/default route for each of them?

Thanks,

John


Re: dedicated backup vlan

Hello All,

A little background on the situation: right now we are using the same VLAN for backup as for production. The storage nodes, the Data Domain, and the NetWorker server are all located on the production VLAN, and all clients were added to NetWorker using their production DNS names.

Now I am trying to change this setup by putting a second NIC (for dedicated backup) in the NetWorker server, the storage node, and one client (which I am using for testing). The dedicated backup VLAN has no DNS server, so I am adding all the entries manually to the hosts files on all nodes. Ping works in every direction,

but for some reason the backup is still flowing through the production NIC.

So I am just wondering if someone can point me in the right direction to solve this problem.

Some points on the new setup:

  • The NetWorker server is Linux-based, running NetWorker 9.1.1.3.
  • The storage node and the client are both Windows-based and run the same NetWorker version as the server.
  • The aliases (Globals (1 of 2)) for the NetWorker server and the storage node have already been updated in NetWorker.
  • The client was added to NetWorker using its production DNS name.
  • The client has the NetWorker server's dedicated-VLAN name in its Server network interface attribute (Globals (1 of 2)).
  • The devices (Data Domain) are still on the production network.
  • The dedicated VLAN has no communication with the production VLAN.
  • The hosts files on all three nodes are updated with the backup VLAN IP addresses and hostnames (see the sketch below).
  • The servers file is also updated on the client.
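For reference, the manually maintained hosts entries on the three nodes might look like this sketch (all names and addresses are hypothetical examples):

  • 172.16.50.10 nwserver-bkp # NetWorker server, dedicated backup VLAN
  • 172.16.50.11 stnode-bkp # storage node, dedicated backup VLAN
  • 172.16.50.12 client1-bkp # test client, dedicated backup VLAN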

Thanks in advance,

Sukhdeep


Citrix NetScaler Interface Tagging and Flow of High Availability Packets

This article describes the flow of High Availability packets when various combinations of tagging are implemented in the NetScaler configuration.

Flow of High Availability Packets

Heartbeats, that is, High Availability packets, are always untagged unless the NSVLAN is configured using the set ns config -nsvlan command, or an interface is configured with the -trunk ON option (NetScaler software release 9.2 and earlier) or the -tagall option (NetScaler software release 9.3 and later).

The following scenarios help in describing the flow of the High Availability packets:

Scenario 1

NSVLAN is the default of 1.

Interface 1/1 is bound to VLAN 2.

Interface 1/2 is bound to VLAN 3.

For example:

  • add vlan 2
  • add vlan 3
  • bind vlan 2 -ifnum 1/1
  • bind vlan 3 -ifnum 1/2

High Availability packets flow as untagged on the 1/1 and 1/2 interfaces on the native VLAN.

Scenario 2

NSVLAN is the default of 1.

Interface 1/1 is bound to VLAN 2 and is configured with -trunk ON.

Interface 1/2 is bound to VLAN 3 and is configured with -trunk OFF (the default).

For example:

  • set interface 1/1 -trunk ON
  • add vlan 2
  • add vlan 3
  • bind vlan 2 -ifnum 1/1
  • bind vlan 3 -ifnum 1/2

High Availability packets flow on 1/1 as tagged with a VLAN ID of 2, and untagged on the 1/2 interface.

Scenario 3

NSVLAN is VLAN 10 (non-default).

Interface 1/1 is bound to VLAN 2.

Interface 1/2 is bound to VLAN 3.

Interface 1/3 is bound to VLAN 10.

For example:

  • add vlan 2
  • add vlan 3
  • bind vlan 2 -ifnum 1/1
  • bind vlan 3 -ifnum 1/2
  • set ns config -nsvlan 10 -ifnum 1/3

High Availability packets flow tagged on VLAN 10, on interface 1/3 only, and do not flow on VLAN 2 or VLAN 3.


High Availability Traffic Does Not Show on NetScaler Tagged Channel Network Interfaces

To overcome this issue, you need to introduce an additional VLAN for the HA heartbeat packets on the LA/1 switch port side only, not on the device interfaces. Without this configuration, the untagged VLAN 1 HA traffic is discarded along with other disallowed frames.

HA heartbeats are always communicated on the native, untagged VLAN (VLAN 1 by default, from the NetScaler's perspective). In our configuration this works fine for interface 0/1, because on the switch port side we have configured an access port in VLAN 115 (so the switch accepts the native packets as VLAN 115).

LA/1, however, has interfaces connected to trunked/tagged switch ports that allow only VLAN 100. LA/1 sends untagged HA heartbeat traffic as described above, but due to the switch's configuration those frames are dropped.

The switch ports must therefore be configured so that native VLAN 115 is carried in addition to tagged VLAN 100; the switch ports will then treat the untagged HA heartbeats as VLAN 115 and forward the traffic.
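As an illustration, on a Cisco IOS-style switch (an assumption; the article does not name the switch vendor), the trunk ports towards LA/1 might be configured along these lines:

  • interface Port-channel1
  • switchport mode trunk
  • switchport trunk native vlan 115
  • switchport trunk allowed vlan 100,115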

Important: After committing the preceding actions, you need to remove and re-create the HA pair. Be sure to take note of the commands that are executed, for example, -tagall OFF on all interfaces including the channel, disabling all inactive interfaces, and so on.


Remote Push – Browse Network


We have workstations on different VLANs, and the browse feature does not appear to use DNS or AD to search for available workstations. Is it really using broadcast? How do I get it to see workstations on other subnets/VLANs? If I search by IP address, I just get host IPs back (no workstation names).

Please help.

Thanks,

Robert K

