Upgrading to 8.5 from Server 2008 R2

I need a solution

I’m trying to work through our CMS migration from 8.1 RU7 to 8.5. We’ll be moving to new server(s) because our existing install is on Server 2008 R2.

Restoring the database and running NSUpgradeWizard theoretically gets us most of the way there (it appears to let us move patches, tasks, and software items), but it seems like the Deployment Solution part of the suite is not migrated. Is that correct?

I understand images need to be updated for the new version of the client (8.5), but typically we’d restore those images, let the client update, and then re-upload the updated image. It seems odd to me that TECH251847 doesn’t mention that images and copy-file resources aren’t gathered by the migration wizard/database restore. I assume Deployment tasks and jobs come over, but I’m not sure.

The ITMS Data Migration PDF has DS migration steps (pages 30–36) for a standalone migration.

If I’m understanding everything above correctly, then since most of our machines are getting swapped over the next few months anyway, I’m now leaning towards not migrating anything and instead starting fresh: new Win10 machines pointing to the new NS, with a plan to shut down the old NS after our rollout.

My only complication with that is the few-month overlap where we have old machines on one NS and new machines on the new NS, and we need the ability to image on the same VLAN. Support suggested we could leave the NBS services stopped on whichever server is least used until it’s needed, and then swap back and forth, stopping and starting services between the two servers depending on whether we’re imaging a new or old machine (as sketched below). I’m assuming there’s no better way to manage two iPXE installations on the same VLAN?
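
To picture the juggling, here is a minimal sketch of the stop/start approach on a Windows server, assuming the NBS/PXE services can be located by display name. The service names below are hypothetical (they vary by ITMS version), so verify them with the first command before scripting anything:

sc query state= all | findstr /i "netboot pxe"
net stop "Symantec NetBoot Server"
net stop "Symantec NetBoot Mtftp Server"
net start "Symantec NetBoot Mtftp Server"
net start "Symantec NetBoot Server"

The first line lists candidate services on the NS; the stop pair would run on whichever NS should go quiet, and the start pair on the one that needs to answer PXE for the machine being imaged.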

thanks for any thoughts.


How do I have two default gateways, one for mgmt and one for interception?

I need a solution

Hi,

I have port 0:0 as the management port, with a default gateway associated with the default route domain, and I want port 2:0 to have its own default route. Shall I create a new route domain and a new VLAN and associate them with interface 2:0, which already has an IP address?

Then shall I define a default gateway for the new route domain?
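
Something like this is what I have in mind, written in F5-style tmsh syntax purely as an illustration of the route-domain pattern (all names and addresses are made up, and the exact commands on the platform may differ):

create net route-domain 2
create net vlan vlan_intercept interfaces add { 2.0 }
modify net route-domain 2 vlans add { vlan_intercept }
create net self self_intercept address 10.20.2.5%2/24 vlan vlan_intercept
create net route rd2_default network 0.0.0.0%2/0 gw 10.20.2.1%2

The %2 suffix pins the self IP and the default route to route domain 2, so interface 2:0 gets its own default gateway without touching the management route.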

Would this work?

Kindly

Wasfi


WSS Policy for Non-Domain Users

I need a solution

Hello,

I have a challenge to overcome and will appreciate it if anyone can contribute.

Access Method – currently using IPsec VPN; ready to use any access method to achieve this requirement.

Systems – Workgroup (non-domain).

Requirement – to apply user/group-based policy and to get username-based reports.

Users are on a specific VLAN.

Please help me understand how I can achieve this.


BGP route advertisement packets are getting dropped on NetScaler.


It is observed from the newnslogs that the LA/1 interface is flapping. Since LA/1 is bound to VLAN 128, when LA/1 goes down, VLAN 128 also goes down.

This triggers the automatic router ID selection process, which results in the BGP session being reset. The reason for the LA channel flap is below.

The VPX which had the issue is 10.236.25.73_13. When checked in xenstore_info.out [/var/shell in the collector], the LA channel configuration is not found. If an LA channel is configured from the SVM, it will be seen in the xenstore_info output; this value is fetched from XenServer at the time the collector is captured. If the LA channel is added from the VPX, it is not observed in xenstore_info. Hence it is assumed that the LA channel was added/created at the VPX level.

On the VPX which was working, 10.236.25.45, the LA channel configuration can be seen in xenstore_info.out.
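
As a hedged reference (standard NetScaler CLI commands; the interface numbers are taken from the logs below, everything else is illustrative), a channel created at the VPX level would have been added with one of the following, and can be verified the same way:

show channel
show interface LA/1
add channel LA/1 -ifnum 10/1 10/2
set interface 10/1 -lacpMode ACTIVE -lacpKey 1
set interface 10/2 -lacpMode ACTIVE -lacpKey 1

The first two commands verify the channel and its member interfaces; "add channel" creates a static channel, while the two "set interface" commands form LA/1 via LACP (lacpKey 1 maps to LA/1). A channel created this way lives only in the VPX configuration, which is consistent with it not appearing in xenstore_info.out.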

Logs:

[jobinb@sjanalysis-1 /upload/ftp/70327476/collector_S_10.236.25.73_13Jul2015_10_31/shell]$ more xenstore_info.out
vmname_mac_table = "{'cdl128-bda01ma-wius':['3a:20:0b:66:4b:0b','ca:9b:0d:5d…"
interface = ""
mtu = "[10/1:1500,10/2:1500]"
L2mode = "0"
dedicated = "0"
gateway = "10.236.25.1"
ip = "10.236.25.73"
netmask = "255.255.255.0"
nsvlan_id = "0"
nsvlan_intflist = ""
nsvlan_tagged = "0"
physical_intflist = ""
priv = "b84ac6745ffa4cd4819c19a3a0b9f022"
priv2 = "ce8299aad9866985ac44c430119d528854a60381941eafaabf57e2e4e437ca36eb6…"
pw = "1cb5e4e3103eee2d53c282ef0ef677de3841a6f25610d5247"
vlan_id = "0"
Priv2 = ce8299aad9866985ac44c430119d528854a60381941eafaabf57e2e4e437ca36eb69890905d912e0c57134fb4e40844c2366c56c25daff722646e505c220ebb7
Mac_interface_list = 8e_85_a3_63_f0_15-0/1,ee_82_8a_8a_ed_f5-0/2,3a_20_0b_66_4b_0b-10/1,ca_9b_0d_5d_29_1e-10/2


[jobinb@sjanalysis-1 /upload/ftp/70327476/collector_S_10.236.25.45_11Aug2015_01_44/shell]$ more xenstore_info.out
vmname_mac_table = "{'cdl100-bda01ma-wius':['42:f7:5c:c3:ca:80','9a:2e:09:9a…"
interface = ""
mtu = "[10/1:1500,10/2:1500,LA/1:1500]"
L2mode = "0"
dedicated = "0"
gateway = "10.236.25.1"
ip = "10.236.25.45"
netmask = "255.255.255.0"
nsvlan_id = "0"
nsvlan_intflist = ""
nsvlan_tagged = "0"
physical_intflist = ""
priv = "7831c14f1eba5d6550ccaee1e7ff2659"
priv2 = "a6b0193917bc48f68f165c69408db2a611780aa76f1005de5ae5bbd571fdb319b44…"
pw = "157bec273b3ceb5cad5a442d070864fd2306c616af58bc09c"
vlan_id = "0"
LA = ""
1 = ""
interface_list = "10/1,10/2"
mac = "00_e0_ed_44_9e_69"
type = "LACP"
Priv2 = a6b0193917bc48f68f165c69408db2a611780aa76f1005de5ae5bbd571fdb319b448e37d1d570dc49f8f2ab17336632d507727fdcc5673a504d569fa32f9aa77
Mac_interface_list = 72_01_22_66_6b_ae-0/1,ea_80_88_b2_65_73-0/2,42_f7_5c_c3_ca_80-10/1,9a_2e_09_9a_ac_94-10/2


Storage Node Network Connectivity to Data Domain: Best Practices

I am looking for some advice on best practices for connecting NetWorker storage nodes in an environment where clients have backup IPs in several different VLANs. Basically, our storage nodes will contact NDMP clients over their backup network at layer 2 on different VLANs, and they need to send the backup data to Data Domain on a separate VLAN.

To depict this, here is how we are currently backing up:

NDMPClient1-Backup-Vlan1 ----> Storage Node-Backup-Vlan1 (Vlan5) ----> DataDomain over Vlan5
NDMPClient2-Backup-Vlan2 ----> Storage Node-Backup-Vlan2 (Vlan5) ----> DataDomain over Vlan5
NDMPClient3-Backup-Vlan3 ----> Storage Node-Backup-Vlan3 (Vlan5) ----> DataDomain over Vlan5
NDMPClient4-Backup-Vlan4 ----> Storage Node-Backup-Vlan4 (Vlan5) ----> DataDomain over Vlan5

So for every NDMP client backup VLAN, we defined an interface on the storage nodes in the same VLAN.

And from the storage nodes to Data Domain, connectivity is over a separate backup VLAN at layer 2.

Since this is a three-way NDMP backup, the traffic flows from clients to storage nodes on one network, and from storage nodes to Data Domain on a different path.
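
For reference, a minimal sketch of how the per-VLAN interfaces might be defined, assuming Linux storage nodes with iproute2 (interface names, VLAN IDs, and addresses are all placeholders):

ip link add link eth0 name eth0.1 type vlan id 1
ip addr add 10.10.1.5/24 dev eth0.1
ip link set eth0.1 up
ip link add link eth1 name eth1.5 type vlan id 5
ip addr add 10.10.5.5/24 dev eth1.5
ip link set eth1.5 up

The first three commands create the client-facing subinterface for backup VLAN 1 (repeated per client VLAN); the last three create the separate subinterface toward Data Domain on VLAN 5.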

Is this a good model, or is there another model we can adopt for better backup/restore performance?

Thanks in advance


Re: Can we set up a separate subnet/pool in Isilon for communication to a CloudPools destination

We have OneFS version 8.

Currently our network on Isilon is composed of 2 subnets under the groupnet as listed below:

1. Mgmt Subnet

– Mgmt IP pool

2. Prod subnet

– SMB IP pool

– NFS IP pool

Each Isilon node has 2 aggregated NICs in the prod IP pools and 2 aggregated NICs in the mgmt pool.

We are now initializing CloudPools. The CloudPools destination is provided by a vendor that uses Isilon nodes as the cloud storage target.

We want to connect directly to this vendor rather than going through the firewall.

So a new VLAN is being created that will extend to the vendor's data center.

I am thinking of adding another subnet and another IP pool for communication to the CloudPools provider.

This will use the same aggregated NICs as the "Prod subnet" IP pool.

Also, a static route will be added to ensure that communication to the CloudPools provider goes through this subnet.
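
A minimal sketch of that plan in the OneFS 8 CLI (subnet, pool, range, interface, and address values below are all placeholders; verify the isi network syntax on your cluster first):

isi network subnets create groupnet0.cloudnet ipv4 24 --gateway=10.30.1.1 --gateway-priority=20
isi network pools create groupnet0.cloudnet.cloudpool0 --ranges=10.30.1.10-10.30.1.20 --ifaces=1-4:10gige-agg-1
isi network pools modify groupnet0.cloudnet.cloudpool0 --add-static-routes=192.0.2.0/24-10.30.1.1

The static route uses the OneFS subnet/prefix-gateway format, so traffic to the vendor's range (192.0.2.0/24 as a stand-in) leaves via the new subnet's gateway rather than the firewall.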

Will this work for SMB shares, as those are the ones that we will archive to CloudPools?


Network Validation Tool for IDPA

SDS Technology is pleased to announce the release of the Network Validation Tool for IDPA (NVT-IDPA).

Dell EMC Integrated Data Protection Appliance (IDPA) is a pre-integrated, turnkey, optimized appliance built on the strength of Dell EMC’s leading data protection technologies. Prior to installing IDPA, the network configuration for the datacenter has to be completed, and as an installation best practice the deployment team should review this configuration before starting the installation. The Network Validation Tool (NVT) for IDPA automates this manual validation of network readiness at the customer site before the appliance is deployed, saving time and effort.

For additional documentation or to download Network Validation Tool for IDPA (NVT-IDPA), visit our Network Validation Tool for IDPA (NVT) solution page.

New features and capabilities include:

· Added support to run NVT-IDPA as a standalone program, so installation is no longer required.

· Added support to download results in Excel as well as PDF format.

· Improved performance by reducing the time needed to complete network services validation.

· Added switch validation support for top-of-rack and upstream switches: is the port a member of a port channel, and if so, in what mode?

· Added support for Dell OS10 switches, using show tech-support command output (VLAN created in the upstream switch).

· Added IDPA version and model to the downloaded results.

Target audience:

· Internal Professional Services (PS) resources and project managers deploying the IDPA solution

· Business partner PS resources and project managers deploying the IDPA solution

· Support engineers pre-validating the customer environment

· Customers, before deploying the IDPA solution


Basic Design Guidelines and Principles on NetScaler Routing, Default Routes, Interfaces and Channels, VLANs, and GARP


NSIP – This is the NetScaler’s IP, generally used for management because it is the only IP unique to an individual NetScaler in an HA or cluster environment. Also important to note: LDAP, RADIUS, and user-scripted monitor traffic (such as the LDAP monitor and StoreFront monitor) will source from the NSIP and thus route over the VLAN and interface the NSIP is bound to (default native VLAN 1). If you need the LDAP and RADIUS traffic to source from the SNIP, create an LB vserver for your backend servers.
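
For example, a minimal sketch of that LB vserver workaround for LDAP (standard NetScaler CLI; the names and addresses are placeholders):

add service svc_ldap1 10.0.0.50 TCP 389
add lb vserver vs_ldap TCP 10.0.0.100 389
bind lb vserver vs_ldap svc_ldap1

Point your LDAP action at the VIP (10.0.0.100 here) instead of the real server; the NetScaler then talks to the backend LDAP server from the SNIP rather than the NSIP.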

SNIP – This is the NetScaler’s subnet IP. This IP is used to initiate communication to backend servers and will generally always be the one initiating traffic. That said, it can be the destination for traffic in these cases (see the example after this list):

  • It can be used as the Gateway address on other devices when doing Layer 3 routing on the NetScaler.
  • It can, when enabled, accept management services, such as access to the GUI, SSH, and SNMP.
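
As an example of the second case (standard set ns ip options; the address is a placeholder):

set ns ip 10.0.0.10 -mgmtAccess ENABLED -gui ENABLED -ssh ENABLED -snmp ENABLED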

VIP – The VIP is unique in that it will never be used to initiate outbound traffic; it is intended to receive traffic only. Of course, once it receives traffic it will reply to the client, but the point is that the VIP does not and will not initiate outbound traffic. Note this also means it will not be used as the source for communicating with backend servers in, for example, an LB vserver.

MIP – Very similar to the SNIP. We won’t go into details here, as it’s rarely used any more; see the product documentation if you need it.

Viewing Specific IP Types in CLI. The standard CLI command for showing all IPs is "sh ip". However, if you have a lot of vservers configured, this will be a long list. If you only want to see your SNIPs or MIPs, you can use the -type switch. For example: "sh ip -type SNIP".
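
For instance:

sh ip
sh ip -type SNIP
sh ip -type VIP
sh ip -type NSIP

Each filtered command lists only IPs of the named kind, which keeps the output manageable on a box with many vservers.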
