Can HTTPS be monitored with Network Monitor 15.1?

I need a solution

Hi,

I have two Network Monitor 15.1 detection servers on two different core switches. When checking incidents, HTTPS incidents are being generated even though the HTTPS protocol is not enabled. Can someone explain why this type of incident is generated, or does this new version of Network Monitor natively detect encrypted HTTPS traffic?

Thanks and regards.



Introducing Dell EMC Ready Solutions for Microsoft WSSD: QuickStart Configurations for ROBO and Edge Environments

The QuickStart Configurations for ROBO and Edge environments are four-node, two-switch, highly available, end-to-end HCI configurations that simplify design, ordering, deployment, and support.

High_Level_Overview.png

Key Features:

  • Pre-configured and sized Small, Medium and Large solution templates to simplify ordering/sizing.
  • Available in two platform options – R640 All-Flash & R740XD Hybrid S2D Ready Nodes.
  • Includes two fully redundant, half-width Dell EMC S4112-ON switches in 1U form factor.
  • Includes optional ProDeploy and mandatory ProSupport services providing the same solution level support as S2D Ready Nodes.
  • Detailed QuickStart deployment guide to de-risk deployment and accelerate time to bring up.
  • Additionally, the QuickStart configurations also include a rack, PDUs, and blanking panels right-sized for the solution. These Data Center Infrastructure (DCI) solutions help our customers save time and resources, reduce effort, and improve the overall experience.



The network fabric for these QuickStart configurations implements a non-converged topology for the in-band/out-of-band management and storage networks. The RDMA-capable QLogic FastLinQ 41262 adapters carry the 25 GbE SFP28 storage traffic, while the rNDC provides the 10 GbE SFP+ bandwidth used for host management and VM network traffic.



Non-Converged_Option-2 (1).png

This optimized network architecture of the switch fabric ensures redundancy of both the storage and management networks. In this design, the storage networks are isolated to each respective switch (Storage 1 to TOR 1; Storage 2 to TOR 2). Because storage traffic typically never traverses the customer LAN, the VLT bandwidth has been sized to fully support redundancy of the management network up to the customer data center uplink.



For sample switch configurations, see https://community.emc.com/docs/DOC-70310.



The operations guidance for Dell EMC Ready Solutions for Microsoft WSSD provides the instructions needed to perform the day-0 management and monitoring on-boarding tasks, as well as instructions for performing lifecycle management of the Storage Spaces Direct cluster.


Large Dataset Design – Environmental and Logistical Considerations

In this article, we turn our attention to some of the environmental and logistical aspects of cluster installation and management.



In addition to available rack space and the physical proximity of nodes, provision needs to be made for adequate power and cooling as the cluster expands. New generations of nodes typically deliver increased storage density, which often magnifies the power draw and cooling requirements per rack unit.



The larger the cluster, the more disruptive downtime and reboots can be. To this end, the recommendation is for a large cluster’s power supply to be fully redundant and backed up with a battery UPS and/or power generator. In the worst case, if a cluster does lose power, the nodes are protected internally by file system journals, which preserve any in-flight, uncommitted writes. However, the time to restore power and reboot a large cluster can be considerable.



Like most data center equipment, the cooling fans in Isilon nodes and switches pull air from the front to the back of the chassis. To complement this, most data centers use a hot aisle/cold aisle rack configuration, where cool, low-humidity air is supplied in the aisle at the front of each rack or cabinet, either at the floor or ceiling level, and warm exhaust air is returned at ceiling level in the aisle to the rear of each rack.



Given the high power draw and heat density of cluster hardware, some data centers are limited in the number of nodes each rack can support. For partially filled racks, the use of blank panels to cover the front and rear of any unfilled rack units can help to efficiently direct airflow through the equipment.



The use of intelligent power distribution units (PDUs) within each rack can facilitate the remote power cycling of nodes, if desired.



For Gen6 hardware, where chassis depth can be a limiting factor, 2RU horizontally mounted PDUs within the rack can be used in place of vertical PDUs. If front-mounted, partial depth Ethernet switches are deployed, horizontal PDUs can be installed in the rear of the rack directly behind the switches to maximize available rack capacity.



Cabling and Networking

With copper (CX4) Infiniband cables, the maximum cable length is limited to 10 meters. After factoring in cable dressing to maintain some level of organization and proximity within the racks and cable trays, all the racks containing Isilon nodes need to be in close physical proximity to each other, either in the same rack row or close by in an adjacent row.



Support for multi-mode fiber (SC) for both Infiniband and Ethernet extends the cable length limitation to 150 meters. This allows nodes to be housed on separate floors, or on the far side of a floor in a data center, if necessary. While solving the floor space problem, this has the potential to introduce new administrative and management issues. The table below shows the various optical and copper backend network cabling options available.

| Cable Type | Model    | Connector | Length | Ethernet Cluster | Infiniband Cluster |
|------------|----------|-----------|--------|------------------|--------------------|
| Copper     | 851-0253 | QSFP+     | 1m     | ✓                |                    |
| Copper     | 851-0254 | QSFP+     | 3m     | ✓                |                    |
| Copper     | 851-0255 | QSFP+     | 5m     | ✓                |                    |
| Optical    | 851-0224 | MPO       | 10m    | ✓                | ✓                  |
| Optical    | 851-0225 | MPO       | 30m    | ✓                | ✓                  |
| Optical    | 851-0226 | MPO       | 50m    | ✓                | ✓                  |
| Optical    | 851-0227 | MPO       | 100m   | ✓                | ✓                  |
| Optical    | 851-0228 | MPO       | 150m   | ✓                | ✓                  |

With large clusters, especially when the nodes may not be racked in a contiguous manner, having all the nodes and switches connected to serial console concentrators and remote power controllers is highly advised. However, to perform any physical administration or break/fix activity on nodes you must know where the equipment is located and have administrative resources available to access and service all locations.



As such, the following best practices are highly recommended:



  • Develop and update thorough physical architectural documentation.
  • Implement an intuitive cable coloring standard.
  • Be fastidious and consistent about cable labeling.
  • Use the appropriate length of cable for the run and create a neat 12” loop of any excess cable, secured with Velcro.
  • Observe appropriate cable bend ratios, particularly with fiber cables.
  • Dress cables and maintain a disciplined cable management ethos.
  • Keep a detailed cluster hardware maintenance log.
  • Where appropriate, maintain a ‘mailbox’ space for cable management.



Disciplined cable management and labeling for ease of identification is particularly important in larger Gen6 clusters, where the density of cabling is high. Each Gen6 chassis can require up to twenty-eight cables, as shown in the table below:

| Cabling Component    | Medium                                   | Cable Quantity per Gen6 Chassis |
|----------------------|------------------------------------------|---------------------------------|
| Back-end network     | 10 or 40 Gb Ethernet or QDR Infiniband   | 8                               |
| Front-end network    | 10 or 40 Gb Ethernet                     | 8                               |
| Management interface | 1 Gb Ethernet                            | 4                               |
| Serial console       | DB9 RS-232                               | 4                               |
| Power cord           | 110V or 220V AC power                    | 4                               |
| Total                |                                          | 28                              |

The recommendation for cabling a Gen6 chassis is as follows:

  • Split cabling in the middle of the chassis, between nodes 2 and 3.
  • Route Ethernet and Infiniband cables towards the lower side of the chassis.
  • Connect power cords for nodes 1 and 3 to PDU A and power cords for nodes 2 and 4 to PDU B.
  • Bundle network cables with the AC power cords for ease of management.
  • Leave enough cable slack for servicing each individual node’s FRUs.



hardware_6.png



Consistent and meticulous cable labeling and management is particularly important in large clusters. Gen6 chassis that employ both front and back end Ethernet networks can include up to twenty Ethernet connections per 4RU chassis.



hardware_7.png



In each node’s compute module, there are two PCI slots for the Ethernet cards (NICs). Viewed from the rear of the chassis, in each node the right-hand slot (HBA Slot 0) houses the NIC for the front-end network, and the left-hand slot (HBA Slot 1) houses the NIC for the back-end network. In addition to this, there is a separate built-in 1Gb Ethernet port on each node for cluster management traffic.



While there is no requirement that node 1 align with port 1 on each of the backend switches, doing so can make cluster and switch management and troubleshooting considerably simpler. Even if exact port alignment is not possible, with large clusters, ensure that the cables are clearly labeled and connected to similar port regions on the backend switches.



Servicing and FRU Parts Replacement

Isilon nodes and the drives they contain have identifying LED lights to indicate when a component has failed and to allow proactive identification of resources. The ‘isi led’ CLI command can be used to proactively illuminate specific node and drive indicator lights to aid in identification.

Drive repair times depend on a variety of factors:

  • OneFS release (determines Job Engine version and how efficiently it operates)
  • System hardware (determines drive types, amount of CPU and RAM, etc)
  • File system: Amount of data, data composition (lots of small vs large files), protection, tunables, etc.
  • Load on the cluster during the drive failure

The best way to estimate future FlexProtect run-time is to use old repair run-times as a guide, if available.
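For example, prior FlexProtect run-times can be pulled from the Job Engine history and extrapolated. The sketch below is illustrative only: the exact isi job subcommands, flags, and output fields vary by OneFS release (the ones shown are assumptions, so confirm them with isi job --help on your cluster), and the linear scaling by drive size is just a first-order rule of thumb.

  # List recent job reports and pick out the FlexProtect entries (subcommand/flag syntax assumed; verify on your release)
  isi job reports list -v | grep -i -A 3 flexprotect

  # First-order estimate: scale a past repair time by the relative drive size,
  # e.g. a 4 TB drive that repaired in 10 hours suggests roughly 20 hours for a similar 8 TB failure
  past_hours=10; past_tb=4; new_tb=8
  awk -v h="$past_hours" -v p="$past_tb" -v n="$new_tb" 'BEGIN { printf "estimated repair time: %.1f hours\n", h * n / p }'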

Gen6 drives have a bay-grid nomenclature similar to that of the HD400, where A-E indicates the sled and 0-6 indicates the drive position within the sled. The drive closest to the front is 0, whereas the drive closest to the back is 2, 3, or 5, depending on the drive sled type.



For Gen5 and earlier hardware running OneFS 8.0 or prior, the isi_ovt_check CLI tool can be run on a node to verify the correct operation of the hardware.
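For instance, the check can be run across every node from a single session with isi_for_array, which executes a command on all nodes (a minimal sketch; confirm that isi_ovt_check is present on your node type and OneFS release before relying on it):

  # Run the hardware verification tool on each node and keep a copy of the output on /ifs
  isi_for_array 'isi_ovt_check' | tee /ifs/data/ovt_check_$(date +%Y%m%d).log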

Hardware Refresh

When it comes to updating and refreshing hardware in a large cluster, swapping nodes can be a lengthy process of somewhat unpredictable duration. Data has to be evacuated from each old node during the SmartFail process prior to its removal, then restriped and balanced across the new hardware’s drives. During this time there will also be potentially impactful group changes as new nodes are added and old ones removed. An alternative, more efficient approach can often be to swap the drives into new chassis. In addition to being considerably faster, the drive-swapping process concentrates the disruption into a single whole-cluster-down event. Estimating the time to complete a drive swap, or ‘disk tango’, is simpler and more accurate, and the process can typically be completed in a single maintenance window.



For Gen 5 and earlier 4RU nodes, a disk tango can be a complex procedure due to the large number of drives per node (36 or 60 drives).



With Gen 6 chassis, the available hardware ‘tango’ options are expanded and simplified. Given the modular design of these platforms, the compute and chassis tango strategies typically replace the disk tango:



| Replacement Strategy | Component             | Gen 4/5 | Gen 6 | Description                                                                                            |
|----------------------|-----------------------|---------|-------|--------------------------------------------------------------------------------------------------------|
| Disk tango           | Drives / drive sleds  | ✓       | ✓     | Swapping out data drives or drive sleds                                                                 |
| Compute tango        | Gen6 compute modules  |         | ✓     | Rather than swapping out the twenty drive sleds, it’s usually cleaner to exchange the four compute modules |
| Chassis tango        | Gen6 chassis          |         | ✓     | Typically only required if there’s an issue with the chassis mid-plane                                  |



Note that any of the above ‘tango’ procedures should only be executed under the recommendation and supervision of Isilon support.


Large Dataset Design – Hardware Layout & Installation Considerations

In this next article in the series, we’ll take a look at some of the significant aspects of large cluster physical design and hardware installation.



Most Isilon nodes utilize a 35 inch depth chassis and will fit in a standard depth data center cabinet. However, high capacity models such as the HD400 and A2000 have 40 inch depth chassis and require extended depth cabinets such as the APC 3350 or Dell EMC Titan-HD rack.



hardware_1.png

Additional room must be provided for opening the FRU service trays at the rear of the nodes and, in Gen6 hardware, the disk sleds at the front of the chassis. Isilon nodes are either 2RU or 4RU in height (with the exception of the 1RU diskless accelerator and backup accelerator nodes).



Note that the Isilon A2000 nodes can also be purchased as a 7.2PB turnkey pre-racked solution.



Weight is another critical factor to keep in mind. An individual 4RU chassis can weigh up to around 300 lbs, and the floor tile capacity for each individual cabinet or rack must be kept in mind. For the large archive node styles (HD400 and A2000), the considerable node weight may prevent racks from being fully populated with Isilon equipment. If the cluster uses a variety of node types, installing the larger, heavier nodes at the bottom of each rack and the lighter chassis at the top can help distribute weight evenly across the cluster racks’ floor tiles.



There are no lift handles on a Gen6 chassis. However, the drive sleds can be removed to provide handling points if no lift is available. With all the drive sleds removed, but leaving the rear compute modules inserted, the chassis weight drops to a more manageable 115lbs. It is strongly recommended to use a lift for installation of Gen6 chassis and the 4RU earlier generation nodes.

Ensure that smaller Ethernet switches draw cool air from the front of the rack, not from inside the cabinet, as they are shorter than the IB switches. This can be achieved either with switch placement or by using rack shelving. Cluster backend switches ship with the appropriate rails (or tray) for proper installation of the switch in the rack. These rail kits are adjustable to fit NEMA front-rail to rear-rail spacing ranging from 22 in to 34 in.

Note that the Celestica Ethernet switch rails are designed to overhang the rear NEMA rails to align the switch with the Generation 6 chassis at the rear of the rack. These require a minimum clearance of 36 in from the front NEMA rail to the rear of the rack, in order to ensure that the rack door can be closed.

Consider the following large cluster topology, for example:



hardware_2.png

This contiguous eleven-rack architecture is designed to scale up to ninety-six 4RU nodes as the environment grows, while keeping cable management simple and taking the considerable weight of the Infiniband cables off the connectors as much as possible.



Best practices include:



  • Pre-allocate and reserve adjacent racks in the same aisle to fully accommodate the anticipated future cluster expansion.
  • Reserve an empty 4RU ‘mailbox’ slot above the center of each rack for pass-through cable management.
  • Dedicate the central rack in the group for the back-end and front-end switches – in this case rack F (image below).



Below, the two top Ethernet switches are for front-end connectivity and the lower two Infiniband switches handle the cluster’s redundant back-end connections.



hardware_3.png



Image showing cluster Front and Back-end Switches (Rack F Above)



The 4RU “mailbox” space is utilized for cable pass-through between node racks and the central switch rack. This allows cabling runs to be kept as short and straight as possible.



hardware_4.png



Rear of Rack View Showing Mailbox Space and Backend Network Cabling (Rack E Above)



Excess cabling can be neatly stored in 12” service coils on a cable tray above the rack, if available, or at the side of the rack as illustrated below.



hardware_5.png

Rack Side View Detailing Excess Cable Coils (Rack E Above)



Successful large cluster infrastructures depend heavily on the proficiency of the installer and their optimizations for maintenance and future expansion.



Note that for Hadoop workloads, Isilon is compatible with the rack awareness feature of HDFS to provide balancing in the placement of data. Rack locality keeps the data flow internal to the rack.
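As a sketch of the cluster-side configuration, OneFS 8.x exposes virtual HDFS racks under the ‘isi hdfs racks’ CLI namespace, which map client IP ranges to the IP pools of the nodes in the same physical rack. The rack name, access zone, IP range, and pool below are hypothetical examples, and the exact option names should be verified against your OneFS release:

  # Create a virtual rack for the HDFS clients in one physical rack and bind it to a local IP pool (example values)
  isi hdfs racks create /rack0 --zone System --client-ip-ranges 10.20.30.0-10.20.30.255 --ip-pools subnet0:pool0

  # Confirm the mapping
  isi hdfs racks list --zone System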


Re: Remove Virtual Fabric

Hello,

I need help merging two switches where Virtual Fabrics (VF) needs to be disabled.

Below is our current Brocade switch configuration:

Model: Brocade 6520

Switches:

Switch_1

Switch_2

Zoning: Soft

VF > Enabled

1) FID: 128

> cfgzone: Not Defined

> Fabric: n/a

2) FID: 2

> cfgzone: Zoning_A (Defined and Active)

> Fabric: Switch_1

3) FID: 6

> cfgzone: Zoning_B (Defined and Active)

> Fabric: Switch_1 & Switch_2

All FIDs, i.e. 128, 2, 6 have ports connected and zoned.

Scenario 1: Disable Virtual Fabrics

(A) We need to get rid of the virtual fabrics, i.e. FID 2, FID 6, and finally FID 128.

(B) I need the default switch in the fabric with no impact to the current zones and aliases.

> I am ok with Downtime of assets connected

> I am ok with creating a New Defined Zoning Configuration.

Scenario 2: Keep Virtual Fabrics enabled, but delete FID 2 & FID 6, using just the default FID 128

> I am ok with Downtime of assets connected

> I am ok with creating a New Defined Zoning Configuration.

> What will happen to zones and aliases when ports are moved from FID 2 & FID 6 to FID 128?

> What will happen to zones and aliases when, finally, all ports are in FID 128 and we disable Virtual Fabrics?

A step-by-step process to achieve Scenarios 1 & 2 would be highly appreciated.

Thanks


Re: Brocade SW 6520 Domain ID

I have two new Brocade 6520 switches acting as standalone; I won't merge them.

The default domain ID is 1.

I want to learn the best practices for setting the domain ID on standalone switches. Is there any guidance on whether one Brocade 6520 domain ID must be greater or smaller than the other? For example, set Brocade 6520-1 to domain ID 3 and Brocade 6520-2 to domain ID 4.

Does domain ID ranking matter?

Do you recommend using or leaving the default ID, or changing it just as a best practice?

Best regards



Large Dataset Design – Hardware Considerations

In the previous blog article, we looked at some of the architectural drivers and decision points when designing clusters to support large datasets. To build on this theme, the next couple of articles will focus on cluster hardware considerations at scale.

A key decision for performance, particularly in a large cluster environment, is the type and quantity of nodes deployed. Heterogeneous clusters can be architected with a wide variety of node styles and capacities, in order to meet the needs of a varied data set and wide spectrum of workloads. These node styles encompass several hardware generations, and fall loosely into four main categories or tiers.



  • Extreme performance (all-flash)
  • Performance
  • Hybrid/Utility
  • Archive

While heterogeneous clusters can easily include multiple hardware classes and configurations, with a minimum of three of each, the best practice of simplicity for building large clusters holds true here too. The smaller the disparity in hardware styles across the cluster, the less opportunity there is for overloading, or ‘bullying’, the more capacity-oriented nodes. Some points to consider are:



  • Ensure all nodes contain at least one SSD.
  • 20 nodes is the maximum number that OneFS will stripe data across.
  • At a node pool size of 40 nodes, Gen6 hardware achieves sled-, chassis-, and neighborhood-level protection.
  • When comparing equivalent Gen 6 and earlier generations and types, consider the number of spindles rather than just overall capacity.



Consider the physical cluster layout and environmental factors when designing and planning for a large cluster installation. These factors include:



  • Redundant power supply
  • Airflow and cooling
  • Rack space requirements
  • Floor tile weight constraints
  • Networking requirements
  • Cabling distance limitations



The following table details the physical dimensions, weight, power draw, and thermal properties for the range of Gen 6 chassis:



| Model | Tier                  | Height           | Width            | Depth             | RU  | Weight           | Max Watts | Normal Watts | Max BTU | Normal BTU |
|-------|-----------------------|------------------|------------------|-------------------|-----|------------------|-----------|--------------|---------|------------|
| F800  | All-flash performance | 4U (4 x 1.75 in) | 17.6 in / 45 cm  | 35 in / 88.9 cm   | 4RU | 169 lbs (77 kg)  | 1764      | 1300         | 6019    | 4436       |
| H600  | Performance           | 4U (4 x 1.75 in) | 17.6 in / 45 cm  | 35 in / 88.9 cm   | 4RU | 213 lbs (97 kg)  | 1990      | 1704         | 6790    | 5816       |
| H500  | Hybrid/Utility        | 4U (4 x 1.75 in) | 17.6 in / 45 cm  | 35 in / 88.9 cm   | 4RU | 248 lbs (112 kg) | 1906      | 1312         | 6504    | 4476       |
| H400  | Hybrid/Utility        | 4U (4 x 1.75 in) | 17.6 in / 45 cm  | 35 in / 88.9 cm   | 4RU | 242 lbs (110 kg) | 1558      | 1112         | 5316    | 3788       |
| A200  | Archive               | 4U (4 x 1.75 in) | 17.6 in / 45 cm  | 35 in / 88.9 cm   | 4RU | 219 lbs (100 kg) | 1460      | 1052         | 4982    | 3584       |
| A2000 | Archive               | 4U (4 x 1.75 in) | 17.6 in / 45 cm  | 39 in / 99.06 cm  | 4RU | 285 lbs (129 kg) | 1520      | 1110         | 5186    | 3788       |

Isilon’s backend network is analogous to a distributed systems bus. Each node has two backend interfaces for redundancy that run in an active/passive configuration. The primary interface is connected to the primary switch, and the secondary interface to a separate switch.



Older clusters utilized DDR Infiniband controllers, which required copper CX4 cables with a maximum cable length of 10 meters. After factoring in cable dressing to maintain some form of organization within the racks and cable trays, all the racks with Isilon nodes needed to be in close physical proximity to each other, either in the same rack row or close by in an adjacent row.



Newer generation nodes use either QDR Infiniband or 10/40 Gb Ethernet over multi-mode fiber (SC), which extends the cable length limitation to 100 meters. This means that a cluster can now span multiple rack rows, floors, and even buildings, if necessary. This solves the floor space problem but introduces new ones: to perform any physical administration activity on nodes, you must know where the equipment is located and potentially have administrative resources in both locations, or travel back and forth between multiple locations.



Ethernet Backend

The table below shows the various Isilon node types and their respective backend network support. As we can see, Infiniband is the common denominator for the backend interconnect, so it is the required network type for all legacy clusters that contain Gen5 and earlier node types. For new Gen6 deployments, Ethernet is the preferred medium, particularly for large clusters.

| Node Type / Backend Network | F800 | H600 | H500 | H400 | A200 | A2000 | S210 | X210 | X410 | NL410 | HD400 |
|-----------------------------|------|------|------|------|------|-------|------|------|------|-------|-------|
| 10 Gb Ethernet              |      |      |      | ✓    | ✓    | ✓     |      |      |      |       |       |
| 40 Gb Ethernet              | ✓    | ✓    | ✓    |      |      |       |      |      |      |       |       |
| Infiniband (QDR)            | ✓    | ✓    | ✓    | ✓    | ✓    | ✓     | ✓    | ✓    | ✓    | ✓     | ✓     |

Currently only Dell EMC Isilon approved switches are supported for backend Ethernet and IB cluster interconnection.

  • 40GbE is supported for the F800, H600, and H500 nodes.
    • Celestica D4040 – 32 ports
    • Arista DCS-7308 – 32-64 ports, 64-144 ports with up to 3 additional line cards

| Vendor    | Model    | Isilon Model Code | Backend Port Qty        | Port Type  | Rack Units | 40 GbE Nodes                                                     | Mixed Nodes (10 & 40 GbE)                                    |
|-----------|----------|-------------------|-------------------------|------------|------------|------------------------------------------------------------------|--------------------------------------------------------------|
| Celestica | D4040    | 851-0259          | 32                      | All 40 GbE | 1          | Less than 32                                                     | Supports breakout cables: total 96 x 10 GbE nodes            |
| Arista    | DCS-7308 | 851-0261          | 64                      | All 40 GbE | 13         | Greater than 32 and less than 64 (includes two 32-port line cards) | No breakout cables, but supports addition of 10 GbE line card |
| Arista    | –        | 851-0282          | Leaf upgrade (32 ports) | All 40 GbE | –          | Greater than 64 and less than 144 (max 3 leaf upgrades)          | –                                                            |

  • 10GbE is supported for the A200 and A2000 nodes intended for Archive workflows
    • Celestica D2024 – 24 ports
    • Celestica D2060 – 24-48 ports
    • Arista DCS-7304 – 48-96 ports, 96-144 ports with one additional line card

| Vendor    | Model    | Isilon Model Code | Backend Port Qty        | Port Type               | Rack Units | All 10 GbE Nodes                                    | Mixed Nodes (10 & 40 GbE)       |
|-----------|----------|-------------------|-------------------------|-------------------------|------------|------------------------------------------------------|---------------------------------|
| Celestica | D2024    | 851-0258          | 24                      | 24 x 10 GbE, 2 x 40 GbE | 1          | Up to 24 nodes                                       | Not supported                   |
| Celestica | D2060    | 851-0257          | 48                      | 48 x 10 GbE, 6 x 40 GbE | 1          | 24 to 48 nodes                                       | Not supported                   |
| Arista    | DCS-7304 | 851-0260          | 96                      | 48 x 10 GbE, 4 x 40 GbE | 8          | 48 to 96 nodes (two 48-port line cards included)     | 40 GbE line card can be added   |
| Arista    | –        | 851-0283          | Leaf upgrade (48 ports) | –                       | –          | 96 to 144 nodes (max 1 leaf upgrade)                 | –                               |

Be aware that the use of patch panels is not supported for Isilon cluster backend connections, regardless of overall cable lengths. All connections must be a single link, single cable directly between the node and backend switch. Also, Ethernet and Infiniband switches must not be reconfigured or used for any traffic beyond a single cluster.



Infiniband Backend

As the cluster grows, cable length limitations can become a challenge. A review of the current rack layout and node location is a great exercise to avoid downtime.



  • To upgrade an Infiniband switch, unplug the IB cable from the switch side first. Be aware that there is power on the cable, and an electrical short or static discharge can fry the IB card. Use of a static wrist strap for grounding is strongly encouraged.
  • It is recommended to upgrade to OneFS 8.0.0.6 or later, which has an enhanced Infiniband backend throttle detection and back-off algorithm.
  • Ensure the IB switch is up to date with the latest firmware.
  • The CELOG events and alerts for a cluster’s Infiniband backend are fairly limited. For large clusters with managed switches, the recommendation is to implement additional SNMP monitoring and health checks for the backend (see the sketch after this list).
  • A pair of redundant backend switches for the exclusive use of a single cluster is a hard requirement.
  • If the cluster is backed by Intel 12800 IB switches, periodic switch reboots are recommended.
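A minimal example of such an external health check, assuming a managed backend switch that exposes the standard IF-MIB over SNMP (the community string ‘public’ and the hostname backend-sw-1 are placeholders):

  # Poll the operational status of every port on the backend switch; down(2) indicates a dead link
  snmpwalk -v2c -c public backend-sw-1 IF-MIB::ifOperStatus

  # In practice, restrict the check to the ports actually cabled to nodes, since unused ports also report down(2)
  snmpget -v2c -c public backend-sw-1 IF-MIB::ifOperStatus.12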

For Infiniband clusters that are anticipated to grow beyond 48 nodes, the current large cluster switches are the 6RU Mellanox SX6506 and the 9RU Mellanox SX6512. The details of these two switches are outlined in the table below:

| Vendor   | Model  | Backend Port Qty | Port Type      | Rack Units | Cable Type            |
|----------|--------|------------------|----------------|------------|-----------------------|
| Mellanox | SX6506 | 90               | FDR Infiniband | 6RU        | QSFP+ copper or fiber |
| Mellanox | SX6512 | 144              | FDR Infiniband | 9RU        | QSFP+ copper or fiber |

Further information on monitoring, diagnosing and resolving backend Infiniband network issues is available in the Isilon Infiniband troubleshooting guide.



ECS – xDoctor: “One or more network interfaces are down or missing”

Article Number: 503814 Article Version: 5 Article Type: Break Fix



Elastic Cloud Storage, ECS Appliance, ECS Appliance Hardware



xDoctor is reporting the below warning:

admin@ecs1:~> sudo -i xdoctor --report --archive=2017-09-01_064438 -CEW
Displaying xDoctor Report (2017-09-01_064438) Filter:['CRITICAL', 'ERROR', 'WARNING'] ...
Timestamp = 2017-09-01_064438
Category = platform
Source = ip show
Severity = WARNING
Node = 169.254.1.1
Message = One or more network interfaces are down or missing
Extra = {'169.254.1.4': ['slave-0']}

Connect to the node in question (169.254.1.4) and see that, in this case, the connection to the rabbit switch is down:

admin@ecs4:~> sudo lldpcli show neighbor
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface: slave-1, via: LLDP, RID: 1, Time: 28 days, 16:42:58
  Chassis:
    ChassisID: mac 44:4c:a8:f5:63:ad
    SysName: hare
    SysDescr: Arista Networks EOS version 4.16.6M running on an Arista Networks DCS-7050SX-64
    MgmtIP: 192.168.219.253
    Capability: Bridge, on
    Capability: Router, off
  Port:
    PortID: ifname Ethernet12
    PortDescr: MLAG group 4
-------------------------------------------------------------------------------
Interface: private, via: LLDP, RID: 2, Time: 28 days, 16:42:44
  Chassis:
    ChassisID: mac 44:4c:a8:d1:77:b9
    SysName: turtle
    SysDescr: Arista Networks EOS version 4.16.6M running on an Arista Networks DCS-7010T-48
    MgmtIP: 192.168.219.251
    Capability: Bridge, on
    Capability: Router, off
  Port:
    PortID: ifname Ethernet4
    PortDescr: Nile Node04 (Data)
-------------------------------------------------------------------------------
admin@ecs4:~>

Check public interface config:

admin@ecs4:~> sudo cat /etc/sysconfig/network/ifcfg-public
BONDING_MASTER=yes
BONDING_MODULE_OPTS="miimon=100 mode=4 xmit_hash_policy=layer3+4"
BONDING_SLAVE0=slave-0
BONDING_SLAVE1=slave-1
BOOTPROTO=static
IPADDR=10.x.x.x/22
MTU=1500
STARTMODE=auto
admin@ecs4:~>
admin@ecs4:~> viprexec -i "grep Mode /proc/net/bonding/public"Output from host : 192.168.219.1Bonding Mode: IEEE 802.3ad Dynamic link aggregationOutput from host : 192.168.219.2Bonding Mode: IEEE 802.3ad Dynamic link aggregationOutput from host : 192.168.219.3Bonding Mode: IEEE 802.3ad Dynamic link aggregationOutput from host : 192.168.219.4Bonding Mode: IEEE 802.3ad Dynamic link aggregationadmin@ecs4:~> 

Check interface link status:

admin@ecs4:~> viprexec -i 'ip link show | egrep "slave-|public"'
Output from host : 192.168.219.1
bash: public: command not found
3: slave-0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
Output from host : 192.168.219.2
bash: public: command not found
3: slave-0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
Output from host : 192.168.219.3
bash: public: command not found
4: slave-0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
Output from host : 192.168.219.4
bash: public: command not found
2: slave-0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master public state DOWN mode DEFAULT group default qlen 1000
5: slave-1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master public state UP mode DEFAULT group default qlen 1000
10: public: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
admin@ecs4:~>
admin@ecs4:~> sudo ethtool slave-0
Settings for slave-0:
    Supported ports: [ FIBRE ]
    Supported link modes: 10000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: No
    Advertised link modes: 10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: Unknown!
    Duplex: Unknown! (255)
    Port: Other
    PHYAD: 0
    Transceiver: external
    Auto-negotiation: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000007 (7)
                           drv probe link
    Link detected: no

Refer to the ECS Hardware Guide for details of the specific port on the switch.

The ECS Hardware Guide is available in SolVe as well as at support.emc.com:

https://support.emc.com/docu62946_ECS-3.1-Hardware-Guide.pdf?language=en_US

Port 12 on the rabbit switch is connected to the slave-0 interface of node 4.

Connect to rabbit with admin credentials from a different node and check interface status:

admin@ecs1:~> ssh rabbit
Password:
Last login: Tue Sep 5 11:13:30 2017 from 192.168.219.1
rabbit>show interfaces Ethernet12
Ethernet12 is down, line protocol is notpresent (notconnect)
  Hardware is Ethernet, address is 444c.a8de.8f83 (bia 444c.a8de.8f83)
  Description: MLAG group 4
  Member of Port-Channel4
  Ethernet MTU 9214 bytes , BW 10000000 kbit
  Full-duplex, 10Gb/s, auto negotiation: off, uni-link: n/a
  Loopback Mode : None
  0 link status changes since last clear
  Last clearing of "show interface" counters never
  5 minutes input rate 0 bps (0.0% with framing overhead), 0 packets/sec
  5 minutes output rate 0 bps (0.0% with framing overhead), 0 packets/sec
     0 packets input, 0 bytes
     Received 0 broadcasts, 0 multicast
     0 runts, 0 giants
     0 input errors, 0 CRC, 0 alignment, 0 symbol, 0 input discards
     0 PAUSE input
     0 packets output, 0 bytes
     Sent 0 broadcasts, 0 multicast
     0 output errors, 0 collisions
     0 late collision, 0 deferred, 0 output discards
     0 PAUSE output
rabbit>

The above interface status also shows the link down, and there has never been any I/O traffic on this interface.



The SFP was not properly seated during the install phase.



The customer was able to re-seat the SFP. After that, the link was automatically detected and came online.

Otherwise, a CE needs to go onsite for a physical inspection of the SFP module, cable, etc., connecting to the slave-x interface on the node.
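Once the SFP has been re-seated or replaced, the same commands used above can be re-run from the affected node to confirm that the bond member is healthy again, for example:

  # slave-0 should now show LOWER_UP and state UP
  ip link show slave-0

  # ethtool should report the negotiated speed and "Link detected: yes"
  sudo ethtool slave-0 | egrep "Speed|Link detected"

  # the rabbit switch should reappear as an LLDP neighbor on slave-0
  sudo lldpcli show neighbor | grep -A 5 "Interface: slave-0"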

Extract from ECS Hardware Guide:

Network cabling

The network cabling diagrams apply to a U-Series, D-Series, or C-Series ECS Appliance in a Dell EMC or customer-provided rack.

To distinguish between the three switches, each switch has a nickname:
  • Hare: 10 GbE public switch is at the top of the rack in a U- or D-Series or the top switch in a C-Series segment.
  • Rabbit: 10 GbE public switch is located just below the hare in the top of the rack in a U- or D-Series or below the hare switch in a C-Series segment.
  • Turtle: 1 GbE private switch that is located below rabbit in the top of the rack in a U-Series or below the hare switch in a C-Series segment.
U- and D-Series network cabling

The following figure shows a simplified network cabling diagram for an eight-node configuration for a U- or D-Series ECS Appliance as configured by Dell EMC or a customer in a supplied rack. Following this figure, other detailed figures and tables provide port, label, and cable color information.

Switch cabling

The ECS hardware and cabling guide shows the rabbit and hare switches labeled as switch 1 and switch 2, which can cause confusion when the cabling needs to be verified.

See the table below for matching switches and ports, and the picture that also shows the appropriate switch port numbers.

Switch 1 = Rabbit = Bottom switch

Switch 2 = Hare = Top switch

Node ports:

Slave-0 = P01 = right port – connects to Switch 1 / Rabbit / Bottom switch

Slave-1 = P02 = left port – connects to Switch 2 / Hare / Top switch
