Cisco Nexus 5600 and 6000 Series Switches Fibre Channel over Ethernet Denial of Service Vulnerability

A vulnerability in the Fibre Channel over Ethernet (FCoE) protocol implementation in Cisco NX-OS Software could allow an unauthenticated, adjacent attacker to cause a denial of service (DoS) condition on an affected device.

The vulnerability is due to an incorrect allocation of an internal interface index. An adjacent attacker with the ability to submit a crafted FCoE packet that crosses affected interfaces could trigger this vulnerability. A successful exploit could allow the attacker to cause a packet loop and high throughput on the affected interfaces, resulting in a DoS condition.

Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190306-nexus-fbr-dos

Security Impact Rating: Medium

CVE: CVE-2019-1595


Re: HOW TO CONNECT HOST DIRECTLY TO THE VNX5400 without SAN switch

For direct attach, it’s the same as for a switch, except you can only connect as many paths as you have HBAs (Fibre Channel or iSCSI). If your host has one HBA, you can only connect one cable to the array. So you should have at least two HBAs, so you can connect one HBA to SP A and the other HBA to SP B for failover protection.

Go to this link and select VNX Series, then in the VNX Server Tasks, select Attach a Server and follow the wizard.

https://mydocuments.emc.com/

glen


Re: Unable to get I/O Ethernet modules working in VNX5300

So I have reimaged and updated my VNX5300 (out of support) to the latest FLARE release, 32.000.5.249, and it still doesn’t seem to support any of the many Ethernet modules I insert, so that I can run iSCSI and SAN Copy over Ethernet.

I have this problem with all of my lab arrays, it seems, regardless of software version. Surely these cards are supported by the latest software; what am I doing wrong here?

Card p/ns:

303-195-100c-01 (dual SFP 10 Gb modules)

and 303-121-100a (4 x 1 Gb Ethernet) – I have tried six of these modules, same result.



I put in a 4 x 8 Gb FC module and it said Present – so the card slots seem to be fine. A 4 Gb FC module said Not Supported.



I’m hoping there’s just some enabler or something I am missing; please help if you’ve seen this before.



[Attached screenshot: emc_ethernet_nogo.png]


SG200 – LAN/WAN/Bridge – Intercepting Interfaces


Are the interfaces identified/labeled LAN and WAN solely used for inline deployments?  On an SG200, my 0:0 interface is used for management of the appliance, but I’d like to use one of the other interfaces to intercept traffic.  I see that the 2:0 and 2:1 interfaces (WAN/LAN respectively) are pre-configured into a bridge group.  It also seems as though I can put IP addresses on those interfaces.  So, if I delete the bridge group, can I intercept on the LAN interface and route out to the internet on the WAN interface?  Or is that not the way an explicit deployment is intended to be configured?  Is it common to simply use one of the unused (non-management) interfaces to both intercept and route out to the internet?



Re: Gen4T model nums for 10G options

Hello Schlankae, if you are still looking into this issue, please find the answer below. If not, perhaps my answer can help someone else.

Kindly note that this part (AVM10GBMCU : AVAMAR G4T QUAD 10GB CU) is the SLIC (I/O module) with 4 x 10GBase-T copper RJ45 ports. It can be used with CAT5/6 cables ordered separately. This part can be ordered in MyQuotes; when you select the 10GBase-T option, it is automatically added to the summary. This part is equivalent to the following FRU part: (303-254-100C-00 : PCB TLA QUAD-PORT 10G CU IO MODULE)

On the other hand, if you choose the 10GbE Twinax/Optical option, the following part number is added to the summary (AVM10GBMOPT-SFP : AVAMAR G4T QUAD 10GB OPT+SFP MANF INSTL), which is equivalent to the FRU part (303-242-100C-01 : QUAD-PORT 10GBE ETHERNET TOE ISCSI Local)

The same info is confirmed by a message in MyQuotes, which states:

  • If you select 10G Base-T, each node will be configured with two copper I/O SLICs with four 10G Base-T ports on each I/O SLIC.
  • If you select 10GbE Twinax/Optical, each node will be configured with one copper SLIC with four 10G Base-T ports and one optical I/O SLIC with four 10GbE ports that support both copper direct-attach cables and optical media via SFP+. Four SFP+ modules are included for each node.

For more info on any Avamar FRU part numbers, please check the following KB article:

Part Numbers for Avamar Data Store Hardware, Article Number 000467214: https://emcservice–c.na55.visual.force.com/apex/KB_BreakFix_clone?id=kA2j0000000R5yp

The same article suggests the following CAT6 cable for use with the 10GBase-T ports:

038-003-476 : 25ft white Cat6 LAN cable

This single cable should only be used to replace an external (customer) cable.

Hope this helps,

Hussein Baligh

Senior Sales Engineer Analyst


Introducing Dell EMC Ready Solutions for Microsoft WSSD: QuickStart Configurations for ROBO and Edge Environments

The QuickStart Configurations for ROBO and Edge environments are four-node, two-switch, highly available, end-to-end HCI configurations that simplify design, ordering, deployment, and support.

[Figure: High_Level_Overview.png]

Key Features:

  • Pre-configured and sized Small, Medium and Large solution templates to simplify ordering/sizing.
  • Available in two platform options – R640 All-Flash & R740XD Hybrid S2D Ready Nodes.
  • Includes two fully redundant, half-width Dell EMC S4112-ON switches in 1U form factor.
  • Includes optional ProDeploy and mandatory ProSupport services providing the same solution level support as S2D Ready Nodes.
  • Detailed QuickStart deployment guide to de-risk deployment and accelerate time to bring-up.
  • Additionally, the QuickStart configurations include a rack, PDUs, and blanking panels right-sized for the solution. These Data Center Infrastructure (DCI) solutions help our customers save time and resources, reduce effort, and improve the overall experience.



The network fabric for these QuickStart configurations implements a non-converged topology for the in-band/out-of-band management and storage networks. The RDMA-capable QLogic FastLinQ 41262 adapters carry the 25 GbE SFP28 storage traffic, while the rNDC provides the 10 GbE SFP+ bandwidth used for host management and VM network traffic.



[Figure: Non-Converged_Option-2 (1).png]

This optimized network architecture of the switch fabric ensures redundancy of both the storage and management networks. In this design, the storage networks are isolated to their respective switches (Storage 1 to TOR 1; Storage 2 to TOR 2). As storage traffic typically never traverses the customer LAN, the VLT bandwidth has been optimized to fully support redundancy of the management network up to the customer data center uplink.
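As an illustration only (not taken from the Ready Solution sample configurations linked below), the fragment below sketches what the VLT portion of such a fabric could look like in Dell EMC OS10 on one of the S4112-ON top-of-rack switches. The VLTi port range, the backup destination address (the peer switch's out-of-band management IP), and the VLT MAC are placeholders that would differ in a real deployment:

vlt-domain 1

 discovery-interface ethernet1/1/13-1/1/14

 backup destination 192.168.255.2

 vlt-mac 44:38:39:ff:00:01

 peer-routing

In this sketch the VLT interconnect carries only the management and VM VLANs; the storage VLANs stay local to their respective switch, matching the non-converged design described above.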



For sample switch configurations, see https://community.emc.com/docs/DOC-70310.



The operations guidance for Dell EMC Ready Solutions for Microsoft WSSD provides the instructions needed to perform the day-0 management and monitoring onboarding tasks, as well as instructions for performing life-cycle management of the Storage Spaces Direct cluster.
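For orientation only, the PowerShell below is a minimal sketch of the kind of day-0 cluster bring-up those instructions cover; the node names, cluster name, and static address are placeholders, and the QuickStart deployment guide remains the authoritative procedure:

# Validate the four S2D Ready Nodes, create the cluster, then enable Storage Spaces Direct
# (all names and the static address below are placeholders)

Test-Cluster -Node "Node1","Node2","Node3","Node4" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

New-Cluster -Name "ROBO-S2D" -Node "Node1","Node2","Node3","Node4" -NoStorage -StaticAddress 192.168.100.50

Enable-ClusterStorageSpacesDirect    # claims the eligible local drives and creates the storage pool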


Large Dataset Design – Environmental and Logistical Considerations

In this article, we turn our attention to some of the environmental and logistical aspects of cluster installation and management.



In addition to available rack space and the physical proximity of nodes, provision needs to be made for adequate power and cooling as the cluster expands. New generations of nodes typically deliver an increased storage density, which often magnifies the power draw and cooling requirements per rack unit.



The larger the cluster, the more disruptive downtime and reboots can be. To this end, the recommendation is for a large cluster’s power supply to be fully redundant and backed up with a battery UPS and/or a power generator. In the worst case, if a cluster does lose power, the nodes are protected internally by file system journals, which preserve any in-flight uncommitted writes. However, the time to restore power and reboot a large cluster can be considerable.



Like most data center equipment, the cooling fans in Isilon nodes and switches pull air from the front to the back of the chassis. To complement this, most data centers use a hot aisle/cold aisle rack configuration, where cool, low-humidity air is supplied in the aisle at the front of each rack or cabinet, either at floor or ceiling level, and warm exhaust air is returned at ceiling level in the aisle to the rear of each rack.



Given the high power draw and heat density of cluster hardware, some data centers are limited in the number of nodes each rack can support. For partially filled racks, the use of blank panels to cover the front and rear of any unfilled rack units can help to efficiently direct airflow through the equipment.



The use of intelligent power distribution units (PDUs) within each rack can facilitate the remote power cycling of nodes, if desired.



For Gen6 hardware, where chassis depth can be a limiting factor, 2RU horizontally mounted PDUs within the rack can be used in place of vertical PDUs. If front-mounted, partial depth Ethernet switches are deployed, horizontal PDUs can be installed in the rear of the rack directly behind the switches to maximize available rack capacity.



Cabling and Networking

With copper (CX4) Infiniband cables, the maximum cable length is limited to 10 meters. After factoring in cable dressing to maintain some level of organization and proximity within the racks and cable trays, all the racks with Isilon nodes need to be in close physical proximity to each other, either in the same rack row or close by in an adjacent row.



Support for multi-mode fiber (SC) for Infiniband and for Ethernet extends the cable length limitation to 150 meters. This allows nodes to be housed on separate floors or on the far side of a floor in a data center if necessary. While solving the floor space problem, this has the potential to introduce new administrative and management issues. The table below shows the various optical and copper backend network cabling options available.

| Cable Type | Model | Connector | Length | Ethernet Cluster | Infiniband Cluster |
|---|---|---|---|---|---|
| Copper | 851-0253 | QSFP+ | 1m | ✓ | |
| Copper | 851-0254 | QSFP+ | 3m | ✓ | |
| Copper | 851-0255 | QSFP+ | 5m | ✓ | |
| Optical | 851-0224 | MPO | 10m | ✓ | ✓ |
| Optical | 851-0225 | MPO | 30m | ✓ | ✓ |
| Optical | 851-0226 | MPO | 50m | ✓ | ✓ |
| Optical | 851-0227 | MPO | 100m | ✓ | ✓ |
| Optical | 851-0228 | MPO | 150m | ✓ | ✓ |

With large clusters, especially when the nodes may not be racked in a contiguous manner, having all the nodes and switches connected to serial console concentrators and remote power controllers is highly advised. However, to perform any physical administration or break/fix activity on nodes you must know where the equipment is located and have administrative resources available to access and service all locations.



As such, the following best practices are highly recommended:



  • Develop and update thorough physical architectural documentation.
  • Implement an intuitive cable coloring standard.
  • Be fastidious and consistent about cable labeling.
  • Use the appropriate length of cable for the run and create a neat 12” loop of any excess cable, secured with Velcro.
  • Observe appropriate cable bend ratios, particularly with fiber cables.
  • Dress cables and maintain a disciplined cable management ethos.
  • Keep a detailed cluster hardware maintenance log.
  • Where appropriate, maintain a ‘mailbox’ space for cable management.



Disciplined cable management and labeling for ease of identification is particularly important in larger Gen6 clusters, where the density of cabling is high. Each Gen6 chassis can require up to twenty-eight cables, as shown in the table below:

| Cabling Component | Medium | Cable Quantity per Gen6 Chassis |
|---|---|---|
| Back-end network | 10 or 40 Gb Ethernet or QDR Infiniband | 8 |
| Front-end network | 10 or 40 Gb Ethernet | 8 |
| Management interface | 1 Gb Ethernet | 4 |
| Serial console | DB9 RS-232 | 4 |
| Power cord | 110V or 220V AC power | 4 |
| Total | | 28 |

The recommendation for cabling a Gen6 chassis is as follows:

  • Split cabling in the middle of the chassis, between nodes 2 and 3.
  • Route Ethernet and Infiniband cables toward the lower side of the chassis.
  • Connect power cords for nodes 1 and 3 to PDU A and power cords for nodes 2 and 4 to PDU B.
  • Bundle network cables with the AC power cords for ease of management.
  • Leave enough cable slack for servicing each individual node’s FRUs.



[Figure: hardware_6.png]



Consistent and meticulous cable labeling and management is particularly important in large clusters. Gen6 chassis that employ both front and back end Ethernet networks can include up to twenty Ethernet connections per 4RU chassis.



[Figure: hardware_7.png]



In each node’s compute module, there are two PCI slots for the Ethernet cards (NICs). Viewed from the rear of the chassis, in each node the right-hand slot (HBA Slot 0) houses the NIC for the front-end network, and the left-hand slot (HBA Slot 1) houses the NIC for the back-end network. In addition to this, there is a separate built-in 1Gb Ethernet port on each node for cluster management traffic.



While there is no requirement that node 1 aligns with port 1 on each of the backend switches, it can certainly make cluster and switch management and troubleshooting considerably simpler. Even if exact port alignment is not possible in large clusters, ensure that the cables are clearly labeled and connected to similar port regions on the backend switches.



Servicing and FRU Parts Replacement

Isilon nodes and the drives they contain have identifying LED lights to indicate when a component has failed and to allow proactive identification of resources. The ‘isi led’ CLI command can be used to proactively illuminate specific node and drive indicator lights to aid in identification.

Drive repair times depend on a variety of factors:

  • OneFS release (determines Job Engine version and how efficiently it operates)
  • System hardware (determines drive types, amount of CPU and RAM, etc)
  • File system: Amount of data, data composition (lots of small vs large files), protection, tunables, etc.
  • Load on the cluster during the drive failure

The best way to estimate future FlexProtect run-time is to use old repair run-times as a guide, if available.

Gen 6 drives have a bay-grid nomenclature similar to that of the HD400, where A-E indicates the sled and 0-6 indicates the drive position within the sled. The drive closest to the front is 0, whereas the drive closest to the back is 2, 3, or 5, depending on the drive sled type.



For Gen5 and earlier hardware running OneFS 8.0 or prior, the isi_ovt_check CLI tool can be run on a node to verify the correct operation of the hardware.

Hardware Refresh

When it comes to updating and refreshing hardware in a large cluster, swapping nodes can be a lengthy process of somewhat unpredictable duration. Data has to be evacuated from each old node during the SmartFail process prior to its removal, and then restriped and balanced across the new hardware’s drives. During this time there will also be potentially impactful group changes as new nodes are added and the old ones removed. An alternative, efficient approach can often be to swap the drives into new chassis. In addition to being considerably faster, the drive swapping process concentrates the disruption into a single whole-cluster-down event. Estimating the time to complete a drive swap, or ‘disk tango’, is simpler and more accurate, and the process can typically be completed in a single maintenance window.



For Gen 5 and earlier 4RU nodes, a drive tango can be a complex procedure due to the large number of drives per node (36 or 60 drives).



With Gen 6 chassis, the available hardware ‘tango’ options are expanded and simplified. Given the modular design of these platforms, the compute and chassis tango strategies typically replace the disk tango:



| Replacement Strategy | Component | Gen 4/5 | Gen 6 | Description |
|---|---|---|---|---|
| Disk tango | Drives / drive sleds | ✓ | ✓ | Swapping out data drives or drive sleds |
| Compute tango | Gen6 compute modules | | ✓ | Rather than swapping out the twenty drive sleds, it’s usually cleaner to exchange the four compute modules |
| Chassis tango | Gen6 chassis | | ✓ | Typically only required if there’s an issue with the chassis mid-plane |


Note that any of the above ‘tango’ procedures should only be executed under the recommendation and supervision of Isilon support.


Dell EMC Unity: SANCopy Pull prefers iSCSI over FC if both protocols are configured (User Correctable)

Article Number: 525420 Article Version: 2 Article Type: Break Fix



Dell EMC Unity 300,Dell EMC Unity 300F,Dell EMC Unity 350F,Dell EMC Unity 400,Dell EMC Unity 400F,Dell EMC Unity 450F,Dell EMC Unity 500,Dell EMC Unity 500F,Dell EMC Unity 550F,Dell EMC Unity 600,Dell EMC Unity 600F,Dell EMC Unity 650F

For SANCopy Pull third-party migrations, if both FC and iSCSI connectivity are configured, iSCSI will be used by default for third-party LUN migrations, with no option to switch to FC.

This is Functioning as Designed (FAD).

To use FC for third-party migrations, delete the iSCSI path and connection using the following commands (see the example after the CLI help output below):

/remote/iscsi/connection/path and /remote/iscsi/connection

UnitySANCopySession:

Descriptor Id: 733

Session Name: sv_517-targetLUN934958-20171123T160628

Session Type: Pull

Session Status: Success

Is Incremental: no

Default Owner: B

Current Owner: B

Originator: Admin

Source LUN WWN: 60:06:01:60:0D:60:3C:00:C5:E1:16:5A:00:4D:16:65

User Connection Type: FC first <======================= User preference

Actual Connection Type: iSCSI <======================= Actual connection


Source LUN Location: Frontend

Source Start Block Address: 0

User Size to Copy: 0

Driver Size to Copy: 20971520

Auto-Restart: 1

Auto-Transfer: 0

Initial Throttle: 7

Current Throttle: 7

User Bandwith: 4294967295

User Latency: 4294967295

Session ID: 131559268119562900

Blocks Copied: 20971520

Percentage Copied: 100%

Session Bandwith: 32735

Session Latency: 0

Start Time: 2017-Nov-23 16:06:54

Completion Time: 2017-Nov-23 16:07:30

Source Failure Status: No failure

Buffer Size: 1024

Buffer Count: 4

I/O Transfer Count: 20479

Number of Destinations: 1


Below is the CLI help output for the command:

06:25:40 service@(none) spa:~> uemcli /remote/iscsi/connection/path –help

Storage system address: 127.0.0.1

Storage system port: 443

HTTPS connection

Manage and monitor iSCSI connection path. An iSCSI connection can have one or more iSCSI paths configured

Actions:

[Show]

/remote/iscsi/connection/path { -connection <value> | -connectionName <value> } show [ -output { nvp | csv | table [ -wrap ] } ] [ { -brief | -detail | -filter <value> } ]

[Create]

/remote/iscsi/connection/path create [ -async ] { -connection <value> | -connectionName <value> } [ -descr <value> ] -addr <value> [ -port <value> ] -if <value>

[Delete]

/remote/iscsi/connection/path { -connection <value> | -connectionName <value> } -index <value> delete [ -async ]
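Putting this together, a hedged example of the cleanup might look like the following; the management IP, connection ID, and path index are placeholders to be taken from your own 'show' output, and the connection-level delete syntax should be confirmed against the Unisphere CLI guide for your release. Delete each configured path first, then the connection itself:

uemcli -d <mgmt_ip> -u admin -p <password> /remote/iscsi/connection/path -connection <connection_id> -index <path_index> delete

uemcli -d <mgmt_ip> -u admin -p <password> /remote/iscsi/connection -id <connection_id> delete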


Re: Hyper-V Cluster 2012 R2 (12 Node ) , Unity 550F( with offloading ISCSI ) -> Disk Role Change over 70 Seconds !

Hi,

We have already opened a ticket with EMC, but as we can’t use our shiny new Unity at all at the moment, I would like to raise the problem with the community as well.

Current situation: we have a VNX connected to a Hyper-V 2012 cluster via iSCSI (10 Gb), using the Microsoft DSM rather than PowerPath. This combination has worked for four years: no drops, no problems, rock solid.

Now we have our new Unity, and the first thing we noticed is that LUN 0 no longer shows up as Disk -1 as it does on our two VNX arrays.

We configured the iSCSI part exactly like we did with the VNX; everything looks identical.

BUT if you move a Hyper-V disk role to another node in the cluster, it takes seconds until the role shows green again in the Failover Cluster Manager console, and even then there is no I/O traffic until around 80(!) seconds later.

So the VM “stops” for 80 seconds (as if it were frozen). For that reason the storage is not usable at the moment; the impact on production is such that it would be irresponsible to use the “expensive” Unity for this purpose.

With the same disk role change on a VNX5600, the traffic drops for only a fraction of a second and then carries on, like a breeze.

Has anyone had a similar experience with a Unity 550F or 300 (firmware 4.4)? The two Unity arrays are connected to each other in an active/active failover scenario (which is why they need a direct FC connection). One peculiarity of the Unity 550F is that its iSCSI Ethernet interfaces use offloading. Maybe there are also questions around PowerPath and “iSCSI with offloading” that could fit the problem description.

Maybe someone has a clue where to start the search, or has heard something from EMC (I read a PowerPath error description that sounded similar).

Regards and Thanks


Dell EMC Unity: How To change ethernet port settings (User Correctable)

Article Number: 502880 Article Version: 5 Article Type: How To



Unity Family

Ethernet port settings can be changed in Unisphere and via uemcli.

In Unisphere:

1. Select the Settings icon, and then select Access > Ethernet.

2. To change Ethernet port speed and MTU, select the relevant Ethernet port, and then select the Edit icon.

MTU has a default value. If you change the value, you must also change it on all components of the network path (switch ports and host). If you want to support jumbo frames, set the MTU size to 9000 bytes.
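As a quick sanity check (not part of the original article), an end-to-end jumbo frame path can be verified from a Windows host with a non-fragmenting ping sized for a 9000-byte MTU, i.e. 9000 minus 28 bytes of IP and ICMP headers; the target address below is a placeholder for an iSCSI interface on the array:

ping -f -l 8972 <array_iscsi_ip>

If the reply is “Packet needs to be fragmented but DF set”, some component in the path is still at the default MTU.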

[Screenshot: View and edit Ethernet port settings]

[Screenshot: Ethernet port properties]

This is also described in the Unisphere help.

via uemcli:

1. List all Ethernet ports:

The -detail option shows all attributes, so we can see what speeds and MTU sizes are available for a particular port.

C:>uemcli -d 10.10.10.11 -u admin -p mikemyers /net/port/eth show -detail

Storage system address: 10.10.10.11

Storage system port: 443

HTTPS connection

1: ID = spa_mgmt

Name = SP A Management Port

SP = spa

Protocols = mgmt

MTU size = 1500

Requested MTU size = 0

Available MTU sizes = 1500

Speed = 10 Gbps

Requested speed = auto

Available speeds = auto

Health state = OK (5)

Health details = “The port is operating normally.”

Connector type = RJ45

MAC address = xx:xx:xx:xx:xx:xx

SFP supported speeds =

SFP supported protocols =

2: ID = spa_eth2

Name = SP A Ethernet Port 2

SP = spa

Protocols = file, net, iscsi

MTU size = 1500

Requested MTU size = 0

Available MTU sizes = 1500, 9000

Speed = 10 Gbps

Requested speed = auto

Available speeds = auto

Health state = OK (5)

Health details = “The port is operating normally.”

Connector type = RJ45

MAC address = xx:xx:xx:xx:xx:xx

SFP supported speeds =

SFP supported protocols =

3: ID = spa_eth3

Name = SP A Ethernet Port 3

SP = spa

Protocols = file, net, iscsi

MTU size = 1500

Requested MTU size = 0

Available MTU sizes = 1500, 9000

Speed = 10 Gbps

Requested speed = auto

Available speeds = auto

Health state = OK (5)

Health details = “The port is operating normally.”

Connector type = RJ45

MAC address = xx:xx:xx:xx:xx:xx

SFP supported speeds =

SFP supported protocols =

4: ID = spa_eth1

Name = SP A Ethernet Port 1

SP = spa

Protocols = file, net, iscsi

MTU size = 1500

Requested MTU size = 0

Available MTU sizes = 1500, 9000

Speed = 10 Gbps

Requested speed = auto

Available speeds = auto

Health state = OK (5)

Health details = “The port is operating normally.”

Connector type = RJ45

MAC address = xx:xx:xx:xx:xx:xx

SFP supported speeds =

SFP supported protocols =

5: ID = spa_eth0

Name = SP A Ethernet Port 0

SP = spa

Protocols = file, net, iscsi

MTU size = 1500

Requested MTU size = 0

Available MTU sizes = 1500, 9000

Speed = 10 Gbps

Requested speed = auto

Available speeds = auto

Health state = OK (5)

Health details = “The port is operating normally.”

Connector type = RJ45

MAC address = xx:xx:xx:xx:xx:xx

SFP supported speeds =

SFP supported protocols =

2. Modify Ethernet port settings:

Set the port speed to auto:

C:>uemcli -d 10.10.10.11 -u admin -p mikemyers /net/port/eth -id spa_eth0 set -speed auto

Storage system address: 10.10.10.11

Storage system port: 443

HTTPS connection

ID = spa_eth0

Operation completed successfully.

Valid speed qualifiers are: Kbps, Mbps, Gbps (case-sensitive)

Example:

-speed 1Gbps would set the port to 1 Gbit/s.

Set the port MTU to 9000:

C:>uemcli -d 10.10.10.11 -u admin -p mikemyers /net/port/eth -id spa_eth0 set -mtuSize 9000

Storage system address: 10.10.10.11

Storage system port: 443

HTTPS connection

ID = spa_eth0

Operation completed successfully.

Please note:

– Only speed and MTU can be set.

– Only data Ethernet ports can be configured; Sync Replication Management ports cannot be modified.

If you try, you will get this error message:

Operation failed. Error code: 0x6000935

Only data Ethernet Port can be configured. (Error Code:0x6000935)
