Re: Booting a dormant Isilon

Hi there – looking for some pointers on resolving an error message I get when booting up a 4-node Isilon cluster that has lain dormant for the past 18 months. The nodes are currently running OneFS v7.0.1.1 B_7_0_1_34.

On initial boot of one of the nodes, the failure identified the cache batteries as having no charge (obviously, after 18 months dormant). I gave them enough time to charge, and they now appear to hold a charge and be in a healthy state (solid green lights). However, the problem has ‘moved’ and now expresses itself with the following detail:

Unable to save journal (isi_dumpjournal -s returned 1)

Error output:

Writing v4/v5/v6 journal

type Sector Bytes

Bad version: 0

isi_dumpjournal: Bad v4 superblock_sect 0x0

Unable to save journal /var/journal/journal:

DRAM journal is invalid, initiating recovery…

Attempting to save and restore journal to clear any ECC errors in unused DRAM blocks…

Restore failed

Could not recover journal.

I’ve worked with a wide range of storage arrays (VNX, VMAX, Storwize, 3PAR, EVA, EqualLogic, NetApp FAS and E-Series) but have never had the chance to work on an Isilon.

I’ve downloaded the SIM and am currently reading the manual. I can see right away that it states “an Isilon cluster starts with as few as three nodes”, so please be nice if this is one of those obvious errors and I just need to get all the nodes up and running.

Many thanks in advance!

Gremlin Hunter


VPLEX: Director crash due to one VPD ID having two volumes from an HP 3PAR array

Article Number: 500431 Article Version: 2 Article Type: Break Fix



VPLEX for All Flash,VPLEX Local,VPLEX Metro,VPLEX Series,VPLEX VS2,VPLEX GeoSynchrony 5.5 Service Pack 1 Patch 1,VPLEX GeoSynchrony 5.5 Service Pack 1 Patch 2,VPLEX GeoSynchrony 5.5 Service Pack 1,VPLEX GeoSynchrony 5.5 Patch 1

Director crash analysis showed there were 8 paths to the volume (instead of the best-practice 4 paths). Research showed two volumes sharing the same VPD ID.

The timeline shows that a write abort to a storage volume on the 3PAR back-end array caused a path renewal, which in turn caused the firmware on the director to crash.

The cause was that this particular storage volume had 8 active paths to each director, instead of the best-practice maximum of 4 paths per volume per director.

Firmware logs show that a volume belonging to a 3PAR array had more than the recommended 4 active paths per director:

128.221.253.38/cpu0/log:5988:W/”00601660e787223714-3″:664082:<4>2016/07/21 21:41:05.84: scsi/146 Active path count of 8 for LU VPD83T3:60002ac0000000000000012b00018ae7 is above recommended limit of 4

128.221.253.36/cpu0/log:5988:W/”00601660df7c22227-3″:664515:<4>2016/07/21 21:41:05.85: scsi/146 Active path count of 8 for LU VPD83T3:60002ac0000000000000012b00018ae7 is above recommended limit of 4

128.221.253.37/cpu0/log:5988:W/”0060166134af19113-2″:660154:<4>2016/07/21 21:41:05.84: scsi/146 Active path count of 8 for LU VPD83T3:60002ac0000000000000012b00018ae7 is above recommended limit of 4
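These warnings follow a fixed pattern, so a log bundle can be scanned for over-limit LUs programmatically. A minimal sketch in Python (the regex is inferred from the three lines above; it is not an official VPLEX log-format specification):

```python
import re

# Matches the "Active path count" warning emitted by the director firmware.
# Captures: path count, LU VPD ID, and the recommended limit.
PATH_COUNT_RE = re.compile(
    r"Active path count of (\d+) for LU (VPD83T3:[0-9a-f]+) "
    r"is above recommended limit of (\d+)"
)

def over_limit_lus(log_lines):
    """Return {vpd_id: max_path_count} for LUs exceeding the stated limit."""
    result = {}
    for line in log_lines:
        m = PATH_COUNT_RE.search(line)
        if m:
            count, vpd, limit = int(m.group(1)), m.group(2), int(m.group(3))
            if count > limit:
                result[vpd] = max(count, result.get(vpd, 0))
    return result

log = [
    'scsi/146 Active path count of 8 for LU '
    'VPD83T3:60002ac0000000000000012b00018ae7 is above recommended limit of 4',
]
print(over_limit_lus(log))
# {'VPD83T3:60002ac0000000000000012b00018ae7': 8}
```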

Firmware logs also contain SCSI write errors with the vendor-specific sense code 0xb/0x4b/0xc4 (according to 3PAR, this is a Data Phase Error for data-out buffer overflow):

128.221.252.35/cpu0/log:5988:W/”00601638004519113-2″:656582:<6>2016/07/21 21:29:27.89: scsi/27 tgt VPD83T3:60002ac0000000000000012b00018ae7 cmd 0x8a status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0xb info 0x0 alen 10 csi 0x0 asc 0x4b ascq 0xc4 fru 0x0 sks 0x0

128.221.252.35/cpu0/log:5988:W/”00601638004519113-2″:656583:<6>2016/07/21 21:31:42.56: scsi/27 tgt VPD83T3:60002ac0000000000000012b00018ae7 cmd 0x8a status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0xb info 0x0 alen 10 csi 0x0 asc 0x4b ascq 0xc4 fru 0x0 sks 0x0

128.221.253.35/cpu0/log:5988:W/”00601638004519113-2″:656587:<6>2016/07/21 21:36:09.97: scsi/27 tgt VPD83T3:60002ac0000000000000012b00018ae7 cmd 0x8a status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0xb info 0x0 alen 10 csi 0x0 asc 0x4b ascq 0xc4 fru 0x0 sks 0x0

128.221.252.35/cpu0/log:5988:W/”00601638004519113-2″:656589:<6>2016/07/21 21:38:49.26: scsi/27 tgt VPD83T3:60002ac0000000000000012b00018ae7 cmd 0x8a status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0xb info 0x0 alen 10 csi 0x0 asc 0x4b ascq 0xc4 fru 0x0 sks 0x0
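The sense triple in those entries (key 0xb, asc 0x4b, ascq 0xc4) can be pulled out the same way and checked against the vendor-specific code the article describes. A rough sketch, with the field layout assumed from the lines above:

```python
import re

# Pull the sense key / additional sense code / qualifier out of a log line.
SENSE_RE = re.compile(r"key (0x[0-9a-f]+) .* asc (0x[0-9a-f]+) ascq (0x[0-9a-f]+)")

# Per the article, 3PAR reports 0xb/0x4b/0xc4 as a Data Phase Error
# (data-out buffer overflow); SCSI defines key 0xb as ABORTED COMMAND.
DATA_PHASE_ERROR = (0x0B, 0x4B, 0xC4)

def sense_triple(line):
    """Return (key, asc, ascq) from a firmware log line, or None."""
    m = SENSE_RE.search(line)
    if not m:
        return None
    return tuple(int(g, 16) for g in m.groups())

line = ('scsi/27 tgt VPD83T3:60002ac0000000000000012b00018ae7 cmd 0x8a '
        'status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0xb info 0x0 '
        'alen 10 csi 0x0 asc 0x4b ascq 0xc4 fru 0x0 sks 0x0')
print(sense_triple(line) == DATA_PHASE_ERROR)
# True
```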

The problem arises from the fact that the 3PAR array is presenting multiple LUNs to the same Initiator-Target (IT) with the same VPD_ID. This is outside of normal behavior. In this situation, VPLEX behaves as if it has 8 paths to that SCSI unit when, in reality, it has 4 paths with 2 LUNs presented on each path.

This crash is likely due to the fact that there are multiple LUNs to the same IT with the same VPD_ID. When the LUN0 paths go down for renewal, the LUN2f paths serve I/O, and the firmware crashes when it tries to renew the LUN0 paths.


Workaround:

1. Verify on the 3PAR array that both LUNs on the paths mentioned in the firmware logs are the same device.

Example:

ITL: “x fcp i 0x50001442d01a9213 t 0x22020002ac018ae7 l 0x0000000000000000” AAO

ITL: “x fcp i 0x50001442d01a9213 t 0x22020002ac018ae7 l 0x0001000000000000” AAO

NOTE: If LUN0 and LUN1 are different devices on the 3PAR array, the user is AT RISK OF DATA CORRUPTION. The user should stop I/O to the 3PAR array and engage VPLEX Customer Support immediately, mentioning this article.

2. Follow the steps below to remove the extra LUNs that are presenting the same VPD_ID in each IT.

Once the customer confirms they are safe from data corruption, they can proceed with the workaround provided by support.

The 3PAR needs to be reconfigured so that it no longer presents LUNs in this manner.

To avoid further firmware crashes, fix the SCSI unit that has LUN0 first.

Example:

If VPLEX sees 8 paths to this SCSI unit, the customer needs to remove one of the two LUNs on each IT.

Note: The steps below use example volumes to illustrate the procedure.

3. If possible, pause I/O to the virtual volume built on top of the problem volume (for example, R39001_R49001_xxxxx, built on top of VPD83T3:6XXXXXX). This prevents another possible firmware crash.

4. Remove the extra LUN as the example below shows:

u: VPD83T3:60002ac0000000000000012b00018ae7

a: 3PARdata~VV~8ae7 name 8ae7 vend “3PARdata” prod “VV” rev “3221” type 3PARdata~VV

c: 8ae7

l: “x fcp i 0x50001442c01b3813 t 0x22020002ac018ae7” << 1st IT

l: “x fcp i 0x50001442c01b3813 t 0x23020002ac018ae7” << 2nd IT

l: “x fcp i 0x50001442c01b3812 t 0x20010002ac018ae7” << 3rd IT

l: “x fcp i 0x50001442c01b3812 t 0x21010002ac018ae7” << 4th IT

p: “x fcp i 0x50001442c01b3813 t 0x23020002ac018ae7 l 0x0000000000000000” AAO << 2nd IT

p: “x fcp i 0x50001442c01b3813 t 0x22020002ac018ae7 l 0x0001000000000000” AAO << 1st IT – this needs to be removed.

p: “x fcp i 0x50001442c01b3813 t 0x22020002ac018ae7 l 0x0000000000000000” AAO << 1st IT

p: “x fcp i 0x50001442c01b3812 t 0x20010002ac018ae7 l 0x0001000000000000” AAO << 3rd IT – this needs to be removed.

p: “x fcp i 0x50001442c01b3812 t 0x21010002ac018ae7 l 0x0000000000000000” AAO << 4th IT

p: “x fcp i 0x50001442c01b3812 t 0x21010002ac018ae7 l 0x0001000000000000” AAO << 4th IT – this needs to be removed.

p: “x fcp i 0x50001442c01b3812 t 0x20010002ac018ae7 l 0x0000000000000000” AAO << 3rd IT

p: “x fcp i 0x50001442c01b3813 t 0x23020002ac018ae7 l 0x0001000000000000” AAO << 2nd IT – this needs to be removed.
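The defect pattern in this listing (the same initiator/target pair carrying two different LUN numbers under one VPD ID) can be detected mechanically. A sketch that groups the "p:" path lines by IT nexus and flags any nexus with more than one LUN; the line format is taken from the example above and the field positions are assumptions:

```python
import re
from collections import defaultdict

# Parse a "p:" path line into (initiator WWN, target WWN, LUN).
P_LINE_RE = re.compile(
    r'p: "x fcp i (0x[0-9a-f]+) t (0x[0-9a-f]+) l (0x[0-9a-f]+)"'
)

def duplicate_lun_its(p_lines):
    """Map each (initiator, target) nexus to its LUNs; return those with >1."""
    luns = defaultdict(set)
    for line in p_lines:
        m = P_LINE_RE.search(line)
        if m:
            i, t, l = m.groups()
            luns[(i, t)].add(l)
    return {it: sorted(ls) for it, ls in luns.items() if len(ls) > 1}

paths = [
    'p: "x fcp i 0x50001442c01b3813 t 0x22020002ac018ae7 l 0x0000000000000000" AAO',
    'p: "x fcp i 0x50001442c01b3813 t 0x22020002ac018ae7 l 0x0001000000000000" AAO',
    'p: "x fcp i 0x50001442c01b3812 t 0x21010002ac018ae7 l 0x0000000000000000" AAO',
]
flagged = duplicate_lun_its(paths)
print(flagged)
# The first two lines share an IT nexus with two LUNs, so that nexus is flagged.
```

A nexus appearing in this output is one whose extra LUN ("l 0x0001…") would need to be removed per step 4.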

5. Once removed, resume I/O to R39001_XXXXXXX (VPD83T3:XXXXX).

6. Afterwards, fix the other devices presented by the 3PAR, since VPLEX is otherwise operating in an unknown state.

Example:

VPD83T3:60002ac0000000000000014000018ae7, where 2 LUNs (LUN 11 and LUN 15) are mapped to the same VPD_ID. One of the LUNs needs to be removed.

u: VPD83T3:60002ac0000000000000014000018ae7

a: 3PARdata~VV~8ae7 name 8ae7 vend “3PARdata” prod “VV” rev “3221” type 3PARdata~VV

c: 8ae7

l: “x fcp i 0x50001442c01b3811 t 0x20020002ac018ae7”

l: “x fcp i 0x50001442c01b3811 t 0x21020002ac018ae7”

l: “x fcp i 0x50001442c01b3810 t 0x22010002ac018ae7”

l: “x fcp i 0x50001442c01b3810 t 0x23010002ac018ae7”

p: “x fcp i 0x50001442c01b3811 t 0x21020002ac018ae7 l 0x0011000000000000” AAO

p: “x fcp i 0x50001442c01b3811 t 0x21020002ac018ae7 l 0x0015000000000000” AAO

p: “x fcp i 0x50001442c01b3810 t 0x23010002ac018ae7 l 0x0015000000000000” AAO

p: “x fcp i 0x50001442c01b3810 t 0x23010002ac018ae7 l 0x0011000000000000” AAO

p: “x fcp i 0x50001442c01b3810 t 0x22010002ac018ae7 l 0x0011000000000000” AAO

p: “x fcp i 0x50001442c01b3810 t 0x22010002ac018ae7 l 0x0015000000000000” AAO

p: “x fcp i 0x50001442c01b3811 t 0x20020002ac018ae7 l 0x0011000000000000” AAO

p: “x fcp i 0x50001442c01b3811 t 0x20020002ac018ae7 l 0x0015000000000000” AAO


7. In addition to the workaround, to avoid potential director crashes, avoid making configuration changes on the 3PAR arrays connected to VPLEX. This includes provisioning, un-provisioning, and LUN size changes.

8. Check LUN mapping/masking on the 3PAR array.

9. Check that VPLEX implementation best practices are followed.


Permanent Fix:

EMC VPLEX Engineering is currently investigating this problem. Once a fix is available this article will be updated.


Re: VPLEX for Data Migration purpose only

We have successfully migrated more than 5,000 hosts from 3PAR, XP 24K, etc. to VMAX 200K, VMAX 40K, and VNX in the last year using VPLEX extent-level mobility jobs. The only downtime required is a single server reboot. Nothing much to worry about; the migration process is clean, with minimal downtime.

Let me see if I could prepare a doc out of it.

High-level process (assuming the existing array is already zoned to the VPLEX ports); below is what you have to do on the arrays.

1. Present existing LUNs to VPLEX > encapsulate them > create extents > devices and VVs on top. (Don’t add them to the group.)

2. Present new LUNs to VPLEX > encapsulate > create extents. Don’t create devices and VVs for the new LUNs.

3. Create new zones for hosts to VPLEX. (Don’t activate the new zones; just keep them ready.)

4. Discover initiators on VPLEX and ensure the host is logged in.

5. Ensure multipathing tools and add-ons are configured properly. (Don’t worry unless you use Windows MPIO.)

Migration steps

6. Bring down the host.

7. Disable the original zones (source array to host) and activate the new zones to VPLEX.

8. On VPLEX, add the VVs to the storage view, add initiators, and select the desired port groups to make the LUNs visible.

9. Power on the host. The original LUNs from the source array are still the ones the host sees, but now via VPLEX. Since there is no change in disk signature, the host should come up with its disks without any issues.

10. Apps and other services can be started. Have the Wintel/app team check services and disk status.

11. Once everything is confirmed, you may kick off an extent mobility job to the new targets in VPLEX. Note that the mobility job is transparent to the host; we don’t have to bring down the host again.


Re: DMX1000 – This Symmetrix does not have any gatekeepers

Hi

I’ve been given access to a really old DMX1000 so I can brush up my skills, but I’ve run into a brick wall. There appear to be no gatekeeper LUNs available, so I can’t discover or administer the array.

When I query the array I get the following:

Symmetrix ID: 000XXXXXXXX

Total Paths Unique Paths

———– ————

Pdevs 8 8

GK Candidates 0 0

Dedicated GKs 0 0

VCM/ACLX devs 0 0

Pdevs in gkavoid 0

Pdevs in gkselect 0

Max Available GKs 1

Num Open GKs 0

Gatekeeper Utilization

Current 0 %

Past Minute 0 %

Past 5 Minutes 0 %

Past 15 Minutes 0 %

Since Midnight 0 %

Since Starting 0 %

Highwater

Open Gatekeepers 0

Time of Highwater N/A

Gatekeeper Utilization 0 %

Time of Highwater N/A

Gatekeeper Timeouts

Since starting 0

Past Minute 0

Time of last timeout N/A

*WARNING* This Symmetrix does not have any gatekeepers

I can see some Gatekeepers but these are from a VMAX and not the DMX

C:\Program Files\EMC\SYMCLI\bin>syminq

Device                       Product                       Device
---------------------------  ----------------------------  ---------------------
Name                  Type   Vendor    ID           Rev    Ser Num       Cap (KB)
---------------------------  ----------------------------  ---------------------
\\.\PHYSICALDRIVE0           HP        LOGICAL VOL* 4.68   N/A          143338560
\\.\PHYSICALDRIVE1           3PARdata  VV           3213   005E0000      10485760
\\.\PHYSICALDRIVE2    M(9)   EMC       SYMMETRIX    5671   0100097000    79548480
\\.\PHYSICALDRIVE3    M(9)   EMC       SYMMETRIX    5671   01000A0000    79548480
\\.\PHYSICALDRIVE4    M(9)   EMC       SYMMETRIX    5671   010001A000    79548480
\\.\PHYSICALDRIVE5    M(9)   EMC       SYMMETRIX    5671   0100023000    79548480
\\.\PHYSICALDRIVE6    GK     EMC       SYMMETRIX    5876   1400062000        2880
\\.\PHYSICALDRIVE7    GK     EMC       SYMMETRIX    5876   1400063000        2880
\\.\PHYSICALDRIVE8    GK     EMC       SYMMETRIX    5876   1400064000        2880
\\.\PHYSICALDRIVE9    GK     EMC       SYMMETRIX    5876   1400065000        2880
\\.\PHYSICALDRIVE10   GK     EMC       SYMMETRIX    5876   1400066000        2880
\\.\PHYSICALDRIVE11   GK     EMC       SYMMETRIX    5876   1400067000        2880
\\.\PHYSICALDRIVE12          EMC       SYMMETRIX    5876   1400068000     5243520
\\.\PHYSICALDRIVE13          EMC       SYMMETRIX    5876   1400102000    14680320
\\.\PHYSICALDRIVE14   M(9)   EMC       SYMMETRIX    5671   0100191000    79548480
\\.\PHYSICALDRIVE15   M(9)   EMC       SYMMETRIX    5671   010019A000    79548480
\\.\PHYSICALDRIVE16   M(9)   EMC       SYMMETRIX    5671   0100115000    79548480
\\.\PHYSICALDRIVE17   M(9)   EMC       SYMMETRIX    5671   010011E000    79548480

I’ve read about the VCM, so I tried zoning my server to the FA, but had no luck…

Any suggestions?
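For what it’s worth, the syminq listing itself shows where the gatekeepers live: every GK row reports Enginuity 5876 (the VMAX), while the 5671 (DMX) devices are all M(9) data devices. A quick tally along those lines in Python (the column parsing is a loose assumption based on the output above, not documented syminq formatting):

```python
import re
from collections import Counter

# Pull the optional device type and the Enginuity revision from a syminq row.
SYMINQ_RE = re.compile(
    r"PHYSICALDRIVE\d+\s+(?P<type>GK|M\(\d\))?\s*EMC\s+SYMMETRIX\s+(?P<rev>\d{4})"
)

def gatekeepers_by_rev(syminq_lines):
    """Count GK-flagged Symmetrix devices per microcode revision."""
    counts = Counter()
    for line in syminq_lines:
        m = SYMINQ_RE.search(line)
        if m and m.group("type") == "GK":
            counts[m.group("rev")] += 1
    return dict(counts)

rows = [
    r"\\.\PHYSICALDRIVE2   M(9)  EMC SYMMETRIX  5671  0100097000  79548480",
    r"\\.\PHYSICALDRIVE6   GK    EMC SYMMETRIX  5876  1400062000      2880",
    r"\\.\PHYSICALDRIVE7   GK    EMC SYMMETRIX  5876  1400063000      2880",
]
print(gatekeepers_by_rev(rows))
# {'5876': 2}
```

Running this over the full listing would show GKs only on rev 5876, consistent with the warning that the DMX (5671) has none.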



Get Your Digital Content to Shine with Arista Networks

Is the rapid rise of digital content testing the limits of your network? Transition to lossless IP networking from HPE and Arista to gain resiliency, reliability, and scalability, all while making your content shine.

So much is going on in the vibrant media and entertainment arena, it’s hard to keep up. The demand for real-time broadcasting is forging a movement away from traditional HD-SDI to next-generation IP-based operations. Massive media files continue to grow in numbers and volume—having a tremendous impact on file-based workflows. And the distribution of digital content—think video streaming today—is adding to the load of already congested networks. All of this raises the question: Is your network ready for what is next?

The good news is that HPE and Arista offer networking solutions—for media and entertainment—that can transform your infrastructure to better address evolving industry requirements. Of course, a foundational aspect of this transformation is high-speed IP networking that allows many different systems, serving multiple applications, to connect reliably, efficiently, and at scale. HPE and Arista networking solutions, including HPE Pointnext services, can put you on the fast path to that transformation, with high-performance capabilities—including lossless transport—and the agility and scalability that can propel your business forward.

Performance-optimized networking

HPE and Arista leaf and spine switches (7500R, 7280R, and 7020R) are purpose-built for key media and entertainment industry requirements. They deliver low latency and deep buffers for high performance and lossless smooth data traffic. Our leaf and spine designs are scalable and easy to manage. Plus, we offer network monitoring and telemetry tools for better network visibility to detect and alleviate network congestion in real time. This is critical to help ensure a positive experience and, ultimately, create satisfied customers long term.

Furthermore, HPE delivers a complete, end-to-end solution that includes servers and storage with Arista networking for high infrastructure availability and application performance. HPE Apollo, Cloudline, BladeSystem, and ProLiant servers provide the industry’s best infrastructure foundation. This is matched by our storage portfolio, which includes HPE 3PAR, StoreServ, and Nimble all-flash storage, as well as Scality RING for big data and object storage, plus hyper-converged SimpliVity solutions.


You can also quickly ramp up and optimize your solution with HPE Pointnext advisory, professional, and operational services. We’ll work with you to find the best approach, implement the right solution, and make sure you’re using that solution to its fullest with a broad range of operational services tailored to your needs.

Confidently capitalize on the explosion of digital data on your network

Take advantage of an agile, high-performing, and lossless HPE and Arista network that’s built and tested for your industry.

  • Increase capacity to broadcast work streams and handle more visually immersive theatrical and special effects workloads.
  • Converge content creation, post-production, rights management, transcoding, and global distribution onto a shared IP infrastructure.
  • Deliver an open, standards-based IP transport, supporting economical and open authoring, rendering, non-linear production, transcoding, and storage systems.

Discover just what’s possible with HPE and Arista networking today. Contact your HPE sales rep, or visit HPE Data Center Networking for more information.

Related:

  • No Related Posts

Competition Bites The Dust in Face-off with Dell EMC VMAX All Flash



As the General Manager of the Dell EMC VMAX business unit, I interact with a variety of customers on a daily basis. It is very clear to me that today’s IT practitioners are increasingly relying on All-Flash storage to consolidate their ever-increasing, performance-hungry workloads amid the explosive growth of data. Given the adoption rate, we expect that over half of all data centers will use only All-Flash arrays for primary data storage within the next three to five years. While there are many All-Flash solutions out there in the market, clearly all are not created equal!

We have architected Dell EMC VMAX All Flash arrays to solve the CIO challenge of embracing a modernized flash-centric datacenter for mission-critical applications while simultaneously simplifying, automating, and consolidating IT operations. VMAX All Flash combines the best of new high-density flash technologies with a proven set of rich data services required to build a modern data center. Simultaneously, it continues to deliver the reliability and mission-critical availability that VMAX customers have relied on for more than two decades.

Earlier this month, we sat down with 12 independent storage industry experts at a Storage Field Day event and discussed the core architectural tenets of VMAX All Flash that are helping 94% of Fortune 50 companies, 18 out of 20 top banks, 9 out of 10 top life insurance companies and thousands of other customers to manage data growth, reduce data center sprawl, deliver extreme performance and availability while meeting their business SLAs day in and day out. Needless to say, they were impressed! Check out the entire set of VMAX All Flash video recordings at the Storage Field Day 14 site.

Figure 1: VMAX All Flash Architectural Details (Video from Storage Field Day)

And we are not afraid of going head-to-head against our competition in a public setting!

Recently, Principled Technologies evaluated the VMAX 250F All Flash and HPE 3PAR 8450 side by side, with an eye toward how these arrays handle mixed workloads in a modern data center and deliver business continuity in the event of an interruption. Both arrays were configured with the same number of controllers and a similar number of disks/capacity to make it an apples-to-apples comparison.

The tests started with performance testing for Oracle Database 12c transactional workloads. Next, a data mart workload was added to determine how the arrays handle this mixed workload. Data marts are a convenient way to move chunks of data from multiple departments to one centralized location that’s easy to access. Businesses rely on the smooth flow of data mart information for reporting, analysis, trending, presentations, and database backups. Ability to consolidate transactional workloads as well as data warehouse sequential workloads on the same platform is a cornerstone of an All-Flash array to serve as the core primary storage platform in today’s data centers.

Figure 2: VMAX 250F and 3PAR 8450 IOPS Before and During Data Mart Load Addition

Principled Technologies determined that the VMAX 250F handled 39.5% more IOPS on average than the 3PAR 8450 during the data mart load. The VMAX solution handled both reads and writes much faster than the 3PAR solution during the data mart load – up to 145% faster reads and 1,973% faster writes than the HPE solution! The 3PAR 8450 storage array experienced lengthy delays when processing reads and writes at the same time with the added data mart load – read latency increased 129% while write latency increased 2,133%!

Figure 3: VMAX 250F and 3PAR 8450 Latency Before and During Data Mart Load Addition

So what does this mean for you? With the VMAX 250F, you can run your production workloads while always meeting your SLAs and never worry about whether adding an analytics workload, compiling large amounts of data from multiple sources, or performing an extensive backup would affect the performance. With 3PAR 8450, it’s not the same story – long wait times on accessing data when you run mixed workloads will surely frustrate your users and hamper your business.

VMAX All Flash SRDF: The Gold Standard for Remote Replication

The second phase of Principled Technologies’ comparison of the Dell EMC and HPE 3PAR All-Flash storage involved remote replication and disaster recovery. For the Dell EMC solution, they used Unisphere for VMAX to set up two VMAX 250F arrays with active-active SRDF/Metro remote replication. For the HPE 3PAR solution, they set up one 3PAR 8450 array and one 3PAR 8400 array with Remote Copy and Peer Persistence (which is only available as active-passive at the LUN level) enabled.

When Principled Technologies simulated a lost host connection to both local arrays, the entire workload on the VMAX 250F solution continued to run with no downtime following the outage, with all I/O shifting immediately to the remote VMAX 250F. In contrast, the application workload on the 3PAR solution needed to transfer 100% of the I/O to the secondary (passive) site and stopped until they restarted the VM. SRDF/Metro is a true active-active solution, which ensured consistent data access during the site failure. HPE’s Peer Persistence is active-passive at the LUN level, so during the local storage failure simulation, all paths were inaccessible until the standby paths to the remote array became active or the connection to the local system was restored and failback occurred. This is the difference between having consistent data access during a site failure or not.

Check out all the details of the Principled Technologies testing and their findings.

In the end, Principled Technologies discerned that Dell EMC VMAX 250F All Flash storage array lived up to its promises better than the HPE 3PAR 8450 storage array did. We are not surprised – thousands of customers reach the same conclusion day in and day out not only when compared against HPE 3PAR but also against all of our All-Flash enterprise storage competition!

Learn more about VMAX All Flash.

 




Related:

New Products and Programs Will “Future-Proof” Your Dell EMC Midrange Storage Investments



We’ve had a busy summer releasing new midrange storage products, including the SC 5020, SCv3000 and new Dell EMC Unity All-Flash data storage arrays. Today, we’re announcing the All-Flash version of the SC Series, which makes deploying All-Flash even easier: one SKU and all-inclusive licensing. For Dell EMC Unity, deduplication is now available, along with other features to simplify data-in-place migrations without any downtime.

As the storage market evolves, customers are looking beyond just product innovations. While product enhancements remain extremely important, customers are looking for more. They want products and programs to ensure their technology investments solve business problems today and into the future given the rapid advancement of technology. They need a future-proof investment. We’re excited to share our innovations in both products and the best storage loyalty program in the industry. Together these can help lower the risk and cost for customers to modernize their data centers.

 

Dell EMC SC All-Flash

Storage customers now get the best of both worlds in the SC Series: both speed and intelligence. With the SC Series All-Flash, they get huge performance, up to 399,000 IOPS per array and 3.9 million aggregate IOPS per multi-array federated cluster[i], all in an easy-to-use package. It is ideal for storage generalists because the SC Series has the intelligence to do the work automatically, behind the scenes, with little to no storage administrator intervention. Best-in-class data reduction is also built in, so no special operator skills are needed. Federated data mobility, which moves active or “live volumes” between systems for easy, non-disruptive workload migration, is done automatically too. Just a few check boxes – that’s all it takes.

With the addition of all-inclusive software[ii], every SC feature is included, which makes purchasing and maintenance even easier and keeps costs low over time. Our customers agree. According to Steve Athanas, Director of Platforms & Systems Engineering at UMASS Lowell: “We chose SC Series storage because dollar-for-dollar, Dell EMC arrays were the most cost-effective and they provide a great roadmap for longevity.”

Dell EMC Unity

Our midrange momentum continues with Dell EMC Unity, already designed for simplicity and unified customer environments. In our previous blogs, we announced deduplication was coming soon and as promised to our customers, it’s now part of this new v4.3 software release. We’re building on Dell EMC Unity’s strong efficiency and performance, where it leads the market in shipments[i]. We also put the product to the test against another strong midrange player – HPE 3PAR.  Research firm Principled Technologies confirmed that Dell EMC Unity loads data up to 22% faster, handles up to 29% more orders per minute, and requires up to 2.5X less storage than HPE; impressive results!

Building on top of deduplication, new synchronous file replication is included too, giving our customers business continuity and zero data loss remote protection for users’ critical file-based data. Customers can combine file replication with block-level cloud tiering for greater data protection.

Snapshots are a lower cost option than full volume copies, since they require far less space. For the ultimate in data protection, customers can leverage cloud tiering with our new Future-Proof Storage Loyalty Program. This program provides free cloud storage for Dell EMC Unity customers (SC in the future), ideal for long-term snapshot retention. Storing snapshots in the cloud is the lowest cost option. If you haven’t heard of the program, more details are available here.

Rounding out the Unity OS release, new functionality now enables customers to swap in new and more powerful storage controllers while data and operations remain online and in place.  This eliminates downtime and protects existing investments.  Upgrades can take place from hybrid to All-Flash and operations can continue during a hardware upgrade with no downtime.

[i] Source: IDC Enterprise Storage Systems Quarterly Tracker, Sept 2017. Midrange is defined as external storage systems priced $25k < $250k




Related:

Dedupe + Compression in Dell EMC Unity All-Flash Storage Boosts Efficiency for Midrange Storage Workloads



When customers purchase data storage systems, there are multiple factors in the buying decision.  Features, of course, but customers “buy” into more than just the current set of available features.  They’re buying not just for today; they want to know what’s next for the product.  At Dell EMC, we’ve always been customer focused and we’d like to give our customers and partners more insight into our future plans for Dell EMC’s midrange portfolio.  In today’s blog, we’ll highlight efficiency in our Midrange portfolio with a spotlight on our deduplication plans for Dell EMC Unity, our latest enhancement coming to the industry’s #1 midrange storage portfolio[i].  By combining deduplication and compression, we can offer customers ultra-efficient storage for a wide range of workloads and applications.

Forthcoming Functionality Increases Efficiency Even Further

We’re excited to announce that later this year we’ll unveil our data deduplication capabilities in Dell EMC Unity.  We wanted to give an early glimpse into this technology which will help our customers further advance their IT modernization efforts. These new features leverage a variety of advanced patents to eliminate redundant data. They are optimized to ensure the most efficient use of available hardware resources.  In fact, we’ll support these improvements for all current and previous generations of Dell EMC Unity.

Dell EMC engineers designed the code to be an extension of inline compression so that applications that have compression enabled will automatically get more efficient. Best of all, deduplication will be available as a no-cost, non-disruptive upgrade. We’re really happy we can deliver the consistent performance levels that our customers expect while lowering costs – a win-win situation.

Dell EMC Unity All-Flash and hybrid storage array family

Midrange Portfolio Delivers Exceptional Efficiency Today

Today, customers can reduce their costs at every phase of their IT projects with our industry-leading midrange storage products, Dell EMC Unity and SC Series (Compellent). These products already deliver exceptional efficiency. Across the board, we offer a range of data reduction features such as thin provisioning, data tiering and space-efficient snapshots that reduce the cost of storage and maximize overall capacity. We offer software efficiencies through intelligent data reduction technologies, backed by the Dell EMC All Flash Storage Efficiency Guarantee offering a 4:1 efficiency savings benefit. For lowering longer-term lifecycle costs, we offer platform migration and data-in-place upgrades. Together with the Dell EMC Integrated Offerings portfolio, we can extend the management, mobility and data protection capabilities of our customers’ systems. The addition of deduplication makes the current portfolio even better.
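As a back-of-envelope illustration of what an efficiency ratio like 4:1 means in practice (the numbers below are hypothetical, not a quote of any specific configuration or price):

```python
def effective_capacity_tb(raw_tb, reduction_ratio):
    """Net effective capacity implied by a data reduction ratio."""
    return raw_tb * reduction_ratio

# Hypothetical example: 20 TB of raw flash at a 4:1 reduction ratio
print(effective_capacity_tb(20, 4))  # prints 80
```

In other words, the same raw flash behaves like four times the capacity for workloads that achieve the guaranteed ratio, which is also how “cost per effective TB” figures are typically derived.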

Automate Your Cost Savings with SC Series

We believe the Dell EMC SC Series today offers customers the most flexible flash architecture on the market.  With intelligent data tiering running behind the scenes and always-on thin provisioning, customers benefit from the lowest cost flash on the market[i].  Other features offer built-in federation to recommend and live-transfer volumes to an optimal SC system automatically.  Together, these give exceptional operational and cost efficiencies to enterprises of all sizes.

Simplify and Unify Your Storage with Dell EMC Unity

Dell EMC Unity is a fully unified, dense 2U system that offers an exceptional feature set, built-in migration capabilities from Dell EMC VNX arrays, compression, and built-in data at-rest encryption (D@RE). In third-party lab testing, our compression delivered a 2.5X higher compression rate than HPE 3PAR.[ii] With tiering to public or private cloud storage, your snapshots can live forever, enabling recovery options that weren’t possible in previous products.

Our customers are truly excited about the benefits they are getting from both Dell EMC Unity and SC Series:

“With Dell EMC Unity, we get enhanced performance as well as a number of features that make life easier for me. As an IT manager in a small organization I wear about 10 hats and the modernity, simplicity, affordability and flexibility of Dell EMC Unity helps me to achieve more with limited resources.” – Madsen Wikholm, IT Manager, Society of Swedish Literature in Finland

“We chose SC Series storage because dollar for dollar, Dell EMC SC arrays were the most cost-effective and they provide a great roadmap for longevity.” – Steve Athanas, Director of Platforms & Systems Engineering, UMASS Lowell

But our customers now want us to take this a step further. With the latest Dell EMC Unity release, we’re making system administrators more efficient and effective with faster installation setup wizards and new features in Dell EMC CloudIQ. Setup takes only 10 minutes and configuration just another 15, so customers are up and running in less than 30 minutes. Together, these advances help system administrators add value to the business, so they can focus on strategic initiatives and spend far less time on storage management. The next step in the efficiency evolution is continuing to provide the most data-efficient platform available.

When our customers made the decision to purchase Dell EMC, they embarked with us on a long-term partnership.  In that kind of relationship, strong communication is key.  With great new technologies coming from Dell EMC, we’ll share more information with our customers and our partners as we reach the next milestone on our IT Transformation journey. Stay tuned.

____________________________________________________________

[i] Dell internal analysis, April 2017. Estimated street price, net effective capacity after 5:1 data reduction, including 5 years of 7x24x4-hour on-site support.

[ii] “Handle transaction workloads and data mart loads with better performance”, a Principled Technologies report, June 2017, page 2:

https://www.dellemc.com/en-us/campaigns/dell-emc/unity-all-flash-storage-vs-hpe-3par.htm


