VPLEX: Distributed device showing cluster-1 leg contains cluster-2 extent in GUI

Article Number: 503013 Article Version: 3 Article Type: Break Fix



VPLEX Series,VPLEX Metro,VPLEX VS2,VPLEX VS6,VPLEX GeoSynchrony 5.5,VPLEX GeoSynchrony 5.4 Service Pack 1,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 1,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 3,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 4

The VPLEX GUI displays the anatomy of a distributed device as having a leg containing an extent that is being provisioned from the remote cluster. Please refer to the attached image.

The extents used to create each leg of the distributed device have the same name. This can be confirmed by navigating to the relevant context in the VPlexcli

VPlexcli:/> cd /distributed-storage/distributed-devices/

and then run the command show-use-hierarchy name_of_distributed_device

where name_of_distributed_device is the name of the distributed device you want to look at.

Example:

VPlexcli:/distributed-storage/distributed-devices> ll

Name           Status   Operational  Health  Auto    Rule Set Name       Transfer
                        Status       State   Resume                      Size
-------------  -------  -----------  ------  ------  ------------------  --------
sharry_device  running  ok           ok      true    cluster-1-detaches  128K

VPlexcli:/distributed-storage/distributed-devices> show-use-hierarchy day_apps

virtual-volume: day_apps_vol (5G, distributed @ cluster-1, unexported)
  distributed-device: sharry_device (5G, raid-1)
    distributed-device-component: migtest1 (5G, raid-0, cluster-2)
      local-device: EXTENT_MIGRATE_migrationtest0607 (5G, raid-1)
        extent: extent_sharry1_1 (5G)
          storage-volume: VPD83T3:600601xxxxxxxxxxxxxxxxxx (5G)
            storage-array: EMC-CLARiiON-APM001xxxxxxx
    distributed-device-component: sharry_device2017Jul17_083255 (5G, raid-0, cluster-1)
      extent: extent_sharry1_1 (5G)
        storage-volume: sharry1 (5G)
          logical-unit: VPD83T3:600601yyyyyyyyyyyyyyyyyyyyy
            storage-array: EMC-CLARiiON-APMxxxxxxxx

The extent names are cosmetic; the identical names on both legs are why the GUI appears to show the cluster-1 leg containing a cluster-2 extent.

You can change the name of one of the extents by navigating to the correct context and issuing the set command.

Example:

From the VPlexcli, navigate to the relevant context and issue the command to change the name:

/clusters/cluster-2/storage-elements/extents> set extent_sharry1_1::name extent_sharry1_2

After the command executes, you can run the show-use-hierarchy command again to view the new anatomy of the distributed device:


VPlexcli:/distributed-storage/distributed-devices> show-use-hierarchy day_apps

virtual-volume: day_apps_vol (5G, distributed @ cluster-1, unexported)
  distributed-device: sharry_device (5G, raid-1)
    distributed-device-component: migtest1 (5G, raid-0, cluster-2)
      local-device: EXTENT_MIGRATE_migrationtest0607 (5G, raid-1)
        extent: extent_sharry1_1 (5G)
          storage-volume: VPD83T3:600601xxxxxxxxxxxxxxxxxx (5G)
            storage-array: EMC-CLARiiON-APM001xxxxxxx
    distributed-device-component: sharry_device2017Jul17_083255 (5G, raid-0, cluster-1)
      extent: extent_sharry1_2 (5G)
        storage-volume: sharry1 (5G)
          logical-unit: VPD83T3:600601yyyyyyyyyyyyyyyyyyyyy
            storage-array: EMC-CLARiiON-APMxxxxxxxx


VPlex KCS Newsletter January 2019

For restricted KB articles, please contact VPlex support for further information.

  • VPLEX: Workaround for meta-data corruption on GeoSynchrony 6.0 P1 through SP1 P3 releases - https://support.emc.com/kb/527633 (Restricted)
  • VPLEX: Metro-FC with FCIP WAN performance interoperability issue at GeoSynchrony 6.1 - https://support.emc.com/kb/527524 (Restricted)
  • VPLEX: Custom NDU scripts that prevent the issues in VPLEX-4441, VPLEX-1589 and VPLEX-3647 simultaneously - https://support.emc.com/kb/524664 (Restricted)
  • VPLEX: How to engage VPlex Engineering Team for SRA cases via ASD Jira - https://support.emc.com/kb/528444 (Restricted)
  • VPLEX VS6: An unsupported model number has been detected for the SSD [Dell EMC Correctable] - https://support.emc.com/kb/528295 (Employees and Partners)
  • VPLEX: How to restart the ECOM process - https://support.emc.com/kb/323358 (Customers)

New KBs to address NDU issues

  • https://support.emc.com/kb/524664 (Restricted)
  • https://support.emc.com/kb/519226 (Employees and Partners)
  • https://support.emc.com/kb/523838 (Customers)
  • https://support.emc.com/kb/523981 (Customers)

KBs for issues in 6.0 SP1 P5

  • VPLEX: For VS6 Hardware Possible Total Cluster Outage (TCO) during NDU I/O Forwarding Phase due to FCID differences - https://support.emc.com/kb/502886 (Customers)
  • VPLEX: VS6 Unexpected VPLEX director firmware crash while running GeoSynchrony 6.0 Service Pack 1 Patch 5 - https://support.emc.com/kb/514853 (Employees and Partners)
  • VPLEX: VS6 Mid-plane causes slic1 (IO Module) on director B to go missing or offline either with or without director B reboot - https://support.emc.com/kb/514683 (Employees and Partners)

Note:

Do not dispatch replacement parts for this issue.

It must first be escalated to engineering for their review and a recommendation on what action(s) need to be taken.

Current Target Codes

GeoSynchrony 5.5 Service Pack 2 Patch 4 (5.5.2.04.00.01) as of 6th April 2018

GeoSynchrony 6.0 Service Pack 1 Patch 7 (6.0.1.07.00.04) as of 15th May 2018

GeoSynchrony 6.1.x (Karst) was released in September 2018

  • Karst Q2'FY19 release of 6.1.0.00.00.23
    • Karst DA is targeted for July 12th (6.1.0.00.00.17)
  • RTO on August 26th
  • Karst GA targeted for August 30th

  • "Nalanda" = VPLEX GeoSynchrony 6.2

End Of Service Life (EOSL)

  • VPLEX VE – EOSL Date is 28th February 2019
  • VPLEX VS2 Geo – EOL Date is 31st July 2015
  • VPLEX VS1 – EOSL Date 31st August 2017
  • 5.1.x EOSL on 30 April 2016
  • 5.2.x EOSL on 30 April 2017
  • 5.3.x EOSL on 30 April 2018
  • 5.4.x EOSL on 30 April 2019

  • ETA 501370 EMC VPLEX: Potential silent data inconsistency if the target leg of a FULL Thin Rebuild or migration is a thin device in GeoSynchrony 6.0 Service Pack 1, and later - https://support.emc.com/kb/501370 (Customers)

NOTE:

This issue is specific to FULL Thin Rebuilds and migrations where the target is a thin device.

DR1 log rebuilds (including Thin log rebuilds) are unaffected by this issue.

All rebuilds on thick devices are unaffected by this issue.

All rebuilds on thin devices where VPLEX rebuilds are set to thick are unaffected by the issue.

Latest Top 10 Viewed KB Articles (all available to Customers):

  1. 323253: VPLEX: How to collect logs from a VPLEX Instance - https://support.emc.com/kb/323253
  2. 334964: VPLEX: How to restart VPlexManagementConsole to refresh VPLEX CLI/GUI - https://support.emc.com/kb/334964
  3. 324094: VPLEX: How to manually configure call home - https://support.emc.com/kb/324094
  4. 330110: VPLEX: How to reset the admin password - https://support.emc.com/kb/330110
  5. 457313: VPLEX: 0x8a4a91fa / 0x8a4a31fb Management Server partitions full or exceeded xx percent threshold limit - https://support.emc.com/kb/457313
  6. 323313: VPLEX: How to perform a Basic Health check using VPlexcli - https://support.emc.com/kb/323313
  7. 472497: VPLEX: Logical-units and Array Connectivity show as degraded - https://support.emc.com/kb/472497
  8. 336564: VPLEX: How to identify ports on a VPLEX director with low Rx and/or TX power - 0x8a54600f, 0x8a36303a, 0x8a363039 - https://support.emc.com/kb/336564
  9. 335182: VPLEX: Slow performance on VPLEX with a workload consisting of medium to large outstanding queue depth of large block reads - https://support.emc.com/kb/335182
  10. 463942: VPLEX: Random temporary loss of connection to storage devices and/or performance degradation on ESXi hosts from version 5.5 u2 - https://support.emc.com/kb/463942

Useful to know Articles:

  • KB# 484453- VPLEX: Best Practice Guidelines – Master Article [https://support.emc.com/kb/484453]
    • This master KB article provides links to up-to-date VPlex Best Practices documents

With swarm requests increasing, the following KB articles will assist in initiating and engaging in a successful collaboration:

  • KB#484448- VPLEX: How to collaborate with VPlex Support [https://support.emc.com/kb/484448]
    • This KB article provides a template for the case owner to fill out
    • The article also requests VPlex collect-diagnostics logs and provides the link to the KB article on how to collect these

  • KB#336056- How To Collaborate with the XtremIO Team [https://support.emc.com/kb/336056]
    • This KBA outlines the information needed in order to effectively collaborate with the XtremIO support team

  • KB#335164- Connectrix: How to collaborate with the Connectivity Team [https://support.emc.com/kb/335164]
    • This KB article outlines how to collaborate with Connectivity
    • This will enable the Connectivity team to efficiently assist in swarm requests

The following articles can be provided to the customer for gathering Host Grabs to avoid unnecessary collaborations with the Host teams:

  1. How To Create Solaris Grabs: https://support.emc.com/kb/468243
  2. How To Run EMC Grabs On An AIX Host: https://support.emc.com/kb/335706
  3. How To Create Linux Grabs: https://support.emc.com/kb/468251
  4. How To Run EMC Grabs on HP-UX: https://support.emc.com/kb/335700
  5. What logs need to be collected for Microsoft Windows-based EMC software?: https://support.emc.com/kb/304457
  6. How do I run an ESX or ESXi EMCGrab?: https://support.emc.com/kb/323232

Recovery Procedures

Recovery procedures for data loss events in underlying back-end storage arrays, explaining what VPlex can and cannot do to recover data.

VPlex symptom: dead storage-volume

VNX Uncorrectables:

https://support.emc.com/kb/488035: VNX1/2: Uncorrectables/Coherency reported on VNX/VNX2 provisioned to a VPLEX (where device is marked as Dead due to SCSI check Condition 3/11/0) (DellEMC correctable)

Raid-1 device recovery:

https://support.emc.com/kb/470385 Recovering a VPLEX Metro Geo leg for distributed devices after storage failure or Data Loss on that leg occurred.

Additional info:

  • SolVe procedure generator: VPlex > VPlex procedures > administration procedures > manage > Recovering a raid-1 leg after experiencing data loss due to backend array failure (this is a more detailed version of KB 470385 above)
  • KB 488035 above also has many supporting KBs and documents


VPLEX: The backup meta-volume(s) exceeds the 39-character limit causing the health-check to report a warning

Article Number: 523881 Article Version: 3 Article Type: Break Fix



VPLEX Series,VPLEX Local,VPLEX Metro,VPLEX VS2,VPLEX VS6,VPLEX GeoSynchrony 5.4 Service Pack 1,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 1,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 3,VPLEX GeoSynchrony 5.4 Service Pack 1 Patch 4

  • The VPLEX GUI is not showing Metadata details for a cluster.
  • The GUI for one of the VPLEX clusters cannot determine the health state of the Metadata.

When running a health-check, the following warning appears under the Meta Data section of the command output:

Meta Data:
----------
Cluster    Volume                                              Volume          Oper   Health  Active
Name       Name                                                Type            State  State
---------  --------------------------------------------------  --------------  -----  ------  ------
cluster-1  C1_Logging                                          logging-volume  ok     ok      -
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx                           meta-volume     ok     ok      True
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204956 exceeds the character limit for meta-volume name.
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204821 exceeds the character limit for meta-volume name.
cluster-2  C2_Meta                                             meta-volume     ok     ok      True
cluster-2  C2_Logging                                          logging-volume  ok     ok      -
cluster-2  C2_Meta_backup_2018Jul02_060022                     meta-volume     ok     ok      True
cluster-2  C2_Meta_backup_2018Jul01_060025                     meta-volume     ok     ok      True

This issue may also manifest itself in the Unisphere for VPLEX GUI as missing active or backup meta-volumes, shown in the example below for cluster-1 with a yellow bar and 0 meta-volumes listed, versus the green bar for cluster-2 with the number of meta-volumes showing as 4.

[Screenshot: Unisphere meta-volume summary showing 0 meta-volumes for cluster-1 and 4 for cluster-2]

This warning message indicates that the name(s) of the backup meta-volume(s) exceed the predefined character limit of 39 characters. The backup meta-volume names are a combination of the active meta-volume name and a time-stamp suffix as well as dashes and/or underscores. When the active meta-volume name is too long, it may push the backup meta-volume names beyond the character limit.
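As a rough illustration (a sketch only; the 39-character limit and the name format come from the warning above), you can check the length of a prospective backup name from the management server shell:

echo -n "cluster_1_meta_volume_vnx_backup_2018Jul20_204956" | wc -c    # 49 characters, over the 39-character limit
echo -n "C1_Meta_backup_2018Jul20_204956" | wc -c                      # 31 characters, within the limit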

The active meta-volume was renamed to a value that pushes the names of the backup meta-volumes beyond the character limit.

To resolve this warning, rename the active meta-volume with a shorter name and then re-run the meta-volume backups. This process is non-disruptive.

1. Determine which backup meta-volumes have names that exceed the character limit by running a health-check.

VPlexcli:/> health-check
Locate the Meta Data section of the health-check, like the one shown below:
Meta Data:
----------
Cluster    Volume                                              Volume          Oper   Health  Active
Name       Name                                                Type            State  State
---------  --------------------------------------------------  --------------  -----  ------  ------
cluster-1  C1_Logging                                          logging-volume  ok     ok      -
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok     ok      True
cluster-1  cluster_1_meta_volume_vnx                           meta-volume     ok     ok      True
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204956 exceeds the character limit for meta-volume name.
The meta-volume cluster_1_meta_volume_vnx_backup_2018Jul20_204821 exceeds the character limit for meta-volume name.
cluster-2  C2_Meta                                             meta-volume     ok     ok      True
cluster-2  C2_Logging                                          logging-volume  ok     ok      -
cluster-2  C2_Meta_backup_2018Jul02_060022                     meta-volume     ok     ok      True
cluster-2  C2_Meta_backup_2018Jul01_060025                     meta-volume     ok     ok      True

The health-check output above shows that the meta-volume backups for cluster-1 have names that exceed the 39-character limit. Therefore, the active meta-volume for cluster-1 must be renamed to resolve the warning.

2. View the meta-volumes configured on the VPLEX to find the active meta-volume for each cluster.

VPlexcli:/> ll /clusters/*/system-volumes/

/clusters/cluster-1/system-volumes:
Name                                                Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                                    Status       State                            Count      Count     Size
--------------------------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                                      logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
cluster_1_meta_volume_vnx                           meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
3. Navigate to the context for the active meta-volume.
VPlexcli:/> cd /clusters/cluster-1/system-volumes/cluster_1_meta_volume_vnx
VPlexcli:/clusters/cluster-1/system-volumes/cluster_1_meta_volume_vnx>

4. Use the set command from this context to change the name of the active meta-volume.

VPlexcli:/clusters/cluster-1/system-volumes/cluster_1_meta_volume_vnx> set name C1_Meta
VPlexcli:/clusters/cluster-1/system-volumes/C1_Meta>
Note that the name of the context changes to reflect the name change. You can re-run the following command to verify the name change:
VPlexcli:/> ll /clusters/*/system-volumes/

/clusters/cluster-1/system-volumes:
Name                                                Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                                    Status       State                            Count      Count     Size
--------------------------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                                      logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                                             meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204821   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
cluster_1_meta_volume_vnx_backup_2018Jul20_204956   meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
5. To rename the meta-volume backups, run the metadatabackup local command once for each meta-data backup volume (this should be two times on a correctly configured system).

NOTE: This command MUST be run from the cluster where the meta-volume is located. For example, if the meta-volume is on cluster-1, you would need to run this command from the cluster-1 VPlexcli.

VPlexcli:/> metadatabackup local
VPlexcli:/> metadatabackup local

As there are two meta-volume backups, we ran the command two times.

6. Re-run the following command to verify the name change for the meta-volume backups.
VPlexcli:/> ll /clusters/*/system-volumes/

/clusters/cluster-1/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                   logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
C1_Meta_backup_2018Jul20_212714  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
C1_Meta_backup_2018Jul20_212804  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
Note that the meta-volume backup names have now changed.

7. Re-run the health-check to confirm the warning message is no longer present.
VPlexcli:/> health-check
Locate the Meta Data section and verify that the warnings are no longer present.
Meta Data:
----------
Cluster    Volume                           Volume          Oper   Health  Active
Name       Name                             Type            State  State
---------  -------------------------------  --------------  -----  ------  ------
cluster-1  C1_Meta                          meta-volume     ok     ok      True
cluster-1  C1_Logging                       logging-volume  ok     ok      -
cluster-1  C1_Meta_backup_2018Jul20_212804  meta-volume     ok     ok      True
cluster-1  C1_Meta_backup_2018Jul20_212714  meta-volume     ok     ok      True
cluster-2  C2_Meta                          meta-volume     ok     ok      True
cluster-2  C2_Logging                       logging-volume  ok     ok      -
cluster-2  C2_Meta_backup_2018Jul02_060022  meta-volume     ok     ok      True
cluster-2  C2_Meta_backup_2018Jul01_060025  meta-volume     ok     ok      True


VPLEX: Health-check --full reports Call Home "Error" state post NDU

Article Number: 523118 Article Version: 3 Article Type: Break Fix



VPLEX GeoSynchrony,VPLEX Local,VPLEX Metro,VPLEX Series,VPLEX VS2,VPLEX VS6

An Error is reported by the command health-check --full post upgrade, but Call Home functions properly.

  • Pre-NDU, health-check --full does not report an error.

  • Post-NDU, health-check --full reports "Checking Call Home Status" as Error.

  • The ConnectEMC_config.xml file looks the same pre-NDU and post-NDU.

  • No issues are seen in the connectemc-related logs.

  • The SMTP service is reachable and not blocked.

  • Call Home works correctly for every triggered call-home test.

  • The SYR/CLM systems confirm that call home alerts have been correctly received, confirming that ConnectHome is received.

Comparing PRE & POST Non-Disruptive Upgrade (NDU)

PRE NDU

VPlexcli:/> health-check --full

Configuration (CONF):

Checking VPlexCli connectivity to directors……………….. OK

Checking Directors Commission……………………………. OK

Checking Directors Communication Status…………………… OK

Checking Directors Operation Status………………………. OK

Checking Inter-director management connectivity……………. OK

Checking ports status…………………………………… OK

Checking Call Home……………………………………… OK

Checking Connectivity…………………………………… OK

POST NDU

VPlexcli:/> health-check --full

Configuration (CONF):

Checking VPlexCli connectivity to directors……………….. OK

Checking Directors Commission……………………………. OK

Checking Directors Communication Status…………………… OK

Checking Directors Operation Status………………………. OK

Checking Inter-director management connectivity……………. OK

Checking ports status…………………………………… OK

Checking Call Home Status……………………………….. Error

service@vplexMM:/var/log/VPlex/cli> more health_check_full_scan.log

Configuration (CONF):

Checking VPlexCli connectivity to directors……………….. OK

Checking Directors Commission……………………………. OK

Checking Directors Communication Status…………………… OK

Checking Directors Operation Status………………………. OK

Checking Inter-director management connectivity……………. OK

Checking ports status…………………………………… OK

Checking Call Home Status……………………………….. Error

Email Server under Notification type: 'onSuccess/onFailure' is either not reachable or invalid.

Check if Email Server IP address: '10.1.111.100' is reachable and valid.

Email Server under Notification type: 'Primary' and 'Failover' are either not reachable or invalid.

Check if Email Server IP address: '10.1.111.100' and '10.1.111.100' are reachable and valid.

service@vplexMM:/opt/emc/connectemc> cat ConnectEMC_config.xml

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>

<ConnectEMCConfig SchemaVersion="1.1.0">

<ConnectConfig Type="Email">

<Retries>7</Retries>

<Notification>Primary</Notification>

<Timeout>700</Timeout>

<Description></Description>

<BsafeEncrypt>no</BsafeEncrypt>

<IPProtocol>IPV4</IPProtocol>

<EmailServer>10.1.111.100</EmailServer>

<EmailAddress>emailalert@EMC.com</EmailAddress>

<EmailSender>VPlex_CKM00000000999@EMC.com</EmailSender>

<EmailFormat>ASCII</EmailFormat>

<EmailSubject>Call Home</EmailSubject>

<STARTTLS>no</STARTTLS>

<IncludeCallHomeData>no</IncludeCallHomeData>

<InsertBefore></InsertBefore>

<PreProcess></PreProcess>

<PostProcess></PostProcess>

<HeloParameter></HeloParameter>

</ConnectConfig>

<ConnectConfig Type="Email">

<Retries>7</Retries>

<Notification>Failover</Notification>

<Timeout>700</Timeout>

<Description></Description>

<BsafeEncrypt>no</BsafeEncrypt>

<IPProtocol>IPV4</IPProtocol>

<EmailServer>10.1.111.100</EmailServer>

<EmailAddress>emailalert@EMC.com</EmailAddress>

<EmailSender> VPlex_CKM00000000999@EMC.com</EmailSender>

<EmailFormat>ASCII</EmailFormat>

<EmailSubject>Call Home</EmailSubject>

<STARTTLS>no</STARTTLS>

<IncludeCallHomeData>no</IncludeCallHomeData>

<InsertBefore></InsertBefore>

<PreProcess></PreProcess>

<PostProcess></PostProcess>

<HeloParameter></HeloParameter>

</ConnectConfig>

<ConnectConfig Type="Email">

<Retries>7</Retries>

<Notification>onSuccess/onFailure</Notification>

<Timeout>700</Timeout>

<Description></Description>

<BsafeEncrypt>no</BsafeEncrypt>

<IPProtocol>IPV4</IPProtocol>

<EmailServer>10.1.111.100</EmailServer>

<EmailAddress>customer@genericemailaddress.com</EmailAddress>

<EmailSender>VPlex_CKM00000000999@EMC.com</EmailSender>

<EmailFormat>ASCII</EmailFormat>

<EmailSubject>Call Home</EmailSubject>

<STARTTLS>no</STARTTLS>

<IncludeCallHomeData>yes</IncludeCallHomeData>

<InsertBefore></InsertBefore>

<PreProcess></PreProcess>

<PostProcess></PostProcess>

<HeloParameter></HeloParameter>

</ConnectConfig>

</ConnectEMCConfig>

service@vplexMM:/var/log/ConnectEMC/logs> ping 10.1.111.100

PING 10.1.111.100 (10.1.111.100) 56(84) bytes of data.

--- 10.1.111.100 ping statistics ---

6 packets transmitted, 0 received, 100% packet loss, time 5010ms

service@vplexMM:~> telnet 10.1.111.100 25

Trying 10.1.111.100…

Connected to 10.1.111.100

Escape character is '^]'.

220 emc.com

helo localhost

250 emc.com

mail from: VPlex_CKM00000000999@EMC.com

250 2.1.0 Ok

rcpt to:customer@genericemailaddress.com

250 2.1.0 Ok

VPlexcli:/notifications/call-home> test

call-home test was successful.


As per the above information, the customer is allowing the SMTP service on port 25 only and not the ICMP ping.

This error is expected and can be ignored once you verify that the test call home is working and appearing under /opt/emc/connectemc/archive:

service@vplexMM:/opt/emc/connectemc/archive> ll

-rw-r----- 1 service users 2814 Jun 25 13:17 RSC_CKM00000000999_062518_011656000.xml
-rw-r----- 1 service users 2814 Jun 25 10:54 RSC_CKM00000000999_062518_105401000.xml
-rw-r----- 1 service users 2814 Jun 25 11:11 RSC_CKM00000000999_062518_111102000.xml
-rw-r----- 1 service users 2814 Jun 25 11:48 RSC_CKM00000000999_062518_114834000.xml

Checking call home status is part of the health-check --full script, which does the following:

1. Checks the email server for each notification type in /opt/emc/connectemc/ConnectEMC_config.xml.

2. Pings that server. The ping can fail for several reasons: the server is not reachable via the network, the server is shut down, the ICMP service is blocked by a firewall, or the <EmailServer> entry in the ConnectEMC_config.xml file is a DNS name rather than an IP address.

As a result, the health-check --full script will flag this check as failed and show the following error:

Checking Call Home Status……………………………….. Error
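A rough manual approximation of this check (a sketch only, not the actual health-check code; it assumes the configuration file path shown earlier in this article) can be run from the management server shell:

# Extract each configured <EmailServer> from ConnectEMC_config.xml and ping it,
# which is roughly what the call home portion of health-check --full does.
grep -o '<EmailServer>[^<]*</EmailServer>' /opt/emc/connectemc/ConnectEMC_config.xml | \
  sed -e 's/<[^>]*>//g' | sort -u | while read server; do
    if ping -c 2 "$server" > /dev/null 2>&1; then
      echo "$server reachable via ICMP"
    else
      echo "$server NOT reachable via ICMP - health-check will report Error"
    fi
  done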

The current health-check script checks whether call home is enabled and generates a "Warning" state if it is disabled.

The health-check script also checks whether call home has been functioning properly with several verifications, such as: whether call homes have been generated, whether the call home emails have been sent successfully, and whether the SMTP server responds to ping.

If any of these verifications fail, the script's result is flagged with an error as shown:

Checking Call Home Status……………………………….. Error

After enabling the ICMP protocol at the firewall level between the VPLEX management server and the selected email server (ESRS or the customer's email server), the Call Home "Error" status is clean:

VPlexcli:/> health-check --full

Configuration (CONF):

Checking VPlexCli connectivity to directors……………….. OK

Checking Directors Commission……………………………. OK

Checking Directors Communication Status…………………… OK

Checking Directors Operation Status………………………. OK

Checking Inter-director management connectivity……………. OK

Checking ports status…………………………………… OK

Checking Call Home Status……………………………….. OK

Checking Connectivity…………………………………… OK

Checking COM Port Power Level……………………………. OK

Checking Meta Data Backup……………………………….. OK

Checking Meta Data Slot Usage……………………………. OK


VPLEX: 0x8a2d30aa – Dial-home – An array returned Unit Attention 6/38/07h THIN_PROVISIONING_SOFT_THRESHOLD_REACHED.

Article Number: 524009 Article Version: 2 Article Type: Break Fix



VPLEX Series,VPLEX Local,VPLEX Metro,VPLEX GeoSynchrony 5.5 Service Pack 2 Patch 1,VPLEX GeoSynchrony 5.5 Service Pack 2 Patch 2,VPLEX GeoSynchrony 5.5 Service Pack 2 Patch 3,VPLEX GeoSynchrony 5.5 Service Pack 2 Patch 4

VPLEX dials home reporting the following message:

<Event>

<SymptomCode>0x8a2d30aa</SymptomCode>

<Category>Status</Category>

<Severity>Warning</Severity>

<Status>Warning</Status>

<Component>CLUSTER</Component>

<ComponentID>director-2-1-B</ComponentID>

<SubComponent>scsi</SubComponent>

<SubComponentID></SubComponentID>

<CallHome>Yes</CallHome>

<FirstTime><Date and timestamp>Z</FirstTime>

<LastTime><Date and timestamp>Z</LastTime>

<Count>4</Count>  <-- this value will vary

<EventData><![CDATA[Thin Provisioning Soft Threshold reached – vol VPD83T3:6006016007f03e0048886ea54138e811 [Versions:MS{Dx.x.x.x.x, Dx.x.x.x, Dx.x.x.x}, Director{x.x.x.x.x}, ClusterWitnessServer{Dx.x.x.x}] RCA: An array returned Unit Attention 6/38/07h THIN_PROVISIONING_SOFT_THRESHOLD_REACHED for a storage-volume on a VPLEX write. The thin pool on the array is running out of space. Remedy: Add additional block resources to the thin pool on the array from which the storage-volume is provisioned.

]]></EventData>

<Description><![CDATA[A storage volume reported a Thin Provisioning Soft Threshold Reached error on a VPLEX write.

When VPLEX sends a SCSI write command from the host to a thin storage-volume provisioned from a thin pool that is over the pre-configured soft threshold for utilization, the storage array may return a unit attention 6/38/07h alert to the VPLEX signaling that the storage pool is running low on space.

A unit attention is a sense key which a storage array can use to communicate a change in its operational state to other devices (such as the VPLEX). When VPLEX receives this unit attention alert, it reports the notice on behalf of the array in a dial-home as a warning with the symptom code ID 0x8a2d30aa. This dial-home does not indicate an issue with the VPLEX; instead, it indicates that a threshold was surpassed on an array behind the VPLEX. VPLEX is just reporting the symptom.

A change in thin pool utilization on the back-end storage array behind VPLEX.

To resolve this issue, have the Storage Admin either add additional block resources to the thin pool on the back-end array reporting the issue or increase the soft threshold (if supported by the array) to a value greater than the current utilization.

To identify the array and storage-volume returning the unit attention, find the VPD identifier for the volume in the CDATA portion of the dial home message.

Thin Provisioning Soft Threshold reached - vol VPD83T3:60060160c9c02c00520c47ef8ac4e711  <-- Volume reporting the issue

[Versions:MS{Dx.x.x.x.x, Dx.x.x.x, Dx.x.x.x}, Director{x.x.x.x.x}, ClusterWitnessServer{unknown}] RCA: An array returned Unit Attention 6/38/07h THIN_PROVISIONING_SOFT_THRESHOLD_REACHED for a storage-volume on a VPLEX write. The thin pool on the array is running out of space. Remedy: Add additional block resources to the thin pool on the array from which the storage-volume is provisioned.

From the VPLEX CLI, run the command 'show-use-hierarchy /clusters/*/storage-elements/storage-arrays/*/logical-units/<Volume ID>' to list the details of the storage volume.

VPlexcli:/> show-use-hierarchy /clusters/*/storage-elements/storage-arrays/*/logical-units/VPD83T3:60060160c9c02c00520c47ef8ac4e711

storage-view: test_view (cluster-1)
  consistency-group: CGTEST_100 (synchronous)
    virtual-volume: Device-C1-1_1_1_vol (10G, local @ cluster-1, running)
      local-device: Device-C1-1_1 (10G, raid-0, cluster-1)
        extent: extent_CLARiiON1539_LUN_C1-4006-1_1 (10G)
          storage-volume: CLARiiON1539_LUN_C1-4006-1 (10G)
            logical-unit: VPD83T3:60060160c9c02c00520c47ef8ac4e711
              storage-array: EMC-CLARiiON-APMxxxxxxxxxxx  <-- Array where the volume is located


Re: RecoverPoint for VMs Alerts

Yes, do this first and as per your original request then configure your alerts/events.

Regards,

Rich Forshaw

Consultant Corporate Systems Engineer – RecoverPoint & VPLEX (EMEA)

Data Protection and Availability Solutions

EMC Europe Limited

Mobile: 44 (0) 7730 781169

E-mail: richard.forshaw@emc.com

Twitter: @rw4shaw


VPLEX: How to create web vendor signed certificate for VPLEX performance monitor

Article Number: 504672 Article Version: 4 Article Type: How To



VPLEX Performance Monitor,VPLEX Series,VPLEX GeoSynchrony,VPLEX Local,VPLEX Metro,VPLEX Geo,VPLEX VS2,VPLEX VS6,VPLEX GeoSynchrony 6.0 Service Pack 1 Patch 5,VPLEX GeoSynchrony 6.0 Service Pack 1 Patch 4,VPLEX GeoSynchrony 6.0 Patch 1

Issue:

If you launch the VPLEX Performance Monitor GUI with a self-signed certificate, the browser displays a certificate warning message (see the attached image).

Resolution:

1. Generate the certificate signing request (CSR) file from the VPLEX Performance Monitor VM in order to obtain a vendor signed certificate. Log in to the VPLEX Performance Monitor VM using SSH and run the command below; after you fill in the required attributes, it generates two files, the private key and the CSR (see the example in the attachment).
localhost:~ # openssl req -new -newkey rsa:2048 -nodes -keyout vplex-monitor-key.pem -out vplex-monitor-csr.csr
2. The customer uses the certificate signing request (.csr) file and submits it to the vendor CA portal to generate the certificate file.

3. Upload the certificate and key files to the directory /home/appadmin/appcentric/keys/

4. Ensure that the certificate and key files have the following names:
vplex-monitor-cert.pem
vplex-monitor-key.pem
5. Run the following commands at the localhost:~ prompt to ensure that the certificates have the correct permissions:
localhost:~ # chown appadmin:users /home/appadmin/appcentric/keys/*.pem
localhost:~ # chmod 600 /home/appadmin/appcentric/keys/*.pem
localhost:~ # ll /home/appadmin/appcentric/keys/
total 8
-rw------- 1 appadmin users 1192 Oct 6 18:36 vplex-monitor-cert.pem
-rw------- 1 appadmin users 1675 Oct 6 18:36 vplex-monitor-key.pem
6. Type the following command to restart the VPLEX Performance Monitor server:
localhost:~ # sudo forever restartall 
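As an optional sanity check (not part of the procedure above), you can confirm that the uploaded certificate and key files actually match by comparing their modulus digests; the two MD5 values should be identical:

localhost:~ # openssl x509 -noout -modulus -in /home/appadmin/appcentric/keys/vplex-monitor-cert.pem | openssl md5
localhost:~ # openssl rsa -noout -modulus -in /home/appadmin/appcentric/keys/vplex-monitor-key.pem | openssl md5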

  • VPLEX performance monitor is an independent product from VPLEX
  • For creating a web vendor signed certificate for the VPLEX GUI, see KB 495191: https://support.emc.com/kb/495191


VPLEX: VPLEX takes 3x-4x longer to free up space compared to Native SCSI UNMAP on supported arrays

Article Number: 502965 Article Version: 3 Article Type: Break Fix



VPLEX Series,VPLEX Local,VPLEX Metro,VPLEX GeoSynchrony,VPLEX GeoSynchrony 5.5 Service Pack 1,VPLEX GeoSynchrony 5.5 Service Pack 1 Patch 1,VPLEX GeoSynchrony 5.5 Service Pack 1 Patch 2,VPLEX GeoSynchrony 5.5 Service Pack 2

In the current VPLEX UNMAP SCSI command implementation, UNMAP SCSI command completion latency is expected to be 3x-4x longer than the native XtremIO/VNX/Unity/VMAX array implementations of the SCSI UNMAP command. Storage reclamation using UNMAP SCSI commands is a maintenance activity. Users should consider the increased processing time required to reclaim storage through VPLEX and plan accordingly.

Typically, the average VPLEX latency for completing an individual SCSI UNMAP command is sub-second. VPLEX provides the following UNMAP SCSI command statistics:
  • The number of UNMAP SCSI commands per second seen at the target
  • The average latency in quarters per second of UNMAP SCSI command at the target

These statistics work with the following targets:

  • front-end port
  • front-end director
  • front-end logical unit
  • host initiator port

Users must create new monitors to read these statistics. Please refer to the VPLEX documentation for details about creating new VPLEX monitors.
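A minimal sketch of creating such a monitor from the VPlexcli is shown below. The monitor name, director path, file path, and especially the statistic names are placeholders and assumptions; confirm the exact UNMAP statistic names and option syntax in the VPLEX CLI guide for your GeoSynchrony release.

VPlexcli:/> monitor create --name unmap-mon --director /engines/engine-1-1/directors/director-1-1-A --stats <unmap-ops-statistic>,<unmap-avg-latency-statistic>
VPlexcli:/> monitor add-file-sink --monitor director-1-1-A_unmap-mon --file /var/log/VPlex/cli/unmap-mon.csv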

The current VPLEX UNMAP implementation can only issue a maximum of 1 MB per UNMAP request to an XtremIO storage volume, compared to the 4 MB UNMAP a host issues when the XtremIO storage volume is used directly. This VPLEX implementation limitation is the main cause of the decreased VPLEX UNMAP performance. UNMAP is considered a maintenance activity that frees unused blocks at a host-application-convenient time, and therefore UNMAP performance was a secondary goal in the VPLEX implementation.

This is an expected behavior. Customers should consider the increased processing time required in order to reclaim storage through VPLEX and plan accordingly.


ViPR Controller: No volumes visible for catalog service Export Volume to a Host

Article Number: 526085 Article Version: 3 Article Type: Break Fix



ViPR Controller,ViPR Controller Controller 3.6 SP2,ViPR Controller Controller 3.6 SP1,ViPR Controller Controller 3.6,ViPR Controller Controller 3.5

When attempting to run an “Export Volume to a Host” order, to export a VPLEX volume to a cluster, the volumes field does not populate with a list of available volumes to be exported.

ViPR Controller sasvc service log reports the following API call to check the VirtualArray association to the cluster:

vipr1 sasvc 2018-08-08 14:31:52,380 [qtp514587349-47 - /catalog/asset-options/vipr.unassignedBlockVolume] INFO LoggingFilter.java (line 238) 196 > GET https://<VIP/FQDN>:4443/vdc/varrays/search?cluster=urn:storageos:Cluster:d08eb16f-e018-4e14-bff4-bd209d1cf8e2:vdc1

ViPR Controller sasvc service log reports the response to the above API call:

vipr1 sasvc 2018-08-08 14:31:52,685 [qtp514587349-47 - /catalog/asset-options/vipr.unassignedBlockVolume] INFO LoggingFilter.java (line 125) 196 < 200 took 305 ms{"resource":[]}



As the API response is empty, this prevents ViPR Controller from showing volumes that are available to be exported to the cluster.

The cluster that the user is attempting to export to contains hosts that are associated to more than one VirtualArray.

The API call used in Export Volume to a Host, expects the hosts in the cluster to be associated to just one VirtualArray.

This behaviour is by design.

The service catalog Export Volume to a Host is not designed to export a VPLEX volume to a cluster whose hosts are associated to more than one VirtualArray.

The service catalog “Export VPLEX Volume” should be used to export a volume to clusters of this configuration.

The API call used in “Export VPLEX Volume” is run against each host in the cluster as opposed to the cluster itself:

vipr1 sasvc 2018-08-30 12:25:55,259 [qtp514587349-1211] INFO LoggingFilter.java (line 238) 1361 > GET https://<VIP/FQDN>:4443/vdc/varrays/search?host=urn:storageos:Host:019d835c-3998-4b81-b083-5e2202b0ba3f:vdc1

As an example take the following:

  • The user has added a VPLEX Metro to ViPR Controller.
    • No VPLEX cross-connect configured
  • The user has configured 2 VirtualArrays in ViPR Controller
    • The first represents VPLEX cluster-1 and its backend array
    • The second represents VPLEX cluster-2 and its backend array
  • The user has configured a stretched host cluster with 4 hosts
    • 2 of the hosts have connectivity back to VPLEX cluster-1 and VA_1 by association
    • 2 of the hosts have connectivity back to VPLEX cluster-2 and VA_2 by association

When the user attempts to run “Export Volume to a Host” against this cluster the API expects all 4 hosts to be associated to the same VirtualArray.

When the user attempts to run Export VPLEX Volume, this API call is not made and ViPR Controller is successfully able to populate the volume field.
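For reference, both asset-option queries can be reproduced directly against the ViPR Controller REST API. This is a sketch only: it assumes a valid authentication token in $TOKEN and uses placeholder URNs.

# Per-cluster query used by "Export Volume to a Host" (returns an empty resource list in this scenario)
curl -ks -H "X-SDS-AUTH-TOKEN: $TOKEN" \
  "https://<VIP/FQDN>:4443/vdc/varrays/search?cluster=urn:storageos:Cluster:<id>:vdc1"

# Per-host query used by "Export VPLEX Volume" (returns the VirtualArray associated with each host)
curl -ks -H "X-SDS-AUTH-TOKEN: $TOKEN" \
  "https://<VIP/FQDN>:4443/vdc/varrays/search?host=urn:storageos:Host:<id>:vdc1"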


Data Migration from SVC into VPLEX

Hi there,

I would like to migrate the LUNs of my bare-metal hosts from SVC to VPLEX, if possible without downtime.

I have ESX boot LUNs in SVC, for example, which I need to move to my VPLEX. What's the best way to do this?

In SVC I was able to zone the "old" storage device into SVC and to provide the old LUNs as so-called "Image Mode VDISKs" to the same host. Migration of all these LUNs into a different pool made the old storage device obsolete after completion.

Storage is provided to my VPLEX from two Unitys and to my SVC from two V7000s.

Can I zone the SVC to my VPLEX in some way?

Should I provide storage from my Unitys to my SVC (in its own pool), move the vdisks from the V7000 devices into that Unity pool, and then provide it to my VPLEX in some way?

Any hints or best practices?

Best regards

globber
