Key Elements of a Successful Digital Workplace Strategy


The Digital Workplace plays an important role in workforce transformation. People want to work for organizations that are digitally savvy and create productive environments, yet many organizations have trouble meeting worker expectations, and IT can barely keep up with the changes. Maybe a new acquisition is underway; maybe it's simply time to revamp the tools your workers are using. Perhaps the goal is hiring smart, savvy younger workers. Whatever the challenge, many organizations are looking at the tools they provide workers with renewed interest. One of those is the next-generation Intranet, a communications and collaboration platform that improves productivity and binds everyone together.

An Effective Digital Workplace: A Powerful Force

But experience has shown that Intranets and early Digital Workplaces are messy. These large, unwieldy systems often grow organically and contain a lot of outdated information. Search doesn't work. Processes are hybrid band-aids that lower productivity. Yet, when done right, a Digital Workplace is an incredibly powerful force that unifies and connects people across the enterprise. The right tools support collaboration where people work at their own pace, naturally and effortlessly. An effective search greatly reduces the time it takes to find people and content. A good Digital Workplace breaks down some of the organizational silos and encourages participation and innovation to solve problems and create value.

Yes, a Digital Workplace is worth the journey, and a good Digital Strategy will get you there. How you go about it will determine the success of the project. What are the options? To save time, you could gather a group of people from different business areas with a stake in the outcome and bring them together to brainstorm goals and use cases. Don't forget the end users: hold one or two workshops to give them a say, document it, and you're done. This will get you a lot of insight and documents, but it doesn't lead to a cohesive strategy that communicates the vision and lays out the plan. A strategy creates direction and a sense of purpose. Without it the work will still get done, but people will have a hard time figuring out priorities and how their piece fits into the bigger picture.

A Realistic Roadmap

At Dell EMC Consulting, we've developed many Digital Workplace strategies over the years for some of the largest global organizations. We've seen the power of a successful strategy: one that is tied to a clear set of goals and values, with stories and outcomes aligned to worker needs. A strategy level-sets expectations with a realistic roadmap to get it done, and each outcome can be measured and adapted to change. This is what we've learned and recommend: to ensure success, create an effective Digital Workplace strategy that connects the details and results in an achievable plan.

Here’s a simple example using a Communications goal. One challenge many large enterprises face is how to enable more dialogue between leaders and staff. This is especially important when talking about latest results and strategy. Organizations need honest, open communications to be more responsive and initiate change. This goal is often discussed on Digital Workplace projects.

The example below shows this goal, the supporting elements and how they connect in the strategy.

The Digital Workplace Strategy ensures goals are implemented and met. Product owners of different technologies will know what to budget and when it’s needed. Communications professionals can alert business owners on the upcoming improvements and how they’ll engage. Everyone involved with the project knows what’s needed and why. Most important, budgets and timelines can be set based on tangible outcomes that everyone agrees on.


The best thing about a Digital Workplace Strategy is that it can be started right away. Many organizations have an Intranet in place and are thinking about upgrading their suite of tools. They have an idea of the goals and outcomes they want to achieve, but implementing technology takes time. A Digital Workplace Strategy begins with critical user research that can be started while IT decisions are being made. Workforce transformation is an ongoing journey that provides tremendous value, and a well-planned Digital Workplace is a major part of that story.

Learn more about our perspective on Digital Workplace by downloading our eBook Empower the workforce with consumer-grade, personalized experiences:

Related Blogs:

Trends Impacting Digital Workplace Strategies

Office 365 Security and Compliance Tools for Collaboration Apps – Are You Covered?

4 Tools and Techniques to Create Change and Empower the Workforce with Personalized Experiences (eBook)

The post Key Elements of a Successful Digital Workplace Strategy appeared first on InFocus Blog | Dell EMC Services.


SourceOne Email Management – Index sets marked with “Unperformed Transaction” or “Missing Items” status due to transaction file with .xvlts extension stuck in index DropDir folder and multiple copies of the files found in Intermediary folder[4]

Article Number: 521257 Article Version: 5 Article Type: Break Fix

SourceOne Email Management,SourceOne

You find one or more SourceOne index transaction files with the XVLTS extension stuck in a processing loop within the index DropDir folder, and multiple copies of the same files are created within the "Intermediary" sub-folder.

If the issue is not identified in time, it may have the following impact:

  • Thousands of copies of the XVLTS file, all the same size and belonging to the same index set, can end up in the Intermediary folder. Here is an example of how those moved files within Intermediary may look:
User-added image

In the above example, ES1Mixed is the archive folder name, followed by YYYYMM and the index set number. After the index set number, an incremental number is added with each file creation, because the original XVLTS file is already present in the folder. All copies have the same size.

  • If multiple transaction files have the issue, they may cause a backlog of index transaction files within the DropDir folders, because the maximum number of index processes that can run in the SourceOne environment are all busy processing the affected files.
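Before applying the fix, it can help to gauge how many duplicate copies have piled up per index set. Below is a minimal POSIX-shell sketch; the real share is a Windows path, and the file names here are hypothetical stand-ins modeled on the example above:

```shell
# Demo stand-ins so the sketch is runnable: three copies of one index
# set's transaction file plus a single copy of another.
mkdir -p Intermediary
touch Intermediary/es1mixed_201710_001.xvlts \
      Intermediary/es1mixed_201710_001_1.xvlts \
      Intermediary/es1mixed_201710_001_2.xvlts \
      Intermediary/es1mixed_201804_001.xvlts

# Strip the extension, keep the first three underscore-separated tokens
# (archive folder, YYYYMM, index set number), and count copies per set.
ls Intermediary | sed 's/\.xvlts$//' | cut -d_ -f1-3 | sort | uniq -c
# shows 3 copies for es1mixed_201710_001 and 1 for es1mixed_201804_001
```

Index sets with abnormally high counts are the ones stuck in the processing loop.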

Event messages similar to the following may be found within the ExAsIdxObj.exe.log:

Unable to remove '\HostNameES1_MsgCenterUnpack_AreaEs1Mixed201710201805140308275B8F6E32246377675963F2E4B99AFF166449CD5FC4E695D200.EMCMF.MD'. OsError: 67 |IndexRun.cpp(2062)| Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName
(0x86042B76) Unknown error (0x80030043) |IndexThread.cpp(3038)| Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName
[\HostNameEs1_IndexEs1Mixed20171001] Aborting index run!!!!! |IndexRun.cpp(1211)| Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName
StopAncillaryRun \HostNameEs1_IndexEs1Mixed20171001 |IdxAncillaryDB.cpp(295)| Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName
Marking local idx state as missmsg E:ExIndexTempEs1Mixed_201710_001Index |CIdxState.cpp(279)| Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName
Es1Mixed_201710_001] Not copying index to network due to previous fatal error. (0x86042B86) |IndexThread.cpp(3279)| Job Id: -1; Activity Name: HostName; Activity Id: -1; Activity Type: -1; HostName

This problem is caused by a software defect.

There are two conditions identified which may cause this issue:

1. Index transaction files contain transactions that reference an EMCMF file location within the Unpack_Area that is inaccessible or may have changed (for example, if the Message Center path was changed).

2. An index transaction file contains transaction entries where the path to the EMCMF files is corrupt, or the XVLTS file itself has been corrupted due to an environment issue.

This issue will be resolved in patch or Service Pack versions of EMC SourceOne Email Management later than 7.2 SP6 Hotfix 2. (Dell EMC SourceOne Patch and Service Pack kits are available for download via


  1. Stop the "EMC SourceOne Index" service on all SourceOne native archive servers with the index role.
  2. Check Task Manager on the index server hosts and make sure no "ExAsIdxObj.exe" or "ExAsElasticIdxObj.exe" processes are running. The SourceOne index service from step 1 waits for ExAsIdxObj.exe or ExAsElasticIdxObj.exe to stop before the service stops. The executable for the index service is ExAsIndex.exe.
  3. Once the index service from step 1 stops, navigate to the index file share and go to the DropDir\Intermediary folder.
  4. Based on the file listing within the Intermediary folder, make a list of the impacted index sets. For example, based on the screenshot provided above, the index set transaction files in question belong to the "es1mixed_201710_001" and "es1mixed_201804_001" index sets.
  5. Create a folder within the Intermediary folder that you will use in the next step to back up files from the index DropDir folder.
  6. Using the list created in step 4, identify any files starting with the same name, YYYYMM, and index set number, and move those files to the backup folder created in step 5.
  7. Start the SourceOne index service on all indexing hosts where the service was stopped in step 1.
  8. The index sets identified above need to be rebuilt, since some of their transaction files did not get processed. The SourceOne Administration Console can be used to submit index sets for rebuild. For detailed instructions on how to rebuild index sets, follow the SourceOne Email Management Administration Guide.
  9. On a successful rebuild, the status of the index sets should change from "Unperformed Transaction" or "Missing Items" to "Available".
  10. If the index sets rebuild successfully, the files related to the index sets in question, located within the Intermediary folder and the backup folder (from step 5), can be deleted.
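Steps 4 through 6 can be sketched as follows. This is an illustrative POSIX-style sketch using demo stand-in files; the real DropDir lives on the Windows index file share, and the index set names are the hypothetical ones from the example above.

```shell
# Demo setup so the sketch is runnable: a local stand-in for the DropDir.
DROPDIR="./DropDir"
mkdir -p "$DROPDIR/Intermediary"
touch "$DROPDIR/es1mixed_201710_001.xvlts" "$DROPDIR/es1mixed_201804_001.xvlts"

# Step 5: create a backup folder within the Intermediary folder.
BACKUP="$DROPDIR/Intermediary/backup"
mkdir -p "$BACKUP"

# Step 6: move every DropDir transaction file belonging to an impacted
# index set into the backup folder.
for idxset in es1mixed_201710_001 es1mixed_201804_001; do
  find "$DROPDIR" -maxdepth 1 -type f -iname "${idxset}*" \
       -exec mv {} "$BACKUP/" \;
done

ls "$BACKUP"   # both transaction files now sit in the backup folder
```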



OneFS SmartDedupe: Deduplication Assessment

To complement the actual SmartDedupe job, a dry-run Dedupe Assessment job is also provided to help estimate the amount of space savings that would result from running deduplication on a particular directory or set of directories. The dedupe assessment job reports a total potential space savings. It does not differentiate between a fresh run and a run where a previous dedupe job has already done some sharing on the files in that directory, nor does it provide the incremental differences between instances of the job. Isilon recommends running the assessment job once on a specific directory prior to starting an actual dedupe job on that directory.

The assessment job runs similarly to the actual dedupe job, but uses a separate configuration. It also does not require a product license and can be run prior to purchasing SmartDedupe in order to determine whether deduplication is appropriate for a particular data set or environment. This can be configured from the WebUI by browsing to File System > Deduplication > Settings and adding the desired directory (or directories) in the 'Assess Deduplication' section.


Alternatively, the following CLI syntax will achieve the same result:

# isi dedupe settings modify --add-assess-paths /ifs/data

Once the assessment paths are configured, the job can be run from either the CLI or WebUI. For example:


Or, from the CLI:

# isi job types list | grep -i assess

DedupeAssessment Yes LOW

# isi job jobs start DedupeAssessment

Once the job is running, its progress can be viewed by first listing the job to determine its job ID.

# isi job jobs list

ID Type State Impact Pri Phase Running Time


919 DedupeAssessment Running Low 6 1/1 –


Total: 1

And then viewing the job ID as follows:

# isi job jobs view 919

ID: 919

Type: DedupeAssessment

State: Running

Impact: Low

Policy: LOW

Pri: 6

Phase: 1/1

Start Time: 2019-01-21T21:59:26

Running Time: 35s

Participants: 1, 2, 3

Progress: Iteration 1, scanning files, scanned 61 files, 9 directories, 4343277 blocks, skipped 304 files, sampled 271976 blocks, deduped 0 blocks, with 0 errors and 0 unsuccessful dedupe attempts

Waiting on job ID: –

Description: /ifs/data

The running job can also be controlled and monitored from the WebUI:


Under the hood, the dedupe assessment job uses a separate index table from the actual dedupe process. For the sake of efficiency, the assessment job also samples fewer candidate blocks than the main dedupe job, and obviously does not actually perform deduplication. This means that the assessment will often provide a slightly conservative estimate of the actual deduplication efficiency that's likely to be achieved.

Using the sampling and consolidation statistics, the assessment job provides a report which estimates the total dedupe space savings in bytes. This can be viewed from the CLI using the following syntax:

# isi dedupe reports view 919

Time: 2019-01-21T22:02:18

Job ID: 919

Job Type: DedupeAssessment


Time: 2019-01-21T22:02:18


Dedupe job report:{

Start time = 2019-Jan-21:21:59:26

End time = 2019-Jan-21:22:02:15

Iteration count = 2

Scanned blocks = 9567123

Sampled blocks = 383998

Deduped blocks = 2662717

Dedupe percent = 27.832

Created dedupe requests = 134004

Successful dedupe requests = 134004

Unsuccessful dedupe requests = 0

Skipped files = 328

Index entries = 249992

Index lookup attempts = 249993

Index lookup hits = 1


Elapsed time: 169 seconds

Aborts: 0

Errors: 0

Scanned files: 69

Directories: 12

1 path:


CPU usage: max 81% (dev 1), min 0% (dev 2), avg 17%

Virtual memory size: max 341652K (dev 1), min 297968K (dev 2), avg 312344K

Resident memory size: max 45552K (dev 1), min 21932K (dev 3), avg 27519K

Read: 0 ops, 0 bytes (0.0M)

Write: 4006510 ops, 32752225280 bytes (31235.0M)

Other jobs read: 0 ops, 0 bytes (0.0M)

Other jobs write: 41325 ops, 199626240 bytes (190.4M)

Non-JE read: 1 ops, 8192 bytes (0.0M)

Non-JE write: 22175 ops, 174069760 bytes (166.0M)

Or from the WebUI, by browsing to Cluster Management > Job Operations > Job Types:


As shown, the assessment report for job 919 in this case discovered potential data savings of 27.8% from deduplication.
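The reported figure can be sanity-checked from the report's own counters: deduped blocks divided by scanned blocks, using the numbers from the report above.

```shell
# Dedupe percent = deduped blocks / scanned blocks * 100
awk 'BEGIN { printf "%.3f%%\n", 2662717 / 9567123 * 100 }'
# prints 27.832%, matching the "Dedupe percent" field in the report
```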

Note that the SmartDedupe dry-run estimation job can be run without any licensing requirements, allowing an assessment of the potential space savings that a dataset might yield before making the decision to purchase the full product.



Scaling NMT with Intel® Xeon® Scalable Processors

With continuous research interest in the field of machine translation, and with efficient neural network architecture designs improving translation quality, there is a great need to improve its time to solution. Training a well-performing Neural Machine Translation (NMT) model still takes days to weeks depending on the hardware, the size of the training corpus, and the model architecture.

Intel® Xeon® Scalable processors provide an incredible leap in scalability, and over 90% of the Top500 supercomputers run on Intel. In this article we show some of the training considerations and the effectiveness of scaling an NMT model using Intel® Xeon® Scalable processors.

An NMT model reads a source sentence in one language and passes it to an encoder, which builds an intermediate representation; a decoder then processes the intermediate representation to produce the translated target sentence in another language.

Figure 1: Encoder-decoder architecture

The figure above shows an encoder-decoder architecture. The English source sentence, "Hello! How are you?" is read and processed by the architecture to produce a translated German sentence, "Hallo! Wie geht es Ihnen?". Traditionally, Recurrent Neural Networks (RNNs) were used in encoders and decoders, but other neural network architectures such as Convolutional Neural Networks (CNNs) and attention-mechanism-based models are also used.

Architecture and environment

The Transformer model is one of the most interesting architectures in the field of NMT; it is built with variants of the attention mechanism in the encoder-decoder part, thereby replacing the traditional RNNs in the architecture. This model achieved state-of-the-art results on English-German and English-French translation tasks.


Figure 2: Multi-head attention block

The above figure shows the multi-head attention block used in the Transformer model. At a high level, the scaled dot-product attention can be thought of as finding the relevant information, the values (V), based on the Query (Q) and Keys (K), and multi-head attention can be thought of as several attention layers running in parallel to capture distinct aspects of the input.
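In the standard notation of the original Transformer formulation, the scaled dot-product attention described above can be written as:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right) V
```

where d_k is the dimension of the keys; scaling by the square root of d_k keeps the dot products from growing so large that the softmax saturates.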

We use TensorFlow's official implementation of the Transformer model, and we've added Horovod to perform distributed training. The WMT English-German parallel corpus with 4.5M sentences was used to train the model.

The tests described in this article were performed in-house on the Zenith supercomputer in the Dell EMC HPC and AI Innovation Lab. Zenith is a Dell EMC PowerEdge C6420-based cluster consisting of 388 dual-socket nodes powered by Intel® Xeon® Scalable Gold 6148 processors and interconnected with an Intel® Omni-Path fabric.

System Information

CPU Model: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Operating System: Red Hat Enterprise Linux Server release 7.4 (Maipo)
TensorFlow Version: 1.10.1 with Intel® MKL
Horovod Version:
MPI Version: Open MPI 3.1.2

Note: We used a specific Horovod branch to handle sparse gradients, which is now part of the main branch in their GitHub repository.

Weak scaling, environment variables and TF configurations

When training using CPUs, weak scaling, environment variables, and TF configurations play a vital role in improving the throughput of a deep learning model. Setting them optimally can yield additional performance gains.

Below are suggestions based on our empirical tests when running 4 processes per node for the transformer (big) model on 50 Zenith nodes. We found that setting these variables in all our experiments improved throughput, and that OMP_NUM_THREADS should be adjusted based on the number of processes per node.

Environment Variables:



export KMP_AFFINITY=granularity=fine,verbose,compact,1,0

TF Configurations:



Experimenting with weak scaling options allows you to find the optimal number of processes to run per node, such that the model fits in memory and performance doesn't deteriorate. For some reason, TensorFlow creates an extra thread. Hence, to avoid oversubscription, it's better to set OMP_NUM_THREADS to 9, 19, or 39 when training with 4, 2, or 1 processes per node respectively. Although we didn't see this affect throughput in our experiments, it may affect performance in a very large-scale setup.

Performance can be improved by threading. This can be done by setting OMP_NUM_THREADS such that the product of its value and the number of MPI ranks per node equals the number of available CPU cores per node. The KMP_AFFINITY environment variable controls how OpenMP threads are bound to physical processing units. KMP_BLOCKTIME sets the time in milliseconds that a thread should wait after completing a parallel execution before sleeping. TF configurations such as intra_op_parallelism_threads and inter_op_parallelism_threads adjust the thread pools, thereby optimizing CPU performance.
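As a concrete sketch of the thread arithmetic above, assuming 40 cores per Zenith node (the KMP_BLOCKTIME value is illustrative, not a measured recommendation):

```shell
# Leave one core free for the extra thread TensorFlow creates:
# OMP_NUM_THREADS = cores_per_node / ranks_per_node - 1
cores_per_node=40
for ranks in 4 2 1; do
  echo "ranks=$ranks OMP_NUM_THREADS=$(( cores_per_node / ranks - 1 ))"
done
# yields 9, 19 and 39 threads, matching the values suggested above

export OMP_NUM_THREADS=9     # for 4 ranks per node
export KMP_BLOCKTIME=0       # illustrative; tune empirically
export KMP_AFFINITY=granularity=fine,verbose,compact,1,0
```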


Figure 3: Effect of environment variables

The results above show a 1.67x improvement when the environment variables are set correctly.

Faster distributed training

Training a large neural network architecture can be time consuming, even for rapid prototyping or hyperparameter tuning. Thanks to distributed training and open-source frameworks like Horovod, a model can be trained using multiple workers. In our previous blog we showed the effectiveness of training an AI radiologist with distributed deep learning on Intel® Xeon® Scalable processors. Here, we show how distributed training improves the performance of a machine translation task.


Figure 4: Scaling Performance

The above chart shows the throughput of the transformer (big) model when trained using 1 to 100 Zenith nodes. We see near-linear performance when scaling up the number of nodes. Based on our tests, which include setting the correct environment variables and the optimal number of processes, we see a 79x improvement on 100 Zenith nodes with 2 processes per node, compared to the throughput on a single node with 4 processes.

Translation Quality

An NMT model's translation quality is measured in terms of the BLEU (Bi-Lingual Evaluation Understudy) score, a measure that compares the machine-translated output against human reference translations.

In a previous blog post we explained some of the challenges of large-batch training of deep learning models. Here, we experimented with a large global batch size of 402k tokens to determine the model's performance on the English-to-German task. Most of the hyperparameters were set the same as for the transformer (big) model; the model was trained using 50 Zenith nodes with 4 processes per node and a local batch size of 2010. The learning rate grows linearly to 0.001 over 4,000 steps and then follows inverse-square-root decay.
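The warmup-then-decay schedule described above can be sketched numerically. The functional form below (linear ramp to the peak, then inverse-square-root decay) is the standard Transformer schedule and is our assumption of the exact formula used:

```shell
# lr(step) = peak * min(step / warmup, sqrt(warmup / step))
lr() {
  awk -v s="$1" 'BEGIN {
    peak = 0.001; warmup = 4000
    r = s / warmup            # linear warmup term
    d = sqrt(warmup / s)      # inverse-square-root decay term
    printf "%.6f\n", peak * (r < d ? r : d)
  }'
}

lr 2000    # mid-warmup:    0.000500
lr 4000    # end of warmup: 0.001000
lr 16000   # decay phase:   0.000500
```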

                                        Case-Insensitive BLEU   Case-Sensitive BLEU
TensorFlow Official Benchmark Results   28.9                    -
Our results                             29.15                   28.56

Note: The case-sensitive score is not reported in the TensorFlow Official Benchmark.

The table above shows our results on the test set (newstest2014) after training the model for around 2.7 days (26,000 steps). We can see a clear improvement in translation quality compared to the results posted in the TensorFlow Official Benchmarks.


In this post we showed how to effectively train an NMT system using Intel® Xeon® Scalable processors, along with some best practices for setting the environment variables and the corresponding scaling performance. Based on our experiments, and by following other research work on NMT to understand the important aspects of scaling an NMT system, we were able to obtain better translation quality and speed up the training process. With growing research interest in the field of neural machine translation, we expect to see many interesting and improved NMT models in the future.



Monitoring Azure Stack Update Status

Microsoft has good documentation surrounding the Azure Stack update process, so my goal is not to repeat that content but to add some color by sharing some of the commands that I use. To do this I’ll share the example script from Microsoft and then my variation of the script.

Creating the Privileged Endpoint Session


This establishes the Privileged Endpoint session. You could accomplish this using Enter-PSSession, but doing it this way allows you to stay in the context of your workstation and gives you access to local resources like drives (for output files) and installed modules (for additional output processing). This is an important command to know and should be in the toolbelt of any Azure Stack Cloud Operator.

Microsoft Example:

$cred = Get-Credential

$pepSession = New-PSSession -ComputerName <Prefix>-ercs01 -Credential $cred -ConfigurationName PrivilegedEndpoint


The Microsoft approach of storing the credential in a variable is useful if you need to use the credential again, but in this scenario it's not necessary. The modified command below consolidates the same process into a single line and still doesn't expose the PEP password in the script. When executing the command, the login prompt will appear with the username pre-populated; simply enter the password and log in.

My Example:

$pepSession = New-PSSession -ComputerName <ERCS Name or IP> -ConfigurationName PrivilegedEndpoint -Credential <Domain>\CloudAdmin

Getting the High-Level Status of the Current Update Process


This one is great to do a quick check of the update process, especially because it replaces the need to click through the admin portal to bring up the Update blade.

Microsoft Example:

$statusString = Invoke-Command -Session $pepSession -ScriptBlock { Get-AzureStackUpdateStatus -StatusOnly }



Here we have a script that creates a variable and then uses the variable to access a property. By wrapping the first command in parentheses I can access the property on the same line. This is a useful trick in PowerShell for consolidating commands when using the console. It also allows me to see the most recent status by running the command without having to refresh the variable.

My Example:

(Invoke-Command -Session $pepSession -ScriptBlock {Get-AzureStackUpdateStatus -StatusOnly}).Value

Getting the Full Update Status with Details


This is the command to use if you need more detail regarding the current steps in the update process. It uses the Get-AzureStackUpdateStatus cmdlet, which returns the full update output as XML, but filters out everything except the currently executing steps. I find this helpful for getting an understanding of the update process, but it is invaluable when you are trying to work through a failure. This information is also available in the Admin Portal through the Update Status blade, but I find this method a lot easier than downloading and scanning through an XML document.

Microsoft Example:

[xml]$updateStatus = Invoke-Command -Session $pepSession -ScriptBlock { Get-AzureStackUpdateStatus }



Similar to the previous command, we declare a variable and then use that variable to access a method. By applying the same principle as before we can wrap the first step in parentheses, but in this case it also includes the cast to XML. This is needed to access the SelectNodes() method in the second step. You can also change "InProgress" to "Error" to see tasks that have failed.

My Example:

([xml](Invoke-Command -Session $pepSession -ScriptBlock {Get-AzureStackUpdateStatus})).SelectNodes("//Step[@Status='InProgress']")

Getting the Verbose Azure Stack Update Log


When things have gone wrong during an update and you’re trying to understand why, that’s when you break out this command. It will output the verbose log data from the latest update session to a text file for review. This will include Informational (“VerboseMessage”), Warning (“WarningMessage”), and Exception (“ErrorMessage”) records.

Microsoft Example:

  1. None.


Since there is currently no example in the Microsoft documentation, we’ll jump straight to how I run the command. This will output the verbose update log to a text file in a specified location. It’s pretty straightforward, but I’ve added the date and time to the output file name because I want to allow for this command to be run multiple times during the course of an update cycle.

My Example:

Invoke-Command -Session $pepSession -ScriptBlock {Get-AzureStackUpdateVerboseLog} | Out-File -FilePath <FileName with Path>_$(Get-Date -Format "yyyyMMdd_HHmmss").txt

Most of the changes shown here boil down to one important aspect of PowerShell…the way you execute on the console is different from the way you execute in a script. When scripting I have 2 primary goals:

  • Perform a repeatable activity
  • Make the script easy to read, understand, and by extension…maintain

When I’m executing commands from the console the goal is generally the shortest path to the result I’m looking for. I will avoid storing a value in a variable unless I need to use it more than once. I am also more likely to use aliases for cmdlets because I’m not looking to save my work, so readability is not as important.

I hope that you find this information useful and look forward to hearing your feedback.



Announcing a New look for SolVe Online – Coming January 28, 2019


On January 28th, 2019, SolVe Online is getting a new look!

SolVe Online has a fresh new look, with new features to help you find procedures and other related information you’re looking for. If you are not familiar with SolVe, click here to learn more!

SolVe Online Link:

New features

The following are the new features associated with the updated SolVe Online page:

Search – You can now search for Procedures, Knowledgebase Articles, Advisories, or directed selections (SolVe Route).

Example: Searching for VNX5600 returns procedures, commonly searched/used selections, and KB articles.

  • My Content tab – a history of products and procedures you have used, along with related advisories for your product

  • Advisories tab – Technical and Security Advisories for all products

  • Top 10 Trending Topics tab – Most viewed monthly knowledgebase articles

  • Tools & Forms tab – Links to external tools and forms

  • All Products tab – List of all available Product (Procedure) Generators

  • Support tab – Feedback (contact) information, Terms of use, user details, and ‘changes by product release’ information.

Home Button – located in the upper left corner next to the Dell EMC SolVe Online name.

Training is available!

    • Training will be available on our SolVe page (PowerPoint, and Video)


For Questions, Concerns, Comments, please feel free to reach out to us!

Contact us!



Dell EMC Unity: Dynamic pools minimum drives count on creation wizard (User correctable)

Article Number: 503066 Article Version: 4 Article Type: How To

Unity 300,Unity 350F,Unity 400,Unity 400F,Unity 450F,Unity 500,Unity 500F,Unity 550F,Unity 600,Unity 600F,Unity All Flash,Unity Family

On the Dynamic Pool creation wizard, there is a minimum number of drives that must be selected to create the pool. This number depends directly on the RAID type selected, and a warning is shown if the minimum drive count is not met. The table below shows the relationship between the RAID type, the RAID width, and the minimum number of drives needed to create the pool. The table only shows the smallest RAID widths supported and the minimum number of drives it takes to create them. As an example, the smallest RAID 5 width the system supports is 4+1. The smallest Dynamic Pool created with the RAID 5 4+1 configuration is 6 drives. The minimum drive count includes the number of drives specified in the RAID width, plus an extra drive to satisfy the spare space requirement.

User-added image
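The rule described above can be expressed as a small helper; this is a sketch of the stated rule, not a product tool. The RAID 5 4+1 case from the text works out as:

```shell
# Minimum drives = data drives + parity drives + 1 drive of spare space
min_drives() { echo $(( $1 + $2 + 1 )); }

min_drives 4 1   # RAID 5 4+1 -> 6 drives, as in the example above
```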



ViPR SRM: VMAX Discovery fails ERROR: Could not get SYMAUTH information

Article Number: 503064 Article Version: 3 Article Type: Break Fix

ViPR SRM 4.0,ViPR SRM 4.1,ViPR SRM 3.7 SP1

The customer's VMAX discovery fails with the error below for SYMAUTH.


The issue is caused because the user account used in ViPR SRM to connect to the SE server was not added to the user list.

Even though SYMAUTH was disabled, as shown below, the user was not added.


Use the command below to check the user list.

symauth -sid <SymmID> -users list

If the user is not in the list, as shown in the image below, add the user to the list.

Note: Enabling SYMAUTH is not required; you just need to add the user.

User Add

