Re: Re: unity migration vdm,usermapper,multiprotocol questions

castleknock wrote:

This differs from VNX behaviour, where secmap created a local UID reference to ‘hide’ the lack of a Unix account rather than simply denying SMB access. Is this a correct read? If so, it explains the lack of any references to a secmap import during VNX migration.

The difference isn't in secmap.

secmap is not a mapping method – it's merely a cache, so that we don't have to make repeated calls to the external mapping source, which can take time.

The difference is with usermapper

usermapper was only ever meant to be a mapping method for CIFS-only file systems, but on VNX/Celerra this wasn't enforced.

The manuals told you clearly to disable usermapper if you are doing multi-protocol, but many customers didn't do that – either because they didn't know, or out of convenience.

So they ended up with a config where some users were mapped through AD/NIS/ntxmap and the ones that couldn't be mapped got a UID from usermapper.

In Unity we improved this:

usermapper is per NAS server – and not globally per data mover

by default usermapper is disabled for a multi-protocol NAS server

instead we added options for a default Unix/Windows user that get used if AD/NIS/ntxmap are unable to map the user – options that didn't exist in VNX/Celerra

So if you use the defaults on a multi-protocol NAS server and we cannot map a user, then access is denied

You can then either:

– make sure this user is covered by the mapping sources

– configure the default Unix user

– enable automatic user mapping (usermapper)

this is explained in detail with flowcharts in the multi-protocol manual that I mentioned
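If you want to check which of these options is in effect on a given NAS server, you can see the settings in Unisphere or with uemcli. A minimal sketch, assuming uemcli is installed and you know the management IP and credentials – the exact attribute names vary by release, so check the built-in help for /net/nas/server:

$ uemcli -d <mgmt_IP> -u <user> -p <password> /net/nas/server show -detail

The detailed view should include, per NAS server, the multiprotocol sharing state and the default Unix/Windows user and automatic mapping settings; the matching set action on /net/nas/server is what changes them.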

Keep in mind, though, that just enabling usermapper like on VNX is convenient, but it also makes changes and troubleshooting more difficult

This is because secmap entries never expire or get updated

For example, if a user connects to a NAS server before you have configured their account in the AD/NIS/ntxmap mappings, they will get a UID from usermapper

Then, if the admin later adds the account to AD/NIS/ntxmap, this account will still use the UID from usermapper on this NAS server, but on a new NAS server it will get the UID from the mapping source
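On VNX/Celerra you can see exactly what secmap has cached for a Data Mover or VDM from the Control Station. A minimal sketch, assuming the VNX-style CLI – check the server_cifssupport man page on your release for the full set of -secmap options:

$ server_cifssupport server_2 -secmap -list

This lists the cached SID-to-UID/GID entries, which helps when deciding whether a stale usermapper-generated entry needs correcting before you move to a deterministic mapping source.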

Also, since usermapper is now per NAS server, the same user will get different UIDs on different NAS servers

bottom line – if you want full multi-protocol then use a deterministic mapping method and not usermapper

Related:

ViPR Controller : Order to remove an NFS file system fails

Article Number: 524432 Article Version: 3 Article Type: Break Fix



ViPR Controller,VNX/VNXe Family

The user is unable to delete an NFS file system on a VNX array in ViPR Controller.

ViPR Controller UI errors

[ERROR] Fri Jun 22 08:58:20 UTC 2018 Error 12000: Message: Operation failed due to the following error: Failure

Description: Check File System dependencies: NFS and CIFS exports and snapshots

ViPR Controller queries the VNX (CheckpointQueryParams XML API) to check for all snapshots on the entire array.

If Snapsure is not enabled/licensed on the VNX array, this will cause the API query to fail.

e.g.

Time on CS: Mon Jun 18 18:12:50 CEST 2018

Output from: /nas/bin/nas_license -list

key status value

site_key online 57 93 7c 7a

cifs online

nfs online

replicatorV2 online

SnapSure is a prerequisite for ViPR Controller in VNX File environments.

See the “VIPR Controller Virtual Data Center Requirements and Information Guide” (https://community.emc.com/docs/DOC-57470) for more details:

(“VNX SnapSure is installed, configured, and licensed.”)

This is a new implementation of VNX File within ViPR Controller

Enable Snapsure on the VNX Array.

1. In the Unisphere GUI, select the name of the Control Station.

2. Then you will get the full menu for the VNX cluster and on the left you will see the license sub-menu.

3. Verify if the “Snapsure” checkbox is checked. If it is, check that it is validated, and if it is not, validate it.
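The license state can also be checked and, if needed, enabled from the Control Station CLI. A minimal sketch, assuming the system is entitled to SnapSure – confirm the exact syntax against the nas_license man page for your VNX OE for File release:

$ /nas/bin/nas_license -list              # "snapsure" should appear with status "online"

$ /nas/bin/nas_license -create snapsure   # enable the SnapSure package if it is not listed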

Before deleting the file system, ViPR Controller queries the VNX (CheckpointQueryParams XML API) to check for all snapshots on the entire array.

vipr3 vipr3 controllersvc 2018-06-22 08:58:08,844 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 84) >> "<?xml version="1.0" encoding="UTF-8" standalone="yes"?><RequestPacket xmlns="http://www.emc.com/schemas/celerra/xml_api"><Request><Query><CheckpointQueryParams/></Query></Request></RequestPacket>"

vipr3 vipr3 controllersvc 2018-06-22 08:58:08,844 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG EntityEnclosingMethod.java (line 508) Request body sent


ViPR Controller errors out with the output below:

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << "<ResponsePacket xmlns="http://www.emc.com/schemas/celerra/xml_api">[n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Response>[n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Fault maxSeverity="error">[n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Problem messageCode="14227210241" facility="APL" component="API" message="APL subsystem query failed." severity="error">[n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Description>Appliance Layer (APL) subsystem that the XML API server talks to failed to execute a query. This indicates a problem with the APL subsystem in Control Station software.</Description>[n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,769 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Action>If the problem persists, collect support materials by running the /nas/tools/collect_support_materials script or the newer /nas/tools/automaticcollection script. Note any symptoms. If you require more information on the materials collection process, refer to the Problem Resolution Roadmap for Celerra, available on EMC Powerlink, or EMC Knowledgebase Support Solution number emc135846. For more information on this message, use the text from the error message's brief description or the message's ID to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support > Knowledgebase Search > Support Solutions Search.</Action>[n]"


In the VNX Array support_materials logs, it shows that there is an issue with Snapsure licensing.

(cel_api.log)

Jun 22, 15:12:24 APL reply is:

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>

<APLTask complete="true" description="Query checkpoints" failed="true" originator="username@localhost" xmlns="http://www.emc.com/schemas/celerra/apl_1.0">

<Statuses>

<Status creationTime="1530623544">

<MessageArgList>

<MessageArg argName="license" argType="8">

<string>snapsure</string>

</MessageArg>

</MessageArgList>

</Status>

</Statuses>

</APLTask>

Jun 22, 15:12:25 Creating API status ————–>

Jun 22, 15:12:25 Diagnostics tag: 1646045cef9

com.emc.celerra.api.RequestException: com.emc.nas.ccmd.common.MessageInstanceImpl@50020001

at com.emc.celerra.api.apl.AbstractQuery.makeAplListCall(AbstractQuery.java:144)

Related:

Re: secondary control station

This is expected with dual Control Stations, as the second Control Station works as a standby to the primary Control Station. The NAS service runs only on one Control Station, which is the active one at that time. So in the normal case the primary CS is the active one and the secondary CS is the standby.

You may run the /nas/sbin/getreason command on the active CS – it will tell you that slot 0 is the primary CS and slot 1 is the secondary.
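Typical getreason output looks roughly like this (an illustrative sketch – reason codes and the number of slots vary with the model and Data Mover count):

$ /nas/sbin/getreason

10 - slot_0 primary control station

11 - slot_1 secondary control station

5 - slot_2 contacted

5 - slot_3 contacted

Here 10 and 11 identify the primary and secondary Control Station, and 5 indicates a Data Mover that is up and contacted.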

The primary CS has all NAS volumes mounted and the NAS service available – the standby CS only has its internal volumes.

If the primary (active) CS fails, or an administrator fails over the CS functionality manually (the /nas/sbin/cs_standby command), then the secondary CS becomes the primary, or active, one.

In Celerra Manager, if you go to the Control Station property page, it should show “standby ready“. Similarly, the /nas/sbin/getreason command will tell you whether the standby CS is available or not. If so, there is no action needed from your end.

Hope this helps,

Thanks,

Sandip

Related:


VNX1 Series: VNX Data mover Failover did not occur after a hardware failure of CS (User Correctable)

Article Number: 497666 Article Version: 4 Article Type: Break Fix



VNX1 Series,VNX2 Series,VNX5200,VNX5300,VNX5400,VNX5500,VNX5600,VNX5700,VNX5800,VNX7500,VNX7600,VNX8000

A VNX Control Station failure occurred, and before the hardware was replaced a fault occurred which should normally have triggered a Data Mover failover, but no Data Mover failover happened.

On an array with a single Control Station, when a hardware failure on the Control Station leaves it unbootable or unable to correctly run the NAS Control Station services that manage the array, any subsequent event which would normally trigger a failover of a Data Mover will not do so. The NAS Control Station and its management services are required to perform a Data Mover failover. An inoperable Control Station, or one that has the NAS services in a stopped state, cannot trigger a Data Mover failover.

In a dual Control Station configuration, a failure of the primary Control Station services or hardware will result in the standby peer Control Station forcibly taking over the role of primary Control Station. This is triggered when the peer Control Station fails to receive responses to its management heartbeats, or the heartbeat responses exceed a timeout value.

Hardware failure in Single Control Station array.

For a Control Station that is online, run the nas_checkup command to confirm whether any hardware or software faults are reported. If there are hardware faults, VNX Support should be engaged to resolve them. A warning for a software issue may be possible to resolve using the Dell EMC Knowledgebase: https://support.emc.com/

Always run a collect support materials on the Control Station, if possible, to capture logs and the current state before making any changes, so these can be analyzed later if required.

To check specifically for hardware faults, the commands below can be used.

For enclosure status, the Data Mover enclosure number is specified after -e:

$ nas_inventory -tree

$ /nas/sbin/enclosure_status -e 0 -v

Additional References:

The procedure to generate this diagnostic Zip file on the VNX Control Station is below:

[Collect Support Materials]

——-

1. To generate a Collect Support Materials (diagnostic bundle) from the VNX NAS, run the following script on the Control Station while connected via SSH and logged in as nasadmin.

$ /nas/tools/collect_support_materials

2. When the script completes, a zip file is generated and its name and location are displayed on screen.

3. An SCP client such as WinSCP is needed to download the file from the Control Station to your workstation; the default location on the Control Station where the collect support materials are generated is /nas/var/emcsupport.

Note: older collect support materials in /nas/var/emcsupport are automatically deleted to free space if required.
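If a command-line client is preferred over WinSCP, standard scp also works. A sketch, assuming the default output directory and a reachable Control Station IP – the exact file name is the one printed by the script in step 2:

$ scp nasadmin@<control_station_IP>:/nas/var/emcsupport/support_materials*.zip .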

——-

Celerra: How to increase Control Station failover timeout value.

https://support.emc.com/kb/331802


The procedure of Celerra and VNX File Data Mover failover

https://support.emc.com/kb/322700

Related:

Re: VNX 5300 full

Hi there, I have a big problem with my VNX.

From VMware, these datastores show 50% free space, but on the VNX the pool is almost full and I have no capacity left to extend it.

How do I release the space on my VNX?

My hardware is: VNX 5300 and VMware 5.5.

There is an alert:

Severity : Critical

System : stg-DataCenter

Domain : Local

Created : Sep 18, 2018 6:01:53 PM

Message : Alert VP: Storage usage of pool Pool Producao has crossed threshold value 85.0% and has reached to 91.2%.

Full Description : The virtual thin pool’s storage usage has reached the maximum threshold. To continue using the pool, you must add additional storage to the pool.

Recommended Action : Using Unisphere:

1) Log in as nasadmin (or as a user with NAS administrative privileges).

2) Navigate to the Storage > Pools list page.

3) Go to the properties page of the thin pool identified in the log message or alert.

4) Look for the Physical Storage Usage of this virtual thin pool.

5) Extend the pool if there are existing Celerra storage resources available that can be used. If no storage is available to extend the pool then you will need to add the appropriate type of physical disks to the system so that it can be extended.

For more information or assistance, search the Knowledgebase on Powerlink as follows:

a. Log in to http://support.emc.com and go to Support > Knowledgebase Search > Support Solutions Search.

b. Use the message ID or text from the error message’s brief description to search.

Event Code : 0x12608700e0
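For reference, the pool usage reported by this alert can also be checked from the CLI. A hedged sketch, assuming Naviseccli is installed and pointed at one of the storage processors – output fields vary by Block OE release:

$ naviseccli -h <SP_A_IP> storagepool -list

The listing shows each pool's capacities and percent full, which is where the 91.2% in the alert comes from.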

Related:


Re: NAS Proxy functionality

Additionally, over SMB, snapshots can frequently be seen by using either:

a) the Previous Versions tab (from a Windows box)

or

b) by changing your path from \proxyrepodoc to \proxyrepodoc.ckpt. Different NAS systems call this different things: VNX/Celerra was always .ckpt, Isilon and NetApp use .snapshot, and so forth; however, in some cases it's hidden over SMB. Of course, confirming that the snapshots are there in the first place is where I would start.

~Chris

Related:

Re: CEPA/CEPP issues

I'm attempting to set up CEPA so that we can demo Varonis. I've gotten past the whole "can't start the cepp service without a physical CIFS server" issue (a terribly annoying limitation of this service when the CIFS server resides on a VDM). I'm able to start the service now, but it can't seem to talk to the Windows server that has the VNX Event Enabler software installed on it.

Cepp.conf:

surveytime=10

pool name=ceppapool

servers=10.1.104.92

postevents=*

option=ignore

reqtimeout=5000

retrytimeout=1500
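For reference, after editing cepp.conf it has to be copied back to the Data Mover and the CEPP service restarted before the settings take effect. A minimal sketch of the usual sequence, assuming the standard VNX commands – verify the options against the CEPA/CEE documentation for your OE release:

$ server_file server_2 -put cepp.conf cepp.conf   # copy the edited file to the Data Mover

$ server_cepp server_2 -service -stop

$ server_cepp server_2 -service -start

$ server_cepp server_2 -service -status           # confirm the service state after the restart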

server_cepp server_2 -pool -info results:

server_2 :

pool_name = ceppapool

server_required = No

access_checks_ignored = 488956

req_timeout = 5000ms

retry_timeout = 1500ms

pre_events =

post_events = OpenFileNoAccess,OpenFileRead,OpenFileWrite,CreateFile,CreateDir,DeleteFile,DeleteDir,CloseModified,CloseUnmodified,RenameFile,RenameDir,SetAclFile,SetAclDir,OpenDir,CloseDir,FileRead,FileWrite,SetSecFile,SetSecDir

post_err_events =

CEPP Servers:

IP = 10.1.104.92, state = ERROR_CEPP_NOT_FOUND, rpc = MS-RPC over SMB, cava version = 4.9.1.0, nt status = SUCCESS, server name = varonis01.<mydomain>.com

I've already configured the EMC Windows services to run under the service account. I've also added the service account to the EMC Virus Checking and EMC Event Notification Bypass groups. There are no firewalls between the Data Mover and the Windows server, and it appears the Control Station and the Data Mover can resolve the IP of the CEPA server. Any ideas would be greatly appreciated!
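For reference, reachability can also be checked from the Data Mover itself rather than only from the Control Station. A minimal sketch, assuming the standard VNX commands:

$ server_ping server_2 10.1.104.92        # ping the CEPA server from the Data Mover

$ server_cepp server_2 -service -status   # confirm the CEPP service state on the Data Mover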

Related:

VNX: How to work around folder names or absolute paths that contain spaces and special characters.

Article Number: 480377 Article Version: 3 Article Type: How To



Celerra Network Server,VNX/VNXe Family,VNX OE for File

Consider this folder name for this example:

Market & Product Development/2763 – Market Validation and Quality Analysis – Guillermo Bautista Alderete

ACL commands throw an error even though the folder name above is wrapped in double quotes:

$ .server_config server_x -v "acl dump= "/root_vdm_9/mnt33/Market & Product Development/2763 – Market Validation and Quality Analysis – Guillermo Bautista Alderete""

1459192752: ADMIN: 3: Command failed: acl dump=/root_vdm_9/mnt33/Market

Error 4020: server_2 : failed to complete command
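The underlying reason is that double quotes do not nest in the shell: the second double quote ends the string, so the path is split at the first space, and characters such as & are then interpreted by the shell itself. This can be seen with plain printf on any Linux host (an illustrative sketch only, not a VNX-specific command):

$ printf '<%s> ' "acl dump= "/tmp/name with spaces""; echo

<acl dump= /tmp/name> <with> <spaces>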

To get around this situation, create an environment variable that holds the absolute path of the folder in question:

# PATH_STRING="/root_vdm_9/mnt33/Market & Product Development/2763 – Market Validation and Quality Analysis – Guillermo Bautista Alderete"

Where

Folder name is: “Market & Product Development/2763 – Market Validation and Quality Analysis – Guillermo Bautista Alderete”

Environment variable name: PATH_STRING

Use the environment variable in the ACL dump or ACL reset command instead of the literal absolute path:

$ .server_config server_x -v "acl dump='$PATH_STRING'"

server_3 : commands processed: 1

command(s) succeeded

output is complete

Where

server_x is the name of the Data Mover. Note the single quotes immediately around '$PATH_STRING' in the command: the outer double quotes let the shell expand the variable, while the inner single quotes are passed through to server_config so that the path containing spaces and special characters is treated as a single argument.

The environment variable name can be set as desired.

Related:

VNX: How to find the VNX/VNX2 Celerra CAVA Anti-Virus CIFS Microsoft Management Console MMC Snap-in

Article Number: 481008 Article Version: 4 Article Type: How To



VNX/VNXe Family,VNX1 Series,VNX2 Series,Celerra,vVNX Series,Unity Family

How to find the VNX Celerra CAVA CIFS Microsoft Management Console MMC Snap-in:

support.emc.com

-> Support by Products

-> VNX2 Series

-> VNX5600 or your VNX2 Series Model

-> Downloads

-> Product Tools

-> More

-> Search on the page for VNX FileCifsMgmt.exe 8.1.9.155


Another method to find the MMC Snap-in:

support.emc.com/search -> “virus” “VNX2 Series” -> second page -> it's there.


Here is a quick way to find the latest version of the VNX File Code:

support.emc.com

-> Support by Products

-> VNX2 Series

-> VNX5600 or your VNX2 Series Model

-> Downloads

-> More

-> More again

-> Search on the page for “dvd_image.iso”


This last method takes some time to find the latest version of the VNX File code.

Just search for “DVD IMAGE iso”, then search for the version number in “All Support” for the related information.

https://support.emc.com/search/?text=dvd%20image%20iso&product_id=36656&searchLang=en_US

As of today, May 17, 2016, the latest VNX2 Series File code version is 8.1.9.155, so search for “8.1.9.155”.

https://support.emc.com/search/?text=8.1.9.155&product_id=36656&searchLang=en_US

Links to the latest VNX Celerra CAVA CIFS Microsoft Management Console MMC Snap-in and install guide, plus the latest VNX File code:

VNX FileCifsMgmt.exe 8.1.9.155

https://download.emc.com/downloads/DL48750_VNX_FileCifsMgmt.exe_8.1.9.155.exe?source=OLS

This is a set of Windows-based tools that enable you to manage CIFS functionality in a VNX File, VNX Unified, or Celerra system. UNIX Attributes Migration, UNIX Users and UNIX Groups property page extensions, and UNIX User Management are used to manage users from Windows in native mode. …

March 10, 2016 | Celerra NS-120,Celerra NS-480,Celerra NS-960,Celerra NS-G2,Celerra NS-G8…More | DL48750 | Windows | 4.86 MB | Checksum 5a35508e9e55c2bb7c1b36a46e2638bd

Installing Management Applications on VNX for File 8.1

https://support.emc.com/docu48483_Installing_Management_Applications_on_VNX_for_File_8.1.pdf?language=en_US

… UNIX User Management Celerra AntiVirus Management is an MMC snap-in to Unisphere. You can use the Celerra AntiVirus Management snap-in with the Common AntiVirus Agent (CAVA), third-party AntiVirus engines that run on Windows NT or Windows 2000 or later, and a Data Mover to …

August 16, 2013 | VNX8000,VNX7600,VNX5800,VNX5600,VNX5400…More | docu48483 | Support Task:Administer, Configure, Install | 0.5 MB | pdf | en_US | Manual and Guides

8.1.9.155_dvd_image.iso

VNX2 OE for File 8.1.9.155 Upgrade DVD. Support use only for CLI code upgrades. Remote Support may ask customers to download this file in preparation for a remote upgrade. When upgrading a VNX2 File or Unified system, this code is to be used with VNX OE for Block version 05.33.009.5.155. (CR52495)

March 10, 2016 | 1,198.0 MB | Checksum e164154061faf3ef47df40b67f9fd6cd
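Both downloads publish an MD5 checksum, so the files can be verified after downloading with standard tools. A sketch – adjust the file names to whatever the downloads save as:

$ md5sum 8.1.9.155_dvd_image.iso

C:\> certutil -hashfile DL48750_VNX_FileCifsMgmt.exe_8.1.9.155.exe MD5

The results should match the checksums listed above: e164154061faf3ef47df40b67f9fd6cd for the DVD image and 5a35508e9e55c2bb7c1b36a46e2638bd for the CIFS management tools.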

Related:

Re: NDMP-Backup Error >> Failed to propagate handle; TimeOut after inactive

Hello community,

After switching to a new backup server and changing the platform from Linux to Windows, we get errors in certain processes when backing up NDMP file systems:

suppressed 138 bytes of output.

.144324:nsrndmp_save: Adding attribute *policy workflow name = eNAS-VDM-016

.144324:nsrndmp_save: Adding attribute *policy action name = backup

.06/18/18 07:52:22.821430 NDMP Service Debug: The process id for NDMP service is 0x5a670b0

42909:nsrndmp_save: Performing DAR Backup..

83320:nsrndmp_save: Performing incremental backup, BASE_DATE = 44478769945

42794:nsrndmp_save: Performing backup to Non-NDMP type of device

174908:nsrdsa_save: Saving the backup data in the pool ‘dd3 enas’.

175019:nsrdsa_save: Received the media management binding information on the host ‘bkpmgmnt01.sis.net’.

174910:nsrdsa_save: Connected to the nsrmmd process on the host ‘bkpmgmnt01.sis.net’.

175295:nsrdsa_save: Successfully connected to the Data Domain device.

129292:nsrdsa_save: Successfully established Client direct save session for save-set ID ‘2854701209’ (eNAS1-DM-01:/root_vdm_9/VDM-16_fs2) with Data Domain volume ‘enas_001’.

42658:nsrdsa_save: DSA savetime = 1529301142

85183:nsrndmp_save: DSA is listening for an NDMP data connection on: 10.109.130.100, port = 8912

42952:nsrndmp_save: eNAS1-DM-01:/root_vdm_9/VDM-16_fs2 NDMP save running on ‘bkpmgmnt01.sis.net’

84118:nsrndmp_save: Failed to propagate handle 0000000000000000 to C:\Program Files\EMC NetWorker\nsr\bin\nsrndmp_2fh.exe child process: Das Handle ist ungültig. (Win32 error 0x6)

84118:nsrndmp_save: Failed to propagate handle 0000000000000000 to C:\Program Files\EMC NetWorker\nsr\bin\nsrndmp_2fh.exe child process: Das Handle ist ungültig. (Win32 error 0x6)

accept connection: accepted a connection

42953:nsrdsa_save: Performing Non-Immediate save

42923:nsrndmp_save: NDMP Service Error: Medium error

42923:nsrndmp_save: NDMP Service Warning: Write failed on archive volume 1

42617:nsrndmp_save: NDMP Service Log: server_archive: emctar vol 1, 93 files, 0 bytes read, 327680 bytes written

42738:nsrndmp_save: Data server halted: Error during the backup.

7136:nsrndmp_save: (interrupted), exiting

— Job Indications —

Termination request was sent to job 576172 as requested; Reason given: Inactive

eNAS1-DM-01:/root_vdm_9/VDM-16_fs2: retried 1 times.

eNAS1-DM-01:/root_vdm_9/VDM-16_fs2 aborted, inactivity timeout has been reached.



Strangely, these messages do not occur on all file systems, but rather randomly.

Does anyone know this error message and where the problem lies? Reviewing the Celerra logs has so far turned up nothing.

Best regards,

Cykes

Related: