large file being blocked by SPE

I need a solution

Hi, I am running SPE 8.0 on a Windows Server 2016 server. We have some large zip files that are blocked by SPE. When copying the files from the NAS to the local server, access is always denied. The zip files are not encrypted, and the "Maximum extract size of file" setting under the container handling policy exceeds the actual file size. There is a file size limit set for scanning. The odd part is that even when I configured container handling to log only, it still blocked access to the file (unless I disabled scanning on the NAS). Here is the error:

ICAP protocol issue: unexpected ICAP status code 500 returned by facility spescanner:1344

Fri May  3 10:11:28 2019               MAJOR Event ID:10.34   Error occurred while scanning for viruses.



Avamar 18.2



Avamar 18.2 and Data Domain System Integration Guide

This guide describes how to install, configure, administer, and use a Data Domain system as a backup target for Avamar.



Avamar 18.2 for Windows Servers User Guide

This guide describes how to install the Avamar client for Microsoft Windows, and how to back up and restore data on a Windows server.



Avamar 18.2 for SQL Server User Guide

This guide describes how to install Avamar in a Microsoft SQL Server environment, and how to back up and restore SQL Server databases.



Avamar 18.2 for IBM DB2 User Guide

This guide describes how to install Avamar in an IBM DB2 environment and how to back up and restore DB2 databases.

Avamar 18.2 for SAP with Oracle User Guide

This guide describes how to install Avamar in an SAP environment with Oracle, and how to back up and restore SAP servers with Oracle databases.



Avamar 18.2 for Oracle User Guide

This guide describes how to install Avamar in an Oracle database environment, and how to back up and restore Oracle databases.

Avamar 18.2 for NDMP Accelerator for Oracle ZFS User Guide

This guide describes how to install and configure the Avamar NDMP Accelerator for Oracle ZFS, and how to back up and restore data on Oracle ZFS.



Avamar 18.2 for NDMP Accelerator for EMC NAS Systems User Guide

This guide describes how to install and configure the Avamar NDMP Accelerator for EMC NAS systems, and how to back up and restore data on supported EMC VNX and EMC storage systems.



Avamar 18.2 for NDMP Accelerator for NetApp Filers User Guide

This guide describes how to install and configure the Avamar NDMP Accelerator for NetApp Filers, and how to back up and restore data on supported NetApp filers.



Avamar 18.2 for Lotus Domino User Guide

This guide describes how to install Avamar in a Lotus Domino environment, and how to back up and restore data.

Avamar 18.2 for SharePoint VSS User Guide

This guide describes how to install Avamar in a SharePoint environment, and how to back up and restore data using Avamar with Microsoft Volume Shadow Copy Service (VSS) technology.



Avamar 18.2 for Sybase ASE User Guide

This guide describes how to install Avamar in a Sybase environment, and how to back up and restore Sybase Adaptive Server Enterprise (ASE) databases.


Upgrading OE on Unity that is configured for file services only

It depends on what you understand as "disrupt connectivity".

A reboot, no matter how fast, will always be somewhat disruptive at the lower levels,

and a client will at least have to re-establish the TCP connection.

The question is more how much of that is visible to the client OS and application.

NFS clients using default hard mounts will just see a pause in I/O, but no error is surfaced to applications.

The OS and protocol stack will, of course, re-establish the connection, recover locks, and so on.
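For context, this is what a default "hard" NFS mount looks like on a Linux client (a minimal sketch; the server name and export path are made up):

# Hard mounts (the default) retry indefinitely, so an SP reboot shows up
# as a pause in I/O rather than an error to the application:
mount -t nfs -o hard,timeo=600,retrans=2 nasserver:/export/fs1 /mnt/fs1

# A "soft" mount, by contrast, can return I/O errors to applications once
# its retries are exhausted, which is why hard mounts are preferred here.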

For CIFS clients it depends on the application and OS.

Windows itself will automatically reconnect.

Cluster-aware applications that retry internally should be OK.

Simple operations, like copying files via explorer.exe, can stop and show a "Try again" dialog.

For those applications that really require transparent failover, like SharePoint or Hyper-V over SMB shares, you can enable SMB CA (Continuous Availability) per share.

Then they will also just pause and resume I/O, similar to NFS.

See the NAS white paper and the Microsoft documentation about CA in SMB3.

Why don't you just try it?

All an upgrade does is an SP reboot, which you can easily do even from the GUI.

If you don't want to use your hardware Unity, a UnityVSA will show the same behaviour.


Re: Audit and Reporting Tool? FLR?

Maybe if dynamox was an EMC software designer…

You do realize you could just use Dell Change Auditor with VNX, right? Or with NetApp or a couple of other NAS platforms. EMC, NetApp and other NAS vendors tend to leave out much of this functionality, just so that Varonis, Dell, Northern Storage and other vendors can build (and sell) it in their products.

For what it's worth, Varonis provides much (though not all) of what you list, and will do so across many NAS platforms. While they lack good timeline integration, they get top marks (in our book) for meeting much of the rest of your list. DCA actually meets fewer of your list bullets than Varonis does on NAS platforms, but Dell's enterprise licensing seems much more affordable for larger clients (Varonis can become unwieldy if you have lots of NAS platforms but little storage).

The available auditing in VNX certainly seems limited on the surface, but there's little value add, I think, for most customers. Why? Most IT organizations are looking to centralize reporting and auditing functions. I'd wager they'd rather get 60% of the bullets you list from one tool, enterprise-wide, than run even two or three tools to get 90% of the data from just one platform.


Re: Networker NetApp SaveSet Syntax

Hi all – I know this should be simple information to find, but I'm just not finding it anywhere I look. I have a NetApp filer and, of course, I want to back it up. I have everything set up in Networker to do so, except the save set syntax.

In my EMC days this was something like /root_vdm_1/volume/ etc., but I need to know the syntax for NetApp. I've tried many different combinations to get it to work, but the backups just tell me there is no such file or directory.
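To illustrate the kind of syntax I mean, here is my hedged guess at the form (the volume and SVM names are made up, and I have not been able to confirm this):

# 7-mode filer: save sets appear to be volume paths, e.g.
/vol/vol1
# Clustered Data ONTAP (SVM-scoped NDMP): something like
/svm_name/volume_name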


Does anyone out there use Networker to protect a NetApp filer, and if so, could you share the contents of your save set field with me so I can mimic the syntax?

Thanks all


Re: Re: unity migration vdm,usermapper,multiprotocol questions

castleknock wrote:

This differs from VNX behaviour, as secmap created a local UID reference to 'hide' the lack of a Unix account rather than simply deny SMB access. Is this a correct read? If so, it explains the lack of any references to a secmap import during VNX migration.

The difference isn't in secmap.

secmap is not a mapping method; it's merely a cache, so that we don't have to make repeated calls to the external mapping sources, which can take time.

The difference is with usermapper.

usermapper was only ever meant to be a mapping method for CIFS-only file systems, but on VNX/Celerra this wasn't enforced.

The manuals told you clearly to disable usermapper if you are doing multi-protocol, but many customers didn't do that, either because they didn't know or out of convenience.

So they end up with a config where some users were mapped through AD/NIS/ntxmap and the ones that couldn't be mapped got a UID from usermapper.

In Unity we improved this:

usermapper is per NAS server, not global per Data Mover

by default, usermapper is disabled for multi-protocol NAS servers

instead, we added options for a default Unix/Windows user that are used if AD/NIS/ntxmap is unable to map the user, which didn't exist on VNX/Celerra

So if you use the defaults on a multi-protocol NAS server and we cannot map a user, then access is denied.

You can then either:

– make sure this user is covered by the mapping sources

– configure the default Unix user

– enable automatic user mapping (usermapper)

This is explained in detail, with flowcharts, in the multi-protocol manual that I mentioned.

Keep in mind, though, that just enabling usermapper like on VNX is convenient, but it also makes changes and troubleshooting more difficult.

This is because secmap entries never expire or get updated.

For example, if a user connects to a NAS server before you have configured their account in the AD/NIS/ntxmap mappings, they will get a UID from usermapper.

Then, if the admin later adds the account to AD/NIS/ntxmap, that account will still use the UID from usermapper on this NAS server, but on a new NAS server it will use the UID from the mapping source.

Also, since usermapper is now per NAS server, the same user will get different UIDs on different NAS servers.
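As a side note, on VNX/Celerra you can at least inspect the secmap cache per Data Mover to see which UIDs have been handed out (a hedged sketch; the equivalent on Unity differs):

# List the cached SID-to-UID/GID mappings on Data Mover server_2 (VNX/Celerra):
server_cifssupport server_2 -secmap -list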

Bottom line: if you want full multi-protocol, use a deterministic mapping method and not usermapper.


Re: Troubles with NDMP-DSA Backup NW9.1.1 & NetApp Filer over dedicated Interface

Hello,

Networker Server 9.1.1

Storage Nodes 9.1.1 with remote devices on a Data Domain (DD) for NDMP backups

We added storage nodes to our environment that have a direct 10 Gb Ethernet connection to the NetApp filers. There is one interface connected to the production network and one interface directly connected to the NetApp filer, with a private network address of 10.11.11.12.

The NetApp filer is configured with IP address 10.11.11.11.

We used the wizard to configure the NDMP-DSA Backup. Client Direct is disabled.

Backup command: nsrndmp_save -M -P <StorageNode> -T dump

Additional Information:

BUTYPE=dump

DIRECT=Y

HIST=Y

EXTRACT_ACL=y

UPDATE=Y

USE_TBB_IF_AVAILABLE=Y

In the above configuration, backups work and move data over the production network.

Our goal is to use the direct connection to optimize backups. Therefore, we added a hostname entry for the storage node with the private IP address on the NetApp filer, but the backups still moved over the production network.

In the next step we defined a "virtual" hostname of storagenode-fs with IP 10.11.11.12 and edited the backup command to use -P storagenode-fs, but this did not work either.
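For reference, a hedged sketch of the basic checks from the storage node (assuming a Linux storage node, and using the names and addresses above) that should confirm the private path is usable:

# Confirm the private hostname resolves to the private interface:
getent hosts storagenode-fs        # expected: 10.11.11.12
# Confirm the filer's private interface answers NDMP (TCP 10000 by default):
nc -vz 10.11.11.11 10000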

Anyone got some ideas on this situation?

Regards,

Patric


ViPR Controller: Order to remove an NFS file system fails

Article Number: 524432 Article Version: 3 Article Type: Break Fix



ViPR Controller,VNX/VNXe Family

The user is unable to delete an NFS file system on a VNX array in ViPR Controller.

ViPR Controller UI errors

[ERROR] Fri Jun 22 08:58:20 UTC 2018 Error 12000: Message: Operation failed due to the following error: Failure

Description: Check File System dependencies: NFS and CIFS exports and snapshots

ViPR Controller queries the VNX (CheckpointQueryParams XML API) to check for all snapshots on the entire array.

If Snapsure is not enabled/licensed on the VNX array, this will cause the API query to fail.

e.g.

Time on CS: Mon Jun 18 18:12:50 CEST 2018

Output from: /nas/bin/nas_license -list

key status value

site_key online 57 93 7c 7a

cifs online

nfs online

replicatorV2 online

SnapSure is a prerequisite for ViPR Controller in VNX File environments.

See the "ViPR Controller Virtual Data Center Requirements and Information Guide" for more details: https://community.emc.com/docs/DOC-57470

("VNX SnapSure is installed, configured, and licensed.")

This is a new implementation of VNX File within ViPR Controller.

To resolve, enable SnapSure on the VNX array:

1. In the Unisphere GUI, select the name of the Control Station.

2. Then you will get the full menu for the VNX cluster and on the left you will see the license sub-menu.

3. Verify that the "Snapsure" checkbox is checked. If it is, check that it is validated; if it is not, validate it. (A quick CLI check is also sketched below.)
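Alternatively, the license state can be confirmed from the Control Station CLI, as in the nas_license output shown above; a minimal check:

# On the Control Station: a "snapsure online" line should appear when licensed
/nas/bin/nas_license -list | grep -i snapsure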

Before deleting the file system, ViPR Controller queries the VNX (CheckpointQueryParams XML API) to check for all snapshots on the entire array.

vipr3 vipr3 controllersvc 2018-06-22 08:58:08,844 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 84) >> "<?xml version="1.0" encoding="UTF-8" standalone="yes"?><RequestPacket xmlns="http://www.emc.com/schemas/celerra/xml_api"><Request><Query><CheckpointQueryParams/></Query></Request></RequestPacket>"

vipr3 vipr3 controllersvc 2018-06-22 08:58:08,844 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG EntityEnclosingMethod.java (line 508) Request body sent


ViPR Controller errors with the output below:

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << "<ResponsePacket xmlns="http://www.emc.com/schemas/celerra/xml_api">[\n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Response>[\n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Fault maxSeverity="error">[\n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Problem messageCode="14227210241" facility="APL" component="API" message="APL subsystem query failed." severity="error">[\n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,768 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Description>Appliance Layer (APL) subsystem that the XML API server talks to failed to execute a query. This indicates a problem with the APL subsystem in Control Station software.</Description>[\n]"

vipr3 vipr3 controllersvc 2018-06-22 08:58:10,769 [199|checkFileSystemDependenciesInStorage|1246f9af-6340-4754-b2d6-ecb570f544f144f2570a-0c48-4a39-ac03-bd068a13fd4b] DEBUG Wire.java (line 70) << " <Action>If the problem persists, collect support materials by running the /nas/tools/collect_support_materials script or the newer /nas/tools/automaticcollection script. Note any symptoms. If you require more information on the materials collection process, refer to the Problem Resolution Roadmap for Celerra, available on EMC Powerlink, or EMC Knowledgebase Support Solution number emc135846. For more information on this message, use the text from the error message's brief description or the message's ID to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support > Knowledgebase Search > Support Solutions Search.</Action>[\n]"


In the VNX array support_materials logs, it shows that there is an issue with SnapSure licensing.

(cel_api.log)

Jun 22, 15:12:24 APL reply is:

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<APLTask complete="true" description="Query checkpoints" failed="true" originator="username@localhost" xmlns="http://www.emc.com/schemas/celerra/apl_1.0">
  <Statuses>
    <Status creationTime="1530623544">
      <MessageArgList>
        <MessageArg argName="license" argType="8">
          <string>snapsure</string>
        </MessageArg>
      </MessageArgList>
    </Status>
  </Statuses>
</APLTask>

Jun 22, 15:12:25 Creating API status ————–>

Jun 22, 15:12:25 Diagnostics tag: 1646045cef9

com.emc.celerra.api.RequestException: com.emc.nas.ccmd.common.MessageInstanceImpl@50020001

at com.emc.celerra.api.apl.AbstractQuery.makeAplListCall(AbstractQuery.java:144)


Re: secondary control station

This is expected with dual Control Stations, as the second Control Station works as a standby to the primary Control Station. The NAS service runs only on one Control Station, which is the active one at that time. So, in the normal case, the primary CS is the active one and the secondary CS is the standby.

You can run the /nas/sbin/getreason command on the active CS, which will tell you that slot 0 is the primary CS and slot 1 is the secondary.

The primary CS has all NAS volumes mounted and the NAS service available; the standby CS has only its internal volumes.

If the primary (active) CS fails, or an administrator fails over the CS functionality manually (the /nas/sbin/cs_standby command), then the secondary CS becomes the primary, or active, one.

In Celerra Manager, if you go to the Control Station property page, it should show "standby ready". Similarly, the /nas/sbin/getreason command will tell you whether the standby CS is available or not. If so, there is no action needed from your end.
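For reference, a typical (illustrative) /nas/sbin/getreason output on a healthy dual-CS system looks roughly like this; the exact slot count and wording vary by model and code level:

10 - slot_0 primary control station
11 - slot_1 secondary control station
 5 - slot_2 contacted
 5 - slot_3 contacted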

Hope this helps,

Thanks,

Sandip


Dell EMC Unity: Unable to create dhsm connection with HTTPS connection to Secondary Storage (User Correctable)

Article Number: 525095 Article Version: 2 Article Type: Break Fix



Dell EMC Unity 300,Dell EMC Unity 300F,Dell EMC Unity 400,Dell EMC Unity 400F,Dell EMC Unity 450F,Dell EMC Unity 500,Dell EMC Unity 500F,Dell EMC Unity 600,Dell EMC Unity 600F,Dell EMC Unity All Flash,Dell EMC Unity Family

Trying to create a dhsm connection with an HTTPS secondary storage URL (on either port 443 or 9000) fails with the errors below, while an HTTP URL works fine:

C:\Program Files (x86)\EMC\Unisphere CLI>uemcli -d 172.22.224.155 -u admin -p XXXXXX /net/nas/dhsmconn create -fs fs_63 -secondaryUrl https://myarray.starwars.local/EnterpriseVault -secondaryPort 443 -mode enabled -readPolicy full -secondaryUsername VEVsrv@starwars.local -secondaryPassword XXXXXX

Storage system address: 172.22.224.155

Storage system port: 443

HTTPS connection

Operation failed. Error code: 0x5

One or more specified parameters are invalid. (Error Code:0x5)


C:\Program Files (x86)\EMC\Unisphere CLI>uemcli -d 172.22.224.155 -u admin -p XXXXXX /net/nas/dhsmconn create -fs fs_63 -secondaryUrl https://myarray.starwars.local/EnterpriseVault -secondaryPort 9000 -mode enabled -readPolicy full -secondaryUsername VEVsrv@starwars.local -secondaryPassword XXXXXXX

Storage system address: 172.22.224.155

Storage system port: 443

HTTPS connection

Operation failed. Error code: 0x5

One or more specified parameters are invalid. (Error Code:0x5)

HTTPS is not supported for the secondary URL (the URL of the secondary storage; if the secondary storage is Centera or cloud, this URL should point to the CTA).

This is working as designed.
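Since HTTP is supported (and, as noted above, works fine in this environment), the equivalent working command simply uses an http:// secondary URL. A hedged sketch reusing the same example names, with port 80 assumed for the secondary HTTP listener:

C:\Program Files (x86)\EMC\Unisphere CLI>uemcli -d 172.22.224.155 -u admin -p XXXXXX /net/nas/dhsmconn create -fs fs_63 -secondaryUrl http://myarray.starwars.local/EnterpriseVault -secondaryPort 80 -mode enabled -readPolicy full -secondaryUsername VEVsrv@starwars.local -secondaryPassword XXXXXX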
