ShareFile Error “The folder structure that you tried to download is too deep to be supported by most operating systems… your folders and files.”

The Microsoft Windows API defines the maximum path length as 260 characters for a fully specified path and filename. This includes everything from the beginning of the directory path through the file extension. While there are some exceptions to this limit, ShareFile typically enforces a file path limit on files uploaded or downloaded via our various apps and tools.

File path errors refer to the length of the file path and file name rather than the size of the file. If you encounter this error when attempting to upload or download a file using one of our apps, you may need to start the download from a folder deeper within the structure. By downloading from a level one or two folders below the root folder, you may still be able to download the majority of your data without having to recreate the folder structure on your own computer.

Additionally, you may want to consider renaming lengthy file names that occupy the majority of the limit.
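If you are not sure which items exceed the limit, you can check path lengths on a local copy of the data before uploading or after a partial download. A minimal sketch using standard Unix tools (the folder path is a placeholder; remember that on Windows the full local path, including the drive letter and destination folders, counts toward the 260 characters):

find /path/to/folder | awk 'length > 260 {print length, $0}'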


Re: Re: How to add VM client from CLI

I will start by assuming that you’ll use the same VMware vCenter name and same Domain in your list, and only the VM name will change. For this, I would create a text file called “vm.txt” that contained each VM name on its own line and no other characters, i.e.

VM1

VM2

VM3

etc.

If you're trying to build this file on a Windows computer in a text editor, be 100% sure the file is saved as plain ASCII with Unix line endings; Windows CRLF line endings would leave a stray carriage return on each VM name and break the command.

I would next make a file called “rename.sh” on the Utility Node and put in the following:

#!/bin/bash
# Loop over each VM name listed in vm.txt (one name per line)
for i in `cat vm.txt`
do
java -jar proxycp.jar --addvm --vc my.vcenter.org --vm "$i" --domain my.domain.org
done



What this does is simply read each line in the file and run the command against the VM name. In the above script, you may also need to provide the full path to the java command or to the proxycp.jar file, depending on your shell environment and path settings.
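If the list might contain blank lines or Windows line endings, a slightly more defensive variant (using the same assumed vCenter and domain placeholders as above) strips any carriage returns and reads the file line by line:

#!/bin/bash
# Remove Windows carriage returns, then process vm.txt one line at a time
tr -d '\r' < vm.txt | while read -r vm
do
    # Skip empty lines
    [ -z "$vm" ] && continue
    java -jar proxycp.jar --addvm --vc my.vcenter.org --vm "$vm" --domain my.domain.org
done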



Let us know if that helps!

Karl


DELL EMC Data Domain Cleaning

Deep dive into Data Domain GC or cleaning



When your backup application (such as NetBackup or NetWorker) expires data, the data is marked by the Data Domain system for deletion. However, the data is not deleted immediately; it is removed during a cleaning operation.

  • During the cleaning operation, the file system is available for all normal operations including backup (write) and restore (read).
  • Although cleaning uses a significant amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic.
  • Data Domain recommends running a cleaning operation after the first full backup to a Data Domain system. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate cleaning operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space.
  • When the cleaning operation finishes, a message is sent to the system log giving the percentage of storage space that was reclaimed.
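For reference, the cleaning described above can be run and monitored from the DD OS command line: filesys clean start kicks off a cleaning run manually, filesys clean status reports progress (including the current phase), and filesys clean show schedule displays the automatic schedule. A minimal sketch (exact output varies by DD OS version):

# filesys clean start

# filesys clean status

# filesys clean show schedule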

As files are written to a Data Domain system, they are deduplicated, which means:



  • Duplicate data in the file is replaced with a pointer to existing data on disk.
  • Only unique data is placed in ~128 KB compression regions, which sit in 4.5 MB containers on disk.



This means that only unique data uses space on disk, and that unique data written by one file may be referenced by one or more other files written later that contain the same data. Cleaning works on data on disk which is completely unreferenced; this data is deemed 'dead' and therefore superfluous to the system. Garbage collection/cleaning is used to find 'dead' data on the system (enumeration), remove it, and make the corresponding space available for re-use (copy forward).
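For intuition only, the following sketch shows the basic idea of fingerprinting fixed-size pieces of a file so that identical data can be detected and stored once. This is not how DDFS segments data internally (Data Domain uses variable-size segmentation), and the file name is just a placeholder:

# Split a file into 128 KB pieces and fingerprint each piece
split -b 128K -d myfile.dat seg_
# Identical pieces produce identical hashes; the count column shows how many
# pieces would deduplicate against a single stored copy
sha256sum seg_* | awk '{print $1}' | sort | uniq -c | sort -rn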



[Figure: GC.jpg]







History of Garbage Collection (GC) algorithms:



The biggest challenge with cleaning was its speed: it was slow, and there is no other way to free space on the system.



Currently we have 12 phases of cleaning.



Cleaning: phase 1 of 12 (pre-merge)

Cleaning: phase 2 of 12 (pre-analysis)

Cleaning: phase 3 of 12 (pre-enumeration)

Cleaning: phase 4 of 12 (pre-filter)

Cleaning: phase 5 of 12 (pre-select)

Cleaning: phase 6 of 12 (merge)

Cleaning: phase 7 of 12 (analysis)

Cleaning: phase 8 of 12 (candidate)

Cleaning: phase 9 of 12 (enumeration)

Cleaning: phase 10 of 12 (filter)

Cleaning: phase 11 of 12 (copy)

Cleaning: phase 12 of 12 (summary)



DD OS 5.4 and earlier



DD OS 5.4 and previous versions used the Full cleaning algorithm.

  • 10 phases.
  • File centric (clean enumerates each file on the system to work out which data is live).
  • Did not scale well for systems with a very large data set or a lot of small files.

DD OS 5.5 to 5.7



Data Domain systems running DD OS 5.5 up to DD OS 5.7 use Physical Garbage Collection (PGC).

  • 12 phases.
  • Data centric (clean enumerates metadata within the file system to work out which data is live).
  • Scaled better for systems with a very large data set or a lot of small files, but perhaps not as well as was originally envisaged.



DD OS 6.0 and later



Starting with DD OS 6.0, engineers introduced a further improvement to the cleaning process called Perfect Physical Garbage Collection (PPGC). PPGC is essentially the same as PGC; the only change is how memory is used. This allows certain phases to be skipped, meaning that clean can run more quickly on the majority of systems.



The need for sampling



Clean is allocated a fixed amount of memory to hold its data structures. In most cases this memory is not sufficient to track whether all data on a Data Domain system is live or dead, which means that most DD units running Full GC or PGC have to perform 'sampling'.



  • The file system is split into logical chunks.
  • Each chunk is sampled to work out how much dead data it is likely to contain.
  • Clean selects the chunk expected to give the best return in free space.
  • This chunk is then cleaned (i.e. enumerated and copied forward).

Sampling therefore means that a number of phases must be run twice, which wastes a lot of time. PPGC (DD OS 6.0 and later) optimizes this approach and avoids sampling by tracking a much larger amount of data in the same amount of GC memory (approximately 4.4x more segments compared to PGC). This reduces overall clean duration by 20-50%.



Some prerequisites for using PPGC

  • DD OS 6.0 and later
  • Index files must be in the index 2.0 format; this will always be the case, as indices must be upgraded to index 2.0 before the upgrade to DD OS 6.x can proceed (https://support.emc.com/kb/495604).
  • Segment information for the entire data set must be able to fit in GC memory. Whilst PPGC can track approximately 4.4x more segments in the same amount of RAM as PGC, there will still be some very large systems where this is not the case.
  • On applicable systems PPGC will be enabled automatically; users do not need to 'turn on' or 'switch to' PPGC. Other systems will automatically fall back to PGC (and sampling).

Once a DDR with index files in the index 1.0 format is upgraded to DDOS 5.5.x/5.6.x/5.7.x, index files will automatically be converted to the index 2.0 format. Note, however, that:

  • Index file conversion is performed when clean/garbage collection is performed on the DDR
  • The conversion is performed in a lazy style (i.e. the entire conversion is not performed at once) with only ~1% of index files being converted during each clean

As a result it can take considerable time for all index files to be fully converted to the index 2.0 format, and until this is complete upgrades to DDOS 6.x will fail during pre-check (with the error described in the KB article linked above). Systems in this state can either:



  • Be left to run as they are (i.e. on DDOS 5.5.x/5.6.x/5.7.x) until conversion of index files naturally completes – at this point an upgrade to DDOS 6.x will be allowed to proceed. Note, however, that with standard user interfaces (i.e. the Data Domain System Manager/Enterprise Manager/Management Center/command line shell) it is not possible to determine the format/state of index files and therefore it is not easy to determine when the system is ready for upgrade.

  • Have their index files forcibly converted to the index 2.0 format – this then allows an upgrade to DDOS 6.x to take place immediately

Forcibly converting index files requires elevated access to the Data Domain Operating System (DDOS) via the command line shell (and therefore cannot be performed by end users/administrators). There is no way to tell whether PGC or PPGC is being used other than the number of phases run during the operation.



When clean starts under PPGC, it will report that it is running all 12 phases; however, it will skip phases, for example jumping straight from phase 5 (pre-select) to phase 11 (copy). PGC will continue to run all 12 phases.



[Figure: GC1.jpg]





Disabling PPGC

  • To disable PPGC and force a system to use PGC

# filesys disable

# se sysparam set GC_PPGC_IS_ENABLED=FALSE

# filesys enable



  • To disable PPGC and PGC and force a system to use traditional/full GC

# filesys disable

# se sysparam set GC_PHYSICAL_ENABLED=FALSE

# filesys enable

Note that traditional/full GC should be avoided on long term retention (LTR) enabled systems, as it can cause DDFS panics; this guidance is based purely on field experience.
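To revert to the default behaviour afterwards, it is assumed (not verified against every DD OS release) that the same sysparam values can simply be set back to TRUE:

# filesys disable

# se sysparam set GC_PPGC_IS_ENABLED=TRUE

# se sysparam set GC_PHYSICAL_ENABLED=TRUE

# filesys enable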



Hope it helps!!


Re: [ScaleIO] Virtual disk bad block medium error is detected.

Hello all,

We got several [Storage Service Virtual disk bad block medium error is detected.: Virtual Disk X (Virtual Disk X)] log entries on a ScaleIO server.

I believe this disk went bad and caused an unrecovered read error.

What I want to ask is: when I check [query_all_sds] and [query_sds], this SDS seems to be healthy and all disks seem normal.

Is this normal?

Also, can you please let me know the details about how disks in SDSs are monitored. Does the MDM check all disks in all SDSs frequently, or, when an SDC cannot connect to a disk, does the SDC inform the MDM, which then changes the mapping towards the SDCs?

Thanks.


Dell EMC Unity: How to use service logs to view customer array information in HTML GUI Format

Article Number: 487082 Article Version: 3 Article Type: How To



Unity Family



To find information on the array via diagnostic files, do the following:

1. Download Diagnostic Files

2. Use Winzip or file extraction software of your choice to extract files to desired location

3. Highlight spa.service_dc.tgz and spb.service_dc.tgz and extract these logs to a location of your choice

4. Navigate to the spa or spb folder -> cmd_outputs -> svc_data

5. There will be a zip file in svc_data (20990721_XXXXXX_APMXXXXXXXXXXX_EMC-UEM-Telemetry.tar.gz)

6. The files that are listed will all have the same naming convention. This will show as Year-Month-Day_XXXXX_APMXXXXXXXXXXX, e.g. (20160721_XXXXX_APM0012345678)

7. Extract the XXXXXXXXXXXXXX_EMC-UEM-Telemetry.tar.gz to a folder of your choice. There you will find a .html file, i.e.

( XXXXXXXXXXXXXX_EMC-UEM-Telemetry.html)

After opening the HTML file, you will be able to maneuver between tabs to view configurations, errors, alerts, etc.
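The same extraction can be scripted from a shell; a rough sketch, assuming the downloaded diagnostic bundle is named unity_dc.zip (a placeholder) and that SPA is the SP of interest:

unzip unity_dc.zip
tar -xzf spa.service_dc.tgz
cd spa/cmd_outputs/svc_data
tar -xzf *_EMC-UEM-Telemetry.tar.gz
# Open the extracted *_EMC-UEM-Telemetry.html file in a browser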


7022870: How to identify a Compound session document in Reflection or InfoConnect Desktop

Method #1:

1. Open the Host session file (*.RD3X, *.RD5X, *.RDOX, etc.) with the WinZip or 7-Zip application.

2. Find the settings.xml file in the documents\settings folder.

3. Open this file in Internet Explorer.

4. Search for the following lines:

<Item name="CompoundSession" type="System.Boolean">

<Value>true</Value>

</Item>


Method #2:

1. Save a copy of the Host session file (*.RD3X, *.RD5X, *.RDOX, etc.) with a *.ZIP extension.

2. From Windows Explorer, double click on the file to open in the Windows compressed file viewer.

3. Navigate to the documents\settings folder inside the file.

4. Double click on the settings.xml file to open it in the Windows XML viewer (Internet Explorer).

5. Search for the following lines:

<Item name="CompoundSession" type="System.Boolean">

<Value>true</Value>

</Item>
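Alternatively, on a system with a command-line unzip available, the same check can be scripted; a sketch, assuming the session file is named session.rd3x and that the entry path inside the archive is documents/settings/settings.xml (both of these names are placeholders/assumptions):

unzip -p session.rd3x documents/settings/settings.xml | grep -A 1 "CompoundSession"
# A <Value>true</Value> line in the output indicates a Compound Session Document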


Possible Values:

true = Compound Session Document

false = Regular Session Document


How to Configure Log File Rotation on NetScaler

The newsyslog utility included with the NetScaler firmware archives log files if necessary and rotates the system logs so that the current log is empty when rotation occurs. The system crontab runs this utility every hour, and it reads the configuration file which specifies the files to rotate and the conditions. The archived files may be compressed if required.

The existing configuration is located in /etc/newsyslog.conf. However, because this file resides in the memory filesystem, the administrator must save the modifications to /nsconfig/newsyslog.conf so the configuration survives restarting the NetScaler.
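For example, before making changes, copy the active file to /nsconfig and then edit the persistent copy from the NetScaler shell (a minimal sketch; to apply a change without a reboot, make the same edit to /etc/newsyslog.conf as well):

cp /etc/newsyslog.conf /nsconfig/newsyslog.conf
vi /nsconfig/newsyslog.conf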

The entries contained in this file have the following format:

logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]

Note: Fields within squared brackets are optional and can be omitted.

Each line on the file represents a log file that should be handled by the newsyslog utility and the conditions under which rotation should occur.

For example, the following is an entry taken from the newsyslog.conf file. In this entry, the size field indicates that ns.log is rotated once it reaches 100 kilobytes, and the count field indicates that 25 archived copies of ns.log are kept. A size of 100K and a count of 25 are the default size and count values.

Note that the when field is configured with an asterisk ( * ), meaning that the ns.log file is not rotated based on time. Every hour, a crontab job runs the newsyslog utility which checks if the size of ns.log is greater than or equal to the size configured in this file. In this example, if it is greater than or equal to 100K, it rotates that file.

root@ns# cat /etc/newsyslog.conf
# Netscaler newsyslog.conf
# This file is present in the memory filesystem by default, and any changes
# to this file will be lost following a reboot. If changes to this file
# require persistence between reboots, copy this file to the /nsconfig
# directory and make the required changes to that file.
#
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/cron 600 3 100 * Z
/var/log/amd.log 644 7 100 * Z
/var/log/auth.log 600 7 100 * Z
/var/log/ns.log 600 25 100 * Z

The size field can be changed to modify the size at which ns.log is rotated, or the when field can be changed to rotate ns.log based on a specific time.
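For example, either of the following lines could replace the default ns.log entry shown above (a sketch using the documented format): the first raises the rotation threshold to 200 KB while keeping 25 archives, and the second rotates the file at midnight every day regardless of size:

/var/log/ns.log 600 25 200 * Z

/var/log/ns.log 600 25 * @T00 Z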

The daily, weekly, and/or monthly specification is given as [Dhh], [Ww[Dhh]], and [Mdd[Dhh]], respectively. The time-of-day fields, which are optional, default to midnight. The ranges and meanings for these specifications are:

hh: hours, range 0 … 23

w: day of week, range 0 … 6, where 0 = Sunday

dd: day of month, range 1 … 31, or the letter L or l to specify the last day of the month.

Examples

Here are some examples with explanations for the logs that are rotated by default:

/var/log/auth.log 600 7 100 * Z

The authentication log is rotated when the file reaches 100K; the last 7 copies of auth.log are archived and compressed with gzip (Z flag), and the resulting archives are assigned the permissions -rw-------.

/var/log/all.log 600 7 * @T00 Z

The catch-all log is rotated at midnight every night (@T00); the last 7 copies are kept and compressed with gzip, and the resulting archives are assigned the permissions -rw-------.

/var/log/weekly.log 640 5 * $W6D0 Z

The weekly log is rotated at midnight every Saturday ($W6D0); the last 5 copies are kept and compressed with gzip, and the resulting archives are assigned the permissions -rw-r-----.

Common Rotation Patterns

  • D0: rotate every night at midnight

  • D23: rotate every day at 23:00

  • W0D23: rotate every week on Sunday at 23:00

  • W5: rotate every week on Friday at midnight

  • MLD6: rotate at the last day of every month at 6:00

  • M5: rotate on every 5th day of month at midnight

If an interval and a time specification are both given, then both conditions must be met. That is, the file must be as old as or older than the specified interval and the current time must match the time specification.

The minimum file size can be controlled, but there is no limit on how large the file can grow before the newsyslog utility gets its turn in the next hourly slot.

Debugging

To debug the behavior of the newsyslog utility, add the verbose flag.

root@dj_ns# newsyslog -v
/var/log/cron <3Z>: size (Kb): 31 [100] --> skipping
/var/log/amd.log <7Z>: does not exist, skipped.
/var/log/auth.log <7Z>: size (Kb): 2 [100] --> skipping
/var/log/kerberos.log <7Z>: does not exist, skipped.
/var/log/lpd-errs <7Z>: size (Kb): 0 [100] --> skipping
/var/log/maillog <7Z>: --> will trim at Tue Mar 24 00:00:00 2009
/var/log/sendmail.st <10>: age (hr): 0 [168] --> skipping
/var/log/messages <5Z>: size (Kb): 7 [100] --> skipping
/var/log/all.log <7Z>: --> will trim at Tue Mar 24 00:00:00 2009
/var/log/slip.log <3Z>: size (Kb): 0 [100] --> skipping
/var/log/ppp.log <3Z>: does not exist, skipped.
/var/log/security <10Z>: size (Kb): 0 [100] --> skipping
/var/log/wtmp <3>: --> will trim at Wed Apr 1 04:00:00 2009
/var/log/daily.log <7Z>: does not exist, skipped.
/var/log/weekly.log <5Z>: does not exist, skipped.
/var/log/monthly.log <12Z>: does not exist, skipped.
/var/log/console.log <5Z>: does not exist, skipped.
/var/log/ns.log <5Z>: size (Kb): 18 [100] --> skipping
/var/log/nsvpn.log <5Z>: size (Kb): 0 [100] --> skipping
/var/log/httperror.log <5Z>: size (Kb): 1 [100] --> skipping
/var/log/httpaccess.log <5Z>: size (Kb): 1 [100] --> skipping
root@dj_ns#


WSS Block executables into zip file

I need a solution

Hello everyone

I need your help. In my WSS portal I created a rule to block all executable files (*.exe), according to this KB:

https://support.symantec.com/en_US/article.TECH245091.html

The rule works fine, but if the *.exe file is compressed inside a *.zip file, it does not work.

Any idea why it does not work in that case?

regards

Andres Garcia



7022838: Unable to upgrade iManager plugin for GroupWise.

You can work around this problem with the following steps (a consolidated shell sketch of the cleanup follows step 4):

1. Stop tomcat that runs iManager.

2. Go into /var/opt/Novell/imanager/nps/packages and delete any GroupWise-related NPM.

3. Go into /var/opt/Novell/imanager/nps/web-inf/modules/groupwise and delete everything in this directory.

4. Go into the /var/opt/Novell/imanager/nps/uninstallerdata directory and delete the GroupWise-related items.
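Steps 2 through 4 can also be performed from a shell once Tomcat is stopped; a rough sketch (the NPM file name pattern is an assumption, so adjust the glob to match what is actually present in the packages directory):

rm -f /var/opt/Novell/imanager/nps/packages/*GroupWise*.npm
rm -rf /var/opt/Novell/imanager/nps/web-inf/modules/groupwise/*
# GroupWise-related entries under /var/opt/Novell/imanager/nps/uninstallerdata still need to be removed by hand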

Manually installing NPM file on the server:

1. Copy the provided NPM file into a dedicated directory on the server and extract it using an archive tool such as File Roller. There will be two directories: META-INF and currentwebapp.

2. In the META-INF directory there is a MANIFEST.MF file, which you can edit with gedit to correct the Implementation-Version field and adjust the year. For instance, if it is listed as 1.0.0.20160129, you can change it to 1.0.0.20180129 and save the file.

3. Copy the META-INF directory into the /var/opt/novell/imanager/nps/web-inf/modules/groupwise directory.

4. Now go into the currentwebapp directory, select both the WEB-INF and portal directories, and paste them into /var/opt/novell/imanager/nps with the overwrite option.

5. Start tomcat on the server to start iManager.
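Consolidated, the manual installation amounts to something like the following; a sketch that assumes the NPM file has been copied to /tmp/GroupWise.npm (a placeholder name) and that it is a zip-format archive, which is implied by the archive-tool step above:

mkdir /tmp/gwnpm && cd /tmp/gwnpm
unzip /tmp/GroupWise.npm
# Edit META-INF/MANIFEST.MF and bump the Implementation-Version as in step 2
cp -r META-INF /var/opt/novell/imanager/nps/web-inf/modules/groupwise/
cp -r currentwebapp/WEB-INF currentwebapp/portal /var/opt/novell/imanager/nps/
# Then start Tomcat again to bring iManager back up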

Once you log in to eDirectory again, you will notice that the GroupWise plugin is now listed under installed plugins, with the version information you updated in step 2 of the manifest file.
