Symantec does not detect EICAR on big partition

I need a solution

Hello,

We have a Windows file server with SEP 14 RU1 MP2 installed on it. The file server has six partitions of different sizes. On five of them SEP works great, but on the biggest partition, which is 11 TB, it does not detect EICAR test files, which is really strange.

I already repaired SEP together with Symantec support, and after that SEP worked well again for a few days. The exceptions are also OK. But now we have the issue again.

Does anyone have an idea how to solve that issue?

Thank you!


Related:

How to Free Space From /var Directory of NetScaler Appliance When Unable to Log on to NetScaler GUI

  • Run the following commands to view the contents of the /var directory (a consolidated cleanup sketch follows this list):

    cd /var
    ls -l


    The directories that are usually of interest are as follows:

    /var/nstrace – This directory contains trace files. This is the most common reason for the HDD filling up on the NetScaler, usually because an nstrace was left running for an indefinite amount of time. All traces that are not of interest can and should be deleted. To stop an nstrace, go back to the CLI and issue the stop nstrace command.

    /var/log – This directory contains system specific log files.

    /var/nslog – This directory contains NetScaler log files.

    /var/tmp/support – This directory contains technical support files, also known as support bundles. All files not of interest should be deleted.

    /var/core – Core dumps are stored in this directory. It contains subdirectories labeled with numbers starting at 1, and the files in them can be quite large. Clear all files unless the core dumps are recent and investigation is required.

    /var/crash – Crash files, such as process crashes are stored in this directory. Clear all files unless the crashes are recent and investigation is required.

    /var/nsinstall – Firmware is placed in this directory when upgrading. Clear all files, except the firmware that is currently being used.
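
    The following is a minimal cleanup sketch run from the NetScaler shell (type shell at the NS CLI); the directories are the ones described above, but which specific files are safe to remove depends on your environment, so review before deleting:

    df -h                    # check overall disk usage
    du -sh /var/*            # find the largest directories under /var

    rm -r /var/nstrace/*     # old packet traces (stop any running trace first with "stop nstrace" at the NS CLI)
    rm /var/tmp/support/*    # old support bundles
    rm -r /var/core/*        # old core dumps, if no investigation is pending
    rm -r /var/crash/*       # old crash files, if no investigation is pending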

  • Related:

    Reserved and recovery getting drive letters

    I need a solution

    So Win 10 creates two hidden partitions when you install it: a 499 MB reserved partition and a 500 MB recovery partition. These don't have drive letters, which is great, but when I capture the image and then redeploy it I get three drives: C:, E:, and F:. C: is the main one, and E: and F: are the two partitions above. Is there any way to keep these partitions from getting letters after Ghost restores them? We use Altiris 8.5, but that forum is pretty much dead, and the first-level support person I had said I need to delete those partitions from my Win 10 image. That seems risky from what I have read, especially if we do in-place upgrades down the road.


    Related:

    OneFS SmartDedupe: Performance

    Deduplication is a trade-off: in order to gain increased storage efficiency, additional cluster resources (CPU, memory, and disk I/O) are used to find and execute the sharing of common data blocks.

    Another important performance impact consideration with dedupe is the potential for data fragmentation. After deduplication, files that previously enjoyed contiguous on-disk layout will often have chunks spread across less optimal file system regions. This can lead to slightly increased latencies when accessing these files directly from disk, rather than from cache. To help reduce this risk, SmartDedupe will not share blocks across node pools or data tiers, and will not attempt to deduplicate files smaller than 32KB in size. On the other end of the spectrum, the largest contiguous region that will be matched is 4MB.

    Because deduplication is a data efficiency product rather than a performance-enhancing tool, in most cases the main consideration is managing cluster impact. This applies both to client data access performance, since by design multiple files will be sharing common data blocks, and to dedupe job execution, since additional cluster resources are consumed to detect and share commonality.

    The first deduplication job run will often take a substantial amount of time to run, since it must scan all files under the specified directories to generate the initial index and then create the appropriate shadow stores. However, deduplication job performance will typically improve significantly on the second and subsequent job runs (incrementals), once the initial index and the bulk of the shadow stores have already been created.

    If incremental deduplication jobs do take a long time to complete, this is most likely indicative of a data set with a high rate of change. If a deduplication job is paused or interrupted, it will automatically resume the scanning process from where it left off. The SmartDedupe job is a long running process that involves multiple job phases that are run iteratively. In its default, low impact configuration, SmartDedupe typically processes around 1TB or so of data per day, per node.

    Deduplication can significantly increase the storage efficiency of data. However, the actual space savings will vary depending on the specific attributes of the data itself. As mentioned above, the deduplication assessment job can be run to help predict the likely space savings that deduplication would provide on a given data set.

    For example, virtual machine files often contain duplicate data, much of which is rarely modified. Deduplicating similar OS-type virtual machine images (VMware VMDK files, etc.) that have been block-aligned can significantly decrease the amount of storage space consumed. However, as noted previously, the potential for performance degradation as a result of block sharing and fragmentation should be carefully considered first.

    Isilon SmartDedupe does not deduplicate across files that have different protection settings. For example, if two files share blocks, but file1 is parity protected at +2:1, and file2 has its protection set at +3, SmartDedupe will not attempt to deduplicate them. This ensures that all files and their constituent blocks are protected as configured. Additionally, SmartDedupe won’t deduplicate files that are stored on different node pools. For example, if file1 and file2 are stored on tier 1 and tier 2 respectively, and tier1 and tier2 are both protected at 2:1, OneFS won’t deduplicate them. This helps guard against performance asynchronicity, where some of a file’s blocks could live on a different tier, or class of storage, than others.

    OneFS 8.0.1 introduced performance resource management, which provides statistics for the resources used by jobs – both cluster-wide and per-node. This information is provided via the ‘isi statistics workload’ CLI command. Available in a ‘top’ format, this command displays the top jobs and processes, and periodically updates the information.

    For example, the following syntax shows, and indefinitely refreshes, the top five processes on a cluster:



    # isi statistics workload --limit 5 --format=top

    last update: 2019-01-23T16:45:25 (s)ort: default

          CPU    Reads  Writes  L2     L3     Node  SystemName  JobType

    1.    1.4s   9.1k   0.0     3.5k   497.0  2     Job: 237    IntegrityScan[0]
    2.    1.2s   85.7   714.7   4.9k   0.0    1     Job: 238    Dedupe[0]
    3.    1.2s   9.5k   0.0     3.5k   48.5   1     Job: 237    IntegrityScan[0]
    4.    1.2s   7.4k   541.3   4.9k   0.0    3     Job: 238    Dedupe[0]
    5.    1.1s   7.9k   0.0     3.5k   41.6   2     Job: 237    IntegrityScan[0]

    From the output, we can see that two Job Engine jobs are in progress: Dedupe (job ID 238), which runs at low impact and priority level 4, is contending with IntegrityScan (job ID 237), which runs by default at medium impact and priority level 1.

    The resource statistics tracked per job, per job phase, and per node include CPU, reads, writes, and L2 & L3 cache hits. Unlike the output from the ‘top’ command, this makes it easier to diagnose individual job resource issues, etc.

    Below are some examples of typical space reclamation levels that have been achieved with SmartDedupe.

    Be aware that these dedupe space savings values are provided solely as rough guidance. Since no two data sets are alike (unless they’re replicated), actual results can vary considerably from these examples.

    Workflow / Data Type              Typical Space Savings

    Virtual Machine Data              35%
    Home Directories / File Shares    25%
    Email Archive                     20%
    Engineering Source Code           15%
    Media Files                       10%

    SmartDedupe is included as a core component of Isilon OneFS but requires a valid product license key in order to activate. This license key can be purchased through your Isilon account team. An unlicensed cluster will show a SmartDedupe warning until a valid product license has been purchased and applied to the cluster.

    License keys can be easily added via the ‘Activate License’ section of the OneFS WebUI, accessed by navigating via Cluster Management > Licensing.

    For optimal cluster performance, observing the following SmartDedupe best practices is recommended.

    • Deduplication is most effective when applied to data sets with a low rate of change – for example, archived data.
    • Enable SmartDedupe to run at subdirectory level(s) below /ifs.
    • Avoid adding more than ten subdirectory paths to the SmartDedupe configuration policy.
    • SmartDedupe is ideal for home directories, departmental file shares and warm and cold archive data sets.
    • Run SmartDedupe against a smaller sample data set first to evaluate performance impact versus space efficiency.
    • Schedule deduplication to run during the cluster’s low usage hours – i.e. overnight, weekends, etc.
    • After the initial dedupe job has completed, schedule incremental dedupe jobs to run every two weeks or so, depending on the size and rate of change of the dataset.
    • Always run SmartDedupe with the default ‘low’ impact Job Engine policy.
    • Run the dedupe assessment job on a single root directory at a time. If multiple directory paths are assessed in the same job, you will not be able to determine which directory should be deduplicated.
    • When replicating deduplicated data, to avoid running out of space on target, it is important to verify that the logical data size (i.e. the amount of storage space saved plus the actual storage space consumed) does not exceed the total available space on the target cluster.
    • Run a deduplication job on an appropriate data set prior to enabling a snapshots schedule.
    • Where possible, perform any snapshot restores (reverts) before running a deduplication job. And run a dedupe job directly after restoring a prior snapshot version.

    With dedupe, there is always a trade-off between cluster resource consumption (CPU, memory, disk), the potential for data fragmentation, and the benefit of increased space efficiency. Therefore, SmartDedupe is not ideally suited for heavily trafficked data or high-performance workloads.

    • Depending on an application’s I/O profile and the effect of deduplication on the data layout, read and write performance and overall space savings can vary considerably.
    • SmartDedupe will not permit block sharing across different hardware types or node pools to reduce the risk of performance asymmetry.
    • SmartDedupe will not share blocks across files with different protection policies applied.
    • OneFS metadata, including the deduplication index, is not deduplicated.
    • Deduplication is a long running process that involves multiple job phases that are run iteratively.
    • Dedupe job performance will typically improve significantly on the second and subsequent job runs, once the initial index and the bulk of the shadow stores have already been created.
    • SmartDedupe will not deduplicate the data stored in a snapshot. However, snapshots can certainly be created of deduplicated data.
    • If deduplication is enabled on a cluster that already has a significant amount of data stored in snapshots, it will take time before the snapshot data is affected by deduplication. Newly created snapshots will contain deduplicated data, but older snapshots will not.

    SmartDedupe is just one of several components of OneFS that enable Isilon to deliver a very high level of raw disk utilization. Another major storage efficiency attribute is the way that Isilon natively manages data protection in the file system. Because OneFS protects data at the file level using software-based erasure coding, this translates to raw disk space utilization levels in the 85% range or higher. SmartDedupe serves to further extend this storage efficiency headroom, bringing an even more compelling and demonstrable TCO advantage to primary file-based storage.

    Related:

    Re: Unable to boot to utility partition on VNX5300, help!

    The system is out of warranty and is a lab system. It's working, but I want to reload the image on it because some configuration won't let me delete it.

    I boot it up with a serial cable attached and hit Ctrl-C, and per the "Backrev Array" solution it's supposed to do a MiniSetup and reboot a few times. It never reboots; it just sits at "int13 – EXTENDED READ (4000)" and never goes further.

    I rebooted it manually and tried to start the process again, but I just get this… any ideas?

    ABCDabcdEFabcd << Stopping after POST >> GabcdefHabcdefIabcdefJabcdeKLabMabNabOabcPQRSTUVWabXYabZAABBCCabDDabcEEabcFFabcGGabcHHabcIIabJJabKKLLMMNNOOPPQQRRSSTTUUVVWWXX

    ************************************************************

    * Extended POST Messages

    ************************************************************

    INFORMATION: POST Start

    INFORMATION: MCU Operating mode changed from Linux to Clariion

    INFORMATION: PSB not present

    ************************************************************

    EndTime: 10/28/2018 15:29:59

    …. Storage System Failure – Contact your Service Representative …

    *******

    Enclosure: 0x0008000B : Added to Table

    Motherboard: 0x00130009 : Added to Table

    Memory: 0x00000001

    DIMM 0: 0x00000001

    DIMM 1: 0x00000001

    DIMM 2: 0x00000001

    Mezzanine: 0x00100007

    I/O Module 0: 0x00000001 : Added to Table

    I/O Module 1: 0x00000001 : Added to Table

    Power Supply A: 0x000B0014

    Power Supply B: 0x00000001

    0x00130009: MCU 0540

    0x00130009: CMDAPP 0504

    0x00130009: CMDTABLE 0096

    0x00130009: CMDBOOT 0002

    0x00130009: PLX 0305

    0x000B0014: PS FW 0027

    Checksum valid

    Relocating Data Directory Boot Service (DDBS: Rev. 05.03)…

    DDBS: K10_REBOOT_DATA: Count = 1

    DDBS: K10_REBOOT_DATA: State = 0

    DDBS: K10_REBOOT_DATA: ForceDegradedMode = 0

    DDBS: **** WARNING: SP rebooted unexpectedly before completing MiniSetup on the Utility Partition.

    DDBS: MDDE (Rev 600) on disk 1

    DDBS: MDDE (Rev 600) on disk 3

    DDBS: MDB read from both disks.

    DDBS: Chassis and disk WWN seeds match.

    DDBS: First disk is valid for boot.

    DDBS: Second disk is valid for boot.

    Utility Partition image (0x0040000F) located at sector LBA 0x1453D802

    Disk Set: 1 3

    Total Sectors: 0x013BA000

    Relative Sectors: 0x00000800

    Calculated mirror drive geometry:

    Sectors: 63

    Heads: 255

    Cylinders: 1287

    Capacity: 20686848 sectors

    Total Sectors: 0x013BA000

    Relative Sectors: 0x00000800

    Calculated mirror drive geometry:

    Sectors: 63

    Heads: 255

    Cylinders: 1287

    Capacity: 20686848 sectors

    Stopping USB UHCI Controller…

    Stopping USB UHCI Controller…

    EndTime: 10/28/2018 15:33:37

    int13 – RESET (1)

    int13 – CHECK EXTENSIONS PRESENT (3)

    int13 – CHECK EXTENSIONS PRESENT (5)

    int13 – GET DRIVE PARAMETERS (Extended) (6)

    int13 – EXTENDED READ (200)

    int13 – EXTENDED READ (400)

    int13 – EXTENDED READ (600)

    int13 – READ PARAMETERS (800)

    int13 – READ PARAMETERS (802)

    int13 – DRIVE TYPE (803)

    int13 – CHECK EXTENSIONS PRESENT (804)

    int13 – GET DRIVE PARAMETERS (Extended) (805)

    int13 – READ PARAMETERS (806)

    int13 – EXTENDED WRITE (846)

    int13 – EXTENDED WRITE (847)

    int13 – EXTENDED WRITE (848)

    int13 – READ PARAMETERS (964)

    int13 – DRIVE TYPE (965)

    int13 – CHECK EXTENSIONS PRESENT (966)

    int13 – GET DRIVE PARAMETERS (Extended) (967)

    int13 – READ PARAMETERS (968)

    int13 – EXTENDED WRITE (997)

    int13 – EXTENDED WRITE (998)

    int13 – EXTENDED WRITE (999)

    int13 – EXTENDED READ (1000)

    int13 – EXTENDED WRITE (1012)

    int13 – EXTENDED WRITE (1013)

    int13 – EXTENDED WRITE (1014)

    int13 – EXTENDED READ (1200)

    int13 – EXTENDED READ (1400)

    int13 – EXTENDED READ (1600)

    int13 – EXTENDED READ (1800)

    int13 – EXTENDED READ (2000)

    int13 – EXTENDED READ (2200)

    int13 – EXTENDED READ (2400)

    int13 – EXTENDED READ (2600)

    int13 – EXTENDED READ (2800)

    int13 – EXTENDED READ (3000)

    int13 – EXTENDED READ (3200)

    int13 – EXTENDED READ (3400)

    int13 – EXTENDED READ (3600)

    int13 – EXTENDED READ (3800)

    int13 – EXTENDED READ (4000)

    It doesn’t seem to ever go past this… so I cannot move on to the next steps in the solution.

    any help or experience with this would be much appreciated!

    -M

    Related:


    Re: InsightIQ – how to collect information on the contents (files) of a specific directory

    My question is how to collect the information for a specific directory with the InsightIQ CLI.

    I can collect the directories with the IIQ CLI:

    iiq_data_export fsa export -c clustername --data-module directories -o 7318 --name <file>

    output CSV:

    path[directory:/ifs/],dir_cnt (count),file_cnt (count),ads_cnt,other_cnt (count),log_size_sum (bytes),phys_size_sum (bytes),log_size_sum_overflow,report_date: 1537042326

    /ifs/data,4092765,123934912,0,0,1097588028518359,1332028348377088,0

    /ifs/home,12,68,0,0,94902,2095104,0

    /ifs/.isilon,3,22,0,0,60217,564224,0

    /ifs/data/files,1,7,0,0,17907,184832,0

    The --data-module directories option generates an overview of the files.

    In the manual I can only find these data-module options:

    Directories                    directories
    File Count by Logical Size     file_count_by_logical_size
    File Count by Last Modified    file_count_by_modified_time
    File Count by Physical Size    file_count_by_physical_size
    Top Directories                top_directories
    Top Files                      top_files

    Now I want to have the directory /ifs/data/files exported to CSV format.

    I can also run the report in the GUI under File System Analytics and download it as CSV.

    Can someone give me a hint on the syntax?

    Thanks

    Related:

    How Big is the 4.x User Layer Disk, and How Can You Change That?

    By default, the User Layer is 10GB.

    If you have set user quotas on your file share, then we will size the User Layer disk to be equal to your user quota. We assume that the share is specific to layering, and the only thing a user is going to be writing to it is their User Disk, so we assume that any user-specific quota is how big you want to make the User Disk. This supersedes the default. Setting the file share quota is the standard, preferred method for setting the User Layer size.
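
    For example, if you manage the quota with FSRM, a minimal sketch of applying a per-user-folder quota on the layering share might look like the following (the share path and the 20 GB size are hypothetical examples, not values from this article; it requires the File Server Resource Manager role and its PowerShell module):

    # Hypothetical layering share path - adjust to your environment
    $share = 'D:\Shares\UserLayers'

    # Create a 20 GB quota template and auto-apply it to every user folder created under the share
    New-FsrmQuotaTemplate -Name 'UserLayer20GB' -Size 20GB
    New-FsrmAutoQuota -Path $share -Template 'UserLayer20GB'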

    There are three registry keys in your image which govern this behavior. If you want to modify them, you can do it with a GPO or a layer. I suspect the best place to put them is the Platform Layer, but you could put them in the OS or App Layers too.

    [HKEY_LOCAL_MACHINE\Software\Unidesk\Ulayer]

    “UseQuotaIfAvailable” String Value

    Values: “True” (Default), “False”

    True to enable discovery and use of quotas. False to disable.

    “DefaultUserLayerSizeInGb” DWord Value

    The size of the user layer in GB without quotas (E.g. 5, 10, 23, etc.)

    When not specified, the default is 10.

    “QuotaQuerySleepMS” DWord Value

    The number of milliseconds to wait after creating the directory for the user layer before checking to see if it has a quota. This is necessary to give some quota systems time to apply the quota to the new directory (FSRM requires this).

    When not specified the default is 1000.

    You'll probably never use the last one, but try it if you are sure you have set a quota and it does not seem to be working.
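
    As a rough sketch, the same values can also be set with PowerShell (for instance from a script included in the layer); the key path and value names come from the description above, while the sizes shown are only example values:

    # Registry key that governs User Layer sizing
    $key = 'HKLM:\Software\Unidesk\Ulayer'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

    # Use the file share quota when one is found ("True"/"False" string value)
    Set-ItemProperty -Path $key -Name 'UseQuotaIfAvailable' -Value 'True' -Type String

    # Fallback size in GB when no quota applies (example: 20 GB instead of the default 10)
    Set-ItemProperty -Path $key -Name 'DefaultUserLayerSizeInGb' -Value 20 -Type DWord

    # Wait 2000 ms for the quota system (e.g. FSRM) to apply a quota to the new directory
    Set-ItemProperty -Path $key -Name 'QuotaQuerySleepMS' -Value 2000 -Type DWord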

    Expanding the User Layer

    If you already have a User Layer disk and you want to expand it, you just need to expand the VHD itself, and then expand the filesystem on it. PowerShell on Hyper-V servers, for instance, has a CmdLet named “resize-vhd” which will take a local filename and a total number of bytes and resize your disk. For some reason, resize-vhd is only available on Hyper-V servers, but you just need to enable the Role in your server to have access to the cmdlet. Obviously you can only do this while the user is not logged in and not using the disk. Otherwise, any third-party VHD-resizing tool will work.

    Once you resize the VHD, run Disk Management and extend the filesystem in the disk to fill the extra space; that space will be reflected when the user logs in. You can use the Attach VHD function in Disk Management to extend the filesystem after resizing the VHD, or you can let the user log back in, run Disk Management themselves, and extend the filesystem themselves. It will be the only disk with free space at the end, and the space becomes available immediately.
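
    As a rough sketch on a machine with the Hyper-V PowerShell module available (the VHD path and the 30 GB target size below are hypothetical examples), the resize and filesystem expansion could look like this:

    # Grow the user layer VHD (the user must be logged off and the disk not in use)
    $vhdPath = 'E:\UserLayers\jdoe\jdoe.vhd'   # hypothetical path
    Resize-VHD -Path $vhdPath -SizeBytes 30GB

    # Mount the VHD, extend the largest partition to fill the new space, then detach
    $disk = Mount-VHD -Path $vhdPath -Passthru | Get-Disk
    $part = Get-Partition -DiskNumber $disk.Number | Sort-Object Size -Descending | Select-Object -First 1
    $max  = (Get-PartitionSupportedSize -DiskNumber $disk.Number -PartitionNumber $part.PartitionNumber).SizeMax
    Resize-Partition -DiskNumber $disk.Number -PartitionNumber $part.PartitionNumber -Size $max
    Dismount-VHD -Path $vhdPath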

    Note: the resize-vhd command can be run from Hyper-V only. You will need a Hyper-V server to run the command, or you can temporarily add the Hyper-V role to a machine and then run the command.

    Related:


    BSOD after deploying sysprepped image

    I need a solution

    I've been having periodic issues deploying sysprepped images with GSS 3 (we're currently on 3.2 RU6) but managed to get it working on Windows 10 1709 and 1803 – until today. The problem has surfaced again, and I think I can trace it to GSS writing files to the wrong partition. I have a deployment task for a sysprepped image with a custom unattend file that fails randomly; sometimes on certain models, and other times on models that work 99% of the time. The result is that after laying down the image, it reboots to a blue screen claiming that the registry is corrupt.

    When I boot the machine with a recovery USB, I open the command line and run Notepad. I can see that the System Reserved drive is labelled C:, and in that drive is a "windows" directory with a "panther" subdirectory (note that they aren't capitalized) that contains our unattend file, named unattend.xml. I assume this is GSS copying our custom unattend file to C:\windows\panther – except it appears that GSS (or WinPE) thinks the reserved partition is the system drive and writes the unattend file to it, causing corruption and ultimately the blue screen on boot. Another example of this is that I have a script that writes to the Windows\Setup\Scripts\setupcomplete.cmd file in order to install the DAgent on first boot (for portability, since we have multiple GSS servers in our environment), except I have to specify that drive as D: for WinPE to write to the correct partition.

    REM Point DAgent to correct ghost server
    md D:\WINDOWS\SETUP\SCRIPTS
    ECHO msiexec /i C:\DAgent\dagent_x64.msi /qn server_tcp_addr=%DSSERVER% server_tcp_port=402 >> D:\WINDOWS\SETUP\SCRIPTS\setupcomplete.cmd

    Is there an easy way to mitigate this? I don't know of a way to either force WinPE to see the correct drive letters, or to specify where to copy the unattend file, since WinPE sees the System Reserved partition as the C: drive.

