SEP 15 – file hash calculated incorrectly

I need a solution

I just started testing SEP 15 and installed it on my test machine using the default policy.

It identified a file, NTOSKRNL.EXE, with the following file hash:

5379732000000000000000000000000000000000000000000000000000000000

This hash is not correct; the PowerShell Get-FileHash command returns this instead:

C732B1DD3480285B6666641BC417A0C897884331229F47B055B79A9E42DF4282

That hash matches a known file on VirusTotal. Any idea why Symantec's hash calculation would be incorrect, or what I should do next?
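
If you want a third opinion outside both SEP and PowerShell, here is a minimal Python sketch that computes the same SHA-256 that Get-FileHash reports. The path below is only an example; point it at the exact copy of NTOSKRNL.EXE that SEP flagged.

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    # Stream the file in chunks so large binaries do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest().upper()

# Example path only; adjust to the file SEP actually reported on.
print(sha256_of(r"C:\Windows\System32\ntoskrnl.exe"))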


Related:

Blacklisting md5 hash with restapi

I need a solution

Hello all,

I want a way to block an MD5 hash in SEPM 14.0 using the REST API.

I have already tried https://apidocs.symantec.com/home/saep#_updateblacklist, but I am not able to work out the blacklist payload that is required. An exact payload to send to SEPM, or just a pointer in the right direction, would help.
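
As a starting point, here is a rough Python sketch of the kind of token-authenticated JSON call the SEPM REST API expects. The authenticate step follows the usual SEPM 14.x pattern, but the hostname, the blacklist endpoint path, and the payload field names below are assumptions on my part and need to be verified against the API documentation linked above before use.

import requests

SEPM = "https://sepm.example.local:8446/sepm/api/v1"  # hypothetical SEPM host

# Step 1: authenticate and obtain a session token (usual SEPM 14.x pattern).
auth = requests.post(
    SEPM + "/identity/authenticate",
    json={"username": "admin", "password": "********", "domain": ""},
    verify=False,  # only if SEPM still uses a self-signed certificate
)
token = auth.json()["token"]
headers = {"Authorization": "Bearer " + token}

# Step 2: send the hash list. The endpoint and field names are assumptions;
# confirm them against the _updateblacklist documentation for your build.
payload = {
    "name": "Blocked-MD5-list",
    "hashType": "MD5",
    "data": ["d41d8cd98f00b204e9800998ecf8427e"],  # placeholder MD5 to block
}
resp = requests.post(SEPM + "/policy-objects/fingerprints",
                     json=payload, headers=headers, verify=False)
print(resp.status_code, resp.text)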


Related:

SEMS (PGP console): Has the WDE hashing algorithm changed to SHA-2 yet?

I need a solution

Hi All,

I’ve been refreshing my knowledge of the PGP WDE algorithms and technical details; the manual seems to mention SHA-1 as the hashing function.

Has it been updated to SHA-2 in the latest version, or is there an Etrack/enhancement request to change it?

Is there any risk to consider, given that SHA-1 is rated as weak nowadays?

https://support.symantec.com/en_US/article.TECH149…

Regards,


Related:


Re: Delete large bucket

You can get the bucket-wipe tool here:

WARNING: This will erase the bucket and all of its data! Please make absolutely sure this is what you want.

http://130753149435015067.public.ecstestdrive.com/share/bucket-wipe-1.9.jar



usage: java -jar bucket-wipe.jar [options] <bucket-name>

 -a,--access-key        the S3 access key
 -e,--endpoint          the endpoint to connect to, including protocol, host, and port
 -h,--help              displays this help text
 -hier,--hierarchical   enumerate the bucket hierarchically; this is recommended for ECS's filesystem-enabled buckets
 --keep-bucket          do not delete the bucket when done
 -l,--key-list          instead of listing the bucket, delete objects matched in the source file key list
 --no-smart-client      disables the ECS smart-client; use this option with an external load balancer
 -p,--prefix            deletes only objects under the specified prefix
 -s,--secret-key        the secret key
 --stacktrace           displays full stack trace of errors
 -t,--threads           number of threads to use
 --vhost                enables DNS buckets and turns off the load balancer
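
For reference, an invocation against an ECS endpoint might look like the following; the endpoint, credentials, thread count, and bucket name are placeholders, not values from this thread:

java -jar bucket-wipe-1.9.jar -e https://object.ecs.example.com:9021 -a <access-key> -s <secret-key> -t 32 my-old-bucket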

Related:

Avamar Client for Windows: Avamar backup fails with “avtar Error : Out of memory for cache file” on Windows clients

Article Number: 524280 Article Version: 3 Article Type: Break Fix



Avamar Plug-in for Oracle, Avamar Client for Windows, Avamar Client for Windows 7.2.101-31



In this scenario we have the same issue described in KB 495969; however, the solution does not apply due to an environment issue on the Windows client.

  • KB 495969 – Avamar backup fails with “Not Enough Space” and “Out of Memory for cache file”

The issue can affect any plug-in; in this case the error is presented in the following manner:

  • For FS backups:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\p_cache.dat'
avtar Error <18866>: Out of memory for cache file 'C:\Program Files\avs\var\p_cache.dat' size 805306912
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
  • For VSS backups:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\p_cache.dat'
avtar Error <18866>: Out of memory for cache file 'C:\Program Files\avs\var\p_cache.dat' size 1610613280
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
  • For Oracle backups:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat'
avtar Error <18866>: Out of memory for cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat' size 100663840
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
or this variant:
avtar Info <8650>: Opening hash cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat'
avtar Error <18864>: Out of restricted memory for cache file 'C:\Program Files\avs\var\clientlogs\oracle-prefix-1_cache.dat' size 100663840
avtar FATAL <5351>: MAIN: Unhandled internal exception Unix exception Not enough space
avoracle Error <7934>: Snapup of <oracle-db> aborted due to rman terminated abnormally - check the logs
  • With the RMAN log reporting this:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 06/14/2018 22:17:40
RMAN-03009: failure of backup command on c0 channel at 06/14/2018 22:17:15
ORA-04030: out of process memory when trying to allocate 1049112 bytes (KSFQ heap,KSFQ Buffers)
Recovery Manager complete.

Initially it was thought that the cache file could not grow in size due to an incorrect "hashcachemax" value.

The client had plenty of free RAM (48 GB in total), so we increased the flag's value from -16 (3 GB maximum cache file size) to -8 (6 GB maximum); with 48 GB of RAM, -16 corresponds to 48/16 = 3 GB and -8 to 48/8 = 6 GB.

But the issue persisted, and disk space was not a factor either; there were plenty of gigabytes free.

Further investigation with a test binary from the engineering team showed that the Windows OS was not releasing enough unused, contiguous memory to allocate and load the entire hash cache file into memory for the backup operation.

A test binary that allocated the memory in smaller pieces was also tried, to see whether the OS would then allow the full p_cache.dat file to be loaded into memory, but that did not help either; the operating system still refused to load the file into memory for some reason.

The root cause is hidden somewhere in the OS; however, in this case we did not engage Microsoft for further investigation on their side.

Instead, we found a way to work around the issue by making the cache file smaller; see the details in the resolution section below.

To work around this issue, we set the hash cache file to a smaller size so that the OS would not have trouble allocating it in memory.

In this case it was noticed that the OS also had problems allocating smaller sizes, such as 200+ MB, so we decided to resize p_cache.dat to just 100 MB using the following flag:

--hashcachemax=100

This way the hash cache file never grows beyond 100 MB and overwrites older entries.

After adding that flag, the cache file must be recycled by renaming or deleting p_cache.dat (renaming is the preferred option).

After the first backup, which is expected to take longer than usual (to rebuild the cache file), the issue should be resolved.
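
Purely as an illustration of those two steps on a Windows client: the avtar.cmd flag-file location below is an assumption based on the default install path visible in the log excerpts above, so confirm where your client actually reads its avtar flags before applying it.

rem Hypothetical example of the two steps (default Avamar install path assumed):
rem 1) append the flag to the client's avtar flag file
>>"C:\Program Files\avs\var\avtar.cmd" echo --hashcachemax=100
rem 2) recycle the existing hash cache (renaming is preferred over deleting)
ren "C:\Program Files\avs\var\p_cache.dat" p_cache.dat.old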

  • The demand-paging cache is not recommended in this scenario, since the backups are directed to GSAN storage, so the monolithic paging cache was used.
  • Demand-paging was designed to benefit backups being sent to Data Domain storage.

Related:

Change Action taken on risk for specific hash

I need a solution

Hello,

I have recently been facing a lot of detections of unwanted software, detected as PUA.OpenCandy with the action taken on risk shown as “Left alone”.

Is there any way to change the action taken on risk for a specific hash? I would need to remove it, or at least move it to Quarantine, or better yet move all of these same files to Quarantine.

I know that I can change the actions taken on every detection via the Auto-Protect options, but that is not very effective, and I'm not sure which category PUA.OpenCandy belongs to.

Thank you for any suggestions.

BR

Lukas


Related:


New variant of KeyPass ransomware

I need a solution

Dear,

According to this post, https://securelist.com/keypass-ransomware/87412/, there is a new variant of the KeyPass ransomware.

The hash is 901d893f665c6f9741aa940e5f275952 and Symantec detects it as Trojan.Gen.2 (https://www.virustotal.com/#/file/ee74c63faa2eb970…).

My question is: are this new variant and the others detected by Symantec (SEP and SMG), or do I have to create a case with support?

How should I proceed?

Regards

Miguel Angel


Related:

Detection by digital signature publisher

I need a solution

I want to block, ban, or clean a file based on its digital signature publisher. Mindspark Interactive Network, Inc. is a greyware whack-a-mole that hash banning just won’t take care of. I need SEP to interrogate the file and, upon seeing that the digital signature publisher equals Mindspark Interactive Network, Inc., remove, clean, delete, or quarantine it. Any help on this would be greatly appreciated.

Thanks,

Rogue
