Ticket open for close to 3 months

Not sure if this is the right area, but what do you, as a person paying for support, do when you have a ticket open for close to 3 months and you are no closer to a resolution than the day you created it?

I’ve escalated it to a manager, and they give me some talk about “keeping an eye on it”; it went to at least one, if not two, levels of engineering support, and still there is no resolution.

FWIW, it was an issue that appeared in the upgrade from 8.1 to 8.2, and it impacts us almost daily. Yet there is no resolution, and it doesn’t seem to resonate with anyone at EMC that this is a critical problem.

Frustrated, and curious as to what others do in this situation.

Thanks.

Related:

Re: ECS – S3 Interface – how to increase the retention-period

Hello Jason,

Thank you for your answer!

During my testing I started without a retention period, and so I was able to set it “once”…

In the Centera SDK, I solved this task by changing the metadata, which was allowed but created a new CDF with a new Clip-ID pointing to the same blob as the original. The new CDF contained a field called “prev.clip”, which held the ID of the original. During deletion, I used this field to delete all previous clips as well.

If I were to do the same for ECS (see the sketch after this list):

  • Copy the existing (protected) object to a new object:
    • keep a reference to the copied item in the metadata,
    • protect the new object with a longer retention period,
    • remember the ID of the new object instead of the old one.
  • During delete:
    • check whether there is a reference to a previous object and delete that one first (in a loop),
    • delete the object.
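A rough boto3 sketch of that flow against the ECS S3 endpoint (the endpoint, bucket and key names are placeholders, and the x-emc-retention-period header and its unit are my assumption about the ECS S3 extension, untested):

    # Sketch: extend retention by copying to a new object that back-references
    # its predecessor (like Centera's prev.clip), then chain-delete later.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://ecs.example.com:9021")
    BUCKET = "contracts"  # hypothetical bucket

    def _set_retention(request, **kwargs):
        # Assumed ECS extension header; retention value in seconds.
        request.headers["x-emc-retention-period"] = str(2 * 365 * 24 * 3600)

    def extend_retention(old_key, new_key):
        # Server-side copy; REPLACE lets us attach the back-reference metadata.
        s3.meta.events.register("before-sign.s3.CopyObject", _set_retention)
        try:
            s3.copy_object(
                Bucket=BUCKET, Key=new_key,
                CopySource={"Bucket": BUCKET, "Key": old_key},
                Metadata={"prev-object": old_key},
                MetadataDirective="REPLACE",
            )
        finally:
            s3.meta.events.unregister("before-sign.s3.CopyObject", _set_retention)
        return new_key

    def delete_chain(key):
        # Walk the prev-object references and delete every generation.
        # (Each delete can only succeed once that object's retention has expired.)
        while key:
            head = s3.head_object(Bucket=BUCKET, Key=key)
            prev = head["Metadata"].get("prev-object")
            s3.delete_object(Bucket=BUCKET, Key=key)
            key = prev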

Would that cause the underlying data to be duplicated as well?

Is that a good or at least acceptable solution?

The underlying problem is that a user has files where he does not know at the start how important they are (and how long they need to be protected). Later it must also be possible to increase the retention by another year or more (e.g. because a contract was renewed and all data belonging to the contract must be protected longer).

Thank you!

Christoph

Related:

Re: Filesystem Encryption over NFS Isilon folder

Encryption can be handled in several ways. Data-at-rest encryption on Isilon can be handled by buying nodes with SEDs (self-encrypting drives). Data-in-flight encryption can be handled by the protocols in use (SMB 3 or kerberized NFSv4, e.g. a mount with sec=krb5p for Kerberos with privacy, i.e. encryption on the wire), or by a third-party application. One of the best in this area is certainly Vormetric from Thales.

https://www.thalesesecurity.com/solutions/use-case/data-security-and-encryption/database-security/securing-unstructured-files

Hope this helps,

Chris Klosterman

Principal SE, DataDobi

Related:

Re: Problem running search against archived table data

I am running on a 64-bit Linux system with the following specifics:

Kernel name – Linux

Kernel release – 2.6.32-573.35.2.el6.x86_64

Kernel version – #1 SMP Mon Oct 24 14:14:01 EDT 2016

machine hardware name – x86_64

processor type – x86_64

hardware platform – x86_64

operating system – GNU/Linux

Since this is a Linux system, where is the XDB admin client located?

I constructed the XQuery by simply cutting and pasting the code that was provided to me.

I am now trying to use the “trace” function to get some meaningful debugging output. I have modified my XQuery code as follows:

    declare option xhive:index-debug 'true';
    declare option xhive:queryplan-debug 'true';
    declare option xhive:pathexpr-debug 'true';
    declare namespace table = "urn:x-emc:ia:schema:table";
    declare namespace ia = "urn:x-emc:ia:schema:fn";
    declare variable $page external;
    declare variable $size external;
    declare variable $EMPNO external;
    declare variable $LAST_NAME external;
    declare variable $GENDER external;
    declare variable $STATUS external;
    declare function ia:decrypt-value($columnValue as xs:string*) as xs:string* external;

    declare function local:getResultsPage($rows, $page, $size) {
      let $offset := $page * $size
      let $total := count($rows)
      return <results total="{ $total }"> {
        for $row in subsequence($rows, $offset + 1, $size)
        return $row
      } </results>
    };

    declare function local:addClause($whereClause as xs:string, $var as xs:string*, $expr as xs:string) as xs:string {
      fn:trace($var, 'the value of $var is:'),
      if (empty($var) or $var = "")
      then $whereClause
      else if ($whereClause = "") then $expr
      else concat($whereClause, " and ", $expr)
    };

    let $whereClause := trace(local:addClause("", $EMPNO, concat("contains($elem/EMPNO, '", $EMPNO, "')")))
    let $whereClause := trace(local:addClause($whereClause, $LAST_NAME, concat("$elem/LAST_NAME = &apos;", $LAST_NAME, "&apos;")))
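For readability, the clause-building above is just conditional AND-concatenation followed by string assembly of the final query; a rough Python restatement (the input values are hypothetical placeholders, not from my actual job):

    # Rough Python restatement of local:addClause and the $whereClause chain.
    def add_clause(where_clause, var, expr):
        """Skip empty variables; otherwise append expr with ' and '."""
        if not var:
            return where_clause
        return expr if where_clause == "" else where_clause + " and " + expr

    # Hypothetical external variable values, for illustration only.
    empno, last_name, gender, status = "1234", "SMITH", "", ""

    where = add_clause("", empno, f"contains($elem/EMPNO, '{empno}')")
    where = add_clause(where, last_name, f"$elem/LAST_NAME = '{last_name}'")
    where = add_clause(where, gender, f"$elem/SEX = '{gender}'")
    where = add_clause(where, status, f"$elem/STATUS = '{status}'")
    if where:
        where = "where " + where

    print(f" for $elem in /HR_APPLICATION/PAY_HISTHDR/ROW {where} return $elem")
    # -> for $elem in /HR_APPLICATION/PAY_HISTHDR/ROW where
    #    contains($elem/EMPNO, '1234') and $elem/LAST_NAME = 'SMITH' return $elem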

However, after I reran the InfoArchive query (which runs the XQuery code), I checked the logfiles that were updated at the time I ran my test, and all I found was the following:

    2017-07-17 13:02:57.209 DEBUG org.apache.http.wire - http-outgoing-95 << " "query" : "declare option xhive:index-debug 'true';
    declare option xhive:queryplan-debug 'true';
    declare option xhive:pathexpr-debug 'true';
    declare namespace table = "urn:x-emc:ia:schema:table";
    declare namespace ia = "urn:x-emc:ia:schema:fn";
    declare variable $page external;
    declare variable $size external;
    declare variable $EMPNO external;
    declare variable $LAST_NAME external;
    declare variable $GENDER external;
    declare variable $STATUS external;
    declare function ia:decrypt-value($columnValue as xs:string*) as xs:string* external;
    declare function local:getResultsPage($rows, $page, $size) {
      let $offset := $page * $size
      let $total := count($rows)
      return <results total="{ $total }"> {
        for $row in subsequence($rows, $offset + 1, $size)
        return $row
      } </results>
    };
    declare function local:addClause($whereClause as xs:string, $var as xs:string*, $expr as xs:string) as xs:string {
      if (empty($var) or $var = "")
      then $whereClause
      else if ($whereClause = "") then $expr
      else concat($whereClause, " and ", $expr)
    };
    let $whereClause := trace(local:addClause("", $EMPNO, concat("contains($elem/EMPNO, '", $EMPNO, "')")))
    let $whereClause := trace(local:addClause($whereClause, $LAST_NAME, concat("$elem/LAST_NAME = &apos;", $LAST_NAME, "&apos;")))
    let $whereClause := trace(local:addClause($whereClause, $GENDER, concat("$elem/SEX = &apos;", $GENDER, "&apos;")))
    let $whereClause := trace(local:addClause($whereClause, $STATUS, concat("$elem/STATUS = &apos;", $STATUS, "&apos;")))
    let $whereClause := if ($whereClause != "") then concat("where ", $whereClause) else $whereClause
    let $query-str := trace(concat(" for $elem in /HR_APPLICATION/PAY_HISTHDR/ROW ", $whereClause, " return $elem"))
    let $main-query := trace(xhive:evaluate($query-str))
    let $rows := for $elem in $main-query
    return

    (… more lines have been deleted …)

Note that there is nothing in here about variable values, which is what I really wanted to see. Is there something else I need to configure?

Related:

SourceOne Email Management – Historical Archive jobs fail because the message properties are missing or the message is corrupt

SourceOne Email Management

Unable to archive certain messages. Historical Archive jobs fail because a few properties are missing from the message or the message is corrupt. The following errors are present in the ExArchiveJBC.exe logs:

Date : <date>

Time : <time>

Seq : 3464

Verbosity : ERROR-LOGD

Process : <process id>

Thread : <thread id>

Module : CExArchiveThread.cpp

Func : CExArchiveThread::Run(302)

Machine : <worker server name>

Message : Failed to calculate message ID (0x86040604) [ExArchiveJBC.exe, ExMsgBaseImpl2.cpp(33).CExMsgBase::ProcessMessageId] > System call failed. (0x86040100) [ExArchiveJBC.exe, ExExchMsgIdEx.cpp(196).CExExchMsgIdEx::CalcMsgIdUsingSchema] > System call failed. (0x86040100) [ExArchiveJBC.exe, ExExchMsgIdEx.cpp(334).CExExchMsgIdEx::HandleComputedProperty] > Failed to Open Message Attachment Table, MsgId: <Unknown> (0x86040C0C) Unspecified error (0x80004005) [ExArchiveJBC.exe, ExExchMsgIdEx.cpp(762).CExExchMsgIdEx::UpdateAttachmentsCRC]

JobId : <job id>

ActivityId : <activity id>



Follow the steps below to resolve the issue:

1. Identify the message in the user's mailbox using its subject line, folder hierarchy and timestamp (this information can be found in ExArchiveJBC.exe.log).

2. Move the problem message out of the mailbox.

3. Move the message back to the mailbox, under the same folder. This helps restore the missing properties.

For a detailed step-by-step resolution, please refer to Dell EMC Support Solution 473808: https://support.emc.com/kb/473808


Related:

Re: ImageProcessor throughput in CC7.5

Hi,

I tested several service setups of ImageProcessor to find one that meets our throughput requirements. The tested version was CC 7.5 P07.

Test Setup:

  • 3 Windows Server 2012 R2 VMs on server hardware (1x IAS, 1x ImageProcessor, 1x MS SQL Server), plus 1 PC
  • ImageProcessor is used to find Code39, Code128 and Datamatrix barcodes; no other functions or filters are added.
  • Batch of representative documents (400 images, DIN A4 and DIN A5, 300 dpi, 1-bit colour depth, TIFF G4).
  • Process IPP, MaxVBAHosts: 3

Tests:

  1. One instance of Image Processor running on the server VM
  2. Two instances of Image Processor running on the server VM
  3. Three instances of Image Processor running on the server VM
  4. Four instances of Image Processor running on the server VM
  5. One instance of Image Processor running on the PC
  6. Four instances of Image Processor running on the PC
  7. One instance of Image Processor on the VM and one instance of Image Processor on the PC

Results:

Test | Avg. processing time per image [ms] | Batch processing time [mm:ss]
1    | 95   | 04:02
2    | 509  | 03:17
3    | 918  | 03:12
4    | 1287 | 03:14
5    | 116  | 04:33
6    | 986  | 03:10
7    | 440  | 03:16

CPU utilization of each Image Processor instance was roughly 2% in all tests.

Conclusion:

More ImageProcessor instances on one machine decrease the throughput of each individual instance because of limited CPU resources. I think that is acceptable, although the difference in average processing time per image looks strange. The total processing time for the batch is only slightly influenced by the average processing time per image. When Image Processor was not running as a service on the PC, it could be seen that IAS did not send tasks and the module was waiting for work. I wonder why the average processing time in test 7 is so bad, because there are two machines each running only one instance; if only one of those machines is running, both process images very fast.
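To put numbers on those last two points, a quick sanity check in Python against the figures in the table (400 images per batch):

    # Compare a strictly serial estimate (400 * avg per-image time) with the
    # measured batch times from the results table above.
    tests = {1: (95, "04:02"), 2: (509, "03:17"), 7: (440, "03:16")}

    for test, (ms_per_image, batch) in tests.items():
        m, s = map(int, batch.split(":"))
        serial = 400 * ms_per_image / 1000
        print(f"test {test}: serial estimate {serial:.0f}s vs. measured {m * 60 + s}s")
    # test 1: serial estimate 38s  vs. measured 242s -> dispatch/wait time dominates
    # test 2: serial estimate 204s vs. measured 197s -> the two instances overlap
    # test 7: serial estimate 176s vs. measured 196s -> almost no overlap across machines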

BatchMaxAdressSpaceK is set to 12 GB. During the tests only one batch was being processed, and roughly 20 small batches were waiting for manual tasks. IAS used roughly 100 MB of RAM. All batches were held in memory; none were swapped to disk.

I think this could be caused by IAS, which has to deliver the Image Processor profile files. If these files are not cached by IAS, the server has to request them from the database with every request, because they are not stored in the IAS working directory. I don't know how many connections IAS opens to the database, or whether there is a connection pool for the IADB that could increase DB throughput for these requests. Monitoring the MS SQL Server indicates no utilization or connection problems.

Does anyone have an idea how to increase throughput, especially in test scenario 7? Is this a problem that could be fixed with a higher patch level?

Related: