ShareFile Migration Tool – Logs


Migration Logs:

Once the transfer is complete, you can review the migration details and any errors encountered during the migration process.

Log Files generated:

  1. SFMT [TimeStamp][FolderName].log – Contains complete details starting from the launch of the tool.
  2. Transfer Info [TimeStamp][FolderName].log – Contains verbose information for the transfer.
  3. Transfer [TimeStamp][FolderName].log – Contains all the files and folders that were successfully transferred.
  4. Transfer Failure [TimeStamp][FolderName].log – Contains a brief explanation of why a file failed to transfer.
  5. Transfer Cancelled [TimeStamp][FolderName].log – Contains a brief explanation of why a transfer was cancelled.

NOTE: For debugging an issue, both "SFMT [TimeStamp][FolderName].log" and "Transfer Info [TimeStamp][FolderName].log" are required.

Logs are stored at USERNAME\AppData\Roaming\Citrix\ShareFile\Migration Tool\Logs
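
To grab the two logs needed for debugging, the folder can be opened straight from a command prompt; a minimal sketch, assuming the path above resolves under %APPDATA% for the user who ran the tool:

    rem Open the ShareFile Migration Tool log folder (location assumed from above)
    explorer "%APPDATA%\Citrix\ShareFile\Migration Tool\Logs"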

Related:

UTC offset is wrong in local time

I need a solution

Hi,

I have configured my timezone correctly as "Asia/Colombo" on my ASG. I'm sending my access logs to a syslog file, with "localtime" configured as the time parameter. But when I check the logs, the time appears as "[30/Dec/2019:12:12:44 +0550]". The time value is correct, but the UTC offset of +0550 is wrong; it should be +0530. How can I change this? I'm sending these logs to an ELK stack (Kibana visualization) using a Filebeat agent, and because of the erroneous UTC offset, the ELK stack shows the corresponding log entries with a difference of 20 minutes.
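
One way to narrow this down is to check what UTC offset the zone database on the relevant host actually computes; a minimal sketch for a Unix host (where to run it is an assumption, e.g. the syslog receiver or the ELK host, since the ASG itself may not expose a shell):

    # Print the offset tzdata currently computes for Asia/Colombo
    # (which host stamps the log line is an assumption)
    TZ=Asia/Colombo date +%z    # expected output: +0530

If this prints +0530 everywhere, the stale offset is being produced by whichever component formats the timestamp rather than by the zone database.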


Related:

Web browsing issue

I need a solution

Hello,

I'm asking for your help regarding a web browsing issue. The PacketShaper web page doesn't open in any web browser.

However, I can connect through SSH and send some commands.

See below:

PacketShaper# version

  Version:             PacketShaper 11.6.4.2 build 204779

  Product:             PacketShaper S200

  Serial Number:       0817330127

  Memory:              5.8GB RAM, 4GB System Disk total, 3.2GB System Disk available

  Copyright (c) 1996-2015, Blue Coat Systems, Inc. All rights reserved.

PacketShaper# uptime                       

System up for 530 days 1 hours 46 mins 34 secs

PacketShaper# event log status

Event log status:       ok

Log file directory: /opt/bluecoat/ps/9.258/log

Maximum file size: 1152000 bytes (4000 event records)

Maximum number of archived log files: 4

Number of events in current log file /opt/bluecoat/ps/9.258/log/events:  0

I also tried to download the log files, but I can't extract them with FileZilla or WinSCP.

Do you have any idea about this issue?

Thank you in advance.

Regards.


Related:

Symantec Message Tracking logs

I do not need a solution (just sharing information)

Dear Team,

We have a mandate to maintain message tracking logs for 2 years. However, we are not able to do so, as message tracking data in the SMG is stored for only 100 days.

When we tried to send the logs to a remote syslog server, we only got limited information. We have opened a support case with Symantec (Case ID: 28461169), but they have confirmed that the SMG has a limitation in sending all log fields to the remote syslog server.

We tried to perform a malquery from the command prompt to extract the logs for the last three months, but the generated output file is more than 1.5 GB, which I have difficulty opening with any word processor.
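
A file that size is easier to handle with standard command-line tools than with an editor; a minimal sketch for a Unix host, where malquery_output.txt is a hypothetical name for the exported file:

    # Split the export into 500,000-line chunks that open comfortably
    # (malquery_output.txt is a hypothetical filename)
    split -l 500000 malquery_output.txt malquery_part_
    # Or pull out only the lines for one recipient without opening the file
    grep 'user@example.com' malquery_output.txt > matches.txt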

Can you please provide one of the solutions mentioned below ASAP?

  • Provide a mechanism to store the logs on the appliance for 2 years, similar to other messaging gateway appliances from other vendors.
  • Send all the log fields to the syslog server.

Regards,

Benny John


Related:

App Layering: Unable to create an image.

From the ELM logs, we see the following error in the connector log (unidesk-vsphere-connector.log.json.log):

[2019-01-28T08:43:51.360Z] INFO: DeployVm/11730 on localhost.localdomain: Looking up environmentBrowser for host.parent id: domain-s3241

{"obj":{"attributes":{"type":"ComputeResource"},"$value":"domain-s3241"},"propSet":[{"name":"resourcePool","val":{"attributes":{"type":"ResourcePool","xsi:type":"ManagedObjectReference"},"$value":"resgroup-3242"}}]}

[2019-01-28T08:43:51.815Z] ERROR: vsphere-connector/11730 on localhost.localdomain: [bd85cc80-2302-11e9-824d-f35fab16bc43] -> Operation 'vsphere:DeployVm' has failed: The vsphere:DeployVm operation encountered an unexpected error. Message = Cannot read property '$value' of undefined.

DeployVm is the step in which we are trying to create a new VM.

The vSphere connector does not have a template VM configured, so the ELM tries to create a basic Windows OS VM and gets an error while creating it.

It might be an issue with permissions.

Related:

Introduction of IBM DB2 Archive logging leveraged by Avamar DB2 plugin

Article Number: 504655 Article Version: 3 Article Type: How To



Avamar Plug-in for IBM DB2 7.4.101-58,Avamar Plug-in for IBM DB2 7.3.101-125,Avamar Plug-in for IBM DB2 7.5.100-183

This article introduces the DB2 archive logging function, which is leveraged by Avamar DB2 backups.

Archive logging is used specifically for rollforward recovery. Archived logs are log files that are copied from the active log path to another location. You can use one or both of the logarchmeth1 or logarchmeth2 database configuration parameters to allow you or the database manager to manage the log archiving process.

[Figure: Active and archived database logs in rollforward recovery. There can be more than one active log in the case of a long-running transaction.]

Taking online backups is only supported if the database is configured for archive logging. During an online backup operation, all activities against the database are logged. When an Avamar online backup is restored, the logs must be rolled forward at least to the point in time at which the backup operation completed. For this to happen, the logs must be archived and made available when the database is restored. After an Avamar online backup completes, the database manager forces the currently active log to be closed, and as a result it is archived. This ensures that the Avamar online backup has a complete set of archived logs available for recovery.
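
As an illustration of that restore path, the rollforward itself is a single command in the DB2 command line processor; a minimal sketch, where SAMPLE is an example database name:

    # Roll archived logs forward after restoring the online backup
    # (SAMPLE is an example database name)
    db2 rollforward db SAMPLE to end of backup and complete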

The logarchmeth1 and logarchmeth2 database configuration parameters allow you to change where archived logs are stored. The logarchmeth2 parameter enables you to archive log files to a second separate location. The newlogpath parameter affects where active logs are stored.
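
Enabling archive logging comes down to setting these parameters; a minimal sketch, again with SAMPLE and the archive paths as examples:

    # Archive logs to disk; LOGARCHMETH2 yields a second, independent copy
    # (database name and paths are examples)
    db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archive1
    db2 update db cfg for SAMPLE using LOGARCHMETH2 DISK:/db2/archive2
    # Moving from circular to archive logging puts the database into
    # BACKUP PENDING state, so take a full backup to clear it
    db2 backup db SAMPLE to /db2/backup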

Unless you specify that you want to manage the active logs (by using the LOGRETAIN value), the database manager removes log files from the active log path after these files are archived and they are no longer needed for crash recovery. If you enable infinite logging, additional space is required for more active log files, so the database server renames the log files after it archives them. The database manager retains up to 8 extra log files in the active log path for renaming purposes.

Related:

Provisioning Services: PVS Servers May Stop Responding Or Target Devices May Freeze During Startup Due To Large Size Of MS SQL Transaction Logs

Back up the XenApp/XenDesktop Site and PVS databases and the transaction log file to trigger automatic transaction log truncation.

The transaction log should be backed up on a regular basis to avoid auto-growth operations and a full transaction log file.

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/back-up-a-transaction-log-sql-server?view=sql-server-2017
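
For reference, a manual log backup that triggers truncation is a one-liner; a minimal sketch, where the server, database name, and backup path are all examples:

    rem Server, database name, and path below are examples
    sqlcmd -S localhost -Q "BACKUP LOG [ProvisioningServices] TO DISK = N'D:\Backups\ProvisioningServices_log.trn'"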

ADDITIONAL INFORMATION

Ideally, the transaction log is truncated automatically after the following events:

  • Under the simple recovery model, unless some factor is delaying log truncation, an automatic checkpoint truncates the unused section of the transaction log. Under simple recovery there is little chance of the transaction log growing; it happens only in specific situations, such as a long-running transaction or a transaction that creates many changes.
  • By contrast, under the full and bulk-logged recovery models, once a log backup chain has been established, automatic checkpoints do not cause log truncation. Under the full or bulk-logged recovery model, if a checkpoint has occurred since the previous backup, truncation occurs after a log backup (unless it is a copy-only log backup). There is no automated process of transaction log truncation; transaction log backups must be taken regularly to mark unused space as available for overwriting. The bulk-logged recovery model reduces transaction log space usage by using minimal logging for most bulk operations. (A quick way to check the current recovery model follows this list.)
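
To see which recovery model each database is using, and therefore which of the two cases above applies, sys.databases can be queried; a minimal sketch (the server name is an example):

    rem Lists every database with its current recovery model (server name is an example)
    sqlcmd -S localhost -Q "SELECT name, recovery_model_desc FROM sys.databases"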

The transaction log file size may not decrease even if the transaction log has been truncated automatically.
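
If the physical file must also be reduced after truncation has freed internal space, a one-off shrink can be issued; a minimal sketch, where the database name, logical log file name, and the 1024 MB target size are examples:

    rem Logical log name and target size (MB) are examples
    sqlcmd -S localhost -d ProvisioningServices -Q "DBCC SHRINKFILE (ProvisioningServices_log, 1024)"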

Log truncation frees space in the log file for reuse by the transaction log. Log truncation is essential to keep the log from filling. Log truncation deletes inactive virtual log files from the logical transaction log of a SQL Server database, freeing space in the logical log for reuse by the physical transaction log. If a transaction log were never truncated, it would eventually fill all the disk space that is allocated to its physical log files.

It is also recommended to keep the transaction log file on a separate drive from the database data files, as placing both data and log files on the same drive can result in poor database performance.

Related:

How to Enable Logging for Desktop Director

The following are two methods for collecting logs for Desktop Director:

CDF Tracing

When using CDF tracing, select DirectorService.


Specify the CDF Control log collection settings from Tools > Options.

For information about installing and using CDF Control, see CTX111961 – CDFControl

Application Settings in IIS

Complete the following procedure to enable logging for Desktop Director through Application Settings in IIS:

  1. Create a folder for logs (for example, c:\logs) or use an existing folder.
  2. Create a text file (for example, c:\logs\desktopdirector.txt) and add appropriate permissions.
  3. ​Open IIS Manager.
  4. In the Connections panel, access Default Web Site > DesktopDirector.
  5. Double-click Application Settings.
  6. Ensure Log.File.Overwrite is set to 1.
  7. Select the Log.FileName application setting and click Edit. In the Value field, type the path and filename of the log file.
  8. Change the value to 1 for the following logs:

Log.IncludeLocation

Log.LogToCdf

Log.LogToConsole

Log.LogToDebug

Log.LogToFile



9. Restart IIS.

Note: When you access Desktop Director, log entries should be written. If log entries are not available, verify the log file path: ensure that the log file name is complete, that you have edited the filename and extension in Application Settings, and that the .txt file was created with the correct permissions. Ensure that IIS can write to the folder. You might be required to add the local Server\IIS_IUSRS group with write permission to the folder, as shown in the following screen shot:

[Screenshot: granting the IIS_IUSRS group write permission on the log folder]

10. To stop the IIS service after collecting logs, right-click the server in the IIS console and click Stop.

11. Copy the log file, and start the web server again. To do so, right-click the server in the IIS console and click Start.
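
For repeat collections, the same Application Settings changes can be scripted rather than clicked through; a minimal sketch using the WebAdministration PowerShell module, assuming the site path from step 4 and the example log file from step 2:

    Import-Module WebAdministration
    # Site/application path and log file are the examples from steps 2 and 4
    $app = 'IIS:\Sites\Default Web Site\DesktopDirector'
    Set-WebConfigurationProperty -PSPath $app -Filter "appSettings/add[@key='Log.FileName']" -Name value -Value 'c:\logs\desktopdirector.txt'
    # Enable each logging switch listed in step 8
    foreach ($key in 'Log.IncludeLocation','Log.LogToCdf','Log.LogToConsole','Log.LogToDebug','Log.LogToFile') {
        Set-WebConfigurationProperty -PSPath $app -Filter "appSettings/add[@key='$key']" -Name value -Value '1'
    }
    iisreset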

Related:

7021106: How to enable the client log

This document (7021106) is provided subject to the disclaimer at the end of this document.

Environment

NetIQ Privileged Account Manager

Situation

How to enable the client log for forked processes

Logs generated by the forked processes will not be present in the standard unifid.log

The client log can be enabled to capture events for these forked processes

How to capture logs for client connections not captured in the unifid.log
Example of forked client processes: sshrelay, rdprelay.

Resolution

  1. Edit /opt/netiq/npum/config/unifi.xml and add the following line as a child of the <Unifi> tag:

    <ClientLog level="trace" file="logs/client.log" max_size="10"/>

    Note: Restarting the NPUM service is optional after adding this line.

  2. Try the client-type connection or session once more so that the log is generated and capture begins.

    Please find this log in the following location:

    /opt/netiq/npum/logs/client.log

    C:\Program Files\Netiq\npum\logs\client.log

    Note: Any new client sessions that occur on this server will start being logged here.

  3. When finished, please disable the client log so that unnecessary logging does not consume disk space.

    Either remove the line added in Step 1 above or encapsulate it within an XML comment:

    <!-- <ClientLog level="trace" file="logs/client.log" max_size="10"/> -->

    Then restart the PAM service for the settings to be picked up.
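
To confirm that capture is working, the new log can be watched while a session is attempted; a minimal sketch for the Unix location above:

    # Follow the client log while an sshrelay/rdprelay session is opened
    tail -f /opt/netiq/npum/logs/client.log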

Additional Information

The following is an example of where this client log tag can be placed:
<Unifi db_sync="1" service_name="npum">

<ClientLog level="trace" file="logs/client.log" max_size="10"/>

<Worker min="5" smax="20" hmax="60" ttl="60" stacksize="1048576" guardsize="0"/>

<Handler base="service/local">

<Engine type="dso" lib="spf_dso"/>

<Engine type="perl" lib="spf_perl"/>

</Handler>

<SSL b.changed="1" i.reneg_dos_protection="0"/>

<Log rollover="D1" i.max_size="250" level="debug" file="logs/unifid.log">

<Script/>

</Log>

</Unifi>

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

  • No Related Posts