ShareFile Migration Tool – Logs


Migration Logs:

Once the transfer is complete, you can review the migration details and any errors encountered during the migration process.

Log Files generated:

  1. SFMT [TimeStamp][FolderName].log – This log contains complete details starting from the launch of the tool.
  2. Transfer Info [TimeStamp][FolderName].log – This log contains verbose information for the transfer.
  3. Transfer [TimeStamp][FolderName].log – This log contains all the files and folders that were successfully transferred.
  4. Transfer Failure [TimeStamp][FolderName].log – This log contains a brief explanation of why a file failed to transfer.
  5. Transfer Cancelled [TimeStamp][FolderName].log – This log contains a brief explanation of why a transfer was cancelled.

NOTE: For debugging an issue, the “SFMT [TimeStamp][FolderName].log” and “Transfer Info [TimeStamp][FolderName].log” files are required.

Logs are stored at USERNAME\AppData\Roaming\Citrix\ShareFile Migration Tool\Logs

Related:

UTC offset is wrong in local time

I need a solution

Hi,

I have configured my timezone correctly as “Asia/Colombo” on my ASG. I'm sending my access logs to a syslog file for which I have configured “localtime” as the time parameter. But when I check the logs, I see that the time appears as “[30/Dec/2019:12:12:44 +0550]”. The time value is correct, but the UTC offset value of +0550 is wrong; it should be +0530. How can I change this? I'm sending these logs to an ELK stack (Kibana visualization) using a Filebeat agent. Because of the error in the UTC offset value, the ELK stack shows the corresponding log entries with a difference of 20 minutes.
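
For reference, current tzdata gives +0530 for Asia/Colombo; this can be confirmed on any Linux host with standard tools (a sketch):

TZ=Asia/Colombo date +"%Z %z"        # offset the host computes for the zone right now
zdump -v Asia/Colombo | tail -n 5    # recent transitions/offsets in the installed zone database

If these show +0530 while the ASG still stamps +0550, the appliance is likely computing the offset itself rather than taking it from a current zone database; as a workaround, the timestamp can be re-parsed with an explicit timezone on the ELK side.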


Related:

How to Enable Trace Logging for the Linux VDA

For Linux VDA version 1909 and later, please refer to article CTX261900 – How to Enable Trace Logging for the Linux VDA 1909 and Later

Follow the steps below to enable trace logging for ctxvda:

  1. Find the /etc/xdl/ctx-vda.conf file. The file is generated after you configure the Linux VDA by ctxsetup.sh.
  2. Uncomment the line and change the following setting.
From
 #Log4jConfig="/etc/xdl/log4j.xml"
To
 Log4jConfig="/etc/xdl/log4j.xml"
  3. Open the /etc/xdl/log4j.xml file, find the following content, change the level value to trace, and save the file.
<root><level value="info"/><appender-ref ref="file"/><appender-ref ref="syslog"/></root>
  4. Restart the ctxvda service.
sudo service ctxvda restart

Trace level logs will be found in /var/log/xdl/vda.log.
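
The same edits can be scripted; a minimal sketch, assuming the root logger is the only element in /etc/xdl/log4j.xml whose level is set to info (otherwise edit the file by hand):

sudo sed -i 's|^#Log4jConfig=|Log4jConfig=|' /etc/xdl/ctx-vda.conf
sudo sed -i 's|<level value="info"/>|<level value="trace"/>|' /etc/xdl/log4j.xml
sudo service ctxvda restart
tail -f /var/log/xdl/vda.log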

Each log level defines the minimum rank of messages to log:

  1. ALL - The lowest possible rank. Intended to log all.
  2. TRACE - Designates finer-grained informational events than the DEBUG level.
  3. DEBUG - Designates fine-grained informational events that are most useful to debug an application.
  4. INFO - Designates informational messages that highlight the progress of an application at coarse-grained level.
  5. WARN - Designates potentially harmful situations.
  6. ERROR - Designates error events that might still allow an application to continue to run.
  7. FATAL - Designates very severe error events that will presumably cause an application to abort.
  8. OFF - The highest possible rank. Intended to turn off logging.

The Linux VDA has been using a new log system since version 1.2. Log messages are sent directly to syslog and trace messages are sent to a new optional component – the ctxLog daemon (ctxlogd).

The setdbg utility is replaced with the new setlog utility, which offers more intuitive control over trace output.


The setlog utility and ctxlog daemon are included in the trace package. For the relevant trace and symbol packages, contact Citrix Escalation engineers.

Follow the steps below to enable trace logging for ctxhdx and all other services:

  1. Install the proper version of the trace and symbol packages.
  2. Enable the ctxlog daemon.
sudo service ctxlogd start
  3. Restart the corresponding services to enable the trace log for each service.
sudo service ctxhdx restart
sudo service ctxcdm restart      # For CDM
sudo service ctxpolicyd restart  # For policy
sudo service ctxusbsd restart    # For USB
  4. Change log levels using the setlog utility.

Logs will be found in /var/log/xdl/hdx.log.

Trace logging is configured using the setlog utility, which is packaged alongside the ctxlog daemon.

Log classes are now organized into a hierarchy, allowing settings to be inherited. You can set each log class to allow a minimum message priority, to be disabled entirely, or to inherit settings from its parent. The flags to control what metadata is printed with trace messages have returned.

The setlog utility allows you to configure the path that trace output will be written to and to configure log rollover based on a threshold file size. It also now has basic command line options so that you can control trace flags and levels over SSH or in any other circumstances where a graphical environment is not available. For more information, type setlog help.

For the log classes required, contact Citrix technical support.

Related:

How to Enable Trace Logging for the Linux VDA 1909 and Later

To enable trace logging for ctxjproxy

Follow the steps below to enable trace logging for ctxjproxy:

1. Find the /etc/xdl/ctx-jproxy.conf file. The file is generated after you configure the Linux VDA by ctxsetup.sh.

2. Uncomment the line and change the following setting.

From
 #Log4jConfig="/etc/xdl/log4j.xml"
To
 Log4jConfig="/etc/xdl/log4j.xml"

3. Open the /etc/xdl/log4j.xml file, find the following content, change the level value to trace, and save the file.

<root><level value="info"/><appender-ref ref="file"/><appender-ref ref="syslog"/></root>

4. Restart the ctxjproxy service.

sudo service ctxjproxy restart

Trace level logs will be found in /var/log/xdl/jproxy.log.
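
These edits can also be scripted; a minimal sketch, assuming the root logger is the only element in /etc/xdl/log4j.xml whose level is set to info (otherwise edit the file by hand):

sudo sed -i 's|^#Log4jConfig=|Log4jConfig=|' /etc/xdl/ctx-jproxy.conf
sudo sed -i 's|<level value="info"/>|<level value="trace"/>|' /etc/xdl/log4j.xml
sudo service ctxjproxy restart
tail -f /var/log/xdl/jproxy.log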

Each log level defines the minimum rank of messages to log:

1. ALL - The lowest possible rank. Intended to log all.
2. TRACE - Designates finer-grained informational events than the DEBUG level.
3. DEBUG - Designates fine-grained informational events that are most useful to debug an application.
4. INFO - Designates informational messages that highlight the progress of an application at coarse-grained level.
5. WARN - Designates potentially harmful situations.
6. ERROR - Designates error events that might still allow an application to continue to run.
7. FATAL - Designates very severe error events that will presumably cause an application to abort.
8. OFF - The highest possible rank.

To enable trace logging for ctxvda

1. The trace file of the .NET ctxvda service is under /var/log/xdl/ with the file name vda.*.log. The file that configures tracing is /etc/xdl/brokeragent.conf; open this file to change the logging level.

2. The trace modules in ctxvda are listed as follows:

 <add key="TraceProvider.BrokerAgent.CategoryFilter" value="Error" />
 <add key="TraceProvider.BrokerAgentAoTracing.CategoryFilter" value="Error" />
 <add key="TraceProvider.BrokerAgentEvents.CategoryFilter" value="Error" />
 <add key="TraceProvider.BrokerAgentPluginProxy.CategoryFilter" value="Error" />
 <add key="TraceProvider.CBP.CategoryFilter" value="Error" />
 <add key="TraceProvider.LaunchStore.CategoryFilter" value="Error" />
 <add key="TraceProvider.LoadBalancing.CategoryFilter" value="None" />
 <add key="TraceProvider.MonitorManager.CategoryFilter" value="Error" />
 <add key="TraceProvider.NTEvent.CategoryFilter" value="Error" />
 <add key="TraceProvider.PerfCounter.CategoryFilter" value="Error" />
 <add key="TraceProvider.Plugin.CategoryFilter" value="Error" />
 <add key="TraceProvider.Registry.CategoryFilter" value="Error" />
 <add key="TraceProvider.SessionMonitoring.CategoryFilter" value="Error" />
 <add key="TraceProvider.SessionParameters.CategoryFilter" value="Error" />
 <add key="TraceProvider.ShortcutEnumerator.CategoryFilter" value="Error" />

Trace levels for each tracing module include Error, Warning, Information, and EntryExit. By default, the trace level is Error, which means only error messages are written to the trace file. Modifying the value parameter changes the tracing level for that module.
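
For example, to collect more detail from just the broker agent module, its filter could be raised from Error to Information (an illustrative edit; which module and level to use depends on the issue under investigation):

 <add key="TraceProvider.BrokerAgent.CategoryFilter" value="Information" />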

3. After updating the configuration, the ctxvda service should be restarted for the changes to take effect.

sudo service ctxvda restart

To enable trace logging for ctxhdx and all other services

The Linux VDA has been using a new log system since version 1.2. Log messages are sent directly to syslog and trace messages are sent to a new optional component – the ctxLog daemon (ctxlogd).

The setdbg utility is replaced with the new setlog utility, which offers more intuitive control over trace output.


The setlog utility and ctxlog daemon are included in the trace package. For the relevant trace and symbol packages, contact Citrix Escalation engineers.

Follow the steps below to enable trace logging for ctxhdx and all other services:

1. Install the proper version of the trace and symbol packages.

2. Enable the ctxlog daemon.

sudo service ctxlogd start

3. Restart the corresponding services to enable the trace log for each service.

sudo service ctxhdx restart
sudo service ctxcdm restart      # For CDM
sudo service ctxpolicyd restart  # For policy
sudo service ctxusbsd restart    # For USB

4. Change log levels using the setlog utility.

Logs will be found in /var/log/xdl/hdx.log.

Trace logging is configured using the setlog utility, which is packaged alongside the ctxlog daemon.

Log classes are now organized into a hierarchy, allowing settings to be inherited. You can set each log class to allow a minimum message priority, to be disabled entirely, or to inherit settings from its parent. The flags to control what metadata is printed with trace messages have returned.

The setlog utility allows you to configure the path that trace output will be written to and to configure log rollover based on a threshold file size. It also now has basic command line options so that you can control trace flags and levels over SSH or in any other circumstances where a graphical environment is not available. For more information, type setlog help.

For the log classes required, contact Citrix technical support.

Related:

Web browsing issue

I need a solution

Hello,

I'm asking for your help regarding a web browsing issue. The PacketShaper web page doesn't open in any web browser.

However, I can connect through SSH and send some commands.

See below:

PacketShaper# version

  Version:             PacketShaper 11.6.4.2 build 204779

  Product:             PacketShaper S200

  Serial Number:       0817330127

  Memory:              5.8GB RAM, 4GB System Disk total, 3.2GB System Disk available

  Copyright (c) 1996-2015, Blue Coat Systems, Inc. All rights reserved.

PacketShaper# uptime                       

System up for 530 days 1 hours 46 mins 34 secs

PacketShaper# event log status

Event log status:       ok

Log file directory: /opt/bluecoat/ps/9.258/log

Maximum file size: 1152000 bytes (4000 event records)

Maximum number of archived log files: 4

Number of events in current log file /opt/bluecoat/ps/9.258/log/events:  0

I also tried to download the log files, but I can't extract them with FileZilla or WinSCP.

Do you have an idea about this issue?

Thank you in advance.

Regards.


Related:

Symantec Message Tracking logs

I do not need a solution (just sharing information)

Dear Team,

We have a mandate to maintain message tracking logs for 2 years. However, we are not able to do so, as message tracking data in the SMG is stored for only 100 days.

When we tried to send the logs to a remote syslog server, we only got limited information. We have opened a support case with Symantec (Case ID: 28461169), but they have confirmed that the SMG has a limitation in sending all log fields to the remote syslog server.

We tried to perform a malquery from the command prompt to extract the logs for the last three months, but the generated output file is more than 1.5 GB, which I have difficulty opening with any word processor.
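
As a side note, a file of that size is much easier to inspect with standard command-line tools than with a word processor; for example (a sketch, assuming the export is a plain-text file named malquery_output.txt):

less malquery_output.txt                                     # page through without loading the whole file
grep "user@example.com" malquery_output.txt > matches.txt    # extract lines for one address of interest
split -l 500000 malquery_output.txt malquery_part_           # break the file into smaller chunks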

Can you please provide one of the solutions mentioned below ASAP?

  • Provide a mechanism to store the logs on the appliance for 2 years, similar to messaging gateway appliances from other vendors.
  • Send all the log fields to the syslog server.

Regards,

Benny John


Related:

App Layering: Unable to create an image.

From the ELM logs, we see the following error in the connector log (unidesk-vsphere-connector.log.json.log):

[2019-01-28T08:43:51.360Z] INFO: DeployVm/11730 on localhost.localdomain: Looking up environmentBrowser for host.parent id: domain-s3241

{"obj":{"attributes":{"type":"ComputeResource"},"$value":"domain-s3241"},"propSet":[{"name":"resourcePool","val":{"attributes":{"type":"ResourcePool","xsi:type":"ManagedObjectReference"},"$value":"resgroup-3242"}}]}

[2019-01-28T08:43:51.815Z] ERROR: vsphere-connector/11730 on localhost.localdomain: [bd85cc80-2302-11e9-824d-f35fab16bc43] -> Operation 'vsphere:DeployVm' has failed: The vsphere:DeployVm operation encountered an unexpected error. Message = Cannot read property '$value' of undefined.

The vsphere:DeployVm operation is the step in which we try to create a new VM.

The vSphere connector does not have a template VM configured, so the ELM is trying to create a plain Windows OS VM and gets an error while creating it.

It might be an issue with permissions.

Related:

Introduction to IBM DB2 archive logging leveraged by the Avamar DB2 plugin

Article Number: 504655 Article Version: 3 Article Type: How To



Avamar Plug-in for IBM DB2 7.4.101-58, Avamar Plug-in for IBM DB2 7.3.101-125, Avamar Plug-in for IBM DB2 7.5.100-183

The intention of this article is to introduce the DB2 archive logging function, which is leveraged by Avamar DB2 backups.

Archive logging is used specifically for rollforward recovery. Archived logs are log files that are copied from the active log path to another location. You can use one or both of the logarchmeth1 or logarchmeth2 database configuration parameters to allow you or the database manager to manage the log archiving process.

Active and archived database logs in rollforward recovery. There can be more than one active log in the case of a long-running transaction.


Taking online backups is only supported if the database is configured for archive logging. During an online backup operation, all activities against the database are logged. When an Avamar online backup is restored, the logs must be rolled forward at least to the point in time at which the backup operation completed. For this to happen, the logs must be archived and made available when the database is restored. After an Avamar online backup is complete, the database manager forces the currently active log to be closed, and as a result, it is archived. This ensures that the Avamar online backup has a complete set of archived logs available for recovery.

The logarchmeth1 and logarchmeth2 database configuration parameters allow you to change where archived logs are stored. The logarchmeth2 parameter enables you to archive log files to a second separate location. The newlogpath parameter affects where active logs are stored.
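
For illustration, archive logging to a disk location might be enabled like this (a sketch; MYDB and the paths are placeholders, and switching a database from circular to archive logging typically puts it in backup pending state until an offline backup is taken):

db2 update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archive_logs/
db2 backup db MYDB to /db2/backups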

Unless you specify that you want to manage the active logs (by using the LOGRETAIN value), the database manager removes log files from the active log path after these files are archived and they are no longer needed for crash recovery. If you enable infinite logging, additional space is required for more active log files, so the database server renames the log files after it archives them. The database manager retains up to 8 extra log files in the active log path for renaming purposes.
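
The current settings can be checked from the DB2 command line (a sketch; MYDB is a placeholder):

db2 get db cfg for MYDB | grep -iE 'logarchmeth|logretain|newlogpath'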

Related:

Provisioning Services: PVS Servers May Stop Responding Or Target Devices May Freeze During Startup Due To Large Size Of MS SQL Transaction Logs

Back up the XenApp/XenDesktop Site and PVS databases and the transaction log file to trigger transaction log auto-truncation.

The transaction log should be backed up on a regular basis to avoid auto-growth operations and a full transaction log file.

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/back-up-a-transaction-log-sql-server?view=sql-server-2017
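
For illustration, a transaction log backup can be scripted with sqlcmd (a sketch; the server\instance, database name, and backup path are placeholders and must match your environment):

sqlcmd -S SQLSERVER\CITRIX -Q "BACKUP LOG [CitrixProvisioning] TO DISK = N'D:\Backups\CitrixProvisioning_log.trn'"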

ADDITIONAL INFORMATION

Ideally, the transaction log is truncated automatically after the following events:

  • Under the simple recovery model, unless some factor is delaying log truncation, an automatic checkpoint truncates the unused section of the transaction log. Under simple recovery there is little chance of the transaction log growing – only in specific situations when there is a long-running transaction or a transaction that creates many changes.
  • By contrast, under the full and bulk-logged recovery models, once a log backup chain has been established, automatic checkpoints do not cause log truncation. Under the full or bulk-logged recovery model, if a checkpoint has occurred since the previous backup, truncation occurs after a log backup (unless it is a copy-only log backup). There is no automated process of transaction log truncation; transaction log backups must be made regularly to mark unused space as available for overwriting. The bulk-logged recovery model reduces transaction log space usage by using minimal logging for most bulk operations.
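
To see which recovery model each database is using (a sketch; the server\instance name is a placeholder):

sqlcmd -S SQLSERVER\CITRIX -Q "SELECT name, recovery_model_desc FROM sys.databases"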

The transaction log file size may not decrease even if the transaction log has been truncated automatically.

Log truncation frees space in the log file for reuse by the transaction log. Log truncation is essential to keep the log from filling. Log truncation deletes inactive virtual log files from the logical transaction log of a SQL Server database, freeing space in the logical log for reuse by the physical transaction log. If a transaction log were never truncated, it would eventually fill all the disk space that is allocated to its physical log files.

It is also recommended to keep the transaction log file on a separate drive from the database data files, as placing both data and log files on the same drive can result in poor database performance.

Related: