Assign value without running on computer

I have two Run Script tasks that use Command Script as the script type. Both use unmanaged tokens, and I feed the value from one task to another.

The command I use for one of them is just:

SET ListOfUsers="%!Users!%"
IF DEFINED ListOfUsers (
	ECHO %ListOfUsers%
)

That task runs on the computer, but it doesn’t need to. Is there another way to put in my info without having the task run on the computer it is being pushed to? That info is fed to another task that does have to run on the target computer. Thanks.

Related:

Possible to run a Job or Task Once Only?

Hello all,

I have two command-line scripts that I would like to run on all of our systems that have an agent. However, not all of the systems are online at one time. Is it possible to set up a job that runs once on each system as soon as it is detected online? That way they all get the commands, but the job is not constantly re-running on systems that have already run it. Any tips or help is greatly appreciated.


How to Configure Log File Rotation on NetScaler

The newsyslog utility included with the NetScaler firmware archives log files, if necessary, and rotates the system logs so that the current log is empty when rotation occurs. The system crontab runs this utility every hour, and it reads a configuration file that specifies which files to rotate and under what conditions. The archived files can be compressed if required.

The existing configuration is located in /etc/newsyslog.conf. However, because this file resides in the memory filesystem, the administrator must save any modifications to /nsconfig/newsyslog.conf so that the configuration survives a NetScaler restart.

The entries contained in this file have the following format:

logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]

Note: Fields within square brackets are optional and can be omitted.

Each line in the file represents a log file that should be handled by the newsyslog utility and the conditions under which rotation should occur.

For example, the following is the ns.log entry taken from the newsyslog.conf file. In this entry, the size field indicates that ns.log is rotated once it reaches 100 kilobytes, and the count field indicates that 25 archived ns.log files are kept. A size of 100K and a count of 25 are the default values.

Note that the when field is configured with an asterisk ( * ), meaning that the ns.log file is not rotated based on time. Every hour, a crontab job runs the newsyslog utility, which checks whether the size of ns.log is greater than or equal to the size configured in this file. In this example, if it is at least 100K, the file is rotated.

root@ns# cat /etc/newsyslog.conf
# Netscaler newsyslog.conf
# This file is present in the memory filesystem by default, and any changes
# to this file will be lost following a reboot. If changes to this file
# require persistence between reboots, copy this file to the /nsconfig
# directory and make the required changes to that file.
#
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/cron     600 3  100 * Z
/var/log/amd.log  644 7  100 * Z
/var/log/auth.log 600 7  100 * Z
/var/log/ns.log   600 25 100 * Z

The size field can be changed to modify the minimum size of the ns.log file or the when field can be changed to enable rotating the ns.log file based on a certain time.

The daily, weekly, and/or monthly specification is given as: [Dhh], [Dhh [Ww]] and [Dhh [Mdd]], respectively. The time-of-day fields, which are optional, default to midnight. The ranges and meanings for these specifications are:

hh  hour of the day, range 0 … 23

ww  day of the week, range 0 … 6, 0 = Sunday

dd  day of the month, range 1 … 31, or the letter L or l to specify the last day of the month.

Examples

Here are some examples with explanations for the logs that are rotated by default:

/var/log/auth.log 600 7 100 * Z

The authentication log is rotated when the file reaches 100K; the last 7 copies of auth.log are archived and compressed with gzip (Z flag), and the resulting archives are assigned the permissions -rw-------.

/var/log/all.log 600 7 * @T00 Z

The catch-all log is rotated at midnight every night (@T00), keeping 7 compressed copies. With the mode of 600 shown here, the resulting archives are assigned the permissions -rw-------.

/var/log/weekly.log 640 5 * $W6D0 Z

The weekly log is rotated at midnight every Saturday ($W6D0, with day 0 being Sunday), keeping 5 compressed copies. The resulting archives are assigned the permissions -rw-r-----.

Common Rotation Patterns

  • D0: rotate every night at midnight

  • D23: rotate every day at 23:00

  • W0D23: rotate every week on Sunday at 23:00

  • W5: rotate every week on Friday at midnight

  • MLD6: rotate at the last day of every month at 6:00

  • M5: rotate on every 5th day of month at midnight

If an interval and a time specification are both given, then both conditions must be met. That is, the file must be as old as or older than the specified interval and the current time must match the time specification.

The minimum file size can be controlled, but there is no limit on how large the file can grow before the newsyslog utility next runs in its hourly slot.
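
As a hypothetical illustration (this entry is not in the default file), ns.log could instead be rotated every Saturday at midnight regardless of size, keeping 25 compressed archives:

```
/var/log/ns.log 600 25 * $W6D0 Z
```

As noted earlier, the change must be made in /nsconfig/newsyslog.conf for it to survive a restart.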

Debugging

To debug the behavior of the newsyslog utility, add the verbose flag.

root@dj_ns# newsyslog -v
/var/log/cron <3Z>: size (Kb): 31 [100] --> skipping
/var/log/amd.log <7Z>: does not exist, skipped.
/var/log/auth.log <7Z>: size (Kb): 2 [100] --> skipping
/var/log/kerberos.log <7Z>: does not exist, skipped.
/var/log/lpd-errs <7Z>: size (Kb): 0 [100] --> skipping
/var/log/maillog <7Z>: --> will trim at Tue Mar 24 00:00:00 2009
/var/log/sendmail.st <10>: age (hr): 0 [168] --> skipping
/var/log/messages <5Z>: size (Kb): 7 [100] --> skipping
/var/log/all.log <7Z>: --> will trim at Tue Mar 24 00:00:00 2009
/var/log/slip.log <3Z>: size (Kb): 0 [100] --> skipping
/var/log/ppp.log <3Z>: does not exist, skipped.
/var/log/security <10Z>: size (Kb): 0 [100] --> skipping
/var/log/wtmp <3>: --> will trim at Wed Apr 1 04:00:00 2009
/var/log/daily.log <7Z>: does not exist, skipped.
/var/log/weekly.log <5Z>: does not exist, skipped.
/var/log/monthly.log <12Z>: does not exist, skipped.
/var/log/console.log <5Z>: does not exist, skipped.
/var/log/ns.log <5Z>: size (Kb): 18 [100] --> skipping
/var/log/nsvpn.log <5Z>: size (Kb): 0 [100] --> skipping
/var/log/httperror.log <5Z>: size (Kb): 1 [100] --> skipping
/var/log/httpaccess.log <5Z>: size (Kb): 1 [100] --> skipping
root@dj_ns#

Related:

OneFS Job Exclusion Sets

In the latest of this series of job engine articles, we’ll take a closer look at OneFS exclusion sets. These job execution classes define which jobs can run simultaneously within OneFS. A job is not required to be part of any exclusion set, and jobs may also belong to multiple exclusion sets.

The Job Engine only allows up to three jobs to run simultaneously. If a fourth job with a higher priority is started, the lowest-priority of the currently executing jobs is paused. For example:

# isi job start fsanalyze

Started job [583]

# isi job status

The job engine is running.

Running and queued jobs:

ID Type State Impact Pri Phase Running Time

———————————————————

578 SmartPools Running Low 6 1/2 11s

581 Collect Running Low 4 1/3 16s

583 FSAnalyze Running Low 5 1/10 1s

———————————————————

Total: 3

In this case, there are three jobs running: SmartPools with a priority of 6, Collect with priority 4, and FSAnalyze with priority 5.

Next, a deduplication job is started, with a priority value of 4:

# isi job start dedupe

Started job [584]

Looking at the cluster’s job status shows that the SmartPools job has been put into a waiting state (paused) because of its relative priority. A value of ‘1’ indicates the highest priority job level that OneFS supports, with ‘10’ being the lowest.

# isi job status

The job engine is running.

Running and queued jobs:

ID Type State Impact Pri Phase Running Time

———————————————————

578 SmartPools Waiting Low 6 1/2 11s

581 Collect Running Low 4 1/3 1m 4s

583 FSAnalyze Running Low 5 9/10 43s

584 Dedupe Running Low 4 1/1 –

———————————————————

Total: 4



Once the FSAnalyze job has completed, the SmartPools job is automatically restarted again:



# isi job status

The job engine is running.

Running and queued jobs:

ID Type State Impact Pri Phase Running Time

———————————————————

578 SmartPools Running Low 6 1/2 23s

581 Collect Running Low 4 1/3 5m 9s

584 Dedupe Running Low 4 1/1 1m 2s

———————————————————

Total: 3

Let’s look at this in a bit more detail. The Job Engine’s concurrent job execution is governed by the following criteria:

  • Job Priority
  • Exclusion sets – jobs which cannot run together (e.g., FlexProtect and AutoBalance)
  • Cluster health – most jobs cannot run when the cluster is in a degraded state.

There are two exclusion sets that jobs can be part of:

  • Marking Exclusion Set
  • Restriping Exclusion Set



Here are the basic concurrent job combinations that OneFS supports:

  • 1 Restripe Job + 1 Mark Job + 1 Other Job
  • 1 Restripe Job + 2 Other Jobs
  • 1 Mark Job + 2 Other Jobs
  • 1 Mark and Restripe Job + 2 Other Jobs
  • 3 Other Jobs
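
The combination rules above can be modeled with a short sketch. This is an illustrative model only, not OneFS code; the set memberships are taken from the tables in this article, and the function name is hypothetical:

```python
# Hypothetical model of the OneFS concurrent-job combination rules:
# at most three jobs run at once, and at most one job from each
# exclusion set (restripe, mark) may run at any given time.
# MultiScan belongs to both sets, so it counts against both limits.

RESTRIPE = {"AutoBalance", "AutoBalanceLin", "FlexProtect", "FlexProtectLin",
            "MediaScan", "MultiScan", "SetProtectPlus", "ShadowStoreProtect",
            "SmartPools", "Upgrade"}
MARK = {"Collect", "IntegrityScan", "MultiScan"}

def combination_allowed(jobs):
    """Return True if this set of jobs may run concurrently."""
    if len(jobs) > 3:  # the job engine runs at most three jobs at once
        return False
    restripers = [j for j in jobs if j in RESTRIPE]
    markers = [j for j in jobs if j in MARK]
    # at most one job from each exclusion set may run at a time
    return len(restripers) <= 1 and len(markers) <= 1

print(combination_allowed(["SmartPools", "Collect", "FSAnalyze"]))  # True
print(combination_allowed(["FlexProtect", "AutoBalance"]))          # False
```

Note that the "1 Mark and Restripe Job + 2 Other Jobs" combination falls out naturally: a job such as MultiScan occupies both exclusion-set slots at once, so no other marking or restriping job can join it.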

OneFS marks blocks that are actually in use by the file system. IntegrityScan, for example, traverses the live file system, marking every block of every LIN in the cluster to proactively detect and resolve any issues with the structure of data in a cluster. The jobs that comprise the marking exclusion set are:

Job Name        Job Description                                                       Access Method
--------------  --------------------------------------------------------------------  -------------
Collect         Reclaims disk space that could not be freed because a node or         Drive + LIN
                drive was unavailable during a failure condition.
IntegrityScan   Performs online verification and correction of any file system        LIN
                inconsistencies.
MultiScan       Runs the Collect and AutoBalance jobs concurrently.                   LIN

OneFS protects data by writing file blocks across multiple drives on different nodes in a process known as ‘restriping’. The Job Engine defines a restripe exclusion set that contains these jobs which involve file system management, protection and on-disk layout. The restripe exclusion set contains the following jobs:

Job Name            Job Description                                                   Access Method
------------------  ----------------------------------------------------------------  -------------
AutoBalance         Balances free space in the cluster.                               Drive + LIN
AutoBalanceLin      Balances free space in the cluster.                               LIN
FlexProtect         Rebuilds and re-protects the file system to recover from a        Drive + LIN
                    failure scenario.
FlexProtectLin      Re-protects the file system.                                      LIN
MediaScan           Scans drives for media-level errors.                              Drive + LIN
MultiScan           Runs the Collect and AutoBalance jobs concurrently.               LIN
SetProtectPlus      Applies the default file policy. This job is disabled if          LIN
                    SmartPools is activated on the cluster.
ShadowStoreProtect  Protects shadow stores that are referenced by a LIN with          LIN
                    higher requested protection.
SmartPools          Moves data between node tiers within the same cluster. Also       LIN
                    executes the CloudPools functionality if licensed and
                    configured.
Upgrade             Manages OneFS upgrades.                                           LIN



Note that in OneFS 8.0 and beyond, the restriping exclusion set is enforced per phase instead of per job. This helps parallelize restripe jobs more efficiently when they do not need to lock down resources.

Restriping jobs only block each other when the current phase actually performs restriping. This is most evident with MultiScan, whose final phase only sweeps rather than restripes. Similarly, MediaScan, which rarely restripes, is usually able to run to completion without contending with other restriping jobs.

For example, below, the two restripe jobs MediaScan and AutoBalanceLin are both running their respective first job phases. ShadowStoreProtect, also a restriping job, is in a ‘waiting’ state, blocked by AutoBalanceLin:

Running and queued jobs:

ID Type State Impact Pri Phase Running Time

———————————————————————-

26850 AutoBalanceLin Running Low 4 1/3 20d 18h 19m

26910 ShadowStoreProtect Waiting Low 6 1/1 –

28133 MediaScan Running Low 8 1/8 1d 15h 37m

———————————————————————-



MediaScan restripes in phases 3 and 5 of the job, and only if there are disk errors (ECCs) which require data reprotection. If MediaScan reaches phase 3 with ECCs, it will pause until AutoBalanceLin is no longer running. If MediaScan’s priority were in the range 1-3, it would cause AutoBalanceLin to pause instead.

If two jobs happen to reach their restriping phases simultaneously and the jobs have different priorities, the higher priority job (i.e., the one with a priority value closer to “1”) will continue to run, and the other will pause. If the two jobs have the same priority, the one already in its restriping phase continues to run, and the one newly entering its restriping phase pauses.
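
That pause decision can be sketched as a small helper. This is a hypothetical model, not OneFS code; the job names and priorities are illustrative:

```python
def job_to_pause(running, entering):
    """Decide which of two contending restriping jobs pauses.

    `running` is already in its restripe phase; `entering` is just
    reaching its own. Each is a (name, priority) tuple, where a lower
    priority value means a higher priority. Illustrative model only.
    """
    run_name, run_pri = running
    new_name, new_pri = entering
    if new_pri < run_pri:   # newcomer is strictly higher priority...
        return run_name     # ...so the running job pauses
    return new_name         # ties favor the job already restriping

print(job_to_pause(("AutoBalanceLin", 4), ("MediaScan", 8)))  # MediaScan
```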

Jobs may also belong to both exclusion sets. An example of this is MultiScan, since it includes both AutoBalance and Collect.

However, the bulk of OneFS’ jobs do not belong to any exclusion set. These are typically the feature-support jobs described above, and they can coexist and contend with any of the other jobs.

Exclusion sets do not change the scope of the individual jobs themselves, so any runtime improvements via parallel job execution are the result of job management and impact control. The Job Engine monitors node CPU load and drive I/O activity per worker thread every twenty seconds to ensure that maintenance jobs do not cause cluster performance problems.

If a job affects overall system performance, Job Engine reduces the activity of maintenance jobs and yields resources to clients. Impact policies limit the system resources that a job can consume and when a job can run. You can associate jobs with impact policies, ensuring that certain vital jobs always have access to system resources.

Looking at the previous example again, where the SmartPools job is paused when FSAnalyze is running:

# isi job list

ID Type State Impact Pri Phase Running Time

———————————————————

578 SmartPools Waiting Low 6 1/2 15s

581 Collect Running Low 4 1/3 37m

584 Dedupe Running Low 4 1/1 33m

586 FSAnalyze Running Low 5 1/10 9s

———————————————————

Total: 4

If this is undesirable, FSAnalyze can be manually paused, to allow SmartPools to run unimpeded:

# isi job pause FSAnalyze

# isi job list

ID Type State Impact Pri Phase Running Time

————————————————————-

578 SmartPools Waiting Low 6 1/2 15s

581 Collect Running Low 4 1/4 38m

584 Dedupe Running Low 4 1/1 34m

586 FSAnalyze User Paused Low 5 1/10 20s

————————————————————-

Total: 4

Alternatively, the priority of the SmartPools job can also be elevated to value ‘4’ (or the priority of FSAnalyze lowered to value ‘7’) to permanently prioritize it over the FSAnalyze job.

For example:

# isi job types modify SmartPools --priority 4

Are you sure you want to modify the job type SmartPools? (yes/[no]): yes

# isi job types view SmartPools

ID: SmartPools

Description: Enforce SmartPools file policies. This job requires a SmartPools license.

Enabled: Yes

Policy: LOW

Schedule: every day at 22:00

Priority: 4

Or, via the webUI:

Navigate to Job Operations > Job Type and configure the desired priority value by editing the appropriate job type’s details.

Related:

Re: WormQueue can it be disabled

TL;DR – For your case, go ahead and set it to manual if you like.

You can disable it if you aren’t using SmartLock. It’s there to handle a race condition that can occur when changing certain parameters on a worm domain, which you obviously would not need.

As far as it affecting other jobs goes, in general you won’t notice. It takes up a slot in the job engine run queue when running, but it doesn’t belong to an exclusion group, so other than being one of the three jobs that can run at a time, it doesn’t hurt anything. Its priority is pretty low (6), so it will yield to more critical jobs if the job run queue fills. SnapDelete, for example, has a priority of 2, so it will take precedence over WormQueue.

SyncIQ runs outside of the job engine, so there’s no effect there.

Related:

7022672: How to set up the sadc cron job on SLES12

Starting the sysstat.service systemd unit sets up a symlink named sysstat in the /etc/cron.d directory which points to the /etc/sysstat/sysstat.cron file. This link is removed when the service is stopped.

By default, system activity data are collected every ten minutes and written to /var/log/sa/sa## data files, where ## corresponds to the day of the month. These data files can be read with the “sar” program. This is the sa1 component of sadc.

A daily report is run every six hours; it parses the sa## data files and writes the activity information to text files at /var/log/sa/sar##. This is the sa2 component of sadc.

The frequency of either of these cron jobs can be changed in the /etc/sysstat/sysstat.cron file.
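
For reference, a typical SLES 12 /etc/sysstat/sysstat.cron contains entries along the following lines. Treat this as an illustrative sketch; the exact paths, schedule values, and options may differ on your system, so check your own file:

```
# Collect activity data (sa1) every 10 minutes
*/10 * * * * root [ -x /usr/lib64/sa/sa1 ] && exec /usr/lib64/sa/sa1
# Generate the daily text reports (sa2) every 6 hours
55 5,11,17,23 * * * root [ -x /usr/lib64/sa/sa2 ] && exec /usr/lib64/sa/sa2 -A
```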

You can read more about these features in their respective man pages.
