Apache HTTP Server (32 bit) service reports high CPU usage

I need a solution

Hello,

We are experiencing high CPU usage from the Apache HTTP Server (32 bit) service [approximately 75%], which raises the overall CPU utilization to a constant 100%, after upgrading to SEPM 14.2 RU1 (Version 14.2.3332.1000) on Windows Server 2016 virtual machines. The physical machines, however, are working fine.

The virtual machine configuration is Windows Server 2016 @ 2.00 GHz, 6 processors, 32 GB RAM. The previous SEPM version, 14.2 MP1, did not have this issue.

Appreciate your thoughts.

Thank you.


Related:

  • No Related Posts

How To Troubleshoot High Packet or Management CPU Issue on Citrix ADC

CPU is a finite resource; like many resources, there are limits to its capacity. The NetScaler appliance generally has two kinds of CPUs: the Management CPU and the Packet CPU(s).

The Management CPU is responsible for processing all the management traffic on the appliance, while the Packet CPU(s) handle all the data traffic, for example TCP, SSL, and so on.

When diagnosing a complaint involving high CPU, start by gathering the following fundamental facts:

  1. CPUs impacted: nsppe (one or all) & management.
  2. Approximate time stamp/duration.

The output of the following commands is essential for troubleshooting high CPU issues:

  • Output of the top command: gives the CPU utilization percentage of the processes running on the NetScaler.
  • Output of the stat system memory command: gives the memory utilization percentage, which can also contribute to CPU utilization.
  • Output of the stat system cpu command: gives the current overall CPU utilization statistics on the appliance.
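
The stat commands above run from the NetScaler CLI, while top runs from the underlying BSD shell; a minimal, purely illustrative triage sequence might look like this:

> stat system cpu
> stat system memory
> shell
root@ns# top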

Sample output of the stat cpu command:

> stat cpu
CPU statistics
ID  Usage
1   29

The above output indicates that there is only one CPU (used for both management and data traffic), that its ID is 1, and that its utilization is 29%.

There are also appliances with multiple cores (nCore), where more than a single core is allocated to the appliance; in that case, multiple CPU IDs appear in the "stat system cpu" output.

*The high CPU seen when running the "top" command does not impact the performance of the box, and it does not mean that the NetScaler is running at high CPU or consuming all of the CPU. The NetScaler kernel runs on top of BSD, and that is what top is reporting; although the process appears to be using the full CPU, it actually is not.

Follow the steps below to understand the CPU usage:

  1. Check the following counters to understand CPU usage.

    CLASSIC:

    master_cpu_use

    cc_appcpu_use filter=cpu(0)

    (If AppFW or CMP is configured, then looking at slave_cpu_use also makes sense for classic)

    nCORE:

    (For an 8 Core system)

    mgmt_cpu_use (CPU0 – nscollect runs here)

    master_cpu_use (average of cpu(1) thru cpu(7))

    cc_cpu_use filter=cpu(1)

    cc_cpu_use filter=cpu(2)

    cc_cpu_use filter=cpu(3)

    cc_cpu_use filter=cpu(4)

    cc_cpu_use filter=cpu(5)

    cc_cpu_use filter=cpu(6)

    cc_cpu_use filter=cpu(7)

  2. How do you check the CPU use of a particular CPU?

    Use the nsconmsg command, search for cc_cpu_use, and grep for the CPU you are interested in.

    The output will look like the following:

    Index rtime totalcount-val delta rate/sec symbol-name&device-no
    320 0 209 15 2 cc_cpu_use cpu(8)
    364 0 205 -6 0 cc_cpu_use cpu(8)
    375 0 222 17 2 cc_cpu_use cpu(8)
    386 0 212 -10 -1 cc_cpu_use cpu(8)
    430 0 216 6 0 cc_cpu_use cpu(8)
    440 0 201 -15 -2 cc_cpu_use cpu(8)
    450 0 208 7 1 cc_cpu_use cpu(8)
    461 0 202 -6 0 cc_cpu_use cpu(8)
    471 0 209 7 1 cc_cpu_use cpu(8)
    482 0 238 29 4 cc_cpu_use cpu(8)
    492 0 257 19 2 cc_cpu_use cpu(8)
  • Look at the totalcount (third) column and divide by 10 to get the CPU percentage. For example, in the last line above, 257 implies that 257/10 = 25.7% of CPU(8) is in use.

    Run the following commands to investigate the nsconmsg counters for a CPU issue (a worked example for a single CPU follows this list):

    nsconmsg -K newnslog -g cpu_use -s totalcount=600 -d current
    nsconmsg -K newnslog -d current | grep cc_cpu_use
  • Look at the traffic, memory, and CPU in conjunction. We may be hitting platform limits if the high CPU usage is sustained. Try to understand whether the CPU has gone up because of traffic; if so, determine whether it is genuine traffic or some sort of attack.
  • We can also check the profiler output to understand what is consuming the CPU.

    For details on the profiler output and logs, refer to the following article:

    https://support.citrix.com/article/CTX212480

  • The CPU counters mentioned in the following article can provide further detail:

    https://support.citrix.com/article/CTX133887
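
As referenced earlier in this list, a minimal example of isolating a single packet CPU with nsconmsg (the CPU number here is only a placeholder) is:

nsconmsg -K newnslog -d current | grep "cc_cpu_use cpu(1)"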


Profiling FAQs

1. What is Constant profiling?

This refers to running the CPU profiler at all times, as soon as the NetScaler device comes up. The profiler is invoked at boot time and keeps running. Any time the CPU associated with any PE exceeds 90%, the profiler captures the data into a set of files.

2. Why is this needed?

This was necessitated by issues seen at some customer sites and in internal tests. With customer issues, it is hard to go back and ask the customer to run the profiler when the issue occurs again, hence the need for a profiler that is always running and can show the functions triggering high CPU. With this feature, the profiler runs at all times and the data is captured when high CPU usage occurs.

3. Which releases/builds contain this feature?

TOT (Crete) 44.2+

9.3 – all builds

9.2 52.x +

This feature is present only in nCore builds.

4. How do we know the profiler is already running?

Run the ps command to check if nsproflog and nsprofmon are running. The number of nsprofmon processes should be the same as the number of PEs running.

root@nc1# ps -ax | grep nspro
36683 p0  S+  0:00.00 grep nspro
79468 p2- I   0:00.01 /bin/sh /netscaler/nsproflog.sh cpuuse=800 start
79496 p2- I   0:00.00 /bin/sh /netscaler/nsproflog.sh cpuuse=800 start
79498 p2- I   0:00.00 /bin/sh /netscaler/nsproflog.sh cpuuse=800 start
79499 p2- I   0:00.00 /bin/sh /netscaler/nsproflog.sh cpuuse=800 start
79502 p2- S  33:46.15 /netscaler/nsprofmon -s cpu=3 -ys cpuuse=800 -ys profmode=cpuuse -O -k /v
79503 p2- S  33:48.03 /netscaler/nsprofmon -s cpu=2 -ys cpuuse=800 -ys profmode=cpuuse -O -k /v
79504 p2- S  32:20.63 /netscaler/nsprofmon -s cpu=1 -ys cpuuse=800 -ys profmode=cpuuse -O -k /v
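
A quick way to count only the nsprofmon processes and compare against the number of PEs (an illustrative one-liner, not from the original article; the bracketed first letter keeps grep from matching itself) is:

root@nc1# ps -ax | grep -c "[n]sprofmon"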

5. Where is the profiler data?

The profiled data is collected in the /var/nsproflog directory. Here is a sample listing of the files in that folder. At any point in time, the currently running files are newproflog_cpu_<penum>.out. Once the data in these files exceeds 10 MB in size, they are archived into a tar file and compressed. The rollover mechanism is similar to the one used for newnslog files.

newproflog.0.tar.gz  newproflog.5.tar.gz  newproflog.old.tar.gz
newproflog.1.tar.gz  newproflog.6.tar.gz  newproflog_cpu_0.out
newproflog.2.tar.gz  newproflog.7.tar.gz  nsproflog.nextfile
newproflog.3.tar.gz  newproflog.8.tar.gz  nsproflog_options
newproflog.4.tar.gz  newproflog.9.tar.gz  ppe_cores.txt

The current data is always captured in newproflog_cpu_<ppe number>.out. Once the profiler is stopped, the newproflog_cpu_* files will be archived into newproflog.(value in nsproflog.nextfile-1).tar.gz.
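
For example, to check which archive number the next rollover will use (illustrative; if the file contains 6, the most recently written archive is newproflog.5.tar.gz):

root@nc1# cat /var/nsproflog/nsproflog.nextfile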

6. What is nsprofmon and what’s nsproflog.sh?

Nsprofmon is the binary that interacts with the PE, retrieves the profiler records, and writes them into files. It has a myriad of options that are hard to remember; the wrapper script nsproflog.sh is easier to use and remember. Going forward, it is recommended to use the wrapper script if the goal is limited to collecting CPU usage data.

7. Should I use nsprofmon or nsproflog.sh?

In earlier releases (9.0 and earlier), nsprofmon was heavily used internally and by the support groups, and some internal devtest scripts refer to nsprofmon. It is recommended to use nsproflog.sh if the goal is limited to collecting CPU usage data.

8. Will the existing scripts be affected?

Existing scripts are affected only if they try to invoke the profiler. See the next question.

9. What if I want to start the profiler with a different set of parameters?

Only one instance of the profiler can run at any time. If the profiler is already running (invoked at boot time by constant profiling) and you try to invoke it again, it flags an error and exits.

root@nc1# nsproflog.sh cpuuse=900 start
nCore Profiling
Another instance of profiler is already running.
If you want to run the profiler at a different CPU threshold, please stop the current profiler using
# nsproflog.sh stop
... and invoke again with the intended CPU threshold. Please see nsproflog.sh -h for the exact usage.


Similarly, nsprofmon has also been modified to check whether another instance is running; if one is, it exits with an error.

If the profiler needs to be run again with a different CPU usage threshold (for example, 80%), the running instance must be stopped and then invoked again:

root@nc1# nsproflog.sh stop
nCore Profiling
Stopping all profiler processes
Removing buffer for -s cpu=1
Removing profile buffer on cpu 1 ... Done.
Saved profiler capture data in newproflog.5.tar.gz
Setting minimum lost CPU time for NETIO to 0 microsecond ... Done.
Stopping mgmt profiler process
root@nc1# nsproflog.sh cpuuse=800 start

10. How do I view the profiler data?

In /var/nsproflog, unzip and untar the desired tar archive. Each file in the archive corresponds to one PE.

Caution: When you unzip and untar the older files, the files from the archive will overwrite the current ones, because the names stored inside the tar archive are the same as the ones the currently running profiler keeps writing to. To avoid this, unzip and untar into a temporary directory.

The simplest way to view the profiled data is:

# nsproflog.sh kernel=/netscaler/nsppe display=newproflog_cpu_<ppe number>.out
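
For example, to inspect an archived capture without overwriting the live files, extract it into a temporary directory first (the archive number, directory, and PPE number below are placeholders):

# mkdir /tmp/prof && cd /tmp/prof
# tar -xzf /var/nsproflog/newproflog.5.tar.gz
# nsproflog.sh kernel=/netscaler/nsppe display=newproflog_cpu_1.out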

11. How do we collect this data for analysis?

The showtech script has been modified to collect the profiler data. When customer issues arrive, /var/nsproflog can be checked to see whether the profiler has captured any data.

12. Anything else that I need to know?

Collecting traces and collecting profiler data are mutually exclusive. When nstrace.sh is run to collect traces, the profiler is automatically stopped and is restarted when nstrace.sh exits, so no profiler data is available while traces are being collected.

13. What commands get executed when profiler is started?

Initialization:

For each CPU, the following commands are executed initially:

nsapimgr -c
nsapimgr -ys cpuuse=900
nsprofmon -s cpu=<cpuid> -ys profbuf=128 -ys profmode=cpuuse

Capturing:

For each CPU, the following are executed:

nsapimgr -c
nsprofmon -s cpu=<cpuid> -ys cpuuse=900 -ys profmode=cpuuse -O -k /var/nsproflog/newproflog_cpu_<cpuid>.out -s logsize=10485760 -ye capture

After the above, nsprofmon processes keep running until any one of the capture buffers is full.

nsproflog.sh waits for any of the above child processes to exit.

Stopping:

Kill all nsprofmon processes (killall -9 nsprofmon)

For each CPU, the following commands are executed:

nsprofmon -s cpu=<cpuid> -yS profbuf

Profiler capture files are archived, and the minimum lost CPU time for NETIO is reset:

nsapimgr -ys lctnetio=0

Related:

  • No Related Posts


How to pin Citrix Hypervisor Virtual CPUs to specific Physical CPUs

Citrix Hypervisor maps vCPUs to pCPUs in a roughly even way by default to distribute VM load on the host. In some cases a specific mapping may be needed; for example, if some VMs will be CPU intensive while others won't, the intensive VMs can be pinned to dedicated physical CPUs while the others share resources. This can lead to performance improvements on the hypervisor.

The following example changes hard affinity, that is, where the vCPU is allowed to run. Once a vCPU is pinned to a pCPU with hard affinity, the vCPU cannot run on any other pCPU.

Soft affinity defines where the vCPU prefers to run, but it does not restrict where it is allowed to run.
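
A minimal sketch of hard pinning with the xe CLI is shown below; the VM name, UUID, and pCPU numbers are placeholders, and a mask set this way takes effect the next time the VM starts. The first command looks up the VM UUID, the second pins all of the VM's vCPUs to physical CPUs 2 and 3, and the third removes the mask again if it is no longer needed:

xe vm-list name-label=<vm-name> params=uuid
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=2,3
xe vm-param-remove uuid=<vm-uuid> param-name=VCPUs-params param-key=mask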

To determine which type of mapping to use, analyze the type of workload that the guest VMs are going to run. Ultimately, the best way to find the right mapping is to test different configurations with real workloads and observe the results. There is no perfect ratio, since workloads change depending on many factors, such as OS type, patches, applications run by users, employee work schedules, and so on.

Related:

  • No Related Posts

Can I deploy SSL intercept with a single CPU?

I need a solution

Hi all.

I want to ask: can my ProxySG with a single CPU do SSL interception?

I tried to find the documented requirements for SSL interception but could not find the article.

Our ProxySG model is SG S200-20.

I ask because I have had an experience with a single-CPU ProxySG where the CPU was constantly at 100%.

When I checked the CPU utilization, the SSL and cryptography processes were consuming more than 45% of CPU resources.

Best Regards

Indra Pramono


Related:

  • No Related Posts

How to Configure the Integrated Caching Feature of a NetScaler Appliance for various Scenarios

You can configure the Integrated Caching feature of a NetScaler appliance for the following scenarios, as required:

Note: The memory limit of the NetScaler appliance is identified when the appliance starts. Therefore, any change to the memory limit requires you to restart the appliance to make the change effective across the packet engines.

The Feature is Enabled and Cache Memory Limit is Set to Non-Zero

In this scenario, when you start the appliance, the Integrated Caching feature is enabled and the global memory limit is set to a positive number. Therefore, the memory you had set earlier is allocated to the Integrated Caching feature during the boot process. However, you might want to change the memory limit to another value depending on the available memory on the appliance.

To configure the Integrated Caching feature in this scenario, complete the following procedure:

  1. Run the following command to verify the value for the memory limit:

    NS> show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 MBytes

    Memory usage limit (active value): 500 MBytes


    Maximum value for Memory usage limit: 843 MBytes

    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

  2. Run the following command to set a non-zero memory limit for the Integrated Caching feature:

    set cache parameter -memLimit 600

    The preceding command displays the following warning message:

    Warning: To use new Integrated Cache memory limit, save the configuration and restart the NetScaler.

  3. Run the following command to save the configuration:

    save config

  4. From the shell prompt, run the following command to verify the memory limit in the configuration file:

    root@ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 600 -via NS-CACHE-9.3: 18 -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

  5. Run the following command to restart the appliance:

    root@ns# reboot

  6. Run the following command to verify the new value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 600 Mbytes

    Memory usage limit (active value): 600 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    After all packet engines start successfully, the Integrated Caching feature negotiates the memory you configured. If the appliance cannot provide the configured memory, it allocates as much as it can: when the available memory is less than the amount you allocated, the appliance recommends a lower value, and the Integrated Caching feature uses that value as the active value.

The Feature is Disabled and Cache Memory Limit is Set to Non-Zero

In this scenario, when you start the appliance, the Integrated Caching feature is disabled and the global memory limit is set to a positive number. Therefore, no memory is allocated to the Integrated Caching feature during the boot process.

To configure the Integrated Caching feature to a new memory limit, complete the following procedure:

  1. Run the following command to verify the current value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 600 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

  2. Run the following command to set a new memory limit for the Integrated Caching feature, such as 500 MB:

    set cache parameter -memLimit 500

    The preceding command displays the following warning message:

    Warning: Feature(s) not enabled [IC]

  3. Run the following command to save the configuration:

    save config

  4. From the shell prompt, run the following command to verify the memory limit in the configuration file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 500 -via NS-CACHE-9.3: 18 -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

  5. Run the following command to verify the new value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

  6. Run the following command to enable the Integrated Caching feature:

    enable ns feature ic

    After running the preceding command, the appliance negotiates memory for the Integrated Caching feature and the available memory is assigned to the feature. This results in the appliance caching objects without restarting the appliance.

  7. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 Mbytes

    Memory usage limit (active value): 500 Mbytes


    Maximum value for Memory usage limit: 843 Mbytes

    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that 500 MB of memory is allocated to the Integrated Caching feature.

  8. Save the configuration to ensure that memory is automatically allocated to the feature when the appliance is restarted.

The Feature is Enabled and Cache Memory Limit is Set to Zero

In this scenario, when you start the appliance, the Integrated Caching feature is enabled and the global memory limit is set to zero. Therefore, no memory is allocated to the Integrated Caching feature during the boot process.

To configure a cache memory limit in this scenario, complete the following procedure:

  1. Switch to the shell prompt and run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 0 -via “NS-CACHE-9.3: 18” -verifyUsing HOSTNAME -maxPostLen 4096 -enableBypass YES -undefAction NOCACHE

  2. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 0 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3:

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that the memory limit is set to 0 MB and no memory is allocated to the Integrated Caching feature.

  3. To ensure that the Integrated Caching feature caches objects, run the following command to set the memory limits:

    set cache parameter -memLimit 600

    After running the preceding command, the appliance negotiates memory for the Integrated Caching feature and the available memory is assigned to the feature. This results in the appliance caching objects without restarting the appliance.

  4. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 600 Mbytes

    Memory usage limit (active value): 600 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3:

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that 600 MB of memory is allocated to the Integrated Caching feature.

  5. Save the configuration to ensure that memory is automatically allocated to the feature when the appliance is restarted.

  6. Switch to the shell prompt and run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 600 -via NS-CACHE-9.3: -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

The Feature is Disabled and Cache Memory Limit is Set to Zero

In this scenario, when you start the appliance, the Integrated Caching feature is disabled and the global memory limit is set to zero. Therefore, no memory is allocated to the Integrated Caching feature during the boot process.

To configure the Integrated Caching feature in this scenario, complete the following procedure:

  1. Run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 0 -via “NS-CACHE-9.3: 18” -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE

  2. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 0 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that the memory limit is set to 0 MB and no memory is allocated to the Integrated Caching feature. Additionally, when you run any cache configuration command, the following warning message is displayed:

    Warning: Feature(s) not enabled [IC]

  3. Run the following command to enable the Integrated Caching feature:

    enable ns feature ic

    At this stage, when you enable the Integrated Caching feature, the appliance does not allocate memory to the feature. As a result, no object is cached to the memory. Additionally, when you run any cache configuration command, the following warning message is displayed:

    No memory is configured for IC. Use set cache parameter command to set the memory limit.

  4. To ensure that the Integrated Caching feature caches objects, run the following command to set the memory limits:

    set cache parameter -memLimit 500

    After running the preceding command, the appliance negotiates memory for the Integrated Caching feature and the available memory is assigned to the feature. This results in the appliance caching objects without restarting the appliance.

    Note: The order in which you enable the feature and set the memory limit is very important. If you set the memory limit before enabling the feature, the following warning message is displayed:

    Warning: Feature(s) not enabled [IC]

  5. Run the following command to verify the value for the memory limit:

    show cache parameter

    Integrated cache global configuration:

    Memory usage limit: 500 Mbytes

    Memory usage limit (active value): 500 Mbytes

    Maximum value for Memory usage limit: 843 Mbytes


    Via header: NS-CACHE-9.3: 18

    Verify cached object using: HOSTNAME_AND_IP

    Max POST body size to accumulate: 0 bytes

    Current outstanding prefetches: 0

    Max outstanding prefetches: 4294967295

    Treat NOCACHE policies as BYPASS policies: YES

    Global Undef Action: NOCACHE

    Notice that 500 MB of memory is allocated to the Integrated Caching feature.

  6. Run the following command to save the configuration:

    save config

  7. Run the following command to verify the memory limits set in the ns.conf file:

    ns# cat /nsconfig/ns.conf | grep memLimit

    set cache parameter -memLimit 500 -via NS-CACHE-9.3: 18 -verifyUsing HOSTNAME_AND_IP -maxPostLen 0 -enableBypass YES -undefAction NOCACHE
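
As a recap of the ordering caveat in step 4, a minimal sequence that enables the feature before setting the limit (reusing the values from this scenario) is:

enable ns feature ic
set cache parameter -memLimit 500
save config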

Related:

  • No Related Posts