How to Offload HTTP Compression to the NetScaler Appliance

This article describes how to configure HTTP Compression on the NetScaler appliance to prevent backend Web servers from sending compressed responses.


HTTP Compression is a feature of the NetScaler appliance that you can enable to partially (using compression policies) or fully offload the compression normally performed on backend Web servers.

Compression is performed using GZIP/DEFLATE in compliance with RFCs 1950, 1951, and 1952 to reduce bandwidth requirements and increase speed for client Web connections. To fully offload this functionality from the backend Web servers to the NetScaler, the two available options are:

  • Disable compression on the backend Web servers, enable the NetScaler Compression feature at the global level, and configure the services for compression.
  • Leave compression enabled on the backend Web servers and have the NetScaler appliance remove the Accept-Encoding header from all HTTP client requests, which causes the backend Web servers to return uncompressed responses. The NetScaler appliance then compresses the server responses before sending them to the client.

Note: This article reviews the steps necessary to complete the second option.
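The three RFCs correspond to three closely related wire formats: zlib (RFC 1950), raw DEFLATE (RFC 1951), and gzip (RFC 1952). A minimal Python sketch of the three formats (illustrative only; the appliance implements them in its own engine):

```python
import gzip
import zlib

data = b"Hello, NetScaler! " * 200

# RFC 1950: zlib format -- a DEFLATE stream with a 2-byte header and Adler-32 trailer
zlib_bytes = zlib.compress(data)

# RFC 1951: raw DEFLATE -- negative wbits tells zlib to omit the wrapper entirely
co = zlib.compressobj(wbits=-zlib.MAX_WBITS)
deflate_bytes = co.compress(data) + co.flush()

# RFC 1952: gzip format -- a DEFLATE stream with a gzip header and CRC-32 trailer
gzip_bytes = gzip.compress(data)

print(len(data), len(zlib_bytes), len(deflate_bytes), len(gzip_bytes))
```

All three carry the same DEFLATE payload; they differ only in framing and checksums, which is why a single compression engine can emit any of them.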

When the NetScaler appliance receives a compressed HTTP response from a backend server, the appliance does not attempt to compress the response again. The compression statistics reflect this.

By removing the Accept-Encoding header from the HTTP client requests (assuming that the backend server does not forcibly compress all responses), the server returns an uncompressed response, which the appliance can then compress before sending it to the client.

This effectively offloads the compression workload from the backend Web servers to the appliance.
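The request/response flow can be sketched as a toy model in Python (not appliance code; the header names are standard HTTP, and the backend logic is a typical default):

```python
import gzip

def strip_accept_encoding(headers: dict) -> dict:
    """What the appliance does on the server-facing side: drop Accept-Encoding."""
    return {k: v for k, v in headers.items() if k.lower() != "accept-encoding"}

def backend_response(headers: dict, body: bytes):
    """A typical backend: compress only if the client advertised gzip support."""
    if "gzip" in headers.get("Accept-Encoding", ""):
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body

client_headers = {"Host": "example.com", "Accept-Encoding": "gzip, deflate"}
body = b"<html>hello</html>"

# Without the appliance in the path: the server compresses the response itself
direct_headers, direct_body = backend_response(client_headers, body)

# Through the appliance: the header is removed, so the server replies
# uncompressed, leaving the appliance free to compress for the client
proxied_headers, proxied_body = backend_response(strip_accept_encoding(client_headers), body)
print(direct_headers, proxied_headers)
```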



How to Determine NetScaler’s HTTP Compression Capability

What is NetScaler HTTP Compression feature?

The NetScaler can perform lossless compression to reduce the size of data in transit. Benefits of compression include reduced bandwidth consumption, shorter download times, and fewer network-related performance issues.

This feature is available in the NetScaler Platinum and Enterprise Edition licenses, and is optional with the Standard license.

HTTP compression on the NetScaler is based upon the GZIP and DEFLATE algorithms. The compression feature compresses data in HTML, XML, CSS, text, and Microsoft Office documents. It does not compress picture-format files, JavaScript files, or other web files that are not text related.

What are the capabilities of the HTTP compression feature?

Higher Compression Ratio: The NetScaler compression feature can achieve a high compression ratio, particularly with the DEFLATE compression algorithm. For example, if a backend server sends 5 GB of compressible data, the NetScaler can compress it and send 1 GB of data to the client (a 5:1 ratio).
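The ratio arithmetic can be reproduced with any DEFLATE implementation; a quick Python check (the 5:1 figure depends entirely on how compressible the data is -- the repetitive sample below compresses far better than typical web content):

```python
import zlib

# ~4.7 MB of repetitive, highly compressible text (real traffic compresses less well)
original = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 100_000
compressed = zlib.compress(original, 6)

ratio = len(original) / len(compressed)
print(f"{len(original)} -> {len(compressed)} bytes, about {ratio:.0f}:1")
```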

Browser Awareness: The NetScaler serves compressed data only to compression-aware browsers.

Compression Caching: With the integrated caching feature enabled, subsequent requests for the same content are served from the local cache.

Compression throughput metrics

Enabling HTTP compression might have an immediate impact on CPU performance on virtual servers where compression is enabled.

The HTTP compression performance number is termed Compression Throughput in NetScaler data sheets. Compression throughput indicates the maximum rate at which a NetScaler appliance can compress and transmit application data.

Note: Compression throughput on the NetScaler is measured after compressing the data sent to the client.

For example, on the MPX 14020 appliance, where L7 throughput is 20 Gbps (Tx + Rx together), compression throughput is 4.4 Gbps (the number signifies post-compression throughput).


The compression throughput values in the NetScaler data sheet are measured after the data has been compressed and vary based on the NetScaler appliance type and the CPU and memory allocated to the various NetScaler models.


WSS: Block executables inside zip files

I need a solution

Hello everyone

I need your help. In my WSS portal I created a rule to block all executable files (*.exe), according to this KB.

The rule works fine, but if the *.exe file is compressed inside a *.zip file, it doesn't work.

Any idea why it does not work in that case?


Andres Garcia
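For background on why an extension-based rule typically misses archived executables: the rule matches the outer file's name or type, while the .exe name exists only inside the ZIP's directory, so the proxy would have to open the archive to see it. A small Python illustration (not WSS code):

```python
import io
import zipfile

def archived_names(blob: bytes) -> list:
    """List member names of a ZIP payload without extracting it to disk."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return zf.namelist()

# Build a ZIP containing a (fake) executable, entirely in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("tool.exe", b"MZ\x90\x00")   # stand-in for a real PE payload

names = archived_names(buf.getvalue())
contains_exe = any(n.lower().endswith(".exe") for n in names)
print(names, contains_exe)
```

A filter that only checks the outer filename (*.zip) never sees "tool.exe"; only content inspection of the archive, as above, reveals it.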



Secure Mail: How to Edit the MDX File to Enable or Disable a Hidden Policy

Following are the steps for enabling/disabling an Android hidden MDX policy. (Please take a backup of the .mdx file prior to making any changes.)

1. Rename the .mdx file that you have to a .zip file

2. Unzip it, open the policy_metadata.xml file, and look for the policy; here we take “Auto_Populate_username_title” as an example.

3. Change the PolicyHidden tag to true or false.


4. Now select all the files in the folder (the .xml and .apk files, along with the various properties files) and compress them. (To compress, go inside the folder, select all the files, then choose Send to -> Compressed (zipped) folder.)

5. Rename the compressed .zip file back to a .mdx file.

6. Upload this .mdx file to your environment.
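The manual steps above can be scripted. The sketch below is a hypothetical Python helper, assuming policy_metadata.xml stores the flag as a <PolicyHidden> element as in the example; it flips every PolicyHidden tag, so narrow the pattern if you only want one policy, and keep the backup it creates:

```python
import re
import shutil
import zipfile
from pathlib import Path

def set_policy_hidden(mdx_path: str, hidden: bool) -> None:
    """Repack an .mdx (really a ZIP) with PolicyHidden flipped in policy_metadata.xml."""
    src = Path(mdx_path)
    shutil.copy(src, src.parent / (src.name + ".bak"))   # back up the original first

    workdir = src.with_suffix("")                        # extract next to the file
    with zipfile.ZipFile(src) as zf:
        zf.extractall(workdir)

    # Assumed layout: <PolicyHidden>true|false</PolicyHidden> inside the XML
    meta = workdir / "policy_metadata.xml"
    text = meta.read_text(encoding="utf-8")
    text = re.sub(r"<PolicyHidden>(true|false)</PolicyHidden>",
                  f"<PolicyHidden>{str(hidden).lower()}</PolicyHidden>", text)
    meta.write_text(text, encoding="utf-8")

    # Re-zip the files themselves (relative paths, not the containing folder)
    with zipfile.ZipFile(src, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in workdir.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(workdir))
```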



7021301: Updating ACC Files in Verastream

This document (7021301) is provided subject to the disclaimer at the end of this document.


Verastream Host Integrator version 7.7 or earlier

Verastream Process Designer R6 or earlier


Apache Commons Collections (ACC) library version 3.2.1 contains a vulnerability that allows a remote attacker to execute arbitrary code on an unpatched machine that uses JMX. This technical note explains how to update the ACC files to address this vulnerability.

Note: For more information about this vulnerability, see Technical Note 2700.


The steps depend on your Verastream product.

Verastream Host Integrator

Use the following steps to update your VHI installation with the patched ACC files:

  1. Download the version 3.2.2 binaries (either .zip or .tar.gz).
  2. Uncompress the .zip or .tar.gz file to extract the commons-collections-3.2.2.jar file.
  3. Stop the Verastream Management Server service.
  4. Repeat the following steps for all of the following directories:

    1. Locate the existing commons-collections-3.2.1.jar and rename it to a different file extension (such as commons-collections-3.2.1.jar.backup).
    2. Copy the commons-collections-3.2.2.jar file (extracted in step 2) into the directory.
  5. Start the Verastream Management Server service.
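The rename-and-copy loop can be scripted. Below is a hypothetical Python sketch; pass in the directories listed for your installation (the actual list is installation-specific and is not hardcoded here):

```python
import shutil
from pathlib import Path

OLD_JAR = "commons-collections-3.2.1.jar"

def swap_jar(new_jar, jar_dirs) -> None:
    """For each directory: rename the 3.2.1 jar to *.backup, then copy in the 3.2.2 jar."""
    new_jar = Path(new_jar)
    for d in jar_dirs:
        old = Path(d) / OLD_JAR
        if old.exists():
            # Keep the vulnerable jar under a different extension
            old.rename(old.parent / (OLD_JAR + ".backup"))
            # Drop in the patched commons-collections-3.2.2.jar
            shutil.copy(new_jar, Path(d) / new_jar.name)
```

Stop the service before running this and start it again afterwards, as in the steps above.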

Verastream Process Designer

Use the following steps to update your Verastream Process Designer installation with the patched ACC files:

  1. Download the version 3.2.2 binaries (either .zip or .tar.gz).
  2. Uncompress the .zip or .tar.gz file to extract the commons-collections-3.2.2.jar file.
  3. Stop the Verastream Process Server service.
  4. Repeat the following steps for all of the following directories:


    1. Locate the existing commons-collections-3.2.1.jar and rename it to a different file extension (such as commons-collections-3.2.1.jar.backup).
    2. Copy the commons-collections-3.2.2.jar file (extracted in step 2) into the directory.
  5. Start the Verastream Process Server service.

Additional Information

Legacy KB ID

This document was originally published as Attachmate Technical Note 10162.


This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.


How to Troubleshoot Authentication Issues Through NetScaler or NetScaler Gateway with aaad.debug Module

To troubleshoot authentication with aaad.debug module, complete the following procedure:

  1. Connect to NetScaler Gateway command line interface with a Secure Shell (SSH) client such as PuTTY.

  2. Run the following command to switch to the shell prompt:

    shell


  3. Run the following command to change to the /tmp directory:

    cd /tmp

  4. Run the following command to start the debugging process:

    cat aaad.debug

  5. Perform the authentication process that requires troubleshooting, such as a user logon attempt.

  6. Monitor the output of the cat aaad.debug command to interpret and troubleshoot the authentication process.

  7. Stop the debugging process by pressing Ctrl+Z.

  8. Run the following command to record the output of aaad.debug to a file:

    cat aaad.debug | tee /var/tmp/<debuglogname>

    Where /var/tmp is the directory path and <debuglogname> is the log file name.

The following section provides examples of how the aaad.debug module can be used to troubleshoot and interpret authentication errors.

Incorrect Password

In this example, the user entered an incorrect Lightweight Directory Access Protocol (LDAP) password.

Fri Oct 19 17:53:20 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[40]: start_ldap_auth attempting to auth scottli @
Fri Oct 19 17:53:20 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[291]: recieve_ldap_bind_event receive ldap bind event
Fri Oct 19 17:53:20 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[551]: recieve_ldap_user_search_event built group string for scottli of:Domain Admins
Fri Oct 19 17:53:22 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/naaad.c[1198]: send_reject sending reject to kernel for : scottli

Invalid Username

In this example, the user entered an incorrect username.

/usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[40]: start_ldap_auth attempting to auth scott @
Fri Oct 19 17:53:30 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[291]: recieve_ldap_bind_event
Fri Oct 19 17:53:30 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[534]: recieve_ldap_user_search_event ldap_first_entry returned null, user not found
Fri Oct 19 17:53:30 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/naaad.c[1198]: send_reject sending reject to kernel for : scott

Invalid LDAP Bind Attempt

In this example, an invalid set of LDAP bind credentials was defined in the authentication policy.

Fri Oct 19 18:17:16 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/naaad.c[359]: process_kernel_socket call to authenticate user :scottli, vsid :527
Fri Oct 19 18:17:16 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[40]: start_ldap_auth attempting to auth scottli @
Fri Oct 19 18:17:18 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[291]: recieve_ldap_bind_event receive ldap bind event
Fri Oct 19 18:17:18 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[326]: recieve_ldap_bind_event ldap_bind with binddn bindpw failed: Invalid credentials
Fri Oct 19 18:17:18 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/naaad.c[1198]: send_reject sending reject to kernel for : scottli

Determining Group Extraction Results

In this example, the group extraction results can be determined. Many issues with AAA group access involve the user not picking up the correct session policies for their assigned group on a NetScaler Gateway appliance. Common reasons for this include incorrect spelling of the Active Directory/RADIUS group name on the appliance and the user not being a member of the security group in Active Directory/RADIUS.

Fri Oct 19 18:22:14 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[40]: start_ldap_auth attempting to auth scottli @
Fri Oct 19 18:22:14 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[291]: recieve_ldap_bind_event receive ldap bind event
Fri Oct 19 18:22:14 2007 /usr/home/build/rs_80_48/usr.src/usr.bin/nsaaad/../../netscaler/aaad/ldap_drv.c[551]: recieve_ldap_user_search_event built group string for scottli of:Domain Admins


Understanding VMAX Compression

This is a piece I put together based on a request from the community to explain how compression works on the VMAX: what data gets compressed, what doesn't, and how often this occurs.

Compression allows users to compress user data on storage groups and storage resources. The feature is enabled by default and can be turned on and off at the storage group and storage resource pool level. If a storage group is cascaded, enabling compression at that level enables compression for each of the child storage groups. The user has the option to disable compression on one or more of the child storage groups if desired.

The VMAX All Flash’s Adaptive Compression Engine (ACE) offers benefits such as capacity savings while still delivering the expected performance. Space savings is commonly the first thought when compression is discussed; however, there is always some cost, usually in performance, due to the overhead of actually compressing the data. The Dell EMC Adaptive Compression Engine’s design, using intelligent algorithms paired with hardware acceleration, minimizes that cost. This combination allows the system to maintain a balanced and optimal configuration. The result is a system that delivers efficient capacity savings and optimal performance.

This feature was made available in the 5977.945.890 release, which is the minimum code version you need to be running on the All Flash array. All data services offered on the VMAX All Flash array are supported with compression enabled, including local replication (SnapVX), remote replication (SRDF), D@RE, and VVols.


The VMAX Adaptive Compression Engine (ACE) comprises four basic tenets:

1. Hardware Acceleration – Each array has multiple hardware compression modules that handle the actual compressing and decompressing of data. The system requirements state that each system will have a compression module per director which equates to 2 modules per engine. The compression modules being used are tested and proven components and have been in use for years in the VMAX to support SRDF Compression.

2. Optimized Data Placement – This is a function within the VMAX AFA that is always running and is responsible for dynamically changing the compression pools as needed. It generates minimal overhead, similar to how FAST operated on a VMAX 3. Data is stored in back-end compression pools based on its compressibility (pools ranging from 8K to 128K).

These compression pools represent actual disk space on multiple solid state drives. Once compressed, data is allocated to these pools. Multiple compression pools may be created in order to build an optimal back end; the result is a layout of compression pools suited to the data sent to the system. All data can be compressed, but it does not all compress to the same degree: some data may compress to one size and other data to another. To maximize compression efficiency, multiple compression ratios need to be available.


The figure above gives us a good visual representation of how multiple pools handle the various compression ratios as the writes come in.

3. Activity Based Compression (ABC) – ABC aims to prevent constant compression and decompression of data that is active or frequently accessed. The ABC function marks the busiest data in the SRP to skip the compression flow, regardless of the storage group's compression setting. This function differentiates busy data from idle or less busy data, and the marked data accounts for at most 20% of the allocations in the SRP. Marking up to 20% of the busiest allocations to skip compression benefits the whole system as well as the end users.

This ensures optimal response time and reduces the overhead that can result from constantly decompressing frequently accessed data. The mechanism used to determine the busiest data does not add CPU load on the system; the function is similar to the FAST code used for promoting data in previous code releases. ABC leverages FAST statistics to determine which data sets are the best candidates for compression, maintaining balance across resources and providing an optimal environment for both the best possible compression savings and the best performance. Effectively, this avoids compress and decompress latency for the busiest data, and it reduces the system overhead of compression, allowing the focus to be on the best candidate data.


The figure above gives us a visual of how busy data remains uncompressed while idle data goes forward for compression to the pool.

4. Fine Grain Data Packing – In the VMAX AFA, the Adaptive Compression Engine splits each 128K I/O into four 32K buffers. Each buffer is compressed individually, in parallel, maximizing the efficiency of the compression I/O module. The total of the four buffers results in the final compressed size and determines where the data is allocated. Fine Grain Data Packing benefits performance, both for the compression functions and for the system overall. Included in the process is a zero reclaim function that prevents the allocation of buffers containing all zeroes or no actual data. Pairing the zero non-allocation function with Fine Grain Data Packing allows the compression function to operate very efficiently with minimal cost to performance.

Compressing the 128K I/O in four buffers individually and in parallel allows each section to be handled independently, even though they are all still part of the initial 128K I/O. In the event that only one or two of the sections need to be updated or read, only that data is decompressed.


This figure represents a 128K write I/O divided into four cache buffers. Each buffer starts as 32K and is compressed individually. The sum of the four sections creates a 64K compressed track. The savings achieved in this example is 2:1, as the 128K I/O is compressed and allocated as a 64K track.
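The split-and-compress flow can be mimicked with a toy Python sketch, with zlib standing in for the hardware compression modules (ACE's actual algorithm and pool mapping are internal to the array, and the synthetic data below compresses far better than 2:1):

```python
import zlib

io_128k = bytes(range(256)) * 512          # a 128 KB write I/O (synthetic data)

# Split into four 32 KB buffers and compress each independently,
# as fine-grain data packing does
buffers = [io_128k[i:i + 32 * 1024] for i in range(0, len(io_128k), 32 * 1024)]
sizes = [len(zlib.compress(b)) for b in buffers]

total = sum(sizes)
print(f"4 x 32K -> {sizes} bytes each, {total} bytes total")
# The summed size determines the back-end pool (e.g. <= 64K -> a 2:1 track)
```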

Managing Compression

Enabling Compression – Compression is enabled by default when provisioning storage using Unisphere for VMAX, Solutions Enabler, or the REST API. The Unisphere provisioning wizard includes the compression option as a check box when the storage group is being created. The compression option is available for managed storage groups; a storage group is managed when a storage resource pool (SRP) is assigned to it. If no SRP is assigned, the compression option is not available.

When compression is enabled, the data sent to the system passes through the compression path and is compressed, when possible, to the best-case track size available in the array. The compressed data is allocated to the appropriate compression pool. When the modify storage group option is used to enable compression, the data is not immediately compressed: all incoming data is sent through the compression flow, while existing data is compressed when accessed and over time. In parallel, a code function scans data sets looking for data to be compressed and does so when such data is encountered.

Disabling Compression – Disabling compression does not immediately start a decompression process. Just as when enabling compression on an existing storage group, the data is decompressed when accessed and over time. In parallel, the code function finds data that should not be compressed and decompresses it.

When modifying storage groups, changing the assigned SRP to None automatically disables compression.

Compression Displays with Unisphere for VMAX

There are two compression reporting levels with regard to the savings achieved by ACE: the overall system level and the storage group level. Overall system compression can be viewed within the Unisphere capacity report. System achieved compression accounts for all data allocated to the system. Storage group achieved compression can be found in a few different views and accounts only for allocations that relate to that group. In addition, there is a compressibility report that provides the possible achievable compression ratio for storage groups where compression is not enabled.

Unisphere Capacity Report

The capacity report presents a system’s efficiency using a few sections. This view shows the system compression ratio as well as the capacity usage.

The figure below represents the array capacity usage in two factors, subscribed capacity and usable capacity. Subscribed capacity represents the total amount of requested front end host and eNAS capacity plus system-configured capacity such as Guest OS and RecoverPoint (RP) devices. The blue portion of the display includes logical allocated capacity based on a track size of 128K. The usable capacity represents the amount of total physical capacity available using the pool track size of all enabled data devices (TDATs). The blue portion represents the allocated physical capacity for all front end hosts, eNAS as well as internal devices such as Guest OS and RP devices.


The figure below shows the current overall system compression ratio. The COMPRESSION ENABLED STORAGE percentage represents the total amount of data populating the system where compression is enabled. System compression can also be displayed using Solutions Enabler, as seen in the SYMMETRIX EFFICIENCY display (symcfg -sid xxx list -efficiency, where xxx is the last three digits of the system serial number).



Compressibility Report

The compressibility report provides a list of storage groups that do not have compression enabled. The list displays information specific to each storage group, such as the number of volumes, allocated capacity, used capacity, and target compression ratio. The # of volumes is how many devices are in the storage group. The allocated capacity and used capacity reflect how much of the capacity has actually been written to the system. The target ratio presents the user with the potential compression ratio that could be achieved if compression were enabled.

These are the reports taken from Unisphere and Solutions Enabler:


Finally, I would like to include some compression I/O flow models, which show how the I/O behaves in different scenarios:







The primary area of focus for any storage administrator is physical storage capacity, and with large amounts of data being produced each year, the need for greater efficiency is critical. The VMAX AFA and ACE enable you to lower your data consumption and in turn deliver savings through a smaller data center footprint, fewer physical drives, and lower power and cooling costs. Finally, it's simple to use: enabling and disabling can be achieved with a single click or command, and the system handles all the work.

I hope you found this post helpful and informative, please let me know if you have any questions.


Data backup from EP

Hello All,

We are deploying an EC appliance in HA and a console on a VM. No DR is planned. So I want to take a daily backup of the EP log data. I have an estimated 5000 EPS of traffic, which works out to up to 260 GB of logs every day. My question is: is there any native mechanism in QRadar by which this data can be compressed? If not, can I compress the data myself and then use it? I am not talking about the archived logs, which go to archive storage after a year.

Please suggest. Thanks
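As a quick sanity check of the arithmetic in the post (an inference from the figures given, not a QRadar specification):

```python
# 5000 EPS producing ~260 GB/day implies an average stored event
# size of roughly 600 bytes
eps = 5000
seconds_per_day = 24 * 60 * 60
events_per_day = eps * seconds_per_day            # 432,000,000 events

daily_bytes = 260 * 10**9
avg_event_size = daily_bytes / events_per_day
print(f"{events_per_day:,} events/day -> ~{avg_event_size:.0f} bytes/event")
```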


QRadar Compression vs. Storwize HW Compression

We have a Storwize V7000 with hardware compression. Can and should QRadar compression be turned off? Or should I create an uncompressed volume and let QRadar handle the compression? I suppose having double compression is a bad idea.