Hotfix XS73E003 – For XenServer 7.3

Who Should Install This Hotfix?

This is a hotfix for customers running XenServer 7.3.

All customers who are affected by the issues described in CTX232655 – Citrix XenServer Multiple Security Updates should install this hotfix.

Information About this Hotfix

Component                     Details
Prerequisite                  None
Post-update tasks*            Restart Host
Content live patchable**      No
Baselines for Live Patch      N/A
Revision History              Published on Mar 21, 2018
* Important: If you have previously disabled microcode loading on your XenServer host or pool, you must enable microcode loading again after applying this hotfix. For more information, see How to disable microcode loading on a XenServer pool.
** Available to Enterprise Customers.

Issues Resolved In This Hotfix

This security hotfix addresses the vulnerabilities as described in the Security Bulletin above. In addition, it resolves the following issues:

  • XenServer versions 7.3 and 7.0 are unable to correctly load the new microcode for AMD Family 17h CPUs that contains mitigations for the Spectre vulnerability. Systems where the microcode is updated by the BIOS remain unaffected.
  • VMs running on AMD EPYC hardware with microcode that enables mitigations for CVE-2017-5715 (Spectre variant 2) might crash if the guest attempts to make use of these mitigations.

This hotfix also includes the following previously released hotfix:

Installing the Hotfix

Customers should use either XenCenter or the XenServer Command Line Interface (CLI) to apply this hotfix. When the installation is complete, see the Post-update tasks entry in the table in Information About this Hotfix for any tasks you must perform for the update to take effect. As with any software update, back up your data before applying this update. Citrix recommends updating all hosts within a pool sequentially. Schedule host upgrades to minimize the amount of time the pool runs in a “mixed state”, where some hosts are upgraded and some are not. Running a mixed pool of updated and non-updated hosts for general operation is not supported.
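
For example, you can capture a copy of the pool database from the CLI before you begin (VM data needs its own backup mechanism). A minimal sketch, assuming the off-host xe CLI is installed; the output file name is a placeholder:

xe -s <server> -u root -pw <password> pool-dump-database file-name=pool-backup-before-XS73E003.db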

Note: The attachment to this article is a zip file. It contains the hotfix update package only. Click the following link to download the source code for any modified open source components: XS73E003-sources.iso. The source code is not necessary for hotfix installation; it is provided to fulfill licensing obligations.

Installing the Hotfix by using XenCenter

Choose an Installation Mechanism

There are three mechanisms to install a hotfix:

  1. Automated Updates
  2. Download update from Citrix
  3. Select update or Supplemental pack from disk

The Automated Updates feature is available to XenServer Enterprise Edition customers and to those who have access to XenServer through their XenApp/XenDesktop entitlement. For information about installing a hotfix by using the Automated Updates feature, see the section Applying Automated Updates in the XenServer 7.3 Installation Guide.

For information about installing a hotfix using the Download update from Citrix option, see the section Applying an Update to a Pool in the XenServer 7.3 Installation Guide.

The following section contains instructions for option (3), installing a hotfix that you have downloaded to disk:

  1. Download the hotfix to a known location on a computer that has XenCenter installed.
  2. Unzip the hotfix zip file and extract the .iso file.
  3. In XenCenter, on the Tools menu, select Install Update. This displays the Install Update wizard.
  4. Read the information displayed on the Before You Start page and click Next to start the wizard.
  5. Click Browse to locate the iso file, select XS73E003.iso and then click Open.
  6. Click Next.
  7. Select the pool or hosts you wish to apply the hotfix to, and then click Next.
  8. The Install Update wizard performs a number of update prechecks, including the space available on the hosts, to ensure that the pool is in a valid configuration state. The wizard also checks whether the hosts need to be rebooted after the update is applied and displays the result.
  9. Follow the on-screen recommendations to resolve any update prechecks that have failed. If you want XenCenter to automatically resolve all failed prechecks, click Resolve All. When the prechecks have been resolved, click Next.

  10. Choose the Update Mode. Review the information displayed on the screen and select an appropriate mode.
  11. Note: If you click Cancel at this stage, the Install Update wizard reverts the changes and removes the update file from the host.

  12. Click Install update to proceed with the installation. The Install Update wizard shows the progress of the update, displaying the major operations that XenCenter performs while updating each host in the pool.
  13. When the update is applied, click Finish to close the wizard.
  14. If you chose to carry out the post-update tasks, do so now.

Installing the Hotfix by using the xe Command Line Interface

  1. Download the hotfix file to a known location.
  2. Extract the .iso file from the zip.
  3. Upload the .iso file to the Pool Master by entering the following commands:

    (Where -s is the Pool Master’s IP address or DNS name.)

    xe -s <server> -u <username> -pw <password> update-upload file-name=<filename>XS73E003.iso

    XenServer assigns the update file a UUID which this command prints. Note the UUID.

    4caa5859-f9a5-4a9f-a139-f252515e2feb

  4. Apply the update to all hosts in the pool, specifying the UUID of the update:

    xe update-pool-apply uuid=<UUID_of_file>

    Alternatively, if you need to update and restart hosts in a rolling manner, you can apply the update file to an individual host by running the following:

    xe update-apply host-uuid=<UUID_of_host> uuid=<UUID_of_file>

  5. Verify that the update was applied by using the update-list command.

    xe update-list -s <server> -u root -pw <password> name-label=XS73E003

    If the update is successful, the hosts field contains the UUIDs of the hosts to which this patch was successfully applied. This should be a complete list of all hosts in the pool.

  6. If the hotfix is applied successfully, carry out any specified post-update task on each host, starting with the master.
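
For reference, the numbered steps above can be combined into a short scripted sequence. This is a sketch only: it assumes the off-host xe CLI is available and that the --minimal flag (as on other xe commands) prints just the UUID so that it can be captured.

UPDATE_UUID=$(xe -s <server> -u root -pw <password> update-upload file-name=XS73E003.iso --minimal)
xe -s <server> -u root -pw <password> update-pool-apply uuid=$UPDATE_UUID
xe -s <server> -u root -pw <password> update-list name-label=XS73E003
# Finally, carry out the post-update task (restart each host), starting with the pool master.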

Files

Hotfix File

Component                     Details
Hotfix Filename               XS73E003.iso
Hotfix File sha256            8c3bb6d253dec1460a6559c90df503178397caa01d852d275960d32de83fe305
Hotfix Source Filename        XS73E003-sources.iso
Hotfix Source File sha256     442b434381a444090890b8f72a362287a62539764cd6160a4a46795f145ec801
Hotfix Zip Filename           XS73E003.zip
Hotfix Zip File sha256        9f5fd74cf12fcec5c5d2d115fe42c987f7b5687bf95470030f8851a4515eec1b
Size of the Zip file          31.76 MB
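
If you want to verify the download before installing it, you can compare the published sha256 values above with locally computed ones. A minimal sketch on a Linux machine (use the equivalent tool on other platforms):

sha256sum XS73E003.zip        # compare with the Hotfix Zip File sha256 above
unzip XS73E003.zip
sha256sum XS73E003.iso        # compare with the Hotfix File sha256 above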

Files Updated

linux-firmware-20170622-3.noarch.rpm
xen-tools-4.7.4-3.3.x86_64.rpm
dracut-network-033-360.el7.centos.xs13.x86_64.rpm
xen-libs-4.7.4-3.3.x86_64.rpm
xen-hypervisor-4.7.4-3.3.x86_64.rpm
microcode_ctl-2.1-16.xs2.x86_64.rpm
xen-dom0-tools-4.7.4-3.3.x86_64.rpm
xen-dom0-libs-4.7.4-3.3.x86_64.rpm
dracut-033-360.el7.centos.xs13.x86_64.rpm

More Information

If you experience any difficulties, contact Citrix Technical Support.

Related:

High Availability Synchronization on NetScaler Appliance

This article contains information about synchronization between appliances that are part of a high availability setup.

Background

High availability synchronization is the process by which configurations are kept identical between the appliances. It is not the process that runs individual commands on the secondary appliance after they complete successfully on the primary appliance; that process is called command propagation. Although the two are similar, they perform different functions.

High availability synchronization is used as a failsafe method of keeping configurations between the appliances synchronized whether the command propagation is successful or not.

High availability synchronization is performed by a process called nssync over TCP port 3008 or 3010. The process starts during system startup and sleeps on the secondary appliance. It uses RPC remote GET ioctls to collect the running configuration from the primary appliance and applies it to the secondary appliance.

It is important to note that the configuration synchronized is the running configuration, not the saved ns.conf file. The retrieved configuration is stored in the /tmp/ns_com_cfg.conf file; the secondary appliance runs the clear ns config -force extended command and then begins to load the new configuration. This process is easy to identify in the logs, as shown in the following snippet:

<local0.info> 192.168.200.42 01/16/2010:13:15:56 GMT ns2-ha : UI CMD_EXECUTED : User #nsinternal# - Remote_ip 127.0.0.1 - Command "clear ns config -force extended+" - Status "Success"
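
To check whether and when the secondary last pulled the configuration, you can inspect the HA state from the CLI and look for the same clear ns config entry in the system log. A minimal sketch; the exact field names in the show ha node output and the log path can vary slightly by release:

> show ha node
> shell
# grep -i "clear ns config" /var/log/ns.log | tail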

High availability synchronization is automatically triggered in the following circumstances:

  • When the nssync process starts during startup, if the appliance is a secondary appliance and it is aware of the existence of the primary appliance, synchronization is initiated and the process goes to sleep. If it is not aware of the primary appliance, the process immediately goes to sleep.
  • When synchronization is not disabled (manually or automatically) and the high availability engine detects a configuration mismatch, it starts the nssync process, which then collects the remote configuration.

Usually, synchronization is initiated by the second scenario: when the nssync process starts during startup, the high availability setup is in the INIT state and is therefore unaware of the existence of the primary appliance. After the high availability state machine detects the primary appliance and a configuration mismatch, it starts the nssync process to collect and apply the remote configuration.

Forced Synchronization

The NetScaler appliance also supports a user-forced synchronization of the appliances that are part of a high availability setup. The administrator can force the synchronization from either the primary or the secondary appliance; the configuration always comes from the primary appliance, no matter which appliance the command is run on. If synchronization is already in progress when the forced synchronization is initiated, the appliance responds with an error message. Forced synchronization also fails in the following situations:

  • When you try to force synchronization on a standalone appliance
  • When the secondary appliance is disabled or unreachable
  • When high availability synchronization is disabled on the secondary appliance
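
Whether synchronization is enabled is a per-node setting configured on the secondary appliance. A hedged example, with the parameter name as used in recent NetScaler releases, of checking the setting and turning it back on:

> show ha node
> set ha node -haSync ENABLED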

An administrator can force synchronization from the Graphical User Interface (GUI) or command line interface of the appliance.

Force Synchronization From the GUI

To initiate force synchronization from the Configuration utility of the NetScaler appliance, complete the following procedure:

  1. Expand the System node.
  2. Select the High Availability node.
  3. Select the Nodes tab.
  4. Click Force Synchronization.

Force Synchronization From the CLI

To initiate force synchronization from the Command Line Interface of a NetScaler appliance, run the following command:

>force ha sync

It is important to note that this command only synchronizes the running configuration as previously noted and has no effect on the saved configuration or other files that might need to be synchronized between the primary and secondary appliances.
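
As previously noted, this command does not touch files on disk. Certificates, keys, and similar files are synchronized by the separate sync HA files command, and the running configuration must be saved (save ns config) on each node to persist across a restart. A hedged example (the file categories accepted by the command vary by release):

> sync ha files all
> save ns config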

Related:

NetScaler High Availability Counters

Machine Translation

This article was translated by an automatic translation system and has not been reviewed by people. Citrix provides automatic translation to increase access to support content; however, automatically translated articles may contain errors. Citrix is not responsible for inconsistencies, errors, or damage resulting from the use of automatically translated articles.

Related:

  • No Related Posts

Citrix NetScaler Interface Tagging and Flow of High Availability Packets

This article describes the flow of High Availability packets when various combinations of tagging are implemented in the NetScaler configuration.

Flow of High Availability Packets

Heartbeats, that is, high availability packets, are always untagged unless the NSVLAN is configured using the set ns config -nsvlan command, or an interface is configured with the -trunk on option (NetScaler software release 9.2 and earlier) or the -tagall option (NetScaler software release 9.3 and later).
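
The scenarios below use the pre-9.3 -trunk syntax. For reference, a hedged example of the equivalent setting on release 9.3 and later for the same interface:

set interface 1/1 -tagall ON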

The following scenarios help in describing the flow of the High Availability packets:

Scenario 1

NSVLAN is the default of 1

interface 1/1 is bound to VLAN 2

Interface 1/2 is bound to VLAN 3

For example:

add vlan 2
add vlan 3
bind vlan 2 -ifnum 1/1
bind vlan 3 -ifnum 1/2

High Availability packets flow as untagged on the 1/1 and 1/2 interfaces on the native VLAN.

Scenario 2

NSVLAN is the default of 1

interface 1/1 is bound to VLAN 2, which is configured with -trunk ON

Interface 1/2 is bound to VLAN 3, which is configured with -trunk OFF (the default)

For example:

set interface 1/1 -trunk ON
add vlan 2
add vlan 3
bind vlan 2 -ifnum 1/1
bind vlan 3 -ifnum 1/2

High Availability packets flow on 1/1 as tagged with a VLAN ID of 2, and untagged on the 1/2 interface.

Scenario 3

NSVLAN is VLAN10 (non default)

interface 1/1 is bound to VLAN 2

interface 1/2 is bound to VLAN 3

interface 1/3 is bound to VLAN 10

For example:

add vlan 2
add vlan 3
bind vlan 2 -ifnum 1/1
bind vlan 3 -ifnum 1/2
set ns config -nsvlan 10 -ifnum 1/3

High Availability packets flow as tagged on VLAN 10, interface 1/3 only and do not flow on VLAN 2 or VLAN 3.

Related:

NetScaler High Availability Failover Does Not Work in AWS Environment

When you run the HA failover command in an Amazon Web Services (AWS) high availability environment, the failover might not occur. This article explains two such scenarios and their resolution.

Background

After you configure a high availability setup in an AWS environment, the failover process requires the NetScaler VPX instance to contact the AWS REST API server. The communication uses the IAM user account that is associated with the NetScaler VPX instance. If the IAM user does not have the appropriate permissions, the API calls receive no response or fail, and the failover does not complete.

For example, when initiating a high availability failover on the secondary appliance, ns.log might record the following error messages:

Mar 2 02:36:49 <local0.info> ns ptpd: nsnet_read: select FD=4 failed with 4: Interrupted system call
Mar 2 02:36:49 <local0.info> ns last message repeated 2 times
Mar 2 02:36:49 <local0.info> ns ptpd: protocol error: 4 - Interrupted system call
Mar 2 02:36:49 <local0.info> ns ptpd: nsnet_read: select FD=4 failed with 4: Interrupted system call
Mar 2 02:36:50 <local0.info> ns last message repeated 2 times
Mar 2 02:36:50 <local0.info> ns ptpd: protocol error: 4 - Interrupted system call
Mar 2 02:36:50 <local0.info> ns ptpd: nsnet_read: select FD=4 failed with 4: Interrupted system call
Mar 2 02:36:50 <local0.info> ns last message repeated 2 times
Mar 2 02:36:50 <local0.info> ns ptpd: protocol error: 4 - Interrupted system call
Mar 2 02:36:50 <local0.info> ns ptpd: nsnet_read: select FD=4 failed with 4: Interrupted system call
Mar 2 02:36:50 <local0.info> ns last message repeated 2 times
Mar 2 02:36:50 <local0.info> ns ptpd: protocol error: 4 - Interrupted system call
Mar 2 02:36:50 <local0.info> ns ptpd: nsnet_read: select FD=4 failed with 4: Interrupted system call
Mar 2 02:36:51 <local0.info> ns last message repeated 2 times
Mar 2 02:36:51 <local0.info> ns ptpd: protocol error: 4 - Interrupted system call
Mar 2 02:36:51 <local0.info> ns ptpd: nsnet_read: select FD=4 failed with 4: Interrupted system call
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG 2 interfaces will move Primary from instance i-9e5c23ed to i-7cf18e0f
Mar 2 02:36:51 <local0.notice> 10.217.245.106 03/02/2013:02:36:49 GMT 0-PPE-0 : EVENT STATECHANGE 114 0 : Device "self node 10.217.245.106" - State Primary (Remote node - ACTIVE, UP)
Mar 2 02:36:51 <local0.alert> 10.217.245.106 03/02/2013:02:36:51 GMT 0-PPE-0 : PITBOSS1 Message 115 0 : "Sat Mar 2 02:36:49 2013 PB_OP_CHANGE_POLICY new policy 0x28b5 (10421)"
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG Detaching Id eni-1276217f attachId eni-attach-5aaa4d31
Mar 2 02:36:51 <local0.notice> 10.217.245.106 03/02/2013:02:36:51 GMT 0-PPE-0 : EVENT DEVICEUP 117 0 : Device "server_svc_NSSVC_HTTP_127.0.0.1:80(internal)" - State UP
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG Final Call AWSAccessKeyId=AKIAI4MDH73LWHPYWNBQ&Signature=DJq%2Fl8Cc9ub1EBIbiOrA8pTOV6MX8xw6CwGKDyse7I4%3D&Version=2012-06-15&Timestamp=2013-03-02T02%3A36%3A51Z&Action=DetachNetworkInterface&AttachmentId=eni-attach-5aaa4d31&Force=True&SignatureMethod=HmacSHA256&SignatureVersion=2 len 268
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG AWS COMMON... status 403 res 0 failed 1
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG AWS API request failed ret = 403
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG retrying.... 2
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG AWS COMMON... status 403 res 0 failed 1
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG AWS API request failed ret = 403
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG retrying.... 1
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG AWS COMMON... status 403 res 0 failed 1
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG AWS API request failed ret = 403
Mar 2 02:36:51 <local0.info> ns awsconfig: AWSCONFIG Failed AWS API DetachNetworkInterface due to 403
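
The repeated 403 responses above indicate that the IAM user tied to the VPX instance is not authorized for the EC2 API calls made during failover. As an illustration only (the authoritative list of required actions is given in the Citrix documentation for NetScaler VPX on AWS), an IAM policy covering the interface moves seen in this log would need to allow at least actions such as the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DetachNetworkInterface",
                "ec2:AttachNetworkInterface"
            ],
            "Resource": "*"
        }
    ]
}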

Related:

High Availability Heartbeats Not Shown on all NetScaler Interfaces

Machine Translation

This article was translated by an automatic translation system and has not been reviewed by people. Citrix provides automatic translation to increase access to support content; however, automatically translated articles may contain errors. Citrix is not responsible for inconsistencies, errors, or damage resulting from the use of automatically translated articles.

Related:

  • No Related Posts

Session Reliability Feature Does Not Restore Session When ICA AppFlow is Enabled and NetScaler High Availability Failover Occurs

  • If a NetScaler high-availability failover occurs when ICA AppFlow is enabled, the session reliability feature now restores the session. This capability is disabled by default and is configurable through the CLI. The CLI command to enable or disable the feature is:

    set ica parameter EnableSRonHAFailover YES/NO

    For more information, see http://docs.citrix.com/en-us/netscaler/11-1/ns-ag-appflow-intro-wrapper-con/session-reliablility-on-netscaler-ha-pair.html

    [From Build 49.16] [#456218, 438710, 547601, 620411]
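
    For example, to turn the feature on and then confirm the setting (the parameter is used exactly as named above; show ica parameter is assumed to display the current value on builds that support the feature):

    set ica parameter EnableSRonHAFailover YES
    show ica parameter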

  • Related:

    High Availability Synchronization Failure When Masked Virtual Server is Configured on NetScaler Appliance

    High Availability synchronization failure on NetScaler appliance.

    The secondary node sends a SYN packet, and a reset is observed in this environment.

    The following error will appear:

    Oct 27 12:00:01 <local0.info> NS5-Chicago nsconf: nsnet_connectremoteas: failed to connect to host 10.28.153.60
    Oct 27 12:00:04 <local0.alert> 10.28.153.61 10/27/2014:17:00:04 GMT NS5-Chicago 0-PPE-0 : EVENT STATECHANGE 14724 0 : Device "self node 10.28.153.61" - State "SYNC Failure - Save remote config failed"

    Telnet to port 3010 from the secondary to the primary node fails, and a reset with window size 8121 is observed in the network trace.
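
    You can reproduce this check from the shell of the secondary appliance (addresses as in the configuration below):

    # telnet 10.28.153.60 3010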

    The following is a snippet from the configuration where this issue was troubleshot.

    Masked virtual server:

    add lb vserver vip_httpredirect_subnet HTTP -Pattern 10.28.153.0 -IPMask 255.255.255.0 80 -persistenceType NONE -cltTimeout 180

    NSIPs of the High Availability pair:

    set ns config -IPAddress 10.28.153.61 -netmask 255.255.255.0

    set ns config -IPAddress 10.28.153.60 -netmask 255.255.255.0

    Related:

    High Availability Failovers Due to Missed HA HeartBeats of NetScaler VPX on VMware ESX Hypervisor

    Using newnslog events to confirm that the VPX has scheduling issues

    Check the failover event in the /var/nslog/newnslog*.

    nsconmsg -K newnslog -d event | grep -E "node|heartbeat" | more

    Here is an example of what is seen for an HA failover due to missed HA heartbeats.

    Primary Device:

    (This device was primary; it is now secondary because the secondary device stopped receiving HA heartbeats from it and took over.)

     2077 7537 PPE-0 self node 192.168.1.10: INIT due to REQUEST from HA peer node Tue Jul 26 10:20:25 2016
     2062    0 PPE-1 self node 192.168.1.10: INIT due to REQUEST from HA peer node Tue Jul 26 10:20:25 2016
     2064    0 PPE-2 self node 192.168.1.10: INIT due to REQUEST from HA peer node Tue Jul 26 10:20:25 2016
     2085    0 PPE-2 self node 192.168.1.10: Secondary Tue Jul 26 10:20:25 2016

    Secondary Device:

    (This device was secondary; it missed the required HA heartbeats from the primary, which triggered the HA failover, and it is now primary.)

     2630 7529 PPE-0 interface(0/1): No HA heartbeats (Last received: Tue Jul 26 10:20:24 2016; Missed 15 heartbeats) Tue Jul 26 10:20:27 2016
     2631    0 PPE-0 interface(1/1): No HA heartbeats (Last received: Tue Jul 26 10:20:24 2016; Missed 15 heartbeats) Tue Jul 26 10:20:27 2016
     2632    0 PPE-0 interface(1/2): No HA heartbeats (Last received: Tue Jul 26 10:20:24 2016; Missed 15 heartbeats) Tue Jul 26 10:20:27 2016
     2633    0 PPE-0 interface(1/3): No HA heartbeats (Last received: Tue Jul 26 10:20:24 2016; Missed 15 heartbeats) Tue Jul 26 10:20:27 2016
     2634    0 PPE-0 remote node 192.168.1.10: DOWN Tue Jul 26 10:20:27 2016
     2635    0 PPE-0 self node 192.168.1.20: Claiming Tue Jul 26 10:20:27 2016
     2636    0 PPE-0 self node 192.168.1.20: Primary Tue Jul 26 10:20:27 2016

    Examining the netio_tot_called counter to confirm that VPX has scheduling issues

    In the following logs, we see that counter logging stopped for a few seconds on both VPXs during the HA failover, which means that the VPX virtual machine was scheduled out.

    netio_tot_called: the number of times the netio function is called. This function is called every time the NetScaler needs to start packet processing. The counter is recorded every seven (7) seconds, so ideally the gap between successive samples should be seven seconds.

    Collector bundle for 192.168.1.10 – /var/nslog/

    nsconmsg -g netio_tot_called -d current -K newnslog -s time=26Jul2016:10:20 -s disptime=1 |more

     Index  rtime  totalcount-val  delta  rate/sec  symbol-name&device-no&time
     0      3585223  287355050  56748   8105  netio_tot_called Tue Jul 26 10:20:08 2016
     1      7002     287381927  26877   3838  netio_tot_called Tue Jul 26 10:20:15 2016
     2      7002     287408841  26914   3843  netio_tot_called Tue Jul 26 10:20:22 2016
     3      7002     287554531  85636  12230  netio_tot_called Tue Jul 26 10:20:34 2016   <-- Here we have a 12-second gap; ideally it should have been just 7 seconds
     4      7002     287593240  38709   5528  netio_tot_called Tue Jul 26 10:20:41 2016
     5      7003     287621530  28290   4039  netio_tot_called Tue Jul 26 10:20:48 2016
     6      7003     287648373  26843   3833  netio_tot_called Tue Jul 26 10:20:55 2016
     7      7001     287676102  27729   3960  netio_tot_called Tue Jul 26 10:21:02 2016
     8      7004     287703248  27146   3875  netio_tot_called Tue Jul 26 10:21:09 2016
     9      7004     287730415  27167   3878  netio_tot_called Tue Jul 26 10:21:16 2016

    Collector bundle for 192.168.1.20 – /var/nslog/

    nsconmsg -g netio_tot_called -d current -K newnslog -s time=26Jul2016:10:20 -s disptime=1 |more

     Index  rtime  totalcount-val  delta  rate/sec  symbol-name&device-no&time
     0      343090   246967167  26729   3817  netio_tot_called Tue Jul 26 10:20:07 2016
     1      7001     246994115  26948   3849  netio_tot_called Tue Jul 26 10:20:14 2016
     2      7003     247019658  25543   3647  netio_tot_called Tue Jul 26 10:20:21 2016
     3      12698    247055240  35582   2802  netio_tot_called Tue Jul 26 10:20:33 2016   <-- Here is the 12-second gap
     4      7012     247125542  70302  10025  netio_tot_called Tue Jul 26 10:20:40 2016
     5      7001     247200102  25784   3682  netio_tot_called Tue Jul 26 10:20:55 2016

    Examining the sys_cur_duration_since_start counter to confirm that VPX has scheduling issues

    You can also verify this issue by using the sys_cur_duration_since_start counter, which is updated every second; between samples it should therefore show a delta of seven (7) seconds in the ideal case. Gaps in this uptime counter clearly indicate lost CPU time.
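
    For example, mirroring the nsconmsg invocation used above for netio_tot_called (the counter name is taken from the output below):

    nsconmsg -g sys_cur_duration_sincestart -d current -K newnslog -s disptime=1 | more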

     9   7001   163.21:23:31   7  0  sys_cur_duration_sincestart Mon Aug 14 13:32:12 2017
     10  12201  163.21:23:43  12  0  sys_cur_duration_sincestart Mon Aug 14 13:32:25 2017   <-- Delta value more than 7
     11  7002   163.21:23:50   7  0  sys_cur_duration_sincestart Mon Aug 14 13:32:32 2017

    Citrix Documentation – Managing High Availability Heartbeat Messages on a NetScaler Appliance


    Related:

    • No Related Posts

    Troubleshooting NetScaler High Availability (HA) and Sync Cheat Sheet

    A high availability (HA) deployment of two NetScaler appliances can provide uninterrupted operation in any transaction. With one appliance configured as the primary node and the other as the secondary node, the primary node accepts connections and manages servers while the secondary node monitors the primary. If, for any reason, the primary node is unable to accept connections, the secondary node takes over.

    By default, NetScaler (NS) sends heartbeats every 200 ms, and the dead interval is 3 seconds: a peer node is marked DOWN if no heartbeat messages are received from it for three seconds.
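
    These intervals are tunable per node. A hedged example, with parameter names as used in recent NetScaler releases, that sets them explicitly to the defaults and then displays the node settings:

    > set ha node -helloInterval 200 -deadInterval 3
    > show ha node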

    The NetScaler high availability and sync cheat sheet provides you with the most commonly used resources to troubleshoot NetScaler high availability and sync issues. Use the following link to download the High Availability (HA) and Sync Cheat Sheet.


    Related: