SMG downstream load balancing questions

I need a solution

I have recently seen a document about SMG downstream message distribution.

https://support.symantec.com/en_US/article.TECH209…

The document states that SMG does NOT provide a load balancing feature in the four configuration areas mentioned.

I have a question about this.

Does [Content > Email > Add > Actions > Route the message] also not provide equal downstream distribution when using MX lookup?

I configured a content rule that routes messages to a specific domain that has multiple MX records with equal preference (the next hop is actually Symantec DLP).

But it doesn’t look like SMG distributes messages equally.
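For context, equal downstream distribution across equal-preference MX records would require the sender to rotate or randomize among the lowest-preference hosts. The sketch below is purely illustrative of that selection logic (it is not SMG's internal behavior); it assumes the dnspython package and uses a hypothetical next-hop domain:

# Minimal sketch: resolve MX records and pick among equal lowest-preference hosts.
# Assumes the dnspython package; "dlp.example.com" is a hypothetical next-hop domain.
import random
import dns.resolver

def pick_next_hop(domain: str) -> str:
    answers = dns.resolver.resolve(domain, "MX")
    lowest = min(r.preference for r in answers)
    # All hosts sharing the lowest preference are equally eligible.
    candidates = [str(r.exchange) for r in answers if r.preference == lowest]
    return random.choice(candidates)  # even distribution requires rotation/randomization here

if __name__ == "__main__":
    counts = {}
    for _ in range(1000):
        host = pick_next_hop("dlp.example.com")
        counts[host] = counts.get(host, 0) + 1
    print(counts)  # roughly equal counts would indicate even distribution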


Performance issue observed when USIP/RNAT and TCP Timestamp are enabled on both the client machine and the NetScaler

1) When USIP + Timestamp is enabled on the NetScaler (LB vserver and service) and TCP timestamp is also enabled on the client machine through registry settings. RESULT: The NetScaler sends the same TSVAL to the back-end server and latency issues are observed. After the browser is refreshed, the NetScaler sends the TSVAL properly to the back-end server and the required page is displayed.

2) When USIP + Timestamp is enabled only on the NetScaler (LB vserver and service) and TCP timestamp is not enabled on the client. RESULT: The NetScaler sends the TSVAL properly to the back end and no latency issues are observed.

3) When SNIP + Timestamp is enabled on the NetScaler (LB vserver and service) and TCP timestamp is enabled on the client. RESULT: The NetScaler sends the same TSVAL to the back-end server; however, no delay is observed.

4) Additionally, the NetScaler advertises an MSS of 1448 instead of 1460 when TCP timestamp is enabled on the NetScaler.
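The MSS difference in point 4 is consistent with TCP option overhead: the timestamps option occupies 10 bytes, padded to 12, which on a standard 1500-byte MTU reduces the usable segment size from 1460 to 1448. A quick back-of-the-envelope sketch of that arithmetic (an illustration, not NetScaler code):

# Sketch: why an advertised MSS of 1448 is expected with TCP timestamps on a 1500-byte MTU.
MTU = 1500
IP_HEADER = 20          # IPv4 header without options
TCP_HEADER = 20         # TCP header without options
TIMESTAMP_OPTION = 12   # 10-byte timestamps option padded to a 4-byte boundary

mss_without_ts = MTU - IP_HEADER - TCP_HEADER                   # 1460
mss_with_ts = MTU - IP_HEADER - TCP_HEADER - TIMESTAMP_OPTION   # 1448
print(mss_without_ts, mss_with_ts)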


How to Increase Seed Database Size for the URL Filtering Feature

Complete the installation of NetScaler release 12.0 build 53.110 and wait for the NetScaler node to return to service after the reboot, then follow the steps below:

  1. Modify the file “/flash/boot/loader.conf”, appending the following line (a scripted sketch of this edit appears at the end of this section):
netscaler.bsd_max_mem_mb=5000

After the change the file should look like this…

autoboot_delay=3
boot_verbose=0
kernel="/ns-12.0-53.110"
vfs.root.mountfrom="ufs:/dev/md0c"
console="vidconsole,comconsole"
netscaler.bsd_max_mem_mb=5000

This change is needed in order to accommodate the larger Seed DB sizes available with the new SDK.

  2. Reboot the node for the new setting to take effect.

  3. Change Seed DB size level to the max value with the “SeedDbSizeLevel” parameter:
> set urlfiltering parameter -SeedDbSizeLevel 5

The updated seed DB size will take effect on the next automatic seed DB update. The update schedule is defined by the HoursBetweenDbUpdates and TimeOfDayToUpdateDB parameters. An update will only occur if a new version is available.

  4. Verify the downloaded Seed DB size.

Once the Seed DB update process has completed successfully, check the “/var/gcf1/data” directory and verify that the size of the fcdb.now file approximately matches the configured CLI level.


NOTE: To use the larger seed DB sizes, the memory change in step 1 must be made first so that the system is able to allocate that memory.

With smaller seed DB sizes, it is possible that the URL filtering feature will require more frequent access to the non-local DB through an Internet connection.
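As referenced in step 1, the loader.conf edit can also be scripted. Below is a minimal sketch (not an official tool) that appends the setting only if it is not already present; it assumes a Python interpreter is available from the appliance shell, and the file path and value are the ones given above:

# Sketch: idempotently append the bsd_max_mem_mb setting to /flash/boot/loader.conf.
# Back up the file before editing; the same change can also be made manually with vi.
LOADER_CONF = "/flash/boot/loader.conf"
SETTING = "netscaler.bsd_max_mem_mb=5000"

with open(LOADER_CONF, "r") as f:
    contents = f.read()

if SETTING not in contents:
    with open(LOADER_CONF, "a") as f:
        # Ensure the new setting starts on its own line.
        if contents and not contents.endswith("\n"):
            f.write("\n")
        f.write(SETTING + "\n")
    print("Added:", SETTING)
else:
    print("Setting already present; no change made.")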


StoreFront Loopback Feature analysis when configuring the Base URL for load balancing

In previous versions of StoreFront such as 2.6 or older, Citrix recommended that you manually modify the hosts file on each StoreFront server to map the fully qualified domain name (FQDN) of the load balancer to the loopback address or the IP address of the specific StoreFront server. This ensures that Receiver for Web always communicates with the StoreFront services on the same server in a load balanced deployment. This is necessary because an HTTP session is created during the explicit login process between Receiver for Web and the authentication service and Receiver for Web communicates with StoreFront services using the base FQDN. If the base FQDN were to resolve to the load balancer, the load balancer could potentially send the traffic to a different StoreFront server in the group, leading to authentication failure. This does not bypass the load balancer except when Receiver for Web attempts to contact the Store service residing on the same server as itself.

You can set loopback options using PowerShell. Enabling loopback negates the need to create host file entries on every StoreFront server in the server group.

Example Receiver for Web web.config file:

<communication attempts="2" timeout="00:01:00" loopback="On" loopbackPortUsingHttp="80">

Example PowerShell command:

& "C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1"

Set-DSLoopback -SiteId 1 -VirtualPath "/Citrix/StoreWeb" -Loopback "OnUsingHttp" -LoopbackPortUsingHttp 81


How to Set Up a High Availability Pair on NetScaler

Set Up NetScaler High Availability

Note: Secure Shell (SSH) connection is used to execute the commands within this article.

Complete the following procedure to set up a High Availability pair on NetScaler appliances:

  1. Log in to the primary NetScaler appliance and run the following command from CLI:

    set ha node -hastatus STAYPRIMARY

  2. Log in to the secondary NetScaler appliance and run the following command from CLI:

    set ha node -hastatus STAYSECONDARY

  3. Run the following command on both primary and secondary NetScaler appliance to disable any network interface that is not connected to the network:

    disable interface <interface_num>

  4. From the primary NetScaler appliance run the following command from CLI to specify the ID and the NetScaler IP (NSIP) address of the secondary appliance:

    add HA node <id> <ipAddress>

    Note: The node ID can be any value up to the maximum of 64; for example, you can use 2 for the secondary appliance. The maximum of 64 does not mean that you can have 64 nodes in a high availability setup; it is simply the upper limit of the ID value. A high availability setup is always created from two appliances.

  5. Log into the secondary NetScaler appliance and run the following command in the CLI to specify the ID and the NetScaler IP (NSIP) address of the primary appliance:

    add HA node <id> <ipAddress>

  6. The RpcNode password must be set on both the appliances. The passwords must be the same on each appliance. The primary appliance must be aware of the secondary RpcNode password and the secondary appliance must be aware of the primary RpcNode password.

    Note: The NetScaler nsroot password must also be the same on each node. The RpcNode password does not have to be the same as the nsroot password.

    On the primary NetScaler Gateway appliance, run the following command from the command line interface:

    set ns rpcnode <ipAddress> -password <string>

    The IP address must be the IP address of the primary appliance. For more information on the ns rpcNode command refer to Citrix Documentation.

  7. Run the same command and specify the IP address of the secondary appliance. Use the same password.

  8. Repeat the action on the secondary NetScaler Gateway appliance, specifying both RpcNode passwords with the same commands.

  9. After you specify the RpcNode password on the primary and the secondary appliances, run the following command to check the setting:

    show ns rpcnode

  10. After the node and RpcNode password are set up correctly on both appliances, verify the node status with the following command (a scripted check using the NITRO API is sketched after these steps):

    show ha node

    If the RpcNode password is set correctly on both appliances, the status of the second appliance is displayed correctly. Otherwise, the remote node's status may appear as UNKNOWN.

    a. Node ID: 0
       IP: x.x.x.x (ns)
       Node State: UP
       Master State: Primary
       INC State: DISABLED
       Sync State: ENABLED
       Propagation: ENABLED
       Enabled Interfaces: 1/1
       Disabled Interfaces: 0/1 1/3 1/2 1/4
       HA MON ON Interfaces: 1/1
       Interfaces on which heartbeats are not seen:
       SSL Card Status: UP
       Hello Interval: 200 msecs
       Dead Interval: 3 secs

    b. Node ID: 2
       IP: x.x.x.x
       Node State: UP
       Master State: Secondary
       INC State: DISABLED
       Sync State: SUCCESS
       Propagation: ENABLED
       Enabled Interfaces: 1/1
       Disabled Interfaces: 0/1 1/3 1/2 1/4
       HA MON ON Interfaces: 1/1
       Interfaces on which heartbeats are not seen:
       SSL Card Status: UP
  11. Use the sync HA files command on the Primary appliance to force file synchronization from the primary appliance to the secondary appliance. This command synchronizes all the SSL Certificates, SSL CRL lists, and VPN bookmarks. The primary appliance is considered authoritative and files are copied from the primary to the secondary appliance overwriting all differences.

    sync ha files all

  12. To enable HA setup run the following command on both the primary and secondary NetScaler appliances:

    set ha node -hastatus ENABLED

  13. If you added a new appliance to an existing appliance to form an HA pair, go to the new appliance and remove the duplicate default route (0.0.0.0/0). Pairing adds the default route defined on the existing appliance, but does not remove the default route already configured on the new appliance.

  14. After all files are synchronized and communication between the secondary and primary appliances is working properly, test the failover scenario. The following command fully simulates a failover: the roles of the primary and secondary appliances are switched, the secondary appliance takes full control of all dedicated traffic, and it becomes the primary appliance.

    force HA failover


  15. When the high availability failover works successfully and you would like to return the primary appliance to its original state, use the command again to force the failover back.
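As referenced in step 10, the HA node state can also be checked programmatically via the NITRO REST API's hanode resource. The following is a minimal sketch, not an official tool; it assumes the Python requests package, the NSIP and credentials shown are placeholders, and the response field names can vary by release:

# Sketch: query HA node state via the NITRO API (hanode config resource).
import requests

NSIP = "192.0.2.10"          # placeholder NSIP
USER, PASSWORD = "nsroot", "changeme"   # placeholder credentials

resp = requests.get(
    f"https://{NSIP}/nitro/v1/config/hanode",
    headers={"X-NITRO-USER": USER, "X-NITRO-PASS": PASSWORD},
    verify=False,            # lab use only; supply a CA bundle in production
)
resp.raise_for_status()

for node in resp.json().get("hanode", []):
    # Print whichever state fields this build returns (for example id, ipaddress, state).
    print({k: v for k, v in node.items() if k in ("id", "ipaddress", "hastatus", "state")})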

Stay Secondary Appliance

The secondary appliance automatically becomes the primary appliance when the primary appliance is restarted and the connection between them is interrupted, causing the heartbeat to fail. You can force the secondary node to remain secondary even when the primary appliance is not accessible.

This can be very helpful in specific maintenance scenarios, for example when the secondary appliance fails and must be replaced. If the secondary appliance is replaced, the high availability setup can be re-created, but the configuration may not yet be synchronized; if the primary appliance then fails or becomes inaccessible for any reason, the secondary appliance becomes active without the proper configuration and can cause problems in the infrastructure. In some scenarios it could even overwrite the configuration of the previous primary appliance once communication between the secondary and primary appliances is re-established.


Run the following command from the command line interface on the secondary appliance to keep it as the secondary:

set ha node -hastatus STAYSECONDARY

To remove the STAYSECONDARY setting, run the following command:

set ha node -hastatus ENABLED


How to Change the nsroot Password for a Provisioned VPX and the SVM on NetScaler SDX

Follow the steps below to change the password of the default user account for the SDX:

  1. On the Configuration tab, in the navigation pane, expand System > User Administration, then click Users.
  2. In the Users pane, click the default user account (here: nsroot), and then click Edit.
  3. In the Configure System User screen, in Password and Confirm Password fields, enter the password of your choice.
  4. Click OK.

Follow the steps below to change the password of the default user account for a VPX instance:

  1. On the Configuration tab, in the navigation pane, expand NetScaler > Instances.
  2. In the Instances pane, click the instance for the password to be changed, and then click Edit.
  3. In the Configure NetScaler screen, in the Instance Administration section enter the password of your choice in the Password and Confirm Password fields.
  4. Click Done.

More Instructions for earlier builds:

Do not change the nsroot password directly on the NetScaler VPX instance. If you do so, the instance becomes unreachable from the Management Service.

To change a password, first create a new admin profile, and then modify the NetScaler instance, selecting this profile from the Admin Profile list. To change the password of NetScaler instances in a high availability setup, first change the password on the instance designated as the secondary node, and then change the password on the instance designated as the primary node. Remember to change the passwords only by using the Management Service.

For more information on how to create an Admin Profile on NetScaler SDX, refer to the following Citrix Documentation – Create an admin profile.


NetScaler Load Balancing Does Not Honor Persistence Under Certain Conditions

Users' sessions start timing out and they face connectivity issues.

For websites with authentication, users may be asked to log in again or may receive a Forbidden error message.

Persistence appears to be broken because the NetScaler distributes users according to the load balancing method and does not honor persistence under certain conditions, such as:

  1. Network-level congestion between the HA pair nodes.
  2. High rps on the vserver, particularly when almost every request creates a new persistence session, leading to session buildup.

In the newnslog, the following counters can be seen increasing:

dht_ns_tot_max_limit_exceeds dht_ns:LB_SESSION

This counter indicates that the NetScaler has hit the total limit for persistence sessions.

dht_err_unable_to_put_replica_del_msg

This counter indicates that the NetScaler is unable to sync/clear the session information on the secondary.

lb_sess_dht_ssf_pcb_backed_up_err

This indicates that the SSF connection between the primary and secondary NetScaler is facing issues.

Refer to the following table for the persistence session limits:

NETSCALER VERSION            | 10.0                         | 10.1/10.5                    | 11.0/11.1                    | 12.0/12.1
Default Limit of PERSISTENCE | nCore: 150,000/Packet Engine | nCore: 250,000/Packet Engine | nCore: 250,000/Packet Engine | nCore: 250,000/Packet Engine
Maximum Limit of PERSISTENCE | 1,000,000/Packet Engine*

*To set the Maximum Limit to this value, you must alter the value using this CLI command, where the number is 1000000 * Number of Packet Engines. Example for 4 PEs:

set lb parameter -sessionsThreshold 4000000
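The threshold scales linearly with the number of packet engines, as in the 4-PE example above. A trivial sketch of that arithmetic (the PE count is a placeholder you would take from your own appliance):

# Sketch: compute the -sessionsThreshold value for a given packet engine count.
MAX_PER_PE = 1_000_000      # maximum persistence sessions per packet engine
DEFAULT_PER_PE = 250_000    # default per packet engine on 10.1/10.5 and later

def sessions_threshold(packet_engines: int, per_pe: int = MAX_PER_PE) -> int:
    return per_pe * packet_engines

print(sessions_threshold(4))                  # 4000000, matching the command above
print(sessions_threshold(4, DEFAULT_PER_PE))  # 1000000, the default total for a 4-PE system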


NetScaler Appliance System Limits

*The sum total of virtual servers and services cannot exceed 60,000. For example, if you configure 4,000 virtual servers, then you cannot configure more than 56,000 services.

**Number of virtual servers includes Load Balancing, Content Switching, Domain Name System (DNS) name servers, Global Server Load Balancing (GSLB) Virtual Servers, SSL VPN Virtual Servers, RTSP virtual servers and persistence groups.

***250,000 per core is the default for NetScaler 10.1 and 10.5. To configure 1 million session entries per PE, run the following command:

set lb parameter -sessionsthreshold <1000000*number of PE>

For a 3 PE system, run the following command:

set lb parameter -sessionsthreshold 3000000


Large Dataset Design – Networking Considerations

In this final article of the series, we’ll take a look at some of the important aspects of large cluster front end networking and connection balancing.



OneFS supports a variety of Ethernet speeds, cable and connector styles, and network interface counts, depending on the node type selected. Unlike the back-end network, Isilon does not specify particular front-end switch models.

The recommendation is that each node have at least one front-end interface configured, preferably in at least one static SmartConnect zone. Additionally, the following cluster services actually require that every node has an address that can reach the external servers:

  • ICAP antivirus scanning.
  • Monitoring services such as SNMP & ESRS.
  • Authentication services such as Microsoft Active Directory (AD), LDAP, and NIS.

ESRS connectivity is not a hard requirement, because other nodes that are online can proxy out ESRS dial-home events; however, ensure that the ESRS service can reach external servers so that every node can register properly with the ESRS gateway servers.

Sometimes a cluster needs to be expanded to increase storage capacity or CPU, not because more front-end I/O is required. With Gen5 and earlier platforms, diskless accelerator nodes with large amounts of L1 cache can be added to handle streaming and concurrent client connections, thereby freeing up resources on the storage nodes. With Gen6 nodes, the storage to compute ratio has changed in favor of the latter, so this is less frequently an issue.

Although a cluster can be run in a ‘not all nodes on the network’ (NANON) configuration, the recommendation is to connect all nodes to the front-end network(s). In contrast with scale-up NAS platforms that use separate network interfaces for out-of-band management and configuration, Isilon traditionally performs all cluster network management in-band. Bear in mind that the Gen6 Isilon nodes all contain a 1Gb Ethernet port that can be configured for use as a dedicated management network and/or IPMI, simplifying administration of a large cluster.



OneFS also supports using the nodes’ DB9 serial ports as RS232 out-of-band management interfaces, and this practice is highly recommended for large clusters. Serial ports can provide reliable BIOS-level command line access for on-site or remote service staff to perform maintenance, troubleshooting and installation operations.



For most workflows on large clusters, the recommendation is to configure at least one front-end 40Gb or 10 Gb Ethernet connection per node to support the high levels of network utilization that often take place. Assigning each workload or data store to a unique IP address enables Isilon SmartConnect to move each workload to one of the other interfaces, minimizing the additional work that a remaining node in the SmartConnect pool must absorb and ensuring that the workload is evenly distributed across all the other nodes in the pool.



For a SmartConnect pool with four node interfaces, using the N * (N - 1) model results in three unique IP addresses being allocated to each node. A failure on one node interface causes each of that interface's three IP addresses to fail over to a different node in the pool, ensuring that each of the three active interfaces remaining in the pool receives one IP address from the failed node interface. If client connections to that node were evenly balanced across its three IP addresses, SmartConnect distributes the workloads to the remaining pool members evenly.
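As a rough illustration of the N * (N - 1) sizing model and the even redistribution described above, here is a simplified sketch (illustrative only, not SmartConnect's actual failover logic; addresses and interface names are placeholders) that allocates IPs for a four-interface pool and shows where a failed interface's addresses land:

# Sketch: N*(N-1) IP allocation for a SmartConnect pool and naive failover redistribution.
N = 4                                   # node interfaces in the pool
ips_per_interface = N - 1               # 3 IPs per interface under the N*(N-1) model
total_ips = N * ips_per_interface       # 12 IPs for the pool

# Assign placeholder IPs across the interfaces.
pool = {f"iface{i}": [f"10.0.0.{i * ips_per_interface + j + 1}" for j in range(ips_per_interface)]
        for i in range(N)}

failed = "iface0"
survivors = [i for i in pool if i != failed]

# Hand one of the failed interface's IPs to each surviving interface.
for ip, target in zip(pool[failed], survivors):
    pool[target].append(ip)
pool[failed] = []

for iface, ips in pool.items():
    print(iface, ips)   # each survivor now holds 4 IPs; the load moves evenly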



For large clusters, the highest IP allocation per cluster that Isilon recommends is a /23 subnet, or 510 usable addresses. There are very few cases that would require such a large IP allocation. From a load-balancing perspective, for dynamic pools, it is ideal (though optional) that all the interfaces have the same number of IP addresses, whenever possible.



More information on IP allocation sizing is available in the Isilon Advanced Networking Fundamentals guide.



Enabling jumbo frames (Maximum Transmission Unit set to 9000 bytes) yields slightly better throughput performance with slightly less CPU usage than standard frames, where the MTU is set to 1500 bytes. For example, with 10 Gb Ethernet connections, jumbo frames provide about 5 percent better throughput and about 1 percent less CPU usage.

OneFS provides the ability to optimize storage performance by designating zones to support specific workloads or subsets of clients. Different network traffic types can be segregated on separate subnets using SmartConnect pools.



For large clusters, partitioning the cluster's networking resources and allocating bandwidth to each workload minimizes the likelihood that heavy traffic from one workload will affect network throughput for another. This is particularly true for SyncIQ replication and NDMP backup traffic, which can definitely benefit from their own set of interfaces, separate from user and client I/O load.



As a best practice, many customers create separate SmartConnect subnets for the following types of traffic segregation:



  • Workflow separation.
  • SyncIQ Replication.
  • NDMP backup on target cluster.
  • Service Subnet for cluster administration and management traffic.
  • Different node types and performance profiles.



OneFS 8.0 and later include the ‘groupnet’ networking object as part of the support for multi-tenancy. Groupnets sit above subnets and pools and allow separate Access Zones to contain distinct DNS settings. In the example below, each node’s four front end NICs are configured for SyncIQ replication traffic, Isilon management traffic, and two client access data networks.

The management and data networks can then be incorporated into different Access Zones, each with their own DNS, directory access services, and routing, as appropriate.

Network Interface | Network                     | Access Zone
igb0              | Management traffic          | Systems zone
igb1              | SyncIQ replication traffic  | Systems zone
bnxe0             | Data network 1              | Data 1 zone
bnxe1             | Data network 2              | Data 2 zone

By default, OneFS SmartConnect balances connections among nodes by using a round-robin policy and a separate IP pool for each subnet. A SmartConnect license adds advanced balancing policies to evenly distribute CPU usage, client connections, or throughput. It also lets you define IP address pools to support multiple DNS zones in a subnet.

Load-balancing Policy | General or Other | Few Clients with Extensive Usage | Large Number of Persistent NFS & SMB Connections | Large Number of Transitory Connections (HTTP, FTP) | NFS Automount or UNC Paths Are Used
Round Robin           | ✓                | ✓                                | ✓                                                | ✓                                                  | ✓
Connection Count      |                  | ✓                                | ✓                                                | ✓                                                  | ✓
CPU Usage             |                  |                                  |                                                  |                                                    |
Network Throughput    |                  |                                  |                                                  |                                                    |
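To make the difference between the first two policies concrete, here is a small illustrative sketch (not OneFS code; the node names and connection counts are invented) contrasting round-robin selection with connection-count selection:

# Sketch: round-robin vs. connection-count selection across a SmartConnect pool.
from itertools import cycle

nodes = ["node1", "node2", "node3"]          # hypothetical pool members
active_connections = {"node1": 12, "node2": 3, "node3": 7}

rr = cycle(nodes)

def pick_round_robin() -> str:
    # Hand out nodes in strict rotation, regardless of current load.
    return next(rr)

def pick_connection_count() -> str:
    # Hand out the node currently holding the fewest client connections.
    return min(active_connections, key=active_connections.get)

for _ in range(4):
    print("round-robin:", pick_round_robin(), "| connection-count:", pick_connection_count())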

A ‘round robin’ load-balancing strategy is the recommendation for both client connection balancing and IP failover. During a cluster split or merge group change, the SmartConnect service does not respond to DNS inquiries. This is seldom an issue, as group changes typically take around 30 seconds; however, the time taken for a group change to complete can vary with the load on the cluster at the time of the change. Any time a node is added, removed, or rebooted in a cluster, there will be two group changes that affect SmartConnect: one for the down/split and one for the up/merge.

For large clusters, if group changes are adversely impacting SmartConnect’s load-balancing performance, the core site DNS servers can be configured to use a Round Robin configuration instead of redirecting DNS requests to SmartConnect. SmartConnect supports IP failover to provide continuous access to data when hardware or a network path fails. Dynamic failover is recommended for high availability workloads on SmartConnect subnets that handle traffic from NFS clients.

For optimal network performance, observe the following SmartConnect best practices:



  • Do not mix interface types (40Gb / 10Gb / 1Gb) in the same SmartConnect Pool
  • Do not mix node types with different performance profiles (for example, H600 and A200 interfaces).
  • Use the ‘round-robin’ SmartConnect Client Connection Balancing and IP-failover policies.



To evenly distribute connections and optimize performance, the recommendation is to size SmartConnect for the expected number of connections and for the anticipated overall throughput likely to be generated. The sizing factors for a pool include:



  • The total number of active client connections expected to use the pool’s bandwidth at any time.
  • Expected aggregate throughput that the pool needs to deliver.
  • The minimum performance and throughput requirements in case an interface fails.



Since OneFS is a single volume, fully distributed file system, a client can access all the files and associated metadata that are stored on the cluster, regardless of the type of node a client connects to or the node pool on which the data resides. For example, data stored for performance reasons on a pool of performance nodes (e.g., H600) can be mounted and accessed by connecting to an archive node (e.g., A200) in the same cluster. Note that the different types of Isilon nodes will deliver different levels of performance.



To avoid unnecessary network latency under most circumstances, the recommendation is to configure SmartConnect subnets such that client connections are to the same physical pool of nodes on which the data resides. In other words, if a workload’s data lives on a pool of F800 nodes for performance reasons, the clients that work with that data should mount the cluster through a pool that includes the same F800 nodes that host the data.



Keep in mind the following networking and name server considerations:



  • Minimize disruption by suspending nodes in preparation for planned maintenance and resuming them after maintenance is complete.
  • If running OneFS 8.0 or later, leverage the groupnet feature to enhance multi-tenancy and DNS delegation, where desirable.
  • Ensure traffic flows through the right interface by tracing routes. Leverage OneFS Source-Based Routing (SBR) feature to keep traffic on desired paths.
  • If you have firewalls, ensure that the appropriate ports are open. For example, open both UDP port 53 and TCP port 53 for the DNS service.
  • The client never sends a DNS request directly to the cluster. Instead, the site name-servers handle DNS requests from clients and route the requests appropriately.
  • In order to distribute IP addresses successfully, the OneFS SmartConnect DNS delegation server answers DNS queries with a time-to-live (TTL) of 0 so that the answer is not cached (see the sketch after this list). Certain DNS servers (particularly Windows DNS servers) will fix the value to one second. If many clients request an address within the same second, they will all receive the same address. If you encounter this problem, you may need to use a different DNS server, such as BIND.
  • Certain clients perform DNS caching and might not connect to the node with the lowest load if they make multiple connections within the lifetime of the cached address. Turning off client DNS caching is recommended where possible. To handle client requests properly, SmartConnect requires that clients use the latest DNS entries.
  • The site DNS servers must be able to communicate with the node that is currently hosting the SmartConnect service. This is the node with the lowest logical node number (LNN) with an active interface in the subnet that contains the SSIP address. This behavior cannot be modified.
  • Connection policies other than round robin are sampled every 10 seconds. The CPU policy is sampled every 5 seconds. If multiple requests are received during the same sampling interval, SmartConnect will attempt to balance these connections by estimating or measuring the additional load.
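As noted in the TTL bullet above, the TTL-0 behavior (and any caching along the resolution path) can be observed by querying the SmartConnect zone repeatedly. A minimal sketch, assuming the dnspython package and a hypothetical SmartConnect zone name:

# Sketch: repeatedly resolve a SmartConnect zone and print the answer and its TTL.
# "smartconnect.example.com" is a placeholder; point this at your own zone/site DNS.
import time
import dns.resolver

ZONE = "smartconnect.example.com"

for _ in range(5):
    answer = dns.resolver.resolve(ZONE, "A")
    ips = [r.address for r in answer]
    # A TTL of 0 from the delegation server means each query can return a different node IP.
    print("answer:", ips, "ttl:", answer.rrset.ttl)
    time.sleep(1)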


MPX/SDX with SSL Coleto card installed: SSL card failure

https://theresource.citrix.com/trending-issue-vpx-on-sdx-ssl-card-failure-vpx-in-yellow-state-no-gui-access-or-connections


Check newnslogs

collector_abbr_S_10.151.88.17_29Oct2018_11_35/var/nslog]$ nsconmsg120 -K newnslog -g ssl_err_card_ -d stats

Displaying current counter value information

NetScaler V20 Performance Data

NetScaler NS12.0: Build 57.24.nc, Date: Apr 13 2018, 12:06:28

reltime:mili second between two records Mon Oct 29 05:57:03 2018

Index reltime counter-value symbol-name&device-no

1 0 410 ssl_err_card_process_fail_rst

3 0 0 ssl_err_card_process_resp_fail_rst

reltime:mili second between two records Mon Oct 29 05:57:03 2018

Index reltime counter-value symbol-name&device-no

595 0 0 ssl_err_coleto_ecdsa_verify_pub_coordinates

597 0 0 ssl_err_coleto_ecdsa_verify_submit

599 0 0 ssl_err_coleto_encfin

601 0 2 ssl_err_coleto_encmsgdp_submit

603 0 0 ssl_err_coleto_enc_msg

605 0 312 ssl_err_coleto_expected_finmismatch

607 0 0 ssl_err_coleto_findecdp_submit

609 0 0 ssl_err_coleto_finencdp_submit

611 0 0 ssl_err_coleto_force_mon_requests

613 0 12 ssl_err_coleto_keyblock_submit

615 0 1674 ssl_err_col

———

/upload/ftp/78466979/SDX/collector_abbr_S_10.151.88.15_29Oct2018_03_33/var/nslog]$ nsconmsg -K newnslog -g ssl_err_coleto -s disptime=1 -d current | egrep --line-buffered '_submit' | more



NetScaler NS12.0: Build 57.24.nc, Date: Apr 13 2018, 12:06:28



Index rtime totalcount-val delta rate/sec symbol-name&device-no&time

2 552995 1 1 0 ssl_err_coleto_masterkey_submit Mon Oct 29 05:08:01 2018

3 63000 2 1 0 ssl_err_coleto_masterkey_submit Mon Oct 29 05:09:04 2018

4 21000 3 1 0 ssl_err_coleto_masterkey_submit Mon Oct 29 05:09:25 2018

5 14000 6 3 0 ssl_err_coleto_masterkey_submit Mon Oct 29 05:09:39 2018

6 35000 7 1 0 ssl_err_coleto_masterkey_submit Mon Oct 29 05:10:14 2018

399 42000 469 1 0 ssl_err_coleto_masterkey_submit Mon Oct 29 08:10:35 2018

400 21000 470 1 0 ssl_err_coleto_masterkey_submit Mon Oct 29 08:10:56 2018

401 14000 472 2 0 ssl_err_coleto_masterkey_submit Mon Oct 29 08:11:10 2018
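When triaging captures like the ones above, it can help to flag only the ssl_err_ counters with non-zero values. Below is a small parsing sketch, assuming the nsconmsg output has been saved to a text file in the "-d stats" format shown in the first capture:

# Sketch: pull non-zero ssl_err_* counters out of saved nsconmsg "-d stats" output.
# Expected line shape (see the first capture): "<index> <reltime> <counter-value> <symbol-name>"
import re
import sys

pattern = re.compile(r"^\s*\d+\s+\d+\s+(\d+)\s+(ssl_err_\S+)")

def nonzero_counters(path: str):
    with open(path) as f:
        for line in f:
            m = pattern.match(line)
            if m and int(m.group(1)) > 0:
                yield m.group(2), int(m.group(1))

if __name__ == "__main__":
    # Usage: python flag_ssl_err.py nsconmsg_output.txt
    for name, value in nonzero_counters(sys.argv[1]):
        print(f"{name}: {value}")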

Check SVM logs



Method: GET, URL: https://10.151.88.17/nitro/v1/stat/ns?format=json

Sunday, 28 Oct 18 20:29:11.828 -0700 [Error] [Stat[#2]] https://10.151.88.17/nitro/v1/stat/ns?format=json, Reason: SSL Exception: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac

Sunday, 28 Oct 18 20:29:11.846 -0700 [Debug] [Stat[#2]] Sending Message to SYSOP /tmp/mps/ipc_sockets/mps_sysop_sock:{ “errorcode”: 0, “message”: “Done”, “is_user_part_of_default_group”: true, “skip_auth_scope”: true, “message_id”: “”, “resrc_driven”: true

