Problem symptoms occur when the nodes of a XenMobile Server cluster are not located close to each other

XenMobile Server supports ‘active/passive’ failover between multiple sites.

It is not typically possible to configure ‘active/active’ for anything other than the access layer, which comprises Citrix ADC, public DNS records and GSLB.

A typical setup for Disaster Recovery will use two XenMobile Server clusters, one active and the other passive.

Supported SQL infrastructures include Basic Always On Availability Groups (AOAG) and clustered SQL for high availability.

In a Disaster Recovery scenario (whether a test or an actual failover), one of the first steps is usually to block or prevent device connections into the primary datacentre or site. This prevents any further changes from being made to the XenMobile Server configuration database.

When failing over to a Disaster Recovery datacentre or site, first block or prevent device connections into the primary site, then synchronise the configuration database from the primary site to the DR site one last time. Afterwards, allow connections into the DR site, where changes can then be made to the newly synchronised database.
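The ordering of these steps matters: connections are blocked first so the final synchronisation captures a frozen database. A minimal sketch of the sequence is below; `block_connections`, `sync_database` and `allow_connections` are placeholder names of our own, not XenMobile or Citrix ADC APIs — in practice they map to disabling the primary load-balancing vserver, a final one-way SQL synchronisation, and enabling the DR vserver.

```python
# Hypothetical orchestration of the DR failover sequence described above.
# Each step is recorded so the required ordering is explicit.

def block_connections(site, steps):
    # Step 1: stop device traffic into the primary so the DB stops changing.
    steps.append(f"block:{site}")

def sync_database(src, dst, steps):
    # Step 2: one last one-way sync of the (now frozen) configuration DB.
    steps.append(f"sync:{src}->{dst}")

def allow_connections(site, steps):
    # Step 3: open the DR site; changes may now be made to its copy of the DB.
    steps.append(f"allow:{site}")

def fail_over(primary, dr):
    steps = []
    block_connections(primary, steps)
    sync_database(primary, dr, steps)
    allow_connections(dr, steps)
    return steps

print(fail_over("primary-site", "dr-site"))
```

The sketch only demonstrates the ordering constraint; the real work inside each step is product-specific.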

Whilst the access layer (front end) of supported Disaster Recovery infrastructures can be configured as ‘active/active’, the different clusters of XenMobile Server nodes (back end) are intended to be ‘active/passive’.


Slow Enumeration Of VMs On SCVMM Hypervisor Connections While Adding VMs To Machine Catalog

A delay is observed when adding new VMs to machine catalogs while browsing SCVMM hosting connections.

After clicking Browse, it takes more than 15 minutes to expand an SCVMM hypervisor connection and display the host groups.

It can take another 15+ minutes to expand further and see the clusters and the VMs on them.

The issue is not specific to any SCVMM version and can be seen in large cluster environments.


Implementing Failover between SG 600-35 and S200-30

I need a solution

As per the KB below, it is not recommended to implement failover between two different models…

but we are planning a hardware refresh, replacing the proxy cluster (600-35 version) with a new cluster (S200-30 version).

To achieve zero downtime, we will remove the cables from the old standby proxy and connect them to the new standby, which has the same configuration; we will then do the same with the active one, but not on the same night.

Regardless of the load point mentioned in the link above, are there any compatibility issues, concerns or recommendations regarding implementing HA between these two proxies (different model/version)?




Android Enterprise: All available applications not reflected in the Enterprise Play Store


A Storelayoutclusters resource represents a list of products displayed together as a group on a Google Play for Work store page. Each page can have up to 30 clusters, and each cluster can have up to 100 products.
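If applications are missing from the store layout, a first check is whether the documented limits are being exceeded. The small helper below is purely illustrative — the constants come from the limits quoted above, but the function name is ours, not part of the Google Play EMM API.

```python
# Illustrative check of the Managed Google Play store layout limits
# quoted above: 30 clusters per page, 100 products per cluster.
MAX_CLUSTERS_PER_PAGE = 30
MAX_PRODUCTS_PER_CLUSTER = 100

def page_within_limits(cluster_sizes):
    """cluster_sizes: number of products in each cluster on one store page.

    Returns True only if both the cluster count and every per-cluster
    product count are within the documented limits.
    """
    return (len(cluster_sizes) <= MAX_CLUSTERS_PER_PAGE
            and all(n <= MAX_PRODUCTS_PER_CLUSTER for n in cluster_sizes))
```

For example, a page of 30 full clusters (3,000 products) is the maximum that can be displayed; a 31st cluster, or a cluster with a 101st product, would not appear.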



NetScaler Cluster fails to serve traffic when some of the nodes are down

The behavior you are experiencing is expected. A cluster needs a minimum of (n/2 + 1) working nodes in order to serve traffic.

In your scenario, a 4-node cluster needs a minimum of 3 operational nodes in order for the cluster to serve traffic.

The only exception to this is a two-node cluster, which can continue to serve traffic with a single node up.
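The (n/2 + 1) rule above can be expressed as a one-liner, with integer division supplying the rounding:

```python
# Minimum number of healthy nodes for an n-node cluster to serve
# traffic, per the (n/2 + 1) quorum rule quoted above.
def quorum(n):
    return n // 2 + 1

print(quorum(4))  # a 4-node cluster needs 3 operational nodes
```

Note that for n = 2 the formula yields 2, which is why the two-node cluster is called out as an exception to the rule.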

Further information can be found at


What’s It Take to Build a Great HPC Cluster? Ask These Students.

At ISC High Performance 2019, a team sponsored by Dell EMC and other technology leaders took home the top prize in the Student Cluster Competition. The business of building a high-performance computing system and optimizing it for top performance on industry benchmarks may seem like it belongs strictly in the domain of technology experts. But not so, says a team of six South African undergraduate students. This team, affiliated with the South African Centre for High Performance Computing (CHPC), took home the gold at a recent international competition that pitted teams of students against each other …


SOFS (Scale-Out File Server) does not work with 1901 or 1902

Unable to mount a share on a Scale-Out File Server cluster resource (cluster shared volumes).

The errors below are seen in maservice.log.

2019-02-07 03:42:04,148 ERROR [80] MountPointService: Encountered error creating mount point /mnt/maserviceshare/mnt00000004 with exception Uni.Core.Handlers.Exceptions.GlobalizedErrorException`1[Uni.Core.Contract.Results.FileCategory]: MessageId=FileShareServicePermissionDenied, DefaultTitle=, CategoryData={[FileCategory { Message = "mount error(13): Permission denied; Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)" }]}

at Uni.Appliance.Services.FileShareServices.MountPointService.MountUncToPath (System.String uncPath, Uni.Appliance.Services.FileShareServices.Interfaces.FileShareType fsType, Uni.Appliance.Services.FileShareServices.Interfaces.FileShareCredentials fsCreds, System.Boolean cacheEnabled, System.Nullable`1[T] timeout, System.Int64 mntNumber, System.String mntPoint) [0x000af] in d:\builds\R4ZION-WSW-JOB1\source\Uni.Appliance\Services\FileShareServices\MountPointService.cs:281

2019-02-07 03:42:04,148 INFO [80] TestRemoteFileShar: TEST( \\citrix-layers.CustPrivInfo.dir\layers$, Cifs, CustPrivInfo\service, * ) FAILED MessageId=FileShareServicePermissionDenied, DefaultTitle=, CategoryData={[FileCategory { Message = "mount error(13): Permission denied; Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)" }]}

2019-02-07 03:42:04,148 INFO [80] HandlerHelper: Finished Command TestRemoteFileShareCommand->TestRemoteFileShareResult

2019-02-07 03:42:04,148 ERROR [80] HandlerHelper: 'Application Error while processing 'Command' 'TestRemoteFileShareCommand'': 'DefaultTitle="", MessageID="FileShareServicePermissionDenied", {CategoryData={[FileCategory { Message = "mount error(13): Permission denied; Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)" }]}'



Debugging options for Encryption Management Server Logs

I need a solution


I have a ticket open with Symantec Technical Support.  My issue is that my primary SEMS server constantly reports the secondary cluster server as “unreachable” before flipping back to “normal.”  Basically, it’s flipping back and forth all the time.

The “Cluster” log clearly shows that the link is being established, followed by an EOF and the link being dropped.  This repeats itself every few seconds.

The tech I’m working with at Symantec isn’t, frankly, the greatest. He asked me to turn on debugging and sent me a KB article (…) on how to do that by editing the debug.xml file. The KB says to turn on the debugging options I need, and I kept asking him which ones those are, but he just told me to turn them all on and leave it for a day (which I have no intention of doing, for fear my server will crash sometime in the middle of the night).

I’m looking at the console, and I see debug logs being created in the “Client,” “Groups,” and “Mail” logs.  However, I DON’T see any debugs being created in the “Cluster” log.

Does anyone know if turning on debugging in this way will actually help with the cluster service?  Or is this tech just wasting my time?

Thank you,

– Steve



VIP Enterprise Gateway (redundancy/replication?)

I need a solution

Hi everybody,

New on the VIP access solution.

I was wondering whether the EG software has any built-in feature for running a cluster that synchronises modifications from the active to the passive EG, for example.

Thanks for the replies 🙂