Configure Citrix Endpoint Management as SAML IDP for ShareFile

This article provides an overview of how to configure ShareFile single sign-on (SSO) with Citrix Endpoint Management.

A working configuration of Citrix Gateway and Citrix Endpoint Management server can be leveraged for user authentication. Users signing in to their ShareFile account using a web browser or the Citrix Files clients are redirected to the Citrix Gateway webpage for their credentials. After successful authentication by the Citrix Endpoint Management server, the user receives a SAML token that is valid for signing in to their Citrix ShareFile account.

You can also use the Citrix Endpoint Management server with Secure Hub to provide single sign-on (SSO) into Citrix Files MDX-wrapped applications. In this scenario, Secure Hub obtains a SAML token from the Citrix Endpoint Management server and automatically signs users in to their Citrix ShareFile account without requiring them to enter credentials.


Database Access and Permission Model for XenDesktop

This article describes the SQL Server database access and permission model used by XenDesktop 5 and later.

Background

All runtime access to the central XenDesktop site database is performed by the services running on each controller. These services gain access to the database through their Active Directory machine accounts. This database access is sufficient to allow full day-to-day operation of the site including use of Desktop Studio, Desktop Director, and the service-specific SDKs.

The controller machine accounts and users in the database are granted only the minimum access to the XenDesktop database required for the services to operate.

The use of machine accounts for database access removes the need to securely store SQL logon (SQL authentication) passwords on the controller. It also ensures that only machines that have been configured with appropriate database access at the database server can act as XenDesktop controllers for a particular site.

Use of machine accounts provides a simple and secure model for protecting the critical data in the XenDesktop database. However, the creation and manipulation of the machine account logons at the database server is an inherently privileged operation that falls outside the scope of the permissions granted within the XenDesktop database itself. For this reason, certain key actions on the site are considered privileged administrative operations: they require additional database server-level permissions that are not granted to the XenDesktop services themselves, and can be performed only by a database user with elevated privileges.

The database access performed is summarized in the following diagram:

[Diagram: summary of the database access performed by the XenDesktop services]

The normal runtime permissions and administrative permissions used by the XenDesktop site are described separately in the following sections.

Note: Use of SQL logons (SQL authentication) in place of machine account logons for the XenDesktop services is not a supported configuration. The SQL scripts used by Desktop Studio and the SDKs are based on the use of machine account logons. In addition, attempting to use SQL logons (SQL authentication) might lead to the account passwords being trivially exposed through the SDKs.

Database Permission Model

Runtime Permissions

Each XenDesktop service gains access to the database through the local controller’s machine account. All routine Desktop Studio, Desktop Director, and SDK operations go through one of the XenDesktop services and thus no additional machine account logons are required for use of any of those components irrespective of the machine on which they run.

For a controller that is not on the same machine as the database server, the detailed database permissions are granted as follows:

  • The services gain access to the database server through their machine account logon (names of the form ‘DOMAIN\MACHINE$’). These logons do not need to be members of any server-level roles.

  • Within the XenDesktop database, the machine logons are mapped one-to-one with a dedicated per-machine user. Each such user has the same name as the logon to which it relates (in other words, ‘DOMAIN\MACHINE$’).

  • Each per-machine user is a member of the following XenDesktop-specific database-level roles:

Database Role                      Corresponding XenDesktop Service
ADIdentitySchema_ROLE              AD Identity Service
chr_Broker                         Broker Service
chr_Controller                     Broker Service
ConfigurationSchema_ROLE           Central Configuration Service
DesktopUpdateManagerSchema_ROLE    Desktop Update Manager Service
HostingUnitServiceSchema_ROLE      Hosting Management Service
MachinePersonalitySchema_ROLE      NA (no longer present on 7.x databases)
ConfigLoggingSchema_ROLE           Configuration Logging Service (added in XenDesktop 7.0 and later)
ConfigLoggingSiteSchema_ROLE       Configuration Logging Service (added in XenDesktop 7.0 and later)
DAS_ROLE                           Delegated Admin Service (added in XenDesktop 7.0 and later)
MonitorData_ROLE                   Monitor Service (added in XenDesktop 7.0 and later)
StorefrontSchema_ROLE              StoreFront Service (added in XenDesktop 7.0 and later; see note below)
Analytics_ROLE                     Analytics Service (added in XenDesktop 7.0 and later)
AppLibrarySchema_ROLE              App Library Service
EnvTestServiceSchema_ROLE          Environment Test Service
Monitor_ROLE                       Monitor Service
OrchestrationSchema_ROLE           Orchestration Service
TrustSchema_ROLE                   FMA Trust Service

Note: StorefrontSchema_ROLE does not correspond to the actual StoreFront product, which consists of web services in IIS and Windows services such as Credential Wallet and Configuration Replication; it corresponds to the XenDesktop service that integrates with StoreFront.


Each of the preceding roles is granted the minimum permissions required for the corresponding service on the controller to function. These permissions are restricted to EXECUTE on stored procedures and read (SELECT) access on some tables. The sketch below illustrates how these pieces fit together.
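As an illustration of this model, the per-machine logon, database user, and role membership might be created with T-SQL along the following lines. This is only a sketch: the machine account ‘CONTOSO\DDC01$’, the database name ‘CitrixSiteDB’, and the role chosen are hypothetical, and in practice these objects are created by Desktop Studio or by the scripts obtained from the service-specific SDKs rather than by hand.

-- Sketch only: hypothetical names; these objects are normally created by
-- Desktop Studio or the SDK-generated scripts.
USE [master];
CREATE LOGIN [CONTOSO\DDC01$] FROM WINDOWS;   -- machine account logon; needs no server-level roles

USE [CitrixSiteDB];
CREATE USER [CONTOSO\DDC01$] FOR LOGIN [CONTOSO\DDC01$];   -- dedicated per-machine user

-- Role membership grants the minimum permissions (EXECUTE on stored
-- procedures, SELECT on some tables). On SQL Server 2012 and later:
ALTER ROLE [ConfigurationSchema_ROLE] ADD MEMBER [CONTOSO\DDC01$];
-- On older versions: EXEC sp_addrolemember 'ConfigurationSchema_ROLE', 'CONTOSO\DDC01$';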

For a controller that is on the same machine as the database server, the model is as shown in the preceding diagram, except that the logon and user relate to the local NetworkService account. In this case, both the logon and user names are ‘NT AUTHORITY\NETWORK SERVICE’.

Notes

  • All server logons, database users, roles and permissions are created as required either by Desktop Studio, or through the scripts obtained directly from the service-specific SDKs. No further configuration is required.

  • It should never be necessary to manually modify the users, roles, or permissions created within the XenDesktop database.

Administrative Permissions

The permissions required to perform various administrative operations on a XenDesktop database are shown in the following table. Because these operations are typically performed by a database administrator, no operation-specific database roles are provided; db_owner rights are usually required, as shown.

All of these operations can be performed using Desktop Studio, if required. In these cases, a direct connection is made from Desktop Studio to the database server; thus, the Desktop Studio user must either have a database server account that is explicitly a member of the appropriate server roles or be able to provide credentials of an account that is. Such direct database access is only used for the following operations; all other operations go through the underlying XenDesktop services.
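Before attempting these operations from Desktop Studio, it can be useful to confirm that the account in use holds the necessary server-level roles. A minimal T-SQL check, run against the database server (returns 1 for membership and 0 otherwise; the role names follow the table below):

SELECT IS_SRVROLEMEMBER('dbcreator')     AS is_dbcreator,
       IS_SRVROLEMEMBER('securityadmin') AS is_securityadmin,
       IS_SRVROLEMEMBER('sysadmin')      AS is_sysadmin;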

Operation                         Purpose                                                          Server Roles     Database Roles
Database Creation                 Create a suitable empty database for use by XenDesktop          dbcreator        NA
                                  (see note 4 below).
Schema Creation                   Create all service-specific database schemas and add the        securityadmin    db_owner
                                  first controller to the site.
Add Controller                    Add a controller (other than the first) to the site.            securityadmin    db_owner
Add Controller (mirror server)    Add the controller login to the database server currently       securityadmin    NA
                                  in the mirror role of a mirrored XenDesktop database.
Remove Controller                 Remove a controller from the site (see note 5 below).           NA               db_owner
Schema Update                     Apply schema updates or hotfixes.                               NA               db_owner

Notes

  1. While technically more restrictive, in practice, the securityadmin server role should be treated as equivalent to the sysadmin server role.

  2. Where the preceding operations are performed using Desktop Studio, the user account must currently be an explicit member of the sysadmin server role.

  3. The accounts used to perform the preceding administrative operations are never recorded by the XenDesktop site. An account that was previously used for an operation can subsequently be removed without impacting the site in any way.

  4. When an empty database is created using Desktop Studio, it is created with all default attributes except for the following:

    • The collation sequence is set to Latin1_General_CI_AS_KS. Where a database is created manually, any collation sequence can be used provided that it is case-insensitive, accent-sensitive, and kanatype-sensitive (typically, the collation sequence name ends with _CI_AS_KS).
    • The recovery model is set to Simple. For use as a mirrored database, this must be changed to Full.
  5. When a controller is removed from a site, either directly through Desktop Studio, or using the scripts generated by Desktop Studio or SDK, the controller logon to the database server is not removed. This is to avoid potentially removing a logon being used by non-XenDesktop services on the same machine. The logon must be removed manually if it is no longer required; this requires securityadmin server role membership.
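For reference, the manual steps implied by notes 4 and 5 might look like the following T-SQL sketch. The database name ‘CitrixSiteDB’ and the controller account ‘CONTOSO\OLDDDC$’ are hypothetical placeholders.

-- Note 4: manually create an empty database with the required collation,
-- then switch to the Full recovery model if the database will be mirrored.
CREATE DATABASE [CitrixSiteDB] COLLATE Latin1_General_CI_AS_KS;
ALTER DATABASE [CitrixSiteDB] SET RECOVERY FULL;

-- Note 5: manually remove a controller logon that is no longer required
-- (requires membership of the securityadmin server role).
DROP LOGIN [CONTOSO\OLDDDC$];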

Additional Resources

Citrix Documentation – System requirements for XenDesktop

Refer to the Microsoft SQL Server documentation for more information on server and database roles.


eDirectory Disk Performance on Cloud Providers

An overall dip of 7-8% may be seen when using a cloud provider, but disk selection is crucial. The dd command was used to write out to disk, flushing past the disk cache and reporting back a disk’s maximum write rate in MB/s. eDirectory requires a minimum of 100 MB/s. Some lower-end drive selections were found to drop below 10 MB/s at times.
AWS
AWS recommends the use of EBS optimized EC2 instances to get the maximum out of the configured disks.

The links below give a good understanding of how to obtain optimal disk performance through instance and volume selection.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html?icmpid=docs_ec2_console

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html

Instance type – General Purpose, t2.micro, 1 vCPU, 1 GB Memory (non-EBS-optimized, free-tier instance)

Disk – Provisioned IOPS SSD, 1000 GB, IOPS – 32000
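In the listings that follow, dd writes 8,192 blocks of 64 KB (512 MiB in total); conv=fdatasync forces a physical flush before dd reports, so the MB/s figure reflects actual disk throughput rather than the page cache.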

# dd if=/dev/zero of=/home/ec2-user/iotest.log bs=64k count=8k conv=fdatasync

8192+0 records in

8192+0 records out

536870912 bytes (537 MB, 512 MiB) copied, 7.43317 s, 72.2 MB/s

Instance type – General Purpose, t3.medium, 2 vCPU, 4 GB Memory

Disk – Provisioned IOPS SSD, 1000 GB, IOPS – 32000

# dd if=/dev/zero of=/home/ec2-user/iotest.log bs=64k count=8k conv=fdatasync

8192+0 records in

8192+0 records out

536870912 bytes (537 MB, 512 MiB) copied, 1.22511 s, 438 MB/s

Instance type – Memory Optimized and EBS-optimized, r5ad.2xlarge, 8 vCPU, 64 GB Memory

Disk – Provisioned IOPS SSD, 1000 GB, IOPS – 32000

# dd if=/dev/zero of=/home/ec2-user/iotest.log bs=64k count=8k conv=fdatasync

8192+0 records in

8192+0 records out

536870912 bytes (537 MB, 512 MiB) copied, 0.684601 s, 784 MB/s

Even a medium-level instance in this category gave decent throughput. However, the results also show how selecting a low-end instance or disk can result in problems.

AZURE
Azure has around 14 Premium SSD managed disks. We highly recommend a Premium SSD managed disk over a Standard SSD; however, the customer’s environment dictates the minimum required disk type. While throughput and IOPS can be increased simply by increasing the disk size, there is a corresponding increase in cost as well. The bottom line is that disk performance was found not to be uniform across disk sizes, and we also sometimes found that the published numbers do not match what is actually measured.
As a benchmark, using the same command, it was observed that a ‘normal’ VM was giving 500 MB/s (using a locally available host). There are too many permutations to report on every available configuration; the disk types are documented on the Azure site.
https://azure.microsoft.com/en-au/pricing/details/managed-disks/
It is hoped these numbers can serve as an indicator when customers choose their drive type. Specifically, both the type of a disk and its size are important.
– Azure P30 disk (1 TB)
Listed as 200 MB/s but was returning less than 50 MB/s.
– Azure P60 disk (1 TB)
Listed as 500 MB/s but returned 35 MB/s.
– Final Maximum Configuration:
VM details:

Memory Optimized – Standard_DS13_v2 (8 vcpus, 56 GiB memory)

Max cached and temp storage throughput: IOPS/MBps (32000/256)

Max uncached disk throughput: IOPS/MBps (25600/384)
Disk details:

P60 Data Disk (8 TB, Premium SSD)

Listed as 500 MB/s
Our final results using the dd command showed this configuration returning 297-304 MB/s. Simply increasing the disk size makes an enormous difference. When testing, ensure that eDirectory is up and busy before running the dd command, as results can be very load dependent. In the final configuration, throughput dropped only from 304 to 297 MB/s when running a continuous LDAP thread.
