NetScaler Native OTP, failed to register device.

Related:

  • No Related Posts

Secure Mail – Personal calendar overlay doesn’t work properly

Related:

  • No Related Posts

How to Roam Linux User Profile Through Network File System

Configuration overview

The following configuration steps are required to implement user profile roaming through the NFS mechanism:

  • Configuring NFS Server
  1. Install required NFS packages
  2. Enable/start required services
  3. Configure Firewall
  4. Export shared directories
  • Configuring NFS Client
  1. Install required NFS packages
  2. Mount NFS shares on client
  3. Configure IDMAP

The following sections use a working example based on the RHEL 7.2 distribution to show how to complete each step. The same approach applies to other supported distributions, such as CentOS, SUSE, and Ubuntu, although the package and service names mentioned below may differ slightly; those differences are not covered in this article.

Configuring NFS Server

  1. Install required NFS packages

Install the nfs-utils and libnfsidmap packages on the NFS server using the following command:

yum install nfs-utils libnfsidmap

  2. Enable/start required services

Enable rpcbind and nfs-server services, using the following commands:

systemctl enable rpcbind
systemctl enable nfs-server

Start the following four services using these commands:

systemctl start rpcbind
systemctl start nfs-server
systemctl start rpc-statd
systemctl start nfs-idmapd

Additional details about the services mentioned above:

  • rpcbind — The rpcbind server converts RPC program numbers into universal addresses.
  • nfs-server — It enables the clients to access NFS shares.
  • rpc-statd — NFS file locking. Implements file lock recovery when an NFS server crashes and reboots.
  • nfs-idmapd — Translates user and group IDs into names, and user and group names into IDs.
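
To confirm that these services are active, you can run the following optional check (not part of the original steps):

systemctl is-active rpcbind nfs-server rpc-statd nfs-idmapd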

  3. Configure the firewall

The firewall on the NFS server must be configured to allow clients to access the NFS shares. To do that, run the following commands on the NFS server:

firewall-cmd --permanent --zone public --add-service mountd
firewall-cmd --permanent --zone public --add-service rpc-bind
firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload
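
To verify that the services have been added to the public zone, you can list them (an optional check, not in the original article):

firewall-cmd --zone=public --list-services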

  4. Export shared directories

This step has two sub-steps:

  • Specify the shared directory and its attributes in /etc/exports.
  • Export the shared directory using the command "exportfs -r".

Specify shared directory and its attributes in /etc/exports.

Example:

To share the directory /home on the NFS server with the NFS client 10.150.152.167, add the following line to /etc/exports:

/home 10.150.152.167(rw,sync,no_root_squash)

Note that:

/home — the shared directory on the NFS server

10.150.152.167 — the IP address of the NFS client

rw,sync,no_root_squash — the export options:

  • rw – grants read/write access to the shared directory.
  • sync – all changes to the filesystem are immediately flushed to disk.
  • no_root_squash – by default, any file request made by the root user on the client machine is treated as a request by the user nobody on the server (exactly which UID the request is mapped to depends on the UID of the user nobody on the server, not the client). When no_root_squash is specified, root on the client machine has the same level of access to the files on the system as root on the server.

All available export options are described in the man page (man exports).
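
For instance, to export the same directory to every client on a subnet rather than to a single IP address, the entry might look like the following (the subnet shown is hypothetical):

/home 10.150.152.0/24(rw,sync,no_root_squash)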

Export shared directory using command “exportfs -r”

Run the command "exportfs -r" in a shell on the NFS server to export the shared directory.

You can also use the command "exportfs -v" to list all shared directories.

More details on exportfs commands:

exportfs -v : Displays a list of shared files and export options on a server

exportfs -a : Exports all directories listed in /etc/exports

exportfs -u : Un-exports one or more directories

exportfs -r : Re-exports all directories after /etc/exports has been modified
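
Once the directory is exported, you can also list the exports that a client is allowed to mount with showmount, using the example server address from this article (run it from the NFS client, or from any machine with nfs-utils installed):

showmount -e 10.150.138.34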

Configuring NFS Client

  1. Install required NFS packages

Install the nfs-utils package on the NFS client using the following command:

yum install nfs-utils

  2. Mount NFS shares on the client

There are two different ways to mount the exported directories.

  • Use command “mount” to manually mount the directories.
  • Update /etc/fstab to mount the directories at boot time.

Use the “mount” command to manually mount directories.

Example:

To mount the remote directory /home on 10.150.138.34 to the local mount point /home/GZG6N, the command is as follows:

mount -t nfs -o options 10.150.138.34:/home /home/GZG6N

Note that:

10.150.138.34 – IP address of the NFS server

/home – Shared directory on the NFS server

/home/GZG6N – Local mount point
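
As a concrete illustration, with typical NFS mount options substituted for the options placeholder (these particular options are illustrative, not prescribed by this article):

mount -t nfs -o rw,nfsvers=4 10.150.138.34:/home /home/GZG6N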

Update /etc/fstab to mount the directories at boot time.

Example:

Add a line similar to the following to /etc/fstab.

10.150.138.34:/home /home/GZG6N nfs defaults 0 0

Then run the command "mount -a" to mount all filesystems listed in fstab.
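
To confirm that the share is mounted, you can check the mount table (an optional verification, not part of the original steps):

df -hT /home/GZG6N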

  3. Configure IDMAP

Update /etc/samba/smb.conf to make sure that each user has a unique UID across all the Linux VDAs. Add the following lines to the [global] section of the smb.conf file:

[global]
idmap config * : backend = tdb
idmap config <DomainREALM> : backend = rid
idmap config <DomainREALM> : range = 100000-199999
idmap config <DomainREALM> : base_rid = 0
template homedir = /home/<DomainName>/%u
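
Assuming Winbind is the domain membership mechanism (which the idmap settings above imply), you can check that a domain user resolves to the same UID on every Linux VDA; the user name below is hypothetical:

id '<DomainName>\user1'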

With all of the configuration in place, sessions can be launched normally from the Linux VDA (the NFS client in this article, 10.150.152.167 in the example), while the user directory actually resides on the NFS server (10.150.138.34 in the example).

Related:

How to Configure Multi-Monitor Support on the Linux VDA

The Linux VDA provides out-of-the-box multi-monitor support with a default resolution of 2560×1600 per monitor. Standard VDAs support up to nine monitors, and HDX 3D Pro VDAs support up to four monitors.

This article describes how to configure the Linux VDA for different monitor resolutions and layouts.

Like the Windows VDA, the Linux VDA has the concept of a multi-monitor virtual desktop, which is based on the bounding rectangle of all monitors, not the actual layout of the monitors. Thus, the area of the virtual desktop can theoretically be larger than the area covered by the monitors of the client.

[Figure: the virtual session desktop defined by the bounding rectangle of all client monitors]

The origin of the virtual session desktop is calculated from the top-left corner of the bounding rectangle of all monitors. That point is located at X = 0, Y = 0, where X and Y are the horizontal and vertical axes, respectively.

The width of the virtual session desktop is the horizontal distance, in pixels, from the origin to the top-right corner of the bounding rectangle of all monitors.

Similarly, the height of the virtual session desktop is the vertical distance, in pixels, from the origin to the bottom-left corner of the bounding rectangle of all monitors.

This is important for the following reasons:

  • Allowing for different client monitor layouts
  • Understanding memory usage on the Linux VDA

Knowing the maximum virtual desktop size across your various client monitor configurations allows you to configure the Linux VDA to be flexible enough to handle all of them.

Consider the following client monitor configuration:

[Figure: out-of-the-box configuration with two 2560×1600 monitors side by side]

The diagram above shows an out-of-the-box multi-monitor configuration with two monitors, each with a resolution of 2560×1600.

Now, consider connecting to the same Linux VDA with the following client monitor configuration:

[Figure: a client monitor layout whose bounding rectangle is 4160×2560]

If each monitor in the above diagram has a resolution of 2560×1600, the out-of-the-box multi-monitor configuration parameters will be insufficient. The maximum height is too small to accommodate the virtual session desktop for this monitor layout. To accommodate the client monitor configuration in this example, the Linux VDA virtual desktop must be set to a size of 4160×2560.

For the greatest flexibility in a multi-monitor configuration, find the smallest bounding rectangle of all monitor layouts you want to support. For configurations with two 2560×1600 monitors, the possible layouts include:

  • Monitor1 2560×1600 and Monitor2 2560×1600
  • Monitor1 1600×2560 and Monitor2 2560×1600
  • Monitor1 2560×1600 and Monitor2 1600×2560
  • Monitor1 1600×2560 and Monitor2 1600×2560

To accommodate all of the layouts above, you need to have a virtual session desktop of 5120×2560 because this is the smallest bounding rectangle that can contain all the desired layouts.

If all your users have only one monitor in the typical landscape layout, set the maximum virtual desktop size to the resolution of the monitor with the highest resolution.

[Figure: a single 2560×1600 monitor in landscape layout]

In this example, the virtual desktop should be set to a size of 2560×1600. Note that because the default configuration is 5120×1600 with two monitors, a configuration change is required to optimize memory usage for single-monitor deployments.

Knowing the virtual desktop size allows you to calculate the amount of memory used by each HDX session. This memory is allocated to each session for its graphics data when the session begins and does not change for the life of the session. While this is not the total amount of memory used by the session, it is the easiest way to calculate per-session memory usage.

To calculate how much memory is allocated to each HDX session, use the following formula:

M = X×Y×Z,

Where:

  • M is the amount of memory used for session graphics
  • X is the width of the virtual session desktop
  • Y is the height of the virtual session desktop
  • Z is the color depth of the HDX session window. The value is in bytes, not bits, so use 4 for 32-bit color.

NOTE: The color depth of the X server is set when the server starts and cannot change for the life of the session (from logon through disconnects/reconnects until logoff). Hence, the Linux VDA always allocates the virtual session desktop at 32-bit color and downsamples to the color depth requested for the session.

For example, for a 1024×768 session, the memory used is:

(1024 × 768 × 4) bytes / 2^20 = 3 MB

Knowing the client monitor configurations in your environment and the resulting virtual session desktop size, and being able to calculate the memory usage, allows you to minimize memory waste on the Linux VDA. This is important for increasing session density on each Linux VDA.
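
For example, applying the formula to the 5120×2560 virtual desktop from the two-monitor layouts above:

5120 × 2560 × 4 / 2^20 = 50 MB per session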

Consider the following client monitor configuration:

[Figure: a client monitor layout with a 5120×3200 bounding rectangle and one unused 2560×1600 region (shown in gray)]

Assuming each monitor has a resolution of 2560×1600, to accommodate this client monitor configuration, the virtual session desktop size needs to be 5120×3200. Notice that the gray area is unused and equates to 16,384,000 (i.e. 2560 x 1600 x 4) bytes of wasted memory.
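
Applying the memory formula to this layout, the 5120×3200 virtual desktop allocates 5120 × 3200 × 4 / 2^20 = 62.5 MB per session, of which the 16,384,000 bytes of the gray area (about 15.6 MB) are never used.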

The multi-monitor functionality of the Linux VDA is controlled by the following configuration parameters:

MaxScreenNum

Parameter: HKEY_LOCAL_MACHINE/System/CurrentControlSet/Control/Citrix/Thinwire/MaxScreenNum

Description: Number of monitors to support

Type: DWORD

Default: 2

Maximum: 9 for the standard VDA, 4 for the HDX 3D Pro VDA

MaxFbWidth

Parameter: HKEY_LOCAL_MACHINE/System/CurrentControlSet/Control/Citrix/Thinwire/MaxFbWidth

Description: Maximum width of a virtual session desktop

Type: DWORD

Default: 5,120

Maximum: 16,384 (8,192 × 2)

MaxFbHeight

Parameter: HKEY_LOCAL_MACHINE/System/CurrentControlSet/Control/Citrix/Thinwire/MaxFbHeight

Description: Maximum height of a virtual session desktop

Type: DWORD

Default: 1,600

Maximum: 16,384 (8,192 × 2)

The following section outlines how to enable, configure, and disable the multi-monitor functionality on the Linux VDA.

Set the maximum number of monitors by using:

sudo ctxreg create -k "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Citrix\Thinwire" -t "REG_DWORD" -v "MaxScreenNum" -d "NumMons" --force

Where NumMons is a value between 1 and 9 for standard VDA or 1 and 4 for HDX 3D Pro VDA.

Set the maximum width of a virtual session desktop by using:

sudo ctxreg create -k "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Citrix\Thinwire" -t "REG_DWORD" -v "MaxFbWidth" -d "MaxWidth" --force

Where MaxWidth is a value between 1,024 and 16,384.

Set the maximum height of a virtual session desktop by using:

sudo ctxreg create -k "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Citrix\Thinwire" -t "REG_DWORD" -v "MaxFbHeight" -d "MaxHeight" --force

Where MaxHeight is a value between 1,024 and 16,384.
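
For example, to optimize for the single 2560×1600 monitor scenario described earlier, the three settings might be applied together as follows (the values come from that example; adjust them for your own monitor layouts):

sudo ctxreg create -k "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Citrix\Thinwire" -t "REG_DWORD" -v "MaxScreenNum" -d "1" --force
sudo ctxreg create -k "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Citrix\Thinwire" -t "REG_DWORD" -v "MaxFbWidth" -d "2560" --force
sudo ctxreg create -k "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Citrix\Thinwire" -t "REG_DWORD" -v "MaxFbHeight" -d "1600" --force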

Related:

Nutanix Acropolis Hypervisor Support in XenApp/XenDesktop

Related:

  • No Related Posts

ShareFile: offline file update doesn't work

This is the expected behavior.

When the phone is online (connected to the internet):

1. Make the file available offline.

2. Edit and save. The delta (the difference) is cached locally on the phone; it is not uploaded to the ShareFile server until the upload queue completes.

3. When you open the file again, the app fetches the version from the ShareFile server (which is not yet updated with the delta) rather than the local copy, because the phone is online.

4. Once the upload in the queue completes (which takes a few seconds), you have the latest version of the file.

When the phone is offline (airplane mode):

1. Make the file available offline.

2. Edit and save. The delta (the difference) is cached locally; the ShareFile server is not updated because the phone is offline.

3. When you open the file again, the app fetches the version from the local cache, which already contains the delta.

4. You have the latest version of the file in offline mode.

Related:

Prevent iOS Receiver from using weak ciphers when connecting to NetScaler Gateway

Related:

  • No Related Posts

Failed to Connect to Cloud Citrix Studio with error “Studio was Closed”

Related:

  • No Related Posts

Error: “HdxSdkErrorDomain_Session error 8” When Launching VDI Through NetScaler Gateway Using Receiver for iOS 7.2.4

Related:

  • No Related Posts

Error: “HdxSdkErrorDomain_Session error 8” When Launching VDI Through NetScaler Gateway

Related:

  • No Related Posts