OneFS: Best practices for NFS client settings[1]

Article Number: 457328 Article Version: 8 Article Type: Break Fix



Isilon, Isilon OneFS

This article describes the best practices and recommendations for client-side settings and mount options when using the NFS protocol to connect to an Isilon cluster and applies to all currently supported versions of OneFS.

Supported Protocol Versions

At this time Isilon OneFS supports NFS versions 3 and 4. NFS version 2 has not been supported since the move to the 7.2.X code family.

NFSv3

NFS version 3 is the most widely used version of the NFS protocol today, and is generally considered to have the widest client and filer adoption. Here are key components of this version:

  • Stateless – A client does not technically need to establish a new session if it has the correct information to ask for files, etc. This allows for simple failover between OneFS nodes via dynamic IP pools.
  • User and Group info is presented numerically – Client and server communicate user information by numeric identifiers, allowing the same user to possibly appear under different names on the client and the server.
  • File Locking is out of band – Version 3 of NFS uses a helper protocol called NLM to perform locks. This requires the client to respond to RPC messages from the server to confirm locks have been granted, etc.
  • Can run over TCP or UDP – This version of the protocol can run over UDP instead of TCP, leaving handling of loss and retransmission to the software instead of the operating system. We always recommend using TCP.
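As a hedged illustration only (cluster name, export path, and mount point are placeholders, and exact syntax varies by operating system), a typical Linux NFSv3 mount over TCP might look like this:

# Mount an Isilon export with NFSv3 over TCP (example names only)
mount -t nfs -o vers=3,proto=tcp cluster.example.com:/ifs/data /mnt/data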

NFSv4

NFS version 4 is the newest major revision of the NFS protocol, and is increasing in adoption. At this time, NFSv4 is generally less performant than v3 for the same workflow due to the greater amount of identity mapping and session tracking work required to reply. Here are some of the key differences between v3 and v4:

  • Stateful – NFSv4 uses sessions in order to handle communication, as such both client and server need to track session state to continue communicating.
    • Prior to OneFS 8.X this meant that NFSv4 clients required static IP pools on the Isilon or could encounter issues.
  • User and Group info is presented as strings – Both the client and server need to resolve the names of the numeric information stored. The server needs to lookup names to present, while the client needs to remap those to numbers on its end.
  • File Locking is in band – Version 4 no longer uses a separate protocol for file locking, instead making it a type of call that is usually compounded with OPENs, CREATEs, or WRITEs.
  • Compound Calls – Version 4 can bundle a series of calls in a single packet, allowing the server to process all of them and reply at the end. This is used to reduce the number of calls involved in common operations.
  • Only supports TCP – Version 4 of NFS has left loss and retransmission up to the underlying operating system.
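For comparison, a hedged example of an NFSv4 mount of the same export on a Linux client (again, placeholder names; consult your distribution's documentation for exact syntax):

# NFSv4 runs over TCP only, so no proto option is required (example names only)
mount -t nfs -o vers=4 cluster.example.com:/ifs/data /mnt/data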

NFSv4.1 and Beyond

At this time OneFS does not support NFS version 4.1. If you need specific features of version 4.1, speak with your account team to see whether that is something we can provide via OneFS's unique feature set as an NFS filer.

OneFS Version specific concerns

For customers that have been using Isilon OneFS since version 7.1 or earlier: changes made in OneFS 7.2.0, which remained in place until OneFS 8.1.1, might impact how clients whose encoding differs from the cluster's are able to view and interact with directory listings. For more details, review ETA 483840.

This is not an issue if you began using OneFS on version 7.2 or beyond.

Mount options

While we do not have hard requirements for mount options, we do make some recommendations on how clients should connect. We have not provided specific mount strings, as the syntax used to define these options varies depending on the operating system in use. Refer to your distribution maintainer's documentation for the specific mount syntax.

Defining Retries and Timeouts

While the Isilon generally replies to client communication very quickly, when a node has lost power or network connectivity it might take a few seconds for its IP addresses to move to a functional node, so it is important to have correctly defined timeout and retry values. Isilon generally recommends a timeout of 60 seconds, to account for a worst-case failover scenario, with two retries before reporting a failure.
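On Linux clients, for example, the timeo option is expressed in tenths of a second and retrans sets the retry count, so a hedged translation of the recommendation above (placeholder names and paths) would be:

# timeo=600 is 60 seconds on Linux (the value is in tenths of a second); retrans=2 allows two retries
mount -t nfs -o timeo=600,retrans=2 cluster.example.com:/ifs/data /mnt/data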

Soft vs Hard Mounts

Hard mounts cause the client to retry its operations indefinitely on timeout or error. This ensures that the client does not disconnect the mount in circumstances where the Isilon cluster moves IP addresses from one node to another. A soft mount will instead error out and expire the mount, requiring a remount to restore access after the IP address moves.

Allowing interrupt

By default, most clients do not allow you to interrupt an input/output (I/O) wait, meaning you cannot use Ctrl+C, etc., to end the waiting process if the cluster is hanging. Including the interrupt (intr) mount option allows those signals to pass normally instead.

Local versus Remote Locking

When mounting an NFS export, you can specify whether a client will perform its locks locally or use the lock coordinator on the cluster. Most clients default to remote locking, and this is generally the best option when multiple clients will be accessing the same directory; however, there can be performance benefits to performing local locking when a client does not need to share access to the directory it is working with. In addition, some databases and other software will ask you to use local locking, as they have their own lock coordinator.
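Putting the recommendations above together, here is a hedged example of what a Linux mount command might look like (hostname, export path, and mount point are placeholders; check your distribution's documentation for exact syntax, and note that newer Linux kernels accept but ignore the intr option):

# Hard, interruptible mount with a 60-second timeout, two retries, and remote (cluster-side) locking
mount -t nfs -o vers=3,proto=tcp,hard,intr,timeo=600,retrans=2 cluster.example.com:/ifs/data /mnt/data

# The same mount with client-local locking, for workloads that do not share the directory with other clients
mount -t nfs -o vers=3,proto=tcp,hard,intr,timeo=600,retrans=2,local_lock=all cluster.example.com:/ifs/data /mnt/data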

Related:

NFS High availability issues with node reboots and certain Linux Kernels

We have been looking into issues with NFS file copies during a node reboot, like with a rolling upgrade.

EMC states:

"OneFS 8: NFSv4 failover: with the introduction of NFSv4 failover, when a client's virtual IP address moves, or a OneFS group change event occurs, the client application will continue without disruption. As such, no unexpected I/O error will propagate back up to the client application. In OneFS 8.0, both NFSv3 and NFSv4 clients can now use dynamic..."

In working with EMC, it looks like this can be affected by the Linux kernel version, the file size, and SmartConnect.

To summarize: with CentOS, it works with kernel 3.10.0-514.el7.x86_64 but fails with 3.10.0-862.el7.x86_64 when copying a 5 GB file and rebooting a node.

Per EMC, we are going to verify that the issue happens with Red Hat and then open a call.

Has anyone else seen this?

————————- Detailed explanation from EMC ———————————

The NFS SME wanted to go back over everything before moving forward with getting Engineering involved. He found the smoking gun that explains the client behavior from our original packet capture a few weeks ago.

The pcaps indicate that a client running the affected kernel isn’t properly supplying the file handle during the PUTFH process.

****** Here’s the connection from node 2’s perspective before failover is induced: ******

Isilon-dev-2.lagg1_09072018_134025.pcap

*****************

1029 12.559173 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 270 V4 Call OPEN_CONFIRM

1030 12.559263 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 142 V4 Reply (Call In 1029) OPEN_CONFIRM

*****************

1031 12.559376 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 302 V4 Call SETATTR FH: 0xa577051b

1032 12.561461 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 318 V4 Reply (Call In 1031) SETATTR

1122 12.931344 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 31926 V4 Call WRITE StateID: 0xcddd Offset: 0 Len: 1048576[TCP segment of a reassembled PDU]

1136 12.936254 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 206 V4 Reply (Call In 1122) WRITE

tshark -r Isilon-dev-2.lagg1_09072018_134025.pcap -O nfs -Y "frame.number==1029"

Frame 1029: 270 bytes on wire (2160 bits), 270 bytes captured (2160 bits)

Ethernet II, Src: Vmware_84:2c:6f (00:50:56:84:2c:6f), Dst: Broadcom_77:c4:f0 (00:0a:f7:77:c4:f0)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 2215

Internet Protocol Version 4, Src: XX.XXX.XXX..16, Dst: XX.XXX.XXX..30

Transmission Control Protocol, Src Port: 872, Dst Port: 2049, Seq: 877, Ack: 765, Len: 200

Remote Procedure Call, Type:Call XID:0x40a1d0c1

Network File System, Ops(2): PUTFH, OPEN_CONFIRM

[Program Version: 4]

[V4 Procedure: COMPOUND (1)]

Tag: <EMPTY>

length: 0

contents: <EMPTY>

minorversion: 0

Operations (count: 2): PUTFH, OPEN_CONFIRM

************************

Opcode: PUTFH (22)

filehandle

length: 53

[hash (CRC-32): 0xa577051b]

filehandle: 011f0000000200c50201000000ffffffff00000000020000…

************************

Opcode: OPEN_CONFIRM (20)

stateid

[StateID Hash: 0xc32a]

seqid: 0x00000001

Data: 019842390100000000000000

[Data hash (CRC-32): 0x57d33b9b]

seqid: 0x00000001

[Main Opcode: OPEN_CONFIRM (20)]

************* Here is where the connection fails after moving over to node 3. ************

We see that failover occurs and the client reestablishes connection after failover, but fails to provide a file handle during that PUTFH operation. This is why the cluster is returning “NFS4ERR_BADHANDLE” at that point.

Isilon-dev-3.lagg1_09072018_134025.pcap

*****************

25390 28.650376 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 214 V4 Call OPEN_CONFIRM

25391 28.650455 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 118 V4 Reply (Call In 25390) PUTFH Status: NFS4ERR_BADHANDLE

*****************

tshark -r Isilon-dev-3.lagg1_09072018_134025.pcap -O nfs -Y "frame.number==25390"

Frame 25390: 214 bytes on wire (1712 bits), 214 bytes captured (1712 bits)

Ethernet II, Src: Vmware_84:2c:6f (00:50:56:84:2c:6f), Dst: QlogicCo_a5:54:00 (00:0e:1e:a5:54:00)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 2215

Internet Protocol Version 4, Src: XX.XXX.XXX..16, Dst: XX.XXX.XXX..30

Transmission Control Protocol, Src Port: 772, Dst Port: 2049, Seq: 782969877, Ack: 42129, Len: 144

Remote Procedure Call, Type:Call XID:0xd0a5d0c1

Network File System, Ops(2): PUTFH, OPEN_CONFIRM

[Program Version: 4]

[V4 Procedure: COMPOUND (1)]

Tag: <EMPTY>

length: 0

contents: <EMPTY>

minorversion: 0

Operations (count: 2): PUTFH, OPEN_CONFIRM

********************

Opcode: PUTFH (22)

filehandle

length: 0

*********************

Opcode: OPEN_CONFIRM (20)

stateid

[StateID Hash: 0x5a23]

seqid: 0x00000001

Data: 013830d0ac03000000000000

[Data hash (CRC-32): 0xc2270b06]

seqid: 0x00000003

[Main Opcode: OPEN_CONFIRM (20)]

They believe this to be fairly definitive evidence that the client kernel’s behavior here is something that Isilon likely has no control over. We can create a knowledge base article for awareness surrounding the issue, but this wouldn’t be going up to Dev based on those findings according to the L3.

Best regards,

Technical Support Engineer, Global Support Center


——————- My test notes —————

I used VMware Player with a three-node OneFS 8.0.0.7 simulator and a CentOS 7 client. The networking was all isolated to VMware Player, with no external network access.

Copying small files (10 MB) mounted via NFSv4 to the SmartConnect IP and rebooting the node worked. The file copy would pause on one file, then it would pick up, and the copy would continue. All files looked good with MD5.

Copying small files (10 MB) mounted via a node's IP (we used the same IP we received in the previous example) and rebooting the node DID NOT work. The file copy would pause on one file, then pick up and continue, but the file that was in flight failed its MD5 check.

Copying a large file (5 GB) mounted via NFSv4 to the SmartConnect IP and rebooting the node did NOT work. We got an I/O error.

Copying a large file (5 GB) mounted via a node's IP (we used the same IP we received in the previous example) and rebooting the node did NOT work. We got an I/O error.

——————————————

Related:

7023210: NFS clients getting ‘permission denied’, even when ownership and permissions are correct

This document (7023210) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 4 (SLES 11 SP4)

Situation

Machines acting as NFS clients would mount NFS shares successfully and then function correctly for a time, but occasionally begin getting "permission denied" errors when attempting operations within the NFS-mounted file system, even on actions as simple as "ls". File system ownership and permissions are correct for these users to access the files / directories in question.
Looking at packet traces of NFS activity during these times, the RPC layer returns an error. Wireshark displays these errors as:
Reject State: AUTH_ERROR (1)
Auth State: bad credential (seal broken) (1)
The problem comes and goes.
While the problem is occurring, new attempts to mount something from the NFS server also fail with "permission denied."

Resolution

The NFS server machine was having intermittent access to DNS resolution. Therefore, the RPC layer could not always verify the host names that belonged to the client IP addresses. Forward and reverse name resolution are important to NFS security. An NFS server should be able to find A records (for name-to-address resolution) and PTR records (for address-to-name resolution) for any NFS client machine which will be accessing NFS shares.
In the case in question, there was a DNS server which would not always answer requests. Other DNS servers were functioning reliably. The administrator took the faulty DNS server out of service, and the problem was corrected.
If correction of DNS behavior or contents is not immediately feasible, then adding entries to the NFS Server’s /etc/hosts file, for the NFS client machines in question, will also resolve the problem.
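For illustration only (names and addresses are placeholders), the server-side checks and the /etc/hosts workaround described above might look like this:

# On the NFS server: confirm that both forward (A) and reverse (PTR) lookups for the client succeed
dig +short client1.example.com A
dig +short -x 192.0.2.25

# Temporary workaround if DNS cannot be fixed right away: pin the client in /etc/hosts
echo "192.0.2.25   client1.example.com client1" >> /etc/hosts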

Additional Information

Most cases of missing DNS information are not intermittent, and therefore a DNS failure (or lack of information) simply causes the inability to accomplish an NFS mount operation. Since there are limited reasons that an NFS server would deny a client's request to mount, most cases of missing DNS information are discovered relatively quickly.
However, when the DNS problem is intermittent, clients may successfully mount an NFS share, but much later may get "permission denied" while trying to use the already-mounted NFS file systems. This scenario is a bit harder to recognize, but the most important clue is whether the RPC layer is returning the auth and credential errors mentioned above in the "Situation" section.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

Re: isilon nfs operations slow ?

Hope your solutions architect has been helping you out.

As an aside, let us keep in mind that "vanilla" NFS servers, in particular the "plain vanilla" flavours, respond very fast in certain scenarios because they CHEAT on the NFS protocol: where NFS demands a guarantee that file creates, chmod/chattr, and certain "stable" data writes only get acknowledged as successful to the client once the operation has hit permanent storage (disk/SSD/NVRAM), "vanilla" NFS servers often adopt the "async" operation mode of the underlying local file system. Which means that things are only in RAM on the server, whereas the client would assume they were safely on disk.

As an alternative, vanilla servers offer full "sync" behaviour, which is usually way too slow for most uses. In contrast, dedicated NAS systems make sure they buffer to RAM only where allowed by the NFS protocol, and do wait for permanent storage where required.

(Keywords: "stable" / "unstable" NFS writes.)
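For illustration (paths and networks are placeholders), this difference maps directly to the async/sync options in /etc/exports on a generic Linux NFS server:

# 'async' acknowledges writes before they reach stable storage (fast, but breaks the NFS guarantee described above);
# 'sync' waits for stable storage, as the protocol expects.
/export/fast  192.0.2.0/24(rw,async,no_subtree_check)
/export/safe  192.0.2.0/24(rw,sync,no_subtree_check)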



makes sense?

— Peter

Related:

7007308: NFS client cannot perform hundreds of NFS mounts in one batch

This document (7007308) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 2
SUSE Linux Enterprise Server 11 Service Pack 1
SUSE Linux Enterprise Server 10 Service Pack 3
SUSE Linux Enterprise Server 10 Service Pack 4

Situation

A system running SLES 10 or 11 has been set up to attempt to perform hundreds of NFS v3 client mounts at the same time. This could be via the /etc/fstab file and it could be during boot, or it could be other methods, later after the machine is booted. However, the system does not complete the task of mounting the directories.

Resolution

NOTE: This Technical Information Document (TID) is somewhat outdated now, as newer kernels and other code in newer SLES 11 support packs have had changes implemented to streamline the NFS client port usage, and make this situation less likely to arise. For example, people on SLES 11 SP4 can typically perform hundreds or thousands of NFS mounts in a large batch, without running into this problem.
However, this TID is being preserved for now for those on older support packs, who may still benefit from it.
NFS is implemented underneath the "sunrpc" specification. There is a special range of client-side ports (665 - 1023) which are available for certain sunrpc operations. Every time an NFS v3 mount is performed, several of these client ports can be used. Any other process on the server can conceivably use these as well. If too much NFS-related activity is being performed, then all these ports can be in use, or (when used for TCP) they can be in the process of closing and waiting their normal 60 seconds before being used by another process. This type of situation is typically referred to as "port exhaustion". While this port range can be modified, such changes are not recommended, because of potential side effects (discussed in item #5 below).
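As a rough, hedged check of whether this reserved range is under pressure on a given machine, something like the following can be used (it simply counts local TCP ports between 665 and 1023, including sockets in TIME_WAIT):

netstat -ant | awk '{ n = split($4, a, ":"); p = a[n]; if (p >= 665 && p <= 1023) c++ } END { print c+0, "local ports in use in the 665-1023 range" }'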
In this scenario, the port exhaustion is happening because too many NFS client ports are being used to talk to an NFS Server’s portmapper daemon and/or mount daemon. There are several approaches to consider that can help resolve this issue:
1. The simplest solution is usually to add some options to each NFS client mount request, in order to reduce port usage. The additional mount options would be:
proto=tcp,mountproto=udp,port=2049
Those can be used in an fstab file, or directly on mount commands as some of the “-o” options. Note that successful usage of these options may depend on having the system up to date. SLES 10 SP3 and above, or SLES 11 SP1 and above, are recommended.
To explain them in more detail:
The option "proto=tcp" (for NFS transport protocol) ensures that the NFS protocol (after a successful mount) will use TCP connections. Adding this setting is not mandatory (it is actually the default) but is mentioned to differentiate it from the second option, "mountproto=udp".
The option "mountproto=udp" causes the initial mount request itself to use UDP. By using UDP instead of TCP, the port used briefly for this operation can be reused immediately instead of having to wait 60 seconds. This does not affect which transport protocol (TCP or UDP) NFS itself will use. It only affects some brief work done during the initial mount of the NFS share. (In NFS v3, the mount protocol and daemon are separate from the NFS protocol and daemon.)
The option “port=2049” tells the procedure to expect the server’s NFS daemon to be listening on port 2049. Knowing this up front eliminates the usage of an additional TCP connection. That connection would have been to the sunrpc portmapper, which would have been used to confirm where NFS is listening. Usage of port 2049 for the NFS protocol is standard, so there is normally no need to confirm it through portmapper.
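Putting those three options together, a hedged /etc/fstab example (server name and paths are placeholders) might look like:

# /etc/fstab entry using the reduced-port-usage options described above
nfsserver:/export/data   /mnt/data   nfs   proto=tcp,mountproto=udp,port=2049   0 0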
If many of the mounts point to the same NFS server, it may also help to allow one connection to an NFS server to be shared for several of the mounts. This is automatic on SLES 11 SP1 and above, but on SLES 10 it is configured with the command:
sysctl -w sunrpc.max_shared=10
and if you want to ensure it is in effect without rebooting:
echo 10 > /proc/sys/sunrpc/max_shared
NOTE: This feature was introduced to SLES in November 2008, so a kernel update may be needed on some older systems. Valid values are 1 to 65535. The default is 1, which means no sharing takes place. A number such as 10 means that 10 mounts can share the same connection. While it could be set very high, 10 or 20 should be sufficient. Going higher than necessary is not recommended, as too many mounts sharing the same connection can cause performance degradation.
2. Another option is to use automount (autofs) to mount nfs file systems when they are actually needed, rather than trying to have everything mount at the same time. However, even with automount, if an application is launched which suddenly starts accessing hundreds of paths at once, the same problem could come up.
3. Another option would be to switch to NFS v4. This newer version of the NFS specification uses fewer ports, for two reasons:
a. It only connects to one port on the server (instead of 3 to 5 ports, as NFS v3 will)
b. It does a better job of using one connection for multiple activities. NFS v4 can be requested by the Linux NFS client system by specifying mount type "nfs4". This can be placed in the NFS mount entry in the /etc/fstab file, or can be specified on a mount command with "-t nfs4". Note, however, that there are significant differences in how NFS v3 and v4 are implemented and used, so this change is not trivial or transparent.
4. A customized script could also be used to mount the volumes after boot, at a reasonable pace. For example, /etc/init.d/after.local could be created and designed to mount a certain number of nfs shares, then sleep for some time, then mount more, etc.
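A minimal sketch of such a script, assuming the NFS entries in /etc/fstab are marked noauto so they are not mounted at boot (the batch size and pause are arbitrary and should be tuned to the environment):

#!/bin/bash
# Mount NFS file systems from /etc/fstab in small batches to avoid sunrpc port exhaustion.
BATCH=25        # mounts per batch
PAUSE=15        # seconds to wait between batches
count=0
awk '$1 !~ /^#/ && $3 == "nfs" { print $2 }' /etc/fstab | while read -r mountpoint; do
    mount "$mountpoint" || echo "mount of $mountpoint failed" >&2
    count=$((count + 1))
    if [ $((count % BATCH)) -eq 0 ]; then
        sleep "$PAUSE"
    fi
done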
5. If none of the above options are suitable or help to the degree necessary, the last resort would be to change the range of ports in use. This is controlled with:
sysctl -w sunrpc.min_resvport=<value>
sysctl -w sunrpc.max_resvport=<value>
and can be checked (and set) on the fly in the proc area:
/proc/sys/sunrpc/min_resvport
/proc/sys/sunrpc/max_resvport
On SUSE Linux, these normally default to 665 and 1023. Either the min or max (or both) can be changed, but there can be consequences of each:
a. Letting more ports be used for RPC (NFS and other things) increases the risk that a port is already taken when another service wants to acquire it. Competing services may fail to launch. To see what services can potentially use various ports, see the information in /etc/services. Note: That file is a list of possible services and their port usage, not necessarily currently active services.
b. Ports above 1023 are considered “non-privileged” or “insecure” ports. In other words, they can be used by non-root processes, and therefore they are considered less trustworthy. If an NFS Client starts making requests from ports 1024 or above, some NFS Servers may reject those requests. On a Linux NFS Server, you can tell it to accept requests from insecure ports by exporting it with the option, “insecure”.
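For example, on a Linux NFS server an export that accepts client requests from non-privileged source ports might look like this (path and network are placeholders):

# /etc/exports entry allowing requests from ports above 1023
/export/data   192.0.2.0/24(rw,sync,insecure,no_subtree_check)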

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

Re: Re: message templates

That stuff is pretty basic and can hardly be used for notifying end users.

We are currently writing our own system, like others have already done (and mentioned, here or in the Google list).

The key limitations we found (6.5.5.x):

– Only the /ifs path is reported; end users would like to see the share name or the NFS mount point.

– Assume you would like to generate an advisory message at 90% of the quota limit: you can insert the current usage and the advisory threshold, but not the final hard quota limit. So it is not possible to produce this clear message: "now reached 90% of 10TB, available: 1TB".

Other observations:

– Often, regular reports on storage consumption by a group of users need to be created for dept. leaders etc. This is more than a quota notification system could provide, so it has to be implemented anyway and then could also provide the simpler notifications.

– Assigning shares per user and setting *directory* quotas (container flag!) on these shares gives end users a very simple way to query consumption and limits at any time, directly on the mounted shares. That reduces the need for elaborate notifications down to simple emergency notifications (as available through the built-in template message system). On the other hand, standard user quotas can't be queried by clients on Isilon (at least not with NFS).
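For example, with the container flag set on a directory quota, the quota limit is what clients see as the size of the mounted file system, so an end user can check usage and limit with nothing more than df (the mount point is a placeholder):

df -h /mnt/myshare     # the reported size reflects the directory quota limit, not the whole cluster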

If you have more specific questions about the built-in template message system, just feel free to ask.

Peter

Related:

Re: Unity FS question

In VNX the following command is used to export a CIFS to NFS



server_export server_2 -Protocol nfs -option access=x.x.x.x,rw=x.x.x.x,root=x.x.x.x -comment 'Example Export NFS' /root_vdm_1/path1/path2



But how can it be done in UNITY?



I converted the NAS SERVER into multiprotocol,



uemcli -u admin -p <yourpassword> /net/nas/server -name Myname set -mpSharingEnabled yes -unixDirectoryService local



but it did not solve the problem; I still get an access error!



Could you help me, please?

Related:

7021759: TCP and UDP Port Values Used for Reflection Connections

Important Security Notes:

  • Creating a secure network environment is a complex task involving many custom elements designed to fit your individual network environment and security needs. The information provided in this note does not include all necessary security options for your environment. This information is designed only to provide Reflection customers with a framework on which to start building individual security environments.
  • When configuring a firewall, be as restrictive as possible. Open only ports and IP protocols that are necessary for the connection types you intend to use, and be specific about whether the connection should be incoming or outgoing. The direction of the connection depends on where the connection is initiated and the protocol in use. For example, an active FTP connection requires the initiating computer to have outgoing port 21/tcp (command channel) and incoming port 20/tcp (data channel) open.
  • The lists below specify only default Reflection ports. Depending on your network environment, you may need to configure additional port values.

Port Values

The tables below detail the port values for service protocols supported by the following Reflection applications. Whether or not the port should be configured for incoming or outgoing data depends on where the connection is initiated from and your network configuration.

Reflection Windows-Based Applications—Includes Reflection for IBM, Reflection for UNIX and OpenVMS, Reflection for ReGIS Graphics, and Reflection for HP.

Reflection PC X Server and NFS Client Applications—Includes Reflection X, Reflection NFS Client.

Reflection Components—Includes Reflection FTP, Reflection TimeSync, Reflection Line Printer Daemon (LPD), Reflection Ping.

The values used by Reflection are IANA (Internet Assigned Numbers Authority) and other standard values.

Reflection Windows-Based Applications

The following ports and service protocols are used in Reflection for IBM, Reflection for UNIX and OpenVMS, Reflection for ReGIS Graphics, and Reflection for HP.

Application abbreviation key:

RIBM – Reflection for IBM

RUO – Reflection for UNIX and OpenVMS

RRG – Reflection for ReGIS Graphics

RHP – Reflection for HP
Port / IP Protocol | Service Protocol | Comment | Used by
20/tcp | FTP-data | Data channel | RUO, RRG, RHP
21/tcp | FTP | Command channel | RUO, RRG, RHP
22/tcp | SSH | Secure Shell, sftp, scp | RUO, RRG, RHP
23/tcp | Telnet | Telnet; TN3270; TN3270; TN5250 | RIBM, RUO, RRG, RHP
42/tcp | Nameserver | Hostname to IP address | RIBM, RUO, RRG, RHP
53/udp/tcp | DNS | Domain Name Services | RIBM, RUO, RRG, RHP
80/tcp | HTTP | Unsecure HTTP via Reflection Web Launch and Reflection for the Web | RIBM, RUO, RRG, RHP
88/udp/tcp | Kerberos | Kerberos authentication | RIBM, RUO, RRG, RHP
443/udp/tcp | https | Secure http via Reflection Web Launch and Reflection for the Web | RIBM, RUO, RRG, RHP
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | RIBM, RUO, RRG, RHP
513/tcp | login | rlogin | RUO, RRG, RHP
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | RIBM, RUO, RRG, RHP
992/tcp | telnet | SSL-secured Telnet | RIBM, RUO, RRG, RHP
1080/udp/tcp | socks | SOCKS | RIBM, RUO, RRG, RHP
1024-5000 | VAXLINK2 FFT | Fast file transfer | RUO, RRG, RHP
1530, 1537 | NS/VT | Network Services, Virtual Terminal | RUO, RRG, RHP
1649/udp/tcp | kermit | Kermit file transfer | RUO, RRG, RHP
8471/tcp | lipi | AS/400 LIPI file transfer | RIBM
8476/tcp | lipi | AS/400 signon server port | RIBM
8478/tcp | ehntfw | AS/400 EHNTFW file transfer | RIBM
30000-40000 | PCLINK FFT | Fast file transfer | RUO, RRG, RHP

Reflection PC X Server and NFS Client Applications

The following ports and service protocols are used in Reflection X and Reflection NFS Client.

Note the following:

  • Reflection X XDMCP broadcasts and Reflection NFS Client connections do not use well-known port numbers and cannot be used through a firewall.
  • Beginning in version 14.1, the Reflection NFS Client is no longer available.

Application abbreviation key:

RX – Reflection X

NFS – Reflection NFS Client
Port / IP Protocol | Service Protocol | Comment | Used by
22/tcp | SSH | Secure Shell, sftp, scp | RX
23/tcp | Telnet | Telnet; TN3270; TN3270; TN5250 | RX
42/tcp | Nameserver | Hostname to IP address | RX, NFS
53/udp/tcp | DNS | Domain Name Services | RX, NFS
80/tcp | HTTP | Unsecure HTTP via Reflection Web Launch and Reflection for the Web | RX
88/udp/tcp | Kerberos | Kerberos authentication | RX
111 | Sunrpc | Portmapper | NFS
177/udp | XDMCP Broadcast | X Display Manager | RX
443/udp/tcp | https | Secure http via Reflection Web Launch and Reflection for the Web | RX
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | RX
512/tcp | exec | rexec | RX
513/tcp | login | rlogin | RX
514/tcp | shell | rsh | RX
635/udp | mount | NFS mount service | NFS
640/udp | pcnfs | PC-NFS DOS authentication | NFS
731/udp, 733/udp | ypserv | NIS server and binder processes | NFS
732/tcp | ypserv | NIS server and binder processes | NFS
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | RX
1080/udp/tcp | socks | SOCKS | RX
2049/udp/tcp | nfsd | NFS file service | NFS
6000/tcp | X Protocol | Incoming ports for RX clients | RX
7000/tcp | fs | X font server | RX
7100/tcp | xfs | X font server | RX

Reflection Components

The following ports and service protocols are used in Reflection FTP, Reflection TimeSync, Reflection Line Printer Daemon (LPD), and Reflection Ping.

Note: Beginning in version 14.1, the following components are no longer available: TimeSync, LPD, and Ping. If you have any of these utilities installed on your system, they are removed when you upgrade to 14.1.

Component abbreviation key:

RFTP – Reflection FTP

LPD – Reflection Line Printer Daemon
Port / IP Protocol | Service Protocol | Comment | Used by
7/icmp | Echo | Data echo | Ping
20/tcp | FTP-data | Data channel | RFTP
21/tcp | FTP | Command channel | RFTP
22/tcp | SSH | Secure Shell, sftp, scp | RFTP
37/udp/tcp | Time | Timeserver | TimeSync
42/tcp | Nameserver | Hostname to IP address | RFTP, TimeSync, LPD, Ping
53/udp/tcp | DNS | Domain Name Services | RFTP, TimeSync, LPD, Ping
88/udp/tcp | Kerberos | Kerberos authentication | RFTP
123/udp | NTP | Network Time Protocol | TimeSync
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | RFTP
515/tcp | printer | spooler | LPD
520/udp | route | routed | Ping
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | RFTP
1080/udp/tcp | socks | SOCKS | RFTP

Related:

How to Roam Linux User Profile Through Network File System

Configuration overview

The following configurations are required to implement user profile roaming through NFS mechanism:

  • Configuring NFS Server
  1. Install required NFS packages
  2. Enable/start required services
  3. Configure Firewall
  4. Export shared directories
  • Configuring NFS Client
  1. Install required NFS packages
  2. Mount NFS shares on client
  3. Configure IDMAP

Note that a real example based on the RHEL 7.2 distribution is used to elaborate how to set up the configuration for each step in the following sections. This article also applies to the other supported distributions, such as CentOS, SUSE and Ubuntu; however, the package and service names mentioned below may differ slightly, and this article does not cover those differences.

Configuring NFS Server

  1. Install required NFS packages

Install the nfs-utils and libnfsidmap packages on the NFS server using the following command:

yum install nfs-utils libnfsidmap
  2. Enable/start required services

Enable rpcbind and nfs-server services, using the following commands:

systemctl enable rpcbind
systemctl enable nfs-server

Activate the following four services using the following commands:

systemctl start rpcbind
systemctl start nfs-server
systemctl start rpc-statd
systemctl start nfs-idmapd

Additional details about the services mentioned above:

  • rpcbind — The rpcbind server converts RPC program numbers into universal addresses.
  • nfs-server — It enables the clients to access NFS shares.
  • rpc-statd — NFS file locking. Implements file lock recovery when an NFS server crashes and reboots.
  • nfs-idmapd — Translates user and group IDs into names, and user and group names into IDs.
  3. Set up firewall configuration

We need to configure the firewall on the NFS server to allow clients to access the NFS shares. To do that, run the following commands on the NFS server:

firewall-cmd --permanent --zone public --add-service mountd
firewall-cmd --permanent --zone public --add-service rpc-bind
firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload
  4. Export shared directories

There are two sub steps in this section.

  • Specify shared directory and its attributes in /etc/exports.
  • Export shared directory using command “exportfs -r”

Specify shared directory and its attributes in /etc/exports.

Example:

To share the directory /home on the NFS server with the NFS client 10.150.152.167, add the following line to /etc/exports:

/home 10.150.152.167(rw,sync,no_root_squash)

Note that:

/home — the directory being shared on the NFS server

10.150.152.167 — IP address of the NFS client

rw,sync,no_root_squash — export options:

  1. rw – read/write permission to the shared folder
  2. sync – all changes to the filesystem are immediately flushed to disk
  3. no_root_squash – By default, any file request made by user root on the client machine is treated as if it were made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is specified, root on the client machine has the same level of access to the files on the server as root on the server.

We can get all options in the man page (man exports)

Export shared directory using command “exportfs -r”

Execute the command "exportfs -r" in a shell on the NFS server to export the shared directory.

We can also use the command "exportfs -v" to list all shared directories.

More details on exportfs commands:

exportfs -v : Displays a list of shared files and export options on a server

exportfs -a : Exports all directories listed in /etc/exports

exportfs -u : Un-export one or more directories

exportfs -r : Re-export all directories after modifying /etc/exports

Configuring NFS Client

  1. Install required NFS packages

Install the nfs-utils package using the following command.

yum install nfs-utils
  2. Mount NFS shares on the client

There are two different ways to mount the exported directories.

  • Use command “mount” to manually mount the directories.
  • Update /etc/fstab to mount the directories at boot time.

Use the “mount” command to manually mount directories.

Example:

The command to mount the remote directory /home on 10.150.138.34 to the local mount point /home/GZG6N is as follows ("options" is a placeholder for a comma-separated list of mount options and can be omitted to use the defaults):

mount -t nfs -o options 10.150.138.34:/home /home/GZG6N

Note that:

10.150.138.34 – IP address of the NFS server

/home – Shared directory on NFS server

/home/GZG6N – Local mount point

Update /etc/fstab to mount the directories at boot time.

Examples:

Add a line similar to the following to /etc/fstab.

10.150.138.34:/home /home/GZG6N nfs defaults 0 0

Then execute the command "mount -a" to mount all filesystems listed in fstab.

  3. Configure IDMAP

Update /etc/samba/smb.conf to make sure that each user has a unique UID across all the Linux VDAs. Add the following lines to the [global] section of the smb.conf file:

[Global]
idmap config * : backend = tdb
idmap config <DomainREALM> : backend = rid
idmap config <DomainREALM> : range = 100000-199999
idmap config <DomainREALM> : base_rid = 0
template homedir = /home/<DomainName>/%u

Now that all of the configuration is done, sessions can be launched normally from the Linux VDA, which is referred to as the NFS client in this article (its IP address is 10.150.152.167 in the example), while the user directory actually resides on the NFS server (its IP address is 10.150.138.34 in the example).

Related: