7023210: NFS clients getting ‘permission denied’, even when ownership and permissions are correct

This document (7023210) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 4 (SLES 11 SP4)

Situation

Machines acting as NFS clients mount NFS shares successfully and function correctly for a time, but occasionally begin getting “permission denied” errors when attempting operations within the NFS-mounted file system, even on actions as simple as “ls”. File system ownership and permissions are correct for the users to access the files and directories in question.
Packet traces of NFS activity taken while the problem is occurring show the RPC layer returning errors. Wireshark displays these errors as:
Reject State: AUTH_ERROR (1)
Auth State: bad credential (seal broken) (1)
The problem comes and goes.
While the problem is occurring, new attempts to mount something from the NFS server also fail with “permission denied.”

Resolution

The NFS server machine had intermittent access to DNS resolution. Therefore, the RPC layer could not always verify the host names belonging to the client IP addresses. Forward and reverse name resolution is important to NFS security. An NFS Server should be able to find A records (for name-to-address resolution) and PTR records (for address-to-name resolution) for any NFS client machine which will be accessing NFS shares.
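As a quick check (a sketch using a hypothetical client named nfsclient1.example.com with address 192.168.1.50), both directions can be tested from the NFS server:

host nfsclient1.example.com
host 192.168.1.50

The first lookup should return the client’s A record and the second its PTR record. If either lookup fails or times out only some of the time, the RPC layer’s host verification can fail in the same intermittent way.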
In the case in question, there was a DNS server which would not always answer requests. Other DNS servers were functioning reliably. The administrator took the faulty DNS server out of service, and the problem was corrected.
If correction of DNS behavior or contents is not immediately feasible, then adding entries to the NFS Server’s /etc/hosts file, for the NFS client machines in question, will also resolve the problem.
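For example (hypothetical values), a line such as the following in the NFS Server’s /etc/hosts covers both directions of the lookup for one client:

192.168.1.50   nfsclient1.example.com   nfsclient1

With the default “hosts: files dns” order in /etc/nsswitch.conf, this entry is consulted before DNS, so the RPC host checks are shielded from the unreliable DNS server.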

Additional Information

Most cases of missing DNS information are not intermittent, so a DNS failure (or lack of information) simply causes the NFS mount operation to fail. Since there are limited reasons that an NFS Server would deny a client’s request to mount, most cases of missing DNS information are discovered relatively quickly.
However, when the DNS problem is intermittent, clients may successfully mount an NFS share, but much later get “permission denied” while trying to use the already-mounted NFS file systems. This scenario is a bit harder to recognize, but the most important clue is whether the RPC layer is returning the auth and credential errors mentioned above in the “Situation” section.
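If it is not obvious whether this is happening, a packet capture taken on the NFS Server during a failure window can be examined for these rejections. A minimal sketch (the interface name and client address are hypothetical):

tcpdump -i eth0 -s 0 -w /tmp/nfs-auth.pcap host 192.168.1.50 and port 2049

The resulting file can then be opened in Wireshark and filtered on “rpc” to look for replies showing the Reject State AUTH_ERROR described in the “Situation” section.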

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7007308: NFS client cannot perform hundreds of NFS mounts in one batch

This document (7007308) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 2
SUSE Linux Enterprise Server 11 Service Pack 1
SUSE Linux Enterprise Server 10 Service Pack 3
SUSE Linux Enterprise Server 10 Service Pack 4

Situation

A system running SLES 10 or 11 has been set up to attempt hundreds of NFS v3 client mounts at the same time. This could be via the /etc/fstab file during boot, or via other methods after the machine is booted. However, the system does not complete the task of mounting the directories.

Resolution

NOTE: This Technical Information Document (TID) is somewhat outdated now, as newer kernels and other code in newer SLES 11 support packs have had changes implemented to streamline the NFS client port usage, and make this situation less likely to arise. For example, people on SLES 11 SP4 can typically perform hundreds or thousands of NFS mounts in a large batch, without running into this problem.
However, this TID is being preserved for now for those on older support packs, who may still benefit from it.
NFS is implemented on top of the “sunrpc” specification. There is a special range of client-side ports (665-1023) which are available for certain sunrpc operations. Every time an NFS v3 mount is performed, several of these client ports can be used. Any other process on the machine can conceivably use these as well. If too much NFS-related activity is being performed, then all these ports can be in use, or (when used for TCP) they can be in the process of closing and waiting their normal 60 seconds before being usable by another process. This type of situation is typically referred to as “port exhaustion”. While this port range can be modified, such changes are not recommended, because of potential side effects (discussed in item #5 below).
In this scenario, the port exhaustion is happening because too many NFS client ports are being used to talk to an NFS Server’s portmapper daemon and/or mount daemon. There are several approaches to consider that can help resolve this issue:
1. The simplest solution is usually to add some options to each NFS client mount request, in order to reduce port usage. The additional mount options would be:
proto=tcp,mountproto=udp,port=2049
Those can be used in an fstab file, or directly on mount commands as some of the “-o” options. Note that successful usage of these options may depend on having the system up to date. SLES 10 SP3 and above, or SLES 11 SP1 and above, are recommended.
To explain them in more detail:
The option “proto=tcp” (the NFS transport protocol) ensures that the NFS protocol (after a successful mount) will use TCP connections. Adding this setting is not mandatory (it is actually the default) but is mentioned to differentiate it from the second option, “mountproto=udp”.
The option “mountproto=udp” causes the initial mount request itself to use UDP. By using UDP instead of TCP, the port used briefly for this operation can be reused immediately instead of having to wait 60 seconds. This does not affect which transport protocol (TCP or UDP) NFS itself will use. It only affects some brief work done during the initial mount of the NFS share. (In NFS v3, the mount protocol and daemon are separate from the NFS protocol and daemon.)
The option “port=2049” tells the procedure to expect the server’s NFS daemon to be listening on port 2049. Knowing this up front eliminates the usage of an additional TCP connection. That connection would have been to the sunrpc portmapper, which would have been used to confirm where NFS is listening. Usage of port 2049 for the NFS protocol is standard, so there is normally no need to confirm it through portmapper.
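For example (hypothetical server name and paths), an /etc/fstab entry using these options could look like:

nfsserver1:/export/data /mnt/data nfs proto=tcp,mountproto=udp,port=2049 0 0

or, as a one-time mount command:

mount -t nfs -o proto=tcp,mountproto=udp,port=2049 nfsserver1:/export/data /mnt/data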
If many of the mounts point to the same NFS server, it may also help to allow one connection to an NFS server to be shared for several of the mounts. This is automatic on SLES 11 SP1 and above, but on SLES 10 it is configured with the command:
sysctl -w sunrpc.max_shared=10
The same value can also be written directly through the proc interface:
echo 10 > /proc/sys/sunrpc/max_shared
NOTE: This feature was introduced to SLES in November 2008, so a kernel update may be needed on some older systems. Valid values are 1 to 65535. The default is 1, which means no sharing takes place. A number such as 10 means that 10 mounts can share the same connection. While it could be set very high, 10 or 20 should be sufficient. Going higher than necessary is not recommended, as too many mounts sharing the same connection can cause performance degradation.
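To keep the setting across reboots, it can also be placed in /etc/sysctl.conf (a sketch; the value 10 is only an example), which is read during boot:

sunrpc.max_shared = 10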
2. Another option is to use automount (autofs) to mount nfs file systems when they are actually needed, rather than trying to have everything mount at the same time. However, even with automount, if an application is launched which suddenly starts accessing hundreds of paths at once, the same problem could come up.
3. Another option would be to switch to NFS v4. This newer version of the NFS specification uses fewer ports, for two reasons:
a. It only connects to one port on the server (instead of 3 to 5 ports, as NFS v3 does).
b. It does a better job of using one connection for multiple activities. NFS v4 can be requested by the Linux NFS client system by specifying mount type “nfs4”. This can be placed in the NFS mount entry in the /etc/fstab file, or can be specified on a mount command with “-t nfs4”. Note, however, that there are significant differences in how NFS v3 and v4 are implemented and used, so this change is not trivial or transparent.
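For example (hypothetical names), an /etc/fstab entry and the equivalent mount command for NFS v4 could look like:

nfsserver1:/export/data /mnt/data nfs4 defaults 0 0

mount -t nfs4 nfsserver1:/export/data /mnt/data

Keep in mind that the path accepted for a v4 mount may differ from the v3 path, depending on how the server’s v4 pseudo-root is exported.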
4. A customized script could also be used to mount the volumes after boot, at a reasonable pace. For example, /etc/init.d/after.local could be created and designed to mount a certain number of nfs shares, then sleep for some time, then mount more, etc.
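A minimal sketch of such a script follows, assuming the NFS entries are listed in /etc/fstab with the noauto option (so they are skipped at boot) and using example values for the batch size and delay:

#!/bin/sh
# /etc/init.d/after.local - mount noauto NFS entries from /etc/fstab in small batches
count=0
awk '$3 ~ /^nfs/ && $4 ~ /noauto/ {print $2}' /etc/fstab | while read mountpoint; do
    mount "$mountpoint"
    count=$((count + 1))
    # pause after every 50 mounts so client-side RPC ports have time to be released
    if [ $((count % 50)) -eq 0 ]; then
        sleep 60
    fi
done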
5. If none of the above options are suitable or help to the degree necessary, the last resort would be to change the range of ports in use. This is controlled with:
sysctl -w sunrpc.min_resvport=<value>
sysctl -w sunrpc.max_resvport=<value>
and can be checked (and set) on the fly in the proc area:
/proc/sys/sunrpc/min_resvport
/proc/sys/sunrpc/max_resvport
On SUSE Linux, these normally default to 665 and 1023. Either the min or max (or both) can be changed, but there can be consequences to each:
a. Letting more ports be used for RPC (NFS and other things) increases the risk that a port is already taken when another service wants to acquire it. Competing services may fail to launch. To see what services can potentially use various ports, see the information in /etc/services. Note: That file is a list of possible services and their port usage, not necessarily currently active services.
b. Ports above 1023 are considered “non-privileged” or “insecure” ports. In other words, they can be used by non-root processes, and therefore they are considered less trustworthy. If an NFS Client starts making requests from ports 1024 or above, some NFS Servers may reject those requests. On a Linux NFS Server, you can tell it to accept requests from insecure ports by exporting it with the option, “insecure”.
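For example (the values shown are illustrative only), the upper end of the range could be raised with:

sysctl -w sunrpc.max_resvport=2000

Since ports above 1023 are non-privileged, a Linux NFS Server receiving requests from them would need the corresponding export marked accordingly in /etc/exports, for instance:

/export/data 192.168.1.0/24(rw,sync,insecure)

followed by exportfs -r on the server to re-read the exports file.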

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7021759: TCP and UDP Port Values Used for Reflection Connections

Important Security Notes:

  • Creating a secure network environment is a complex task involving many custom elements designed to fit your individual network environment and security needs. The information provided in this note does not include all necessary security options for your environment. This information is designed only to provide Reflection customers with a framework on which to start building individual security environments.
  • When configuring a firewall, be as restrictive as possible. Open only ports and IP protocols that are necessary for the connection types you intend to use, and be specific about whether the connection should be incoming or outgoing. The direction of the connection depends on where the connection is initiated and the protocol in use. For example, an active FTP connection requires the initiating computer to have outgoing port 21/tcp (command channel) and incoming port 20/tcp (data channel) open.
  • The lists below specify only default Reflection ports. Depending on your network environment, you may need to configure additional port values.

Port Values

The tables below detail the port values for service protocols supported by the following Reflection applications. Whether or not the port should be configured for incoming or outgoing data depends on where the connection is initiated from and your network configuration.

Reflection Windows-Based Applications—Includes Reflection for IBM, Reflection for UNIX and OpenVMS, Reflection for ReGIS Graphics, and Reflection for HP.

Reflection PC X Server and NFS Client Applications—Includes Reflection X, Reflection NFS Client.

Reflection Components—Includes Reflection FTP, Reflection TimeSync, Reflection Line Printer Daemon (LPD), Reflection Ping.

The values used by Reflection are IANA (Internet Assigned Numbers Authority) values and other standard values.

Reflection Windows-Based Applications

The following ports and service protocols are used in Reflection for IBM, Reflection for UNIX and OpenVMS, Reflection for ReGIS Graphics, and Reflection for HP.

Application abbreviation key:

RIBM – Reflection for IBM

RUO – Reflection for UNIX and OpenVMS

RRG – Reflection for ReGIS Graphics

RHP – Reflection for HP
Port / IP Protocol | Service Protocol | Comment | Used by
20/tcp | FTP-data | Data channel | RUO, RRG, RHP
21/tcp | FTP | Command channel | RUO, RRG, RHP
22/tcp | SSH | Secure Shell, sftp, scp | RUO, RRG, RHP
23/tcp | Telnet | Telnet; TN3270; TN3270; TN5250 | RIBM, RUO, RRG, RHP
42/tcp | Nameserver | Hostname to IP address | RIBM, RUO, RRG, RHP
53/udp/tcp | DNS | Domain Name Services | RIBM, RUO, RRG, RHP
80/tcp | HTTP | Unsecure HTTP via Reflection Web Launch and Reflection for the Web | RIBM, RUO, RRG, RHP
88/udp/tcp | Kerberos | Kerberos authentication | RIBM, RUO, RRG, RHP
443/udp/tcp | https | Secure http via Reflection Web Launch and Reflection for the Web | RIBM, RUO, RRG, RHP
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | RIBM, RUO, RRG, RHP
513/tcp | login | rlogin | RUO, RRG, RHP
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | RIBM, RUO, RRG, RHP
992/tcp | telnet | SSL-secured Telnet | RIBM, RUO, RRG, RHP
1080/udp/tcp | socks | SOCKS | RIBM, RUO, RRG, RHP
1024-5000 | VAXLINK2 FFT | Fast file transfer | RUO, RRG, RHP
1530, 1537 | NS/VT | Network Services, Virtual Terminal | RUO, RRG, RHP
1649/udp/tcp | kermit | Kermit file transfer | RUO, RRG, RHP
8471/tcp | lipi | AS/400 LIPI file transfer | RIBM
8476/tcp | lipi | AS/400 signon server port | RIBM
8478/tcp | ehntfw | AS/400 EHNTFW file transfer | RIBM
30000-40000 | PCLINK FFT | Fast file transfer | RUO, RRG, RHP

Reflection PC X Server and NFS Client Applications

The following ports and service protocols are used in Reflection X and Reflection NFS Client.

Note the following:

  • Reflection X XDMCP broadcasts and Reflection NFS Client connections do not use well-known port numbers and cannot be used through a firewall.
  • Beginning in version 14.1, the Reflection NFS Client is no longer available.

Application abbreviation key:

RX – Reflection X

NFS – Reflection NFS Client
Port / IP Protocol | Service Protocol | Comment | Used by
22/tcp | SSH | Secure Shell, sftp, scp | RX
23/tcp | Telnet | Telnet; TN3270; TN3270; TN5250 | RX
42/tcp | Nameserver | Hostname to IP address | RX, NFS
53/udp/tcp | DNS | Domain Name Services | RX, NFS
80/tcp | HTTP | Unsecure HTTP via Reflection Web Launch and Reflection for the Web | RX
88/udp/tcp | Kerberos | Kerberos authentication | RX
111 | Sunrpc | Portmapper | NFS
177/udp | XDMCP Broadcast | X Display Manager | RX
443/udp/tcp | https | Secure http via Reflection Web Launch and Reflection for the Web | RX
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | RX
512/tcp | exec | rexec | RX
513/tcp | login | rlogin | RX
514/tcp | shell | rsh | RX
635/udp | mount | NFS mount service | NFS
640/udp | pcnfs | PC-NFS DOS authentication | NFS
731/udp, 733/udp | ypserv | NIS server and binder processes | NFS
732/tcp | ypserv | NIS server and binder processes | NFS
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | RX
1080/udp/tcp | socks | SOCKS | RX
2049/udp/tcp | nfsd | NFS file service | NFS
6000/tcp | X Protocol | Incoming ports for RX clients | RX
7000/tcp | fs | X font server | RX
7100/tcp | xfs | X font server | RX

Reflection Components

The following ports and service protocols are used in Reflection FTP, Reflection TimeSync, Reflection Line Printer Daemon (LPD), and Reflection Ping.

Note: Beginning in version 14.1, the following components are no longer available: TimeSync, LPD, and Ping. If you have any of these utilities installed on your system, they are removed when you upgrade to 14.1.

Component abbreviation key:

RFTP – Reflection FTP

LPD – Reflection Line Printer Daemon
Port / IP Protocol | Service Protocol | Comment | Used by
7/icmp | Echo | Data echo | Ping
20/tcp | FTP-data | Data channel | RFTP
21/tcp | FTP | Command channel | RFTP
22/tcp | SSH | Secure Shell, sftp, scp | RFTP
37/udp/tcp | Time | Timeserver | TimeSync
42/tcp | Nameserver | Hostname to IP address | RFTP, TimeSync, LPD, Ping
53/udp/tcp | DNS | Domain Name Services | RFTP, TimeSync, LPD, Ping
88/udp/tcp | Kerberos | Kerberos authentication | RFTP
123/udp | NTP | Network Time Protocol | TimeSync
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | RFTP
515/tcp | printer | spooler | LPD
520/udp | route | routed | Ping
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | RFTP
1080/udp/tcp | socks | SOCKS | RFTP

Related:

How to Roam Linux User Profile Through Network File System

Configuration overview

The following configurations are required to implement user profile roaming through the NFS mechanism:

  • Configuring NFS Server
  1. Install required NFS packages
  2. Enable/start required services
  3. Configure Firewall
  4. Export shared directories
  • Configuring NFS Client
  1. Install required NFS packages
  2. Mount NFS shares on client
  3. Configure IDMAP

Note that a real example based on the RHEL 7.2 distribution is used to illustrate each configuration step in the following sections. This article also applies to other supported distributions, such as CentOS, SUSE, and Ubuntu; however, the package and service names mentioned below may differ slightly, and those differences are not covered here.

Configuring NFS Server

  1. Install required NFS packages

Install nfs-utils and libnfsidmap packages on NFS server using the following command:

yum install nfs-utils libnfsidmap
  2. Enable/start required services

Enable rpcbind and nfs-server services, using the following commands:

systemctl enable rpcbind
systemctl enable nfs-server

Activate the following four services using the following commands:

systemctl start rpcbind
systemctl start nfs-server
systemctl start rpc-statd
systemctl start nfs-idmapd

Additional details about the services mentioned above:

  • rpcbind — The rpcbind server converts RPC program numbers into universal addresses.
  • nfs-server — It enables the clients to access NFS shares.
  • rpc-statd — NFS file locking. Implements file lock recovery when an NFS server crashes and reboots.
  • nfs-idmap — It translates user and group ids into names, and translates user and group names into ids.
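Once started, the services can be verified with commands such as the following (a quick sketch):

systemctl status rpcbind nfs-server rpc-statd nfs-idmapd
rpcinfo -p localhost

The rpcinfo output should list the portmapper, mountd, and nfs programs once the server has registered them.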
  3. Set up firewall configuration

We need to configure the firewall on the NFS server to allow client access to the NFS shares. To do that, run the following commands on the NFS server:

firewall-cmd --permanent --zone public --add-service mountd
firewall-cmd --permanent --zone public --add-service rpc-bind
firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload
  4. Export shared directories

There are two sub steps in this section.

  • Specify shared directory and its attributes in /etc/exports.
  • Export shared directory using command “exportfs -r”

Specify shared directory and its attributes in /etc/exports.

Example:

To share directory /home in NFS server with NFS client “10.150.152.167”, we need to add the following line to /etc/exports

/home 10.150.152.167(rw,sync,no_root_squash)

Note that:

/home — directory name in NFS server

10.150.152.167 — IP address of NFS client

rw,sync,no_root_squash — directory attributes

  1. rw – read/write permission to the shared folder
  2. sync – all changes to the filesystem are immediately flushed to disk
  3. no_root_squash – By default, any file request made by user root on the client machine is treated as if made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user “nobody” on the server, not the client.) If no_root_squash is specified, then root on the client machine will have the same level of access to the files on the system as root on the server.

We can get all options in the man page (man exports)

Export shared directory using command “exportfs -r”

Execute the command “exportfs -r” to export the shared directory on the NFS server’s shell.

We can also use the command “exportfs -v” to get a list of all shared directories.

More details on exportfs commands:

exportfs -v : Displays a list of shared files and export options on a server

exportfs -a : Exports all directories listed in /etc/exports

exportfs -u : Un-export one or more directories

exportfs -r : Re-export all directories after modifying /etc/exports
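As a further check (using the example NFS server address from the client section of this article), the client can list what the server exports to it:

showmount -e 10.150.138.34

which should show /home as available to the configured client address.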

Configuring NFS Client

  1. Install required NFS packages

Install the nfs-utils package using the following command.

yum install nfs-utils
  2. Mount NFS shares on the client

There are two different ways to mount the exported directories.

  • Use command “mount” to manually mount the directories.
  • Update /etc/fstab to mount the directories at boot time.

Use the “mount” command to manually mount directories.

Example:

The command to mount the remote directory /home on 10.150.138.34 to the local mount point /home/GZG6N is as follows:

mount -t nfs -o <options> 10.150.138.34:/home /home/GZG6N

Note that:

10.150.138.34 – IP address of NFS server

/home – Shared directory on NFS server

/home/GZG6N – Local mount point

Update /etc/fstab to mount the directories at boot time.

Examples:

Add a line similar to the following to /etc/fstab.

10.150.138.34:/home /home/GZG6N nfs defaults 0 0

Then execute the command “mount -a” to mount all filesystems mentioned in fstab.

  3. Configure IDMAP

Update /etc/samba/smb.conf to make sure that each user has a unique UID across all the Linux VDAs. Add the following lines to [global] section in the smb.conf file:

[Global]
idmap config * : backend = tdb
idmap config <DomainREALM> : backend = rid
idmap config <DomainREALM> : range = 100000-199999
idmap config <DomainREALM> : base_rid = 0
template homedir = /home/<DomainName>/%u

Now that all the configuration has been done, sessions can be launched normally from the Linux VDA referred to in this article as the NFS client (IP address 10.150.152.167 in the example), while the user directory is actually located on the NFS server (IP address 10.150.138.34 in the example).

Related:

Re: NFS export from ECS via load balancer

We have an ECS and it is placed behind a load balancer. The S3 protocol works flawlessly via the load balancer, i.e. we use only the load balancer IP address to make REST calls. The problem is with the NFS exports that we have configured in ECS. Earlier we were just using one of the four ECS node IP addresses to mount the shares on a Linux host. After installing the load balancer, I edited the entries in /etc/fstab and changed the ECS node IP to the load balancer IP address. When I tried to mount, it didn’t work and the following error occurred:



clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)



Went back to IT and they changed something, probably added the ports in the firewall. Now I get a different error when trying to mount an NFS export,



mount.nfs: access denied by server while mounting 192.163.47.48:/namespace1/bucket1/



What can be done to resolve this error? Should I make some changes to the user/group mapping on ECS? Could it be due to the fact that the users in the User/Group mapping on ECS are not present in the load balancer?



Any help is much appreciated.

Related:

Re: ECS Community Ed. Single-Node NFS

I figured out my problem- CentOS 7 was running “rpcbind” in its main scope (outside docker), but ECS’s Docker container runs its own version of rpcbind and ECS appears to communicate only with that one. The giveaway was running “rpcinfo -p” in the CentOS shell (not inside the docker container) and finding no NFS services.

To disable the CentOS global rpcbind process, “systemctl disable rpcbind” seems to do the trick. I shut down the docker and rebooted the CentOS VM for good measure, then after initialization I was able to mount it up (also reinstalled the whole ECS single-node instance before figuring that out):

root@apollo:~# mount -v ecs.lan:/spir/nfstest1/ /mnt2 -o vers=3,sec=sys,proto=tcp

mount.nfs: timeout set for Thu Feb 23 23:54:42 2017

mount.nfs: trying text-based options 'vers=3,sec=sys,proto=tcp,addr=192.168.0.5'

mount.nfs: prog 100003, trying vers=3, prot=6

mount.nfs: trying 192.168.0.5 prog 100003 vers 3 prot TCP port 2049

mount.nfs: prog 100005, trying vers=3, prot=6

mount.nfs: trying 192.168.0.5 prog 100005 vers 3 prot TCP port 2049

root@apollo:~# df -k

Filesystem 1K-blocks Used Available Use% Mounted on

ecs.lan:/spir/nfstest1/ 4278190080 33029632 4245160448 1% /mnt2





When you have this set up correctly, running “rpcinfo -p” on the CentOS shell throws an error:

[root@ecs ~]# rpcinfo -p

rpcinfo: can't contact portmapper: RPC: Remote system error - No such file or directory

But running rpcinfo -p inside a docker shell works, and port 111 is definitely listening:

[root@ecs ~]# docker exec -ti ecsstandalone /bin/bash

ecs:/ # rpcinfo -p

program vers proto port service

100000 4 tcp 111 portmapper

100000 3 tcp 111 portmapper

100000 2 tcp 111 portmapper

100000 4 udp 111 portmapper

100000 3 udp 111 portmapper

100000 2 udp 111 portmapper

100005 3 tcp 2049 mountd

100005 3 udp 2049 mountd

100003 3 tcp 2049 nfs

100024 1 tcp 2049 status

100021 4 tcp 10000 nlockmgr

100021 4 udp 10000 nlockmgr

ecs:/ # exit

exit

[root@ecs ~]# netstat -nap | grep ':111 '

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 9788/rpcbind

tcp6 0 0 :::111 :::* LISTEN 9788/rpcbind

udp 0 0 0.0.0.0:111 0.0.0.0:* 9788/rpcbind

udp6 0 0 :::111 :::* 9788/rpcbind

Related:

Event ID 3004 — Port Mapper Startup Status

Event ID 3004 — Port Mapper Startup Status

Updated: January 27, 2011

Applies To: Windows Server 2008 R2

The RPC Port Mapper service enables UNIX-based computers to discover the UNIX-compatible services that are available on Windows-based computers.

Event Details

Product: Windows Operating System
ID: 3004
Source: portmap
Version: 6.1
Symbolic Name: EVENT_PORTMAP_DYNAMIC_TRANSPORTS_NOT_YET_REGISTERED
Message: Windows(R) RPC Port Mapper service has completed starting but has not yet registered any interfaces. Interface registration for IPv6 UDP interfaces is dynamic and is performed as a background operation. These registrations should be completed shortly. An event will be logged when the registration is completed.

Until this registration is complete, any Network File System (NFS) clients attempting to use RPC Port Mapper (also known as Portmap and Rpcbind) to discover NFS protocols on this server may be unable to locate the local Server for NFS and so will not be able to access any files hosted on this server.

If no registrations for the portmap service itself happen within a few seconds, verify that this server is configured properly for networking, stop the RPC Port Mapper service and then restart the Server for NFS manually (which will also start RPC Port Mapper).

Resolve
Configure the computer for networking and restart Portmap

To configure networking and restart the RPC Port Mapper service:

  1. Open Network and Sharing Center in Control Panel.
  2. In the left pane, click Manage network connections.
  3. Configure a new network connection or reconfigure an existing connection to ensure the computer is properly configured for networking.
  4. Open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  5. Type net start portmap.

Verify

To verify the startup status of the RPC Port Mapper service:

  1. Open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  2. Type devmgmt.msc.
  3. On the View menu, click Show hidden devices.
  4. Expand the Non-Plug and Play Drivers node.
  5. Right-click Server for NFS Open RPC (ONRPC) Portmapper and click Properties.
  6. On the Drivers tab, the Status indicates the current state of the service.

Related Management Information

Port Mapper Startup Status

File Services

Related:

Event ID 3002 — Port Mapper Startup Status

Event ID 3002 — Port Mapper Startup Status

Updated: January 27, 2011

Applies To: Windows Server 2008 R2

The RPC Port Mapper service enables UNIX-based computers to discover the UNIX-compatible services that are available on Windows-based computers.

Event Details

Product: Windows Operating System
ID: 3002
Source: portmap
Version: 6.1
Symbolic Name: EVENT_PORTMAP_FAILED_REGISTER_TCP
Message: Windows(R) failed a request to register RPC Port Mapper on TCP port 111. RPC Port Mapper cannot start.

Network File System (NFS) clients use RPC Port Mapper (also known as Portmap and Rpcbind) to discover NFS protocols on remote servers. Without RPC Port Mapper, Server for NFS cannot start and NFS clients cannot access files on this server.

Verify that no other software is registered on TCP port 111, then start Server for NFS manually (which will also start RPC Port Mapper).

Resolve
Free port 111 and restart Portmap

To make port 111 available and restart the RPC Port Mapper service:

  1. Open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  2. Type netstat -a -b -o.
  3. Find the process that is listening on port 111. Then, either unload the driver using Device Manager, stop the service using net stop or the Services snap-in, or close the program.
  4. Type net start portmap to restart the Port Mapper service.

 

Verify

To verify the startup status of the RPC Port Mapper service:

  1. Open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  2. Type devmgmt.msc.
  3. On the View menu, click Show hidden devices.
  4. Expand the Non-Plug and Play Drivers node.
  5. Right-click Server for NFS Open RPC (ONRPC) Portmapper and click Properties.
  6. On the Drivers tab, the Status indicates the current state of the service.

Related Management Information

Port Mapper Startup Status

File Services

Related:

Event ID 1076 — NFS Port Registration

Event ID 1076 — NFS Port Registration

Updated: January 27, 2011

Applies To: Windows Server 2008 R2

Network File System (NFS) clients discover NFS servers by querying the port mapper for a remote server. RPC Port Mapper converts RPC data into TCP and UDP protocol port numbers. It must be active for Server for NFS to start.

Event Details

Product: Windows Operating System
ID: 1076
Source: NfsServer
Version: 6.1
Symbolic Name: EVENT_NFS_ENDPOINT_CREATION_PARTIAL_FAILURE
Message: Server for NFS could not create all the endpoints configured. Server for NFS will attempt to continue but some NFS clients may not function properly.

Server for NFS creates RPC endpoints to receive requests from Network File System (NFS) clients. Without any RPC endpoints Server for NFS will not be able to receive any requests on this computer.

Resolve
Free ports and restart Server for NFS

To make TCP/IP ports available and restart Server for NFS:

  1. Open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  2. Type netstat -a -b -o to display all connections with their associated executables and processes.
  3. Unload unnecessary drivers, stop unnecessary services, or close unnecessary programs.
  4. Type net start nfssvc to start Server for NFS.

Verify

To verify that Server for NFS has successfully registered all protocols:

  1. Open a command prompt with elevated privileges. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
  2. Type rpcinfo to determine the ports and transports that Server for NFS uses.
  3. In the list, verify that the following services are present on both IPv4 and IPv6 (if used):
  • mountd
  • nfs
  • nlockmgr
  • status

Related Management Information

NFS Port Registration

File Services

Related: