2. Once this has been completed, right-click the newly added storage and select "Set as Default".
3. Once the default has been set, the previous Storage Repository can be removed.
We have been looking into issues with NFS file copies during a node reboot, such as during a rolling upgrade.
EMC states:
"OneFS 8: NFSv4 failover -- with the introduction of NFSv4 failover, when a client's virtual IP address moves, or a OneFS group change event occurs, the client application will continue without disruption. As such, no unexpected I/O error will propagate back up to the client application. In OneFS 8.0, both NFSv3 and NFSv4 clients can now use dynamic [...]"
In working with EMC, it looks like this can be affected by the Linux kernel, the file size, and SmartConnect.
To summarize with CentOS: it works with kernel 3.10.0-514.el7.x86_64 but fails with 3.10.0-862.el7.x86_64 when copying a 5 GB file and rebooting a node.
Per EMC, we are going to verify that the issue also happens with Red Hat and then open a call.
Has anyone else seen this?
————————- Detailed explanation from EMC ———————————
The NFS SME wanted to go back over everything before getting Engineering involved. In our original packet capture from a few weeks ago, he found the smoking gun that explains the client behavior.
The pcaps indicate that a client running the affected kernel isn't properly supplying the file handle during the PUTFH operation.
****** Here’s the connection from node 2’s perspective before failover is induced: ******
Isilon-dev-2.lagg1_09072018_134025.pcap
*****************
1029 12.559173 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 270 V4 Call OPEN_CONFIRM
1030 12.559263 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 142 V4 Reply (Call In 1029) OPEN_CONFIRM
*****************
1031 12.559376 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 302 V4 Call SETATTR FH: 0xa577051b
1032 12.561461 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 318 V4 Reply (Call In 1031) SETATTR
1122 12.931344 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 31926 V4 Call WRITE StateID: 0xcddd Offset: 0 Len: 1048576[TCP segment of a reassembled PDU]
1136 12.936254 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 206 V4 Reply (Call In 1122) WRITE
tshark -r Isilon-dev-2.lagg1_09072018_134025.pcap -O nfs -Y "frame.number==1029"
Frame 1029: 270 bytes on wire (2160 bits), 270 bytes captured (2160 bits)
Ethernet II, Src: Vmware_84:2c:6f (00:50:56:84:2c:6f), Dst: Broadcom_77:c4:f0 (00:0a:f7:77:c4:f0)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 2215
Internet Protocol Version 4, Src: XX.XXX.XXX..16, Dst: XX.XXX.XXX..30
Transmission Control Protocol, Src Port: 872, Dst Port: 2049, Seq: 877, Ack: 765, Len: 200
Remote Procedure Call, Type:Call XID:0x40a1d0c1
Network File System, Ops(2): PUTFH, OPEN_CONFIRM
[Program Version: 4]
[V4 Procedure: COMPOUND (1)]
Tag: <EMPTY>
length: 0
contents: <EMPTY>
minorversion: 0
Operations (count: 2): PUTFH, OPEN_CONFIRM
************************
Opcode: PUTFH (22)
filehandle
length: 53
[hash (CRC-32): 0xa577051b]
filehandle: 011f0000000200c50201000000ffffffff00000000020000…
************************
Opcode: OPEN_CONFIRM (20)
stateid
[StateID Hash: 0xc32a]
seqid: 0x00000001
Data: 019842390100000000000000
[Data hash (CRC-32): 0x57d33b9b]
seqid: 0x00000001
[Main Opcode: OPEN_CONFIRM (20)]
************* Here is where the connection fails after moving over to node 3. ************
We see that the client re-establishes the connection after failover, but it fails to provide a file handle in the PUTFH operation. This is why the cluster returns "NFS4ERR_BADHANDLE" at that point.
Isilon-dev-3.lagg1_09072018_134025.pcap
*****************
25390 28.650376 XX.XXX.XXX..16 → XX.XXX.XXX..30 NFS 214 V4 Call OPEN_CONFIRM
25391 28.650455 XX.XXX.XXX..30 → XX.XXX.XXX..16 NFS 118 V4 Reply (Call In 25390) PUTFH Status: NFS4ERR_BADHANDLE
*****************
tshark -r Isilon-dev-3.lagg1_09072018_134025.pcap -O nfs -Y "frame.number==25390"
Frame 25390: 214 bytes on wire (1712 bits), 214 bytes captured (1712 bits)
Ethernet II, Src: Vmware_84:2c:6f (00:50:56:84:2c:6f), Dst: QlogicCo_a5:54:00 (00:0e:1e:a5:54:00)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 2215
Internet Protocol Version 4, Src: XX.XXX.XXX..16, Dst: XX.XXX.XXX..30
Transmission Control Protocol, Src Port: 772, Dst Port: 2049, Seq: 782969877, Ack: 42129, Len: 144
Remote Procedure Call, Type:Call XID:0xd0a5d0c1
Network File System, Ops(2): PUTFH, OPEN_CONFIRM
[Program Version: 4]
[V4 Procedure: COMPOUND (1)]
Tag: <EMPTY>
length: 0
contents: <EMPTY>
minorversion: 0
Operations (count: 2): PUTFH, OPEN_CONFIRM
********************
Opcode: PUTFH (22)
filehandle
length: 0
*********************
Opcode: OPEN_CONFIRM (20)
stateid
[StateID Hash: 0x5a23]
seqid: 0x00000001
Data: 013830d0ac03000000000000
[Data hash (CRC-32): 0xc2270b06]
seqid: 0x00000003
[Main Opcode: OPEN_CONFIRM (20)]
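If it helps to scan the whole capture rather than stepping through individual frames, a display filter on the filehandle length should surface every PUTFH call that arrives with an empty handle. This assumes a reasonably current tshark whose NFS dissector exposes the nfs.fh.length field; adjust the field name if your build differs.
tshark -r Isilon-dev-3.lagg1_09072018_134025.pcap -Y "nfs.fh.length == 0"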
They consider this fairly definitive evidence that the client kernel's behavior is something Isilon has no control over. We can create a knowledge base article to raise awareness of the issue, but according to the L3 engineer it would not be escalated to Engineering based on these findings.
Best regards,
Technical Support Engineer, Global Support Center
——————-
——————- My test notes —————
I used VMware Player with a three-node OneFS 8.0.0.7 simulator and a CentOS 7 client. The networking was isolated within VMware Player, with no external network access.
Copying small files (10 MB) mounted via NFSv4 to the SmartConnect IP and rebooting the node worked. The file copy would pause on one file, then pick up, and the copy would continue. All files checked out with MD5.
Copying small files (10 MB) mounted via a node's IP (we used the same IP we received in the previous example) and rebooting the node DID NOT work. The file copy would pause on one file, then pick up, and the copy would continue. All files checked out with MD5 except for the one that failed.
Copying a large file (5 GB) mounted via NFSv4 to the SmartConnect IP and rebooting the node did NOT work. We got an I/O error.
Copying a large file (5 GB) mounted via a node's IP (we used the same IP we received in the previous example) and rebooting the node did NOT work. We got an I/O error.
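For reference, here is a minimal sketch of how the large-file test can be reproduced; the SmartConnect name, export path, and mount point below are placeholders, not the actual lab values.
mount -t nfs -o vers=4 sc-zone.example.local:/ifs/data /mnt/isilon   # mount over NFSv4 via the SmartConnect name
dd if=/dev/urandom of=/root/test5g.bin bs=1M count=5120              # create a 5 GB test file
cp /root/test5g.bin /mnt/isilon/ &                                   # start the copy, then reboot one node mid-transfer
wait
md5sum /root/test5g.bin /mnt/isilon/test5g.bin                       # compare checksums (or observe the I/O error)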
——————————————
Hope your solutions architect has been helping you out.
As an aside, let us keep in mind that "vanilla" NFS servers, in particular the "plain vanilla" flavours, respond very fast in certain scenarios because they CHEAT on the NFS protocol:
Where NFS demands a guarantee that file creates, chmod/chattr, and certain "stable" data writes are only acknowledged as successful to the client once the operation has hit permanent storage (disk/SSD/NVRAM), "vanilla" NFS servers often adopt the "async" operation mode of the underlying local file system. Which means that things are only in RAM on the server, whereas the client would assume they were safely on disk.
As an alternative, vanilla servers offer full "sync" behaviour, which is usually far too slow for most uses.
In contrast, dedicated NAS systems make sure they buffer to RAM only where the NFS protocol allows it, and do wait for permanent storage where required.
(keywords: "stable" / "unstable" NFS writes)
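As a concrete illustration on a Linux ("vanilla") NFS server, this trade-off is set per export in /etc/exports; the paths and subnet below are placeholders, not a recommendation:
/export/data     192.168.1.0/24(rw,sync,no_subtree_check)    # protocol-compliant: reply only after data/metadata reach stable storage
/export/scratch  192.168.1.0/24(rw,async,no_subtree_check)   # faster, but the server may acknowledge writes that are still only in RAM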
makes sense?
— Peter
That stuff is pretty basic and can hardly be used for notifying end users.
We are currently writing our own system, as others have already done (and mentioned here or on the Google group).
The key limitations we found (6.5.5.x):
– Only the /ifs path is reported; end users would like to read the share name or the NFS mount point.
– Assume you would like to generate an advisory message at 90% of the quota limit: you can insert the current usage and the advisory threshold into the template, but not the final hard quota limit.
So it is not possible to produce a clear message like: "now reached 90% of 10 TB, available: 1 TB".
Other observations:
– Regular reports on storage consumption by a group of users often need to be created for department leaders etc.
This is more than a quota notification system can provide, so it has to be implemented anyway, and it can then also cover the simpler notifications.
– Assigning shares per user and setting *directory* quotas (container flag!) on these shares gives end users a very simple way to query consumption and limits at any time directly on the mounted shares (see the df sketch below). That reduces the need for elaborate notifications down to simple emergency notifications (as available through the built-in template message system).
On the other hand, standard user quotas can't be queried by clients on Isilon (at least not over NFS).
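For example, with a directory quota whose container flag is set, the quota hard limit is presented to the client as the size of the mounted share itself, so a plain df is enough to see usage and limit; the mount point and numbers below are made up for illustration:
df -h /mnt/home/user1
Filesystem               Size  Used Avail Use% Mounted on
isilon:/ifs/home/user1    10T  9.0T  1.0T  90% /mnt/home/user1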
If you have more specific questions about the built-in template message system, just feel free to ask.
Peter
The tables below detail the port values for the service protocols supported by the following Reflection applications. Whether a port should be configured for incoming or outgoing data depends on where the connection is initiated and on your network configuration.
The values used by Reflection are IANA (Internet Assigned Numbers Authority) assignments and other standard values.
The following ports and service protocols are used in Reflection for IBM, Reflection for UNIX and OpenVMS, Reflection for ReGIS Graphics, and Reflection for HP.
Application abbreviation key: RIBM = Reflection for IBM; RUO = Reflection for UNIX and OpenVMS; RRG = Reflection for ReGIS Graphics; RHP = Reflection for HP.
Port / IP Protocol | Service Protocol | Comment | RIBM | RUO | RRG | RHP
20/tcp | FTP-data | Data channel | X | X | X |
21/tcp | FTP | Command channel | X | X | X |
22/tcp | SSH | Secure Shell, sftp, scp | X | X | X |
23/tcp | Telnet | Telnet; TN3270; TN3270; TN5250 | X | X | X | X
42/tcp | Nameserver | Hostname to IP address | X | X | X | X
53/udp/tcp | DNS | Domain Name Services | X | X | X | X
80/tcp | HTTP | Unsecure HTTP via Reflection Web Launch and Reflection for the Web | X | X | X | X
88/udp/tcp | Kerberos | Kerberos authentication | X | X | X | X
443/udp/tcp | https | Secure HTTP via Reflection Web Launch and Reflection for the Web | X | X | X | X
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | X | X | X | X
513/tcp | login | rlogin | X | X | X |
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | X | X | X | X
992/tcp | telnet | SSL-secured Telnet | X | X | X | X
1080/udp/tcp | socks | SOCKS | X | X | X | X
1024-5000 | VAXLINK2 FFT | Fast file transfer | X | X | X |
1530, 1537 | NS/VT | Network Services, Virtual Terminal | X | X | X |
1649/udp/tcp | kermit | Kermit file transfer | X | X | X |
8471/tcp | lipi | AS/400 LIPI file transfer | X | | |
8476/tcp | lipi | AS/400 signon server port | X | | |
8478/tcp | ehntfw | AS/400 EHNTFW file transfer | X | | |
30000-40000 | PCLINK FFT | Fast file transfer | X | X | X |
The following ports and service protocols are used in Reflection X and Reflection NFS Client.
Application abbreviation key: RX = Reflection X; NFS = Reflection NFS Client.
Port / IP Protocol | Service Protocol | Comment | RX | NFS
22/tcp | SSH | Secure Shell, sftp, scp | X |
23/tcp | Telnet | Telnet; TN3270; TN3270; TN5250 | X |
42/tcp | Nameserver | Hostname to IP address | X | X
53/udp/tcp | DNS | Domain Name Services | X | X
80/tcp | HTTP | Unsecure HTTP via Reflection Web Launch and Reflection for the Web | X |
88/udp/tcp | Kerberos | Kerberos authentication | X |
111 | Sunrpc | Portmapper | X |
177/udp | XDMCP Broadcast | X Display Manager | X |
443/udp/tcp | https | Secure HTTP via Reflection Web Launch and Reflection for the Web | X |
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | X |
512/tcp | exec | rexec | X |
513/tcp | login | rlogin | X |
514/tcp | shell | rsh | X |
635/udp | mount | NFS mount service | X |
640/udp | pcnfs | PC-NFS DOS authentication | X |
731/udp, 733/udp | ypserv | NIS server and binder processes | X |
732/tcp | ypserv | NIS server and binder processes | X |
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | X |
1080/udp/tcp | socks | SOCKS | X |
2049/udp/tcp | nfsd | NFS file service | X |
6000/tcp | X Protocol | Incoming ports for RX clients | X |
7000/tcp | fs | X font server | X |
7100/tcp | xfs | X font server | X |
The following ports and service protocols are used in Reflection FTP, Reflection TimeSync, Reflection Line Printer Daemon (LPD), and Reflection Ping.
Note: Beginning in version 14.1, the following components are no longer available: TimeSync, LPD, and Ping. If you have any of these utilities installed on your system, they are removed when you upgrade to 14.1.
Component abbreviation key: RFTP = Reflection FTP; TimeSync = Reflection TimeSync; LPD = Reflection Line Printer Daemon; Ping = Reflection Ping.
Port / IP Protocol | Service Protocol | Comment | RFTP | TimeSync | LPD | Ping
7/icmp | Echo | Data echo | X | | |
20/tcp | FTP-data | Data channel | X | | |
21/tcp | FTP | Command channel | X | | |
22/tcp | SSH | Secure Shell, sftp, scp | X | | |
37/udp/tcp | Time | Timeserver | X | | |
42/tcp | Nameserver | Hostname to IP address | X | X | X | X
53/udp/tcp | DNS | Domain Name Services | X | X | X | X
88/udp/tcp | Kerberos | Kerberos authentication | X | | |
123/udp | NTP | Network Time Protocol | X | | |
443/udp/tcp | kpasswd | Kerberos password changing (kpasswd daemon) | X | | |
515/tcp | printer | spooler | X | | |
520/udp | route | routed | X | | |
749/udp/tcp | kerberos-adm | Kerberos password changing (v5passwdd daemon) | X | | |
1080/udp/tcp | socks | SOCKS | X | | |
Note that a real example based on a RHEL 7.2 distribution is used to illustrate the configuration for each step in the following sections. This article also applies to other supported distributions, such as CentOS, SUSE, and Ubuntu; however, the package and service names mentioned below may differ slightly, and this article does not cover those differences.
Install the nfs-utils and libnfsidmap packages on the NFS server using the following command:
yum install nfs-utils libnfsidmap
Enable the rpcbind and nfs-server services using the following commands:
systemctl enable rpcbind
systemctl enable nfs-server
Activate the following four services using the following commands:
systemctl start rpcbind
systemctl start nfs-server
systemctl start rpc-statd
systemctl start nfs-idmapd
Additional details about the services mentioned above:
rpcbind: maps RPC program numbers to the ports they listen on (the portmapper).
nfs-server: the NFS server itself, which serves the exported directories.
rpc-statd: the NSM status daemon used for NFSv3 lock recovery after a server or client restart.
nfs-idmapd: maps NFSv4 user and group names to local UIDs and GIDs.
We need to configure the firewall on the NFS server so that clients can access the NFS shares. To do that, run the following commands on the NFS server:
firewall-cmd --permanent --zone public --add-service mountd
firewall-cmd --permanent --zone public --add-service rpc-bind
firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload
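To confirm the rules took effect after the reload, you can list the services now enabled in the zone (a quick verification step, not part of the original procedure):
firewall-cmd --zone public --list-services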
There are two sub-steps in this section.
Specify the shared directory and its attributes in /etc/exports.
Example:
To share the directory /home on the NFS server with the NFS client 10.150.152.167, add the following line to /etc/exports:
/home 10.150.152.167(rw,sync,no_root_squash)
Note that:
/home: the directory being shared on the NFS server
10.150.152.167: the IP address of the NFS client
rw,sync,no_root_squash: the export options
All available options are described in the man page (man exports).
Export the shared directory using the command "exportfs -r".
Execute the command "exportfs -r" in a shell on the NFS server to export the shared directory.
We can also use the command "exportfs -v" to list all shared directories.
More details on exportfs commands:
exportfs -v : Displays a list of shared directories and their export options on the server
exportfs -a : Exports all directories listed in /etc/exports
exportfs -u : Un-export one or more directories
exportfs -r : Re-export all directories after modifying /etc/exports
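Putting the two sub-steps together with the example export above (the echo line simply appends the entry shown earlier; you can of course edit the file by hand instead):
echo '/home 10.150.152.167(rw,sync,no_root_squash)' >> /etc/exports   # append the export entry
exportfs -r    # re-export everything listed in /etc/exports
exportfs -v    # verify that /home now appears with its options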
Install the nfs-utils package using the following command.
yum install nfs-utils
There are two different ways to mount the exported directories.
Use the “mount” command to manually mount directories.
Example:
To mount the remote directory /home on 10.150.138.34 to the local mount point /home/GZG6N, use the following command:
mount -t nfs -o <options> 10.150.138.34:/home /home/GZG6N
Note that:
10.150.138.34: the IP address of the NFS server
/home: the shared directory on the NFS server
/home/GZG6N: the local mount point
<options>: optional mount options (for example, vers=4); omit "-o <options>" to use the defaults
Update /etc/fstab to mount the directories at boot time.
Example:
Add a line similar to the following to /etc/fstab.
10.150.138.34:/home /home/GZG6N nfs defaults 0 0
Then execute the command "mount -a" to mount all the filesystems listed in fstab.
Update /etc/samba/smb.conf to make sure that each user has a unique UID across all the Linux VDAs. Add the following lines to the [Global] section of the smb.conf file:
[Global]
idmap config * : backend = tdb
idmap config <DomainREALM> : backend = rid
idmap config <DomainREALM> : range = 100000-199999
idmap config <DomainREALM> : base_rid = 0
template homedir = /home/<DomainName>/%u
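To sanity-check the idmap configuration, resolving the same domain account on two different Linux VDAs should return the same UID; the domain and user names below are placeholders:
id 'EXAMPLE\user1'    # run on each VDA; the uid value should be identical everywhere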
Now that all of the configuration is complete, we can launch a session normally from the Linux VDA referred to in this article as the NFS client (IP address 10.150.152.167 in the example), while the user directory is actually located on the NFS server (IP address 10.150.138.34 in the example).