Related articles
NFS/Troubleshooting
From Wikipedia:
Network File System (NFS) is a distributed file system protocol originally developed by Sun
Microsystems in 1984, allowing a user on a client computer to access files over a network in a
manner similar to how local storage is accessed.
Note: NFS is not encrypted. Tunnel NFS through an encrypted protocol like Kerberos, or tinc when dealing
with sensitive data.
Contents
1 Installation
2 Configuration
o 2.1 Server
2.1.1 Miscellaneous
o 2.2 Client
2.2.1 Manual mounting
2.2.2 Cron
2.2.3 systemd/Timers
2.2.4 NetworkManager dispatcher
3 Troubleshooting
4 See also
Installation
Both client and server only require the installation of the nfs-utils package.
It is highly recommended to use a time sync daemon to keep client/server clocks in sync. Without accurate
clocks on all nodes, NFS can introduce unwanted delays.
Configuration
Server
The NFS server needs a list of exports (shared directories) which are defined in /etc/exports . NFS shares
defined in /etc/exports are relative to the so-called NFS root. A good security practice is to define an
NFS root in a discrete directory tree under the server's root file system which will keep users limited to that
mount point. Bind mounts are used to link the share mount point to the actual directory elsewhere on the
filesystem.
/etc/fstab
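A minimal sketch of such a bind mount entry, assuming the actual data lives under a hypothetical /mnt/music and the NFS root is /srv/nfs:

```
# Bind the real directory into the NFS root (paths are illustrative)
/mnt/music   /srv/nfs/music   none   bind   0 0
```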
Note: NFS will honor the permissions on the server's filesystem, so ensure that connecting users have
the desired access.
Note: ZFS filesystems require special handling of bindmounts, see ZFS#Bind mount.
Add directories to be shared and limit them to a range of addresses via a CIDR or hostname(s) of client
machines that will be allowed to mount them in /etc/exports :
/etc/exports
/srv/nfs 192.168.1.0/24(rw,fsid=root)
/srv/nfs/music 192.168.1.0/24(rw,nohide) # note the nohide option, which is applied to mounted directories on the file system
It should be noted that modifying /etc/exports while the server is running will require a re-export for
changes to take effect:
# exportfs -rav
For more information about all available options see exports(5).
Miscellaneous
Optional configuration
Advanced configuration options can be set in /etc/nfs.conf . Users setting up a simple configuration may
not need to edit this file.
By default, starting nfs-server.service will listen for connections on all network interfaces, regardless of
/etc/exports . This can be changed by defining which IPs and/or hostnames to listen on.
/etc/nfs.conf
[nfsd]
host=192.168.1.123
# Alternatively, you can use your hostname.
# host=myhostname
Even though idmapd may be running, it may not be fully enabled. Verify that cat
/sys/module/nfsd/parameters/nfs4_disable_idmapping returns N . If it does not:
# echo "N" | tee /sys/module/nfsd/parameters/nfs4_disable_idmapping
Set this to survive reboots by adding an option to the nfs kernel module:
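For example, a modprobe configuration file along these lines (the filename is illustrative) keeps idmapping enabled across reboots:

```
# /etc/modprobe.d/nfsd.conf (filename is an assumption; any .conf file in modprobe.d works)
options nfsd nfs4_disable_idmapping=0
```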
If journalctl reports errors when starting nfs-server.service and nfs-idmapd.service, then the static
port configuration below may be the solution.
Note: Since nfs-utils 2.1.1, this configuration should be done in /etc/nfs.conf.
Users needing support for NFSv3 clients, may wish to consider using static ports. By default, for NFSv3
operation rpc.statd and lockd use random ephemeral ports; in order to allow NFSv3 operations through
a firewall static ports need to be defined. Edit /etc/sysconfig/nfs to set STATDARGS :
/etc/sysconfig/nfs
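As a sketch, the ports below are arbitrary unprivileged examples; pick any free static ports and keep them consistent with the firewall rules:

```
# Static listening and outgoing ports for rpc.statd (port numbers are examples)
STATDARGS="-p 32765 -o 32766"
```

lockd is a kernel module, so its static ports are set through module options rather than /etc/sysconfig/nfs, for example:

```
# /etc/modprobe.d/lockd.conf (filename and ports are examples)
options lockd nlm_udpport=32768 nlm_tcpport=32768
```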
rpc.mountd should consult /etc/services and bind to the same static port 20048 under normal
operation; however, if it needs to be explicitly defined, edit /etc/sysconfig/nfs to set RPCMOUNTDARGS :
/etc/sysconfig/nfs
RPCMOUNTDARGS="-p 20048"
After making these changes, several services need to be restarted; the first writes the configuration options
out to /run/sysconfig/nfs-utils (see /usr/lib/systemd/scripts/nfs-utils_env.sh ), the second
restarts rpc.statd with the new ports, the last reloads lockd (kernel module) with the new ports. Restart
these services now: nfs-config , rpcbind , rpc-statd , and nfs-server .
After the restarts, use rpcinfo -p on the server to verify the static ports are as expected. Using rpcinfo
-p <server IP> from the client should reveal the exact same static ports.
NFSv2 compatibility
Note: Since nfs-utils 2.1.1, this configuration should be done in /etc/nfs.conf.
Users needing to support clients using NFSv2 (for example U-Boot), should set RPCNFSDARGS="-V 2" in
/etc/sysconfig/nfs .
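That is, /etc/sysconfig/nfs would contain:

```
# Enable serving NFSv2 clients (e.g. U-Boot)
RPCNFSDARGS="-V 2"
```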
Firewall configuration
To enable access through a firewall, tcp and udp ports 111, 2049, and 20048 need to be opened when using
the default configuration; use rpcinfo -p to examine the exact ports in use on the server. To configure this
for iptables, execute these commands:
# iptables -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 20048 -j ACCEPT
To have this configuration load on every system start, edit /etc/iptables/iptables.rules to include the
following lines:
/etc/iptables/iptables.rules
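A sketch of the rules for the default ports (tcp and udp 111, 2049, and 20048):

```
# Allow NFS-related traffic on the default ports
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 20048 -j ACCEPT
```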
Note: Generating this file with iptables-save will override the current iptables start configuration
with the currently loaded rules!
If using NFSv3 and the above listed static ports for rpc.statd and lockd , these also need to be added to
the configuration:
/etc/iptables/iptables.rules
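Assuming the example static ports 32765-32768 from above (adjust the range to whatever ports were actually configured):

```
# Allow the static NFSv3 ports for rpc.statd and lockd (example range)
-A INPUT -p tcp -m tcp --dport 32765:32768 -j ACCEPT
-A INPUT -p udp -m udp --dport 32765:32768 -j ACCEPT
```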
If using a v4-only setup, only TCP port 2049 needs to be opened, so only one line is needed:
/etc/iptables/iptables.rules
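For example:

```
# NFSv4-only: a single rule for TCP port 2049 suffices
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
```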
Client
Users intending to use NFS4 with Kerberos, also need to start and enable nfs-client.target , which starts
rpc-gssd.service . However, due to bug FS#50663 in glibc, rpc-gssd.service currently fails to start.
Adding the "-f" (foreground) flag in the service is a workaround, e.g. via a drop-in file (path is illustrative):
/etc/systemd/system/rpc-gssd.service.d/override.conf
[Unit]
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/rpc.gssd -f
Users seeing the message "Dependency failed for pNFS block layout mapping daemon." may consider
turning off the service using systemd's masking feature. Example:
# systemctl mask nfs-blkmap.service
Manual mounting
For NFSv3 use this command to show the server's exported file systems:
$ showmount -e servername
For NFSv4 mount the root NFS directory and look around for available mounts:
# mount server:/ /mountpoint/on/client
Then mount omitting the server's NFS export root:
# mount -t nfs -o vers=4 servername:/music /mountpoint/on/client
If the mount fails, try including the server's export root (required for Debian/RHEL/SLES; some distributions
need -t nfs4 instead of -t nfs ):
# mount -t nfs -o vers=4 servername:/srv/nfs/music /mountpoint/on/client
Note: Server name needs to be a valid hostname (not just IP address). Otherwise mounting of remote share
will hang.
Using fstab is useful for a server which is always on, and the NFS shares are available whenever the client
boots up. Edit /etc/fstab file, and add an appropriate line reflecting the setup. Again, the server's NFS
export root is omitted.
/etc/fstab
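A sketch of such an entry, assuming a server named servername exporting music under its NFS root (names and options are illustrative):

```
# Mount the share at boot; _netdev defers mounting until the network is up
servername:/music   /mountpoint/on/client   nfs   defaults,timeo=900,retrans=5,_netdev   0 0
```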
Another method is using the systemd automount service. This is a better option than _netdev , because it
remounts the network device quickly when the connection is broken and restored. As well, it solves the
problem from autofs, see the example below:
/etc/fstab
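For example (server name and timeout values are illustrative):

```
# systemd automount: mounted on first access, unmounted after 1 minute idle
servername:/home   /mountpoint/on/client   nfs   _netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=10,timeo=14,x-systemd.idle-timeout=1min   0 0
```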
One might have to reboot the client to make systemd aware of the changes to fstab. Alternatively, try
reloading systemd and restarting mountpoint-on-client.automount to reload the /etc/fstab
configuration.
Tip:
The noauto mount option will not mount the NFS share until it is accessed: use auto for it to be
available immediately.
If experiencing any issues with the mount failing due to the network not being up/available, enable
NetworkManager-wait-online.service . It will ensure that network.target has all the links
available prior to being active.
The users mount option would allow user mounts, but be aware that it implies further options such as
noexec .
The x-systemd.idle-timeout=1min option will unmount the NFS share automatically after 1
minute of non-use. Good for laptops which might suddenly disconnect from the network.
Note: Users trying to automount a NFS-share via systemd which is mounted the same way on the server
may experience a freeze when handling larger amounts of data.
Using autofs is useful when multiple machines want to connect via NFS; they could both be clients as well
as servers. The reason this method is preferable over the earlier one is that if the server is switched off, the
client will not throw errors about being unable to find NFS shares. See autofs#NFS network mounts for
details.
In order to get the most out of NFS, it is necessary to tune the rsize and wsize mount options to meet the
requirements of the network configuration.
In recent Linux kernels (>2.6.18) the size of I/O operations allowed by the NFS server (default max block
size) varies depending on RAM size, with a maximum of 1M (1048576 bytes). The server's max block size
will be used even if NFS clients request a bigger rsize and wsize . See
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/5.8_Technical_Notes/Known_Issues-kernel.html
It is possible to change the default max block size allowed by the server by writing to the
/proc/fs/nfsd/max_block_size file before starting nfsd. For example, the following command restores the
previous default iosize of 32k:
# echo 32768 > /proc/fs/nfsd/max_block_size
Users making use of systemd-networkd might notice NFS mounts in fstab are not mounted when booting;
errors like the following are common:
The solution is simple: force systemd to wait for the network to be completely configured by enabling
systemd-networkd-wait-online.service . In theory this slows down the boot process because fewer
services run in parallel.
This trick is useful for laptops that require nfs shares from a local wireless network. If the nfs host becomes
unreachable, the nfs share will be unmounted to hopefully prevent system hangs when using the hard
mount option. See https://bbs.archlinux.org/viewtopic.php?pid=1260240#p1260240
Make sure that the NFS mount points are correctly indicated in /etc/fstab :
$ cat /etc/fstab
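A sketch, assuming a host named nfsserver (note the noauto and user options discussed below):

```
# NFS share mounted on demand by the auto_share script, not at boot
nfsserver:/mnt/data   /mnt/data   nfs   noauto,noatime,user   0 0
```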
Note: You must use hostnames in /etc/fstab for this to work, not IP addresses.
The noauto mount option tells systemd not to automatically mount the shares at boot. systemd would
otherwise attempt to mount the nfs shares that may or may not exist on the network causing the boot
process to appear to stall on a blank screen.
In order to mount NFS shares with non-root users the user option has to be added.
Create the auto_share script that will be used by cron or systemd/Timers to use ICMP ping to check if the
NFS host is reachable:
/usr/local/bin/auto_share
#!/bin/bash

function net_umount {
umount -l -f $1 &>/dev/null
}

function net_mount {
mountpoint -q $1 || mount $1
}

# Mount points of the NFS shares listed in /etc/fstab
# (lines of the form server:/export /mountpoint nfs ...)
NET_MOUNTS=$(sed -e '/^.*#/d' -e '/^.*:/!d' -e 's/\t/ /g' /etc/fstab | tr -s " " | cut -f2 -d" ")

server_ok=()
server_notok=()

for MOUNT_POINT in $NET_MOUNTS; do
SERVER=$(sed -e '/^.*#/d' -e '/^.*:/!d' -e 's/\t/ /g' /etc/fstab | tr -s " " | grep "$MOUNT_POINT" | cut -f1 -d":")

# Check if the server is reachable
ping -c 1 "${SERVER}" &>/dev/null

if [ $? -ne 0 ]; then
server_notok[${#server_notok[@]}]=$SERVER
# The server could not be reached, unmount the share
net_umount $MOUNT_POINT
else
server_ok[${#server_ok[@]}]=$SERVER
# The server is up, make sure the share is mounted
net_mount $MOUNT_POINT
fi
done
Note: If you want to test using a TCP probe instead of ICMP ping (NFSv4 uses TCP port 2049 by default),
then replace the line:
ping -c 1 "${SERVER}" &>/dev/null
with:
# Check if the server is reachable
timeout 1 bash -c ": < /dev/tcp/${SERVER}/2049"
Make the script executable:
# chmod +x /usr/local/bin/auto_share
Create a cron entry or a systemd/Timers timer to check every minute whether the servers of the shares are
reachable.
Cron
# crontab -e
* * * * * /usr/local/bin/auto_share
systemd/Timers
# /etc/systemd/system/auto_share.timer
[Unit]
Description=Check the network mounts
[Timer]
OnCalendar=*-*-* *:*:00
[Install]
WantedBy=timers.target
# /etc/systemd/system/auto_share.service
[Unit]
Description=Check the network mounts
[Service]
Type=simple
ExecStart=/usr/local/bin/auto_share
# systemctl enable auto_share.timer
A systemd unit file can also be used to mount the NFS shares at startup. The unit file is not necessary if
NetworkManager is installed and configured on the client system. See #NetworkManager dispatcher.
/etc/systemd/system/auto_share.service
[Unit]
Description=NFS automount
After=syslog.target network.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/auto_share
[Install]
WantedBy=multi-user.target
NetworkManager dispatcher
In addition to the method described previously, NetworkManager can also be configured to run a script on
network status change: Enable and start the NetworkManager-dispatcher.service .
The easiest method for mounting shares on network status change is to just symlink to the auto_share script:
# ln -s /usr/local/bin/auto_share /etc/NetworkManager/dispatcher.d/30-nfs.sh
However, in that particular case unmounting will happen only after the network connection has already
been disabled, which is unclean and may result in effects like freezing of KDE Plasma applets.
The following script safely unmounts the NFS shares before the relevant network connection is disabled by
listening for the pre-down and vpn-pre-down events:
Note: This script ignores mounts with the noauto option.
/etc/NetworkManager/dispatcher.d/30-nfs.sh
#!/bin/bash

# Only act on the desired connection; find its UUID with "nmcli con show".
WANTED_CON_UUID="CHANGE-ME-NOW-9c7eff15-010a-4b1c-a786-9b4efa218ba9"

if [[ "$CONNECTION_UUID" == "$WANTED_CON_UUID" ]]; then
# Script parameter $2: the dispatched event
case "$2" in
"up")
mount -a -t nfs4,nfs
;;
"pre-down");&
"vpn-pre-down")
umount -l -a -t nfs4,nfs >/dev/null
;;
esac
fi
Make the script executable with chmod and create a symlink inside
/etc/NetworkManager/dispatcher.d/pre-down to catch the pre-down events:
# ln -s /etc/NetworkManager/dispatcher.d/30-nfs.sh /etc/NetworkManager/dispatcher.d/pre-down.d/30-nfs.sh
The above script can be modified to mount different shares (even other than NFS) for different connections.