Getting Started with NFS Setup and Usage
2013-09-10 22:05:43

NFS (Network File System) lets directories on different hosts (even different operating systems) be shared over the network: a remote host's directory can be mounted via NFS and then accessed as if it were a local directory. In general, NFS is a convenient way to share files among Unix-like systems; for sharing between Unix-like systems and Windows you normally need Samba instead (although Windows has supported NFS mounts reasonably well since Windows 7). NFS is a distributed file system protocol originally developed by Sun Microsystems and released in 1984; its purpose is to let client hosts access files on a server over the network as if they were on local storage. Like many other protocols, NFS is built on top of the Open Network Computing Remote Procedure Call (ONC RPC) protocol. It is an open, standardized RFC protocol that anyone can implement.


NFS runs on top of Sun's RPC (Remote Procedure Call), which defines a system-independent way for processes to communicate, so an NFS server can also be viewed as an RPC server. Because NFS is an RPC service, its ports must be mapped before it can be used, and this mapping is handled by portmap. For example, when an NFS client issues an NFS request, it first needs a port number, which it obtains from portmap (not just NFS: every RPC service registers with portmap). Therefore the portmap service must be running before any RPC service such as NFS is started. (See the "Installing NFS" section below for how to check whether nfs and portmap are present on your system.)
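As a quick illustration, you can ask the port mapper which RPC programs are currently registered and on which ports (a simple check using the rpcinfo utility that ships with the portmap/rpcbind tooling):
$ rpcinfo -p localhost
Once the NFS services are running, entries such as portmapper, mountd and nfs should appear in the listing.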


Version history

NFSv1
NFSv1 was only used internally at Sun for experimental purposes. After the development team made substantial improvements on top of it, the protocol was released publicly as NFSv2.

NFSv2


NFSv2 was first implemented in SunOS 2.0 and released in 1985. It is defined in RFC 1094, published in March 1989.

NFSv2 initially ran only over UDP. Its designers intended the server side to be stateless, with mechanisms such as locking implemented outside the core protocol. This was a key decision: it made recovery from server failures simple. When a server becomes unavailable, all network clients freeze, but once it comes back every retried RPC, issued by the client-side stub, carries all the state it needs. This design allows UNIX applications to remain oblivious to server-side problems.

The virtual file system interface made it easy to implement the simple protocol in a modular way. By February 1986, implementations of NFSv2 existed for many operating systems, such as System V release 2, DOS, and VAX/VMS via Eunice. Because of its 32-bit limits, NFSv2 only allowed reading and writing the first 2 GB of a file.

NFSv3


Version 3 (RFC 1813, June 1995) added the following:
support for 64-bit file sizes and offsets, removing the 2 GB file-size limit;
support for asynchronous writes on the server, improving write performance;
extra file attributes in many replies, avoiding a separate fetch when those attributes are needed;
a READDIRPLUS operation that returns file handles and attributes together with directory entries.


A proposal for NFSv3 was floated inside Sun Microsystems shortly after NFSv2 was released, mainly to solve the performance problems of synchronous writes in NFSv2. By July 1992, implementations had resolved many of NFSv2's shortcomings, but the pressing issue of large-file support (64-bit file sizes and offsets) remained. This became a pain point for Digital Equipment Corporation, which was shipping a 64-bit version of Ultrix to support its newly released 64-bit RISC processor, the Alpha 21064. Around the time NFSv3 was introduced, vendors were increasingly adding TCP as a transport; some already supported TCP as a transport for NFS version 2, and Sun Microsystems added TCP transport support when it released NFSv3. Using TCP as the transport made it practical to run NFS across a WAN and allowed larger read and write units, beyond the 8 KB limit typical of UDP transfers.

NFSv4

The NFSv4 protocol (RFC 3010, December 2000; revised as RFC 3530, April 2003) borrows features from AFS (Andrew File System) and SMB/CIFS (Server Message Block). Its main improvements are better performance, mandated security, and a stateful protocol. Starting with NFSv4, protocol development is no longer led by Sun but is handled by the Internet Engineering Task Force (IETF).

NFSv4.1

NFSv4.1 (RFC 5661, January 2010) adds protocol support for parallel access to horizontally scalable clustered servers (the pNFS extension).

NFSv4.2

NFSv4.2 is currently under development.


Installing NFS
Debian/Ubuntu does not install an NFS server by default, so install the NFS server package first:
$ sudo apt-get install nfs-kernel-server
(When installing nfs-kernel-server, apt automatically installs nfs-common and portmap as well.)
The host machine now acts as the NFS server. Likewise, the target system, as the NFS client, needs the NFS client software; on Debian/Ubuntu that is nfs-common:
$ sudo apt-get install nfs-common
Both nfs-common and nfs-kernel-server depend on portmap!
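A quick sanity check that the packages are installed and the daemons have registered with the port mapper (a sketch; package names as above):
$ dpkg -l nfs-kernel-server nfs-common portmap
$ rpcinfo -p | grep -E 'portmapper|mountd|nfs'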

Files and commands related to NFS

1. /etc/exports
Access to NFS volumes is granted by exports, which enumerates the host names entitled to access file systems on the NFS server.

2. /sbin/exportfs
Maintains the NFS shares. It can be used to re-apply the directories shared in /etc/exports, to unexport directories shared by the NFS server, or to re-export them.

3. /usr/sbin/showmount
While exportfs is used on the NFS server side, showmount is mainly used on the client side; it displays the directory resources shared over NFS.

4. /var/lib/nfs/xtab
The NFS record file: it shows which clients have connected to the NFS host. The next few entries are not specific to NFS; they actually apply to all RPC services.

5. /etc/default/portmap
portmap is in fact responsible for mapping the ports of all RPC services; this file's contents are extremely simple (more on it below).

6. /etc/hosts.deny
Lists hosts that are denied the portmap service.

7. /etc/hosts.allow
Lists hosts that are allowed the portmap service.
 
Configuring NFS, part 1
Since NFS is an RPC server program and portmap manages the port-number mapping for RPC, configure portmap first.
 
Configuring portmap
Method 1: edit /etc/default/portmap and remove the -i 127.0.0.1 option.
 
Method 2: run sudo dpkg-reconfigure portmap and answer No to "Should portmap be bound to the loopback address?".
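For reference, the relevant line in /etc/default/portmap looks roughly like this (a sketch only; the exact variable name may differ between releases):
# Before: portmap only listens on the loopback interface
#OPTIONS="-i 127.0.0.1"
# After: leave the option out so portmap listens on all interfaces
OPTIONS=""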
 
Configuring /etc/hosts.deny
First deny every host the ability to make NFS connections to your NFS server by adding:
 
### NFS DAEMONS
portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
 
Configuring /etc/hosts.allow
Then allow only the hosts you want to connect to your NFS server. The entries below allow any host whose IP address starts with 192.168.2 to connect to the NFS server; specific IP addresses can be listed instead. See the man pages hosts_access(5) and hosts_options(5). Add:
 
### NFS DAEMONS
portmap:192.168.2.
lockd:192.168.2.
rquotad:192.168.2.
mountd:192.168.2.
statd:192.168.2.

/etc/hosts.deny and /etc/hosts.allow together control access to portmap, a bit like a mask: /etc/hosts.deny first denies everyone access to portmap, and /etc/hosts.allow then grants access back to selected hosts.
Run $ sudo /etc/init.d/portmap restart to restart the portmap daemon.
 
Configuring /etc/exports
The directories NFS exports, and their permissions, are defined in the /etc/exports file.
For example, to share the /home/freeoa/share directory under my home directory with the 192.168.2.* addresses, append one of the following lines to the end of the file:
/home/freeoa/share 192.168.2.*(rw,sync,no_root_squash)
or: /home/freeoa/share 192.168.2.0/24(rw,sync,no_root_squash)
NFS clients on the 192.168.2.* network can now access the NFS server's /home/freeoa/share directory with read and write permission, and a user entering /home/freeoa/share over NFS keeps root's identity.
It is best to include sync explicitly, otherwise 'exportfs -r' prints a warning; sync is the NFS default.

Run showmount -e to view the NFS server's export list.
After changing /etc/exports, run sudo exportfs -r to apply the update.
Run sudo /etc/init.d/nfs-kernel-server restart to restart the NFS service.
/etc/exports is, in effect, the core configuration file of the NFS server.
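Putting it together, a typical edit-and-apply cycle on the server looks like this (paths and addresses taken from the example above):
$ sudo vi /etc/exports                            # add or change the share line
$ sudo exportfs -r                                # re-export without restarting
$ showmount -e 192.168.2.1                        # confirm the export list
$ sudo /etc/init.d/nfs-kernel-server restart      # only needed for daemon-level changes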

Testing NFS
Try mounting the share yourself (assuming the local host's IP address is 192.168.2.1, mount /home/freeoa/share onto /mnt):
$ sudo mount 192.168.2.1:/home/freeoa/share /mnt
Run 'df' to check the result.

$ sudo umount /mnt
Mind the read/write permissions of the files you copy!
Mount options can also be supplied, for example:
mount -o nolock,rsize=1024,wsize=1024,timeo=15 192.168.2.130:/tmp/ /tmp/

Configuring NFS, part 2
Edit /etc/exports and list the directories to be shared, one share rule per line, in the form: shared-directory host(options)

For example:
/mnt/disk1 192.168.70.51(rw,sync,no_root_squash)
This rule shares the /mnt/disk1 directory read-write, with synchronous writes, with the host 192.168.70.51; if the user logging in to the NFS host is root, that user keeps the NFS host's root privileges. Some common options for NFS shares are listed below (a worked example follows the list):
rw: read and write permission;
ro: read-only permission;
no_root_squash: if the user logging in to the NFS host is root, that user keeps root's privileges;
root_squash: if the user accessing the exported directory is root, that user's privileges are squashed to those of an anonymous user, normally with both UID and GID mapped to nobody;
all_squash: every user logging in to the NFS host is remapped to nobody, whoever they are;
anonuid: map users logging in to the NFS host to the given user ID, which must exist in /etc/passwd;
anongid: same as anonuid, but for the group ID;
sync: data is written through to storage synchronously;
async: data is buffered in memory first rather than written straight to disk;
insecure: accept requests that originate from unprivileged ports (above 1024).
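As a worked example of the squash-related options, a read-only public share where every client user is mapped to a single local account might look like this (a sketch; the path is hypothetical, and uid/gid 65534 is assumed to be nobody/nogroup on Debian):
/srv/public 192.168.70.0/24(ro,sync,all_squash,anonuid=65534,anongid=65534)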

Each line of this file is simple: an exported path, a list of client names, and the access options that follow each client name:
[shared directory] [hostname or IP(option,option)]
The options are optional; when none are given, NFS uses its defaults. The default share options are sync,ro,root_squash,wdelay. When the hostname or IP address is left out, the directory is shared with any client. To share the same directory with several clients while giving each different permissions, write:
[shared directory] [host1 or IP1(option1,option2)] [host2 or IP2(option3,option4)]
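For instance, a single project directory could be writable for one trusted host and read-only for the rest of the subnet (hypothetical path and addresses):
/srv/project 192.168.2.10(rw,sync) 192.168.2.0/24(ro,sync)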
Some NFS share options:
ro: read-only access
rw: read-write access
sync: all data is written to the share at the time of the request
async: NFS may reply to requests before the data has been written
secure: NFS accepts requests only from secure TCP/IP ports below 1024
insecure: NFS accepts requests from ports above 1024
wdelay: if several users are writing to the NFS directory, group the writes together (default)
no_wdelay: write immediately instead of grouping; has no effect when async is used
hide: do not export subdirectories of the shared NFS directory
no_hide: export subdirectories of the NFS share
subtree_check: when exporting a subdirectory such as /usr/bin, force NFS to check the parent directory's permissions (default)
no_subtree_check: the opposite of the above: do not check the parent directory's permissions
all_squash: map the UID and GID of shared files to the anonymous user; suitable for public directories
no_all_squash: preserve the UID and GID of shared files (default)
root_squash: map all requests from root to the same privileges as the anonymous user (default)
no_root_squash: root keeps full root access to the exported file system
anonuid=xxx: set the UID of the anonymous user to an account in the NFS server's /etc/passwd
anongid=xxx: set the GID of the anonymous user on the NFS server

The exportfs command:
If /etc/exports is modified after NFS has been started, do we have to restart NFS? No: the exportfs command applies the changes immediately (usage examples follow the option list). Its syntax is:
exportfs [-aruv]

The options mean:
-a: export or unexport everything listed in /etc/exports
-r: re-export the directories shared in /etc/exports
-u: unexport one or more directories
-v: print verbose information while exporting.
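Typical invocations look like this (the share path comes from the earlier example; the client address is hypothetical):
$ sudo exportfs -rv                                   # re-export everything in /etc/exports, verbosely
$ sudo exportfs -v                                    # list the current export table
$ sudo exportfs -u 192.168.2.10:/home/freeoa/share    # unexport a single share for one client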


NFS server and client on CentOS 7

CentOS 7 introduced systemctl-based service management, so setting up and managing the services differs slightly.

Install NFS support
yum install nfs-utils nfs-utils-lib

Enable the NFS-related services at system boot
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap  

Start the NFS services
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

Prepare the shared directory
chmod -R 755 /data
chown nfsnobody:nfsnobody /data

On the server, define the NFS export by editing /etc/exports and adding:
/data    10.10.0.0/24(rw,sync,no_root_squash,no_subtree_check)

Parameter notes

/data – the shared directory

10.10.0.0/24 – the client IP range allowed to access NFS

rw - allow reading and writing in the shared directory
sync - write to the shared directory synchronously
no_root_squash - allow root access
no_all_squash - preserve client users' UIDs and GIDs instead of squashing them
no_subtree_check - when only part of a volume is exported, every request normally triggers a subtree check to verify that the requested file lies within the exported part; if the whole volume is exported, disabling this check speeds up transfers.

Restart the NFS server:
systemctl restart nfs-server

If you would rather not restart the NFS server, run exportfs -r to refresh the export information.

Mounting on the NFS client

Mounting NFS on a Linux client takes a single simple command: create the mount point first, then mount with the -t nfs option:
mount -t nfs  10.10.0.9:/data /data

To have the client mount the NFS share at boot, add one of the following lines to /etc/fstab:
10.10.0.9:/data    /data  nfs auto,rw,vers=3,hard,intr,tcp,rsize=32768,wsize=32768      0   0
or
192.168.0.100:/home    /mnt/nfs/home   nfs defaults 0 0

Then the client can mount it simply with:
mount -a
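To confirm that the share is actually mounted on the client (nothing here is CentOS-specific):
mount | grep nfs
df -h /data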

Firewall configuration

A firewall in the way often produces: mount.nfs: Connection timed out

Allow access to the NFS server's service ports in the firewall. Note: the NFS services need to be configured to use fixed ports.

MOUNTD_PORT=port
# Controls which TCP and UDP port mountd (rpc.mountd) uses.

STATD_PORT=port
# Controls which TCP and UDP port status (rpc.statd) uses.

LOCKD_TCPPORT=port
# Controls which TCP port nlockmgr (lockd) uses.

LOCKD_UDPPORT=port
# Controls which UDP port nlockmgr (lockd) uses.

Edit the /etc/sysconfig/nfs configuration file:

# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769

MOUNTD_PORT=892
STATD_PORT=662

On the Linux NFS server, the following command shows the NFS port information:

rpcinfo -p

The following ports need to be allowed:

NFS: TCP and UDP port 2049

rpcbind/sunrpc: TCP and UDP port 111

the TCP and UDP port configured as MOUNTD_PORT

the TCP and UDP port configured as STATD_PORT

the TCP port configured as LOCKD_TCPPORT

the UDP port configured as LOCKD_UDPPORT

On the Linux NFS server, open these ports in the firewalld firewall with the following commands:

firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --permanent --add-port=2049/udp
firewall-cmd --permanent --add-port=111/tcp
firewall-cmd --permanent --add-port=111/udp
firewall-cmd --permanent --add-port=892/tcp
firewall-cmd --permanent --add-port=892/udp
firewall-cmd --permanent --add-port=662/tcp
firewall-cmd --permanent --add-port=662/udp
firewall-cmd --permanent --add-port=32803/tcp
firewall-cmd --permanent --add-port=32769/udp

Then reload the firewall rules on the Linux NFS server:

firewall-cmd --reload

If opening ports by number does not work well, the services can be allowed by name instead:
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --reload


The English-language documentation follows below.
Network File System (NFS) Configuration

What is NFS

NFS was developed at a time when we weren't able to share our drives like we are able to today - in the Windows environment. It offers the ability to share the hard disk space of a big server with many smaller clients. Again, this is a client/server environment. While this seems like a standard service to offer, it was not always like this. In the past, clients and servers were unable to share their disk space.

Thin clients have no hard drives and thus need a "virtual" hard disk. They NFS-mount their disk from the server, and, while users think they are saving their documents to their local (thin-client) disk, they are in fact saving them to the server. In a thin-client environment, the root, usr and home partitions are all offered to the client from the server via NFS.

Some of the most notable benefits that NFS can provide are:
• Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.
• There is no need for users to have separate home directories on every network machine. Home directories could be set up on the NFS server and made available throughout the network.
• Storage devices such as CDROM drives, and Zip® drives can be used by other machines on the network. This may reduce the number of removable media drives throughout the network.

Note: use the nfs-kernel-server package if you have a fairly recent kernel (2.2.13 or better) and you want to use the kernel-mode NFS server. The user-mode NFS server in the "nfs-user-server" package is slower but more featureful and easier to debug than the kernel-mode server.

Installing NFS on Debian
Making your computer an NFS server or client is very easy. A Debian NFS client needs
# apt-get install nfs-common portmap
while a Debian NFS server needs

# apt-get install nfs-kernel-server nfs-common portmap

NFS Server Configuration
NFS exports from a server are controlled by the file /etc/exports. Each line begins with the absolute path of a directory to be exported, followed by a space-separated list of allowed clients.
cat /etc/exports
/home 195.12.32.2(rw,no_root_squash)
/usr 195.12.32.2/24(ro,insecure)
A client can be specified either by name or IP address. Wildcards (*) are allowed in names, as are netmasks (e.g. /24) following IP addresses, but should usually be avoided for security reasons.
A client specification may be followed by a set of options, in parentheses. It is important not to leave any space between the last character of the client specification and the opening parenthesis, since spaces are interpreted as client separators.
The options that may be specified in /etc/exports are documented in the exports(5) man page. If you make changes to /etc/exports on a running NFS server, you can make these changes effective by issuing the command:

# exportfs -a

NFS Client Configuration
NFS volumes can be mounted by root directly from the command line. For example
# mount files.first.com:/home /mnt/nfs
mounts the /home directory from the machine files.first.com as the directory /mnt/nfs on the client. Of course, for this to work, the directory /mnt/nfs must exist on the client and the server must have been configured to allow the client to access the volume.
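In other words, a minimal client-side sequence for the example above would be (same hypothetical server and paths):
# mkdir -p /mnt/nfs
# mount files.first.com:/home /mnt/nfs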
It is more usual for clients to mount NFS volumes automatically at boot-time. NFS volumes can be specified like any others in /etc/fstab.

/etc/fstab
195.12.32.1:/home /home nfs rw,rsize=4096,wsize=4096,hard,intr,async,nodev,nosuid 0 0
195.12.32.2:/usr /usr nfs ro,rsize=8192,hard,intr,nfsvers=3,tcp,noatime,nodev,async 0 0
There are two kinds of mount options to consider: those specific to NFS and those which apply to all mounts. The options used in the /etc/fstab lines above are documented in the fstab(5) and nfs(5) man pages.

Performance Tuning
NFS does not need a fast processor or a lot of memory. I/O is the bottleneck, so fast disks and a fast network help. If you use IDE disks, use hdparm to tune them for optimal transfer rates. If you support multiple, simultaneous users, consider paying for SCSI disks; SCSI can schedule multiple, interleaved requests much more intelligently than IDE can.

On the software side, by far the most effective step you can take is to optimize the NFS block size. NFS transfers data in chunks. If the chunks are too small, your computers spend more time processing chunk headers than moving bits. If the chunks are too large, your computers move more bits than they need to for a given set of data. To optimize the NFS block size, measure the transfer time for various block size values. Here is a measurement of the transfer time for a 256 MB file full of zeros.


# mount files.first.com:/home /mnt -o rw,wsize=1024
# time dd if=/dev/zero of=/mnt/test bs=16k count=16k
16384+0 records in
16384+0 records out
real 0m32.207s
user 0m0.000s
sys 0m0.990s

# umount /mnt
This corresponds to a throughput of 63 Mb/s.
Try writing with block sizes of 1024, 2048, 4096, and 8192 bytes (if you use NFS v3, you can try 16384 and 32768, too) and measure the time required for each. To get an idea of the uncertainty in your measurements, repeat each measurement several times. To defeat caching, be sure to unmount and remount between measurements.
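One way to run these write measurements systematically is a small shell loop, run as root, that remounts between runs (a sketch reusing the hypothetical server and paths from the example above):
for bs in 1024 2048 4096 8192; do
    mount files.first.com:/home /mnt -o rw,wsize=$bs
    time dd if=/dev/zero of=/mnt/test bs=16k count=16k   # same 256 MB write test as above
    umount /mnt                                          # remount between runs to defeat caching
done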

# mount files.first.com:/home /mnt -o ro,rsize=1024
# time dd if=/mnt/test of=/dev/null bs=16k
16384+0 records in
16384+0 records out

real 0m26.772s
user 0m0.010s
sys 0m0.530s

# umount /mnt
Your optimal block sizes for both reading and writing will almost certainly exceed 1024 bytes. It may happen that, like mine, your data do not indicate a clear optimum but instead seem to approach an asymptote as the block size increases. In that case, pick the lowest block size that gets you close to the asymptote rather than the largest available block size; anecdotal evidence suggests that block sizes that are too large can cause problems. Once you have decided on an rsize and wsize, be sure to write them into your clients' /etc/fstab. You might also consider specifying the noatime option.
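For example, if 8192 bytes turned out to be near the asymptote, the client's /etc/fstab entry might become (a sketch; server and mount point as in the earlier examples):
files.first.com:/home /mnt/nfs nfs rw,rsize=8192,wsize=8192,hard,intr,noatime 0 0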


Basic options for exports:
rw: Allow both read and write requests on an NFS volume.
ro: Allow only read requests on an NFS volume.
sync: Reply to requests only after the changes have been committed to stable storage. (Default)
async: This option allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage.
secure: This option requires that requests originate on an Internet port less than IPPORT_RESERVED (1024). (Default)
insecure: This option accepts all ports.
wdelay: Delay committing a write request to disc slightly if the server suspects that another related write request may be in progress or may arrive soon. (Default)
no_wdelay: This option has no effect if async is also set. The NFS server will normally delay committing a write request to disc slightly if it suspects that another related write request may be in progress or may arrive soon; this allows multiple write requests to be committed to disc with the one operation, which can improve performance. If an NFS server receives mainly small, unrelated requests, this behaviour could actually reduce performance, so no_wdelay is available to turn it off.
subtree_check: This option enables subtree checking. (Default)
no_subtree_check: This option disables subtree checking, which has mild security implications but can improve reliability in some circumstances.
root_squash: Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive, such as user bin or group staff.
no_root_squash: Turn off root squashing. This option is mainly useful for diskless clients.
all_squash: Map all uids and gids to the anonymous user. Useful for NFS-exported public FTP directories, news spool directories, etc.
no_all_squash: Turn off all squashing. (Default)
anonuid=UID: Explicitly set the uid of the anonymous account. This is primarily useful for PC/NFS clients, where you might want all requests to appear to come from one user.
anongid=GID: As anonuid=UID, but for the gid of the anonymous account.

Important points
hard or soft mounts
Soft mounts can cause data corruption, so I have never tried them. When you use hard, though, be sure to also use intr, so that clients can escape from a hung NFS server with a Ctrl-C.

udp or tcp Protocol
Most admins end up using UDP because they run Linux servers, but if you have BSD or Solaris servers, by all means use TCP, as long as your tests indicate that it does not have a substantial negative impact on performance.

NFS v2 or NFS v3
NFS v2 and NFS v3 differ only in minor details. While v3 supports a non-blocking write operation that theoretically speeds up NFS, in practice I have not seen any discernible performance advantage of v3 over v2. Still, I use v3 when I can, since it supports files larger than 2 GB and block sizes larger than 8192 bytes.

rsize and wsize options in the fstab file
See the performance tuning section above for advice on choosing rsize and wsize.
NFS security is utterly atrocious. An NFS server trusts an NFS client to enforce file access permissions. It is therefore very important that you control root on every box you export to, and that you do not export with the insecure option, which would allow any old user on the client box arbitrary access to all the exported files.