This may seem like a fairly simple concept, but it's quite difficult to make a protocol stateless when it deals with file access. A number of important features of most filesystems are inherently based on state. For instance, being able to maintain file locking requires keeping track of what process has locked the file. This design decision has had a major impact on security and the proliferation of system administrator jokes based upon the kernel message "NFS server not responding".
Machines may be NFS servers (exporting their disks for access by other machines), NFS clients (accessing disks exported by NFS servers), or both. Almost every Unix implementation uses NFS as the primary way to share files, and NFS client applications are available for most other popular operating systems. (NFS server applications for non-Unix machines are more rare.)
Two versions of NFS are currently in widespread use. NFS version 2 is the protocol people are usually referring to when they just mention the term NFS. It is usually run over UDP (although the specification allows the use of TCP, most implementations do not support it). NFS version 3, frequently written as NFSv3, is a newer version with several improvements, including support for larger files, and almost every implementation allows it to be run over TCP as well as UDP. From a security standpoint, there is little to distinguish the two versions, so we use the term NFS to apply to both versions unless otherwise noted.
The NFS protocol itself is quite a straightforward RPC protocol, and all implementations and versions use a fixed port number (normally port 2049). A fixed port number is used so that an NFS client does not have to perform a portmapper query when an NFS server is restarted. However, in order to operate correctly, NFS relies upon a number of other services for initially mounting the filesystem, for file locking, and for recovery after a system crash. These additional services are also based upon RPC but do not always use the same port numbers. This means that portmapper requests are needed to locate the services. For more information about RPC see Chapter 14, "Intermediary Protocols".
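The portmapper lookup that a client performs to find these ancillary services is a simple RPC call. As a rough sketch, the following Python builds the on-the-wire GETPORT request; the framing follows the standard ONC RPC call layout and the program numbers are the registered ones, but a real client would also send the packet to UDP port 111 and parse the reply.

```python
import struct

# Registered ONC RPC program numbers; the framing below follows
# the standard RPC call layout.
PMAP_PROG, PMAP_VERS, PMAP_GETPORT = 100000, 2, 3
MOUNT_PROG = 100005   # mountd, whose port must be looked up
IPPROTO_UDP = 17

def getport_request(xid, prog, vers, proto=IPPROTO_UDP):
    """Build a portmapper GETPORT call asking where `prog` listens."""
    rpc_header = struct.pack(">6I",
        xid,            # transaction ID, echoed back in the reply
        0,              # message type 0 = CALL
        2,              # RPC protocol version
        PMAP_PROG,      # the call is addressed to the portmapper itself
        PMAP_VERS,
        PMAP_GETPORT)   # ...asking it to perform a port lookup
    auth = struct.pack(">4I", 0, 0, 0, 0)   # AUTH_NULL credential and verifier
    args = struct.pack(">4I", prog, vers, proto, 0)  # port field unused in calls
    return rpc_header + auth + args

# NFS itself sits at fixed port 2049, so only the ancillary services
# (mountd, lockd, statd) normally need a lookup like this:
packet = getport_request(xid=0x1234, prog=MOUNT_PROG, vers=1)
```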
Some vendors also support a version of NFS based on Secure RPC, which addresses many of the problems with authentication, providing better authentication both of the client machine and of the user. However, Secure RPC has several problems of its own, and these apply equally to NFS implemented on top of it.
The server's trust in the client is established when the client mounts the filesystem from the server. To mount a filesystem a client sends a mount request containing the name of the filesystem to the mountd RPC service on the server and asks for permission to mount it. The mountd service checks whether or not the client is allowed to access that filesystem, using the source IP address of the request to identify the client. If the access is allowable, the mountd service gives the client a file handle (basically a magic set of credentials for the client), which the client then uses for all access to the filesystem.
Once the client has mounted the filesystem (and received a file handle from the server), the client sends a request using the NFS protocol to the server each time it wants to act on a file on that filesystem. The request describes the action the client wants to take and includes the file handle obtained from the server, so the server assumes that the client is authorized to request that action. Some NFS servers will log error messages when requests are received with invalid file handles, but many of them simply ignore them, which helps attackers who are trying to guess file handles. If you have the choice, choose an NFS server that will log requests with invalid file handles (this may not be the default configuration even on servers which support logging; check to make certain that you not only have the capability, but have actually enabled it).
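The trust flow described above can be sketched as follows; the export table, handle construction, and function names are all invented for illustration. The key point is that mountd checks the client's source address exactly once, while the NFS service checks only that the handle itself is valid, so the handle acts as a bearer credential:

```python
import hmac, hashlib, ipaddress, os

SERVER_SECRET = os.urandom(16)   # lets the server recognize its own handles
# Hypothetical export table: filesystem -> network allowed to mount it.
EXPORTS = {"/export/home": ipaddress.ip_network("10.1.1.0/24")}

def mountd(client_ip, filesystem):
    """Mount-time check: identifies the client by source IP address only."""
    if ipaddress.ip_address(client_ip) not in EXPORTS[filesystem]:
        raise PermissionError("client not allowed to mount " + filesystem)
    # The returned handle is the credential for all subsequent access.
    return hmac.new(SERVER_SECRET, filesystem.encode(), hashlib.sha256).digest()

def nfs_request(handle, filesystem, action):
    """Per-request check: only the handle is examined, never the caller."""
    expected = hmac.new(SERVER_SECRET, filesystem.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(handle, expected):
        return "NFSERR_STALE"    # the invalid-handle event worth logging
    return "OK: " + action       # any holder of a valid handle is trusted

handle = mountd("10.1.1.5", "/export/home")
nfs_request(handle, "/export/home", "read alice/notes")
```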
This system has at least three problems. First, there are difficulties with the initial authentication. In addition to the usual problems with using forgeable source IP addresses for authentication, there is another way for attackers to authenticate illicitly. The RPC port location service offers a forwarding service where a client can send a request to a service via the location server. This request will show up to mountd as if it had been issued by the location service, which is running on the server. If mountd permits the server to mount its own filesystems, then an attacker can send a mount request using the forwarding feature in order to obtain a valid file handle. To deal with this, either the server should deny itself access, or the forwarding feature of the port location service should be disabled (and the best option is to do both).
The second problem with mountd authentication has to do with the use of the file handle as an authentication token. If an attacker can determine a valid file handle without help from mountd, the attacker can then use it without further authentication. Simply guessing randomly isn't going to work; NFS version 2 uses 32-byte file handles, and NFS version 3 uses variable-length file handles up to 64 bytes long. But attackers don't have to guess randomly, because NFS implementations typically impose a structure on the file handles. Only a component of the file handle data is random, and that's the only part the attacker has to guess. Implementations vary on how much random data there is; early implementations are particularly bad about it, using file handles that are based on the time the filesystem was created, which is often easy to guess.
Modern implementations of NFS have addressed this problem, and patches are available for many older implementations. If you run NFS on a system where security is important, you should consult your vendor's documentation to make sure that you have an NFS server with reasonable randomness in the file handle generation, and that you have followed any special instructions for setting up filesystems (some file handle generation schemes require special initialization for filesystems to ensure unguessable file handles).
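The difference in search space is easy to quantify. In this hypothetical comparison, the field sizes and derivations are invented, but the arithmetic shows why a timestamp-derived handle component is so much weaker than one drawn from an entropy pool:

```python
import os, struct

def weak_component(fs_creation_time):
    # Derived from the filesystem creation time. An attacker who can
    # bound that time to within a year faces at most ~31.5 million guesses.
    return struct.pack(">I", int(fs_creation_time))

def strong_component():
    # Drawn from the OS entropy pool: all 2**32 values equally likely.
    return os.urandom(4)

weak_guesses = 365 * 24 * 3600   # one guess per candidate creation second
strong_guesses = 2 ** 32         # no structure for the attacker to exploit
```

Given that many servers silently ignore invalid file handles, tens of millions of guesses are a realistic online attack; the full 2**32 space (and larger, for longer random fields) generally is not.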
The third problem with file handles is that they're difficult to get rid of. An NFS server is required to be stateless; all it can do is look at a file handle and determine whether or not that file handle is any good. On most implementations, once a client has a file handle, the only way to keep the client from using it is to change the method for generating file handles so that all previous file handles are invalid, requiring every client to remount the filesystem and probably generating mass trauma.
Well-behaved clients don't save file handles and will contact mountd for a new file handle each time they mount a filesystem. This still means that a well-behaved client that already has a filesystem mounted can continue to use it if you change its access permissions, but it does give you some measure of control. Eventually, the client will have to remount the filesystem (and you may be able to force it to do so if you have some sort of access to it). Nothing requires an attacker to be this well behaved; a hostile client can simply save the file handle and reuse it without requiring mountd's assistance. In general, the only way to change the validity of file handles and prevent this is to change the filesystem on the server (for instance, by changing where it is mounted on the server). Vendor documentation will usually tell you what operations change file handles (mostly to prevent you from accidentally changing file handles and interrupting operations on clients).
Translating root to "nobody" is an extremely minor security improvement. Anybody who is capable of being root on the client is capable of pretending to be any user whatsoever on the client, and can therefore see and do anything any user can do. The translation hides only those files on the server restricted to access by root itself. You will still probably want to use translation wherever you can for the minimal protection it does give you, but you should not feel that it makes it safe to export filesystems to possibly hostile clients.
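The translation itself amounts to a one-line mapping (the uid value for "nobody" varies between systems; 65534 is common). Spelling it out makes clear why the protection is so narrow: only uid 0 is ever rewritten.

```python
NOBODY_UID = 65534   # conventional "nobody"; the exact value varies by system

def squash(request_uid, root_squash=True):
    """Map the uid claimed in an NFS request before the server acts on it.

    Only root is remapped. A hostile root user on the client can simply
    claim any other uid, so everything except root-only files on the
    server remains exposed.
    """
    if root_squash and request_uid == 0:
        return NOBODY_UID
    return request_uid
```

Here `squash(0)` yields 65534, while `squash(1000)` passes 1000 through untouched, which is exactly the behavior the client-side attacker exploits.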
Better protection for the server is available by exporting the filesystem read-only. If the filesystem is exported purely read-only (no host is allowed to write it) you can be reasonably certain the data cannot be modified via NFS. If you allow any host to write it, you're vulnerable to forgery.
Some NFS clients provide options to mount that can be used to disable devices and setuid/setgid on mounted filesystems. If mount is not available to users other than root, or if it always uses these options for users other than root, this will protect the client from the server. If these options are not available, even if only root can mount filesystems, you should consider mounting an NFS filesystem to be equivalent to granting the server machine root access to the client.
NFS clients may also be vulnerable to less obvious forms of attack from NFS servers. Mounting a filesystem is a privileged operation, so NFS clients run as root. A hostile server may be able to exploit buffer overflow errors in the NFS client, causing it to run arbitrary programs. In general, this is not transparent to the user (it interferes with the ability to use whatever filesystem the client was trying to get to), and it requires an attacker with a high level of control over the server machine. In traditional fixed NFS server environments, it's not a major threat. On the other hand, the use of automounters, which are discussed in a later section, can make it an effective attack.
File locks are a form of state; when you request a lock on a file, you change the state of the file, and that state has to be kept track of both by the server (so that it can enforce the lock) and the client (so that it can release the lock when it is no longer needed). This is problematic for NFS because it's a stateless protocol.
There are therefore two parts to the problem of implementing locking in NFS. First, you have to add the ability to keep any kind of state across server and client restarts, and then you have to track the locks themselves.
It's easy enough for any program to keep state internally; it's not even all that difficult for a server to save that state so that when the server restarts, it can pick up where it left off. However, that's not good enough for NFS locking because the state that is important also includes the programs that had requested the locks, and NFS clients will almost never regain this state when they restart. If a machine crashes while you are in the middle of editing a file, the editor is unlikely to resume where it left off. If the editor had a lock on the file you were editing, something needs to free that lock so that you can restart your editing session. If the editor itself crashes, this task is handled by other programs on the machine. If the entire machine crashes, however, there has to be some other mechanism that will handle the situation.
The problem of dealing with restarts is solved using an ancillary protocol called statd, which is responsible for tracking and reporting restarts on behalf of other protocols. statd handles two types of requests: programs on the local machine can ask statd to notify them when specific remote machines restart, and remote machines can ask statd to notify them when the local machine restarts. It's also possible to cancel these requests, and when things shut down cleanly, they will cancel all outstanding requests. statd keeps track of requests in files, so that its state is preserved across restarts. statd is voluntary in that it relies on the remote systems to honor requests for notification when they restart -- for scalability reasons statd does not poll for status. When statd starts, it checks its files and notifies all remote machines that have requested notification.
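At its core, statd's bookkeeping is just a persistent set of peers, something like the following toy model (the class, method names, and file format are invented; the real daemon also runs the RPC service around this state):

```python
import json, os

class StatMonitor:
    """Toy model of statd's restart bookkeeping."""

    def __init__(self, state_file):
        self.state_file = state_file
        # Hosts that asked to be told when *this* machine restarts.
        if os.path.exists(state_file):
            with open(state_file) as f:
                self.watchers = set(json.load(f))
        else:
            self.watchers = set()

    def _save(self):
        # Written through to disk so requests survive a crash.
        with open(self.state_file, "w") as f:
            json.dump(sorted(self.watchers), f)

    def monitor(self, host):
        self.watchers.add(host)
        self._save()

    def unmonitor(self, host):      # a clean shutdown cancels requests
        self.watchers.discard(host)
        self._save()

    def hosts_to_notify_on_restart(self):
        # On startup, the real statd sends a notification to each of
        # these hosts; it never polls, it relies on peers doing the same.
        return sorted(self.watchers)
```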
statd is built on top of RPC and uses UDP. It is particularly problematic for firewalls because it involves traffic initiated by the server. NFS clients will request restart notification from NFS servers. The original request for notification will go from the client to the server, but if the server reboots, the notification will come from the server to the client and will not normally be permitted.
Locking itself is implemented using lockd. lockd in turn relies heavily on statd to reestablish locking after a restart as it does not store any persistent state. When a client wishes to lock a file on an NFS filesystem, it contacts the remote lockd in order to lock the file and requests its own statd to monitor the server. When both the lockd and statd response are received, the client assumes that the file is locked. When it receives the lockd request, the server asks the server statd to monitor the client. At this point, one of the following can occur:
- The server restarts.
- The client restarts.
After a server restart, the server statd notifies all remote clients that were using locking, which causes them to resubmit all lock requests. This can have unexpected results if more than one client was attempting to lock the same file: the original holder can lose the lock to a competitor. If the purpose of the lock was to prevent another system from making changes while a critical update was occurring, then this will usually result in loss of data or file corruption. More correct locking semantics would suggest that the original client should regain the lock so that it could proceed with the critical update. This is one reason why NFS file locking cannot be relied upon.
After a client restart, statd notifies all servers of the event. This causes them to immediately release any locks the client may have been holding before the restart. If the purpose of the lock was to prevent another system from making changes while a critical update was occurring, then this will usually result in loss of data or file corruption. More correct locking semantics would leave the file locked so that a cleanup process could check the consistency of the file before allowing another client to make changes. This is another reason why NFS file locking cannot be relied upon.
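The interlocking of lockd and statd described above, and the client-restart outcome, can be sketched as a small simulation (all names are invented; the real lockd and statd are RPC daemons, and lockd keeps no persistent state across a server restart):

```python
class Server:
    def __init__(self):
        self.locks = {}           # path -> client holding the lock (lockd state)
        self.statd_watch = set()  # clients the server's statd monitors

    def lock(self, client, path):
        if self.locks.get(path, client) is not client:
            return False                  # someone else holds the lock
        self.locks[path] = client
        self.statd_watch.add(client)      # server statd now monitors the client
        return True

    def server_restarted(self, clients):
        # lockd kept nothing: all locks are gone, and statd tells the
        # clients, who race each other to resubmit their lock requests.
        self.locks.clear()
        for c in clients:
            c.resubmit_locks(self)

    def client_restarted(self, client):
        # statd notification: immediately drop everything the client held.
        self.locks = {p: c for p, c in self.locks.items() if c is not client}

class Client:
    def __init__(self, name):
        self.name = name
        self.held = []

    def lock(self, server, path):
        if server.lock(self, path):       # lockd grant, plus our statd
            self.held.append(path)        # monitoring the server

    def resubmit_locks(self, server):
        for path in self.held:
            server.lock(self, path)       # may lose the race to another client
```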
lockd, like statd, is built on top of RPC and uses UDP, which makes it extremely difficult to safely pass through a firewall. Some stateful and proxy firewall systems can handle RPC, and so it may be possible to use NFS file locking across this type of firewall. You will need to be very careful; some systems will allow everything but the server-to-client restart notifications, in which case locking will appear to work, but lock consistency will be lost as soon as the server restarts. If file locking is not needed, then it is possible to run some systems without either statd or lockd. However, any application programs that try to use file locking on NFS filesystems are likely to fail in bad ways that are likely to involve loss or corruption of data (which presumably would not occur in the unlikely event that lockd and statd were working correctly).
The solution to this problem is to use an automounter, a program that mounts filesystems when there is some reason to and unmounts them when they are no longer in use. Most automounters will also allow you to configure things so that a given filesystem is available on multiple machines, and clients use the most appropriate copy.
Intuitively, automounters seem as if they ought to be free from network vulnerabilities. After all, they provide services only to the local host. They ought to have only the vulnerabilities that other NFS clients have, and those are relatively minimal.
Unfortunately, this is not the case. Automounters have two additional kinds of vulnerabilities. First, and most obviously, automounters often use other services to get lists of NFS servers. (For instance, many of them will use NIS maps for this purpose.) They will have all the vulnerabilities associated with those services, and if those services are insecure, it may be easy for an attacker to direct an automounter system to a hostile server. It may also be possible to attack the automounter directly with the information about which servers to use; for instance, if the automounter itself has buffer overflow problems, feeding it an overlength server name may give an attacker the ability to run arbitrary commands.
The larger source of vulnerabilities comes from the way automounters are implemented. For technical reasons, the most effective way for an automounter to work is for it to claim to be an NFS server. Client programs that want to access filesystems speak to this fake server, which then acts as a client to the genuine servers. This fake server needs to accept requests only from clients on the local machine, but the fact that it is an NFS server opens it up to a number of attacks. For instance, the attack that depends on forwarding requests through the port location service is particularly effective against automounters, which must accept local requests.
If you are using an automounter on a client, you should be aware that it could be vulnerable to NFS server, RPC server, and other network application vulnerabilities.
[72] Ironically, the version 2 protocol incorrectly predicts that while 2049 is an unofficial standard, "later versions of the protocol use the `Portmapping' facility of RPC". Later versions of the protocol in fact just made 2049 official.

NFS is provided over both TCP and UDP. Some clients and servers prefer TCP, and others prefer UDP. TCP-based NFS is relatively new, and not all clients or servers support it. Those that do often behave differently over TCP than over UDP. If a particular client-server combination behaves badly over one protocol, try it over the other.
In order to make use of NFS across a firewall, you will also need to make the portmapper and mountd available; the portmapper is at port 111. mountd is an RPC protocol at a randomly chosen port number managed by the portmapper. As discussed earlier, you may need lockd and statd as well, and in that case, you will need to allow statd in both directions. lockd and statd are also RPC protocols at randomly chosen port numbers managed by the portmapper. See Chapter 14, "Intermediary Protocols" for more information about packet filtering and RPC.
Direction | Source Addr. | Dest. Addr. | Protocol | Source Port | Dest. Port | ACK Set | Notes |
---|---|---|---|---|---|---|---|
In | Ext | Int | TCP/UDP | >1023 | 111 | [73] | External NFS client to internal server, portmapper requests |
Out | Int | Ext | TCP/UDP | 111 | >1023 | Yes[74] | Internal NFS server to external client, portmapper responses |
In | Ext | Int | TCP/UDP | <1024[75] | 2049 | [73] | External NFS client to internal server, NFS requests |
Out | Int | Ext | TCP/UDP | 2049 | <1024[75] | Yes[74] | Internal NFS server to external client, NFS responses |
Out | Int | Ext | TCP/UDP | >1023 | 111 | [73] | Internal NFS client to external server, portmapper requests |
In | Ext | Int | TCP/UDP | 111 | >1023 | Yes[74] | External NFS server to internal client, portmapper responses |
Out | Int | Ext | TCP/UDP | <1024[75] | 2049 | [73] | Internal NFS client to external server, NFS requests |
In | Ext | Int | TCP/UDP | 2049 | <1024[75] | Yes[74] | External NFS server to internal client, NFS responses |
[73]ACK is not set on the first TCP packet (establishing connection) but will be set on the rest. UDP has no ACK equivalent.
[74]TCP only; UDP has no ACK equivalent.
[75]Some implementations may use ports >1023 instead.