.\" Copyright (c) 1989, 1991, 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\"     @(#)nfsd.8	8.4 (Berkeley) 3/29/95
.Dd March 29, 1995
.Dt NFSD 8
.Os
.Sh NAME
.Nm nfsd
.Nd remote NFS server
.Sh SYNOPSIS
.Nm
.Op Fl ardute
.Op Fl n Ar num_servers
.Op Fl h Ar bindip
.Op Fl p Ar pnfs_setup
.Op Fl m Ar mirror_level
.Op Fl V Ar virtual_hostname
.Op Fl Fl maxthreads Ar max_threads
.Op Fl Fl minthreads Ar min_threads
.Sh DESCRIPTION
The
.Nm
utility runs on a server machine to service NFS requests from client machines.
At least one
.Nm
must be running for a machine to operate as a server.
.Pp
Unless otherwise specified, eight servers per CPU for UDP transport are
maintained.
.Pp
The following options are available:
.Bl -tag -width Ds
.It Fl r
Register the NFS service with
.Xr rpcbind 8
without creating any servers.
This option can be used along with the
.Fl u
or
.Fl t
options to re-register NFS if the rpcbind server is restarted.
.It Fl d
Unregister the NFS service with
.Xr rpcbind 8
without creating any servers.
.It Fl V Ar virtual_hostname
Specifies a hostname to be used as a principal name, instead of
the default hostname.
.It Fl n Ar num_servers
Specifies how many servers to create.
This option is equivalent to specifying
.Fl Fl maxthreads
and
.Fl Fl minthreads
with their respective arguments set to
.Ar num_servers .
.It Fl Fl maxthreads Ar threads
Specifies the maximum number of servers that will be kept around to service
requests.
.It Fl Fl minthreads Ar threads
Specifies the minimum number of servers that will be kept around to service
requests.
.It Fl h Ar bindip
Specifies which IP address or hostname to bind to on the local host.
This option is recommended when a host has multiple interfaces.
Multiple
.Fl h
options may be specified.
.It Fl a
Specifies that nfsd should bind to the wildcard IP address.
This is the default if no
.Fl h
options are given.
It may also be specified in addition to any
.Fl h
options given.
Note that NFS/UDP does not operate properly when bound to the wildcard IP
address, whether that binding comes from
.Fl a
or from omitting
.Fl h .
.It Fl p Ar pnfs_setup
Enables pNFS support in the server and specifies the information that the
daemon needs to start it.
This option can only be used on one server and specifies that this server
will be the MetaData Server (MDS) for the pNFS service.
This can only be done if there is at least one
.Fx
system configured
as a Data Server (DS) for it to use.
.Pp
The
.Ar pnfs_setup
string is a set of fields separated by ',' characters:
Each of these fields specifies one DS.
It consists of a server hostname, followed by a ':'
and the directory path where the DS's data storage file system is mounted on
this server.
This can optionally be followed by a '#' and the mds_path, which is the
directory path for an exported file system on this MDS.
If this is specified, it means that this DS is to be used to store data
files for this mds_path file system only.
If this optional component does not exist, the DS will be used to store data
files for all exported MDS file systems.
The DS storage file systems must be mounted on this system before the
.Nm
is started with this option specified.
.Pp
For example:
.sp
nfsv4-data0:/data0,nfsv4-data1:/data1
.sp
would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise
the data storage component of the pNFS service.
These two DSs would be used to store data files for all exported file systems
on this MDS.
The directories
.Dq /data0
and
.Dq /data1
are where the data storage servers' exported
storage directories are mounted on this system (which will act as the MDS).
.Pp
Whereas, for the example:
.sp
nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2
.sp
would specify two DSs as above; however, nfsv4-data0 will be used to store
data files for
.Dq /export1
and nfsv4-data1 will be used to store data files for
.Dq /export2 .
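.Pp
As a sketch only (the hostnames and mount points are the illustrative ones
above, and the thread count is an arbitrary assumption), the MDS might then
be started with a command line such as:
.Bd -literal -offset indent
nfsd -u -t -n 32 -p nfsv4-data0:/data0,nfsv4-data1:/data1
.Ed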
.Pp
When using IPv6 addresses for DSs,
be wary of using link local addresses.
The IPv6 address for the DS is sent to the client and there is no scope
zone in it.
As such, a link local address may not work for a pNFS client to DS
connection.
When parsed,
.Nm
will only use a link local address if it is the only address returned by
.Xr getaddrinfo 3
for the DS hostname.
.It Fl m Ar mirror_level
This option is only meaningful when used with the
.Fl p
option.
It specifies the
.Dq mirror_level ,
which defines how many of the DSs will
have a copy of a file's data storage file.
The default of one implies no mirroring of data storage files on the DSs.
The
.Dq mirror_level
would normally be set to 2 to enable mirroring, but
can be as high as NFSDEV_MAXMIRRORS.
There must be at least
.Dq mirror_level
DSs for each exported file system on the MDS, as specified in the
.Fl p
option.
This implies that, for the above example using "#/export1" and "#/export2",
mirroring cannot be done.
There would need to be two DS entries for each of "#/export1" and "#/export2"
in order to support a
.Dq mirror_level
of two.
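.Pp
For instance (the hostnames and paths are illustrative assumptions), a
.Dq mirror_level
of two for both of the above exports could be requested with a
.Ar pnfs_setup
string along the lines of:
.Bd -literal -offset indent
nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export1,nfsv4-data2:/data2#/export2,nfsv4-data3:/data3#/export2
.Ed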
If mirroring is enabled, the server must use the Flexible File
layout.
If mirroring is not enabled, the server will use the File layout
by default, but this default can be changed to the Flexible File layout if the
sysctl
vfs.nfsd.default_flexfile
is set non-zero.
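.Pp
For example, an unmirrored server can be switched to handing out Flexible
File layouts with:
.Bd -literal -offset indent
sysctl vfs.nfsd.default_flexfile=1
.Ed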
.It Fl t
Serve TCP NFS clients.
.It Fl u
Serve UDP NFS clients.
.It Fl e
Ignored; included for backward compatibility.
.El
.Pp
For example,
.Dq Li "nfsd -u -t -n 6"
serves UDP and TCP transports using six daemons.
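.Pp
On
.Fx
this is normally configured via
.Xr rc.conf 5
rather than by running
.Nm
directly; a minimal sketch:
.Bd -literal -offset indent
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 6"
.Ed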
.Pp
A server should run enough daemons to handle
the maximum level of concurrency from its clients,
typically four to six.
.Pp
The
.Nm
utility listens for service requests at the port indicated in the
NFS server specification; see
.%T "Network File System Protocol Specification" ,
RFC1094,
.%T "NFS: Network File System Version 3 Protocol Specification" ,
RFC1813,
.%T "Network File System (NFS) Version 4 Protocol" ,
RFC7530, and
.%T "Network File System (NFS) Version 4 Minor Version 1 Protocol" ,
RFC5661.
.Pp
If
.Nm
detects that
NFS is not loaded in the running kernel, it will attempt
to load a loadable kernel module containing NFS support using
.Xr kldload 2 .
If this fails, or no NFS KLD is available,
.Nm
will exit with an error.
.Pp
If
.Nm
is to be run on a host with multiple interfaces or interface aliases, use
of the
.Fl h
option is recommended.
If you do not use the option, NFS may not respond to
UDP packets from the same IP address they were sent to.
Use of this option
is also recommended when securing NFS exports on a firewalling machine such
that the NFS sockets can only be accessed by the inside interface.
The
.Nm ipfw
utility
would then be used to block NFS-related packets that come in on the outside
interface.
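.Pp
As an illustrative sketch only (the rule numbers and the outside interface
name em0 are assumptions for this example), such rules might look like:
.Bd -literal -offset indent
ipfw add 100 deny tcp from any to any 2049 in recv em0
ipfw add 110 deny udp from any to any 2049 in recv em0
.Ed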
.Pp
If the server has stopped servicing clients and has generated a console message
similar to
.Dq Li "nfsd server cache flooded..." ,
the value for vfs.nfsd.tcphighwater needs to be increased.
This should allow the server to again handle requests without a reboot.
Also, you may want to consider decreasing the value for
vfs.nfsd.tcpcachetimeo to several minutes (in seconds) instead of 12 hours
when this occurs.
.Pp
Unfortunately, making vfs.nfsd.tcphighwater too large can result in the mbuf
limit being reached, as indicated by a console message
similar to
.Dq Li "kern.ipc.nmbufs limit reached" .
If you cannot find values of the above
.Xr sysctl 8
variables that work, you can disable the DRC cache for TCP by setting
vfs.nfsd.cachetcp to 0.
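.Pp
A tuning sketch (the numeric values are examples, not recommendations):
.Bd -literal -offset indent
# increase the DRC limit and shorten the cache timeout
sysctl vfs.nfsd.tcphighwater=150000
sysctl vfs.nfsd.tcpcachetimeo=300
# last resort: disable the DRC for TCP entirely
sysctl vfs.nfsd.cachetcp=0
.Ed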
.Pp
The
.Nm
utility has to be terminated with
.Dv SIGUSR1
and cannot be killed with
.Dv SIGTERM
or
.Dv SIGQUIT .
The
.Nm
utility needs to ignore these signals in order to stay alive as long
as possible during a shutdown; otherwise, loopback mounts will
not be able to unmount.
If you have to kill
.Nm
just do a
.Dq Li "kill -USR1 <PID of master nfsd>"
.Sh SEE ALSO
.Xr nfsstat 1 ,
.Xr nfssvc 2 ,
.Xr exports 5 ,
.Xr stablerestart 5 ,
.Xr gssd 8 ,
.Xr mountd 8 ,
.Xr rpcbind 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Bx 4.4 .
.Sh BUGS
If
.Nm
is started when
.Xr gssd 8
is not running, it will service AUTH_SYS requests only.
To fix the problem you must kill
.Nm
and then restart it, after the
.Xr gssd 8
is running.
.Pp
If mirroring is enabled via the
.Fl m
option and there are Linux clients doing NFSv4.1 mounts, those clients
need to be patched to support the
.Dq tightly coupled
variant of
the Flexible File layout or the
.Xr sysctl 8
vfs.nfsd.flexlinuxhack
must be set to one on the MDS as a workaround.
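.Pp
The workaround can be applied on the MDS with:
.Bd -literal -offset indent
sysctl vfs.nfsd.flexlinuxhack=1
.Ed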