.\" Copyright (c) 2018 Rick Macklem
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.Nd NFS Version 4.1 and 4.2 Parallel NFS Protocol Server
.Sh DESCRIPTION
A set of
.Fx
servers may be configured to provide a
.Tn pNFS
service.
One
.Fx
system needs to be configured as a MetaData Server (MDS) and
at least one additional
.Fx
system needs to be configured as one or
more Data Servers (DSs).
.Pp
These
.Fx
systems are configured to be NFSv4.1 and NFSv4.2
servers.
See
.Xr nfsd 8
and
.Xr exports 5
if you are not familiar with configuring an NFSv4.n server.
All DS(s) and the MDS should support NFSv4.2 as well as NFSv4.1.
Mixing an MDS that supports NFSv4.2 with any DS(s) that do not support
NFSv4.2 will not work correctly.
As such, all DS(s) must be upgraded to a release that supports NFSv4.2
before upgrading the MDS.
.Sh DS server configuration
The DS(s) need to be configured as NFSv4.1 and NFSv4.2 server(s),
with a top level exported
directory used for storage of data files.
This directory must be owned by
.Dq root
and would normally have a mode of
.Dq 700 .
Within this directory there needs to be additional directories named
ds0,...,dsN (where N is 19 by default) also owned by
.Dq root
with mode
.Dq 700 .
These are the directories where the data files are stored.
The following command can be run by root when in the top level exported
directory to create these subdirectories.
.Bd -literal -offset indent
jot -w ds 20 0 | xargs mkdir -m 700
.Ed
.Pp
Note that 20 subdirectories
is the default and can be set to a larger value on the MDS as shown below.
.Pp
The top level exported directory used for storage of data files must be
exported to the MDS with the
.Dq maproot=root sec=sys
export options so that the MDS can create entries in these subdirectories.
It must also be exported to all pNFS aware clients, but these clients do
not require the
.Dq maproot=root
export option and this directory should be exported to them with the same
options as used by the MDS to export file system(s) to the clients.
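As an illustrative sketch only (the hostnames, client network, and /data path below are assumptions, not taken from this manual page), an exports(5) file on a DS might look like:

```
V4: /
/data -maproot=root -sec=sys nfsv4-mds
/data -sec=sys -network 192.0.2.0 -mask 255.255.255.0
```

The first /data line grants the MDS root access with AUTH_SYS; the second exports the same directory to the pNFS aware clients without maproot=root.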
.Pp
It is possible to have multiple DSs on the same
.Fx
system, but each
of these DSs must have a separate top level exported directory used for storage
of data files and each
of these DSs must be mountable via a separate IP address.
Alias addresses can be set on the DS server system for a network
interface via
.Xr ifconfig 8
to create these different IP addresses.
Multiple DSs on the same server may be useful when data for different file systems
on the MDS are being stored on different file system volumes on the
same DS server system.
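For example, alias addresses for additional DS instances could be made persistent in rc.conf(5); the interface name em0 and the 192.0.2.x addresses below are assumptions for illustration:

```
ifconfig_em0_alias0="inet 192.0.2.21 netmask 255.255.255.255"
ifconfig_em0_alias1="inet 192.0.2.22 netmask 255.255.255.255"
```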
.Sh MDS server configuration
The MDS must be a separate
.Fx
system from the
.Fx
DS system(s).
It is configured as an NFSv4.1 and NFSv4.2 server with
file system(s) exported to clients.
The
.Fl p
command line argument for
.Xr nfsd 8
is used to indicate that it is running as the MDS for a pNFS server.
.Pp
The DS(s) must all be mounted on the MDS using the following mount options:
.Bd -literal -offset indent
nfsv4,minorversion=2,soft,retrans=2
.Ed
.Pp
so that they can be defined as DSs in the
.Fl p
option.
Normally these mounts would be entered in the
.Xr fstab 5
file.
For example, if there are four DSs named nfsv4-data[0-3], the
.Xr fstab 5
lines might look like:
.Bd -literal -offset indent
nfsv4-data0:/ /data0 nfs rw,nfsv4,minorversion=2,soft,retrans=2 0 0
nfsv4-data1:/ /data1 nfs rw,nfsv4,minorversion=2,soft,retrans=2 0 0
nfsv4-data2:/ /data2 nfs rw,nfsv4,minorversion=2,soft,retrans=2 0 0
nfsv4-data3:/ /data3 nfs rw,nfsv4,minorversion=2,soft,retrans=2 0 0
.Ed
.Pp
The
.Fl p
option
indicates that the NFS server is a pNFS MDS and specifies what
DSs are to be used.
For the above example, the
nfs_server_flags line in your
.Xr rc.conf 5
might look like:
.Bd -literal -offset indent
nfs_server_flags="-u -t -n 128 -p nfsv4-data0:/data0,nfsv4-data1:/data1,nfsv4-data2:/data2,nfsv4-data3:/data3"
.Ed
.Pp
This example specifies that the data files should be distributed over the
four DSs and File layouts will be issued to pNFS enabled clients.
If issuing Flexible File layouts is desired for this case, setting the sysctl
.Dq vfs.nfsd.default_flexfile
to a non-zero value will make the server issue Flexible File layouts instead.
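For example, if Flexible File layouts are wanted, the setting could be made persistent in sysctl.conf(5):

```
vfs.nfsd.default_flexfile=1
```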
.Pp
Alternately, this variant of
nfs_server_flags
will specify that two way mirroring is to be done, via the
.Fl m
option:
.Bd -literal -offset indent
nfs_server_flags="-u -t -n 128 -p nfsv4-data0:/data0,nfsv4-data1:/data1,nfsv4-data2:/data2,nfsv4-data3:/data3 -m 2"
.Ed
.Pp
With two way mirroring, the data file for each exported file on the MDS
will be stored on two of the DSs.
When mirroring is enabled, the server will always issue Flexible File layouts.
.Pp
It is also possible to specify which DSs are to be used to store data files for
specific exported file systems on the MDS.
For example, if the MDS has exported two file systems
.Dq /export1
and
.Dq /export2
to clients, the following variant of
nfs_server_flags
will specify that data files for
.Dq /export1
will be stored on nfsv4-data0 and nfsv4-data1, whereas the data files for
.Dq /export2
will be stored on nfsv4-data2 and nfsv4-data3.
.Bd -literal -offset indent
nfs_server_flags="-u -t -n 128 -p nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export1,nfsv4-data2:/data2#/export2,nfsv4-data3:/data3#/export2"
.Ed
.Pp
This can be used by system administrators to control where data files are
stored and might be useful for control of storage use.
For this case, it may be convenient to co-locate more than one of the DSs
on the same
.Fx
server, using separate file systems on the DS system
for storage of the respective DS's data files.
If mirroring is desired for this case, the
.Fl m
option also needs to be specified.
There must be enough DSs assigned to each exported file system on the MDS
to support the level of mirroring.
The above example would be fine for two way mirroring, but four way mirroring
would not work, since there are only two DSs assigned to each exported file
system.
.Pp
The number of subdirectories in each DS is defined by the
.Dq vfs.nfs.dsdirsize
sysctl on the MDS.
This value can be increased from the default of 20, but only when the
.Xr nfsd 8
is not running and after the additional ds20,... subdirectories have been
created on all the DSs.
For a service that will store a large number of files this sysctl should be
set much larger, to avoid the number of entries in a subdirectory from
growing too large.
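As a sketch, if the sysctl were raised to 100, the matching subdirectories could be created on each DS (run as root in the top level exported directory) with a portable POSIX sh loop; the count of 100 is only an assumed example:

```shell
# Create ds0 through ds99 with mode 700, the same result as
# "jot -w ds 100 0 | xargs mkdir -m 700" on FreeBSD.
i=0
while [ "$i" -lt 100 ]; do
    mkdir -m 700 "ds$i"
    i=$((i + 1))
done
```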
.Pp
Once operational, NFSv4.1 or NFSv4.2
.Fx
client mounts
done with the
.Dq pnfs
option should do I/O directly on the DSs.
The clients mounting the MDS must be running the
.Xr nfscbd 8
daemon for pNFS to work.
Set
.Bd -literal -offset indent
nfscbd_enable="YES"
.Ed
.Pp
in the
.Xr rc.conf 5
file on these clients.
.Pp
Non-pNFS aware clients or NFSv3 mounts will do all I/O RPCs on the MDS,
which acts as a proxy for the appropriate DS(s).
.Sh Backing up a pNFS service
Since the data is separated from the metadata, the simple way to back up
a pNFS service is to do so from an NFS client that has the service mounted
on it.
If you back up the MDS exported file system(s) on the MDS, you must do it
in such a way that the
.Dq system
namespace extended attributes get backed up.
.Sh Handling of failed mirrored DSs
When a mirrored DS fails, it can be disabled one of three ways:
.Pp
1 - The MDS detects a problem when trying to do proxy
operations on the DS.
This can take a couple of minutes
after the DS failure or network partitioning occurs.
.Pp
2 - A pNFS client can report an I/O error that occurred for a DS to the MDS in
the arguments for a LayoutReturn operation.
.Pp
3 - The system administrator can perform the pnfsdskill(8) command on the MDS
to disable it.
.Pp
If the system administrator does a pnfsdskill(8) and it fails with ENXIO
(Device not configured) that normally means the DS was already
disabled via #1 or #2.
Since doing this is harmless, once a system administrator knows that
there is a problem with a mirrored DS, doing the command is recommended.
.Pp
Once a system administrator knows that a mirrored DS has malfunctioned
or has been network partitioned, they should do the following as root/su
on the MDS:
.Bd -literal -offset indent
# pnfsdskill <mounted-on-path-of-DS>
# umount -N <mounted-on-path-of-DS>
.Ed
.Pp
Note that the <mounted-on-path-of-DS> must be the exact mounted-on path
string used when the DS was mounted on the MDS.
.Pp
Once the mirrored DS has been disabled, the pNFS service should continue to
function, but file updates will only happen on the DS(s) that have not been disabled.
Assuming two way mirroring, that implies updates for files stored on the disabled
DS will only happen on the other DS of the pair recorded in the
.Dq pnfsd.dsfile
extended attribute for the file on the MDS.
.Pp
The next step is to clear the IP address in the
.Dq pnfsd.dsfile
extended attribute on all files on the MDS for the failed DS.
This is done so that, when the disabled DS is repaired and brought back online,
the data files on this DS will not be used, since they may be out of date.
The command that clears the IP address is
.Xr pnfsdsfile 1
with the
.Fl r
option.
For example:
.Bd -literal -offset indent
# pnfsdsfile -r nfsv4-data3 yyy.c
yyy.c: nfsv4-data2.home.rick ds0/207508569ff983350c000000ec7c0200e4c57b2e0000000000000000 0.0.0.0 ds0/207508569ff983350c000000ec7c0200e4c57b2e0000000000000000
.Ed
.Pp
This replaces nfsv4-data3 with an IPv4 address of 0.0.0.0, so that nfsv4-data3
will no longer be used to store this file's data.
Normally this will be called within a
.Xr find 1
command for all regular
files in the exported directory tree and must be done on the MDS.
When used this way,
you will probably also want the
.Fl q
option so that it won't spit out the results for every file.
If the disabled/repaired DS is nfsv4-data3, the commands done on the MDS
would be:
.Bd -literal -offset indent
# cd <top-level-exported-dir>
# find . -type f -exec pnfsdsfile -q -r nfsv4-data3 {} \;
.Ed
.Pp
There is a problem with the above command if the file found by
.Xr find 1
is renamed or unlinked before the
.Xr pnfsdsfile 1
command is done on it.
This should normally generate an error message.
A simple unlink is harmless,
but a link/unlink or rename might result in the file not having been processed
under its new name.
.Pp
To check that all files have their IP addresses set to 0.0.0.0 these
commands can be used (assuming the
disabled DS was nfsv4-data3):
.Bd -literal -offset indent
# cd <top-level-exported-dir>
# find . -type f -exec pnfsdsfile {} \; | sed "/nfsv4-data3/!d"
.Ed
.Pp
Any line(s) printed require the
.Xr pnfsdsfile 1
command with the
.Fl r
option to be run on them again.
.Pp
Once this is done, the replaced/repaired DS can be brought back online.
It should have empty ds0,...,dsN directories under the top level exported
directory for storage of data files just like it did when first set up.
Mount it on the MDS exactly as you did before disabling it.
For the nfsv4-data3 example, the command would be:
.Bd -literal -offset indent
# mount -t nfs -o nfsv4,minorversion=2,soft,retrans=2 nfsv4-data3:/ /data3
.Ed
.Pp
Then restart the nfsd to re-enable the DS:
.Bd -literal -offset indent
# /etc/rc.d/nfsd restart
.Ed
.Pp
Now, new files can be stored on nfsv4-data3,
but files with the IP address zeroed out on the MDS will not yet use the
repaired DS (nfsv4-data3).
The next step is to go through the exported file tree on the MDS and, for
files with an IPv4 address of 0.0.0.0 in their extended attribute, copy the file
data to the repaired DS and re-enable use of this mirror for them.
The command that copies the file data for one MDS file is
.Xr pnfsdscopymr 1
and it will also normally be used in a
.Xr find 1
command.
For the example case, the commands on the MDS would be:
.Bd -literal -offset indent
# cd <top-level-exported-dir>
# find . -type f -exec pnfsdscopymr -r /data3 {} \;
.Ed
.Pp
When this completes, the recovery should be complete or at least nearly so.
As noted above, if a link/unlink or rename occurs on a file name while the
above
.Xr find 1
is in progress, it may not get copied.
To check for any file(s) not yet copied, the commands are:
.Bd -literal -offset indent
# cd <top-level-exported-dir>
# find . -type f -exec pnfsdsfile {} \; | sed "/0\.0\.0\.0/!d"
.Ed
.Pp
If this command prints out any file name(s), these files must
have the
.Xr pnfsdscopymr 1
command done on them to complete the recovery:
.Bd -literal -offset indent
# pnfsdscopymr -r /data3 <file-path-reported>
.Ed
.Pp
If this command fails repeatedly with the error
.Dq pnfsdscopymr: Copymr failed for file <path>: Device not configured ,
this may be caused by a Read/Write layout that has not been returned.
The only way to get rid of such a layout is to restart the
.Xr nfsd 8 .
.Pp
All of these commands are designed to be
done while the pNFS service is running and can be re-run safely.
.Pp
For a more detailed discussion of the setup and management of a pNFS service
see:
.Bd -literal -offset indent
https://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt
.Ed
.Sh HISTORY
The
.Nm
service first appeared in
.Fx 12.0 .
.Sh BUGS
Since the MDS cannot be mirrored, it is a single point of failure just
as a non-pNFS server is.
For non-mirrored configurations, all
.Fx
systems used in the service
are single points of failure.