.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.Nd available properties for ZFS storage pools
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
The following are read-only properties:
.It Sy allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has
not been brought online (e.g. using
.Nm zpool Cm online Fl e
).
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially
more than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity; zfs
reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs-module-parameters 5
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Xr zpool 8
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
The following property can be set at creation time and import time:
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot
be trusted, or in an alternate boot environment, where the typical paths are
not valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
The following property can be set only at import time:
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
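As an illustration of the exponent relationship described above, the sector
size implied by an ashift value is 1 shifted left by that value. The pool
creation line is a hypothetical sketch (the pool name and device paths are
placeholders) and is shown commented out:

```shell
# ashift is a base-2 exponent: sector size in bytes = 2^ashift.
echo $((1 << 9))    # 512  - traditional sector size (ashift=9)
echo $((1 << 12))   # 4096 - 4 KiB sectors (ashift=12)

# Hypothetical usage (placeholders; requires root and real disks):
# zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
```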
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 5
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed. This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks. The default setting for
this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free. Instead,
it will optimistically delay allowing smaller ranges to be aggregated into
a few larger ones. These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices. This will vary
depending on how well the specific device handles these commands. For
lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
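The two approaches above can be sketched as follows (an administrative
configuration fragment: the pool name tank is a placeholder, and the commands
assume an existing pool and root privileges):

```shell
# Enable periodic automatic TRIM on the pool:
zpool set autotrim=on tank
zpool get autotrim tank          # verify the setting

# Or, for lower-end devices, run an on-demand TRIM (e.g. from cron):
zpool trim tank
zpool status -t tank             # show per-vdev TRIM status
```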
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location
that can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
(empty string) uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
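The behaviors above can be sketched as a configuration fragment (pool name,
device names, and the cache-file path are placeholders; the commands require
root and real devices):

```shell
# Cache this pool's configuration in an alternate location, so it is
# not auto-imported at boot from the default cache file:
zpool create -o cachefile=/etc/zfs/cluster.cache tank mirror sdb sdc

# Or never cache the configuration at all (temporary pool):
zpool set cachefile=none tank

# Later, import pools recorded in the alternate cache file:
zpool import -c /etc/zfs/cluster.cache tank
```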
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only. It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use. See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property each host must set a unique hostid.
See
.Xr zgenhostid 8
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
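A sketch of the typical failover setup described above (the pool name tank is
a placeholder; the commands assume root privileges and the OpenZFS userland
tools, and behavior may vary by version):

```shell
# Each host sharing the storage needs a unique, persistent hostid.
# zgenhostid writes /etc/hostid (older releases may require an explicit
# hex value as an argument):
zgenhostid
hostid                       # print the hostid now in effect

# Then enable the activity check on the shared pool:
zpool set multihost=on tank
```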
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.