.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.Nm zpool Ns Pf - Cm iostat
.Nd Display logical I/O statistics for the given ZFS storage pools/vdevs
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
Displays logical I/O statistics for the given pools/vdevs. Physical I/O
operations may be observed via
.Xr iostat 1 .
If writes are located nearby, they may be merged into a single
larger operation. Additional I/O may be generated depending on the level of
vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs
in that pool, or a list of any vdevs from any pool. If no items are specified,
statistics for every pool in the system are shown.
If given, the statistics are printed every
.Ar interval
seconds until ^C is pressed. If the
.Fl n
flag is specified the headers are displayed only once, otherwise they are
displayed periodically. If
.Ar count
is specified, the command exits after
.Ar count
reports are printed. The first report printed is always
the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed. However, this behavior can be suppressed with the
.Fl y
flag. Also note that the units (K, M, G, and so on)
that are printed in the report are in base 1024. To get the raw
values, use the
.Fl p
flag.
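The base-1024 scaling can be reproduced in shell. The helper below is a
hypothetical illustration (not part of zpool) of turning a raw value, as
printed with the parsable-values flag, into the suffixed form:

```shell
# zpool iostat prints sizes with base-1024 suffixes (K = 1024, M = 1024^2, ...).
# With raw (parsable) output it prints exact integers instead. This sketch
# converts a raw byte count to the suffixed form for exact multiples.
to_human() {
    v=$1
    for suffix in "" K M G T P; do
        if [ "$v" -lt 1024 ]; then
            echo "${v}${suffix}"
            return
        fi
        v=$((v / 1024))
    done
    echo "${v}E"
}

to_human 655360      # 655360 / 1024 = 640, printed as 640K
to_human 512         # below 1024, printed as-is
```

Integer division truncates, so this matches zpool only for exact multiples;
the real tool also rounds fractional values (e.g. 10.2G).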
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output. Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory. Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set. If a script requires the use of a privileged
command, like
.Xr smartctl 8 ,
then it is recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
Script output should be in the form of "name=value". The column name is
set to "name" and the value is set to "value". Multiple lines can be
used to output multiple columns. The first line of output not in the
"name=value" format is displayed without a column title, and no more
output after that is displayed. This can be useful for printing error
messages. Blank or NULL values are printed as a '-' to make output
awk-able.
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.El
.Bl -tag -width "VDEV_UPATH"
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*). For use with device mapper,
multipath, or partitioned vdevs.
.El
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
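Such a script can be sketched as follows. The script name and its columns
are purely illustrative (any "name=value" output works), and VDEV_UPATH is
given a fallback so the sketch can be exercised outside of zpool:

```shell
#!/bin/sh
# Hypothetical ~/.zpool.d/devinfo script. zpool iostat -c devinfo would run
# it once per vdev with VDEV_PATH/VDEV_UPATH set in the environment, and
# turn each "name=value" line it prints into a column of the report.

emit_columns() {
    # Fall back to /dev/null so the script can be run standalone for testing.
    upath="${VDEV_UPATH:-/dev/null}"
    echo "dev=${upath}"
    # A blank value is rendered as '-' in the iostat output.
    echo "sched="
}

emit_columns
```

A first line not in "name=value" form would be shown without a column title
and would stop further output, which is how a script reports an error.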
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl n
Print headers only once when passed.
.It Fl p
Display numbers in parsable (exact) values. Time values are in
nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf vdev's IO. This includes
histograms of individual IOs (ind) and aggregate IOs (agg). These stats
can be useful for observing how well IO aggregation is working. Note
that TRIM IOs may exceed 16M, but will be counted as 16M.
.It Fl v
Verbose statistics. Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
.It Fl w
Display latency histograms:
.Pp
.Ar total_wait :
Total IO time (queuing + disk IO time).
.Ar disk_wait :
Disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Amount of time IO spent in synchronous priority queues. Does not include
disk time.
.Ar asyncq_wait :
Amount of time IO spent in asynchronous priority queues. Does not include
disk time.
.Ar scrub :
Amount of time IO spent in scrub queue. Does not include disk time.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues. Does
not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue. Does not include disk time.
.Ar trim :
Average queuing time in trim queue. Does not include disk time.
.It Fl q
Include active queue statistics. Each priority queue has both
pending and active
IOs. Pending IOs are waiting to
be issued to the disk, and active IOs have been issued to disk and are
waiting for completion. These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority
queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Ar trimq_write :
Current number of entries in trim queue.
All queue statistics are instantaneous measurements of the number of
entries in the queues. If you specify an interval, the measurements
will be sampled from the end of the interval.
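Because scripted mode separates fields with a single tab, the report is easy
to consume from shell. The sample line below is illustrative (not captured
from a real pool), and the field positions assume the default column layout:

```shell
# One line per pool/vdev in scripted (-H) output, fields tab-separated:
# name, alloc, free, read ops, write ops, read bandwidth, write bandwidth.
sample="$(printf 'tank\t10.2G\t1.81T\t5\t23\t646K\t2.95M')"

# cut splits on tabs by default, so columns can be picked out by number.
pool=$(printf '%s\n' "$sample" | cut -f1)
read_ops=$(printf '%s\n' "$sample" | cut -f4)
write_bw=$(printf '%s\n' "$sample" | cut -f7)

echo "$pool: $read_ops read ops, $write_bw write bandwidth"
```

Flags such as the latency or queue statistics add further columns, so a
parser should count fields from the actual header layout in use.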