1 .\" Copyright (c) 2010 Alexander Motin <mav@FreeBSD.org>
2 .\" All rights reserved.
4 .\" Redistribution and use in source and binary forms, with or without
5 .\" modification, are permitted provided that the following conditions
7 .\" 1. Redistributions of source code must retain the above copyright
8 .\" notice, this list of conditions and the following disclaimer.
9 .\" 2. Redistributions in binary form must reproduce the above copyright
10 .\" notice, this list of conditions and the following disclaimer in the
11 .\" documentation and/or other materials provided with the distribution.
13 .\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
14 .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
15 .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
16 .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
17 .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
18 .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
19 .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
20 .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
21 .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
22 .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.Nd control utility for software RAID devices
utility is used to manage software RAID configurations supported by the
GEOM RAID class.
The GEOM RAID class uses on-disk metadata to provide access to software-RAID
volumes defined by different RAID BIOSes.
Depending on the RAID BIOS type and its metadata format, different subsets of
configurations and features are supported.
To allow booting from a RAID volume, the metadata format should match the
RAID BIOS type and its capabilities.
To guarantee that these match, it is recommended to create volumes via the
RAID BIOS interface, while experienced users are free to do so using this
utility.
indicates an action to be performed:
.Bl -tag -width ".Cm destroy"
Create an array with a single volume.
argument specifies the on-disk metadata format to use for this array,
argument specifies the label of the created volume.
argument specifies the RAID level of the created volume, such as
"RAID0", "RAID1", etc.
The subsequent list enumerates providers to use as array components.
The special name "NONE" can be used to reserve space for absent disks.
The order of components can be important, depending on the specific RAID level
and metadata format.
Additional options include:
.Bl -tag -width ".Fl s Ar strip"
Enforce creation of the specified configuration even if it is officially
unsupported but technically possible.
Specifies metadata format options.
bytes on each component for this volume.
This should be used if several volumes per array are planned, or if smaller
components are going to be inserted later.
Defaults to the size of the smallest component.
Specifies the strip size in bytes.
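For example, a two-disk RAID1 mirror with Intel metadata and the volume
label "data" might be created as follows (the disk names
.Pa ada0
and
.Pa ada1
and the label are illustrative):
.Bd -literal -offset indent
graid label Intel data RAID1 ada0 ada1
.Ed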
Create another volume on the existing array.
argument is the name of the existing array, as reported by the label command.
The rest of the arguments are the same as for the label command.
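For example, assuming an existing array named "Intel-a0b1c2d3" (array names
vary), a second RAID1 volume labeled "data2" might be added as follows:
.Bd -literal -offset indent
graid add Intel-a0b1c2d3 data2 RAID1
.Ed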
Delete volume(s) from the existing array.
When the last volume is deleted, the array is also deleted and its metadata
erased.
argument is the name of the existing array.
arguments allow specifying the volume for deletion.
Additional options include:
.Bl -tag -width ".Fl f"
Delete the volume(s) even if they are still open.
Insert the specified provider(s) into the specified array in place of the first
missing or failed components.
If there are no such components, mark the disk(s) as spare.
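For example, a replacement disk might be inserted into a degraded array as
follows (the array name "Intel-a0b1c2d3" and the disk
.Pa ada2
are illustrative):
.Bd -literal -offset indent
graid insert Intel-a0b1c2d3 ada2
.Ed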
Remove the specified provider(s) from the specified array and erase metadata.
If there are spare disks present, the removed disk(s) will be replaced by
spares.
Mark the given disk(s) as failed, removing them from active use unless absolutely
necessary due to exhausted redundancy.
If there are spare disks present, the failed disk(s) will be replaced with one
of them.
Stop the given array.
The metadata will not be erased.
Additional options include:
.Bl -tag -width ".Fl f"
Stop the given array even if some of its volumes are open.
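For example, an array might be stopped even while its volumes are open
(the array name is illustrative):
.Bd -literal -offset indent
graid stop -f Intel-a0b1c2d3
.Ed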
Additional options include:
.Bl -tag -width ".Fl v"
.Sh SUPPORTED METADATA FORMATS
The GEOM RAID class follows a modular design, allowing different metadata
formats to be used.
Support is currently implemented for the following formats:
.Bl -tag -width "Intel"
The format defined by the SNIA Common RAID Disk Data Format v2.0 specification.
Used by some Adaptec RAID BIOSes and some hardware RAID controllers.
Because of the format's flexibility, different implementations support
different sets of features and have different on-disk metadata layouts.
To provide compatibility, the GEOM RAID class mimics the capabilities
of the first detected DDF array.
Because of this, the supported number of disks per volume, volumes per array,
partitions per disk, etc. may vary.
The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks),
RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks),
RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks),
RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).
The format supports two options, "BE" and "LE", meaning the big-endian byte
order defined by the specification (the default) and the little-endian order
used by some Adaptec controllers.
The format used by Intel RAID BIOS.
Supports up to two volumes per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks).
Configurations not supported by Intel RAID BIOS, but enforceable at your own
risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).
The format used by JMicron RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID10 (4 disks), CONCAT (2+ disks).
Configurations not supported by JMicron RAID BIOS, but enforceable at your own
risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).
The format used by NVIDIA MediaShield RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by NVIDIA MediaShield RAID BIOS, but enforceable
at your own risk: RAID1 (3+ disks).
The format used by Promise and AMD/ATI RAID BIOSes.
Supports multiple volumes per array.
Each disk can be split to be used by up to two arbitrary volumes.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by the RAID BIOSes, but enforceable at your
own risk: RAID1 (3+ disks), RAID10 (6+ disks).
The format used by SiliconImage RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by SiliconImage RAID BIOS, but enforceable at your
own risk: RAID1 (3+ disks), RAID10 (6+ disks).
.Sh SUPPORTED RAID LEVELS
The GEOM RAID class follows a modular design, allowing different RAID levels
to be used.
Full support for the following RAID levels is currently implemented:
RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT.
The following RAID levels are supported as read-only for volumes in an optimal
state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R,
RAID6, RAIDMDF.
.Sh RAID LEVEL MIGRATION
The GEOM RAID class has no support for RAID level migration, as allowed by some
metadata formats.
If you started a migration using the BIOS or in some other way, make sure to
complete it there.
Do not run the GEOM RAID class on migrating volumes, under pain of possible data
corruption!
The NVIDIA metadata format does not support volumes above 2TiB.
variable can be used to control the behavior of the
RAID GEOM class.
.Bl -tag -width indent
.It Va kern.geom.raid.aggressive_spare : No 0
Use any disk without metadata that is connected to a controller of a vendor
matching the volume metadata format as a spare.
Use this with great care to avoid losing data when an unrelated disk is
connected!
.It Va kern.geom.raid.clean_time : No 5
Mark the volume as clean when it has been idle for the specified number of
seconds.
.It Va kern.geom.raid.debug : No 0
.It Va kern.geom.raid.enable : No 1
Enable on-disk metadata taste.
.It Va kern.geom.raid.idle_threshold : No 1000000
Time in microseconds to consider a volume idle for rebuild purposes.
.It Va kern.geom.raid.name_format : No 0
Provider name format: 0 -- raid/r{num}, 1 -- raid/{label}.
.It Va kern.geom.raid.read_err_thresh : No 10
Number of read errors equated to disk failure.
Write errors are always considered disk failures.
.It Va kern.geom.raid.start_timeout : No 30
Time to wait for missing array components on startup.
.It Va kern.geom.raid. Ns Ar X Ns Va .enable : No 1
Enable taste for a specific metadata or transformation module.
.It Va kern.geom.raid.legacy_aliases : No 0
Enable GEOM RAID emulation of legacy /dev/ar%d devices.
This should aid the upgrade of systems from legacy to modern releases.
Exit status is 0 on success, and non-zero if the command fails.
.An Alexander Motin Aq mav@FreeBSD.org
.An M. Warner Losh Aq imp@FreeBSD.org