.\" Copyright (c) 2010 Alexander Motin <mav@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.Nd "control utility for software RAID devices"
The
.Nm
utility is used to manage software RAID configurations supported by the
GEOM RAID class.
The GEOM RAID class uses on-disk metadata to provide access to software-RAID
volumes defined by different RAID BIOSes.
Depending on the RAID BIOS type and its metadata format, different subsets of
configurations and features are supported.
To allow booting from a RAID volume, the metadata format should match the
RAID BIOS type and its capabilities.
To guarantee that these match, it is recommended to create volumes via the
RAID BIOS interface, while experienced users are free to do it using this
utility.
The first argument to
.Nm
indicates an action to be performed:
.Bl -tag -width ".Cm destroy"
.It Cm label
Create an array with a single volume.
The
.Ar format
argument specifies the on-disk metadata format to use for this array.
The
.Ar label
argument specifies the label of the created volume.
The
.Ar level
argument specifies the RAID level of the created volume, such as:
"RAID0", "RAID1", etc.
The subsequent list enumerates providers to use as array components.
The special name "NONE" can be used to reserve space for absent disks.
The order of components can be important, depending on the specific RAID level
and metadata format.
Additional options include:
.Bl -tag -width ".Fl s Ar strip"
.It Fl f
Enforce creation of the specified configuration even if it is officially
unsupported but technically possible.
.It Fl o Ar fmtopt
Specifies metadata format options.
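For example, to create a DDF array using the little-endian metadata option
described in the
.Sx SUPPORTED METADATA FORMATS
section (the array label and disk names here are placeholders):
.Bd -literal -offset indent
graid label -o LE DDF data RAID1 ada0 ada1
.Ed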
.It Fl S Ar size
Use the specified number of bytes on each component for this volume.
This should be used if several volumes per array are planned, or if smaller
components are going to be inserted later.
Defaults to the size of the smallest component.
.It Fl s Ar strip
Specifies the strip size in bytes.
.El
.It Cm add
Create another volume on the existing array.
The
.Ar name
argument is the name of the existing array, as reported by the
.Cm label
command.
The rest of the arguments are the same as for the
.Cm label
command.
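For example, the following commands create a two-disk RAID1 array using Intel
metadata and then add a second, RAID0 volume to the same array (the labels,
disk names, and size are illustrative):
.Bd -literal -offset indent
graid label -S 100G Intel data RAID1 ada0 ada1
graid add data scratch RAID0
.Ed
The
.Fl S
option in the first command leaves space on the components so that the second
volume can be created.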
.It Cm delete
Delete volume(s) from the existing array.
When the last volume is deleted, the array is also deleted and its metadata
erased.
The
.Ar name
argument is the name of the existing array.
The optional
.Ar label
or
.Ar num
arguments allow specifying the volume for deletion.
Additional options include:
.Bl -tag -width ".Fl f"
.It Fl f
Delete the volume(s) even if they are still open.
.El
.It Cm insert
Insert the specified provider(s) into the specified array in place of the first
missing or failed components.
If there are no such components, mark the disk(s) as spare.
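For example, to insert a fresh disk into an array named "data", where it will
replace a missing or failed component or else become a spare (the names are
illustrative):
.Bd -literal -offset indent
graid insert data ada2
.Ed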
.It Cm remove
Remove the specified provider(s) from the specified array and erase their metadata.
If spare disks are present, the removed disk(s) will be replaced by
spares.
.It Cm fail
Mark the given disk(s) as failed, removing them from active use unless absolutely
necessary due to exhausted redundancy.
If spare disks are present, the failed disk(s) will be replaced with one
of them.
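For example, to force replacement of a suspect disk in an array named "data"
(the names are illustrative):
.Bd -literal -offset indent
graid fail data ada1
.Ed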
.It Cm stop
Stop the given array.
The metadata will not be erased.
Additional options include:
.Bl -tag -width ".Fl f"
.It Fl f
Stop the given array even if some of its volumes are open.
Additional options include:
.Bl -tag -width ".Fl v"
.Sh SUPPORTED METADATA FORMATS
The GEOM RAID class follows a modular design, allowing different metadata
formats to be used.
Support is currently implemented for the following formats:
.Bl -tag -width "Intel"
.It Cm DDF
The format defined by the SNIA Common RAID Disk Data Format v2.0 specification.
Used by some Adaptec RAID BIOSes and some hardware RAID controllers.
Because of the format's high flexibility, different implementations support
different sets of features and have different on-disk metadata layouts.
To provide compatibility, the GEOM RAID class mimics the capabilities
of the first detected DDF array.
Accordingly, it may support a different number of disks per volume,
volumes per array, partitions per disk, etc.
The following configurations are supported: RAID0 (2+ disks), RAID1 (2+ disks),
RAID1E (3+ disks), RAID3 (3+ disks), RAID4 (3+ disks), RAID5 (3+ disks),
RAID5E (4+ disks), RAID5EE (4+ disks), RAID5R (3+ disks), RAID6 (4+ disks),
RAIDMDF (4+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).
.Pp
The format supports two options, "BE" and "LE", which select the big-endian
byte order defined by the specification (the default) and the little-endian
byte order used by some Adaptec controllers.
.It Cm Intel
The format used by Intel RAID BIOS.
Supports up to two volumes per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks).
Configurations not supported by Intel RAID BIOS, but enforceable at your own
risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks).
.It Cm JMicron
The format used by JMicron RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID10 (4 disks), CONCAT (2+ disks).
Configurations not supported by JMicron RAID BIOS, but enforceable at your own
risk: RAID1 (3+ disks), RAID1E (3+ disks), RAID10 (6+ disks), RAID5 (3+ disks).
.It Cm NVIDIA
The format used by NVIDIA MediaShield RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4+ disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by NVIDIA MediaShield RAID BIOS, but enforceable
at your own risk: RAID1 (3+ disks).
.It Cm Promise
The format used by Promise and AMD/ATI RAID BIOSes.
Supports multiple volumes per array.
Each disk can be split to be used by up to two arbitrary volumes.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by the RAID BIOSes, but enforceable at your
own risk: RAID1 (3+ disks), RAID10 (6+ disks).
.It Cm SiliconImage
The format used by SiliconImage RAID BIOS.
Supports one volume per array.
Supports configurations: RAID0 (2+ disks), RAID1 (2 disks),
RAID5 (3+ disks), RAID10 (4 disks), SINGLE (1 disk), CONCAT (2+ disks).
Configurations not supported by SiliconImage RAID BIOS, but enforceable at your
own risk: RAID1 (3+ disks), RAID10 (6+ disks).
.El
.Sh SUPPORTED RAID LEVELS
The GEOM RAID class follows a modular design, allowing different RAID levels
to be used.
Full support for the following RAID levels is currently implemented:
RAID0, RAID1, RAID1E, RAID10, SINGLE, CONCAT.
The following RAID levels are supported as read-only for volumes in an optimal
state (without using redundancy): RAID4, RAID5, RAID5E, RAID5EE, RAID5R,
RAID6, RAIDMDF.
.Sh RAID LEVEL MIGRATION
The GEOM RAID class has no support for RAID level migration, as offered by some
RAID BIOSes.
If you have started a migration using the BIOS or in some other way, make sure
to complete it there.
Do not run the GEOM RAID class on migrating volumes, under pain of possible
data corruption.
NVIDIA metadata format does not support volumes above 2TiB.
The following
.Xr sysctl 8
variables can be used to control the behavior of the
GEOM RAID class:
.Bl -tag -width indent
.It Va kern.geom.raid.aggressive_spare : No 0
Use any disks without metadata connected to controllers of the vendor
matching the volume metadata format as spares.
Use this with great care, to avoid losing data if an unrelated disk is
connected!
.It Va kern.geom.raid.clean_time : No 5
Mark volume as clean when idle for the specified number of seconds.
.It Va kern.geom.raid.debug : No 0
Debug level of the GEOM RAID class.
.It Va kern.geom.raid.enable : No 1
Enable on-disk metadata taste.
.It Va kern.geom.raid.idle_threshold : No 1000000
Time in microseconds to consider a volume idle for rebuild purposes.
.It Va kern.geom.raid.name_format : No 0
Provider name format: 0 -- raid/r{num}, 1 -- raid/{label}.
.It Va kern.geom.raid.read_err_thresh : No 10
Number of read errors equated to disk failure.
Write errors are always considered disk failures.
.It Va kern.geom.raid.start_timeout : No 30
Time to wait for missing array components on startup.
.It Va kern.geom.raid. Ns Ar X Ns Va .enable : No 1
Enable taste for a specific metadata or transformation module.
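For example, to have providers named by volume label rather than by number,
the name format variable described above can be set with
.Xr sysctl 8 :
.Bd -literal -offset indent
sysctl kern.geom.raid.name_format=1
.Ed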
.El
.Sh EXIT STATUS
Exit status is 0 on success, and non-zero if the command fails.
.An Alexander Motin Aq Mt mav@FreeBSD.org
.An M. Warner Losh Aq Mt imp@FreeBSD.org