.\" Copyright (c) 2012-2016 Intel Corporation
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions, and the following disclaimer,
.\"    without modification.
.\" 2. Redistributions in binary form must reproduce at minimum a disclaimer
.\"    substantially similar to the "NO WARRANTY" disclaimer below
.\"    ("Disclaimer") and any redistribution must be conditioned upon
.\"    including a substantially similar Disclaimer requirement for further
.\"    binary redistribution.
.\"
.\" NO WARRANTY
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
.\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
.\" HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
.\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
.\" IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGES.
.\"
.\" nvme driver man page.
.\"
.\" Author: Jim Harris <jimharris@FreeBSD.org>
.\"
.Sh NAME
.Nm nvme
.Nd NVM Express core driver
.Sh SYNOPSIS
To compile this driver into your kernel,
place the following line in your kernel configuration file:
.Bd -ragged -offset indent
.Cd "device nvme"
.Ed
.Pp
Or, to load the driver as a module at boot, place the following line in
.Xr loader.conf 5 :
.Bd -literal -offset indent
nvme_load="YES"
.Ed
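As an illustration (not part of the manual page itself), the module can also be loaded on a running system without a reboot; this sketch assumes root privileges and a kernel without the driver compiled in:

```shell
# Load the nvme module; -n makes this a no-op if it is already loaded.
kldload -n nvme

# -q -m queries quietly by module name; exit status tells us if it is present.
kldstat -q -m nvme && echo "nvme module loaded"
```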
.Pp
Most users will also want to enable
.Xr nvd 4
to surface NVM Express namespaces as disk devices which can be
partitioned.
Note that in NVM Express terms, a namespace is roughly equivalent to a
SCSI LUN.
.Sh DESCRIPTION
The
.Nm
driver provides support for NVM Express (NVMe) controllers, such as:
.Bl -bullet
.It
Hardware initialization
.It
Per-CPU IO queue pairs
.It
API for registering NVMe namespace consumers such as
.Xr nvd 4
.It
API for submitting NVM commands to namespaces
.It
Ioctls for controller and namespace configuration and management
.El
.Pp
The
.Nm
driver creates controller device nodes in the format
.Pa /dev/nvmeX
and namespace device nodes in the format
.Pa /dev/nvmeXnsY .
Note that the NVM Express specification starts numbering namespaces at 1,
not 0, and this driver follows that convention.
.Pp
The
.Nm
driver will create an I/O queue pair for each CPU, provided enough MSI-X
vectors and NVMe queue pairs can be allocated.
If not enough vectors or queue pairs are available, the driver will use a
smaller number of queue pairs and assign multiple CPUs per queue pair.
.Pp
To force a single I/O queue pair shared by all CPUs, set the following
tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.per_cpu_io_queues=0
.Ed
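After boot, the effect of this tunable can be checked against the topology the driver actually chose; a hedged example (the device name assumes a controller probed as nvme0):

```shell
# Show the tunable as set in the kernel environment (errors if unset).
kenv hw.nvme.per_cpu_io_queues

# Show how many CPUs the driver ended up assigning to each I/O queue pair.
sysctl dev.nvme.0.num_cpus_per_ioq
```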
.Pp
To assign more than one CPU per I/O queue pair, thereby reducing the number
of MSI-X vectors consumed by the device, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.min_cpus_per_ioq=X
.Ed
.Pp
To force legacy interrupts for all
.Nm
driver instances, set the following tunable value in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.nvme.force_intx=1
.Ed
.Pp
Note that use of INTx implies disabling of per-CPU I/O queue pairs.
.Sh SYSCTL VARIABLES
The following controller-level sysctls are currently implemented:
.Bl -tag -width indent
.It Va dev.nvme.0.num_cpus_per_ioq
(R) Number of CPUs associated with each I/O queue pair.
.It Va dev.nvme.0.int_coal_time
(R/W) Interrupt coalescing timer period in microseconds.
.It Va dev.nvme.0.int_coal_threshold
(R/W) Interrupt coalescing threshold in number of command completions.
.El
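Because both interrupt-coalescing sysctls are marked R/W, they can be adjusted on a running system rather than only at boot; a sketch (the value shown is illustrative, not a recommendation, and the device name assumes controller 0):

```shell
# Read the current coalescing settings for controller 0.
sysctl dev.nvme.0.int_coal_time dev.nvme.0.int_coal_threshold

# Example write: coalesce completion interrupts for up to 100 microseconds.
sysctl dev.nvme.0.int_coal_time=100
```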
.Pp
The following queue pair-level sysctls are currently implemented.
Admin queue sysctls take the format of dev.nvme.0.adminq and I/O queue sysctls
take the format of dev.nvme.0.ioq0.
.Bl -tag -width indent
.It Va dev.nvme.0.ioq0.num_entries
(R) Number of entries in this queue pair's command and completion queue.
.It Va dev.nvme.0.ioq0.num_tr
(R) Number of nvme_tracker structures currently allocated for this queue pair.
.It Va dev.nvme.0.ioq0.num_prp_list
(R) Number of nvme_prp_list structures currently allocated for this queue pair.
144 (R) Current location of the submission queue head pointer as observed by
146 The head pointer is incremented by the controller as it takes commands off
147 of the submission queue.
.It Va dev.nvme.0.ioq0.sq_tail
(R) Current location of the submission queue tail pointer as observed by
the driver.
The driver increments the tail pointer after writing a command
into the submission queue to signal that a new command is ready to be
processed.
.It Va dev.nvme.0.ioq0.cq_head
(R) Current location of the completion queue head pointer as observed by
the driver.
The driver increments the head pointer after finishing
with a completion entry that was posted by the controller.
.It Va dev.nvme.0.ioq0.num_cmds
(R) Number of commands that have been submitted on this queue pair.
.It Va dev.nvme.0.ioq0.dump_debug
(W) Writing 1 to this sysctl will dump the full contents of the submission
and completion queues to the console.
.El
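For example, dump_debug could be used to inspect both the admin queue and the first I/O queue pair of controller 0; a sketch (requires root, and note the dump goes to the kernel console rather than to the shell's stdout):

```shell
# Trigger a dump of each queue pair's submission and completion queues.
sysctl dev.nvme.0.adminq.dump_debug=1
sysctl dev.nvme.0.ioq0.dump_debug=1

# The dumped contents land in the kernel message buffer.
dmesg | tail
```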
.Sh HISTORY
The
.Nm
driver first appeared in
.Fx 9.2 .
.Sh AUTHORS
The
.Nm
driver was developed by Intel and originally written by
.An Jim Harris Aq jimharris@FreeBSD.org ,
with contributions from Joe Golio at EMC.
.Pp
This man page was written by
.An Jim Harris Aq jimharris@FreeBSD.org .