2 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
4 * Copyright (C) 2011-2014 Matteo Landi
5 * Copyright (C) 2011-2016 Luigi Rizzo
6 * Copyright (C) 2011-2016 Giuseppe Lettieri
7 * Copyright (C) 2011-2016 Vincenzo Maffione
10 * Redistribution and use in source and binary forms, with or without
11 * modification, are permitted provided that the following conditions
13 * 1. Redistributions of source code must retain the above copyright
14 * notice, this list of conditions and the following disclaimer.
15 * 2. Redistributions in binary form must reproduce the above copyright
16 * notice, this list of conditions and the following disclaimer in the
17 * documentation and/or other materials provided with the distribution.
19 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
20 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
22 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
23 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
24 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
25 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
26 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
27 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
28 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * This module supports memory mapped access to network devices,
 * see netmap(4).
 *
 * The module uses a large memory pool allocated by the kernel
40 * and accessible as mmapped memory by multiple userspace threads/processes.
41 * The memory pool contains packet buffers and "netmap rings",
42 * i.e. user-accessible copies of the interface's queues.
44 * Access to the network card works like this:
45 * 1. a process/thread issues one or more open() on /dev/netmap, to create
 *    a select()able file descriptor on which events are reported.
47 * 2. on each descriptor, the process issues an ioctl() to identify
48 * the interface that should report events to the file descriptor.
49 * 3. on each descriptor, the process issues an mmap() request to
50 * map the shared memory region within the process' address space.
51 * The list of interesting queues is indicated by a location in
52 * the shared memory region.
53 * 4. using the functions in the netmap(4) userspace API, a process
54 * can look up the occupation state of a queue, access memory buffers,
55 * and retrieve received packets or enqueue packets to transmit.
56 * 5. using some ioctl()s the process can synchronize the userspace view
57 * of the queue with the actual status in the kernel. This includes both
58 * receiving the notification of new packets, and transmitting new
59 * packets on the output interface.
60 * 6. select() or poll() can be used to wait for events on individual
61 * transmit or receive queues (or all queues for a given interface).
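 *
 * For concreteness, steps 1-3 look roughly like this in userspace
 * (a minimal sketch based on the public API in net/netmap.h and
 * net/netmap_user.h; error handling omitted):
 *
 *	int fd = open("/dev/netmap", O_RDWR);
 *	struct nmreq req;
 *	bzero(&req, sizeof(req));
 *	req.nr_version = NETMAP_API;
 *	strncpy(req.nr_name, "em0", sizeof(req.nr_name));
 *	req.nr_flags = NR_REG_ALL_NIC;
 *	ioctl(fd, NIOCREGIF, &req);				// step 2
 *	void *mem = mmap(NULL, req.nr_memsize,
 *	    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);		// step 3
 *	struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);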
64 SYNCHRONIZATION (USER)
66 The netmap rings and data structures may be shared among multiple
67 user threads or even independent processes.
68 Any synchronization among those threads/processes is delegated
69 to the threads themselves. Only one thread at a time can be in
70 a system call on the same netmap ring. The OS does not enforce
this and only guarantees against system crashes in case of
invalid usage.

	LOCKING (INTERNAL)
76 Within the kernel, access to the netmap rings is protected as follows:
78 - a spinlock on each ring, to handle producer/consumer races on
79 RX rings attached to the host stack (against multiple host
80 threads writing from the host stack to the same ring),
81 and on 'destination' rings attached to a VALE switch
82 (i.e. RX rings in VALE ports, and TX rings in NIC/host ports)
  protecting against multiple active senders for the same destination
85 - an atomic variable to guarantee that there is at most one
86 instance of *_*xsync() on the ring at any time.
87 For rings connected to user file
88 descriptors, an atomic_test_and_set() protects this, and the
89 lock on the ring is not actually used.
90 For NIC RX rings connected to a VALE switch, an atomic_test_and_set()
91 is also used to prevent multiple executions (the driver might indeed
92 already guarantee this).
93 For NIC TX rings connected to a VALE switch, the lock arbitrates
  access to the queue (both when allocating buffers and when pushing
  them out).
97 - *xsync() should be protected against initializations of the card.
98 On FreeBSD most devices have the reset routine protected by
99 a RING lock (ixgbe, igb, em) or core lock (re). lem is missing
  the RING protection on rx_reset(); this should be added.
102 On linux there is an external lock on the tx path, which probably
103 also arbitrates access to the reset routine. XXX to be revised
105 - a per-interface core_lock protecting access from the host stack
106 while interfaces may be detached from netmap mode.
107 XXX there should be no need for this lock if we detach the interfaces
108 only while they are down.
	--- VALE SWITCH ---

NMG_LOCK() serializes all modifications to switches and ports.
114 A switch cannot be deleted until all ports are gone.
116 For each switch, an SX lock (RWlock on linux) protects
deletion of ports. When configuring a new port or deleting an
existing one, the
118 lock is acquired in exclusive mode (after holding NMG_LOCK).
119 When forwarding, the lock is acquired in shared mode (without NMG_LOCK).
120 The lock is held throughout the entire forwarding cycle,
during which the thread may incur a page fault.
122 Hence it is important that sleepable shared locks are used.
124 On the rx ring, the per-port lock is grabbed initially to reserve
a number of slots in the ring, then the lock is released,
126 packets are copied from source to destination, and then
127 the lock is acquired again and the receive ring is updated.
128 (A similar thing is done on the tx ring for NIC and host stack
129 ports attached to the switch)
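
A sketch of that two-phase update (illustrative pseudo-code, not
verbatim from the VALE code):

	lock(port); reserve N slots; unlock(port);
	copy the packets into the reserved slots;	// may page-fault here
	lock(port); advance the ring pointers; unlock(port);
*/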
134 /* --- internals ----
136 * Roadmap to the code that implements the above.
138 * > 1. a process/thread issues one or more open() on /dev/netmap, to create
 * >    a select()able file descriptor on which events are reported.
141 * Internally, we allocate a netmap_priv_d structure, that will be
142 * initialized on ioctl(NIOCREGIF). There is one netmap_priv_d
143 * structure for each open().
146 * FreeBSD: see netmap_open() (netmap_freebsd.c)
147 * linux: see linux_netmap_open() (netmap_linux.c)
149 * > 2. on each descriptor, the process issues an ioctl() to identify
150 * > the interface that should report events to the file descriptor.
152 * Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0.
153 * Most important things happen in netmap_get_na() and
154 * netmap_do_regif(), called from there. Additional details can be
155 * found in the comments above those functions.
157 * In all cases, this action creates/takes-a-reference-to a
158 * netmap_*_adapter describing the port, and allocates a netmap_if
159 * and all necessary netmap rings, filling them with netmap buffers.
161 * In this phase, the sync callbacks for each ring are set (these are used
162 * in steps 5 and 6 below). The callbacks depend on the type of adapter.
163 * The adapter creation/initialization code puts them in the
164 * netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they
165 * are copied from there to the netmap_kring's during netmap_do_regif(), by
166 * the nm_krings_create() callback. All the nm_krings_create callbacks
167 * actually call netmap_krings_create() to perform this and the other
168 * common stuff. netmap_krings_create() also takes care of the host rings,
169 * if needed, by setting their sync callbacks appropriately.
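 *
 * For reference, a native driver typically publishes these callbacks
 * at attach time roughly as follows (a sketch; foo_* and the
 * capitalized constants are illustrative, not from this file):
 *
 *	struct netmap_adapter na = { 0 };
 *	na.ifp = ifp;
 *	na.num_tx_desc = na.num_rx_desc = NUM_DESCRIPTORS;
 *	na.num_tx_rings = na.num_rx_rings = NUM_QUEUES;
 *	na.nm_txsync = foo_netmap_txsync;
 *	na.nm_rxsync = foo_netmap_rxsync;
 *	na.nm_register = foo_netmap_reg;
 *	netmap_attach(&na);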
 * Additional actions depend on the kind of netmap_adapter that has been
 * registered:
174 * - netmap_hw_adapter: [netmap.c]
175 * This is a system netdev/ifp with native netmap support.
176 * The ifp is detached from the host stack by redirecting:
177 * - transmissions (from the network stack) to netmap_transmit()
178 * - receive notifications to the nm_notify() callback for
179 * this adapter. The callback is normally netmap_notify(), unless
180 * the ifp is attached to a bridge using bwrap, in which case it
181 * is netmap_bwrap_intr_notify().
183 * - netmap_generic_adapter: [netmap_generic.c]
184 * A system netdev/ifp without native netmap support.
 * (the decision about native/non-native support is taken in
187 * netmap_get_hw_na(), called by netmap_get_na())
189 * - netmap_vp_adapter [netmap_vale.c]
190 * Returned by netmap_get_bdg_na().
191 * This is a persistent or ephemeral VALE port. Ephemeral ports
192 * are created on the fly if they don't already exist, and are
193 * always attached to a bridge.
 * Persistent VALE ports must be created separately, and
 * then attached like normal NICs. The NIOCREGIF we are examining
 * will find them only if they had previously been created and
197 * attached (see VALE_CTL below).
199 * - netmap_pipe_adapter [netmap_pipe.c]
200 * Returned by netmap_get_pipe_na().
201 * Both pipe ends are created, if they didn't already exist.
203 * - netmap_monitor_adapter [netmap_monitor.c]
204 * Returned by netmap_get_monitor_na().
205 * If successful, the nm_sync callbacks of the monitored adapter
206 * will be intercepted by the returned monitor.
208 * - netmap_bwrap_adapter [netmap_vale.c]
209 * Cannot be obtained in this way, see VALE_CTL below
213 * linux: we first go through linux_netmap_ioctl() to
214 * adapt the FreeBSD interface to the linux one.
217 * > 3. on each descriptor, the process issues an mmap() request to
218 * > map the shared memory region within the process' address space.
219 * > The list of interesting queues is indicated by a location in
220 * > the shared memory region.
223 * FreeBSD: netmap_mmap_single (netmap_freebsd.c).
224 * linux: linux_netmap_mmap (netmap_linux.c).
226 * > 4. using the functions in the netmap(4) userspace API, a process
227 * > can look up the occupation state of a queue, access memory buffers,
228 * > and retrieve received packets or enqueue packets to transmit.
 * These actions do not involve the kernel.
232 * > 5. using some ioctl()s the process can synchronize the userspace view
233 * > of the queue with the actual status in the kernel. This includes both
234 * > receiving the notification of new packets, and transmitting new
235 * > packets on the output interface.
237 * These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
238 * cases. They invoke the nm_sync callbacks on the netmap_kring
239 * structures, as initialized in step 2 and maybe later modified
240 * by a monitor. Monitors, however, will always call the original
241 * callback before doing anything else.
244 * > 6. select() or poll() can be used to wait for events on individual
245 * > transmit or receive queues (or all queues for a given interface).
247 * Implemented in netmap_poll(). This will call the same nm_sync()
248 * callbacks as in step 5 above.
251 * linux: we first go through linux_netmap_poll() to adapt
252 * the FreeBSD interface to the linux one.
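 *
 * Putting steps 4-6 together, a minimal receive loop looks roughly
 * like this (a userspace sketch based on the netmap_user.h helpers;
 * process_packet() is an illustrative consumer):
 *
 *	struct netmap_ring *rxring = NETMAP_RXRING(nifp, 0);
 *	struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *	for (;;) {
 *		poll(&pfd, 1, -1);				// step 6
 *		while (!nm_ring_empty(rxring)) {		// step 4
 *			struct netmap_slot *slot = &rxring->slot[rxring->cur];
 *			process_packet(NETMAP_BUF(rxring, slot->buf_idx),
 *			    slot->len);
 *			rxring->cur = nm_ring_next(rxring, rxring->cur);
 *		}
 *		rxring->head = rxring->cur;	// release slots (step 5)
 *	}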
255 * ---- VALE_CTL -----
257 * VALE switches are controlled by issuing a NIOCREGIF with a non-null
258 * nr_cmd in the nmreq structure. These subcommands are handled by
259 * netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created
260 * and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF
261 * subcommands, respectively.
263 * Any network interface known to the system (including a persistent VALE
264 * port) can be attached to a VALE switch by issuing the
265 * NETMAP_REQ_VALE_ATTACH command. After the attachment, persistent VALE ports
266 * look exactly like ephemeral VALE ports (as created in step 2 above). The
267 * attachment of other interfaces, instead, requires the creation of a
268 * netmap_bwrap_adapter. Moreover, the attached interface must be put in
269 * netmap mode. This may require the creation of a netmap_generic_adapter if
270 * we have no native support for the interface, or if generic adapters have
271 * been forced by sysctl.
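 *
 * For example, creating a persistent VALE port and then attaching it
 * might look like this (a sketch using the legacy nmreq encoding;
 * error handling omitted):
 *
 *	struct nmreq req;
 *	bzero(&req, sizeof(req));
 *	req.nr_version = NETMAP_API;
 *	strncpy(req.nr_name, "vale0:myport", sizeof(req.nr_name));
 *	req.nr_cmd = NETMAP_BDG_NEWIF;		// create the persistent port
 *	ioctl(fd, NIOCREGIF, &req);
 *	req.nr_cmd = NETMAP_BDG_ATTACH;		// attach it to switch vale0
 *	ioctl(fd, NIOCREGIF, &req);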
273 * Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(),
274 * called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach()
275 * callback. In the case of the bwrap, the callback creates the
276 * netmap_bwrap_adapter. The initialization of the bwrap is then
277 * completed by calling netmap_do_regif() on it, in the nm_bdg_ctl()
278 * callback (netmap_bwrap_bdg_ctl in netmap_vale.c).
279 * A generic adapter for the wrapped ifp will be created if needed, when
280 * netmap_get_bdg_na() calls netmap_get_hw_na().
283 * ---- DATAPATHS -----
285 * -= SYSTEM DEVICE WITH NATIVE SUPPORT =-
287 * na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach()
289 * - tx from netmap userspace:
291 * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
292 * kring->nm_sync() == DEVICE_netmap_txsync()
293 * 2) device interrupt handler
294 * na->nm_notify() == netmap_notify()
295 * - rx from netmap userspace:
297 * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
298 * kring->nm_sync() == DEVICE_netmap_rxsync()
299 * 2) device interrupt handler
300 * na->nm_notify() == netmap_notify()
 *    - rx from host stack
 *        concurrently:
 *           1) host stack
 *              netmap_transmit()
305 * na->nm_notify == netmap_notify()
306 * 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context
307 * kring->nm_sync() == netmap_rxsync_from_host
308 * netmap_rxsync_from_host(na, NULL, NULL)
 *    - tx to host stack
 *           ioctl(NIOCTXSYNC)/netmap_poll() in process context
311 * kring->nm_sync() == netmap_txsync_to_host
312 * netmap_txsync_to_host(na)
 *                    nm_os_send_up()
 *                       FreeBSD: na->if_input() == ether_input()
315 * linux: netif_rx() with NM_MAGIC_PRIORITY_RX
318 * -= SYSTEM DEVICE WITH GENERIC SUPPORT =-
320 * na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach()
322 * - tx from netmap userspace:
324 * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
325 * kring->nm_sync() == generic_netmap_txsync()
326 * nm_os_generic_xmit_frame()
327 * linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX
328 * ifp->ndo_start_xmit == generic_ndo_start_xmit()
329 * gna->save_start_xmit == orig. dev. start_xmit
330 * FreeBSD: na->if_transmit() == orig. dev if_transmit
331 * 2) generic_mbuf_destructor()
332 * na->nm_notify() == netmap_notify()
333 * - rx from netmap userspace:
334 * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
335 * kring->nm_sync() == generic_netmap_rxsync()
338 * generic_rx_handler()
340 * na->nm_notify() == netmap_notify()
341 * - rx from host stack
342 * FreeBSD: same as native
343 * Linux: same as native except:
 *           1) host stack
 *              dev_queue_xmit() without NM_MAGIC_PRIORITY_TX
346 * ifp->ndo_start_xmit == generic_ndo_start_xmit()
 *                    netmap_transmit()
 *                       na->nm_notify() == netmap_notify()
349 * - tx to host stack (same as native):
 *
 *    -= VALE PORT =-
 *
 *    na == netmap_vp_adapter, created by netmap_get_bdg_na()
 *
 *    - tx from netmap userspace:
 *           ioctl(NIOCTXSYNC)/netmap_poll() in process context
358 * kring->nm_sync() == netmap_vp_txsync()
 *    - system device with native support:
 *       from cable:
 *           interrupt
363 * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
364 * kring->nm_sync() == DEVICE_netmap_rxsync()
 *                     netmap_vp_txsync()
 *                     kring->nm_sync() == DEVICE_netmap_rxsync()
 *       from host stack:
 *           netmap_transmit()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
370 * kring->nm_sync() == netmap_rxsync_from_host()
373 * - system device with generic support:
374 * from device driver:
375 * generic_rx_handler()
376 * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
377 * kring->nm_sync() == generic_netmap_rxsync()
 *                     netmap_vp_txsync()
 *                     kring->nm_sync() == generic_netmap_rxsync()
 *       from host stack:
 *           netmap_transmit()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
383 * kring->nm_sync() == netmap_rxsync_from_host()
386 * (all cases) --> nm_bdg_flush()
387 * dest_na->nm_notify() == (see below)
 *    - VALE ports:
 *       concurrently:
 *           1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
394 * kring->nm_sync() == netmap_vp_rxsync()
395 * 2) from nm_bdg_flush()
396 * na->nm_notify() == netmap_notify()
398 * - system device with native support:
 *          to cable:
 *             na->nm_notify() == netmap_bwrap_notify()
402 * kring->nm_sync() == DEVICE_netmap_txsync()
 *          to host stack:
 *                 kring->nm_sync() == netmap_txsync_to_host
407 * netmap_vp_rxsync_locked()
409 * - system device with generic adapter:
 *          to device driver:
 *             na->nm_notify() == netmap_bwrap_notify()
413 * kring->nm_sync() == generic_netmap_txsync()
 *          to host stack:
 *                 kring->nm_sync() == netmap_txsync_to_host
 *                 netmap_vp_rxsync_locked()
423 * OS-specific code that is used only within this file.
424 * Other OS-specific code that must be accessed by drivers
425 * is present in netmap_kern.h
428 #if defined(__FreeBSD__)
429 #include <sys/cdefs.h> /* prerequisite */
430 #include <sys/types.h>
431 #include <sys/errno.h>
432 #include <sys/param.h> /* defines used in kernel.h */
433 #include <sys/kernel.h> /* types used in module initialization */
434 #include <sys/conf.h> /* cdevsw struct, UID, GID */
435 #include <sys/filio.h> /* FIONBIO */
436 #include <sys/sockio.h>
437 #include <sys/socketvar.h> /* struct socket */
438 #include <sys/malloc.h>
439 #include <sys/poll.h>
440 #include <sys/rwlock.h>
441 #include <sys/socket.h> /* sockaddrs */
442 #include <sys/selinfo.h>
443 #include <sys/sysctl.h>
444 #include <sys/jail.h>
445 #include <net/vnet.h>
#include <net/if.h>
#include <net/if_var.h>
448 #include <net/bpf.h> /* BIOCIMMEDIATE */
449 #include <machine/bus.h> /* bus_dmamap_* */
450 #include <sys/endian.h>
451 #include <sys/refcount.h>
#elif defined(linux)

#include "bsd_glue.h"
458 #elif defined(__APPLE__)
460 #warning OSX support is only partial
461 #include "osx_glue.h"
463 #elif defined (_WIN32)
465 #include "win_glue.h"
#else

#error	Unsupported platform
471 #endif /* unsupported */
476 #include <net/netmap.h>
477 #include <dev/netmap/netmap_kern.h>
478 #include <dev/netmap/netmap_mem2.h>
481 /* user-controlled variables */
484 static int netmap_no_timestamp; /* don't timestamp on rxsync */
485 int netmap_no_pendintr = 1;
486 int netmap_txsync_retry = 2;
487 static int netmap_fwd = 0; /* force transparent forwarding */
490 * netmap_admode selects the netmap mode to use.
491 * Invalid values are reset to NETMAP_ADMODE_BEST
493 enum { NETMAP_ADMODE_BEST = 0, /* use native, fallback to generic */
494 NETMAP_ADMODE_NATIVE, /* either native or none */
495 NETMAP_ADMODE_GENERIC, /* force generic */
496 NETMAP_ADMODE_LAST };
497 static int netmap_admode = NETMAP_ADMODE_BEST;
499 /* netmap_generic_mit controls mitigation of RX notifications for
 * the generic netmap adapter. The value is a time interval in
 * nanoseconds. */
int netmap_generic_mit = 100*1000;
/* By default we use netmap-aware qdiscs with generic netmap adapters,
 * even if there can be a little performance hit with hardware NICs.
 * However, using the qdisc is the safer approach, for two reasons:
 * 1) it prevents non-fifo qdiscs from breaking the TX notification
 *    scheme, which is based on mbuf destructors when txqdisc is
 *    used.
 * 2) it makes it possible to transmit over software devices that
 *    change skb->dev, like bridge, veth, ...
 *
 * In any case, users looking for the best performance should
 * use native adapters.
 */
517 int netmap_generic_txqdisc = 1;
520 /* Default number of slots and queues for generic adapters. */
521 int netmap_generic_ringsize = 1024;
522 int netmap_generic_rings = 1;
524 /* Non-zero if ptnet devices are allowed to use virtio-net headers. */
525 int ptnet_vnet_hdr = 1;
527 /* 0 if ptnetmap should not use worker threads for TX processing */
528 int ptnetmap_tx_workers = 1;
531 * SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated
532 * in some other operating systems
536 SYSCTL_DECL(_dev_netmap);
537 SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
538 SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
539 CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
540 SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
541 CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
542 SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr, CTLFLAG_RW, &netmap_no_pendintr,
543 0, "Always look for new received packets.");
544 SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
545 &netmap_txsync_retry, 0, "Number of txsync loops in bridge's flush.");
547 SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0,
548 "Force NR_FORWARD mode");
549 SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0,
550 "Adapter mode. 0 selects the best option available,"
551 "1 forces native adapter, 2 forces emulated adapter");
552 SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit,
553 0, "RX notification interval in nanoseconds");
554 SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW,
555 &netmap_generic_ringsize, 0,
556 "Number of per-ring slots for emulated netmap mode");
557 SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW,
558 &netmap_generic_rings, 0,
559 "Number of TX/RX queues for emulated netmap adapters");
561 SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW,
562 &netmap_generic_txqdisc, 0, "Use qdisc for generic adapters");
564 SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr,
565 0, "Allow ptnet devices to use virtio-net headers");
566 SYSCTL_INT(_dev_netmap, OID_AUTO, ptnetmap_tx_workers, CTLFLAG_RW,
567 &ptnetmap_tx_workers, 0, "Use worker threads for pnetmap TX processing");
SYSEND;

NMG_LOCK_T	netmap_global_lock;
574 * mark the ring as stopped, and run through the locks
575 * to make sure other users get to see it.
 * stopped must be either NM_KR_STOPPED (for unbounded stop)
 * or NM_KR_LOCKED (brief stop for mutual exclusion purposes)
580 netmap_disable_ring(struct netmap_kring *kr, int stopped)
582 nm_kr_stop(kr, stopped);
583 // XXX check if nm_kr_stop is sufficient
584 mtx_lock(&kr->q_lock);
585 mtx_unlock(&kr->q_lock);
589 /* stop or enable a single ring */
591 netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
	if (stopped)
		netmap_disable_ring(NMR(na, t)[ring_id], stopped);
	else
		NMR(na, t)[ring_id]->nkr_stopped = 0;
600 /* stop or enable all the rings of na */
602 netmap_set_all_rings(struct netmap_adapter *na, int stopped)
	if (!nm_netmap_on(na))
		return;

	for_rx_tx(t) {
		for (i = 0; i < netmap_real_rings(na, t); i++) {
			netmap_set_ring(na, i, t, stopped);
		}
	}
618 * Convenience function used in drivers. Waits for current txsync()s/rxsync()s
619 * to finish and prevents any new one from starting. Call this before turning
 * netmap mode off, or before removing the hardware rings (e.g., on module
 * unload).
624 netmap_disable_all_rings(struct ifnet *ifp)
626 if (NM_NA_VALID(ifp)) {
627 netmap_set_all_rings(NA(ifp), NM_KR_STOPPED);
632 * Convenience function used in drivers. Re-enables rxsync and txsync on the
 * adapter's rings. In linux drivers, this should be placed near each
 * napi_enable().
637 netmap_enable_all_rings(struct ifnet *ifp)
639 if (NM_NA_VALID(ifp)) {
640 netmap_set_all_rings(NA(ifp), 0 /* enabled */);
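
/*
 * Typical usage of the two helpers above in a driver reinit path
 * (a sketch; foo_reinit() and the softc layout are illustrative):
 *
 *	static void
 *	foo_reinit(struct foo_softc *sc)
 *	{
 *		netmap_disable_all_rings(sc->ifp);
 *		// ... reset and reprogram the hardware rings ...
 *		netmap_enable_all_rings(sc->ifp);
 *	}
 */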
645 netmap_make_zombie(struct ifnet *ifp)
647 if (NM_NA_VALID(ifp)) {
648 struct netmap_adapter *na = NA(ifp);
649 netmap_set_all_rings(na, NM_KR_LOCKED);
650 na->na_flags |= NAF_ZOMBIE;
651 netmap_set_all_rings(na, 0);
656 netmap_undo_zombie(struct ifnet *ifp)
658 if (NM_NA_VALID(ifp)) {
659 struct netmap_adapter *na = NA(ifp);
660 if (na->na_flags & NAF_ZOMBIE) {
661 netmap_set_all_rings(na, NM_KR_LOCKED);
662 na->na_flags &= ~NAF_ZOMBIE;
663 netmap_set_all_rings(na, 0);
669 * generic bound_checking function
672 nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg)
675 const char *op = NULL;
684 } else if (oldv > hi) {
689 nm_prinf("%s %s to %d (was %d)\n", op, msg, *v, oldv);
695 * packet-dump function, user-supplied or static buffer.
696 * The destination buffer must be at least 30+4*len
699 nm_dump_buf(char *p, int len, int lim, char *dst)
701 static char _dst[8192];
703 static char hex[] ="0123456789abcdef";
704 char *o; /* output position */
706 #define P_HI(x) hex[((x) & 0xf0)>>4]
707 #define P_LO(x) hex[((x) & 0xf)]
708 #define P_C(x) ((x) >= 0x20 && (x) <= 0x7e ? (x) : '.')
711 if (lim <= 0 || lim > len)
714 sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim);
716 /* hexdump routine */
717 for (i = 0; i < lim; ) {
718 sprintf(o, "%5d: ", i);
722 for (j=0; j < 16 && i < lim; i++, j++) {
724 o[j*3+1] = P_LO(p[i]);
727 for (j=0; j < 16 && i < lim; i++, j++)
728 o[j + 48] = P_C(p[i]);
741 * Fetch configuration from the device, to cope with dynamic
742 * reconfigurations after loading the module.
744 /* call with NMG_LOCK held */
746 netmap_update_config(struct netmap_adapter *na)
748 struct nm_config_info info;
750 bzero(&info, sizeof(info));
751 if (na->nm_config == NULL ||
752 na->nm_config(na, &info)) {
753 /* take whatever we had at init time */
754 info.num_tx_rings = na->num_tx_rings;
755 info.num_tx_descs = na->num_tx_desc;
756 info.num_rx_rings = na->num_rx_rings;
757 info.num_rx_descs = na->num_rx_desc;
758 info.rx_buf_maxsize = na->rx_buf_maxsize;
761 if (na->num_tx_rings == info.num_tx_rings &&
762 na->num_tx_desc == info.num_tx_descs &&
763 na->num_rx_rings == info.num_rx_rings &&
764 na->num_rx_desc == info.num_rx_descs &&
765 na->rx_buf_maxsize == info.rx_buf_maxsize)
766 return 0; /* nothing changed */
767 if (na->active_fds == 0) {
768 D("configuration changed for %s: txring %d x %d, "
769 "rxring %d x %d, rxbufsz %d",
770 na->name, na->num_tx_rings, na->num_tx_desc,
771 na->num_rx_rings, na->num_rx_desc, na->rx_buf_maxsize);
772 na->num_tx_rings = info.num_tx_rings;
773 na->num_tx_desc = info.num_tx_descs;
774 na->num_rx_rings = info.num_rx_rings;
775 na->num_rx_desc = info.num_rx_descs;
776 na->rx_buf_maxsize = info.rx_buf_maxsize;
779 D("WARNING: configuration changed for %s while active: "
780 "txring %d x %d, rxring %d x %d, rxbufsz %d",
781 na->name, info.num_tx_rings, info.num_tx_descs,
782 info.num_rx_rings, info.num_rx_descs,
783 info.rx_buf_maxsize);
787 /* nm_sync callbacks for the host rings */
788 static int netmap_txsync_to_host(struct netmap_kring *kring, int flags);
789 static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags);
791 /* create the krings array and initialize the fields common to all adapters.
792 * The array layout is this:
 *                    +----------+
 * na->tx_rings ----->|          | \
 *                    |          |  } na->num_tx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host tx kring
 * na->rx_rings ----> +----------+
 *                    |          | \
 *                    |          |  } na->num_rx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host rx kring
 *                    +----------+
 * na->tailroom ----->|          | \
 *                    |          |  } tailroom bytes
 *                    |          | /
 *                    +----------+
812 * Note: for compatibility, host krings are created even when not needed.
813 * The tailroom space is currently used by vale ports for allocating leases.
815 /* call with NMG_LOCK held */
817 netmap_krings_create(struct netmap_adapter *na, u_int tailroom)
820 struct netmap_kring *kring;
824 if (na->tx_rings != NULL) {
825 D("warning: krings were already created");
829 /* account for the (possibly fake) host rings */
830 n[NR_TX] = na->num_tx_rings + 1;
831 n[NR_RX] = na->num_rx_rings + 1;
833 len = (n[NR_TX] + n[NR_RX]) *
		(sizeof(struct netmap_kring) + sizeof(struct netmap_kring *))
		+ tailroom;
837 na->tx_rings = nm_os_malloc((size_t)len);
838 if (na->tx_rings == NULL) {
839 D("Cannot allocate krings");
842 na->rx_rings = na->tx_rings + n[NR_TX];
843 na->tailroom = na->rx_rings + n[NR_RX];
845 /* link the krings in the krings array */
846 kring = (struct netmap_kring *)((char *)na->tailroom + tailroom);
847 for (i = 0; i < n[NR_TX] + n[NR_RX]; i++) {
		na->tx_rings[i] = kring;
		kring++;
	}
	 * All fields in krings are 0 except the ones initialized below,
	 * but it is better to be explicit on important kring fields.
	for_rx_tx(t) {
		ndesc = nma_get_ndesc(na, t);
858 for (i = 0; i < n[t]; i++) {
859 kring = NMR(na, t)[i];
860 bzero(kring, sizeof(*kring));
			kring->na = na;
			kring->notify_na = na;
			kring->ring_id = i;
			kring->tx = t;
			kring->nkr_num_slots = ndesc;
866 kring->nr_mode = NKR_NETMAP_OFF;
867 kring->nr_pending_mode = NKR_NETMAP_OFF;
868 if (i < nma_get_nrings(na, t)) {
869 kring->nm_sync = (t == NR_TX ? na->nm_txsync : na->nm_rxsync);
			} else {
				if (!(na->na_flags & NAF_HOST_RINGS))
872 kring->nr_kflags |= NKR_FAKERING;
873 kring->nm_sync = (t == NR_TX ?
874 netmap_txsync_to_host:
875 netmap_rxsync_from_host);
			}
			kring->nm_notify = na->nm_notify;
878 kring->rhead = kring->rcur = kring->nr_hwcur = 0;
880 * IMPORTANT: Always keep one slot empty.
882 kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0);
			snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name,
				nm_txrx2str(t), i);
885 ND("ktx %s h %d c %d t %d",
886 kring->name, kring->rhead, kring->rcur, kring->rtail);
887 mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF);
888 nm_os_selinfo_init(&kring->si);
890 nm_os_selinfo_init(&na->si[t]);
898 /* undo the actions performed by netmap_krings_create */
899 /* call with NMG_LOCK held */
901 netmap_krings_delete(struct netmap_adapter *na)
903 struct netmap_kring **kring = na->tx_rings;
906 if (na->tx_rings == NULL) {
907 D("warning: krings were already deleted");
912 nm_os_selinfo_uninit(&na->si[t]);
914 /* we rely on the krings layout described above */
915 for ( ; kring != na->tailroom; kring++) {
916 mtx_destroy(&(*kring)->q_lock);
917 nm_os_selinfo_uninit(&(*kring)->si);
	}
	nm_os_free(na->tx_rings);
920 na->tx_rings = na->rx_rings = na->tailroom = NULL;
925 * Destructor for NIC ports. They also have an mbuf queue
 * on the rings connected to the host so we need to purge
 * them first.
 */
929 /* call with NMG_LOCK held */
931 netmap_hw_krings_delete(struct netmap_adapter *na)
933 struct mbq *q = &na->rx_rings[na->num_rx_rings]->rx_queue;
935 ND("destroy sw mbq with len %d", mbq_len(q));
938 netmap_krings_delete(na);
942 netmap_mem_drop(struct netmap_adapter *na)
944 int last = netmap_mem_deref(na->nm_mem, na);
	/* if the native allocator had been overridden on regif,
	 * restore it now and drop the temporary one
	 */
948 if (last && na->nm_mem_prev) {
949 netmap_mem_put(na->nm_mem);
950 na->nm_mem = na->nm_mem_prev;
951 na->nm_mem_prev = NULL;
956 * Undo everything that was done in netmap_do_regif(). In particular,
957 * call nm_register(ifp,0) to stop netmap mode on the interface and
958 * revert to normal operation.
960 /* call with NMG_LOCK held */
961 static void netmap_unset_ringid(struct netmap_priv_d *);
962 static void netmap_krings_put(struct netmap_priv_d *);
964 netmap_do_unregif(struct netmap_priv_d *priv)
966 struct netmap_adapter *na = priv->np_na;
970 /* unset nr_pending_mode and possibly release exclusive mode */
971 netmap_krings_put(priv);
974 /* XXX check whether we have to do something with monitor
975 * when rings change nr_mode. */
976 if (na->active_fds <= 0) {
977 /* walk through all the rings and tell any monitor
978 * that the port is going to exit netmap mode
980 netmap_monitor_stop(na);
984 if (na->active_fds <= 0 || nm_kring_pending(priv)) {
985 na->nm_register(na, 0);
988 /* delete rings and buffers that are no longer needed */
989 netmap_mem_rings_delete(na);
991 if (na->active_fds <= 0) { /* last instance */
993 * (TO CHECK) We enter here
994 * when the last reference to this file descriptor goes
995 * away. This means we cannot have any pending poll()
996 * or interrupt routine operating on the structure.
997 * XXX The file may be closed in a thread while
998 * another thread is using it.
999 * Linux keeps the file opened until the last reference
1000 * by any outstanding ioctl/poll or mmap is gone.
1001 * FreeBSD does not track mmap()s (but we do) and
1002 * wakes up any sleeping poll(). Need to check what
1003 * happens if the close() occurs while a concurrent
1004 * syscall is running.
1007 D("deleting last instance for %s", na->name);
1009 if (nm_netmap_on(na)) {
1010 D("BUG: netmap on while going to delete the krings");
1013 na->nm_krings_delete(na);
	/* possibly decrement counter of tx_si/rx_si users */
1017 netmap_unset_ringid(priv);
1018 /* delete the nifp */
1019 netmap_mem_if_delete(na, priv->np_nifp);
1020 /* drop the allocator */
1021 netmap_mem_drop(na);
1022 /* mark the priv as unregistered */
1024 priv->np_nifp = NULL;
1027 /* call with NMG_LOCK held */
1029 nm_si_user(struct netmap_priv_d *priv, enum txrx t)
1031 return (priv->np_na != NULL &&
1032 (priv->np_qlast[t] - priv->np_qfirst[t] > 1));
1035 struct netmap_priv_d*
1036 netmap_priv_new(void)
1038 struct netmap_priv_d *priv;
1040 priv = nm_os_malloc(sizeof(struct netmap_priv_d));
1049 * Destructor of the netmap_priv_d, called when the fd is closed
1050 * Action: undo all the things done by NIOCREGIF,
1051 * On FreeBSD we need to track whether there are active mmap()s,
1052 * and we use np_active_mmaps for that. On linux, the field is always 0.
1053 * Return: 1 if we can free priv, 0 otherwise.
1056 /* call with NMG_LOCK held */
1058 netmap_priv_delete(struct netmap_priv_d *priv)
1060 struct netmap_adapter *na = priv->np_na;
1062 /* number of active references to this fd */
1063 if (--priv->np_refs > 0) {
1068 netmap_do_unregif(priv);
1070 netmap_unget_na(na, priv->np_ifp);
1071 bzero(priv, sizeof(*priv)); /* for safety */
1076 /* call with NMG_LOCK *not* held */
1078 netmap_dtor(void *data)
1080 struct netmap_priv_d *priv = data;
1083 netmap_priv_delete(priv);
1089 * Handlers for synchronization of the rings from/to the host stack.
1090 * These are associated to a network interface and are just another
1091 * ring pair managed by userspace.
 * Netmap also supports transparent forwarding (NS_FORWARD and NR_FORWARD
 * flags):
1096 * - Before releasing buffers on hw RX rings, the application can mark
1097 * them with the NS_FORWARD flag. During the next RXSYNC or poll(), they
 * will be forwarded to the host stack, as if the application had
 * moved them to the host TX ring.
1101 * - Before releasing buffers on the host RX ring, the application can
1102 * mark them with the NS_FORWARD flag. During the next RXSYNC or poll(),
1103 * they will be forwarded to the hw TX rings, saving the application
1104 * from doing the same task in user-space.
 * Transparent forwarding can be enabled per-ring, by setting the NR_FORWARD
1107 * flag, or globally with the netmap_fwd sysctl.
1109 * The transfer NIC --> host is relatively easy, just encapsulate
1110 * into mbufs and we are done. The host --> NIC side is slightly
1111 * harder because there might not be room in the tx ring so it
1112 * might take a while before releasing the buffer.
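 *
 * From userspace, a buffer on an RX ring is marked for forwarding by
 * setting its flag before releasing it (a sketch; this requires
 * NR_FORWARD in ring->flags, or the netmap_fwd sysctl):
 *
 *	slot->flags |= NS_FORWARD;
 *	ring->head = ring->cur = nm_ring_next(ring, ring->cur);
 */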
1117 * Pass a whole queue of mbufs to the host stack as coming from 'dst'
1118 * We do not need to lock because the queue is private.
1119 * After this call the queue is empty.
1122 netmap_send_up(struct ifnet *dst, struct mbq *q)
1125 struct mbuf *head = NULL, *prev = NULL;
1127 /* Send packets up, outside the lock; head/prev machinery
1128 * is only useful for Windows. */
1129 while ((m = mbq_dequeue(q)) != NULL) {
1130 if (netmap_verbose & NM_VERB_HOST)
1131 D("sending up pkt %p size %d", m, MBUF_LEN(m));
1132 prev = nm_os_send_up(dst, m, prev);
1137 nm_os_send_up(dst, NULL, head);
1143 * Scan the buffers from hwcur to ring->head, and put a copy of those
1144 * marked NS_FORWARD (or all of them if forced) into a queue of mbufs.
1145 * Drop remaining packets in the unlikely event
1146 * of an mbuf shortage.
1149 netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
1151 u_int const lim = kring->nkr_num_slots - 1;
1152 u_int const head = kring->rhead;
1154 struct netmap_adapter *na = kring->na;
1156 for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) {
1158 struct netmap_slot *slot = &kring->ring->slot[n];
1160 if ((slot->flags & NS_FORWARD) == 0 && !force)
1162 if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) {
1163 RD(5, "bad pkt at %d len %d", n, slot->len);
1166 slot->flags &= ~NS_FORWARD; // XXX needed ?
1167 /* XXX TODO: adapt to the case of a multisegment packet */
1168 m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL);
1177 _nm_may_forward(struct netmap_kring *kring)
1179 return ((netmap_fwd || kring->ring->flags & NR_FORWARD) &&
1180 kring->na->na_flags & NAF_HOST_RINGS &&
1181 kring->tx == NR_RX);
1185 nm_may_forward_up(struct netmap_kring *kring)
1187 return _nm_may_forward(kring) &&
1188 kring->ring_id != kring->na->num_rx_rings;
1192 nm_may_forward_down(struct netmap_kring *kring, int sync_flags)
1194 return _nm_may_forward(kring) &&
1195 (sync_flags & NAF_CAN_FORWARD_DOWN) &&
1196 kring->ring_id == kring->na->num_rx_rings;
1200 * Send to the NIC rings packets marked NS_FORWARD between
1201 * kring->nr_hwcur and kring->rhead.
1202 * Called under kring->rx_queue.lock on the sw rx ring.
1204 * It can only be called if the user opened all the TX hw rings,
1205 * see NAF_CAN_FORWARD_DOWN flag.
1206 * We can touch the TX netmap rings (slots, head and cur) since
1207 * we are in poll/ioctl system call context, and the application
1208 * is not supposed to touch the ring (using a different thread)
1209 * during the execution of the system call.
1212 netmap_sw_to_nic(struct netmap_adapter *na)
1214 struct netmap_kring *kring = na->rx_rings[na->num_rx_rings];
1215 struct netmap_slot *rxslot = kring->ring->slot;
1216 u_int i, rxcur = kring->nr_hwcur;
1217 u_int const head = kring->rhead;
1218 u_int const src_lim = kring->nkr_num_slots - 1;
1221 /* scan rings to find space, then fill as much as possible */
1222 for (i = 0; i < na->num_tx_rings; i++) {
1223 struct netmap_kring *kdst = na->tx_rings[i];
1224 struct netmap_ring *rdst = kdst->ring;
1225 u_int const dst_lim = kdst->nkr_num_slots - 1;
1227 /* XXX do we trust ring or kring->rcur,rtail ? */
1228 for (; rxcur != head && !nm_ring_empty(rdst);
1229 rxcur = nm_next(rxcur, src_lim) ) {
1230 struct netmap_slot *src, *dst, tmp;
1231 u_int dst_head = rdst->head;
1233 src = &rxslot[rxcur];
1234 if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd)
			dst = &rdst->slot[dst_head];
			tmp = *src;
1243 src->buf_idx = dst->buf_idx;
1244 src->flags = NS_BUF_CHANGED;
1246 dst->buf_idx = tmp.buf_idx;
			dst->len = tmp.len;
			dst->flags = NS_BUF_CHANGED;
1250 rdst->head = rdst->cur = nm_next(dst_head, dst_lim);
1252 /* if (sent) XXX txsync ? it would be just an optimization */
1259 * netmap_txsync_to_host() passes packets up. We are called from a
1260 * system call in user process context, and the only contention
1261 * can be among multiple user threads erroneously calling
1262 * this routine concurrently.
1265 netmap_txsync_to_host(struct netmap_kring *kring, int flags)
1267 struct netmap_adapter *na = kring->na;
1268 u_int const lim = kring->nkr_num_slots - 1;
1269 u_int const head = kring->rhead;
1272 /* Take packets from hwcur to head and pass them up.
1273 * Force hwcur = head since netmap_grab_packets() stops at head
1276 netmap_grab_packets(kring, &q, 1 /* force */);
1277 ND("have %d pkts in queue", mbq_len(&q));
1278 kring->nr_hwcur = head;
1279 kring->nr_hwtail = head + lim;
1280 if (kring->nr_hwtail > lim)
1281 kring->nr_hwtail -= lim + 1;
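	/* i.e. hwtail = (head + lim) % nkr_num_slots: the new tail sits one
	 * slot behind head, so all slots (minus the one that must always
	 * stay empty) are again available on the host TX ring. */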
1283 netmap_send_up(na->ifp, &q);
1289 * rxsync backend for packets coming from the host stack.
1290 * They have been put in kring->rx_queue by netmap_transmit().
 * We protect access to the kring using kring->rx_queue.lock.
 *
 * This routine also moves to the nic hw rings any packet the user has
 * marked for transparent-mode forwarding, then sets the NR_FORWARD
 * flag in the kring to let the caller push them out.
1298 netmap_rxsync_from_host(struct netmap_kring *kring, int flags)
1300 struct netmap_adapter *na = kring->na;
1301 struct netmap_ring *ring = kring->ring;
1303 u_int const lim = kring->nkr_num_slots - 1;
1304 u_int const head = kring->rhead;
1306 struct mbq *q = &kring->rx_queue, fq;
1308 mbq_init(&fq); /* fq holds packets to be freed */
1312 /* First part: import newly received packets */
1314 if (n) { /* grab packets from the queue */
1318 nm_i = kring->nr_hwtail;
1319 stop_i = nm_prev(kring->nr_hwcur, lim);
1320 while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) {
1321 int len = MBUF_LEN(m);
1322 struct netmap_slot *slot = &ring->slot[nm_i];
1324 m_copydata(m, 0, len, NMB(na, slot));
1325 ND("nm %d len %d", nm_i, len);
1327 D("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL));
1331 nm_i = nm_next(nm_i, lim);
1332 mbq_enqueue(&fq, m);
1334 kring->nr_hwtail = nm_i;
1338 * Second part: skip past packets that userspace has released.
1340 nm_i = kring->nr_hwcur;
1341 if (nm_i != head) { /* something was released */
1342 if (nm_may_forward_down(kring, flags)) {
1343 ret = netmap_sw_to_nic(na);
			if (ret > 0) {
				kring->nr_kflags |= NR_FORWARD;
				ret = 0;
			}
1349 kring->nr_hwcur = head;
1361 /* Get a netmap adapter for the port.
1363 * If it is possible to satisfy the request, return 0
1364 * with *na containing the netmap adapter found.
1365 * Otherwise return an error code, with *na containing NULL.
 * When the port is attached to a bridge, we always return
 * EBUSY.
1369 * Otherwise, if the port is already bound to a file descriptor,
1370 * then we unconditionally return the existing adapter into *na.
1371 * In all the other cases, we return (into *na) either native,
1372 * generic or NULL, according to the following table:
 *                                        native-support
 * active_fds   dev.netmap.admode         YES     NO
1376 * -------------------------------------------------------
1377 * >0 * NA(ifp) NA(ifp)
1379 * 0 NETMAP_ADMODE_BEST NATIVE GENERIC
1380 * 0 NETMAP_ADMODE_NATIVE NATIVE NULL
1381 * 0 NETMAP_ADMODE_GENERIC GENERIC GENERIC
1384 static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */
1386 netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na)
1388 /* generic support */
1389 int i = netmap_admode; /* Take a snapshot. */
1390 struct netmap_adapter *prev_na;
1393 *na = NULL; /* default */
1395 /* reset in case of invalid value */
1396 if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST)
1397 i = netmap_admode = NETMAP_ADMODE_BEST;
1399 if (NM_NA_VALID(ifp)) {
1401 /* If an adapter already exists, return it if
1402 * there are active file descriptors or if
1403 * netmap is not forced to use generic
1406 if (NETMAP_OWNED_BY_ANY(prev_na)
1407 || i != NETMAP_ADMODE_GENERIC
1408 || prev_na->na_flags & NAF_FORCE_NATIVE
1410 /* ugly, but we cannot allow an adapter switch
1411 * if some pipe is referring to this one
1413 || prev_na->na_next_pipe > 0
1421 /* If there isn't native support and netmap is not allowed
1422 * to use generic adapters, we cannot satisfy the request.
	if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE)
		return EOPNOTSUPP;
1427 /* Otherwise, create a generic adapter and return it,
1428 * saving the previously used netmap adapter, if any.
1430 * Note that here 'prev_na', if not NULL, MUST be a
1431 * native adapter, and CANNOT be a generic one. This is
1432 * true because generic adapters are created on demand, and
1433 * destroyed when not used anymore. Therefore, if the adapter
1434 * currently attached to an interface 'ifp' is generic, it
1436 * (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))).
1437 * Consequently, if NA(ifp) is generic, we will enter one of
1438 * the branches above. This ensures that we never override
1439 * a generic adapter with another generic adapter.
1441 error = generic_netmap_attach(ifp);
1448 if (nmd != NULL && !((*na)->na_flags & NAF_MEM_OWNER) &&
1449 (*na)->active_fds == 0 && ((*na)->nm_mem != nmd)) {
1450 (*na)->nm_mem_prev = (*na)->nm_mem;
1451 (*na)->nm_mem = netmap_mem_get(nmd);
1458 * MUST BE CALLED UNDER NMG_LOCK()
1460 * Get a refcounted reference to a netmap adapter attached
1461 * to the interface specified by req.
1462 * This is always called in the execution of an ioctl().
1464 * Return ENXIO if the interface specified by the request does
1465 * not exist, ENOTSUP if netmap is not supported by the interface,
1466 * EBUSY if the interface is already attached to a bridge,
1467 * EINVAL if parameters are invalid, ENOMEM if needed resources
1468 * could not be allocated.
1469 * If successful, hold a reference to the netmap adapter.
1471 * If the interface specified by req is a system one, also keep
1472 * a reference to it and return a valid *ifp.
1475 netmap_get_na(struct nmreq_header *hdr,
1476 struct netmap_adapter **na, struct ifnet **ifp,
1477 struct netmap_mem_d *nmd, int create)
1479 struct nmreq_register *req = (struct nmreq_register *)hdr->nr_body;
1481 struct netmap_adapter *ret = NULL;
1484 *na = NULL; /* default return value */
1487 if (hdr->nr_reqtype != NETMAP_REQ_REGISTER) {
1491 if (req->nr_mode == NR_REG_PIPE_MASTER ||
1492 req->nr_mode == NR_REG_PIPE_SLAVE) {
1493 /* Do not accept deprecated pipe modes. */
1494 D("Deprecated pipe nr_mode, use xx{yy or xx}yy syntax");
1500 /* if the request contain a memid, try to find the
1501 * corresponding memory region
1503 if (nmd == NULL && req->nr_mem_id) {
1504 nmd = netmap_mem_find(req->nr_mem_id);
		/* keep the reference */
1511 /* We cascade through all possible types of netmap adapter.
1512 * All netmap_get_*_na() functions return an error and an na,
1513 * with the following combinations:
 *	error    na
 *	   0	 NULL		type doesn't match
1517 * !0 NULL type matches, but na creation/lookup failed
1518 * 0 !NULL type matches and na created/found
1519 * !0 !NULL impossible
1522 /* try to see if this is a ptnetmap port */
1523 error = netmap_get_pt_host_na(hdr, na, nmd, create);
1524 if (error || *na != NULL)
1527 /* try to see if this is a monitor port */
1528 error = netmap_get_monitor_na(hdr, na, nmd, create);
1529 if (error || *na != NULL)
1532 /* try to see if this is a pipe port */
1533 error = netmap_get_pipe_na(hdr, na, nmd, create);
1534 if (error || *na != NULL)
1537 /* try to see if this is a bridge port */
1538 error = netmap_get_bdg_na(hdr, na, nmd, create);
1542 if (*na != NULL) /* valid match in netmap_get_bdg_na() */
1546 * This must be a hardware na, lookup the name in the system.
1547 * Note that by hardware we actually mean "it shows up in ifconfig".
1548 * This may still be a tap, a veth/epair, or even a
1549 * persistent VALE port.
1551 *ifp = ifunit_ref(hdr->nr_name);
1557 error = netmap_get_hw_na(*ifp, nmd, &ret);
1562 netmap_adapter_get(ret);
1567 netmap_adapter_put(ret);
1574 netmap_mem_put(nmd);
1579 /* undo netmap_get_na() */
1581 netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp)
1586 netmap_adapter_put(na);
1590 #define NM_FAIL_ON(t) do { \
1591 if (unlikely(t)) { \
		RD(5, "%s: fail '" #t "' "		\
			"h %d c %d t %d "		\
			"rh %d rc %d rt %d "		\
			"hc %d ht %d",			\
			kring->name,			\
			head, cur, ring->tail,		\
			kring->rhead, kring->rcur, kring->rtail, \
			kring->nr_hwcur, kring->nr_hwtail); \
1600 return kring->nkr_num_slots; \
1605 * validate parameters on entry for *_txsync()
 * Returns ring->cur if ok, or something >= kring->nkr_num_slots
 * in case of error.
1609 * rhead, rcur and rtail=hwtail are stored from previous round.
1610 * hwcur is the next packet to send to the ring.
1613 * hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail
1615 * hwcur, rhead, rtail and hwtail are reliable
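 *
 * The core calls it before invoking kring->nm_sync(), roughly as
 * follows (a sketch of the calling convention, not verbatim):
 *
 *	if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
 *		netmap_ring_reinit(kring);
 *		return EIO;
 *	}
 */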
1618 nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
1620 u_int head = ring->head; /* read only once */
1621 u_int cur = ring->cur; /* read only once */
1622 u_int n = kring->nkr_num_slots;
1624 ND(5, "%s kcur %d ktail %d head %d cur %d tail %d",
1626 kring->nr_hwcur, kring->nr_hwtail,
1627 ring->head, ring->cur, ring->tail);
1628 #if 1 /* kernel sanity checks; but we can trust the kring. */
1629 NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n ||
1630 kring->rtail >= n || kring->nr_hwtail >= n);
1631 #endif /* kernel sanity checks */
1633 * user sanity checks. We only use head,
1634 * A, B, ... are possible positions for head:
1636 * 0 A rhead B rtail C n-1
1637 * 0 D rtail E rhead F n-1
1639 * B, F, D are valid. A, C, E are wrong
1641 if (kring->rtail >= kring->rhead) {
1642 /* want rhead <= head <= rtail */
1643 NM_FAIL_ON(head < kring->rhead || head > kring->rtail);
1644 /* and also head <= cur <= rtail */
1645 NM_FAIL_ON(cur < head || cur > kring->rtail);
1646 } else { /* here rtail < rhead */
1647 /* we need head outside rtail .. rhead */
1648 NM_FAIL_ON(head > kring->rtail && head < kring->rhead);
1650 /* two cases now: head <= rtail or head >= rhead */
1651 if (head <= kring->rtail) {
1652 /* want head <= cur <= rtail */
1653 NM_FAIL_ON(cur < head || cur > kring->rtail);
1654 } else { /* head >= rhead */
1655 /* cur must be outside rtail..head */
1656 NM_FAIL_ON(cur > kring->rtail && cur < head);
1659 if (ring->tail != kring->rtail) {
1660 RD(5, "%s tail overwritten was %d need %d", kring->name,
1661 ring->tail, kring->rtail);
1662 ring->tail = kring->rtail;
1664 kring->rhead = head;
1671 * validate parameters on entry for *_rxsync()
1672 * Returns ring->head if ok, kring->nkr_num_slots on error.
1674 * For a valid configuration,
1675 * hwcur <= head <= cur <= tail <= hwtail
1677 * We only consider head and cur.
1678 * hwcur and hwtail are reliable.
1682 nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
1684 uint32_t const n = kring->nkr_num_slots;
1687 ND(5,"%s kc %d kt %d h %d c %d t %d",
1689 kring->nr_hwcur, kring->nr_hwtail,
1690 ring->head, ring->cur, ring->tail);
1692 * Before storing the new values, we should check they do not
1693 * move backwards. However:
1694 * - head is not an issue because the previous value is hwcur;
1695 * - cur could in principle go back, however it does not matter
1696 * because we are processing a brand new rxsync()
1698 cur = kring->rcur = ring->cur; /* read only once */
1699 head = kring->rhead = ring->head; /* read only once */
1700 #if 1 /* kernel sanity checks */
1701 NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n);
1702 #endif /* kernel sanity checks */
1703 /* user sanity checks */
1704 if (kring->nr_hwtail >= kring->nr_hwcur) {
1705 /* want hwcur <= rhead <= hwtail */
1706 NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail);
1707 /* and also rhead <= rcur <= hwtail */
1708 NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
1710 /* we need rhead outside hwtail..hwcur */
1711 NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail);
1712 /* two cases now: head <= hwtail or head >= hwcur */
1713 if (head <= kring->nr_hwtail) {
1714 /* want head <= cur <= hwtail */
1715 NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
1717 /* cur must be outside hwtail..head */
1718 NM_FAIL_ON(cur < head && cur > kring->nr_hwtail);
1721 if (ring->tail != kring->rtail) {
1722 RD(5, "%s tail overwritten was %d need %d",
1724 ring->tail, kring->rtail);
1725 ring->tail = kring->rtail;
1732 * Error routine called when txsync/rxsync detects an error.
 * Can't do much more than resetting head = cur = hwcur, tail = hwtail
1734 * Return 1 on reinit.
1736 * This routine is only called by the upper half of the kernel.
1737 * It only reads hwcur (which is changed only by the upper half, too)
1738 * and hwtail (which may be changed by the lower half, but only on
1739 * a tx ring and only to increase it, so any error will be recovered
 * on the next call). For the above, we don't strictly need to call
 * it under lock.
1744 netmap_ring_reinit(struct netmap_kring *kring)
1746 struct netmap_ring *ring = kring->ring;
1747 u_int i, lim = kring->nkr_num_slots - 1;
1750 // XXX KASSERT nm_kr_tryget
1751 RD(10, "called for %s", kring->name);
1752 // XXX probably wrong to trust userspace
1753 kring->rhead = ring->head;
1754 kring->rcur = ring->cur;
1755 kring->rtail = ring->tail;
	if (ring->cur > lim)
		errors++;
	if (ring->head > lim)
		errors++;
	if (ring->tail > lim)
		errors++;
1763 for (i = 0; i <= lim; i++) {
1764 u_int idx = ring->slot[i].buf_idx;
1765 u_int len = ring->slot[i].len;
1766 if (idx < 2 || idx >= kring->na->na_lut.objtotal) {
1767 RD(5, "bad index at slot %d idx %d len %d ", i, idx, len);
1768 ring->slot[i].buf_idx = 0;
1769 ring->slot[i].len = 0;
1770 } else if (len > NETMAP_BUF_SIZE(kring->na)) {
1771 ring->slot[i].len = 0;
1772 RD(5, "bad len at slot %d idx %d len %d", i, idx, len);
1776 RD(10, "total %d errors", errors);
1777 RD(10, "%s reinit, cur %d -> %d tail %d -> %d",
1779 ring->cur, kring->nr_hwcur,
1780 ring->tail, kring->nr_hwtail);
1781 ring->head = kring->rhead = kring->nr_hwcur;
1782 ring->cur = kring->rcur = kring->nr_hwcur;
1783 ring->tail = kring->rtail = kring->nr_hwtail;
1785 return (errors ? 1 : 0);
1788 /* interpret the ringid and flags fields of an nmreq, by translating them
1789 * into a pair of intervals of ring indices:
1791 * [priv->np_txqfirst, priv->np_txqlast) and
1792 * [priv->np_rxqfirst, priv->np_rxqlast)
1796 netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
1797 uint16_t nr_ringid, uint64_t nr_flags)
1799 struct netmap_adapter *na = priv->np_na;
1800 int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY };
1804 if ((nr_flags & NR_PTNETMAP_HOST) && ((nr_mode != NR_REG_ALL_NIC) ||
1805 nr_flags & (NR_RX_RINGS_ONLY|NR_TX_RINGS_ONLY))) {
1806 D("Error: only NR_REG_ALL_NIC supported with netmap passthrough");
	for_rx_tx(t) {
		if (nr_flags & excluded_direction[t]) {
1812 priv->np_qfirst[t] = priv->np_qlast[t] = 0;
			continue;
		}
		switch (nr_mode) {
		case NR_REG_ALL_NIC:
1817 priv->np_qfirst[t] = 0;
1818 priv->np_qlast[t] = nma_get_nrings(na, t);
1819 ND("ALL/PIPE: %s %d %d", nm_txrx2str(t),
1820 priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_SW:
		case NR_REG_NIC_SW:
			if (!(na->na_flags & NAF_HOST_RINGS)) {
1825 D("host rings not supported");
1828 priv->np_qfirst[t] = (nr_mode == NR_REG_SW ?
1829 nma_get_nrings(na, t) : 0);
1830 priv->np_qlast[t] = nma_get_nrings(na, t) + 1;
1831 ND("%s: %s %d %d", nr_mode == NR_REG_SW ? "SW" : "NIC+SW",
1833 priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_ONE_NIC:
1836 if (nr_ringid >= na->num_tx_rings &&
1837 nr_ringid >= na->num_rx_rings) {
1838 D("invalid ring id %d", nr_ringid);
1841 /* if not enough rings, use the first one */
			j = nr_ringid;
			if (j >= nma_get_nrings(na, t))
				j = 0;
1845 priv->np_qfirst[t] = j;
1846 priv->np_qlast[t] = j + 1;
1847 ND("ONE_NIC: %s %d %d", nm_txrx2str(t),
1848 priv->np_qfirst[t], priv->np_qlast[t]);
1851 D("invalid regif type %d", nr_mode);
1855 priv->np_flags = nr_flags | nr_mode; // TODO
1857 /* Allow transparent forwarding mode in the host --> nic
1858 * direction only if all the TX hw rings have been opened. */
1859 if (priv->np_qfirst[NR_TX] == 0 &&
1860 priv->np_qlast[NR_TX] >= na->num_tx_rings) {
1861 priv->np_sync_flags |= NAF_CAN_FORWARD_DOWN;
1864 if (netmap_verbose) {
1865 D("%s: tx [%d,%d) rx [%d,%d) id %d",
			na->name,
			priv->np_qfirst[NR_TX],
1868 priv->np_qlast[NR_TX],
1869 priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX],
			nr_ringid);
	}

	return 0;
}

/*
1878 * Set the ring ID. For devices with a single queue, a request
1879 * for all rings is the same as a single ring.
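 *
 * For example, to bind only hw ring 2 and suppress txsync on poll(),
 * userspace would request (a sketch, legacy nmreq encoding; the core
 * translates it into the nr_mode/nr_ringid/nr_flags triple seen here):
 *
 *	req.nr_flags = NR_REG_ONE_NIC;
 *	req.nr_ringid = 2 | NETMAP_NO_TX_POLL;
 */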
1882 netmap_set_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
1883 uint16_t nr_ringid, uint64_t nr_flags)
1885 struct netmap_adapter *na = priv->np_na;
1889 error = netmap_interp_ringid(priv, nr_mode, nr_ringid, nr_flags);
1894 priv->np_txpoll = (nr_flags & NR_NO_TX_POLL) ? 0 : 1;
1896 /* optimization: count the users registered for more than
1897 * one ring, which are the ones sleeping on the global queue.
1898 * The default netmap_notify() callback will then
1899 * avoid signaling the global queue if nobody is using it
		if (nm_si_user(priv, t))
			na->si_users[t]++;
1909 netmap_unset_ringid(struct netmap_priv_d *priv)
1911 struct netmap_adapter *na = priv->np_na;
		if (nm_si_user(priv, t))
			na->si_users[t]--;
1917 priv->np_qfirst[t] = priv->np_qlast[t] = 0;
1920 priv->np_txpoll = 0;
1924 /* Set the nr_pending_mode for the requested rings.
1925 * If requested, also try to get exclusive access to the rings, provided
1926 * the rings we want to bind are not exclusively owned by a previous bind.
1929 netmap_krings_get(struct netmap_priv_d *priv)
1931 struct netmap_adapter *na = priv->np_na;
1933 struct netmap_kring *kring;
1934 int excl = (priv->np_flags & NR_EXCLUSIVE);
1938 D("%s: grabbing tx [%d, %d) rx [%d, %d)",
1940 priv->np_qfirst[NR_TX],
1941 priv->np_qlast[NR_TX],
1942 priv->np_qfirst[NR_RX],
1943 priv->np_qlast[NR_RX]);
1945 /* first round: check that all the requested rings
	 * are neither already exclusively owned, nor do we
	 * want exclusive ownership when they are already in use
1950 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
1951 kring = NMR(na, t)[i];
1952 if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
1953 (kring->users && excl))
1955 ND("ring %s busy", kring->name);
1961 /* second round: increment usage count (possibly marking them
1962 * as exclusive) and set the nr_pending_mode
1965 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
1966 kring = NMR(na, t)[i];
			kring->users++;
			if (excl)
				kring->nr_kflags |= NKR_EXCLUSIVE;
1970 kring->nr_pending_mode = NKR_NETMAP_ON;
1978 /* Undo netmap_krings_get(). This is done by clearing the exclusive mode
 * if it was asked on regif, and unset the nr_pending_mode if we are the
1980 * last users of the involved rings. */
1982 netmap_krings_put(struct netmap_priv_d *priv)
1984 struct netmap_adapter *na = priv->np_na;
1986 struct netmap_kring *kring;
1987 int excl = (priv->np_flags & NR_EXCLUSIVE);
1990 ND("%s: releasing tx [%d, %d) rx [%d, %d)",
			na->name,
			priv->np_qfirst[NR_TX],
1993 priv->np_qlast[NR_TX],
1994 priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX]);
1998 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
1999 kring = NMR(na, t)[i];
			if (excl)
				kring->nr_kflags &= ~NKR_EXCLUSIVE;
			kring->users--;
2003 if (kring->users == 0)
2004 kring->nr_pending_mode = NKR_NETMAP_OFF;
2010 nm_priv_rx_enabled(struct netmap_priv_d *priv)
2012 return (priv->np_qfirst[NR_RX] != priv->np_qlast[NR_RX]);
2016 * possibly move the interface to netmap mode.
2017 * On success it returns a pointer to the netmap_if, otherwise NULL (a userspace sketch follows the function).
2018 * This must be called with NMG_LOCK held.
2020 * The following na callbacks are called in the process:
2022 * na->nm_config() [by netmap_update_config]
2023 * (get current number and size of rings)
2025 * We have a generic one for linux (netmap_linux_config).
2026 * The bwrap has to override this, since it has to forward
2027 * the request to the wrapped adapter (netmap_bwrap_config).
2030 * na->nm_krings_create()
2031 * (create and init the krings array)
2033 * One of the following:
2035 * * netmap_hw_krings_create, (hw ports)
2036 * creates the standard layout for the krings
2037 * and adds the mbq (used for the host rings).
2039 * * netmap_vp_krings_create (VALE ports)
2040 * add leases and scratchpads
2042 * * netmap_pipe_krings_create (pipes)
2043 * create the krings and rings of both ends and cross-link them
2046 * * netmap_monitor_krings_create (monitors)
2047 * avoid allocating the mbq
2049 * * netmap_bwrap_krings_create (bwraps)
2050 * create both the bwrap krings array,
2051 * the krings array of the wrapped adapter, and
2052 * (if needed) the fake array for the host adapter
2054 * na->nm_register(, 1)
2055 * (put the adapter in netmap mode)
2057 * This may be one of the following:
2059 * * netmap_hw_reg (hw ports)
2060 * checks that the ifp is still there, then calls
2061 * the hardware specific callback;
2063 * * netmap_vp_reg (VALE ports)
2064 * If the port is connected to a bridge,
2065 * set the NAF_NETMAP_ON flag under the
2066 * bridge write lock.
2068 * * netmap_pipe_reg (pipes)
2069 * inform the other pipe end that it is no
2070 * longer responsible for the lifetime of this pipe end
2073 * * netmap_monitor_reg (monitors)
2074 * intercept the sync callbacks of the monitored rings
2077 * * netmap_bwrap_reg (bwraps)
2078 * cross-link the bwrap and hwna rings,
2079 * forward the request to the hwna, override
2080 * the hwna notify callback (so that frames
2081 * coming from outside go through the bridge).
2086 netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na,
2087 uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags)
2089 struct netmap_if *nifp = NULL;
2093 priv->np_na = na; /* store the reference */
2094 error = netmap_set_ringid(priv, nr_mode, nr_ringid, nr_flags);
2097 error = netmap_mem_finalize(na->nm_mem, na);
2101 if (na->active_fds == 0) {
2103 /* cache the allocator info in the na */
2104 error = netmap_mem_get_lut(na->nm_mem, &na->na_lut);
2107 ND("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal,
2108 na->na_lut.objsize);
2110 /* ring configuration may have changed, fetch from the card */
2111 netmap_update_config(na);
2114 * If this is the first registration of the adapter,
2115 * perform sanity checks and create the in-kernel view
2116 * of the netmap rings (the netmap krings).
2118 if (na->ifp && nm_priv_rx_enabled(priv)) {
2119 /* This netmap adapter is attached to an ifnet. */
2120 unsigned nbs = netmap_mem_bufsize(na->nm_mem);
2121 unsigned mtu = nm_os_ifnet_mtu(na->ifp);
2123 ND("mtu %d rx_buf_maxsize %d netmap_buf_size %d",
2124 mtu, na->rx_buf_maxsize, nbs);
2126 if (mtu <= na->rx_buf_maxsize) {
2127 /* The MTU fits a single NIC slot. We only
2128 * need to check that netmap buffers are
2129 * large enough to hold an MTU. NS_MOREFRAG
2130 * cannot be used in this case. */
2132 nm_prerr("error: netmap buf size (%u) "
2133 "< device MTU (%u)\n", nbs, mtu);
2138 /* More NIC slots may be needed to receive
2139 * or transmit a single packet. Check that
2140 * the adapter supports NS_MOREFRAG and that
2141 * netmap buffers are large enough to hold
2142 * the maximum per-slot size. */
2143 if (!(na->na_flags & NAF_MOREFRAG)) {
2144 nm_prerr("error: large MTU (%d) needed "
2145 "but %s does not support "
2146 "NS_MOREFRAG\n", mtu,
2150 } else if (nbs < na->rx_buf_maxsize) {
2151 nm_prerr("error: using NS_MOREFRAG on "
2152 "%s requires netmap buf size "
2153 ">= %u\n", na->ifp->if_xname,
2154 na->rx_buf_maxsize);
2158 nm_prinf("info: netmap application on "
2159 "%s needs to support "
2161 "(MTU=%u,netmap_buf_size=%u)\n",
2162 na->ifp->if_xname, mtu, nbs);
2168 * Depending on the adapter, this may also create
2169 * the netmap rings themselves
2171 error = na->nm_krings_create(na);
2177 /* now the krings must exist and we can check whether some
2178 * previous bind has exclusive ownership on them, and set nr_pending_mode
2181 error = netmap_krings_get(priv);
2183 goto err_del_krings;
2185 /* create all missing netmap rings */
2186 error = netmap_mem_rings_create(na);
2190 /* in all cases, create a new netmap if */
2191 nifp = netmap_mem_if_new(na, priv);
2197 if (nm_kring_pending(priv)) {
2198 /* Some kring is switching mode, tell the adapter to react on this. */
2200 error = na->nm_register(na, 1);
2205 /* Commit the reference. */
2209 * advertise that the interface is ready by setting np_nifp.
2210 * The barrier is needed because readers (poll, *SYNC and mmap)
2211 * check for priv->np_nifp != NULL without locking
2213 mb(); /* make sure previous writes are visible to all CPUs */
2214 priv->np_nifp = nifp;
2219 netmap_mem_if_delete(na, nifp);
2221 netmap_mem_rings_delete(na);
2223 netmap_krings_put(priv);
2225 if (na->active_fds == 0)
2226 na->nm_krings_delete(na);
2228 if (na->active_fds == 0)
2229 memset(&na->na_lut, 0, sizeof(na->na_lut));
2231 netmap_mem_drop(na);
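/*
 * A minimal userspace sketch of the registration path served by
 * netmap_do_regif() above, following the standard netmap(4) API;
 * the interface name is illustrative and error handling is omitted:
 *
 *	struct nmreq_header hdr;
 *	struct nmreq_register reg;
 *	int fd = open("/dev/netmap", O_RDWR);
 *	void *mem;
 *
 *	memset(&hdr, 0, sizeof(hdr));
 *	memset(&reg, 0, sizeof(reg));
 *	hdr.nr_version = NETMAP_API;
 *	hdr.nr_reqtype = NETMAP_REQ_REGISTER;
 *	strlcpy(hdr.nr_name, "em0", sizeof(hdr.nr_name));
 *	hdr.nr_body = (uintptr_t)&reg;
 *	reg.nr_mode = NR_REG_ALL_NIC;
 *	ioctl(fd, NIOCCTRL, &hdr);	// reaches netmap_do_regif()
 *	mem = mmap(NULL, reg.nr_memsize, PROT_READ | PROT_WRITE,
 *	    MAP_SHARED, fd, 0);
 *	// nifp = NETMAP_IF(mem, reg.nr_offset);
 */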
2239 * update kring and ring at the end of rxsync/txsync.
2242 nm_sync_finalize(struct netmap_kring *kring)
2245 * Update ring tail to what the kernel knows
2246 * After txsync: head/rhead/hwcur might be behind cur/rcur
2249 kring->ring->tail = kring->rtail = kring->nr_hwtail;
2251 ND(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d",
2252 kring->name, kring->nr_hwcur, kring->nr_hwtail,
2253 kring->rhead, kring->rcur, kring->rtail);
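/*
 * Sketch of the invariant that nm_sync_finalize() re-establishes,
 * following the netmap(4) ring model: after the sync, userspace sees
 *
 *	ring->tail == kring->rtail == kring->nr_hwtail
 *
 * Slots in [head, tail) are owned by the application (free slots on a
 * TX ring, received packets on an RX ring); the remaining slots are
 * owned by the kernel until the next sync.
 */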
2256 /* set ring timestamp */
2258 ring_timestamp_set(struct netmap_ring *ring)
2260 if (netmap_no_timestamp == 0 || ring->flags & NR_TIMESTAMP) {
2261 microtime(&ring->ts);
2265 static int nmreq_copyin(struct nmreq_header *, int);
2266 static int nmreq_copyout(struct nmreq_header *, int);
2267 static int nmreq_checkoptions(struct nmreq_header *);
2270 * ioctl(2) support for the "netmap" device.
2272 * The following is the list of accepted commands (a userspace sketch follows this handler):
2273 * - NIOCCTRL device control API
2274 * - NIOCTXSYNC sync TX rings
2275 * - NIOCRXSYNC sync RX rings
2276 * - SIOCGIFADDR just for convenience
2277 * - NIOCGINFO deprecated (legacy API)
2278 * - NIOCREGIF deprecated (legacy API)
2280 * Return 0 on success, errno otherwise.
2283 netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data,
2284 struct thread *td, int nr_body_is_user)
2286 struct mbq q; /* packets from RX hw queues to host stack */
2287 struct netmap_adapter *na = NULL;
2288 struct netmap_mem_d *nmd = NULL;
2289 struct ifnet *ifp = NULL;
2291 u_int i, qfirst, qlast;
2292 struct netmap_if *nifp;
2293 struct netmap_kring **krings;
2299 struct nmreq_header *hdr = (struct nmreq_header *)data;
2301 if (hdr->nr_version != NETMAP_API) {
2302 D("API mismatch for reqtype %d: got %d need %d",
2304 hdr->nr_version, NETMAP_API);
2305 hdr->nr_version = NETMAP_API;
2307 if (hdr->nr_version < NETMAP_MIN_API ||
2308 hdr->nr_version > NETMAP_MAX_API) {
2312 /* Make a kernel-space copy of the user-space nr_body.
2313 * For convenience, the nr_body pointer and the pointers
2314 * in the options list will be replaced with their
2315 * kernel-space counterparts. The original pointers are
2316 * saved internally and later restored by nmreq_copyout
2318 error = nmreq_copyin(hdr, nr_body_is_user);
2323 /* Sanitize hdr->nr_name. */
2324 hdr->nr_name[sizeof(hdr->nr_name) - 1] = '\0';
2326 switch (hdr->nr_reqtype) {
2327 case NETMAP_REQ_REGISTER: {
2328 struct nmreq_register *req =
2329 (struct nmreq_register *)hdr->nr_body;
2330 /* Protect access to priv from concurrent requests. */
2335 struct nmreq_option *opt;
2336 #endif /* WITH_EXTMEM */
2338 if (priv->np_nifp != NULL) { /* thread already registered */
2344 opt = nmreq_findoption((struct nmreq_option *)hdr->nr_options,
2345 NETMAP_REQ_OPT_EXTMEM);
2347 struct nmreq_opt_extmem *e =
2348 (struct nmreq_opt_extmem *)opt;
2350 error = nmreq_checkduplicate(opt);
2352 opt->nro_status = error;
2355 nmd = netmap_mem_ext_create(e->nro_usrptr,
2356 &e->nro_info, &error);
2357 opt->nro_status = error;
2361 #endif /* WITH_EXTMEM */
2363 if (nmd == NULL && req->nr_mem_id) {
2364 /* find the allocator and get a reference */
2365 nmd = netmap_mem_find(req->nr_mem_id);
2371 /* find the interface and a reference */
2372 error = netmap_get_na(hdr, &na, &ifp, nmd,
2373 1 /* create */); /* keep reference */
2376 if (NETMAP_OWNED_BY_KERN(na)) {
2381 if (na->virt_hdr_len && !(req->nr_flags & NR_ACCEPT_VNET_HDR)) {
2386 error = netmap_do_regif(priv, na, req->nr_mode,
2387 req->nr_ringid, req->nr_flags);
2388 if (error) { /* reg. failed, release priv and ref */
2391 nifp = priv->np_nifp;
2392 priv->np_td = td; /* for debugging purposes */
2394 /* return the offset of the netmap_if object */
2395 req->nr_rx_rings = na->num_rx_rings;
2396 req->nr_tx_rings = na->num_tx_rings;
2397 req->nr_rx_slots = na->num_rx_desc;
2398 req->nr_tx_slots = na->num_tx_desc;
2399 error = netmap_mem_get_info(na->nm_mem, &req->nr_memsize, &memflags,
2402 netmap_do_unregif(priv);
2405 if (memflags & NETMAP_MEM_PRIVATE) {
2406 *(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM;
2409 priv->np_si[t] = nm_si_user(priv, t) ?
2410 &na->si[t] : &NMR(na, t)[priv->np_qfirst[t]]->si;
2413 if (req->nr_extra_bufs) {
2415 D("requested %d extra buffers",
2416 req->nr_extra_bufs);
2417 req->nr_extra_bufs = netmap_extra_alloc(na,
2418 &nifp->ni_bufs_head, req->nr_extra_bufs);
2420 D("got %d extra buffers", req->nr_extra_bufs);
2422 req->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp);
2424 error = nmreq_checkoptions(hdr);
2426 netmap_do_unregif(priv);
2430 /* store ifp reference so that priv destructor may release it */
2434 netmap_unget_na(na, ifp);
2436 /* release the reference from netmap_mem_find() or
2437 * netmap_mem_ext_create()
2440 netmap_mem_put(nmd);
2445 case NETMAP_REQ_PORT_INFO_GET: {
2446 struct nmreq_port_info_get *req =
2447 (struct nmreq_port_info_get *)hdr->nr_body;
2453 if (hdr->nr_name[0] != '\0') {
2454 /* Build a nmreq_register out of the nmreq_port_info_get,
2455 * so that we can call netmap_get_na(). */
2456 struct nmreq_register regreq;
2457 bzero(&regreq, sizeof(regreq));
2458 regreq.nr_tx_slots = req->nr_tx_slots;
2459 regreq.nr_rx_slots = req->nr_rx_slots;
2460 regreq.nr_tx_rings = req->nr_tx_rings;
2461 regreq.nr_rx_rings = req->nr_rx_rings;
2462 regreq.nr_mem_id = req->nr_mem_id;
2464 /* get a refcount */
2465 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2466 hdr->nr_body = (uint64_t)&regreq;
2467 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
2468 hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; /* reset type */
2469 hdr->nr_body = (uint64_t)req; /* reset nr_body */
2475 nmd = na->nm_mem; /* get memory allocator */
2477 nmd = netmap_mem_find(req->nr_mem_id ? req->nr_mem_id : 1);
2484 error = netmap_mem_get_info(nmd, &req->nr_memsize, &memflags,
2488 if (na == NULL) /* only memory info */
2491 req->nr_rx_slots = req->nr_tx_slots = 0;
2492 netmap_update_config(na);
2493 req->nr_rx_rings = na->num_rx_rings;
2494 req->nr_tx_rings = na->num_tx_rings;
2495 req->nr_rx_slots = na->num_rx_desc;
2496 req->nr_tx_slots = na->num_tx_desc;
2498 netmap_unget_na(na, ifp);
2503 case NETMAP_REQ_VALE_ATTACH: {
2504 error = nm_bdg_ctl_attach(hdr, NULL /* userspace request */);
2508 case NETMAP_REQ_VALE_DETACH: {
2509 error = nm_bdg_ctl_detach(hdr, NULL /* userspace request */);
2513 case NETMAP_REQ_VALE_LIST: {
2514 error = netmap_bdg_list(hdr);
2518 case NETMAP_REQ_PORT_HDR_SET: {
2519 struct nmreq_port_hdr *req =
2520 (struct nmreq_port_hdr *)hdr->nr_body;
2521 /* Build a nmreq_register out of the nmreq_port_hdr,
2522 * so that we can call netmap_get_bdg_na(). */
2523 struct nmreq_register regreq;
2524 bzero(&regreq, sizeof(regreq));
2525 /* For now we only support virtio-net headers, and only for
2526 * VALE ports, but this may change in future. Valid lengths
2527 * for the virtio-net header are 0 (no header), 10 and 12. */
2528 if (req->nr_hdr_len != 0 &&
2529 req->nr_hdr_len != sizeof(struct nm_vnet_hdr) &&
2530 req->nr_hdr_len != 12) {
2535 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2536 hdr->nr_body = (uint64_t)&regreq;
2537 error = netmap_get_bdg_na(hdr, &na, NULL, 0);
2538 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_SET;
2539 hdr->nr_body = (uint64_t)req;
2541 struct netmap_vp_adapter *vpna =
2542 (struct netmap_vp_adapter *)na;
2543 na->virt_hdr_len = req->nr_hdr_len;
2544 if (na->virt_hdr_len) {
2545 vpna->mfs = NETMAP_BUF_SIZE(na);
2547 D("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na);
2548 netmap_adapter_put(na);
2556 case NETMAP_REQ_PORT_HDR_GET: {
2557 /* Get vnet-header length for this netmap port */
2558 struct nmreq_port_hdr *req =
2559 (struct nmreq_port_hdr *)hdr->nr_body;
2560 /* Build a nmreq_register out of the nmreq_port_hdr,
2561 * so that we can call netmap_get_bdg_na(). */
2562 struct nmreq_register regreq;
2565 bzero(&regreq, sizeof(regreq));
2567 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2568 hdr->nr_body = (uint64_t)&regreq;
2569 error = netmap_get_na(hdr, &na, &ifp, NULL, 0);
2570 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_GET;
2571 hdr->nr_body = (uint64_t)req;
2573 req->nr_hdr_len = na->virt_hdr_len;
2575 netmap_unget_na(na, ifp);
2580 case NETMAP_REQ_VALE_NEWIF: {
2581 error = nm_vi_create(hdr);
2585 case NETMAP_REQ_VALE_DELIF: {
2586 error = nm_vi_destroy(hdr->nr_name);
2590 case NETMAP_REQ_VALE_POLLING_ENABLE:
2591 case NETMAP_REQ_VALE_POLLING_DISABLE: {
2592 error = nm_bdg_polling(hdr);
2595 #endif /* WITH_VALE */
2596 case NETMAP_REQ_POOLS_INFO_GET: {
2597 struct nmreq_pools_info *req =
2598 (struct nmreq_pools_info *)hdr->nr_body;
2599 /* Get information from the memory allocator. This
2600 * netmap device must already be bound to a port.
2601 * Note that hdr->nr_name is ignored. */
2603 if (priv->np_na && priv->np_na->nm_mem) {
2604 struct netmap_mem_d *nmd = priv->np_na->nm_mem;
2605 error = netmap_mem_pools_info_get(req, nmd);
2618 /* Write back request body to userspace and reset the
2619 * user-space pointer. */
2620 error = nmreq_copyout(hdr, error);
2626 nifp = priv->np_nifp;
2632 mb(); /* make sure following reads are not from cache */
2634 na = priv->np_na; /* we have a reference */
2637 D("Internal error: nifp != NULL && na == NULL");
2643 t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX);
2644 krings = NMR(na, t);
2645 qfirst = priv->np_qfirst[t];
2646 qlast = priv->np_qlast[t];
2647 sync_flags = priv->np_sync_flags;
2649 for (i = qfirst; i < qlast; i++) {
2650 struct netmap_kring *kring = krings[i];
2651 struct netmap_ring *ring = kring->ring;
2653 if (unlikely(nm_kr_tryget(kring, 1, &error))) {
2654 error = (error ? EIO : 0);
2658 if (cmd == NIOCTXSYNC) {
2659 if (netmap_verbose & NM_VERB_TXSYNC)
2660 D("pre txsync ring %d cur %d hwcur %d",
2663 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
2664 netmap_ring_reinit(kring);
2665 } else if (kring->nm_sync(kring, sync_flags | NAF_FORCE_RECLAIM) == 0) {
2666 nm_sync_finalize(kring);
2668 if (netmap_verbose & NM_VERB_TXSYNC)
2669 D("post txsync ring %d cur %d hwcur %d",
2673 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
2674 netmap_ring_reinit(kring);
2676 if (nm_may_forward_up(kring)) {
2677 /* transparent forwarding, see netmap_poll() */
2678 netmap_grab_packets(kring, &q, netmap_fwd);
2680 if (kring->nm_sync(kring, sync_flags | NAF_FORCE_READ) == 0) {
2681 nm_sync_finalize(kring);
2683 ring_timestamp_set(ring);
2689 netmap_send_up(na->ifp, &q);
2696 return netmap_ioctl_legacy(priv, cmd, data, td);
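/*
 * Userspace sketch of the NIOCTXSYNC/NIOCRXSYNC path handled above
 * (hypothetical client code on a registered, mmap()ed descriptor,
 * continuing the registration sketch earlier; error handling omitted):
 *
 *	struct netmap_ring *ring = NETMAP_TXRING(nifp, 0);
 *	uint32_t cur = ring->head;
 *
 *	// ... fill ring->slot[cur], advance cur = nm_ring_next(ring, cur) ...
 *	ring->head = ring->cur = cur;
 *	ioctl(fd, NIOCTXSYNC, NULL);	// flush the TX rings bound to fd
 *	ioctl(fd, NIOCRXSYNC, NULL);	// collect/refill the RX rings
 */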
2705 nmreq_size_by_type(uint16_t nr_reqtype)
2707 switch (nr_reqtype) {
2708 case NETMAP_REQ_REGISTER:
2709 return sizeof(struct nmreq_register);
2710 case NETMAP_REQ_PORT_INFO_GET:
2711 return sizeof(struct nmreq_port_info_get);
2712 case NETMAP_REQ_VALE_ATTACH:
2713 return sizeof(struct nmreq_vale_attach);
2714 case NETMAP_REQ_VALE_DETACH:
2715 return sizeof(struct nmreq_vale_detach);
2716 case NETMAP_REQ_VALE_LIST:
2717 return sizeof(struct nmreq_vale_list);
2718 case NETMAP_REQ_PORT_HDR_SET:
2719 case NETMAP_REQ_PORT_HDR_GET:
2720 return sizeof(struct nmreq_port_hdr);
2721 case NETMAP_REQ_VALE_NEWIF:
2722 return sizeof(struct nmreq_vale_newif);
2723 case NETMAP_REQ_VALE_DELIF:
2725 case NETMAP_REQ_VALE_POLLING_ENABLE:
2726 case NETMAP_REQ_VALE_POLLING_DISABLE:
2727 return sizeof(struct nmreq_vale_polling);
2728 case NETMAP_REQ_POOLS_INFO_GET:
2729 return sizeof(struct nmreq_pools_info);
2735 nmreq_opt_size_by_type(uint16_t nro_reqtype)
2737 size_t rv = sizeof(struct nmreq_option);
2738 #ifdef NETMAP_REQ_OPT_DEBUG
2739 if (nro_reqtype & NETMAP_REQ_OPT_DEBUG)
2740 return (nro_reqtype & ~NETMAP_REQ_OPT_DEBUG);
2741 #endif /* NETMAP_REQ_OPT_DEBUG */
2742 switch (nro_reqtype) {
2744 case NETMAP_REQ_OPT_EXTMEM:
2745 rv = sizeof(struct nmreq_opt_extmem);
2747 #endif /* WITH_EXTMEM */
2749 /* subtract the common header */
2750 return rv - sizeof(struct nmreq_option);
2754 nmreq_copyin(struct nmreq_header *hdr, int nr_body_is_user)
2756 size_t rqsz, optsz, bufsz;
2758 char *ker = NULL, *p;
2759 struct nmreq_option **next, *src;
2760 struct nmreq_option buf;
2763 if (hdr->nr_reserved)
2766 if (!nr_body_is_user)
2769 hdr->nr_reserved = nr_body_is_user;
2771 /* compute the total size of the buffer */
2772 rqsz = nmreq_size_by_type(hdr->nr_reqtype);
2773 if (rqsz > NETMAP_REQ_MAXSIZE) {
2777 if ((rqsz && hdr->nr_body == (uint64_t)NULL) ||
2778 (!rqsz && hdr->nr_body != (uint64_t)NULL)) {
2779 /* Request body expected, but not found; or
2780 * request body found but unexpected. */
2785 bufsz = 2 * sizeof(void *) + rqsz;
2787 for (src = (struct nmreq_option *)hdr->nr_options; src;
2788 src = (struct nmreq_option *)buf.nro_next)
2790 error = copyin(src, &buf, sizeof(*src));
2793 optsz += sizeof(*src);
2794 optsz += nmreq_opt_size_by_type(buf.nro_reqtype);
2795 if (rqsz + optsz > NETMAP_REQ_MAXSIZE) {
2799 bufsz += optsz + sizeof(void *);
2802 ker = nm_os_malloc(bufsz);
2809 /* make a copy of the user pointers */
2810 ptrs = (uint64_t*)p;
2811 *ptrs++ = hdr->nr_body;
2812 *ptrs++ = hdr->nr_options;
2816 error = copyin((void *)hdr->nr_body, p, rqsz);
2819 /* overwrite the user pointer with the in-kernel one */
2820 hdr->nr_body = (uint64_t)p;
2823 /* copy the options */
2824 next = (struct nmreq_option **)&hdr->nr_options;
2827 struct nmreq_option *opt;
2829 /* copy the option header */
2830 ptrs = (uint64_t *)p;
2831 opt = (struct nmreq_option *)(ptrs + 1);
2832 error = copyin(src, opt, sizeof(*src));
2835 /* make a copy of the user next pointer */
2836 *ptrs = opt->nro_next;
2837 /* overwrite the user pointer with the in-kernel one */
2840 /* initialize the option as not supported.
2841 * Recognized options will update this field.
2843 opt->nro_status = EOPNOTSUPP;
2845 p = (char *)(opt + 1);
2847 /* copy the option body */
2848 optsz = nmreq_opt_size_by_type(opt->nro_reqtype);
2850 /* the option body follows the option header */
2851 error = copyin(src + 1, p, optsz);
2857 /* move to next option */
2858 next = (struct nmreq_option **)&opt->nro_next;
2864 ptrs = (uint64_t *)ker;
2865 hdr->nr_body = *ptrs++;
2866 hdr->nr_options = *ptrs++;
2867 hdr->nr_reserved = 0;
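/*
 * Layout of the kernel buffer built by nmreq_copyin() (a sketch
 * recapping the code above):
 *
 *	+----------------------+
 *	| saved nr_body ptr    |  original user pointers, restored
 *	| saved nr_options ptr |  later by nmreq_copyout()
 *	+----------------------+
 *	| request body         |  rqsz bytes
 *	+----------------------+  then, for each option:
 *	| saved nro_next ptr   |
 *	| option header        |
 *	| option body          |  nmreq_opt_size_by_type() bytes
 *	+----------------------+
 */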
2874 nmreq_copyout(struct nmreq_header *hdr, int rerror)
2876 struct nmreq_option *src, *dst;
2877 void *ker = (void *)hdr->nr_body, *bufstart;
2882 if (!hdr->nr_reserved)
2885 /* restore the user pointers in the header */
2886 ptrs = (uint64_t *)ker - 2;
2888 hdr->nr_body = *ptrs++;
2889 src = (struct nmreq_option *)hdr->nr_options;
2890 hdr->nr_options = *ptrs;
2894 bodysz = nmreq_size_by_type(hdr->nr_reqtype);
2895 error = copyout(ker, (void *)hdr->nr_body, bodysz);
2902 /* copy the options */
2903 dst = (struct nmreq_option *)hdr->nr_options;
2908 /* restore the user pointer */
2909 next = src->nro_next;
2910 ptrs = (uint64_t *)src - 1;
2911 src->nro_next = *ptrs;
2913 /* always copy the option header */
2914 error = copyout(src, dst, sizeof(*src));
2920 /* copy the option body only if there was no error */
2921 if (!rerror && !src->nro_status) {
2922 optsz = nmreq_opt_size_by_type(src->nro_reqtype);
2924 error = copyout(src + 1, dst + 1, optsz);
2931 src = (struct nmreq_option *)next;
2932 dst = (struct nmreq_option *)*ptrs;
2937 hdr->nr_reserved = 0;
2938 nm_os_free(bufstart);
2942 struct nmreq_option *
2943 nmreq_findoption(struct nmreq_option *opt, uint16_t reqtype)
2945 for ( ; opt; opt = (struct nmreq_option *)opt->nro_next)
2946 if (opt->nro_reqtype == reqtype)
2952 nmreq_checkduplicate(struct nmreq_option *opt) {
2953 uint16_t type = opt->nro_reqtype;
2956 while ((opt = nmreq_findoption((struct nmreq_option *)opt->nro_next,
2959 opt->nro_status = EINVAL;
2961 return (dup ? EINVAL : 0);
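/*
 * Hypothetical userspace sketch of an option chain as handled by
 * nmreq_findoption()/nmreq_checkduplicate(); field names follow the
 * public API headers, the external buffer is illustrative:
 *
 *	struct nmreq_opt_extmem ext;
 *
 *	memset(&ext, 0, sizeof(ext));
 *	ext.nro_opt.nro_reqtype = NETMAP_REQ_OPT_EXTMEM;
 *	ext.nro_usrptr = (uintptr_t)my_buf;	// user-supplied memory
 *	hdr.nr_options = (uintptr_t)&ext;	// NULL nro_next ends the list
 *
 * On return, nro_status is 0 if the option was accepted, an errno
 * value otherwise (EOPNOTSUPP if it was not recognized).
 */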
2965 nmreq_checkoptions(struct nmreq_header *hdr)
2967 struct nmreq_option *opt;
2968 /* return error if there is still any option
2969 * marked as not supported
2972 for (opt = (struct nmreq_option *)hdr->nr_options; opt;
2973 opt = (struct nmreq_option *)opt->nro_next)
2974 if (opt->nro_status == EOPNOTSUPP)
2981 * select(2) and poll(2) handlers for the "netmap" device.
2983 * Can be called for one or more queues.
2984 * Return the event mask corresponding to ready events (a userspace sketch follows this handler).
2985 * If there are no ready events, do a selrecord on either individual
2986 * selinfo or on the global one.
2987 * Device-dependent parts (locking and sync of tx/rx rings)
2988 * are done through callbacks.
2990 * On Linux, the arguments are really pwait, the poll table, and 'td' is a struct file *.
2991 * The first one is remapped to pwait as selrecord() uses the name as a hidden argument.
2995 netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr)
2997 struct netmap_adapter *na;
2998 struct netmap_kring *kring;
2999 struct netmap_ring *ring;
3000 u_int i, check_all_tx, check_all_rx, want[NR_TXRX], revents = 0;
3001 #define want_tx want[NR_TX]
3002 #define want_rx want[NR_RX]
3003 struct mbq q; /* packets from RX hw queues to host stack */
3006 * In order to avoid nested locks, we need to "double check"
3007 * txsync and rxsync if we decide to do a selrecord().
3008 * retry_tx (and retry_rx, later) prevent looping forever.
3010 int retry_tx = 1, retry_rx = 1;
3012 /* Transparent mode: send_down is 1 if we have found some
3013 * packets to forward (host RX ring --> NIC) during the rx
3014 * scan and we have not sent them down to the NIC yet.
3015 * Transparent mode requires binding all rings to a single file descriptor.
3019 int sync_flags = priv->np_sync_flags;
3023 if (priv->np_nifp == NULL) {
3024 D("No if registered");
3027 mb(); /* make sure following reads are not from cache */
3031 if (!nm_netmap_on(na))
3034 if (netmap_verbose & 0x8000)
3035 D("device %s events 0x%x", na->name, events);
3036 want_tx = events & (POLLOUT | POLLWRNORM);
3037 want_rx = events & (POLLIN | POLLRDNORM);
3040 * check_all_{tx|rx} are set if the card has more than one queue AND
3041 * the file descriptor is bound to all of them. If so, we sleep on
3042 * the "global" selinfo, otherwise we sleep on individual selinfo
3043 * (FreeBSD only allows two selinfo's per file descriptor).
3044 * The interrupt routine in the driver wakes one or the other
3045 * (or both) depending on which clients are active.
3047 * rxsync() is only called if we run out of buffers on a POLLIN.
3048 * txsync() is called if we run out of buffers on POLLOUT, or
3049 * there are pending packets to send. The latter can be disabled
3050 * by passing NETMAP_NO_TX_POLL at registration time.
3052 check_all_tx = nm_si_user(priv, NR_TX);
3053 check_all_rx = nm_si_user(priv, NR_RX);
3057 * We start with a lock free round which is cheap if we have
3058 * slots available. If this fails, then lock and call the sync
3059 * routines. We can't do this on Linux, as the contract says
3060 * that we must call nm_os_selrecord() unconditionally.
3063 enum txrx t = NR_TX;
3064 for (i = priv->np_qfirst[t]; want[t] && i < priv->np_qlast[t]; i++) {
3065 kring = NMR(na, t)[i];
3066 /* XXX compare ring->cur and kring->tail */
3067 if (!nm_ring_empty(kring->ring)) {
3069 want[t] = 0; /* also breaks the loop */
3074 enum txrx t = NR_RX;
3075 want_rx = 0; /* look for a reason to run the handlers */
3076 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
3077 kring = NMR(na, t)[i];
3078 if (kring->ring->cur == kring->ring->tail /* try fetch new buffers */
3079 || kring->rhead != kring->ring->head /* release buffers */) {
3084 revents |= events & (POLLIN | POLLRDNORM); /* we have data */
3089 /* The selrecord must be unconditional on linux. */
3090 nm_os_selrecord(sr, check_all_tx ?
3091 &na->si[NR_TX] : &na->tx_rings[priv->np_qfirst[NR_TX]]->si);
3092 nm_os_selrecord(sr, check_all_rx ?
3093 &na->si[NR_RX] : &na->rx_rings[priv->np_qfirst[NR_RX]]->si);
3097 * If we want to push packets out (priv->np_txpoll) or
3098 * want_tx is still set, we must issue txsync calls
3099 * (on all rings, to avoid stalling the tx rings).
3100 * Fortunately, normal tx mode has np_txpoll set.
3102 if (priv->np_txpoll || want_tx) {
3104 * The first round checks if anyone is ready, if not
3105 * do a selrecord and another round to handle races.
3106 * want_tx goes to 0 if any space is found, and is
3107 * used to skip rings with no pending transmissions.
3110 for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) {
3113 kring = na->tx_rings[i];
3117 * Don't try to txsync this TX ring if we already found some
3118 * space in some of the TX rings (want_tx == 0) and there are no
3119 * TX slots in this ring that need to be flushed to the NIC
3122 if (!send_down && !want_tx && ring->head == kring->nr_hwcur)
3125 if (nm_kr_tryget(kring, 1, &revents))
3128 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
3129 netmap_ring_reinit(kring);
3132 if (kring->nm_sync(kring, sync_flags))
3135 nm_sync_finalize(kring);
3139 * If we found new slots, notify potential
3140 * listeners on the same ring.
3141 * Since we just did a txsync, look at the copies
3142 * of cur,tail in the kring.
3144 found = kring->rcur != kring->rtail;
3146 if (found) { /* notify other listeners */
3150 kring->nm_notify(kring, 0);
3154 /* if there were any packets to forward, we must have handled them by now */
3156 if (want_tx && retry_tx && sr) {
3158 nm_os_selrecord(sr, check_all_tx ?
3159 &na->si[NR_TX] : &na->tx_rings[priv->np_qfirst[NR_TX]]->si);
3167 * If want_rx is still set scan receive rings.
3168 * Do it on all rings because otherwise we starve.
3171 /* two rounds here for race avoidance */
3173 for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) {
3176 kring = na->rx_rings[i];
3179 if (unlikely(nm_kr_tryget(kring, 1, &revents)))
3182 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
3183 netmap_ring_reinit(kring);
3186 /* now we can use kring->rcur, rtail */
3189 * transparent mode support: collect packets from
3190 * hw rxring(s) that have been released by the user
3192 if (nm_may_forward_up(kring)) {
3193 netmap_grab_packets(kring, &q, netmap_fwd);
3196 /* Clear the NR_FORWARD flag anyway, it may be set by
3197 * the nm_sync() below only for the host RX ring (see
3198 * netmap_rxsync_from_host()). */
3199 kring->nr_kflags &= ~NR_FORWARD;
3200 if (kring->nm_sync(kring, sync_flags))
3203 nm_sync_finalize(kring);
3204 send_down |= (kring->nr_kflags & NR_FORWARD);
3205 ring_timestamp_set(ring);
3206 found = kring->rcur != kring->rtail;
3212 kring->nm_notify(kring, 0);
3218 if (retry_rx && sr) {
3219 nm_os_selrecord(sr, check_all_rx ?
3220 &na->si[NR_RX] : &na->rx_rings[priv->np_qfirst[NR_RX]]->si);
3223 if (send_down || retry_rx) {
3226 goto flush_tx; /* and retry_rx */
3233 * Transparent mode: released bufs (i.e. between kring->nr_hwcur and
3234 * ring->head) marked with NS_FORWARD on hw rx rings are passed up
3235 * to the host stack.
3239 netmap_send_up(na->ifp, &q);
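/*
 * The userspace counterpart of netmap_poll(), as referenced in the
 * comment above (a minimal sketch on a registered descriptor; fd and
 * rxring = NETMAP_RXRING(nifp, i) are assumed set up as in the
 * earlier examples):
 *
 *	struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *
 *	for (;;) {
 *		poll(&pfd, 1, 2500);	// woken up by the nm_notify callbacks
 *		while (!nm_ring_empty(rxring)) {
 *			// process rxring->slot[rxring->head] ...
 *			rxring->head = rxring->cur =
 *			    nm_ring_next(rxring, rxring->head);
 *		}
 *	}
 */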
3248 nma_intr_enable(struct netmap_adapter *na, int onoff)
3250 bool changed = false;
3255 for (i = 0; i < nma_get_nrings(na, t); i++) {
3256 struct netmap_kring *kring = NMR(na, t)[i];
3257 int on = !(kring->nr_kflags & NKR_NOINTR);
3259 if (!!onoff != !!on) {
3263 kring->nr_kflags &= ~NKR_NOINTR;
3265 kring->nr_kflags |= NKR_NOINTR;
3271 return 0; /* nothing to do */
3275 D("Cannot %s interrupts for %s", onoff ? "enable" : "disable",
3280 na->nm_intr(na, onoff);
3286 /*-------------------- driver support routines -------------------*/
3288 /* default notify callback */
3290 netmap_notify(struct netmap_kring *kring, int flags)
3292 struct netmap_adapter *na = kring->notify_na;
3293 enum txrx t = kring->tx;
3295 nm_os_selwakeup(&kring->si);
3296 /* optimization: avoid a wake up on the global
3297 * queue if nobody has registered for more
3300 if (na->si_users[t] > 0)
3301 nm_os_selwakeup(&na->si[t]);
3303 return NM_IRQ_COMPLETED;
3306 /* called by all routines that create netmap_adapters.
3307 * provide some defaults and get a reference to the memory allocator.
3311 netmap_attach_common(struct netmap_adapter *na)
3313 if (na->num_tx_rings == 0 || na->num_rx_rings == 0) {
3314 D("%s: invalid rings tx %d rx %d",
3315 na->name, na->num_tx_rings, na->num_rx_rings);
3319 if (!na->rx_buf_maxsize) {
3320 /* Set a conservative default (larger is safer). */
3321 na->rx_buf_maxsize = PAGE_SIZE;
3325 if (na->na_flags & NAF_HOST_RINGS && na->ifp) {
3326 na->if_input = na->ifp->if_input; /* for netmap_send_up */
3328 na->pdev = na; /* make sure netmap_mem_map() is called */
3329 #endif /* __FreeBSD__ */
3330 if (na->nm_krings_create == NULL) {
3331 /* we assume that we have been called by a driver,
3332 * since other port types all provide their own nm_krings_create
3335 na->nm_krings_create = netmap_hw_krings_create;
3336 na->nm_krings_delete = netmap_hw_krings_delete;
3338 if (na->nm_notify == NULL)
3339 na->nm_notify = netmap_notify;
3342 if (na->nm_mem == NULL) {
3343 /* use the global allocator */
3344 na->nm_mem = netmap_mem_get(&nm_mem);
3347 if (na->nm_bdg_attach == NULL)
3348 /* no special nm_bdg_attach callback. On VALE
3349 * attach, we need to interpose a bwrap
3351 na->nm_bdg_attach = netmap_bwrap_attach;
3357 /* Wrapper for the register callback provided by netmap-enabled hardware drivers.
3359 * nm_iszombie(na) means that the driver module has been
3360 * unloaded, so we cannot call into it.
3361 * nm_os_ifnet_lock() must guarantee mutual exclusion with module unloading.
3365 netmap_hw_reg(struct netmap_adapter *na, int onoff)
3367 struct netmap_hw_adapter *hwna =
3368 (struct netmap_hw_adapter*)na;
3373 if (nm_iszombie(na)) {
3376 } else if (na != NULL) {
3377 na->na_flags &= ~NAF_NETMAP_ON;
3382 error = hwna->nm_hw_register(na, onoff);
3385 nm_os_ifnet_unlock();
3391 netmap_hw_dtor(struct netmap_adapter *na)
3393 if (nm_iszombie(na) || na->ifp == NULL)
3396 WNA(na->ifp) = NULL;
3401 * Allocate a netmap_adapter object, and initialize it from the
3402 * 'arg' passed by the driver on attach.
3403 * We allocate a block of memory of 'size' bytes, which has room
3404 * for struct netmap_adapter plus additional room private to the caller.
3406 * Return 0 on success, ENOMEM otherwise.
3409 netmap_attach_ext(struct netmap_adapter *arg, size_t size, int override_reg)
3411 struct netmap_hw_adapter *hwna = NULL;
3412 struct ifnet *ifp = NULL;
3414 if (size < sizeof(struct netmap_hw_adapter)) {
3415 D("Invalid netmap adapter size %d", (int)size);
3419 if (arg == NULL || arg->ifp == NULL)
3423 if (NA(ifp) && !NM_NA_VALID(ifp)) {
3424 /* If NA(ifp) is not null but there is no valid netmap
3425 * adapter it means that someone else is using the same
3426 * pointer (e.g. ax25_ptr on linux). This happens for
3427 * instance when also PF_RING is in use. */
3428 D("Error: netmap adapter hook is busy");
3432 hwna = nm_os_malloc(size);
3436 hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE;
3437 strncpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name));
3439 hwna->nm_hw_register = hwna->up.nm_register;
3440 hwna->up.nm_register = netmap_hw_reg;
3442 if (netmap_attach_common(&hwna->up)) {
3446 netmap_adapter_get(&hwna->up);
3448 NM_ATTACH_NA(ifp, &hwna->up);
3451 if (ifp->netdev_ops) {
3452 /* prepare a clone of the netdev ops */
3453 #ifndef NETMAP_LINUX_HAVE_NETDEV_OPS
3454 hwna->nm_ndo.ndo_start_xmit = ifp->netdev_ops;
3456 hwna->nm_ndo = *ifp->netdev_ops;
3457 #endif /* NETMAP_LINUX_HAVE_NETDEV_OPS */
3459 hwna->nm_ndo.ndo_start_xmit = linux_netmap_start_xmit;
3460 hwna->nm_ndo.ndo_change_mtu = linux_netmap_change_mtu;
3461 if (ifp->ethtool_ops) {
3462 hwna->nm_eto = *ifp->ethtool_ops;
3464 hwna->nm_eto.set_ringparam = linux_netmap_set_ringparam;
3465 #ifdef NETMAP_LINUX_HAVE_SET_CHANNELS
3466 hwna->nm_eto.set_channels = linux_netmap_set_channels;
3467 #endif /* NETMAP_LINUX_HAVE_SET_CHANNELS */
3468 if (arg->nm_config == NULL) {
3469 hwna->up.nm_config = netmap_linux_config;
3472 if (arg->nm_dtor == NULL) {
3473 hwna->up.nm_dtor = netmap_hw_dtor;
3476 if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n",
3477 hwna->up.num_tx_rings, hwna->up.num_tx_desc,
3478 hwna->up.num_rx_rings, hwna->up.num_rx_desc);
3482 D("fail, arg %p ifp %p na %p", arg, ifp, hwna);
3483 return (hwna ? EINVAL : ENOMEM);
3488 netmap_attach(struct netmap_adapter *arg)
3490 return netmap_attach_ext(arg, sizeof(struct netmap_hw_adapter),
3491 1 /* override nm_reg */);
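/*
 * Typical call site in a driver attach routine (a sketch; the softc
 * fields and the foo_* callbacks are hypothetical driver code):
 *
 *	struct netmap_adapter na;
 *
 *	bzero(&na, sizeof(na));
 *	na.ifp = sc->ifp;
 *	na.num_tx_desc = sc->num_tx_desc;
 *	na.num_rx_desc = sc->num_rx_desc;
 *	na.num_tx_rings = na.num_rx_rings = sc->num_queues;
 *	na.nm_register = foo_netmap_reg;
 *	na.nm_txsync = foo_netmap_txsync;
 *	na.nm_rxsync = foo_netmap_rxsync;
 *	netmap_attach(&na);	// the structure is copied, see above
 */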
3496 NM_DBG(netmap_adapter_get)(struct netmap_adapter *na)
3502 refcount_acquire(&na->na_refcount);
3506 /* returns 1 iff the netmap_adapter is destroyed */
3508 NM_DBG(netmap_adapter_put)(struct netmap_adapter *na)
3513 if (!refcount_release(&na->na_refcount))
3519 if (na->tx_rings) { /* XXX should not happen */
3520 D("freeing leftover tx_rings");
3521 na->nm_krings_delete(na);
3523 netmap_pipe_dealloc(na);
3525 netmap_mem_put(na->nm_mem);
3526 bzero(na, sizeof(*na));
3532 /* nm_krings_create callback for all hardware native adapters */
3534 netmap_hw_krings_create(struct netmap_adapter *na)
3536 int ret = netmap_krings_create(na, 0);
3538 /* initialize the mbq for the sw rx ring */
3539 mbq_safe_init(&na->rx_rings[na->num_rx_rings]->rx_queue);
3540 ND("initialized sw rx queue %d", na->num_rx_rings);
3548 * Called on module unload by the netmap-enabled drivers
3551 netmap_detach(struct ifnet *ifp)
3553 struct netmap_adapter *na = NA(ifp);
3559 netmap_set_all_rings(na, NM_KR_LOCKED);
3561 * if the netmap adapter is not native, somebody
3562 * changed it, so we can not release it here.
3563 * The NAF_ZOMBIE flag will notify the new owner that
3564 * the driver is gone.
3566 if (!(na->na_flags & NAF_NATIVE) || !netmap_adapter_put(na)) {
3567 na->na_flags |= NAF_ZOMBIE;
3569 /* give active users a chance to notice that NAF_ZOMBIE has been
3570 * turned on, so that they can stop and return an error to userspace.
3571 * Note that this becomes a NOP if there are no active users and,
3572 * therefore, the put() above has deleted the na, since now NA(ifp) is NULL.
3575 netmap_enable_all_rings(ifp);
3581 * Intercept packets from the network stack and pass them
3582 * to netmap as incoming packets on the 'software' ring.
3584 * We only store packets in a bounded mbq and then copy them
3585 * in the relevant rxsync routine.
3587 * We rely on the OS to make sure that the ifp and na do not go
3588 * away (typically the caller checks for IFF_DRV_RUNNING or the like).
3589 * In nm_register() or whenever there is a reinitialization,
3590 * we make sure to make the mode change visible here.
3593 netmap_transmit(struct ifnet *ifp, struct mbuf *m)
3595 struct netmap_adapter *na = NA(ifp);
3596 struct netmap_kring *kring, *tx_kring;
3597 u_int len = MBUF_LEN(m);
3598 u_int error = ENOBUFS;
3603 kring = na->rx_rings[na->num_rx_rings];
3604 // XXX [Linux] we do not need this lock
3605 // if we follow the down/configure/up protocol -gl
3606 // mtx_lock(&na->core_lock);
3608 if (!nm_netmap_on(na)) {
3609 D("%s not in netmap mode anymore", na->name);
3615 if (txr >= na->num_tx_rings) {
3616 txr %= na->num_tx_rings;
3618 tx_kring = NMR(na, NR_TX)[txr];
3620 if (tx_kring->nr_mode == NKR_NETMAP_OFF) {
3621 return MBUF_TRANSMIT(na, ifp, m);
3624 q = &kring->rx_queue;
3626 // XXX reconsider long packets if we handle fragments
3627 if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */
3628 D("%s from_host, drop packet size %d > %d", na->name,
3629 len, NETMAP_BUF_SIZE(na));
3633 if (nm_os_mbuf_has_offld(m)) {
3634 RD(1, "%s drop mbuf that needs offloadings", na->name);
3638 /* protect against netmap_rxsync_from_host(), netmap_sw_to_nic()
3639 * and maybe other instances of netmap_transmit (the latter
3640 * not possible on Linux).
3641 * We enqueue the mbuf only if we are sure there is going to be
3642 * enough room in the host RX ring, otherwise we drop it.
3646 busy = kring->nr_hwtail - kring->nr_hwcur;
3648 busy += kring->nkr_num_slots;
3649 if (busy + mbq_len(q) >= kring->nkr_num_slots - 1) {
3650 RD(2, "%s full hwcur %d hwtail %d qlen %d", na->name,
3651 kring->nr_hwcur, kring->nr_hwtail, mbq_len(q));
3654 ND(2, "%s %d bufs in queue", na->name, mbq_len(q));
3655 /* notify outside the lock */
3664 /* unconditionally wake up listeners */
3665 kring->nm_notify(kring, 0);
3666 /* this is normally netmap_notify(), but for nics
3667 * connected to a bridge it is netmap_bwrap_intr_notify(),
3668 * that possibly forwards the frames through the switch
3676 * netmap_reset() is called by the driver routines when reinitializing
3677 * a ring. The driver is in charge of locking to protect the kring.
3678 * If native netmap mode is not set just return NULL.
3679 * If native netmap mode is set, in particular, we have to set nr_mode to NKR_NETMAP_ON.
3682 struct netmap_slot *
3683 netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n,
3686 struct netmap_kring *kring;
3689 if (!nm_native_on(na)) {
3690 ND("interface not in native netmap mode");
3691 return NULL; /* nothing to reinitialize */
3694 /* XXX note- in the new scheme, we are not guaranteed to be
3695 * under lock (e.g. when called on a device reset).
3696 * In this case, we should set a flag and do not trust too
3697 * much the values. In practice: TODO
3698 * - set a RESET flag somewhere in the kring
3699 * - do the processing in a conservative way
3700 * - let the *sync() fixup at the end.
3703 if (n >= na->num_tx_rings)
3706 kring = na->tx_rings[n];
3708 if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
3709 kring->nr_mode = NKR_NETMAP_OFF;
3713 // XXX check whether we should use hwcur or rcur
3714 new_hwofs = kring->nr_hwcur - new_cur;
3716 if (n >= na->num_rx_rings)
3718 kring = na->rx_rings[n];
3720 if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
3721 kring->nr_mode = NKR_NETMAP_OFF;
3725 new_hwofs = kring->nr_hwtail - new_cur;
3727 lim = kring->nkr_num_slots - 1;
3728 if (new_hwofs > lim)
3729 new_hwofs -= lim + 1;
3731 /* Always set the new offset value and realign the ring. */
3733 D("%s %s%d hwofs %d -> %d, hwtail %d -> %d",
3735 tx == NR_TX ? "TX" : "RX", n,
3736 kring->nkr_hwofs, new_hwofs,
3738 tx == NR_TX ? lim : kring->nr_hwtail);
3739 kring->nkr_hwofs = new_hwofs;
3741 kring->nr_hwtail = kring->nr_hwcur + lim;
3742 if (kring->nr_hwtail > lim)
3743 kring->nr_hwtail -= lim + 1;
3747 * Wakeup on the individual and global selwait
3748 * We do the wakeup here, but the ring is not yet reconfigured.
3749 * However, we are under lock so there are no races.
3751 kring->nr_mode = NKR_NETMAP_ON;
3752 kring->nm_notify(kring, 0);
3753 return kring->ring->slot;
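/*
 * Typical use of netmap_reset() in a driver ring-init path (a sketch
 * modeled on existing drivers; everything but the netmap calls is
 * hypothetical):
 *
 *	struct netmap_slot *slot = netmap_reset(na, NR_TX, ring_nr, 0);
 *
 *	if (slot != NULL) {	// native netmap mode is active
 *		for (j = 0; j < num_slots; j++) {
 *			int sj = netmap_idx_n2k(na->tx_rings[ring_nr], j);
 *			// point NIC descriptor j at the buffer of
 *			// netmap slot sj, i.e. NMB(na, slot + sj)
 *		}
 *	}
 */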
3758 * Dispatch rx/tx interrupts to the netmap rings.
3760 * "work_done" is non-null on the RX path, NULL for the TX path.
3761 * We rely on the OS to make sure that there is only one active
3762 * instance per queue, and that there is appropriate locking.
3764 * The 'notify' routine depends on what the ring is attached to.
3765 * - for a netmap file descriptor, do a selwakeup on the individual
3766 * waitqueue, plus one on the global one if needed
3767 * (see netmap_notify)
3768 * - for a nic connected to a switch, call the proper forwarding routine
3769 * (see netmap_bwrap_intr_notify)
3772 netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done)
3774 struct netmap_kring *kring;
3775 enum txrx t = (work_done ? NR_RX : NR_TX);
3777 q &= NETMAP_RING_MASK;
3779 if (netmap_verbose) {
3780 RD(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
3783 if (q >= nma_get_nrings(na, t))
3784 return NM_IRQ_PASS; // not a physical queue
3786 kring = NMR(na, t)[q];
3788 if (kring->nr_mode == NKR_NETMAP_OFF) {
3793 kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ?
3794 *work_done = 1; /* do not fire napi again */
3797 return kring->nm_notify(kring, 0);
3802 * Default functions to handle rx/tx interrupts from a physical device.
3803 * "work_done" is non-null on the RX path, NULL for the TX path.
3805 * If the card is not in netmap mode, simply return NM_IRQ_PASS,
3806 * so that the caller proceeds with regular processing.
3807 * Otherwise call netmap_common_irq().
3809 * If the card is connected to a netmap file descriptor,
3810 * do a selwakeup on the individual queue, plus one on the global one
3811 * if needed (multiqueue card _and_ there are multiqueue listeners),
3812 * and return NM_IRQ_COMPLETED.
3814 * Finally, if called on rx from an interface connected to a switch,
3815 * calls the proper forwarding routine.
3818 netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done)
3820 struct netmap_adapter *na = NA(ifp);
3823 * XXX emulated netmap mode sets NAF_SKIP_INTR so
3824 * we still use the regular driver even though the previous
3825 * check fails. It is unclear whether we should use
3826 * nm_native_on() here.
3828 if (!nm_netmap_on(na))
3831 if (na->na_flags & NAF_SKIP_INTR) {
3832 ND("use regular interrupt");
3836 return netmap_common_irq(na, q, work_done);
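/*
 * Typical use in a driver RX interrupt handler (a sketch; only
 * netmap_rx_irq() and NM_IRQ_PASS are netmap API, the rest is
 * hypothetical driver code):
 *
 *	int dummy;
 *
 *	if (netmap_rx_irq(sc->ifp, rxq->me, &dummy) != NM_IRQ_PASS)
 *		return;		// netmap consumed the interrupt
 *	// ... regular mbuf-based processing follows ...
 */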
3841 * Module loader and unloader
3843 * netmap_init() creates the /dev/netmap device and initializes
3844 * all global variables. Returns 0 on success, errno on failure
3845 * (though failure is not expected in practice).
3847 * netmap_fini() destroys everything.
3850 static struct cdev *netmap_dev; /* /dev/netmap character device. */
3851 extern struct cdevsw netmap_cdevsw;
3858 destroy_dev(netmap_dev);
3859 /* we assume that there are no longer netmap users */
3861 netmap_uninit_bridges();
3864 nm_prinf("netmap: unloaded module.\n");
3875 error = netmap_mem_init();
3879 * MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls
3880 * when the module is compiled in.
3881 * XXX could use make_dev_credv() to get error number
3883 netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD,
3884 &netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600,
3889 error = netmap_init_bridges();
3894 nm_os_vi_init_index();
3897 error = nm_os_ifnet_init();
3901 nm_prinf("netmap: loaded module\n");
3905 return (EINVAL); /* may be incorrect */