/*
 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
 *
 * Copyright (C) 2011-2014 Matteo Landi
 * Copyright (C) 2011-2016 Luigi Rizzo
 * Copyright (C) 2011-2016 Giuseppe Lettieri
 * Copyright (C) 2011-2016 Vincenzo Maffione
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */
/*
 * This module supports memory mapped access to network devices,
 * see netmap(4).
 *
 * The module uses a large memory pool allocated by the kernel
 * and accessible as mmapped memory by multiple userspace threads/processes.
 * The memory pool contains packet buffers and "netmap rings",
 * i.e. user-accessible copies of the interface's queues.
 * Access to the network card works like this:
 * 1. a process/thread issues one or more open() on /dev/netmap, to create
 *    a select()able file descriptor on which events are reported.
 * 2. on each descriptor, the process issues an ioctl() to identify
 *    the interface that should report events to the file descriptor.
 * 3. on each descriptor, the process issues an mmap() request to
 *    map the shared memory region within the process' address space.
 *    The list of interesting queues is indicated by a location in
 *    the shared memory region.
 * 4. using the functions in the netmap(4) userspace API, a process
 *    can look up the occupation state of a queue, access memory buffers,
 *    and retrieve received packets or enqueue packets to transmit.
 * 5. using some ioctl()s the process can synchronize the userspace view
 *    of the queue with the actual status in the kernel. This includes both
 *    receiving the notification of new packets, and transmitting new
 *    packets on the output interface.
 * 6. select() or poll() can be used to wait for events on individual
 *    transmit or receive queues (or all queues for a given interface).
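 *
 * The six steps above combine into the following userspace sketch
 * (illustrative only; "em0", the fixed ring index and the missing error
 * handling are placeholders, see netmap(4) for the full example):
 *
 *	int fd = open("/dev/netmap", O_RDWR);			// step 1
 *	struct nmreq req = { .nr_version = NETMAP_API };
 *	strncpy(req.nr_name, "em0", sizeof(req.nr_name));
 *	ioctl(fd, NIOCREGIF, &req);				// step 2
 *	void *mem = mmap(0, req.nr_memsize, PROT_READ | PROT_WRITE,
 *			 MAP_SHARED, fd, 0);			// step 3
 *	struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);
 *	struct netmap_ring *rxr = NETMAP_RXRING(nifp, 0);	// step 4
 *	for (;;) {
 *		struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *		poll(&pfd, 1, -1);				// steps 5-6
 *		while (!nm_ring_empty(rxr)) {
 *			struct netmap_slot *slot = &rxr->slot[rxr->cur];
 *			char *buf = NETMAP_BUF(rxr, slot->buf_idx);
 *			// ... consume slot->len bytes at buf ...
 *			rxr->head = rxr->cur = nm_ring_next(rxr, rxr->cur);
 *		}
 *	}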

		SYNCHRONIZATION (USER)

The netmap rings and data structures may be shared among multiple
user threads or even independent processes.
Any synchronization among those threads/processes is delegated
to the threads themselves. Only one thread at a time can be in
a system call on the same netmap ring. The OS does not enforce
this and only guarantees against system crashes in case of
invalid usage.

Within the kernel, access to the netmap rings is protected as follows:

- a spinlock on each ring, to handle producer/consumer races on
  RX rings attached to the host stack (against multiple host
  threads writing from the host stack to the same ring),
  and on 'destination' rings attached to a VALE switch
  (i.e. RX rings in VALE ports, and TX rings in NIC/host ports),
  protecting multiple active senders for the same destination.

- an atomic variable to guarantee that there is at most one
  instance of *_*xsync() on the ring at any time.
  For rings connected to user file
  descriptors, an atomic_test_and_set() protects this, and the
  lock on the ring is not actually used.
  For NIC RX rings connected to a VALE switch, an atomic_test_and_set()
  is also used to prevent multiple executions (the driver might indeed
  already guarantee this).
  For NIC TX rings connected to a VALE switch, the lock arbitrates
  access to the queue (both when allocating buffers and when pushing
  them out).

- *xsync() should be protected against initializations of the card.
  On FreeBSD most devices have the reset routine protected by
  a RING lock (ixgbe, igb, em) or core lock (re). lem is missing
  the RING protection on rx_reset(), this should be added.

  On linux there is an external lock on the tx path, which probably
  also arbitrates access to the reset routine. XXX to be revised

- a per-interface core_lock protecting access from the host stack
  while interfaces may be detached from netmap mode.
  XXX there should be no need for this lock if we detach the interfaces
  only while they are down.

--- VALE SWITCH ---

NMG_LOCK() serializes all modifications to switches and ports.
A switch cannot be deleted until all ports are gone.

For each switch, an SX lock (RWlock on linux) protects
deletion of ports. When configuring or deleting a port, the
lock is acquired in exclusive mode (after holding NMG_LOCK).
When forwarding, the lock is acquired in shared mode (without NMG_LOCK).
The lock is held throughout the entire forwarding cycle,
during which the thread may incur a page fault.
Hence it is important that sleepable shared locks are used.

On the rx ring, the per-port lock is grabbed initially to reserve
a number of slots in the ring, then the lock is released,
packets are copied from source to destination, and then
the lock is acquired again and the receive ring is updated.
(A similar thing is done on the tx ring for NIC and host stack
ports attached to the switch)
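
In pseudocode, that reserve/copy/publish pattern looks like this
(illustrative only; these are not the actual names used in netmap_vale.c):

	lock(dst_port->ring_lock);
	lease = reserve_slots(dst_ring, n);	/* just advances a pointer */
	unlock(dst_port->ring_lock);
	copy_packets(src_ring, lease, n);	/* may sleep on a page fault */
	lock(dst_port->ring_lock);
	publish_lease(dst_ring, lease);		/* make the slots visible */
	unlock(dst_port->ring_lock);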
 */

/* --- internals ----
 *
 * Roadmap to the code that implements the above.
 *
 * > 1. a process/thread issues one or more open() on /dev/netmap, to create
 * >    a select()able file descriptor on which events are reported.
 *
 * 	Internally, we allocate a netmap_priv_d structure, that will be
 * 	initialized on ioctl(NIOCREGIF). There is one netmap_priv_d
 * 	structure for each open().
 *
 * 	os-specific:
 * 	    FreeBSD: see netmap_open() (netmap_freebsd.c)
 * 	    linux:   see linux_netmap_open() (netmap_linux.c)
 *
 * > 2. on each descriptor, the process issues an ioctl() to identify
 * >    the interface that should report events to the file descriptor.
 *
 * 	Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0.
 * 	Most important things happen in netmap_get_na() and
 * 	netmap_do_regif(), called from there. Additional details can be
 * 	found in the comments above those functions.
 *
 * 	In all cases, this action creates/takes-a-reference-to a
 * 	netmap_*_adapter describing the port, and allocates a netmap_if
 * 	and all necessary netmap rings, filling them with netmap buffers.
 *
 * 	In this phase, the sync callbacks for each ring are set (these are used
 * 	in steps 5 and 6 below). The callbacks depend on the type of adapter.
 * 	The adapter creation/initialization code puts them in the
 * 	netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they
 * 	are copied from there to the netmap_kring's during netmap_do_regif(), by
 * 	the nm_krings_create() callback. All the nm_krings_create callbacks
 * 	actually call netmap_krings_create() to perform this and the other
 * 	common stuff. netmap_krings_create() also takes care of the host rings,
 * 	if needed, by setting their sync callbacks appropriately.
 *
 * 	Additional actions depend on the kind of netmap_adapter that has been
 * 	registered:
 *
 * 	- netmap_hw_adapter:		[netmap.c]
 * 	     This is a system netdev/ifp with native netmap support.
 * 	     The ifp is detached from the host stack by redirecting:
 * 	        - transmissions (from the network stack) to netmap_transmit()
 * 	        - receive notifications to the nm_notify() callback for
 * 	          this adapter. The callback is normally netmap_notify(), unless
 * 	          the ifp is attached to a bridge using bwrap, in which case it
 * 	          is netmap_bwrap_intr_notify().
 *
 * 	- netmap_generic_adapter:	[netmap_generic.c]
 * 	      A system netdev/ifp without native netmap support.
 *
 * 	(the decision about native/non native support is taken in
 * 	 netmap_get_hw_na(), called by netmap_get_na())
 *
 * 	- netmap_vp_adapter		[netmap_vale.c]
 * 	      Returned by netmap_get_bdg_na().
 * 	      This is a persistent or ephemeral VALE port. Ephemeral ports
 * 	      are created on the fly if they don't already exist, and are
 * 	      always attached to a bridge.
 * 	      Persistent VALE ports must be created separately, and then
 * 	      attached like normal NICs. The NIOCREGIF we are examining
 * 	      will find them only if they had previously been created and
 * 	      attached (see VALE_CTL below).
 *
 * 	- netmap_pipe_adapter		[netmap_pipe.c]
 * 	      Returned by netmap_get_pipe_na().
 * 	      Both pipe ends are created, if they didn't already exist.
 *
 * 	- netmap_monitor_adapter	[netmap_monitor.c]
 * 	      Returned by netmap_get_monitor_na().
 * 	      If successful, the nm_sync callbacks of the monitored adapter
 * 	      will be intercepted by the returned monitor.
 *
 * 	- netmap_bwrap_adapter		[netmap_vale.c]
 * 	      Cannot be obtained in this way, see VALE_CTL below
 *
 * 	os-specific:
 * 	    linux: we first go through linux_netmap_ioctl() to
 * 	           adapt the FreeBSD interface to the linux one.
 *
 * > 3. on each descriptor, the process issues an mmap() request to
 * >    map the shared memory region within the process' address space.
 * >    The list of interesting queues is indicated by a location in
 * >    the shared memory region.
 *
 * 	os-specific:
 * 	    FreeBSD: netmap_mmap_single (netmap_freebsd.c).
 * 	    linux:   linux_netmap_mmap (netmap_linux.c).
 *
 * > 4. using the functions in the netmap(4) userspace API, a process
 * >    can look up the occupation state of a queue, access memory buffers,
 * >    and retrieve received packets or enqueue packets to transmit.
 *
 * 	These actions do not involve the kernel.
 *
 * > 5. using some ioctl()s the process can synchronize the userspace view
 * >    of the queue with the actual status in the kernel. This includes both
 * >    receiving the notification of new packets, and transmitting new
 * >    packets on the output interface.
 *
 * 	These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
 * 	cases. They invoke the nm_sync callbacks on the netmap_kring
 * 	structures, as initialized in step 2 and maybe later modified
 * 	by a monitor. Monitors, however, will always call the original
 * 	callback before doing anything else.
 *
 * > 6. select() or poll() can be used to wait for events on individual
 * >    transmit or receive queues (or all queues for a given interface).
 *
 * 	Implemented in netmap_poll(). This will call the same nm_sync()
 * 	callbacks as in step 5 above.
 *
 * 	os-specific:
 * 	    linux: we first go through linux_netmap_poll() to adapt
 * 	           the FreeBSD interface to the linux one.
 *
 *   ----  VALE_CTL -----
 *
 *   VALE switches are controlled by issuing a NIOCREGIF with a non-null
 *   nr_cmd in the nmreq structure. These subcommands are handled by
 *   netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created
 *   and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF
 *   subcommands, respectively.
 *
 *   Any network interface known to the system (including a persistent VALE
 *   port) can be attached to a VALE switch by issuing the
 *   NETMAP_REQ_VALE_ATTACH command. After the attachment, persistent VALE ports
 *   look exactly like ephemeral VALE ports (as created in step 2 above). The
 *   attachment of other interfaces, instead, requires the creation of a
 *   netmap_bwrap_adapter. Moreover, the attached interface must be put in
 *   netmap mode. This may require the creation of a netmap_generic_adapter if
 *   we have no native support for the interface, or if generic adapters have
 *   been forced by sysctl.
 *
 *   Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(),
 *   called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach()
 *   callback. In the case of the bwrap, the callback creates the
 *   netmap_bwrap_adapter. The initialization of the bwrap is then
 *   completed by calling netmap_do_regif() on it, in the nm_bdg_ctl()
 *   callback (netmap_bwrap_bdg_ctl in netmap_vale.c).
 *   A generic adapter for the wrapped ifp will be created if needed, when
 *   netmap_get_bdg_na() calls netmap_get_hw_na().
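 *
 *   As an illustration (not code from this file), attaching interface em0
 *   to switch "valeA" with the nmreq_header interface looks roughly like
 *   the following; the interface name, the fd and the missing error
 *   handling are placeholders:
 *
 *	struct nmreq_header hdr;
 *	struct nmreq_vale_attach reg;
 *
 *	memset(&hdr, 0, sizeof(hdr));
 *	memset(&reg, 0, sizeof(reg));
 *	hdr.nr_version = NETMAP_API;
 *	hdr.nr_reqtype = NETMAP_REQ_VALE_ATTACH;
 *	strncpy(hdr.nr_name, "valeA:em0", sizeof(hdr.nr_name) - 1);
 *	hdr.nr_body = (uintptr_t)&reg;
 *	reg.reg.nr_mode = NR_REG_ALL_NIC;
 *	if (ioctl(fd, NIOCCTRL, &hdr) < 0)
 *		perror("NETMAP_REQ_VALE_ATTACH");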
 *
 *   ---- DATAPATHS -----
 *
 *              -= SYSTEM DEVICE WITH NATIVE SUPPORT =-
 *
 *    na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach()
 *
 *    - tx from netmap userspace:
 *       concurrently:
 *           1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == DEVICE_netmap_txsync()
 *           2) device interrupt handler
 *                na->nm_notify()  == netmap_notify()
 *    - rx from netmap userspace:
 *       concurrently:
 *           1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == DEVICE_netmap_rxsync()
 *           2) device interrupt handler
 *                na->nm_notify()  == netmap_notify()
 *    - rx from host stack
 *       concurrently:
 *           1) host stack
 *                netmap_transmit()
 *                  na->nm_notify  == netmap_notify()
 *           2) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == netmap_rxsync_from_host
 *                  netmap_rxsync_from_host(na, NULL, NULL)
 *    - tx to host stack
 *           ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *             kring->nm_sync() == netmap_txsync_to_host
 *               netmap_txsync_to_host(na)
 *                 FreeBSD: na->if_input() == ether_input()
 *                 linux: netif_rx() with NM_MAGIC_PRIORITY_RX
 *
 *              -= SYSTEM DEVICE WITH GENERIC SUPPORT =-
 *
 *    na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach()
 *
 *    - tx from netmap userspace:
 *       concurrently:
 *           1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *               kring->nm_sync() == generic_netmap_txsync()
 *                   nm_os_generic_xmit_frame()
 *                       linux:   dev_queue_xmit() with NM_MAGIC_PRIORITY_TX
 *                           ifp->ndo_start_xmit == generic_ndo_start_xmit()
 *                               gna->save_start_xmit == orig. dev. start_xmit
 *                       FreeBSD: na->if_transmit() == orig. dev if_transmit
 *           2) generic_mbuf_destructor()
 *                   na->nm_notify() == netmap_notify()
 *    - rx from netmap userspace:
 *           1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *               kring->nm_sync() == generic_netmap_rxsync()
 *           2) device driver
 *               generic_rx_handler()
 *                   mbq_safe_enqueue()
 *                   na->nm_notify() == netmap_notify()
 *    - rx from host stack
 *        FreeBSD: same as native
 *        Linux: same as native except:
 *           1) host stack
 *               dev_queue_xmit() without NM_MAGIC_PRIORITY_TX
 *                   ifp->ndo_start_xmit == generic_ndo_start_xmit()
 *                       na->nm_notify() == netmap_notify()
 *    - tx to host stack (same as native):
 *
 *
 *               -= VALE =-
 *
 *   INCOMING:
 *
 *      - VALE ports:
 *            ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == netmap_vp_txsync()
 *
 *      - system device with native support:
 *         from cable:
 *             interrupt
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
 *                    kring->nm_sync() == DEVICE_netmap_rxsync()
 *                    netmap_vp_txsync()
 *                       kring->nm_sync() == DEVICE_netmap_rxsync()
 *         from host stack:
 *             netmap_transmit()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
 *                    kring->nm_sync() == netmap_rxsync_from_host()
 *
 *      - system device with generic support:
 *         from device driver:
 *             generic_rx_handler()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
 *                    kring->nm_sync() == generic_netmap_rxsync()
 *                    netmap_vp_txsync()
 *                       kring->nm_sync() == generic_netmap_rxsync()
 *         from host stack:
 *             netmap_transmit()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
 *                    kring->nm_sync() == netmap_rxsync_from_host()
 *
 *      (all cases) --> nm_bdg_flush()
 *                         dest_na->nm_notify() == (see below)
 *
 *   OUTGOING:
 *
 *      - VALE ports:
 *         concurrently:
 *             1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                    kring->nm_sync() == netmap_vp_rxsync()
 *             2) from nm_bdg_flush()
 *                    na->nm_notify() == netmap_notify()
 *
 *      - system device with native support:
 *          to cable:
 *             na->nm_notify() == netmap_bwrap_notify()
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == DEVICE_netmap_txsync()
 *                 netmap_vp_rxsync()
 *          to host stack:
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == netmap_txsync_to_host
 *                 netmap_vp_rxsync_locked()
 *
 *      - system device with generic adapter:
 *          to device driver:
 *             na->nm_notify() == netmap_bwrap_notify()
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == generic_netmap_txsync()
 *                 netmap_vp_rxsync()
 *          to host stack:
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == netmap_txsync_to_host
 *                 netmap_vp_rxsync_locked()
 *
 */

/*
 * OS-specific code that is used only within this file.
 * Other OS-specific code that must be accessed by drivers
 * is present in netmap_kern.h
 */
#if defined(__FreeBSD__)
#include <sys/cdefs.h> /* prerequisite */
#include <sys/types.h>
#include <sys/errno.h>
#include <sys/param.h>	/* defines used in kernel.h */
#include <sys/kernel.h>	/* types used in module initialization */
#include <sys/conf.h>	/* cdevsw struct, UID, GID */
#include <sys/filio.h>	/* FIONBIO */
#include <sys/sockio.h>
#include <sys/socketvar.h>	/* struct socket */
#include <sys/malloc.h>
#include <sys/poll.h>
#include <sys/rwlock.h>
#include <sys/socket.h> /* sockaddrs */
#include <sys/selinfo.h>
#include <sys/sysctl.h>
#include <sys/jail.h>
#include <net/vnet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/bpf.h>		/* BIOCIMMEDIATE */
#include <machine/bus.h>	/* bus_dmamap_* */
#include <sys/endian.h>
#include <sys/refcount.h>
#include <net/ethernet.h>	/* ETHER_BPF_MTAP */

#elif defined(linux)

#include "bsd_glue.h"

#elif defined(__APPLE__)

#warning OSX support is only partial
#include "osx_glue.h"

#elif defined (_WIN32)

#include "win_glue.h"

#else

#error	Unsupported platform

#endif /* unsupported */

#include <net/netmap.h>
#include <dev/netmap/netmap_kern.h>
#include <dev/netmap/netmap_mem2.h>

/* user-controlled variables */
int netmap_verbose;
#ifdef CONFIG_NETMAP_DEBUG
int netmap_debug;
#endif /* CONFIG_NETMAP_DEBUG */

static int netmap_no_timestamp; /* don't timestamp on rxsync */
int netmap_no_pendintr = 1;
int netmap_txsync_retry = 2;
static int netmap_fwd = 0;	/* force transparent forwarding */

/*
 * netmap_admode selects the netmap mode to use.
 * Invalid values are reset to NETMAP_ADMODE_BEST
 */
enum {	NETMAP_ADMODE_BEST = 0,	/* use native, fallback to generic */
	NETMAP_ADMODE_NATIVE,	/* either native or none */
	NETMAP_ADMODE_GENERIC,	/* force generic */
	NETMAP_ADMODE_LAST };
static int netmap_admode = NETMAP_ADMODE_BEST;

/* netmap_generic_mit controls mitigation of RX notifications for
 * the generic netmap adapter. The value is a time interval in
 * nanoseconds. */
int netmap_generic_mit = 100*1000;

/* We use by default netmap-aware qdiscs with generic netmap adapters,
 * even if there can be a little performance hit with hardware NICs.
 * However, using the qdisc is the safer approach, for two reasons:
 * 1) it prevents non-fifo qdiscs from breaking the TX notification
 *    scheme, which is based on mbuf destructors when txqdisc is
 *    used;
 * 2) it makes it possible to transmit over software devices that
 *    change skb->dev, like bridge, veth, ...
 *
 * In any case, users looking for the best performance should
 * use native adapters.
 */
int netmap_generic_txqdisc = 1;

/* Default number of slots and queues for generic adapters. */
int netmap_generic_ringsize = 1024;
int netmap_generic_rings = 1;

/* Non-zero to enable checksum offloading in NIC drivers */
int netmap_generic_hwcsum = 0;

/* Non-zero if ptnet devices are allowed to use virtio-net headers. */
int ptnet_vnet_hdr = 1;
/*
 * SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated
 * in some other operating systems
 */
SYSBEGIN(main_init);

SYSCTL_DECL(_dev_netmap);
SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
		CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
#ifdef CONFIG_NETMAP_DEBUG
SYSCTL_INT(_dev_netmap, OID_AUTO, debug,
		CTLFLAG_RW, &netmap_debug, 0, "Debug messages");
#endif /* CONFIG_NETMAP_DEBUG */
SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
		CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr, CTLFLAG_RW, &netmap_no_pendintr,
		0, "Always look for new received packets.");
SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
		&netmap_txsync_retry, 0, "Number of txsync loops in bridge's flush.");

SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0,
		"Force NR_FORWARD mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0,
		"Adapter mode. 0 selects the best option available, "
		"1 forces native adapter, 2 forces emulated adapter");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_hwcsum, CTLFLAG_RW, &netmap_generic_hwcsum,
		0, "Hardware checksums. 0 to disable checksum generation by the NIC (default), "
		"1 to enable checksum generation by the NIC");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit,
		0, "RX notification interval in nanoseconds");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW,
		&netmap_generic_ringsize, 0,
		"Number of per-ring slots for emulated netmap mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW,
		&netmap_generic_rings, 0,
		"Number of TX/RX queues for emulated netmap adapters");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW,
		&netmap_generic_txqdisc, 0, "Use qdisc for generic adapters");
SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr,
		0, "Allow ptnet devices to use virtio-net headers");

SYSEND;
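
/*
 * For example (illustrative only), emulated adapters can be forced from
 * the command line with:
 *	FreeBSD:  sysctl dev.netmap.admode=2
 *	Linux:    echo 2 > /sys/module/netmap/parameters/admode
 */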

NMG_LOCK_T	netmap_global_lock;

/*
 * mark the ring as stopped, and run through the locks
 * to make sure other users get to see it.
 * stopped must be either NM_KR_STOPPED (for unbounded stop)
 * or NM_KR_LOCKED (brief stop for mutual exclusion purposes)
 */
static void
netmap_disable_ring(struct netmap_kring *kr, int stopped)
{
	nm_kr_stop(kr, stopped);
	// XXX check if nm_kr_stop is sufficient
	mtx_lock(&kr->q_lock);
	mtx_unlock(&kr->q_lock);
	nm_kr_put(kr);
}

/* stop or enable a single ring */
void
netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
{
	if (stopped)
		netmap_disable_ring(NMR(na, t)[ring_id], stopped);
	else
		NMR(na, t)[ring_id]->nkr_stopped = 0;
}


/* stop or enable all the rings of na */
void
netmap_set_all_rings(struct netmap_adapter *na, int stopped)
{
	int i;
	enum txrx t;

	if (!nm_netmap_on(na))
		return;

	for_rx_tx(t) {
		for (i = 0; i < netmap_real_rings(na, t); i++) {
			netmap_set_ring(na, i, t, stopped);
		}
	}
}
/*
 * Convenience function used in drivers. Waits for current txsync()s/rxsync()s
 * to finish and prevents any new one from starting. Call this before turning
 * netmap mode off, or before removing the hardware rings (e.g., on module
 * unload).
 */
void
netmap_disable_all_rings(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		netmap_set_all_rings(NA(ifp), NM_KR_STOPPED);
	}
}

/*
 * Convenience function used in drivers. Re-enables rxsync and txsync on the
 * adapter's rings. In Linux drivers, this should be placed near each
 * napi_enable().
 */
void
netmap_enable_all_rings(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		netmap_set_all_rings(NA(ifp), 0 /* enabled */);
	}
}

void
netmap_make_zombie(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		struct netmap_adapter *na = NA(ifp);
		netmap_set_all_rings(na, NM_KR_LOCKED);
		na->na_flags |= NAF_ZOMBIE;
		netmap_set_all_rings(na, 0);
	}
}

void
netmap_undo_zombie(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		struct netmap_adapter *na = NA(ifp);
		if (na->na_flags & NAF_ZOMBIE) {
			netmap_set_all_rings(na, NM_KR_LOCKED);
			na->na_flags &= ~NAF_ZOMBIE;
			netmap_set_all_rings(na, 0);
		}
	}
}
/*
 * generic bound_checking function
 */
u_int
nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg)
{
	u_int oldv = *v;
	const char *op = NULL;

	if (oldv < lo) {
		*v = dflt;
		op = "Bump";
	} else if (oldv > hi) {
		*v = dflt;
		op = "Clamp";
	}
	if (op && msg)
		nm_prinf("%s %s to %d (was %d)", op, msg, *v, oldv);
	return *v;
}


/*
 * packet-dump function, user-supplied or static buffer.
 * The destination buffer must be at least 30+4*len.
 */
const char *
nm_dump_buf(char *p, int len, int lim, char *dst)
{
	static char _dst[8192];
	int i, j, i0;
	static char hex[] ="0123456789abcdef";
	char *o;	/* output position */

#define P_HI(x)	hex[((x) & 0xf0)>>4]
#define P_LO(x)	hex[((x) & 0xf)]
#define P_C(x)	((x) >= 0x20 && (x) <= 0x7e ? (x) : '.')
	if (!dst)
		dst = _dst;
	if (lim <= 0 || lim > len)
		lim = len;
	o = dst;
	sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim);
	o += strlen(o);
	/* hexdump routine */
	for (i = 0; i < lim; ) {
		sprintf(o, "%5d: ", i);
		o += strlen(o);
		memset(o, ' ', 48);
		i0 = i;
		for (j=0; j < 16 && i < lim; i++, j++) {
			o[j*3] = P_HI(p[i]);
			o[j*3+1] = P_LO(p[i]);
		}
		i = i0;
		for (j=0; j < 16 && i < lim; i++, j++)
			o[j + 48] = P_C(p[i]);
		o[j+48] = '\n';
		o += j+49;
	}
	*o = '\0';
#undef P_HI
#undef P_LO
#undef P_C
	return dst;
}
/*
 * Fetch configuration from the device, to cope with dynamic
 * reconfigurations after loading the module.
 */
/* call with NMG_LOCK held */
int
netmap_update_config(struct netmap_adapter *na)
{
	struct nm_config_info info;

	bzero(&info, sizeof(info));
	if (na->nm_config == NULL ||
	    na->nm_config(na, &info)) {
		/* take whatever we had at init time */
		info.num_tx_rings = na->num_tx_rings;
		info.num_tx_descs = na->num_tx_desc;
		info.num_rx_rings = na->num_rx_rings;
		info.num_rx_descs = na->num_rx_desc;
		info.rx_buf_maxsize = na->rx_buf_maxsize;
	}

	if (na->num_tx_rings == info.num_tx_rings &&
	    na->num_tx_desc == info.num_tx_descs &&
	    na->num_rx_rings == info.num_rx_rings &&
	    na->num_rx_desc == info.num_rx_descs &&
	    na->rx_buf_maxsize == info.rx_buf_maxsize)
		return 0; /* nothing changed */
	if (na->active_fds == 0) {
		na->num_tx_rings = info.num_tx_rings;
		na->num_tx_desc = info.num_tx_descs;
		na->num_rx_rings = info.num_rx_rings;
		na->num_rx_desc = info.num_rx_descs;
		na->rx_buf_maxsize = info.rx_buf_maxsize;
		if (netmap_verbose)
			nm_prinf("configuration changed for %s: txring %d x %d, "
				"rxring %d x %d, rxbufsz %d",
				na->name, na->num_tx_rings, na->num_tx_desc,
				na->num_rx_rings, na->num_rx_desc, na->rx_buf_maxsize);
		return 0;
	}
	nm_prerr("WARNING: configuration changed for %s while active: "
		"txring %d x %d, rxring %d x %d, rxbufsz %d",
		na->name, info.num_tx_rings, info.num_tx_descs,
		info.num_rx_rings, info.num_rx_descs,
		info.rx_buf_maxsize);
	return 1;
}

/* nm_sync callbacks for the host rings */
static int netmap_txsync_to_host(struct netmap_kring *kring, int flags);
static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags);

/* create the krings array and initialize the fields common to all adapters.
 * The array layout is this:
 *
 *                    +----------+
 * na->tx_rings ----->|          | \
 *                    |          |  } na->num_tx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host tx kring
 * na->rx_rings ----> +----------+
 *                    |          | \
 *                    |          |  } na->num_rx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host rx kring
 *                    +----------+
 * na->tailroom ----->|          | \
 *                    |          |  } tailroom bytes
 *                    |          | /
 *                    +----------+
 *
 * Note: for compatibility, host krings are created even when not needed.
 * The tailroom space is currently used by vale ports for allocating leases.
 */
/* call with NMG_LOCK held */
int
netmap_krings_create(struct netmap_adapter *na, u_int tailroom)
{
	u_int i, len, ndesc;
	struct netmap_kring *kring;
	u_int n[NR_TXRX];
	enum txrx t;

	if (na->tx_rings != NULL) {
		if (netmap_debug & NM_DEBUG_ON)
			nm_prerr("warning: krings were already created");
		return 0;
	}

	/* account for the (possibly fake) host rings */
	n[NR_TX] = netmap_all_rings(na, NR_TX);
	n[NR_RX] = netmap_all_rings(na, NR_RX);

	len = (n[NR_TX] + n[NR_RX]) *
		(sizeof(struct netmap_kring) + sizeof(struct netmap_kring *))
		+ tailroom;

	na->tx_rings = nm_os_malloc((size_t)len);
	if (na->tx_rings == NULL) {
		nm_prerr("Cannot allocate krings");
		return ENOMEM;
	}
	na->rx_rings = na->tx_rings + n[NR_TX];
	na->tailroom = na->rx_rings + n[NR_RX];

	/* link the krings in the krings array */
	kring = (struct netmap_kring *)((char *)na->tailroom + tailroom);
	for (i = 0; i < n[NR_TX] + n[NR_RX]; i++) {
		na->tx_rings[i] = kring;
		kring++;
	}

	/*
	 * All fields in krings are 0 except the one initialized below.
	 * but better be explicit on important kring fields.
	 */
	for_rx_tx(t) {
		ndesc = nma_get_ndesc(na, t);
		for (i = 0; i < n[t]; i++) {
			kring = NMR(na, t)[i];
			bzero(kring, sizeof(*kring));
			kring->na = na;
			kring->notify_na = na;
			kring->ring_id = i;
			kring->tx = t;
			kring->nkr_num_slots = ndesc;
			kring->nr_mode = NKR_NETMAP_OFF;
			kring->nr_pending_mode = NKR_NETMAP_OFF;
			if (i < nma_get_nrings(na, t)) {
				kring->nm_sync = (t == NR_TX ? na->nm_txsync : na->nm_rxsync);
			} else {
				if (!(na->na_flags & NAF_HOST_RINGS))
					kring->nr_kflags |= NKR_FAKERING;
				kring->nm_sync = (t == NR_TX ?
						netmap_txsync_to_host:
						netmap_rxsync_from_host);
			}
			kring->nm_notify = na->nm_notify;
			kring->rhead = kring->rcur = kring->nr_hwcur = 0;
			/*
			 * IMPORTANT: Always keep one slot empty.
			 */
			kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0);
			snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name,
					nm_txrx2str(t), i);
			ND("ktx %s h %d c %d t %d",
				kring->name, kring->rhead, kring->rcur, kring->rtail);
			mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF);
			nm_os_selinfo_init(&kring->si);
		}
		nm_os_selinfo_init(&na->si[t]);
	}

	return 0;
}

/* undo the actions performed by netmap_krings_create */
/* call with NMG_LOCK held */
void
netmap_krings_delete(struct netmap_adapter *na)
{
	struct netmap_kring **kring = na->tx_rings;
	enum txrx t;

	if (na->tx_rings == NULL) {
		if (netmap_debug & NM_DEBUG_ON)
			nm_prerr("warning: krings were already deleted");
		return;
	}

	for_rx_tx(t)
		nm_os_selinfo_uninit(&na->si[t]);

	/* we rely on the krings layout described above */
	for ( ; kring != na->tailroom; kring++) {
		mtx_destroy(&(*kring)->q_lock);
		nm_os_selinfo_uninit(&(*kring)->si);
	}
	nm_os_free(na->tx_rings);
	na->tx_rings = na->rx_rings = na->tailroom = NULL;
}

/*
 * Destructor for NIC ports. They also have an mbuf queue
 * on the rings connected to the host so we need to purge
 * them first.
 */
/* call with NMG_LOCK held */
void
netmap_hw_krings_delete(struct netmap_adapter *na)
{
	u_int lim = netmap_real_rings(na, NR_RX), i;

	for (i = nma_get_nrings(na, NR_RX); i < lim; i++) {
		struct mbq *q = &NMR(na, NR_RX)[i]->rx_queue;
		ND("destroy sw mbq with len %d", mbq_len(q));
		mbq_purge(q);
		mbq_safe_fini(q);
	}
	netmap_krings_delete(na);
}

static void
netmap_mem_drop(struct netmap_adapter *na)
{
	int last = netmap_mem_deref(na->nm_mem, na);
	/* if the native allocator had been overridden on regif,
	 * restore it now and drop the temporary one
	 */
	if (last && na->nm_mem_prev) {
		netmap_mem_put(na->nm_mem);
		na->nm_mem = na->nm_mem_prev;
		na->nm_mem_prev = NULL;
	}
}

/*
 * Undo everything that was done in netmap_do_regif(). In particular,
 * call nm_register(ifp,0) to stop netmap mode on the interface and
 * revert to normal operation.
 */
/* call with NMG_LOCK held */
static void netmap_unset_ringid(struct netmap_priv_d *);
static void netmap_krings_put(struct netmap_priv_d *);
void
netmap_do_unregif(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	NMG_LOCK_ASSERT();
	na->active_fds--;
	/* unset nr_pending_mode and possibly release exclusive mode */
	netmap_krings_put(priv);

	/* XXX check whether we have to do something with monitor
	 * when rings change nr_mode. */
	if (na->active_fds <= 0) {
		/* walk through all the rings and tell any monitor
		 * that the port is going to exit netmap mode
		 */
		netmap_monitor_stop(na);
	}

	if (na->active_fds <= 0 || nm_kring_pending(priv)) {
		na->nm_register(na, 0);
	}

	/* delete rings and buffers that are no longer needed */
	netmap_mem_rings_delete(na);

	if (na->active_fds <= 0) {	/* last instance */
		/*
		 * (TO CHECK) We enter here
		 * when the last reference to this file descriptor goes
		 * away. This means we cannot have any pending poll()
		 * or interrupt routine operating on the structure.
		 * XXX The file may be closed in a thread while
		 * another thread is using it.
		 * Linux keeps the file opened until the last reference
		 * by any outstanding ioctl/poll or mmap is gone.
		 * FreeBSD does not track mmap()s (but we do) and
		 * wakes up any sleeping poll(). Need to check what
		 * happens if the close() occurs while a concurrent
		 * syscall is running.
		 */
		if (netmap_debug & NM_DEBUG_ON)
			nm_prinf("deleting last instance for %s", na->name);

		if (nm_netmap_on(na)) {
			nm_prerr("BUG: netmap on while going to delete the krings");
		}

		na->nm_krings_delete(na);
	}

	/* possibly decrement counter of tx_si/rx_si users */
	netmap_unset_ringid(priv);
	/* delete the nifp */
	netmap_mem_if_delete(na, priv->np_nifp);
	/* drop the allocator */
	netmap_mem_drop(na);
	/* mark the priv as unregistered */
	priv->np_na = NULL;
	priv->np_nifp = NULL;
}
struct netmap_priv_d*
netmap_priv_new(void)
{
	struct netmap_priv_d *priv;

	priv = nm_os_malloc(sizeof(struct netmap_priv_d));
	if (priv == NULL)
		return NULL;
	priv->np_refs = 1;
	nm_os_get_module();
	return priv;
}

/*
 * Destructor of the netmap_priv_d, called when the fd is closed
 * Action: undo all the things done by NIOCREGIF,
 * On FreeBSD we need to track whether there are active mmap()s,
 * and we use np_active_mmaps for that. On linux, the field is always 0.
 * Return: 1 if we can free priv, 0 otherwise.
 */
/* call with NMG_LOCK held */
void
netmap_priv_delete(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	/* number of active references to this fd */
	if (--priv->np_refs > 0) {
		return;
	}
	nm_os_put_module();
	if (na) {
		netmap_do_unregif(priv);
	}
	netmap_unget_na(na, priv->np_ifp);
	bzero(priv, sizeof(*priv));	/* for safety */
	nm_os_free(priv);
}

/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
	struct netmap_priv_d *priv = data;

	NMG_LOCK();
	netmap_priv_delete(priv);
	NMG_UNLOCK();
}

/*
 * Handlers for synchronization of the rings from/to the host stack.
 * These are associated to a network interface and are just another
 * ring pair managed by userspace.
 *
 * Netmap also supports transparent forwarding (NS_FORWARD and NR_FORWARD
 * flags), as follows:
 *
 * - Before releasing buffers on hw RX rings, the application can mark
 *   them with the NS_FORWARD flag. During the next RXSYNC or poll(), they
 *   will be forwarded to the host stack, similarly to what happened if
 *   the application moved them to the host TX ring.
 *
 * - Before releasing buffers on the host RX ring, the application can
 *   mark them with the NS_FORWARD flag. During the next RXSYNC or poll(),
 *   they will be forwarded to the hw TX rings, saving the application
 *   from doing the same task in user-space.
 *
 * Transparent forwarding can be enabled per-ring, by setting the NR_FORWARD
 * flag, or globally with the netmap_fwd sysctl.
 *
 * The transfer NIC --> host is relatively easy, just encapsulate
 * into mbufs and we are done. The host --> NIC side is slightly
 * harder because there might not be room in the tx ring so it
 * might take a while before releasing the buffer.
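 *
 * As a purely illustrative sketch (not code from this file), the second
 * case looks like this from userspace, where hostring is the host RX ring:
 *
 *	struct netmap_slot *slot = &hostring->slot[hostring->cur];
 *	slot->flags |= NS_FORWARD;	// request host --> NIC forwarding
 *	hostring->cur = nm_ring_next(hostring, hostring->cur);
 *	hostring->head = hostring->cur;	// release the slot
 *	// the next NIOCRXSYNC or poll() moves it to a hw TX ring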
 */


/*
 * Pass a whole queue of mbufs to the host stack as coming from 'dst'
 * We do not need to lock because the queue is private.
 * After this call the queue is empty.
 */
static void
netmap_send_up(struct ifnet *dst, struct mbq *q)
{
	struct mbuf *m;
	struct mbuf *head = NULL, *prev = NULL;

	/* Send packets up, outside the lock; head/prev machinery
	 * is only useful for Windows. */
	while ((m = mbq_dequeue(q)) != NULL) {
		if (netmap_debug & NM_DEBUG_HOST)
			nm_prinf("sending up pkt %p size %d", m, MBUF_LEN(m));
		prev = nm_os_send_up(dst, m, prev);
		if (head == NULL)
			head = prev;
	}
	if (head)
		nm_os_send_up(dst, NULL, head);
	mbq_fini(q);
}


/*
 * Scan the buffers from hwcur to ring->head, and put a copy of those
 * marked NS_FORWARD (or all of them if forced) into a queue of mbufs.
 * Drop remaining packets in the unlikely event
 * of an mbuf shortage.
 */
static void
netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
{
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	u_int n;
	struct netmap_adapter *na = kring->na;

	for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) {
		struct mbuf *m;
		struct netmap_slot *slot = &kring->ring->slot[n];

		if ((slot->flags & NS_FORWARD) == 0 && !force)
			continue;
		if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) {
			RD(5, "bad pkt at %d len %d", n, slot->len);
			continue;
		}
		slot->flags &= ~NS_FORWARD; // XXX needed ?
		/* XXX TODO: adapt to the case of a multisegment packet */
		m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL);

		if (m == NULL)
			break;
		mbq_enqueue(q, m);
	}
}
static inline int
_nm_may_forward(struct netmap_kring *kring)
{
	return	((netmap_fwd || kring->ring->flags & NR_FORWARD) &&
		 kring->na->na_flags & NAF_HOST_RINGS &&
		 kring->tx == NR_RX);
}

static inline int
nm_may_forward_up(struct netmap_kring *kring)
{
	return	_nm_may_forward(kring) &&
		 kring->ring_id != kring->na->num_rx_rings;
}

static inline int
nm_may_forward_down(struct netmap_kring *kring, int sync_flags)
{
	return	_nm_may_forward(kring) &&
		 (sync_flags & NAF_CAN_FORWARD_DOWN) &&
		 kring->ring_id == kring->na->num_rx_rings;
}

/*
 * Send to the NIC rings packets marked NS_FORWARD between
 * kring->nr_hwcur and kring->rhead.
 * Called under kring->rx_queue.lock on the sw rx ring.
 *
 * It can only be called if the user opened all the TX hw rings,
 * see NAF_CAN_FORWARD_DOWN flag.
 * We can touch the TX netmap rings (slots, head and cur) since
 * we are in poll/ioctl system call context, and the application
 * is not supposed to touch the ring (using a different thread)
 * during the execution of the system call.
 */
static u_int
netmap_sw_to_nic(struct netmap_adapter *na)
{
	struct netmap_kring *kring = na->rx_rings[na->num_rx_rings];
	struct netmap_slot *rxslot = kring->ring->slot;
	u_int i, rxcur = kring->nr_hwcur;
	u_int const head = kring->rhead;
	u_int const src_lim = kring->nkr_num_slots - 1;
	u_int sent = 0;

	/* scan rings to find space, then fill as much as possible */
	for (i = 0; i < na->num_tx_rings; i++) {
		struct netmap_kring *kdst = na->tx_rings[i];
		struct netmap_ring *rdst = kdst->ring;
		u_int const dst_lim = kdst->nkr_num_slots - 1;

		/* XXX do we trust ring or kring->rcur,rtail ? */
		for (; rxcur != head && !nm_ring_empty(rdst);
		     rxcur = nm_next(rxcur, src_lim) ) {
			struct netmap_slot *src, *dst, tmp;
			u_int dst_head = rdst->head;

			src = &rxslot[rxcur];
			if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd)
				continue;

			sent++;

			dst = &rdst->slot[dst_head];

			tmp = *src;

			src->buf_idx = dst->buf_idx;
			src->flags = NS_BUF_CHANGED;

			dst->buf_idx = tmp.buf_idx;
			dst->len = tmp.len;
			dst->flags = NS_BUF_CHANGED;

			rdst->head = rdst->cur = nm_next(dst_head, dst_lim);
		}
		/* if (sent) XXX txsync ? it would be just an optimization */
	}
	return sent;
}

/*
 * netmap_txsync_to_host() passes packets up. We are called from a
 * system call in user process context, and the only contention
 * can be among multiple user threads erroneously calling
 * this routine concurrently.
 */
static int
netmap_txsync_to_host(struct netmap_kring *kring, int flags)
{
	struct netmap_adapter *na = kring->na;
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	struct mbq q;

	/* Take packets from hwcur to head and pass them up.
	 * Force hwcur = head since netmap_grab_packets() stops at head
	 */
	mbq_init(&q);
	netmap_grab_packets(kring, &q, 1 /* force */);
	ND("have %d pkts in queue", mbq_len(&q));
	kring->nr_hwcur = head;
	kring->nr_hwtail = head + lim;
	if (kring->nr_hwtail > lim)
		kring->nr_hwtail -= lim + 1;

	netmap_send_up(na->ifp, &q);
	return 0;
}

/*
 * rxsync backend for packets coming from the host stack.
 * They have been put in kring->rx_queue by netmap_transmit().
 * We protect access to the kring using kring->rx_queue.lock
 *
 * also moves to the nic hw rings any packet the user has marked
 * for transparent-mode forwarding, then sets the NR_FORWARD
 * flag in the kring to let the caller push them out
 */
static int
netmap_rxsync_from_host(struct netmap_kring *kring, int flags)
{
	struct netmap_adapter *na = kring->na;
	struct netmap_ring *ring = kring->ring;
	u_int nm_i, n;
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	int ret = 0;
	struct mbq *q = &kring->rx_queue, fq;

	mbq_init(&fq); /* fq holds packets to be freed */

	mbq_lock(q);

	/* First part: import newly received packets */
	n = mbq_len(q);
	if (n) { /* grab packets from the queue */
		struct mbuf *m;
		uint32_t stop_i;

		nm_i = kring->nr_hwtail;
		stop_i = nm_prev(kring->nr_hwcur, lim);
		while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) {
			int len = MBUF_LEN(m);
			struct netmap_slot *slot = &ring->slot[nm_i];

			m_copydata(m, 0, len, NMB(na, slot));
			ND("nm %d len %d", nm_i, len);
			if (netmap_debug & NM_DEBUG_HOST)
				nm_prinf("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL));

			slot->len = len;
			slot->flags = 0;
			nm_i = nm_next(nm_i, lim);
			mbq_enqueue(&fq, m);
		}
		kring->nr_hwtail = nm_i;
	}

	/*
	 * Second part: skip past packets that userspace has released.
	 */
	nm_i = kring->nr_hwcur;
	if (nm_i != head) { /* something was released */
		if (nm_may_forward_down(kring, flags)) {
			ret = netmap_sw_to_nic(na);
			if (ret > 0) {
				kring->nr_kflags |= NR_FORWARD;
				ret = 0;
			}
		}
		kring->nr_hwcur = head;
	}

	mbq_unlock(q);

	mbq_purge(&fq);
	mbq_fini(&fq);

	return ret;
}

/* Get a netmap adapter for the port.
 *
 * If it is possible to satisfy the request, return 0
 * with *na containing the netmap adapter found.
 * Otherwise return an error code, with *na containing NULL.
 *
 * When the port is attached to a bridge, we always return
 * EBUSY.
 * Otherwise, if the port is already bound to a file descriptor,
 * then we unconditionally return the existing adapter into *na.
 * In all the other cases, we return (into *na) either native,
 * generic or NULL, according to the following table:
 *
 *					native netmap support
 *	active_fds   dev.netmap.admode         YES     NO
 *	-------------------------------------------------------
 *	    >0              *                 NA(ifp) NA(ifp)
 *
 *	     0        NETMAP_ADMODE_BEST      NATIVE  GENERIC
 *	     0        NETMAP_ADMODE_NATIVE    NATIVE   NULL
 *	     0        NETMAP_ADMODE_GENERIC   GENERIC GENERIC
 *
 */
static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */
static int
netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na)
{
	/* generic support */
	int i = netmap_admode;	/* Take a snapshot. */
	struct netmap_adapter *prev_na;
	int error = 0;

	*na = NULL; /* default */

	/* reset in case of invalid value */
	if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST)
		i = netmap_admode = NETMAP_ADMODE_BEST;

	if (NM_NA_VALID(ifp)) {
		prev_na = NA(ifp);
		/* If an adapter already exists, return it if
		 * there are active file descriptors or if
		 * netmap is not forced to use generic
		 * adapters.
		 */
		if (NETMAP_OWNED_BY_ANY(prev_na)
			|| i != NETMAP_ADMODE_GENERIC
			|| prev_na->na_flags & NAF_FORCE_NATIVE
			/* ugly, but we cannot allow an adapter switch
			 * if some pipe is referring to this one
			 */
			|| prev_na->na_next_pipe > 0
		   ) {
			*na = prev_na;
			goto assign_mem;
		}
	}

	/* If there isn't native support and netmap is not allowed
	 * to use generic adapters, we cannot satisfy the request.
	 */
	if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE)
		return EOPNOTSUPP;

	/* Otherwise, create a generic adapter and return it,
	 * saving the previously used netmap adapter, if any.
	 *
	 * Note that here 'prev_na', if not NULL, MUST be a
	 * native adapter, and CANNOT be a generic one. This is
	 * true because generic adapters are created on demand, and
	 * destroyed when not used anymore. Therefore, if the adapter
	 * currently attached to an interface 'ifp' is generic, it
	 * must be that
	 * (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))).
	 * Consequently, if NA(ifp) is generic, we will enter one of
	 * the branches above. This ensures that we never override
	 * a generic adapter with another generic adapter.
	 */
	error = generic_netmap_attach(ifp);
	if (error)
		return error;

	*na = NA(ifp);

assign_mem:
	if (nmd != NULL && !((*na)->na_flags & NAF_MEM_OWNER) &&
	    (*na)->active_fds == 0 && ((*na)->nm_mem != nmd)) {
		(*na)->nm_mem_prev = (*na)->nm_mem;
		(*na)->nm_mem = netmap_mem_get(nmd);
	}

	return 0;
}
/*
 * MUST BE CALLED UNDER NMG_LOCK()
 *
 * Get a refcounted reference to a netmap adapter attached
 * to the interface specified by req.
 * This is always called in the execution of an ioctl().
 *
 * Return ENXIO if the interface specified by the request does
 * not exist, ENOTSUP if netmap is not supported by the interface,
 * EBUSY if the interface is already attached to a bridge,
 * EINVAL if parameters are invalid, ENOMEM if needed resources
 * could not be allocated.
 * If successful, hold a reference to the netmap adapter.
 *
 * If the interface specified by req is a system one, also keep
 * a reference to it and return a valid *ifp.
 */
int
netmap_get_na(struct nmreq_header *hdr,
	      struct netmap_adapter **na, struct ifnet **ifp,
	      struct netmap_mem_d *nmd, int create)
{
	struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body;
	int error = 0;
	struct netmap_adapter *ret = NULL;
	int nmd_ref = 0;

	*na = NULL;     /* default return value */
	*ifp = NULL;

	if (hdr->nr_reqtype != NETMAP_REQ_REGISTER) {
		return EINVAL;
	}

	if (req->nr_mode == NR_REG_PIPE_MASTER ||
			req->nr_mode == NR_REG_PIPE_SLAVE) {
		/* Do not accept deprecated pipe modes. */
		nm_prerr("Deprecated pipe nr_mode, use xx{yy or xx}yy syntax");
		return EINVAL;
	}

	NMG_LOCK_ASSERT();

	/* if the request contains a memid, try to find the
	 * corresponding memory region
	 */
	if (nmd == NULL && req->nr_mem_id) {
		nmd = netmap_mem_find(req->nr_mem_id);
		if (nmd == NULL)
			return EINVAL;
		/* keep the reference */
		nmd_ref = 1;
	}

	/* We cascade through all possible types of netmap adapter.
	 * All netmap_get_*_na() functions return an error and an na,
	 * with the following combinations:
	 *
	 * error    na
	 *   0     NULL	type doesn't match
	 *  !0     NULL	type matches, but na creation/lookup failed
	 *   0    !NULL	type matches and na created/found
	 *  !0    !NULL	impossible
	 */
	error = netmap_get_null_na(hdr, na, nmd, create);
	if (error || *na != NULL)
		goto out;

	/* try to see if this is a monitor port */
	error = netmap_get_monitor_na(hdr, na, nmd, create);
	if (error || *na != NULL)
		goto out;

	/* try to see if this is a pipe port */
	error = netmap_get_pipe_na(hdr, na, nmd, create);
	if (error || *na != NULL)
		goto out;

	/* try to see if this is a bridge port */
	error = netmap_get_vale_na(hdr, na, nmd, create);
	if (error)
		goto out;

	if (*na != NULL) /* valid match in netmap_get_bdg_na() */
		goto out;

	/*
	 * This must be a hardware na, lookup the name in the system.
	 * Note that by hardware we actually mean "it shows up in ifconfig".
	 * This may still be a tap, a veth/epair, or even a
	 * persistent VALE port.
	 */
	*ifp = ifunit_ref(hdr->nr_name);
	if (*ifp == NULL) {
		error = ENXIO;
		goto out;
	}

	error = netmap_get_hw_na(*ifp, nmd, &ret);
	if (error)
		goto out;

	*na = ret;
	netmap_adapter_get(ret);

out:
	if (error) {
		if (ret)
			netmap_adapter_put(ret);
		if (*ifp) {
			if_rele(*ifp);
			*ifp = NULL;
		}
	}
	if (nmd_ref)
		netmap_mem_put(nmd);

	return error;
}

/* undo netmap_get_na() */
void
netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp)
{
	if (ifp != NULL)
		if_rele(ifp);
	if (na != NULL)
		netmap_adapter_put(na);
}

#define NM_FAIL_ON(t) do {						\
	if (unlikely(t)) {						\
		RD(5, "%s: fail '" #t "' "				\
			"h %d c %d t %d "				\
			"rh %d rc %d rt %d "				\
			"hc %d ht %d",					\
			kring->name,					\
			head, cur, ring->tail,				\
			kring->rhead, kring->rcur, kring->rtail,	\
			kring->nr_hwcur, kring->nr_hwtail);		\
		return kring->nkr_num_slots;				\
	}								\
} while (0)

/*
 * validate parameters on entry for *_txsync()
 * Returns ring->cur if ok, or something >= kring->nkr_num_slots
 * in case of error.
 *
 * rhead, rcur and rtail=hwtail are stored from previous round.
 * hwcur is the next packet to send to the ring.
 *
 * We want
 *    hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail
 *
 * hwcur, rhead, rtail and hwtail are reliable
 */
u_int
nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
{
	u_int head = ring->head; /* read only once */
	u_int cur = ring->cur; /* read only once */
	u_int n = kring->nkr_num_slots;

	ND(5, "%s kcur %d ktail %d head %d cur %d tail %d",
		kring->name,
		kring->nr_hwcur, kring->nr_hwtail,
		ring->head, ring->cur, ring->tail);
#if 1 /* kernel sanity checks; but we can trust the kring. */
	NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n ||
	    kring->rtail >= n ||  kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
	/*
	 * user sanity checks. We only use head,
	 * A, B, ... are possible positions for head:
	 *
	 *  0    A  rhead   B  rtail   C  n-1
	 *  0    D  rtail   E  rhead   F  n-1
	 *
	 * B, F, D are valid. A, C, E are wrong
	 */
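	/*
	 * Worked example (illustrative, not from the original source): with
	 * n == 1024, rhead == 100 and rtail == 900, head == 500 is accepted
	 * (case B), while head == 50 (case A) or head == 950 (case C) make
	 * the checks below return kring->nkr_num_slots, which the caller
	 * treats as an error.
	 */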
	if (kring->rtail >= kring->rhead) {
		/* want rhead <= head <= rtail */
		NM_FAIL_ON(head < kring->rhead || head > kring->rtail);
		/* and also head <= cur <= rtail */
		NM_FAIL_ON(cur < head || cur > kring->rtail);
	} else { /* here rtail < rhead */
		/* we need head outside rtail .. rhead */
		NM_FAIL_ON(head > kring->rtail && head < kring->rhead);

		/* two cases now: head <= rtail or head >= rhead  */
		if (head <= kring->rtail) {
			/* want head <= cur <= rtail */
			NM_FAIL_ON(cur < head || cur > kring->rtail);
		} else { /* head >= rhead */
			/* cur must be outside rtail..head */
			NM_FAIL_ON(cur > kring->rtail && cur < head);
		}
	}
	if (ring->tail != kring->rtail) {
		RD(5, "%s tail overwritten was %d need %d", kring->name,
			ring->tail, kring->rtail);
		ring->tail = kring->rtail;
	}
	kring->rhead = head;
	kring->rcur = cur;
	return head;
}

/*
 * validate parameters on entry for *_rxsync()
 * Returns ring->head if ok, kring->nkr_num_slots on error.
 *
 * For a valid configuration,
 * hwcur <= head <= cur <= tail <= hwtail
 *
 * We only consider head and cur.
 * hwcur and hwtail are reliable.
 */
u_int
nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
{
	uint32_t const n = kring->nkr_num_slots;
	uint32_t head, cur;

	ND(5,"%s kc %d kt %d h %d c %d t %d",
		kring->name,
		kring->nr_hwcur, kring->nr_hwtail,
		ring->head, ring->cur, ring->tail);
	/*
	 * Before storing the new values, we should check they do not
	 * move backwards. However:
	 * - head is not an issue because the previous value is hwcur;
	 * - cur could in principle go back, however it does not matter
	 *   because we are processing a brand new rxsync()
	 */
	cur = kring->rcur = ring->cur;	/* read only once */
	head = kring->rhead = ring->head;	/* read only once */
#if 1 /* kernel sanity checks */
	NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
	/* user sanity checks */
	if (kring->nr_hwtail >= kring->nr_hwcur) {
		/* want hwcur <= rhead <= hwtail */
		NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail);
		/* and also rhead <= rcur <= hwtail */
		NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
	} else {
		/* we need rhead outside hwtail..hwcur */
		NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail);
		/* two cases now: head <= hwtail or head >= hwcur  */
		if (head <= kring->nr_hwtail) {
			/* want head <= cur <= hwtail */
			NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
		} else {
			/* cur must be outside hwtail..head */
			NM_FAIL_ON(cur < head && cur > kring->nr_hwtail);
		}
	}
	if (ring->tail != kring->rtail) {
		RD(5, "%s tail overwritten was %d need %d",
			kring->name,
			ring->tail, kring->rtail);
		ring->tail = kring->rtail;
	}
	return head;
}

/*
 * Error routine called when txsync/rxsync detects an error.
 * Can't do much more than resetting head = cur = hwcur, tail = hwtail
 * Return 1 on reinit.
 *
 * This routine is only called by the upper half of the kernel.
 * It only reads hwcur (which is changed only by the upper half, too)
 * and hwtail (which may be changed by the lower half, but only on
 * a tx ring and only to increase it, so any error will be recovered
 * on the next call). For the above, we don't strictly need to call
 * it under lock.
 */
int
netmap_ring_reinit(struct netmap_kring *kring)
{
	struct netmap_ring *ring = kring->ring;
	u_int i, lim = kring->nkr_num_slots - 1;
	int errors = 0;

	// XXX KASSERT nm_kr_tryget
	RD(10, "called for %s", kring->name);
	// XXX probably wrong to trust userspace
	kring->rhead = ring->head;
	kring->rcur  = ring->cur;
	kring->rtail = ring->tail;

	if (ring->cur > lim)
		errors++;
	if (ring->head > lim)
		errors++;
	if (ring->tail > lim)
		errors++;
	for (i = 0; i <= lim; i++) {
		u_int idx = ring->slot[i].buf_idx;
		u_int len = ring->slot[i].len;
		if (idx < 2 || idx >= kring->na->na_lut.objtotal) {
			RD(5, "bad index at slot %d idx %d len %d ", i, idx, len);
			ring->slot[i].buf_idx = 0;
			ring->slot[i].len = 0;
		} else if (len > NETMAP_BUF_SIZE(kring->na)) {
			ring->slot[i].len = 0;
			RD(5, "bad len at slot %d idx %d len %d", i, idx, len);
		}
	}
	if (errors) {
		RD(10, "total %d errors", errors);
		RD(10, "%s reinit, cur %d -> %d tail %d -> %d",
			kring->name,
			ring->cur, kring->nr_hwcur,
			ring->tail, kring->nr_hwtail);
		ring->head = kring->rhead = kring->nr_hwcur;
		ring->cur  = kring->rcur  = kring->nr_hwcur;
		ring->tail = kring->rtail = kring->nr_hwtail;
	}
	return (errors ? 1 : 0);
}

/* interpret the ringid and flags fields of an nmreq, by translating them
 * into a pair of intervals of ring indices:
 *
 * [priv->np_txqfirst, priv->np_txqlast) and
 * [priv->np_rxqfirst, priv->np_rxqlast)
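 *
 * For example (illustrative): on an adapter with 4 tx and 4 rx hw rings
 * plus one host ring pair, NR_REG_ALL_NIC yields tx [0,4) rx [0,4);
 * NR_REG_ONE_NIC with nr_ringid == 2 yields tx [2,3) rx [2,3);
 * NR_REG_SW yields tx [4,5) rx [4,5) (the host rings only).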
 */
int
netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
			uint16_t nr_ringid, uint64_t nr_flags)
{
	struct netmap_adapter *na = priv->np_na;
	int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY };
	enum txrx t;
	u_int j;

	for_rx_tx(t) {
		if (nr_flags & excluded_direction[t]) {
			priv->np_qfirst[t] = priv->np_qlast[t] = 0;
			continue;
		}
		switch (nr_mode) {
		case NR_REG_ALL_NIC:
			priv->np_qfirst[t] = 0;
			priv->np_qlast[t] = nma_get_nrings(na, t);
			ND("ALL/PIPE: %s %d %d", nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_SW:
		case NR_REG_NIC_SW:
			if (!(na->na_flags & NAF_HOST_RINGS)) {
				nm_prerr("host rings not supported");
				return EINVAL;
			}
			priv->np_qfirst[t] = (nr_mode == NR_REG_SW ?
				nma_get_nrings(na, t) : 0);
			priv->np_qlast[t] = netmap_all_rings(na, t);
			ND("%s: %s %d %d", nr_mode == NR_REG_SW ? "SW" : "NIC+SW",
				nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_ONE_NIC:
			if (nr_ringid >= na->num_tx_rings &&
					nr_ringid >= na->num_rx_rings) {
				nm_prerr("invalid ring id %d", nr_ringid);
				return EINVAL;
			}
			/* if not enough rings, use the first one */
			j = nr_ringid;
			if (j >= nma_get_nrings(na, t))
				j = 0;
			priv->np_qfirst[t] = j;
			priv->np_qlast[t] = j + 1;
			ND("ONE_NIC: %s %d %d", nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		default:
			nm_prerr("invalid regif type %d", nr_mode);
			return EINVAL;
		}
	}
	priv->np_flags = nr_flags;

	/* Allow transparent forwarding mode in the host --> nic
	 * direction only if all the TX hw rings have been opened. */
	if (priv->np_qfirst[NR_TX] == 0 &&
			priv->np_qlast[NR_TX] >= na->num_tx_rings) {
		priv->np_sync_flags |= NAF_CAN_FORWARD_DOWN;
	}

	if (netmap_verbose) {
		nm_prinf("%s: tx [%d,%d) rx [%d,%d) id %d",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX],
			nr_ringid);
	}
	return 0;
}

/*
 * Set the ring ID. For devices with a single queue, a request
 * for all rings is the same as a single ring.
 */
static int
netmap_set_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
		uint16_t nr_ringid, uint64_t nr_flags)
{
	struct netmap_adapter *na = priv->np_na;
	int error;
	enum txrx t;

	error = netmap_interp_ringid(priv, nr_mode, nr_ringid, nr_flags);
	if (error) {
		return error;
	}

	priv->np_txpoll = (nr_flags & NR_NO_TX_POLL) ? 0 : 1;

	/* optimization: count the users registered for more than
	 * one ring, which are the ones sleeping on the global queue.
	 * The default netmap_notify() callback will then
	 * avoid signaling the global queue if nobody is using it
	 */
	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]++;
	}
	return 0;
}

static void
netmap_unset_ringid(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	enum txrx t;

	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]--;
		priv->np_qfirst[t] = priv->np_qlast[t] = 0;
	}
	priv->np_flags = 0;
	priv->np_txpoll = 0;
	priv->np_kloop_state = 0;
}

/* Set the nr_pending_mode for the requested rings.
 * If requested, also try to get exclusive access to the rings, provided
 * the rings we want to bind are not exclusively owned by a previous bind.
 */
static int
netmap_krings_get(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	u_int i;
	struct netmap_kring *kring;
	int excl = (priv->np_flags & NR_EXCLUSIVE);
	enum txrx t;

	if (netmap_debug & NM_DEBUG_ON)
		nm_prinf("%s: grabbing tx [%d, %d) rx [%d, %d)",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX]);

	/* first round: check that all the requested rings
	 * are neither already exclusively owned, nor we
	 * want exclusive ownership when they are already in use
	 */
	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = NMR(na, t)[i];
			if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
			    (kring->users && excl)) {
				ND("ring %s busy", kring->name);
				return EBUSY;
			}
		}
	}

	/* second round: increment usage count (possibly marking them
	 * as exclusive) and set the nr_pending_mode
	 */
	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = NMR(na, t)[i];
			kring->users++;
			if (excl)
				kring->nr_kflags |= NKR_EXCLUSIVE;
			kring->nr_pending_mode = NKR_NETMAP_ON;
		}
	}

	return 0;
}
1979 /* Undo netmap_krings_get(). This is done by clearing the exclusive mode
1980 * if was asked on regif, and unset the nr_pending_mode if we are the
1981 * last users of the involved rings. */
1983 netmap_krings_put(struct netmap_priv_d *priv)
1985 struct netmap_adapter *na = priv->np_na;
1987 struct netmap_kring *kring;
1988 int excl = (priv->np_flags & NR_EXCLUSIVE);
1991 ND("%s: releasing tx [%d, %d) rx [%d, %d)",
1993 priv->np_qfirst[NR_TX],
1994 priv->np_qlast[NR_TX],
1995 priv->np_qfirst[NR_RX],
1996 priv->np_qlast[NR_RX]);
1999 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
2000 kring = NMR(na, t)[i];
2002 kring->nr_kflags &= ~NKR_EXCLUSIVE;
2004 if (kring->users == 0)
2005 kring->nr_pending_mode = NKR_NETMAP_OFF;
2011 nm_priv_rx_enabled(struct netmap_priv_d *priv)
2013 return (priv->np_qfirst[NR_RX] != priv->np_qlast[NR_RX]);
2016 /* Validate the CSB entries for both directions (atok and ktoa).
2017 * To be called under NMG_LOCK(). */
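/*
 * Illustrative userspace sketch (not compiled): how an application
 * could pass the CSB option validated below. The atok/ktoa arrays hold
 * one entry per bound ring, TX entries first and RX entries after them;
 * allocation details and variable names (hdr, tot_rings) are
 * assumptions of this example. Assumes <stdlib.h> and <net/netmap.h>.
 */
#if 0
	struct nmreq_opt_csb csbo;
	struct nm_csb_atok *atok =
		aligned_alloc(sizeof(*atok), tot_rings * sizeof(*atok));
	struct nm_csb_ktoa *ktoa =
		aligned_alloc(sizeof(*ktoa), tot_rings * sizeof(*ktoa));

	memset(&csbo, 0, sizeof(csbo));
	csbo.nro_opt.nro_reqtype = NETMAP_REQ_OPT_CSB;
	csbo.csb_atok = (uintptr_t)atok;	/* application --> kernel */
	csbo.csb_ktoa = (uintptr_t)ktoa;	/* kernel --> application */
	csbo.nro_opt.nro_next = hdr.nr_options;	/* chain the option */
	hdr.nr_options = (uintptr_t)&csbo;
	/* then issue the NETMAP_REQ_REGISTER ioctl with NR_EXCLUSIVE set */
#endif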
2019 netmap_csb_validate(struct netmap_priv_d *priv, struct nmreq_opt_csb *csbo)
2021 struct nm_csb_atok *csb_atok_base =
2022 (struct nm_csb_atok *)(uintptr_t)csbo->csb_atok;
2023 struct nm_csb_ktoa *csb_ktoa_base =
2024 (struct nm_csb_ktoa *)(uintptr_t)csbo->csb_ktoa;
2026 int num_rings[NR_TXRX], tot_rings;
2027 size_t entry_size[2];
2031 if (priv->np_kloop_state & NM_SYNC_KLOOP_RUNNING) {
2032 nm_prerr("Cannot update CSB while kloop is running");
2038 num_rings[t] = priv->np_qlast[t] - priv->np_qfirst[t];
2039 tot_rings += num_rings[t];
2044 if (!(priv->np_flags & NR_EXCLUSIVE)) {
2045 nm_prerr("CSB mode requires NR_EXCLUSIVE");
2049 entry_size[0] = sizeof(*csb_atok_base);
2050 entry_size[1] = sizeof(*csb_ktoa_base);
2051 csb_start[0] = (void *)csb_atok_base;
2052 csb_start[1] = (void *)csb_ktoa_base;
2054 for (i = 0; i < 2; i++) {
2055 /* On Linux we could use access_ok() to simplify
2056 * the validation. However, the advantage of
2057 * this approach is that it works also on FreeBSD. */
2059 size_t csb_size = tot_rings * entry_size[i];
2063 if ((uintptr_t)csb_start[i] & (entry_size[i]-1)) {
2064 nm_prerr("Unaligned CSB address");
2068 tmp = nm_os_malloc(csb_size);
2072 /* Application --> kernel direction. */
2073 err = copyin(csb_start[i], tmp, csb_size);
2075 /* Kernel --> application direction. */
2076 memset(tmp, 0, csb_size);
2077 err = copyout(tmp, csb_start[i], csb_size);
2081 nm_prerr("Invalid CSB address");
2086 priv->np_csb_atok_base = csb_atok_base;
2087 priv->np_csb_ktoa_base = csb_ktoa_base;
2089 /* Initialize the CSB. */
2091 for (i = 0; i < num_rings[t]; i++) {
2092 struct netmap_kring *kring =
2093 NMR(priv->np_na, t)[i + priv->np_qfirst[t]];
2094 struct nm_csb_atok *csb_atok = csb_atok_base + i;
2095 struct nm_csb_ktoa *csb_ktoa = csb_ktoa_base + i;
2098 csb_atok += num_rings[NR_TX];
2099 csb_ktoa += num_rings[NR_TX];
2102 CSB_WRITE(csb_atok, head, kring->rhead);
2103 CSB_WRITE(csb_atok, cur, kring->rcur);
2104 CSB_WRITE(csb_atok, appl_need_kick, 1);
2105 CSB_WRITE(csb_atok, sync_flags, 1);
2106 CSB_WRITE(csb_ktoa, hwcur, kring->nr_hwcur);
2107 CSB_WRITE(csb_ktoa, hwtail, kring->nr_hwtail);
2108 CSB_WRITE(csb_ktoa, kern_need_kick, 1);
2110 nm_prinf("csb_init for kring %s: head %u, cur %u, "
2111 "hwcur %u, hwtail %u", kring->name,
2112 kring->rhead, kring->rcur, kring->nr_hwcur,
2120 /* Ensure that the netmap adapter can support the given MTU.
2121 * @return EINVAL if the adapter cannot support the given MTU, 0 otherwise. */
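/*
 * Worked example (illustrative numbers; 2048 is the usual default
 * netmap buffer size): a 1500-byte MTU fits in a single slot and
 * passes the first check below. A 9000-byte MTU on a NIC whose
 * rx_buf_maxsize is 2048 instead requires NAF_MOREFRAG (each packet
 * spans several slots) and buffers of at least rx_buf_maxsize bytes.
 */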
2124 netmap_buf_size_validate(const struct netmap_adapter *na, unsigned mtu) {
2125 unsigned nbs = NETMAP_BUF_SIZE(na);
2127 if (mtu <= na->rx_buf_maxsize) {
2128 /* The MTU fits a single NIC slot. We only
2129 * need to check that netmap buffers are
2130 * large enough to hold an MTU. NS_MOREFRAG
2131 * cannot be used in this case. */
2133 nm_prerr("error: netmap buf size (%u) "
2134 "< device MTU (%u)", nbs, mtu);
2138 /* More NIC slots may be needed to receive
2139 * or transmit a single packet. Check that
2140 * the adapter supports NS_MOREFRAG and that
2141 * netmap buffers are large enough to hold
2142 * the maximum per-slot size. */
2143 if (!(na->na_flags & NAF_MOREFRAG)) {
2144 nm_prerr("error: large MTU (%d) needed "
2145 "but %s does not support "
2149 } else if (nbs < na->rx_buf_maxsize) {
2150 nm_prerr("error: using NS_MOREFRAG on "
2151 "%s requires netmap buf size "
2152 ">= %u", na->ifp->if_xname,
2153 na->rx_buf_maxsize);
2156 nm_prinf("info: netmap application on "
2157 "%s needs to support "
2159 "(MTU=%u,netmap_buf_size=%u)",
2160 na->ifp->if_xname, mtu, nbs);
2168 * possibly move the interface to netmap-mode.
2169 * On success it returns a pointer to the netmap_if, otherwise NULL.
2170 * This must be called with NMG_LOCK held.
2172 * The following na callbacks are called in the process:
2174 * na->nm_config() [by netmap_update_config]
2175 * (get current number and size of rings)
2177 * We have a generic one for linux (netmap_linux_config).
2178 * The bwrap has to override this, since it has to forward
2179 * the request to the wrapped adapter (netmap_bwrap_config).
2182 * na->nm_krings_create()
2183 * (create and init the krings array)
2185 * One of the following:
2187 * * netmap_hw_krings_create, (hw ports)
2188 * creates the standard layout for the krings
2189 * and adds the mbq (used for the host rings).
2191 * * netmap_vp_krings_create (VALE ports)
2192 * add leases and scratchpads
2194 * * netmap_pipe_krings_create (pipes)
2195 * create the krings and rings of both ends and
2198 * * netmap_monitor_krings_create (monitors)
2199 * avoid allocating the mbq
2201 * * netmap_bwrap_krings_create (bwraps)
2202 * create both the bwrap krings array,
2203 * the krings array of the wrapped adapter, and
2204 * (if needed) the fake array for the host adapter
2206 * na->nm_register(, 1)
2207 * (put the adapter in netmap mode)
2209 * This may be one of the following:
2211 * * netmap_hw_reg (hw ports)
2212 * checks that the ifp is still there, then calls
2213 * the hardware specific callback;
2215 * * netmap_vp_reg (VALE ports)
2216 * If the port is connected to a bridge,
2217 * set the NAF_NETMAP_ON flag under the
2218 * bridge write lock.
2220 * * netmap_pipe_reg (pipes)
2221 * inform the other pipe end that it is no
2222 * longer responsible for the lifetime of this pipe end.
2225 * * netmap_monitor_reg (monitors)
2226 * intercept the sync callbacks of the monitored rings.
2229 * * netmap_bwrap_reg (bwraps)
2230 * cross-link the bwrap and hwna rings,
2231 * forward the request to the hwna, override
2232 * the hwna notify callback (so that frames
2233 * coming from outside go through the bridge).
2238 netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na,
2239 uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags)
2241 struct netmap_if *nifp = NULL;
2245 priv->np_na = na; /* store the reference */
2246 error = netmap_mem_finalize(na->nm_mem, na);
2250 if (na->active_fds == 0) {
2252 /* cache the allocator info in the na */
2253 error = netmap_mem_get_lut(na->nm_mem, &na->na_lut);
2256 ND("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal,
2257 na->na_lut.objsize);
2259 /* ring configuration may have changed, fetch from the card */
2260 netmap_update_config(na);
2263 /* compute the range of tx and rx rings to monitor */
2264 error = netmap_set_ringid(priv, nr_mode, nr_ringid, nr_flags);
2268 if (na->active_fds == 0) {
2270 * If this is the first registration of the adapter,
2271 * perform sanity checks and create the in-kernel view
2272 * of the netmap rings (the netmap krings).
2274 if (na->ifp && nm_priv_rx_enabled(priv)) {
2275 /* This netmap adapter is attached to an ifnet. */
2276 unsigned mtu = nm_os_ifnet_mtu(na->ifp);
2278 ND("%s: mtu %d rx_buf_maxsize %d netmap_buf_size %d",
2279 na->name, mtu, na->rx_buf_maxsize, NETMAP_BUF_SIZE(na));
2281 if (na->rx_buf_maxsize == 0) {
2282 nm_prerr("%s: error: rx_buf_maxsize == 0", na->name);
2287 error = netmap_buf_size_validate(na, mtu);
2293 * Depending on the adapter, this may also create
2294 * the netmap rings themselves
2296 error = na->nm_krings_create(na);
2302 /* now the krings must exist and we can check whether some
2303 * previous bind has exclusive ownership on them, and set the nr_pending_mode. */
2306 error = netmap_krings_get(priv);
2308 goto err_del_krings;
2310 /* create all needed missing netmap rings */
2311 error = netmap_mem_rings_create(na);
2315 /* in all cases, create a new netmap if */
2316 nifp = netmap_mem_if_new(na, priv);
2322 if (nm_kring_pending(priv)) {
2323 /* Some kring is switching mode, tell the adapter to
2325 error = na->nm_register(na, 1);
2330 /* Commit the reference. */
2334 * advertise that the interface is ready by setting np_nifp.
2335 * The barrier is needed because readers (poll, *SYNC and mmap)
2336 * check for priv->np_nifp != NULL without locking
2338 mb(); /* make sure previous writes are visible to all CPUs */
2339 priv->np_nifp = nifp;
2344 netmap_mem_if_delete(na, nifp);
2346 netmap_krings_put(priv);
2347 netmap_mem_rings_delete(na);
2349 if (na->active_fds == 0)
2350 na->nm_krings_delete(na);
2352 if (na->active_fds == 0)
2353 memset(&na->na_lut, 0, sizeof(na->na_lut));
2355 netmap_mem_drop(na);
2363 * update kring and ring at the end of rxsync/txsync.
2366 nm_sync_finalize(struct netmap_kring *kring)
2369 * Update ring tail to what the kernel knows
2370 * After txsync: head/rhead/hwcur might be behind cur/rcur
2373 kring->ring->tail = kring->rtail = kring->nr_hwtail;
2375 ND(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d",
2376 kring->name, kring->nr_hwcur, kring->nr_hwtail,
2377 kring->rhead, kring->rcur, kring->rtail);
2380 /* set ring timestamp */
2382 ring_timestamp_set(struct netmap_ring *ring)
2384 if (netmap_no_timestamp == 0 || ring->flags & NR_TIMESTAMP) {
2385 microtime(&ring->ts);
2389 static int nmreq_copyin(struct nmreq_header *, int);
2390 static int nmreq_copyout(struct nmreq_header *, int);
2391 static int nmreq_checkoptions(struct nmreq_header *);
2394 * ioctl(2) support for the "netmap" device.
2396 * The following commands are accepted:
2397 * - NIOCCTRL device control API
2398 * - NIOCTXSYNC sync TX rings
2399 * - NIOCRXSYNC sync RX rings
2400 * - SIOCGIFADDR just for convenience
2401 * - NIOCGINFO deprecated (legacy API)
2402 * - NIOCREGIF deprecated (legacy API)
2404 * Return 0 on success, errno otherwise. */
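/*
 * Illustrative userspace sketch (not compiled with the kernel): a
 * minimal NETMAP_REQ_REGISTER sequence for the NIOCCTRL path handled
 * below; error handling is reduced to a minimum.
 */
#if 0
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/netmap.h>

static int
example_register(const char *ifname)
{
	struct nmreq_header hdr;
	struct nmreq_register req;
	int fd = open("/dev/netmap", O_RDWR);

	if (fd < 0)
		return (-1);
	memset(&hdr, 0, sizeof(hdr));
	memset(&req, 0, sizeof(req));
	hdr.nr_version = NETMAP_API;
	hdr.nr_reqtype = NETMAP_REQ_REGISTER;
	strlcpy(hdr.nr_name, ifname, sizeof(hdr.nr_name));
	hdr.nr_body = (uintptr_t)&req;
	req.nr_mode = NR_REG_ALL_NIC;	/* bind all hw rings */
	if (ioctl(fd, NIOCCTRL, &hdr) < 0) {
		close(fd);
		return (-1);
	}
	/* on return, req.nr_memsize and req.nr_offset identify the
	 * region to mmap() and the netmap_if offset within it */
	return (fd);
}
#endif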
2407 netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data,
2408 struct thread *td, int nr_body_is_user)
2410 struct mbq q; /* packets from RX hw queues to host stack */
2411 struct netmap_adapter *na = NULL;
2412 struct netmap_mem_d *nmd = NULL;
2413 struct ifnet *ifp = NULL;
2415 u_int i, qfirst, qlast;
2416 struct netmap_kring **krings;
2422 struct nmreq_header *hdr = (struct nmreq_header *)data;
2424 if (hdr->nr_version < NETMAP_MIN_API ||
2425 hdr->nr_version > NETMAP_MAX_API) {
2426 nm_prerr("API mismatch: got %d need %d",
2427 hdr->nr_version, NETMAP_API);
2431 /* Make a kernel-space copy of the user-space nr_body.
2432 * For convenience, the nr_body pointer and the pointers
2433 * in the options list will be replaced with their
2434 * kernel-space counterparts. The original pointers are
2435 * saved internally and later restored by nmreq_copyout
2437 error = nmreq_copyin(hdr, nr_body_is_user);
2442 /* Sanitize hdr->nr_name. */
2443 hdr->nr_name[sizeof(hdr->nr_name) - 1] = '\0';
2445 switch (hdr->nr_reqtype) {
2446 case NETMAP_REQ_REGISTER: {
2447 struct nmreq_register *req =
2448 (struct nmreq_register *)(uintptr_t)hdr->nr_body;
2449 struct netmap_if *nifp;
2451 /* Protect access to priv from concurrent requests. */
2454 struct nmreq_option *opt;
2457 if (priv->np_nifp != NULL) { /* thread already registered */
2463 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
2464 NETMAP_REQ_OPT_EXTMEM);
2466 struct nmreq_opt_extmem *e =
2467 (struct nmreq_opt_extmem *)opt;
2469 error = nmreq_checkduplicate(opt);
2471 opt->nro_status = error;
2474 nmd = netmap_mem_ext_create(e->nro_usrptr,
2475 &e->nro_info, &error);
2476 opt->nro_status = error;
2480 #endif /* WITH_EXTMEM */
2482 if (nmd == NULL && req->nr_mem_id) {
2483 /* find the allocator and get a reference */
2484 nmd = netmap_mem_find(req->nr_mem_id);
2486 if (netmap_verbose) {
2487 nm_prerr("%s: failed to find mem_id %u",
2488 hdr->nr_name, req->nr_mem_id);
2494 /* find the interface and a reference */
2495 error = netmap_get_na(hdr, &na, &ifp, nmd,
2496 1 /* create */); /* keep reference */
2499 if (NETMAP_OWNED_BY_KERN(na)) {
2504 if (na->virt_hdr_len && !(req->nr_flags & NR_ACCEPT_VNET_HDR)) {
2505 nm_prerr("virt_hdr_len=%d, but application does "
2506 "not accept it", na->virt_hdr_len);
2511 error = netmap_do_regif(priv, na, req->nr_mode,
2512 req->nr_ringid, req->nr_flags);
2513 if (error) { /* reg. failed, release priv and ref */
2517 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
2518 NETMAP_REQ_OPT_CSB);
2520 struct nmreq_opt_csb *csbo =
2521 (struct nmreq_opt_csb *)opt;
2522 error = nmreq_checkduplicate(opt);
2524 error = netmap_csb_validate(priv, csbo);
2526 opt->nro_status = error;
2528 netmap_do_unregif(priv);
2533 nifp = priv->np_nifp;
2534 priv->np_td = td; /* for debugging purposes */
2536 /* return the offset of the netmap_if object */
2537 req->nr_rx_rings = na->num_rx_rings;
2538 req->nr_tx_rings = na->num_tx_rings;
2539 req->nr_rx_slots = na->num_rx_desc;
2540 req->nr_tx_slots = na->num_tx_desc;
2541 error = netmap_mem_get_info(na->nm_mem, &req->nr_memsize, &memflags,
2544 netmap_do_unregif(priv);
2547 if (memflags & NETMAP_MEM_PRIVATE) {
2548 *(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM;
2551 priv->np_si[t] = nm_si_user(priv, t) ?
2552 &na->si[t] : &NMR(na, t)[priv->np_qfirst[t]]->si;
2555 if (req->nr_extra_bufs) {
2557 nm_prinf("requested %d extra buffers",
2558 req->nr_extra_bufs);
2559 req->nr_extra_bufs = netmap_extra_alloc(na,
2560 &nifp->ni_bufs_head, req->nr_extra_bufs);
2562 nm_prinf("got %d extra buffers", req->nr_extra_bufs);
2564 req->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp);
2566 error = nmreq_checkoptions(hdr);
2568 netmap_do_unregif(priv);
2572 /* store ifp reference so that priv destructor may release it */
2576 netmap_unget_na(na, ifp);
2578 /* release the reference from netmap_mem_find() or
2579 * netmap_mem_ext_create()
2582 netmap_mem_put(nmd);
2587 case NETMAP_REQ_PORT_INFO_GET: {
2588 struct nmreq_port_info_get *req =
2589 (struct nmreq_port_info_get *)(uintptr_t)hdr->nr_body;
2595 if (hdr->nr_name[0] != '\0') {
2596 /* Build a nmreq_register out of the nmreq_port_info_get,
2597 * so that we can call netmap_get_na(). */
2598 struct nmreq_register regreq;
2599 bzero(&regreq, sizeof(regreq));
2600 regreq.nr_mode = NR_REG_ALL_NIC;
2601 regreq.nr_tx_slots = req->nr_tx_slots;
2602 regreq.nr_rx_slots = req->nr_rx_slots;
2603 regreq.nr_tx_rings = req->nr_tx_rings;
2604 regreq.nr_rx_rings = req->nr_rx_rings;
2605 regreq.nr_mem_id = req->nr_mem_id;
2607 /* get a refcount */
2608 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2609 hdr->nr_body = (uintptr_t)&regreq;
2610 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
2611 hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; /* reset type */
2612 hdr->nr_body = (uintptr_t)req; /* reset nr_body */
2618 nmd = na->nm_mem; /* get memory allocator */
2620 nmd = netmap_mem_find(req->nr_mem_id ? req->nr_mem_id : 1);
2623 nm_prerr("%s: failed to find mem_id %u",
2625 req->nr_mem_id ? req->nr_mem_id : 1);
2631 error = netmap_mem_get_info(nmd, &req->nr_memsize, &memflags,
2635 if (na == NULL) /* only memory info */
2637 netmap_update_config(na);
2638 req->nr_rx_rings = na->num_rx_rings;
2639 req->nr_tx_rings = na->num_tx_rings;
2640 req->nr_rx_slots = na->num_rx_desc;
2641 req->nr_tx_slots = na->num_tx_desc;
2643 netmap_unget_na(na, ifp);
2648 case NETMAP_REQ_VALE_ATTACH: {
2649 error = netmap_vale_attach(hdr, NULL /* userspace request */);
2653 case NETMAP_REQ_VALE_DETACH: {
2654 error = netmap_vale_detach(hdr, NULL /* userspace request */);
2658 case NETMAP_REQ_VALE_LIST: {
2659 error = netmap_vale_list(hdr);
2663 case NETMAP_REQ_PORT_HDR_SET: {
2664 struct nmreq_port_hdr *req =
2665 (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body;
2666 /* Build a nmreq_register out of the nmreq_port_hdr,
2667 * so that we can call netmap_get_bdg_na(). */
2668 struct nmreq_register regreq;
2669 bzero(&regreq, sizeof(regreq));
2670 regreq.nr_mode = NR_REG_ALL_NIC;
2672 /* For now we only support virtio-net headers, and only for
2673 * VALE ports, but this may change in the future. Valid lengths
2674 * for the virtio-net header are 0 (no header), 10 and 12. */
2675 if (req->nr_hdr_len != 0 &&
2676 req->nr_hdr_len != sizeof(struct nm_vnet_hdr) &&
2677 req->nr_hdr_len != 12) {
2679 nm_prerr("invalid hdr_len %u", req->nr_hdr_len);
2684 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2685 hdr->nr_body = (uintptr_t)&regreq;
2686 error = netmap_get_vale_na(hdr, &na, NULL, 0);
2687 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_SET;
2688 hdr->nr_body = (uintptr_t)req;
2690 struct netmap_vp_adapter *vpna =
2691 (struct netmap_vp_adapter *)na;
2692 na->virt_hdr_len = req->nr_hdr_len;
2693 if (na->virt_hdr_len) {
2694 vpna->mfs = NETMAP_BUF_SIZE(na);
2697 nm_prinf("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na);
2698 netmap_adapter_put(na);
2706 case NETMAP_REQ_PORT_HDR_GET: {
2707 /* Get vnet-header length for this netmap port */
2708 struct nmreq_port_hdr *req =
2709 (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body;
2710 /* Build a nmreq_register out of the nmreq_port_hdr,
2711 * so that we can call netmap_get_bdg_na(). */
2712 struct nmreq_register regreq;
2715 bzero(&regreq, sizeof(regreq));
2716 regreq.nr_mode = NR_REG_ALL_NIC;
2718 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2719 hdr->nr_body = (uintptr_t)&regreq;
2720 error = netmap_get_na(hdr, &na, &ifp, NULL, 0);
2721 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_GET;
2722 hdr->nr_body = (uintptr_t)req;
2724 req->nr_hdr_len = na->virt_hdr_len;
2726 netmap_unget_na(na, ifp);
2731 case NETMAP_REQ_VALE_NEWIF: {
2732 error = nm_vi_create(hdr);
2736 case NETMAP_REQ_VALE_DELIF: {
2737 error = nm_vi_destroy(hdr->nr_name);
2741 case NETMAP_REQ_VALE_POLLING_ENABLE:
2742 case NETMAP_REQ_VALE_POLLING_DISABLE: {
2743 error = nm_bdg_polling(hdr);
2746 #endif /* WITH_VALE */
2747 case NETMAP_REQ_POOLS_INFO_GET: {
2748 /* Get information from the memory allocator used for
2750 struct nmreq_pools_info *req =
2751 (struct nmreq_pools_info *)(uintptr_t)hdr->nr_body;
2754 /* Build a nmreq_register out of the nmreq_pools_info,
2755 * so that we can call netmap_get_na(). */
2756 struct nmreq_register regreq;
2757 bzero(&regreq, sizeof(regreq));
2758 regreq.nr_mem_id = req->nr_mem_id;
2759 regreq.nr_mode = NR_REG_ALL_NIC;
2761 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2762 hdr->nr_body = (uintptr_t)&regreq;
2763 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
2764 hdr->nr_reqtype = NETMAP_REQ_POOLS_INFO_GET; /* reset type */
2765 hdr->nr_body = (uintptr_t)req; /* reset nr_body */
2771 nmd = na->nm_mem; /* grab the memory allocator */
2777 /* Finalize the memory allocator, get the pools
2778 * information and release the allocator. */
2779 error = netmap_mem_finalize(nmd, na);
2783 error = netmap_mem_pools_info_get(req, nmd);
2784 netmap_mem_drop(na);
2786 netmap_unget_na(na, ifp);
2791 case NETMAP_REQ_CSB_ENABLE: {
2792 struct nmreq_option *opt;
2794 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
2795 NETMAP_REQ_OPT_CSB);
2799 struct nmreq_opt_csb *csbo =
2800 (struct nmreq_opt_csb *)opt;
2801 error = nmreq_checkduplicate(opt);
2804 error = netmap_csb_validate(priv, csbo);
2807 opt->nro_status = error;
2812 case NETMAP_REQ_SYNC_KLOOP_START: {
2813 error = netmap_sync_kloop(priv, hdr);
2817 case NETMAP_REQ_SYNC_KLOOP_STOP: {
2818 error = netmap_sync_kloop_stop(priv);
2827 /* Write back request body to userspace and reset the
2828 * user-space pointer. */
2829 error = nmreq_copyout(hdr, error);
2835 if (unlikely(priv->np_nifp == NULL)) {
2839 mb(); /* make sure following reads are not from cache */
2841 if (unlikely(priv->np_csb_atok_base)) {
2842 nm_prerr("Invalid sync in CSB mode");
2847 na = priv->np_na; /* we have a reference */
2850 t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX);
2851 krings = NMR(na, t);
2852 qfirst = priv->np_qfirst[t];
2853 qlast = priv->np_qlast[t];
2854 sync_flags = priv->np_sync_flags;
2856 for (i = qfirst; i < qlast; i++) {
2857 struct netmap_kring *kring = krings[i];
2858 struct netmap_ring *ring = kring->ring;
2860 if (unlikely(nm_kr_tryget(kring, 1, &error))) {
2861 error = (error ? EIO : 0);
2865 if (cmd == NIOCTXSYNC) {
2866 if (netmap_debug & NM_DEBUG_TXSYNC)
2867 nm_prinf("pre txsync ring %d cur %d hwcur %d",
2870 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
2871 netmap_ring_reinit(kring);
2872 } else if (kring->nm_sync(kring, sync_flags | NAF_FORCE_RECLAIM) == 0) {
2873 nm_sync_finalize(kring);
2875 if (netmap_debug & NM_DEBUG_TXSYNC)
2876 nm_prinf("post txsync ring %d cur %d hwcur %d",
2880 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
2881 netmap_ring_reinit(kring);
2883 if (nm_may_forward_up(kring)) {
2884 /* transparent forwarding, see netmap_poll() */
2885 netmap_grab_packets(kring, &q, netmap_fwd);
2887 if (kring->nm_sync(kring, sync_flags | NAF_FORCE_READ) == 0) {
2888 nm_sync_finalize(kring);
2890 ring_timestamp_set(ring);
2896 netmap_send_up(na->ifp, &q);
2903 return netmap_ioctl_legacy(priv, cmd, data, td);
2912 nmreq_size_by_type(uint16_t nr_reqtype)
2914 switch (nr_reqtype) {
2915 case NETMAP_REQ_REGISTER:
2916 return sizeof(struct nmreq_register);
2917 case NETMAP_REQ_PORT_INFO_GET:
2918 return sizeof(struct nmreq_port_info_get);
2919 case NETMAP_REQ_VALE_ATTACH:
2920 return sizeof(struct nmreq_vale_attach);
2921 case NETMAP_REQ_VALE_DETACH:
2922 return sizeof(struct nmreq_vale_detach);
2923 case NETMAP_REQ_VALE_LIST:
2924 return sizeof(struct nmreq_vale_list);
2925 case NETMAP_REQ_PORT_HDR_SET:
2926 case NETMAP_REQ_PORT_HDR_GET:
2927 return sizeof(struct nmreq_port_hdr);
2928 case NETMAP_REQ_VALE_NEWIF:
2929 return sizeof(struct nmreq_vale_newif);
2930 case NETMAP_REQ_VALE_DELIF:
2931 case NETMAP_REQ_SYNC_KLOOP_STOP:
2932 case NETMAP_REQ_CSB_ENABLE:
2934 case NETMAP_REQ_VALE_POLLING_ENABLE:
2935 case NETMAP_REQ_VALE_POLLING_DISABLE:
2936 return sizeof(struct nmreq_vale_polling);
2937 case NETMAP_REQ_POOLS_INFO_GET:
2938 return sizeof(struct nmreq_pools_info);
2939 case NETMAP_REQ_SYNC_KLOOP_START:
2940 return sizeof(struct nmreq_sync_kloop_start);
2946 nmreq_opt_size_by_type(uint32_t nro_reqtype, uint64_t nro_size)
2948 size_t rv = sizeof(struct nmreq_option);
2949 #ifdef NETMAP_REQ_OPT_DEBUG
2950 if (nro_reqtype & NETMAP_REQ_OPT_DEBUG)
2951 return (nro_reqtype & ~NETMAP_REQ_OPT_DEBUG);
2952 #endif /* NETMAP_REQ_OPT_DEBUG */
2953 switch (nro_reqtype) {
2955 case NETMAP_REQ_OPT_EXTMEM:
2956 rv = sizeof(struct nmreq_opt_extmem);
2958 #endif /* WITH_EXTMEM */
2959 case NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS:
2963 case NETMAP_REQ_OPT_CSB:
2964 rv = sizeof(struct nmreq_opt_csb);
2967 /* subtract the common header */
2968 return rv - sizeof(struct nmreq_option);
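/*
 * Layout of the kernel buffer built by nmreq_copyin() below, inferred
 * from the code: each saved user pointer immediately precedes the
 * object it refers to, so that nmreq_copyout() can locate and restore
 * it.
 *
 *	ker --> +----------------------+
 *		| saved nr_body        |
 *		| saved nr_options     |
 *		+----------------------+
 *		| request body         | <-- hdr->nr_body
 *		+----------------------+
 *		| saved nro_next       |
 *		| option header        | <-- hdr->nr_options
 *		| option body          |
 *		+----------------------+
 *		| ... more options ... |
 *		+----------------------+
 */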
2972 nmreq_copyin(struct nmreq_header *hdr, int nr_body_is_user)
2974 size_t rqsz, optsz, bufsz;
2976 char *ker = NULL, *p;
2977 struct nmreq_option **next, *src;
2978 struct nmreq_option buf;
2981 if (hdr->nr_reserved) {
2983 nm_prerr("nr_reserved must be zero");
2987 if (!nr_body_is_user)
2990 hdr->nr_reserved = nr_body_is_user;
2992 /* compute the total size of the buffer */
2993 rqsz = nmreq_size_by_type(hdr->nr_reqtype);
2994 if (rqsz > NETMAP_REQ_MAXSIZE) {
2998 if ((rqsz && hdr->nr_body == (uintptr_t)NULL) ||
2999 (!rqsz && hdr->nr_body != (uintptr_t)NULL)) {
3000 /* Request body expected, but not found; or
3001 * request body found but unexpected. */
3003 nm_prerr("nr_body expected but not found, or vice versa");
3008 bufsz = 2 * sizeof(void *) + rqsz;
3010 for (src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; src;
3011 src = (struct nmreq_option *)(uintptr_t)buf.nro_next)
3013 error = copyin(src, &buf, sizeof(*src));
3016 optsz += sizeof(*src);
3017 optsz += nmreq_opt_size_by_type(buf.nro_reqtype, buf.nro_size);
3018 if (rqsz + optsz > NETMAP_REQ_MAXSIZE) {
3022 bufsz += optsz + sizeof(void *);
3025 ker = nm_os_malloc(bufsz);
3032 /* make a copy of the user pointers */
3033 ptrs = (uint64_t*)p;
3034 *ptrs++ = hdr->nr_body;
3035 *ptrs++ = hdr->nr_options;
3039 error = copyin((void *)(uintptr_t)hdr->nr_body, p, rqsz);
3042 /* overwrite the user pointer with the in-kernel one */
3043 hdr->nr_body = (uintptr_t)p;
3046 /* copy the options */
3047 next = (struct nmreq_option **)&hdr->nr_options;
3050 struct nmreq_option *opt;
3052 /* copy the option header */
3053 ptrs = (uint64_t *)p;
3054 opt = (struct nmreq_option *)(ptrs + 1);
3055 error = copyin(src, opt, sizeof(*src));
3058 /* make a copy of the user next pointer */
3059 *ptrs = opt->nro_next;
3060 /* overwrite the user pointer with the in-kernel one */
3063 /* initialize the option as not supported.
3064 * Recognized options will update this field.
3066 opt->nro_status = EOPNOTSUPP;
3068 p = (char *)(opt + 1);
3070 /* copy the option body */
3071 optsz = nmreq_opt_size_by_type(opt->nro_reqtype,
3074 /* the option body follows the option header */
3075 error = copyin(src + 1, p, optsz);
3081 /* move to next option */
3082 next = (struct nmreq_option **)&opt->nro_next;
3088 ptrs = (uint64_t *)ker;
3089 hdr->nr_body = *ptrs++;
3090 hdr->nr_options = *ptrs++;
3091 hdr->nr_reserved = 0;
3098 nmreq_copyout(struct nmreq_header *hdr, int rerror)
3100 struct nmreq_option *src, *dst;
3101 void *ker = (void *)(uintptr_t)hdr->nr_body, *bufstart;
3106 if (!hdr->nr_reserved)
3109 /* restore the user pointers in the header */
3110 ptrs = (uint64_t *)ker - 2;
3112 hdr->nr_body = *ptrs++;
3113 src = (struct nmreq_option *)(uintptr_t)hdr->nr_options;
3114 hdr->nr_options = *ptrs;
3118 bodysz = nmreq_size_by_type(hdr->nr_reqtype);
3119 error = copyout(ker, (void *)(uintptr_t)hdr->nr_body, bodysz);
3126 /* copy the options */
3127 dst = (struct nmreq_option *)(uintptr_t)hdr->nr_options;
3132 /* restore the user pointer */
3133 next = src->nro_next;
3134 ptrs = (uint64_t *)src - 1;
3135 src->nro_next = *ptrs;
3137 /* always copy the option header */
3138 error = copyout(src, dst, sizeof(*src));
3144 /* copy the option body only if there was no error */
3145 if (!rerror && !src->nro_status) {
3146 optsz = nmreq_opt_size_by_type(src->nro_reqtype,
3149 error = copyout(src + 1, dst + 1, optsz);
3156 src = (struct nmreq_option *)(uintptr_t)next;
3157 dst = (struct nmreq_option *)(uintptr_t)*ptrs;
3162 hdr->nr_reserved = 0;
3163 nm_os_free(bufstart);
3167 struct nmreq_option *
3168 nmreq_findoption(struct nmreq_option *opt, uint16_t reqtype)
3170 for ( ; opt; opt = (struct nmreq_option *)(uintptr_t)opt->nro_next)
3171 if (opt->nro_reqtype == reqtype)
3177 nmreq_checkduplicate(struct nmreq_option *opt) {
3178 uint16_t type = opt->nro_reqtype;
3181 while ((opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)opt->nro_next,
3184 opt->nro_status = EINVAL;
3186 return (dup ? EINVAL : 0);
3190 nmreq_checkoptions(struct nmreq_header *hdr)
3192 struct nmreq_option *opt;
3193 /* return error if there is still any option
3194 * marked as not supported
3197 for (opt = (struct nmreq_option *)(uintptr_t)hdr->nr_options; opt;
3198 opt = (struct nmreq_option *)(uintptr_t)opt->nro_next)
3199 if (opt->nro_status == EOPNOTSUPP)
3206 * select(2) and poll(2) handlers for the "netmap" device.
3208 * Can be called for one or more queues.
3209 * Return the event mask corresponding to ready events.
3210 * If there are no ready events, do a selrecord on either individual
3211 * selinfo or on the global one.
3212 * Device-dependent parts (locking and sync of tx/rx rings)
3213 * are done through callbacks.
3215 * On linux, arguments are really pwait, the poll table, and 'td' is struct file *
3216 * The first one is remapped to pwait as selrecord() uses the name as an argument. */
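/*
 * Illustrative userspace sketch (not compiled with the kernel): a
 * typical poll()-driven receive loop; fd, nifp, first_rx and last_rx
 * are assumed to come from a previous register + mmap.
 */
#if 0
#include <poll.h>
#include <net/netmap_user.h>

	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	if (poll(&pfd, 1, 1000 /* ms */) > 0) {
		unsigned int i;

		for (i = first_rx; i < last_rx; i++) {
			struct netmap_ring *ring = NETMAP_RXRING(nifp, i);

			while (!nm_ring_empty(ring)) {
				struct netmap_slot *slot = &ring->slot[ring->cur];
				char *buf = NETMAP_BUF(ring, slot->buf_idx);

				/* consume slot->len bytes at buf */
				ring->cur = nm_ring_next(ring, ring->cur);
			}
			ring->head = ring->cur;	/* release the slots */
		}
	}
#endif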
3220 netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr)
3222 struct netmap_adapter *na;
3223 struct netmap_kring *kring;
3224 struct netmap_ring *ring;
3225 u_int i, want[NR_TXRX], revents = 0;
3226 NM_SELINFO_T *si[NR_TXRX];
3227 #define want_tx want[NR_TX]
3228 #define want_rx want[NR_RX]
3229 struct mbq q; /* packets from RX hw queues to host stack */
3232 * In order to avoid nested locks, we need to "double check"
3233 * txsync and rxsync if we decide to do a selrecord().
3234 * retry_tx (and retry_rx, later) prevent looping forever.
3236 int retry_tx = 1, retry_rx = 1;
3238 /* Transparent mode: send_down is 1 if we have found some
3239 * packets to forward (host RX ring --> NIC) during the rx
3240 * scan and we have not sent them down to the NIC yet.
3241 * Transparent mode requires binding all rings to a single file descriptor. */
3245 int sync_flags = priv->np_sync_flags;
3249 if (unlikely(priv->np_nifp == NULL)) {
3252 mb(); /* make sure following reads are not from cache */
3256 if (unlikely(!nm_netmap_on(na)))
3259 if (unlikely(priv->np_csb_atok_base)) {
3260 nm_prerr("Invalid poll in CSB mode");
3264 if (netmap_debug & NM_DEBUG_ON)
3265 nm_prinf("device %s events 0x%x", na->name, events);
3266 want_tx = events & (POLLOUT | POLLWRNORM);
3267 want_rx = events & (POLLIN | POLLRDNORM);
3270 * If the card has more than one queue AND the file descriptor is
3271 * bound to all of them, we sleep on the "global" selinfo, otherwise
3272 * we sleep on individual selinfo (FreeBSD only allows two selinfo's
3273 * per file descriptor).
3274 * The interrupt routine in the driver wakes one or the other
3275 * (or both) depending on which clients are active.
3277 * rxsync() is only called if we run out of buffers on a POLLIN.
3278 * txsync() is called if we run out of buffers on POLLOUT, or
3279 * there are pending packets to send. The latter can be disabled
3280 * passing NETMAP_NO_TX_POLL in the NIOCREG call.
3282 si[NR_RX] = nm_si_user(priv, NR_RX) ? &na->si[NR_RX] :
3283 &na->rx_rings[priv->np_qfirst[NR_RX]]->si;
3284 si[NR_TX] = nm_si_user(priv, NR_TX) ? &na->si[NR_TX] :
3285 &na->tx_rings[priv->np_qfirst[NR_TX]]->si;
3289 * We start with a lock-free round which is cheap if we have
3290 * slots available. If this fails, then lock and call the sync
3291 * routines. We can't do this on Linux, as the contract says
3292 * that we must call nm_os_selrecord() unconditionally.
3295 enum txrx t = NR_TX;
3296 for (i = priv->np_qfirst[t]; want[t] && i < priv->np_qlast[t]; i++) {
3297 kring = NMR(na, t)[i];
3298 /* XXX compare ring->cur and kring->tail */
3299 if (!nm_ring_empty(kring->ring)) {
3301 want[t] = 0; /* also breaks the loop */
3306 enum txrx t = NR_RX;
3307 want_rx = 0; /* look for a reason to run the handlers */
3308 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
3309 kring = NMR(na, t)[i];
3310 if (kring->ring->cur == kring->ring->tail /* try fetch new buffers */
3311 || kring->rhead != kring->ring->head /* release buffers */) {
3316 revents |= events & (POLLIN | POLLRDNORM); /* we have data */
3321 /* The selrecord must be unconditional on linux. */
3322 nm_os_selrecord(sr, si[NR_RX]);
3323 nm_os_selrecord(sr, si[NR_TX]);
3327 * If we want to push packets out (priv->np_txpoll) or
3328 * want_tx is still set, we must issue txsync calls
3329 * (on all rings, to avoid that the tx rings stall).
3330 * Fortunately, normal tx mode has np_txpoll set.
3332 if (priv->np_txpoll || want_tx) {
3334 * The first round checks if anyone is ready, if not
3335 * do a selrecord and another round to handle races.
3336 * want_tx goes to 0 if any space is found, and is
3337 * used to skip rings with no pending transmissions.
3340 for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) {
3343 kring = na->tx_rings[i];
3347 * Don't try to txsync this TX ring if we already found some
3348 * space in some of the TX rings (want_tx == 0) and there are no
3349 * TX slots in this ring that need to be flushed to the NIC
3352 if (!send_down && !want_tx && ring->head == kring->nr_hwcur)
3355 if (nm_kr_tryget(kring, 1, &revents))
3358 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
3359 netmap_ring_reinit(kring);
3362 if (kring->nm_sync(kring, sync_flags))
3365 nm_sync_finalize(kring);
3369 * If we found new slots, notify potential
3370 * listeners on the same ring.
3371 * Since we just did a txsync, look at the copies
3372 * of cur,tail in the kring.
3374 found = kring->rcur != kring->rtail;
3376 if (found) { /* notify other listeners */
3380 kring->nm_notify(kring, 0);
3384 /* if there were any packets to forward, we must have handled them by now */
3386 if (want_tx && retry_tx && sr) {
3388 nm_os_selrecord(sr, si[NR_TX]);
3396 * If want_rx is still set scan receive rings.
3397 * Do it on all rings because otherwise we starve.
3400 /* two rounds here for race avoidance */
3402 for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) {
3405 kring = na->rx_rings[i];
3408 if (unlikely(nm_kr_tryget(kring, 1, &revents)))
3411 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
3412 netmap_ring_reinit(kring);
3415 /* now we can use kring->rcur, rtail */
3418 * transparent mode support: collect packets from
3419 * hw rxring(s) that have been released by the user
3421 if (nm_may_forward_up(kring)) {
3422 netmap_grab_packets(kring, &q, netmap_fwd);
3425 /* Clear the NR_FORWARD flag anyway, it may be set by
3426 * the nm_sync() below only for the host RX ring (see
3427 * netmap_rxsync_from_host()). */
3428 kring->nr_kflags &= ~NR_FORWARD;
3429 if (kring->nm_sync(kring, sync_flags))
3432 nm_sync_finalize(kring);
3433 send_down |= (kring->nr_kflags & NR_FORWARD);
3434 ring_timestamp_set(ring);
3435 found = kring->rcur != kring->rtail;
3441 kring->nm_notify(kring, 0);
3447 if (retry_rx && sr) {
3448 nm_os_selrecord(sr, si[NR_RX]);
3451 if (send_down || retry_rx) {
3454 goto flush_tx; /* and retry_rx */
3461 * Transparent mode: released bufs (i.e. between kring->nr_hwcur and
3462 * ring->head) marked with NS_FORWARD on hw rx rings are passed up
3463 * to the host stack.
3467 netmap_send_up(na->ifp, &q);
3476 nma_intr_enable(struct netmap_adapter *na, int onoff)
3478 bool changed = false;
3483 for (i = 0; i < nma_get_nrings(na, t); i++) {
3484 struct netmap_kring *kring = NMR(na, t)[i];
3485 int on = !(kring->nr_kflags & NKR_NOINTR);
3487 if (!!onoff != !!on) {
3491 kring->nr_kflags &= ~NKR_NOINTR;
3493 kring->nr_kflags |= NKR_NOINTR;
3499 return 0; /* nothing to do */
3503 nm_prerr("Cannot %s interrupts for %s", onoff ? "enable" : "disable",
3508 na->nm_intr(na, onoff);
3514 /*-------------------- driver support routines -------------------*/
3516 /* default notify callback */
3518 netmap_notify(struct netmap_kring *kring, int flags)
3520 struct netmap_adapter *na = kring->notify_na;
3521 enum txrx t = kring->tx;
3523 nm_os_selwakeup(&kring->si);
3524 /* optimization: avoid a wake up on the global
3525 * queue if nobody has registered for more than one ring. */
3528 if (na->si_users[t] > 0)
3529 nm_os_selwakeup(&na->si[t]);
3531 return NM_IRQ_COMPLETED;
3534 /* called by all routines that create netmap_adapters.
3535 * provide some defaults and get a reference to the memory allocator. */
3539 netmap_attach_common(struct netmap_adapter *na)
3541 if (!na->rx_buf_maxsize) {
3542 /* Set a conservative default (larger is safer). */
3543 na->rx_buf_maxsize = PAGE_SIZE;
3547 if (na->na_flags & NAF_HOST_RINGS && na->ifp) {
3548 na->if_input = na->ifp->if_input; /* for netmap_send_up */
3550 na->pdev = na; /* make sure netmap_mem_map() is called */
3551 #endif /* __FreeBSD__ */
3552 if (na->na_flags & NAF_HOST_RINGS) {
3553 if (na->num_host_rx_rings == 0)
3554 na->num_host_rx_rings = 1;
3555 if (na->num_host_tx_rings == 0)
3556 na->num_host_tx_rings = 1;
3558 if (na->nm_krings_create == NULL) {
3559 /* we assume that we have been called by a driver,
3560 * since other port types all provide their own nm_krings_create. */
3563 na->nm_krings_create = netmap_hw_krings_create;
3564 na->nm_krings_delete = netmap_hw_krings_delete;
3566 if (na->nm_notify == NULL)
3567 na->nm_notify = netmap_notify;
3570 if (na->nm_mem == NULL) {
3571 /* use the global allocator */
3572 na->nm_mem = netmap_mem_get(&nm_mem);
3575 if (na->nm_bdg_attach == NULL)
3576 /* no special nm_bdg_attach callback. On VALE
3577 * attach, we need to interpose a bwrap
3579 na->nm_bdg_attach = netmap_default_bdg_attach;
3585 /* Wrapper for the register callback provided by netmap-enabled hardware drivers.
3587 * nm_iszombie(na) means that the driver module has been
3588 * unloaded, so we cannot call into it.
3589 * nm_os_ifnet_lock() must guarantee mutual exclusion with
3593 netmap_hw_reg(struct netmap_adapter *na, int onoff)
3595 struct netmap_hw_adapter *hwna =
3596 (struct netmap_hw_adapter*)na;
3601 if (nm_iszombie(na)) {
3604 } else if (na != NULL) {
3605 na->na_flags &= ~NAF_NETMAP_ON;
3610 error = hwna->nm_hw_register(na, onoff);
3613 nm_os_ifnet_unlock();
3619 netmap_hw_dtor(struct netmap_adapter *na)
3621 if (na->ifp == NULL)
3624 NM_DETACH_NA(na->ifp);
3629 * Allocate a netmap_adapter object, and initialize it from the
3630 * 'arg' passed by the driver on attach.
3631 * We allocate a block of memory of 'size' bytes, which has room
3632 * for struct netmap_adapter plus additional room private to the caller.
3634 * Return 0 on success, ENOMEM otherwise. */
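/*
 * Illustrative driver-side sketch (assumed 'sc' softc fields and foo_*
 * callbacks): how a driver typically fills the template passed to
 * netmap_attach().
 */
#if 0
	struct netmap_adapter na;

	bzero(&na, sizeof(na));
	na.ifp = sc->ifp;
	na.num_tx_desc = sc->num_tx_desc;
	na.num_rx_desc = sc->num_rx_desc;
	na.num_tx_rings = na.num_rx_rings = sc->num_queues;
	na.nm_txsync = foo_netmap_txsync;	/* driver-provided callbacks */
	na.nm_rxsync = foo_netmap_rxsync;
	na.nm_register = foo_netmap_reg;
	netmap_attach(&na);
#endif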
3637 netmap_attach_ext(struct netmap_adapter *arg, size_t size, int override_reg)
3639 struct netmap_hw_adapter *hwna = NULL;
3640 struct ifnet *ifp = NULL;
3642 if (size < sizeof(struct netmap_hw_adapter)) {
3643 if (netmap_debug & NM_DEBUG_ON)
3644 nm_prerr("Invalid netmap adapter size %d", (int)size);
3648 if (arg == NULL || arg->ifp == NULL) {
3649 if (netmap_debug & NM_DEBUG_ON)
3650 nm_prerr("either arg or arg->ifp is NULL");
3654 if (arg->num_tx_rings == 0 || arg->num_rx_rings == 0) {
3655 if (netmap_debug & NM_DEBUG_ON)
3656 nm_prerr("%s: invalid rings tx %d rx %d",
3657 arg->name, arg->num_tx_rings, arg->num_rx_rings);
3662 if (NM_NA_CLASH(ifp)) {
3663 /* If NA(ifp) is not null but there is no valid netmap
3664 * adapter it means that someone else is using the same
3665 * pointer (e.g. ax25_ptr on linux). This happens for
3666 * instance when also PF_RING is in use. */
3667 nm_prerr("Error: netmap adapter hook is busy");
3671 hwna = nm_os_malloc(size);
3675 hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE;
3676 strlcpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name));
3678 hwna->nm_hw_register = hwna->up.nm_register;
3679 hwna->up.nm_register = netmap_hw_reg;
3681 if (netmap_attach_common(&hwna->up)) {
3685 netmap_adapter_get(&hwna->up);
3687 NM_ATTACH_NA(ifp, &hwna->up);
3689 nm_os_onattach(ifp);
3691 if (arg->nm_dtor == NULL) {
3692 hwna->up.nm_dtor = netmap_hw_dtor;
3695 if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n",
3696 hwna->up.num_tx_rings, hwna->up.num_tx_desc,
3697 hwna->up.num_rx_rings, hwna->up.num_rx_desc);
3701 nm_prerr("fail, arg %p ifp %p na %p", arg, ifp, hwna);
3702 return (hwna ? EINVAL : ENOMEM);
3707 netmap_attach(struct netmap_adapter *arg)
3709 return netmap_attach_ext(arg, sizeof(struct netmap_hw_adapter),
3710 1 /* override nm_reg */);
3715 NM_DBG(netmap_adapter_get)(struct netmap_adapter *na)
3721 refcount_acquire(&na->na_refcount);
3725 /* returns 1 iff the netmap_adapter is destroyed */
3727 NM_DBG(netmap_adapter_put)(struct netmap_adapter *na)
3732 if (!refcount_release(&na->na_refcount))
3738 if (na->tx_rings) { /* XXX should not happen */
3739 if (netmap_debug & NM_DEBUG_ON)
3740 nm_prerr("freeing leftover tx_rings");
3741 na->nm_krings_delete(na);
3743 netmap_pipe_dealloc(na);
3745 netmap_mem_put(na->nm_mem);
3746 bzero(na, sizeof(*na));
3752 /* nm_krings_create callback for all hardware native adapters */
3754 netmap_hw_krings_create(struct netmap_adapter *na)
3756 int ret = netmap_krings_create(na, 0);
3758 /* initialize the mbq for the sw rx ring */
3759 u_int lim = netmap_real_rings(na, NR_RX), i;
3760 for (i = na->num_rx_rings; i < lim; i++) {
3761 mbq_safe_init(&NMR(na, NR_RX)[i]->rx_queue);
3763 ND("initialized sw rx queue %d", na->num_rx_rings);
3771 * Called on module unload by the netmap-enabled drivers
3774 netmap_detach(struct ifnet *ifp)
3776 struct netmap_adapter *na = NA(ifp);
3782 netmap_set_all_rings(na, NM_KR_LOCKED);
3784 * if the netmap adapter is not native, somebody
3785 * changed it, so we can not release it here.
3786 * The NAF_ZOMBIE flag will notify the new owner that
3787 * the driver is gone.
3789 if (!(na->na_flags & NAF_NATIVE) || !netmap_adapter_put(na)) {
3790 na->na_flags |= NAF_ZOMBIE;
3792 /* give active users a chance to notice that NAF_ZOMBIE has been
3793 * turned on, so that they can stop and return an error to userspace.
3794 * Note that this becomes a NOP if there are no active users and,
3795 * therefore, the put() above has deleted the na, since now NA(ifp) is
3798 netmap_enable_all_rings(ifp);
3804 * Intercept packets from the network stack and pass them
3805 * to netmap as incoming packets on the 'software' ring.
3807 * We only store packets in a bounded mbq and then copy them
3808 * in the relevant rxsync routine.
3810 * We rely on the OS to make sure that the ifp and na do not go
3811 * away (typically the caller checks for IFF_DRV_RUNNING or the like).
3812 * In nm_register() or whenever there is a reinitialization,
3813 * we make sure to make the mode change visible here.
3816 netmap_transmit(struct ifnet *ifp, struct mbuf *m)
3818 struct netmap_adapter *na = NA(ifp);
3819 struct netmap_kring *kring, *tx_kring;
3820 u_int len = MBUF_LEN(m);
3821 u_int error = ENOBUFS;
3828 if (i >= na->num_host_rx_rings) {
3829 i = i % na->num_host_rx_rings;
3831 kring = NMR(na, NR_RX)[nma_get_nrings(na, NR_RX) + i];
3833 // XXX [Linux] we do not need this lock
3834 // if we follow the down/configure/up protocol -gl
3835 // mtx_lock(&na->core_lock);
3837 if (!nm_netmap_on(na)) {
3838 nm_prerr("%s not in netmap mode anymore", na->name);
3844 if (txr >= na->num_tx_rings) {
3845 txr %= na->num_tx_rings;
3847 tx_kring = NMR(na, NR_TX)[txr];
3849 if (tx_kring->nr_mode == NKR_NETMAP_OFF) {
3850 return MBUF_TRANSMIT(na, ifp, m);
3853 q = &kring->rx_queue;
3855 // XXX reconsider long packets if we handle fragments
3856 if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */
3857 nm_prerr("%s from_host, drop packet size %d > %d", na->name,
3858 len, NETMAP_BUF_SIZE(na));
3862 if (!netmap_generic_hwcsum) {
3863 if (nm_os_mbuf_has_csum_offld(m)) {
3864 RD(1, "%s drop mbuf that needs checksum offload", na->name);
3869 if (nm_os_mbuf_has_seg_offld(m)) {
3870 RD(1, "%s drop mbuf that needs generic segmentation offload", na->name);
3875 ETHER_BPF_MTAP(ifp, m);
3876 #endif /* __FreeBSD__ */
3878 /* protect against netmap_rxsync_from_host(), netmap_sw_to_nic()
3879 * and maybe other instances of netmap_transmit (the latter
3880 * not possible on Linux).
3881 * We enqueue the mbuf only if we are sure there is going to be
3882 * enough room in the host RX ring, otherwise we drop it. */
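/*
 * Worked example (illustrative numbers): with 1024 slots, hwcur = 100
 * and hwtail = 90, busy below is 90 - 100 + 1024 = 1014; with 10 mbufs
 * already queued, 1014 + 10 >= 1023 holds and the packet is dropped.
 */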
3886 busy = kring->nr_hwtail - kring->nr_hwcur;
3888 busy += kring->nkr_num_slots;
3889 if (busy + mbq_len(q) >= kring->nkr_num_slots - 1) {
3890 RD(2, "%s full hwcur %d hwtail %d qlen %d", na->name,
3891 kring->nr_hwcur, kring->nr_hwtail, mbq_len(q));
3894 ND(2, "%s %d bufs in queue", na->name, mbq_len(q));
3895 /* notify outside the lock */
3904 /* unconditionally wake up listeners */
3905 kring->nm_notify(kring, 0);
3906 /* this is normally netmap_notify(), but for nics
3907 * connected to a bridge it is netmap_bwrap_intr_notify(),
3908 * which possibly forwards the frames through the switch
3916 * netmap_reset() is called by the driver routines when reinitializing
3917 * a ring. The driver is in charge of locking to protect the kring.
3918 * If native netmap mode is not set just return NULL.
3919 * If native netmap mode is set, in particular, we have to set nr_mode to NKR_NETMAP_ON. */
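/*
 * Worked example (illustrative numbers): with 512 slots (lim = 511),
 * nr_hwcur = 10 and the hardware restarting from descriptor 0
 * (new_cur = 0), new_hwofs below becomes 10, so unchanged userspace
 * ring indexes keep mapping to the right hardware descriptors.
 */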
3922 struct netmap_slot *
3923 netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n,
3926 struct netmap_kring *kring;
3929 if (!nm_native_on(na)) {
3930 ND("interface not in native netmap mode");
3931 return NULL; /* nothing to reinitialize */
3934 /* XXX note- in the new scheme, we are not guaranteed to be
3935 * under lock (e.g. when called on a device reset).
3936 * In this case, we should set a flag and do not trust too
3937 * much the values. In practice: TODO
3938 * - set a RESET flag somewhere in the kring
3939 * - do the processing in a conservative way
3940 * - let the *sync() fixup at the end.
3943 if (n >= na->num_tx_rings)
3946 kring = na->tx_rings[n];
3948 if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
3949 kring->nr_mode = NKR_NETMAP_OFF;
3953 // XXX check whether we should use hwcur or rcur
3954 new_hwofs = kring->nr_hwcur - new_cur;
3956 if (n >= na->num_rx_rings)
3958 kring = na->rx_rings[n];
3960 if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
3961 kring->nr_mode = NKR_NETMAP_OFF;
3965 new_hwofs = kring->nr_hwtail - new_cur;
3967 lim = kring->nkr_num_slots - 1;
3968 if (new_hwofs > lim)
3969 new_hwofs -= lim + 1;
3971 /* Always set the new offset value and realign the ring. */
3972 if (netmap_debug & NM_DEBUG_ON)
3973 nm_prinf("%s %s%d hwofs %d -> %d, hwtail %d -> %d",
3975 tx == NR_TX ? "TX" : "RX", n,
3976 kring->nkr_hwofs, new_hwofs,
3978 tx == NR_TX ? lim : kring->nr_hwtail);
3979 kring->nkr_hwofs = new_hwofs;
3981 kring->nr_hwtail = kring->nr_hwcur + lim;
3982 if (kring->nr_hwtail > lim)
3983 kring->nr_hwtail -= lim + 1;
3987 * Wakeup on the individual and global selwait
3988 * We do the wakeup here, but the ring is not yet reconfigured.
3989 * However, we are under lock so there are no races.
3991 kring->nr_mode = NKR_NETMAP_ON;
3992 kring->nm_notify(kring, 0);
3993 return kring->ring->slot;
3998 * Dispatch rx/tx interrupts to the netmap rings.
4000 * "work_done" is non-null on the RX path, NULL for the TX path.
4001 * We rely on the OS to make sure that there is only one active
4002 * instance per queue, and that there is appropriate locking.
4004 * The 'notify' routine depends on what the ring is attached to.
4005 * - for a netmap file descriptor, do a selwakeup on the individual
4006 * waitqueue, plus one on the global one if needed
4007 * (see netmap_notify)
4008 * - for a nic connected to a switch, call the proper forwarding routine
4009 * (see netmap_bwrap_intr_notify)
4012 netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done)
4014 struct netmap_kring *kring;
4015 enum txrx t = (work_done ? NR_RX : NR_TX);
4017 q &= NETMAP_RING_MASK;
4019 if (netmap_debug & (NM_DEBUG_RXINTR|NM_DEBUG_TXINTR)) {
4020 nm_prlim(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
4023 if (q >= nma_get_nrings(na, t))
4024 return NM_IRQ_PASS; // not a physical queue
4026 kring = NMR(na, t)[q];
4028 if (kring->nr_mode == NKR_NETMAP_OFF) {
4033 kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ?
4034 *work_done = 1; /* do not fire napi again */
4037 return kring->nm_notify(kring, 0);
4042 * Default functions to handle rx/tx interrupts from a physical device.
4043 * "work_done" is non-null on the RX path, NULL for the TX path.
4045 * If the card is not in netmap mode, simply return NM_IRQ_PASS,
4046 * so that the caller proceeds with regular processing.
4047 * Otherwise call netmap_common_irq().
4049 * If the card is connected to a netmap file descriptor,
4050 * do a selwakeup on the individual queue, plus one on the global one
4051 * if needed (multiqueue card _and_ there are multiqueue listeners),
4052 * and return NR_IRQ_COMPLETED.
4054 * Finally, if called on rx from an interface connected to a switch,
4055 * calls the proper forwarding routine. */
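/*
 * Illustrative driver-side sketch (assumed driver variables): how a
 * NIC RX interrupt handler typically hands a queue over to netmap.
 */
#if 0
	u_int work_done = 0;

	if (netmap_rx_irq(adapter->ifp, rxq_index, &work_done) != NM_IRQ_PASS)
		return;	/* netmap served this queue; skip the regular path */
	/* ... otherwise continue with the standard RX processing ... */
#endif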
4058 netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done)
4060 struct netmap_adapter *na = NA(ifp);
4063 * XXX emulated netmap mode sets NAF_SKIP_INTR so
4064 * we still use the regular driver even though the previous
4065 * check fails. It is unclear whether we should use
4066 * nm_native_on() here.
4068 if (!nm_netmap_on(na))
4071 if (na->na_flags & NAF_SKIP_INTR) {
4072 ND("use regular interrupt");
4076 return netmap_common_irq(na, q, work_done);
4079 /* set/clear native flags and if_transmit/netdev_ops */
4081 nm_set_native_flags(struct netmap_adapter *na)
4083 struct ifnet *ifp = na->ifp;
4085 /* We do the setup for intercepting packets only if we are the
4086 * first user of this adapter. */
4087 if (na->active_fds > 0) {
4091 na->na_flags |= NAF_NETMAP_ON;
4093 nm_update_hostrings_mode(na);
4097 nm_clear_native_flags(struct netmap_adapter *na)
4099 struct ifnet *ifp = na->ifp;
4101 /* We undo the setup for intercepting packets only if we are the
4102 * last user of this adapter. */
4103 if (na->active_fds > 0) {
4107 nm_update_hostrings_mode(na);
4110 na->na_flags &= ~NAF_NETMAP_ON;
4114 * Module loader and unloader
4116 * netmap_init() creates the /dev/netmap device and initializes
4117 * all global variables. Returns 0 on success, errno on failure
4118 * (though failure is not really expected).
4120 * netmap_fini() destroys everything.
4123 static struct cdev *netmap_dev; /* /dev/netmap character device. */
4124 extern struct cdevsw netmap_cdevsw;
4131 destroy_dev(netmap_dev);
4132 /* we assume that there are no netmap users left */
4134 netmap_uninit_bridges();
4137 nm_prinf("netmap: unloaded module.");
4148 error = netmap_mem_init();
4152 * MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls
4153 * when the module is compiled in.
4154 * XXX could use make_dev_credv() to get error number
4156 netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD,
4157 &netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600,
4162 error = netmap_init_bridges();
4167 nm_os_vi_init_index();
4170 error = nm_os_ifnet_init();
4174 nm_prinf("netmap: loaded module");
4178 return (EINVAL); /* may be incorrect */