 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
 *
 * Copyright (C) 2011-2014 Matteo Landi
 * Copyright (C) 2011-2016 Luigi Rizzo
 * Copyright (C) 2011-2016 Giuseppe Lettieri
 * Copyright (C) 2011-2016 Vincenzo Maffione
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 * This module supports memory mapped access to network devices,
 * see netmap(4).
 *
 * The module uses a large memory pool allocated by the kernel
 * and accessible as mmapped memory by multiple userspace threads/processes.
 * The memory pool contains packet buffers and "netmap rings",
 * i.e. user-accessible copies of the interface's queues.
 * Access to the network card works like this:
 * 1. a process/thread issues one or more open() on /dev/netmap, to create
 *    a select()able file descriptor on which events are reported.
 * 2. on each descriptor, the process issues an ioctl() to identify
 *    the interface that should report events to the file descriptor.
 * 3. on each descriptor, the process issues an mmap() request to
 *    map the shared memory region within the process' address space.
 *    The list of interesting queues is indicated by a location in
 *    the shared memory region.
 * 4. using the functions in the netmap(4) userspace API, a process
 *    can look up the occupation state of a queue, access memory buffers,
 *    and retrieve received packets or enqueue packets to transmit.
 * 5. using some ioctl()s the process can synchronize the userspace view
 *    of the queue with the actual status in the kernel. This includes both
 *    receiving the notification of new packets, and transmitting new
 *    packets on the output interface.
 * 6. select() or poll() can be used to wait for events on individual
 *    transmit or receive queues (or all queues for a given interface).
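 *
 * As an illustration of steps 1-6, a minimal userspace receiver could look
 * like the sketch below (not part of this module; error handling omitted,
 * and "em0" is just an example interface name):
 *
 *	#include <sys/ioctl.h>
 *	#include <sys/mman.h>
 *	#include <fcntl.h>
 *	#include <poll.h>
 *	#include <string.h>
 *	#include <net/netmap.h>
 *	#include <net/netmap_user.h>
 *
 *	int fd = open("/dev/netmap", O_RDWR);		// step 1
 *	struct nmreq req;
 *	memset(&req, 0, sizeof(req));
 *	req.nr_version = NETMAP_API;
 *	strncpy(req.nr_name, "em0", sizeof(req.nr_name));
 *	ioctl(fd, NIOCREGIF, &req);			// step 2
 *	void *mem = mmap(0, req.nr_memsize,
 *	    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);	// step 3
 *	struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);
 *	struct netmap_ring *ring = NETMAP_RXRING(nifp, 0);	// step 4
 *	for (;;) {
 *		struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *		poll(&pfd, 1, -1);			// steps 5-6
 *		while (!nm_ring_empty(ring)) {
 *			struct netmap_slot *slot = &ring->slot[ring->cur];
 *			char *buf = NETMAP_BUF(ring, slot->buf_idx);
 *			// ... process slot->len bytes at buf ...
 *			ring->head = ring->cur = nm_ring_next(ring, ring->cur);
 *		}
 *	}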
		SYNCHRONIZATION (USER)

The netmap rings and data structures may be shared among multiple
user threads or even independent processes.
Any synchronization among those threads/processes is delegated
to the threads themselves. Only one thread at a time can be in
a system call on the same netmap ring. The OS does not enforce
this and only guarantees against system crashes in case of
invalid usage.

		LOCKING (INTERNAL)

Within the kernel, access to the netmap rings is protected as follows:
- a spinlock on each ring, to handle producer/consumer races on
  RX rings attached to the host stack (against multiple host
  threads writing from the host stack to the same ring),
  and on 'destination' rings attached to a VALE switch
  (i.e. RX rings in VALE ports, and TX rings in NIC/host ports),
  protecting multiple active senders for the same destination.
- an atomic variable to guarantee that there is at most one
  instance of *_*xsync() on the ring at any time.
  For rings connected to user file
  descriptors, an atomic_test_and_set() protects this, and the
  lock on the ring is not actually used.
  For NIC RX rings connected to a VALE switch, an atomic_test_and_set()
  is also used to prevent multiple executions (the driver might indeed
  already guarantee this).
  For NIC TX rings connected to a VALE switch, the lock arbitrates
  access to the queue (both when allocating buffers and when pushing
  them out).
- *xsync() should be protected against initializations of the card.
  On FreeBSD most devices have the reset routine protected by
  a RING lock (ixgbe, igb, em) or core lock (re). lem is missing
  the RING protection on rx_reset(); this should be added.

  On linux there is an external lock on the tx path, which probably
  also arbitrates access to the reset routine. XXX to be revised

- a per-interface core_lock protecting access from the host stack
  while interfaces may be detached from netmap mode.
  XXX there should be no need for this lock if we detach the interfaces
  only while they are down.
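
As an illustration of the "at most one *xsync() at a time" guarantee,
a try-lock style guard can be sketched as follows (this is only a
sketch of the mechanism, not a quote of the actual code; the names
are used as placeholders):

	if (NM_ATOMIC_TEST_AND_SET(&kr->nr_busy))
		return EBUSY;		// another *xsync() is running
	kr->nm_sync(kr, flags);		// run the sync callback
	NM_ATOMIC_CLEAR(&kr->nr_busy);	// release the ring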
		--- VALE SWITCH ---

NMG_LOCK() serializes all modifications to switches and ports.
A switch cannot be deleted until all ports are gone.

For each switch, an SX lock (RWlock on linux) protects
deletion of ports. When configuring or deleting a port, the
lock is acquired in exclusive mode (after holding NMG_LOCK).
When forwarding, the lock is acquired in shared mode (without NMG_LOCK).
The lock is held throughout the entire forwarding cycle,
during which the thread may incur a page fault.
Hence it is important that sleepable shared locks are used.
On the rx ring, the per-port lock is grabbed initially to reserve
a number of slots in the ring, then the lock is released,
packets are copied from source to destination, and then
the lock is acquired again and the receive ring is updated.
(A similar thing is done on the tx ring for NIC and host stack
ports attached to the switch)

 */
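
/*
 * A compressed sketch of that two-phase reserve/publish scheme
 * (illustrative only; the lock and helper names below are placeholders,
 * not the real functions):
 *
 *	mtx_lock(&port->lock);
 *	first = reserve_slots(kring, n);    // just advance a private cursor
 *	mtx_unlock(&port->lock);
 *	copy_packets(src, kring, first, n); // no lock held, may page-fault
 *	mtx_lock(&port->lock);
 *	publish_slots(kring, first, n);     // expose the new slots, in order
 *	mtx_unlock(&port->lock);
 */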
/* --- internals ----
 *
 * Roadmap to the code that implements the above.
 *
 * > 1. a process/thread issues one or more open() on /dev/netmap, to create
 * >    a select()able file descriptor on which events are reported.
 *
 * Internally, we allocate a netmap_priv_d structure, that will be
 * initialized on ioctl(NIOCREGIF). There is one netmap_priv_d
 * structure for each open().
 *
 *	FreeBSD: see netmap_open() (netmap_freebsd.c)
 *	linux:   see linux_netmap_open() (netmap_linux.c)
 * > 2. on each descriptor, the process issues an ioctl() to identify
 * >    the interface that should report events to the file descriptor.
 *
 * Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0.
 * Most important things happen in netmap_get_na() and
 * netmap_do_regif(), called from there. Additional details can be
 * found in the comments above those functions.
 *
 * In all cases, this action creates/takes-a-reference-to a
 * netmap_*_adapter describing the port, and allocates a netmap_if
 * and all necessary netmap rings, filling them with netmap buffers.
 *
 * In this phase, the sync callbacks for each ring are set (these are used
 * in steps 5 and 6 below). The callbacks depend on the type of adapter.
 * The adapter creation/initialization code puts them in the
 * netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they
 * are copied from there to the netmap_kring's during netmap_do_regif(), by
 * the nm_krings_create() callback. All the nm_krings_create callbacks
 * actually call netmap_krings_create() to perform this and the other
 * common stuff. netmap_krings_create() also takes care of the host rings,
 * if needed, by setting their sync callbacks appropriately.
 * Additional actions depend on the kind of netmap_adapter that has been
 * registered:
 *
 * - netmap_hw_adapter:		[netmap.c]
 *	This is a system netdev/ifp with native netmap support.
 *	The ifp is detached from the host stack by redirecting:
 *	  - transmissions (from the network stack) to netmap_transmit()
 *	  - receive notifications to the nm_notify() callback for
 *	    this adapter. The callback is normally netmap_notify(), unless
 *	    the ifp is attached to a bridge using bwrap, in which case it
 *	    is netmap_bwrap_intr_notify().
 *
 * - netmap_generic_adapter:	[netmap_generic.c]
 *	A system netdev/ifp without native netmap support.
 *
 * (the decision about native/non native support is taken in
 *  netmap_get_hw_na(), called by netmap_get_na())
 *
 * - netmap_vp_adapter		[netmap_vale.c]
 *	Returned by netmap_get_bdg_na().
 *	This is a persistent or ephemeral VALE port. Ephemeral ports
 *	are created on the fly if they don't already exist, and are
 *	always attached to a bridge.
 *	Persistent VALE ports must be created separately, and are
 *	then attached like normal NICs. The NIOCREGIF we are examining
 *	will find them only if they had previously been created and
 *	attached (see VALE_CTL below).
 *
 * - netmap_pipe_adapter	[netmap_pipe.c]
 *	Returned by netmap_get_pipe_na().
 *	Both pipe ends are created, if they didn't already exist.
 *
 * - netmap_monitor_adapter	[netmap_monitor.c]
 *	Returned by netmap_get_monitor_na().
 *	If successful, the nm_sync callbacks of the monitored adapter
 *	will be intercepted by the returned monitor.
 *
 * - netmap_bwrap_adapter	[netmap_vale.c]
 *	Cannot be obtained in this way, see VALE_CTL below
 *	linux: we first go through linux_netmap_ioctl() to
 *	       adapt the FreeBSD interface to the linux one.
 * > 3. on each descriptor, the process issues an mmap() request to
 * >    map the shared memory region within the process' address space.
 * >    The list of interesting queues is indicated by a location in
 * >    the shared memory region.
 *
 *	FreeBSD: netmap_mmap_single (netmap_freebsd.c).
 *	linux:   linux_netmap_mmap (netmap_linux.c).
 *
 * > 4. using the functions in the netmap(4) userspace API, a process
 * >    can look up the occupation state of a queue, access memory buffers,
 * >    and retrieve received packets or enqueue packets to transmit.
 *
 *	these actions do not involve the kernel.
 * > 5. using some ioctl()s the process can synchronize the userspace view
 * >    of the queue with the actual status in the kernel. This includes both
 * >    receiving the notification of new packets, and transmitting new
 * >    packets on the output interface.
 *
 * These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
 * cases. They invoke the nm_sync callbacks on the netmap_kring
 * structures, as initialized in step 2 and maybe later modified
 * by a monitor. Monitors, however, will always call the original
 * callback before doing anything else.
 *
 * > 6. select() or poll() can be used to wait for events on individual
 * >    transmit or receive queues (or all queues for a given interface).
 *
 * Implemented in netmap_poll(). This will call the same nm_sync()
 * callbacks as in step 5 above.
 *
 *	linux: we first go through linux_netmap_poll() to adapt
 *	       the FreeBSD interface to the linux one.
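 *
 * For instance (an illustrative sketch, error handling omitted), a
 * process can flush pending transmissions on its bound rings with:
 *
 *	ioctl(fd, NIOCTXSYNC, NULL);	// runs the tx nm_sync callbacks
 *
 * while a blocking poll() on the same fd goes through netmap_poll()
 * and ends up in the same callbacks.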
 * ---- VALE_CTL -----
 *
 * VALE switches are controlled by issuing a NIOCREGIF with a non-null
 * nr_cmd in the nmreq structure. These subcommands are handled by
 * netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created
 * and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF
 * subcommands, respectively.
 *
 * Any network interface known to the system (including a persistent VALE
 * port) can be attached to a VALE switch by issuing the
 * NETMAP_REQ_VALE_ATTACH command. After the attachment, persistent VALE ports
 * look exactly like ephemeral VALE ports (as created in step 2 above). The
 * attachment of other interfaces, instead, requires the creation of a
 * netmap_bwrap_adapter. Moreover, the attached interface must be put in
 * netmap mode. This may require the creation of a netmap_generic_adapter if
 * we have no native support for the interface, or if generic adapters have
 * been forced by sysctl.
 *
 * Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(),
 * called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach()
 * callback.  In the case of the bwrap, the callback creates the
 * netmap_bwrap_adapter.  The initialization of the bwrap is then
 * completed by calling netmap_do_regif() on it, in the nm_bdg_ctl()
 * callback (netmap_bwrap_bdg_ctl in netmap_vale.c).
 * A generic adapter for the wrapped ifp will be created if needed, when
 * netmap_get_bdg_na() calls netmap_get_hw_na().
 * ---- DATAPATHS -----
 *
 *              -= SYSTEM DEVICE WITH NATIVE SUPPORT =-
 *
 *    na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach()
 *
 *    - tx from netmap userspace:
 *	 concurrently:
 *           1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == DEVICE_netmap_txsync()
 *           2) device interrupt handler
 *                na->nm_notify()  == netmap_notify()
 *    - rx from netmap userspace:
 *       concurrently:
 *           1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == DEVICE_netmap_rxsync()
 *           2) device interrupt handler
 *                na->nm_notify()  == netmap_notify()
 *    - rx from host stack
 *       concurrently:
 *           1) host stack
 *                netmap_transmit()
 *                  na->nm_notify  == netmap_notify()
 *           2) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == netmap_rxsync_from_host
 *                  netmap_rxsync_from_host(na, NULL, NULL)
 *    - tx to host stack
 *           ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *             kring->nm_sync() == netmap_txsync_to_host
 *               netmap_txsync_to_host(na)
 *                 nm_os_send_up()
 *                   FreeBSD: na->if_input() == ether_input()
 *                   linux: netif_rx() with NM_MAGIC_PRIORITY_RX
 *
 *              -= SYSTEM DEVICE WITH GENERIC SUPPORT =-
 *
 *    na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach()
 *
 *    - tx from netmap userspace:
 *       concurrently:
 *           1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *               kring->nm_sync() == generic_netmap_txsync()
 *                   nm_os_generic_xmit_frame()
 *                       linux:   dev_queue_xmit() with NM_MAGIC_PRIORITY_TX
 *                           ifp->ndo_start_xmit == generic_ndo_start_xmit()
 *                               gna->save_start_xmit == orig. dev. start_xmit
 *                       FreeBSD: na->if_transmit() == orig. dev if_transmit
 *           2) generic_mbuf_destructor()
 *                   na->nm_notify() == netmap_notify()
 *    - rx from netmap userspace:
 *           1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                   kring->nm_sync() == generic_netmap_rxsync()
 *           2) device driver:
 *                   generic_rx_handler()
 *                       na->nm_notify() == netmap_notify()
 *    - rx from host stack
 *        FreeBSD: same as native
 *        Linux: same as native except:
 *           1) host stack
 *               dev_queue_xmit() without NM_MAGIC_PRIORITY_TX
 *                   ifp->ndo_start_xmit == generic_ndo_start_xmit()
 *                       netmap_transmit()
 *                           na->nm_notify() == netmap_notify()
 *    - tx to host stack (same as native):
 *
 *
 *                           -= VALE =-
 *
 *   INCOMING:
 *
 *      -- vale ports:
 *            ioctl(NIOCTXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == netmap_vp_txsync()
 *
 *      -- system device with native support:
 *         from cable:
 *             interrupt
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
 *                     kring->nm_sync() == DEVICE_netmap_rxsync()
 *                     netmap_vp_txsync()
 *                        kring->nm_sync() == DEVICE_netmap_rxsync()
 *         from host stack:
 *             netmap_transmit()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
 *                     kring->nm_sync() == netmap_rxsync_from_host()
 *                     netmap_vp_txsync()
 *
 *      -- system device with generic support:
 *         from device driver:
 *            generic_rx_handler()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
 *                     kring->nm_sync() == generic_netmap_rxsync()
 *                     netmap_vp_txsync()
 *                        kring->nm_sync() == generic_netmap_rxsync()
 *         from host stack:
 *            netmap_transmit()
 *                na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
 *                     kring->nm_sync() == netmap_rxsync_from_host()
 *                     netmap_vp_txsync()
 *
 *      (all cases) --> nm_bdg_flush()
 *                         dest_na->nm_notify() == (see below)
 *
 *   OUTGOING:
 *
 *      -- vale ports:
 *         concurrently:
 *             1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
 *                kring->nm_sync() == netmap_vp_rxsync()
 *             2) from nm_bdg_flush()
 *                na->nm_notify() == netmap_notify()
 *
 *      -- system device with native support:
 *          to cable:
 *             na->nm_notify() == netmap_bwrap_notify()
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == DEVICE_netmap_txsync()
 *                 netmap_vp_rxsync()
 *          to host stack:
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == netmap_txsync_to_host
 *                 netmap_vp_rxsync_locked()
 *
 *      -- system device with generic adapter:
 *          to device driver:
 *             na->nm_notify() == netmap_bwrap_notify()
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == generic_netmap_txsync()
 *                 netmap_vp_rxsync()
 *          to host stack:
 *                 netmap_vp_rxsync()
 *                 kring->nm_sync() == netmap_txsync_to_host
 *                 netmap_vp_rxsync_locked()
 *
 */
/*
 * OS-specific code that is used only within this file.
 * Other OS-specific code that must be accessed by drivers
 * is present in netmap_kern.h
 */
#if defined(__FreeBSD__)
#include <sys/cdefs.h> /* prerequisite */
#include <sys/types.h>
#include <sys/errno.h>
#include <sys/param.h>	/* defines used in kernel.h */
#include <sys/kernel.h>	/* types used in module initialization */
#include <sys/conf.h>	/* cdevsw struct, UID, GID */
#include <sys/filio.h>	/* FIONBIO */
#include <sys/sockio.h>
#include <sys/socketvar.h>	/* struct socket */
#include <sys/malloc.h>
#include <sys/poll.h>
#include <sys/rwlock.h>
#include <sys/socket.h> /* sockaddrs */
#include <sys/selinfo.h>
#include <sys/sysctl.h>
#include <sys/jail.h>
#include <net/vnet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/bpf.h>		/* BIOCIMMEDIATE */
#include <machine/bus.h>	/* bus_dmamap_* */
#include <sys/endian.h>
#include <sys/refcount.h>
#include <net/ethernet.h>	/* ETHER_BPF_MTAP */

#elif defined(linux)

#include "bsd_glue.h"

#elif defined(__APPLE__)

#warning OSX support is only partial
#include "osx_glue.h"

#elif defined (_WIN32)

#include "win_glue.h"

#else

#error	Unsupported platform

#endif /* unsupported */

/*
 * common headers
 */
#include <net/netmap.h>
#include <dev/netmap/netmap_kern.h>
#include <dev/netmap/netmap_mem2.h>
/* user-controlled variables */
int netmap_verbose;
#ifdef CONFIG_NETMAP_DEBUG
int netmap_debug;
#endif /* CONFIG_NETMAP_DEBUG */

static int netmap_no_timestamp; /* don't timestamp on rxsync */
int netmap_no_pendintr = 1;
int netmap_txsync_retry = 2;
static int netmap_fwd = 0;	/* force transparent forwarding */
/*
 * netmap_admode selects the netmap mode to use.
 * Invalid values are reset to NETMAP_ADMODE_BEST
 */
enum {	NETMAP_ADMODE_BEST = 0,	/* use native, fallback to generic */
	NETMAP_ADMODE_NATIVE,	/* either native or none */
	NETMAP_ADMODE_GENERIC,	/* force generic */
	NETMAP_ADMODE_LAST };
static int netmap_admode = NETMAP_ADMODE_BEST;
/* netmap_generic_mit controls mitigation of RX notifications for
 * the generic netmap adapter. The value is a time interval in
 * nanoseconds. */
int netmap_generic_mit = 100*1000;
/* We use by default netmap-aware qdiscs with generic netmap adapters,
 * even if there can be a little performance hit with hardware NICs.
 * However, using the qdisc is the safer approach, for two reasons:
 * 1) it prevents non-fifo qdiscs from breaking the TX notification
 *    scheme, which is based on mbuf destructors when txqdisc is
 *    used.
 * 2) it makes it possible to transmit over software devices that
 *    change skb->dev, like bridge, veth, ...
 *
 * Anyway, users looking for the best performance should
 * use native adapters.
 */
int netmap_generic_txqdisc = 1;
/* Default number of slots and queues for generic adapters. */
int netmap_generic_ringsize = 1024;
int netmap_generic_rings = 1;

/* Non-zero to enable checksum offloading in NIC drivers */
int netmap_generic_hwcsum = 0;

/* Non-zero if ptnet devices are allowed to use virtio-net headers. */
int ptnet_vnet_hdr = 1;
/*
 * SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated
 * in some other operating systems
 */
SYSBEGIN(main_init);

SYSCTL_DECL(_dev_netmap);
SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
		CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
#ifdef CONFIG_NETMAP_DEBUG
SYSCTL_INT(_dev_netmap, OID_AUTO, debug,
		CTLFLAG_RW, &netmap_debug, 0, "Debug messages");
#endif /* CONFIG_NETMAP_DEBUG */
SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
		CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr, CTLFLAG_RW, &netmap_no_pendintr,
		0, "Always look for new received packets.");
SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
		&netmap_txsync_retry, 0, "Number of txsync loops in bridge's flush.");

SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0,
		"Force NR_FORWARD mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0,
		"Adapter mode. 0 selects the best option available, "
		"1 forces native adapter, 2 forces emulated adapter");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_hwcsum, CTLFLAG_RW, &netmap_generic_hwcsum,
		0, "Hardware checksums. 0 to disable checksum generation by the NIC (default), "
		"1 to enable checksum generation by the NIC");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit,
		0, "RX notification interval in nanoseconds");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW,
		&netmap_generic_ringsize, 0,
		"Number of per-ring slots for emulated netmap mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW,
		&netmap_generic_rings, 0,
		"Number of TX/RX queues for emulated netmap adapters");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW,
		&netmap_generic_txqdisc, 0, "Use qdisc for generic adapters");
SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr,
		0, "Allow ptnet devices to use virtio-net headers");

SYSEND;
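
/*
 * For example (illustrative only, from a root shell), the knobs declared
 * above can be tuned at runtime with sysctl(8):
 *
 *	sysctl dev.netmap.admode=2		# force emulated adapters
 *	sysctl dev.netmap.generic_ringsize=4096	# larger emulated rings
 */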
NMG_LOCK_T	netmap_global_lock;

/*
 * mark the ring as stopped, and run through the locks
 * to make sure other users get to see it.
 * stopped must be either NM_KR_STOPPED (for unbounded stop)
 * or NM_KR_LOCKED (brief stop for mutual exclusion purposes)
 */
static void
netmap_disable_ring(struct netmap_kring *kr, int stopped)
{
	nm_kr_stop(kr, stopped);
	// XXX check if nm_kr_stop is sufficient
	mtx_lock(&kr->q_lock);
	mtx_unlock(&kr->q_lock);
	nm_kr_put(kr);
}
/* stop or enable a single ring */
void
netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
{
	if (stopped)
		netmap_disable_ring(NMR(na, t)[ring_id], stopped);
	else
		NMR(na, t)[ring_id]->nkr_stopped = 0;
}
/* stop or enable all the rings of na */
void
netmap_set_all_rings(struct netmap_adapter *na, int stopped)
{
	int i;
	enum txrx t;

	if (!nm_netmap_on(na))
		return;

	for_rx_tx(t) {
		for (i = 0; i < netmap_real_rings(na, t); i++) {
			netmap_set_ring(na, i, t, stopped);
		}
	}
}
/*
 * Convenience function used in drivers.  Waits for current txsync()s/rxsync()s
 * to finish and prevents any new one from starting.  Call this before turning
 * netmap mode off, or before removing the hardware rings (e.g., on module
 * unload).
 */
void
netmap_disable_all_rings(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		netmap_set_all_rings(NA(ifp), NM_KR_STOPPED);
	}
}
/*
 * Convenience function used in drivers.  Re-enables rxsync and txsync on the
 * adapter's rings.  In linux drivers, this should be placed near each
 * napi_enable().
 */
void
netmap_enable_all_rings(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		netmap_set_all_rings(NA(ifp), 0 /* enabled */);
	}
}
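
/*
 * Typical driver usage of the two helpers above (an illustrative sketch;
 * DEVICE_reinit_hw() stands for a driver's own reinitialization path):
 *
 *	netmap_disable_all_rings(ifp);	// quiesce netmap before the reset
 *	DEVICE_reinit_hw(sc);		// reconfigure rings/hardware
 *	netmap_enable_all_rings(ifp);	// let syncs run again
 */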
void
netmap_make_zombie(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		struct netmap_adapter *na = NA(ifp);
		netmap_set_all_rings(na, NM_KR_LOCKED);
		na->na_flags |= NAF_ZOMBIE;
		netmap_set_all_rings(na, 0);
	}
}

void
netmap_undo_zombie(struct ifnet *ifp)
{
	if (NM_NA_VALID(ifp)) {
		struct netmap_adapter *na = NA(ifp);
		if (na->na_flags & NAF_ZOMBIE) {
			netmap_set_all_rings(na, NM_KR_LOCKED);
			na->na_flags &= ~NAF_ZOMBIE;
			netmap_set_all_rings(na, 0);
		}
	}
}
/*
 * generic bound_checking function
 */
u_int
nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg)
{
	u_int oldv = *v;
	const char *op = NULL;

	if (dflt < lo)
		dflt = lo;
	if (dflt > hi)
		dflt = hi;
	if (oldv < lo) {
		*v = dflt;
		op = "Bump";
	} else if (oldv > hi) {
		*v = dflt;
		op = "Clamp";
	}
	if (op && msg)
		nm_prinf("%s %s to %d (was %d)", op, msg, *v, oldv);
	return *v;
}
/*
 * packet-dump function, user-supplied or static buffer.
 * The destination buffer must be at least 30+4*len
 */
const char *
nm_dump_buf(char *p, int len, int lim, char *dst)
{
	static char _dst[8192];
	int i, j, i0;
	static char hex[] ="0123456789abcdef";
	char *o;	/* output position */

#define P_HI(x)	hex[((x) & 0xf0)>>4]
#define P_LO(x)	hex[((x) & 0xf)]
#define P_C(x)	((x) >= 0x20 && (x) <= 0x7e ? (x) : '.')
	if (!dst)
		dst = _dst;
	if (lim <= 0 || lim > len)
		lim = len;
	o = dst;
	sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim);
	o += strlen(o);
	/* hexdump routine */
	for (i = 0; i < lim; ) {
		sprintf(o, "%5d: ", i);
		o += strlen(o);
		memset(o, ' ', 48);
		i0 = i;
		for (j=0; j < 16 && i < lim; i++, j++) {
			o[j*3] = P_HI(p[i]);
			o[j*3+1] = P_LO(p[i]);
		}
		i = i0;
		for (j=0; j < 16 && i < lim; i++, j++)
			o[j + 48] = P_C(p[i]);
		o[j+48] = '\n';
		o += j+49;
	}
	*o = '\0';
#undef P_HI
#undef P_LO
#undef P_C
	return dst;
}
/*
 * Fetch configuration from the device, to cope with dynamic
 * reconfigurations after loading the module.
 */
/* call with NMG_LOCK held */
int
netmap_update_config(struct netmap_adapter *na)
{
	struct nm_config_info info;

	bzero(&info, sizeof(info));
	if (na->nm_config == NULL ||
	    na->nm_config(na, &info)) {
		/* take whatever we had at init time */
		info.num_tx_rings = na->num_tx_rings;
		info.num_tx_descs = na->num_tx_desc;
		info.num_rx_rings = na->num_rx_rings;
		info.num_rx_descs = na->num_rx_desc;
		info.rx_buf_maxsize = na->rx_buf_maxsize;
	}

	if (na->num_tx_rings == info.num_tx_rings &&
	    na->num_tx_desc == info.num_tx_descs &&
	    na->num_rx_rings == info.num_rx_rings &&
	    na->num_rx_desc == info.num_rx_descs &&
	    na->rx_buf_maxsize == info.rx_buf_maxsize)
		return 0; /* nothing changed */
	if (na->active_fds == 0) {
		na->num_tx_rings = info.num_tx_rings;
		na->num_tx_desc = info.num_tx_descs;
		na->num_rx_rings = info.num_rx_rings;
		na->num_rx_desc = info.num_rx_descs;
		na->rx_buf_maxsize = info.rx_buf_maxsize;
		if (netmap_verbose)
			nm_prinf("configuration changed for %s: txring %d x %d, "
				"rxring %d x %d, rxbufsz %d",
				na->name, na->num_tx_rings, na->num_tx_desc,
				na->num_rx_rings, na->num_rx_desc, na->rx_buf_maxsize);
		return 0;
	}
	nm_prerr("WARNING: configuration changed for %s while active: "
		"txring %d x %d, rxring %d x %d, rxbufsz %d",
		na->name, info.num_tx_rings, info.num_tx_descs,
		info.num_rx_rings, info.num_rx_descs,
		info.rx_buf_maxsize);
	return 1;
}
/* nm_sync callbacks for the host rings */
static int netmap_txsync_to_host(struct netmap_kring *kring, int flags);
static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags);
/* create the krings array and initialize the fields common to all adapters.
 * The array layout is this:
 *
 *                    +----------+
 * na->tx_rings ----->|          | \
 *                    |          |  } na->num_tx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host tx kring
 * na->rx_rings ----> +----------+
 *                    |          | \
 *                    |          |  } na->num_rx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host rx kring
 *                    +----------+
 * na->tailroom ----->|          | \
 *                    |          |  } tailroom bytes
 *                    |          | /
 *                    +----------+
 *
 * Note: for compatibility, host krings are created even when not needed.
 * The tailroom space is currently used by vale ports for allocating leases.
 */
/* call with NMG_LOCK held */
int
netmap_krings_create(struct netmap_adapter *na, u_int tailroom)
{
	u_int i, len, ndesc;
	struct netmap_kring *kring;
	u_int n[NR_TXRX];
	enum txrx t;
	int err = 0;

	if (na->tx_rings != NULL) {
		if (netmap_debug & NM_DEBUG_ON)
			nm_prerr("warning: krings were already created");
		return 0;
	}

	/* account for the (possibly fake) host rings */
	n[NR_TX] = netmap_all_rings(na, NR_TX);
	n[NR_RX] = netmap_all_rings(na, NR_RX);

	len = (n[NR_TX] + n[NR_RX]) *
		(sizeof(struct netmap_kring) + sizeof(struct netmap_kring *))
		+ tailroom;

	na->tx_rings = nm_os_malloc((size_t)len);
	if (na->tx_rings == NULL) {
		nm_prerr("Cannot allocate krings");
		return ENOMEM;
	}
	na->rx_rings = na->tx_rings + n[NR_TX];
	na->tailroom = na->rx_rings + n[NR_RX];

	/* link the krings in the krings array */
	kring = (struct netmap_kring *)((char *)na->tailroom + tailroom);
	for (i = 0; i < n[NR_TX] + n[NR_RX]; i++) {
		na->tx_rings[i] = kring;
		kring++;
	}

	/*
	 * All fields in krings are 0 except the one initialized below.
	 * but better be explicit on important kring fields.
	 */
	for_rx_tx(t) {
		ndesc = nma_get_ndesc(na, t);
		for (i = 0; i < n[t]; i++) {
			kring = NMR(na, t)[i];
			bzero(kring, sizeof(*kring));
			kring->notify_na = na;
			kring->ring_id = i;
			kring->tx = t;
			kring->nkr_num_slots = ndesc;
			kring->nr_mode = NKR_NETMAP_OFF;
			kring->nr_pending_mode = NKR_NETMAP_OFF;
			if (i < nma_get_nrings(na, t)) {
				kring->nm_sync = (t == NR_TX ? na->nm_txsync : na->nm_rxsync);
			} else {
				if (!(na->na_flags & NAF_HOST_RINGS))
					kring->nr_kflags |= NKR_FAKERING;
				kring->nm_sync = (t == NR_TX ?
						netmap_txsync_to_host:
						netmap_rxsync_from_host);
			}
			kring->nm_notify = na->nm_notify;
			kring->rhead = kring->rcur = kring->nr_hwcur = 0;
			/*
			 * IMPORTANT: Always keep one slot empty.
			 */
			kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0);
			snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name,
					nm_txrx2str(t), i);
			nm_prdis("ktx %s h %d c %d t %d",
				kring->name, kring->rhead, kring->rcur, kring->rtail);
			err = nm_os_selinfo_init(&kring->si, kring->name);
			if (err) {
				netmap_krings_delete(na);
				return err;
			}
			mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF);
			kring->na = na;	/* setting this field marks the mutex as initialized */
		}
		err = nm_os_selinfo_init(&na->si[t], na->name);
		if (err) {
			netmap_krings_delete(na);
			return err;
		}
	}

	return 0;
}
/* undo the actions performed by netmap_krings_create */
/* call with NMG_LOCK held */
void
netmap_krings_delete(struct netmap_adapter *na)
{
	struct netmap_kring **kring = na->tx_rings;
	enum txrx t;

	if (na->tx_rings == NULL) {
		if (netmap_debug & NM_DEBUG_ON)
			nm_prerr("warning: krings were already deleted");
		return;
	}

	for_rx_tx(t)
		nm_os_selinfo_uninit(&na->si[t]);

	/* we rely on the krings layout described above */
	for ( ; kring != na->tailroom; kring++) {
		if ((*kring)->na != NULL)
			mtx_destroy(&(*kring)->q_lock);
		nm_os_selinfo_uninit(&(*kring)->si);
	}
	nm_os_free(na->tx_rings);
	na->tx_rings = na->rx_rings = na->tailroom = NULL;
}
/*
 * Destructor for NIC ports. They also have an mbuf queue
 * on the rings connected to the host so we need to purge
 * them first.
 */
/* call with NMG_LOCK held */
void
netmap_hw_krings_delete(struct netmap_adapter *na)
{
	u_int lim = netmap_real_rings(na, NR_RX), i;

	for (i = nma_get_nrings(na, NR_RX); i < lim; i++) {
		struct mbq *q = &NMR(na, NR_RX)[i]->rx_queue;
		nm_prdis("destroy sw mbq with len %d", mbq_len(q));
		mbq_purge(q);
		mbq_safe_fini(q);
	}
	netmap_krings_delete(na);
}
void
netmap_mem_drop(struct netmap_adapter *na)
{
	int last = netmap_mem_deref(na->nm_mem, na);
	/* if the native allocator had been overridden on regif,
	 * restore it now and drop the temporary one
	 */
	if (last && na->nm_mem_prev) {
		netmap_mem_put(na->nm_mem);
		na->nm_mem = na->nm_mem_prev;
		na->nm_mem_prev = NULL;
	}
}
/*
 * Undo everything that was done in netmap_do_regif(). In particular,
 * call nm_register(ifp,0) to stop netmap mode on the interface and
 * revert to normal operation.
 */
/* call with NMG_LOCK held */
static void netmap_unset_ringid(struct netmap_priv_d *);
static void netmap_krings_put(struct netmap_priv_d *);
void
netmap_do_unregif(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	NMG_LOCK_ASSERT();
	na->active_fds--;
	/* unset nr_pending_mode and possibly release exclusive mode */
	netmap_krings_put(priv);

	/* XXX check whether we have to do something with monitor
	 * when rings change nr_mode. */
	if (na->active_fds <= 0) {
		/* walk through all the rings and tell any monitor
		 * that the port is going to exit netmap mode
		 */
		netmap_monitor_stop(na);
	}

	if (na->active_fds <= 0 || nm_kring_pending(priv)) {
		na->nm_register(na, 0);
	}

	/* delete rings and buffers that are no longer needed */
	netmap_mem_rings_delete(na);

	if (na->active_fds <= 0) {	/* last instance */
		/*
		 * (TO CHECK) We enter here
		 * when the last reference to this file descriptor goes
		 * away. This means we cannot have any pending poll()
		 * or interrupt routine operating on the structure.
		 * XXX The file may be closed in a thread while
		 * another thread is using it.
		 * Linux keeps the file opened until the last reference
		 * by any outstanding ioctl/poll or mmap is gone.
		 * FreeBSD does not track mmap()s (but we do) and
		 * wakes up any sleeping poll(). Need to check what
		 * happens if the close() occurs while a concurrent
		 * syscall is running.
		 */
		if (netmap_debug & NM_DEBUG_ON)
			nm_prinf("deleting last instance for %s", na->name);

		if (nm_netmap_on(na)) {
			nm_prerr("BUG: netmap on while going to delete the krings");
		}

		na->nm_krings_delete(na);

		/* restore the default number of host tx and rx rings */
		na->num_host_tx_rings = 1;
		na->num_host_rx_rings = 1;
	}

	/* possibly decrement counter of tx_si/rx_si users */
	netmap_unset_ringid(priv);
	/* delete the nifp */
	netmap_mem_if_delete(na, priv->np_nifp);
	/* drop the allocator */
	netmap_mem_drop(na);
	/* mark the priv as unregistered */
	priv->np_na = NULL;
	priv->np_nifp = NULL;
}
struct netmap_priv_d*
netmap_priv_new(void)
{
	struct netmap_priv_d *priv;

	priv = nm_os_malloc(sizeof(struct netmap_priv_d));
	if (priv == NULL)
		return NULL;
	priv->np_refs = 1;
	nm_os_get_module();
	return priv;
}
/*
 * Destructor of the netmap_priv_d, called when the fd is closed
 * Action: undo all the things done by NIOCREGIF,
 * On FreeBSD we need to track whether there are active mmap()s,
 * and we use np_active_mmaps for that. On linux, the field is always 0.
 * Return: 1 if we can free priv, 0 otherwise.
 */
/* call with NMG_LOCK held */
int
netmap_priv_delete(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	/* number of active references to this fd */
	if (--priv->np_refs > 0) {
		return 0;
	}
	nm_os_put_module();
	if (na) {
		netmap_do_unregif(priv);
	}
	netmap_unget_na(na, priv->np_ifp);
	bzero(priv, sizeof(*priv));	/* for safety */
	nm_os_free(priv);
	return 1;
}
/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
	struct netmap_priv_d *priv = data;

	NMG_LOCK();
	netmap_priv_delete(priv);
	NMG_UNLOCK();
}
/*
 * Handlers for synchronization of the rings from/to the host stack.
 * These are associated to a network interface and are just another
 * ring pair managed by userspace.
 *
 * Netmap also supports transparent forwarding (NS_FORWARD and NR_FORWARD
 * flags):
 *
 * - Before releasing buffers on hw RX rings, the application can mark
 *   them with the NS_FORWARD flag. During the next RXSYNC or poll(), they
 *   will be forwarded to the host stack, similarly to what happened if
 *   the application moved them to the host TX ring.
 *
 * - Before releasing buffers on the host RX ring, the application can
 *   mark them with the NS_FORWARD flag. During the next RXSYNC or poll(),
 *   they will be forwarded to the hw TX rings, saving the application
 *   from doing the same task in user-space.
 *
 * Transparent forwarding can be enabled per-ring, by setting the NR_FORWARD
 * flag, or globally with the netmap_fwd sysctl.
 *
 * The transfer NIC --> host is relatively easy, just encapsulate
 * into mbufs and we are done. The host --> NIC side is slightly
 * harder because there might not be room in the tx ring so it
 * might take a while before releasing the buffer.
 */
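
/*
 * Illustrative userspace use of NS_FORWARD (a sketch; 'ring' is an RX
 * netmap ring opened with NR_FORWARD set, error handling omitted, and
 * should_go_to_other_side() is a hypothetical application filter):
 *
 *	while (!nm_ring_empty(ring)) {
 *		struct netmap_slot *slot = &ring->slot[ring->cur];
 *		if (should_go_to_other_side(slot))
 *			slot->flags |= NS_FORWARD;	// forward on next sync
 *		ring->head = ring->cur = nm_ring_next(ring, ring->cur);
 *	}
 *	ioctl(fd, NIOCRXSYNC, NULL);	// the forwarding happens here
 */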
/*
 * Pass a whole queue of mbufs to the host stack as coming from 'dst'
 * We do not need to lock because the queue is private.
 * After this call the queue is empty.
 */
static void
netmap_send_up(struct ifnet *dst, struct mbq *q)
{
	struct mbuf *m;
	struct mbuf *head = NULL, *prev = NULL;

	/* Send packets up, outside the lock; head/prev machinery
	 * is only useful for Windows. */
	while ((m = mbq_dequeue(q)) != NULL) {
		if (netmap_debug & NM_DEBUG_HOST)
			nm_prinf("sending up pkt %p size %d", m, MBUF_LEN(m));
		prev = nm_os_send_up(dst, m, prev);
		if (head == NULL)
			head = prev;
	}
	if (head)
		nm_os_send_up(dst, NULL, head);
	mbq_fini(q);
}
/*
 * Scan the buffers from hwcur to ring->head, and put a copy of those
 * marked NS_FORWARD (or all of them if forced) into a queue of mbufs.
 * Drop remaining packets in the unlikely event
 * of an mbuf shortage.
 */
static void
netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
{
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	u_int n;
	struct netmap_adapter *na = kring->na;

	for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) {
		struct mbuf *m;
		struct netmap_slot *slot = &kring->ring->slot[n];

		if ((slot->flags & NS_FORWARD) == 0 && !force)
			continue;
		if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) {
			nm_prlim(5, "bad pkt at %d len %d", n, slot->len);
			continue;
		}
		slot->flags &= ~NS_FORWARD; // XXX needed ?
		/* XXX TODO: adapt to the case of a multisegment packet */
		m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL);

		if (m == NULL)
			break;
		mbq_enqueue(q, m);
	}
}
static inline int
_nm_may_forward(struct netmap_kring *kring)
{
	return	((netmap_fwd || kring->ring->flags & NR_FORWARD) &&
		 kring->na->na_flags & NAF_HOST_RINGS &&
		 kring->tx == NR_RX);
}

static inline int
nm_may_forward_up(struct netmap_kring *kring)
{
	return	_nm_may_forward(kring) &&
		 kring->ring_id != kring->na->num_rx_rings;
}

static inline int
nm_may_forward_down(struct netmap_kring *kring, int sync_flags)
{
	return	_nm_may_forward(kring) &&
		 (sync_flags & NAF_CAN_FORWARD_DOWN) &&
		 kring->ring_id == kring->na->num_rx_rings;
}
/*
 * Send to the NIC rings packets marked NS_FORWARD between
 * kring->nr_hwcur and kring->rhead.
 * Called under kring->rx_queue.lock on the sw rx ring.
 *
 * It can only be called if the user opened all the TX hw rings,
 * see NAF_CAN_FORWARD_DOWN flag.
 * We can touch the TX netmap rings (slots, head and cur) since
 * we are in poll/ioctl system call context, and the application
 * is not supposed to touch the ring (using a different thread)
 * during the execution of the system call.
 */
static u_int
netmap_sw_to_nic(struct netmap_adapter *na)
{
	struct netmap_kring *kring = na->rx_rings[na->num_rx_rings];
	struct netmap_slot *rxslot = kring->ring->slot;
	u_int i, rxcur = kring->nr_hwcur;
	u_int const head = kring->rhead;
	u_int const src_lim = kring->nkr_num_slots - 1;
	u_int sent = 0;

	/* scan rings to find space, then fill as much as possible */
	for (i = 0; i < na->num_tx_rings; i++) {
		struct netmap_kring *kdst = na->tx_rings[i];
		struct netmap_ring *rdst = kdst->ring;
		u_int const dst_lim = kdst->nkr_num_slots - 1;

		/* XXX do we trust ring or kring->rcur,rtail ? */
		for (; rxcur != head && !nm_ring_empty(rdst);
		     rxcur = nm_next(rxcur, src_lim) ) {
			struct netmap_slot *src, *dst, tmp;
			u_int dst_head = rdst->head;

			src = &rxslot[rxcur];
			if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd)
				continue;

			sent++;

			dst = &rdst->slot[dst_head];

			tmp = *src;

			src->buf_idx = dst->buf_idx;
			src->flags = NS_BUF_CHANGED;

			dst->buf_idx = tmp.buf_idx;
			dst->len = tmp.len;
			dst->flags = NS_BUF_CHANGED;

			rdst->head = rdst->cur = nm_next(dst_head, dst_lim);
		}
		/* if (sent) XXX txsync ? it would be just an optimization */
	}
	return sent;
}
/*
 * netmap_txsync_to_host() passes packets up. We are called from a
 * system call in user process context, and the only contention
 * can be among multiple user threads erroneously calling
 * this routine concurrently.
 */
static int
netmap_txsync_to_host(struct netmap_kring *kring, int flags)
{
	struct netmap_adapter *na = kring->na;
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	struct mbq q;

	/* Take packets from hwcur to head and pass them up.
	 * Force hwcur = head since netmap_grab_packets() stops at head
	 */
	mbq_init(&q);
	netmap_grab_packets(kring, &q, 1 /* force */);
	nm_prdis("have %d pkts in queue", mbq_len(&q));
	kring->nr_hwcur = head;
	kring->nr_hwtail = head + lim;
	if (kring->nr_hwtail > lim)
		kring->nr_hwtail -= lim + 1;

	netmap_send_up(na->ifp, &q);
	return 0;
}
/*
 * rxsync backend for packets coming from the host stack.
 * They have been put in kring->rx_queue by netmap_transmit().
 * We protect access to the kring using kring->rx_queue.lock
 *
 * also moves to the nic hw rings any packet the user has marked
 * for transparent-mode forwarding, then sets the NR_FORWARD
 * flag in the kring to let the caller push them out
 */
static int
netmap_rxsync_from_host(struct netmap_kring *kring, int flags)
{
	struct netmap_adapter *na = kring->na;
	struct netmap_ring *ring = kring->ring;
	u_int nm_i, n;
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	int ret = 0;
	struct mbq *q = &kring->rx_queue, fq;

	mbq_init(&fq); /* fq holds packets to be freed */

	mbq_lock(q);

	/* First part: import newly received packets */
	n = mbq_len(q);
	if (n) { /* grab packets from the queue */
		struct mbuf *m;
		uint32_t stop_i;

		nm_i = kring->nr_hwtail;
		stop_i = nm_prev(kring->nr_hwcur, lim);
		while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) {
			int len = MBUF_LEN(m);
			struct netmap_slot *slot = &ring->slot[nm_i];

			m_copydata(m, 0, len, NMB(na, slot));
			nm_prdis("nm %d len %d", nm_i, len);
			if (netmap_debug & NM_DEBUG_HOST)
				nm_prinf("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL));

			slot->len = len;
			slot->flags = 0;
			nm_i = nm_next(nm_i, lim);
			mbq_enqueue(&fq, m);
		}
		kring->nr_hwtail = nm_i;
	}

	/*
	 * Second part: skip past packets that userspace has released.
	 */
	nm_i = kring->nr_hwcur;
	if (nm_i != head) { /* something was released */
		if (nm_may_forward_down(kring, flags)) {
			ret = netmap_sw_to_nic(na);
			if (ret > 0) {
				kring->nr_kflags |= NR_FORWARD;
				ret = 0;
			}
		}
		kring->nr_hwcur = head;
	}

	mbq_unlock(q);

	mbq_purge(&fq);
	mbq_fini(&fq);

	return ret;
}
/* Get a netmap adapter for the port.
 *
 * If it is possible to satisfy the request, return 0
 * with *na containing the netmap adapter found.
 * Otherwise return an error code, with *na containing NULL.
 *
 * When the port is attached to a bridge, we always return
 * EBUSY.
 * Otherwise, if the port is already bound to a file descriptor,
 * then we unconditionally return the existing adapter into *na.
 * In all the other cases, we return (into *na) either native,
 * generic or NULL, according to the following table:
 *
 *					native-support
 * active_fds   dev.netmap.admode         YES     NO
 * -------------------------------------------------------
 *    >0              *                 NA(ifp) NA(ifp)
 *
 *     0        NETMAP_ADMODE_BEST      NATIVE  GENERIC
 *     0        NETMAP_ADMODE_NATIVE    NATIVE   NULL
 *     0        NETMAP_ADMODE_GENERIC   GENERIC GENERIC
 *
 */
static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */
int
netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na)
{
	/* generic support */
	int i = netmap_admode;	/* Take a snapshot. */
	struct netmap_adapter *prev_na;
	int error = 0;

	*na = NULL; /* default */

	/* reset in case of invalid value */
	if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST)
		i = netmap_admode = NETMAP_ADMODE_BEST;

	if (NM_NA_VALID(ifp)) {
		prev_na = NA(ifp);
		/* If an adapter already exists, return it if
		 * there are active file descriptors or if
		 * netmap is not forced to use generic
		 * adapters.
		 */
		if (NETMAP_OWNED_BY_ANY(prev_na)
			|| i != NETMAP_ADMODE_GENERIC
			|| prev_na->na_flags & NAF_FORCE_NATIVE
			/* ugly, but we cannot allow an adapter switch
			 * if some pipe is referring to this one
			 */
			|| prev_na->na_next_pipe > 0
		) {
			*na = prev_na;
			goto assign_mem;
		}
	}

	/* If there isn't native support and netmap is not allowed
	 * to use generic adapters, we cannot satisfy the request.
	 */
	if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE)
		return EOPNOTSUPP;

	/* Otherwise, create a generic adapter and return it,
	 * saving the previously used netmap adapter, if any.
	 *
	 * Note that here 'prev_na', if not NULL, MUST be a
	 * native adapter, and CANNOT be a generic one. This is
	 * true because generic adapters are created on demand, and
	 * destroyed when not used anymore. Therefore, if the adapter
	 * currently attached to an interface 'ifp' is generic, it
	 * must be that
	 * (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))).
	 * Consequently, if NA(ifp) is generic, we will enter one of
	 * the branches above. This ensures that we never override
	 * a generic adapter with another generic adapter.
	 */
	error = generic_netmap_attach(ifp);
	if (error)
		return error;

	*na = NA(ifp);

assign_mem:
	if (nmd != NULL && !((*na)->na_flags & NAF_MEM_OWNER) &&
	    (*na)->active_fds == 0 && ((*na)->nm_mem != nmd)) {
		(*na)->nm_mem_prev = (*na)->nm_mem;
		(*na)->nm_mem = netmap_mem_get(nmd);
	}

	return 0;
}
/*
 * MUST BE CALLED UNDER NMG_LOCK()
 *
 * Get a refcounted reference to a netmap adapter attached
 * to the interface specified by req.
 * This is always called in the execution of an ioctl().
 *
 * Return ENXIO if the interface specified by the request does
 * not exist, ENOTSUP if netmap is not supported by the interface,
 * EBUSY if the interface is already attached to a bridge,
 * EINVAL if parameters are invalid, ENOMEM if needed resources
 * could not be allocated.
 * If successful, hold a reference to the netmap adapter.
 *
 * If the interface specified by req is a system one, also keep
 * a reference to it and return a valid *ifp.
 */
int
netmap_get_na(struct nmreq_header *hdr,
	      struct netmap_adapter **na, struct ifnet **ifp,
	      struct netmap_mem_d *nmd, int create)
{
	struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body;
	int error = 0;
	struct netmap_adapter *ret = NULL;
	int nmd_ref = 0;

	*na = NULL;     /* default return value */
	*ifp = NULL;

	if (hdr->nr_reqtype != NETMAP_REQ_REGISTER) {
		return EINVAL;
	}

	if (req->nr_mode == NR_REG_PIPE_MASTER ||
			req->nr_mode == NR_REG_PIPE_SLAVE) {
		/* Do not accept deprecated pipe modes. */
		nm_prerr("Deprecated pipe nr_mode, use xx{yy or xx}yy syntax");
		return EINVAL;
	}

	NMG_LOCK_ASSERT();

	/* if the request contain a memid, try to find the
	 * corresponding memory region
	 */
	if (nmd == NULL && req->nr_mem_id) {
		nmd = netmap_mem_find(req->nr_mem_id);
		if (nmd == NULL)
			return EINVAL;
		/* keep the reference */
		nmd_ref = 1;
	}

	/* We cascade through all possible types of netmap adapter.
	 * All netmap_get_*_na() functions return an error and an na,
	 * with the following combinations:
	 *
	 * error    na
	 *   0	   NULL		type doesn't match
	 *  !0	   NULL		type matches, but na creation/lookup failed
	 *   0	  !NULL		type matches and na created/found
	 *  !0    !NULL		impossible
	 */
	error = netmap_get_null_na(hdr, na, nmd, create);
	if (error || *na != NULL)
		goto out;

	/* try to see if this is a monitor port */
	error = netmap_get_monitor_na(hdr, na, nmd, create);
	if (error || *na != NULL)
		goto out;

	/* try to see if this is a pipe port */
	error = netmap_get_pipe_na(hdr, na, nmd, create);
	if (error || *na != NULL)
		goto out;

	/* try to see if this is a bridge port */
	error = netmap_get_vale_na(hdr, na, nmd, create);
	if (error)
		goto out;

	if (*na != NULL) /* valid match in netmap_get_bdg_na() */
		goto out;

	/*
	 * This must be a hardware na, lookup the name in the system.
	 * Note that by hardware we actually mean "it shows up in ifconfig".
	 * This may still be a tap, a veth/epair, or even a
	 * persistent VALE port.
	 */
	*ifp = ifunit_ref(hdr->nr_name);
	if (*ifp == NULL) {
		error = ENXIO;
		goto out;
	}

	error = netmap_get_hw_na(*ifp, nmd, &ret);
	if (error)
		goto out;

	*na = ret;
	netmap_adapter_get(ret);

	/*
	 * if the adapter supports the host rings and it is not already open,
	 * try to set the number of host rings as requested by the user
	 */
	if (((*na)->na_flags & NAF_HOST_RINGS) && (*na)->active_fds == 0) {
		if (req->nr_host_tx_rings)
			(*na)->num_host_tx_rings = req->nr_host_tx_rings;
		if (req->nr_host_rx_rings)
			(*na)->num_host_rx_rings = req->nr_host_rx_rings;
	}
	nm_prdis("%s: host tx %d rx %u", (*na)->name, (*na)->num_host_tx_rings,
			(*na)->num_host_rx_rings);

out:
	if (error) {
		if (ret)
			netmap_adapter_put(ret);
		if (*ifp) {
			if_rele(*ifp);
			*ifp = NULL;
		}
	}
	if (nmd_ref)
		netmap_mem_put(nmd);

	return error;
}
/* undo netmap_get_na() */
void
netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp)
{
	if (ifp)
		if_rele(ifp);
	if (na)
		netmap_adapter_put(na);
}
#define NM_FAIL_ON(t) do {						\
	if (unlikely(t)) {						\
		nm_prlim(5, "%s: fail '" #t "' "			\
			"h %d c %d t %d "				\
			"rh %d rc %d rt %d "				\
			"hc %d ht %d",					\
			kring->name,					\
			head, cur, ring->tail,				\
			kring->rhead, kring->rcur, kring->rtail,	\
			kring->nr_hwcur, kring->nr_hwtail);		\
		return kring->nkr_num_slots;				\
	}								\
} while (0)
/*
 * validate parameters on entry for *_txsync()
 * Returns ring->cur if ok, or something >= kring->nkr_num_slots
 * in case of error.
 *
 * rhead, rcur and rtail=hwtail are stored from previous round.
 * hwcur is the next packet to send to the ring.
 *
 * We want
 *    hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail
 *
 * hwcur, rhead, rtail and hwtail are reliable
 */
u_int
nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
{
	u_int head = ring->head; /* read only once */
	u_int cur = ring->cur; /* read only once */
	u_int n = kring->nkr_num_slots;

	nm_prdis(5, "%s kcur %d ktail %d head %d cur %d tail %d",
		kring->name,
		kring->nr_hwcur, kring->nr_hwtail,
		ring->head, ring->cur, ring->tail);
#if 1 /* kernel sanity checks; but we can trust the kring. */
	NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n ||
	    kring->rtail >= n ||  kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
	/*
	 * user sanity checks. We only use head,
	 * A, B, ... are possible positions for head:
	 *
	 *  0    A  rhead   B  rtail   C  n-1
	 *  0    D  rtail   E  rhead   F  n-1
	 *
	 * B, F, D are valid. A, C, E are wrong
	 */
	if (kring->rtail >= kring->rhead) {
		/* want rhead <= head <= rtail */
		NM_FAIL_ON(head < kring->rhead || head > kring->rtail);
		/* and also head <= cur <= rtail */
		NM_FAIL_ON(cur < head || cur > kring->rtail);
	} else { /* here rtail < rhead */
		/* we need head outside rtail .. rhead */
		NM_FAIL_ON(head > kring->rtail && head < kring->rhead);

		/* two cases now: head <= rtail or head >= rhead  */
		if (head <= kring->rtail) {
			/* want head <= cur <= rtail */
			NM_FAIL_ON(cur < head || cur > kring->rtail);
		} else { /* head >= rhead */
			/* cur must be outside rtail..head */
			NM_FAIL_ON(cur > kring->rtail && cur < head);
		}
	}
	if (ring->tail != kring->rtail) {
		nm_prlim(5, "%s tail overwritten was %d need %d", kring->name,
			ring->tail, kring->rtail);
		ring->tail = kring->rtail;
	}
	kring->rhead = head;
	kring->rcur = cur;
	return head;
}
/*
 * validate parameters on entry for *_rxsync()
 * Returns ring->head if ok, kring->nkr_num_slots on error.
 *
 * For a valid configuration,
 * hwcur <= head <= cur <= tail <= hwtail
 *
 * We only consider head and cur.
 * hwcur and hwtail are reliable.
 */
u_int
nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
{
	uint32_t const n = kring->nkr_num_slots;
	uint32_t head, cur;

	nm_prdis(5,"%s kc %d kt %d h %d c %d t %d",
		kring->name,
		kring->nr_hwcur, kring->nr_hwtail,
		ring->head, ring->cur, ring->tail);
	/*
	 * Before storing the new values, we should check they do not
	 * move backwards. However:
	 * - head is not an issue because the previous value is hwcur;
	 * - cur could in principle go back, however it does not matter
	 *   because we are processing a brand new rxsync()
	 */
	cur = kring->rcur = ring->cur;	/* read only once */
	head = kring->rhead = ring->head;	/* read only once */
#if 1 /* kernel sanity checks */
	NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
	/* user sanity checks */
	if (kring->nr_hwtail >= kring->nr_hwcur) {
		/* want hwcur <= rhead <= hwtail */
		NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail);
		/* and also rhead <= rcur <= hwtail */
		NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
	} else {
		/* we need rhead outside hwtail..hwcur */
		NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail);
		/* two cases now: head <= hwtail or head >= hwcur  */
		if (head <= kring->nr_hwtail) {
			/* want head <= cur <= hwtail */
			NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
		} else {
			/* cur must be outside hwtail..head */
			NM_FAIL_ON(cur < head && cur > kring->nr_hwtail);
		}
	}
	if (ring->tail != kring->rtail) {
		nm_prlim(5, "%s tail overwritten was %d need %d",
			kring->name,
			ring->tail, kring->rtail);
		ring->tail = kring->rtail;
	}
	return head;
}
/*
 * Error routine called when txsync/rxsync detects an error.
 * Can't do much more than resetting head = cur = hwcur, tail = hwtail
 * Return 1 on reinit.
 *
 * This routine is only called by the upper half of the kernel.
 * It only reads hwcur (which is changed only by the upper half, too)
 * and hwtail (which may be changed by the lower half, but only on
 * a tx ring and only to increase it, so any error will be recovered
 * on the next call). For the above, we don't strictly need to call
 * it under lock.
 */
int
netmap_ring_reinit(struct netmap_kring *kring)
{
	struct netmap_ring *ring = kring->ring;
	u_int i, lim = kring->nkr_num_slots - 1;
	int errors = 0;

	// XXX KASSERT nm_kr_tryget
	nm_prlim(10, "called for %s", kring->name);
	// XXX probably wrong to trust userspace
	kring->rhead = ring->head;
	kring->rcur  = ring->cur;
	kring->rtail = ring->tail;

	if (ring->cur > lim)
		errors++;
	if (ring->head > lim)
		errors++;
	if (ring->tail > lim)
		errors++;
	for (i = 0; i <= lim; i++) {
		u_int idx = ring->slot[i].buf_idx;
		u_int len = ring->slot[i].len;
		if (idx < 2 || idx >= kring->na->na_lut.objtotal) {
			nm_prlim(5, "bad index at slot %d idx %d len %d ", i, idx, len);
			ring->slot[i].buf_idx = 0;
			ring->slot[i].len = 0;
		} else if (len > NETMAP_BUF_SIZE(kring->na)) {
			ring->slot[i].len = 0;
			nm_prlim(5, "bad len at slot %d idx %d len %d", i, idx, len);
		}
	}
	if (errors) {
		nm_prlim(10, "total %d errors", errors);
		nm_prlim(10, "%s reinit, cur %d -> %d tail %d -> %d",
			kring->name,
			ring->cur, kring->nr_hwcur,
			ring->tail, kring->nr_hwtail);
		ring->head = kring->rhead = kring->nr_hwcur;
		ring->cur  = kring->rcur  = kring->nr_hwcur;
		ring->tail = kring->rtail = kring->nr_hwtail;
	}
	return (errors ? 1 : 0);
}
/* interpret the ringid and flags fields of an nmreq, by translating them
 * into a pair of intervals of ring indices:
 *
 * [priv->np_txqfirst, priv->np_txqlast) and
 * [priv->np_rxqfirst, priv->np_rxqlast)
 *
 */
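/*
 * From userspace these intervals are selected by the port name suffix
 * passed to nm_open() (an illustrative sketch; "em0" is just an example):
 *
 *	nm_open("netmap:em0", NULL, 0, NULL);	// NR_REG_ALL_NIC: all hw rings
 *	nm_open("netmap:em0-2", NULL, 0, NULL);	// NR_REG_ONE_NIC: ring pair 2 only
 *	nm_open("netmap:em0^", NULL, 0, NULL);	// NR_REG_SW: host rings only
 *	nm_open("netmap:em0*", NULL, 0, NULL);	// NR_REG_NIC_SW: hw and host rings
 */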
int
netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
			uint16_t nr_ringid, uint64_t nr_flags)
{
	struct netmap_adapter *na = priv->np_na;
	int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY };
	enum txrx t;
	u_int j;

	for_rx_tx(t) {
		if (nr_flags & excluded_direction[t]) {
			priv->np_qfirst[t] = priv->np_qlast[t] = 0;
			continue;
		}
		switch (nr_mode) {
		case NR_REG_ALL_NIC:
		case NR_REG_NULL:
			priv->np_qfirst[t] = 0;
			priv->np_qlast[t] = nma_get_nrings(na, t);
			nm_prdis("ALL/PIPE: %s %d %d", nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_SW:
		case NR_REG_NIC_SW:
			if (!(na->na_flags & NAF_HOST_RINGS)) {
				nm_prerr("host rings not supported");
				return EINVAL;
			}
			priv->np_qfirst[t] = (nr_mode == NR_REG_SW ?
				nma_get_nrings(na, t) : 0);
			priv->np_qlast[t] = netmap_all_rings(na, t);
			nm_prdis("%s: %s %d %d", nr_mode == NR_REG_SW ? "SW" : "NIC+SW",
				nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_ONE_NIC:
			if (nr_ringid >= na->num_tx_rings &&
					nr_ringid >= na->num_rx_rings) {
				nm_prerr("invalid ring id %d", nr_ringid);
				return EINVAL;
			}
			/* if not enough rings, use the first one */
			j = nr_ringid;
			if (j >= nma_get_nrings(na, t))
				j = 0;
			priv->np_qfirst[t] = j;
			priv->np_qlast[t] = j + 1;
			nm_prdis("ONE_NIC: %s %d %d", nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		case NR_REG_ONE_SW:
			if (!(na->na_flags & NAF_HOST_RINGS)) {
				nm_prerr("host rings not supported");
				return EINVAL;
			}
			if (nr_ringid >= na->num_host_tx_rings &&
					nr_ringid >= na->num_host_rx_rings) {
				nm_prerr("invalid ring id %d", nr_ringid);
				return EINVAL;
			}
			/* if not enough rings, use the first one */
			j = nr_ringid;
			if (j >= nma_get_host_nrings(na, t))
				j = 0;
			priv->np_qfirst[t] = nma_get_nrings(na, t) + j;
			priv->np_qlast[t] = nma_get_nrings(na, t) + j + 1;
			nm_prdis("ONE_SW: %s %d %d", nm_txrx2str(t),
				priv->np_qfirst[t], priv->np_qlast[t]);
			break;
		default:
			nm_prerr("invalid regif type %d", nr_mode);
			return EINVAL;
		}
	}
	priv->np_flags = nr_flags;

	/* Allow transparent forwarding mode in the host --> nic
	 * direction only if all the TX hw rings have been opened. */
	if (priv->np_qfirst[NR_TX] == 0 &&
			priv->np_qlast[NR_TX] >= na->num_tx_rings) {
		priv->np_sync_flags |= NAF_CAN_FORWARD_DOWN;
	}

	if (netmap_verbose) {
		nm_prinf("%s: tx [%d,%d) rx [%d,%d) id %d",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX],
			nr_ringid);
	}
	return 0;
}
/*
 * Set the ring ID. For devices with a single queue, a request
 * for all rings is the same as a single ring.
 */
static int
netmap_set_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
		uint16_t nr_ringid, uint64_t nr_flags)
{
	struct netmap_adapter *na = priv->np_na;
	int error;
	enum txrx t;

	error = netmap_interp_ringid(priv, nr_mode, nr_ringid, nr_flags);
	if (error) {
		return error;
	}

	priv->np_txpoll = (nr_flags & NR_NO_TX_POLL) ? 0 : 1;

	/* optimization: count the users registered for more than
	 * one ring, which are the ones sleeping on the global queue.
	 * The default netmap_notify() callback will then
	 * avoid signaling the global queue if nobody is using it
	 */
	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]++;
	}
	return 0;
}
static void
netmap_unset_ringid(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	enum txrx t;

	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]--;
		priv->np_qfirst[t] = priv->np_qlast[t] = 0;
	}
	priv->np_flags = 0;
	priv->np_txpoll = 0;
	priv->np_kloop_state = 0;
}
1970 /* Set the nr_pending_mode for the requested rings.
1971 * If requested, also try to get exclusive access to the rings, provided
1972 * the rings we want to bind are not exclusively owned by a previous bind.
1975 netmap_krings_get(struct netmap_priv_d *priv)
1977 struct netmap_adapter *na = priv->np_na;
1979 struct netmap_kring *kring;
1980 int excl = (priv->np_flags & NR_EXCLUSIVE);
1983 if (netmap_debug & NM_DEBUG_ON)
1984 nm_prinf("%s: grabbing tx [%d, %d) rx [%d, %d)",
1986 priv->np_qfirst[NR_TX],
1987 priv->np_qlast[NR_TX],
1988 priv->np_qfirst[NR_RX],
1989 priv->np_qlast[NR_RX]);
1991 /* first round: check that all the requested rings
1992 * are neither already exclusively owned, nor requested
1993 * for exclusive ownership while they are already in use
1996 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
1997 kring = NMR(na, t)[i];
1998 if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
1999 (kring->users && excl))
2001 nm_prdis("ring %s busy", kring->name);
2007 /* second round: increment usage count (possibly marking them
2008 * as exclusive) and set the nr_pending_mode
2011 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
2012 kring = NMR(na, t)[i];
2015 kring->nr_kflags |= NKR_EXCLUSIVE;
2016 kring->nr_pending_mode = NKR_NETMAP_ON;
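/*
 * Example of the resulting semantics (a hypothetical scenario): if fd1
 * has bound ring 2 of an adapter and fd2 then tries to bind the same
 * ring with NR_EXCLUSIVE set, the first round above sees
 * kring->users != 0 and the bind fails with EBUSY. The same happens
 * for any kind of access if fd1 held the ring exclusively
 * (NKR_EXCLUSIVE set).
 */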
2024 /* Undo netmap_krings_get(). This is done by clearing the exclusive mode
2025 * if it was asked on regif, and unsetting the nr_pending_mode if we are the
2026 * last users of the involved rings. */
2028 netmap_krings_put(struct netmap_priv_d *priv)
2030 struct netmap_adapter *na = priv->np_na;
2032 struct netmap_kring *kring;
2033 int excl = (priv->np_flags & NR_EXCLUSIVE);
2036 nm_prdis("%s: releasing tx [%d, %d) rx [%d, %d)",
2038 priv->np_qfirst[NR_TX],
2039 priv->np_qlast[NR_TX],
2040 priv->np_qfirst[NR_RX],
2041 priv->np_qlast[NR_RX]);
2044 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
2045 kring = NMR(na, t)[i];
2047 kring->nr_kflags &= ~NKR_EXCLUSIVE;
2049 if (kring->users == 0)
2050 kring->nr_pending_mode = NKR_NETMAP_OFF;
2056 nm_priv_rx_enabled(struct netmap_priv_d *priv)
2058 return (priv->np_qfirst[NR_RX] != priv->np_qlast[NR_RX]);
2061 /* Validate the CSB entries for both directions (atok and ktoa).
2062 * To be called under NMG_LOCK(). */
2064 netmap_csb_validate(struct netmap_priv_d *priv, struct nmreq_opt_csb *csbo)
2066 struct nm_csb_atok *csb_atok_base =
2067 (struct nm_csb_atok *)(uintptr_t)csbo->csb_atok;
2068 struct nm_csb_ktoa *csb_ktoa_base =
2069 (struct nm_csb_ktoa *)(uintptr_t)csbo->csb_ktoa;
2071 int num_rings[NR_TXRX], tot_rings;
2072 size_t entry_size[2];
2076 if (priv->np_kloop_state & NM_SYNC_KLOOP_RUNNING) {
2077 nm_prerr("Cannot update CSB while kloop is running");
2083 num_rings[t] = priv->np_qlast[t] - priv->np_qfirst[t];
2084 tot_rings += num_rings[t];
2089 if (!(priv->np_flags & NR_EXCLUSIVE)) {
2090 nm_prerr("CSB mode requires NR_EXCLUSIVE");
2094 entry_size[0] = sizeof(*csb_atok_base);
2095 entry_size[1] = sizeof(*csb_ktoa_base);
2096 csb_start[0] = (void *)csb_atok_base;
2097 csb_start[1] = (void *)csb_ktoa_base;
2099 for (i = 0; i < 2; i++) {
2100 /* On Linux we could use access_ok() to simplify
2101 * the validation. However, the advantage of
2102 * this approach is that it works also on
2104 size_t csb_size = tot_rings * entry_size[i];
2108 if ((uintptr_t)csb_start[i] & (entry_size[i]-1)) {
2109 nm_prerr("Unaligned CSB address");
2113 tmp = nm_os_malloc(csb_size);
2117 /* Application --> kernel direction. */
2118 err = copyin(csb_start[i], tmp, csb_size);
2120 /* Kernel --> application direction. */
2121 memset(tmp, 0, csb_size);
2122 err = copyout(tmp, csb_start[i], csb_size);
2126 nm_prerr("Invalid CSB address");
2131 priv->np_csb_atok_base = csb_atok_base;
2132 priv->np_csb_ktoa_base = csb_ktoa_base;
2134 /* Initialize the CSB. */
2136 for (i = 0; i < num_rings[t]; i++) {
2137 struct netmap_kring *kring =
2138 NMR(priv->np_na, t)[i + priv->np_qfirst[t]];
2139 struct nm_csb_atok *csb_atok = csb_atok_base + i;
2140 struct nm_csb_ktoa *csb_ktoa = csb_ktoa_base + i;
2143 csb_atok += num_rings[NR_TX];
2144 csb_ktoa += num_rings[NR_TX];
2147 CSB_WRITE(csb_atok, head, kring->rhead);
2148 CSB_WRITE(csb_atok, cur, kring->rcur);
2149 CSB_WRITE(csb_atok, appl_need_kick, 1);
2150 CSB_WRITE(csb_atok, sync_flags, 1);
2151 CSB_WRITE(csb_ktoa, hwcur, kring->nr_hwcur);
2152 CSB_WRITE(csb_ktoa, hwtail, kring->nr_hwtail);
2153 CSB_WRITE(csb_ktoa, kern_need_kick, 1);
2155 nm_prinf("csb_init for kring %s: head %u, cur %u, "
2156 "hwcur %u, hwtail %u", kring->name,
2157 kring->rhead, kring->rcur, kring->nr_hwcur,
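/*
 * Userspace counterpart (a minimal sketch, no error handling): the
 * application allocates one atok/ktoa entry per bound ring, TX entries
 * first and RX entries right after them, with both arrays aligned to
 * the entry size as checked above, and passes the two base pointers
 * through a nmreq_opt_csb option:
 *
 *	struct nmreq_opt_csb opt;
 *	int n = ntx + nrx;	// rings bound by this file descriptor
 *	struct nm_csb_atok *atok =
 *	    aligned_alloc(sizeof(*atok), n * sizeof(*atok));
 *	struct nm_csb_ktoa *ktoa =
 *	    aligned_alloc(sizeof(*ktoa), n * sizeof(*ktoa));
 *
 *	memset(&opt, 0, sizeof(opt));
 *	opt.nro_opt.nro_reqtype = NETMAP_REQ_OPT_CSB;
 *	opt.csb_atok = (uintptr_t)atok;
 *	opt.csb_ktoa = (uintptr_t)ktoa;
 *	// link &opt.nro_opt into hdr->nr_options and issue NIOCCTRL
 */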
2165 /* Ensure that the netmap adapter can support the given MTU.
2166 * @return EINVAL if the na cannot be set to mtu, 0 otherwise.
2169 netmap_buf_size_validate(const struct netmap_adapter *na, unsigned mtu) {
2170 unsigned nbs = NETMAP_BUF_SIZE(na);
2172 if (mtu <= na->rx_buf_maxsize) {
2173 /* The MTU fits a single NIC slot. We only
2174 * need to check that netmap buffers are
2175 * large enough to hold an MTU. NS_MOREFRAG
2176 * cannot be used in this case. */
2178 nm_prerr("error: netmap buf size (%u) "
2179 "< device MTU (%u)", nbs, mtu);
2183 /* More NIC slots may be needed to receive
2184 * or transmit a single packet. Check that
2185 * the adapter supports NS_MOREFRAG and that
2186 * netmap buffers are large enough to hold
2187 * the maximum per-slot size. */
2188 if (!(na->na_flags & NAF_MOREFRAG)) {
2189 nm_prerr("error: large MTU (%d) needed "
2190 "but %s does not support "
2194 } else if (nbs < na->rx_buf_maxsize) {
2195 nm_prerr("error: using NS_MOREFRAG on "
2196 "%s requires netmap buf size "
2197 ">= %u", na->ifp->if_xname,
2198 na->rx_buf_maxsize);
2201 nm_prinf("info: netmap application on "
2202 "%s needs to support "
2204 "(MTU=%u,netmap_buf_size=%u)",
2205 na->ifp->if_xname, mtu, nbs);
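/*
 * Concrete example (hypothetical numbers): with the default 2048-byte
 * netmap buffers, a 1500-byte MTU fits a single slot, while a 9000-byte
 * jumbo MTU is accepted only if the adapter sets NAF_MOREFRAG, and the
 * application must then be prepared to handle NS_MOREFRAG slot chains.
 */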
2213 * possibly move the interface to netmap-mode.
2214 * On success it returns a pointer to netmap_if, otherwise NULL.
2215 * This must be called with NMG_LOCK held.
2217 * The following na callbacks are called in the process:
2219 * na->nm_config() [by netmap_update_config]
2220 * (get current number and size of rings)
2222 * We have a generic one for linux (netmap_linux_config).
2223 * The bwrap has to override this, since it has to forward
2224 * the request to the wrapped adapter (netmap_bwrap_config).
2227 * na->nm_krings_create()
2228 * (create and init the krings array)
2230 * One of the following:
2232 * * netmap_hw_krings_create, (hw ports)
2233 * creates the standard layout for the krings
2234 * and adds the mbq (used for the host rings).
2236 * * netmap_vp_krings_create (VALE ports)
2237 * add leases and scratchpads
2239 * * netmap_pipe_krings_create (pipes)
2240 * create the krings and rings of both ends and
2243 * * netmap_monitor_krings_create (monitors)
2244 * avoid allocating the mbq
2246 * * netmap_bwrap_krings_create (bwraps)
2247 * create both the bwrap krings array,
2248 * the krings array of the wrapped adapter, and
2249 * (if needed) the fake array for the host adapter
2251 * na->nm_register(, 1)
2252 * (put the adapter in netmap mode)
2254 * This may be one of the following:
2256 * * netmap_hw_reg (hw ports)
2257 * checks that the ifp is still there, then calls
2258 * the hardware specific callback;
2260 * * netmap_vp_reg (VALE ports)
2261 * If the port is connected to a bridge,
2262 * set the NAF_NETMAP_ON flag under the
2263 * bridge write lock.
2265 * * netmap_pipe_reg (pipes)
2266 * inform the other pipe end that it is no
2267 * longer responsible for the lifetime of this
2270 * * netmap_monitor_reg (monitors)
2271 * intercept the sync callbacks of the monitored
2274 * * netmap_bwrap_reg (bwraps)
2275 * cross-link the bwrap and hwna rings,
2276 * forward the request to the hwna, override
2277 * the hwna notify callback (to get the frames
2278 * coming from outside go through the bridge).
2283 netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na,
2284 uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags)
2286 struct netmap_if *nifp = NULL;
2290 priv->np_na = na; /* store the reference */
2291 error = netmap_mem_finalize(na->nm_mem, na);
2295 if (na->active_fds == 0) {
2297 /* cache the allocator info in the na */
2298 error = netmap_mem_get_lut(na->nm_mem, &na->na_lut);
2301 nm_prdis("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal,
2302 na->na_lut.objsize);
2304 /* ring configuration may have changed, fetch from the card */
2305 netmap_update_config(na);
2308 /* compute the range of tx and rx rings to monitor */
2309 error = netmap_set_ringid(priv, nr_mode, nr_ringid, nr_flags);
2313 if (na->active_fds == 0) {
2315 * If this is the first registration of the adapter,
2316 * perform sanity checks and create the in-kernel view
2317 * of the netmap rings (the netmap krings).
2319 if (na->ifp && nm_priv_rx_enabled(priv)) {
2320 /* This netmap adapter is attached to an ifnet. */
2321 unsigned mtu = nm_os_ifnet_mtu(na->ifp);
2323 nm_prdis("%s: mtu %d rx_buf_maxsize %d netmap_buf_size %d",
2324 na->name, mtu, na->rx_buf_maxsize, NETMAP_BUF_SIZE(na));
2326 if (na->rx_buf_maxsize == 0) {
2327 nm_prerr("%s: error: rx_buf_maxsize == 0", na->name);
2332 error = netmap_buf_size_validate(na, mtu);
2338 * Depending on the adapter, this may also create
2339 * the netmap rings themselves
2341 error = na->nm_krings_create(na);
2347 /* now the krings must exist and we can check whether some
2348 * previous bind has exclusive ownership on them, and set
2351 error = netmap_krings_get(priv);
2353 goto err_del_krings;
2355 /* create any missing netmap rings */
2356 error = netmap_mem_rings_create(na);
2360 /* in all cases, create a new netmap if */
2361 nifp = netmap_mem_if_new(na, priv);
2367 if (nm_kring_pending(priv)) {
2368 /* Some kring is switching mode, tell the adapter to
2370 error = na->nm_register(na, 1);
2375 /* Commit the reference. */
2379 * advertise that the interface is ready by setting np_nifp.
2380 * The barrier is needed because readers (poll, *SYNC and mmap)
2381 * check for priv->np_nifp != NULL without locking
2383 mb(); /* make sure previous writes are visible to all CPUs */
2384 priv->np_nifp = nifp;
2389 netmap_mem_if_delete(na, nifp);
2391 netmap_krings_put(priv);
2392 netmap_mem_rings_delete(na);
2394 if (na->active_fds == 0)
2395 na->nm_krings_delete(na);
2397 if (na->active_fds == 0)
2398 memset(&na->na_lut, 0, sizeof(na->na_lut));
2400 netmap_mem_drop(na);
2408 * update kring and ring at the end of rxsync/txsync.
2411 nm_sync_finalize(struct netmap_kring *kring)
2414 * Update ring tail to what the kernel knows
2415 * After txsync: head/rhead/hwcur might be behind cur/rcur
2418 kring->ring->tail = kring->rtail = kring->nr_hwtail;
2420 nm_prdis(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d",
2421 kring->name, kring->nr_hwcur, kring->nr_hwtail,
2422 kring->rhead, kring->rcur, kring->rtail);
2425 /* set ring timestamp */
2427 ring_timestamp_set(struct netmap_ring *ring)
2429 if (netmap_no_timestamp == 0 || ring->flags & NR_TIMESTAMP) {
2430 microtime(&ring->ts);
2434 static int nmreq_copyin(struct nmreq_header *, int);
2435 static int nmreq_copyout(struct nmreq_header *, int);
2436 static int nmreq_checkoptions(struct nmreq_header *);
2439 * ioctl(2) support for the "netmap" device.
2441 * Following is a list of accepted commands:
2442 * - NIOCCTRL device control API
2443 * - NIOCTXSYNC sync TX rings
2444 * - NIOCRXSYNC sync RX rings
2445 * - SIOCGIFADDR just for convenience
2446 * - NIOCGINFO deprecated (legacy API)
2447 * - NIOCREGIF deprecated (legacy API)
2449 * Return 0 on success, errno otherwise.
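/*
 * Userspace view of the NIOCCTRL path (a minimal sketch, error handling
 * omitted; "em0" is a hypothetical interface and fd comes from
 * open("/dev/netmap", O_RDWR)):
 *
 *	struct nmreq_header hdr;
 *	struct nmreq_register reg;
 *
 *	memset(&hdr, 0, sizeof(hdr));
 *	memset(&reg, 0, sizeof(reg));
 *	hdr.nr_version = NETMAP_API;
 *	hdr.nr_reqtype = NETMAP_REQ_REGISTER;
 *	strlcpy(hdr.nr_name, "em0", sizeof(hdr.nr_name));
 *	hdr.nr_body = (uintptr_t)&reg;
 *	reg.nr_mode = NR_REG_ALL_NIC;
 *	ioctl(fd, NIOCCTRL, &hdr);
 *	// on return, reg.nr_memsize and reg.nr_offset can be used to
 *	// mmap() the shared region and locate the netmap_if
 */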
2452 netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data,
2453 struct thread *td, int nr_body_is_user)
2455 struct mbq q; /* packets from RX hw queues to host stack */
2456 struct netmap_adapter *na = NULL;
2457 struct netmap_mem_d *nmd = NULL;
2458 struct ifnet *ifp = NULL;
2460 u_int i, qfirst, qlast;
2461 struct netmap_kring **krings;
2467 struct nmreq_header *hdr = (struct nmreq_header *)data;
2469 if (hdr->nr_version < NETMAP_MIN_API ||
2470 hdr->nr_version > NETMAP_MAX_API) {
2471 nm_prerr("API mismatch: got %d need %d",
2472 hdr->nr_version, NETMAP_API);
2476 /* Make a kernel-space copy of the user-space nr_body.
2477 * For convenience, the nr_body pointer and the pointers
2478 * in the options list will be replaced with their
2479 * kernel-space counterparts. The original pointers are
2480 * saved internally and later restored by nmreq_copyout
2482 error = nmreq_copyin(hdr, nr_body_is_user);
2487 /* Sanitize hdr->nr_name. */
2488 hdr->nr_name[sizeof(hdr->nr_name) - 1] = '\0';
2490 switch (hdr->nr_reqtype) {
2491 case NETMAP_REQ_REGISTER: {
2492 struct nmreq_register *req =
2493 (struct nmreq_register *)(uintptr_t)hdr->nr_body;
2494 struct netmap_if *nifp;
2496 /* Protect access to priv from concurrent requests. */
2499 struct nmreq_option *opt;
2502 if (priv->np_nifp != NULL) { /* thread already registered */
2508 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
2509 NETMAP_REQ_OPT_EXTMEM);
2511 struct nmreq_opt_extmem *e =
2512 (struct nmreq_opt_extmem *)opt;
2514 error = nmreq_checkduplicate(opt);
2516 opt->nro_status = error;
2519 nmd = netmap_mem_ext_create(e->nro_usrptr,
2520 &e->nro_info, &error);
2521 opt->nro_status = error;
2525 #endif /* WITH_EXTMEM */
2527 if (nmd == NULL && req->nr_mem_id) {
2528 /* find the allocator and get a reference */
2529 nmd = netmap_mem_find(req->nr_mem_id);
2531 if (netmap_verbose) {
2532 nm_prerr("%s: failed to find mem_id %u",
2533 hdr->nr_name, req->nr_mem_id);
2539 /* find the interface and a reference */
2540 error = netmap_get_na(hdr, &na, &ifp, nmd,
2541 1 /* create */); /* keep reference */
2544 if (NETMAP_OWNED_BY_KERN(na)) {
2549 if (na->virt_hdr_len && !(req->nr_flags & NR_ACCEPT_VNET_HDR)) {
2550 nm_prerr("virt_hdr_len=%d, but application does "
2551 "not accept it", na->virt_hdr_len);
2556 error = netmap_do_regif(priv, na, req->nr_mode,
2557 req->nr_ringid, req->nr_flags);
2558 if (error) { /* reg. failed, release priv and ref */
2562 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
2563 NETMAP_REQ_OPT_CSB);
2565 struct nmreq_opt_csb *csbo =
2566 (struct nmreq_opt_csb *)opt;
2567 error = nmreq_checkduplicate(opt);
2569 error = netmap_csb_validate(priv, csbo);
2571 opt->nro_status = error;
2573 netmap_do_unregif(priv);
2578 nifp = priv->np_nifp;
2580 /* return the offset of the netmap_if object */
2581 req->nr_rx_rings = na->num_rx_rings;
2582 req->nr_tx_rings = na->num_tx_rings;
2583 req->nr_rx_slots = na->num_rx_desc;
2584 req->nr_tx_slots = na->num_tx_desc;
2585 req->nr_host_tx_rings = na->num_host_tx_rings;
2586 req->nr_host_rx_rings = na->num_host_rx_rings;
2587 error = netmap_mem_get_info(na->nm_mem, &req->nr_memsize, &memflags,
2590 netmap_do_unregif(priv);
2593 if (memflags & NETMAP_MEM_PRIVATE) {
2594 *(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM;
2597 priv->np_si[t] = nm_si_user(priv, t) ?
2598 &na->si[t] : &NMR(na, t)[priv->np_qfirst[t]]->si;
2601 if (req->nr_extra_bufs) {
2603 nm_prinf("requested %d extra buffers",
2604 req->nr_extra_bufs);
2605 req->nr_extra_bufs = netmap_extra_alloc(na,
2606 &nifp->ni_bufs_head, req->nr_extra_bufs);
2608 nm_prinf("got %d extra buffers", req->nr_extra_bufs);
2610 req->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp);
2612 error = nmreq_checkoptions(hdr);
2614 netmap_do_unregif(priv);
2618 /* store ifp reference so that priv destructor may release it */
2622 netmap_unget_na(na, ifp);
2624 /* release the reference from netmap_mem_find() or
2625 * netmap_mem_ext_create()
2628 netmap_mem_put(nmd);
2633 case NETMAP_REQ_PORT_INFO_GET: {
2634 struct nmreq_port_info_get *req =
2635 (struct nmreq_port_info_get *)(uintptr_t)hdr->nr_body;
2641 if (hdr->nr_name[0] != '\0') {
2642 /* Build a nmreq_register out of the nmreq_port_info_get,
2643 * so that we can call netmap_get_na(). */
2644 struct nmreq_register regreq;
2645 bzero(&regreq, sizeof(regreq));
2646 regreq.nr_mode = NR_REG_ALL_NIC;
2647 regreq.nr_tx_slots = req->nr_tx_slots;
2648 regreq.nr_rx_slots = req->nr_rx_slots;
2649 regreq.nr_tx_rings = req->nr_tx_rings;
2650 regreq.nr_rx_rings = req->nr_rx_rings;
2651 regreq.nr_host_tx_rings = req->nr_host_tx_rings;
2652 regreq.nr_host_rx_rings = req->nr_host_rx_rings;
2653 regreq.nr_mem_id = req->nr_mem_id;
2655 /* get a refcount */
2656 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2657 hdr->nr_body = (uintptr_t)&regreq;
2658 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
2659 hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; /* reset type */
2660 hdr->nr_body = (uintptr_t)req; /* reset nr_body */
2666 nmd = na->nm_mem; /* get memory allocator */
2668 nmd = netmap_mem_find(req->nr_mem_id ? req->nr_mem_id : 1);
2671 nm_prerr("%s: failed to find mem_id %u",
2673 req->nr_mem_id ? req->nr_mem_id : 1);
2679 error = netmap_mem_get_info(nmd, &req->nr_memsize, &memflags,
2683 if (na == NULL) /* only memory info */
2685 netmap_update_config(na);
2686 req->nr_rx_rings = na->num_rx_rings;
2687 req->nr_tx_rings = na->num_tx_rings;
2688 req->nr_rx_slots = na->num_rx_desc;
2689 req->nr_tx_slots = na->num_tx_desc;
2690 req->nr_host_tx_rings = na->num_host_tx_rings;
2691 req->nr_host_rx_rings = na->num_host_rx_rings;
2693 netmap_unget_na(na, ifp);
2698 case NETMAP_REQ_VALE_ATTACH: {
2699 error = netmap_vale_attach(hdr, NULL /* userspace request */);
2703 case NETMAP_REQ_VALE_DETACH: {
2704 error = netmap_vale_detach(hdr, NULL /* userspace request */);
2708 case NETMAP_REQ_VALE_LIST: {
2709 error = netmap_vale_list(hdr);
2713 case NETMAP_REQ_PORT_HDR_SET: {
2714 struct nmreq_port_hdr *req =
2715 (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body;
2716 /* Build a nmreq_register out of the nmreq_port_hdr,
2717 * so that we can call netmap_get_bdg_na(). */
2718 struct nmreq_register regreq;
2719 bzero(&regreq, sizeof(regreq));
2720 regreq.nr_mode = NR_REG_ALL_NIC;
2722 /* For now we only support virtio-net headers, and only for
2723 * VALE ports, but this may change in future. Valid lengths
2724 * for the virtio-net header are 0 (no header), 10 and 12. */
2725 if (req->nr_hdr_len != 0 &&
2726 req->nr_hdr_len != sizeof(struct nm_vnet_hdr) &&
2727 req->nr_hdr_len != 12) {
2729 nm_prerr("invalid hdr_len %u", req->nr_hdr_len);
2734 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2735 hdr->nr_body = (uintptr_t)&regreq;
2736 error = netmap_get_vale_na(hdr, &na, NULL, 0);
2737 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_SET;
2738 hdr->nr_body = (uintptr_t)req;
2740 struct netmap_vp_adapter *vpna =
2741 (struct netmap_vp_adapter *)na;
2742 na->virt_hdr_len = req->nr_hdr_len;
2743 if (na->virt_hdr_len) {
2744 vpna->mfs = NETMAP_BUF_SIZE(na);
2747 nm_prinf("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na);
2748 netmap_adapter_put(na);
2756 case NETMAP_REQ_PORT_HDR_GET: {
2757 /* Get vnet-header length for this netmap port */
2758 struct nmreq_port_hdr *req =
2759 (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body;
2760 /* Build a nmreq_register out of the nmreq_port_hdr,
2761 * so that we can call netmap_get_bdg_na(). */
2762 struct nmreq_register regreq;
2765 bzero(&regreq, sizeof(regreq));
2766 regreq.nr_mode = NR_REG_ALL_NIC;
2768 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2769 hdr->nr_body = (uintptr_t)&regreq;
2770 error = netmap_get_na(hdr, &na, &ifp, NULL, 0);
2771 hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_GET;
2772 hdr->nr_body = (uintptr_t)req;
2774 req->nr_hdr_len = na->virt_hdr_len;
2776 netmap_unget_na(na, ifp);
2781 case NETMAP_REQ_VALE_NEWIF: {
2782 error = nm_vi_create(hdr);
2786 case NETMAP_REQ_VALE_DELIF: {
2787 error = nm_vi_destroy(hdr->nr_name);
2791 case NETMAP_REQ_VALE_POLLING_ENABLE:
2792 case NETMAP_REQ_VALE_POLLING_DISABLE: {
2793 error = nm_bdg_polling(hdr);
2796 #endif /* WITH_VALE */
2797 case NETMAP_REQ_POOLS_INFO_GET: {
2798 /* Get information from the memory allocator used for
2800 struct nmreq_pools_info *req =
2801 (struct nmreq_pools_info *)(uintptr_t)hdr->nr_body;
2804 /* Build a nmreq_register out of the nmreq_pools_info,
2805 * so that we can call netmap_get_na(). */
2806 struct nmreq_register regreq;
2807 bzero(&regreq, sizeof(regreq));
2808 regreq.nr_mem_id = req->nr_mem_id;
2809 regreq.nr_mode = NR_REG_ALL_NIC;
2811 hdr->nr_reqtype = NETMAP_REQ_REGISTER;
2812 hdr->nr_body = (uintptr_t)&regreq;
2813 error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
2814 hdr->nr_reqtype = NETMAP_REQ_POOLS_INFO_GET; /* reset type */
2815 hdr->nr_body = (uintptr_t)req; /* reset nr_body */
2821 nmd = na->nm_mem; /* grab the memory allocator */
2827 /* Finalize the memory allocator, get the pools
2828 * information and release the allocator. */
2829 error = netmap_mem_finalize(nmd, na);
2833 error = netmap_mem_pools_info_get(req, nmd);
2834 netmap_mem_drop(na);
2836 netmap_unget_na(na, ifp);
2841 case NETMAP_REQ_CSB_ENABLE: {
2842 struct nmreq_option *opt;
2844 opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
2845 NETMAP_REQ_OPT_CSB);
2849 struct nmreq_opt_csb *csbo =
2850 (struct nmreq_opt_csb *)opt;
2851 error = nmreq_checkduplicate(opt);
2854 error = netmap_csb_validate(priv, csbo);
2857 opt->nro_status = error;
2862 case NETMAP_REQ_SYNC_KLOOP_START: {
2863 error = netmap_sync_kloop(priv, hdr);
2867 case NETMAP_REQ_SYNC_KLOOP_STOP: {
2868 error = netmap_sync_kloop_stop(priv);
2877 /* Write back request body to userspace and reset the
2878 * user-space pointer. */
2879 error = nmreq_copyout(hdr, error);
2885 if (unlikely(priv->np_nifp == NULL)) {
2889 mb(); /* make sure following reads are not from cache */
2891 if (unlikely(priv->np_csb_atok_base)) {
2892 nm_prerr("Invalid sync in CSB mode");
2897 na = priv->np_na; /* we have a reference */
2900 t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX);
2901 krings = NMR(na, t);
2902 qfirst = priv->np_qfirst[t];
2903 qlast = priv->np_qlast[t];
2904 sync_flags = priv->np_sync_flags;
2906 for (i = qfirst; i < qlast; i++) {
2907 struct netmap_kring *kring = krings[i];
2908 struct netmap_ring *ring = kring->ring;
2910 if (unlikely(nm_kr_tryget(kring, 1, &error))) {
2911 error = (error ? EIO : 0);
2915 if (cmd == NIOCTXSYNC) {
2916 if (netmap_debug & NM_DEBUG_TXSYNC)
2917 nm_prinf("pre txsync ring %d cur %d hwcur %d",
2920 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
2921 netmap_ring_reinit(kring);
2922 } else if (kring->nm_sync(kring, sync_flags | NAF_FORCE_RECLAIM) == 0) {
2923 nm_sync_finalize(kring);
2925 if (netmap_debug & NM_DEBUG_TXSYNC)
2926 nm_prinf("post txsync ring %d cur %d hwcur %d",
2930 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
2931 netmap_ring_reinit(kring);
2933 if (nm_may_forward_up(kring)) {
2934 /* transparent forwarding, see netmap_poll() */
2935 netmap_grab_packets(kring, &q, netmap_fwd);
2937 if (kring->nm_sync(kring, sync_flags | NAF_FORCE_READ) == 0) {
2938 nm_sync_finalize(kring);
2940 ring_timestamp_set(ring);
2946 netmap_send_up(na->ifp, &q);
2953 return netmap_ioctl_legacy(priv, cmd, data, td);
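/*
 * For reference, the userspace side of a NIOCTXSYNC round looks like
 * this (a sketch using the public ring layout from netmap_user.h;
 * 'ring' points into the mmap()ed region):
 *
 *	struct netmap_ring *ring = ...;	// the TX ring of interest
 *	while (ring->head != ring->tail) {
 *		struct netmap_slot *slot = &ring->slot[ring->head];
 *		// fill NETMAP_BUF(ring, slot->buf_idx), set slot->len
 *		ring->head = ring->cur = nm_ring_next(ring, ring->head);
 *	}
 *	ioctl(fd, NIOCTXSYNC, NULL);	// hand the new slots to the kernel
 */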
2962 nmreq_size_by_type(uint16_t nr_reqtype)
2964 switch (nr_reqtype) {
2965 case NETMAP_REQ_REGISTER:
2966 return sizeof(struct nmreq_register);
2967 case NETMAP_REQ_PORT_INFO_GET:
2968 return sizeof(struct nmreq_port_info_get);
2969 case NETMAP_REQ_VALE_ATTACH:
2970 return sizeof(struct nmreq_vale_attach);
2971 case NETMAP_REQ_VALE_DETACH:
2972 return sizeof(struct nmreq_vale_detach);
2973 case NETMAP_REQ_VALE_LIST:
2974 return sizeof(struct nmreq_vale_list);
2975 case NETMAP_REQ_PORT_HDR_SET:
2976 case NETMAP_REQ_PORT_HDR_GET:
2977 return sizeof(struct nmreq_port_hdr);
2978 case NETMAP_REQ_VALE_NEWIF:
2979 return sizeof(struct nmreq_vale_newif);
2980 case NETMAP_REQ_VALE_DELIF:
2981 case NETMAP_REQ_SYNC_KLOOP_STOP:
2982 case NETMAP_REQ_CSB_ENABLE:
2984 case NETMAP_REQ_VALE_POLLING_ENABLE:
2985 case NETMAP_REQ_VALE_POLLING_DISABLE:
2986 return sizeof(struct nmreq_vale_polling);
2987 case NETMAP_REQ_POOLS_INFO_GET:
2988 return sizeof(struct nmreq_pools_info);
2989 case NETMAP_REQ_SYNC_KLOOP_START:
2990 return sizeof(struct nmreq_sync_kloop_start);
2996 nmreq_opt_size_by_type(uint32_t nro_reqtype, uint64_t nro_size)
2998 size_t rv = sizeof(struct nmreq_option);
2999 #ifdef NETMAP_REQ_OPT_DEBUG
3000 if (nro_reqtype & NETMAP_REQ_OPT_DEBUG)
3001 return (nro_reqtype & ~NETMAP_REQ_OPT_DEBUG);
3002 #endif /* NETMAP_REQ_OPT_DEBUG */
3003 switch (nro_reqtype) {
3005 case NETMAP_REQ_OPT_EXTMEM:
3006 rv = sizeof(struct nmreq_opt_extmem);
3008 #endif /* WITH_EXTMEM */
3009 case NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS:
3013 case NETMAP_REQ_OPT_CSB:
3014 rv = sizeof(struct nmreq_opt_csb);
3016 case NETMAP_REQ_OPT_SYNC_KLOOP_MODE:
3017 rv = sizeof(struct nmreq_opt_sync_kloop_mode);
3020 /* subtract the common header */
3021 return rv - sizeof(struct nmreq_option);
3025 nmreq_copyin(struct nmreq_header *hdr, int nr_body_is_user)
3027 size_t rqsz, optsz, bufsz;
3029 char *ker = NULL, *p;
3030 struct nmreq_option **next, *src;
3031 struct nmreq_option buf;
3034 if (hdr->nr_reserved) {
3036 nm_prerr("nr_reserved must be zero");
3040 if (!nr_body_is_user)
3043 hdr->nr_reserved = nr_body_is_user;
3045 /* compute the total size of the buffer */
3046 rqsz = nmreq_size_by_type(hdr->nr_reqtype);
3047 if (rqsz > NETMAP_REQ_MAXSIZE) {
3051 if ((rqsz && hdr->nr_body == (uintptr_t)NULL) ||
3052 (!rqsz && hdr->nr_body != (uintptr_t)NULL)) {
3053 /* Request body expected, but not found; or
3054 * request body found but unexpected. */
3056 nm_prerr("nr_body expected but not found, or vice versa");
3061 bufsz = 2 * sizeof(void *) + rqsz;
3063 for (src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; src;
3064 src = (struct nmreq_option *)(uintptr_t)buf.nro_next)
3066 error = copyin(src, &buf, sizeof(*src));
3069 optsz += sizeof(*src);
3070 optsz += nmreq_opt_size_by_type(buf.nro_reqtype, buf.nro_size);
3071 if (rqsz + optsz > NETMAP_REQ_MAXSIZE) {
3075 bufsz += optsz + sizeof(void *);
3078 ker = nm_os_malloc(bufsz);
3085 /* make a copy of the user pointers */
3086 ptrs = (uint64_t*)p;
3087 *ptrs++ = hdr->nr_body;
3088 *ptrs++ = hdr->nr_options;
3092 error = copyin((void *)(uintptr_t)hdr->nr_body, p, rqsz);
3095 /* overwrite the user pointer with the in-kernel one */
3096 hdr->nr_body = (uintptr_t)p;
3099 /* copy the options */
3100 next = (struct nmreq_option **)&hdr->nr_options;
3103 struct nmreq_option *opt;
3105 /* copy the option header */
3106 ptrs = (uint64_t *)p;
3107 opt = (struct nmreq_option *)(ptrs + 1);
3108 error = copyin(src, opt, sizeof(*src));
3111 /* make a copy of the user next pointer */
3112 *ptrs = opt->nro_next;
3113 /* overwrite the user pointer with the in-kernel one */
3116 /* initialize the option as not supported.
3117 * Recognized options will update this field.
3119 opt->nro_status = EOPNOTSUPP;
3121 p = (char *)(opt + 1);
3123 /* copy the option body */
3124 optsz = nmreq_opt_size_by_type(opt->nro_reqtype,
3127 /* the option body follows the option header */
3128 error = copyin(src + 1, p, optsz);
3134 /* move to next option */
3135 next = (struct nmreq_option **)&opt->nro_next;
3141 ptrs = (uint64_t *)ker;
3142 hdr->nr_body = *ptrs++;
3143 hdr->nr_options = *ptrs++;
3144 hdr->nr_reserved = 0;
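/*
 * Sketch of the kernel buffer built by nmreq_copyin() (sizes not to
 * scale):
 *
 *	+---------------------------+
 *	| saved nr_body pointer     |
 *	| saved nr_options pointer  |
 *	+---------------------------+
 *	| request body (rqsz)       |
 *	+---------------------------+
 *	| saved nro_next pointer    |  \
 *	| nmreq_option header       |   |  repeated for each
 *	| option body (optsz)       |  /   option in the list
 *	+---------------------------+
 *
 * nmreq_copyout() walks this layout to restore the saved user-space
 * pointers before copying the results back.
 */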
3151 nmreq_copyout(struct nmreq_header *hdr, int rerror)
3153 struct nmreq_option *src, *dst;
3154 void *ker = (void *)(uintptr_t)hdr->nr_body, *bufstart;
3159 if (!hdr->nr_reserved)
3162 /* restore the user pointers in the header */
3163 ptrs = (uint64_t *)ker - 2;
3165 hdr->nr_body = *ptrs++;
3166 src = (struct nmreq_option *)(uintptr_t)hdr->nr_options;
3167 hdr->nr_options = *ptrs;
3171 bodysz = nmreq_size_by_type(hdr->nr_reqtype);
3172 error = copyout(ker, (void *)(uintptr_t)hdr->nr_body, bodysz);
3179 /* copy the options */
3180 dst = (struct nmreq_option *)(uintptr_t)hdr->nr_options;
3185 /* restore the user pointer */
3186 next = src->nro_next;
3187 ptrs = (uint64_t *)src - 1;
3188 src->nro_next = *ptrs;
3190 /* always copy the option header */
3191 error = copyout(src, dst, sizeof(*src));
3197 /* copy the option body only if there was no error */
3198 if (!rerror && !src->nro_status) {
3199 optsz = nmreq_opt_size_by_type(src->nro_reqtype,
3202 error = copyout(src + 1, dst + 1, optsz);
3209 src = (struct nmreq_option *)(uintptr_t)next;
3210 dst = (struct nmreq_option *)(uintptr_t)*ptrs;
3215 hdr->nr_reserved = 0;
3216 nm_os_free(bufstart);
3220 struct nmreq_option *
3221 nmreq_findoption(struct nmreq_option *opt, uint16_t reqtype)
3223 for ( ; opt; opt = (struct nmreq_option *)(uintptr_t)opt->nro_next)
3224 if (opt->nro_reqtype == reqtype)
3230 nmreq_checkduplicate(struct nmreq_option *opt) {
3231 uint16_t type = opt->nro_reqtype;
3234 while ((opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)opt->nro_next,
3237 opt->nro_status = EINVAL;
3239 return (dup ? EINVAL : 0);
3243 nmreq_checkoptions(struct nmreq_header *hdr)
3245 struct nmreq_option *opt;
3246 /* return error if there is still any option
3247 * marked as not supported
3250 for (opt = (struct nmreq_option *)(uintptr_t)hdr->nr_options; opt;
3251 opt = (struct nmreq_option *)(uintptr_t)opt->nro_next)
3252 if (opt->nro_status == EOPNOTSUPP)
3259 * select(2) and poll(2) handlers for the "netmap" device.
3261 * Can be called for one or more queues.
3262 * Return the event mask corresponding to ready events.
3263 * If there are no ready events (and 'sr' is not NULL), do a
3264 * selrecord on either individual selinfo or on the global one.
3265 * Device-dependent parts (locking and sync of tx/rx rings)
3266 * are done through callbacks.
3268 * On linux, arguments are really pwait, the poll table, and 'td' is struct file *
3269 * The first one is remapped to pwait as selrecord() uses the name as a hidden argument.
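/*
 * Typical userspace usage (a minimal sketch):
 *
 *	struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *	poll(&pfd, 1, 2000);	// wait up to 2s for RX events
 *	// on POLLIN, scan the bound RX rings for new slots; rxsync
 *	// has already been run by the handler below
 */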
3273 netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr)
3275 struct netmap_adapter *na;
3276 struct netmap_kring *kring;
3277 struct netmap_ring *ring;
3278 u_int i, want[NR_TXRX], revents = 0;
3279 NM_SELINFO_T *si[NR_TXRX];
3280 #define want_tx want[NR_TX]
3281 #define want_rx want[NR_RX]
3282 struct mbq q; /* packets from RX hw queues to host stack */
3285 * In order to avoid nested locks, we need to "double check"
3286 * txsync and rxsync if we decide to do a selrecord().
3287 * retry_tx (and retry_rx, later) prevent looping forever.
3289 int retry_tx = 1, retry_rx = 1;
3291 /* Transparent mode: send_down is 1 if we have found some
3292 * packets to forward (host RX ring --> NIC) during the rx
3293 * scan and we have not sent them down to the NIC yet.
3294 * Transparent mode requires binding all rings to a single file descriptor.
3298 int sync_flags = priv->np_sync_flags;
3302 if (unlikely(priv->np_nifp == NULL)) {
3305 mb(); /* make sure following reads are not from cache */
3309 if (unlikely(!nm_netmap_on(na)))
3312 if (unlikely(priv->np_csb_atok_base)) {
3313 nm_prerr("Invalid poll in CSB mode");
3317 if (netmap_debug & NM_DEBUG_ON)
3318 nm_prinf("device %s events 0x%x", na->name, events);
3319 want_tx = events & (POLLOUT | POLLWRNORM);
3320 want_rx = events & (POLLIN | POLLRDNORM);
3323 * If the card has more than one queue AND the file descriptor is
3324 * bound to all of them, we sleep on the "global" selinfo, otherwise
3325 * we sleep on individual selinfo (FreeBSD only allows two selinfo's
3326 * per file descriptor).
3327 * The interrupt routine in the driver wakes one or the other
3328 * (or both) depending on which clients are active.
3330 * rxsync() is only called if we run out of buffers on a POLLIN.
3331 * txsync() is called if we run out of buffers on POLLOUT, or
3332 * there are pending packets to send. The latter can be disabled
3333 * passing NETMAP_NO_TX_POLL in the NIOCREG call.
3335 si[NR_RX] = priv->np_si[NR_RX];
3336 si[NR_TX] = priv->np_si[NR_TX];
3340 * We start with a lock free round which is cheap if we have
3341 * slots available. If this fails, then lock and call the sync
3342 * routines. We can't do this on Linux, as the contract says
3343 * that we must call nm_os_selrecord() unconditionally.
3346 const enum txrx t = NR_TX;
3347 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
3348 kring = NMR(na, t)[i];
3349 if (kring->ring->cur != kring->ring->tail) {
3350 /* Some unseen TX space is available, so
3351 * we don't need to run txsync. */
3359 const enum txrx t = NR_RX;
3360 int rxsync_needed = 0;
3362 for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
3363 kring = NMR(na, t)[i];
3364 if (kring->ring->cur == kring->ring->tail
3365 || kring->rhead != kring->ring->head) {
3366 /* There are no unseen packets on this ring,
3367 * or there are some buffers to be returned
3368 * to the netmap port. We therefore go ahead
3369 * and run rxsync. */
3374 if (!rxsync_needed) {
3382 /* The selrecord must be unconditional on linux. */
3383 nm_os_selrecord(sr, si[NR_RX]);
3384 nm_os_selrecord(sr, si[NR_TX]);
3388 * If we want to push packets out (priv->np_txpoll) or
3389 * want_tx is still set, we must issue txsync calls
3390 * (on all rings, to avoid that the tx rings stall).
3391 * Fortunately, normal tx mode has np_txpoll set.
3393 if (priv->np_txpoll || want_tx) {
3395 * The first round checks if anyone is ready, if not
3396 * do a selrecord and another round to handle races.
3397 * want_tx goes to 0 if any space is found, and is
3398 * used to skip rings with no pending transmissions.
3401 for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) {
3404 kring = na->tx_rings[i];
3408 * Don't try to txsync this TX ring if we already found some
3409 * space in some of the TX rings (want_tx == 0) and there are no
3410 * TX slots in this ring that need to be flushed to the NIC
3413 if (!send_down && !want_tx && ring->head == kring->nr_hwcur)
3416 if (nm_kr_tryget(kring, 1, &revents))
3419 if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
3420 netmap_ring_reinit(kring);
3423 if (kring->nm_sync(kring, sync_flags))
3426 nm_sync_finalize(kring);
3430 * If we found new slots, notify potential
3431 * listeners on the same ring.
3432 * Since we just did a txsync, look at the copies
3433 * of cur,tail in the kring.
3435 found = kring->rcur != kring->rtail;
3437 if (found) { /* notify other listeners */
3441 kring->nm_notify(kring, 0);
3445 /* if there were any packets to forward we must have handled them by now */
3447 if (want_tx && retry_tx && sr) {
3449 nm_os_selrecord(sr, si[NR_TX]);
3457 * If want_rx is still set scan receive rings.
3458 * Do it on all rings because otherwise we starve.
3461 /* two rounds here for race avoidance */
3463 for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) {
3466 kring = na->rx_rings[i];
3469 if (unlikely(nm_kr_tryget(kring, 1, &revents)))
3472 if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
3473 netmap_ring_reinit(kring);
3476 /* now we can use kring->rcur, rtail */
3479 * transparent mode support: collect packets from
3480 * hw rxring(s) that have been released by the user
3482 if (nm_may_forward_up(kring)) {
3483 netmap_grab_packets(kring, &q, netmap_fwd);
3486 /* Clear the NR_FORWARD flag anyway, it may be set by
3487 * the nm_sync() below only for the host RX ring (see
3488 * netmap_rxsync_from_host()). */
3489 kring->nr_kflags &= ~NR_FORWARD;
3490 if (kring->nm_sync(kring, sync_flags))
3493 nm_sync_finalize(kring);
3494 send_down |= (kring->nr_kflags & NR_FORWARD);
3495 ring_timestamp_set(ring);
3496 found = kring->rcur != kring->rtail;
3502 kring->nm_notify(kring, 0);
3508 if (retry_rx && sr) {
3509 nm_os_selrecord(sr, si[NR_RX]);
3512 if (send_down || retry_rx) {
3515 goto flush_tx; /* and retry_rx */
3522 * Transparent mode: released bufs (i.e. between kring->nr_hwcur and
3523 * ring->head) marked with NS_FORWARD on hw rx rings are passed up
3524 * to the host stack.
3528 netmap_send_up(na->ifp, &q);
3537 nma_intr_enable(struct netmap_adapter *na, int onoff)
3539 bool changed = false;
3544 for (i = 0; i < nma_get_nrings(na, t); i++) {
3545 struct netmap_kring *kring = NMR(na, t)[i];
3546 int on = !(kring->nr_kflags & NKR_NOINTR);
3548 if (!!onoff != !!on) {
3552 kring->nr_kflags &= ~NKR_NOINTR;
3554 kring->nr_kflags |= NKR_NOINTR;
3560 return 0; /* nothing to do */
3564 nm_prerr("Cannot %s interrupts for %s", onoff ? "enable" : "disable",
3569 na->nm_intr(na, onoff);
3575 /*-------------------- driver support routines -------------------*/
3577 /* default notify callback */
3579 netmap_notify(struct netmap_kring *kring, int flags)
3581 struct netmap_adapter *na = kring->notify_na;
3582 enum txrx t = kring->tx;
3584 nm_os_selwakeup(&kring->si);
3585 /* optimization: avoid a wake up on the global
3586 * queue if nobody has registered for more
3589 if (na->si_users[t] > 0)
3590 nm_os_selwakeup(&na->si[t]);
3592 return NM_IRQ_COMPLETED;
3595 /* called by all routines that create netmap_adapters.
3596 * provide some defaults and get a reference to the
3600 netmap_attach_common(struct netmap_adapter *na)
3602 if (!na->rx_buf_maxsize) {
3603 /* Set a conservative default (larger is safer). */
3604 na->rx_buf_maxsize = PAGE_SIZE;
3608 if (na->na_flags & NAF_HOST_RINGS && na->ifp) {
3609 na->if_input = na->ifp->if_input; /* for netmap_send_up */
3611 na->pdev = na; /* make sure netmap_mem_map() is called */
3612 #endif /* __FreeBSD__ */
3613 if (na->na_flags & NAF_HOST_RINGS) {
3614 if (na->num_host_rx_rings == 0)
3615 na->num_host_rx_rings = 1;
3616 if (na->num_host_tx_rings == 0)
3617 na->num_host_tx_rings = 1;
3619 if (na->nm_krings_create == NULL) {
3620 /* we assume that we have been called by a driver,
3621 * since other port types all provide their own
3624 na->nm_krings_create = netmap_hw_krings_create;
3625 na->nm_krings_delete = netmap_hw_krings_delete;
3627 if (na->nm_notify == NULL)
3628 na->nm_notify = netmap_notify;
3631 if (na->nm_mem == NULL) {
3632 /* use the global allocator */
3633 na->nm_mem = netmap_mem_get(&nm_mem);
3636 if (na->nm_bdg_attach == NULL)
3637 /* no special nm_bdg_attach callback. On VALE
3638 * attach, we need to interpose a bwrap
3640 na->nm_bdg_attach = netmap_default_bdg_attach;
3646 /* Wrapper for the register callback provided by netmap-enabled
3648 * nm_iszombie(na) means that the driver module has been
3649 * unloaded, so we cannot call into it.
3650 * nm_os_ifnet_lock() must guarantee mutual exclusion with
3654 netmap_hw_reg(struct netmap_adapter *na, int onoff)
3656 struct netmap_hw_adapter *hwna =
3657 (struct netmap_hw_adapter*)na;
3662 if (nm_iszombie(na)) {
3665 } else if (na != NULL) {
3666 na->na_flags &= ~NAF_NETMAP_ON;
3671 error = hwna->nm_hw_register(na, onoff);
3674 nm_os_ifnet_unlock();
3680 netmap_hw_dtor(struct netmap_adapter *na)
3682 if (na->ifp == NULL)
3685 NM_DETACH_NA(na->ifp);
3690 * Allocate a netmap_adapter object, and initialize it from the
3691 * 'arg' passed by the driver on attach.
3692 * We allocate a block of memory of 'size' bytes, which has room
3693 * for struct netmap_adapter plus additional room private to
3695 * Return 0 on success, ENOMEM otherwise.
3698 netmap_attach_ext(struct netmap_adapter *arg, size_t size, int override_reg)
3700 struct netmap_hw_adapter *hwna = NULL;
3701 struct ifnet *ifp = NULL;
3703 if (size < sizeof(struct netmap_hw_adapter)) {
3704 if (netmap_debug & NM_DEBUG_ON)
3705 nm_prerr("Invalid netmap adapter size %d", (int)size);
3709 if (arg == NULL || arg->ifp == NULL) {
3710 if (netmap_debug & NM_DEBUG_ON)
3711 nm_prerr("either arg or arg->ifp is NULL");
3715 if (arg->num_tx_rings == 0 || arg->num_rx_rings == 0) {
3716 if (netmap_debug & NM_DEBUG_ON)
3717 nm_prerr("%s: invalid rings tx %d rx %d",
3718 arg->name, arg->num_tx_rings, arg->num_rx_rings);
3723 if (NM_NA_CLASH(ifp)) {
3724 /* If NA(ifp) is not null but there is no valid netmap
3725 * adapter it means that someone else is using the same
3726 * pointer (e.g. ax25_ptr on linux). This happens for
3727 * instance when also PF_RING is in use. */
3728 nm_prerr("Error: netmap adapter hook is busy");
3732 hwna = nm_os_malloc(size);
3736 hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE;
3737 strlcpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name));
3739 hwna->nm_hw_register = hwna->up.nm_register;
3740 hwna->up.nm_register = netmap_hw_reg;
3742 if (netmap_attach_common(&hwna->up)) {
3746 netmap_adapter_get(&hwna->up);
3748 NM_ATTACH_NA(ifp, &hwna->up);
3750 nm_os_onattach(ifp);
3752 if (arg->nm_dtor == NULL) {
3753 hwna->up.nm_dtor = netmap_hw_dtor;
3756 if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n",
3757 hwna->up.num_tx_rings, hwna->up.num_tx_desc,
3758 hwna->up.num_rx_rings, hwna->up.num_rx_desc);
3762 nm_prerr("fail, arg %p ifp %p na %p", arg, ifp, hwna);
3763 return (hwna ? EINVAL : ENOMEM);
3768 netmap_attach(struct netmap_adapter *arg)
3770 return netmap_attach_ext(arg, sizeof(struct netmap_hw_adapter),
3771 1 /* override nm_reg */);
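/*
 * Typical driver-side usage (a hypothetical sketch; the foo_* callbacks
 * and the adapter fields are placeholders, and real drivers usually set
 * more fields):
 *
 *	struct netmap_adapter na;
 *
 *	bzero(&na, sizeof(na));
 *	na.ifp = ifp;
 *	na.num_tx_rings = na.num_rx_rings = adapter->num_queues;
 *	na.num_tx_desc = na.num_rx_desc = adapter->num_descriptors;
 *	na.nm_register = foo_netmap_reg;
 *	na.nm_txsync = foo_netmap_txsync;
 *	na.nm_rxsync = foo_netmap_rxsync;
 *	netmap_attach(&na);
 *
 * netmap_attach_common() fills in defaults for the callbacks the
 * driver does not provide (krings_create, notify, memory allocator).
 */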
3776 NM_DBG(netmap_adapter_get)(struct netmap_adapter *na)
3782 refcount_acquire(&na->na_refcount);
3786 /* returns 1 iff the netmap_adapter is destroyed */
3788 NM_DBG(netmap_adapter_put)(struct netmap_adapter *na)
3793 if (!refcount_release(&na->na_refcount))
3799 if (na->tx_rings) { /* XXX should not happen */
3800 if (netmap_debug & NM_DEBUG_ON)
3801 nm_prerr("freeing leftover tx_rings");
3802 na->nm_krings_delete(na);
3804 netmap_pipe_dealloc(na);
3806 netmap_mem_put(na->nm_mem);
3807 bzero(na, sizeof(*na));
3813 /* nm_krings_create callback for all hardware native adapters */
3815 netmap_hw_krings_create(struct netmap_adapter *na)
3817 int ret = netmap_krings_create(na, 0);
3819 /* initialize the mbq for the sw rx ring */
3820 u_int lim = netmap_real_rings(na, NR_RX), i;
3821 for (i = na->num_rx_rings; i < lim; i++) {
3822 mbq_safe_init(&NMR(na, NR_RX)[i]->rx_queue);
3824 nm_prdis("initialized sw rx queue %d", na->num_rx_rings);
3832 * Called on module unload by the netmap-enabled drivers
3835 netmap_detach(struct ifnet *ifp)
3837 struct netmap_adapter *na = NA(ifp);
3843 netmap_set_all_rings(na, NM_KR_LOCKED);
3845 * if the netmap adapter is not native, somebody
3846 * changed it, so we can not release it here.
3847 * The NAF_ZOMBIE flag will notify the new owner that
3848 * the driver is gone.
3850 if (!(na->na_flags & NAF_NATIVE) || !netmap_adapter_put(na)) {
3851 na->na_flags |= NAF_ZOMBIE;
3853 /* give active users a chance to notice that NAF_ZOMBIE has been
3854 * turned on, so that they can stop and return an error to userspace.
3855 * Note that this becomes a NOP if there are no active users and,
3856 * therefore, the put() above has deleted the na, since now NA(ifp) is
3859 netmap_enable_all_rings(ifp);
3865 * Intercept packets from the network stack and pass them
3866 * to netmap as incoming packets on the 'software' ring.
3868 * We only store packets in a bounded mbq and then copy them
3869 * in the relevant rxsync routine.
3871 * We rely on the OS to make sure that the ifp and na do not go
3872 * away (typically the caller checks for IFF_DRV_RUNNING or the like).
3873 * In nm_register() or whenever there is a reinitialization,
3874 * we make sure to make the mode change visible here.
3877 netmap_transmit(struct ifnet *ifp, struct mbuf *m)
3879 struct netmap_adapter *na = NA(ifp);
3880 struct netmap_kring *kring, *tx_kring;
3881 u_int len = MBUF_LEN(m);
3882 u_int error = ENOBUFS;
3889 if (i >= na->num_host_rx_rings) {
3890 i = i % na->num_host_rx_rings;
3892 kring = NMR(na, NR_RX)[nma_get_nrings(na, NR_RX) + i];
3894 // XXX [Linux] we do not need this lock
3895 // if we follow the down/configure/up protocol -gl
3896 // mtx_lock(&na->core_lock);
3898 if (!nm_netmap_on(na)) {
3899 nm_prerr("%s not in netmap mode anymore", na->name);
3905 if (txr >= na->num_tx_rings) {
3906 txr %= na->num_tx_rings;
3908 tx_kring = NMR(na, NR_TX)[txr];
3910 if (tx_kring->nr_mode == NKR_NETMAP_OFF) {
3911 return MBUF_TRANSMIT(na, ifp, m);
3914 q = &kring->rx_queue;
3916 // XXX reconsider long packets if we handle fragments
3917 if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */
3918 nm_prerr("%s from_host, drop packet size %d > %d", na->name,
3919 len, NETMAP_BUF_SIZE(na));
3923 if (!netmap_generic_hwcsum) {
3924 if (nm_os_mbuf_has_csum_offld(m)) {
3925 nm_prlim(1, "%s drop mbuf that needs checksum offload", na->name);
3930 if (nm_os_mbuf_has_seg_offld(m)) {
3931 nm_prlim(1, "%s drop mbuf that needs generic segmentation offload", na->name);
3936 ETHER_BPF_MTAP(ifp, m);
3937 #endif /* __FreeBSD__ */
3939 /* protect against netmap_rxsync_from_host(), netmap_sw_to_nic()
3940 * and maybe other instances of netmap_transmit (the latter
3941 * not possible on Linux).
3942 * We enqueue the mbuf only if we are sure there is going to be
3943 * enough room in the host RX ring, otherwise we drop it.
3947 busy = kring->nr_hwtail - kring->nr_hwcur;
3949 busy += kring->nkr_num_slots;
3950 if (busy + mbq_len(q) >= kring->nkr_num_slots - 1) {
3951 nm_prlim(2, "%s full hwcur %d hwtail %d qlen %d", na->name,
3952 kring->nr_hwcur, kring->nr_hwtail, mbq_len(q));
3955 nm_prdis(2, "%s %d bufs in queue", na->name, mbq_len(q));
3956 /* notify outside the lock */
3965 /* unconditionally wake up listeners */
3966 kring->nm_notify(kring, 0);
3967 /* this is normally netmap_notify(), but for nics
3968 * connected to a bridge it is netmap_bwrap_intr_notify(),
3969 * that possibly forwards the frames through the switch
3977 * netmap_reset() is called by the driver routines when reinitializing
3978 * a ring. The driver is in charge of locking to protect the kring.
3979 * If native netmap mode is not set just return NULL.
3980 * If native netmap mode is set, in particular, we have to set nr_mode to
3983 struct netmap_slot *
3984 netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n,
3987 struct netmap_kring *kring;
3990 if (!nm_native_on(na)) {
3991 nm_prdis("interface not in native netmap mode");
3992 return NULL; /* nothing to reinitialize */
3995 /* XXX note- in the new scheme, we are not guaranteed to be
3996 * under lock (e.g. when called on a device reset).
3997 * In this case, we should set a flag and do not trust too
3998 * much the values. In practice: TODO
3999 * - set a RESET flag somewhere in the kring
4000 * - do the processing in a conservative way
4001 * - let the *sync() fixup at the end.
4004 if (n >= na->num_tx_rings)
4007 kring = na->tx_rings[n];
4009 if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
4010 kring->nr_mode = NKR_NETMAP_OFF;
4014 // XXX check whether we should use hwcur or rcur
4015 new_hwofs = kring->nr_hwcur - new_cur;
4017 if (n >= na->num_rx_rings)
4019 kring = na->rx_rings[n];
4021 if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
4022 kring->nr_mode = NKR_NETMAP_OFF;
4026 new_hwofs = kring->nr_hwtail - new_cur;
4028 lim = kring->nkr_num_slots - 1;
4029 if (new_hwofs > lim)
4030 new_hwofs -= lim + 1;
4032 /* Always set the new offset value and realign the ring. */
4033 if (netmap_debug & NM_DEBUG_ON)
4034 nm_prinf("%s %s%d hwofs %d -> %d, hwtail %d -> %d",
4036 tx == NR_TX ? "TX" : "RX", n,
4037 kring->nkr_hwofs, new_hwofs,
4039 tx == NR_TX ? lim : kring->nr_hwtail);
4040 kring->nkr_hwofs = new_hwofs;
4042 kring->nr_hwtail = kring->nr_hwcur + lim;
4043 if (kring->nr_hwtail > lim)
4044 kring->nr_hwtail -= lim + 1;
4048 * Wakeup on the individual and global selwait
4049 * We do the wakeup here, but the ring is not yet reconfigured.
4050 * However, we are under lock so there are no races.
4052 kring->nr_mode = NKR_NETMAP_ON;
4053 kring->nm_notify(kring, 0);
4054 return kring->ring->slot;
4059 * Dispatch rx/tx interrupts to the netmap rings.
4061 * "work_done" is non-null on the RX path, NULL for the TX path.
4062 * We rely on the OS to make sure that there is only one active
4063 * instance per queue, and that there is appropriate locking.
4065 * The 'notify' routine depends on what the ring is attached to.
4066 * - for a netmap file descriptor, do a selwakeup on the individual
4067 * waitqueue, plus one on the global one if needed
4068 * (see netmap_notify)
4069 * - for a nic connected to a switch, call the proper forwarding routine
4070 * (see netmap_bwrap_intr_notify)
4073 netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done)
4075 struct netmap_kring *kring;
4076 enum txrx t = (work_done ? NR_RX : NR_TX);
4078 q &= NETMAP_RING_MASK;
4080 if (netmap_debug & (NM_DEBUG_RXINTR|NM_DEBUG_TXINTR)) {
4081 nm_prlim(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
4084 if (q >= nma_get_nrings(na, t))
4085 return NM_IRQ_PASS; // not a physical queue
4087 kring = NMR(na, t)[q];
4089 if (kring->nr_mode == NKR_NETMAP_OFF) {
4094 kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ?
4095 *work_done = 1; /* do not fire napi again */
4098 return kring->nm_notify(kring, 0);
4103 * Default functions to handle rx/tx interrupts from a physical device.
4104 * "work_done" is non-null on the RX path, NULL for the TX path.
4106 * If the card is not in netmap mode, simply return NM_IRQ_PASS,
4107 * so that the caller proceeds with regular processing.
4108 * Otherwise call netmap_common_irq().
4110 * If the card is connected to a netmap file descriptor,
4111 * do a selwakeup on the individual queue, plus one on the global one
4112 * if needed (multiqueue card _and_ there are multiqueue listeners),
4113 * and return NM_IRQ_COMPLETED.
4115 * Finally, if called on rx from an interface connected to a switch,
4116 * calls the proper forwarding routine.
4119 netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done)
4121 struct netmap_adapter *na = NA(ifp);
4124 * XXX emulated netmap mode sets NAF_SKIP_INTR so
4125 * we still use the regular driver even though the previous
4126 * check fails. It is unclear whether we should use
4127 * nm_native_on() here.
4129 if (!nm_netmap_on(na))
4132 if (na->na_flags & NAF_SKIP_INTR) {
4133 nm_prdis("use regular interrupt");
4137 return netmap_common_irq(na, q, work_done);
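/*
 * Hypothetical driver interrupt handler using the hook above (a sketch;
 * 'rxr->me' stands for the driver's own queue index):
 *
 *	if (netmap_rx_irq(ifp, rxr->me, &work_done) != NM_IRQ_PASS)
 *		return;		// netmap consumed the interrupt
 *	// ... regular mbuf RX processing ...
 */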
4140 /* set/clear native flags and if_transmit/netdev_ops */
4142 nm_set_native_flags(struct netmap_adapter *na)
4144 struct ifnet *ifp = na->ifp;
4146 /* We do the setup for intercepting packets only if we are the
4147 * first user of this adapter. */
4148 if (na->active_fds > 0) {
4152 na->na_flags |= NAF_NETMAP_ON;
4154 nm_update_hostrings_mode(na);
4158 nm_clear_native_flags(struct netmap_adapter *na)
4160 struct ifnet *ifp = na->ifp;
4162 /* We undo the setup for intercepting packets only if we are the
4163 * last user of this adapter. */
4164 if (na->active_fds > 0) {
4168 nm_update_hostrings_mode(na);
4171 na->na_flags &= ~NAF_NETMAP_ON;
4175 netmap_krings_mode_commit(struct netmap_adapter *na, int onoff)
4182 for (i = 0; i < netmap_real_rings(na, t); i++) {
4183 struct netmap_kring *kring = NMR(na, t)[i];
4185 if (onoff && nm_kring_pending_on(kring))
4186 kring->nr_mode = NKR_NETMAP_ON;
4187 else if (!onoff && nm_kring_pending_off(kring))
4188 kring->nr_mode = NKR_NETMAP_OFF;
4194 * Module loader and unloader
4196 * netmap_init() creates the /dev/netmap device and initializes
4197 * all global variables. Returns 0 on success, errno on failure
4198 * (though in practice failure is not expected)
4200 * netmap_fini() destroys everything.
4203 static struct cdev *netmap_dev; /* /dev/netmap character device. */
4204 extern struct cdevsw netmap_cdevsw;
4211 destroy_dev(netmap_dev);
4212 /* we assume that there are no longer netmap users */
4214 netmap_uninit_bridges();
4217 nm_prinf("netmap: unloaded module.");
4228 error = netmap_mem_init();
4232 * MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls
4233 * when the module is compiled in.
4234 * XXX could use make_dev_credv() to get error number
4236 netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD,
4237 &netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600,
4242 error = netmap_init_bridges();
4247 nm_os_vi_init_index();
4250 error = nm_os_ifnet_init();
4254 nm_prinf("netmap: loaded module");
4258 return (EINVAL); /* may be incorrect */