.\" Copyright (c) 2000-2001 John H. Baldwin <jhb@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR
.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
.\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
.\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
.\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
.\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.Nm atomic_readandclear ,
.Fn atomic_add_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Fn atomic_clear_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Fo atomic_cmpset_[acq_|rel_]<type>
.Fa "volatile <type> *dst"
.Fa "<type> old"
.Fa "<type> new"
.Fc
.Fo atomic_fcmpset_[acq_|rel_]<type>
.Fa "volatile <type> *dst"
.Fa "<type> *old"
.Fa "<type> new"
.Fc
.Fn atomic_fetchadd_<type> "volatile <type> *p" "<type> v"
.Fn atomic_load_acq_<type> "volatile <type> *p"
.Fn atomic_readandclear_<type> "volatile <type> *p"
.Fn atomic_set_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Fn atomic_subtract_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Fn atomic_store_rel_<type> "volatile <type> *p" "<type> v"
.Fn atomic_swap_<type> "volatile <type> *p" "<type> v"
.Fn atomic_testandclear_<type> "volatile <type> *p" "u_int v"
.Fn atomic_testandset_<type> "volatile <type> *p" "u_int v"
All of these operations are performed atomically across multiple
threads and in the presence of interrupts, meaning that they are
performed in an indivisible manner from the perspective of concurrently
running threads and interrupt handlers.
.Pp
When atomic operations are performed on cache-coherent memory, all
operations on the same location are totally ordered.
.Pp
When an atomic load is performed on a location in cache-coherent memory,
it reads the entire value that was defined by the last atomic store to
each byte of the location.
An atomic load will never return a value out of thin air.
When an atomic store is performed on a location, no other thread or
interrupt handler will observe a torn write,
or partial modification of the location.
.Pp
On all architectures supported by
.Fx ,
ordinary loads and stores of naturally aligned integer types
are atomic, as executed by the processor.
Atomic operations can be used to implement reference counts or as
building blocks for synchronization primitives such as mutexes.
.Ss Types
Each atomic operation operates on a specific
.Fa type .
The type to use is indicated in the function name.
The available types that can be used are:
.Pp
.Bl -tag -offset indent -width short -compact
.It Li int
unsigned integer
.It Li long
unsigned long integer
.It Li ptr
unsigned integer the size of a pointer
.It Li 32
unsigned 32-bit integer
.It Li 64
unsigned 64-bit integer
.El
.Pp
For example, the function to atomically add two integers is called
.Fn atomic_add_int .
.Pp
Certain architectures also provide operations for types smaller than
.Dq Li int :
.Pp
.Bl -tag -offset indent -width short -compact
.It Li char
unsigned character
.It Li short
unsigned short integer
.It Li 8
unsigned 8-bit integer
.It Li 16
unsigned 16-bit integer
.El
.Pp
These must not be used in MI code because the instructions to implement them
efficiently might not be available.
.Ss Acquire and Release Operations
By default, a thread's accesses to different memory locations might not be
performed in
.Em program order ,
that is, the order in which the accesses appear in the source code.
To optimize the program's execution, both the compiler and processor might
reorder the thread's accesses.
However, both ensure that their reordering of the accesses is not visible to
the thread itself.
Otherwise, the traditional memory model that is expected by single-threaded
programs would be violated.
Nonetheless, other threads in a multithreaded program, such as the
.Fx
kernel, might observe the reordering.
Moreover, in some cases, such as the implementation of synchronization between
threads, arbitrary reordering might result in the incorrect execution of the
program.
To constrain the reordering that both the compiler and processor might perform
on a thread's accesses, the thread should use atomic operations with
.Em acquire
and
.Em release
semantics.
.Pp
Most of the atomic operations on memory have three variants.
The first variant performs the operation without imposing any ordering
constraints on memory accesses to other locations.
The second variant has acquire semantics, and the third variant has release
semantics.
In effect, operations with acquire and release semantics establish one-way
barriers to reordering.
.Pp
When an atomic operation has acquire semantics, the effects of the operation
must have completed before any subsequent load or store (by program order) is
performed.
Conversely, acquire semantics do not require that prior loads or stores have
completed before the atomic operation is performed.
To denote acquire semantics, the suffix
.Dq Li _acq
is inserted into the function name immediately prior to the
.Dq Li _ Ns Aq Fa type
suffix.
For example, to subtract two integers ensuring that subsequent loads and
stores happen after the subtraction is performed, use
.Fn atomic_subtract_acq_int .
.Pp
When an atomic operation has release semantics, the effects of all prior
loads or stores (by program order) must have completed before the operation
is performed.
Conversely, release semantics do not require that the effects of the
atomic operation must have completed before any subsequent load or store is
performed.
To denote release semantics, the suffix
.Dq Li _rel
is inserted into the function name immediately prior to the
.Dq Li _ Ns Aq Fa type
suffix.
For example, to add two long integers ensuring that all prior loads and
stores happen before the addition, use
.Fn atomic_add_rel_long .
.Pp
The one-way barriers provided by acquire and release operations allow the
implementations of common synchronization primitives to express their
ordering requirements without also imposing unnecessary ordering.
For example, for a critical section guarded by a mutex, an acquire operation
when the mutex is locked and a release operation when the mutex is unlocked
will prevent any loads or stores from moving outside of the critical
section.
However, they will not prevent the compiler or processor from moving loads
or stores into the critical section, which does not violate the semantics of
a mutex.
.Ss Multiple Processors
In multiprocessor systems, the atomicity of the atomic operations on memory
depends on support for cache coherence in the underlying architecture.
In general, cache coherence on the default memory type,
.Dv VM_MEMATTR_DEFAULT ,
is guaranteed by all architectures that are supported by
.Fx .
However, on some architectures, cache coherence might not be enabled on all
memory types.
To determine if cache coherence is enabled for a non-default memory type,
consult the architecture's documentation.
.Ss Semantics
This section describes the semantics of each operation using a C-like notation.
.Bl -hang
.It Fn atomic_add p v
.Bd -literal -compact
*p += v;
.Ed
.It Fn atomic_clear p v
.Bd -literal -compact
*p &= ~v;
.Ed
.It Fn atomic_cmpset dst old new
.Bd -literal -compact
if (*dst == old) {
	*dst = new;
	return (1);
} else
	return (0);
.Ed
.Pp
Some architectures do not implement the
.Fn atomic_cmpset
functions for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16 .
.It Fn atomic_fcmpset dst *old new
On architectures implementing the
.Em Compare And Swap
operation in hardware, the functionality can be described as
.Bd -literal -offset indent -compact
if (*dst == *old) {
	*dst = new;
	return (1);
} else {
	*old = *dst;
	return (0);
}
.Ed
On architectures which provide the
.Em Load Linked/Store Conditional
primitive, the write to
.Fa dst
might also fail for several reasons, the most important of which
is a parallel write to the
.Fa dst
cache line by another CPU.
In this case, the
.Fn atomic_fcmpset
function also returns
.Dv false .
.Pp
Some architectures do not implement the
.Fn atomic_fcmpset
functions for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16 .
.It Fn atomic_fetchadd p v
.Bd -literal -compact
tmp = *p;
*p += v;
return (tmp);
.Ed
.Pp
The
.Fn atomic_fetchadd
functions are only implemented for the types
.Dq Li int ,
.Dq Li long
and
.Dq Li 32
and do not have any variants with memory barriers at this time.
.It Fn atomic_load p
.Bd -literal -compact
return (*p);
.Ed
.Pp
The
.Fn atomic_load
functions are only provided with acquire memory barriers.
.It Fn atomic_readandclear p
.Bd -literal -compact
tmp = *p;
*p = 0;
return (tmp);
.Ed
.Pp
The
.Fn atomic_readandclear
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16
and do not have any variants with memory barriers at this time.
.It Fn atomic_set p v
.Bd -literal -compact
*p |= v;
.Ed
.It Fn atomic_subtract p v
.Bd -literal -compact
*p -= v;
.Ed
.It Fn atomic_store p v
.Bd -literal -compact
*p = v;
.Ed
.Pp
The
.Fn atomic_store
functions are only provided with release memory barriers.
.It Fn atomic_swap p v
.Bd -literal -compact
tmp = *p;
*p = v;
return (tmp);
.Ed
.Pp
The
.Fn atomic_swap
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16
and do not have any variants with memory barriers at this time.
.It Fn atomic_testandclear p v
.Bd -literal -compact
bit = 1 << (v % (sizeof(*p) * NBBY));
tmp = (*p & bit) != 0;
*p &= ~bit;
return (tmp);
.Ed
.It Fn atomic_testandset p v
.Bd -literal -compact
bit = 1 << (v % (sizeof(*p) * NBBY));
tmp = (*p & bit) != 0;
*p |= bit;
return (tmp);
.Ed
.Pp
The
.Fn atomic_testandset
and
.Fn atomic_testandclear
functions are only implemented for the types
.Dq Li int ,
.Dq Li long
and
.Dq Li 32
and do not have any variants with memory barriers at this time.
.El
.Pp
The type
.Dq Li 64
is currently not implemented for any of the atomic operations on some
architectures.
.Sh RETURN VALUES
The
.Fn atomic_cmpset
function returns the result of the compare operation.
It returns
.Dv true
if the operation succeeded.
Otherwise it returns
.Dv false .
The
.Fn atomic_fcmpset
function returns
.Dv true
if the operation succeeded.
The
.Fn atomic_fetchadd ,
.Fn atomic_load ,
.Fn atomic_readandclear ,
and
.Fn atomic_swap
functions return the value at the specified address.
The
.Fn atomic_testandset
and
.Fn atomic_testandclear
functions return the result of the test operation.
.Sh EXAMPLES
This example uses the
.Fn atomic_cmpset_acq_ptr
and
.Fn atomic_set_ptr
functions to obtain a sleep mutex and handle recursion.
.Bd -literal
/* Try to obtain mtx_lock once. */
#define _obtain_lock(mp, tid)						\\
	atomic_cmpset_acq_ptr(&(mp)->mtx_lock, MTX_UNOWNED, (tid))

/* Get a sleep lock, deal with recursion inline. */
#define _get_sleep_lock(mp, tid, opts, file, line) do {			\\
	uintptr_t _tid = (uintptr_t)(tid);				\\
									\\
	if (!_obtain_lock(mp, _tid)) {					\\
		if (((mp)->mtx_lock & MTX_FLAGMASK) != _tid)		\\
			_mtx_lock_sleep((mp), _tid, (opts), (file), (line));\\
		else {							\\
			atomic_set_ptr(&(mp)->mtx_lock, MTX_RECURSE);	\\
			(mp)->mtx_recurse++;				\\
		}							\\
	}								\\
} while (0)
.Ed
.Sh HISTORY
The
.Fn atomic_add ,
.Fn atomic_clear ,
.Fn atomic_set ,
and
.Fn atomic_subtract
operations were first introduced in
.Fx 3.0 .
This first set only supported the types
.Dq Li char ,
.Dq Li short ,
.Dq Li int ,
and
.Dq Li long .
The
.Fn atomic_cmpset ,
.Fn atomic_load_acq ,
.Fn atomic_readandclear ,
and
.Fn atomic_store_rel
operations were added in
.Fx 5.0 .
The types
.Dq Li 8 ,
.Dq Li 16 ,
.Dq Li 32 ,
.Dq Li 64 ,
and
.Dq Li ptr
and all of the acquire and release variants
were added in
.Fx 5.0 .
The
.Fn atomic_fetchadd
operations were added in
.Fx 6.0 .
The
.Fn atomic_swap
and
.Fn atomic_testandset
operations were added in
.Fx 10.0 .
The
.Fn atomic_testandclear
operation was added in
.Fx 11.0 .