.\" Copyright (c) 2000-2001 John H. Baldwin <jhb@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR
.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
.\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
.\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
.\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
.\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.Nm atomic_readandclear ,
.Ft void
.Fn atomic_add_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Ft void
.Fn atomic_clear_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Ft int
.Fo atomic_cmpset_[acq_|rel_]<type>
.Fa "volatile <type> *dst"
.Fa "<type> old"
.Fa "<type> new"
.Fc
.Ft <type>
.Fn atomic_fetchadd_<type> "volatile <type> *p" "<type> v"
.Ft <type>
.Fn atomic_load_acq_<type> "volatile <type> *p"
.Ft <type>
.Fn atomic_readandclear_<type> "volatile <type> *p"
.Ft void
.Fn atomic_set_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Ft void
.Fn atomic_subtract_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
.Ft void
.Fn atomic_store_rel_<type> "volatile <type> *p" "<type> v"
.Ft <type>
.Fn atomic_swap_<type> "volatile <type> *p" "<type> v"
.Ft int
.Fn atomic_testandclear_<type> "volatile <type> *p" "u_int v"
.Ft int
.Fn atomic_testandset_<type> "volatile <type> *p" "u_int v"
.Sh DESCRIPTION
Each of the atomic operations is guaranteed to be atomic across multiple
threads and in the presence of interrupts.
They can be used to implement reference counts or as building blocks for more
advanced synchronization primitives such as mutexes.
Each atomic operation operates on a specific
.Vt type .
The type to use is indicated in the function name.
The available types that can be used are:
.Bl -tag -offset indent -width short -compact
.It Dq Li int
unsigned integer
.It Dq Li long
unsigned long integer
.It Dq Li ptr
unsigned integer the size of a pointer
.It Dq Li 32
unsigned 32-bit integer
.It Dq Li 64
unsigned 64-bit integer
.El
For example, the function to atomically add two integers is called
.Fn atomic_add_int .
.Pp
Certain architectures also provide operations for types smaller than
.Dq Li int .
.Bl -tag -offset indent -width short -compact
.It Dq Li char
unsigned character
.It Dq Li short
unsigned short integer
.It Dq Li 8
unsigned 8-bit integer
.It Dq Li 16
unsigned 16-bit integer
.El
These types must not be used in machine-independent (MI) code because the
instructions to implement them efficiently might not be available.
.Ss Acquire and Release Operations
By default, a thread's accesses to different memory locations might not be
performed in
.Em program order ,
that is, the order in which the accesses appear in the source code.
To optimize the program's execution, both the compiler and processor might
reorder the thread's accesses.
However, both ensure that their reordering of the accesses is not visible to
the thread.
Otherwise, the traditional memory model that is expected by single-threaded
programs would be violated.
Nonetheless, other threads in a multithreaded program, such as the
.Fx
kernel, might observe the reordering.
Moreover, in some cases, such as the implementation of synchronization between
threads, arbitrary reordering might result in the incorrect execution of the
program.
To constrain the reordering that both the compiler and processor might perform
on a thread's accesses, the thread should use atomic operations with
.Em acquire
and
.Em release
semantics.
Most of the atomic operations on memory have three variants.
The first variant performs the operation without imposing any ordering
constraints on memory accesses to other locations.
The second variant has acquire semantics, and the third variant has release
semantics.
In effect, operations with acquire and release semantics establish one-way
barriers to reordering.
When an atomic operation has acquire semantics, the effects of the operation
must have completed before any subsequent load or store (by program order) is
performed.
Conversely, acquire semantics do not require that prior loads or stores have
completed before the atomic operation is performed.
To denote acquire semantics, the suffix
.Dq Li _acq
is inserted into the function name immediately prior to the
.Dq Li _ Ns Aq Fa type
suffix.
For example, to subtract two integers ensuring that subsequent loads and
stores happen after the subtraction is performed, use
.Fn atomic_subtract_acq_int .
When an atomic operation has release semantics, the effects of all prior
loads or stores (by program order) must have completed before the operation
is performed.
Conversely, release semantics do not require that the effects of the
atomic operation must have completed before any subsequent load or store is
performed.
To denote release semantics, the suffix
.Dq Li _rel
is inserted into the function name immediately prior to the
.Dq Li _ Ns Aq Fa type
suffix.
For example, to add two long integers ensuring that all prior loads and
stores happen before the addition, use
.Fn atomic_add_rel_long .
The one-way barriers provided by acquire and release operations allow the
implementations of common synchronization primitives to express their
ordering requirements without also imposing unnecessary ordering.
For example, for a critical section guarded by a mutex, an acquire operation
when the mutex is locked and a release operation when the mutex is unlocked
will prevent any loads or stores from moving outside of the critical
section.
However, they will not prevent the compiler or processor from moving loads
or stores into the critical section, which does not violate the semantics of
a mutex.
.Ss Multiple Processors
In multiprocessor systems, the atomicity of the atomic operations on memory
depends on support for cache coherence in the underlying architecture.
In general, cache coherence on the default memory type,
.Dv VM_MEMATTR_DEFAULT ,
is guaranteed by all architectures that are supported by
.Fx .
For example, cache coherence is guaranteed on write-back memory.
However, on some architectures, cache coherence might not be enabled on all
memory types.
To determine if cache coherence is enabled for a non-default memory type,
consult the architecture's documentation.
For example, on some architectures, coherency is only guaranteed for pages
that are configured to use a caching policy of either uncached or write back.
.Ss Semantics
This section describes the semantics of each operation using a C-like notation.
.It Fn atomic_add p v
.Bd -literal -compact
*p += v;
.Ed
.It Fn atomic_clear p v
.Bd -literal -compact
*p &= ~v;
.Ed
.It Fn atomic_cmpset dst old new
.Bd -literal -compact
if (*dst == old) {
	*dst = new;
	return (1);
} else
	return (0);
.Ed
.Pp
The
.Fn atomic_cmpset
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16 .
.It Fn atomic_fetchadd p v
.Bd -literal -compact
tmp = *p;
*p += v;
return (tmp);
.Ed
.Pp
The
.Fn atomic_fetchadd
functions are only implemented for the types
.Dq Li int ,
.Dq Li long
and
.Dq Li 32
and do not have any variants with memory barriers at this time.
.It Fn atomic_load p
.Bd -literal -compact
return (*p);
.Ed
.Pp
The
.Fn atomic_load
functions are only provided with acquire memory barriers.
.It Fn atomic_readandclear p
.Bd -literal -compact
tmp = *p;
*p = 0;
return (tmp);
.Ed
.Pp
The
.Fn atomic_readandclear
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8
and
.Dq Li 16
and do not have any variants with memory barriers at this time.
.It Fn atomic_set p v
.Bd -literal -compact
*p |= v;
.Ed
.It Fn atomic_subtract p v
.Bd -literal -compact
*p -= v;
.Ed
.It Fn atomic_store p v
.Bd -literal -compact
*p = v;
.Ed
.Pp
The
.Fn atomic_store
functions are only provided with release memory barriers.
.It Fn atomic_swap p v
.Bd -literal -compact
tmp = *p;
*p = v;
return (tmp);
.Ed
.Pp
The
.Fn atomic_swap
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8
and
.Dq Li 16
and do not have any variants with memory barriers at this time.
.It Fn atomic_testandclear p v
.Bd -literal -compact
bit = 1 << (v % (sizeof(*p) * NBBY));
tmp = (*p & bit) != 0;
*p &= ~bit;
return (tmp);
.Ed
.It Fn atomic_testandset p v
.Bd -literal -compact
bit = 1 << (v % (sizeof(*p) * NBBY));
tmp = (*p & bit) != 0;
*p |= bit;
return (tmp);
.Ed
.Pp
The
.Fn atomic_testandset
and
.Fn atomic_testandclear
functions are only implemented for the types
.Dq Li int ,
.Dq Li long
and
.Dq Li 32
and do not have any variants with memory barriers at this time.
The type
.Dq Li 64
is currently not implemented for any of the atomic operations on some
supported architectures.
.Sh RETURN VALUES
The
.Fn atomic_cmpset
function returns the result of the compare operation.
The
.Fn atomic_fetchadd ,
.Fn atomic_load ,
.Fn atomic_readandclear ,
and
.Fn atomic_swap
functions return the value at the specified address.
The
.Fn atomic_testandset
and
.Fn atomic_testandclear
functions return the result of the test operation.
.Sh EXAMPLES
This example uses the
.Fn atomic_cmpset_acq_ptr
and
.Fn atomic_set_ptr
functions to obtain a sleep mutex and handle recursion.
Since the
.Dv mtx_lock
member of a
.Vt struct mtx
is a pointer, the
.Dq Li ptr
type is used.
.Bd -literal
/* Try to obtain mtx_lock once. */
#define _obtain_lock(mp, tid)						\\
	atomic_cmpset_acq_ptr(&(mp)->mtx_lock, MTX_UNOWNED, (tid))

/* Get a sleep lock, deal with recursion inline. */
#define _get_sleep_lock(mp, tid, opts, file, line) do {			\\
	uintptr_t _tid = (uintptr_t)(tid);				\\
									\\
	if (!_obtain_lock(mp, _tid)) {					\\
		if (((mp)->mtx_lock & MTX_FLAGMASK) != _tid)		\\
			_mtx_lock_sleep((mp), _tid, (opts), (file), (line));\\
		else {							\\
			atomic_set_ptr(&(mp)->mtx_lock, MTX_RECURSE);	\\
			(mp)->mtx_recurse++;				\\
		}							\\
	}								\\
} while (0)
.Ed
.Sh HISTORY
The
.Fn atomic_add ,
.Fn atomic_clear ,
.Fn atomic_set ,
and
.Fn atomic_subtract
operations were first introduced in
.Fx 3.0 .
This first set only supported the types
.Dq Li char ,
.Dq Li short ,
.Dq Li int ,
and
.Dq Li long .
The
.Fn atomic_cmpset ,
.Fn atomic_load ,
.Fn atomic_readandclear ,
and
.Fn atomic_store
operations were added in
.Fx 5.0 .
The types
.Dq Li 8 ,
.Dq Li 16 ,
.Dq Li 32 ,
.Dq Li 64 ,
and
.Dq Li ptr ,
and all of the acquire and release variants
were added in
.Fx 5.0
as well.
The
.Fn atomic_fetchadd
operations were added in
.Fx 6.0 .
The
.Fn atomic_swap
and
.Fn atomic_testandset
operations were added in
.Fx 10.0 .
The
.Fn atomic_testandclear
operation was added in