4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
22 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
23 * Copyright (c) 2012, Joyent, Inc. All rights reserved.
24 * Copyright (c) 2011, 2017 by Delphix. All rights reserved.
25 * Copyright (c) 2014 by Saso Kiselkov. All rights reserved.
26 * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
30 * DVA-based Adjustable Replacement Cache
32 * While much of the theory of operation used here is
33 * based on the self-tuning, low overhead replacement cache
34 * presented by Megiddo and Modha at FAST 2003, there are some
35 * significant differences:
37 * 1. The Megiddo and Modha model assumes any page is evictable.
38 * Pages in its cache cannot be "locked" into memory. This makes
39 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
41 * about. Our cache is not so simple. At any given moment, some
42 * subset of the blocks in the cache are un-evictable because we
43 * have handed out a reference to them. Blocks are only evictable
44 * when there are no external references active. This makes
45 * eviction far more problematic: we choose to evict the evictable
46 * blocks that are the "lowest" in the list.
48 * There are times when it is not possible to evict the requested
49 * space. In these circumstances we are unable to adjust the cache
 * size. To prevent the cache from growing unbounded at these times, we
51 * implement a "cache throttle" that slows the flow of new data
52 * into the cache until we can make space available.
54 * 2. The Megiddo and Modha model assumes a fixed cache size.
55 * Pages are evicted when the cache is full and there is a cache
56 * miss. Our model has a variable sized cache. It grows with
57 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is tight.
61 * 3. The Megiddo and Modha model assumes a fixed page size. All
62 * elements of the cache are therefore exactly the same size. So
 * when adjusting the cache size following a cache miss, it's simply
64 * a matter of choosing a single page to evict. In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
66 * 128K bytes). We therefore choose a set of blocks to evict to make
67 * space for a cache miss that approximates as closely as possible
68 * the space used by the new block.
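 *    For example, a miss for a 128K block might be satisfied by evicting
 *    one 128K buffer, or several smaller buffers whose sizes sum to
 *    roughly 128K.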
70 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
71 * by N. Megiddo & D. Modha, FAST 2003
77 * A new reference to a cache buffer can be obtained in two
78 * ways: 1) via a hash table lookup using the DVA as a key,
79 * or 2) via one of the ARC lists. The arc_read() interface
80 * uses method 1, while the internal ARC algorithms for
81 * adjusting the cache use method 2. We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the ARC list locks.
85 * Buffers do not have their own mutexes, rather they rely on the
86 * hash table mutexes for the bulk of their protection (i.e. most
87 * fields in the arc_buf_hdr_t are protected by these mutexes).
89 * buf_hash_find() returns the appropriate mutex (held) when it
90 * locates the requested buffer in the hash table. It returns
91 * NULL for the mutex if the buffer was not in the table.
93 * buf_hash_remove() expects the appropriate hash mutex to be
94 * already held before it is invoked.
96 * Each ARC state also has a mutex which is used to protect the
97 * buffer list associated with the state. When attempting to
 * obtain a hash table lock while holding an ARC list lock, you
 * must use mutex_tryenter() to avoid deadlock. Also note that
100 * the active state mutex must be held before the ghost state mutex.
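 *
 * An illustrative sketch of that rule (pseudo-code, not code taken from
 * this file):
 *
 *     (an ARC list lock is already held)
 *     if (!mutex_tryenter(hash_lock))
 *             skip this header;    (never block here, or we may deadlock)
 *     else {
 *             ... examine or evict the header ...
 *             mutex_exit(hash_lock);
 *     }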
 * It is also possible to register a callback which is run when the
103 * arc_meta_limit is reached and no buffers can be safely evicted. In
104 * this case the arc user should drop a reference on some arc buffers so
105 * they can be reclaimed and the arc_meta_limit honored. For example,
 * when using the ZPL each dentry holds a reference on a znode. These
 * dentries must be pruned before the arc buffer holding the znode can
 * be safely evicted.
110 * Note that the majority of the performance stats are manipulated
111 * with atomic operations.
113 * The L2ARC uses the l2ad_mtx on each vdev for the following:
115 * - L2ARC buflist creation
116 * - L2ARC buflist eviction
117 * - L2ARC write completion, which walks L2ARC buflists
118 * - ARC header destruction, as it removes from L2ARC buflists
119 * - ARC header release, as it removes from L2ARC buflists
125 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
126 * This structure can point either to a block that is still in the cache or to
127 * one that is only accessible in an L2 ARC device, or it can provide
128 * information about a block that was recently evicted. If a block is
129 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
130 * information to retrieve it from the L2ARC device. This information is
131 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot have its data accessed directly.
134 * Blocks that are actively being referenced or have not been evicted
135 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
136 * the arc_buf_hdr_t that will point to the data block in memory. A block can
137 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
138 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
139 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
141 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the
142 * ability to store the physical data (b_pabd) associated with the DVA of the
143 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
144 * it will match its on-disk compression characteristics. This behavior can be
145 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
146 * compressed ARC functionality is disabled, the b_pabd will point to an
147 * uncompressed version of the on-disk data.
149 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each
150 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
151 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
152 * consumer. The ARC will provide references to this data and will keep it
153 * cached until it is no longer in use. The ARC caches only the L1ARC's physical
154 * data block and will evict any arc_buf_t that is no longer referenced. The
155 * amount of memory consumed by the arc_buf_ts' data buffers can be seen via the
156 * "overhead_size" kstat.
158 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or
159 * compressed form. The typical case is that consumers will want uncompressed
160 * data, and when that happens a new data buffer is allocated where the data is
 * decompressed for them to use. Currently the only consumer that wants
162 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it
163 * exists on disk. When this happens, the arc_buf_t's data buffer is shared
164 * with the arc_buf_hdr_t.
166 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The
167 * first one is owned by a compressed send consumer (and therefore references
168 * the same compressed data buffer as the arc_buf_hdr_t) and the second could be
 * used by any other consumer (and has its own uncompressed copy of the data
 * buffer):
184 * | b_buf +------------>+-----------+ arc_buf_t
185 * | b_pabd +-+ |b_next +---->+-----------+
186 * +-----------+ | |-----------| |b_next +-->NULL
187 * | |b_comp = T | +-----------+
188 * | |b_data +-+ |b_comp = F |
189 * | +-----------+ | |b_data +-+
190 * +->+------+ | +-----------+ |
192 * data | |<--------------+ | uncompressed
193 * +------+ compressed, | data
194 * shared +-->+------+
199 * When a consumer reads a block, the ARC must first look to see if the
200 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
201 * arc_buf_t and either copies uncompressed data into a new data buffer from an
202 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
203 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
204 * hdr is compressed and the desired compression characteristics of the
205 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the
206 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be
 * the last buffer in the hdr's b_buf list; however, a shared compressed buf can
208 * be anywhere in the hdr's list.
210 * The diagram below shows an example of an uncompressed ARC hdr that is
211 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is
212 * the last element in the buf list):
224 * | | arc_buf_t (shared)
225 * | b_buf +------------>+---------+ arc_buf_t
226 * | | |b_next +---->+---------+
227 * | b_pabd +-+ |---------| |b_next +-->NULL
228 * +-----------+ | | | +---------+
230 * | +---------+ | |b_data +-+
231 * +->+------+ | +---------+ |
233 * uncompressed | | | |
236 * | uncompressed | | |
239 * +---------------------------------+
241 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd
242 * since the physical block is about to be rewritten. The new data contents
243 * will be contained in the arc_buf_t. As the I/O pipeline performs the write,
244 * it may compress the data before writing it to disk. The ARC will be called
245 * with the transformed data and will bcopy the transformed on-disk block into
246 * a newly allocated b_pabd. Writes are always done into buffers which have
247 * either been loaned (and hence are new and don't have other readers) or
248 * buffers which have been released (and hence have their own hdr, if there
249 * were originally other readers of the buf's original hdr). This ensures that
250 * the ARC only needs to update a single buf and its hdr after a write occurs.
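 *
 * (Loaning and releasing are done through arc_loan_buf(), arc_return_buf()
 * and arc_release(), defined later in this file.)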
252 * When the L2ARC is in use, it will also take advantage of the b_pabd. The
253 * L2ARC will always write the contents of b_pabd to the L2ARC. This means
 * that when compressed ARC is enabled, the L2ARC blocks are identical
255 * to the on-disk block in the main data pool. This provides a significant
256 * advantage since the ARC can leverage the bp's checksum when reading from the
257 * L2ARC to determine if the contents are valid. However, if the compressed
258 * ARC is disabled, then the L2ARC's block must be transformed to look
259 * like the physical block in the main data pool before comparing the
260 * checksum and determining its validity.
265 #include <sys/spa_impl.h>
266 #include <sys/zio_compress.h>
267 #include <sys/zio_checksum.h>
268 #include <sys/zfs_context.h>
270 #include <sys/refcount.h>
271 #include <sys/vdev.h>
272 #include <sys/vdev_impl.h>
273 #include <sys/dsl_pool.h>
274 #include <sys/zio_checksum.h>
275 #include <sys/multilist.h>
278 #include <sys/vmsystm.h>
280 #include <sys/fs/swapnode.h>
282 #include <linux/mm_compat.h>
284 #include <sys/callb.h>
285 #include <sys/kstat.h>
286 #include <sys/dmu_tx.h>
287 #include <zfs_fletcher.h>
288 #include <sys/arc_impl.h>
289 #include <sys/trace_arc.h>
292 /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
293 boolean_t arc_watch = B_FALSE;
296 static kmutex_t arc_reclaim_lock;
297 static kcondvar_t arc_reclaim_thread_cv;
298 static boolean_t arc_reclaim_thread_exit;
299 static kcondvar_t arc_reclaim_waiters_cv;
302 * The number of headers to evict in arc_evict_state_impl() before
303 * dropping the sublist lock and evicting from another sublist. A lower
304 * value means we're more likely to evict the "correct" header (i.e. the
305 * oldest header in the arc state), but comes with higher overhead
306 * (i.e. more invocations of arc_evict_state_impl()).
308 int zfs_arc_evict_batch_limit = 10;
310 /* number of seconds before growing cache again */
311 static int arc_grow_retry = 5;
313 /* shift of arc_c for calculating overflow limit in arc_get_data_impl */
314 int zfs_arc_overflow_shift = 8;
316 /* shift of arc_c for calculating both min and max arc_p */
317 static int arc_p_min_shift = 4;
319 /* log2(fraction of arc to reclaim) */
320 static int arc_shrink_shift = 7;
322 /* percent of pagecache to reclaim arc to */
324 static uint_t zfs_arc_pc_percent = 0;
328 * log2(fraction of ARC which must be free to allow growing).
 * I.e., if there is less than arc_c >> arc_no_grow_shift free memory,
 * when reading a new block into the ARC, we will evict an equal-sized block
 * from the ARC.
333 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
334 * we will still not allow it to grow.
336 int arc_no_grow_shift = 5;
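
/*
 * For example, with the default arc_no_grow_shift of 5, the ARC is only
 * allowed to grow while at least arc_c/32 (roughly 3% of the target size)
 * of memory is free; below that, reading a new block into the ARC evicts
 * an equal-sized block instead.
 */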
340 * minimum lifespan of a prefetch block in clock ticks
341 * (initialized in arc_init())
343 static int arc_min_prefetch_lifespan;
346 * If this percent of memory is free, don't throttle.
348 int arc_lotsfree_percent = 10;
353 * The arc has filled available memory and has now warmed up.
355 static boolean_t arc_warm;
358 * log2 fraction of the zio arena to keep free.
360 int arc_zio_arena_free_shift = 2;
363 * These tunables are for performance analysis.
365 unsigned long zfs_arc_max = 0;
366 unsigned long zfs_arc_min = 0;
367 unsigned long zfs_arc_meta_limit = 0;
368 unsigned long zfs_arc_meta_min = 0;
369 unsigned long zfs_arc_dnode_limit = 0;
370 unsigned long zfs_arc_dnode_reduce_percent = 10;
371 int zfs_arc_grow_retry = 0;
372 int zfs_arc_shrink_shift = 0;
373 int zfs_arc_p_min_shift = 0;
374 int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */
376 int zfs_compressed_arc_enabled = B_TRUE;
379 * ARC will evict meta buffers that exceed arc_meta_limit. This
 * tunable makes arc_meta_limit adjustable for different workloads.
382 unsigned long zfs_arc_meta_limit_percent = 75;
 * Percentage of ARC meta buffers that can be consumed by dnodes.
387 unsigned long zfs_arc_dnode_limit_percent = 10;
390 * These tunables are Linux specific
392 unsigned long zfs_arc_sys_free = 0;
393 int zfs_arc_min_prefetch_lifespan = 0;
394 int zfs_arc_p_aggressive_disable = 1;
395 int zfs_arc_p_dampener_disable = 1;
396 int zfs_arc_meta_prune = 10000;
397 int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED;
398 int zfs_arc_meta_adjust_restarts = 4096;
399 int zfs_arc_lotsfree_percent = 10;
402 static arc_state_t ARC_anon;
403 static arc_state_t ARC_mru;
404 static arc_state_t ARC_mru_ghost;
405 static arc_state_t ARC_mfu;
406 static arc_state_t ARC_mfu_ghost;
407 static arc_state_t ARC_l2c_only;
409 typedef struct arc_stats {
410 kstat_named_t arcstat_hits;
411 kstat_named_t arcstat_misses;
412 kstat_named_t arcstat_demand_data_hits;
413 kstat_named_t arcstat_demand_data_misses;
414 kstat_named_t arcstat_demand_metadata_hits;
415 kstat_named_t arcstat_demand_metadata_misses;
416 kstat_named_t arcstat_prefetch_data_hits;
417 kstat_named_t arcstat_prefetch_data_misses;
418 kstat_named_t arcstat_prefetch_metadata_hits;
419 kstat_named_t arcstat_prefetch_metadata_misses;
420 kstat_named_t arcstat_mru_hits;
421 kstat_named_t arcstat_mru_ghost_hits;
422 kstat_named_t arcstat_mfu_hits;
423 kstat_named_t arcstat_mfu_ghost_hits;
424 kstat_named_t arcstat_deleted;
426 * Number of buffers that could not be evicted because the hash lock
427 * was held by another thread. The lock may not necessarily be held
428 * by something using the same buffer, since hash locks are shared
429 * by multiple buffers.
431 kstat_named_t arcstat_mutex_miss;
433 * Number of buffers skipped because they have I/O in progress, are
 * indirect prefetch buffers that have not lived long enough, or are
435 * not from the spa we're trying to evict from.
437 kstat_named_t arcstat_evict_skip;
439 * Number of times arc_evict_state() was unable to evict enough
440 * buffers to reach its target amount.
442 kstat_named_t arcstat_evict_not_enough;
443 kstat_named_t arcstat_evict_l2_cached;
444 kstat_named_t arcstat_evict_l2_eligible;
445 kstat_named_t arcstat_evict_l2_ineligible;
446 kstat_named_t arcstat_evict_l2_skip;
447 kstat_named_t arcstat_hash_elements;
448 kstat_named_t arcstat_hash_elements_max;
449 kstat_named_t arcstat_hash_collisions;
450 kstat_named_t arcstat_hash_chains;
451 kstat_named_t arcstat_hash_chain_max;
452 kstat_named_t arcstat_p;
453 kstat_named_t arcstat_c;
454 kstat_named_t arcstat_c_min;
455 kstat_named_t arcstat_c_max;
456 kstat_named_t arcstat_size;
458 * Number of compressed bytes stored in the arc_buf_hdr_t's b_pabd.
459 * Note that the compressed bytes may match the uncompressed bytes
460 * if the block is either not compressed or compressed arc is disabled.
462 kstat_named_t arcstat_compressed_size;
464 * Uncompressed size of the data stored in b_pabd. If compressed
 * arc is disabled then this value will be identical to the stat above.
468 kstat_named_t arcstat_uncompressed_size;
470 * Number of bytes stored in all the arc_buf_t's. This is classified
471 * as "overhead" since this data is typically short-lived and will
472 * be evicted from the arc when it becomes unreferenced unless the
473 * zfs_keep_uncompressed_metadata or zfs_keep_uncompressed_level
474 * values have been set (see comment in dbuf.c for more information).
476 kstat_named_t arcstat_overhead_size;
478 * Number of bytes consumed by internal ARC structures necessary
479 * for tracking purposes; these structures are not actually
480 * backed by ARC buffers. This includes arc_buf_hdr_t structures
481 * (allocated via arc_buf_hdr_t_full and arc_buf_hdr_t_l2only
 * caches), and arc_buf_t structures (allocated via the arc_buf_t cache).
485 kstat_named_t arcstat_hdr_size;
487 * Number of bytes consumed by ARC buffers of type equal to
488 * ARC_BUFC_DATA. This is generally consumed by buffers backing
489 * on disk user data (e.g. plain file contents).
491 kstat_named_t arcstat_data_size;
493 * Number of bytes consumed by ARC buffers of type equal to
494 * ARC_BUFC_METADATA. This is generally consumed by buffers
495 * backing on disk data that is used for internal ZFS
496 * structures (e.g. ZAP, dnode, indirect blocks, etc).
498 kstat_named_t arcstat_metadata_size;
500 * Number of bytes consumed by dmu_buf_impl_t objects.
502 kstat_named_t arcstat_dbuf_size;
504 * Number of bytes consumed by dnode_t objects.
506 kstat_named_t arcstat_dnode_size;
508 * Number of bytes consumed by bonus buffers.
510 kstat_named_t arcstat_bonus_size;
512 * Total number of bytes consumed by ARC buffers residing in the
513 * arc_anon state. This includes *all* buffers in the arc_anon
514 * state; e.g. data, metadata, evictable, and unevictable buffers
515 * are all included in this value.
517 kstat_named_t arcstat_anon_size;
519 * Number of bytes consumed by ARC buffers that meet the
520 * following criteria: backing buffers of type ARC_BUFC_DATA,
521 * residing in the arc_anon state, and are eligible for eviction
522 * (e.g. have no outstanding holds on the buffer).
524 kstat_named_t arcstat_anon_evictable_data;
526 * Number of bytes consumed by ARC buffers that meet the
527 * following criteria: backing buffers of type ARC_BUFC_METADATA,
528 * residing in the arc_anon state, and are eligible for eviction
529 * (e.g. have no outstanding holds on the buffer).
531 kstat_named_t arcstat_anon_evictable_metadata;
533 * Total number of bytes consumed by ARC buffers residing in the
534 * arc_mru state. This includes *all* buffers in the arc_mru
535 * state; e.g. data, metadata, evictable, and unevictable buffers
536 * are all included in this value.
538 kstat_named_t arcstat_mru_size;
540 * Number of bytes consumed by ARC buffers that meet the
541 * following criteria: backing buffers of type ARC_BUFC_DATA,
542 * residing in the arc_mru state, and are eligible for eviction
543 * (e.g. have no outstanding holds on the buffer).
545 kstat_named_t arcstat_mru_evictable_data;
547 * Number of bytes consumed by ARC buffers that meet the
548 * following criteria: backing buffers of type ARC_BUFC_METADATA,
549 * residing in the arc_mru state, and are eligible for eviction
550 * (e.g. have no outstanding holds on the buffer).
552 kstat_named_t arcstat_mru_evictable_metadata;
554 * Total number of bytes that *would have been* consumed by ARC
555 * buffers in the arc_mru_ghost state. The key thing to note
556 * here, is the fact that this size doesn't actually indicate
557 * RAM consumption. The ghost lists only consist of headers and
558 * don't actually have ARC buffers linked off of these headers.
559 * Thus, *if* the headers had associated ARC buffers, these
560 * buffers *would have* consumed this number of bytes.
562 kstat_named_t arcstat_mru_ghost_size;
564 * Number of bytes that *would have been* consumed by ARC
565 * buffers that are eligible for eviction, of type
566 * ARC_BUFC_DATA, and linked off the arc_mru_ghost state.
568 kstat_named_t arcstat_mru_ghost_evictable_data;
570 * Number of bytes that *would have been* consumed by ARC
571 * buffers that are eligible for eviction, of type
572 * ARC_BUFC_METADATA, and linked off the arc_mru_ghost state.
574 kstat_named_t arcstat_mru_ghost_evictable_metadata;
576 * Total number of bytes consumed by ARC buffers residing in the
577 * arc_mfu state. This includes *all* buffers in the arc_mfu
578 * state; e.g. data, metadata, evictable, and unevictable buffers
579 * are all included in this value.
581 kstat_named_t arcstat_mfu_size;
583 * Number of bytes consumed by ARC buffers that are eligible for
 * eviction, of type ARC_BUFC_DATA, and reside in the arc_mfu state.
587 kstat_named_t arcstat_mfu_evictable_data;
589 * Number of bytes consumed by ARC buffers that are eligible for
 * eviction, of type ARC_BUFC_METADATA, and reside in the arc_mfu state.
593 kstat_named_t arcstat_mfu_evictable_metadata;
595 * Total number of bytes that *would have been* consumed by ARC
596 * buffers in the arc_mfu_ghost state. See the comment above
597 * arcstat_mru_ghost_size for more details.
599 kstat_named_t arcstat_mfu_ghost_size;
601 * Number of bytes that *would have been* consumed by ARC
602 * buffers that are eligible for eviction, of type
603 * ARC_BUFC_DATA, and linked off the arc_mfu_ghost state.
605 kstat_named_t arcstat_mfu_ghost_evictable_data;
607 * Number of bytes that *would have been* consumed by ARC
608 * buffers that are eligible for eviction, of type
 * ARC_BUFC_METADATA, and linked off the arc_mfu_ghost state.
611 kstat_named_t arcstat_mfu_ghost_evictable_metadata;
612 kstat_named_t arcstat_l2_hits;
613 kstat_named_t arcstat_l2_misses;
614 kstat_named_t arcstat_l2_feeds;
615 kstat_named_t arcstat_l2_rw_clash;
616 kstat_named_t arcstat_l2_read_bytes;
617 kstat_named_t arcstat_l2_write_bytes;
618 kstat_named_t arcstat_l2_writes_sent;
619 kstat_named_t arcstat_l2_writes_done;
620 kstat_named_t arcstat_l2_writes_error;
621 kstat_named_t arcstat_l2_writes_lock_retry;
622 kstat_named_t arcstat_l2_evict_lock_retry;
623 kstat_named_t arcstat_l2_evict_reading;
624 kstat_named_t arcstat_l2_evict_l1cached;
625 kstat_named_t arcstat_l2_free_on_write;
626 kstat_named_t arcstat_l2_abort_lowmem;
627 kstat_named_t arcstat_l2_cksum_bad;
628 kstat_named_t arcstat_l2_io_error;
629 kstat_named_t arcstat_l2_size;
630 kstat_named_t arcstat_l2_asize;
631 kstat_named_t arcstat_l2_hdr_size;
632 kstat_named_t arcstat_memory_throttle_count;
633 kstat_named_t arcstat_memory_direct_count;
634 kstat_named_t arcstat_memory_indirect_count;
635 kstat_named_t arcstat_no_grow;
636 kstat_named_t arcstat_tempreserve;
637 kstat_named_t arcstat_loaned_bytes;
638 kstat_named_t arcstat_prune;
639 kstat_named_t arcstat_meta_used;
640 kstat_named_t arcstat_meta_limit;
641 kstat_named_t arcstat_dnode_limit;
642 kstat_named_t arcstat_meta_max;
643 kstat_named_t arcstat_meta_min;
644 kstat_named_t arcstat_sync_wait_for_async;
645 kstat_named_t arcstat_demand_hit_predictive_prefetch;
646 kstat_named_t arcstat_need_free;
647 kstat_named_t arcstat_sys_free;
650 static arc_stats_t arc_stats = {
651 { "hits", KSTAT_DATA_UINT64 },
652 { "misses", KSTAT_DATA_UINT64 },
653 { "demand_data_hits", KSTAT_DATA_UINT64 },
654 { "demand_data_misses", KSTAT_DATA_UINT64 },
655 { "demand_metadata_hits", KSTAT_DATA_UINT64 },
656 { "demand_metadata_misses", KSTAT_DATA_UINT64 },
657 { "prefetch_data_hits", KSTAT_DATA_UINT64 },
658 { "prefetch_data_misses", KSTAT_DATA_UINT64 },
659 { "prefetch_metadata_hits", KSTAT_DATA_UINT64 },
660 { "prefetch_metadata_misses", KSTAT_DATA_UINT64 },
661 { "mru_hits", KSTAT_DATA_UINT64 },
662 { "mru_ghost_hits", KSTAT_DATA_UINT64 },
663 { "mfu_hits", KSTAT_DATA_UINT64 },
664 { "mfu_ghost_hits", KSTAT_DATA_UINT64 },
665 { "deleted", KSTAT_DATA_UINT64 },
666 { "mutex_miss", KSTAT_DATA_UINT64 },
667 { "evict_skip", KSTAT_DATA_UINT64 },
668 { "evict_not_enough", KSTAT_DATA_UINT64 },
669 { "evict_l2_cached", KSTAT_DATA_UINT64 },
670 { "evict_l2_eligible", KSTAT_DATA_UINT64 },
671 { "evict_l2_ineligible", KSTAT_DATA_UINT64 },
672 { "evict_l2_skip", KSTAT_DATA_UINT64 },
673 { "hash_elements", KSTAT_DATA_UINT64 },
674 { "hash_elements_max", KSTAT_DATA_UINT64 },
675 { "hash_collisions", KSTAT_DATA_UINT64 },
676 { "hash_chains", KSTAT_DATA_UINT64 },
677 { "hash_chain_max", KSTAT_DATA_UINT64 },
678 { "p", KSTAT_DATA_UINT64 },
679 { "c", KSTAT_DATA_UINT64 },
680 { "c_min", KSTAT_DATA_UINT64 },
681 { "c_max", KSTAT_DATA_UINT64 },
682 { "size", KSTAT_DATA_UINT64 },
683 { "compressed_size", KSTAT_DATA_UINT64 },
684 { "uncompressed_size", KSTAT_DATA_UINT64 },
685 { "overhead_size", KSTAT_DATA_UINT64 },
686 { "hdr_size", KSTAT_DATA_UINT64 },
687 { "data_size", KSTAT_DATA_UINT64 },
688 { "metadata_size", KSTAT_DATA_UINT64 },
689 { "dbuf_size", KSTAT_DATA_UINT64 },
690 { "dnode_size", KSTAT_DATA_UINT64 },
691 { "bonus_size", KSTAT_DATA_UINT64 },
692 { "anon_size", KSTAT_DATA_UINT64 },
693 { "anon_evictable_data", KSTAT_DATA_UINT64 },
694 { "anon_evictable_metadata", KSTAT_DATA_UINT64 },
695 { "mru_size", KSTAT_DATA_UINT64 },
696 { "mru_evictable_data", KSTAT_DATA_UINT64 },
697 { "mru_evictable_metadata", KSTAT_DATA_UINT64 },
698 { "mru_ghost_size", KSTAT_DATA_UINT64 },
699 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 },
700 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 },
701 { "mfu_size", KSTAT_DATA_UINT64 },
702 { "mfu_evictable_data", KSTAT_DATA_UINT64 },
703 { "mfu_evictable_metadata", KSTAT_DATA_UINT64 },
704 { "mfu_ghost_size", KSTAT_DATA_UINT64 },
705 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 },
706 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 },
707 { "l2_hits", KSTAT_DATA_UINT64 },
708 { "l2_misses", KSTAT_DATA_UINT64 },
709 { "l2_feeds", KSTAT_DATA_UINT64 },
710 { "l2_rw_clash", KSTAT_DATA_UINT64 },
711 { "l2_read_bytes", KSTAT_DATA_UINT64 },
712 { "l2_write_bytes", KSTAT_DATA_UINT64 },
713 { "l2_writes_sent", KSTAT_DATA_UINT64 },
714 { "l2_writes_done", KSTAT_DATA_UINT64 },
715 { "l2_writes_error", KSTAT_DATA_UINT64 },
716 { "l2_writes_lock_retry", KSTAT_DATA_UINT64 },
717 { "l2_evict_lock_retry", KSTAT_DATA_UINT64 },
718 { "l2_evict_reading", KSTAT_DATA_UINT64 },
719 { "l2_evict_l1cached", KSTAT_DATA_UINT64 },
720 { "l2_free_on_write", KSTAT_DATA_UINT64 },
721 { "l2_abort_lowmem", KSTAT_DATA_UINT64 },
722 { "l2_cksum_bad", KSTAT_DATA_UINT64 },
723 { "l2_io_error", KSTAT_DATA_UINT64 },
724 { "l2_size", KSTAT_DATA_UINT64 },
725 { "l2_asize", KSTAT_DATA_UINT64 },
726 { "l2_hdr_size", KSTAT_DATA_UINT64 },
727 { "memory_throttle_count", KSTAT_DATA_UINT64 },
728 { "memory_direct_count", KSTAT_DATA_UINT64 },
729 { "memory_indirect_count", KSTAT_DATA_UINT64 },
730 { "arc_no_grow", KSTAT_DATA_UINT64 },
731 { "arc_tempreserve", KSTAT_DATA_UINT64 },
732 { "arc_loaned_bytes", KSTAT_DATA_UINT64 },
733 { "arc_prune", KSTAT_DATA_UINT64 },
734 { "arc_meta_used", KSTAT_DATA_UINT64 },
735 { "arc_meta_limit", KSTAT_DATA_UINT64 },
736 { "arc_dnode_limit", KSTAT_DATA_UINT64 },
737 { "arc_meta_max", KSTAT_DATA_UINT64 },
738 { "arc_meta_min", KSTAT_DATA_UINT64 },
739 { "sync_wait_for_async", KSTAT_DATA_UINT64 },
740 { "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64 },
741 { "arc_need_free", KSTAT_DATA_UINT64 },
742 { "arc_sys_free", KSTAT_DATA_UINT64 }
745 #define ARCSTAT(stat) (arc_stats.stat.value.ui64)
747 #define ARCSTAT_INCR(stat, val) \
748 atomic_add_64(&arc_stats.stat.value.ui64, (val))
750 #define ARCSTAT_BUMP(stat) ARCSTAT_INCR(stat, 1)
751 #define ARCSTAT_BUMPDOWN(stat) ARCSTAT_INCR(stat, -1)
753 #define ARCSTAT_MAX(stat, val) { \
755 while ((val) > (m = arc_stats.stat.value.ui64) && \
756 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
760 #define ARCSTAT_MAXSTAT(stat) \
761 ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64)
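
/*
 * Typical usage of the statistic macros above, mirroring calls made
 * elsewhere in this file (illustrative only):
 *
 *	ARCSTAT_BUMP(arcstat_hits);			(increment by one)
 *	ARCSTAT_INCR(arcstat_overhead_size, delta);	(add a signed delta)
 *	ARCSTAT_MAXSTAT(arcstat_hash_elements);		(track the high-water mark)
 */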
764 * We define a macro to allow ARC hits/misses to be easily broken down by
765 * two separate conditions, giving a total of four different subtypes for
766 * each of hits and misses (so eight statistics total).
768 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
771 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
773 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
777 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
779 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
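
/*
 * For example, the hit/miss classification later in this file is roughly:
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), demand, prefetch,
 *	    !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
 *
 * which bumps exactly one of arcstat_{demand,prefetch}_{data,metadata}_hits.
 */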
784 static arc_state_t *arc_anon;
785 static arc_state_t *arc_mru;
786 static arc_state_t *arc_mru_ghost;
787 static arc_state_t *arc_mfu;
788 static arc_state_t *arc_mfu_ghost;
789 static arc_state_t *arc_l2c_only;
792 * There are several ARC variables that are critical to export as kstats --
793 * but we don't want to have to grovel around in the kstat whenever we wish to
794 * manipulate them. For these variables, we therefore define them to be in
795 * terms of the statistic variable. This assures that we are not introducing
796 * the possibility of inconsistency by having shadow copies of the variables,
797 * while still allowing the code to be readable.
799 #define arc_size ARCSTAT(arcstat_size) /* actual total arc size */
800 #define arc_p ARCSTAT(arcstat_p) /* target size of MRU */
801 #define arc_c ARCSTAT(arcstat_c) /* target size of cache */
802 #define arc_c_min ARCSTAT(arcstat_c_min) /* min target cache size */
803 #define arc_c_max ARCSTAT(arcstat_c_max) /* max target cache size */
804 #define arc_no_grow ARCSTAT(arcstat_no_grow) /* do not grow cache size */
805 #define arc_tempreserve ARCSTAT(arcstat_tempreserve)
806 #define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes)
807 #define arc_meta_limit ARCSTAT(arcstat_meta_limit) /* max size for metadata */
808 #define arc_dnode_limit ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
809 #define arc_meta_min ARCSTAT(arcstat_meta_min) /* min size for metadata */
810 #define arc_meta_used ARCSTAT(arcstat_meta_used) /* size of metadata */
811 #define arc_meta_max ARCSTAT(arcstat_meta_max) /* max size of metadata */
812 #define arc_dbuf_size ARCSTAT(arcstat_dbuf_size) /* dbuf metadata */
813 #define arc_dnode_size ARCSTAT(arcstat_dnode_size) /* dnode metadata */
814 #define arc_bonus_size ARCSTAT(arcstat_bonus_size) /* bonus buffer metadata */
815 #define arc_need_free ARCSTAT(arcstat_need_free) /* bytes to be freed */
816 #define arc_sys_free ARCSTAT(arcstat_sys_free) /* target system free bytes */
818 /* compressed size of entire arc */
819 #define arc_compressed_size ARCSTAT(arcstat_compressed_size)
820 /* uncompressed size of entire arc */
821 #define arc_uncompressed_size ARCSTAT(arcstat_uncompressed_size)
822 /* number of bytes in the arc from arc_buf_t's */
823 #define arc_overhead_size ARCSTAT(arcstat_overhead_size)
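
/*
 * For example, code elsewhere in this file can update the total size with
 * atomic_add_64(&arc_size, space) and the "size" kstat reflects the change
 * immediately, with no shadow variable to keep in sync.
 */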
825 static list_t arc_prune_list;
826 static kmutex_t arc_prune_mtx;
827 static taskq_t *arc_prune_taskq;
829 #define GHOST_STATE(state) \
830 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
831 (state) == arc_l2c_only)
833 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
834 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
835 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR)
836 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH)
837 #define HDR_COMPRESSION_ENABLED(hdr) \
838 ((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)
840 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE)
841 #define HDR_L2_READING(hdr) \
842 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
843 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
844 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING)
845 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
846 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
847 #define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA)
849 #define HDR_ISTYPE_METADATA(hdr) \
850 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
851 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr))
853 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
854 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
856 /* For storing compression mode in b_flags */
857 #define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1)
859 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \
860 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
861 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \
862 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));
864 #define ARC_BUF_LAST(buf) ((buf)->b_next == NULL)
865 #define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED)
866 #define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
872 #define HDR_FULL_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
873 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))
876 * Hash table routines
879 #define HT_LOCK_ALIGN 64
880 #define HT_LOCK_PAD (P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN)))
885 unsigned char pad[HT_LOCK_PAD];
889 #define BUF_LOCKS 8192
890 typedef struct buf_hash_table {
892 arc_buf_hdr_t **ht_table;
893 struct ht_lock ht_locks[BUF_LOCKS];
896 static buf_hash_table_t buf_hash_table;
898 #define BUF_HASH_INDEX(spa, dva, birth) \
899 (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
900 #define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
901 #define BUF_HASH_LOCK(idx) (&(BUF_HASH_LOCK_NTRY(idx).ht_lock))
902 #define HDR_LOCK(hdr) \
903 (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
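
/*
 * Illustrative lookup pattern using the macros above (a sketch of how
 * callers such as arc_read() use buf_hash_find(), not verbatim code):
 *
 *	kmutex_t *hash_lock = NULL;
 *	arc_buf_hdr_t *hdr = buf_hash_find(guid, bp, &hash_lock);
 *	if (hdr != NULL) {
 *		(hash_lock is returned held; use the header, then drop it)
 *		mutex_exit(hash_lock);
 *	}
 */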
905 uint64_t zfs_crc64_table[256];
911 #define L2ARC_WRITE_SIZE (8 * 1024 * 1024) /* initial write max */
912 #define L2ARC_HEADROOM 2 /* num of writes */
 * If we discover any compressed buffers during an ARC scan, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
918 #define L2ARC_HEADROOM_BOOST 200
919 #define L2ARC_FEED_SECS 1 /* caching interval secs */
920 #define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */
923 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
 * and each of the states has two types: data and metadata.
926 #define L2ARC_FEED_TYPES 4
928 #define l2arc_writes_sent ARCSTAT(arcstat_l2_writes_sent)
929 #define l2arc_writes_done ARCSTAT(arcstat_l2_writes_done)
931 /* L2ARC Performance Tunables */
932 unsigned long l2arc_write_max = L2ARC_WRITE_SIZE; /* def max write size */
933 unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE; /* extra warmup write */
934 unsigned long l2arc_headroom = L2ARC_HEADROOM; /* # of dev writes */
935 unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
936 unsigned long l2arc_feed_secs = L2ARC_FEED_SECS; /* interval seconds */
937 unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS; /* min interval msecs */
938 int l2arc_noprefetch = B_TRUE; /* don't cache prefetch bufs */
939 int l2arc_feed_again = B_TRUE; /* turbo warmup */
940 int l2arc_norw = B_FALSE; /* no reads during writes */
945 static list_t L2ARC_dev_list; /* device list */
946 static list_t *l2arc_dev_list; /* device list pointer */
947 static kmutex_t l2arc_dev_mtx; /* device list mutex */
948 static l2arc_dev_t *l2arc_dev_last; /* last device used */
949 static list_t L2ARC_free_on_write; /* free after write buf list */
950 static list_t *l2arc_free_on_write; /* free after write list ptr */
951 static kmutex_t l2arc_free_on_write_mtx; /* mutex for list */
952 static uint64_t l2arc_ndev; /* number of devices */
954 typedef struct l2arc_read_callback {
955 arc_buf_hdr_t *l2rcb_hdr; /* read header */
956 blkptr_t l2rcb_bp; /* original blkptr */
957 zbookmark_phys_t l2rcb_zb; /* original bookmark */
958 int l2rcb_flags; /* original flags */
959 } l2arc_read_callback_t;
961 typedef struct l2arc_data_free {
962 /* protected by l2arc_free_on_write_mtx */
965 arc_buf_contents_t l2df_type;
966 list_node_t l2df_list_node;
969 static kmutex_t l2arc_feed_thr_lock;
970 static kcondvar_t l2arc_feed_thr_cv;
971 static uint8_t l2arc_thread_exit;
973 static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, void *);
974 static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, void *);
975 static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, void *);
976 static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, void *);
977 static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, void *);
978 static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag);
979 static void arc_hdr_free_pabd(arc_buf_hdr_t *);
980 static void arc_hdr_alloc_pabd(arc_buf_hdr_t *);
981 static void arc_access(arc_buf_hdr_t *, kmutex_t *);
982 static boolean_t arc_is_overflowing(void);
983 static void arc_buf_watch(arc_buf_t *);
984 static void arc_tuning_update(void);
985 static void arc_prune_async(int64_t);
986 static uint64_t arc_all_memory(void);
988 static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
989 static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
990 static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
991 static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
993 static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
994 static void l2arc_read_done(zio_t *);
997 buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
999 uint8_t *vdva = (uint8_t *)dva;
1000 uint64_t crc = -1ULL;
1003 ASSERT(zfs_crc64_table[128] == ZFS_CRC64_POLY);
1005 for (i = 0; i < sizeof (dva_t); i++)
1006 crc = (crc >> 8) ^ zfs_crc64_table[(crc ^ vdva[i]) & 0xFF];
1008 crc ^= (spa>>8) ^ birth;
1013 #define HDR_EMPTY(hdr) \
1014 ((hdr)->b_dva.dva_word[0] == 0 && \
1015 (hdr)->b_dva.dva_word[1] == 0)
1017 #define HDR_EQUAL(spa, dva, birth, hdr) \
1018 ((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
1019 ((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
1020 ((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)
1023 buf_discard_identity(arc_buf_hdr_t *hdr)
1025 hdr->b_dva.dva_word[0] = 0;
1026 hdr->b_dva.dva_word[1] = 0;
1030 static arc_buf_hdr_t *
1031 buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp)
1033 const dva_t *dva = BP_IDENTITY(bp);
1034 uint64_t birth = BP_PHYSICAL_BIRTH(bp);
1035 uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
1036 kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
1039 mutex_enter(hash_lock);
1040 for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL;
1041 hdr = hdr->b_hash_next) {
1042 if (HDR_EQUAL(spa, dva, birth, hdr)) {
1047 mutex_exit(hash_lock);
1053 * Insert an entry into the hash table. If there is already an element
1054 * equal to elem in the hash table, then the already existing element
1055 * will be returned and the new element will not be inserted.
1056 * Otherwise returns NULL.
1057 * If lockp == NULL, the caller is assumed to already hold the hash lock.
1059 static arc_buf_hdr_t *
1060 buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
1062 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
1063 kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
1064 arc_buf_hdr_t *fhdr;
1067 ASSERT(!DVA_IS_EMPTY(&hdr->b_dva));
1068 ASSERT(hdr->b_birth != 0);
1069 ASSERT(!HDR_IN_HASH_TABLE(hdr));
1071 if (lockp != NULL) {
1073 mutex_enter(hash_lock);
1075 ASSERT(MUTEX_HELD(hash_lock));
1078 for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL;
1079 fhdr = fhdr->b_hash_next, i++) {
1080 if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr))
1084 hdr->b_hash_next = buf_hash_table.ht_table[idx];
1085 buf_hash_table.ht_table[idx] = hdr;
1086 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
1088 /* collect some hash table performance data */
1090 ARCSTAT_BUMP(arcstat_hash_collisions);
1092 ARCSTAT_BUMP(arcstat_hash_chains);
1094 ARCSTAT_MAX(arcstat_hash_chain_max, i);
1097 ARCSTAT_BUMP(arcstat_hash_elements);
1098 ARCSTAT_MAXSTAT(arcstat_hash_elements);
1104 buf_hash_remove(arc_buf_hdr_t *hdr)
1106 arc_buf_hdr_t *fhdr, **hdrp;
1107 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
1109 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx)));
1110 ASSERT(HDR_IN_HASH_TABLE(hdr));
1112 hdrp = &buf_hash_table.ht_table[idx];
1113 while ((fhdr = *hdrp) != hdr) {
1114 ASSERT3P(fhdr, !=, NULL);
1115 hdrp = &fhdr->b_hash_next;
1117 *hdrp = hdr->b_hash_next;
1118 hdr->b_hash_next = NULL;
1119 arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
1121 /* collect some hash table performance data */
1122 ARCSTAT_BUMPDOWN(arcstat_hash_elements);
1124 if (buf_hash_table.ht_table[idx] &&
1125 buf_hash_table.ht_table[idx]->b_hash_next == NULL)
1126 ARCSTAT_BUMPDOWN(arcstat_hash_chains);
1130 * Global data structures and functions for the buf kmem cache.
1132 static kmem_cache_t *hdr_full_cache;
1133 static kmem_cache_t *hdr_l2only_cache;
1134 static kmem_cache_t *buf_cache;
1141 #if defined(_KERNEL) && defined(HAVE_SPL)
1143 * Large allocations which do not require contiguous pages
 * should be using vmem_free() in the linux kernel.
1146 vmem_free(buf_hash_table.ht_table,
1147 (buf_hash_table.ht_mask + 1) * sizeof (void *));
1149 kmem_free(buf_hash_table.ht_table,
1150 (buf_hash_table.ht_mask + 1) * sizeof (void *));
1152 for (i = 0; i < BUF_LOCKS; i++)
1153 mutex_destroy(&buf_hash_table.ht_locks[i].ht_lock);
1154 kmem_cache_destroy(hdr_full_cache);
1155 kmem_cache_destroy(hdr_l2only_cache);
1156 kmem_cache_destroy(buf_cache);
1160 * Constructor callback - called when the cache is empty
1161 * and a new buf is requested.
1165 hdr_full_cons(void *vbuf, void *unused, int kmflag)
1167 arc_buf_hdr_t *hdr = vbuf;
1169 bzero(hdr, HDR_FULL_SIZE);
1170 cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL);
1171 refcount_create(&hdr->b_l1hdr.b_refcnt);
1172 mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL);
1173 list_link_init(&hdr->b_l1hdr.b_arc_node);
1174 list_link_init(&hdr->b_l2hdr.b_l2node);
1175 multilist_link_init(&hdr->b_l1hdr.b_arc_node);
1176 arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS);
1183 hdr_l2only_cons(void *vbuf, void *unused, int kmflag)
1185 arc_buf_hdr_t *hdr = vbuf;
1187 bzero(hdr, HDR_L2ONLY_SIZE);
1188 arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
1195 buf_cons(void *vbuf, void *unused, int kmflag)
1197 arc_buf_t *buf = vbuf;
1199 bzero(buf, sizeof (arc_buf_t));
1200 mutex_init(&buf->b_evict_lock, NULL, MUTEX_DEFAULT, NULL);
1201 arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS);
1207 * Destructor callback - called when a cached buf is
1208 * no longer required.
1212 hdr_full_dest(void *vbuf, void *unused)
1214 arc_buf_hdr_t *hdr = vbuf;
1216 ASSERT(HDR_EMPTY(hdr));
1217 cv_destroy(&hdr->b_l1hdr.b_cv);
1218 refcount_destroy(&hdr->b_l1hdr.b_refcnt);
1219 mutex_destroy(&hdr->b_l1hdr.b_freeze_lock);
1220 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
1221 arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS);
1226 hdr_l2only_dest(void *vbuf, void *unused)
1228 ASSERTV(arc_buf_hdr_t *hdr = vbuf);
1230 ASSERT(HDR_EMPTY(hdr));
1231 arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
1236 buf_dest(void *vbuf, void *unused)
1238 arc_buf_t *buf = vbuf;
1240 mutex_destroy(&buf->b_evict_lock);
1241 arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS);
1245 * Reclaim callback -- invoked when memory is low.
1249 hdr_recl(void *unused)
1251 dprintf("hdr_recl called\n");
1253 * umem calls the reclaim func when we destroy the buf cache,
1254 * which is after we do arc_fini().
1257 cv_signal(&arc_reclaim_thread_cv);
1263 uint64_t *ct = NULL;
1264 uint64_t hsize = 1ULL << 12;
1268 * The hash table is big enough to fill all of physical memory
1269 * with an average block size of zfs_arc_average_blocksize (default 8K).
1270 * By default, the table will take up
1271 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
1273 while (hsize * zfs_arc_average_blocksize < arc_all_memory())
1276 buf_hash_table.ht_mask = hsize - 1;
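
/*
 * For example, with 16GB of physical memory and the default 8K average
 * blocksize, the loop above settles on hsize = 2^21 (2M buckets), so the
 * table of 8-byte pointers occupies 16MB, or roughly 1MB per GB of memory.
 */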
1277 #if defined(_KERNEL) && defined(HAVE_SPL)
1279 * Large allocations which do not require contiguous pages
1280 * should be using vmem_alloc() in the linux kernel
1282 buf_hash_table.ht_table =
1283 vmem_zalloc(hsize * sizeof (void*), KM_SLEEP);
1285 buf_hash_table.ht_table =
1286 kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP);
1288 if (buf_hash_table.ht_table == NULL) {
1289 ASSERT(hsize > (1ULL << 8));
1294 hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE,
1295 0, hdr_full_cons, hdr_full_dest, hdr_recl, NULL, NULL, 0);
1296 hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only",
1297 HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, hdr_recl,
1299 buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t),
1300 0, buf_cons, buf_dest, NULL, NULL, NULL, 0);
1302 for (i = 0; i < 256; i++)
1303 for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--)
1304 *ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY);
1306 for (i = 0; i < BUF_LOCKS; i++) {
1307 mutex_init(&buf_hash_table.ht_locks[i].ht_lock,
1308 NULL, MUTEX_DEFAULT, NULL);
1312 #define ARC_MINTIME (hz>>4) /* 62 ms */
1315 * This is the size that the buf occupies in memory. If the buf is compressed,
1316 * it will correspond to the compressed size. You should use this method of
1317 * getting the buf size unless you explicitly need the logical size.
1320 arc_buf_size(arc_buf_t *buf)
1322 return (ARC_BUF_COMPRESSED(buf) ?
1323 HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr));
1327 arc_buf_lsize(arc_buf_t *buf)
1329 return (HDR_GET_LSIZE(buf->b_hdr));
1333 arc_get_compression(arc_buf_t *buf)
1335 return (ARC_BUF_COMPRESSED(buf) ?
1336 HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF);
1339 static inline boolean_t
1340 arc_buf_is_shared(arc_buf_t *buf)
1342 boolean_t shared = (buf->b_data != NULL &&
1343 buf->b_hdr->b_l1hdr.b_pabd != NULL &&
1344 abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) &&
1345 buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd));
1346 IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr));
1347 IMPLY(shared, ARC_BUF_SHARED(buf));
1348 IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf));
1351 * It would be nice to assert arc_can_share() too, but the "hdr isn't
1352 * already being shared" requirement prevents us from doing that.
 * Free the checksum associated with this header. If there is no checksum, this
 * is a no-op.
1363 arc_cksum_free(arc_buf_hdr_t *hdr)
1365 ASSERT(HDR_HAS_L1HDR(hdr));
1366 mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
1367 if (hdr->b_l1hdr.b_freeze_cksum != NULL) {
1368 kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t));
1369 hdr->b_l1hdr.b_freeze_cksum = NULL;
1371 mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1375 * Return true iff at least one of the bufs on hdr is not compressed.
1378 arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr)
1380 for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) {
1381 if (!ARC_BUF_COMPRESSED(b)) {
1390 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data
1391 * matches the checksum that is stored in the hdr. If there is no checksum,
1392 * or if the buf is compressed, this is a no-op.
1395 arc_cksum_verify(arc_buf_t *buf)
1397 arc_buf_hdr_t *hdr = buf->b_hdr;
1400 if (!(zfs_flags & ZFS_DEBUG_MODIFY))
1403 if (ARC_BUF_COMPRESSED(buf)) {
1404 ASSERT(hdr->b_l1hdr.b_freeze_cksum == NULL ||
1405 arc_hdr_has_uncompressed_buf(hdr));
1409 ASSERT(HDR_HAS_L1HDR(hdr));
1411 mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
1412 if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) {
1413 mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1417 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc);
1418 if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc))
1419 panic("buffer modified while frozen!");
1420 mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1424 arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio)
1426 enum zio_compress compress = BP_GET_COMPRESS(zio->io_bp);
1427 boolean_t valid_cksum;
1429 ASSERT(!BP_IS_EMBEDDED(zio->io_bp));
1430 VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr));
1433 * We rely on the blkptr's checksum to determine if the block
1434 * is valid or not. When compressed arc is enabled, the l2arc
1435 * writes the block to the l2arc just as it appears in the pool.
1436 * This allows us to use the blkptr's checksum to validate the
1437 * data that we just read off of the l2arc without having to store
1438 * a separate checksum in the arc_buf_hdr_t. However, if compressed
1439 * arc is disabled, then the data written to the l2arc is always
1440 * uncompressed and won't match the block as it exists in the main
1441 * pool. When this is the case, we must first compress it if it is
1442 * compressed on the main pool before we can validate the checksum.
1444 if (!HDR_COMPRESSION_ENABLED(hdr) && compress != ZIO_COMPRESS_OFF) {
1448 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF);
1450 cbuf = zio_buf_alloc(HDR_GET_PSIZE(hdr));
1451 lsize = HDR_GET_LSIZE(hdr);
1452 csize = zio_compress_data(compress, zio->io_abd, cbuf, lsize);
1454 ASSERT3U(csize, <=, HDR_GET_PSIZE(hdr));
1455 if (csize < HDR_GET_PSIZE(hdr)) {
1457 * Compressed blocks are always a multiple of the
1458 * smallest ashift in the pool. Ideally, we would
1459 * like to round up the csize to the next
1460 * spa_min_ashift but that value may have changed
1461 * since the block was last written. Instead,
1462 * we rely on the fact that the hdr's psize
1463 * was set to the psize of the block when it was
1464 * last written. We set the csize to that value
 * and zero out any part that should not contain data.
1468 bzero((char *)cbuf + csize, HDR_GET_PSIZE(hdr) - csize);
1469 csize = HDR_GET_PSIZE(hdr);
1471 zio_push_transform(zio, cbuf, csize, HDR_GET_PSIZE(hdr), NULL);
1475 * Block pointers always store the checksum for the logical data.
1476 * If the block pointer has the gang bit set, then the checksum
1477 * it represents is for the reconstituted data and not for an
1478 * individual gang member. The zio pipeline, however, must be able to
1479 * determine the checksum of each of the gang constituents so it
1480 * treats the checksum comparison differently than what we need
1481 * for l2arc blocks. This prevents us from using the
1482 * zio_checksum_error() interface directly. Instead we must call the
1483 * zio_checksum_error_impl() so that we can ensure the checksum is
1484 * generated using the correct checksum algorithm and accounts for the
1485 * logical I/O size and not just a gang fragment.
1487 valid_cksum = (zio_checksum_error_impl(zio->io_spa, zio->io_bp,
1488 BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size,
1489 zio->io_offset, NULL) == 0);
1490 zio_pop_transforms(zio);
1491 return (valid_cksum);
1495 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a
1496 * checksum and attaches it to the buf's hdr so that we can ensure that the buf
1497 * isn't modified later on. If buf is compressed or there is already a checksum
1498 * on the hdr, this is a no-op (we only checksum uncompressed bufs).
1501 arc_cksum_compute(arc_buf_t *buf)
1503 arc_buf_hdr_t *hdr = buf->b_hdr;
1505 if (!(zfs_flags & ZFS_DEBUG_MODIFY))
1508 ASSERT(HDR_HAS_L1HDR(hdr));
1510 mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
1511 if (hdr->b_l1hdr.b_freeze_cksum != NULL) {
1512 ASSERT(arc_hdr_has_uncompressed_buf(hdr));
1513 mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1515 } else if (ARC_BUF_COMPRESSED(buf)) {
1516 mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1520 ASSERT(!ARC_BUF_COMPRESSED(buf));
1521 hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t),
1523 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL,
1524 hdr->b_l1hdr.b_freeze_cksum);
1525 mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1531 arc_buf_sigsegv(int sig, siginfo_t *si, void *unused)
1533 panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr);
1539 arc_buf_unwatch(arc_buf_t *buf)
1543 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
1544 PROT_READ | PROT_WRITE));
1551 arc_buf_watch(arc_buf_t *buf)
1555 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
1560 static arc_buf_contents_t
1561 arc_buf_type(arc_buf_hdr_t *hdr)
1563 arc_buf_contents_t type;
1564 if (HDR_ISTYPE_METADATA(hdr)) {
1565 type = ARC_BUFC_METADATA;
1567 type = ARC_BUFC_DATA;
1569 VERIFY3U(hdr->b_type, ==, type);
1574 arc_is_metadata(arc_buf_t *buf)
1576 return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0);
1580 arc_bufc_to_flags(arc_buf_contents_t type)
1584 /* metadata field is 0 if buffer contains normal data */
1586 case ARC_BUFC_METADATA:
1587 return (ARC_FLAG_BUFC_METADATA);
1591 panic("undefined ARC buffer type!");
1592 return ((uint32_t)-1);
1596 arc_buf_thaw(arc_buf_t *buf)
1598 arc_buf_hdr_t *hdr = buf->b_hdr;
1600 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
1601 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
1603 arc_cksum_verify(buf);
1606 * Compressed buffers do not manipulate the b_freeze_cksum or
1607 * allocate b_thawed.
1609 if (ARC_BUF_COMPRESSED(buf)) {
1610 ASSERT(hdr->b_l1hdr.b_freeze_cksum == NULL ||
1611 arc_hdr_has_uncompressed_buf(hdr));
1615 ASSERT(HDR_HAS_L1HDR(hdr));
1616 arc_cksum_free(hdr);
1617 arc_buf_unwatch(buf);
1621 arc_buf_freeze(arc_buf_t *buf)
1623 arc_buf_hdr_t *hdr = buf->b_hdr;
1624 kmutex_t *hash_lock;
1626 if (!(zfs_flags & ZFS_DEBUG_MODIFY))
1629 if (ARC_BUF_COMPRESSED(buf)) {
1630 ASSERT(hdr->b_l1hdr.b_freeze_cksum == NULL ||
1631 arc_hdr_has_uncompressed_buf(hdr));
1635 hash_lock = HDR_LOCK(hdr);
1636 mutex_enter(hash_lock);
1638 ASSERT(HDR_HAS_L1HDR(hdr));
1639 ASSERT(hdr->b_l1hdr.b_freeze_cksum != NULL ||
1640 hdr->b_l1hdr.b_state == arc_anon);
1641 arc_cksum_compute(buf);
1642 mutex_exit(hash_lock);
1646 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead,
1647 * the following functions should be used to ensure that the flags are
1648 * updated in a thread-safe way. When manipulating the flags either
1649 * the hash_lock must be held or the hdr must be undiscoverable. This
 * ensures that we're not racing with any other threads when updating the flags.
1654 arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
1656 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
1657 hdr->b_flags |= flags;
1661 arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
1663 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
1664 hdr->b_flags &= ~flags;
1668 * Setting the compression bits in the arc_buf_hdr_t's b_flags is
1669 * done in a special way since we have to clear and set bits
1670 * at the same time. Consumers that wish to set the compression bits
1671 * must use this function to ensure that the flags are updated in
 * a thread-safe manner.
1675 arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp)
1677 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
 * Holes and embedded blocks will always have a psize = 0, so
 * we ignore the compression of the blkptr and mark them as
 * uncompressed.
1684 if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) {
1685 arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
1686 HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF);
1687 ASSERT(!HDR_COMPRESSION_ENABLED(hdr));
1688 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF);
1690 arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
1691 HDR_SET_COMPRESS(hdr, cmp);
1692 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp);
1693 ASSERT(HDR_COMPRESSION_ENABLED(hdr));
1698 * Looks for another buf on the same hdr which has the data decompressed, copies
1699 * from it, and returns true. If no such buf exists, returns false.
1702 arc_buf_try_copy_decompressed_data(arc_buf_t *buf)
1704 arc_buf_hdr_t *hdr = buf->b_hdr;
1705 boolean_t copied = B_FALSE;
1707 ASSERT(HDR_HAS_L1HDR(hdr));
1708 ASSERT3P(buf->b_data, !=, NULL);
1709 ASSERT(!ARC_BUF_COMPRESSED(buf));
1711 for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL;
1712 from = from->b_next) {
1713 /* can't use our own data buffer */
1718 if (!ARC_BUF_COMPRESSED(from)) {
1719 bcopy(from->b_data, buf->b_data, arc_buf_size(buf));
1726 * There were no decompressed bufs, so there should not be a
1727 * checksum on the hdr either.
1729 EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL);
1735 * Given a buf that has a data buffer attached to it, this function will
1736 * efficiently fill the buf with data of the specified compression setting from
1737 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr
1738 * are already sharing a data buf, no copy is performed.
1740 * If the buf is marked as compressed but uncompressed data was requested, this
1741 * will allocate a new data buffer for the buf, remove that flag, and fill the
1742 * buf with uncompressed data. You can't request a compressed buf on a hdr with
1743 * uncompressed data, and (since we haven't added support for it yet) if you
1744 * want compressed data your buf must already be marked as compressed and have
1745 * the correct-sized data buffer.
1748 arc_buf_fill(arc_buf_t *buf, boolean_t compressed)
1750 arc_buf_hdr_t *hdr = buf->b_hdr;
1751 boolean_t hdr_compressed = (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF);
1752 dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap;
1754 ASSERT3P(buf->b_data, !=, NULL);
1755 IMPLY(compressed, hdr_compressed);
1756 IMPLY(compressed, ARC_BUF_COMPRESSED(buf));
1758 if (hdr_compressed == compressed) {
1759 if (!arc_buf_is_shared(buf)) {
1760 abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd,
1764 ASSERT(hdr_compressed);
1765 ASSERT(!compressed);
1766 ASSERT3U(HDR_GET_LSIZE(hdr), !=, HDR_GET_PSIZE(hdr));
1769 * If the buf is sharing its data with the hdr, unlink it and
1770 * allocate a new data buffer for the buf.
1772 if (arc_buf_is_shared(buf)) {
1773 ASSERT(ARC_BUF_COMPRESSED(buf));
1775 /* We need to give the buf its own b_data */
1776 buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
1778 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
1779 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
1781 /* Previously overhead was 0; just add new overhead */
1782 ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr));
1783 } else if (ARC_BUF_COMPRESSED(buf)) {
1784 /* We need to reallocate the buf's b_data */
1785 arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr),
1788 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
1790 /* We increased the size of b_data; update overhead */
1791 ARCSTAT_INCR(arcstat_overhead_size,
1792 HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr));
1796 * Regardless of the buf's previous compression settings, it
1797 * should not be compressed at the end of this function.
1799 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
1802 * Try copying the data from another buf which already has a
1803 * decompressed version. If that's not possible, it's time to
1804 * bite the bullet and decompress the data from the hdr.
1806 if (arc_buf_try_copy_decompressed_data(buf)) {
1807 /* Skip byteswapping and checksumming (already done) */
1808 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, !=, NULL);
1811 int error = zio_decompress_data(HDR_GET_COMPRESS(hdr),
1812 hdr->b_l1hdr.b_pabd, buf->b_data,
1813 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));
1816 * Absent hardware errors or software bugs, this should
1817 * be impossible, but log it anyway so we can debug it.
1821 "hdr %p, compress %d, psize %d, lsize %d",
1822 hdr, HDR_GET_COMPRESS(hdr),
1823 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));
1824 return (SET_ERROR(EIO));
1829 /* Byteswap the buf's data if necessary */
1830 if (bswap != DMU_BSWAP_NUMFUNCS) {
1831 ASSERT(!HDR_SHARED_DATA(hdr));
1832 ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS);
1833 dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr));
1836 /* Compute the hdr's checksum if necessary */
1837 arc_cksum_compute(buf);
1843 arc_decompress(arc_buf_t *buf)
1845 return (arc_buf_fill(buf, B_FALSE));
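/*
 * Illustrative sketch (added commentary, not in the original source): a
 * consumer holding a compressed arc_buf_t that needs the logical bytes can
 * convert it in place; on success the buf is uncompressed, byteswapped if
 * required, and checksummed:
 *
 *	if (ARC_BUF_COMPRESSED(buf))
 *		VERIFY0(arc_decompress(buf));
 */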
1849 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t.
1852 arc_hdr_size(arc_buf_hdr_t *hdr)
1856 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
1857 HDR_GET_PSIZE(hdr) > 0) {
1858 size = HDR_GET_PSIZE(hdr);
1860 ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0);
1861 size = HDR_GET_LSIZE(hdr);
1867 * Increment the amount of evictable space in the arc_state_t's refcount.
1868 * We account for the space used by the hdr and the arc buf individually
1869 * so that we can add and remove them from the refcount individually.
1872 arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state)
1874 arc_buf_contents_t type = arc_buf_type(hdr);
1877 ASSERT(HDR_HAS_L1HDR(hdr));
1879 if (GHOST_STATE(state)) {
1880 ASSERT0(hdr->b_l1hdr.b_bufcnt);
1881 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
1882 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
1883 (void) refcount_add_many(&state->arcs_esize[type],
1884 HDR_GET_LSIZE(hdr), hdr);
1888 ASSERT(!GHOST_STATE(state));
1889 if (hdr->b_l1hdr.b_pabd != NULL) {
1890 (void) refcount_add_many(&state->arcs_esize[type],
1891 arc_hdr_size(hdr), hdr);
1893 for (buf = hdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) {
1894 if (arc_buf_is_shared(buf))
1896 (void) refcount_add_many(&state->arcs_esize[type],
1897 arc_buf_size(buf), buf);
1902 * Decrement the amount of evictable space in the arc_state_t's refcount.
1903 * We account for the space used by the hdr and the arc buf individually
1904 * so that we can add and remove them from the refcount individually.
1907 arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state)
1909 arc_buf_contents_t type = arc_buf_type(hdr);
1912 ASSERT(HDR_HAS_L1HDR(hdr));
1914 if (GHOST_STATE(state)) {
1915 ASSERT0(hdr->b_l1hdr.b_bufcnt);
1916 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
1917 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
1918 (void) refcount_remove_many(&state->arcs_esize[type],
1919 HDR_GET_LSIZE(hdr), hdr);
1923 ASSERT(!GHOST_STATE(state));
1924 if (hdr->b_l1hdr.b_pabd != NULL) {
1925 (void) refcount_remove_many(&state->arcs_esize[type],
1926 arc_hdr_size(hdr), hdr);
1928 for (buf = hdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) {
1929 if (arc_buf_is_shared(buf))
1931 (void) refcount_remove_many(&state->arcs_esize[type],
1932 arc_buf_size(buf), buf);
1937 * Add a reference to this hdr indicating that someone is actively
1938 * referencing that memory. When the refcount transitions from 0 to 1,
1939 * we remove it from the respective arc_state_t list to indicate that
1940 * it is not evictable.
1943 add_reference(arc_buf_hdr_t *hdr, void *tag)
1947 ASSERT(HDR_HAS_L1HDR(hdr));
1948 if (!MUTEX_HELD(HDR_LOCK(hdr))) {
1949 ASSERT(hdr->b_l1hdr.b_state == arc_anon);
1950 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
1951 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
1954 state = hdr->b_l1hdr.b_state;
1956 if ((refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
1957 (state != arc_anon)) {
1958 /* We don't use the L2-only state list. */
1959 if (state != arc_l2c_only) {
1960 multilist_remove(state->arcs_list[arc_buf_type(hdr)],
1962 arc_evictable_space_decrement(hdr, state);
1964 /* remove the prefetch flag if we get a reference */
1965 arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH);
1970 * Remove a reference from this hdr. When the reference transitions from
1971 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's
1972 * list making it eligible for eviction.
1975 remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag)
1978 arc_state_t *state = hdr->b_l1hdr.b_state;
1980 ASSERT(HDR_HAS_L1HDR(hdr));
1981 ASSERT(state == arc_anon || MUTEX_HELD(hash_lock));
1982 ASSERT(!GHOST_STATE(state));
1985 * arc_l2c_only counts as a ghost state so we don't need to explicitly
1986 * check to prevent usage of the arc_l2c_only list.
1988 if (((cnt = refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) &&
1989 (state != arc_anon)) {
1990 multilist_insert(state->arcs_list[arc_buf_type(hdr)], hdr);
1991 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
1992 arc_evictable_space_increment(hdr, state);
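/*
 * Added commentary (not in the original source): add_reference() and
 * remove_reference() are always used as a pair. A reader pins a header with
 * add_reference(hdr, tag), which pulls it off its state's multilist so it
 * cannot be evicted, and later drops the hold with
 * remove_reference(hdr, hash_lock, tag), which re-inserts the header and
 * makes it evictable again once no other holds remain.
 */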
1998 * Returns detailed information about a specific arc buffer. When the
1999 * state_index argument is set, the function will calculate the arc header
2000 * list position for its arc state. Since this requires a linear traversal,
2001 * callers are strongly encouraged not to do this. However, it can be helpful
2002 * for targeted analysis, so the functionality is provided.
2005 arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index)
2007 arc_buf_hdr_t *hdr = ab->b_hdr;
2008 l1arc_buf_hdr_t *l1hdr = NULL;
2009 l2arc_buf_hdr_t *l2hdr = NULL;
2010 arc_state_t *state = NULL;
2012 memset(abi, 0, sizeof (arc_buf_info_t));
2017 abi->abi_flags = hdr->b_flags;
2019 if (HDR_HAS_L1HDR(hdr)) {
2020 l1hdr = &hdr->b_l1hdr;
2021 state = l1hdr->b_state;
2023 if (HDR_HAS_L2HDR(hdr))
2024 l2hdr = &hdr->b_l2hdr;
2027 abi->abi_bufcnt = l1hdr->b_bufcnt;
2028 abi->abi_access = l1hdr->b_arc_access;
2029 abi->abi_mru_hits = l1hdr->b_mru_hits;
2030 abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits;
2031 abi->abi_mfu_hits = l1hdr->b_mfu_hits;
2032 abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits;
2033 abi->abi_holds = refcount_count(&l1hdr->b_refcnt);
2037 abi->abi_l2arc_dattr = l2hdr->b_daddr;
2038 abi->abi_l2arc_hits = l2hdr->b_hits;
2041 abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON;
2042 abi->abi_state_contents = arc_buf_type(hdr);
2043 abi->abi_size = arc_hdr_size(hdr);
2047 * Move the supplied buffer to the indicated state. The hash lock
2048 * for the buffer must be held by the caller.
2051 arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
2052 kmutex_t *hash_lock)
2054 arc_state_t *old_state;
2057 boolean_t update_old, update_new;
2058 arc_buf_contents_t buftype = arc_buf_type(hdr);
2061 * We almost always have an L1 hdr here, since we call arc_hdr_realloc()
2062 * in arc_read() when bringing a buffer out of the L2ARC. However, the
2063 * L1 hdr doesn't always exist when we change state to arc_anon before
2064 * destroying a header, in which case reallocating to add the L1 hdr is pointless.
2067 if (HDR_HAS_L1HDR(hdr)) {
2068 old_state = hdr->b_l1hdr.b_state;
2069 refcnt = refcount_count(&hdr->b_l1hdr.b_refcnt);
2070 bufcnt = hdr->b_l1hdr.b_bufcnt;
2071 update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL);
2073 old_state = arc_l2c_only;
2076 update_old = B_FALSE;
2078 update_new = update_old;
2080 ASSERT(MUTEX_HELD(hash_lock));
2081 ASSERT3P(new_state, !=, old_state);
2082 ASSERT(!GHOST_STATE(new_state) || bufcnt == 0);
2083 ASSERT(old_state != arc_anon || bufcnt <= 1);
2086 * If this buffer is evictable, transfer it from the
2087 * old state list to the new state list.
2090 if (old_state != arc_anon && old_state != arc_l2c_only) {
2091 ASSERT(HDR_HAS_L1HDR(hdr));
2092 multilist_remove(old_state->arcs_list[buftype], hdr);
2094 if (GHOST_STATE(old_state)) {
2096 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
2097 update_old = B_TRUE;
2099 arc_evictable_space_decrement(hdr, old_state);
2101 if (new_state != arc_anon && new_state != arc_l2c_only) {
2103 * An L1 header always exists here, since if we're
2104 * moving to some L1-cached state (i.e. not l2c_only or
2105 * anonymous), we realloc the header to add an L1hdr beforehand.
2108 ASSERT(HDR_HAS_L1HDR(hdr));
2109 multilist_insert(new_state->arcs_list[buftype], hdr);
2111 if (GHOST_STATE(new_state)) {
2113 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
2114 update_new = B_TRUE;
2116 arc_evictable_space_increment(hdr, new_state);
2120 ASSERT(!HDR_EMPTY(hdr));
2121 if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr))
2122 buf_hash_remove(hdr);
2124 /* adjust state sizes (ignore arc_l2c_only) */
2126 if (update_new && new_state != arc_l2c_only) {
2127 ASSERT(HDR_HAS_L1HDR(hdr));
2128 if (GHOST_STATE(new_state)) {
2132 * When moving a header to a ghost state, we first
2133 * remove all arc buffers. Thus, we'll have a
2134 * bufcnt of zero, and no arc buffer to use for
2135 * the reference. As a result, we use the arc
2136 * header pointer for the reference.
2138 (void) refcount_add_many(&new_state->arcs_size,
2139 HDR_GET_LSIZE(hdr), hdr);
2140 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
2143 uint32_t buffers = 0;
2146 * Each individual buffer holds a unique reference,
2147 * thus we must add each of these references one at a time.
2150 for (buf = hdr->b_l1hdr.b_buf; buf != NULL;
2151 buf = buf->b_next) {
2152 ASSERT3U(bufcnt, !=, 0);
2156 * When the arc_buf_t is sharing the data
2157 * block with the hdr, the owner of the
2158 * reference belongs to the hdr. Only
2159 * add to the refcount if the arc_buf_t is not shared.
2162 if (arc_buf_is_shared(buf))
2165 (void) refcount_add_many(&new_state->arcs_size,
2166 arc_buf_size(buf), buf);
2168 ASSERT3U(bufcnt, ==, buffers);
2170 if (hdr->b_l1hdr.b_pabd != NULL) {
2171 (void) refcount_add_many(&new_state->arcs_size,
2172 arc_hdr_size(hdr), hdr);
2174 ASSERT(GHOST_STATE(old_state));
2179 if (update_old && old_state != arc_l2c_only) {
2180 ASSERT(HDR_HAS_L1HDR(hdr));
2181 if (GHOST_STATE(old_state)) {
2183 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
2186 * When moving a header off of a ghost state,
2187 * the header will not contain any arc buffers.
2188 * We use the arc header pointer for the reference
2189 * which is exactly what we did when we put the
2190 * header on the ghost state.
2193 (void) refcount_remove_many(&old_state->arcs_size,
2194 HDR_GET_LSIZE(hdr), hdr);
2197 uint32_t buffers = 0;
2200 * Each individual buffer holds a unique reference,
2201 * thus we must remove each of these references one at a time.
2204 for (buf = hdr->b_l1hdr.b_buf; buf != NULL;
2205 buf = buf->b_next) {
2206 ASSERT3U(bufcnt, !=, 0);
2210 * When the arc_buf_t is sharing the data
2211 * block with the hdr, the owner of the
2212 * reference belongs to the hdr. Only
2213 * remove from the refcount if the arc_buf_t is not shared.
2216 if (arc_buf_is_shared(buf))
2219 (void) refcount_remove_many(
2220 &old_state->arcs_size, arc_buf_size(buf),
2223 ASSERT3U(bufcnt, ==, buffers);
2224 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
2225 (void) refcount_remove_many(
2226 &old_state->arcs_size, arc_hdr_size(hdr), hdr);
2230 if (HDR_HAS_L1HDR(hdr))
2231 hdr->b_l1hdr.b_state = new_state;
2234 * L2 headers should never be on the L2 state list since they don't
2235 * have L1 headers allocated.
2237 ASSERT(multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_DATA]) &&
2238 multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_METADATA]));
2242 arc_space_consume(uint64_t space, arc_space_type_t type)
2244 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);
2249 case ARC_SPACE_DATA:
2250 ARCSTAT_INCR(arcstat_data_size, space);
2252 case ARC_SPACE_META:
2253 ARCSTAT_INCR(arcstat_metadata_size, space);
2255 case ARC_SPACE_BONUS:
2256 ARCSTAT_INCR(arcstat_bonus_size, space);
2258 case ARC_SPACE_DNODE:
2259 ARCSTAT_INCR(arcstat_dnode_size, space);
2261 case ARC_SPACE_DBUF:
2262 ARCSTAT_INCR(arcstat_dbuf_size, space);
2264 case ARC_SPACE_HDRS:
2265 ARCSTAT_INCR(arcstat_hdr_size, space);
2267 case ARC_SPACE_L2HDRS:
2268 ARCSTAT_INCR(arcstat_l2_hdr_size, space);
2272 if (type != ARC_SPACE_DATA)
2273 ARCSTAT_INCR(arcstat_meta_used, space);
2275 atomic_add_64(&arc_size, space);
2279 arc_space_return(uint64_t space, arc_space_type_t type)
2281 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);
2286 case ARC_SPACE_DATA:
2287 ARCSTAT_INCR(arcstat_data_size, -space);
2289 case ARC_SPACE_META:
2290 ARCSTAT_INCR(arcstat_metadata_size, -space);
2292 case ARC_SPACE_BONUS:
2293 ARCSTAT_INCR(arcstat_bonus_size, -space);
2295 case ARC_SPACE_DNODE:
2296 ARCSTAT_INCR(arcstat_dnode_size, -space);
2298 case ARC_SPACE_DBUF:
2299 ARCSTAT_INCR(arcstat_dbuf_size, -space);
2301 case ARC_SPACE_HDRS:
2302 ARCSTAT_INCR(arcstat_hdr_size, -space);
2304 case ARC_SPACE_L2HDRS:
2305 ARCSTAT_INCR(arcstat_l2_hdr_size, -space);
2309 if (type != ARC_SPACE_DATA) {
2310 ASSERT(arc_meta_used >= space);
2311 if (arc_meta_max < arc_meta_used)
2312 arc_meta_max = arc_meta_used;
2313 ARCSTAT_INCR(arcstat_meta_used, -space);
2316 ASSERT(arc_size >= space);
2317 atomic_add_64(&arc_size, -space);
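/*
 * Illustrative sketch (added commentary, not in the original source):
 * consumers outside the ARC, such as the dbuf cache, account their memory
 * by pairing these calls with a matching type, e.g.:
 *
 *	arc_space_consume(sizeof (dmu_buf_impl_t), ARC_SPACE_DBUF);
 *	...
 *	arc_space_return(sizeof (dmu_buf_impl_t), ARC_SPACE_DBUF);
 *
 * Every byte consumed must eventually be returned with the same type so
 * that arc_size and arc_meta_used stay balanced.
 */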
2321 * Given a hdr and a buf, returns whether that buf can share its b_data buffer
2322 * with the hdr's b_pabd.
2325 arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf)
2328 * The criteria for sharing a hdr's data are:
2329 * 1. the hdr's compression matches the buf's compression
2330 * 2. the hdr doesn't need to be byteswapped
2331 * 3. the hdr isn't already being shared
2332 * 4. the buf is either compressed or it is the last buf in the hdr list
2334 * Criterion #4 maintains the invariant that shared uncompressed
2335 * bufs must be the final buf in the hdr's b_buf list. Reading this, you
2336 * might ask, "if a compressed buf is allocated first, won't that be the
2337 * last thing in the list?", but in that case it's impossible to create
2338 * a shared uncompressed buf anyway (because the hdr must be compressed
2339 * to have the compressed buf). You might also think that #3 is
2340 * sufficient to make this guarantee, however it's possible
2341 * (specifically in the rare L2ARC write race mentioned in
2342 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that
2343 * is sharable, but wasn't at the time of its allocation. Rather than
2344 * allow a new shared uncompressed buf to be created and then shuffle
2345 * the list around to make it the last element, this simply disallows
2346 * sharing if the new buf isn't the first to be added.
2348 ASSERT3P(buf->b_hdr, ==, hdr);
2349 boolean_t hdr_compressed = HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF;
2350 boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0;
2351 return (buf_compressed == hdr_compressed &&
2352 hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS &&
2353 !HDR_SHARED_DATA(hdr) &&
2354 (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf)));
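/*
 * Added commentary (not in the original source): for example, an
 * uncompressed hdr that needs no byteswap and is not already sharing can
 * share with its last uncompressed buf (all four criteria hold), while a
 * second uncompressed buf allocated for the same hdr cannot, because it is
 * neither compressed nor the last buf in the list (criterion #4).
 */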
2358 * Allocate a buf for this hdr. If you care about the data that's in the hdr,
2359 * or if you want a compressed buffer, pass those flags in. Returns 0 if the
2360 * copy was made successfully, or an error code otherwise.
2363 arc_buf_alloc_impl(arc_buf_hdr_t *hdr, void *tag, boolean_t compressed,
2364 boolean_t fill, arc_buf_t **ret)
2368 ASSERT(HDR_HAS_L1HDR(hdr));
2369 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
2370 VERIFY(hdr->b_type == ARC_BUFC_DATA ||
2371 hdr->b_type == ARC_BUFC_METADATA);
2372 ASSERT3P(ret, !=, NULL);
2373 ASSERT3P(*ret, ==, NULL);
2375 hdr->b_l1hdr.b_mru_hits = 0;
2376 hdr->b_l1hdr.b_mru_ghost_hits = 0;
2377 hdr->b_l1hdr.b_mfu_hits = 0;
2378 hdr->b_l1hdr.b_mfu_ghost_hits = 0;
2379 hdr->b_l1hdr.b_l2_hits = 0;
2381 buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
2384 buf->b_next = hdr->b_l1hdr.b_buf;
2387 add_reference(hdr, tag);
2390 * We're about to change the hdr's b_flags. We must either
2391 * hold the hash_lock or be undiscoverable.
2393 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
2396 * Only honor requests for compressed bufs if the hdr is actually compressed.
2399 if (compressed && HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF)
2400 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
2403 * If the hdr's data can be shared then we share the data buffer and
2404 * set the appropriate bit in the hdr's b_flags to indicate the hdr is
2405 * sharing its b_pabd with the buf. Otherwise, we allocate a new buffer to store the buf's data.
2407 * There are two additional restrictions here because we're sharing
2408 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be
2409 * actively involved in an L2ARC write, because if this buf is used by
2410 * an arc_write() then the hdr's data buffer will be released when the
2411 * write completes, even though the L2ARC write might still be using it.
2412 * Second, the hdr's ABD must be linear so that the buf's user doesn't
2413 * need to be ABD-aware.
2415 boolean_t can_share = arc_can_share(hdr, buf) && !HDR_L2_WRITING(hdr) &&
2416 abd_is_linear(hdr->b_l1hdr.b_pabd);
2418 /* Set up b_data and sharing */
2420 buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd);
2421 buf->b_flags |= ARC_BUF_FLAG_SHARED;
2422 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
2425 arc_get_data_buf(hdr, arc_buf_size(buf), buf);
2426 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
2428 VERIFY3P(buf->b_data, !=, NULL);
2430 hdr->b_l1hdr.b_buf = buf;
2431 hdr->b_l1hdr.b_bufcnt += 1;
2434 * If the user wants the data from the hdr, we need to either copy or
2435 * decompress the data.
2438 return (arc_buf_fill(buf, ARC_BUF_COMPRESSED(buf) != 0));
2444 static char *arc_onloan_tag = "onloan";
2447 arc_loaned_bytes_update(int64_t delta)
2449 atomic_add_64(&arc_loaned_bytes, delta);
2451 /* assert that it did not wrap around */
2452 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
2456 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in
2457 * flight data by arc_tempreserve_space() until they are "returned". Loaned
2458 * buffers must be returned to the arc before they can be used by the DMU or freed.
2462 arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size)
2464 arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag,
2465 is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size);
2467 arc_loaned_bytes_update(size);
2473 arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize,
2474 enum zio_compress compression_type)
2476 arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag,
2477 psize, lsize, compression_type);
2479 arc_loaned_bytes_update(psize);
2486 * Return a loaned arc buffer to the arc.
2489 arc_return_buf(arc_buf_t *buf, void *tag)
2491 arc_buf_hdr_t *hdr = buf->b_hdr;
2493 ASSERT3P(buf->b_data, !=, NULL);
2494 ASSERT(HDR_HAS_L1HDR(hdr));
2495 (void) refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
2496 (void) refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
2498 arc_loaned_bytes_update(-arc_buf_size(buf));
2501 /* Detach an arc_buf from a dbuf (tag) */
2503 arc_loan_inuse_buf(arc_buf_t *buf, void *tag)
2505 arc_buf_hdr_t *hdr = buf->b_hdr;
2507 ASSERT3P(buf->b_data, !=, NULL);
2508 ASSERT(HDR_HAS_L1HDR(hdr));
2509 (void) refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
2510 (void) refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);
2512 arc_loaned_bytes_update(arc_buf_size(buf));
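/*
 * Illustrative sketch (added commentary, not in the original source): a
 * typical loaned-buffer lifecycle looks like:
 *
 *	arc_buf_t *buf = arc_loan_buf(spa, B_FALSE, size);
 *	...fill buf->b_data...
 *	arc_return_buf(buf, tag);
 *
 * If ownership was handed to another consumer (e.g. a dbuf) in the
 * meantime, arc_loan_inuse_buf() re-loans it by swapping that consumer's
 * tag back to arc_onloan_tag.
 */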
2516 l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type)
2518 l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP);
2521 df->l2df_size = size;
2522 df->l2df_type = type;
2523 mutex_enter(&l2arc_free_on_write_mtx);
2524 list_insert_head(l2arc_free_on_write, df);
2525 mutex_exit(&l2arc_free_on_write_mtx);
2529 arc_hdr_free_on_write(arc_buf_hdr_t *hdr)
2531 arc_state_t *state = hdr->b_l1hdr.b_state;
2532 arc_buf_contents_t type = arc_buf_type(hdr);
2533 uint64_t size = arc_hdr_size(hdr);
2535 /* protected by hash lock, if in the hash table */
2536 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
2537 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
2538 ASSERT(state != arc_anon && state != arc_l2c_only);
2540 (void) refcount_remove_many(&state->arcs_esize[type],
2543 (void) refcount_remove_many(&state->arcs_size, size, hdr);
2544 if (type == ARC_BUFC_METADATA) {
2545 arc_space_return(size, ARC_SPACE_META);
2547 ASSERT(type == ARC_BUFC_DATA);
2548 arc_space_return(size, ARC_SPACE_DATA);
2551 l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type);
2555 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the
2556 * data buffer, we transfer the refcount ownership to the hdr and update
2557 * the appropriate kstats.
2560 arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
2562 ASSERT(arc_can_share(hdr, buf));
2563 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
2564 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
2567 * Start sharing the data buffer. We transfer the
2568 * refcount ownership to the hdr since it always owns
2569 * the refcount whenever an arc_buf_t is shared.
2571 refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, buf, hdr);
2572 hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf));
2573 abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd,
2574 HDR_ISTYPE_METADATA(hdr));
2575 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
2576 buf->b_flags |= ARC_BUF_FLAG_SHARED;
2579 * Since we've transferred ownership to the hdr we need
2580 * to increment its compressed and uncompressed kstats and
2581 * decrement the overhead size.
2583 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr));
2584 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
2585 ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf));
2589 arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
2591 ASSERT(arc_buf_is_shared(buf));
2592 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
2593 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
2596 * We are no longer sharing this buffer so we need
2597 * to transfer its ownership to the rightful owner.
2599 refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, hdr, buf);
2600 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
2601 abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd);
2602 abd_put(hdr->b_l1hdr.b_pabd);
2603 hdr->b_l1hdr.b_pabd = NULL;
2604 buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
2607 * Since the buffer is no longer shared between
2608 * the arc buf and the hdr, count it as overhead.
2610 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr));
2611 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
2612 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
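/*
 * Added commentary (not in the original source): arc_share_buf() and
 * arc_unshare_buf() mirror each other. Both require the hash lock to be
 * held or the hdr to be undiscoverable; the former moves ownership of the
 * buffer's bytes into the hdr's b_pabd, the latter hands it back to the
 * arc_buf_t before the buf is freed or given private data.
 */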
2616 * Remove an arc_buf_t from the hdr's buf list and return the last
2617 * arc_buf_t on the list. If no buffers remain on the list then return NULL.
2621 arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf)
2623 ASSERT(HDR_HAS_L1HDR(hdr));
2624 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
2626 arc_buf_t **bufp = &hdr->b_l1hdr.b_buf;
2627 arc_buf_t *lastbuf = NULL;
2630 * Remove the buf from the hdr list and locate the last
2631 * remaining buffer on the list.
2633 while (*bufp != NULL) {
2635 *bufp = buf->b_next;
2638 * If we've removed a buffer in the middle of
2639 * the list then update the lastbuf and update bufp.
2642 if (*bufp != NULL) {
2644 bufp = &(*bufp)->b_next;
2648 ASSERT3P(lastbuf, !=, buf);
2649 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL);
2650 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL);
2651 IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf));
2657 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's buf list, then free the arc_buf_t.
2661 arc_buf_destroy_impl(arc_buf_t *buf)
2663 arc_buf_hdr_t *hdr = buf->b_hdr;
2666 * Free up the data associated with the buf but only if we're not
2667 * sharing this with the hdr. If we are sharing it with the hdr, the
2668 * hdr is responsible for doing the free.
2670 if (buf->b_data != NULL) {
2672 * We're about to change the hdr's b_flags. We must either
2673 * hold the hash_lock or be undiscoverable.
2675 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
2677 arc_cksum_verify(buf);
2678 arc_buf_unwatch(buf);
2680 if (arc_buf_is_shared(buf)) {
2681 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
2683 uint64_t size = arc_buf_size(buf);
2684 arc_free_data_buf(hdr, buf->b_data, size, buf);
2685 ARCSTAT_INCR(arcstat_overhead_size, -size);
2689 ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
2690 hdr->b_l1hdr.b_bufcnt -= 1;
2693 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);
2695 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) {
2697 * If the current arc_buf_t is sharing its data buffer with the
2698 * hdr, then reassign the hdr's b_pabd to share it with the new
2699 * buffer at the end of the list. The shared buffer is always
2700 * the last one on the hdr's buffer list.
2702 * There is an equivalent case for compressed bufs, but since
2703 * they aren't guaranteed to be the last buf in the list and
2704 * that is an exceedingly rare case, we just allow that space to be
2705 * wasted temporarily.
2707 if (lastbuf != NULL) {
2708 /* Only one buf can be shared at once */
2709 VERIFY(!arc_buf_is_shared(lastbuf));
2710 /* hdr is uncompressed so can't have compressed buf */
2711 VERIFY(!ARC_BUF_COMPRESSED(lastbuf));
2713 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
2714 arc_hdr_free_pabd(hdr);
2717 * We must set up a new shared block between the
2718 * last buffer and the hdr. The data would have
2719 * been allocated by the arc buf so we need to transfer
2720 * ownership to the hdr since it's now being shared.
2722 arc_share_buf(hdr, lastbuf);
2724 } else if (HDR_SHARED_DATA(hdr)) {
2726 * Uncompressed shared buffers are always at the end
2727 * of the list. Compressed buffers don't have the
2728 * same requirements. This makes it hard to
2729 * simply assert that the lastbuf is shared so
2730 * we rely on the hdr's compression flags to determine
2731 * if we have a compressed, shared buffer.
2733 ASSERT3P(lastbuf, !=, NULL);
2734 ASSERT(arc_buf_is_shared(lastbuf) ||
2735 HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF);
2739 * Free the checksum if we're removing the last uncompressed buf from this hdr.
2742 if (!arc_hdr_has_uncompressed_buf(hdr)) {
2743 arc_cksum_free(hdr);
2746 /* clean up the buf */
2748 kmem_cache_free(buf_cache, buf);
2752 arc_hdr_alloc_pabd(arc_buf_hdr_t *hdr)
2754 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
2755 ASSERT(HDR_HAS_L1HDR(hdr));
2756 ASSERT(!HDR_SHARED_DATA(hdr));
2758 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
2759 hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr);
2760 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
2761 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
2763 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr));
2764 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
2768 arc_hdr_free_pabd(arc_buf_hdr_t *hdr)
2770 ASSERT(HDR_HAS_L1HDR(hdr));
2771 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
2774 * If the hdr is currently being written to the l2arc then
2775 * we defer freeing the data by adding it to the l2arc_free_on_write
2776 * list. The l2arc will free the data once it's finished
2777 * writing it to the l2arc device.
2779 if (HDR_L2_WRITING(hdr)) {
2780 arc_hdr_free_on_write(hdr);
2781 ARCSTAT_BUMP(arcstat_l2_free_on_write);
2783 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd,
2784 arc_hdr_size(hdr), hdr);
2786 hdr->b_l1hdr.b_pabd = NULL;
2787 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
2789 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr));
2790 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
2793 static arc_buf_hdr_t *
2794 arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize,
2795 enum zio_compress compression_type, arc_buf_contents_t type)
2799 VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA);
2801 hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE);
2802 ASSERT(HDR_EMPTY(hdr));
2803 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
2804 HDR_SET_PSIZE(hdr, psize);
2805 HDR_SET_LSIZE(hdr, lsize);
2809 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR);
2810 arc_hdr_set_compress(hdr, compression_type);
2812 hdr->b_l1hdr.b_state = arc_anon;
2813 hdr->b_l1hdr.b_arc_access = 0;
2814 hdr->b_l1hdr.b_bufcnt = 0;
2815 hdr->b_l1hdr.b_buf = NULL;
2818 * Allocate the hdr's buffer. This will contain either
2819 * the compressed or uncompressed data depending on the block
2820 * it references and compressed arc enablement.
2822 arc_hdr_alloc_pabd(hdr);
2823 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
2829 * Transition between the two allocation states for the arc_buf_hdr struct.
2830 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without
2831 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller
2832 * version is used when a cache buffer is only in the L2ARC in order to reduce memory usage.
2835 static arc_buf_hdr_t *
2836 arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new)
2838 arc_buf_hdr_t *nhdr;
2839 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
2841 ASSERT(HDR_HAS_L2HDR(hdr));
2842 ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) ||
2843 (old == hdr_l2only_cache && new == hdr_full_cache));
2845 nhdr = kmem_cache_alloc(new, KM_PUSHPAGE);
2847 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
2848 buf_hash_remove(hdr);
2850 bcopy(hdr, nhdr, HDR_L2ONLY_SIZE);
2852 if (new == hdr_full_cache) {
2853 arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR);
2855 * arc_access and arc_change_state need to be aware that a
2856 * header has just come out of L2ARC, so we set its state to
2857 * l2c_only even though it's about to change.
2859 nhdr->b_l1hdr.b_state = arc_l2c_only;
2861 /* Verify previous threads set to NULL before freeing */
2862 ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL);
2864 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
2865 ASSERT0(hdr->b_l1hdr.b_bufcnt);
2866 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
2869 * If we've reached here, we must have been called from
2870 * arc_evict_hdr(), as such we should have already been
2871 * removed from any ghost list we were previously on
2872 * (which protects us from racing with arc_evict_state),
2873 * thus no locking is needed during this check.
2875 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
2878 * A buffer must not be moved into the arc_l2c_only
2879 * state if it's not finished being written out to the
2880 * l2arc device. Otherwise, the b_l1hdr.b_pabd field
2881 * might try to be accessed, even though it was removed.
2883 VERIFY(!HDR_L2_WRITING(hdr));
2884 VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL);
2886 arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR);
2889 * The header has been reallocated so we need to re-insert it into any lists it was a part of.
2892 (void) buf_hash_insert(nhdr, NULL);
2894 ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node));
2896 mutex_enter(&dev->l2ad_mtx);
2899 * We must place the realloc'ed header back into the list at
2900 * the same spot. Otherwise, if it's placed earlier in the list,
2901 * l2arc_write_buffers() could find it during the function's
2902 * write phase, and try to write it out to the l2arc.
2904 list_insert_after(&dev->l2ad_buflist, hdr, nhdr);
2905 list_remove(&dev->l2ad_buflist, hdr);
2907 mutex_exit(&dev->l2ad_mtx);
2910 * Since we're using the pointer address as the tag when
2911 * incrementing and decrementing the l2ad_alloc refcount, we
2912 * must remove the old pointer (that we're about to destroy) and
2913 * add the new pointer to the refcount. Otherwise we'd remove
2914 * the wrong pointer address when calling arc_hdr_destroy() later.
2917 (void) refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr);
2918 (void) refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(nhdr), nhdr);
2920 buf_discard_identity(hdr);
2921 kmem_cache_free(old, hdr);
2927 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller.
2928 * The buf is returned thawed since we expect the consumer to modify it.
2931 arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type, int32_t size)
2933 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size,
2934 ZIO_COMPRESS_OFF, type);
2935 ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));
2937 arc_buf_t *buf = NULL;
2938 VERIFY0(arc_buf_alloc_impl(hdr, tag, B_FALSE, B_FALSE, &buf));
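/*
 * Illustrative sketch (added commentary, not in the original source): a
 * consumer that wants a scratch metadata buffer and releases it when done
 * might do:
 *
 *	arc_buf_t *buf = arc_alloc_buf(spa, FTAG, ARC_BUFC_METADATA, lsize);
 *	...use buf->b_data...
 *	arc_buf_destroy(buf, FTAG);
 */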
2945 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this
2946 * for bufs containing metadata.
2949 arc_alloc_compressed_buf(spa_t *spa, void *tag, uint64_t psize, uint64_t lsize,
2950 enum zio_compress compression_type)
2952 ASSERT3U(lsize, >, 0);
2953 ASSERT3U(lsize, >=, psize);
2954 ASSERT(compression_type > ZIO_COMPRESS_OFF);
2955 ASSERT(compression_type < ZIO_COMPRESS_FUNCTIONS);
2957 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize,
2958 compression_type, ARC_BUFC_DATA);
2959 ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));
2961 arc_buf_t *buf = NULL;
2962 VERIFY0(arc_buf_alloc_impl(hdr, tag, B_TRUE, B_FALSE, &buf));
2964 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
2966 if (!arc_buf_is_shared(buf)) {
2968 * To ensure that the hdr has the correct data in it if we call
2969 * arc_decompress() on this buf before it's been written to
2970 * disk, it's easiest if we just set up sharing between the buf and the hdr.
2973 ASSERT(!abd_is_linear(hdr->b_l1hdr.b_pabd));
2974 arc_hdr_free_pabd(hdr);
2975 arc_share_buf(hdr, buf);
2982 arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr)
2984 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr;
2985 l2arc_dev_t *dev = l2hdr->b_dev;
2986 uint64_t asize = arc_hdr_size(hdr);
2988 ASSERT(MUTEX_HELD(&dev->l2ad_mtx));
2989 ASSERT(HDR_HAS_L2HDR(hdr));
2991 list_remove(&dev->l2ad_buflist, hdr);
2993 ARCSTAT_INCR(arcstat_l2_asize, -asize);
2994 ARCSTAT_INCR(arcstat_l2_size, -HDR_GET_LSIZE(hdr));
2996 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0);
2998 (void) refcount_remove_many(&dev->l2ad_alloc, asize, hdr);
2999 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
3003 arc_hdr_destroy(arc_buf_hdr_t *hdr)
3005 if (HDR_HAS_L1HDR(hdr)) {
3006 ASSERT(hdr->b_l1hdr.b_buf == NULL ||
3007 hdr->b_l1hdr.b_bufcnt > 0);
3008 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
3009 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
3011 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
3012 ASSERT(!HDR_IN_HASH_TABLE(hdr));
3014 if (!HDR_EMPTY(hdr))
3015 buf_discard_identity(hdr);
3017 if (HDR_HAS_L2HDR(hdr)) {
3018 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
3019 boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx);
3022 mutex_enter(&dev->l2ad_mtx);
3025 * Even though we checked this conditional above, we
3026 * need to check this again now that we have the
3027 * l2ad_mtx. This is because we could be racing with
3028 * another thread calling l2arc_evict() which might have
3029 * destroyed this header's L2 portion as we were waiting
3030 * to acquire the l2ad_mtx. If that happens, we don't
3031 * want to re-destroy the header's L2 portion.
3033 if (HDR_HAS_L2HDR(hdr))
3034 arc_hdr_l2hdr_destroy(hdr);
3037 mutex_exit(&dev->l2ad_mtx);
3040 if (HDR_HAS_L1HDR(hdr)) {
3041 arc_cksum_free(hdr);
3043 while (hdr->b_l1hdr.b_buf != NULL)
3044 arc_buf_destroy_impl(hdr->b_l1hdr.b_buf);
3046 if (hdr->b_l1hdr.b_pabd != NULL)
3047 arc_hdr_free_pabd(hdr);
3050 ASSERT3P(hdr->b_hash_next, ==, NULL);
3051 if (HDR_HAS_L1HDR(hdr)) {
3052 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
3053 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
3054 kmem_cache_free(hdr_full_cache, hdr);
3056 kmem_cache_free(hdr_l2only_cache, hdr);
3061 arc_buf_destroy(arc_buf_t *buf, void* tag)
3063 arc_buf_hdr_t *hdr = buf->b_hdr;
3064 kmutex_t *hash_lock = HDR_LOCK(hdr);
3066 if (hdr->b_l1hdr.b_state == arc_anon) {
3067 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
3068 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
3069 VERIFY0(remove_reference(hdr, NULL, tag));
3070 arc_hdr_destroy(hdr);
3074 mutex_enter(hash_lock);
3075 ASSERT3P(hdr, ==, buf->b_hdr);
3076 ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
3077 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
3078 ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon);
3079 ASSERT3P(buf->b_data, !=, NULL);
3081 (void) remove_reference(hdr, hash_lock, tag);
3082 arc_buf_destroy_impl(buf);
3083 mutex_exit(hash_lock);
3087 * Evict the arc_buf_hdr that is provided as a parameter. The resultant
3088 * state of the header is dependent on its state prior to entering this
3089 * function. The following transitions are possible:
3091 * - arc_mru -> arc_mru_ghost
3092 * - arc_mfu -> arc_mfu_ghost
3093 * - arc_mru_ghost -> arc_l2c_only
3094 * - arc_mru_ghost -> deleted
3095 * - arc_mfu_ghost -> arc_l2c_only
3096 * - arc_mfu_ghost -> deleted
3099 arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
3101 arc_state_t *evicted_state, *state;
3102 int64_t bytes_evicted = 0;
3104 ASSERT(MUTEX_HELD(hash_lock));
3105 ASSERT(HDR_HAS_L1HDR(hdr));
3107 state = hdr->b_l1hdr.b_state;
3108 if (GHOST_STATE(state)) {
3109 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
3110 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
3113 * l2arc_write_buffers() relies on a header's L1 portion
3114 * (i.e. its b_pabd field) during its write phase.
3115 * Thus, we cannot push a header onto the arc_l2c_only
3116 * state (removing its L1 piece) until the header is
3117 * done being written to the l2arc.
3119 if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) {
3120 ARCSTAT_BUMP(arcstat_evict_l2_skip);
3121 return (bytes_evicted);
3124 ARCSTAT_BUMP(arcstat_deleted);
3125 bytes_evicted += HDR_GET_LSIZE(hdr);
3127 DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr);
3129 if (HDR_HAS_L2HDR(hdr)) {
3130 ASSERT(hdr->b_l1hdr.b_pabd == NULL);
3132 * This buffer is cached on the 2nd Level ARC;
3133 * don't destroy the header.
3135 arc_change_state(arc_l2c_only, hdr, hash_lock);
3137 * dropping from L1+L2 cached to L2-only,
3138 * realloc to remove the L1 header.
3140 hdr = arc_hdr_realloc(hdr, hdr_full_cache,
3143 arc_change_state(arc_anon, hdr, hash_lock);
3144 arc_hdr_destroy(hdr);
3146 return (bytes_evicted);
3149 ASSERT(state == arc_mru || state == arc_mfu);
3150 evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost;
3152 /* prefetch buffers have a minimum lifespan */
3153 if (HDR_IO_IN_PROGRESS(hdr) ||
3154 ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) &&
3155 ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access <
3156 arc_min_prefetch_lifespan)) {
3157 ARCSTAT_BUMP(arcstat_evict_skip);
3158 return (bytes_evicted);
3161 ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
3162 while (hdr->b_l1hdr.b_buf) {
3163 arc_buf_t *buf = hdr->b_l1hdr.b_buf;
3164 if (!mutex_tryenter(&buf->b_evict_lock)) {
3165 ARCSTAT_BUMP(arcstat_mutex_miss);
3168 if (buf->b_data != NULL)
3169 bytes_evicted += HDR_GET_LSIZE(hdr);
3170 mutex_exit(&buf->b_evict_lock);
3171 arc_buf_destroy_impl(buf);
3174 if (HDR_HAS_L2HDR(hdr)) {
3175 ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr));
3177 if (l2arc_write_eligible(hdr->b_spa, hdr)) {
3178 ARCSTAT_INCR(arcstat_evict_l2_eligible,
3179 HDR_GET_LSIZE(hdr));
3181 ARCSTAT_INCR(arcstat_evict_l2_ineligible,
3182 HDR_GET_LSIZE(hdr));
3186 if (hdr->b_l1hdr.b_bufcnt == 0) {
3187 arc_cksum_free(hdr);
3189 bytes_evicted += arc_hdr_size(hdr);
3192 * If this hdr is being evicted and has a compressed
3193 * buffer then we discard it here before we change states.
3194 * This ensures that the accounting is updated correctly
3195 * in arc_free_data_impl().
3197 arc_hdr_free_pabd(hdr);
3199 arc_change_state(evicted_state, hdr, hash_lock);
3200 ASSERT(HDR_IN_HASH_TABLE(hdr));
3201 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
3202 DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr);
3205 return (bytes_evicted);
3209 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker,
3210 uint64_t spa, int64_t bytes)
3212 multilist_sublist_t *mls;
3213 uint64_t bytes_evicted = 0;
3215 kmutex_t *hash_lock;
3216 int evict_count = 0;
3218 ASSERT3P(marker, !=, NULL);
3219 IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);
3221 mls = multilist_sublist_lock(ml, idx);
3223 for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL;
3224 hdr = multilist_sublist_prev(mls, marker)) {
3225 if ((bytes != ARC_EVICT_ALL && bytes_evicted >= bytes) ||
3226 (evict_count >= zfs_arc_evict_batch_limit))
3230 * To keep our iteration location, move the marker
3231 * forward. Since we're not holding hdr's hash lock, we
3232 * must be very careful and not remove 'hdr' from the
3233 * sublist. Otherwise, other consumers might mistake the
3234 * 'hdr' as not being on a sublist when they call the
3235 * multilist_link_active() function (they all rely on
3236 * the hash lock protecting concurrent insertions and
3237 * removals). multilist_sublist_move_forward() was
3238 * specifically implemented to ensure this is the case
3239 * (only 'marker' will be removed and re-inserted).
3241 multilist_sublist_move_forward(mls, marker);
3244 * The only case where the b_spa field should ever be
3245 * zero is the marker headers inserted by
3246 * arc_evict_state(). It's possible for multiple threads
3247 * to be calling arc_evict_state() concurrently (e.g.
3248 * dsl_pool_close() and zio_inject_fault()), so we must
3249 * skip any markers we see from these other threads.
3251 if (hdr->b_spa == 0)
3254 /* we're only interested in evicting buffers of a certain spa */
3255 if (spa != 0 && hdr->b_spa != spa) {
3256 ARCSTAT_BUMP(arcstat_evict_skip);
3260 hash_lock = HDR_LOCK(hdr);
3263 * We aren't calling this function from any code path
3264 * that would already be holding a hash lock, so we're
3265 * asserting on this assumption to be defensive in case
3266 * this ever changes. Without this check, it would be
3267 * possible to incorrectly increment arcstat_mutex_miss
3268 * below (e.g. if the code changed such that we called
3269 * this function with a hash lock held).
3271 ASSERT(!MUTEX_HELD(hash_lock));
3273 if (mutex_tryenter(hash_lock)) {
3274 uint64_t evicted = arc_evict_hdr(hdr, hash_lock);
3275 mutex_exit(hash_lock);
3277 bytes_evicted += evicted;
3280 * If evicted is zero, arc_evict_hdr() must have
3281 * decided to skip this header; don't increment
3282 * evict_count in this case.
3288 * If arc_size isn't overflowing, signal any
3289 * threads that might happen to be waiting.
3291 * For each header evicted, we wake up a single
3292 * thread. If we used cv_broadcast, we could
3293 * wake up "too many" threads causing arc_size
3294 * to significantly overflow arc_c; since
3295 * arc_get_data_impl() doesn't check for overflow
3296 * when it's woken up (it doesn't because it's
3297 * possible for the ARC to be overflowing while
3298 * full of un-evictable buffers, and the
3299 * function should proceed in this case).
3301 * If threads are left sleeping, due to not
3302 * using cv_broadcast, they will be woken up
3303 * just before arc_reclaim_thread() sleeps.
3305 mutex_enter(&arc_reclaim_lock);
3306 if (!arc_is_overflowing())
3307 cv_signal(&arc_reclaim_waiters_cv);
3308 mutex_exit(&arc_reclaim_lock);
3310 ARCSTAT_BUMP(arcstat_mutex_miss);
3314 multilist_sublist_unlock(mls);
3316 return (bytes_evicted);
3320 * Evict buffers from the given arc state, until we've removed the
3321 * specified number of bytes. Move the removed buffers to the
3322 * appropriate evict state.
3324 * This function makes a "best effort". It skips over any buffers
3325 * it can't get a hash_lock on, and so may not catch all candidates.
3326 * It may also return without evicting as much space as requested.
3328 * If bytes is specified using the special value ARC_EVICT_ALL, this
3329 * will evict all available (i.e. unlocked and evictable) buffers from
3330 * the given arc state; which is used by arc_flush().
3333 arc_evict_state(arc_state_t *state, uint64_t spa, int64_t bytes,
3334 arc_buf_contents_t type)
3336 uint64_t total_evicted = 0;
3337 multilist_t *ml = state->arcs_list[type];
3339 arc_buf_hdr_t **markers;
3342 IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);
3344 num_sublists = multilist_get_num_sublists(ml);
3347 * If we've tried to evict from each sublist, made some
3348 * progress, but still have not hit the target number of bytes
3349 * to evict, we want to keep trying. The markers allow us to
3350 * pick up where we left off for each individual sublist, rather
3351 * than starting from the tail each time.
3353 markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP);
3354 for (i = 0; i < num_sublists; i++) {
3355 multilist_sublist_t *mls;
3357 markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP);
3360 * A b_spa of 0 is used to indicate that this header is
3361 * a marker. This fact is used in arc_adjust_type() and
3362 * arc_evict_state_impl().
3364 markers[i]->b_spa = 0;
3366 mls = multilist_sublist_lock(ml, i);
3367 multilist_sublist_insert_tail(mls, markers[i]);
3368 multilist_sublist_unlock(mls);
3372 * While we haven't hit our target number of bytes to evict, or
3373 * we're evicting all available buffers.
3375 while (total_evicted < bytes || bytes == ARC_EVICT_ALL) {
3376 int sublist_idx = multilist_get_random_index(ml);
3377 uint64_t scan_evicted = 0;
3380 * Try to reduce pinned dnodes with a floor of arc_dnode_limit.
3381 * Request that 10% of the LRUs be scanned by the superblock shrinker.
3384 if (type == ARC_BUFC_DATA && arc_dnode_size > arc_dnode_limit)
3385 arc_prune_async((arc_dnode_size - arc_dnode_limit) /
3386 sizeof (dnode_t) / zfs_arc_dnode_reduce_percent);
3389 * Start eviction using a randomly selected sublist,
3390 * this is to try and evenly balance eviction across all
3391 * sublists. Always starting at the same sublist
3392 * (e.g. index 0) would cause evictions to favor certain
3393 * sublists over others.
3395 for (i = 0; i < num_sublists; i++) {
3396 uint64_t bytes_remaining;
3397 uint64_t bytes_evicted;
3399 if (bytes == ARC_EVICT_ALL)
3400 bytes_remaining = ARC_EVICT_ALL;
3401 else if (total_evicted < bytes)
3402 bytes_remaining = bytes - total_evicted;
3406 bytes_evicted = arc_evict_state_impl(ml, sublist_idx,
3407 markers[sublist_idx], spa, bytes_remaining);
3409 scan_evicted += bytes_evicted;
3410 total_evicted += bytes_evicted;
3412 /* we've reached the end, wrap to the beginning */
3413 if (++sublist_idx >= num_sublists)
3418 * If we didn't evict anything during this scan, we have
3419 * no reason to believe we'll evict more during another
3420 * scan, so break the loop.
3422 if (scan_evicted == 0) {
3423 /* This isn't possible, let's make that obvious */
3424 ASSERT3S(bytes, !=, 0);
3427 * When bytes is ARC_EVICT_ALL, the only way to
3428 * break the loop is when scan_evicted is zero.
3429 * In that case, we actually have evicted enough,
3430 * so we don't want to increment the kstat.
3432 if (bytes != ARC_EVICT_ALL) {
3433 ASSERT3S(total_evicted, <, bytes);
3434 ARCSTAT_BUMP(arcstat_evict_not_enough);
3441 for (i = 0; i < num_sublists; i++) {
3442 multilist_sublist_t *mls = multilist_sublist_lock(ml, i);
3443 multilist_sublist_remove(mls, markers[i]);
3444 multilist_sublist_unlock(mls);
3446 kmem_cache_free(hdr_full_cache, markers[i]);
3448 kmem_free(markers, sizeof (*markers) * num_sublists);
3450 return (total_evicted);
3454 * Flush all "evictable" data of the given type from the arc state
3455 * specified. This will not evict any "active" buffers (i.e. referenced).
3457 * When 'retry' is set to B_FALSE, the function will make a single pass
3458 * over the state and evict any buffers that it can. Since it doesn't
3459 * continually retry the eviction, it might end up leaving some buffers
3460 * in the ARC due to lock misses.
3462 * When 'retry' is set to B_TRUE, the function will continually retry the
3463 * eviction until *all* evictable buffers have been removed from the
3464 * state. As a result, if concurrent insertions into the state are
3465 * allowed (e.g. if the ARC isn't shutting down), this function might
3466 * wind up in an infinite loop, continually trying to evict buffers.
3469 arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
3472 uint64_t evicted = 0;
3474 while (refcount_count(&state->arcs_esize[type]) != 0) {
3475 evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type);
3485 * Helper function for arc_prune_async(); it is responsible for safely
3486 * handling the execution of a registered arc_prune_func_t.
3489 arc_prune_task(void *ptr)
3491 arc_prune_t *ap = (arc_prune_t *)ptr;
3492 arc_prune_func_t *func = ap->p_pfunc;
3495 func(ap->p_adjust, ap->p_private);
3497 refcount_remove(&ap->p_refcnt, func);
3501 * Notify registered consumers they must drop holds on a portion of the ARC
3502 * buffers they reference. This provides a mechanism to ensure the ARC can
3503 * honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. This
3504 * is analogous to dnlc_reduce_cache() but more generic.
3506 * This operation is performed asynchronously so it may be safely called
3507 * in the context of the arc_reclaim_thread(). A reference is taken here
3508 * for each registered arc_prune_t and the arc_prune_task() is responsible
3509 * for releasing it once the registered arc_prune_func_t has completed.
3512 arc_prune_async(int64_t adjust)
3516 mutex_enter(&arc_prune_mtx);
3517 for (ap = list_head(&arc_prune_list); ap != NULL;
3518 ap = list_next(&arc_prune_list, ap)) {
3520 if (refcount_count(&ap->p_refcnt) >= 2)
3523 refcount_add(&ap->p_refcnt, ap->p_pfunc);
3524 ap->p_adjust = adjust;
3525 if (taskq_dispatch(arc_prune_taskq, arc_prune_task,
3526 ap, TQ_SLEEP) == TASKQID_INVALID) {
3527 refcount_remove(&ap->p_refcnt, ap->p_pfunc);
3530 ARCSTAT_BUMP(arcstat_prune);
3532 mutex_exit(&arc_prune_mtx);
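/*
 * Illustrative sketch (added commentary, not in the original source;
 * assumes the registration interface declared in arc.h): a consumer
 * supplies a callback matching arc_prune_func_t and registers it so that
 * arc_prune_async() can ask it to shed holds:
 *
 *	static void
 *	my_prune_cb(int64_t nr_to_scan, void *priv)	// hypothetical
 *	{
 *		...drop up to nr_to_scan cached objects pinning ARC bufs...
 *	}
 *
 *	arc_prune_t *ap = arc_add_prune_callback(my_prune_cb, priv);
 *	...
 *	arc_remove_prune_callback(ap);
 */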
3536 * Evict the specified number of bytes from the state specified,
3537 * restricting eviction to the spa and type given. This function
3538 * prevents us from trying to evict more from a state's list than
3539 * is "evictable", and to skip evicting altogether when passed a
3540 * negative value for "bytes". In contrast, arc_evict_state() will
3541 * evict everything it can, when passed a negative value for "bytes".
3544 arc_adjust_impl(arc_state_t *state, uint64_t spa, int64_t bytes,
3545 arc_buf_contents_t type)
3549 if (bytes > 0 && refcount_count(&state->arcs_esize[type]) > 0) {
3550 delta = MIN(refcount_count(&state->arcs_esize[type]), bytes);
3551 return (arc_evict_state(state, spa, delta, type));
3558 * The goal of this function is to evict enough meta data buffers from the
3559 * ARC in order to enforce the arc_meta_limit. Achieving this is slightly
3560 * more complicated than it appears because it is common for data buffers
3561 * to have holds on meta data buffers. In addition, dnode meta data buffers
3562 * will be held by the dnodes in the block preventing them from being freed.
3563 * This means we can't simply traverse the ARC and expect to always find
3564 * enough unheld meta data buffers to release.
3566 * Therefore, this function has been updated to make alternating passes
3567 * over the ARC releasing data buffers and then newly unheld meta data
3568 * buffers. This ensures forward progress is maintained and arc_meta_used
3569 * will decrease. Normally this is sufficient, but if required the ARC
3570 * will call the registered prune callbacks causing dentry and inodes to
3571 * be dropped from the VFS cache. This will make dnode meta data buffers
3572 * available for reclaim.
3575 arc_adjust_meta_balanced(void)
3577 int64_t delta, prune = 0, adjustmnt;
3578 uint64_t total_evicted = 0;
3579 arc_buf_contents_t type = ARC_BUFC_DATA;
3580 int restarts = MAX(zfs_arc_meta_adjust_restarts, 0);
3584 * This slightly differs from the way we evict from the mru in
3585 * arc_adjust because we don't have a "target" value (i.e. no
3586 * "meta" arc_p). As a result, I think we can completely
3587 * cannibalize the metadata in the MRU before we evict the
3588 * metadata from the MFU. I think we probably need to implement a
3589 * "metadata arc_p" value to do this properly.
3591 adjustmnt = arc_meta_used - arc_meta_limit;
3593 if (adjustmnt > 0 && refcount_count(&arc_mru->arcs_esize[type]) > 0) {
3594 delta = MIN(refcount_count(&arc_mru->arcs_esize[type]),
3596 total_evicted += arc_adjust_impl(arc_mru, 0, delta, type);
3601 * We can't afford to recalculate adjustmnt here. If we do,
3602 * new metadata buffers can sneak into the MRU or ANON lists,
3603 * thus penalizing the MFU metadata. Although the fudge factor is
3604 * small, it has been empirically shown to be significant for
3605 * certain workloads (e.g. creating many empty directories). As
3606 * such, we use the original calculation for adjustmnt, and
3607 * simply decrement the amount of data evicted from the MRU.
3610 if (adjustmnt > 0 && refcount_count(&arc_mfu->arcs_esize[type]) > 0) {
3611 delta = MIN(refcount_count(&arc_mfu->arcs_esize[type]),
3613 total_evicted += arc_adjust_impl(arc_mfu, 0, delta, type);
3616 adjustmnt = arc_meta_used - arc_meta_limit;
3618 if (adjustmnt > 0 &&
3619 refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) {
3620 delta = MIN(adjustmnt,
3621 refcount_count(&arc_mru_ghost->arcs_esize[type]));
3622 total_evicted += arc_adjust_impl(arc_mru_ghost, 0, delta, type);
3626 if (adjustmnt > 0 &&
3627 refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) {
3628 delta = MIN(adjustmnt,
3629 refcount_count(&arc_mfu_ghost->arcs_esize[type]));
3630 total_evicted += arc_adjust_impl(arc_mfu_ghost, 0, delta, type);
3634 * If after attempting to make the requested adjustment to the ARC
3635 * the meta limit is still being exceeded then request that the
3636 * higher layers drop some cached objects which have holds on ARC
3637 * meta buffers. Requests to the upper layers will be made with
3638 * increasingly large scan sizes until the ARC is below the limit.
3640 if (arc_meta_used > arc_meta_limit) {
3641 if (type == ARC_BUFC_DATA) {
3642 type = ARC_BUFC_METADATA;
3644 type = ARC_BUFC_DATA;
3646 if (zfs_arc_meta_prune) {
3647 prune += zfs_arc_meta_prune;
3648 arc_prune_async(prune);
3657 return (total_evicted);
3661 * Evict metadata buffers from the cache, such that arc_meta_used is
3662 * capped by the arc_meta_limit tunable.
3665 arc_adjust_meta_only(void)
3667 uint64_t total_evicted = 0;
3671 * If we're over the meta limit, we want to evict enough
3672 * metadata to get back under the meta limit. We don't want to
3673 * evict so much that we drop the MRU below arc_p, though. If
3674 * we're over the meta limit more than we're over arc_p, we
3675 * evict some from the MRU here, and some from the MFU below.
3677 target = MIN((int64_t)(arc_meta_used - arc_meta_limit),
3678 (int64_t)(refcount_count(&arc_anon->arcs_size) +
3679 refcount_count(&arc_mru->arcs_size) - arc_p));
3681 total_evicted += arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
3684 * Similar to the above, we want to evict enough bytes to get us
3685 * below the meta limit, but not so much as to drop us below the
3686 * space allotted to the MFU (which is defined as arc_c - arc_p).
3688 target = MIN((int64_t)(arc_meta_used - arc_meta_limit),
3689 (int64_t)(refcount_count(&arc_mfu->arcs_size) - (arc_c - arc_p)));
3691 total_evicted += arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
3693 return (total_evicted);
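/*
 * Worked example for the two targets above (illustrative sizes): if
 * arc_meta_used exceeds arc_meta_limit by 300MB while anon + mru exceed
 * arc_p by only 100MB, the MRU pass is capped at 100MB of metadata; the
 * MFU pass may then evict up to the remaining overage, but never so much
 * that the MFU drops below its share of the cache (arc_c - arc_p).
 */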
3697 arc_adjust_meta(void)
3699 if (zfs_arc_meta_strategy == ARC_STRATEGY_META_ONLY)
3700 return (arc_adjust_meta_only());
3702 return (arc_adjust_meta_balanced());
3706 * Return the type of the oldest buffer in the given arc state
3708 * This function will select a random sublist of type ARC_BUFC_DATA and
3709 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist
3710 * is compared, and the type which contains the "older" buffer will be returned.
3713 static arc_buf_contents_t
3714 arc_adjust_type(arc_state_t *state)
3716 multilist_t *data_ml = state->arcs_list[ARC_BUFC_DATA];
3717 multilist_t *meta_ml = state->arcs_list[ARC_BUFC_METADATA];
3718 int data_idx = multilist_get_random_index(data_ml);
3719 int meta_idx = multilist_get_random_index(meta_ml);
3720 multilist_sublist_t *data_mls;
3721 multilist_sublist_t *meta_mls;
3722 arc_buf_contents_t type;
3723 arc_buf_hdr_t *data_hdr;
3724 arc_buf_hdr_t *meta_hdr;
3727 * We keep the sublist lock until we're finished, to prevent
3728 * the headers from being destroyed via arc_evict_state().
3730 data_mls = multilist_sublist_lock(data_ml, data_idx);
3731 meta_mls = multilist_sublist_lock(meta_ml, meta_idx);
3734 * These two loops are to ensure we skip any markers that
3735 * might be at the tail of the lists due to arc_evict_state().
3738 for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL;
3739 data_hdr = multilist_sublist_prev(data_mls, data_hdr)) {
3740 if (data_hdr->b_spa != 0)
3744 for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL;
3745 meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) {
3746 if (meta_hdr->b_spa != 0)
3750 if (data_hdr == NULL && meta_hdr == NULL) {
3751 type = ARC_BUFC_DATA;
3752 } else if (data_hdr == NULL) {
3753 ASSERT3P(meta_hdr, !=, NULL);
3754 type = ARC_BUFC_METADATA;
3755 } else if (meta_hdr == NULL) {
3756 ASSERT3P(data_hdr, !=, NULL);
3757 type = ARC_BUFC_DATA;
3759 ASSERT3P(data_hdr, !=, NULL);
3760 ASSERT3P(meta_hdr, !=, NULL);
3762 /* The headers can't be on the sublist without an L1 header */
3763 ASSERT(HDR_HAS_L1HDR(data_hdr));
3764 ASSERT(HDR_HAS_L1HDR(meta_hdr));
3766 if (data_hdr->b_l1hdr.b_arc_access <
3767 meta_hdr->b_l1hdr.b_arc_access) {
3768 type = ARC_BUFC_DATA;
3770 type = ARC_BUFC_METADATA;
3774 multilist_sublist_unlock(meta_mls);
3775 multilist_sublist_unlock(data_mls);
3781 * Evict buffers from the cache, such that arc_size is capped by arc_c.
3786 uint64_t total_evicted = 0;
3791 * If we're over arc_meta_limit, we want to correct that before
3792 * potentially evicting data buffers below.
3794 total_evicted += arc_adjust_meta();
3799 * If we're over the target cache size, we want to evict enough
3800 * from the list to get back to our target size. We don't want
3801 * to evict too much from the MRU, such that it drops below
3802 * arc_p. So, if we're over our target cache size more than
3803 * the MRU is over arc_p, we'll evict enough to get back to
3804 * arc_p here, and then evict more from the MFU below.
3806 target = MIN((int64_t)(arc_size - arc_c),
3807 (int64_t)(refcount_count(&arc_anon->arcs_size) +
3808 refcount_count(&arc_mru->arcs_size) + arc_meta_used - arc_p));
3811 * If we're below arc_meta_min, always prefer to evict data.
3812 * Otherwise, try to satisfy the requested number of bytes to
3813 * evict from the type which contains older buffers, in an
3814 * effort to keep newer buffers in the cache regardless of their
3815 * type. If we cannot satisfy the number of bytes from this
3816 * type, spill over into the next type.
3818 if (arc_adjust_type(arc_mru) == ARC_BUFC_METADATA &&
3819 arc_meta_used > arc_meta_min) {
3820 bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
3821 total_evicted += bytes;
3824 * If we couldn't evict our target number of bytes from
3825 * metadata, we try to get the rest from data.
3830 arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
3832 bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
3833 total_evicted += bytes;
3836 * If we couldn't evict our target number of bytes from
3837 * data, we try to get the rest from metadata.
3842 arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
3848 * Now that we've tried to evict enough from the MRU to get its
3849 * size back to arc_p, if we're still above the target cache
3850 * size, we evict the rest from the MFU.
3852 target = arc_size - arc_c;
3854 if (arc_adjust_type(arc_mfu) == ARC_BUFC_METADATA &&
3855 arc_meta_used > arc_meta_min) {
3856 bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
3857 total_evicted += bytes;
3860 * If we couldn't evict our target number of bytes from
3861 * metadata, we try to get the rest from data.
3866 arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
3868 bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
3869 total_evicted += bytes;
3872 * If we couldn't evict our target number of bytes from
3873 * data, we try to get the rest from metadata.
3878 arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
3882 * Adjust ghost lists
3884 * In addition to the above, the ARC also defines target values
3885 * for the ghost lists. The sum of the mru list and mru ghost
3886 * list should never exceed the target size of the cache, and
3887 * the sum of the mru list, mfu list, mru ghost list, and mfu
3888 * ghost list should never exceed twice the target size of the
3889 * cache. The following logic enforces these limits on the ghost
3890 * caches, and evicts from them as needed.
3892 target = refcount_count(&arc_mru->arcs_size) +
3893 refcount_count(&arc_mru_ghost->arcs_size) - arc_c;
3895 bytes = arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA);
3896 total_evicted += bytes;
3901 arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA);
3904 * We assume the sum of the mru list and mfu list is less than
3905 * or equal to arc_c (we enforced this above), which means we
3906 * can use the simpler of the two equations below:
3908 * mru + mfu + mru ghost + mfu ghost <= 2 * arc_c
3909 * mru ghost + mfu ghost <= arc_c
3911 target = refcount_count(&arc_mru_ghost->arcs_size) +
3912 refcount_count(&arc_mfu_ghost->arcs_size) - arc_c;
3914 bytes = arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA);
3915 total_evicted += bytes;
3920 arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA);
3922 return (total_evicted);
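/*
 * Worked example of the ghost-list targets above (illustrative sizes):
 * with arc_c = 8GB, mru = 3GB and mru_ghost = 6GB, the first target is
 * 3 + 6 - 8 = 1GB, which is evicted from mru_ghost. If mfu_ghost then
 * holds 4GB, the second target is 5 + 4 - 8 = 1GB evicted from mfu_ghost,
 * restoring mru + mru_ghost <= arc_c and mru_ghost + mfu_ghost <= arc_c,
 * and therefore the 2 * arc_c bound on all four lists combined.
 */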
3926 arc_flush(spa_t *spa, boolean_t retry)
3931 * If retry is B_TRUE, a spa must not be specified since we have
3932 * no good way to determine if all of a spa's buffers have been
3933 * evicted from an arc state.
3935 ASSERT(!retry || spa == 0);
3938 guid = spa_load_guid(spa);
3940 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry);
3941 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry);
3943 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry);
3944 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry);
3946 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry);
3947 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry);
3949 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry);
3950 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry);
3954 arc_shrink(int64_t to_free)
3958 if (c > to_free && c - to_free > arc_c_min) {
3959 arc_c = c - to_free;
3960 atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift));
3961 if (arc_c > arc_size)
3962 arc_c = MAX(arc_size, arc_c_min);
3964 arc_p = (arc_c >> 1);
3965 ASSERT(arc_c >= arc_c_min);
3966 ASSERT((int64_t)arc_p >= 0);
3971 if (arc_size > arc_c)
3972 (void) arc_adjust();
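/*
 * Example of the shrink path above (illustrative sizes): with arc_c = 8GB,
 * arc_c_min = 1GB and to_free = 2GB, arc_c is lowered to 6GB and arc_p is
 * pulled down by arc_p >> arc_shrink_shift. If arc_size still exceeds the
 * new arc_c, arc_adjust() is called to evict the difference.
 */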
3976 * Return the maximum amount of memory that we could possibly use. In user
3977 * space this is reduced to half of all memory, which is primarily used for testing.
3980 arc_all_memory(void)
3983 return (MIN(ptob(physmem),
3984 vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC)));
3986 return (ptob(physmem) / 2);
3990 typedef enum free_memory_reason_t {
3995 FMR_PAGES_PP_MAXIMUM,
3998 } free_memory_reason_t;
4000 int64_t last_free_memory;
4001 free_memory_reason_t last_free_reason;
4005 * Additional reserve of pages for pp_reserve.
4007 int64_t arc_pages_pp_reserve = 64;
4010 * Additional reserve of pages for swapfs.
4012 int64_t arc_swapfs_reserve = 64;
4013 #endif /* _KERNEL */
4016 * Return the amount of memory that can be consumed before reclaim will be
4017 * needed. Positive if there is sufficient free memory, negative indicates
4018 * the amount of memory that needs to be freed up.
4021 arc_available_memory(void)
4023 int64_t lowest = INT64_MAX;
4024 free_memory_reason_t r = FMR_UNKNOWN;
4026 uint64_t available_memory = ptob(freemem);
4029 pgcnt_t needfree = btop(arc_need_free);
4030 pgcnt_t lotsfree = btop(arc_sys_free);
4031 pgcnt_t desfree = 0;
4036 available_memory = MIN(available_memory, vmem_size(heap_arena, VMEM_FREE));
4040 n = PAGESIZE * (-needfree);
4048 * check that we're out of range of the pageout scanner. It starts to
4049 * schedule paging if freemem is less than lotsfree and needfree.
4050 * lotsfree is the high-water mark for pageout, and needfree is the
4051 * number of needed free pages. We add extra pages here to make sure
4052 * the scanner doesn't start up while we're freeing memory.
4054 n = PAGESIZE * (btop(available_memory) - lotsfree - needfree - desfree);
4062 * check to make sure that swapfs has enough space so that anon
4063 * reservations can still succeed. anon_resvmem() checks that the
4064 * availrmem is greater than swapfs_minfree, and the number of reserved
4065 * swap pages. We also add a bit of extra here just to prevent
4066 * circumstances from getting really dire.
4068 n = PAGESIZE * (availrmem - swapfs_minfree - swapfs_reserve -
4069 desfree - arc_swapfs_reserve);
4072 r = FMR_SWAPFS_MINFREE;
4077 * Check that we have enough availrmem that memory locking (e.g., via
4078 * mlock(3C) or memcntl(2)) can still succeed. (pages_pp_maximum
4079 * stores the number of pages that cannot be locked; when availrmem
4080 * drops below pages_pp_maximum, page locking mechanisms such as
4081 * page_pp_lock() will fail.)
4083 n = PAGESIZE * (availrmem - pages_pp_maximum -
4084 arc_pages_pp_reserve);
4087 r = FMR_PAGES_PP_MAXIMUM;
4093 * If we're on an i386 platform, it's possible that we'll exhaust the
4094 * kernel heap space before we ever run out of available physical
4095 * memory. Most checks of the size of the heap_area compare against
4096 * tune.t_minarmem, which is the minimum available real memory that we
4097 * can have in the system. However, this is generally fixed at 25 pages
4098 * which is so low that it's useless. In this comparison, we seek to
4099 * calculate the total heap-size, and reclaim if more than 3/4ths of the
4100 * heap is allocated. (Or, in the calculation, if less than 1/4th is free.)
4103 n = vmem_size(heap_arena, VMEM_FREE) -
4104 (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC) >> 2);
4112 * If zio data pages are being allocated out of a separate heap segment,
4113 * then enforce that the size of available vmem for this arena remains
4114 * above about 1/4th (1/(2^arc_zio_arena_free_shift)) free.
4116 * Note that reducing the arc_zio_arena_free_shift keeps more virtual
4117 * memory (in the zio_arena) free, which can avoid memory
4118 * fragmentation issues.
4120 if (zio_arena != NULL) {
4121 n = (int64_t)vmem_size(zio_arena, VMEM_FREE) -
4122 (vmem_size(zio_arena, VMEM_ALLOC) >>
4123 arc_zio_arena_free_shift);
4130 /* Every 100 calls, free a small amount */
4131 if (spa_get_random(100) == 0)
4133 #endif /* _KERNEL */
4135 last_free_memory = lowest;
4136 last_free_reason = r;
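/*
 * Illustrative examples for two of the checks above (hypothetical sizes):
 * - Kernel heap (i386): with 100MB of heap free out of a 500MB total,
 *   n = 100MB - 500MB/4 = -25MB, so roughly 25MB should be reclaimed
 *   before the heap is considered healthy again.
 * - zio_arena: with arc_zio_arena_free_shift = 2, reclaim is requested
 *   once the arena's free space falls below one quarter of its allocated
 *   space, and the (negative) shortfall is reported as the amount to free.
 */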
4142 * Determine if the system is under memory pressure and is asking
4143 * to reclaim memory. A return value of B_TRUE indicates that the system
4144 * is under memory pressure and that the arc should adjust accordingly.
4147 arc_reclaim_needed(void)
4149 return (arc_available_memory() < 0);
4153 arc_kmem_reap_now(void)
4156 kmem_cache_t *prev_cache = NULL;
4157 kmem_cache_t *prev_data_cache = NULL;
4158 extern kmem_cache_t *zio_buf_cache[];
4159 extern kmem_cache_t *zio_data_buf_cache[];
4160 extern kmem_cache_t *range_seg_cache;
4162 if ((arc_meta_used >= arc_meta_limit) && zfs_arc_meta_prune) {
4164 * We are exceeding our meta-data cache limit.
4165 * Prune some entries to release holds on meta-data.
4167 arc_prune_async(zfs_arc_meta_prune);
4170 for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {
4172 /* reach upper limit of cache size on 32-bit */
4173 if (zio_buf_cache[i] == NULL)
4176 if (zio_buf_cache[i] != prev_cache) {
4177 prev_cache = zio_buf_cache[i];
4178 kmem_cache_reap_now(zio_buf_cache[i]);
4180 if (zio_data_buf_cache[i] != prev_data_cache) {
4181 prev_data_cache = zio_data_buf_cache[i];
4182 kmem_cache_reap_now(zio_data_buf_cache[i]);
4185 kmem_cache_reap_now(buf_cache);
4186 kmem_cache_reap_now(hdr_full_cache);
4187 kmem_cache_reap_now(hdr_l2only_cache);
4188 kmem_cache_reap_now(range_seg_cache);
4190 if (zio_arena != NULL) {
4192 * Ask the vmem arena to reclaim unused memory from its
4195 vmem_qcache_reap(zio_arena);
4200 * Threads can block in arc_get_data_impl() waiting for this thread to evict
4201 * enough data and signal them to proceed. When this happens, the threads in
4202 * arc_get_data_impl() are sleeping while holding the hash lock for their
4203 * particular arc header. Thus, we must be careful to never sleep on a
4204 * hash lock in this thread. This is to prevent the following deadlock:
4206 * - Thread A sleeps on CV in arc_get_data_impl() holding hash lock "L",
4207 * waiting for the reclaim thread to signal it.
4209 * - arc_reclaim_thread() tries to acquire hash lock "L" using mutex_enter,
4210 * fails, and goes to sleep forever.
4212 * This possible deadlock is avoided by always acquiring a hash lock
4213 * using mutex_tryenter() from arc_reclaim_thread().
4216 arc_reclaim_thread(void)
4218 fstrans_cookie_t cookie = spl_fstrans_mark();
4219 hrtime_t growtime = 0;
4222 CALLB_CPR_INIT(&cpr, &arc_reclaim_lock, callb_generic_cpr, FTAG);
4224 mutex_enter(&arc_reclaim_lock);
4225 while (!arc_reclaim_thread_exit) {
4227 uint64_t evicted = 0;
4228 uint64_t need_free = arc_need_free;
4229 arc_tuning_update();
4232 * This is necessary in order for the mdb ::arc dcmd to
4233 * show up-to-date information. Since the ::arc command
4234 * does not call the kstat's update function, without
4235 * this call, the command may show stale stats for the
4236 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even
4237 * with this change, the data might be up to 1 second
4238 * out of date, but that should suffice. The arc_state_t
4239 * structures can be queried directly if more accurate
4240 * information is needed.
4243 if (arc_ksp != NULL)
4244 arc_ksp->ks_update(arc_ksp, KSTAT_READ);
4246 mutex_exit(&arc_reclaim_lock);
4249 * We call arc_adjust() before (possibly) calling
4250 * arc_kmem_reap_now(), so that we can wake up
4251 * arc_get_data_buf() sooner.
4253 evicted = arc_adjust();
4255 int64_t free_memory = arc_available_memory();
4256 if (free_memory < 0) {
4258 arc_no_grow = B_TRUE;
4262 * Wait at least zfs_grow_retry (default 5) seconds
4263 * before considering growing.
4265 growtime = gethrtime() + SEC2NSEC(arc_grow_retry);
4267 arc_kmem_reap_now();
4270 * If we are still low on memory, shrink the ARC
4271 * so that we have arc_shrink_min free space.
4273 free_memory = arc_available_memory();
4275 to_free = (arc_c >> arc_shrink_shift) - free_memory;
4278 to_free = MAX(to_free, need_free);
4280 arc_shrink(to_free);
4282 } else if (free_memory < arc_c >> arc_no_grow_shift) {
4283 arc_no_grow = B_TRUE;
4284 } else if (gethrtime() >= growtime) {
4285 arc_no_grow = B_FALSE;
4288 mutex_enter(&arc_reclaim_lock);
4291 * If evicted is zero, we couldn't evict anything via
4292 * arc_adjust(). This could be due to hash lock
4293 * collisions, but more likely due to the majority of
4294 * arc buffers being unevictable. Therefore, even if
4295 * arc_size is above arc_c, another pass is unlikely to
4296 * be helpful and could potentially cause us to enter an infinite loop.
4299 if (arc_size <= arc_c || evicted == 0) {
4301 * We're either no longer overflowing, or we
4302 * can't evict anything more, so we should wake
4303 * up any threads before we go to sleep and remove
4304 * the bytes we were working on from arc_need_free
4305 * since nothing more will be done here.
4307 cv_broadcast(&arc_reclaim_waiters_cv);
4308 ARCSTAT_INCR(arcstat_need_free, -need_free);
4311 * Block until signaled, or after one second (we
4312 * might need to perform arc_kmem_reap_now()
4313 * even if we aren't being signalled)
4315 CALLB_CPR_SAFE_BEGIN(&cpr);
4316 (void) cv_timedwait_sig_hires(&arc_reclaim_thread_cv,
4317 &arc_reclaim_lock, SEC2NSEC(1), MSEC2NSEC(1), 0);
4318 CALLB_CPR_SAFE_END(&cpr, &arc_reclaim_lock);
4322 arc_reclaim_thread_exit = B_FALSE;
4323 cv_broadcast(&arc_reclaim_thread_cv);
4324 CALLB_CPR_EXIT(&cpr); /* drops arc_reclaim_lock */
4325 spl_fstrans_unmark(cookie);
4331 * Determine the amount of memory eligible for eviction contained in the
4332 * ARC. All clean data reported by the ghost lists can always be safely
4333 * evicted. Due to arc_c_min, the same does not hold for all clean data
4334 * contained by the regular mru and mfu lists.
4336 * In the case of the regular mru and mfu lists, we need to report as
4337 * much clean data as possible, such that evicting that same reported
4338 * data will not bring arc_size below arc_c_min. Thus, in certain
4339 * circumstances, the total amount of clean data in the mru and mfu
4340 * lists might not actually be evictable.
4342 * The following two distinct cases are accounted for:
4344 * 1. The sum of the amount of dirty data contained by both the mru and
4345 * mfu lists, plus the ARC's other accounting (e.g. the anon list),
4346 * is greater than or equal to arc_c_min.
4347 * (i.e. amount of dirty data >= arc_c_min)
4349 * This is the easy case; all clean data contained by the mru and mfu
4350 * lists is evictable. Evicting all clean data can only drop arc_size
4351 * to the amount of dirty data, which is greater than arc_c_min.
4353 * 2. The sum of the amount of dirty data contained by both the mru and
4354 * mfu lists, plus the ARC's other accounting (e.g. the anon list),
4355 * is less than arc_c_min.
4356 * (i.e. arc_c_min > amount of dirty data)
4358 * 2.1. arc_size is greater than or equal to arc_c_min.
4359 * (i.e. arc_size >= arc_c_min > amount of dirty data)
4361 * In this case, not all clean data from the regular mru and mfu
4362 * lists is actually evictable; we must leave enough clean data
4363 * to keep arc_size above arc_c_min. Thus, the maximum amount of
4364 * evictable data from the two lists combined, is exactly the
4365 * difference between arc_size and arc_c_min.
4367 * 2.2. arc_size is less than arc_c_min
4368 * (i.e. arc_c_min > arc_size > amount of dirty data)
4370 * In this case, none of the data contained in the mru and mfu
4371 * lists is evictable, even if it's clean. Since arc_size is
4372 * already below arc_c_min, evicting any more would only
4373 * increase this negative difference.
4376 arc_evictable_memory(void)
4378 uint64_t arc_clean =
4379 refcount_count(&arc_mru->arcs_esize[ARC_BUFC_DATA]) +
4380 refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) +
4381 refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_DATA]) +
4382 refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
4383 uint64_t arc_dirty = MAX((int64_t)arc_size - (int64_t)arc_clean, 0);
4386 * Scale reported evictable memory in proportion to page cache, cap
4387 * at specified min/max.
4389 uint64_t min = (ptob(global_page_state(NR_FILE_PAGES)) / 100) *
4391 min = MAX(arc_c_min, MIN(arc_c_max, min));
4393 if (arc_dirty >= min)
4396 return (MAX((int64_t)arc_size - (int64_t)min, 0));
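/*
 * Worked example of case 2.1 above (hypothetical sizes): with
 * arc_size = 10GB, clean (evictable) data of 8GB and a floor of 4GB,
 * the dirty remainder (2GB) is below the floor, so reporting the full
 * 8GB would let a shrinker drag arc_size below the floor. Only
 * arc_size - floor = 6GB is reported instead. In case 2.2 (arc_size
 * already below the floor) nothing is reported as evictable.
 */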
4400 * If sc->nr_to_scan is zero, the caller is requesting a query of the
4401 * number of objects which can potentially be freed. If it is nonzero,
4402 * the request is to free that many objects.
4404 * Linux kernels >= 3.12 have the count_objects and scan_objects callbacks
4405 * in struct shrinker and also require the shrinker to return the number of objects freed.
4408 * Older kernels require the shrinker to return the number of freeable
4409 * objects following the freeing of nr_to_free.
4411 static spl_shrinker_t
4412 __arc_shrinker_func(struct shrinker *shrink, struct shrink_control *sc)
4416 /* The arc is considered warm once reclaim has occurred */
4417 if (unlikely(arc_warm == B_FALSE))
4420 /* Return the potential number of reclaimable pages */
4421 pages = btop((int64_t)arc_evictable_memory());
4422 if (sc->nr_to_scan == 0)
4425 /* Not allowed to perform filesystem reclaim */
4426 if (!(sc->gfp_mask & __GFP_FS))
4427 return (SHRINK_STOP);
4429 /* Reclaim in progress */
4430 if (mutex_tryenter(&arc_reclaim_lock) == 0) {
4431 ARCSTAT_INCR(arcstat_need_free, ptob(sc->nr_to_scan));
4435 mutex_exit(&arc_reclaim_lock);
4438 * Evict the requested number of pages by shrinking arc_c the
4442 arc_shrink(ptob(sc->nr_to_scan));
4443 if (current_is_kswapd())
4444 arc_kmem_reap_now();
4445 #ifdef HAVE_SPLIT_SHRINKER_CALLBACK
4446 pages = MAX((int64_t)pages -
4447 (int64_t)btop(arc_evictable_memory()), 0);
4449 pages = btop(arc_evictable_memory());
4452 * We've shrunk what we can, wake up threads.
4454 cv_broadcast(&arc_reclaim_waiters_cv);
4456 pages = SHRINK_STOP;
4459 * When direct reclaim is observed it usually indicates a rapid
4460 * increase in memory pressure. This occurs because the kswapd
4461 * threads were unable to asynchronously keep enough free memory
4462 * available. In this case set arc_no_grow to briefly pause arc
4463 * growth to avoid compounding the memory pressure.
4465 if (current_is_kswapd()) {
4466 ARCSTAT_BUMP(arcstat_memory_indirect_count);
4468 arc_no_grow = B_TRUE;
4469 arc_kmem_reap_now();
4470 ARCSTAT_BUMP(arcstat_memory_direct_count);
4475 SPL_SHRINKER_CALLBACK_WRAPPER(arc_shrinker_func);
4477 SPL_SHRINKER_DECLARE(arc_shrinker, arc_shrinker_func, DEFAULT_SEEKS);
4478 #endif /* _KERNEL */
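/*
 * Editorial note on units in the shrinker path above: the kernel expresses
 * shrinker work in pages while the ARC accounts in bytes, hence the
 * btop()/ptob() conversions. A count request (sc->nr_to_scan == 0) reports
 * btop(arc_evictable_memory()) pages; a scan request shrinks arc_c by
 * ptob(sc->nr_to_scan) bytes and then reports how many pages that freed
 * (or SHRINK_STOP when filesystem reclaim is not allowed).
 */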
4481 * Adapt arc info given the number of bytes we are trying to add and
4482 * the state that we are coming from. This function is only called
4483 * when we are adding new content to the cache.
4486 arc_adapt(int bytes, arc_state_t *state)
4489 uint64_t arc_p_min = (arc_c >> arc_p_min_shift);
4490 int64_t mrug_size = refcount_count(&arc_mru_ghost->arcs_size);
4491 int64_t mfug_size = refcount_count(&arc_mfu_ghost->arcs_size);
4493 if (state == arc_l2c_only)
4498 * Adapt the target size of the MRU list:
4499 * - if we just hit in the MRU ghost list, then increase
4500 * the target size of the MRU list.
4501 * - if we just hit in the MFU ghost list, then increase
4502 * the target size of the MFU list by decreasing the
4503 * target size of the MRU list.
4505 if (state == arc_mru_ghost) {
4506 mult = (mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size);
4507 if (!zfs_arc_p_dampener_disable)
4508 mult = MIN(mult, 10); /* avoid wild arc_p adjustment */
4510 arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult);
4511 } else if (state == arc_mfu_ghost) {
4514 mult = (mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size);
4515 if (!zfs_arc_p_dampener_disable)
4516 mult = MIN(mult, 10);
4518 delta = MIN(bytes * mult, arc_p);
4519 arc_p = MAX(arc_p_min, arc_p - delta);
4521 ASSERT((int64_t)arc_p >= 0);
4523 if (arc_reclaim_needed()) {
4524 cv_signal(&arc_reclaim_thread_cv);
4531 if (arc_c >= arc_c_max)
4535 * If we're within (2 * maxblocksize) bytes of the target
4536 * cache size, increment the target cache size
4538 ASSERT3U(arc_c, >=, 2ULL << SPA_MAXBLOCKSHIFT);
4539 if (arc_size >= arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) {
4540 atomic_add_64(&arc_c, (int64_t)bytes);
4541 if (arc_c > arc_c_max)
4543 else if (state == arc_anon)
4544 atomic_add_64(&arc_p, (int64_t)bytes);
4548 ASSERT((int64_t)arc_p >= 0);
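/*
 * Example of the ghost-hit adaptation above (hypothetical sizes): if the
 * MRU ghost list holds 1GB and the MFU ghost list 4GB, a hit in the MRU
 * ghost list uses mult = 4 (capped at 10 unless the dampener is disabled),
 * so arc_p grows by 4 * bytes, up to arc_c - arc_p_min. A hit in the MFU
 * ghost list moves arc_p the other way by the analogous amount, never
 * dropping below arc_p_min.
 */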
4552 * Check if arc_size has grown past our upper threshold, determined by
4553 * zfs_arc_overflow_shift.
4556 arc_is_overflowing(void)
4558 /* Always allow at least one block of overflow */
4559 uint64_t overflow = MAX(SPA_MAXBLOCKSIZE,
4560 arc_c >> zfs_arc_overflow_shift);
4562 return (arc_size >= arc_c + overflow);
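/*
 * Example of the overflow threshold above (hypothetical values): with
 * arc_c = 4GB and zfs_arc_overflow_shift = 8, the allowed slack is
 * MAX(SPA_MAXBLOCKSIZE, 4GB >> 8) = 16MB, so arc_get_data_impl() begins
 * blocking new allocations once arc_size reaches arc_c + 16MB.
 */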
4566 arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
4568 arc_buf_contents_t type = arc_buf_type(hdr);
4570 arc_get_data_impl(hdr, size, tag);
4571 if (type == ARC_BUFC_METADATA) {
4572 return (abd_alloc(size, B_TRUE));
4574 ASSERT(type == ARC_BUFC_DATA);
4575 return (abd_alloc(size, B_FALSE));
4580 arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
4582 arc_buf_contents_t type = arc_buf_type(hdr);
4584 arc_get_data_impl(hdr, size, tag);
4585 if (type == ARC_BUFC_METADATA) {
4586 return (zio_buf_alloc(size));
4588 ASSERT(type == ARC_BUFC_DATA);
4589 return (zio_data_buf_alloc(size));
4594 * Allocate a block and return it to the caller. If we are hitting the
4595 * hard limit for the cache size, we must sleep, waiting for the eviction
4596 * thread to catch up. If we're past the target size but below the hard
4597 * limit, we'll only signal the reclaim thread and continue on.
4600 arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
4602 arc_state_t *state = hdr->b_l1hdr.b_state;
4603 arc_buf_contents_t type = arc_buf_type(hdr);
4605 arc_adapt(size, state);
4608 * If arc_size is currently overflowing, and has grown past our
4609 * upper limit, we must be adding data faster than the evict
4610 * thread can evict. Thus, to ensure we don't compound the
4611 * problem by adding more data and forcing arc_size to grow even
4612 * further past its target size, we halt and wait for the
4613 * eviction thread to catch up.
4615 * It's also possible that the reclaim thread is unable to evict
4616 * enough buffers to get arc_size below the overflow limit (e.g.
4617 * due to buffers being un-evictable, or hash lock collisions).
4618 * In this case, we want to proceed regardless of whether we're
4619 * overflowing; thus we don't use a while loop here.
4621 if (arc_is_overflowing()) {
4622 mutex_enter(&arc_reclaim_lock);
4625 * Now that we've acquired the lock, we may no longer be
4626 * over the overflow limit, so let's check.
4628 * We're ignoring the case of spurious wake ups. If that
4629 * were to happen, it'd let this thread consume an ARC
4630 * buffer before it should have (i.e. before we're under
4631 * the overflow limit and were signalled by the reclaim
4632 * thread). As long as that is a rare occurrence, it
4633 * shouldn't cause any harm.
4635 if (arc_is_overflowing()) {
4636 cv_signal(&arc_reclaim_thread_cv);
4637 cv_wait(&arc_reclaim_waiters_cv, &arc_reclaim_lock);
4640 mutex_exit(&arc_reclaim_lock);
4643 VERIFY3U(hdr->b_type, ==, type);
4644 if (type == ARC_BUFC_METADATA) {
4645 arc_space_consume(size, ARC_SPACE_META);
4647 arc_space_consume(size, ARC_SPACE_DATA);
4651 * Update the state size. Note that ghost states have a
4652 * "ghost size" and so don't need to be updated.
4654 if (!GHOST_STATE(state)) {
4656 (void) refcount_add_many(&state->arcs_size, size, tag);
4659 * If this is reached via arc_read, the link is
4660 * protected by the hash lock. If reached via
4661 * arc_buf_alloc, the header should not be accessed by
4662 * any other thread. And, if reached via arc_read_done,
4663 * the hash lock will protect it if it's found in the
4664 * hash table; otherwise no other thread should be
4665 * trying to [add|remove]_reference it.
4667 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
4668 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
4669 (void) refcount_add_many(&state->arcs_esize[type],
4674 * If we are growing the cache, and we are adding anonymous
4675 * data, and we have outgrown arc_p, update arc_p
4677 if (arc_size < arc_c && hdr->b_l1hdr.b_state == arc_anon &&
4678 (refcount_count(&arc_anon->arcs_size) +
4679 refcount_count(&arc_mru->arcs_size) > arc_p))
4680 arc_p = MIN(arc_c, arc_p + size);
4685 arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, void *tag)
4687 arc_free_data_impl(hdr, size, tag);
4692 arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, void *tag)
4694 arc_buf_contents_t type = arc_buf_type(hdr);
4696 arc_free_data_impl(hdr, size, tag);
4697 if (type == ARC_BUFC_METADATA) {
4698 zio_buf_free(buf, size);
4700 ASSERT(type == ARC_BUFC_DATA);
4701 zio_data_buf_free(buf, size);
4706 * Free the arc data buffer.
4709 arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
4711 arc_state_t *state = hdr->b_l1hdr.b_state;
4712 arc_buf_contents_t type = arc_buf_type(hdr);
4714 /* protected by hash lock, if in the hash table */
4715 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
4716 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
4717 ASSERT(state != arc_anon && state != arc_l2c_only);
4719 (void) refcount_remove_many(&state->arcs_esize[type],
4722 (void) refcount_remove_many(&state->arcs_size, size, tag);
4724 VERIFY3U(hdr->b_type, ==, type);
4725 if (type == ARC_BUFC_METADATA) {
4726 arc_space_return(size, ARC_SPACE_META);
4728 ASSERT(type == ARC_BUFC_DATA);
4729 arc_space_return(size, ARC_SPACE_DATA);
4734 * This routine is called whenever a buffer is accessed.
4735 * NOTE: the hash lock is dropped in this function.
4738 arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
4742 ASSERT(MUTEX_HELD(hash_lock));
4743 ASSERT(HDR_HAS_L1HDR(hdr));
4745 if (hdr->b_l1hdr.b_state == arc_anon) {
4747 * This buffer is not in the cache, and does not
4748 * appear in our "ghost" list. Add the new buffer to the MRU state.
4752 ASSERT0(hdr->b_l1hdr.b_arc_access);
4753 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
4754 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
4755 arc_change_state(arc_mru, hdr, hash_lock);
4757 } else if (hdr->b_l1hdr.b_state == arc_mru) {
4758 now = ddi_get_lbolt();
4761 * If this buffer is here because of a prefetch, then either:
4762 * - clear the flag if this is a "referencing" read
4763 * (any subsequent access will bump this into the MFU state).
4765 * - move the buffer to the head of the list if this is
4766 * another prefetch (to make it less likely to be evicted).
4768 if (HDR_PREFETCH(hdr)) {
4769 if (refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
4770 /* link protected by hash lock */
4771 ASSERT(multilist_link_active(
4772 &hdr->b_l1hdr.b_arc_node));
4774 arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH);
4775 atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
4776 ARCSTAT_BUMP(arcstat_mru_hits);
4778 hdr->b_l1hdr.b_arc_access = now;
4783 * This buffer has been "accessed" only once so far,
4784 * but it is still in the cache. Move it to the MFU state.
4787 if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access + ARC_MINTIME)) {
4790 * More than 125ms have passed since we
4791 * instantiated this buffer. Move it to the
4792 * most frequently used state.
4794 hdr->b_l1hdr.b_arc_access = now;
4795 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
4796 arc_change_state(arc_mfu, hdr, hash_lock);
4798 atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
4799 ARCSTAT_BUMP(arcstat_mru_hits);
4800 } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) {
4801 arc_state_t *new_state;
4803 * This buffer has been "accessed" recently, but
4804 * was evicted from the cache. Move it back into the cache: MRU for a prefetch, MFU otherwise.
4808 if (HDR_PREFETCH(hdr)) {
4809 new_state = arc_mru;
4810 if (refcount_count(&hdr->b_l1hdr.b_refcnt) > 0)
4811 arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH);
4812 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
4814 new_state = arc_mfu;
4815 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
4818 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
4819 arc_change_state(new_state, hdr, hash_lock);
4821 atomic_inc_32(&hdr->b_l1hdr.b_mru_ghost_hits);
4822 ARCSTAT_BUMP(arcstat_mru_ghost_hits);
4823 } else if (hdr->b_l1hdr.b_state == arc_mfu) {
4825 * This buffer has been accessed more than once and is
4826 * still in the cache. Keep it in the MFU state.
4828 * NOTE: an add_reference() that occurred when we did
4829 * the arc_read() will have kicked this off the list.
4830 * If it was a prefetch, we will explicitly move it to
4831 * the head of the list now.
4833 if ((HDR_PREFETCH(hdr)) != 0) {
4834 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
4835 /* link protected by hash_lock */
4836 ASSERT(multilist_link_active(&hdr->b_l1hdr.b_arc_node));
4838 atomic_inc_32(&hdr->b_l1hdr.b_mfu_hits);
4839 ARCSTAT_BUMP(arcstat_mfu_hits);
4840 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
4841 } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) {
4842 arc_state_t *new_state = arc_mfu;
4844 * This buffer has been accessed more than once but has
4845 * been evicted from the cache. Move it back to the
4849 if (HDR_PREFETCH(hdr)) {
4851 * This is a prefetch access...
4852 * move this block back to the MRU state.
4854 ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
4855 new_state = arc_mru;
4858 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
4859 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
4860 arc_change_state(new_state, hdr, hash_lock);
4862 atomic_inc_32(&hdr->b_l1hdr.b_mfu_ghost_hits);
4863 ARCSTAT_BUMP(arcstat_mfu_ghost_hits);
4864 } else if (hdr->b_l1hdr.b_state == arc_l2c_only) {
4866 * This buffer is on the 2nd Level ARC.
4869 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
4870 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
4871 arc_change_state(arc_mfu, hdr, hash_lock);
4873 cmn_err(CE_PANIC, "invalid arc state 0x%p",
4874 hdr->b_l1hdr.b_state);
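/*
 * Editorial summary of the state transitions implemented above:
 * anon -> mru on first insertion; mru -> mfu on a second access at least
 * ARC_MINTIME after the first; mru_ghost -> mru (prefetch) or mfu (demand)
 * on a hit after eviction; mfu_ghost -> mfu, or mru for a prefetch access;
 * and l2c_only -> mfu when a block cached only in the L2ARC is read again.
 */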
4878 /* a generic arc_done_func_t which you can use */
4881 arc_bcopy_func(zio_t *zio, arc_buf_t *buf, void *arg)
4883 if (zio == NULL || zio->io_error == 0)
4884 bcopy(buf->b_data, arg, arc_buf_size(buf));
4885 arc_buf_destroy(buf, arg);
4888 /* a generic arc_done_func_t */
4890 arc_getbuf_func(zio_t *zio, arc_buf_t *buf, void *arg)
4892 arc_buf_t **bufp = arg;
4893 if (zio && zio->io_error) {
4894 arc_buf_destroy(buf, arg);
4898 ASSERT(buf->b_data);
4903 arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp)
4905 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
4906 ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0);
4907 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF);
4909 if (HDR_COMPRESSION_ENABLED(hdr)) {
4910 ASSERT3U(HDR_GET_COMPRESS(hdr), ==,
4911 BP_GET_COMPRESS(bp));
4913 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
4914 ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp));
4919 arc_read_done(zio_t *zio)
4921 arc_buf_hdr_t *hdr = zio->io_private;
4922 kmutex_t *hash_lock = NULL;
4923 arc_callback_t *callback_list;
4924 arc_callback_t *acb;
4925 boolean_t freeable = B_FALSE;
4926 boolean_t no_zio_error = (zio->io_error == 0);
4929 * The hdr was inserted into hash-table and removed from lists
4930 * prior to starting I/O. We should find this header, since
4931 * it's in the hash table, and it should be legit since it's
4932 * not possible to evict it during the I/O. The only possible
4933 * reason for it not to be found is if we were freed during the read.
4936 if (HDR_IN_HASH_TABLE(hdr)) {
4937 arc_buf_hdr_t *found;
4939 ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp));
4940 ASSERT3U(hdr->b_dva.dva_word[0], ==,
4941 BP_IDENTITY(zio->io_bp)->dva_word[0]);
4942 ASSERT3U(hdr->b_dva.dva_word[1], ==,
4943 BP_IDENTITY(zio->io_bp)->dva_word[1]);
4945 found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock);
4947 ASSERT((found == hdr &&
4948 DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) ||
4949 (found == hdr && HDR_L2_READING(hdr)));
4950 ASSERT3P(hash_lock, !=, NULL);
4954 /* byteswap if necessary */
4955 if (BP_SHOULD_BYTESWAP(zio->io_bp)) {
4956 if (BP_GET_LEVEL(zio->io_bp) > 0) {
4957 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
4959 hdr->b_l1hdr.b_byteswap =
4960 DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp));
4963 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
4967 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED);
4968 if (l2arc_noprefetch && HDR_PREFETCH(hdr))
4969 arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE);
4971 callback_list = hdr->b_l1hdr.b_acb;
4972 ASSERT3P(callback_list, !=, NULL);
4974 if (hash_lock && no_zio_error && hdr->b_l1hdr.b_state == arc_anon) {
4976 * Only call arc_access on anonymous buffers. This is because
4977 * if we've issued an I/O for an evicted buffer, we've already
4978 * called arc_access (to prevent any simultaneous readers from
4979 * getting confused).
4981 arc_access(hdr, hash_lock);
4985 * If a read request has a callback (i.e. acb_done is not NULL), then we
4986 * make a buf containing the data according to the parameters which were
4987 * passed in. The implementation of arc_buf_alloc_impl() ensures that we
4988 * aren't needlessly decompressing the data multiple times.
4990 int callback_cnt = 0;
4991 for (acb = callback_list; acb != NULL; acb = acb->acb_next) {
4995 /* This is a demand read since prefetches don't use callbacks */
4998 int error = arc_buf_alloc_impl(hdr, acb->acb_private,
4999 acb->acb_compressed, no_zio_error, &acb->acb_buf);
5001 zio->io_error = error;
5004 hdr->b_l1hdr.b_acb = NULL;
5005 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
5006 if (callback_cnt == 0) {
5007 ASSERT(HDR_PREFETCH(hdr));
5008 ASSERT0(hdr->b_l1hdr.b_bufcnt);
5009 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
5012 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt) ||
5013 callback_list != NULL);
5016 arc_hdr_verify(hdr, zio->io_bp);
5018 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
5019 if (hdr->b_l1hdr.b_state != arc_anon)
5020 arc_change_state(arc_anon, hdr, hash_lock);
5021 if (HDR_IN_HASH_TABLE(hdr))
5022 buf_hash_remove(hdr);
5023 freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
5027 * Broadcast before we drop the hash_lock to avoid the possibility
5028 * that the hdr (and hence the cv) might be freed before we get to
5029 * the cv_broadcast().
5031 cv_broadcast(&hdr->b_l1hdr.b_cv);
5033 if (hash_lock != NULL) {
5034 mutex_exit(hash_lock);
5037 * This block was freed while we waited for the read to
5038 * complete. It has been removed from the hash table and
5039 * moved to the anonymous state (so that it won't show up
5042 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
5043 freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
5046 /* execute each callback and free its structure */
5047 while ((acb = callback_list) != NULL) {
5049 acb->acb_done(zio, acb->acb_buf, acb->acb_private);
5051 if (acb->acb_zio_dummy != NULL) {
5052 acb->acb_zio_dummy->io_error = zio->io_error;
5053 zio_nowait(acb->acb_zio_dummy);
5056 callback_list = acb->acb_next;
5057 kmem_free(acb, sizeof (arc_callback_t));
5061 arc_hdr_destroy(hdr);
5065 * "Read" the block at the specified DVA (in bp) via the
5066 * cache. If the block is found in the cache, invoke the provided
5067 * callback immediately and return. Note that the `zio' parameter
5068 * in the callback will be NULL in this case, since no IO was
5069 * required. If the block is not in the cache pass the read request
5070 * on to the spa with a substitute callback function, so that the
5071 * requested block will be added to the cache.
5073 * If a read request arrives for a block that has a read in-progress,
5074 * either wait for the in-progress read to complete (and return the
5075 * results); or, if this is a read with a "done" func, add a record
5076 * to the read to invoke the "done" func when the read completes,
5077 * and return; or just return.
5079 * arc_read_done() will invoke all the requested "done" functions
5080 * for readers of this block.
5083 arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, arc_done_func_t *done,
5084 void *private, zio_priority_t priority, int zio_flags,
5085 arc_flags_t *arc_flags, const zbookmark_phys_t *zb)
5087 arc_buf_hdr_t *hdr = NULL;
5088 kmutex_t *hash_lock = NULL;
5090 uint64_t guid = spa_load_guid(spa);
5091 boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW) != 0;
5094 ASSERT(!BP_IS_EMBEDDED(bp) ||
5095 BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA);
5098 if (!BP_IS_EMBEDDED(bp)) {
5100 * Embedded BP's have no DVA and require no I/O to "read".
5101 * Create an anonymous arc buf to back it.
5103 hdr = buf_hash_find(guid, bp, &hash_lock);
5106 if (hdr != NULL && HDR_HAS_L1HDR(hdr) && hdr->b_l1hdr.b_pabd != NULL) {
5107 arc_buf_t *buf = NULL;
5108 *arc_flags |= ARC_FLAG_CACHED;
5110 if (HDR_IO_IN_PROGRESS(hdr)) {
5112 if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) &&
5113 priority == ZIO_PRIORITY_SYNC_READ) {
5115 * This sync read must wait for an
5116 * in-progress async read (e.g. a predictive
5117 * prefetch). Async reads are queued
5118 * separately at the vdev_queue layer, so
5119 * this is a form of priority inversion.
5120 * Ideally, we would "inherit" the demand
5121 * i/o's priority by moving the i/o from
5122 * the async queue to the synchronous queue,
5123 * but there is currently no mechanism to do
5124 * so. Track this so that we can evaluate
5125 * the magnitude of this potential performance
5128 * Note that if the prefetch i/o is already
5129 * active (has been issued to the device),
5130 * the prefetch improved performance, because
5131 * we issued it sooner than we would have
5132 * without the prefetch.
5134 DTRACE_PROBE1(arc__sync__wait__for__async,
5135 arc_buf_hdr_t *, hdr);
5136 ARCSTAT_BUMP(arcstat_sync_wait_for_async);
5138 if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) {
5139 arc_hdr_clear_flags(hdr,
5140 ARC_FLAG_PREDICTIVE_PREFETCH);
5143 if (*arc_flags & ARC_FLAG_WAIT) {
5144 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock);
5145 mutex_exit(hash_lock);
5148 ASSERT(*arc_flags & ARC_FLAG_NOWAIT);
5151 arc_callback_t *acb = NULL;
5153 acb = kmem_zalloc(sizeof (arc_callback_t),
5155 acb->acb_done = done;
5156 acb->acb_private = private;
5157 acb->acb_compressed = compressed_read;
5159 acb->acb_zio_dummy = zio_null(pio,
5160 spa, NULL, NULL, NULL, zio_flags);
5162 ASSERT3P(acb->acb_done, !=, NULL);
5163 acb->acb_next = hdr->b_l1hdr.b_acb;
5164 hdr->b_l1hdr.b_acb = acb;
5165 mutex_exit(hash_lock);
5168 mutex_exit(hash_lock);
5172 ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
5173 hdr->b_l1hdr.b_state == arc_mfu);
5176 if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) {
5178 * This is a demand read which does not have to
5179 * wait for i/o because we did a predictive
5180 * prefetch i/o for it, which has completed.
5183 arc__demand__hit__predictive__prefetch,
5184 arc_buf_hdr_t *, hdr);
5186 arcstat_demand_hit_predictive_prefetch);
5187 arc_hdr_clear_flags(hdr,
5188 ARC_FLAG_PREDICTIVE_PREFETCH);
5190 ASSERT(!BP_IS_EMBEDDED(bp) || !BP_IS_HOLE(bp));
5192 /* Get a buf with the desired data in it. */
5193 VERIFY0(arc_buf_alloc_impl(hdr, private,
5194 compressed_read, B_TRUE, &buf));
5195 } else if (*arc_flags & ARC_FLAG_PREFETCH &&
5196 refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
5197 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH);
5199 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
5200 arc_access(hdr, hash_lock);
5201 if (*arc_flags & ARC_FLAG_L2CACHE)
5202 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);
5203 mutex_exit(hash_lock);
5204 ARCSTAT_BUMP(arcstat_hits);
5205 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr),
5206 demand, prefetch, !HDR_ISTYPE_METADATA(hdr),
5207 data, metadata, hits);
5210 done(NULL, buf, private);
5212 uint64_t lsize = BP_GET_LSIZE(bp);
5213 uint64_t psize = BP_GET_PSIZE(bp);
5214 arc_callback_t *acb;
5217 boolean_t devw = B_FALSE;
5221 * Gracefully handle a damaged logical block size as a checksum error.
5224 if (lsize > spa_maxblocksize(spa)) {
5225 rc = SET_ERROR(ECKSUM);
5230 /* this block is not in the cache */
5231 arc_buf_hdr_t *exists = NULL;
5232 arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp);
5233 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize,
5234 BP_GET_COMPRESS(bp), type);
5236 if (!BP_IS_EMBEDDED(bp)) {
5237 hdr->b_dva = *BP_IDENTITY(bp);
5238 hdr->b_birth = BP_PHYSICAL_BIRTH(bp);
5239 exists = buf_hash_insert(hdr, &hash_lock);
5241 if (exists != NULL) {
5242 /* somebody beat us to the hash insert */
5243 mutex_exit(hash_lock);
5244 buf_discard_identity(hdr);
5245 arc_hdr_destroy(hdr);
5246 goto top; /* restart the IO request */
5250 * This block is in the ghost cache. If it was L2-only
5251 * (and thus didn't have an L1 hdr), we realloc the
5252 * header to add an L1 hdr.
5254 if (!HDR_HAS_L1HDR(hdr)) {
5255 hdr = arc_hdr_realloc(hdr, hdr_l2only_cache,
5259 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
5260 ASSERT(GHOST_STATE(hdr->b_l1hdr.b_state));
5261 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
5262 ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
5263 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
5264 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
5267 * This is a delicate dance that we play here.
5268 * This hdr is in the ghost list so we access it
5269 * to move it out of the ghost list before we
5270 * initiate the read. If it's a prefetch then
5271 * it won't have a callback so we'll remove the
5272 * reference that arc_buf_alloc_impl() created. We
5273 * do this after we've called arc_access() to
5274 * avoid hitting an assert in remove_reference().
5276 arc_access(hdr, hash_lock);
5277 arc_hdr_alloc_pabd(hdr);
5279 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
5280 size = arc_hdr_size(hdr);
5283 * If compression is enabled on the hdr, then we will do
5284 * RAW I/O and will store the compressed data in the hdr's
5285 * data block. Otherwise, the hdr's data block will contain
5286 * the uncompressed data.
5288 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF) {
5289 zio_flags |= ZIO_FLAG_RAW;
5292 if (*arc_flags & ARC_FLAG_PREFETCH)
5293 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH);
5294 if (*arc_flags & ARC_FLAG_L2CACHE)
5295 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);
5296 if (BP_GET_LEVEL(bp) > 0)
5297 arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT);
5298 if (*arc_flags & ARC_FLAG_PREDICTIVE_PREFETCH)
5299 arc_hdr_set_flags(hdr, ARC_FLAG_PREDICTIVE_PREFETCH);
5300 ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state));
5302 acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP);
5303 acb->acb_done = done;
5304 acb->acb_private = private;
5305 acb->acb_compressed = compressed_read;
5307 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
5308 hdr->b_l1hdr.b_acb = acb;
5309 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
5311 if (HDR_HAS_L2HDR(hdr) &&
5312 (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) {
5313 devw = hdr->b_l2hdr.b_dev->l2ad_writing;
5314 addr = hdr->b_l2hdr.b_daddr;
5316 * Lock out device removal.
5318 if (vdev_is_dead(vd) ||
5319 !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER))
5323 if (priority == ZIO_PRIORITY_ASYNC_READ)
5324 arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ);
5326 arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ);
5328 if (hash_lock != NULL)
5329 mutex_exit(hash_lock);
5332 * At this point, we have a level 1 cache miss. Try again in
5333 * L2ARC if possible.
5335 ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize);
5337 DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, blkptr_t *, bp,
5338 uint64_t, lsize, zbookmark_phys_t *, zb);
5339 ARCSTAT_BUMP(arcstat_misses);
5340 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr),
5341 demand, prefetch, !HDR_ISTYPE_METADATA(hdr),
5342 data, metadata, misses);
5344 if (vd != NULL && l2arc_ndev != 0 && !(l2arc_norw && devw)) {
5346 * Read from the L2ARC if the following are true:
5347 * 1. The L2ARC vdev was previously cached.
5348 * 2. This buffer still has L2ARC metadata.
5349 * 3. This buffer isn't currently writing to the L2ARC.
5350 * 4. The L2ARC entry wasn't evicted, which may
5351 * also have invalidated the vdev.
5352 * 5. This isn't a prefetch with l2arc_noprefetch set.
5354 if (HDR_HAS_L2HDR(hdr) &&
5355 !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) &&
5356 !(l2arc_noprefetch && HDR_PREFETCH(hdr))) {
5357 l2arc_read_callback_t *cb;
5359 DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr);
5360 ARCSTAT_BUMP(arcstat_l2_hits);
5361 atomic_inc_32(&hdr->b_l2hdr.b_hits);
5363 cb = kmem_zalloc(sizeof (l2arc_read_callback_t),
5365 cb->l2rcb_hdr = hdr;
5368 cb->l2rcb_flags = zio_flags;
5370 ASSERT(addr >= VDEV_LABEL_START_SIZE &&
5371 addr + lsize < vd->vdev_psize -
5372 VDEV_LABEL_END_SIZE);
5375 * l2arc read. The SCL_L2ARC lock will be
5376 * released by l2arc_read_done().
5377 * Issue a null zio if the underlying buffer
5378 * was squashed to zero size by compression.
5380 ASSERT3U(HDR_GET_COMPRESS(hdr), !=,
5381 ZIO_COMPRESS_EMPTY);
5382 rzio = zio_read_phys(pio, vd, addr,
5383 size, hdr->b_l1hdr.b_pabd,
5385 l2arc_read_done, cb, priority,
5386 zio_flags | ZIO_FLAG_DONT_CACHE |
5388 ZIO_FLAG_DONT_PROPAGATE |
5389 ZIO_FLAG_DONT_RETRY, B_FALSE);
5391 DTRACE_PROBE2(l2arc__read, vdev_t *, vd,
5393 ARCSTAT_INCR(arcstat_l2_read_bytes, size);
5395 if (*arc_flags & ARC_FLAG_NOWAIT) {
5400 ASSERT(*arc_flags & ARC_FLAG_WAIT);
5401 if (zio_wait(rzio) == 0)
5404 /* l2arc read error; goto zio_read() */
5406 DTRACE_PROBE1(l2arc__miss,
5407 arc_buf_hdr_t *, hdr);
5408 ARCSTAT_BUMP(arcstat_l2_misses);
5409 if (HDR_L2_WRITING(hdr))
5410 ARCSTAT_BUMP(arcstat_l2_rw_clash);
5411 spa_config_exit(spa, SCL_L2ARC, vd);
5415 spa_config_exit(spa, SCL_L2ARC, vd);
5416 if (l2arc_ndev != 0) {
5417 DTRACE_PROBE1(l2arc__miss,
5418 arc_buf_hdr_t *, hdr);
5419 ARCSTAT_BUMP(arcstat_l2_misses);
5423 rzio = zio_read(pio, spa, bp, hdr->b_l1hdr.b_pabd, size,
5424 arc_read_done, hdr, priority, zio_flags, zb);
5426 if (*arc_flags & ARC_FLAG_WAIT) {
5427 rc = zio_wait(rzio);
5431 ASSERT(*arc_flags & ARC_FLAG_NOWAIT);
5436 spa_read_history_add(spa, zb, *arc_flags);
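/*
 * Hedged usage sketch (editorial addition, not from the original source):
 * a synchronous, cached read via the interface above might look roughly
 * like the following. Error handling and the surrounding context are
 * omitted; "bp" and "zb" are assumed to already identify the block being
 * read, and arc_getbuf_func() is the generic done callback defined earlier
 * in this file.
 */
#if 0	/* illustration only */
	arc_flags_t aflags = ARC_FLAG_WAIT;
	arc_buf_t *abuf = NULL;

	int err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);
	if (err == 0) {
		/* ... use abuf->b_data ... */
		arc_buf_destroy(abuf, &abuf);
	}
#endif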
5441 arc_add_prune_callback(arc_prune_func_t *func, void *private)
5445 p = kmem_alloc(sizeof (*p), KM_SLEEP);
5447 p->p_private = private;
5448 list_link_init(&p->p_node);
5449 refcount_create(&p->p_refcnt);
5451 mutex_enter(&arc_prune_mtx);
5452 refcount_add(&p->p_refcnt, &arc_prune_list);
5453 list_insert_head(&arc_prune_list, p);
5454 mutex_exit(&arc_prune_mtx);
5460 arc_remove_prune_callback(arc_prune_t *p)
5462 boolean_t wait = B_FALSE;
5463 mutex_enter(&arc_prune_mtx);
5464 list_remove(&arc_prune_list, p);
5465 if (refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
5467 mutex_exit(&arc_prune_mtx);
5469 /* wait for arc_prune_task to finish */
5471 taskq_wait_outstanding(arc_prune_taskq, 0);
5472 ASSERT0(refcount_count(&p->p_refcnt));
5473 refcount_destroy(&p->p_refcnt);
5474 kmem_free(p, sizeof (*p));
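/*
 * Hedged usage sketch (editorial addition): a higher-level consumer
 * typically registers a prune callback once and removes it on teardown.
 * The callback name and private pointer below are hypothetical; the
 * callback is assumed to match arc_prune_func_t and to drop cached
 * objects (e.g. dentries and inodes) that hold ARC metadata buffers.
 */
#if 0	/* illustration only */
	arc_prune_t *pr = arc_add_prune_callback(my_prune_cb, my_private);
	/* ... filesystem lifetime ... */
	arc_remove_prune_callback(pr);
#endif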
5478 * Notify the arc that a block was freed, and thus will never be used again.
5481 arc_freed(spa_t *spa, const blkptr_t *bp)
5484 kmutex_t *hash_lock;
5485 uint64_t guid = spa_load_guid(spa);
5487 ASSERT(!BP_IS_EMBEDDED(bp));
5489 hdr = buf_hash_find(guid, bp, &hash_lock);
5494 * We might be trying to free a block that is still doing I/O
5495 * (i.e. prefetch) or has a reference (i.e. a dedup-ed,
5496 * dmu_sync-ed block). If this block is being prefetched, then it
5497 * would still have the ARC_FLAG_IO_IN_PROGRESS flag set on the hdr
5498 * until the I/O completes. A block may also have a reference if it is
5499 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would
5500 * have written the new block to its final resting place on disk but
5501 * without the dedup flag set. This would have left the hdr in the MRU
5502 * state and discoverable. When the txg finally syncs it detects that
5503 * the block was overridden in open context and issues an override I/O.
5504 * Since this is a dedup block, the override I/O will determine if the
5505 * block is already in the DDT. If so, then it will replace the io_bp
5506 * with the bp from the DDT and allow the I/O to finish. When the I/O
5507 * reaches the done callback, dbuf_write_override_done, it will
5508 * check to see if the io_bp and io_bp_override are identical.
5509 * If they are not, then it indicates that the bp was replaced with
5510 * the bp in the DDT and the override bp is freed. This allows
5511 * us to arrive here with a reference on a block that is being
5512 * freed. So if we have an I/O in progress, or a reference to
5513 * this hdr, then we don't destroy the hdr.
5515 if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) &&
5516 refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) {
5517 arc_change_state(arc_anon, hdr, hash_lock);
5518 arc_hdr_destroy(hdr);
5519 mutex_exit(hash_lock);
5521 mutex_exit(hash_lock);
5527 * Release this buffer from the cache, making it an anonymous buffer. This
5528 * must be done after a read and prior to modifying the buffer contents.
5529 * If the buffer has more than one reference, we must make
5530 * a new hdr for the buffer.
5533 arc_release(arc_buf_t *buf, void *tag)
5535 kmutex_t *hash_lock;
5537 arc_buf_hdr_t *hdr = buf->b_hdr;
5540 * It would be nice to assert that if it's DMU metadata (level >
5541 * 0 || it's the dnode file), then it must be syncing context.
5542 * But we don't know that information at this level.
5545 mutex_enter(&buf->b_evict_lock);
5547 ASSERT(HDR_HAS_L1HDR(hdr));
5550 * We don't grab the hash lock prior to this check, because if
5551 * the buffer's header is in the arc_anon state, it won't be
5552 * linked into the hash table.
5554 if (hdr->b_l1hdr.b_state == arc_anon) {
5555 mutex_exit(&buf->b_evict_lock);
5556 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
5557 ASSERT(!HDR_IN_HASH_TABLE(hdr));
5558 ASSERT(!HDR_HAS_L2HDR(hdr));
5559 ASSERT(HDR_EMPTY(hdr));
5561 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
5562 ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
5563 ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node));
5565 hdr->b_l1hdr.b_arc_access = 0;
5568 * If the buf is being overridden then it may already
5569 * have a hdr that is not empty.
5571 buf_discard_identity(hdr);
5577 hash_lock = HDR_LOCK(hdr);
5578 mutex_enter(hash_lock);
5581 * This assignment is only valid as long as the hash_lock is
5582 * held; we must be careful not to reference state or the
5583 * b_state field after dropping the lock.
5585 state = hdr->b_l1hdr.b_state;
5586 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
5587 ASSERT3P(state, !=, arc_anon);
5589 /* this buffer is not on any list */
5590 ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0);
5592 if (HDR_HAS_L2HDR(hdr)) {
5593 mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx);
5596 * We have to recheck this conditional again now that
5597 * we're holding the l2ad_mtx to prevent a race with
5598 * another thread which might be concurrently calling
5599 * l2arc_evict(). In that case, l2arc_evict() might have
5600 * destroyed the header's L2 portion as we were waiting
5601 * to acquire the l2ad_mtx.
5603 if (HDR_HAS_L2HDR(hdr))
5604 arc_hdr_l2hdr_destroy(hdr);
5606 mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx);
5610 * Do we have more than one buf?
5612 if (hdr->b_l1hdr.b_bufcnt > 1) {
5613 arc_buf_hdr_t *nhdr;
5614 uint64_t spa = hdr->b_spa;
5615 uint64_t psize = HDR_GET_PSIZE(hdr);
5616 uint64_t lsize = HDR_GET_LSIZE(hdr);
5617 enum zio_compress compress = HDR_GET_COMPRESS(hdr);
5618 arc_buf_contents_t type = arc_buf_type(hdr);
5619 VERIFY3U(hdr->b_type, ==, type);
5621 ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL);
5622 (void) remove_reference(hdr, hash_lock, tag);
5624 if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) {
5625 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
5626 ASSERT(ARC_BUF_LAST(buf));
5630 * Pull the data off of this hdr and attach it to
5631 * a new anonymous hdr. Also find the last buffer
5632 * in the hdr's buffer list.
5634 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);
5635 ASSERT3P(lastbuf, !=, NULL);
5638 * If the current arc_buf_t and the hdr are sharing their data
5639 * buffer, then we must stop sharing that block.
5641 if (arc_buf_is_shared(buf)) {
5642 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
5643 VERIFY(!arc_buf_is_shared(lastbuf));
5646 * First, sever the block sharing relationship between
5647 * buf and the arc_buf_hdr_t.
5649 arc_unshare_buf(hdr, buf);
5652 * Now we need to recreate the hdr's b_pabd. Since we
5653 * have lastbuf handy, we try to share with it, but if
5654 * we can't then we allocate a new b_pabd and copy the
5655 * data from buf into it.
5657 if (arc_can_share(hdr, lastbuf)) {
5658 arc_share_buf(hdr, lastbuf);
5660 arc_hdr_alloc_pabd(hdr);
5661 abd_copy_from_buf(hdr->b_l1hdr.b_pabd,
5662 buf->b_data, psize);
5664 VERIFY3P(lastbuf->b_data, !=, NULL);
5665 } else if (HDR_SHARED_DATA(hdr)) {
5667 * Uncompressed shared buffers are always at the end
5668 * of the list. Compressed buffers don't have the
5669 * same requirements. This makes it hard to
5670 * simply assert that the lastbuf is shared so
5671 * we rely on the hdr's compression flags to determine
5672 * if we have a compressed, shared buffer.
5674 ASSERT(arc_buf_is_shared(lastbuf) ||
5675 HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF);
5676 ASSERT(!ARC_BUF_SHARED(buf));
5678 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
5679 ASSERT3P(state, !=, arc_l2c_only);
5681 (void) refcount_remove_many(&state->arcs_size,
5682 arc_buf_size(buf), buf);
5684 if (refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
5685 ASSERT3P(state, !=, arc_l2c_only);
5686 (void) refcount_remove_many(&state->arcs_esize[type],
5687 arc_buf_size(buf), buf);
5690 hdr->b_l1hdr.b_bufcnt -= 1;
5691 arc_cksum_verify(buf);
5692 arc_buf_unwatch(buf);
5694 /* if this is the last uncompressed buf free the checksum */
5695 if (!arc_hdr_has_uncompressed_buf(hdr))
5696 arc_cksum_free(hdr);
5698 mutex_exit(hash_lock);
5701 * Allocate a new hdr. The new hdr will contain a b_pabd
5702 * buffer which will be freed in arc_write().
5704 nhdr = arc_hdr_alloc(spa, psize, lsize, compress, type);
5705 ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL);
5706 ASSERT0(nhdr->b_l1hdr.b_bufcnt);
5707 ASSERT0(refcount_count(&nhdr->b_l1hdr.b_refcnt));
5708 VERIFY3U(nhdr->b_type, ==, type);
5709 ASSERT(!HDR_SHARED_DATA(nhdr));
5711 nhdr->b_l1hdr.b_buf = buf;
5712 nhdr->b_l1hdr.b_bufcnt = 1;
5713 nhdr->b_l1hdr.b_mru_hits = 0;
5714 nhdr->b_l1hdr.b_mru_ghost_hits = 0;
5715 nhdr->b_l1hdr.b_mfu_hits = 0;
5716 nhdr->b_l1hdr.b_mfu_ghost_hits = 0;
5717 nhdr->b_l1hdr.b_l2_hits = 0;
5718 (void) refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
5721 mutex_exit(&buf->b_evict_lock);
5722 (void) refcount_add_many(&arc_anon->arcs_size,
5723 HDR_GET_LSIZE(nhdr), buf);
5725 mutex_exit(&buf->b_evict_lock);
5726 ASSERT(refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
5727 /* protected by hash lock, or hdr is on arc_anon */
5728 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
5729 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
5730 hdr->b_l1hdr.b_mru_hits = 0;
5731 hdr->b_l1hdr.b_mru_ghost_hits = 0;
5732 hdr->b_l1hdr.b_mfu_hits = 0;
5733 hdr->b_l1hdr.b_mfu_ghost_hits = 0;
5734 hdr->b_l1hdr.b_l2_hits = 0;
5735 arc_change_state(arc_anon, hdr, hash_lock);
5736 hdr->b_l1hdr.b_arc_access = 0;
5737 mutex_exit(hash_lock);
5739 buf_discard_identity(hdr);
5745 arc_released(arc_buf_t *buf)
5749 mutex_enter(&buf->b_evict_lock);
5750 released = (buf->b_data != NULL &&
5751 buf->b_hdr->b_l1hdr.b_state == arc_anon);
5752 mutex_exit(&buf->b_evict_lock);
5758 arc_referenced(arc_buf_t *buf)
5762 mutex_enter(&buf->b_evict_lock);
5763 referenced = (refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
5764 mutex_exit(&buf->b_evict_lock);
5765 return (referenced);
5770 arc_write_ready(zio_t *zio)
5772 arc_write_callback_t *callback = zio->io_private;
5773 arc_buf_t *buf = callback->awcb_buf;
5774 arc_buf_hdr_t *hdr = buf->b_hdr;
5775 uint64_t psize = BP_IS_HOLE(zio->io_bp) ? 0 : BP_GET_PSIZE(zio->io_bp);
5776 enum zio_compress compress;
5777 fstrans_cookie_t cookie = spl_fstrans_mark();
5779 ASSERT(HDR_HAS_L1HDR(hdr));
5780 ASSERT(!refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
5781 ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
5784 * If we're reexecuting this zio because the pool suspended, then
5785 * clean up any state that was previously set the first time the
5786 * callback was invoked.
5788 if (zio->io_flags & ZIO_FLAG_REEXECUTED) {
5789 arc_cksum_free(hdr);
5790 arc_buf_unwatch(buf);
5791 if (hdr->b_l1hdr.b_pabd != NULL) {
5792 if (arc_buf_is_shared(buf)) {
5793 arc_unshare_buf(hdr, buf);
5795 arc_hdr_free_pabd(hdr);
5799 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
5800 ASSERT(!HDR_SHARED_DATA(hdr));
5801 ASSERT(!arc_buf_is_shared(buf));
5803 callback->awcb_ready(zio, buf, callback->awcb_private);
5805 if (HDR_IO_IN_PROGRESS(hdr))
5806 ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED);
5808 arc_cksum_compute(buf);
5809 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
5811 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) {
5812 compress = ZIO_COMPRESS_OFF;
5814 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(zio->io_bp));
5815 compress = BP_GET_COMPRESS(zio->io_bp);
5817 HDR_SET_PSIZE(hdr, psize);
5818 arc_hdr_set_compress(hdr, compress);
5821 * Fill the hdr with data. If the hdr is compressed, the data we want
5822 * is available from the zio, otherwise we can take it from the buf.
5824 * We might be able to share the buf's data with the hdr here. However,
5825 * doing so would cause the ARC to be full of linear ABDs if we write a
5826 * lot of shareable data. As a compromise, we check whether scattered
5827 * ABDs are allowed, and assume that if they are then the user wants
5828 * the ARC to be primarily filled with them regardless of the data being
5829 * written. Therefore, if they're allowed then we allocate one and copy
5830 * the data into it; otherwise, we share the data directly if we can.
5832 if (zfs_abd_scatter_enabled || !arc_can_share(hdr, buf)) {
5833 arc_hdr_alloc_pabd(hdr);
5836 * Ideally, we would always copy the io_abd into b_pabd, but the
5837 * user may have disabled compressed ARC, thus we must check the
5838 * hdr's compression setting rather than the io_bp's.
5840 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF) {
5841 ASSERT3U(BP_GET_COMPRESS(zio->io_bp), !=,
5843 ASSERT3U(psize, >, 0);
5845 abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize);
5847 ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr));
5849 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data,
5853 ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd));
5854 ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf));
5855 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
5857 arc_share_buf(hdr, buf);
5860 arc_hdr_verify(hdr, zio->io_bp);
5861 spl_fstrans_unmark(cookie);
5865 arc_write_children_ready(zio_t *zio)
5867 arc_write_callback_t *callback = zio->io_private;
5868 arc_buf_t *buf = callback->awcb_buf;
5870 callback->awcb_children_ready(zio, buf, callback->awcb_private);
5874 * The SPA calls this callback for each physical write that happens on behalf
5875 * of a logical write. See the comment in dbuf_write_physdone() for details.
5878 arc_write_physdone(zio_t *zio)
5880 arc_write_callback_t *cb = zio->io_private;
5881 if (cb->awcb_physdone != NULL)
5882 cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private);
5886 arc_write_done(zio_t *zio)
5888 arc_write_callback_t *callback = zio->io_private;
5889 arc_buf_t *buf = callback->awcb_buf;
5890 arc_buf_hdr_t *hdr = buf->b_hdr;
5892 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
5894 if (zio->io_error == 0) {
5895 arc_hdr_verify(hdr, zio->io_bp);
5897 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) {
5898 buf_discard_identity(hdr);
5900 hdr->b_dva = *BP_IDENTITY(zio->io_bp);
5901 hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp);
5904 ASSERT(HDR_EMPTY(hdr));
5908 * If the block to be written was all-zero or compressed enough to be
5909 * embedded in the BP, no write was performed so there will be no
5910 * dva/birth/checksum. The buffer must therefore remain anonymous (and uncached).
5913 if (!HDR_EMPTY(hdr)) {
5914 arc_buf_hdr_t *exists;
5915 kmutex_t *hash_lock;
5917 ASSERT3U(zio->io_error, ==, 0);
5919 arc_cksum_verify(buf);
5921 exists = buf_hash_insert(hdr, &hash_lock);
5922 if (exists != NULL) {
5924 * This can only happen if we overwrite for
5925 * sync-to-convergence, because we remove
5926 * buffers from the hash table when we arc_free().
5928 if (zio->io_flags & ZIO_FLAG_IO_REWRITE) {
5929 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
5930 panic("bad overwrite, hdr=%p exists=%p",
5931 (void *)hdr, (void *)exists);
5932 ASSERT(refcount_is_zero(
5933 &exists->b_l1hdr.b_refcnt));
5934 arc_change_state(arc_anon, exists, hash_lock);
5935 mutex_exit(hash_lock);
5936 arc_hdr_destroy(exists);
5937 exists = buf_hash_insert(hdr, &hash_lock);
5938 ASSERT3P(exists, ==, NULL);
5939 } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) {
5941 ASSERT(zio->io_prop.zp_nopwrite);
5942 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
5943 panic("bad nopwrite, hdr=%p exists=%p",
5944 (void *)hdr, (void *)exists);
5947 ASSERT(hdr->b_l1hdr.b_bufcnt == 1);
5948 ASSERT(hdr->b_l1hdr.b_state == arc_anon);
5949 ASSERT(BP_GET_DEDUP(zio->io_bp));
5950 ASSERT(BP_GET_LEVEL(zio->io_bp) == 0);
5953 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
5954 /* if it's not anon, we are doing a scrub */
5955 if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon)
5956 arc_access(hdr, hash_lock);
5957 mutex_exit(hash_lock);
5959 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
5962 ASSERT(!refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
5963 callback->awcb_done(zio, buf, callback->awcb_private);
5965 abd_put(zio->io_abd);
5966 kmem_free(callback, sizeof (arc_write_callback_t));
5970 arc_write(zio_t *pio, spa_t *spa, uint64_t txg,
5971 blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc,
5972 const zio_prop_t *zp, arc_done_func_t *ready,
5973 arc_done_func_t *children_ready, arc_done_func_t *physdone,
5974 arc_done_func_t *done, void *private, zio_priority_t priority,
5975 int zio_flags, const zbookmark_phys_t *zb)
5977 arc_buf_hdr_t *hdr = buf->b_hdr;
5978 arc_write_callback_t *callback;
5981 ASSERT3P(ready, !=, NULL);
5982 ASSERT3P(done, !=, NULL);
5983 ASSERT(!HDR_IO_ERROR(hdr));
5984 ASSERT(!HDR_IO_IN_PROGRESS(hdr));
5985 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
5986 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
5988 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);
5989 if (ARC_BUF_COMPRESSED(buf)) {
5990 ASSERT3U(zp->zp_compress, !=, ZIO_COMPRESS_OFF);
5991 ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf));
5992 zio_flags |= ZIO_FLAG_RAW;
5994 callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP);
5995 callback->awcb_ready = ready;
5996 callback->awcb_children_ready = children_ready;
5997 callback->awcb_physdone = physdone;
5998 callback->awcb_done = done;
5999 callback->awcb_private = private;
6000 callback->awcb_buf = buf;
6003 * The hdr's b_pabd is now stale, free it now. A new data block
6004 * will be allocated when the zio pipeline calls arc_write_ready().
6006 if (hdr->b_l1hdr.b_pabd != NULL) {
6008 * If the buf is currently sharing the data block with
6009 * the hdr then we need to break that relationship here.
6010 * The hdr will remain with a NULL data pointer and the
6011 * buf will take sole ownership of the block.
6013 if (arc_buf_is_shared(buf)) {
6014 arc_unshare_buf(hdr, buf);
6016 arc_hdr_free_pabd(hdr);
6018 VERIFY3P(buf->b_data, !=, NULL);
6019 arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF);
6021 ASSERT(!arc_buf_is_shared(buf));
6022 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
6024 zio = zio_write(pio, spa, txg, bp,
6025 abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)),
6026 HDR_GET_LSIZE(hdr), arc_buf_size(buf), zp,
6028 (children_ready != NULL) ? arc_write_children_ready : NULL,
6029 arc_write_physdone, arc_write_done, callback,
6030 priority, zio_flags, zb);
6036 arc_memory_throttle(uint64_t reserve, uint64_t txg)
6039 uint64_t available_memory = ptob(freemem);
6040 static uint64_t page_load = 0;
6041 static uint64_t last_txg = 0;
6043 pgcnt_t minfree = btop(arc_sys_free / 4);
6048 MIN(available_memory, vmem_size(heap_arena, VMEM_FREE));
6051 if (available_memory > arc_all_memory() * arc_lotsfree_percent / 100)
6054 if (txg > last_txg) {
6059 * If we are in pageout, we know that memory is already tight,
6060 * the arc is already going to be evicting, so we just want to
6061 * continue to let page writes occur as quickly as possible.
6063 if (current_is_kswapd()) {
6064 if (page_load > MAX(ptob(minfree), available_memory) / 4) {
6065 DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
6066 return (SET_ERROR(ERESTART));
6068 /* Note: reserve is inflated, so we deflate */
6069 page_load += reserve / 8;
6071 } else if (page_load > 0 && arc_reclaim_needed()) {
6072 /* memory is low, delay before restarting */
6073 ARCSTAT_INCR(arcstat_memory_throttle_count, 1);
6074 DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
6075 return (SET_ERROR(EAGAIN));
6083 arc_tempreserve_clear(uint64_t reserve)
6085 atomic_add_64(&arc_tempreserve, -reserve);
6086 ASSERT((int64_t)arc_tempreserve >= 0);
6090 arc_tempreserve_space(uint64_t reserve, uint64_t txg)
6096 reserve > arc_c/4 &&
6097 reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT))
6098 arc_c = MIN(arc_c_max, reserve * 4);
6101 * Throttle when the calculated memory footprint for the TXG
6102 * exceeds the target ARC size.
6104 if (reserve > arc_c) {
6105 DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
6106 return (SET_ERROR(ERESTART));
6110 * Don't count loaned bufs as in flight dirty data to prevent long
6111 * network delays from blocking transactions that are ready to be
6112 * assigned to a txg.
6115 /* assert that it has not wrapped around */
6116 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
6118 anon_size = MAX((int64_t)(refcount_count(&arc_anon->arcs_size) -
6119 arc_loaned_bytes), 0);
6122 * Writes will, almost always, require additional memory allocations
6123 * in order to compress/encrypt/etc the data. We therefore need to
6124 * make sure that there is sufficient available memory for this.
6126 error = arc_memory_throttle(reserve, txg);
6131 * Throttle writes when the amount of dirty data in the cache
6132 * gets too large. We try to keep the cache less than half full
6133 * of dirty blocks so that our sync times don't grow too large.
6134 * Note: if two requests come in concurrently, we might let them
6135 * both succeed, when one of them should fail. Not a huge deal.
6138 if (reserve + arc_tempreserve + anon_size > arc_c / 2 &&
6139 anon_size > arc_c / 4) {
6140 uint64_t meta_esize =
6141 refcount_count(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
6142 uint64_t data_esize =
6143 refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
6144 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
6145 "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
6146 arc_tempreserve >> 10, meta_esize >> 10,
6147 data_esize >> 10, reserve >> 10, arc_c >> 10);
6148 DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
6149 return (SET_ERROR(ERESTART));
6151 atomic_add_64(&arc_tempreserve, reserve);
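
/*
 * Illustrative sketch only (not part of the ARC, and never called): a
 * hypothetical helper that mirrors the dirty-data throttle test made in
 * arc_tempreserve_space() above, with the inputs passed in explicitly.
 * For example, with arc_c_example = 4GB the thresholds work out to 2GB
 * for the combined in-flight dirty data and 1GB for anonymous buffers.
 */
static inline boolean_t
arc_dirty_throttle_example(uint64_t reserve, uint64_t tempreserve,
    uint64_t anon_size, uint64_t arc_c_example)
{
	/* Throttle (ERESTART in the real code) only when both limits trip. */
	if (reserve + tempreserve + anon_size > arc_c_example / 2 &&
	    anon_size > arc_c_example / 4)
		return (B_TRUE);

	return (B_FALSE);
}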
6156 arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
6157 kstat_named_t *evict_data, kstat_named_t *evict_metadata)
6159 size->value.ui64 = refcount_count(&state->arcs_size);
6160 evict_data->value.ui64 =
6161 refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
6162 evict_metadata->value.ui64 =
6163 refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
6167 arc_kstat_update(kstat_t *ksp, int rw)
6169 arc_stats_t *as = ksp->ks_data;
6171 if (rw == KSTAT_WRITE) {
6174 arc_kstat_update_state(arc_anon,
6175 &as->arcstat_anon_size,
6176 &as->arcstat_anon_evictable_data,
6177 &as->arcstat_anon_evictable_metadata);
6178 arc_kstat_update_state(arc_mru,
6179 &as->arcstat_mru_size,
6180 &as->arcstat_mru_evictable_data,
6181 &as->arcstat_mru_evictable_metadata);
6182 arc_kstat_update_state(arc_mru_ghost,
6183 &as->arcstat_mru_ghost_size,
6184 &as->arcstat_mru_ghost_evictable_data,
6185 &as->arcstat_mru_ghost_evictable_metadata);
6186 arc_kstat_update_state(arc_mfu,
6187 &as->arcstat_mfu_size,
6188 &as->arcstat_mfu_evictable_data,
6189 &as->arcstat_mfu_evictable_metadata);
6190 arc_kstat_update_state(arc_mfu_ghost,
6191 &as->arcstat_mfu_ghost_size,
6192 &as->arcstat_mfu_ghost_evictable_data,
6193 &as->arcstat_mfu_ghost_evictable_metadata);
6200 * This function *must* return indices evenly distributed between all
6201 * sublists of the multilist. This is needed due to how the ARC eviction
6202 * code is laid out; arc_evict_state() assumes ARC buffers are evenly
6203 * distributed between all sublists and uses this assumption when
6204 * deciding which sublist to evict from and how much to evict from it.
6207 arc_state_multilist_index_func(multilist_t *ml, void *obj)
6209 arc_buf_hdr_t *hdr = obj;
6212 * We rely on b_dva to generate evenly distributed index
6213 * numbers using buf_hash below. So, as an added precaution,
6214 * let's make sure we never add empty buffers to the arc lists.
6216 ASSERT(!HDR_EMPTY(hdr));
6219 * The assumption here is that the hash value for a given
6220 * arc_buf_hdr_t will remain constant throughout its lifetime
6221 * (i.e. its b_spa, b_dva, and b_birth fields don't change).
6222 * Thus, we don't need to store the header's sublist index
6223 * on insertion, as this index can be recalculated on removal.
6225 * Also, the low order bits of the hash value are thought to be
6226 * distributed evenly. Otherwise, in the case that the multilist
6227 * has a power of two number of sublists, each sublist's usage
6228 * would not be evenly distributed.
6230 return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
6231 multilist_get_num_sublists(ml));
6235 * Called during module initialization and periodically thereafter to
6236 * apply reasonable changes to the exposed performance tunings. Non-zero
6237 * zfs_* values which differ from the currently set values will be applied.
6240 arc_tuning_update(void)
6242 uint64_t percent, allmem = arc_all_memory();
6244 /* Valid range: 64M - <all physical memory> */
6245 if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) &&
6246 (zfs_arc_max > 64 << 20) && (zfs_arc_max < allmem) &&
6247 (zfs_arc_max > arc_c_min)) {
6248 arc_c_max = zfs_arc_max;
6250 arc_p = (arc_c >> 1);
6251 /* Valid range of arc_meta_limit: arc_meta_min - arc_c_max */
6252 percent = MIN(zfs_arc_meta_limit_percent, 100);
6253 arc_meta_limit = MAX(arc_meta_min, (percent * arc_c_max) / 100);
6254 percent = MIN(zfs_arc_dnode_limit_percent, 100);
6255 arc_dnode_limit = (percent * arc_meta_limit) / 100;
6258 /* Valid range: 32M - <arc_c_max> */
6259 if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) &&
6260 (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) &&
6261 (zfs_arc_min <= arc_c_max)) {
6262 arc_c_min = zfs_arc_min;
6263 arc_c = MAX(arc_c, arc_c_min);
6266 /* Valid range: 16M - <arc_c_max> */
6267 if ((zfs_arc_meta_min) && (zfs_arc_meta_min != arc_meta_min) &&
6268 (zfs_arc_meta_min >= 1ULL << SPA_MAXBLOCKSHIFT) &&
6269 (zfs_arc_meta_min <= arc_c_max)) {
6270 arc_meta_min = zfs_arc_meta_min;
6271 arc_meta_limit = MAX(arc_meta_limit, arc_meta_min);
6272 arc_dnode_limit = arc_meta_limit / 10;
6275 /* Valid range: <arc_meta_min> - <arc_c_max> */
6276 if ((zfs_arc_meta_limit) && (zfs_arc_meta_limit != arc_meta_limit) &&
6277 (zfs_arc_meta_limit >= zfs_arc_meta_min) &&
6278 (zfs_arc_meta_limit <= arc_c_max))
6279 arc_meta_limit = zfs_arc_meta_limit;
6281 /* Valid range: <arc_meta_min> - <arc_c_max> */
6282 if ((zfs_arc_dnode_limit) && (zfs_arc_dnode_limit != arc_dnode_limit) &&
6283 (zfs_arc_dnode_limit >= zfs_arc_meta_min) &&
6284 (zfs_arc_dnode_limit <= arc_c_max))
6285 arc_dnode_limit = zfs_arc_dnode_limit;
6287 /* Valid range: 1 - N */
6288 if (zfs_arc_grow_retry)
6289 arc_grow_retry = zfs_arc_grow_retry;
6291 /* Valid range: 1 - N */
6292 if (zfs_arc_shrink_shift) {
6293 arc_shrink_shift = zfs_arc_shrink_shift;
6294 arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift - 1);
6297 /* Valid range: 1 - N */
6298 if (zfs_arc_p_min_shift)
6299 arc_p_min_shift = zfs_arc_p_min_shift;
6301 /* Valid range: 1 - N ticks */
6302 if (zfs_arc_min_prefetch_lifespan)
6303 arc_min_prefetch_lifespan = zfs_arc_min_prefetch_lifespan;
6305 /* Valid range: 0 - 100 */
6306 if ((zfs_arc_lotsfree_percent >= 0) &&
6307 (zfs_arc_lotsfree_percent <= 100))
6308 arc_lotsfree_percent = zfs_arc_lotsfree_percent;
6310 /* Valid range: 0 - <all physical memory> */
6311 if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free))
6312 arc_sys_free = MIN(MAX(zfs_arc_sys_free, 0), allmem);
6317 arc_state_init(void)
6319 arc_anon = &ARC_anon;
6321 arc_mru_ghost = &ARC_mru_ghost;
6323 arc_mfu_ghost = &ARC_mfu_ghost;
6324 arc_l2c_only = &ARC_l2c_only;
6326 arc_mru->arcs_list[ARC_BUFC_METADATA] =
6327 multilist_create(sizeof (arc_buf_hdr_t),
6328 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6329 arc_state_multilist_index_func);
6330 arc_mru->arcs_list[ARC_BUFC_DATA] =
6331 multilist_create(sizeof (arc_buf_hdr_t),
6332 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6333 arc_state_multilist_index_func);
6334 arc_mru_ghost->arcs_list[ARC_BUFC_METADATA] =
6335 multilist_create(sizeof (arc_buf_hdr_t),
6336 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6337 arc_state_multilist_index_func);
6338 arc_mru_ghost->arcs_list[ARC_BUFC_DATA] =
6339 multilist_create(sizeof (arc_buf_hdr_t),
6340 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6341 arc_state_multilist_index_func);
6342 arc_mfu->arcs_list[ARC_BUFC_METADATA] =
6343 multilist_create(sizeof (arc_buf_hdr_t),
6344 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6345 arc_state_multilist_index_func);
6346 arc_mfu->arcs_list[ARC_BUFC_DATA] =
6347 multilist_create(sizeof (arc_buf_hdr_t),
6348 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6349 arc_state_multilist_index_func);
6350 arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA] =
6351 multilist_create(sizeof (arc_buf_hdr_t),
6352 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6353 arc_state_multilist_index_func);
6354 arc_mfu_ghost->arcs_list[ARC_BUFC_DATA] =
6355 multilist_create(sizeof (arc_buf_hdr_t),
6356 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6357 arc_state_multilist_index_func);
6358 arc_l2c_only->arcs_list[ARC_BUFC_METADATA] =
6359 multilist_create(sizeof (arc_buf_hdr_t),
6360 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6361 arc_state_multilist_index_func);
6362 arc_l2c_only->arcs_list[ARC_BUFC_DATA] =
6363 multilist_create(sizeof (arc_buf_hdr_t),
6364 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
6365 arc_state_multilist_index_func);
6367 refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
6368 refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
6369 refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
6370 refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
6371 refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
6372 refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
6373 refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
6374 refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
6375 refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
6376 refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
6377 refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
6378 refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
6380 refcount_create(&arc_anon->arcs_size);
6381 refcount_create(&arc_mru->arcs_size);
6382 refcount_create(&arc_mru_ghost->arcs_size);
6383 refcount_create(&arc_mfu->arcs_size);
6384 refcount_create(&arc_mfu_ghost->arcs_size);
6385 refcount_create(&arc_l2c_only->arcs_size);
6387 arc_anon->arcs_state = ARC_STATE_ANON;
6388 arc_mru->arcs_state = ARC_STATE_MRU;
6389 arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST;
6390 arc_mfu->arcs_state = ARC_STATE_MFU;
6391 arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST;
6392 arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY;
6396 arc_state_fini(void)
6398 refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
6399 refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
6400 refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
6401 refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
6402 refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
6403 refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
6404 refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
6405 refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
6406 refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
6407 refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
6408 refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
6409 refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
6411 refcount_destroy(&arc_anon->arcs_size);
6412 refcount_destroy(&arc_mru->arcs_size);
6413 refcount_destroy(&arc_mru_ghost->arcs_size);
6414 refcount_destroy(&arc_mfu->arcs_size);
6415 refcount_destroy(&arc_mfu_ghost->arcs_size);
6416 refcount_destroy(&arc_l2c_only->arcs_size);
6418 multilist_destroy(arc_mru->arcs_list[ARC_BUFC_METADATA]);
6419 multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]);
6420 multilist_destroy(arc_mfu->arcs_list[ARC_BUFC_METADATA]);
6421 multilist_destroy(arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]);
6422 multilist_destroy(arc_mru->arcs_list[ARC_BUFC_DATA]);
6423 multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_DATA]);
6424 multilist_destroy(arc_mfu->arcs_list[ARC_BUFC_DATA]);
6425 multilist_destroy(arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]);
6426 multilist_destroy(arc_l2c_only->arcs_list[ARC_BUFC_METADATA]);
6427 multilist_destroy(arc_l2c_only->arcs_list[ARC_BUFC_DATA]);
6439 uint64_t percent, allmem = arc_all_memory();
6441 mutex_init(&arc_reclaim_lock, NULL, MUTEX_DEFAULT, NULL);
6442 cv_init(&arc_reclaim_thread_cv, NULL, CV_DEFAULT, NULL);
6443 cv_init(&arc_reclaim_waiters_cv, NULL, CV_DEFAULT, NULL);
6445 /* Convert seconds to clock ticks */
6446 arc_min_prefetch_lifespan = 1 * hz;
6450 * Register a shrinker to support synchronous (direct) memory
6451 * reclaim from the arc. This is done to prevent kswapd from
6452 * swapping out pages when it is preferable to shrink the arc.
6454 spl_register_shrinker(&arc_shrinker);
6456 /* Set to 1/64 of all memory or a minimum of 512K */
6457 arc_sys_free = MAX(allmem / 64, (512 * 1024));
6461 /* Set max to 1/2 of all memory */
6462 arc_c_max = allmem / 2;
6465 * In userland, there's only the memory pressure that we artificially
6466 * create (see arc_available_memory()). Don't let arc_c get too
6467 * small, because it can cause transactions to be larger than
6468 * arc_c, causing arc_tempreserve_space() to fail.
6471 arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT);
6473 arc_c_min = 2ULL << SPA_MAXBLOCKSHIFT;
6477 arc_p = (arc_c >> 1);
6480 /* Set min to 1/2 of arc_c_min */
6481 arc_meta_min = 1ULL << SPA_MAXBLOCKSHIFT;
6482 /* Initialize maximum observed usage to zero */
6485 * Set arc_meta_limit to a percent of arc_c_max with a floor of
6486 * arc_meta_min, and a ceiling of arc_c_max.
6488 percent = MIN(zfs_arc_meta_limit_percent, 100);
6489 arc_meta_limit = MAX(arc_meta_min, (percent * arc_c_max) / 100);
6490 percent = MIN(zfs_arc_dnode_limit_percent, 100);
6491 arc_dnode_limit = (percent * arc_meta_limit) / 100;
6493 /* Apply user specified tunings */
6494 arc_tuning_update();
6496 /* if kmem_flags are set, let's try to use less memory */
6497 if (kmem_debugging())
6499 if (arc_c < arc_c_min)
6505 list_create(&arc_prune_list, sizeof (arc_prune_t),
6506 offsetof(arc_prune_t, p_node));
6507 mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL);
6509 arc_prune_taskq = taskq_create("arc_prune", max_ncpus, defclsyspri,
6510 max_ncpus, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC);
6512 arc_reclaim_thread_exit = B_FALSE;
6514 arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED,
6515 sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL);
6517 if (arc_ksp != NULL) {
6518 arc_ksp->ks_data = &arc_stats;
6519 arc_ksp->ks_update = arc_kstat_update;
6520 kstat_install(arc_ksp);
6523 (void) thread_create(NULL, 0, arc_reclaim_thread, NULL, 0, &p0,
6524 TS_RUN, defclsyspri);
6530 * Calculate maximum amount of dirty data per pool.
6532 * If it has been set by a module parameter, take that.
6533 * Otherwise, use a percentage of physical memory defined by
6534 * zfs_dirty_data_max_percent (default 10%) with a cap at
6535 * zfs_dirty_data_max_max (default 4G or 25% of physical memory).
6537 if (zfs_dirty_data_max_max == 0)
6538 zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024,
6539 allmem * zfs_dirty_data_max_max_percent / 100);
6541 if (zfs_dirty_data_max == 0) {
6542 zfs_dirty_data_max = allmem *
6543 zfs_dirty_data_max_percent / 100;
6544 zfs_dirty_data_max = MIN(zfs_dirty_data_max,
6545 zfs_dirty_data_max_max);
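
	/*
	 * Worked example (illustrative, assuming the default percentages):
	 * on a system with 32GB of physical memory,
	 *   zfs_dirty_data_max_max = MIN(4GB, 32GB * 25%) = 4GB
	 *   zfs_dirty_data_max     = MIN(32GB * 10%, 4GB) = 3.2GB
	 */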
6555 spl_unregister_shrinker(&arc_shrinker);
6556 #endif /* _KERNEL */
6558 mutex_enter(&arc_reclaim_lock);
6559 arc_reclaim_thread_exit = B_TRUE;
6561 * The reclaim thread will set arc_reclaim_thread_exit back to
6562 * B_FALSE when it is finished exiting; we're waiting for that.
6564 while (arc_reclaim_thread_exit) {
6565 cv_signal(&arc_reclaim_thread_cv);
6566 cv_wait(&arc_reclaim_thread_cv, &arc_reclaim_lock);
6568 mutex_exit(&arc_reclaim_lock);
6570 /* Use B_TRUE to ensure *all* buffers are evicted */
6571 arc_flush(NULL, B_TRUE);
6575 if (arc_ksp != NULL) {
6576 kstat_delete(arc_ksp);
6580 taskq_wait(arc_prune_taskq);
6581 taskq_destroy(arc_prune_taskq);
6583 mutex_enter(&arc_prune_mtx);
6584 while ((p = list_head(&arc_prune_list)) != NULL) {
6585 list_remove(&arc_prune_list, p);
6586 refcount_remove(&p->p_refcnt, &arc_prune_list);
6587 refcount_destroy(&p->p_refcnt);
6588 kmem_free(p, sizeof (*p));
6590 mutex_exit(&arc_prune_mtx);
6592 list_destroy(&arc_prune_list);
6593 mutex_destroy(&arc_prune_mtx);
6594 mutex_destroy(&arc_reclaim_lock);
6595 cv_destroy(&arc_reclaim_thread_cv);
6596 cv_destroy(&arc_reclaim_waiters_cv);
6601 ASSERT0(arc_loaned_bytes);
6607 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
6608 * It uses dedicated storage devices to hold cached data, which are populated
6609 * using large infrequent writes. The main role of this cache is to boost
6610 * the performance of random read workloads. The intended L2ARC devices
6611 * include short-stroked disks, solid state disks, and other media with
6612 * substantially faster read latency than disk.
 *                 +-----------------------+
 *                 |         ARC           |
 *                 +-----------------------+
 *                    |         ^     ^
 *                    |         |     |
 *      l2arc_feed_thread()    arc_read()
 *                    |         |     |
 *                    |  l2arc read   |
 *                    V         |     |
 *               +---------------+    |
 *               |     L2ARC     |    |
 *               +---------------+    |
 *                   |    ^           |
 *          l2arc_write() |           |
 *                   |    |           |
 *                   V    |           |
 *                 +-------+      +-------+
 *                 | vdev  |      | vdev  |
 *                 | cache |      | cache |
 *                 +-------+      +-------+
 *                 +=========+     .-----.
 *                 :  L2ARC  :    |-_____-|
 *                 : devices :    | Disks |
 *                 +=========+    `-_____-'
6639 * Read requests are satisfied from the following sources, in order:
6641 * 1) ARC
6642 * 2) vdev cache of L2ARC devices
6643 * 3) L2ARC devices
6644 * 4) vdev cache of disks
6645 * 5) disks
6647 * Some L2ARC device types exhibit extremely slow write performance.
6648 * To accommodate for this there are some significant differences between
6649 * the L2ARC and traditional cache design:
6651 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from
6652 * the ARC behave as usual, freeing buffers and placing headers on ghost
6653 * lists. The ARC does not send buffers to the L2ARC during eviction as
6654 * this would add inflated write latencies for all ARC memory pressure.
6656 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
6657 * It does this by periodically scanning buffers from the eviction-end of
6658 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
6659 * not already there. It scans until a headroom of buffers is satisfied,
6660 * which itself is a buffer for ARC eviction. If a compressible buffer is
6661 * found during scanning and selected for writing to an L2ARC device, we
6662 * temporarily boost scanning headroom during the next scan cycle to make
6663 * sure we adapt to compression effects (which might significantly reduce
6664 * the data volume we write to L2ARC). The thread that does this is
6665 * l2arc_feed_thread(), illustrated below; example sizes are included to
6666 * provide a better sense of ratio than this diagram:
 *              head -->                        tail
 *               +---------------------+----------+
 *       ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.   # already on L2ARC
 *               +---------------------+----------+   |   o L2ARC eligible
 *       ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->|   : ARC buffer
 *               +---------------------+----------+   |
 *                    15.9 Gbytes      ^ 32 Mbytes    |
 *                                  headroom          |
 *                                            l2arc_feed_thread()
 *                                                     |
 *                        l2arc write hand <--[oooo]--'
 *                                |
 *                                V
 *                  +==============================+
 *        L2ARC dev |####|#|###|###|    |####| ... |
 *                  +==============================+
6687 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
6688 * evicted, then the L2ARC has cached a buffer much sooner than it probably
6689 * needed to, potentially wasting L2ARC device bandwidth and storage. It is
6690 * safe to say that this is an uncommon case, since buffers at the end of
6691 * the ARC lists have moved there due to inactivity.
6693 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
6694 * then the L2ARC simply misses copying some buffers. This serves as a
6695 * pressure valve to prevent heavy read workloads from both stalling the ARC
6696 * with waits and clogging the L2ARC with writes. This also helps prevent
6697 * the potential for the L2ARC to churn if it attempts to cache content too
6698 * quickly, such as during backups of the entire pool.
6700 * 5. After system boot and before the ARC has filled main memory, there are
6701 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
6702 * lists can remain mostly static. Instead of searching from the tail of these
6703 * lists as pictured, the l2arc_feed_thread() will search from the list heads
6704 * for eligible buffers, greatly increasing its chance of finding them.
6706 * The L2ARC device write speed is also boosted during this time so that
6707 * the L2ARC warms up faster. Since there have been no ARC evictions yet,
6708 * there are no L2ARC reads, and no fear of degrading read performance
6709 * through increased writes.
6711 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
6712 * the vdev queue can aggregate them into larger and fewer writes. Each
6713 * device is written to in a rotor fashion, sweeping writes through
6714 * available space then repeating.
6716 * 7. The L2ARC does not store dirty content. It never needs to flush
6717 * write buffers back to disk based storage.
6719 * 8. If an ARC buffer is written (and dirtied) which also exists in the
6720 * L2ARC, the now stale L2ARC buffer is immediately dropped.
6722 * The performance of the L2ARC can be tweaked by a number of tunables, which
6723 * may be necessary for different workloads:
6725 * l2arc_write_max max write bytes per interval
6726 * l2arc_write_boost extra write bytes during device warmup
6727 * l2arc_noprefetch skip caching prefetched buffers
6728 * l2arc_headroom number of max device writes to precache
6729 * l2arc_headroom_boost when we find compressed buffers during ARC
6730 * scanning, we multiply headroom by this
6731 * percentage factor for the next scan cycle,
6732 * since more compressed buffers are likely to be present
6734 * l2arc_feed_secs seconds between L2ARC writing
6736 * Tunables may be removed or added as future performance improvements are
6737 * integrated, and also may become zpool properties.
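 *
 * As an illustrative sketch (a simplification, not lifted verbatim from the
 * code), the tunables above combine into a single feed cycle's budget roughly
 * as follows; compare l2arc_write_size() and the headroom calculation in
 * l2arc_write_buffers() below:
 *
 *	size = l2arc_write_max;
 *	if (arc_warm == B_FALSE)
 *		size += l2arc_write_boost;                 (device warm-up)
 *	headroom = size * l2arc_headroom;                  (scan depth)
 *	if (zfs_compressed_arc_enabled)
 *		headroom = (headroom * l2arc_headroom_boost) / 100;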
6739 * There are three key functions that control how the L2ARC warms up:
6741 * l2arc_write_eligible() check if a buffer is eligible to cache
6742 * l2arc_write_size() calculate how much to write
6743 * l2arc_write_interval() calculate sleep delay between writes
6745 * These three functions determine what to write, how much, and how quickly to send writes.
6750 l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
6753 * A buffer is *not* eligible for the L2ARC if it:
6754 * 1. belongs to a different spa.
6755 * 2. is already cached on the L2ARC.
6756 * 3. has an I/O in progress (it may be an incomplete read).
6757 * 4. is flagged not eligible (zfs property).
6759 if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
6760 HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
6767 l2arc_write_size(void)
6772 * Make sure our globals have meaningful values in case the user altered them.
6775 size = l2arc_write_max;
6777 cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
6778 "be greater than zero, resetting it to the default (%d)",
6780 size = l2arc_write_max = L2ARC_WRITE_SIZE;
6783 if (arc_warm == B_FALSE)
6784 size += l2arc_write_boost;
6791 l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
6793 clock_t interval, next, now;
6796 * If the ARC lists are busy, increase our write rate; if the
6797 * lists are stale, idle back. This is achieved by checking
6798 * how much we previously wrote - if it was more than half of
6799 * what we wanted, schedule the next write much sooner.
6801 if (l2arc_feed_again && wrote > (wanted / 2))
6802 interval = (hz * l2arc_feed_min_ms) / 1000;
6804 interval = hz * l2arc_feed_secs;
6806 now = ddi_get_lbolt();
6807 next = MAX(now, MIN(now + interval, began + interval));
6813 * Cycle through L2ARC devices. This is how L2ARC load balances.
6814 * If a device is returned, this also returns holding the spa config lock.
6816 static l2arc_dev_t *
6817 l2arc_dev_get_next(void)
6819 l2arc_dev_t *first, *next = NULL;
6822 * Lock out the removal of spas (spa_namespace_lock), then removal
6823 * of cache devices (l2arc_dev_mtx). Once a device has been selected,
6824 * both locks will be dropped and a spa config lock held instead.
6826 mutex_enter(&spa_namespace_lock);
6827 mutex_enter(&l2arc_dev_mtx);
6829 /* if there are no vdevs, there is nothing to do */
6830 if (l2arc_ndev == 0)
6834 next = l2arc_dev_last;
6836 /* loop around the list looking for a non-faulted vdev */
6838 next = list_head(l2arc_dev_list);
6840 next = list_next(l2arc_dev_list, next);
6842 next = list_head(l2arc_dev_list);
6845 /* if we have come back to the start, bail out */
6848 else if (next == first)
6851 } while (vdev_is_dead(next->l2ad_vdev));
6853 /* if we were unable to find any usable vdevs, return NULL */
6854 if (vdev_is_dead(next->l2ad_vdev))
6857 l2arc_dev_last = next;
6860 mutex_exit(&l2arc_dev_mtx);
6863 * Grab the config lock to prevent the 'next' device from being
6864 * removed while we are writing to it.
6867 spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER);
6868 mutex_exit(&spa_namespace_lock);
6874 * Free buffers that were tagged for destruction.
6877 l2arc_do_free_on_write(void)
6880 l2arc_data_free_t *df, *df_prev;
6882 mutex_enter(&l2arc_free_on_write_mtx);
6883 buflist = l2arc_free_on_write;
6885 for (df = list_tail(buflist); df; df = df_prev) {
6886 df_prev = list_prev(buflist, df);
6887 ASSERT3P(df->l2df_abd, !=, NULL);
6888 abd_free(df->l2df_abd);
6889 list_remove(buflist, df);
6890 kmem_free(df, sizeof (l2arc_data_free_t));
6893 mutex_exit(&l2arc_free_on_write_mtx);
6897 * A write to a cache device has completed. Update all headers to allow
6898 * reads from these buffers to begin.
6901 l2arc_write_done(zio_t *zio)
6903 l2arc_write_callback_t *cb;
6906 arc_buf_hdr_t *head, *hdr, *hdr_prev;
6907 kmutex_t *hash_lock;
6908 int64_t bytes_dropped = 0;
6910 cb = zio->io_private;
6911 ASSERT3P(cb, !=, NULL);
6912 dev = cb->l2wcb_dev;
6913 ASSERT3P(dev, !=, NULL);
6914 head = cb->l2wcb_head;
6915 ASSERT3P(head, !=, NULL);
6916 buflist = &dev->l2ad_buflist;
6917 ASSERT3P(buflist, !=, NULL);
6918 DTRACE_PROBE2(l2arc__iodone, zio_t *, zio,
6919 l2arc_write_callback_t *, cb);
6921 if (zio->io_error != 0)
6922 ARCSTAT_BUMP(arcstat_l2_writes_error);
6925 * All writes completed, or an error was hit.
6928 mutex_enter(&dev->l2ad_mtx);
6929 for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) {
6930 hdr_prev = list_prev(buflist, hdr);
6932 hash_lock = HDR_LOCK(hdr);
6935 * We cannot use mutex_enter or else we can deadlock
6936 * with l2arc_write_buffers (due to swapping the order
6937 * the hash lock and l2ad_mtx are taken).
6939 if (!mutex_tryenter(hash_lock)) {
6941 * Missed the hash lock. We must retry so we
6942 * don't leave the ARC_FLAG_L2_WRITING bit set.
6944 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry);
6947 * We don't want to rescan the headers we've
6948 * already marked as having been written out, so
6949 * we reinsert the head node so we can pick up
6950 * where we left off.
6952 list_remove(buflist, head);
6953 list_insert_after(buflist, hdr, head);
6955 mutex_exit(&dev->l2ad_mtx);
6958 * We wait for the hash lock to become available
6959 * to try and prevent busy waiting, and increase
6960 * the chance we'll be able to acquire the lock
6961 * the next time around.
6963 mutex_enter(hash_lock);
6964 mutex_exit(hash_lock);
6969 * We could not have been moved into the arc_l2c_only
6970 * state while in-flight due to our ARC_FLAG_L2_WRITING
6971 * bit being set. Let's just ensure that's being enforced.
6973 ASSERT(HDR_HAS_L1HDR(hdr));
6976 * Skipped - drop L2ARC entry and mark the header as no
6977 * longer L2 eligible.
6979 if (zio->io_error != 0) {
6981 * Error - drop L2ARC entry.
6983 list_remove(buflist, hdr);
6984 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
6986 ARCSTAT_INCR(arcstat_l2_asize, -arc_hdr_size(hdr));
6987 ARCSTAT_INCR(arcstat_l2_size, -HDR_GET_LSIZE(hdr));
6989 bytes_dropped += arc_hdr_size(hdr);
6990 (void) refcount_remove_many(&dev->l2ad_alloc,
6991 arc_hdr_size(hdr), hdr);
6995 * Allow ARC to begin reads and ghost list evictions to this block.
6998 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING);
7000 mutex_exit(hash_lock);
7003 atomic_inc_64(&l2arc_writes_done);
7004 list_remove(buflist, head);
7005 ASSERT(!HDR_HAS_L1HDR(head));
7006 kmem_cache_free(hdr_l2only_cache, head);
7007 mutex_exit(&dev->l2ad_mtx);
7009 vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0);
7011 l2arc_do_free_on_write();
7013 kmem_free(cb, sizeof (l2arc_write_callback_t));
7017 * A read to a cache device completed. Validate buffer contents before
7018 * handing over to the regular ARC routines.
7021 l2arc_read_done(zio_t *zio)
7023 l2arc_read_callback_t *cb;
7025 kmutex_t *hash_lock;
7026 boolean_t valid_cksum;
7028 ASSERT3P(zio->io_vd, !=, NULL);
7029 ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE);
7031 spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd);
7033 cb = zio->io_private;
7034 ASSERT3P(cb, !=, NULL);
7035 hdr = cb->l2rcb_hdr;
7036 ASSERT3P(hdr, !=, NULL);
7038 hash_lock = HDR_LOCK(hdr);
7039 mutex_enter(hash_lock);
7040 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
7042 ASSERT3P(zio->io_abd, !=, NULL);
7045 * Check this survived the L2ARC journey.
7047 ASSERT3P(zio->io_abd, ==, hdr->b_l1hdr.b_pabd);
7048 zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */
7049 zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */
7051 valid_cksum = arc_cksum_is_equal(hdr, zio);
7052 if (valid_cksum && zio->io_error == 0 && !HDR_L2_EVICTED(hdr)) {
7053 mutex_exit(hash_lock);
7054 zio->io_private = hdr;
7057 mutex_exit(hash_lock);
7059 * Buffer didn't survive caching. Increment stats and
7060 * reissue to the original storage device.
7062 if (zio->io_error != 0) {
7063 ARCSTAT_BUMP(arcstat_l2_io_error);
7065 zio->io_error = SET_ERROR(EIO);
7068 ARCSTAT_BUMP(arcstat_l2_cksum_bad);
7071 * If there's no waiter, issue an async i/o to the primary
7072 * storage now. If there *is* a waiter, the caller must
7073 * issue the i/o in a context where it's OK to block.
7075 if (zio->io_waiter == NULL) {
7076 zio_t *pio = zio_unique_parent(zio);
7078 ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL);
7080 zio_nowait(zio_read(pio, zio->io_spa, zio->io_bp,
7081 hdr->b_l1hdr.b_pabd, zio->io_size, arc_read_done,
7082 hdr, zio->io_priority, cb->l2rcb_flags,
7087 kmem_free(cb, sizeof (l2arc_read_callback_t));
7091 * This is the list priority from which the L2ARC will search for pages to
7092 * cache. This is used within loops (0..3) to cycle through lists in the
7093 * desired order. This order can have a significant effect on cache performance.
7096 * Currently the metadata lists are hit first, MFU then MRU, followed by
7097 * the data lists. This function returns a locked list, and also returns the lock pointer.
7100 static multilist_sublist_t *
7101 l2arc_sublist_lock(int list_num)
7103 multilist_t *ml = NULL;
7106 ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES);
7110 ml = arc_mfu->arcs_list[ARC_BUFC_METADATA];
7113 ml = arc_mru->arcs_list[ARC_BUFC_METADATA];
7116 ml = arc_mfu->arcs_list[ARC_BUFC_DATA];
7119 ml = arc_mru->arcs_list[ARC_BUFC_DATA];
7126 * Return a randomly-selected sublist. This is acceptable
7127 * because the caller feeds only a little bit of data for each
7128 * call (8MB). Subsequent calls will result in different
7129 * sublists being selected.
7131 idx = multilist_get_random_index(ml);
7132 return (multilist_sublist_lock(ml, idx));
7136 * Evict buffers from the device write hand to the distance specified in
7137 * bytes. This distance may span populated buffers, it may span nothing.
7138 * This is clearing a region on the L2ARC device ready for writing.
7139 * If the 'all' boolean is set, every buffer is evicted.
7142 l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all)
7145 arc_buf_hdr_t *hdr, *hdr_prev;
7146 kmutex_t *hash_lock;
7149 buflist = &dev->l2ad_buflist;
7151 if (!all && dev->l2ad_first) {
7153 * This is the first sweep through the device. There is nothing to evict.
7159 if (dev->l2ad_hand >= (dev->l2ad_end - (2 * distance))) {
7161 * When nearing the end of the device, evict to the end
7162 * before the device write hand jumps to the start.
7164 taddr = dev->l2ad_end;
7166 taddr = dev->l2ad_hand + distance;
7168 DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist,
7169 uint64_t, taddr, boolean_t, all);
7172 mutex_enter(&dev->l2ad_mtx);
7173 for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) {
7174 hdr_prev = list_prev(buflist, hdr);
7176 hash_lock = HDR_LOCK(hdr);
7179 * We cannot use mutex_enter or else we can deadlock
7180 * with l2arc_write_buffers (due to swapping the order
7181 * the hash lock and l2ad_mtx are taken).
7183 if (!mutex_tryenter(hash_lock)) {
7185 * Missed the hash lock. Retry.
7187 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry);
7188 mutex_exit(&dev->l2ad_mtx);
7189 mutex_enter(hash_lock);
7190 mutex_exit(hash_lock);
7194 if (HDR_L2_WRITE_HEAD(hdr)) {
7196 * We hit a write head node. Leave it for
7197 * l2arc_write_done().
7199 list_remove(buflist, hdr);
7200 mutex_exit(hash_lock);
7204 if (!all && HDR_HAS_L2HDR(hdr) &&
7205 (hdr->b_l2hdr.b_daddr > taddr ||
7206 hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) {
7208 * We've evicted to the target address,
7209 * or the end of the device.
7211 mutex_exit(hash_lock);
7215 ASSERT(HDR_HAS_L2HDR(hdr));
7216 if (!HDR_HAS_L1HDR(hdr)) {
7217 ASSERT(!HDR_L2_READING(hdr));
7219 * This doesn't exist in the ARC. Destroy.
7220 * arc_hdr_destroy() will call list_remove()
7221 * and decrement arcstat_l2_size.
7223 arc_change_state(arc_anon, hdr, hash_lock);
7224 arc_hdr_destroy(hdr);
7226 ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only);
7227 ARCSTAT_BUMP(arcstat_l2_evict_l1cached);
7229 * Invalidate issued or about to be issued
7230 * reads, since we may be about to write
7231 * over this location.
7233 if (HDR_L2_READING(hdr)) {
7234 ARCSTAT_BUMP(arcstat_l2_evict_reading);
7235 arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED);
7238 /* Ensure this header has finished being written */
7239 ASSERT(!HDR_L2_WRITING(hdr));
7241 arc_hdr_l2hdr_destroy(hdr);
7243 mutex_exit(hash_lock);
7245 mutex_exit(&dev->l2ad_mtx);
7249 * Find and write ARC buffers to the L2ARC device.
7251 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
7252 * for reading until they have completed writing.
7253 * Buffers are scanned up to a configured headroom ahead of the eviction
7254 * point, boosted by l2arc_headroom_boost when compressed ARC is enabled.
7256 * Returns the number of bytes actually written (which may be smaller than
7257 * the delta by which the device hand has changed due to alignment).
7260 l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz)
7262 arc_buf_hdr_t *hdr, *hdr_prev, *head;
7263 uint64_t write_asize, write_psize, write_sz, headroom;
7265 l2arc_write_callback_t *cb;
7267 uint64_t guid = spa_load_guid(spa);
7270 ASSERT3P(dev->l2ad_vdev, !=, NULL);
7273 write_sz = write_asize = write_psize = 0;
7275 head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE);
7276 arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR);
7279 * Copy buffers for L2ARC writing.
7281 for (try = 0; try < L2ARC_FEED_TYPES; try++) {
7282 multilist_sublist_t *mls = l2arc_sublist_lock(try);
7283 uint64_t passed_sz = 0;
7285 VERIFY3P(mls, !=, NULL);
7288 * L2ARC fast warmup.
7290 * Until the ARC is warm and starts to evict, read from the
7291 * head of the ARC lists rather than the tail.
7293 if (arc_warm == B_FALSE)
7294 hdr = multilist_sublist_head(mls);
7296 hdr = multilist_sublist_tail(mls);
7298 headroom = target_sz * l2arc_headroom;
7299 if (zfs_compressed_arc_enabled)
7300 headroom = (headroom * l2arc_headroom_boost) / 100;
7302 for (; hdr; hdr = hdr_prev) {
7303 kmutex_t *hash_lock;
7304 uint64_t asize, size;
7307 if (arc_warm == B_FALSE)
7308 hdr_prev = multilist_sublist_next(mls, hdr);
7310 hdr_prev = multilist_sublist_prev(mls, hdr);
7312 hash_lock = HDR_LOCK(hdr);
7313 if (!mutex_tryenter(hash_lock)) {
7315 * Skip this buffer rather than waiting.
7320 passed_sz += HDR_GET_LSIZE(hdr);
7321 if (passed_sz > headroom) {
7325 mutex_exit(hash_lock);
7329 if (!l2arc_write_eligible(guid, hdr)) {
7330 mutex_exit(hash_lock);
7334 if ((write_asize + HDR_GET_LSIZE(hdr)) > target_sz) {
7336 mutex_exit(hash_lock);
7342 * Insert a dummy header on the buflist so
7343 * l2arc_write_done() can find where the
7344 * write buffers begin without searching.
7346 mutex_enter(&dev->l2ad_mtx);
7347 list_insert_head(&dev->l2ad_buflist, head);
7348 mutex_exit(&dev->l2ad_mtx);
7351 sizeof (l2arc_write_callback_t), KM_SLEEP);
7352 cb->l2wcb_dev = dev;
7353 cb->l2wcb_head = head;
7354 pio = zio_root(spa, l2arc_write_done, cb,
7358 hdr->b_l2hdr.b_dev = dev;
7359 hdr->b_l2hdr.b_hits = 0;
7361 hdr->b_l2hdr.b_daddr = dev->l2ad_hand;
7362 arc_hdr_set_flags(hdr,
7363 ARC_FLAG_L2_WRITING | ARC_FLAG_HAS_L2HDR);
7365 mutex_enter(&dev->l2ad_mtx);
7366 list_insert_head(&dev->l2ad_buflist, hdr);
7367 mutex_exit(&dev->l2ad_mtx);
7370 * We rely on the L1 portion of the header below, so
7371 * it's invalid for this header to have been evicted out
7372 * of the ghost cache, prior to being written out. The
7373 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
7375 ASSERT(HDR_HAS_L1HDR(hdr));
7377 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0);
7378 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
7379 ASSERT3U(arc_hdr_size(hdr), >, 0);
7380 size = arc_hdr_size(hdr);
7382 (void) refcount_add_many(&dev->l2ad_alloc, size, hdr);
7385 * Normally the L2ARC can use the hdr's data, but if
7386 * we're sharing data between the hdr and one of its
7387 * bufs, L2ARC needs its own copy of the data so that
7388 * the ZIO below can't race with the buf consumer. To
7389 * ensure that this copy will be available for the
7390 * lifetime of the ZIO and be cleaned up afterwards, we
7391 * add it to the l2arc_free_on_write queue.
7393 if (!HDR_SHARED_DATA(hdr)) {
7394 to_write = hdr->b_l1hdr.b_pabd;
7396 to_write = abd_alloc_for_io(size,
7397 HDR_ISTYPE_METADATA(hdr));
7398 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size);
7399 l2arc_free_abd_on_write(to_write, size,
7402 wzio = zio_write_phys(pio, dev->l2ad_vdev,
7403 hdr->b_l2hdr.b_daddr, size, to_write,
7404 ZIO_CHECKSUM_OFF, NULL, hdr,
7405 ZIO_PRIORITY_ASYNC_WRITE,
7406 ZIO_FLAG_CANFAIL, B_FALSE);
7408 write_sz += HDR_GET_LSIZE(hdr);
7409 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev,
7412 write_asize += size;
7414 * Keep the clock hand suitably device-aligned.
7416 asize = vdev_psize_to_asize(dev->l2ad_vdev, size);
7417 write_psize += asize;
7418 dev->l2ad_hand += asize;
7420 mutex_exit(hash_lock);
7422 (void) zio_nowait(wzio);
7425 multilist_sublist_unlock(mls);
7431 /* No buffers selected for writing? */
7434 ASSERT(!HDR_HAS_L1HDR(head));
7435 kmem_cache_free(hdr_l2only_cache, head);
7439 ASSERT3U(write_asize, <=, target_sz);
7440 ARCSTAT_BUMP(arcstat_l2_writes_sent);
7441 ARCSTAT_INCR(arcstat_l2_write_bytes, write_asize);
7442 ARCSTAT_INCR(arcstat_l2_size, write_sz);
7443 ARCSTAT_INCR(arcstat_l2_asize, write_asize);
7444 vdev_space_update(dev->l2ad_vdev, write_asize, 0, 0);
7447 * Bump device hand to the device start if it is approaching the end.
7448 * l2arc_evict() will already have evicted ahead for this case.
7450 if (dev->l2ad_hand >= (dev->l2ad_end - target_sz)) {
7451 dev->l2ad_hand = dev->l2ad_start;
7452 dev->l2ad_first = B_FALSE;
7455 dev->l2ad_writing = B_TRUE;
7456 (void) zio_wait(pio);
7457 dev->l2ad_writing = B_FALSE;
7459 return (write_asize);
7463 * This thread feeds the L2ARC at regular intervals. This is the beating
7464 * heart of the L2ARC.
7467 l2arc_feed_thread(void)
7472 uint64_t size, wrote;
7473 clock_t begin, next = ddi_get_lbolt();
7474 fstrans_cookie_t cookie;
7476 CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG);
7478 mutex_enter(&l2arc_feed_thr_lock);
7480 cookie = spl_fstrans_mark();
7481 while (l2arc_thread_exit == 0) {
7482 CALLB_CPR_SAFE_BEGIN(&cpr);
7483 (void) cv_timedwait_sig(&l2arc_feed_thr_cv,
7484 &l2arc_feed_thr_lock, next);
7485 CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock);
7486 next = ddi_get_lbolt() + hz;
7489 * Quick check for L2ARC devices.
7491 mutex_enter(&l2arc_dev_mtx);
7492 if (l2arc_ndev == 0) {
7493 mutex_exit(&l2arc_dev_mtx);
7496 mutex_exit(&l2arc_dev_mtx);
7497 begin = ddi_get_lbolt();
7500 * This selects the next l2arc device to write to, and in
7501 * doing so the next spa to feed from: dev->l2ad_spa. This
7502 * will return NULL if there are now no l2arc devices or if
7503 * they are all faulted.
7505 * If a device is returned, its spa's config lock is also
7506 * held to prevent device removal. l2arc_dev_get_next()
7507 * will grab and release l2arc_dev_mtx.
7509 if ((dev = l2arc_dev_get_next()) == NULL)
7512 spa = dev->l2ad_spa;
7513 ASSERT3P(spa, !=, NULL);
7516 * If the pool is read-only then force the feed thread to
7517 * sleep a little longer.
7519 if (!spa_writeable(spa)) {
7520 next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz;
7521 spa_config_exit(spa, SCL_L2ARC, dev);
7526 * Avoid contributing to memory pressure.
7528 if (arc_reclaim_needed()) {
7529 ARCSTAT_BUMP(arcstat_l2_abort_lowmem);
7530 spa_config_exit(spa, SCL_L2ARC, dev);
7534 ARCSTAT_BUMP(arcstat_l2_feeds);
7536 size = l2arc_write_size();
7539 * Evict L2ARC buffers that will be overwritten.
7541 l2arc_evict(dev, size, B_FALSE);
7544 * Write ARC buffers.
7546 wrote = l2arc_write_buffers(spa, dev, size);
7549 * Calculate interval between writes.
7551 next = l2arc_write_interval(begin, size, wrote);
7552 spa_config_exit(spa, SCL_L2ARC, dev);
7554 spl_fstrans_unmark(cookie);
7556 l2arc_thread_exit = 0;
7557 cv_broadcast(&l2arc_feed_thr_cv);
7558 CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */
7563 l2arc_vdev_present(vdev_t *vd)
7567 mutex_enter(&l2arc_dev_mtx);
7568 for (dev = list_head(l2arc_dev_list); dev != NULL;
7569 dev = list_next(l2arc_dev_list, dev)) {
7570 if (dev->l2ad_vdev == vd)
7573 mutex_exit(&l2arc_dev_mtx);
7575 return (dev != NULL);
7579 * Add a vdev for use by the L2ARC. By this point the spa has already
7580 * validated the vdev and opened it.
7583 l2arc_add_vdev(spa_t *spa, vdev_t *vd)
7585 l2arc_dev_t *adddev;
7587 ASSERT(!l2arc_vdev_present(vd));
7590 * Create a new l2arc device entry.
7592 adddev = kmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP);
7593 adddev->l2ad_spa = spa;
7594 adddev->l2ad_vdev = vd;
7595 adddev->l2ad_start = VDEV_LABEL_START_SIZE;
7596 adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd);
7597 adddev->l2ad_hand = adddev->l2ad_start;
7598 adddev->l2ad_first = B_TRUE;
7599 adddev->l2ad_writing = B_FALSE;
7600 list_link_init(&adddev->l2ad_node);
7602 mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL);
7604 * This is a list of all ARC buffers that are still valid on the device.
7607 list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t),
7608 offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node));
7610 vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);
7611 refcount_create(&adddev->l2ad_alloc);
7614 * Add device to global list
7616 mutex_enter(&l2arc_dev_mtx);
7617 list_insert_head(l2arc_dev_list, adddev);
7618 atomic_inc_64(&l2arc_ndev);
7619 mutex_exit(&l2arc_dev_mtx);
7623 * Remove a vdev from the L2ARC.
7626 l2arc_remove_vdev(vdev_t *vd)
7628 l2arc_dev_t *dev, *nextdev, *remdev = NULL;
7631 * Find the device by vdev
7633 mutex_enter(&l2arc_dev_mtx);
7634 for (dev = list_head(l2arc_dev_list); dev; dev = nextdev) {
7635 nextdev = list_next(l2arc_dev_list, dev);
7636 if (vd == dev->l2ad_vdev) {
7641 ASSERT3P(remdev, !=, NULL);
7644 * Remove device from global list
7646 list_remove(l2arc_dev_list, remdev);
7647 l2arc_dev_last = NULL; /* may have been invalidated */
7648 atomic_dec_64(&l2arc_ndev);
7649 mutex_exit(&l2arc_dev_mtx);
7652 * Clear all buflists and ARC references. L2ARC device flush.
7654 l2arc_evict(remdev, 0, B_TRUE);
7655 list_destroy(&remdev->l2ad_buflist);
7656 mutex_destroy(&remdev->l2ad_mtx);
7657 refcount_destroy(&remdev->l2ad_alloc);
7658 kmem_free(remdev, sizeof (l2arc_dev_t));
7664 l2arc_thread_exit = 0;
7666 l2arc_writes_sent = 0;
7667 l2arc_writes_done = 0;
7669 mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL);
7670 cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL);
7671 mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL);
7672 mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL);
7674 l2arc_dev_list = &L2ARC_dev_list;
7675 l2arc_free_on_write = &L2ARC_free_on_write;
7676 list_create(l2arc_dev_list, sizeof (l2arc_dev_t),
7677 offsetof(l2arc_dev_t, l2ad_node));
7678 list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t),
7679 offsetof(l2arc_data_free_t, l2df_list_node));
7686 * This is called from dmu_fini(), which is called from spa_fini().
7687 * Because of this, we can assume that all l2arc devices have
7688 * already been removed when the pools themselves were removed.
7691 l2arc_do_free_on_write();
7693 mutex_destroy(&l2arc_feed_thr_lock);
7694 cv_destroy(&l2arc_feed_thr_cv);
7695 mutex_destroy(&l2arc_dev_mtx);
7696 mutex_destroy(&l2arc_free_on_write_mtx);
7698 list_destroy(l2arc_dev_list);
7699 list_destroy(l2arc_free_on_write);
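/*
 * l2arc_start(): spawn the feed thread.  Skipped when the module was
 * loaded without write access (spa_mode_global lacks FWRITE), since
 * there is nothing to feed in that case.
 */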
7705 if (!(spa_mode_global & FWRITE))
7708 (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0,
7709 TS_RUN, defclsyspri);
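/*
 * l2arc_stop(): request that the feed thread exit and wait for it to
 * acknowledge; the matching handshake is at the end of
 * l2arc_feed_thread() above, which clears l2arc_thread_exit and
 * broadcasts on the cv.
 */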
7715 if (!(spa_mode_global & FWRITE))
7718 mutex_enter(&l2arc_feed_thr_lock);
7719 cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */
7720 l2arc_thread_exit = 1;
7721 while (l2arc_thread_exit != 0)
7722 cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock);
7723 mutex_exit(&l2arc_feed_thr_lock);
7726 #if defined(_KERNEL) && defined(HAVE_SPL)
7727 EXPORT_SYMBOL(arc_buf_size);
7728 EXPORT_SYMBOL(arc_write);
7729 EXPORT_SYMBOL(arc_read);
7730 EXPORT_SYMBOL(arc_buf_info);
7731 EXPORT_SYMBOL(arc_getbuf_func);
7732 EXPORT_SYMBOL(arc_add_prune_callback);
7733 EXPORT_SYMBOL(arc_remove_prune_callback);
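/*
 * Module tunables.  On Linux these can be set at load time (for
 * example "modprobe zfs zfs_arc_max=...") and, for parameters
 * registered with mode 0644, adjusted at runtime via
 * /sys/module/zfs/parameters/<name>; 0444 parameters are read-only
 * after load.
 */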
7736 module_param(zfs_arc_min, ulong, 0644);
7737 MODULE_PARM_DESC(zfs_arc_min, "Min arc size");
7739 module_param(zfs_arc_max, ulong, 0644);
7740 MODULE_PARM_DESC(zfs_arc_max, "Max arc size");
7742 module_param(zfs_arc_meta_limit, ulong, 0644);
7743 MODULE_PARM_DESC(zfs_arc_meta_limit, "Meta limit for arc size");
7745 module_param(zfs_arc_meta_limit_percent, ulong, 0644);
7746 MODULE_PARM_DESC(zfs_arc_meta_limit_percent,
7747 "Percent of arc size for arc meta limit");
7749 module_param(zfs_arc_meta_min, ulong, 0644);
7750 MODULE_PARM_DESC(zfs_arc_meta_min, "Min arc metadata");
7752 module_param(zfs_arc_meta_prune, int, 0644);
7753 MODULE_PARM_DESC(zfs_arc_meta_prune, "Meta objects to scan for prune");
7755 module_param(zfs_arc_meta_adjust_restarts, int, 0644);
7756 MODULE_PARM_DESC(zfs_arc_meta_adjust_restarts,
7757 "Limit number of restarts in arc_adjust_meta");
7759 module_param(zfs_arc_meta_strategy, int, 0644);
7760 MODULE_PARM_DESC(zfs_arc_meta_strategy, "Meta reclaim strategy");
7762 module_param(zfs_arc_grow_retry, int, 0644);
7763 MODULE_PARM_DESC(zfs_arc_grow_retry, "Seconds before growing arc size");
7765 module_param(zfs_arc_p_aggressive_disable, int, 0644);
7766 MODULE_PARM_DESC(zfs_arc_p_aggressive_disable, "Disable aggressive arc_p growth");
7768 module_param(zfs_arc_p_dampener_disable, int, 0644);
7769 MODULE_PARM_DESC(zfs_arc_p_dampener_disable, "Disable arc_p adapt dampener");
7771 module_param(zfs_arc_shrink_shift, int, 0644);
7772 MODULE_PARM_DESC(zfs_arc_shrink_shift, "log2(fraction of arc to reclaim)");
7774 module_param(zfs_arc_pc_percent, uint, 0644);
7775 MODULE_PARM_DESC(zfs_arc_pc_percent,
7776 "Percent of pagecache to reclaim arc to");
7778 module_param(zfs_arc_p_min_shift, int, 0644);
7779 MODULE_PARM_DESC(zfs_arc_p_min_shift, "arc_c shift to calc min/max arc_p");
7781 module_param(zfs_arc_average_blocksize, int, 0444);
7782 MODULE_PARM_DESC(zfs_arc_average_blocksize, "Target average block size");
7784 module_param(zfs_compressed_arc_enabled, int, 0644);
7785 MODULE_PARM_DESC(zfs_compressed_arc_enabled, "Enable compressed arc buffers");
7787 module_param(zfs_arc_min_prefetch_lifespan, int, 0644);
7788 MODULE_PARM_DESC(zfs_arc_min_prefetch_lifespan, "Min life of prefetch block");
7790 module_param(l2arc_write_max, ulong, 0644);
7791 MODULE_PARM_DESC(l2arc_write_max, "Max write bytes per interval");
7793 module_param(l2arc_write_boost, ulong, 0644);
7794 MODULE_PARM_DESC(l2arc_write_boost, "Extra write bytes during device warmup");
7796 module_param(l2arc_headroom, ulong, 0644);
7797 MODULE_PARM_DESC(l2arc_headroom, "Number of max device writes to precache");
7799 module_param(l2arc_headroom_boost, ulong, 0644);
7800 MODULE_PARM_DESC(l2arc_headroom_boost, "Compressed l2arc_headroom multiplier");
7802 module_param(l2arc_feed_secs, ulong, 0644);
7803 MODULE_PARM_DESC(l2arc_feed_secs, "Seconds between L2ARC writing");
7805 module_param(l2arc_feed_min_ms, ulong, 0644);
7806 MODULE_PARM_DESC(l2arc_feed_min_ms, "Min feed interval in milliseconds");
7808 module_param(l2arc_noprefetch, int, 0644);
7809 MODULE_PARM_DESC(l2arc_noprefetch, "Skip caching prefetched buffers");
7811 module_param(l2arc_feed_again, int, 0644);
7812 MODULE_PARM_DESC(l2arc_feed_again, "Turbo L2ARC warmup");
7814 module_param(l2arc_norw, int, 0644);
7815 MODULE_PARM_DESC(l2arc_norw, "No reads during writes");
7817 module_param(zfs_arc_lotsfree_percent, int, 0644);
7818 MODULE_PARM_DESC(zfs_arc_lotsfree_percent,
7819 "Throttle ARC I/O when free system memory falls below this percentage");
7821 module_param(zfs_arc_sys_free, ulong, 0644);
7822 MODULE_PARM_DESC(zfs_arc_sys_free, "System free memory target size in bytes");
7824 module_param(zfs_arc_dnode_limit, ulong, 0644);
7825 MODULE_PARM_DESC(zfs_arc_dnode_limit, "Bytes of dnodes in arc above which pruning is attempted");
7827 module_param(zfs_arc_dnode_limit_percent, ulong, 0644);
7828 MODULE_PARM_DESC(zfs_arc_dnode_limit_percent,
7829 "Percent of ARC meta buffers for dnodes");
7831 module_param(zfs_arc_dnode_reduce_percent, ulong, 0644);
7832 MODULE_PARM_DESC(zfs_arc_dnode_reduce_percent,
7833 "Percentage of excess dnodes to try to unpin");