Optimize allocation throttling
author Alexander Motin <mav@FreeBSD.org>
Wed, 21 Jul 2021 12:40:36 +0000 (08:40 -0400)
committer Tony Hutter <hutter2@llnl.gov>
Tue, 14 Sep 2021 19:40:15 +0000 (12:40 -0700)
commit 32c0b6468cbcfbd6c2c4bc08f88f34e016b4f184
tree d1c1adb19a5db5faee4aa2f32959900632a8e8b0
parent 7c61e1ef9d9f6c5fa6a3665a88838a19120cf07b
Optimize allocation throttling

Remove mc_lock use from metaslab_class_throttle_*().  The math there
is based on refcounts and is therefore atomic; the only possible race
is between zfs_refcount_count() and zfs_refcount_add().  But in most
cases metaslab_class_throttle_reserve() is called with the allocator
lock held, which covers the race.  In the cases where the lock is not
held, GANG_ALLOCATION() or METASLAB_MUST_RESERVE is set, and so we do
not use zfs_refcount_count().  And even if we assume some other,
non-existing scenario, the worst that can happen from this race is
that a few more I/Os get to allocation earlier, which is not a
problem.
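
A minimal user-space sketch of the lock-free reservation described
above, using C11 atomics to stand in for zfs_refcount_t.  The names
(throttle_reserve, rc_count, must_reserve, max_slots) are illustrative
assumptions, not the OpenZFS API:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for zfs_refcount_t: a plain atomic counter. */
    typedef struct refcount {
            _Atomic uint64_t rc_count;
    } refcount_t;

    /*
     * Reserve slots without taking a lock.  The window between the
     * load and the add mirrors the zfs_refcount_count()/
     * zfs_refcount_add() race: at worst a few extra I/Os slip past
     * the limit, which is harmless.  must_reserve models the
     * GANG_ALLOCATION()/METASLAB_MUST_RESERVE case, where the count
     * is not consulted at all.
     */
    static bool
    throttle_reserve(refcount_t *rc, uint64_t slots, uint64_t max_slots,
        bool must_reserve)
    {
            if (must_reserve ||
                atomic_load(&rc->rc_count) + slots <= max_slots) {
                    atomic_fetch_add(&rc->rc_count, slots);
                    return (true);
            }
            return (false);
    }

    /* Release previously reserved slots. */
    static void
    throttle_unreserve(refcount_t *rc, uint64_t slots)
    {
            atomic_fetch_sub(&rc->rc_count, slots);
    }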

Move the locks and data of different allocators into different cache
lines to avoid false sharing.  Group the spa_alloc_* arrays together
into a single array of aligned struct spa_alloc, spa_allocs.  Align
struct metaslab_class_allocator.
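
A sketch of the layout change in user-space C.  CACHE_LINE_SIZE, the
field names, and pthread_mutex_t standing in for kmutex_t are all
assumptions for illustration:

    #include <pthread.h>

    #define CACHE_LINE_SIZE 64  /* assumed; the kernel derives this per arch */

    /*
     * Before: parallel per-allocator arrays, where adjacent allocators'
     * hot fields could share a cache line and ping-pong between CPUs
     * (false sharing).  After: one aligned struct per allocator, so each
     * allocator's lock and queue occupy their own cache line(s).
     */
    typedef struct spa_alloc {
            pthread_mutex_t sa_lock;   /* kmutex_t in the kernel */
            void            *sa_tree;  /* tree of queued allocations */
    } __attribute__((aligned(CACHE_LINE_SIZE))) spa_alloc_t;

    /* Indexed by allocator number; replaces the separate spa_alloc_* arrays. */
    static spa_alloc_t spa_allocs[4];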

Reviewed-by: Paul Dagnelie <pcd@delphix.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Don Brady <don.brady@delphix.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Closes #12314
include/sys/metaslab_impl.h
include/sys/spa_impl.h
module/zfs/metaslab.c
module/zfs/spa.c
module/zfs/spa_misc.c
module/zfs/zio.c