From patchwork Fri Apr 18 13:57:21 2025
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 14057214
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 1/3] btrfs: convert the buffer_radix to an xarray
Date: Fri, 18 Apr 2025 09:57:21 -0400

In order to fully utilize xarray tagging to improve writeback, we need to
convert the buffer_radix to a proper xarray. This conversion is relatively
straightforward, as the radix tree code already uses an xarray underneath.
Using the xarray directly also allows us to drop quite a lot of code.

Signed-off-by: Josef Bacik
---
 fs/btrfs/disk-io.c           |  15 ++-
 fs/btrfs/extent_io.c         | 196 +++++++++++++++--------------------
 fs/btrfs/fs.h                |   4 +-
 fs/btrfs/tests/btrfs-tests.c |  27 ++---
 fs/btrfs/zoned.c             |  16 +--
 5 files changed, 113 insertions(+), 145 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 59da809b7d57..24c08eb86b7b 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2762,10 +2762,22 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
 	return ret;
 }
 
+/*
+ * lockdep gets confused between our buffer_tree which requires IRQ locking
+ * because we modify marks in the IRQ context, and our delayed inode xarray
+ * which doesn't have these requirements. Use a class key so lockdep doesn't get
+ * them mixed up.
+ */
+static struct lock_class_key buffer_xa_class;
+
 void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
 {
 	INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
-	INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
+
+	/* Use the same flags as mapping->i_pages.
*/ + xa_init_flags(&fs_info->buffer_tree, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT); + lockdep_set_class(&fs_info->buffer_tree.xa_lock, &buffer_xa_class); + INIT_LIST_HEAD(&fs_info->trans_list); INIT_LIST_HEAD(&fs_info->dead_roots); INIT_LIST_HEAD(&fs_info->delayed_iputs); @@ -2777,7 +2789,6 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info) spin_lock_init(&fs_info->delayed_iput_lock); spin_lock_init(&fs_info->defrag_inodes_lock); spin_lock_init(&fs_info->super_lock); - spin_lock_init(&fs_info->buffer_lock); spin_lock_init(&fs_info->unused_bgs_lock); spin_lock_init(&fs_info->treelog_bg_lock); spin_lock_init(&fs_info->zone_active_bgs_lock); diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 6cfd286b8bbc..aa451ad52528 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1893,19 +1893,20 @@ static void set_btree_ioerr(struct extent_buffer *eb) * context. */ static struct extent_buffer *find_extent_buffer_nolock( - const struct btrfs_fs_info *fs_info, u64 start) + struct btrfs_fs_info *fs_info, u64 start) { + XA_STATE(xas, &fs_info->buffer_tree, start >> fs_info->sectorsize_bits); struct extent_buffer *eb; rcu_read_lock(); - eb = radix_tree_lookup(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits); - if (eb && atomic_inc_not_zero(&eb->refs)) { - rcu_read_unlock(); - return eb; - } + do { + eb = xas_load(&xas); + } while (xas_retry(&xas, eb)); + + if (eb && !atomic_inc_not_zero(&eb->refs)) + eb = NULL; rcu_read_unlock(); - return NULL; + return eb; } static void end_bbio_meta_write(struct btrfs_bio *bbio) @@ -2769,11 +2770,10 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo if (!btrfs_meta_is_subpage(fs_info)) { /* - * We do this since we'll remove the pages after we've - * removed the eb from the radix tree, so we could race - * and have this page now attached to the new eb. So - * only clear folio if it's still connected to - * this eb. + * We do this since we'll remove the pages after we've removed + * the eb from the xarray, so we could race and have this page + * now attached to the new eb. So only clear folio if it's + * still connected to this eb. */ if (folio_test_private(folio) && folio_get_private(folio) == eb) { BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)); @@ -2938,9 +2938,9 @@ static void check_buffer_tree_ref(struct extent_buffer *eb) { int refs; /* - * The TREE_REF bit is first set when the extent_buffer is added - * to the radix tree. It is also reset, if unset, when a new reference - * is created by find_extent_buffer. + * The TREE_REF bit is first set when the extent_buffer is added to the + * xarray. It is also reset, if unset, when a new reference is created + * by find_extent_buffer. * * It is only cleared in two cases: freeing the last non-tree * reference to the extent_buffer when its STALE bit is set or @@ -2952,13 +2952,12 @@ static void check_buffer_tree_ref(struct extent_buffer *eb) * conditions between the calls to check_buffer_tree_ref in those * codepaths and clearing TREE_REF in try_release_extent_buffer. * - * The actual lifetime of the extent_buffer in the radix tree is - * adequately protected by the refcount, but the TREE_REF bit and - * its corresponding reference are not. To protect against this - * class of races, we call check_buffer_tree_ref from the codepaths - * which trigger io. Note that once io is initiated, TREE_REF can no - * longer be cleared, so that is the moment at which any such race is - * best fixed. 
+ * The actual lifetime of the extent_buffer in the xarray is adequately + * protected by the refcount, but the TREE_REF bit and its corresponding + * reference are not. To protect against this class of races, we call + * check_buffer_tree_ref from the codepaths which trigger io. Note that + * once io is initiated, TREE_REF can no longer be cleared, so that is + * the moment at which any such race is best fixed. */ refs = atomic_read(&eb->refs); if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) @@ -3022,23 +3021,26 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, return ERR_PTR(-ENOMEM); eb->fs_info = fs_info; again: - ret = radix_tree_preload(GFP_NOFS); - if (ret) { - exists = ERR_PTR(ret); + xa_lock_irq(&fs_info->buffer_tree); + exists = __xa_cmpxchg(&fs_info->buffer_tree, + start >> fs_info->sectorsize_bits, NULL, eb, + GFP_NOFS); + if (xa_is_err(exists)) { + ret = xa_err(exists); + xa_unlock_irq(&fs_info->buffer_tree); + btrfs_release_extent_buffer(eb); + return ERR_PTR(ret); + } + if (exists) { + if (!atomic_inc_not_zero(&exists->refs)) { + /* The extent buffer is being freed, retry. */ + xa_unlock_irq(&fs_info->buffer_tree); + goto again; + } + xa_unlock_irq(&fs_info->buffer_tree); goto free_eb; } - spin_lock(&fs_info->buffer_lock); - ret = radix_tree_insert(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits, eb); - spin_unlock(&fs_info->buffer_lock); - radix_tree_preload_end(); - if (ret == -EEXIST) { - exists = find_extent_buffer(fs_info, start); - if (exists) - goto free_eb; - else - goto again; - } + xa_unlock_irq(&fs_info->buffer_tree); check_buffer_tree_ref(eb); return eb; @@ -3059,9 +3061,9 @@ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info, lockdep_assert_held(&folio->mapping->i_private_lock); /* - * For subpage case, we completely rely on radix tree to ensure we - * don't try to insert two ebs for the same bytenr. So here we always - * return NULL and just continue. + * For subpage case, we completely rely on xarray to ensure we don't try + * to insert two ebs for the same bytenr. So here we always return NULL + * and just continue. */ if (btrfs_meta_is_subpage(fs_info)) return NULL; @@ -3194,7 +3196,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, /* * To inform we have an extra eb under allocation, so that * detach_extent_buffer_page() won't release the folio private when the - * eb hasn't been inserted into radix tree yet. + * eb hasn't been inserted into the xarray yet. * * The ref will be decreased when the eb releases the page, in * detach_extent_buffer_page(). Thus needs no special handling in the @@ -3328,10 +3330,10 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, /* * We can't unlock the pages just yet since the extent buffer - * hasn't been properly inserted in the radix tree, this - * opens a race with btree_release_folio which can free a page - * while we are still filling in all pages for the buffer and - * we could crash. + * hasn't been properly inserted in the xarray, this opens a + * race with btree_release_folio which can free a page while we + * are still filling in all pages for the buffer and we could + * crash. 
*/ } if (uptodate) @@ -3340,23 +3342,25 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, if (page_contig) eb->addr = folio_address(eb->folios[0]) + offset_in_page(eb->start); again: - ret = radix_tree_preload(GFP_NOFS); - if (ret) + xa_lock_irq(&fs_info->buffer_tree); + existing_eb = __xa_cmpxchg(&fs_info->buffer_tree, + start >> fs_info->sectorsize_bits, NULL, eb, + GFP_NOFS); + if (xa_is_err(existing_eb)) { + ret = xa_err(existing_eb); + xa_unlock_irq(&fs_info->buffer_tree); goto out; - - spin_lock(&fs_info->buffer_lock); - ret = radix_tree_insert(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits, eb); - spin_unlock(&fs_info->buffer_lock); - radix_tree_preload_end(); - if (ret == -EEXIST) { - ret = 0; - existing_eb = find_extent_buffer(fs_info, start); - if (existing_eb) - goto out; - else - goto again; } + if (existing_eb) { + if (!atomic_inc_not_zero(&existing_eb->refs)) { + xa_unlock_irq(&fs_info->buffer_tree); + goto again; + } + xa_unlock_irq(&fs_info->buffer_tree); + goto out; + } + xa_unlock_irq(&fs_info->buffer_tree); + /* add one reference for the tree */ check_buffer_tree_ref(eb); @@ -3426,10 +3430,13 @@ static int release_extent_buffer(struct extent_buffer *eb) spin_unlock(&eb->refs_lock); - spin_lock(&fs_info->buffer_lock); - radix_tree_delete_item(&fs_info->buffer_radix, - eb->start >> fs_info->sectorsize_bits, eb); - spin_unlock(&fs_info->buffer_lock); + /* + * We're erasing, theoretically there will be no allocations, so + * just use GFP_ATOMIC. + */ + xa_cmpxchg_irq(&fs_info->buffer_tree, + eb->start >> fs_info->sectorsize_bits, eb, NULL, + GFP_ATOMIC); btrfs_leak_debug_del_eb(eb); /* Should be safe to release folios at this point. */ @@ -4260,44 +4267,6 @@ void memmove_extent_buffer(const struct extent_buffer *dst, } } -#define GANG_LOOKUP_SIZE 16 -static struct extent_buffer *get_next_extent_buffer( - const struct btrfs_fs_info *fs_info, struct folio *folio, u64 bytenr) -{ - struct extent_buffer *gang[GANG_LOOKUP_SIZE]; - struct extent_buffer *found = NULL; - u64 folio_start = folio_pos(folio); - u64 cur = folio_start; - - ASSERT(in_range(bytenr, folio_start, PAGE_SIZE)); - lockdep_assert_held(&fs_info->buffer_lock); - - while (cur < folio_start + PAGE_SIZE) { - int ret; - int i; - - ret = radix_tree_gang_lookup(&fs_info->buffer_radix, - (void **)gang, cur >> fs_info->sectorsize_bits, - min_t(unsigned int, GANG_LOOKUP_SIZE, - PAGE_SIZE / fs_info->nodesize)); - if (ret == 0) - goto out; - for (i = 0; i < ret; i++) { - /* Already beyond page end */ - if (gang[i]->start >= folio_start + PAGE_SIZE) - goto out; - /* Found one */ - if (gang[i]->start >= bytenr) { - found = gang[i]; - goto out; - } - } - cur = gang[ret - 1]->start + gang[ret - 1]->len; - } -out: - return found; -} - static int try_release_subpage_extent_buffer(struct folio *folio) { struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); @@ -4306,21 +4275,26 @@ static int try_release_subpage_extent_buffer(struct folio *folio) int ret; while (cur < end) { + XA_STATE(xas, &fs_info->buffer_tree, + cur >> fs_info->sectorsize_bits); struct extent_buffer *eb = NULL; /* * Unlike try_release_extent_buffer() which uses folio private - * to grab buffer, for subpage case we rely on radix tree, thus - * we need to ensure radix tree consistency. + * to grab buffer, for subpage case we rely on xarray, thus we + * need to ensure xarray tree consistency. 
* - * We also want an atomic snapshot of the radix tree, thus go + * We also want an atomic snapshot of the xarray tree, thus go * with spinlock rather than RCU. */ - spin_lock(&fs_info->buffer_lock); - eb = get_next_extent_buffer(fs_info, folio, cur); + xa_lock_irq(&fs_info->buffer_tree); + do { + eb = xas_find(&xas, end >> fs_info->sectorsize_bits); + } while (xas_retry(&xas, eb)); + if (!eb) { /* No more eb in the page range after or at cur */ - spin_unlock(&fs_info->buffer_lock); + xa_unlock_irq(&fs_info->buffer_tree); break; } cur = eb->start + eb->len; @@ -4332,10 +4306,10 @@ static int try_release_subpage_extent_buffer(struct folio *folio) spin_lock(&eb->refs_lock); if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { spin_unlock(&eb->refs_lock); - spin_unlock(&fs_info->buffer_lock); + xa_unlock_irq(&fs_info->buffer_tree); break; } - spin_unlock(&fs_info->buffer_lock); + xa_unlock_irq(&fs_info->buffer_tree); /* * If tree ref isn't set then we know the ref on this eb is a diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h index bcca43046064..ed02d276d908 100644 --- a/fs/btrfs/fs.h +++ b/fs/btrfs/fs.h @@ -776,10 +776,8 @@ struct btrfs_fs_info { struct btrfs_delayed_root *delayed_root; - /* Extent buffer radix tree */ - spinlock_t buffer_lock; /* Entries are eb->start / sectorsize */ - struct radix_tree_root buffer_radix; + struct xarray buffer_tree; /* Next backup root to be overwritten */ int backup_root_index; diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c index 02a915eb51fb..27fd05308a96 100644 --- a/fs/btrfs/tests/btrfs-tests.c +++ b/fs/btrfs/tests/btrfs-tests.c @@ -157,9 +157,9 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize) void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info) { - struct radix_tree_iter iter; - void **slot; + XA_STATE(xas, &fs_info->buffer_tree, 0); struct btrfs_device *dev, *tmp; + struct extent_buffer *eb; if (!fs_info) return; @@ -169,25 +169,16 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info) test_mnt->mnt_sb->s_fs_info = NULL; - spin_lock(&fs_info->buffer_lock); - radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, 0) { - struct extent_buffer *eb; - - eb = radix_tree_deref_slot_protected(slot, &fs_info->buffer_lock); - if (!eb) + xa_lock_irq(&fs_info->buffer_tree); + xas_for_each(&xas, eb, ULONG_MAX) { + if (xas_retry(&xas, eb)) continue; - /* Shouldn't happen but that kind of thinking creates CVE's */ - if (radix_tree_exception(eb)) { - if (radix_tree_deref_retry(eb)) - slot = radix_tree_iter_retry(&iter); - continue; - } - slot = radix_tree_iter_resume(slot, &iter); - spin_unlock(&fs_info->buffer_lock); + xas_pause(&xas); + xa_unlock_irq(&fs_info->buffer_tree); free_extent_buffer_stale(eb); - spin_lock(&fs_info->buffer_lock); + xa_lock_irq(&fs_info->buffer_tree); } - spin_unlock(&fs_info->buffer_lock); + xa_unlock_irq(&fs_info->buffer_tree); btrfs_mapping_tree_free(fs_info); list_for_each_entry_safe(dev, tmp, &fs_info->fs_devices->devices, diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c index 7b30700ec930..82fb08790b64 100644 --- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -2170,28 +2170,22 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group) static void wait_eb_writebacks(struct btrfs_block_group *block_group) { struct btrfs_fs_info *fs_info = block_group->fs_info; + XA_STATE(xas, &fs_info->buffer_tree, + block_group->start >> fs_info->sectorsize_bits); const u64 end = block_group->start + block_group->length; - struct radix_tree_iter iter; 
 	struct extent_buffer *eb;
-	void __rcu **slot;
 
 	rcu_read_lock();
-	radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter,
-				 block_group->start >> fs_info->sectorsize_bits) {
-		eb = radix_tree_deref_slot(slot);
-		if (!eb)
+	xas_for_each(&xas, eb, end >> fs_info->sectorsize_bits) {
+		if (xas_retry(&xas, eb))
 			continue;
-		if (radix_tree_deref_retry(eb)) {
-			slot = radix_tree_iter_retry(&iter);
-			continue;
-		}
 		if (eb->start < block_group->start)
 			continue;
 		if (eb->start >= end)
 			break;
-		slot = radix_tree_iter_resume(slot, &iter);
+		xas_pause(&xas);
 		rcu_read_unlock();
 		wait_on_extent_buffer_writeback(eb);
 		rcu_read_lock();

From patchwork Fri Apr 18 13:57:22 2025
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 14057215
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 2/3] btrfs: set DIRTY and WRITEBACK tags on the buffer_tree
Date: Fri, 18 Apr 2025 09:57:22 -0400
Message-ID: <17df8fc5c719bbe63f6269ec4b2c7bf2df226cd3.1744984487.git.josef@toxicpanda.com>

In preparation for changing how we do writeout of extent buffers, start
tagging the extent buffer xarray with DIRTY and WRITEBACK to make it easier
to find extent buffers that are in either state.
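As a quick illustration of the xarray mark API this patch builds on (this is
an illustrative sketch only, not part of the change: the demo_tree and
demo_xarray_marks names are made up, and XA_MARK_0 stands in for
PAGECACHE_TAG_DIRTY):

#include <linux/xarray.h>
#include <linux/printk.h>

static DEFINE_XARRAY(demo_tree);

static void demo_xarray_marks(void)
{
	XA_STATE(xas, &demo_tree, 0);
	void *entry;

	/* Store two entries, then mark only the second one. */
	xa_store(&demo_tree, 1, xa_mk_value(1), GFP_KERNEL);
	xa_store(&demo_tree, 2, xa_mk_value(2), GFP_KERNEL);
	xa_set_mark(&demo_tree, 2, XA_MARK_0);

	/* Walk only the marked entries, analogous to finding DIRTY ebs. */
	rcu_read_lock();
	xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
		pr_info("marked entry at index %lu\n", xas.xa_index);
	rcu_read_unlock();
}

In the patch itself the marks are PAGECACHE_TAG_DIRTY and
PAGECACHE_TAG_WRITEBACK, and they are set and cleared under
xas_lock_irqsave() because the marks are also modified from IRQ (endio)
context.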
Signed-off-by: Josef Bacik
---
 fs/btrfs/extent_io.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index aa451ad52528..ef6df7bcef5d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1801,8 +1801,19 @@ static noinline_for_stack bool lock_extent_buffer_for_io(struct extent_buffer *e
 	 */
 	spin_lock(&eb->refs_lock);
 	if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) {
+		XA_STATE(xas, &fs_info->buffer_tree,
+			 eb->start >> fs_info->sectorsize_bits);
+		unsigned long flags;
+
 		set_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
 		spin_unlock(&eb->refs_lock);
+
+		xas_lock_irqsave(&xas, flags);
+		xas_load(&xas);
+		xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK);
+		xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY);
+		xas_unlock_irqrestore(&xas, flags);
+
 		btrfs_set_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN);
 		percpu_counter_add_batch(&fs_info->dirty_metadata_bytes, -eb->len,
@@ -1888,6 +1899,33 @@ static void set_btree_ioerr(struct extent_buffer *eb)
 	}
 }
 
+static void buffer_tree_set_mark(const struct extent_buffer *eb, xa_mark_t mark)
+{
+	struct btrfs_fs_info *fs_info = eb->fs_info;
+	XA_STATE(xas, &fs_info->buffer_tree,
+		 eb->start >> fs_info->sectorsize_bits);
+	unsigned long flags;
+
+	xas_lock_irqsave(&xas, flags);
+	xas_load(&xas);
+	xas_set_mark(&xas, mark);
+	xas_unlock_irqrestore(&xas, flags);
+}
+
+static void buffer_tree_clear_mark(const struct extent_buffer *eb,
+				   xa_mark_t mark)
+{
+	struct btrfs_fs_info *fs_info = eb->fs_info;
+	XA_STATE(xas, &fs_info->buffer_tree,
+		 eb->start >> fs_info->sectorsize_bits);
+	unsigned long flags;
+
+	xas_lock_irqsave(&xas, flags);
+	xas_load(&xas);
+	xas_clear_mark(&xas, mark);
+	xas_unlock_irqrestore(&xas, flags);
+}
+
 /*
  * The endio specific version which won't touch any unsafe spinlock in endio
  * context.
@@ -1921,6 +1959,7 @@ static void end_bbio_meta_write(struct btrfs_bio *bbio)
 		btrfs_meta_folio_clear_writeback(fi.folio, eb);
 	}
 
+	buffer_tree_clear_mark(eb, PAGECACHE_TAG_WRITEBACK);
 	clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
 	smp_mb__after_atomic();
 	wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK);
@@ -3537,6 +3576,7 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
 	if (!test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags))
 		return;
 
+	buffer_tree_clear_mark(eb, PAGECACHE_TAG_DIRTY);
 	percpu_counter_add_batch(&fs_info->dirty_metadata_bytes, -eb->len,
 				 fs_info->dirty_metadata_batch);
@@ -3585,6 +3625,7 @@ void set_extent_buffer_dirty(struct extent_buffer *eb)
 		folio_lock(eb->folios[0]);
 	for (int i = 0; i < num_extent_folios(eb); i++)
 		btrfs_meta_folio_set_dirty(eb->folios[i], eb);
+	buffer_tree_set_mark(eb, PAGECACHE_TAG_DIRTY);
 	if (subpage)
 		folio_unlock(eb->folios[0]);
 	percpu_counter_add_batch(&eb->fs_info->dirty_metadata_bytes,

From patchwork Fri Apr 18 13:57:23 2025
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 14057216
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 3/3] btrfs: use buffer radix for extent buffer writeback operations
Date: Fri, 18 Apr 2025 09:57:23 -0400

Currently we have an ugly back and forth in the btree writeback path where
we find the folio, find the eb associated with that folio, and then attempt
to write it back. This results in two different paths for subpage ebs and
>= pagesize ebs.

Clean this up by adding our own infrastructure around looking up tagged ebs
and writing them out directly. This allows us to unify the subpage and
>= pagesize IO paths, resulting in a much cleaner writeback path for extent
buffers.

I ran this through fsperf on a VM with 8 CPUs and 16GiB of RAM. I used
smallfiles100k, but reduced the number of files to 1k to make it run faster.
The results are below, with the statistically significant improvements
marked with *; there were no regressions. fsperf was run with -n 10 for both
runs, so the baseline and the test are each the average of 10 runs.
smallfiles100k results

     metric                 baseline       current        stdev            diff
================================================================================
avg_commit_ms                  68.58         58.44         3.35      -14.79% *
commits                       270.60        254.70        16.24       -5.88%
dev_read_iops                     48            48            0        0.00%
dev_read_kbytes                 1044          1044            0        0.00%
dev_write_iops             866117.90     850028.10     14292.20       -1.86%
dev_write_kbytes         10939976.40   10605701.20    351330.32       -3.06%
elapsed                        49.30            33         1.64      -33.06% *
end_state_mount_ns       41251498.80   35773220.70   2531205.32      -13.28% *
end_state_umount_ns         1.90e+09      1.50e+09  14186226.85      -21.38% *
max_commit_ms                    139        111.60         9.72      -19.71% *
sys_cpu                         4.90          3.86         0.88      -21.29%
write_bw_bytes           42935768.20   64318451.10   1609415.05       49.80% *
write_clat_ns_mean         366431.69     243202.60     14161.98      -33.63% *
write_clat_ns_p50           49203.20         20992       264.40      -57.34% *
write_clat_ns_p99             827392     653721.60     65904.74      -20.99% *
write_io_kbytes              2035940       2035940            0        0.00%
write_iops                  10482.37      15702.75       392.92       49.80% *
write_lat_ns_max            1.01e+08      90516129   3910102.06      -10.29% *
write_lat_ns_mean          366556.19     243308.48     14154.51      -33.62% *

As you can see, we get about a 33% decrease in runtime and a 50% increase in
throughput, which is pretty significant.

Signed-off-by: Josef Bacik
---
 fs/btrfs/extent_io.c   | 344 ++++++++++++++++++++---------------------
 fs/btrfs/extent_io.h   |   1 +
 fs/btrfs/transaction.c |   5 +-
 3 files changed, 173 insertions(+), 177 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index ef6df7bcef5d..080409e068e9 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1926,6 +1926,117 @@ static void buffer_tree_clear_mark(const struct extent_buffer *eb,
 	xas_unlock_irqrestore(&xas, flags);
 }
 
+static void buffer_tree_tag_for_writeback(struct btrfs_fs_info *fs_info,
+					  unsigned long start, unsigned long end)
+{
+	XA_STATE(xas, &fs_info->buffer_tree, start);
+	unsigned int tagged = 0;
+	void *eb;
+
+	xas_lock_irq(&xas);
+	xas_for_each_marked(&xas, eb, end, PAGECACHE_TAG_DIRTY) {
+		xas_set_mark(&xas, PAGECACHE_TAG_TOWRITE);
+		if (++tagged % XA_CHECK_SCHED)
+			continue;
+		xas_pause(&xas);
+		xas_unlock_irq(&xas);
+		cond_resched();
+		xas_lock_irq(&xas);
+	}
+	xas_unlock_irq(&xas);
+}
+
+struct eb_batch {
+	unsigned int nr;
+	unsigned int cur;
+	struct extent_buffer *ebs[PAGEVEC_SIZE];
+};
+
+static inline bool eb_batch_add(struct eb_batch *batch,
+				struct extent_buffer *eb)
+{
+	batch->ebs[batch->nr++] = eb;
+	return (batch->nr < PAGEVEC_SIZE);
+}
+
+static inline void eb_batch_init(struct eb_batch *batch)
+{
+	batch->nr = 0;
+	batch->cur = 0;
+}
+
+static inline unsigned int eb_batch_count(struct eb_batch *batch)
+{
+	return batch->nr;
+}
+
+static inline struct extent_buffer *eb_batch_next(struct eb_batch *batch)
+{
+	if (batch->cur >= batch->nr)
+		return NULL;
+	return batch->ebs[batch->cur++];
+}
+
+static inline void eb_batch_release(struct eb_batch *batch)
+{
+	for (unsigned int i = 0; i < batch->nr; i++)
+		free_extent_buffer(batch->ebs[i]);
+	eb_batch_init(batch);
+}
+
+static inline struct extent_buffer *find_get_eb(struct xa_state *xas, unsigned long max,
+						xa_mark_t mark)
+{
+	struct extent_buffer *eb;
+
+retry:
+	eb = xas_find_marked(xas, max, mark);
+
+	if (xas_retry(xas, eb))
+		goto retry;
+
+	if (!eb)
+		return NULL;
+
+	if (!atomic_inc_not_zero(&eb->refs))
+		goto reset;
+
+	if (unlikely(eb != xas_reload(xas))) {
+		free_extent_buffer(eb);
+		goto reset;
+	}
+
+	return eb;
+reset:
+	xas_reset(xas);
+	goto retry;
+}
+
+static unsigned int buffer_tree_get_ebs_tag(struct btrfs_fs_info *fs_info,
+					    unsigned long *start,
+					    unsigned long end, xa_mark_t tag,
+					    struct eb_batch
*batch) +{ + XA_STATE(xas, &fs_info->buffer_tree, *start); + struct extent_buffer *eb; + + rcu_read_lock(); + while ((eb = find_get_eb(&xas, end, tag)) != NULL) { + if (!eb_batch_add(batch, eb)) { + *start = (eb->start + eb->len) >> fs_info->sectorsize_bits; + goto out; + } + } + if (end == (unsigned long)-1) + *start = (unsigned long)-1; + else + *start = end + 1; +out: + rcu_read_unlock(); + + return eb_batch_count(batch); +} + /* * The endio specific version which won't touch any unsafe spinlock in endio * context. @@ -2031,163 +2142,37 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb, } /* - * Submit one subpage btree page. + * Wait for all eb writeback in the given range to finish. * - * The main difference to submit_eb_page() is: - * - Page locking - * For subpage, we don't rely on page locking at all. - * - * - Flush write bio - * We only flush bio if we may be unable to fit current extent buffers into - * current bio. - * - * Return >=0 for the number of submitted extent buffers. - * Return <0 for fatal error. + * @fs_info: the fs_info for this file system + * @start: the offset of the range to start waiting on writeback + * @end: the end of the range, inclusive. This is meant to be used in + * conjuction with wait_marked_extents, so this will usually be + * the_next_eb->start - 1. */ -static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) +void btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start, u64 end) { - struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); - int submitted = 0; - u64 folio_start = folio_pos(folio); - int bit_start = 0; - int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits; - const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio); + struct eb_batch batch; + unsigned long start_index = start >> fs_info->sectorsize_bits; + unsigned long end_index = end >> fs_info->sectorsize_bits; - /* Lock and write each dirty extent buffers in the range */ - while (bit_start < blocks_per_folio) { - struct btrfs_subpage *subpage = folio_get_private(folio); + eb_batch_init(&batch); + while (start_index <= end_index) { struct extent_buffer *eb; - unsigned long flags; - u64 start; + unsigned int nr_ebs; - /* - * Take private lock to ensure the subpage won't be detached - * in the meantime. - */ - spin_lock(&folio->mapping->i_private_lock); - if (!folio_test_private(folio)) { - spin_unlock(&folio->mapping->i_private_lock); + nr_ebs = buffer_tree_get_ebs_tag(fs_info, &start_index, + end_index, + PAGECACHE_TAG_WRITEBACK, + &batch); + if (!nr_ebs) break; - } - spin_lock_irqsave(&subpage->lock, flags); - if (!test_bit(bit_start + btrfs_bitmap_nr_dirty * blocks_per_folio, - subpage->bitmaps)) { - spin_unlock_irqrestore(&subpage->lock, flags); - spin_unlock(&folio->mapping->i_private_lock); - bit_start += sectors_per_node; - continue; - } - start = folio_start + bit_start * fs_info->sectorsize; - bit_start += sectors_per_node; - - /* - * Here we just want to grab the eb without touching extra - * spin locks, so call find_extent_buffer_nolock(). - */ - eb = find_extent_buffer_nolock(fs_info, start); - spin_unlock_irqrestore(&subpage->lock, flags); - spin_unlock(&folio->mapping->i_private_lock); - - /* - * The eb has already reached 0 refs thus find_extent_buffer() - * doesn't return it. We don't need to write back such eb - * anyway. 
- */ - if (!eb) - continue; - - if (lock_extent_buffer_for_io(eb, wbc)) { - write_one_eb(eb, wbc); - submitted++; - } - free_extent_buffer(eb); + while ((eb = eb_batch_next(&batch)) != NULL) + wait_on_extent_buffer_writeback(eb); + eb_batch_release(&batch); + cond_resched(); } - return submitted; -} - -/* - * Submit all page(s) of one extent buffer. - * - * @page: the page of one extent buffer - * @eb_context: to determine if we need to submit this page, if current page - * belongs to this eb, we don't need to submit - * - * The caller should pass each page in their bytenr order, and here we use - * @eb_context to determine if we have submitted pages of one extent buffer. - * - * If we have, we just skip until we hit a new page that doesn't belong to - * current @eb_context. - * - * If not, we submit all the page(s) of the extent buffer. - * - * Return >0 if we have submitted the extent buffer successfully. - * Return 0 if we don't need to submit the page, as it's already submitted by - * previous call. - * Return <0 for fatal error. - */ -static int submit_eb_page(struct folio *folio, struct btrfs_eb_write_context *ctx) -{ - struct writeback_control *wbc = ctx->wbc; - struct address_space *mapping = folio->mapping; - struct extent_buffer *eb; - int ret; - - if (!folio_test_private(folio)) - return 0; - - if (btrfs_meta_is_subpage(folio_to_fs_info(folio))) - return submit_eb_subpage(folio, wbc); - - spin_lock(&mapping->i_private_lock); - if (!folio_test_private(folio)) { - spin_unlock(&mapping->i_private_lock); - return 0; - } - - eb = folio_get_private(folio); - - /* - * Shouldn't happen and normally this would be a BUG_ON but no point - * crashing the machine for something we can survive anyway. - */ - if (WARN_ON(!eb)) { - spin_unlock(&mapping->i_private_lock); - return 0; - } - - if (eb == ctx->eb) { - spin_unlock(&mapping->i_private_lock); - return 0; - } - ret = atomic_inc_not_zero(&eb->refs); - spin_unlock(&mapping->i_private_lock); - if (!ret) - return 0; - - ctx->eb = eb; - - ret = btrfs_check_meta_write_pointer(eb->fs_info, ctx); - if (ret) { - if (ret == -EBUSY) - ret = 0; - free_extent_buffer(eb); - return ret; - } - - if (!lock_extent_buffer_for_io(eb, wbc)) { - free_extent_buffer(eb); - return 0; - } - /* Implies write in zoned mode. */ - if (ctx->zoned_bg) { - /* Mark the last eb in the block group. */ - btrfs_schedule_zone_finish_bg(ctx->zoned_bg, eb); - ctx->zoned_bg->meta_write_pointer += eb->len; - } - write_one_eb(eb, wbc); - free_extent_buffer(eb); - return 1; } int btree_write_cache_pages(struct address_space *mapping, @@ -2198,25 +2183,27 @@ int btree_write_cache_pages(struct address_space *mapping, int ret = 0; int done = 0; int nr_to_write_done = 0; - struct folio_batch fbatch; - unsigned int nr_folios; - pgoff_t index; - pgoff_t end; /* Inclusive */ + struct eb_batch batch; + unsigned int nr_ebs; + unsigned long index; + unsigned long end; int scanned = 0; xa_mark_t tag; - folio_batch_init(&fbatch); + eb_batch_init(&batch); if (wbc->range_cyclic) { - index = mapping->writeback_index; /* Start from prev offset */ + index = (mapping->writeback_index << PAGE_SHIFT) >> fs_info->sectorsize_bits; end = -1; + /* * Start from the beginning does not need to cycle over the * range, mark it as scanned. 
*/ scanned = (index == 0); } else { - index = wbc->range_start >> PAGE_SHIFT; - end = wbc->range_end >> PAGE_SHIFT; + index = wbc->range_start >> fs_info->sectorsize_bits; + end = wbc->range_end >> fs_info->sectorsize_bits; + scanned = 1; } if (wbc->sync_mode == WB_SYNC_ALL) @@ -2226,31 +2213,40 @@ int btree_write_cache_pages(struct address_space *mapping, btrfs_zoned_meta_io_lock(fs_info); retry: if (wbc->sync_mode == WB_SYNC_ALL) - tag_pages_for_writeback(mapping, index, end); + buffer_tree_tag_for_writeback(fs_info, index, end); while (!done && !nr_to_write_done && (index <= end) && - (nr_folios = filemap_get_folios_tag(mapping, &index, end, - tag, &fbatch))) { - unsigned i; + (nr_ebs = buffer_tree_get_ebs_tag(fs_info, &index, end, tag, + &batch))) { + struct extent_buffer *eb; - for (i = 0; i < nr_folios; i++) { - struct folio *folio = fbatch.folios[i]; + while ((eb = eb_batch_next(&batch)) != NULL) { + ctx.eb = eb; - ret = submit_eb_page(folio, &ctx); - if (ret == 0) + ret = btrfs_check_meta_write_pointer(eb->fs_info, &ctx); + if (ret) { + if (ret == -EBUSY) + ret = 0; + if (ret) { + done = 1; + break; + } + free_extent_buffer(eb); continue; - if (ret < 0) { - done = 1; - break; } - /* - * the filesystem may choose to bump up nr_to_write. - * We have to make sure to honor the new nr_to_write - * at any time - */ - nr_to_write_done = wbc->nr_to_write <= 0; + if (!lock_extent_buffer_for_io(eb, wbc)) + continue; + + /* Implies write in zoned mode. */ + if (ctx.zoned_bg) { + /* Mark the last eb in the block group. */ + btrfs_schedule_zone_finish_bg(ctx.zoned_bg, eb); + ctx.zoned_bg->meta_write_pointer += eb->len; + } + write_one_eb(eb, wbc); } - folio_batch_release(&fbatch); + nr_to_write_done = wbc->nr_to_write <= 0; + eb_batch_release(&batch); cond_resched(); } if (!scanned && !done) { diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index b344162f790c..4f0cf5b0d38f 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -240,6 +240,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, struct writeback_control *wbc); +void btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start, u64 end); void btrfs_readahead(struct readahead_control *rac); int set_folio_extent_mapped(struct folio *folio); void clear_folio_extent_mapped(struct folio *folio); diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 39e48bf610a1..b72ac8b70e0e 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -1155,7 +1155,7 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info, if (!ret) ret = filemap_fdatawrite_range(mapping, start, end); if (!ret && wait_writeback) - ret = filemap_fdatawait_range(mapping, start, end); + btree_wait_writeback_range(fs_info, start, end); btrfs_free_extent_state(cached_state); if (ret) break; @@ -1175,7 +1175,6 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info, static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info, struct extent_io_tree *dirty_pages) { - struct address_space *mapping = fs_info->btree_inode->i_mapping; struct extent_state *cached_state = NULL; u64 start = 0; u64 end; @@ -1196,7 +1195,7 @@ static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info, if (ret == -ENOMEM) ret = 0; if (!ret) - ret = filemap_fdatawait_range(mapping, start, end); + btree_wait_writeback_range(fs_info, start, end); 
 		btrfs_free_extent_state(cached_state);
 		if (ret)
 			break;
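Taken together, the series replaces the folio-based btree writeback walk
with a tag-based walk over the buffer_tree. The following is a condensed
sketch of the resulting control flow, simplified from the
btree_write_cache_pages() change above (the btree_writeback_sketch name is
made up, and zoned-mode handling, nr_to_write accounting, range_cyclic
wrap-around and error propagation are all omitted):

static void btree_writeback_sketch(struct btrfs_fs_info *fs_info,
				   struct writeback_control *wbc)
{
	struct eb_batch batch;
	unsigned long index = 0;
	unsigned long end = ULONG_MAX;
	xa_mark_t tag = (wbc->sync_mode == WB_SYNC_ALL) ?
			PAGECACHE_TAG_TOWRITE : PAGECACHE_TAG_DIRTY;

	eb_batch_init(&batch);
	if (wbc->sync_mode == WB_SYNC_ALL)
		buffer_tree_tag_for_writeback(fs_info, index, end);

	/* Grab references to tagged ebs in batches and write them out. */
	while (buffer_tree_get_ebs_tag(fs_info, &index, end, tag, &batch)) {
		struct extent_buffer *eb;

		while ((eb = eb_batch_next(&batch)) != NULL) {
			if (lock_extent_buffer_for_io(eb, wbc))
				write_one_eb(eb, wbc);
		}
		/* Drops the references taken by buffer_tree_get_ebs_tag(). */
		eb_batch_release(&batch);
		cond_resched();
	}
}

Waiting for completion then becomes btree_wait_writeback_range(), which does
the same batched walk over PAGECACHE_TAG_WRITEBACK and calls
wait_on_extent_buffer_writeback() on each buffer, which is why the
transaction code above no longer needs filemap_fdatawait_range().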