From patchwork Thu Oct 1 18:36:35 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Naohiro Aota
X-Patchwork-Id: 11812305
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org, dsterba@suse.com
Cc: hare@suse.com, linux-fsdevel@vger.kernel.org, Naohiro Aota
Subject: [PATCH v8 28/41] btrfs: serialize meta IOs on ZONED mode
Date: Fri, 2 Oct 2020 03:36:35 +0900
Message-Id: 
X-Mailer: git-send-email 2.27.0
In-Reply-To: 
References: 
Precedence: bulk
List-ID: 
X-Mailing-List: linux-btrfs@vger.kernel.org

We cannot use zone append writing for metadata, because the B-tree nodes
have references to each other using the logical address. Without knowing
the address in advance, we cannot construct the tree in the first place.
Thus, we need to serialize write IOs for metadata.

We cannot add a mutex around allocation and submission because metadata
blocks are allocated in an earlier stage to build up B-trees.
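(Aside for reviewers: below is a minimal, user-space sketch of the constraint
above. All sketch_* names are made up and are not part of this series; the
point is only that a parent node must record its child's logical address at
allocation time, which a plain write at a known write pointer allows and zone
append, which reports the final location only at IO completion, does not.)

/* Illustrative sketch only -- hypothetical names, not btrfs code. */
#include <stdint.h>

struct sketch_parent_node {
	uint64_t child_logical;	/* B-tree reference by logical address */
};

/* Plain-write model: the address is fixed at allocation time. */
static uint64_t sketch_alloc_meta_block(uint64_t *write_pointer, uint64_t len)
{
	uint64_t logical = *write_pointer;

	*write_pointer += len;	/* the next block follows sequentially */
	return logical;		/* the parent can record this up front */
}

With zone append there is no equivalent of sketch_alloc_meta_block(): the
device chooses the location and returns it only in the completion, after the
parent node would already have needed it.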
Thus, this commit adds zoned_meta_io_lock and holds it during metadata IO
submission in btree_write_cache_pages() to serialize IOs. Furthermore, it
adds a per-block group metadata IO submission pointer "meta_write_pointer"
to ensure sequential writing, which can otherwise break when writing back
blocks in an unfinished transaction.

Signed-off-by: Naohiro Aota
---
 fs/btrfs/block-group.h |  1 +
 fs/btrfs/ctree.h       |  2 ++
 fs/btrfs/disk-io.c     |  1 +
 fs/btrfs/extent_io.c   | 27 ++++++++++++++++++++++-
 fs/btrfs/zoned.c       | 50 ++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/zoned.h       | 31 ++++++++++++++++++++++++++
 6 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 401e9bcefaec..b2a8a3beceac 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -190,6 +190,7 @@ struct btrfs_block_group {
 	 */
 	u64 alloc_offset;
 	u64 zone_unusable;
+	u64 meta_write_pointer;
 };
 
 static inline u64 btrfs_block_group_end(struct btrfs_block_group *block_group)
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index e6f0fe1920e9..d021bc4a92cd 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -953,6 +953,8 @@ struct btrfs_fs_info {
 	/* Type of exclusive operation running */
 	unsigned long exclusive_operation;
 
+	struct mutex zoned_meta_io_lock;
+
 #ifdef CONFIG_BTRFS_FS_REF_VERIFY
 	spinlock_t ref_verify_lock;
 	struct rb_root block_tree;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index c872f051b0a5..87c978fecaa2 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2652,6 +2652,7 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
 	mutex_init(&fs_info->delete_unused_bgs_mutex);
 	mutex_init(&fs_info->reloc_mutex);
 	mutex_init(&fs_info->delalloc_root_mutex);
+	mutex_init(&fs_info->zoned_meta_io_lock);
 	seqlock_init(&fs_info->profiles_lock);
 
 	INIT_LIST_HEAD(&fs_info->dirty_cowonly_roots);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index bbcdc8dfbd45..ed6a9fce016d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -25,6 +25,7 @@
 #include "backref.h"
 #include "disk-io.h"
 #include "zoned.h"
+#include "block-group.h"
 
 static struct kmem_cache *extent_state_cache;
 static struct kmem_cache *extent_buffer_cache;
@@ -4001,6 +4002,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 				   struct writeback_control *wbc)
 {
 	struct extent_buffer *eb, *prev_eb = NULL;
+	struct btrfs_block_group *cache = NULL;
 	struct extent_page_data epd = {
 		.bio = NULL,
 		.extent_locked = 0,
@@ -4035,6 +4037,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 		tag = PAGECACHE_TAG_TOWRITE;
 	else
 		tag = PAGECACHE_TAG_DIRTY;
+	btrfs_zoned_meta_io_lock(fs_info);
 retry:
 	if (wbc->sync_mode == WB_SYNC_ALL)
 		tag_pages_for_writeback(mapping, index, end);
@@ -4077,12 +4080,30 @@ int btree_write_cache_pages(struct address_space *mapping,
 			if (!ret)
 				continue;
 
+			if (!btrfs_check_meta_write_pointer(fs_info, eb,
+							    &cache)) {
+				/*
+				 * If for_sync, this hole will be filled with
+				 * transaction commit.
+				 */
+				if (wbc->sync_mode == WB_SYNC_ALL &&
+				    !wbc->for_sync)
+					ret = -EAGAIN;
+				else
+					ret = 0;
+				done = 1;
+				free_extent_buffer(eb);
+				break;
+			}
+
 			prev_eb = eb;
 			ret = lock_extent_buffer_for_io(eb, &epd);
 			if (!ret) {
+				btrfs_revert_meta_write_pointer(cache, eb);
 				free_extent_buffer(eb);
 				continue;
 			} else if (ret < 0) {
+				btrfs_revert_meta_write_pointer(cache, eb);
 				done = 1;
 				free_extent_buffer(eb);
 				break;
@@ -4115,10 +4136,12 @@ int btree_write_cache_pages(struct address_space *mapping,
 		index = 0;
 		goto retry;
 	}
+	if (cache)
+		btrfs_put_block_group(cache);
 	ASSERT(ret <= 0);
 	if (ret < 0) {
 		end_write_bio(&epd, ret);
-		return ret;
+		goto out;
 	}
 	/*
 	 * If something went wrong, don't allow any metadata write bio to be
@@ -4153,6 +4176,8 @@ int btree_write_cache_pages(struct address_space *mapping,
 		ret = -EROFS;
 		end_write_bio(&epd, ret);
 	}
+out:
+	btrfs_zoned_meta_io_unlock(fs_info);
 	return ret;
 }
 
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 9e1056e2c2c8..57bd6dbd8f45 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -991,6 +991,9 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
 		ret = -EIO;
 	}
 
+	if (!ret)
+		cache->meta_write_pointer = cache->alloc_offset + cache->start;
+
 	kfree(alloc_offsets);
 	free_extent_map(em);
 
@@ -1122,3 +1125,50 @@ void btrfs_rewrite_logical_zoned(struct btrfs_ordered_extent *ordered)
 	kfree(logical);
 	bdput(bdev);
 }
+
+bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+				    struct extent_buffer *eb,
+				    struct btrfs_block_group **cache_ret)
+{
+	struct btrfs_block_group *cache;
+
+	if (!btrfs_fs_incompat(fs_info, ZONED))
+		return true;
+
+	cache = *cache_ret;
+
+	if (cache && (eb->start < cache->start ||
+		      cache->start + cache->length <= eb->start)) {
+		btrfs_put_block_group(cache);
+		cache = NULL;
+		*cache_ret = NULL;
+	}
+
+	if (!cache)
+		cache = btrfs_lookup_block_group(fs_info, eb->start);
+
+	if (cache) {
+		*cache_ret = cache;
+
+		if (cache->meta_write_pointer != eb->start) {
+			btrfs_put_block_group(cache);
+			cache = NULL;
+			*cache_ret = NULL;
+			return false;
+		}
+
+		cache->meta_write_pointer = eb->start + eb->len;
+	}
+
+	return true;
+}
+
+void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
+				     struct extent_buffer *eb)
+{
+	if (!btrfs_fs_incompat(eb->fs_info, ZONED) || !cache)
+		return;
+
+	ASSERT(cache->meta_write_pointer == eb->start + eb->len);
+	cache->meta_write_pointer = eb->start;
+}
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index f6263a893a07..fc8012ebcc36 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -53,6 +53,11 @@ void btrfs_free_redirty_list(struct btrfs_transaction *trans);
 void btrfs_record_physical_zoned(struct inode *inode, u64 file_offset,
 				 struct bio *bio);
 void btrfs_rewrite_logical_zoned(struct btrfs_ordered_extent *ordered);
+bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+				    struct extent_buffer *eb,
+				    struct btrfs_block_group **cache_ret);
+void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
+				     struct extent_buffer *eb);
 #else /* CONFIG_BLK_DEV_ZONED */
 static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
 				     struct blk_zone *zone)
@@ -128,6 +133,18 @@ static inline void btrfs_record_physical_zoned(struct inode *inode,
 }
 static inline void btrfs_rewrite_logical_zoned(struct btrfs_ordered_extent *ordered)
 {
 }
+static inline bool
+btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+			       struct extent_buffer *eb,
+			       struct btrfs_block_group **cache_ret)
+{
+	return true;
+}
+static inline void
+btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
+				struct extent_buffer *eb)
+{
+}
 #endif
 
 static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
@@ -230,4 +247,18 @@ static inline bool btrfs_can_zone_reset(struct btrfs_device *device,
 	return true;
 }
 
+static inline void btrfs_zoned_meta_io_lock(struct btrfs_fs_info *fs_info)
+{
+	if (!btrfs_fs_incompat(fs_info, ZONED))
+		return;
+	mutex_lock(&fs_info->zoned_meta_io_lock);
+}
+
+static inline void btrfs_zoned_meta_io_unlock(struct btrfs_fs_info *fs_info)
+{
+	if (!btrfs_fs_incompat(fs_info, ZONED))
+		return;
+	mutex_unlock(&fs_info->zoned_meta_io_lock);
+}
+
 #endif
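For completeness, here is a heavily simplified, user-space sketch of the
write-pointer bookkeeping the patch adds. The sketch_* names are hypothetical
and only mirror the shape of btrfs_check_meta_write_pointer() and
btrfs_revert_meta_write_pointer(), without the block group lookup, reference
counting, or the ZONED incompat check:

/* Illustrative sketch only -- not the btrfs implementation. */
#include <stdbool.h>
#include <stdint.h>

struct sketch_block_group {
	uint64_t start;
	uint64_t length;
	uint64_t meta_write_pointer;
};

struct sketch_eb {			/* stand-in for extent_buffer */
	uint64_t start;
	uint64_t len;
};

/* Admit @eb only if it is the next sequential metadata write. */
static bool sketch_check_meta_write_pointer(struct sketch_block_group *bg,
					    const struct sketch_eb *eb)
{
	if (bg->meta_write_pointer != eb->start)
		return false;		/* out of order: caller bails out */
	bg->meta_write_pointer = eb->start + eb->len;
	return true;
}

/* Undo the advance if the buffer could not be submitted after all. */
static void sketch_revert_meta_write_pointer(struct sketch_block_group *bg,
					     const struct sketch_eb *eb)
{
	bg->meta_write_pointer = eb->start;
}

int main(void)
{
	struct sketch_block_group bg = { .start = 0, .length = 1 << 30,
					 .meta_write_pointer = 16384 };
	struct sketch_eb eb = { .start = 16384, .len = 16384 };

	if (sketch_check_meta_write_pointer(&bg, &eb)) {
		/* ...submit the buffer; if that fails, undo the advance: */
		sketch_revert_meta_write_pointer(&bg, &eb);
	}
	return 0;
}

In the patch, the check advances the pointer before lock_extent_buffer_for_io();
if the buffer is skipped or locking fails, the revert helper rewinds the pointer
so the buffer can be written later, while the zoned_meta_io_lock mutex held
around btree_write_cache_pages() keeps concurrent writers from interleaving
their pointer updates.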