From patchwork Sun Nov 24 23:57:21 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13884297
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Cc:
stable@vger.kernel.org
Subject: [PATCH v2 1/7] btrfs: fix double accounting of ordered extents during errors
Date: Mon, 25 Nov 2024 10:27:21 +1030
Message-ID: <3f91b3e146ca8314c21b9afb44ad84def3d2223d.1732492421.git.wqu@suse.com>

[BUG]
Btrfs will fail generic/750 randomly if its sector size is smaller than
page size.
One of the warnings looks like this:

  ------------[ cut here ]------------
  WARNING: CPU: 1 PID: 90263 at fs/btrfs/ordered-data.c:360 can_finish_ordered_extent+0x33c/0x390 [btrfs]
  CPU: 1 UID: 0 PID: 90263 Comm: kworker/u18:1 Tainted: G OE 6.12.0-rc3-custom+ #79
  Workqueue: events_unbound btrfs_async_reclaim_metadata_space [btrfs]
  pc : can_finish_ordered_extent+0x33c/0x390 [btrfs]
  lr : can_finish_ordered_extent+0xdc/0x390 [btrfs]
  Call trace:
   can_finish_ordered_extent+0x33c/0x390 [btrfs]
   btrfs_mark_ordered_io_finished+0x130/0x2b8 [btrfs]
   extent_writepage+0xfc/0x338 [btrfs]
   extent_write_cache_pages+0x1d4/0x4b8 [btrfs]
   btrfs_writepages+0x94/0x158 [btrfs]
   do_writepages+0x74/0x190
   filemap_fdatawrite_wbc+0x88/0xc8
   start_delalloc_inodes+0x180/0x3b0 [btrfs]
   btrfs_start_delalloc_roots+0x17c/0x288 [btrfs]
   shrink_delalloc+0x11c/0x280 [btrfs]
   flush_space+0x27c/0x310 [btrfs]
   btrfs_async_reclaim_metadata_space+0xcc/0x208 [btrfs]
   process_one_work+0x228/0x670
   worker_thread+0x1bc/0x360
   kthread+0x100/0x118
   ret_from_fork+0x10/0x20
  irq event stamp: 9784200
  hardirqs last enabled at (9784199): [] _raw_spin_unlock_irqrestore+0x74/0x80
  hardirqs last disabled at (9784200): [] _raw_spin_lock_irqsave+0x8c/0xa0
  softirqs last enabled at (9784148): [] handle_softirqs+0x45c/0x4b0
  softirqs last disabled at (9784141): [] __do_softirq+0x1c/0x28
  ---[ end trace 0000000000000000 ]---
  BTRFS critical (device dm-2): bad ordered extent accounting, root=5 ino=1492 OE offset=1654784 OE len=57344 to_dec=49152 left=0

[CAUSE]
There are several error paths that are not properly handled during folio
writeback:

1) Partially submitted folio
   During extent_writepage_io(), if some error happens (the only
   possible case is submit_one_sector() failing to grab an extent map),
   we can end up with a partially submitted folio.

   Since extent_writepage_io() failed, we need to call
   btrfs_mark_ordered_io_finished() to clean up the failed range. But
   that cleanup covers the already-submitted range too, causing double
   accounting.
2) Partially created ordered extents
   We can also fail at writepage_delalloc(), which will stop creating
   new ordered extents once it hits any error from
   btrfs_run_delalloc_range().

   In that case, we will call btrfs_mark_ordered_io_finished() for
   ranges where there is no ordered extent at all.

Both bugs only affect the sector size < page size cases.

[FIX]
- Introduce a new member btrfs_bio_ctrl::last_submitted
  This will track the last sector submitted through
  extent_writepage_io(). So for the above extent_writepage() case, we
  will know exactly which sectors are submitted and should not do the
  ordered extent accounting.

- Clear the submit_bitmap for ranges where no ordered extent is created
  So if btrfs_run_delalloc_range() failed for a range, that range will
  not be cleaned up.

- Introduce a helper cleanup_ordered_extents()
  This will do a sector-by-sector cleanup, taking both
  btrfs_bio_ctrl::last_submitted and btrfs_bio_ctrl::submit_bitmap into
  consideration.

  Using @last_submitted avoids double accounting on the submitted
  ranges, while using @submit_bitmap avoids touching ranges going
  through compression.

cc: stable@vger.kernel.org # 5.15+
Signed-off-by: Qu Wenruo
---
 fs/btrfs/extent_io.c | 54 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 47 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index e629d2ee152a..1c2246d36672 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -108,6 +108,14 @@ struct btrfs_bio_ctrl {
	 * This is to avoid touching ranges covered by compression/inline.
	 */
	unsigned long submit_bitmap;
+
+	/*
+	 * The end (exclusive) of the last submitted range in the folio.
+	 *
+	 * This is for sector size < page size case where we may hit error
+	 * half way.
+	 */
+	u64 last_submitted;
 };

 static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
@@ -1254,11 +1262,18 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,

	/*
	 * We have some ranges that's going to be submitted asynchronously
-	 * (compression or inline). These range have their own control
+	 * (compression or inline, ret > 0). These range have their own control
	 * on when to unlock the pages. We should not touch them
-	 * anymore, so clear the range from the submission bitmap.
+	 * anymore.
+	 *
+	 * We can also have some ranges where we didn't even call
+	 * btrfs_run_delalloc_range() (as previous run failed, ret < 0).
+	 * These error ranges should not be submitted nor cleaned up as
+	 * there is no ordered extent allocated for them.
+	 *
+	 * For either cases, we should clear the submit_bitmap.
	 */
-	if (ret > 0) {
+	if (ret) {
		unsigned int start_bit = (found_start - page_start) >>
					 fs_info->sectorsize_bits;
		unsigned int end_bit = (min(page_end + 1, found_start + found_len) -
@@ -1435,6 +1450,7 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
		ret = submit_one_sector(inode, folio, cur, bio_ctrl, i_size);
		if (ret < 0)
			goto out;
+		bio_ctrl->last_submitted = cur + fs_info->sectorsize;
		submitted_io = true;
	}
 out:
@@ -1453,6 +1469,24 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
	return ret;
 }

+static void cleanup_ordered_extents(struct btrfs_inode *inode,
+				    struct folio *folio, u64 file_pos,
+				    u64 num_bytes, unsigned long *bitmap)
+{
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+	unsigned int cur_bit = (file_pos - folio_pos(folio)) >> fs_info->sectorsize_bits;
+
+	for_each_set_bit_from(cur_bit, bitmap, fs_info->sectors_per_page) {
+		u64 cur_pos = folio_pos(folio) + (cur_bit << fs_info->sectorsize_bits);
+
+		if (cur_pos >= file_pos + num_bytes)
+			break;
+
+		btrfs_mark_ordered_io_finished(inode, folio, cur_pos,
+					       fs_info->sectorsize, false);
+	}
+}
+
 /*
  * the writepage
 * semantics are similar to regular writepage.  extent
 * records are inserted to lock ranges in the tree, and as dirty areas
@@ -1492,6 +1526,7 @@ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl
	 * The proper bitmap can only be initialized until writepage_delalloc().
	 */
	bio_ctrl->submit_bitmap = (unsigned long)-1;
+	bio_ctrl->last_submitted = page_start;
	ret = set_folio_extent_mapped(folio);
	if (ret < 0)
		goto done;
@@ -1511,8 +1546,10 @@ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl

 done:
	if (ret) {
-		btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
-					       page_start, PAGE_SIZE, !ret);
+		cleanup_ordered_extents(BTRFS_I(inode), folio,
+					bio_ctrl->last_submitted,
+					page_start + PAGE_SIZE - bio_ctrl->last_submitted,
+					&bio_ctrl->submit_bitmap);
		mapping_set_error(folio->mapping, ret);
	}

@@ -2288,14 +2325,17 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
	 * extent_writepage_io() will do the truncation correctly.
	 */
	bio_ctrl.submit_bitmap = (unsigned long)-1;
+	bio_ctrl.last_submitted = cur;
	ret = extent_writepage_io(BTRFS_I(inode), folio, cur, cur_len,
				  &bio_ctrl, i_size);
	if (ret == 1)
		goto next_page;

	if (ret) {
-		btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
-					       cur, cur_len, !ret);
+		cleanup_ordered_extents(BTRFS_I(inode), folio,
+					bio_ctrl.last_submitted,
+					cur_end + 1 - bio_ctrl.last_submitted,
+					&bio_ctrl.submit_bitmap);
		mapping_set_error(mapping, ret);
	}
	btrfs_folio_end_lock(fs_info, folio, cur, cur_len);

From patchwork Sun Nov 24 23:57:22 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13884298
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 2/7] btrfs: fix inline data extent reads which zero out the remaining part
Date: Mon, 25 Nov 2024 10:27:22 +1030
Message-ID: <64faec049512590bd766d4a822768f8a150204b0.1732492421.git.wqu@suse.com>

[BUG in DEVEL BRANCH]
This bug itself can only be reproduced with the
following out-of-tree patches:

  btrfs: allow inline data extents creation if sector size < page size
  btrfs: allow buffered write to skip full page if it's sector aligned

With those out-of-tree patches, we can hit a data corruption:

  # mkfs.btrfs -f -s 4k $dev
  # mount $dev $mnt -o compress=zstd
  # xfs_io -f -c "pwrite 0 4k" $mnt/foobar
  # sync
  # echo 3 > /proc/sys/vm/drop_caches
  # xfs_io -f -c "pwrite 8k 4k" $mnt/foobar
  # md5sum $mnt/foobar
  65df683add4707de8200bad14745b9ec

Meanwhile such a workload should result in an md5sum of
277f3840b275c74d01e979ea9d75ac19.

[CAUSE]
The first buffered write into range [0, 4K) results in a compressed
inline extent (this relies on the patch "btrfs: allow inline data
extents creation if sector size < page size" to create such an inline
extent):

  item 6 key (257 EXTENT_DATA 0) itemoff 15823 itemsize 40
    generation 9 type 0 (inline)
    inline extent data size 19 ram_bytes 4096 compression 3 (zstd)

Then all page cache is dropped, and we do the new write into range
[8K, 12K).

With the out-of-tree patch "btrfs: allow buffered write to skip full
page if it's sector aligned", such an aligned write won't trigger the
full folio read, so we have only dirtied range [8K, 12K), and range
[0, 4K) is not uptodate.

Then md5sum triggers the full folio read, causing us to read the inline
data extent. Inside read_inline_extent() and uncompress_inline(), we
zero out all the remaining part of the folio, including the newly
dirtied range [8K, 12K), leading to the corruption.

[FIX]
Thankfully the bug has not yet reached any end users. For the upstream
kernel, the [8K, 12K) write itself will trigger the full folio read
before doing any write, thus everything is still fine.
Furthermore, existing btrfs users with sector size < page size (the
most common one being Asahi Linux) that have inline data extents
created from x86_64 are still fine, because two factors are saving us:

- Inline extents are always at offset 0
- Folio reads always follow the file offset order

So we always read out the inline extent first, zeroing the remaining
folio (which has no contents yet), then read the next sector, copying
the correct content over the zeroed-out part. No end users are
affected, thankfully.

The fix is to make both read_inline_extent() and uncompress_inline()
only zero out the sector, not the whole page.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/inode.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 3ab6d183f5e8..25ed18c12a68 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6712,6 +6712,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 {
	int ret;
	struct extent_buffer *leaf = path->nodes[0];
+	const u32 sectorsize = leaf->fs_info->sectorsize;
	char *tmp;
	size_t max_size;
	unsigned long inline_size;
@@ -6740,17 +6741,19 @@ static noinline int uncompress_inline(struct btrfs_path *path,
	 * cover that region here.
	 */
-	if (max_size < PAGE_SIZE)
-		folio_zero_range(folio, max_size, PAGE_SIZE - max_size);
+	if (max_size < sectorsize)
+		folio_zero_range(folio, max_size, sectorsize - max_size);
	kfree(tmp);
	return ret;
 }

-static int read_inline_extent(struct btrfs_path *path, struct folio *folio)
+static int read_inline_extent(struct btrfs_fs_info *fs_info,
+			      struct btrfs_path *path, struct folio *folio)
 {
	struct btrfs_file_extent_item *fi;
	void *kaddr;
	size_t copy_size;
+	const u32 sectorsize = fs_info->sectorsize;

	if (!folio || folio_test_uptodate(folio))
		return 0;
@@ -6768,8 +6771,8 @@ static int read_inline_extent(struct btrfs_path *path, struct folio *folio)
	read_extent_buffer(path->nodes[0], kaddr,
			   btrfs_file_extent_inline_start(fi), copy_size);
	kunmap_local(kaddr);
-	if (copy_size < PAGE_SIZE)
-		folio_zero_range(folio, copy_size, PAGE_SIZE - copy_size);
+	if (copy_size < sectorsize)
+		folio_zero_range(folio, copy_size, sectorsize - copy_size);
	return 0;
 }
@@ -6944,7 +6947,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
	ASSERT(em->disk_bytenr == EXTENT_MAP_INLINE);
	ASSERT(em->len == fs_info->sectorsize);

-	ret = read_inline_extent(path, folio);
+	ret = read_inline_extent(fs_info, path, folio);
	if (ret < 0)
		goto out;
	goto insert;

From patchwork Sun Nov 24 23:57:23 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13884299
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 3/7] btrfs: extract the inner loop of cow_file_range() to enhance the error handling
Date: Mon, 25 Nov 2024 10:27:23 +1030
Message-ID: <5c8ac05f4cebaec53bad10f761d62b69f961e92c.1732492421.git.wqu@suse.com>
[PROBLEMS]
Currently cow_file_range() has very complex error handling, splitting
the handling into 3 different parts (with one extra hidden part):

  |-------(1)----|---(2)---|-------------(3)----------|
  `- orig_start  `- start  `- start + cur_alloc_size  `- end

Range (1) is the one we handled without any error.

Range (2) is the current range we're allocating, which has a data
extent reserved but failed to insert the ordered extent.

Range (3) is the range we have not yet touched.

Furthermore there is a special case for range (1): if a range is for
the data reloc tree, and btrfs_reloc_clone_csums() failed, such a range
will be counted as range (1), to avoid releasing its metadata space.

Furthermore there is very complex handling inside the while() loop.

[ENHANCEMENT]
- Extract the main part of the while() loop into run_one_cow_range(),
  which does the data extent reservation, extent map creation and
  ordered extent creation, with its own error handling.

  For the special case of range (1) where btrfs_reloc_clone_csums()
  failed, it will return an error but still with ins->offset set, so
  that the caller can skip the range during error handling.

- Shrink the error range handling to 2 types
  With proper error handling inside run_one_cow_range(), if a range
  failed run_one_cow_range(), there will be no ordered extent nor
  extent map, thus the old range (2) and range (3) can be merged into
  one.

- Remove some unnecessary ALIGN()
  The range [start, end] is already aligned to the sector boundary.
  Just use ASSERT() to check the alignment.
Signed-off-by: Qu Wenruo
---
 fs/btrfs/inode.c | 337 ++++++++++++++++++++++++-----------------------
 1 file changed, 173 insertions(+), 164 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 25ed18c12a68..f9b1d78eb7ff 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1293,6 +1293,147 @@ u64 btrfs_get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
	return alloc_hint;
 }

+/*
+ * Run one cow range, which includes:
+ *
+ * - Reserve the data extent
+ * - Create the io extent map
+ * - Create the ordered extent
+ *
+ * @ins will be updated if the range should be skipped for error handling, no
+ * matter the return value.
+ *
+ * Return 0 if everything is fine.
+ * Return -EAGAIN if btrfs_reserve_extent() failed for zoned fs, caller needs
+ * some extra handling.
+ * Return <0 for other errors.
+ */
+static int run_one_cow_range(struct btrfs_inode *inode,
+			     struct folio *locked_folio,
+			     struct btrfs_key *ins,
+			     u64 start,
+			     u64 end, u64 *alloc_hint, bool keep_locked)
+{
+	struct btrfs_root *root = inode->root;
+	struct btrfs_fs_info *fs_info = root->fs_info;
+	struct btrfs_ordered_extent *ordered;
+	struct btrfs_file_extent file_extent = { 0 };
+	struct extent_state *cached = NULL;
+	struct extent_map *em = NULL;
+	unsigned long page_ops;
+	const u64 len = end + 1 - start;
+	u32 min_alloc_size;
+	int ret;
+
+	ASSERT(IS_ALIGNED(start, fs_info->sectorsize));
+	ASSERT(IS_ALIGNED(end + 1, fs_info->sectorsize));
+
+	ins->offset = 0;
+	ins->objectid = 0;
+
+	/*
+	 * Relocation relies on the relocated extents to have exactly the same
+	 * size as the original extents. Normally writeback for relocation data
+	 * extents follows a NOCOW path because relocation preallocates the
+	 * extents. However, due to an operation such as scrub turning a block
+	 * group to RO mode, it may fallback to COW mode, so we must make sure
+	 * an extent allocated during COW has exactly the requested size and can
+	 * not be split into smaller extents, otherwise relocation breaks and
+	 * fails during the stage where it updates the bytenr of file extent
+	 * items.
+	 */
+	if (btrfs_is_data_reloc_root(root))
+		min_alloc_size = len;
+	else
+		min_alloc_size = fs_info->sectorsize;
+
+	ret = btrfs_reserve_extent(root, len, len, min_alloc_size, 0,
+				   *alloc_hint, ins, 1, 1);
+	if (ret < 0)
+		return ret;
+
+	file_extent.disk_bytenr = ins->objectid;
+	file_extent.disk_num_bytes = ins->offset;
+	file_extent.num_bytes = ins->offset;
+	file_extent.ram_bytes = ins->offset;
+	file_extent.offset = 0;
+	file_extent.compression = BTRFS_COMPRESS_NONE;
+
+	lock_extent(&inode->io_tree, start, start + ins->offset - 1,
+		    &cached);
+
+	em = btrfs_create_io_em(inode, start, &file_extent,
+				BTRFS_ORDERED_REGULAR);
+	if (IS_ERR(em)) {
+		unlock_extent(&inode->io_tree, start,
+			      start + ins->offset - 1, &cached);
+		ret = PTR_ERR(em);
+		goto out_free_reserved;
+	}
+	free_extent_map(em);
+
+	ordered = btrfs_alloc_ordered_extent(inode, start, &file_extent,
+					     1 << BTRFS_ORDERED_REGULAR);
+	if (IS_ERR(ordered)) {
+		unlock_extent(&inode->io_tree, start,
+			      start + ins->offset - 1, &cached);
+		ret = PTR_ERR(ordered);
+		goto out_drop_em;
+	}
+
+	if (btrfs_is_data_reloc_root(root)) {
+		ret = btrfs_reloc_clone_csums(ordered);
+
+		/*
+		 * Only drop cache here, and process as normal.
+		 *
+		 * We must not allow extent_clear_unlock_delalloc()
+		 * at error handling to free meta of this ordered
+		 * extent, as its meta should be freed by
+		 * btrfs_finish_ordered_io().
+		 *
+		 * So we must continue until @start is increased to
+		 * skip current ordered extent.
+ */ + if (ret < 0) + btrfs_drop_extent_map_range(inode, start, + start + ins->offset - 1, + false); + } + btrfs_put_ordered_extent(ordered); + + btrfs_dec_block_group_reservations(fs_info, ins->objectid); + + /* + * We're not doing compressed IO, don't unlock the first page + * (which the caller expects to stay locked), don't clear any + * dirty bits and don't set any writeback bits + * + * Do set the Ordered (Private2) bit so we know this page was + * properly setup for writepage. + */ + page_ops = (keep_locked ? 0 : PAGE_UNLOCK); + page_ops |= PAGE_SET_ORDERED; + + extent_clear_unlock_delalloc(inode, start, start + ins->offset - 1, + locked_folio, &cached, + EXTENT_LOCKED | EXTENT_DELALLOC, + page_ops); + *alloc_hint = ins->objectid + ins->offset; + return ret; + +out_drop_em: + btrfs_drop_extent_map_range(inode, start, start + ins->offset - 1, false); +out_free_reserved: + btrfs_dec_block_group_reservations(fs_info, ins->objectid); + btrfs_free_reserved_extent(fs_info, ins->objectid, ins->offset, 1); + /* This is reserved for btrfs_reserve_extent() error. */ + ASSERT(ret != -EAGAIN); + ins->offset = 0; + ins->objectid = 0; + return ret; +} + /* * when extent_io.c finds a delayed allocation range in the file, * the call backs end up in this code. 
The basic idea is to @@ -1328,15 +1469,10 @@ static noinline int cow_file_range(struct btrfs_inode *inode, { struct btrfs_root *root = inode->root; struct btrfs_fs_info *fs_info = root->fs_info; - struct extent_state *cached = NULL; + const u64 num_bytes = end + 1 - start; u64 alloc_hint = 0; u64 orig_start = start; - u64 num_bytes; - u64 cur_alloc_size = 0; - u64 min_alloc_size; - u64 blocksize = fs_info->sectorsize; - struct btrfs_key ins; - struct extent_map *em; + u64 cur = start; unsigned clear_bits; unsigned long page_ops; int ret = 0; @@ -1346,8 +1482,8 @@ static noinline int cow_file_range(struct btrfs_inode *inode, goto out_unlock; } - num_bytes = ALIGN(end - start + 1, blocksize); - num_bytes = max(blocksize, num_bytes); + ASSERT(IS_ALIGNED(start, fs_info->sectorsize)); + ASSERT(IS_ALIGNED(end + 1, fs_info->sectorsize)); ASSERT(num_bytes <= btrfs_super_total_bytes(fs_info->super_copy)); inode_should_defrag(inode, start, end, num_bytes, SZ_64K); @@ -1372,29 +1508,16 @@ static noinline int cow_file_range(struct btrfs_inode *inode, alloc_hint = btrfs_get_extent_allocation_hint(inode, start, num_bytes); - /* - * Relocation relies on the relocated extents to have exactly the same - * size as the original extents. Normally writeback for relocation data - * extents follows a NOCOW path because relocation preallocates the - * extents. However, due to an operation such as scrub turning a block - * group to RO mode, it may fallback to COW mode, so we must make sure - * an extent allocated during COW has exactly the requested size and can - * not be split into smaller extents, otherwise relocation breaks and - * fails during the stage where it updates the bytenr of file extent - * items. 
- */ - if (btrfs_is_data_reloc_root(root)) - min_alloc_size = num_bytes; - else - min_alloc_size = fs_info->sectorsize; + while (cur < end) { + struct btrfs_key ins = { 0 }; - while (num_bytes > 0) { - struct btrfs_ordered_extent *ordered; - struct btrfs_file_extent file_extent; - - ret = btrfs_reserve_extent(root, num_bytes, num_bytes, - min_alloc_size, 0, alloc_hint, - &ins, 1, 1); + ret = run_one_cow_range(inode, locked_folio, &ins, + cur, end, &alloc_hint, keep_locked); + /* + * @cur must be advanced before error handling. + * Special handling for possible btrfs_reloc_clone_csums() error. + */ + cur += ins.offset; if (ret == -EAGAIN) { /* * btrfs_reserve_extent only returns -EAGAIN for zoned @@ -1408,145 +1531,51 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * us, or return -ENOSPC if it can't handle retries. */ ASSERT(btrfs_is_zoned(fs_info)); - if (start == orig_start) { + if (cur == orig_start) { wait_on_bit_io(&inode->root->fs_info->flags, BTRFS_FS_NEED_ZONE_FINISH, TASK_UNINTERRUPTIBLE); continue; } if (done_offset) { - *done_offset = start - 1; + *done_offset = cur - 1; return 0; } ret = -ENOSPC; } if (ret < 0) goto out_unlock; - cur_alloc_size = ins.offset; - - file_extent.disk_bytenr = ins.objectid; - file_extent.disk_num_bytes = ins.offset; - file_extent.num_bytes = ins.offset; - file_extent.ram_bytes = ins.offset; - file_extent.offset = 0; - file_extent.compression = BTRFS_COMPRESS_NONE; - - lock_extent(&inode->io_tree, start, start + cur_alloc_size - 1, - &cached); - - em = btrfs_create_io_em(inode, start, &file_extent, - BTRFS_ORDERED_REGULAR); - if (IS_ERR(em)) { - unlock_extent(&inode->io_tree, start, - start + cur_alloc_size - 1, &cached); - ret = PTR_ERR(em); - goto out_reserve; - } - free_extent_map(em); - - ordered = btrfs_alloc_ordered_extent(inode, start, &file_extent, - 1 << BTRFS_ORDERED_REGULAR); - if (IS_ERR(ordered)) { - unlock_extent(&inode->io_tree, start, - start + cur_alloc_size - 1, &cached); - ret = 
PTR_ERR(ordered); - goto out_drop_extent_cache; - } - - if (btrfs_is_data_reloc_root(root)) { - ret = btrfs_reloc_clone_csums(ordered); - - /* - * Only drop cache here, and process as normal. - * - * We must not allow extent_clear_unlock_delalloc() - * at out_unlock label to free meta of this ordered - * extent, as its meta should be freed by - * btrfs_finish_ordered_io(). - * - * So we must continue until @start is increased to - * skip current ordered extent. - */ - if (ret) - btrfs_drop_extent_map_range(inode, start, - start + cur_alloc_size - 1, - false); - } - btrfs_put_ordered_extent(ordered); - - btrfs_dec_block_group_reservations(fs_info, ins.objectid); - - /* - * We're not doing compressed IO, don't unlock the first page - * (which the caller expects to stay locked), don't clear any - * dirty bits and don't set any writeback bits - * - * Do set the Ordered (Private2) bit so we know this page was - * properly setup for writepage. - */ - page_ops = (keep_locked ? 0 : PAGE_UNLOCK); - page_ops |= PAGE_SET_ORDERED; - - extent_clear_unlock_delalloc(inode, start, start + cur_alloc_size - 1, - locked_folio, &cached, - EXTENT_LOCKED | EXTENT_DELALLOC, - page_ops); - if (num_bytes < cur_alloc_size) - num_bytes = 0; - else - num_bytes -= cur_alloc_size; - alloc_hint = ins.objectid + ins.offset; - start += cur_alloc_size; - cur_alloc_size = 0; - - /* - * btrfs_reloc_clone_csums() error, since start is increased - * extent_clear_unlock_delalloc() at out_unlock label won't - * free metadata of current ordered extent, we're OK to exit. 
- */ - if (ret) - goto out_unlock; } done: if (done_offset) *done_offset = end; return ret; -out_drop_extent_cache: - btrfs_drop_extent_map_range(inode, start, start + cur_alloc_size - 1, false); -out_reserve: - btrfs_dec_block_group_reservations(fs_info, ins.objectid); - btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); out_unlock: /* - * Now, we have three regions to clean up: + * Now we have two regions to clean up: * - * |-------(1)----|---(2)---|-------------(3)----------| - * `- orig_start `- start `- start + cur_alloc_size `- end + * |-----(1)-------|----(2)-----| + * `- start `- cur `- end * - * We process each region below. + * @cur is only updated when a successful extent reservation along + * with extent maps/ordered extents created. */ - clear_bits = EXTENT_LOCKED | EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV; page_ops = PAGE_UNLOCK | PAGE_START_WRITEBACK | PAGE_END_WRITEBACK; /* - * For the range (1). We have already instantiated the ordered extents - * for this region. They are cleaned up by - * btrfs_cleanup_ordered_extents() in e.g, - * btrfs_run_delalloc_range(). EXTENT_LOCKED | EXTENT_DELALLOC are - * already cleared in the above loop. And, EXTENT_DELALLOC_NEW | - * EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV are handled by the cleanup - * function. + * For the range (1), we have already instantiated the ordered extents + * for the region. They will be cleaned up by btrfs_cleanup_ordered_extents(). * - * However, in case of @keep_locked, we still need to unlock the pages - * (except @locked_folio) to ensure all the pages are unlocked. + * Here we only need to unlock the pages, which will be done by + * the extent_clear_unlock_delalloc() with PAGE_UNLOCK. 
*/ - if (keep_locked && orig_start < start) { + if (keep_locked && start < cur) { if (!locked_folio) mapping_set_error(inode->vfs_inode.i_mapping, ret); - extent_clear_unlock_delalloc(inode, orig_start, start - 1, + extent_clear_unlock_delalloc(inode, orig_start, cur - 1, locked_folio, NULL, 0, page_ops); } @@ -1555,39 +1584,19 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * clearing these flags under the extent lock, so lock the rest of the * range and clear everything up. */ - lock_extent(&inode->io_tree, start, end, NULL); + lock_extent(&inode->io_tree, cur, end, NULL); /* - * For the range (2). If we reserved an extent for our delalloc range - * (or a subrange) and failed to create the respective ordered extent, - * then it means that when we reserved the extent we decremented the - * extent's size from the data space_info's bytes_may_use counter and - * incremented the space_info's bytes_reserved counter by the same - * amount. We must make sure extent_clear_unlock_delalloc() does not try - * to decrement again the data space_info's bytes_may_use counter, - * therefore we do not pass it the flag EXTENT_CLEAR_DATA_RESV. - */ - if (cur_alloc_size) { - extent_clear_unlock_delalloc(inode, start, - start + cur_alloc_size - 1, - locked_folio, &cached, clear_bits, - page_ops); - btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL); - } - - /* - * For the range (3). We never touched the region. In addition to the + * For the range (2). We never touched the region. In addition to the * clear_bits above, we add EXTENT_CLEAR_DATA_RESV to release the data * space_info's bytes_may_use counter, reserved in * btrfs_check_data_free_space(). 
*/ - if (start + cur_alloc_size < end) { + if (cur < end) { clear_bits |= EXTENT_CLEAR_DATA_RESV; - extent_clear_unlock_delalloc(inode, start + cur_alloc_size, - end, locked_folio, - &cached, clear_bits, page_ops); - btrfs_qgroup_free_data(inode, NULL, start + cur_alloc_size, - end - start - cur_alloc_size + 1, NULL); + extent_clear_unlock_delalloc(inode, cur, end, locked_folio, + NULL, clear_bits, page_ops); + btrfs_qgroup_free_data(inode, NULL, cur, end + 1 - cur, NULL); } return ret; } From patchwork Sun Nov 24 23:57:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qu Wenruo X-Patchwork-Id: 13884300 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B315E6F06D for ; Sun, 24 Nov 2024 23:57:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732492676; cv=none; b=pwPDKUGbt2N416Ajv3lfY+9mb1p8gxfgxDzzSCqQLOyAtN9Qdg57CQhmY9EM0VrHLDtTb0M5SRxbncv/RndboLNWi8iCCbecUpb6ny5SCB+zrrYk2ahyhllDihKKJ0NKdK1zkzfXTvZOsOFiDEicoXZ3Y8tr8qy4Nj1tu32EoPw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732492676; c=relaxed/simple; bh=Vm4d13l0CvAR50OVqhFblQ1yg8lcF2qUoxyscmkpLlk=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=oqXGKgdZeWJXJSqYlXaV4V6AE7GFXa1DsfJ/nDnwK4AVoc5KcHSkFaujPMaGxaXHwgE890bcQIrOmngQaFBmM2/+ELLVD/bBVLywOFrD4yq/Z3XHrOZYs9raFS0DjLSv1hiTCZtdlEmk9W6ZYynYem+lPV6pjXTCpkv/nMXf408= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=MdulkvLr; dkim=pass (1024-bit key) 
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 4/7] btrfs: use FGP_STABLE to wait for folio writeback
Date: Mon, 25 Nov 2024 10:27:24 +1030
Message-ID: <70c6b105c5aab0336c1c491b0c1158cc6b8e3fd4.1732492421.git.wqu@suse.com>

__filemap_get_folio() provides the flag FGP_STABLE to wait for writeback.
There are two call sites doing __filemap_get_folio() then folio_wait_writeback(): - btrfs_truncate_block() - defrag_prepare_one_folio() We can directly utilize that flag instead of manually calling folio_wait_writeback(). Signed-off-by: Qu Wenruo --- fs/btrfs/defrag.c | 4 +--- fs/btrfs/inode.c | 4 +--- 2 files changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c index 968dae953948..c7fef49bbec8 100644 --- a/fs/btrfs/defrag.c +++ b/fs/btrfs/defrag.c @@ -865,7 +865,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t again: folio = __filemap_get_folio(mapping, index, - FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask); + FGP_LOCK | FGP_ACCESSED | FGP_CREAT | FGP_STABLE, mask); if (IS_ERR(folio)) return folio; @@ -1222,8 +1222,6 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len, goto free_folios; } } - for (i = 0; i < nr_pages; i++) - folio_wait_writeback(folios[i]); /* Lock the pages range */ lock_extent(&inode->io_tree, start_index << PAGE_SHIFT, diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index f9b1d78eb7ff..09dbf5f22304 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -4775,7 +4775,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len, } again: folio = __filemap_get_folio(mapping, index, - FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask); + FGP_LOCK | FGP_ACCESSED | FGP_CREAT | FGP_STABLE, mask); if (IS_ERR(folio)) { btrfs_delalloc_release_space(inode, data_reserved, block_start, blocksize, true); @@ -4808,8 +4808,6 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len, if (ret < 0) goto out_unlock; - folio_wait_writeback(folio); - lock_extent(io_tree, block_start, block_end, &cached_state); ordered = btrfs_lookup_ordered_extent(inode, block_start); From patchwork Sun Nov 24 23:57:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qu Wenruo 
X-Patchwork-Id: 13884301
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 5/7] btrfs: make btrfs_do_readpage() to do
block-by-block read
Date: Mon, 25 Nov 2024 10:27:25 +1030
Message-ID: <805cd02d604d20b11fc3daedac4774171e0de068.1732492421.git.wqu@suse.com>

Currently, if a btrfs filesystem has a sector size smaller than the page size, btrfs_do_readpage() handles the range extent by extent. This is good for performance, as it avoids looking up the same extent map again and again (although __get_extent_map() already does an extra cached em check).

This is a valid optimization, but it assumes there is no partially uptodate range in the page. Meanwhile, there is an incoming feature that requires btrfs to skip the full page read if a buffered write range covers a full sector but not a full page. In that case we can have a page that is partially uptodate, and the current per-extent lookup cannot handle such a case.
So here we change btrfs_do_readpage() to do a block-by-block read. This simplifies the following things:

- Remove the need for the @iosize variable, because we just use sectorsize as our increment.

- Remove @pg_offset, and calculate it inside the loop when needed. It's just offset_in_folio().

- Use a for() loop instead of a while() loop.

This will slightly reduce the read performance for sector size < page size cases, but for the future where we can skip a full page read in a lot of cases, it should still be worth it. For sector size == page size, this brings no performance change.

Signed-off-by: Qu Wenruo --- fs/btrfs/extent_io.c | 37 ++++++++++++------------------- 1 file changed, 12 insertions(+), 25 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 1c2246d36672..5fa200632472 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -961,9 +961,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, u64 block_start; struct extent_map *em; int ret = 0; - size_t pg_offset = 0; - size_t iosize; - size_t blocksize = fs_info->sectorsize; + const size_t blocksize = fs_info->sectorsize; ret = set_folio_extent_mapped(folio); if (ret < 0) { @@ -974,23 +972,22 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, if (folio->index == last_byte >> folio_shift(folio)) { size_t zero_offset = offset_in_folio(folio, last_byte); - if (zero_offset) { - iosize = folio_size(folio) - zero_offset; - folio_zero_range(folio, zero_offset, iosize); - } + if (zero_offset) + folio_zero_range(folio, zero_offset, + folio_size(folio) - zero_offset); } bio_ctrl->end_io_func = end_bbio_data_read; begin_folio_read(fs_info, folio); - while (cur <= end) { + for (cur = start; cur <= end; cur += blocksize) { enum btrfs_compression_type compress_type = BTRFS_COMPRESS_NONE; + unsigned long pg_offset = offset_in_folio(folio, cur); bool force_bio_submit = false; u64 disk_bytenr; ASSERT(IS_ALIGNED(cur,
fs_info->sectorsize)); if (cur >= last_byte) { - iosize = folio_size(folio) - pg_offset; - folio_zero_range(folio, pg_offset, iosize); - end_folio_read(folio, true, cur, iosize); + folio_zero_range(folio, pg_offset, end - cur + 1); + end_folio_read(folio, true, cur, end - cur + 1); break; } em = __get_extent_map(inode, folio, cur, end - cur + 1, @@ -1005,8 +1002,6 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, compress_type = extent_map_compression(em); - iosize = min(extent_map_end(em) - cur, end - cur + 1); - iosize = ALIGN(iosize, blocksize); if (compress_type != BTRFS_COMPRESS_NONE) disk_bytenr = em->disk_bytenr; else @@ -1062,18 +1057,13 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, /* we've found a hole, just zero and go on */ if (block_start == EXTENT_MAP_HOLE) { - folio_zero_range(folio, pg_offset, iosize); - - end_folio_read(folio, true, cur, iosize); - cur = cur + iosize; - pg_offset += iosize; + folio_zero_range(folio, pg_offset, blocksize); + end_folio_read(folio, true, cur, blocksize); continue; } /* the get_extent function already copied into the folio */ if (block_start == EXTENT_MAP_INLINE) { - end_folio_read(folio, true, cur, iosize); - cur = cur + iosize; - pg_offset += iosize; + end_folio_read(folio, true, cur, blocksize); continue; } @@ -1084,12 +1074,9 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, if (force_bio_submit) submit_one_bio(bio_ctrl); - submit_extent_folio(bio_ctrl, disk_bytenr, folio, iosize, + submit_extent_folio(bio_ctrl, disk_bytenr, folio, blocksize, pg_offset); - cur = cur + iosize; - pg_offset += iosize; } - return 0; } From patchwork Sun Nov 24 23:57:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qu Wenruo X-Patchwork-Id: 13884302 Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131]) (using TLSv1.2 with cipher 
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 6/7] btrfs: avoid deadlock when reading a partial uptodate folio
Date: Mon, 25 Nov 2024 10:27:26 +1030
Message-ID: <6abae464d646515175263cff340f2d3815c46d7f.1732492421.git.wqu@suse.com>
[BUG]
This is for a deadlock that is only possible with the out-of-tree patch "btrfs: allow buffered write to skip full page if it's sector aligned" applied. For now it is impossible to hit the deadlock; the reason is explained in the [CAUSE] section.
If the sector size is smaller than the page size, and we allow btrfs to
skip reading the full page because the buffered write range is sector
aligned, we can hit a hang when running generic/095:

 __switch_to+0xf8/0x168
 __schedule+0x328/0x8a8
 schedule+0x54/0x140
 io_schedule+0x44/0x68
 folio_wait_bit_common+0x198/0x3f8
 __folio_lock+0x24/0x40
 extent_write_cache_pages+0x2e0/0x4c0 [btrfs]
 btrfs_writepages+0x94/0x158 [btrfs]
 do_writepages+0x74/0x190
 filemap_fdatawrite_wbc+0x88/0xc8
 __filemap_fdatawrite_range+0x6c/0xa8
 filemap_fdatawrite_range+0x1c/0x30
 btrfs_start_ordered_extent+0x264/0x2e0 [btrfs]
 btrfs_lock_and_flush_ordered_range+0x8c/0x160 [btrfs]
 __get_extent_map+0xa0/0x220 [btrfs]
 btrfs_do_readpage+0x1bc/0x5d8 [btrfs]
 btrfs_read_folio+0x50/0xa0 [btrfs]
 filemap_read_folio+0x54/0x110
 filemap_update_page+0x2e0/0x3b8
 filemap_get_pages+0x228/0x4d8
 filemap_read+0x11c/0x3b8
 btrfs_file_read_iter+0x74/0x90 [btrfs]
 new_sync_read+0xd0/0x1d0
 vfs_read+0x1a0/0x1f0

Here is the minimal fio reproducer, extracted from that test case, to
trigger the deadlock:

 [global]
 bs=8k
 iodepth=1
 randrepeat=1
 size=256k
 directory=$mnt
 numjobs=1

 [job1]
 ioengine=sync
 bs=512
 direct=1
 rw=randread
 filename=file1

 [job2]
 ioengine=libaio
 rw=randwrite
 direct=1
 filename=file1

 [job3]
 ioengine=posixaio
 rw=randwrite
 filename=file1

[CAUSE]
The above call trace shows that, during the folio read, a writeback is
triggered on the same folio. Since btrfs_do_readpage() runs with the
folio locked, the writeback will never be able to lock the folio; the
task is effectively waiting on itself, causing the deadlock.
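The self-deadlock above is the classic non-recursive lock pattern: the reader already holds the folio lock, and the writeback it triggers tries to take the same lock again. A userspace sketch of just that pattern, using a plain (non-recursive) pthread mutex as a stand-in for the folio lock — the helper name is invented and this is only an analogy, not btrfs code:

```c
#include <errno.h>
#include <pthread.h>

/* Stand-in for the folio lock: a default pthread mutex is non-recursive. */
static pthread_mutex_t folio_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Models the read path: take the folio lock (as btrfs_read_folio() does),
 * then have the flush it triggers (extent_write_cache_pages()) try to lock
 * the same folio.  In the kernel the second lock is a blocking folio_lock()
 * and the task hangs forever; here we use trylock so the sketch terminates
 * and the would-be deadlock shows up as EBUSY.
 */
static int deadlock_would_occur(void)
{
	int ret;

	pthread_mutex_lock(&folio_lock);		/* reader holds the lock */
	ret = pthread_mutex_trylock(&folio_lock);	/* "writeback" re-lock */
	if (ret == 0)
		pthread_mutex_unlock(&folio_lock);	/* not reached here */
	pthread_mutex_unlock(&folio_lock);

	return ret == EBUSY;
}
```

With a blocking lock instead of trylock, this function would never return, which is exactly the hang captured in the call trace above.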
The root cause is a little complex. The system uses a 64K page size with
a 4K sector size:

1) The folio has its range [48K, 64K) marked dirty by a buffered write

   0        16K      32K      48K        64K
   |                          |///////////|
                              \- sector Uptodate|Dirty

2) Writeback finished for [48K, 64K), but the ordered extent (OE) is
   not yet finished

   0        16K      32K      48K        64K
   |                          |///////////|
                              \- sector Uptodate
                                 extent map PINNED
                                 OE still here

3) The folio is released from the page cache

   This can be triggered by direct IO through the following call chain:

   iomap_dio_rw()
   \- kiocb_invalidate_pages()
      \- filemap_invalidate_pages()
         \- invalidate_inode_pages2_range()
            \- invalidate_complete_folio2()
               \- filemap_release_folio()
                  \- btrfs_release_folio()
                     \- __btrfs_release_folio()
                        \- try_release_extent_mapping()

   Since there is no extent state with the EXTENT_LOCKED flag in the
   folio range, btrfs allows the folio to be released. Now there is no
   folio->private to record which block is uptodate, but the extent map
   and the OE are still here.

   0        16K      32K      48K        64K
   |                          |///////////|
                              \- extent map PINNED
                                 OE still here

4) A buffered write dirtied the range [0, 16K)

   Since it's sector aligned, btrfs didn't read the full folio from
   disk.

   0        16K      32K      48K        64K
   |//////////|               |///////////|
   \- sector Uptodate|Dirty   \- extent map PINNED
                                 OE still here

5) A read on the folio is triggered

   For the range [0, 16K), since it's already uptodate, btrfs skips it.
   For the range [16K, 48K), btrfs submits the read from disk.

   The problem comes with the range [48K, 64K), where the following
   call chain happens:

   btrfs_do_readpage()
   \- __get_extent_map()
      \- btrfs_lock_and_flush_ordered_range()
         \- btrfs_start_ordered_extent()
            \- filemap_fdatawrite_range()
               \- btrfs_writepages()
                  \- extent_write_cache_pages()
                     \- folio_lock()

   Since the folio indeed has dirty sectors in the range [0, 16K), that
   range will be written back. But the folio is already locked by the
   folio read, so the writeback will never be able to lock it, leading
   to the deadlock.
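The state lost in step 3) can be pictured as a tiny per-folio bitmap. The sketch below is a deliberately simplified, hypothetical model of the folio->private subpage bookkeeping (struct and helper names are made up; the real btrfs subpage structure tracks much more than this). The point it illustrates: once the folio is released, there is no record left of which blocks were uptodate.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SZ    (64 * 1024)
#define SECTORSIZE (4 * 1024)
#define NR_SECTORS (PAGE_SZ / SECTORSIZE)	/* 16 sectors per folio */

/* Hypothetical stand-in for the subpage state kept in folio->private. */
struct subpage_state {
	uint32_t uptodate_bitmap;	/* one bit per sector */
	int has_private;		/* folio->private still attached? */
};

/* Mark sectors covering [start, start + len) uptodate (e.g. after write). */
static void mark_uptodate(struct subpage_state *s, size_t start, size_t len)
{
	for (size_t cur = start; cur < start + len; cur += SECTORSIZE)
		s->uptodate_bitmap |= 1u << (cur / SECTORSIZE);
}

/* Step 3: releasing the folio drops folio->private and the bitmap with it. */
static void release_folio(struct subpage_state *s)
{
	s->uptodate_bitmap = 0;
	s->has_private = 0;
}

/* After release there is no record, so nothing can be assumed uptodate. */
static int range_uptodate(const struct subpage_state *s, size_t start, size_t len)
{
	if (!s->has_private)
		return 0;
	for (size_t cur = start; cur < start + len; cur += SECTORSIZE)
		if (!(s->uptodate_bitmap & (1u << (cur / SECTORSIZE))))
			return 0;
	return 1;
}
```

In the sequence above, step 2) leaves [48K, 64K) uptodate in this bitmap, and step 3) wipes it while the extent map and OE survive, which is why the later read in step 5) must consult the ordered extent at all.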
This sequence can only happen if all the following conditions are met:

- The sector size is smaller than the page size.
  Otherwise we won't have mixed dirty blocks in the same folio we're
  reading.

- We allow the buffered write to skip the folio read if it's sector
  aligned.
  This is done by the incoming patch "btrfs: allow buffered write to
  skip full page if it's sector aligned". The ultimate goal of that
  patch is to reduce unnecessary reads for sector size < page size
  cases, and to pass generic/563.
  Otherwise the folio would be read from disk during the buffered
  write, before being marked dirty, and the deadlock would not trigger.

[FIX]
Break step 5) of the above case by passing an optional @locked_folio
into btrfs_start_ordered_extent() and
btrfs_lock_and_flush_ordered_range(). If we get such a locked folio,
skip the writeback for the ranges of that folio.

Here we also add extra asserts to make sure the target range is not
dirty; otherwise the ordered extent we wait for will never be able to
finish, since part of it would never be submitted.

So far only the call site inside __get_extent_map() passes the new
parameter.
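The writeback-skip in the fix is plain range arithmetic: flush the parts of the ordered extent range before and after the locked folio, and never the folio itself. A self-contained userspace sketch of just that arithmetic (the helper name and the range struct are invented; in the patch the two conditions live inline in btrfs_start_ordered_extent()):

```c
#include <stdint.h>

/* Inclusive byte range; start > end means empty. */
struct range {
	uint64_t start;
	uint64_t end;
};

/*
 * Split the ordered extent range [start, end] into the pieces that may be
 * flushed when the folio at [folio_pos, folio_pos + folio_size) is already
 * locked by the caller and must be skipped.
 */
static void split_flush_range(uint64_t start, uint64_t end,
			      uint64_t folio_pos, uint64_t folio_size,
			      struct range *before, struct range *after)
{
	/* Piece before the locked folio, mirrors "start < folio_pos(folio)". */
	if (start < folio_pos) {
		before->start = start;
		before->end = folio_pos - 1;
	} else {
		before->start = 1;
		before->end = 0;	/* empty */
	}

	/* Piece after, mirrors "end + 1 > folio_pos + folio_size". */
	if (end + 1 > folio_pos + folio_size) {
		after->start = folio_pos + folio_size;
		after->end = end;
	} else {
		after->start = 1;
		after->end = 0;		/* empty */
	}
}
```

When the ordered extent lies entirely within the locked folio, both pieces come out empty and no writeback is started at all, which is exactly the case the new asserts guard: such a range must already be clean.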
Signed-off-by: Qu Wenruo
---
 fs/btrfs/defrag.c       |  2 +-
 fs/btrfs/direct-io.c    |  2 +-
 fs/btrfs/extent_io.c    |  3 +-
 fs/btrfs/file.c         |  8 ++---
 fs/btrfs/inode.c        |  6 ++--
 fs/btrfs/ordered-data.c | 67 ++++++++++++++++++++++++++++++++++++-----
 fs/btrfs/ordered-data.h |  8 +++--
 7 files changed, 75 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index c7fef49bbec8..bffdbb8e37a9 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -902,7 +902,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 			break;
 		folio_unlock(folio);
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, NULL);
 		btrfs_put_ordered_extent(ordered);
 		folio_lock(folio);
 		/*
diff --git a/fs/btrfs/direct-io.c b/fs/btrfs/direct-io.c
index a7c3e221378d..2fb02aa19be0 100644
--- a/fs/btrfs/direct-io.c
+++ b/fs/btrfs/direct-io.c
@@ -103,7 +103,7 @@ static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend,
 		 */
 		if (writing ||
 		    test_bit(BTRFS_ORDERED_DIRECT, &ordered->flags))
-			btrfs_start_ordered_extent(ordered);
+			btrfs_start_ordered_extent(ordered, NULL);
 		else
 			ret = nowait ? -EAGAIN : -ENOTBLK;
 		btrfs_put_ordered_extent(ordered);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5fa200632472..624abe04401c 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -930,7 +930,8 @@ static struct extent_map *__get_extent_map(struct inode *inode,
 		*em_cached = NULL;
 	}
 
-	btrfs_lock_and_flush_ordered_range(BTRFS_I(inode), start, start + len - 1, &cached_state);
+	btrfs_lock_and_flush_ordered_range(BTRFS_I(inode), folio, start,
+					   start + len - 1, &cached_state);
 	em = btrfs_get_extent(BTRFS_I(inode), folio, start, len);
 	if (!IS_ERR(em)) {
 		BUG_ON(*em_cached);
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index fbb753300071..e2c6165eba21 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -979,7 +979,7 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct folio *folio,
 			      cached_state);
 		folio_unlock(folio);
 		folio_put(folio);
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, NULL);
 		btrfs_put_ordered_extent(ordered);
 		return -EAGAIN;
 	}
@@ -1047,8 +1047,8 @@ int btrfs_check_nocow_lock(struct btrfs_inode *inode, loff_t pos,
 			return -EAGAIN;
 		}
 	} else {
-		btrfs_lock_and_flush_ordered_range(inode, lockstart, lockend,
-						   &cached_state);
+		btrfs_lock_and_flush_ordered_range(inode, NULL, lockstart,
+						   lockend, &cached_state);
 	}
 	ret = can_nocow_extent(&inode->vfs_inode, lockstart, &num_bytes,
 			       NULL, nowait, false);
@@ -1884,7 +1884,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 		unlock_extent(io_tree, page_start, page_end, &cached_state);
 		folio_unlock(folio);
 		up_read(&BTRFS_I(inode)->i_mmap_lock);
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, NULL);
 		btrfs_put_ordered_extent(ordered);
 		goto again;
 	}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 09dbf5f22304..dbc63b5cb439 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2771,7 +2771,7 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
 		unlock_extent(&inode->io_tree, page_start,
 			      page_end, &cached_state);
 		folio_unlock(folio);
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, NULL);
 		btrfs_put_ordered_extent(ordered);
 		goto again;
 	}
@@ -4815,7 +4815,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 		unlock_extent(io_tree, block_start, block_end, &cached_state);
 		folio_unlock(folio);
 		folio_put(folio);
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, NULL);
 		btrfs_put_ordered_extent(ordered);
 		goto again;
 	}
@@ -4950,7 +4950,7 @@ int btrfs_cont_expand(struct btrfs_inode *inode, loff_t oldsize, loff_t size)
 	if (size <= hole_start)
 		return 0;
 
-	btrfs_lock_and_flush_ordered_range(inode, hole_start, block_end - 1,
+	btrfs_lock_and_flush_ordered_range(inode, NULL, hole_start, block_end - 1,
 					   &cached_state);
 	cur_offset = hole_start;
 	while (1) {
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 2104d60c2161..35b2c441ee6f 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -729,7 +729,7 @@ static void btrfs_run_ordered_extent_work(struct btrfs_work *work)
 	struct btrfs_ordered_extent *ordered;
 
 	ordered = container_of(work, struct btrfs_ordered_extent, flush_work);
-	btrfs_start_ordered_extent(ordered);
+	btrfs_start_ordered_extent(ordered, NULL);
 	complete(&ordered->completion);
 }
 
@@ -845,12 +845,14 @@ void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
 * Wait on page writeback for all the pages in the extent and the IO completion
 * code to insert metadata into the btree corresponding to the extent.
 */
-void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
+void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry,
+				struct folio *locked_folio)
 {
 	u64 start = entry->file_offset;
 	u64 end = start + entry->num_bytes - 1;
 	struct btrfs_inode *inode = entry->inode;
 	bool freespace_inode;
+	bool skip_writeback = false;
 
 	trace_btrfs_ordered_extent_start(inode, entry);
 
@@ -860,13 +862,59 @@ void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
 	 */
 	freespace_inode = btrfs_is_free_space_inode(inode);
 
+	/*
+	 * The locked folio covers the ordered extent range and the full
+	 * folio is dirty.
+	 * We can not trigger writeback on it, as we will try to lock
+	 * the same folio we already hold.
+	 *
+	 * This only happens for sector size < page size case, and even
+	 * that happens we're still safe because this can only happen
+	 * when the range is submitted and finished, but OE is not yet
+	 * finished.
+	 */
+	if (locked_folio) {
+		const u64 skip_start = max_t(u64, folio_pos(locked_folio), start);
+		const u64 skip_end = min_t(u64,
+				folio_pos(locked_folio) + folio_size(locked_folio),
+				end + 1) - 1;
+
+		ASSERT(folio_test_locked(locked_folio));
+
+		/* The folio should intersect with the OE range. */
+		ASSERT(folio_pos(locked_folio) <= end ||
+		       folio_pos(locked_folio) + folio_size(locked_folio) > start);
+
+		/*
+		 * The range must not be dirty.
+		 *
+		 * Since we will skip writeback for the folio, if the involved range
+		 * is dirty the range will never be submitted, thus the ordered
+		 * extent we are going to wait will never finish, cause another deadlock.
+		 */
+		btrfs_folio_assert_not_dirty(inode->root->fs_info, locked_folio,
+					     skip_start, skip_end + 1 - skip_start);
+		skip_writeback = true;
+	}
 	/*
 	 * pages in the range can be dirty, clean or writeback.  We
 	 * start IO on any dirty ones so the wait doesn't stall waiting
 	 * for the flusher thread to find them
 	 */
-	if (!test_bit(BTRFS_ORDERED_DIRECT, &entry->flags))
-		filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start, end);
+	if (!test_bit(BTRFS_ORDERED_DIRECT, &entry->flags)) {
+		if (!skip_writeback) {
+			filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start, end);
+		} else {
+			/* Need to skip the locked folio range. */
+			if (start < folio_pos(locked_folio))
+				filemap_fdatawrite_range(inode->vfs_inode.i_mapping,
						start, folio_pos(locked_folio) - 1);
+			if (end + 1 > folio_pos(locked_folio) + folio_size(locked_folio))
+				filemap_fdatawrite_range(inode->vfs_inode.i_mapping,
						folio_pos(locked_folio) + folio_size(locked_folio),
						end);
+		}
+	}
 
 	if (!freespace_inode)
 		btrfs_might_wait_for_event(inode->root->fs_info, btrfs_ordered_extent);
@@ -921,7 +969,7 @@ int btrfs_wait_ordered_range(struct btrfs_inode *inode, u64 start, u64 len)
 			btrfs_put_ordered_extent(ordered);
 			break;
 		}
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, NULL);
 		end = ordered->file_offset;
 		/*
 		 * If the ordered extent had an error save the error but don't
@@ -1141,6 +1189,8 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 * @inode:        Inode whose ordered tree is to be searched
 * @start:        Beginning of range to flush
 * @end:          Last byte of range to lock
+* @locked_folio: If passed, will not start writeback of this folio, to avoid
+*                locking the same folio already locked by the caller.
 * @cached_state: If passed, will return the extent state responsible for the
 *                locked range. It's the caller's responsibility to free the
 *                cached state.
@@ -1148,8 +1198,9 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 * Always return with the given range locked, ensuring after it's called no
 * order extent can be pending.
 */
-void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
-					u64 end,
+void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode,
+					struct folio *locked_folio,
+					u64 start, u64 end,
 					struct extent_state **cached_state)
 {
 	struct btrfs_ordered_extent *ordered;
@@ -1174,7 +1225,7 @@ void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
 			break;
 		}
 		unlock_extent(&inode->io_tree, start, end, cachedp);
-		btrfs_start_ordered_extent(ordered);
+		btrfs_start_ordered_extent(ordered, locked_folio);
 		btrfs_put_ordered_extent(ordered);
 	}
 }
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 4e152736d06c..a4bb24572c73 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -191,7 +191,8 @@ void btrfs_add_ordered_sum(struct btrfs_ordered_extent *entry,
 			   struct btrfs_ordered_sum *sum);
 struct btrfs_ordered_extent *btrfs_lookup_ordered_extent(struct btrfs_inode *inode,
 							 u64 file_offset);
-void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry);
+void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry,
+				struct folio *locked_folio);
 int btrfs_wait_ordered_range(struct btrfs_inode *inode, u64 start, u64 len);
 struct btrfs_ordered_extent *
 btrfs_lookup_first_ordered_extent(struct btrfs_inode *inode, u64 file_offset);
@@ -207,8 +208,9 @@ u64 btrfs_wait_ordered_extents(struct btrfs_root *root, u64 nr,
 			       const struct btrfs_block_group *bg);
 void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
 			      const struct btrfs_block_group *bg);
-void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
-					u64 end,
+void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode,
+					struct folio *locked_folio,
+					u64 start, u64 end,
 					struct extent_state **cached_state);
 bool btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end,
 				  struct extent_state **cached_state);

From patchwork Sun Nov 24 23:57:27 2024
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 7/7] btrfs: allow buffered write to skip full page if it's sector aligned
Date: Mon, 25 Nov 2024 10:27:27 +1030
Message-ID: <2cf9783e7f152681164caa6abbc9535bfff9c6f8.1732492421.git.wqu@suse.com>
X-Mailer: git-send-email 2.47.0

[BUG]
Since the support of sector size < page size for btrfs, test case
generic/563 fails with a 4K sector size and 64K page size:

    --- tests/generic/563.out	2024-04-25 18:13:45.178550333 +0930
    +++ /home/adam/xfstests-dev/results//generic/563.out.bad	2024-09-30 09:09:16.155312379 +0930
    @@ -3,7 +3,8 @@
     read is in range
     write is in range
     write -> read/write
    -read is in range
    +read has value of 8388608
    +read is NOT in range -33792 .. 33792
     write is in range
    ...

[CAUSE]
The test case creates an 8MiB file, then buffered writes into the 8MiB
range using a 4K block size, overwriting the whole file.

On 4K page sized systems, since the write covers the full sector and
page, btrfs does not bother reading the page, just like what XFS and
EXT4 do.

But on 64K page sized systems, although the write is sector aligned,
it's not page aligned, so btrfs still goes through the full page
alignment check, and reads the full page out. This causes extra data
reads and fails the test case.
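The distinction above is simple alignment arithmetic: with a 4K sector and a 64K page, a 4K-aligned write boundary passes a sector-alignment check but fails a page-alignment check. A quick userspace sketch of that check (the macro is re-defined here for a standalone build, and the helper name is invented; the real decision lives in prepare_uptodate_folio()):

```c
#include <stdint.h>

/* Userspace copy of the kernel's IS_ALIGNED(); a must be a power of two. */
#define IS_ALIGNED(x, a)	(((x) & ((uint64_t)(a) - 1)) == 0)

#define SECTORSIZE	(4 * 1024)	/* 4K sectors */
#define PAGE_SZ		(64 * 1024)	/* 64K pages */

/*
 * Can the buffered write [pos, pos + len) skip reading the folio first?
 * Before the fix, btrfs checked against the page size; the fix checks
 * against the sector size instead.
 */
static int can_skip_read(uint64_t pos, uint64_t len, uint64_t align)
{
	return IS_ALIGNED(pos, align) && IS_ALIGNED(pos + len, align);
}
```

A 4K write at offset 8K is sector aligned but not 64K-page aligned, so the old page-size check forced a full page read on 64K page systems even though none of the read data was needed.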
[FIX]
To skip the full page read, we need the following modifications:

- Do not trigger the full page read as long as the buffered write is
  sector aligned.
  This is pretty simple: modify the check inside
  prepare_uptodate_page().

- Skip already uptodate sectors during the full page read.
  Otherwise we can end up with the following data corruption:

  0       32K      64K
  |///////|        |

  Where the file range [0, 32K) is dirtied by a buffered write and the
  remaining range [32K, 64K) is not.

  When reading the full page, since [0, 32K) is only dirtied but not
  yet written back, there is no data extent map for it, only a hole
  covering [0, 64K). If we continue reading the full page range
  [0, 64K), the dirtied range will be filled with zeros (since there is
  only a hole covering the whole range). This causes the dirtied range
  to get lost.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/extent_io.c | 4 ++++
 fs/btrfs/file.c      | 5 +++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 624abe04401c..806315e82db7 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -991,6 +991,10 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
 			end_folio_read(folio, true, cur, end - cur + 1);
 			break;
 		}
+		if (btrfs_folio_test_uptodate(fs_info, folio, cur, blocksize)) {
+			end_folio_read(folio, true, cur, blocksize);
+			continue;
+		}
 		em = __get_extent_map(inode, folio, cur, end - cur + 1,
 				      em_cached);
 		if (IS_ERR(em)) {
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index e2c6165eba21..6f485b8bda66 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -841,14 +841,15 @@ static int prepare_uptodate_folio(struct inode *inode, struct folio *folio, u64
 {
 	u64 clamp_start = max_t(u64, pos, folio_pos(folio));
 	u64 clamp_end = min_t(u64, pos + len, folio_pos(folio) + folio_size(folio));
+	const u32 sectorsize = inode_to_fs_info(inode)->sectorsize;
 	int ret = 0;
 
 	if (folio_test_uptodate(folio))
 		return 0;
 
 	if (!force_uptodate &&
-	    IS_ALIGNED(clamp_start, PAGE_SIZE) &&
-	    IS_ALIGNED(clamp_end, PAGE_SIZE))
+	    IS_ALIGNED(clamp_start, sectorsize) &&
+	    IS_ALIGNED(clamp_end, sectorsize))
 		return 0;
 
 	ret = btrfs_read_folio(NULL, folio);