From patchwork Tue May 23 08:13:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251831 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34558C77B75 for ; Tue, 23 May 2023 08:15:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236076AbjEWIPu (ORCPT ); Tue, 23 May 2023 04:15:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236270AbjEWIPU (ORCPT ); Tue, 23 May 2023 04:15:20 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 57F19129 for ; Tue, 23 May 2023 01:13:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=C4yUvFKlHSZ5zA3mUqo6HhAQKxoRImKgFZxgmYeGMCg=; b=39UYIIRl4aOmCUPGq5XcnqBAk/ SolilyQNlvyTGgJWcIuC1jsL4cYUE+iQL6T+0x3X4C0YTS3lp2YBylAK9EEeFBB/qgrzg2Lm+OUMY ZsMOysIjSJutSvN3HWQ2IGOddhytLDmlaFmXpuhjyofRFggcDOe7fEyCHcTlaZwnzzgMCKsI1XTjB ah4HgBs6SaTYco9xIIje8ygfVCZl6tZkgpRJtq2NTzVD/5iURlC5dj/KnqhKL7dWVyqnrTVrY8xaW kltAg+6MEsJJgjmJHKtpeDJAkqwUMPDmCWqBVUgphsDgtC6vp1zinjgDWsxAj+PgLCq7EG7VWbsUN w9qt1RXQ==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N95-009OSq-0E; Tue, 23 May 2023 08:13:27 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 01/16] btrfs: fix range_end calculation in extent_write_locked_range Date: Tue, 23 May 2023 10:13:07 +0200 Message-Id: <20230523081322.331337-2-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org The range_end field in struct writeback_control is inclusive, just like the end parameter passed to extent_write_locked_range. 
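Illustrative sketch (not part of the patch; "start" and SZ_4K are stand-ins): with an inclusive range_end, writing back the single sector [start, start + SZ_4K - 1] fills in writeback_control as below, and using end + 1 instead would extend the range one byte into the following page:

	struct writeback_control wbc = {
		.sync_mode   = WB_SYNC_ALL,
		.range_start = start,
		.range_end   = start + SZ_4K - 1,	/* last byte of the range, inclusive */
	};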
Fixes: 771ed689d2cd ("Btrfs: Optimize compressed writeback and reads") Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 5999ac3ee601db..c1b0ca94be34e1 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2309,7 +2309,7 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) struct writeback_control wbc_writepages = { .sync_mode = WB_SYNC_ALL, .range_start = start, - .range_end = end + 1, + .range_end = end, .no_cgroup_owner = 1, }; struct btrfs_bio_ctrl bio_ctrl = { From patchwork Tue May 23 08:13:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251833 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C110C7EE23 for ; Tue, 23 May 2023 08:15:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235096AbjEWIPz (ORCPT ); Tue, 23 May 2023 04:15:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38772 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235288AbjEWIPX (ORCPT ); Tue, 23 May 2023 04:15:23 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 808E5E45 for ; Tue, 23 May 2023 01:13:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=JXJubYclaxIEtCd1baDv6x3BrOt9KI7/Qw4mBSHvllU=; b=ue264ARyzASJJTGw/GnJjcd3xZ x88DbDOdCBuoTfK1u7m0ixuHrL4AH1Hk/mFcNrGlb2xquuwGq0H2H06fbMYrEZBWjIXxWavV8wgXI VbaFYMUSwIjnb/hNkZjdnu9VBgT5vKRrMgpMN0QdLbSbkAmi5KEsO7LjJ68a2eC0ep6D1sbjQpzUF IjvmGT/Yh8tFpA4q8W+ahZ6nFrS5jhCoIJx/GybRpjOKmFjd2g82cc/Zuxs4N/O1ufGU5AVRpmk2n qxmAZ1VUEAxGMVdwd7blYp/GcuA77U5Tze7yRzpoFcQb0DAnoB21wax+TT1p2bWu2AZgaBL2uEzC2 RIYlJJSg==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N98-009OTA-02; Tue, 23 May 2023 08:13:30 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 02/16] btrfs: factor out a btrfs_verify_page helper Date: Tue, 23 May 2023 10:13:08 +0200 Message-Id: <20230523081322.331337-3-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Split all the conditionals for the fsverity calls in end_page_read into a btrfs_verify_page helper to keep the code readable and make additional refactoring easier.
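Spelled out one condition per line, the consolidated check reads roughly as follows (a sketch of the helper in the hunk below, using bool and added comments for clarity):

	static bool btrfs_verify_page(struct page *page, u64 start)
	{
		struct inode *inode = page->mapping->host;

		if (!fsverity_active(inode) ||		/* no fs-verity data on this inode */
		    PageError(page) ||			/* an earlier failure is already recorded */
		    PageUptodate(page) ||		/* already read and verified */
		    start >= i_size_read(inode))	/* beyond EOF, nothing to verify */
			return true;
		return fsverity_verify_page(page);	/* check the data against the Merkle tree */
	}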
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index c1b0ca94be34e1..fc48888742debd 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -481,6 +481,15 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end, start, end, page_ops, NULL); } +static int btrfs_verify_page(struct page *page, u64 start) +{ + if (!fsverity_active(page->mapping->host) || + PageError(page) || PageUptodate(page) || + start >= i_size_read(page->mapping->host)) + return true; + return fsverity_verify_page(page); +} + static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) { struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb); @@ -489,11 +498,7 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) start + len <= page_offset(page) + PAGE_SIZE); if (uptodate) { - if (fsverity_active(page->mapping->host) && - !PageError(page) && - !PageUptodate(page) && - start < i_size_read(page->mapping->host) && - !fsverity_verify_page(page)) { + if (!btrfs_verify_page(page, start)) { btrfs_page_set_error(fs_info, page, start, len); } else { btrfs_page_set_uptodate(fs_info, page, start, len); From patchwork Tue May 23 08:13:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251832 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68BDBC77B75 for ; Tue, 23 May 2023 08:15:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235598AbjEWIP4 (ORCPT ); Tue, 23 May 2023 04:15:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235398AbjEWIPY (ORCPT ); Tue, 23 May 2023 04:15:24 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D533EE66 for ; Tue, 23 May 2023 01:13:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=IUmTL5+pWvpwNn1uvL4NqYHkpeBI72qFbIWfkPv20CY=; b=yEmp79myg84O6IOBUO1e6E9u/7 CryiiXpkHEt6z/iKeFrHAELCqHGvBBnWpXf8eMGkl9qQL5qauceISW0t9+0wjNkxB5343c5UECHeZ 5Uapzxb9mGfYdn+V8J+YO14y5gu17GlDKns6M/DLyweijMmSeTRNAIvAqLaromb7dNb/c/GwcioTb li6OmVgXnRXhOcLZyEQsgeXF4fK48VbDH20Dw59sxOTKwg1VrNNh/ANp2NBo8Tlkxh0O2tJBDvh1J YHBlPMt6fEYkgDyZrNe6hwHPsVc8ylaxKDm/Y3uYLz2aKWil4TDuQH2BZVXDPD7X1xCVy5bAjYycR EVlnL6lA==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9A-009OTS-22; Tue, 23 May 2023 08:13:33 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 03/16] btrfs: unify fsverify vs other read error handling in end_page_read Date: Tue, 23 May 2023 10:13:09 +0200 Message-Id: <20230523081322.331337-4-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: 
<20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Don't special case the fsverify error handling and clear the uptodate bit for them as well like other readpage implementations (iomap, buffer, mpage) do. Fixes: 146054090b08 ("btrfs: initial fsverity support") Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index fc48888742debd..4297478a7a625d 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -497,12 +497,8 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) ASSERT(page_offset(page) <= start && start + len <= page_offset(page) + PAGE_SIZE); - if (uptodate) { - if (!btrfs_verify_page(page, start)) { - btrfs_page_set_error(fs_info, page, start, len); - } else { - btrfs_page_set_uptodate(fs_info, page, start, len); - } + if (uptodate && btrfs_verify_page(page, start)) { + btrfs_page_set_uptodate(fs_info, page, start, len); } else { btrfs_page_clear_uptodate(fs_info, page, start, len); btrfs_page_set_error(fs_info, page, start, len); From patchwork Tue May 23 08:13:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251834 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BE4EC7EE23 for ; Tue, 23 May 2023 08:16:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236100AbjEWIQB (ORCPT ); Tue, 23 May 2023 04:16:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38854 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235744AbjEWIPb (ORCPT ); Tue, 23 May 2023 04:15:31 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CDA2BE75 for ; Tue, 23 May 2023 01:13:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=z89D5zEJfg+fAuTySzcHbcu3Lw1Db9g2bas+ucs/Rzc=; b=A/OISnxb/AUHva5e4k6ZmhYui0 J6t+6gdP7iBLvo2AMmindZVzKojezHjuzZcJt2L1H77G8vQonv7waC9s9Trvesudra91cr6RLCI3c 7kZNnEFeOf6J3jZQxBXqh4jtt9v/xpH6jEawPeeqcYADvozDrn9anGs/AJdb6XZkn7lAX5HLBgKqQ YYkiBqR523Aomii0iYcAYQPGDpx6vScaEe+FqIBv4XSVRHmGlYNbJ6kJfZidILRJi692HjhWc6T+N STsaG8mtZZHDu4TrNMroo40bDgeEdHQoDml1gwntRfALj5Qk1ZBgSPGLV40KeK5NCoPomATMaAm63 21BiHpxg==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9D-009OTn-0w; Tue, 23 May 2023 08:13:35 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 04/16] btrfs: don't check PageError in btrfs_verify_page Date: Tue, 23 May 2023 10:13:10 +0200 Message-Id: <20230523081322.331337-5-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: 
<20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org btrfs_verify_page is called from the readpage completion handler, which is only used to read pages, or parts of pages that aren't uptodate yet. The only case where PageError could be set on a page in btrfs is if we had a previous writeback error, but in that case we won't call readpage on it, as it has previously been marked uptodate. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 4297478a7a625d..b846c46c7a875b 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -484,7 +484,7 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end, static int btrfs_verify_page(struct page *page, u64 start) { if (!fsverity_active(page->mapping->host) || - PageError(page) || PageUptodate(page) || + PageUptodate(page) || start >= i_size_read(page->mapping->host)) return true; return fsverity_verify_page(page); From patchwork Tue May 23 08:13:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251835 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E83EC7EE2A for ; Tue, 23 May 2023 08:16:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235808AbjEWIQD (ORCPT ); Tue, 23 May 2023 04:16:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38902 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235827AbjEWIPd (ORCPT ); Tue, 23 May 2023 04:15:33 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 94803139 for ; Tue, 23 May 2023 01:13:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=RoSJQO1PQ0NgKGGai55SkC4YaLO/ncIk/jICftRuYcM=; b=PKrGG1LDDThdTRtEMbhahZfnRw aoB5h4J+7r0a5y5JUtirnXu3nsGqIS3g63Crc1jsGHYUkBoFYeTXal9NN7deuu2zlgUDu0XdRN5KJ sQu6iVgwDn7qNkpq7CF1jYksTb3eXVGY+2yCuY0BKlif8LxFN4zTmT9rGfu2lJe9zEjHGMb2hg8TX CQN1NN4abBOmLTIrMKTB7BaKkjSkYoppSJML1f8J7fLzLae1dz0TBaj5dllJ/zjEMzwwnsVa5ZG2j ETOWpb4xdMHwPU8jVlyowLGStMQiC8aJW0sgY9PSXgg7QsYQ0XE8pPbaptRH43XifWOitDfJuZ/8r h5XU5+YQ==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9G-009OUH-0w; Tue, 23 May 2023 08:13:38 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 05/16] btrfs: don't fail writeback when allocating the compression context fails Date: Tue, 23 May 2023 10:13:11 +0200 Message-Id: <20230523081322.331337-6-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References:
<20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org If cow_file_range_async fails to allocate the asynchronous writeback context, it currently returns an error and entirely fails the writeback. This is not a good idea, as a writeback failure is a non-temporary error condition that will make the file system unusable. Just fall back to synchronous uncompressed writeback instead. This requires us to delay setting the BTRFS_INODE_HAS_ASYNC_EXTENT flag until we've committed to the async writeback. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 74 ++++++++++++++++++------------------------------ 1 file changed, 28 insertions(+), 46 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 2e83fb22505261..0741294ce3234b 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1720,58 +1720,36 @@ static noinline void async_cow_free(struct btrfs_work *work) kvfree(async_cow); } -static int cow_file_range_async(struct btrfs_inode *inode, - struct writeback_control *wbc, - struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written) +static bool cow_file_range_async(struct btrfs_inode *inode, + struct writeback_control *wbc, + struct page *locked_page, + u64 start, u64 end, int *page_started, + unsigned long *nr_written) { struct btrfs_fs_info *fs_info = inode->root->fs_info; struct cgroup_subsys_state *blkcg_css = wbc_blkcg_css(wbc); struct async_cow *ctx; struct async_chunk *async_chunk; unsigned long nr_pages; - u64 cur_end; u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K); int i; - bool should_compress; unsigned nofs_flag; const blk_opf_t write_flags = wbc_to_write_flags(wbc); - unlock_extent(&inode->io_tree, start, end, NULL); - - if (inode->flags & BTRFS_INODE_NOCOMPRESS && - !btrfs_test_opt(fs_info, FORCE_COMPRESS)) { - num_chunks = 1; - should_compress = false; - } else { - should_compress = true; - } - nofs_flag = memalloc_nofs_save(); ctx = kvmalloc(struct_size(ctx, chunks, num_chunks), GFP_KERNEL); memalloc_nofs_restore(nofs_flag); + if (!ctx) + return false; - if (!ctx) { - unsigned clear_bits = EXTENT_LOCKED | EXTENT_DELALLOC | - EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | - EXTENT_DO_ACCOUNTING; - unsigned long page_ops = PAGE_UNLOCK | PAGE_START_WRITEBACK | - PAGE_END_WRITEBACK | PAGE_SET_ERROR; - - extent_clear_unlock_delalloc(inode, start, end, locked_page, - clear_bits, page_ops); - return -ENOMEM; - } + unlock_extent(&inode->io_tree, start, end, NULL); + set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags); async_chunk = ctx->chunks; atomic_set(&ctx->num_chunks, num_chunks); for (i = 0; i < num_chunks; i++) { - if (should_compress) - cur_end = min(end, start + SZ_512K - 1); - else - cur_end = end; + u64 cur_end = min(end, start + SZ_512K - 1); /* * igrab is called higher up in the call chain, take only the @@ -1832,7 +1810,7 @@ static int cow_file_range_async(struct btrfs_inode *inode, start = cur_end + 1; } *page_started = 1; - return 0; + return true; } static noinline int run_delalloc_zoned(struct btrfs_inode *inode, @@ -2413,8 +2391,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page u64 start, u64 end, int *page_started, unsigned long *nr_written, struct writeback_control *wbc) { - int ret; const bool zoned = btrfs_is_zoned(inode->root->fs_info); + int ret = 0; /* * The range must cover part of the
@locked_page, or the returned @@ -2434,19 +2412,23 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page ASSERT(!zoned || btrfs_is_data_reloc_root(inode->root)); ret = run_delalloc_nocow(inode, locked_page, start, end, page_started, nr_written); - } else if (!btrfs_inode_can_compress(inode) || - !inode_need_compress(inode, start, end)) { - if (zoned) - ret = run_delalloc_zoned(inode, locked_page, start, end, - page_started, nr_written); - else - ret = cow_file_range(inode, locked_page, start, end, - page_started, nr_written, 1, NULL); - } else { - set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags); - ret = cow_file_range_async(inode, wbc, locked_page, start, end, - page_started, nr_written); + goto out; } + + if (btrfs_inode_can_compress(inode) && + inode_need_compress(inode, start, end) && + cow_file_range_async(inode, wbc, locked_page, start, + end, page_started, nr_written)) + goto out; + + if (zoned) + ret = run_delalloc_zoned(inode, locked_page, start, end, + page_started, nr_written); + else + ret = cow_file_range(inode, locked_page, start, end, + page_started, nr_written, 1, NULL); + +out: ASSERT(ret <= 0); if (ret) btrfs_cleanup_ordered_extents(inode, locked_page, start, From patchwork Tue May 23 08:13:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251836 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3A49C7EE23 for ; Tue, 23 May 2023 08:16:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236103AbjEWIQE (ORCPT ); Tue, 23 May 2023 04:16:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38922 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235699AbjEWIPe (ORCPT ); Tue, 23 May 2023 04:15:34 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9A7EB10C4 for ; Tue, 23 May 2023 01:13:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=eq4REbHcjfiOaFXeZNJFO0BJjq1kagqwnhr8TtjmZN0=; b=dPXekr+ti1RLOgOpk+7y/NKTGl 3UmT8tBsk5J+ko0uNL4281Pus5ikTqLA2yz75RI2l6+niSMlgib/CyJpfrn8ficqIO1Dtrn6D5NKO tCw6EvkpRXoBnhDqGY8SR5oujkzEyJAf+NUxEq5XykXcyIVaKzzoD6uEpvce7wah0Q6IZ6u9gL4vq 7hneAlXw5VyVM76uIPHRIBf7FcasYQDFYOrEdYMeoIJizgAy4JBdPSk5vYv0DLpK6TSXuPTICqAcR nqhYTk22jUPoQmuNQ2bkTM1ifeIeN+83GqKRw2BCBnmONpK/DfDAMkHSOYcajQ2Bv1T8L7NRPhOXO 0po9PuMg==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9J-009OUo-2C; Tue, 23 May 2023 08:13:42 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 06/16] btrfs: rename cow_file_range_async to run_delalloc_compressed Date: Tue, 23 May 2023 10:13:12 +0200 Message-Id: <20230523081322.331337-7-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 
X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org cow_file_range_async is only used for compressed writeback. Rename it to run_delalloc_compressed, which also fits in with run_delalloc_nocow and run_delalloc_zoned. Signed-off-by: Christoph Hellwig Reviewed-by: Anand Jain --- fs/btrfs/inode.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 0741294ce3234b..c4d4ac0428ee74 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1693,7 +1693,7 @@ static noinline void async_cow_submit(struct btrfs_work *work) * ->inode could be NULL if async_chunk_start has failed to compress, * in which case we don't have anything to submit, yet we need to * always adjust ->async_delalloc_pages as its paired with the init - * happening in cow_file_range_async + * happening in run_delalloc_compressed */ if (async_chunk->inode) submit_compressed_extents(async_chunk); @@ -1720,11 +1720,11 @@ static noinline void async_cow_free(struct btrfs_work *work) kvfree(async_cow); } -static bool cow_file_range_async(struct btrfs_inode *inode, - struct writeback_control *wbc, - struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written) +static bool run_delalloc_compressed(struct btrfs_inode *inode, + struct writeback_control *wbc, + struct page *locked_page, + u64 start, u64 end, int *page_started, + unsigned long *nr_written) { struct btrfs_fs_info *fs_info = inode->root->fs_info; struct cgroup_subsys_state *blkcg_css = wbc_blkcg_css(wbc); @@ -2417,8 +2417,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page if (btrfs_inode_can_compress(inode) && inode_need_compress(inode, start, end) && - cow_file_range_async(inode, wbc, locked_page, start, - end, page_started, nr_written)) + run_delalloc_compressed(inode, wbc, locked_page, start, + end, page_started, nr_written)) goto out; if (zoned) From patchwork Tue May 23 08:13:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251837 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DE2C3C77B75 for ; Tue, 23 May 2023 08:16:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236112AbjEWIQG (ORCPT ); Tue, 23 May 2023 04:16:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38356 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235873AbjEWIPg (ORCPT ); Tue, 23 May 2023 04:15:36 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4D16010D0 for ; Tue, 23 May 2023 01:13:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=7w/PRXKmGfEj1WmV8ZOxgi7sl9nXGx5xTQevApR0MrI=; b=QC7uShyYbkq7LuRBs41+kwITHX w20ax9hHI4yNSIP3ObTq1VP7yO/assr5y1Mej7DcPkheKeaIf51cdTHirclSGfhzhJZq/W55VJSx3 
Ikrfa2D4ys8rS7qzYnhPCYb65Xpjrhk+66fg2Nf0SnV+Ms+qUAT3AlkFplewQkiJsAxk2lyDyeOy6 Ybo/0EAnEMkCZZRpEuHdoib1WOTkZx5yPz2fUCqKrMuRkBB+yYZ2oGBaqVpWoGdLrQAeaD0RCRzmu xErnNgz3Bs2hIgYwehfusJsjOBtQYHqoOz9TFyc1Iv5bU49AfckV0h07jt8skxMJj6wDNf7Cscatp 9oYAmUvQ==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9N-009OVF-0U; Tue, 23 May 2023 08:13:45 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 07/16] btrfs: don't check PageError in __extent_writepage Date: Tue, 23 May 2023 10:13:13 +0200 Message-Id: <20230523081322.331337-8-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org __extent_writepage currently sets PageError whenever any error happens, and then also checks for PageError to decide whether to call error handling. This leads to very unclear responsibility for cleaning up on errors. In the VM and generic writeback helpers the basic idea is that once I/O is fired off all error handling responsibility is delegated to the end I/O handler. But if that end I/O handler sets the PageError bit, and the submitter checks it, the bit could in some cases leak into the submission context for fast enough I/O. Fix this by simply not checking PageError and just using the local ret variable to check for submission errors. This also fundamentally solves the long-standing problem documented in a comment in __extent_writepage by never leaking the error bit into the submission context. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 33 +-------------------------------- 1 file changed, 1 insertion(+), 32 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index b846c46c7a875b..d7b31888efa17a 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1557,38 +1557,7 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl set_page_writeback(page); end_page_writeback(page); } - /* - * Here we used to have a check for PageError() and then set @ret and - * call end_extent_writepage(). - * - * But in fact setting @ret here will cause different error paths - * between subpage and regular sectorsize. - * - * For regular page size, we never submit current page, but only add - * current page to current bio. - * The bio submission can only happen in next page. - * Thus if we hit the PageError() branch, @ret is already set to - * non-zero value and will not get updated for regular sectorsize. - * - * But for subpage case, it's possible we submit part of current page, - * thus can get PageError() set by submitted bio of the same page, - * while our @ret is still 0. - * - * So here we unify the behavior and don't set @ret. - * Error can still be properly passed to higher layer as page will - * be set error, here we just don't handle the IO failure. - * - * NOTE: This is just a hotfix for subpage. - * The root fix will be properly ending ordered extent when we hit - * an error during writeback. - * - * But that needs a bigger refactoring, as we not only need to grab the - * submitted OE, but also need to know exactly at which bytenr we hit - * the error.
- * Currently the full page based __extent_writepage_io() is not - * capable of that. - */ - if (PageError(page)) + if (ret) end_extent_writepage(page, ret, page_start, page_end); if (bio_ctrl->extent_locked) { struct writeback_control *wbc = bio_ctrl->wbc; From patchwork Tue May 23 08:13:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251838 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53C4DC7EE2A for ; Tue, 23 May 2023 08:16:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235873AbjEWIQJ (ORCPT ); Tue, 23 May 2023 04:16:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38974 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235841AbjEWIPi (ORCPT ); Tue, 23 May 2023 04:15:38 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 386C610CB for ; Tue, 23 May 2023 01:13:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=Ym/X4IBTOCkWhaX4HEgReWoCdFBnxHnxm66YSKoUFIk=; b=dWNUGrS6nKFmsP8LC8WC3vZBKO j99VuKCnseoiwHu187Zj5NT1UZtctX9qdpRaWX1L+qnvowWXXcUJSUv7rH6m8gXELbFXXLA3LR/2q xsHDI9EU676CnMuJ7tPewF5wDibd4NDXc5O7ygVnLSt/ILY4ExQpS8RyWL8quEoSVQE/7wjtk5jh0 790J5KQZn8HB1r5PlZ5W/SiEd9gpYjxbJnBzAuEg9gJRnWvkNxWtsGOLxFCT/fgNX7Aw2I0aWLYkq /CGko+HcpqDhDagoukHI9L3+zD3MUhO86BtGx+Fv0sWFAIm2Z9t9ilE1QUaLmupF8PYNQIJRsRkDa r0Y0koBg==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9P-009OVl-26; Tue, 23 May 2023 08:13:48 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 08/16] btrfs: stop setting PageError in the data I/O path Date: Tue, 23 May 2023 10:13:14 +0200 Message-Id: <20230523081322.331337-9-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org PageError is not used by the VFS/MM and deprecated. Btrfs now only sets the flag and never clears it for data pages, so just remove all places setting it, and the subpage error bit. Note that the error propagation for superblock writes still uses PageError for now. 
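For reference, a sketch of how a failed data writeback is surfaced without PageError (the helper name is made up; mapping_set_error and the btrfs subpage helpers are the existing interfaces): the error is latched on the address_space so a later fsync or close reports it, and the range is simply left !uptodate:

	static void writeback_failed_sketch(struct page *page, u64 start, u32 len, int err)
	{
		struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);

		mapping_set_error(page->mapping, err);		/* picked up by fsync()/close() */
		btrfs_page_clear_uptodate(fs_info, page, start, len);
		/* no SetPageError(): the flag is not consulted by the VFS/MM */
	}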
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 24 +++++------------------- fs/btrfs/inode.c | 3 --- fs/btrfs/subpage.c | 34 ---------------------------------- fs/btrfs/subpage.h | 10 ++++------ 4 files changed, 9 insertions(+), 62 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index d7b31888efa17a..28610ed0fae913 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -223,8 +223,6 @@ static int process_one_page(struct btrfs_fs_info *fs_info, if (page_ops & PAGE_SET_ORDERED) btrfs_page_clamp_set_ordered(fs_info, page, start, len); - if (page_ops & PAGE_SET_ERROR) - btrfs_page_clamp_set_error(fs_info, page, start, len); if (page_ops & PAGE_START_WRITEBACK) { btrfs_page_clamp_clear_dirty(fs_info, page, start, len); btrfs_page_clamp_set_writeback(fs_info, page, start, len); @@ -497,12 +495,10 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) ASSERT(page_offset(page) <= start && start + len <= page_offset(page) + PAGE_SIZE); - if (uptodate && btrfs_verify_page(page, start)) { + if (uptodate && btrfs_verify_page(page, start)) btrfs_page_set_uptodate(fs_info, page, start, len); - } else { + else btrfs_page_clear_uptodate(fs_info, page, start, len); - btrfs_page_set_error(fs_info, page, start, len); - } if (!btrfs_is_subpage(fs_info, page)) unlock_page(page); @@ -530,7 +526,6 @@ void end_extent_writepage(struct page *page, int err, u64 start, u64 end) len = end + 1 - start; btrfs_page_clear_uptodate(fs_info, page, start, len); - btrfs_page_set_error(fs_info, page, start, len); ret = err < 0 ? err : -EIO; mapping_set_error(page->mapping, ret); } @@ -1059,7 +1054,6 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, ret = set_page_extent_mapped(page); if (ret < 0) { unlock_extent(tree, start, end, NULL); - btrfs_page_set_error(fs_info, page, start, PAGE_SIZE); unlock_page(page); return ret; } @@ -1263,11 +1257,9 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, } ret = btrfs_run_delalloc_range(inode, page, delalloc_start, delalloc_end, &page_started, &nr_written, wbc); - if (ret) { - btrfs_page_set_error(inode->root->fs_info, page, - page_offset(page), PAGE_SIZE); + if (ret) return ret; - } + /* * delalloc_end is already one less than the total length, so * we don't subtract one from PAGE_SIZE @@ -1420,7 +1412,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, em = btrfs_get_extent(inode, NULL, 0, cur, end - cur + 1); if (IS_ERR(em)) { - btrfs_page_set_error(fs_info, page, cur, end - cur + 1); ret = PTR_ERR_OR_ZERO(em); goto out_error; } @@ -1519,9 +1510,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl WARN_ON(!PageLocked(page)); - btrfs_page_clear_error(btrfs_sb(inode->i_sb), page, - page_offset(page), PAGE_SIZE); - pg_offset = offset_in_page(i_size); if (page->index > end_index || (page->index == end_index && !pg_offset)) { @@ -1534,10 +1522,8 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl memzero_page(page, pg_offset, PAGE_SIZE - pg_offset); ret = set_page_extent_mapped(page); - if (ret < 0) { - SetPageError(page); + if (ret < 0) goto done; - } if (!bio_ctrl->extent_locked) { ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index c4d4ac0428ee74..35b99fde75abb1 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1153,8 +1153,6 @@ static int submit_uncompressed_range(struct btrfs_inode 
*inode, const u64 page_start = page_offset(locked_page); const u64 page_end = page_start + PAGE_SIZE - 1; - btrfs_page_set_error(inode->root->fs_info, locked_page, - page_start, PAGE_SIZE); set_page_writeback(locked_page); end_page_writeback(locked_page); end_extent_writepage(locked_page, ret, page_start, page_end); @@ -3028,7 +3026,6 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work) mapping_set_error(page->mapping, ret); end_extent_writepage(page, ret, page_start, page_end); clear_page_dirty_for_io(page); - SetPageError(page); } btrfs_page_clear_checked(inode->root->fs_info, page, page_start, PAGE_SIZE); unlock_page(page); diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c index 045117ca0ddc43..9e9a5e26a15736 100644 --- a/fs/btrfs/subpage.c +++ b/fs/btrfs/subpage.c @@ -100,9 +100,6 @@ void btrfs_init_subpage_info(struct btrfs_subpage_info *subpage_info, u32 sector subpage_info->uptodate_offset = cur; cur += nr_bits; - subpage_info->error_offset = cur; - cur += nr_bits; - subpage_info->dirty_offset = cur; cur += nr_bits; @@ -416,35 +413,6 @@ void btrfs_subpage_clear_uptodate(const struct btrfs_fs_info *fs_info, spin_unlock_irqrestore(&subpage->lock, flags); } -void btrfs_subpage_set_error(const struct btrfs_fs_info *fs_info, - struct page *page, u64 start, u32 len) -{ - struct btrfs_subpage *subpage = (struct btrfs_subpage *)page->private; - unsigned int start_bit = subpage_calc_start_bit(fs_info, page, - error, start, len); - unsigned long flags; - - spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); - SetPageError(page); - spin_unlock_irqrestore(&subpage->lock, flags); -} - -void btrfs_subpage_clear_error(const struct btrfs_fs_info *fs_info, - struct page *page, u64 start, u32 len) -{ - struct btrfs_subpage *subpage = (struct btrfs_subpage *)page->private; - unsigned int start_bit = subpage_calc_start_bit(fs_info, page, - error, start, len); - unsigned long flags; - - spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); - if (subpage_test_bitmap_all_zero(fs_info, subpage, error)) - ClearPageError(page); - spin_unlock_irqrestore(&subpage->lock, flags); -} - void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info, struct page *page, u64 start, u32 len) { @@ -606,7 +574,6 @@ bool btrfs_subpage_test_##name(const struct btrfs_fs_info *fs_info, \ return ret; \ } IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(uptodate); -IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(error); IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(dirty); IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(writeback); IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(ordered); @@ -674,7 +641,6 @@ bool btrfs_page_clamp_test_##name(const struct btrfs_fs_info *fs_info, \ } IMPLEMENT_BTRFS_PAGE_OPS(uptodate, SetPageUptodate, ClearPageUptodate, PageUptodate); -IMPLEMENT_BTRFS_PAGE_OPS(error, SetPageError, ClearPageError, PageError); IMPLEMENT_BTRFS_PAGE_OPS(dirty, set_page_dirty, clear_page_dirty_for_io, PageDirty); IMPLEMENT_BTRFS_PAGE_OPS(writeback, set_page_writeback, end_page_writeback, diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h index 0e80ad33690466..998c1b78066e53 100644 --- a/fs/btrfs/subpage.h +++ b/fs/btrfs/subpage.h @@ -8,17 +8,17 @@ /* * Extra info for subpapge bitmap. * - * For subpage we pack all uptodate/error/dirty/writeback/ordered bitmaps into + * For subpage we pack all uptodate/dirty/writeback/ordered bitmaps into * one larger bitmap. 
* * This structure records how they are organized in the bitmap: * - * /- uptodate_offset /- error_offset /- dirty_offset + * /- uptodate_offset /- dirty_offset /- ordered_offset * | | | * v v v - * |u|u|u|u|........|u|u|e|e|.......|e|e| ... |o|o| + * |u|u|u|u|........|u|u|d|d|.......|d|d|o|o|.......|o|o| * |<- bitmap_nr_bits ->| - * |<--------------- total_nr_bits ---------------->| + * |<----------------- total_nr_bits ------------------>| */ struct btrfs_subpage_info { /* Number of bits for each bitmap */ @@ -32,7 +32,6 @@ struct btrfs_subpage_info { * @bitmap_size, which is calculated from PAGE_SIZE / sectorsize. */ unsigned int uptodate_offset; - unsigned int error_offset; unsigned int dirty_offset; unsigned int writeback_offset; unsigned int ordered_offset; @@ -141,7 +140,6 @@ bool btrfs_page_clamp_test_##name(const struct btrfs_fs_info *fs_info, \ struct page *page, u64 start, u32 len); DECLARE_BTRFS_SUBPAGE_OPS(uptodate); -DECLARE_BTRFS_SUBPAGE_OPS(error); DECLARE_BTRFS_SUBPAGE_OPS(dirty); DECLARE_BTRFS_SUBPAGE_OPS(writeback); DECLARE_BTRFS_SUBPAGE_OPS(ordered); From patchwork Tue May 23 08:13:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251839 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C0BBC7EE2A for ; Tue, 23 May 2023 08:16:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235940AbjEWIQM (ORCPT ); Tue, 23 May 2023 04:16:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38986 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235922AbjEWIPk (ORCPT ); Tue, 23 May 2023 04:15:40 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 74A9619A3 for ; Tue, 23 May 2023 01:13:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=yGI/0eVD+ylSGWrzsKeCm1l04uog+OHByoqL4GEOgTY=; b=ywY1h4JkG7XnnUgd8bkPrQRfmA 82lJCLl+Y2bhkah9yt9p+9Ginb5B6Cf552GdjpDpfylQcOAuQEbHIVFEDdKpOzmA+j3t0WUv4H32G VReNILtE6Kmj5z+Mavly1nxdvLcrnjqRSfcQW/J3T95qlcb6sEU7Kpt1vKf9oMBRAz5/yd9RXqxCD UNgJDSo6DiPEOip4gAu3Td5lw1ajxgAdBr11tXLeKtevTsHm44q88gYGWex/efGELHUyVFk/RPwPM ZvWRdIV49yM1+53maua93anU9BBn4hUV6ohIsmOQv3pFZiSHASn5P4Y1/GkHurVFcFqwuheayFAA5 vrcOYqtQ==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9S-009OW5-2Y; Tue, 23 May 2023 08:13:51 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 09/16] btrfs: remove PAGE_SET_ERROR Date: Tue, 23 May 2023 10:13:15 +0200 Message-Id: <20230523081322.331337-10-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. 
See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Now that the btrfs writeback code has stopped using PageError, using PAGE_SET_ERROR to just set the per-address_space error flag is confusing. Just open code the mapping_set_error calls in the callers and remove the PAGE_SET_ERROR flag. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 3 --- fs/btrfs/extent_io.h | 1 - fs/btrfs/inode.c | 11 ++++++----- 3 files changed, 6 insertions(+), 9 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 28610ed0fae913..8b9e4980d8189c 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -268,9 +268,6 @@ static int __process_pages_contig(struct address_space *mapping, ASSERT(processed_end && *processed_end == start); } - if ((page_ops & PAGE_SET_ERROR) && start_index <= end_index) - mapping_set_error(mapping, -EIO); - folio_batch_init(&fbatch); while (index <= end_index) { int found_folios; diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index db9148bafd02c3..4317fceeffaddb 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -39,7 +39,6 @@ enum { ENUM_BIT(PAGE_START_WRITEBACK), ENUM_BIT(PAGE_END_WRITEBACK), ENUM_BIT(PAGE_SET_ORDERED), - ENUM_BIT(PAGE_SET_ERROR), ENUM_BIT(PAGE_LOCK), }; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 35b99fde75abb1..2e6673cdb47bd3 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -835,6 +835,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) { struct btrfs_inode *inode = async_chunk->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; + struct address_space *mapping = inode->vfs_inode.i_mapping; u64 blocksize = fs_info->sectorsize; u64 start = async_chunk->start; u64 end = async_chunk->end; @@ -949,7 +950,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) /* Compression level is applied here and only here */ ret = btrfs_compress_pages( compress_type | (fs_info->compress_level << 4), - inode->vfs_inode.i_mapping, start, + mapping, start, pages, &nr_pages, &total_in, @@ -992,9 +993,9 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) unsigned long clear_flags = EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING; - unsigned long page_error_op; - page_error_op = ret < 0 ? 
PAGE_SET_ERROR : 0; + if (ret < 0) + mapping_set_error(mapping, -EIO); /* * inline extent creation worked or returned error, @@ -1011,7 +1012,6 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) clear_flags, PAGE_UNLOCK | PAGE_START_WRITEBACK | - page_error_op | PAGE_END_WRITEBACK); /* @@ -1271,12 +1271,13 @@ static int submit_one_async_extent(struct btrfs_inode *inode, btrfs_dec_block_group_reservations(fs_info, ins.objectid); btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); out_free: + mapping_set_error(inode->vfs_inode.i_mapping, -EIO); extent_clear_unlock_delalloc(inode, start, end, NULL, EXTENT_LOCKED | EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING, PAGE_UNLOCK | PAGE_START_WRITEBACK | - PAGE_END_WRITEBACK | PAGE_SET_ERROR); + PAGE_END_WRITEBACK); free_async_extent_pages(async_extent); goto done; } From patchwork Tue May 23 08:13:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251840 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F1F4C7EE23 for ; Tue, 23 May 2023 08:16:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236115AbjEWIQR (ORCPT ); Tue, 23 May 2023 04:16:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38408 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235971AbjEWIPl (ORCPT ); Tue, 23 May 2023 04:15:41 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 95FD810CC for ; Tue, 23 May 2023 01:13:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=nC/qJvdFdtjTS9OPWbA/jIIFGrcXX3cLaOhvCrxYDw4=; b=ZVYMf2P5E2feuWqbJGSzearml/ HCf8yXpyjrj/bbybkXAcHyZZW1UgJgo7pOdYDU6mCLUPPcIPmDcJCvBmxMXLBJmTeFTOy1P2V/oLG JMaShpOri8MiB0C298211YlLqGngDEnv6/O8Mnn63f18yovrCVctnJGs0rq+6Je9C6u2CQgsieOG9 DQS6aa7/cLofMpBQ/7uqwv58AiHOUa6YKcccI3uEXOGYcIaBUoTYz9RYKhd01k5xkzHIFqKsbT4OT i/e5LtTDM5kKsglzJPpylZosbhl5dH7uS8G/wy7R2lgUP2BgqWhNfFG2vnQA6Mmu3AmDen1FMbzyt L+0Ahb1A==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9V-009OWc-1e; Tue, 23 May 2023 08:13:53 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 10/16] btrfs: remove non-standard extent handling in __extent_writepage_io Date: Tue, 23 May 2023 10:13:16 +0200 Message-Id: <20230523081322.331337-11-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org __extent_writepage_io is never called for compressed or inline extents, or holes. 
Remove the not quite working code for them and replace it with asserts that these cases don't happen. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 23 +++++------------------ 1 file changed, 5 insertions(+), 18 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 8b9e4980d8189c..6151b38add8759 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1361,7 +1361,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, struct extent_map *em; int ret = 0; int nr = 0; - bool compressed; ret = btrfs_writepage_cow_fixup(page); if (ret) { @@ -1419,10 +1418,14 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, ASSERT(cur < end); ASSERT(IS_ALIGNED(em->start, fs_info->sectorsize)); ASSERT(IS_ALIGNED(em->len, fs_info->sectorsize)); + block_start = em->block_start; - compressed = test_bit(EXTENT_FLAG_COMPRESSED, &em->flags); disk_bytenr = em->block_start + extent_offset; + ASSERT(!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)); + ASSERT(block_start != EXTENT_MAP_HOLE); + ASSERT(block_start != EXTENT_MAP_INLINE); + /* * Note that em_end from extent_map_end() and dirty_range_end from * find_next_dirty_byte() are all exclusive @@ -1431,22 +1434,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, free_extent_map(em); em = NULL; - /* - * compressed and inline extents are written through other - * paths in the FS - */ - if (compressed || block_start == EXTENT_MAP_HOLE || - block_start == EXTENT_MAP_INLINE) { - if (compressed) - nr++; - else - btrfs_writepage_endio_finish_ordered(inode, - page, cur, cur + iosize - 1, true); - btrfs_page_clear_dirty(fs_info, page, cur, iosize); - cur += iosize; - continue; - } - btrfs_set_range_writeback(inode, cur, cur + iosize - 1); if (!PageWriteback(page)) { btrfs_err(inode->root->fs_info, From patchwork Tue May 23 08:13:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 261ACC7EE23 for ; Tue, 23 May 2023 08:16:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235990AbjEWIQU (ORCPT ); Tue, 23 May 2023 04:16:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235986AbjEWIPm (ORCPT ); Tue, 23 May 2023 04:15:42 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8F16019A7 for ; Tue, 23 May 2023 01:13:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=76unRlbV3A9Mop5L5PwCi8hmeO6JSgnUFL1w7O3UiYE=; b=mVFrbjGsFBvIwq7eMUdmxYlJSM c8c/n4fA3D3w+AQemz68YTgSP9TvUh8aDmOV24bBqoASc+cjy4hcXeCT5C9xistjDRsWEejp1giVp 8SHsGm7AAVhZIfNp+CTNa8kEO2yKoU3kVFSQGU7bzJF5oPyzyTZgwACBaS5M/4QYjMqrvQOZynnJU jfFf2sKtGAmA3FeVmDbmzr39ni7BHWpM+0XMHzKEc0LRegaBzSlqneOhChhA03H9V3xD/e1E38Rip otKvaKjRhm028/L+nrscMrm6G+SUrRNVxUT2ZQhBK3+jYDAvLCASWLAPrHREKgeAPIh5YKmEqRUvO 
VuZIZdog==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9Y-009OXQ-0E; Tue, 23 May 2023 08:13:56 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 11/16] btrfs: move nr_to_write to __extent_writepage Date: Tue, 23 May 2023 10:13:17 +0200 Message-Id: <20230523081322.331337-12-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Move the nr_to_write accounting from __extent_writepage_io to __extent_writepage, as we'll soon grow another caller of __extent_writepage_io that doesn't want this accounting. Also drop the obsolete comment - decrementing a counter in the on-stack writeback_control data structure doesn't need the page lock. Signed-off-by: Christoph Hellwig Reviewed-by: Anand Jain --- fs/btrfs/extent_io.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 6151b38add8759..ac4ef0527bed49 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1370,12 +1370,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, return 1; } - /* - * we don't want to touch the inode after unlocking the page, - * so we update the mapping writeback index now - */ - bio_ctrl->wbc->nr_to_write--; - bio_ctrl->end_io_func = end_bio_extent_writepage; while (cur <= end) { u64 disk_bytenr; @@ -1521,6 +1515,8 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl if (ret == 1) return 0; + bio_ctrl->wbc->nr_to_write--; + done: if (nr == 0) { /* make sure the mapping tag for page dirty gets cleared */ From patchwork Tue May 23 08:13:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251842 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70D18C77B75 for ; Tue, 23 May 2023 08:16:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236013AbjEWIQW (ORCPT ); Tue, 23 May 2023 04:16:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38168 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235819AbjEWIPp (ORCPT ); Tue, 23 May 2023 04:15:45 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A57641BF6 for ; Tue, 23 May 2023 01:14:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=W/BpTPabm1Dri+lRdgeyCzqcIjoUwGvA6RIT5FYTF74=; b=cp1vqKYi3+f3dvzvRZJswn45FJ alFcvYNPLDENKzgRpCkWAKGsyPqOTUrezdkg87nJkOabm5yw9etEgvvxGEIZQVUzT/VkGsocF53kR j00pGkkVRIpuy54HlgzAYWdn0IWLZ+jTKjrwQswwvwtu1Tcqnf8CbPEmRLnJTrsiZ+Lv8w1xw1uVL
5YuuAb1H1tEAEa0E8hwCf/9JW2fSsTekidj5dlVWhYV/7CBLzYxRQFqh9XCJXoID7Pg/be6mo/ZIq AsTc2ElpEGv4nq4HxHuXLKJ1czoVWk7SmHZURDiCLkNJNJJQw+cOtqKH3vvtyZaEmdt7kZRDznTaY M99iY7SA==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9a-009OXm-2g; Tue, 23 May 2023 08:13:59 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 12/16] btrfs: only call __extent_writepage_io from extent_write_locked_range Date: Tue, 23 May 2023 10:13:18 +0200 Message-Id: <20230523081322.331337-13-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org __extent_writepage does a lot of things that make no sense for extent_write_locked_range, given that extent_write_locked_range itself is called from __extent_writepage either directly or through a workqueue, and all this work has already been done in the first invocation and the pages haven't been unlocked since. Just call __extent_writepage_io directly instead. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 65 ++++++++++++++++++-------------------------- 1 file changed, 26 insertions(+), 39 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index ac4ef0527bed49..cca47909953d6a 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -103,12 +103,6 @@ struct btrfs_bio_ctrl { blk_opf_t opf; btrfs_bio_end_io_t end_io_func; struct writeback_control *wbc; - - /* - * Tell writepage not to lock the state bits for this range, it still - * does the unlocking. - */ - bool extent_locked; }; static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl) @@ -1475,7 +1469,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl { struct folio *folio = page_folio(page); struct inode *inode = page->mapping->host; - struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); const u64 page_start = page_offset(page); const u64 page_end = page_start + PAGE_SIZE - 1; int ret; @@ -1503,13 +1496,11 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl if (ret < 0) goto done; - if (!bio_ctrl->extent_locked) { - ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc); - if (ret == 1) - return 0; - if (ret) - goto done; - } + ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc); + if (ret == 1) + return 0; + if (ret) + goto done; ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size, &nr); if (ret == 1) @@ -1525,21 +1516,7 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl } if (ret) end_extent_writepage(page, ret, page_start, page_end); - if (bio_ctrl->extent_locked) { - struct writeback_control *wbc = bio_ctrl->wbc; - - /* - * If bio_ctrl->extent_locked, it's from extent_write_locked_range(), - * the page can either be locked by lock_page() or - * process_one_page(). - * Let btrfs_page_unlock_writer() handle both cases. 
- */ - ASSERT(wbc); - btrfs_page_unlock_writer(fs_info, page, wbc->range_start, - wbc->range_end + 1 - wbc->range_start); - } else { - unlock_page(page); - } + unlock_page(page); ASSERT(ret <= 0); return ret; } @@ -2238,10 +2215,10 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) int first_error = 0; int ret = 0; struct address_space *mapping = inode->i_mapping; - struct page *page; + struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); + const u32 sectorsize = fs_info->sectorsize; + loff_t i_size = i_size_read(inode); u64 cur = start; - unsigned long nr_pages; - const u32 sectorsize = btrfs_sb(inode->i_sb)->sectorsize; struct writeback_control wbc_writepages = { .sync_mode = WB_SYNC_ALL, .range_start = start, @@ -2253,17 +2230,15 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) /* We're called from an async helper function */ .opf = REQ_OP_WRITE | REQ_BTRFS_CGROUP_PUNT | wbc_to_write_flags(&wbc_writepages), - .extent_locked = 1, }; ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize)); - nr_pages = (round_up(end, PAGE_SIZE) - round_down(start, PAGE_SIZE)) >> - PAGE_SHIFT; - wbc_writepages.nr_to_write = nr_pages * 2; wbc_attach_fdatawrite_inode(&wbc_writepages, inode); while (cur <= end) { u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end); + struct page *page; + int nr = 0; page = find_get_page(mapping, cur >> PAGE_SHIFT); /* @@ -2274,12 +2249,25 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) ASSERT(PageLocked(page)); ASSERT(PageDirty(page)); clear_page_dirty_for_io(page); - ret = __extent_writepage(page, &bio_ctrl); - ASSERT(ret <= 0); + + ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl, + i_size, &nr); + if (ret == 1) + goto next_page; + + /* Make sure the mapping tag for page dirty gets cleared. 
*/ + if (nr == 0) { + set_page_writeback(page); + end_page_writeback(page); + } + if (ret) + end_extent_writepage(page, ret, cur, cur_end); + btrfs_page_unlock_writer(fs_info, page, cur, cur_end + 1 - cur); if (ret < 0) { found_error = true; first_error = ret; } +next_page: put_page(page); cur = cur_end + 1; } @@ -2300,7 +2288,6 @@ int extent_writepages(struct address_space *mapping, struct btrfs_bio_ctrl bio_ctrl = { .wbc = wbc, .opf = REQ_OP_WRITE | wbc_to_write_flags(wbc), - .extent_locked = 0, }; /* From patchwork Tue May 23 08:13:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251843 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93517C7EE2A for ; Tue, 23 May 2023 08:16:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236029AbjEWIQX (ORCPT ); Tue, 23 May 2023 04:16:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38642 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231709AbjEWIPq (ORCPT ); Tue, 23 May 2023 04:15:46 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E1311BF8 for ; Tue, 23 May 2023 01:14:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=gfRMxGIBYVmg8p99RBFU7ip4J8BStlt6AH/KtIRmNuc=; b=r+HVRbN6aBrpShvLIsx7P8J2Ph 9A4oS52UlebGyvTQmWwT3sEI7Qqzweffg2mCAbHJKCP6VV5oJH7EIoTSmOfVBXwfUAUlAy19oP9Gm drCecjcVmZp8wwhOSOhIbhPDi0E0Lyy7s/R57XSVMSfpLfZpKQKko3ROqrK5zBWRCaA1QYuxwHKQs 7/rYE2+45PZEqL2HHoxiM1zC99MGs3nE5WKAwM8ufxBzxdy78ywbs0S4CcEJ9dnOnRgFgAAjFHNtc 7rqjQ93+pWT76ieBgQBiMediCUr6dgTpAJZXZcIpjWjeGdT2TegoQzluJSGaNVRbQkytRF4Q9/w8e dlXY4VLA==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9d-009OYd-26; Tue, 23 May 2023 08:14:02 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 13/16] btrfs: don't treat zoned writeback as being from an async helper thread Date: Tue, 23 May 2023 10:13:19 +0200 Message-Id: <20230523081322.331337-14-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org When extent_write_locked_range was originally added, it was only used for writing back compressed pages from an async helper thread. But it is now also used for writing back pages on zoned devices, where it is called directly from the ->writepage context. In this case we want to be able to pass on the writeback_control instead of creating a new one, and more importantly want to use all the normal cgroup interaction instead of potentially deferring writeback to another helper.
Fixes: 898793d992c2 ("btrfs: zoned: write out partially allocated region") Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 20 +++++++------------- fs/btrfs/extent_io.h | 3 ++- fs/btrfs/inode.c | 20 +++++++++++++++----- 3 files changed, 24 insertions(+), 19 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index cca47909953d6a..885089afb43ecf 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2209,7 +2209,8 @@ static int extent_write_cache_pages(struct address_space *mapping, * already been ran (aka, ordered extent inserted) and all pages are still * locked. */ -int extent_write_locked_range(struct inode *inode, u64 start, u64 end) +int extent_write_locked_range(struct inode *inode, u64 start, u64 end, + struct writeback_control *wbc) { bool found_error = false; int first_error = 0; @@ -2219,22 +2220,16 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) const u32 sectorsize = fs_info->sectorsize; loff_t i_size = i_size_read(inode); u64 cur = start; - struct writeback_control wbc_writepages = { - .sync_mode = WB_SYNC_ALL, - .range_start = start, - .range_end = end, - .no_cgroup_owner = 1, - }; struct btrfs_bio_ctrl bio_ctrl = { - .wbc = &wbc_writepages, - /* We're called from an async helper function */ - .opf = REQ_OP_WRITE | REQ_BTRFS_CGROUP_PUNT | - wbc_to_write_flags(&wbc_writepages), + .wbc = wbc, + .opf = REQ_OP_WRITE | wbc_to_write_flags(wbc), }; + if (wbc->no_cgroup_owner) + bio_ctrl.opf |= REQ_BTRFS_CGROUP_PUNT; + ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize)); - wbc_attach_fdatawrite_inode(&wbc_writepages, inode); while (cur <= end) { u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end); struct page *page; @@ -2274,7 +2269,6 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) submit_write_bio(&bio_ctrl, found_error ? 
ret : 0); - wbc_detach_inode(&wbc_writepages); if (found_error) return first_error; return ret; diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 4317fceeffaddb..eb97043672ebec 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -177,7 +177,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask); int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); -int extent_write_locked_range(struct inode *inode, u64 start, u64 end); +int extent_write_locked_range(struct inode *inode, u64 start, u64 end, + struct writeback_control *wbc); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 2e6673cdb47bd3..ed137746af82ee 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1133,6 +1133,12 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, unsigned long nr_written = 0; int page_started = 0; int ret; + struct writeback_control wbc = { + .sync_mode = WB_SYNC_ALL, + .range_start = start, + .range_end = end, + .no_cgroup_owner = 1, + }; /* * Call cow_file_range() to run the delalloc range directly, since we @@ -1162,7 +1168,10 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, } /* All pages will be unlocked, including @locked_page */ - return extent_write_locked_range(&inode->vfs_inode, start, end); + wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); + ret = extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); + wbc_detach_inode(&wbc); + return ret; } static int submit_one_async_extent(struct btrfs_inode *inode, @@ -1815,7 +1824,8 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode, static noinline int run_delalloc_zoned(struct btrfs_inode *inode, struct page *locked_page, u64 start, u64 end, int *page_started, - unsigned long *nr_written) + unsigned long *nr_written, + struct writeback_control *wbc) { u64 done_offset = end; int ret; @@ -1847,8 +1857,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, account_page_redirty(locked_page); } locked_page_done = true; - extent_write_locked_range(&inode->vfs_inode, start, done_offset); - + extent_write_locked_range(&inode->vfs_inode, start, done_offset, + wbc); start = done_offset + 1; } @@ -2422,7 +2432,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page if (zoned) ret = run_delalloc_zoned(inode, locked_page, start, end, - page_started, nr_written); + page_started, nr_written, wbc); else ret = cow_file_range(inode, locked_page, start, end, page_started, nr_written, 1, NULL); From patchwork Tue May 23 08:13:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251844 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 268F0C77B75 for ; Tue, 23 May 2023 08:16:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231461AbjEWIQ0 (ORCPT ); Tue, 23 May 2023 04:16:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38244 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236081AbjEWIPu (ORCPT ); Tue, 23 May 2023 04:15:50 -0400 Received: from 
bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 260B01FC2 for ; Tue, 23 May 2023 01:14:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=RXcVEABuxW8v9vpji885TPLM7LKQN+cNGKerBlCmARk=; b=AThCsUSfjIQHLYt+8H+1CZxWXw XhEongclLcFV7Q1F7hXR4xnzOg3XlURii1MAtL4UnGHAWMek0rAI5Un/oLOEk+qiM6VUC8JFWLQPY VXaH1gvVTIQSeeyg4V8p6vVx9xeEvwgNAINxITZKTqdZ4et8fz4Dk0Xr1LPlH25QnjnCGJ6tQSR0M 3WE/m1Y8rt5B9udgMViP8L82ysYhQ4hDuMu1Gan2W1Dah55d3gGepxGryFrgl7R2j36Q3rdVK7F7e jtRJteNGp30p9wuv7DDtSlyqKfBeGlYGrc9i6Vrgm1zPn69jlUDBWt2VACRhdZd8fICqHxNQC4XCt 9AzSKSSg==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9g-009OZh-1u; Tue, 23 May 2023 08:14:05 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 14/16] btrfs: don't redirty the locked page for extent_write_locked_range Date: Tue, 23 May 2023 10:13:20 +0200 Message-Id: <20230523081322.331337-15-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Instead of redirtying the locked page before calling extent_write_locked_range, just pass a locked_page argument similar to many other functions in the btrfs writeback code, and then exclude the locked page from clearing the dirty bit in extent_write_locked_range. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 17 ++++++++++------- fs/btrfs/extent_io.h | 3 ++- fs/btrfs/inode.c | 25 ++++++------------------- 3 files changed, 18 insertions(+), 27 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 885089afb43ecf..77f0e405280736 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2209,8 +2209,8 @@ static int extent_write_cache_pages(struct address_space *mapping, * already been ran (aka, ordered extent inserted) and all pages are still * locked. */ -int extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc) +int extent_write_locked_range(struct inode *inode, struct page *locked_page, + u64 start, u64 end, struct writeback_control *wbc) { bool found_error = false; int first_error = 0; @@ -2236,14 +2236,17 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end, int nr = 0; page = find_get_page(mapping, cur >> PAGE_SHIFT); + /* - * All pages in the range are locked since - * btrfs_run_delalloc_range(), thus there is no way to clear - * the page dirty flag. + * All pages have been locked by btrfs_run_delalloc_range(), + * thus the dirty bit can't have been cleared. 
*/ ASSERT(PageLocked(page)); - ASSERT(PageDirty(page)); - clear_page_dirty_for_io(page); + if (page != locked_page) { + /* already cleared by extent_write_cache_pages */ + ASSERT(PageDirty(page)); + clear_page_dirty_for_io(page); + } ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl, i_size, &nr); diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index eb97043672ebec..daef9374c2095f 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -177,7 +177,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask); int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); -int extent_write_locked_range(struct inode *inode, u64 start, u64 end, +int extent_write_locked_range(struct inode *inode, struct page *locked_page, + u64 start, u64 end, struct writeback_control *wbc); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index ed137746af82ee..786b88ac0fdd35 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1088,17 +1088,9 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) cleanup_and_bail_uncompressed: /* * No compression, but we still need to write the pages in the file - * we've been given so far. redirty the locked page if it corresponds - * to our extent and set things up for the async work queue to run - * cow_file_range to do the normal delalloc dance. + * we've been given so far. Set things up for the async work queue to + * run cow_file_range to do the normal delalloc dance. */ - if (async_chunk->locked_page && - (page_offset(async_chunk->locked_page) >= start && - page_offset(async_chunk->locked_page)) <= end) { - __set_page_dirty_nobuffers(async_chunk->locked_page); - /* unlocked later on in the async handlers */ - } - if (redirty) extent_range_redirty_for_io(&inode->vfs_inode, start, end); add_async_extent(async_chunk, start, end - start + 1, 0, NULL, 0, @@ -1169,7 +1161,8 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, /* All pages will be unlocked, including @locked_page */ wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); - ret = extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); + ret = extent_write_locked_range(&inode->vfs_inode, locked_page, start, + end, &wbc); wbc_detach_inode(&wbc); return ret; } @@ -1829,7 +1822,6 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, { u64 done_offset = end; int ret; - bool locked_page_done = false; while (start <= end) { ret = cow_file_range(inode, locked_page, start, end, page_started, @@ -1852,13 +1844,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, continue; } - if (!locked_page_done) { - __set_page_dirty_nobuffers(locked_page); - account_page_redirty(locked_page); - } - locked_page_done = true; - extent_write_locked_range(&inode->vfs_inode, start, done_offset, - wbc); + extent_write_locked_range(&inode->vfs_inode, locked_page, start, + done_offset, wbc); start = done_offset + 1; } From patchwork Tue May 23 08:13:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251845 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A671C7EE2A for ; Tue, 23 May 2023 08:16:30 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235569AbjEWIQ2 (ORCPT ); Tue, 23 May 2023 04:16:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235398AbjEWIP4 (ORCPT ); Tue, 23 May 2023 04:15:56 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8403A1FCE for ; Tue, 23 May 2023 01:14:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=etfh8PFsrKiHNtcbE64ACMg0LgFwvDDMTigbzOUNKXs=; b=v3Pm+y6MDtPkUXE106AJ2IRvcH 65Y5OytgL7bo8XzhP+4CNciyQYNJ0J4kqe58d0VStbevKy1U3y+Gc3WSR/rnWc1Pd+1GUk1JNQpYa Ha+sO/U9InU/tFBeaWqbsTeAtxzCubXyGnuGCr32a7BTF91QBon9EY6WQWlWw0AZKXRJhlJavNer/ qKVfRS72X0EgoXkjTO8KQGeoaxfA0T+vtRChunV7hdW2OUJwqRvSX76VhSdJlyLkNHDEq+4yUf2F6 ltdVu+zJKZqqDAq2iB4nE11uRfKGaRMjexhkrO7/z/5vOQguXStSbIl9FmA70pX1oxS/PfFJetyQN DX9UqwTw==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9j-009Obn-2O; Tue, 23 May 2023 08:14:08 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 15/16] btrfs: refactor the zoned device handling in cow_file_range Date: Tue, 23 May 2023 10:13:21 +0200 Message-Id: <20230523081322.331337-16-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Handling of the done_offset to cow_file_range is a bit confusing, as it is not updated at all when the function succeeds, and the -EAGAIN status is used both for the case where we need to wait for a zone finish and the one where the allocation was partially successful. Change the calling convention so that done_offset is always updated, and 0 is returned if some allocation was successful (partial allocation can still only happen for zoned devices), and -EAGAIN is only returned when the caller needs to wait for a zone finish. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 53 ++++++++++++++++++++++++------------------------ 1 file changed, 27 insertions(+), 26 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 786b88ac0fdd35..c94eb571ba4b48 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1403,7 +1403,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode, unsigned clear_bits; unsigned long page_ops; bool extent_reserved = false; - int ret = 0; + int ret; if (btrfs_is_free_space_inode(inode)) { ret = -EINVAL; @@ -1462,7 +1462,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * inline extent or a compressed extent.
*/ unlock_page(locked_page); - goto out; + goto done; } else if (ret < 0) { goto out_unlock; } @@ -1491,6 +1491,23 @@ static noinline int cow_file_range(struct btrfs_inode *inode, ret = btrfs_reserve_extent(root, cur_alloc_size, cur_alloc_size, min_alloc_size, 0, alloc_hint, &ins, 1, 1); + if (ret == -EAGAIN) { + /* + * For zoned devices, let the caller retry after writing + * out the already allocated regions or waiting for a + * zone to finish if no allocation was possible at all. + * + * Else convert to -ENOSPC since the caller cannot + * retry. + */ + if (btrfs_is_zoned(fs_info)) { + if (start == orig_start) + return -EAGAIN; + *done_offset = start - 1; + return 0; + } + ret = -ENOSPC; + } if (ret < 0) goto out_unlock; cur_alloc_size = ins.offset; @@ -1571,8 +1588,10 @@ static noinline int cow_file_range(struct btrfs_inode *inode, if (ret) goto out_unlock; } -out: - return ret; +done: + if (done_offset) + *done_offset = end; + return 0; out_drop_extent_cache: btrfs_drop_extent_map_range(inode, start, start + ram_size - 1, false); @@ -1580,21 +1599,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode, btrfs_dec_block_group_reservations(fs_info, ins.objectid); btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); out_unlock: - /* - * If done_offset is non-NULL and ret == -EAGAIN, we expect the - * caller to write out the successfully allocated region and retry. - */ - if (done_offset && ret == -EAGAIN) { - if (orig_start < start) - *done_offset = start - 1; - else - *done_offset = start; - return ret; - } else if (ret == -EAGAIN) { - /* Convert to -ENOSPC since the caller cannot retry. */ - ret = -ENOSPC; - } - /* * Now, we have three regions to clean up: * @@ -1826,23 +1830,20 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, while (start <= end) { ret = cow_file_range(inode, locked_page, start, end, page_started, nr_written, 0, &done_offset); - if (ret && ret != -EAGAIN) - return ret; - if (*page_started) { ASSERT(ret == 0); return 0; } + if (ret == -EAGAIN) { + ASSERT(btrfs_is_zoned(inode->root->fs_info)); - if (ret == 0) - done_offset = end; - - if (done_offset == start) { wait_on_bit_io(&inode->root->fs_info->flags, BTRFS_FS_NEED_ZONE_FINISH, TASK_UNINTERRUPTIBLE); continue; } + if (ret) + return ret; extent_write_locked_range(&inode->vfs_inode, locked_page, start, done_offset, wbc); From patchwork Tue May 23 08:13:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13251846 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C644DC7EE23 for ; Tue, 23 May 2023 08:16:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235769AbjEWIQn (ORCPT ); Tue, 23 May 2023 04:16:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38904 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235772AbjEWIQC (ORCPT ); Tue, 23 May 2023 04:16:02 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3327A10DD for ; Tue, 23 May 2023 01:14:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: 
MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=ZlF6SPVP/BVyQyAQuB+tRpq0iI//L5LznWpKrqsgkYI=; b=1inQ4DDUZyxOTIEpgTnZVIijMP eB7l+uBaWGku5SPw7qnZj3YW2hNEkc74FK0vb50Ow4VA2NnwUjuusUNidqhFshBuefJvVdfHwOLyy qlcEW2CGMhBzIt6UE2NNctf7clahACy/HKjb2ghRWQcxWyEgEEafKcp1GorGqrc536oS5oQ/+5yan gPXRweBdDFgE2tlp4eiEppuINF8ECO5ohrtUhUV4fzCDp0q5HdOJ83aK4AghSMG0hLi5R/ctyhd+L ACiLjuGk+3cjvXiQjIzTfsFOhjHi6uvlhZ1xSh0iSJ7Q3Tr9Nh+F4S77LmnJ1//OaUWly0APOPnUD p54A1ueQ==; Received: from [2001:4bb8:188:23b2:6ade:85c9:530f:6eb0] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q1N9m-009Of3-22; Tue, 23 May 2023 08:14:11 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 16/16] btrfs: split page locking out of __process_pages_contig Date: Tue, 23 May 2023 10:13:22 +0200 Message-Id: <20230523081322.331337-17-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230523081322.331337-1-hch@lst.de> References: <20230523081322.331337-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org There is a lot of complexity in __process_pages_contig to deal with the PAGE_LOCK case that can return an error unlike all the other actions. Open code the page iteration for page locking in lock_delalloc_pages and remove all the now unused code from __process_pages_contig. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 149 +++++++++++++++++-------------------------- fs/btrfs/extent_io.h | 1 - 2 files changed, 59 insertions(+), 91 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 77f0e405280736..15021d25155f97 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -197,18 +197,9 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end) } } -/* - * Process one page for __process_pages_contig(). - * - * Return >0 if we hit @page == @locked_page. - * Return 0 if we updated the page status. - * Return -EGAIN if the we need to try again. 
- * (For PAGE_LOCK case but got dirty page or page not belong to mapping) - */ -static int process_one_page(struct btrfs_fs_info *fs_info, - struct address_space *mapping, - struct page *page, struct page *locked_page, - unsigned long page_ops, u64 start, u64 end) +static void process_one_page(struct btrfs_fs_info *fs_info, + struct page *page, struct page *locked_page, + unsigned long page_ops, u64 start, u64 end) { u32 len; @@ -224,29 +215,13 @@ static int process_one_page(struct btrfs_fs_info *fs_info, if (page_ops & PAGE_END_WRITEBACK) btrfs_page_clamp_clear_writeback(fs_info, page, start, len); - if (page == locked_page) - return 1; - - if (page_ops & PAGE_LOCK) { - int ret; - - ret = btrfs_page_start_writer_lock(fs_info, page, start, len); - if (ret) - return ret; - if (!PageDirty(page) || page->mapping != mapping) { - btrfs_page_end_writer_lock(fs_info, page, start, len); - return -EAGAIN; - } - } - if (page_ops & PAGE_UNLOCK) + if (page != locked_page && (page_ops & PAGE_UNLOCK)) btrfs_page_end_writer_lock(fs_info, page, start, len); - return 0; } -static int __process_pages_contig(struct address_space *mapping, - struct page *locked_page, - u64 start, u64 end, unsigned long page_ops, - u64 *processed_end) +static void __process_pages_contig(struct address_space *mapping, + struct page *locked_page, u64 start, u64 end, + unsigned long page_ops) { struct btrfs_fs_info *fs_info = btrfs_sb(mapping->host->i_sb); pgoff_t start_index = start >> PAGE_SHIFT; @@ -254,64 +229,24 @@ static int __process_pages_contig(struct address_space *mapping, pgoff_t index = start_index; unsigned long pages_processed = 0; struct folio_batch fbatch; - int err = 0; int i; - if (page_ops & PAGE_LOCK) { - ASSERT(page_ops == PAGE_LOCK); - ASSERT(processed_end && *processed_end == start); - } - folio_batch_init(&fbatch); while (index <= end_index) { int found_folios; found_folios = filemap_get_folios_contig(mapping, &index, end_index, &fbatch); - - if (found_folios == 0) { - /* - * Only if we're going to lock these pages, we can find - * nothing at @index. - */ - ASSERT(page_ops & PAGE_LOCK); - err = -EAGAIN; - goto out; - } - for (i = 0; i < found_folios; i++) { - int process_ret; struct folio *folio = fbatch.folios[i]; - process_ret = process_one_page(fs_info, mapping, - &folio->page, locked_page, page_ops, - start, end); - if (process_ret < 0) { - err = -EAGAIN; - folio_batch_release(&fbatch); - goto out; - } + + process_one_page(fs_info, &folio->page, locked_page, + page_ops, start, end); pages_processed += folio_nr_pages(folio); } folio_batch_release(&fbatch); cond_resched(); } -out: - if (err && processed_end) { - /* - * Update @processed_end. I know this is awful since it has - * two different return value patterns (inclusive vs exclusive). - * - * But the exclusive pattern is necessary if @start is 0, or we - * underflow and check against processed_end won't work as - * expected. 
- */ - if (pages_processed) - *processed_end = min(end, - ((u64)(start_index + pages_processed) << PAGE_SHIFT) - 1); - else - *processed_end = start; - } - return err; } static noinline void __unlock_for_delalloc(struct inode *inode, @@ -326,29 +261,63 @@ static noinline void __unlock_for_delalloc(struct inode *inode, return; __process_pages_contig(inode->i_mapping, locked_page, start, end, - PAGE_UNLOCK, NULL); + PAGE_UNLOCK); } static noinline int lock_delalloc_pages(struct inode *inode, struct page *locked_page, - u64 delalloc_start, - u64 delalloc_end) + u64 start, + u64 end) { - unsigned long index = delalloc_start >> PAGE_SHIFT; - unsigned long end_index = delalloc_end >> PAGE_SHIFT; - u64 processed_end = delalloc_start; - int ret; + struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); + struct address_space *mapping = inode->i_mapping; + pgoff_t start_index = start >> PAGE_SHIFT; + pgoff_t end_index = end >> PAGE_SHIFT; + pgoff_t index = start_index; + u64 processed_end = start; + struct folio_batch fbatch; - ASSERT(locked_page); if (index == locked_page->index && index == end_index) return 0; - ret = __process_pages_contig(inode->i_mapping, locked_page, delalloc_start, - delalloc_end, PAGE_LOCK, &processed_end); - if (ret == -EAGAIN && processed_end > delalloc_start) - __unlock_for_delalloc(inode, locked_page, delalloc_start, - processed_end); - return ret; + folio_batch_init(&fbatch); + while (index <= end_index) { + unsigned int found_folios, i; + + found_folios = filemap_get_folios_contig(mapping, &index, + end_index, &fbatch); + if (found_folios == 0) + goto out; + + for (i = 0; i < found_folios; i++) { + struct page *page = &fbatch.folios[i]->page; + u32 len = end + 1 - start; + + if (page == locked_page) + continue; + + if (btrfs_page_start_writer_lock(fs_info, page, start, + len)) + goto out; + + if (!PageDirty(page) || page->mapping != mapping) { + btrfs_page_end_writer_lock(fs_info, page, start, + len); + goto out; + } + + processed_end = page_offset(page) + PAGE_SIZE - 1; + } + folio_batch_release(&fbatch); + cond_resched(); + } + + return 0; +out: + folio_batch_release(&fbatch); + if (processed_end > start) + __unlock_for_delalloc(inode, locked_page, start, processed_end); + return -EAGAIN; } /* @@ -467,7 +436,7 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end, clear_extent_bit(&inode->io_tree, start, end, clear_bits, NULL); __process_pages_contig(inode->vfs_inode.i_mapping, locked_page, - start, end, page_ops, NULL); + start, end, page_ops); } static int btrfs_verify_page(struct page *page, u64 start) diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index daef9374c2095f..423853be57ed87 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -39,7 +39,6 @@ enum { ENUM_BIT(PAGE_START_WRITEBACK), ENUM_BIT(PAGE_END_WRITEBACK), ENUM_BIT(PAGE_SET_ORDERED), - ENUM_BIT(PAGE_LOCK), }; /*