From patchwork Wed May 31 06:04:50 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261494
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 01/16] btrfs: fix range_end calculation in extent_write_locked_range
Date: Wed, 31 May 2023 08:04:50 +0200
Message-Id: <20230531060505.468704-2-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>

The range_end field in struct writeback_control is inclusive, just like
the end parameter passed to extent_write_locked_range, so it must not
have 1 added to it. Passing an exclusive end could cause extra
writeout, which is harmless but suboptimal.

Fixes: 771ed689d2cd ("Btrfs: Optimize compressed writeback and reads")
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d5b3b15f7ab222..0726c82db309a2 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2310,7 +2310,7 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 	struct writeback_control wbc_writepages = {
 		.sync_mode	= WB_SYNC_ALL,
 		.range_start	= start,
-		.range_end	= end + 1,
+		.range_end	= end,
 		.no_cgroup_owner = 1,
 	};
 	struct btrfs_bio_ctrl bio_ctrl = {

From patchwork Wed May 31 06:04:51 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261495
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 02/16] btrfs: factor out a btrfs_verify_page helper
Date: Wed, 31 May 2023 08:04:51 +0200
Message-Id: <20230531060505.468704-3-hch@lst.de>

Split all the conditionals for the fsverity calls in end_page_read into
a btrfs_verify_page helper to keep the code readable and make
additional refactoring easier.
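The short-circuit behaviour the helper combines can be modelled in a
small userspace sketch; the struct and all names below are illustrative
stand-ins for the page state the kernel helper consults, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the conditions btrfs_verify_page() folds together.
 * Every field here is a hypothetical stand-in, not kernel API. */
struct toy_page {
	bool fsverity_active;   /* verity enabled on the inode */
	bool error;             /* PageError */
	bool uptodate;          /* PageUptodate */
	long long start;        /* byte offset of the range */
	long long i_size;       /* inode size */
	bool verity_ok;         /* what fsverity_verify_page() would say */
};

/* Mirrors the helper's shape: true means the range may be marked
 * uptodate, either because no verification is needed or it passed. */
static bool toy_verify_page(const struct toy_page *p)
{
	if (!p->fsverity_active || p->error || p->uptodate ||
	    p->start >= p->i_size)
		return true;
	return p->verity_ok;
}
```

The call site then reads like the patched end_page_read: `if (!toy_verify_page(p))` mark the range as an error, else mark it uptodate.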
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 0726c82db309a2..8e42ce48b52e4a 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -481,6 +481,15 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
 			       start, end, page_ops, NULL);
 }
 
+static bool btrfs_verify_page(struct page *page, u64 start)
+{
+	if (!fsverity_active(page->mapping->host) ||
+	    PageError(page) || PageUptodate(page) ||
+	    start >= i_size_read(page->mapping->host))
+		return true;
+	return fsverity_verify_page(page);
+}
+
 static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
 {
 	struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);
@@ -489,11 +498,7 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
 		    start + len <= page_offset(page) + PAGE_SIZE);
 
 	if (uptodate) {
-		if (fsverity_active(page->mapping->host) &&
-		    !PageError(page) &&
-		    !PageUptodate(page) &&
-		    start < i_size_read(page->mapping->host) &&
-		    !fsverity_verify_page(page)) {
+		if (!btrfs_verify_page(page, start)) {
 			btrfs_page_set_error(fs_info, page, start, len);
 		} else {
 			btrfs_page_set_uptodate(fs_info, page, start, len);

From patchwork Wed May 31 06:04:52 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261496
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 03/16] btrfs: fix fsverity read error handling in end_page_read
Date: Wed, 31 May 2023 08:04:52 +0200
Message-Id: <20230531060505.468704-4-hch@lst.de>

Also clear the uptodate bit to make sure the page isn't seen as
uptodate in the page cache if fsverity verification fails.
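The fixed invariant — a range becomes uptodate only when the read
succeeded and verification passed, everything else clears uptodate and
records an error — can be sketched as a toy model (names are
illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for the per-range page state. */
struct toy_range {
	bool uptodate;
	bool error;
};

/* Models the fixed end_page_read() logic: the failure branch both sets
 * the error and clears uptodate, so a page that failed verification is
 * never left looking valid in the page cache. */
static void toy_end_page_read(struct toy_range *r, bool read_ok, bool verify_ok)
{
	if (read_ok && verify_ok) {
		r->uptodate = true;
	} else {
		r->uptodate = false;	/* never leave a failed range uptodate */
		r->error = true;
	}
}
```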
Fixes: 146054090b08 ("btrfs: initial fsverity support")
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 8e42ce48b52e4a..a943a66224891a 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -497,12 +497,8 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
 	ASSERT(page_offset(page) <= start &&
 	       start + len <= page_offset(page) + PAGE_SIZE);
 
-	if (uptodate) {
-		if (!btrfs_verify_page(page, start)) {
-			btrfs_page_set_error(fs_info, page, start, len);
-		} else {
-			btrfs_page_set_uptodate(fs_info, page, start, len);
-		}
+	if (uptodate && btrfs_verify_page(page, start)) {
+		btrfs_page_set_uptodate(fs_info, page, start, len);
 	} else {
 		btrfs_page_clear_uptodate(fs_info, page, start, len);
 		btrfs_page_set_error(fs_info, page, start, len);

From patchwork Wed May 31 06:04:53 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261497
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 04/16] btrfs: don't check PageError in btrfs_verify_page
Date: Wed, 31 May 2023 08:04:53 +0200
Message-Id: <20230531060505.468704-5-hch@lst.de>

btrfs_verify_page is called from the readpage completion handler, which
is only used to read pages, or parts of pages, that aren't uptodate
yet. The only case where PageError could be set on a page in btrfs is
if we had a previous writeback error, but in that case we won't call
readpage on it, as it has previously been marked uptodate.
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a943a66224891a..178e8230c28af9 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -484,7 +484,7 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
 static bool btrfs_verify_page(struct page *page, u64 start)
 {
 	if (!fsverity_active(page->mapping->host) ||
-	    PageError(page) || PageUptodate(page) ||
+	    PageUptodate(page) ||
 	    start >= i_size_read(page->mapping->host))
 		return true;
 	return fsverity_verify_page(page);

From patchwork Wed May 31 06:04:54 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261498
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 05/16] btrfs: don't fail writeback when allocating the compression context fails
Date: Wed, 31 May 2023 08:04:54 +0200
Message-Id: <20230531060505.468704-6-hch@lst.de>

If cow_file_range_async fails to allocate the asynchronous writeback
context, it currently returns an error and entirely fails the
writeback. This is not a good idea, as a writeback failure is a
non-temporary error condition that will make the file system unusable.
Just fall back to synchronous uncompressed writeback instead. This
requires us to delay setting the BTRFS_INODE_HAS_ASYNC_EXTENT flag
until we've committed to the async writeback.
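The control-flow change can be sketched in a small userspace model: the
async setup reports through its boolean return value whether it took
over the range, and an allocation failure simply routes the caller to
the synchronous path instead of erroring out. Everything below (the
names, the malloc stand-in for kvmalloc) is illustrative, not btrfs
code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

static bool simulate_alloc_failure;	/* test hook for the allocation */

/* Returns true only when the async context was allocated and the async
 * path committed to; false means "caller, do it synchronously". */
static bool try_async_writeback(void)
{
	void *ctx = simulate_alloc_failure ? NULL : malloc(64);

	if (!ctx)
		return false;	/* no error: just decline the async path */
	/* ... would queue async compressed writeback here ... */
	free(ctx);
	return true;
}

/* Mirrors the fixed caller's shape: try compressed async writeback
 * first, and fall through to plain synchronous writeback otherwise. */
static const char *run_writeback(void)
{
	if (try_async_writeback())
		return "async-compressed";
	return "sync-uncompressed";
}
```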
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/inode.c | 74 ++++++++++++++++++------------------------
 1 file changed, 28 insertions(+), 46 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index e05978cb89cdce..b4b6bd621264cf 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1720,58 +1720,36 @@ static noinline void async_cow_free(struct btrfs_work *work)
 	kvfree(async_cow);
 }
 
-static int cow_file_range_async(struct btrfs_inode *inode,
-				struct writeback_control *wbc,
-				struct page *locked_page,
-				u64 start, u64 end, int *page_started,
-				unsigned long *nr_written)
+static bool cow_file_range_async(struct btrfs_inode *inode,
+				 struct writeback_control *wbc,
+				 struct page *locked_page,
+				 u64 start, u64 end, int *page_started,
+				 unsigned long *nr_written)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct cgroup_subsys_state *blkcg_css = wbc_blkcg_css(wbc);
 	struct async_cow *ctx;
 	struct async_chunk *async_chunk;
 	unsigned long nr_pages;
-	u64 cur_end;
 	u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
 	int i;
-	bool should_compress;
 	unsigned nofs_flag;
 	const blk_opf_t write_flags = wbc_to_write_flags(wbc);
 
-	unlock_extent(&inode->io_tree, start, end, NULL);
-
-	if (inode->flags & BTRFS_INODE_NOCOMPRESS &&
-	    !btrfs_test_opt(fs_info, FORCE_COMPRESS)) {
-		num_chunks = 1;
-		should_compress = false;
-	} else {
-		should_compress = true;
-	}
-
 	nofs_flag = memalloc_nofs_save();
 	ctx = kvmalloc(struct_size(ctx, chunks, num_chunks), GFP_KERNEL);
 	memalloc_nofs_restore(nofs_flag);
+	if (!ctx)
+		return false;
 
-	if (!ctx) {
-		unsigned clear_bits = EXTENT_LOCKED | EXTENT_DELALLOC |
-			EXTENT_DELALLOC_NEW | EXTENT_DEFRAG |
-			EXTENT_DO_ACCOUNTING;
-		unsigned long page_ops = PAGE_UNLOCK | PAGE_START_WRITEBACK |
-			PAGE_END_WRITEBACK | PAGE_SET_ERROR;
-
-		extent_clear_unlock_delalloc(inode, start, end, locked_page,
-					     clear_bits, page_ops);
-		return -ENOMEM;
-	}
+	unlock_extent(&inode->io_tree, start, end, NULL);
+	set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags);
 
 	async_chunk = ctx->chunks;
 	atomic_set(&ctx->num_chunks, num_chunks);
 
 	for (i = 0; i < num_chunks; i++) {
-		if (should_compress)
-			cur_end = min(end, start + SZ_512K - 1);
-		else
-			cur_end = end;
+		u64 cur_end = min(end, start + SZ_512K - 1);
 
 		/*
 		 * igrab is called higher up in the call chain, take only the
@@ -1832,7 +1810,7 @@ static int cow_file_range_async(struct btrfs_inode *inode,
 		start = cur_end + 1;
 	}
 	*page_started = 1;
-	return 0;
+	return true;
 }
 
 static noinline int run_delalloc_zoned(struct btrfs_inode *inode,
@@ -2413,8 +2391,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 			     u64 start, u64 end, int *page_started,
 			     unsigned long *nr_written,
 			     struct writeback_control *wbc)
 {
-	int ret;
 	const bool zoned = btrfs_is_zoned(inode->root->fs_info);
+	int ret = 0;
 
 	/*
 	 * The range must cover part of the @locked_page, or the returned
@@ -2434,19 +2412,23 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 		ASSERT(!zoned || btrfs_is_data_reloc_root(inode->root));
 		ret = run_delalloc_nocow(inode, locked_page, start, end,
 					 page_started, nr_written);
-	} else if (!btrfs_inode_can_compress(inode) ||
-		   !inode_need_compress(inode, start, end)) {
-		if (zoned)
-			ret = run_delalloc_zoned(inode, locked_page, start, end,
-						 page_started, nr_written);
-		else
-			ret = cow_file_range(inode, locked_page, start, end,
-					     page_started, nr_written, 1, NULL);
-	} else {
-		set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags);
-		ret = cow_file_range_async(inode, wbc, locked_page, start, end,
-					   page_started, nr_written);
+		goto out;
 	}
+
+	if (btrfs_inode_can_compress(inode) &&
+	    inode_need_compress(inode, start, end) &&
+	    cow_file_range_async(inode, wbc, locked_page, start,
+				 end, page_started, nr_written))
+		goto out;
+
+	if (zoned)
+		ret = run_delalloc_zoned(inode, locked_page, start, end,
+					 page_started, nr_written);
+	else
+		ret = cow_file_range(inode, locked_page, start, end,
+				     page_started, nr_written, 1, NULL);
+
+out:
 	ASSERT(ret <= 0);
 	if (ret)
 		btrfs_cleanup_ordered_extents(inode, locked_page, start,

From patchwork Wed May 31 06:04:55 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261499
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org, Anand Jain
Subject: [PATCH 06/16] btrfs: rename cow_file_range_async to run_delalloc_compressed
Date: Wed, 31 May 2023 08:04:55 +0200
Message-Id: <20230531060505.468704-7-hch@lst.de>

cow_file_range_async is only used for compressed writeback. Rename it
to run_delalloc_compressed, which also fits in with run_delalloc_nocow
and run_delalloc_zoned.

Reviewed-by: Anand Jain
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/inode.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index b4b6bd621264cf..9a7c0564fb7160 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1693,7 +1693,7 @@ static noinline void async_cow_submit(struct btrfs_work *work)
 	 * ->inode could be NULL if async_chunk_start has failed to compress,
 	 * in which case we don't have anything to submit, yet we need to
 	 * always adjust ->async_delalloc_pages as its paired with the init
-	 * happening in cow_file_range_async
+	 * happening in run_delalloc_compressed
 	 */
 	if (async_chunk->inode)
 		submit_compressed_extents(async_chunk);
@@ -1720,11 +1720,11 @@ static noinline void async_cow_free(struct btrfs_work *work)
 	kvfree(async_cow);
 }
 
-static bool cow_file_range_async(struct btrfs_inode *inode,
-				 struct writeback_control *wbc,
-				 struct page *locked_page,
-				 u64 start, u64 end, int *page_started,
-				 unsigned long *nr_written)
+static bool run_delalloc_compressed(struct btrfs_inode *inode,
+				    struct writeback_control *wbc,
+				    struct page *locked_page,
+				    u64 start, u64 end, int *page_started,
+				    unsigned long *nr_written)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct cgroup_subsys_state *blkcg_css = wbc_blkcg_css(wbc);
@@ -2417,8 +2417,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 
 	if (btrfs_inode_can_compress(inode) &&
 	    inode_need_compress(inode, start, end) &&
-	    cow_file_range_async(inode, wbc, locked_page, start,
-				 end, page_started, nr_written))
+	    run_delalloc_compressed(inode, wbc, locked_page, start,
+				    end, page_started, nr_written))
 		goto out;
 
 	if (zoned)

From patchwork Wed May 31 06:04:56 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261500
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 07/16] btrfs: don't check PageError in __extent_writepage
Date: Wed, 31 May 2023 08:04:56 +0200
Message-Id: <20230531060505.468704-8-hch@lst.de>

__extent_writepage currently sets PageError whenever any error happens,
and then also checks for PageError to decide whether to call error
handling. This leads to very unclear responsibility for cleaning up on
errors. In the VM and generic writeback helpers the basic idea is that
once I/O is fired off, all error handling responsibility is delegated
to the end I/O handler. But if that end I/O handler sets the PageError
bit, and the submitter checks it, the bit could in some cases leak into
the submission context for fast enough I/O.

Fix this by simply not checking PageError and just using the local ret
variable to check for submission errors. This also fundamentally solves
the long-standing problem documented in a comment in __extent_writepage
by never leaking the error bit into the submission context.
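The submission-side rule the patch enforces — act only on your own
return value, never on completion state shared with the end I/O handler
— can be sketched with a toy model. Nothing here is kernel API; the
shared flag stands in for PageError and the "fast completion" for a bio
that finishes before the submitter returns:

```c
#include <assert.h>
#include <stdbool.h>

/* Shared completion state: in the buggy pattern the submitter peeked
 * at this flag, which a fast completion may already have set. */
static bool page_error_flag;

/* Toy submission: a "fast" completion may run immediately and record
 * its own failure in the shared flag; the return value reflects only
 * whether submission itself failed. */
static int submit_io(bool submit_fails, bool fast_completion_fails)
{
	if (fast_completion_fails)
		page_error_flag = true;	/* completion already ran, failed */
	return submit_fails ? -5 : 0;	/* local submission result only */
}

/* Fixed pattern: cleanup is driven purely by the local return value,
 * so a completion-side error never leaks into submission handling. */
static int toy_writepage(bool submit_fails, bool fast_completion_fails,
			 int *cleanups_ran)
{
	int ret = submit_io(submit_fails, fast_completion_fails);

	if (ret)	/* NOT: if (page_error_flag) */
		(*cleanups_ran)++;
	return ret;
}
```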
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 33 +--------------------------------
 1 file changed, 1 insertion(+), 32 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 178e8230c28af9..33e5f0a31c21a7 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1557,38 +1557,7 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
 		set_page_writeback(page);
 		end_page_writeback(page);
 	}
-	/*
-	 * Here we used to have a check for PageError() and then set @ret and
-	 * call end_extent_writepage().
-	 *
-	 * But in fact setting @ret here will cause different error paths
-	 * between subpage and regular sectorsize.
-	 *
-	 * For regular page size, we never submit current page, but only add
-	 * current page to current bio.
-	 * The bio submission can only happen in next page.
-	 * Thus if we hit the PageError() branch, @ret is already set to
-	 * non-zero value and will not get updated for regular sectorsize.
-	 *
-	 * But for subpage case, it's possible we submit part of current page,
-	 * thus can get PageError() set by submitted bio of the same page,
-	 * while our @ret is still 0.
-	 *
-	 * So here we unify the behavior and don't set @ret.
-	 * Error can still be properly passed to higher layer as page will
-	 * be set error, here we just don't handle the IO failure.
-	 *
-	 * NOTE: This is just a hotfix for subpage.
-	 * The root fix will be properly ending ordered extent when we hit
-	 * an error during writeback.
-	 *
-	 * But that needs a bigger refactoring, as we not only need to grab the
-	 * submitted OE, but also need to know exactly at which bytenr we hit
-	 * the error.
-	 * Currently the full page based __extent_writepage_io() is not
-	 * capable of that.
-	 */
-	if (PageError(page))
+	if (ret)
 		end_extent_writepage(page, ret, page_start, page_end);
 	if (bio_ctrl->extent_locked) {
 		struct writeback_control *wbc = bio_ctrl->wbc;

From patchwork Wed May 31 06:04:57 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261501
Linux)) id 1q4Exh-00GF6K-14; Wed, 31 May 2023 06:05:33 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 08/16] btrfs: stop setting PageError in the data I/O path Date: Wed, 31 May 2023 08:04:57 +0200 Message-Id: <20230531060505.468704-9-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230531060505.468704-1-hch@lst.de> References: <20230531060505.468704-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org PageError is not used by the VFS/MM and deprecated because it uses up a page bit and has no coherent rules. Instead read errors are usually propagated by not setting or clearing the uptodate bit, and write errors are propagated through the address_space. Btrfs now only sets the flag and never clears it for data pages, so just remove all places setting it, and the subpage error bit. Note that the error propagation for superblock writes that work on the block device mapping still uses PageError for now, but that will be addressed in a separate series. 
Signed-off-by: Christoph Hellwig --- fs/btrfs/compression.c | 2 -- fs/btrfs/extent_io.c | 24 +++++------------------- fs/btrfs/inode.c | 3 --- fs/btrfs/subpage.c | 35 ----------------------------------- fs/btrfs/subpage.h | 10 ++++------ 5 files changed, 9 insertions(+), 65 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 04cd5de4f00f60..bb2d1a4ceca12f 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -211,8 +211,6 @@ static noinline void end_compressed_writeback(const struct compressed_bio *cb) for (i = 0; i < ret; i++) { struct folio *folio = fbatch.folios[i]; - if (errno) - folio_set_error(folio); btrfs_page_clamp_clear_writeback(fs_info, &folio->page, cb->start, cb->len); } diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 33e5f0a31c21a7..b7f26a4bb663cd 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -223,8 +223,6 @@ static int process_one_page(struct btrfs_fs_info *fs_info, if (page_ops & PAGE_SET_ORDERED) btrfs_page_clamp_set_ordered(fs_info, page, start, len); - if (page_ops & PAGE_SET_ERROR) - btrfs_page_clamp_set_error(fs_info, page, start, len); if (page_ops & PAGE_START_WRITEBACK) { btrfs_page_clamp_clear_dirty(fs_info, page, start, len); btrfs_page_clamp_set_writeback(fs_info, page, start, len); @@ -497,12 +495,10 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) ASSERT(page_offset(page) <= start && start + len <= page_offset(page) + PAGE_SIZE); - if (uptodate && btrfs_verify_page(page, start)) { + if (uptodate && btrfs_verify_page(page, start)) btrfs_page_set_uptodate(fs_info, page, start, len); - } else { + else btrfs_page_clear_uptodate(fs_info, page, start, len); - btrfs_page_set_error(fs_info, page, start, len); - } if (!btrfs_is_subpage(fs_info, page)) unlock_page(page); @@ -530,7 +526,6 @@ void end_extent_writepage(struct page *page, int err, u64 start, u64 end) len = end + 1 - start; btrfs_page_clear_uptodate(fs_info, page, start, len); 
- btrfs_page_set_error(fs_info, page, start, len); ret = err < 0 ? err : -EIO; mapping_set_error(page->mapping, ret); } @@ -1059,7 +1054,6 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, ret = set_page_extent_mapped(page); if (ret < 0) { unlock_extent(tree, start, end, NULL); - btrfs_page_set_error(fs_info, page, start, PAGE_SIZE); unlock_page(page); return ret; } @@ -1263,11 +1257,9 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, } ret = btrfs_run_delalloc_range(inode, page, delalloc_start, delalloc_end, &page_started, &nr_written, wbc); - if (ret) { - btrfs_page_set_error(inode->root->fs_info, page, - page_offset(page), PAGE_SIZE); + if (ret) return ret; - } + /* * delalloc_end is already one less than the total length, so * we don't subtract one from PAGE_SIZE @@ -1420,7 +1412,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, em = btrfs_get_extent(inode, NULL, 0, cur, end - cur + 1); if (IS_ERR(em)) { - btrfs_page_set_error(fs_info, page, cur, end - cur + 1); ret = PTR_ERR_OR_ZERO(em); goto out_error; } @@ -1519,9 +1510,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl WARN_ON(!PageLocked(page)); - btrfs_page_clear_error(btrfs_sb(inode->i_sb), page, - page_offset(page), PAGE_SIZE); - pg_offset = offset_in_page(i_size); if (page->index > end_index || (page->index == end_index && !pg_offset)) { @@ -1534,10 +1522,8 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl memzero_page(page, pg_offset, PAGE_SIZE - pg_offset); ret = set_page_extent_mapped(page); - if (ret < 0) { - SetPageError(page); + if (ret < 0) goto done; - } if (!bio_ctrl->extent_locked) { ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 9a7c0564fb7160..2e1e1f18649d90 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1153,8 +1153,6 @@ static int 
submit_uncompressed_range(struct btrfs_inode *inode, const u64 page_start = page_offset(locked_page); const u64 page_end = page_start + PAGE_SIZE - 1; - btrfs_page_set_error(inode->root->fs_info, locked_page, - page_start, PAGE_SIZE); set_page_writeback(locked_page); end_page_writeback(locked_page); end_extent_writepage(locked_page, ret, page_start, page_end); @@ -2942,7 +2940,6 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work) mapping_set_error(page->mapping, ret); end_extent_writepage(page, ret, page_start, page_end); clear_page_dirty_for_io(page); - SetPageError(page); } btrfs_page_clear_checked(inode->root->fs_info, page, page_start, PAGE_SIZE); unlock_page(page); diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c index 74bc43040531fd..1b999c6e419307 100644 --- a/fs/btrfs/subpage.c +++ b/fs/btrfs/subpage.c @@ -100,9 +100,6 @@ void btrfs_init_subpage_info(struct btrfs_subpage_info *subpage_info, u32 sector subpage_info->uptodate_offset = cur; cur += nr_bits; - subpage_info->error_offset = cur; - cur += nr_bits; - subpage_info->dirty_offset = cur; cur += nr_bits; @@ -416,35 +413,6 @@ void btrfs_subpage_clear_uptodate(const struct btrfs_fs_info *fs_info, spin_unlock_irqrestore(&subpage->lock, flags); } -void btrfs_subpage_set_error(const struct btrfs_fs_info *fs_info, - struct page *page, u64 start, u32 len) -{ - struct btrfs_subpage *subpage = (struct btrfs_subpage *)page->private; - unsigned int start_bit = subpage_calc_start_bit(fs_info, page, - error, start, len); - unsigned long flags; - - spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); - SetPageError(page); - spin_unlock_irqrestore(&subpage->lock, flags); -} - -void btrfs_subpage_clear_error(const struct btrfs_fs_info *fs_info, - struct page *page, u64 start, u32 len) -{ - struct btrfs_subpage *subpage = (struct btrfs_subpage *)page->private; - unsigned int start_bit = subpage_calc_start_bit(fs_info, page, - error, 
start, len); - unsigned long flags; - - spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); - if (subpage_test_bitmap_all_zero(fs_info, subpage, error)) - ClearPageError(page); - spin_unlock_irqrestore(&subpage->lock, flags); -} - void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info, struct page *page, u64 start, u32 len) { @@ -606,7 +574,6 @@ bool btrfs_subpage_test_##name(const struct btrfs_fs_info *fs_info, \ return ret; \ } IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(uptodate); -IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(error); IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(dirty); IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(writeback); IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(ordered); @@ -674,7 +641,6 @@ bool btrfs_page_clamp_test_##name(const struct btrfs_fs_info *fs_info, \ } IMPLEMENT_BTRFS_PAGE_OPS(uptodate, SetPageUptodate, ClearPageUptodate, PageUptodate); -IMPLEMENT_BTRFS_PAGE_OPS(error, SetPageError, ClearPageError, PageError); IMPLEMENT_BTRFS_PAGE_OPS(dirty, set_page_dirty, clear_page_dirty_for_io, PageDirty); IMPLEMENT_BTRFS_PAGE_OPS(writeback, set_page_writeback, end_page_writeback, @@ -769,7 +735,6 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info, spin_lock_irqsave(&subpage->lock, flags); GET_SUBPAGE_BITMAP(subpage, subpage_info, uptodate, &uptodate_bitmap); - GET_SUBPAGE_BITMAP(subpage, subpage_info, error, &error_bitmap); GET_SUBPAGE_BITMAP(subpage, subpage_info, dirty, &dirty_bitmap); GET_SUBPAGE_BITMAP(subpage, subpage_info, writeback, &writeback_bitmap); GET_SUBPAGE_BITMAP(subpage, subpage_info, ordered, &ordered_bitmap); diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h index 5905caea840970..5cbf67ccbdeb1d 100644 --- a/fs/btrfs/subpage.h +++ b/fs/btrfs/subpage.h @@ -8,17 +8,17 @@ /* * Extra info for subpapge bitmap. * - * For subpage we pack all uptodate/error/dirty/writeback/ordered bitmaps into + * For subpage we pack all uptodate/dirty/writeback/ordered bitmaps into * one larger bitmap. 
* * This structure records how they are organized in the bitmap: * - * /- uptodate_offset /- error_offset /- dirty_offset + * /- uptodate_offset /- dirty_offset /- ordered_offset * | | | * v v v - * |u|u|u|u|........|u|u|e|e|.......|e|e| ... |o|o| + * |u|u|u|u|........|u|u|d|d|.......|d|d|o|o|.......|o|o| * |<- bitmap_nr_bits ->| - * |<--------------- total_nr_bits ---------------->| + * |<----------------- total_nr_bits ------------------>| */ struct btrfs_subpage_info { /* Number of bits for each bitmap */ @@ -32,7 +32,6 @@ struct btrfs_subpage_info { * @bitmap_size, which is calculated from PAGE_SIZE / sectorsize. */ unsigned int uptodate_offset; - unsigned int error_offset; unsigned int dirty_offset; unsigned int writeback_offset; unsigned int ordered_offset; @@ -141,7 +140,6 @@ bool btrfs_page_clamp_test_##name(const struct btrfs_fs_info *fs_info, \ struct page *page, u64 start, u32 len); DECLARE_BTRFS_SUBPAGE_OPS(uptodate); -DECLARE_BTRFS_SUBPAGE_OPS(error); DECLARE_BTRFS_SUBPAGE_OPS(dirty); DECLARE_BTRFS_SUBPAGE_OPS(writeback); DECLARE_BTRFS_SUBPAGE_OPS(ordered);

From patchwork Wed May 31 06:04:58 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261502
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 09/16] btrfs: remove PAGE_SET_ERROR
Date: Wed, 31 May 2023 08:04:58 +0200
Message-Id: <20230531060505.468704-10-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

Now that the btrfs writeback code has stopped using PageError, using PAGE_SET_ERROR to just set the per-address_space error flag is confusing. Just open code the mapping_set_error calls in the callers and remove the PAGE_SET_ERROR flag.
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 3 --- fs/btrfs/extent_io.h | 1 - fs/btrfs/inode.c | 11 ++++++----- 3 files changed, 6 insertions(+), 9 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index b7f26a4bb663cd..09a9973c27ccfb 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -268,9 +268,6 @@ static int __process_pages_contig(struct address_space *mapping, ASSERT(processed_end && *processed_end == start); } - if ((page_ops & PAGE_SET_ERROR) && start_index <= end_index) - mapping_set_error(mapping, -EIO); - folio_batch_init(&fbatch); while (index <= end_index) { int found_folios; diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 2d91ca91dca5c1..6723bf3483d9f9 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -40,7 +40,6 @@ enum { ENUM_BIT(PAGE_START_WRITEBACK), ENUM_BIT(PAGE_END_WRITEBACK), ENUM_BIT(PAGE_SET_ORDERED), - ENUM_BIT(PAGE_SET_ERROR), ENUM_BIT(PAGE_LOCK), }; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 2e1e1f18649d90..ea9e880c8cee76 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -835,6 +835,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) { struct btrfs_inode *inode = async_chunk->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; + struct address_space *mapping = inode->vfs_inode.i_mapping; u64 blocksize = fs_info->sectorsize; u64 start = async_chunk->start; u64 end = async_chunk->end; @@ -949,7 +950,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) /* Compression level is applied here and only here */ ret = btrfs_compress_pages( compress_type | (fs_info->compress_level << 4), - inode->vfs_inode.i_mapping, start, + mapping, start, pages, &nr_pages, &total_in, @@ -992,9 +993,9 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) unsigned long clear_flags = EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING; - unsigned long page_error_op; - 
page_error_op = ret < 0 ? PAGE_SET_ERROR : 0; + if (ret < 0) + mapping_set_error(mapping, -EIO); /* * inline extent creation worked or returned error, @@ -1011,7 +1012,6 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) clear_flags, PAGE_UNLOCK | PAGE_START_WRITEBACK | - page_error_op | PAGE_END_WRITEBACK); /* @@ -1271,12 +1271,13 @@ static int submit_one_async_extent(struct btrfs_inode *inode, btrfs_dec_block_group_reservations(fs_info, ins.objectid); btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); out_free: + mapping_set_error(inode->vfs_inode.i_mapping, -EIO); extent_clear_unlock_delalloc(inode, start, end, NULL, EXTENT_LOCKED | EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING, PAGE_UNLOCK | PAGE_START_WRITEBACK | - PAGE_END_WRITEBACK | PAGE_SET_ERROR); + PAGE_END_WRITEBACK); free_async_extent_pages(async_extent); goto done; }

From patchwork Wed May 31 06:04:59 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261503
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 10/16] btrfs: remove non-standard extent handling in __extent_writepage_io
Date: Wed, 31 May 2023 08:04:59 +0200
Message-Id: <20230531060505.468704-11-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

__extent_writepage_io is never called for compressed or inline extents, or holes. Remove the not quite working code for them and replace it with asserts that these cases don't happen.
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 23 +++++------------------ 1 file changed, 5 insertions(+), 18 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 09a9973c27ccfb..a2e1dbd9b92309 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1361,7 +1361,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, struct extent_map *em; int ret = 0; int nr = 0; - bool compressed; ret = btrfs_writepage_cow_fixup(page); if (ret) { @@ -1419,10 +1418,14 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, ASSERT(cur < end); ASSERT(IS_ALIGNED(em->start, fs_info->sectorsize)); ASSERT(IS_ALIGNED(em->len, fs_info->sectorsize)); + block_start = em->block_start; - compressed = test_bit(EXTENT_FLAG_COMPRESSED, &em->flags); disk_bytenr = em->block_start + extent_offset; + ASSERT(!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)); + ASSERT(block_start != EXTENT_MAP_HOLE); + ASSERT(block_start != EXTENT_MAP_INLINE); + /* * Note that em_end from extent_map_end() and dirty_range_end from * find_next_dirty_byte() are all exclusive @@ -1431,22 +1434,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, free_extent_map(em); em = NULL; - /* - * compressed and inline extents are written through other - * paths in the FS - */ - if (compressed || block_start == EXTENT_MAP_HOLE || - block_start == EXTENT_MAP_INLINE) { - if (compressed) - nr++; - else - btrfs_writepage_endio_finish_ordered(inode, - page, cur, cur + iosize - 1, true); - btrfs_page_clear_dirty(fs_info, page, cur, iosize); - cur += iosize; - continue; - } - btrfs_set_range_writeback(inode, cur, cur + iosize - 1); if (!PageWriteback(page)) { btrfs_err(inode->root->fs_info,

From patchwork Wed May 31 06:05:00 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261504
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org, Anand Jain
Subject: [PATCH 11/16] btrfs: move nr_to_write to __extent_writepage
Date: Wed, 31 May 2023 08:05:00 +0200
Message-Id: <20230531060505.468704-12-hch@lst.de>
In-Reply-To:
<20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

Move the nr_to_write accounting from __extent_writepage_io to __extent_writepage, as we'll soon grow another caller of __extent_writepage_io that doesn't want this accounting. Also drop the obsolete comment - decrementing a counter in the on-stack writeback_control data structure doesn't need the page lock.

Reviewed-by: Anand Jain Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index a2e1dbd9b92309..f773ee12ce1cee 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1370,12 +1370,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, return 1; } - /* - * we don't want to touch the inode after unlocking the page, - * so we update the mapping writeback index now - */ - bio_ctrl->wbc->nr_to_write--; - bio_ctrl->end_io_func = end_bio_extent_writepage; while (cur <= end) { u64 disk_bytenr; @@ -1521,6 +1515,8 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl if (ret == 1) return 0; + bio_ctrl->wbc->nr_to_write--; + done: if (nr == 0) { /* make sure the mapping tag for page dirty gets cleared */

From patchwork Wed May 31 06:05:01 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261505
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 12/16] btrfs: only call __extent_writepage_io from extent_write_locked_range
Date: Wed, 31 May 2023 08:05:01 +0200
Message-Id: <20230531060505.468704-13-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>
__extent_writepage does a lot of things that make no sense for extent_write_locked_range, given that extent_write_locked_range itself is called from __extent_writepage either directly or through a workqueue, and all this work has already been done in the first invocation and the pages haven't been unlocked since. Just call __extent_writepage_io directly instead.

Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 65 ++++++++++++++++++-------------------------- 1 file changed, 26 insertions(+), 39 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index f773ee12ce1cee..1da247e753b08a 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -103,12 +103,6 @@ struct btrfs_bio_ctrl { blk_opf_t opf; btrfs_bio_end_io_t end_io_func; struct writeback_control *wbc; - - /* - * Tell writepage not to lock the state bits for this range, it still - * does the unlocking.
- */ - bool extent_locked; }; static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl) @@ -1475,7 +1469,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl { struct folio *folio = page_folio(page); struct inode *inode = page->mapping->host; - struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); const u64 page_start = page_offset(page); const u64 page_end = page_start + PAGE_SIZE - 1; int ret; @@ -1503,13 +1496,11 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl if (ret < 0) goto done; - if (!bio_ctrl->extent_locked) { - ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc); - if (ret == 1) - return 0; - if (ret) - goto done; - } + ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc); + if (ret == 1) + return 0; + if (ret) + goto done; ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size, &nr); if (ret == 1) @@ -1525,21 +1516,7 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl } if (ret) end_extent_writepage(page, ret, page_start, page_end); - if (bio_ctrl->extent_locked) { - struct writeback_control *wbc = bio_ctrl->wbc; - - /* - * If bio_ctrl->extent_locked, it's from extent_write_locked_range(), - * the page can either be locked by lock_page() or - * process_one_page(). - * Let btrfs_page_unlock_writer() handle both cases. 
- */ - ASSERT(wbc); - btrfs_page_unlock_writer(fs_info, page, wbc->range_start, - wbc->range_end + 1 - wbc->range_start); - } else { - unlock_page(page); - } + unlock_page(page); ASSERT(ret <= 0); return ret; } @@ -2239,10 +2216,10 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) int first_error = 0; int ret = 0; struct address_space *mapping = inode->i_mapping; - struct page *page; + struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); + const u32 sectorsize = fs_info->sectorsize; + loff_t i_size = i_size_read(inode); u64 cur = start; - unsigned long nr_pages; - const u32 sectorsize = btrfs_sb(inode->i_sb)->sectorsize; struct writeback_control wbc_writepages = { .sync_mode = WB_SYNC_ALL, .range_start = start, @@ -2254,17 +2231,15 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) /* We're called from an async helper function */ .opf = REQ_OP_WRITE | REQ_BTRFS_CGROUP_PUNT | wbc_to_write_flags(&wbc_writepages), - .extent_locked = 1, }; ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize)); - nr_pages = (round_up(end, PAGE_SIZE) - round_down(start, PAGE_SIZE)) >> - PAGE_SHIFT; - wbc_writepages.nr_to_write = nr_pages * 2; wbc_attach_fdatawrite_inode(&wbc_writepages, inode); while (cur <= end) { u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end); + struct page *page; + int nr = 0; page = find_get_page(mapping, cur >> PAGE_SHIFT); /* @@ -2275,12 +2250,25 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) ASSERT(PageLocked(page)); ASSERT(PageDirty(page)); clear_page_dirty_for_io(page); - ret = __extent_writepage(page, &bio_ctrl); - ASSERT(ret <= 0); + + ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl, + i_size, &nr); + if (ret == 1) + goto next_page; + + /* Make sure the mapping tag for page dirty gets cleared. 
*/ + if (nr == 0) { + set_page_writeback(page); + end_page_writeback(page); + } + if (ret) + end_extent_writepage(page, ret, cur, cur_end); + btrfs_page_unlock_writer(fs_info, page, cur, cur_end + 1 - cur); if (ret < 0) { found_error = true; first_error = ret; } +next_page: put_page(page); cur = cur_end + 1; } @@ -2301,7 +2289,6 @@ int extent_writepages(struct address_space *mapping, struct btrfs_bio_ctrl bio_ctrl = { .wbc = wbc, .opf = REQ_OP_WRITE | wbc_to_write_flags(wbc), - .extent_locked = 0, }; /*

From patchwork Wed May 31 06:05:02 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261506
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 13/16] btrfs: don't treat zoned writeback as being from an async helper thread
Date: Wed, 31 May 2023 08:05:02 +0200
Message-Id: <20230531060505.468704-14-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

When extent_write_locked_range was originally added, it was only used for writing back compressed pages from an async helper thread. But it is now also used for writing back pages on zoned devices, where it is called directly from the ->writepage context. In this case we want to be able to pass on the writeback_control instead of creating a new one, and more importantly want to use all the normal cgroup interaction instead of potentially deferring writeback to another helper.
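The resulting flag selection can be sketched with mock types; the `REQ_*` values and the two-field `writeback_control` below are illustrative stand-ins, not the kernel's definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the kernel's request-flag constants. */
#define REQ_OP_WRITE          (1u << 0)
#define REQ_SYNC              (1u << 1)
#define REQ_BTRFS_CGROUP_PUNT (1u << 2)

/* Toy subset of struct writeback_control, enough for the sketch. */
struct writeback_control {
    bool sync_all;        /* stand-in for sync_mode == WB_SYNC_ALL */
    bool no_cgroup_owner;
};

/* Mirrors wbc_to_write_flags() in spirit: sync writeback adds REQ_SYNC. */
static unsigned int wbc_to_write_flags(const struct writeback_control *wbc)
{
    return wbc->sync_all ? REQ_SYNC : 0;
}

/*
 * After the patch, extent_write_locked_range derives its bio flags from
 * the caller's wbc and only punts writeback to a cgroup helper when the
 * caller asked for it via no_cgroup_owner, instead of unconditionally.
 */
static unsigned int locked_range_opf(const struct writeback_control *wbc)
{
    unsigned int opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);

    if (wbc->no_cgroup_owner)
        opf |= REQ_BTRFS_CGROUP_PUNT;
    return opf;
}
```

For zoned writeback entered from the ->writepage path the caller's wbc does not set no_cgroup_owner, so the punt flag is no longer applied there; the compressed async path keeps it by setting the field in its own wbc.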
Fixes: 898793d992c2 ("btrfs: zoned: write out partially allocated region") Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 20 +++++++------------- fs/btrfs/extent_io.h | 3 ++- fs/btrfs/inode.c | 20 +++++++++++++++----- 3 files changed, 24 insertions(+), 19 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 1da247e753b08a..f4d3c56b29009b 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2210,7 +2210,8 @@ static int extent_write_cache_pages(struct address_space *mapping, * already been ran (aka, ordered extent inserted) and all pages are still * locked. */ -int extent_write_locked_range(struct inode *inode, u64 start, u64 end) +int extent_write_locked_range(struct inode *inode, u64 start, u64 end, + struct writeback_control *wbc) { bool found_error = false; int first_error = 0; @@ -2220,22 +2221,16 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) const u32 sectorsize = fs_info->sectorsize; loff_t i_size = i_size_read(inode); u64 cur = start; - struct writeback_control wbc_writepages = { - .sync_mode = WB_SYNC_ALL, - .range_start = start, - .range_end = end, - .no_cgroup_owner = 1, - }; struct btrfs_bio_ctrl bio_ctrl = { - .wbc = &wbc_writepages, - /* We're called from an async helper function */ - .opf = REQ_OP_WRITE | REQ_BTRFS_CGROUP_PUNT | - wbc_to_write_flags(&wbc_writepages), + .wbc = wbc, + .opf = REQ_OP_WRITE | wbc_to_write_flags(wbc), }; + if (wbc->no_cgroup_owner) + bio_ctrl.opf |= REQ_BTRFS_CGROUP_PUNT; + ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize)); - wbc_attach_fdatawrite_inode(&wbc_writepages, inode); while (cur <= end) { u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end); struct page *page; @@ -2275,7 +2270,6 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) submit_write_bio(&bio_ctrl, found_error ? 
ret : 0); - wbc_detach_inode(&wbc_writepages); if (found_error) return first_error; return ret; diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 6723bf3483d9f9..c5fae3a7d911bf 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -178,7 +178,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask); int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); -int extent_write_locked_range(struct inode *inode, u64 start, u64 end); +int extent_write_locked_range(struct inode *inode, u64 start, u64 end, + struct writeback_control *wbc); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index ea9e880c8cee76..54b4b241b354fc 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1133,6 +1133,12 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, unsigned long nr_written = 0; int page_started = 0; int ret; + struct writeback_control wbc = { + .sync_mode = WB_SYNC_ALL, + .range_start = start, + .range_end = end, + .no_cgroup_owner = 1, + }; /* * Call cow_file_range() to run the delalloc range directly, since we @@ -1162,7 +1168,10 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, } /* All pages will be unlocked, including @locked_page */ - return extent_write_locked_range(&inode->vfs_inode, start, end); + wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); + ret = extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); + wbc_detach_inode(&wbc); + return ret; } static int submit_one_async_extent(struct btrfs_inode *inode, @@ -1815,7 +1824,8 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode, static noinline int run_delalloc_zoned(struct btrfs_inode *inode, struct page *locked_page, u64 start, u64 end, int *page_started, - unsigned long *nr_written) + unsigned long *nr_written, + struct 
writeback_control *wbc) { u64 done_offset = end; int ret; @@ -1847,8 +1857,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, account_page_redirty(locked_page); } locked_page_done = true; - extent_write_locked_range(&inode->vfs_inode, start, done_offset); - + extent_write_locked_range(&inode->vfs_inode, start, done_offset, + wbc); start = done_offset + 1; } @@ -2422,7 +2432,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page if (zoned) ret = run_delalloc_zoned(inode, locked_page, start, end, - page_started, nr_written); + page_started, nr_written, wbc); else ret = cow_file_range(inode, locked_page, start, end, page_started, nr_written, 1, NULL);

From patchwork Wed May 31 06:05:03 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261507
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 14/16] btrfs: don't redirty the locked page for extent_write_locked_range
Date: Wed, 31 May 2023 08:05:03 +0200
Message-Id: <20230531060505.468704-15-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

Instead of redirtying the locked page before calling extent_write_locked_range, just pass a locked_page argument similar to many other functions in the btrfs writeback code, and then exclude the locked page from clearing the dirty bit in extent_write_locked_range.
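The loop change can be sketched with a toy page model (the one-field `struct page` and the helper below are illustrative, not the kernel's API): every page in the range still has its dirty bit set, except the locked page, which extent_write_cache_pages already ran through clear_page_dirty_for_io(), so it is the one page that must be skipped.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy page model; the real code operates on struct page flag bits. */
struct page {
    bool dirty;
};

/*
 * Sketch of the patched loop: clear the dirty bit on every page in the
 * range except the locked page, whose dirty bit was already cleared by
 * the caller before it was handed to us.
 */
static void write_locked_range(struct page *pages, size_t n,
                               struct page *locked_page)
{
    for (size_t i = 0; i < n; i++) {
        struct page *page = &pages[i];

        if (page != locked_page) {
            assert(page->dirty);  /* mirrors ASSERT(PageDirty(page)) */
            page->dirty = false;  /* mirrors clear_page_dirty_for_io() */
        }
        /* ... submit the page for writeback ... */
    }
}
```

This is why the callers no longer need `__set_page_dirty_nobuffers()` on the locked page: redirtying existed only so the unconditional clear in the old loop would not trip over an already-clean page.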
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 17 ++++++++++------- fs/btrfs/extent_io.h | 3 ++- fs/btrfs/inode.c | 25 ++++++------------------- 3 files changed, 18 insertions(+), 27 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index f4d3c56b29009b..bc061b2ac55a12 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2210,8 +2210,8 @@ static int extent_write_cache_pages(struct address_space *mapping, * already been ran (aka, ordered extent inserted) and all pages are still * locked. */ -int extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc) +int extent_write_locked_range(struct inode *inode, struct page *locked_page, + u64 start, u64 end, struct writeback_control *wbc) { bool found_error = false; int first_error = 0; @@ -2237,14 +2237,17 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end, int nr = 0; page = find_get_page(mapping, cur >> PAGE_SHIFT); + /* - * All pages in the range are locked since - * btrfs_run_delalloc_range(), thus there is no way to clear - * the page dirty flag. + * All pages have been locked by btrfs_run_delalloc_range(), + * thus the dirty bit can't have been cleared. 
*/ ASSERT(PageLocked(page)); - ASSERT(PageDirty(page)); - clear_page_dirty_for_io(page); + if (page != locked_page) { + /* already cleared by extent_write_cache_pages */ + ASSERT(PageDirty(page)); + clear_page_dirty_for_io(page); + } ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl, i_size, &nr); diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index c5fae3a7d911bf..00c468aea010a1 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -178,7 +178,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask); int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); -int extent_write_locked_range(struct inode *inode, u64 start, u64 end, +int extent_write_locked_range(struct inode *inode, struct page *locked_page, + u64 start, u64 end, struct writeback_control *wbc); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 54b4b241b354fc..68ae20a3f785e3 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1088,17 +1088,9 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) cleanup_and_bail_uncompressed: /* * No compression, but we still need to write the pages in the file - * we've been given so far. redirty the locked page if it corresponds - * to our extent and set things up for the async work queue to run - * cow_file_range to do the normal delalloc dance. + * we've been given so far. Set things up for the async work queue to + * run cow_file_range to do the normal delalloc dance. 
*/ - if (async_chunk->locked_page && - (page_offset(async_chunk->locked_page) >= start && - page_offset(async_chunk->locked_page)) <= end) { - __set_page_dirty_nobuffers(async_chunk->locked_page); - /* unlocked later on in the async handlers */ - } - if (redirty) extent_range_redirty_for_io(&inode->vfs_inode, start, end); add_async_extent(async_chunk, start, end - start + 1, 0, NULL, 0, @@ -1169,7 +1161,8 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, /* All pages will be unlocked, including @locked_page */ wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); - ret = extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); + ret = extent_write_locked_range(&inode->vfs_inode, locked_page, start, + end, &wbc); wbc_detach_inode(&wbc); return ret; } @@ -1829,7 +1822,6 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, { u64 done_offset = end; int ret; - bool locked_page_done = false; while (start <= end) { ret = cow_file_range(inode, locked_page, start, end, page_started, @@ -1852,13 +1844,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, continue; } - if (!locked_page_done) { - __set_page_dirty_nobuffers(locked_page); - account_page_redirty(locked_page); - } - locked_page_done = true; - extent_write_locked_range(&inode->vfs_inode, start, done_offset, - wbc); + extent_write_locked_range(&inode->vfs_inode, locked_page, start, + done_offset, wbc); start = done_offset + 1; }

From patchwork Wed May 31 06:05:04 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261508
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 15/16] btrfs: refactor the zoned device handling in cow_file_range
Date: Wed, 31 May 2023 08:05:04 +0200
Message-Id: <20230531060505.468704-16-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

Handling of the done_offset argument to cow_file_range is a bit confusing, as it is not updated at all when the function succeeds, and the -EAGAIN status is used both for the case where we need to wait for a zone finish and the one where the allocation was only partially successful. Change the calling convention so that done_offset is always updated, and 0 is returned if some allocation was successful (partial allocation can still only happen for zoned devices), and -EAGAIN is only returned when the caller needs to wait for a zone finish. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 53 ++++++++++++++++++++++++------------------------ 1 file changed, 27 insertions(+), 26 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 68ae20a3f785e3..a12d4f77859706 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1403,7 +1403,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode, unsigned clear_bits; unsigned long page_ops; bool extent_reserved = false; - int ret = 0; + int ret; if (btrfs_is_free_space_inode(inode)) { ret = -EINVAL; @@ -1462,7 +1462,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * inline extent or a compressed extent. */ unlock_page(locked_page); - goto out; + goto done; } else if (ret < 0) { goto out_unlock; } @@ -1491,6 +1491,23 @@ static noinline int cow_file_range(struct btrfs_inode *inode, ret = btrfs_reserve_extent(root, cur_alloc_size, cur_alloc_size, min_alloc_size, 0, alloc_hint, &ins, 1, 1); + if (ret == -EAGAIN) { + /* + * For zoned devices, let the caller retry after writing + * out the already allocated regions or waiting for a + * zone to finish if no allocation was possible at all. + * + * Else convert to -ENOSPC since the caller cannot + * retry.
+ */ + if (btrfs_is_zoned(fs_info)) { + if (start == orig_start) + return -EAGAIN; + *done_offset = start - 1; + return 0; + } + ret = -ENOSPC; + } if (ret < 0) goto out_unlock; cur_alloc_size = ins.offset; @@ -1571,8 +1588,10 @@ static noinline int cow_file_range(struct btrfs_inode *inode, if (ret) goto out_unlock; } -out: - return ret; +done: + if (done_offset) + *done_offset = end; + return 0; out_drop_extent_cache: btrfs_drop_extent_map_range(inode, start, start + ram_size - 1, false); @@ -1580,21 +1599,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode, btrfs_dec_block_group_reservations(fs_info, ins.objectid); btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); out_unlock: - /* - * If done_offset is non-NULL and ret == -EAGAIN, we expect the - * caller to write out the successfully allocated region and retry. - */ - if (done_offset && ret == -EAGAIN) { - if (orig_start < start) - *done_offset = start - 1; - else - *done_offset = start; - return ret; - } else if (ret == -EAGAIN) { - /* Convert to -ENOSPC since the caller cannot retry. 
*/ - ret = -ENOSPC; - } - /* * Now, we have three regions to clean up: * @@ -1826,23 +1830,20 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, while (start <= end) { ret = cow_file_range(inode, locked_page, start, end, page_started, nr_written, 0, &done_offset); - if (ret && ret != -EAGAIN) - return ret; - if (*page_started) { ASSERT(ret == 0); return 0; } + if (ret == -EAGAIN) { + ASSERT(btrfs_is_zoned(inode->root->fs_info)); - if (ret == 0) - done_offset = end; - - if (done_offset == start) { wait_on_bit_io(&inode->root->fs_info->flags, BTRFS_FS_NEED_ZONE_FINISH, TASK_UNINTERRUPTIBLE); continue; } + if (ret) + return ret; extent_write_locked_range(&inode->vfs_inode, locked_page, start, done_offset, wbc);

From patchwork Wed May 31 06:05:05 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13261509
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 16/16] btrfs: split page locking out of __process_pages_contig
Date: Wed, 31 May 2023 08:05:05 +0200
Message-Id: <20230531060505.468704-17-hch@lst.de>
In-Reply-To: <20230531060505.468704-1-hch@lst.de>
References: <20230531060505.468704-1-hch@lst.de>

There is a lot of complexity in __process_pages_contig to deal with the PAGE_LOCK case, which can return an error unlike all the other actions. Open code the page iteration for page locking in lock_delalloc_pages and remove all the now unused code from __process_pages_contig.
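The open-coded lock-with-rollback pattern can be sketched with a toy model (the two-field `struct page` and `lock_pages` helper are illustrative, not kernel API): try to lock each page in order, and on the first page that refuses, unlock everything locked so far and return -EAGAIN, mirroring what lock_delalloc_pages does via __unlock_for_delalloc on the processed sub-range.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: a page may refuse to lock (e.g. no longer dirty). */
struct page {
    bool lockable;
    bool locked;
};

/*
 * Sketch of the pattern: lock pages front to back; on failure, unwind
 * the partial lock state and report -EAGAIN so the caller can shrink
 * the delalloc range and retry.
 */
static int lock_pages(struct page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!pages[i].lockable) {
            /* unwind: unlock everything locked before index i */
            while (i-- > 0)
                pages[i].locked = false;
            return -EAGAIN;
        }
        pages[i].locked = true;
    }
    return 0;
}
```

Keeping this error-capable path in one dedicated function is what lets __process_pages_contig drop its PAGE_LOCK branch, the processed_end bookkeeping, and its error return entirely.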
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 151 +++++++++++++++++-------------------------- fs/btrfs/extent_io.h | 1 - 2 files changed, 59 insertions(+), 93 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index bc061b2ac55a12..ca0601ec73f831 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -197,18 +197,9 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end) } } -/* - * Process one page for __process_pages_contig(). - * - * Return >0 if we hit @page == @locked_page. - * Return 0 if we updated the page status. - * Return -EGAIN if the we need to try again. - * (For PAGE_LOCK case but got dirty page or page not belong to mapping) - */ -static int process_one_page(struct btrfs_fs_info *fs_info, - struct address_space *mapping, - struct page *page, struct page *locked_page, - unsigned long page_ops, u64 start, u64 end) +static void process_one_page(struct btrfs_fs_info *fs_info, + struct page *page, struct page *locked_page, + unsigned long page_ops, u64 start, u64 end) { u32 len; @@ -224,94 +215,36 @@ static int process_one_page(struct btrfs_fs_info *fs_info, if (page_ops & PAGE_END_WRITEBACK) btrfs_page_clamp_clear_writeback(fs_info, page, start, len); - if (page == locked_page) - return 1; - - if (page_ops & PAGE_LOCK) { - int ret; - - ret = btrfs_page_start_writer_lock(fs_info, page, start, len); - if (ret) - return ret; - if (!PageDirty(page) || page->mapping != mapping) { - btrfs_page_end_writer_lock(fs_info, page, start, len); - return -EAGAIN; - } - } - if (page_ops & PAGE_UNLOCK) + if (page != locked_page && (page_ops & PAGE_UNLOCK)) btrfs_page_end_writer_lock(fs_info, page, start, len); - return 0; } -static int __process_pages_contig(struct address_space *mapping, - struct page *locked_page, - u64 start, u64 end, unsigned long page_ops, - u64 *processed_end) +static void __process_pages_contig(struct address_space *mapping, + struct page *locked_page, u64 start, u64 end, + unsigned 
long page_ops) { struct btrfs_fs_info *fs_info = btrfs_sb(mapping->host->i_sb); pgoff_t start_index = start >> PAGE_SHIFT; pgoff_t end_index = end >> PAGE_SHIFT; pgoff_t index = start_index; - unsigned long pages_processed = 0; struct folio_batch fbatch; - int err = 0; int i; - if (page_ops & PAGE_LOCK) { - ASSERT(page_ops == PAGE_LOCK); - ASSERT(processed_end && *processed_end == start); - } - folio_batch_init(&fbatch); while (index <= end_index) { int found_folios; found_folios = filemap_get_folios_contig(mapping, &index, end_index, &fbatch); - - if (found_folios == 0) { - /* - * Only if we're going to lock these pages, we can find - * nothing at @index. - */ - ASSERT(page_ops & PAGE_LOCK); - err = -EAGAIN; - goto out; - } - for (i = 0; i < found_folios; i++) { - int process_ret; struct folio *folio = fbatch.folios[i]; - process_ret = process_one_page(fs_info, mapping, - &folio->page, locked_page, page_ops, - start, end); - if (process_ret < 0) { - err = -EAGAIN; - folio_batch_release(&fbatch); - goto out; - } - pages_processed += folio_nr_pages(folio); + + process_one_page(fs_info, &folio->page, locked_page, + page_ops, start, end); } folio_batch_release(&fbatch); cond_resched(); } -out: - if (err && processed_end) { - /* - * Update @processed_end. I know this is awful since it has - * two different return value patterns (inclusive vs exclusive). - * - * But the exclusive pattern is necessary if @start is 0, or we - * underflow and check against processed_end won't work as - * expected. 
- */ - if (pages_processed) - *processed_end = min(end, - ((u64)(start_index + pages_processed) << PAGE_SHIFT) - 1); - else - *processed_end = start; - } - return err; } static noinline void __unlock_for_delalloc(struct inode *inode, @@ -326,29 +259,63 @@ static noinline void __unlock_for_delalloc(struct inode *inode, return; __process_pages_contig(inode->i_mapping, locked_page, start, end, - PAGE_UNLOCK, NULL); + PAGE_UNLOCK); } static noinline int lock_delalloc_pages(struct inode *inode, struct page *locked_page, - u64 delalloc_start, - u64 delalloc_end) + u64 start, + u64 end) { - unsigned long index = delalloc_start >> PAGE_SHIFT; - unsigned long end_index = delalloc_end >> PAGE_SHIFT; - u64 processed_end = delalloc_start; - int ret; + struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); + struct address_space *mapping = inode->i_mapping; + pgoff_t start_index = start >> PAGE_SHIFT; + pgoff_t end_index = end >> PAGE_SHIFT; + pgoff_t index = start_index; + u64 processed_end = start; + struct folio_batch fbatch; - ASSERT(locked_page); if (index == locked_page->index && index == end_index) return 0; - ret = __process_pages_contig(inode->i_mapping, locked_page, delalloc_start, - delalloc_end, PAGE_LOCK, &processed_end); - if (ret == -EAGAIN && processed_end > delalloc_start) - __unlock_for_delalloc(inode, locked_page, delalloc_start, - processed_end); - return ret; + folio_batch_init(&fbatch); + while (index <= end_index) { + unsigned int found_folios, i; + + found_folios = filemap_get_folios_contig(mapping, &index, + end_index, &fbatch); + if (found_folios == 0) + goto out; + + for (i = 0; i < found_folios; i++) { + struct page *page = &fbatch.folios[i]->page; + u32 len = end + 1 - start; + + if (page == locked_page) + continue; + + if (btrfs_page_start_writer_lock(fs_info, page, start, + len)) + goto out; + + if (!PageDirty(page) || page->mapping != mapping) { + btrfs_page_end_writer_lock(fs_info, page, start, + len); + goto out; + } + + processed_end = 
page_offset(page) + PAGE_SIZE - 1; + } + folio_batch_release(&fbatch); + cond_resched(); + } + + return 0; +out: + folio_batch_release(&fbatch); + if (processed_end > start) + __unlock_for_delalloc(inode, locked_page, start, processed_end); + return -EAGAIN; } /* @@ -467,7 +434,7 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end, clear_extent_bit(&inode->io_tree, start, end, clear_bits, NULL); __process_pages_contig(inode->vfs_inode.i_mapping, locked_page, - start, end, page_ops, NULL); + start, end, page_ops); } static bool btrfs_verify_page(struct page *page, u64 start) diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 00c468aea010a1..b57a2068b4287b 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -40,7 +40,6 @@ enum { ENUM_BIT(PAGE_START_WRITEBACK), ENUM_BIT(PAGE_END_WRITEBACK), ENUM_BIT(PAGE_SET_ORDERED), - ENUM_BIT(PAGE_LOCK), }; /*