From patchwork Sun Nov 26 12:47:08 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468819
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 01/13] iomap: clear the per-folio dirty bits on all writeback failures
Date: Sun, 26 Nov 2023 13:47:08 +0100
Message-Id: <20231126124720.1249310-2-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

write_cache_pages always clears the page dirty bit before calling into the
file systems, and leaves folios with a writeback failure without the dirty
bit after return.  We also clear the per-block dirty bits on writeback
failures unless no I/O has been submitted, which leaves the folio in an
inconsistent state where the folio-level dirty bit is clear but one or more
per-block dirty bits are still set.  This seems to be due to the place where
the iomap_clear_range_dirty call was inserted into the existing, not very
clearly structured code when per-block dirty bit support was added, and was
not actually intentional.  Switch to always clearing the dirty bits on
writeback failure.

Fixes: 4ce02c679722 ("iomap: Add per-block dirty state tracking to improve performance")
Signed-off-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index f72df2babe561a..98d52feb220f0a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1849,10 +1849,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
          */
         if (wpc->ops->discard_folio)
             wpc->ops->discard_folio(folio, pos);
-        if (!count) {
-            folio_unlock(folio);
-            goto done;
-        }
     }
 
     /*
@@ -1861,6 +1857,12 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
      * all the dirty bits in the folio here.
      */
     iomap_clear_range_dirty(folio, 0, folio_size(folio));
+
+    if (error && !count) {
+        folio_unlock(folio);
+        goto done;
+    }
+
     folio_start_writeback(folio);
     folio_unlock(folio);
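
To make the inconsistent state concrete: after a writeback pass, a folio
whose dirty bit is clear must not carry any per-block dirty bits in its
iomap_folio_state.  A hypothetical debug-only assertion of that invariant
(not part of the patch; the function name is invented, and it could only
live in fs/iomap/buffered-io.c, since the ifs helpers are file-local)
might look like:

static void iomap_check_dirty_consistency(struct inode *inode,
        struct folio *folio)
{
    struct iomap_folio_state *ifs = folio->private;
    unsigned int i, nblocks = i_blocks_per_folio(inode, folio);

    /* a clean folio must not have any per-block dirty bits set */
    if (!ifs || folio_test_dirty(folio))
        return;
    for (i = 0; i < nblocks; i++)
        WARN_ON_ONCE(ifs_block_is_dirty(folio, ifs, i));
}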
From patchwork Sun Nov 26 12:47:09 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468820
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 02/13] iomap: treat inline data in iomap_writepage_map as an I/O error
Date: Sun, 26 Nov 2023 13:47:09 +0100
Message-Id: <20231126124720.1249310-3-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

iomap_writepage_map already warns about inline data, but then just ignores
it.  Treat it as an error and return -EIO.

Signed-off-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 98d52feb220f0a..b1bcc43baf0caf 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1818,8 +1818,10 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
         if (error)
             break;
         trace_iomap_writepage_map(inode, &wpc->iomap);
-        if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
-            continue;
+        if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE)) {
+            error = -EIO;
+            break;
+        }
         if (wpc->iomap.type == IOMAP_HOLE)
             continue;
         iomap_add_to_ioend(inode, pos, folio, ifs, wpc, wbc,

From patchwork Sun Nov 26 12:47:10 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468821
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 03/13] iomap: move the io_folios field out of struct iomap_ioend
Date: Sun, 26 Nov 2023 13:47:10 +0100
Message-Id: <20231126124720.1249310-4-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

The io_folios member in struct iomap_ioend counts the number of folios
added to an ioend.  It is only used at submission time and can thus be
moved to iomap_writepage_ctx instead.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 7 ++++---
 include/linux/iomap.h  | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index b1bcc43baf0caf..b28c57f8603303 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1685,10 +1685,11 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
     ioend->io_flags = wpc->iomap.flags;
     ioend->io_inode = inode;
     ioend->io_size = 0;
-    ioend->io_folios = 0;
     ioend->io_offset = offset;
     ioend->io_bio = bio;
     ioend->io_sector = sector;
+
+    wpc->nr_folios = 0;
     return ioend;
 }
 
@@ -1732,7 +1733,7 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
      * also prevents long tight loops ending page writeback on all the
      * folios in the ioend.
      */
-    if (wpc->ioend->io_folios >= IOEND_BATCH_SIZE)
+    if (wpc->nr_folios >= IOEND_BATCH_SIZE)
         return false;
     return true;
 }
@@ -1829,7 +1830,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
         count++;
     }
     if (count)
-        wpc->ioend->io_folios++;
+        wpc->nr_folios++;
 
     WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
     WARN_ON_ONCE(!folio_test_locked(folio));
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 96dd0acbba44ac..b2a05dff914d0c 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -293,7 +293,6 @@ struct iomap_ioend {
     struct list_head    io_list;    /* next ioend in chain */
     u16         io_type;
     u16         io_flags;   /* IOMAP_F_* */
-    u32         io_folios;  /* folios added to ioend */
     struct inode        *io_inode;  /* file being written to */
     size_t          io_size;    /* size of the extent */
     loff_t          io_offset;  /* offset in the file */
@@ -329,6 +328,7 @@ struct iomap_writepage_ctx {
     struct iomap        iomap;
     struct iomap_ioend  *ioend;
     const struct iomap_writeback_ops *ops;
+    u32         nr_folios;  /* folios added to the ioend */
 };
 
 void iomap_finish_ioends(struct iomap_ioend *ioend, int error);

From patchwork Sun Nov 26 12:47:11 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468822
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 04/13] iomap: drop the obsolete PF_MEMALLOC check in iomap_do_writepage
Date: Sun, 26 Nov 2023 13:47:11 +0100
Message-Id: <20231126124720.1249310-5-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

The iomap writepage implementation has been removed in commit 478af190cb6c
("iomap: remove iomap_writepage") and this code is now only called through
->writepages, which never happens from memory reclaim.  Remove the canary
in the coal mine now that the coal mine has been shut down.

Signed-off-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 16 ----------------
 1 file changed, 16 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index b28c57f8603303..8148e4c9765dac 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1910,20 +1910,6 @@ static int iomap_do_writepage(struct folio *folio,
 
     trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));
 
-    /*
-     * Refuse to write the folio out if we're called from reclaim context.
-     *
-     * This avoids stack overflows when called from deeply used stacks in
-     * random callers for direct reclaim or memcg reclaim.  We explicitly
-     * allow reclaim from kswapd as the stack usage there is relatively low.
-     *
-     * This should never happen except in the case of a VM regression so
-     * warn about it.
-     */
-    if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
-            PF_MEMALLOC))
-        goto redirty;
-
     /*
      * Is this folio beyond the end of the file?
      *
@@ -1989,8 +1975,6 @@ static int iomap_do_writepage(struct folio *folio,
 
     return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);
 
-redirty:
-    folio_redirty_for_writepage(wbc, folio);
 unlock:
     folio_unlock(folio);
     return 0;

From patchwork Sun Nov 26 12:47:12 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468823
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 05/13] iomap: factor out a iomap_writepage_handle_eof helper
Date: Sun, 26 Nov 2023 13:47:12 +0100
Message-Id: <20231126124720.1249310-6-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

Most of iomap_do_writepage is dedicated to handling a folio crossing or
beyond i_size.  Split this into a separate helper and update the comments
to deal with folios instead of pages and make them more readable.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 128 ++++++++++++++++++++---------------------
 1 file changed, 62 insertions(+), 66 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8148e4c9765dac..4a5a21809b0182 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1768,6 +1768,64 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
     wbc_account_cgroup_owner(wbc, &folio->page, len);
 }
 
+/*
+ * Check interaction of the folio with the file end.
+ *
+ * If the folio is entirely beyond i_size, return false.  If it straddles
+ * i_size, adjust end_pos and zero all data beyond i_size.
+ */
+static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
+        u64 *end_pos)
+{
+    u64 isize = i_size_read(inode);
+
+    if (*end_pos > isize) {
+        size_t poff = offset_in_folio(folio, isize);
+        pgoff_t end_index = isize >> PAGE_SHIFT;
+
+        /*
+         * If the folio is entirely outside of i_size, skip it.
+         *
+         * This can happen due to a truncate operation that is in
+         * progress and in that case truncate will finish it off once
+         * we've dropped the folio lock.
+         *
+         * Note that the pgoff_t used for end_index is an unsigned long.
+         * If the given offset is greater than 16TB on a 32-bit system,
+         * then if we checked if the folio is fully outside i_size with
+         * "if (folio->index >= end_index + 1)", "end_index + 1" would
+         * overflow and evaluate to 0.  Hence this folio would be
+         * redirtied and written out repeatedly, which would result in
+         * an infinite loop; the user program performing this operation
+         * would hang.  Instead, we can detect this situation by
+         * checking if the folio is totally beyond i_size or if its
+         * offset is just equal to the EOF.
+         */
+        if (folio->index > end_index ||
+            (folio->index == end_index && poff == 0))
+            return false;
+
+        /*
+         * The folio straddles i_size.
+         *
+         * It must be zeroed out on each and every writepage invocation
+         * because it may be mmapped:
+         *
+         *    A file is mapped in multiples of the page size.  For a
+         *    file that is not a multiple of the page size, the
+         *    remaining memory is zeroed when mapped, and writes to
+         *    that region are not written out to the file.
+         *
+         * Also adjust the writeback range to skip all blocks entirely
+         * beyond i_size.
+         */
+        folio_zero_segment(folio, poff, folio_size(folio));
+        *end_pos = isize;
+    }
+
+    return true;
+}
+
 /*
  * We implement an immediate ioend submission policy here to avoid needing to
  * chain multiple ioends and hence nest mempool allocations which can violate
@@ -1906,78 +1964,16 @@ static int iomap_do_writepage(struct folio *folio,
 {
     struct iomap_writepage_ctx *wpc = data;
     struct inode *inode = folio->mapping->host;
-    u64 end_pos, isize;
+    u64 end_pos = folio_pos(folio) + folio_size(folio);
 
     trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));
 
-    /*
-     * Is this folio beyond the end of the file?
-     *
-     * The folio index is less than the end_index, adjust the end_pos
-     * to the highest offset that this folio should represent.
-     * -----------------------------------------------------
-     * |            file mapping               |            |
-     * -----------------------------------------------------
-     * | Page ... | Page N-2 | Page N-1 | Page N |          |
-     * ^--------------------------------^----------|--------
-     * |        desired writeback range   | see else |
-     * ---------------------------------^------------------|
-     */
-    isize = i_size_read(inode);
-    end_pos = folio_pos(folio) + folio_size(folio);
-    if (end_pos > isize) {
-        /*
-         * Check whether the page to write out is beyond or straddles
-         * i_size or not.
-         * -------------------------------------------------------
-         * |            file mapping                 |            |
-         * -------------------------------------------------------
-         * | Page ... | Page N-2 | Page N-1 | Page N | Beyond |
-         * ^--------------------------------^-----------|---------
-         * |                                | Straddles |
-         * ---------------------------------^-----------|--------|
-         */
-        size_t poff = offset_in_folio(folio, isize);
-        pgoff_t end_index = isize >> PAGE_SHIFT;
-
-        /*
-         * Skip the page if it's fully outside i_size, e.g.
-         * due to a truncate operation that's in progress.  We've
-         * cleaned this page and truncate will finish things off for
-         * us.
-         *
-         * Note that the end_index is unsigned long.  If the given
-         * offset is greater than 16TB on a 32-bit system then if we
-         * checked if the page is fully outside i_size with
-         * "if (page->index >= end_index + 1)", "end_index + 1" would
-         * overflow and evaluate to 0.  Hence this page would be
-         * redirtied and written out repeatedly, which would result in
-         * an infinite loop; the user program performing this operation
-         * would hang.  Instead, we can detect this situation by
-         * checking if the page is totally beyond i_size or if its
-         * offset is just equal to the EOF.
-         */
-        if (folio->index > end_index ||
-            (folio->index == end_index && poff == 0))
-            goto unlock;
-
-        /*
-         * The page straddles i_size.  It must be zeroed out on each
-         * and every writepage invocation because it may be mmapped.
-         * "A file is mapped in multiples of the page size.  For a file
-         * that is not a multiple of the page size, the remaining
-         * memory is zeroed when mapped, and writes to that region are
-         * not written out to the file."
-         */
-        folio_zero_segment(folio, poff, folio_size(folio));
-        end_pos = isize;
+    if (!iomap_writepage_handle_eof(folio, inode, &end_pos)) {
+        folio_unlock(folio);
+        return 0;
     }
 
     return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);
-
-unlock:
-    folio_unlock(folio);
-    return 0;
 }
 
 int
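
The overflow scenario spelled out in the comment above can be checked
numerically.  A standalone sketch, assuming 4 KiB pages (PAGE_SHIFT of 12)
and a 32-bit unsigned long standing in for pgoff_t:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t isize = (1ULL << 44) - 1;  /* i_size just below 16 TiB */
    uint32_t end_index = isize >> 12;   /* pgoff_t on 32-bit: 0xffffffff */

    /*
     * "end_index + 1" wraps to 0, so a ">= end_index + 1" test would be
     * true for every folio: the folio would be skipped, stay dirty, and
     * be retried forever.
     */
    printf("end_index + 1 = %u\n", end_index + 1);  /* prints 0 */
    return 0;
}

which is why the helper compares with "index > end_index ||
(index == end_index && poff == 0)" instead.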
From patchwork Sun Nov 26 12:47:13 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468824
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 06/13] iomap: move all remaining per-folio logic into xfs_writepage_map
Date: Sun, 26 Nov 2023 13:47:13 +0100
Message-Id: <20231126124720.1249310-7-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

Move the tracepoint and the iomap check from iomap_do_writepage into
iomap_writepage_map.  This keeps all logic in one place, and leaves
iomap_do_writepage just as the wrapper for the callback conventions of
write_cache_pages, which will go away when that is converted to an
iterator.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 34 +++++++++++-----------------------
 1 file changed, 11 insertions(+), 23 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4a5a21809b0182..5834aa46bdb8cf 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1842,19 +1842,25 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
  * At the end of a writeback pass, there will be a cached ioend remaining on the
  * writepage context that the caller will need to submit.
  */
-static int
-iomap_writepage_map(struct iomap_writepage_ctx *wpc,
-        struct writeback_control *wbc, struct inode *inode,
-        struct folio *folio, u64 end_pos)
+static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+        struct writeback_control *wbc, struct folio *folio)
 {
     struct iomap_folio_state *ifs = folio->private;
+    struct inode *inode = folio->mapping->host;
     struct iomap_ioend *ioend, *next;
     unsigned len = i_blocksize(inode);
     unsigned nblocks = i_blocks_per_folio(inode, folio);
     u64 pos = folio_pos(folio);
+    u64 end_pos = pos + folio_size(folio);
     int error = 0, count = 0, i;
     LIST_HEAD(submit_list);
 
+    trace_iomap_writepage(inode, pos, folio_size(folio));
+
+    if (!iomap_writepage_handle_eof(folio, inode, &end_pos)) {
+        folio_unlock(folio);
+        return 0;
+    }
     WARN_ON_ONCE(end_pos <= pos);
 
     if (!ifs && nblocks > 1) {
@@ -1952,28 +1958,10 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
     return error;
 }
 
-/*
- * Write out a dirty page.
- *
- * For delalloc space on the page, we need to allocate space and flush it.
- * For unwritten space on the page, we need to start the conversion to
- * regular allocated space.
- */
 static int iomap_do_writepage(struct folio *folio,
         struct writeback_control *wbc, void *data)
 {
-    struct iomap_writepage_ctx *wpc = data;
-    struct inode *inode = folio->mapping->host;
-    u64 end_pos = folio_pos(folio) + folio_size(folio);
-
-    trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));
-
-    if (!iomap_writepage_handle_eof(folio, inode, &end_pos)) {
-        folio_unlock(folio);
-        return 0;
-    }
-
-    return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);
+    return iomap_writepage_map(data, wbc, folio);
 }
 
 int

From patchwork Sun Nov 26 12:47:14 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468825
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 07/13] iomap: clean up the iomap_new_ioend calling convention
Date: Sun, 26 Nov 2023 13:47:14 +0100
Message-Id: <20231126124720.1249310-8-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

Switch to the same argument order as iomap_writepage_map and remove the
ifs argument that can be trivially recalculated.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 5834aa46bdb8cf..7f86d2f90e3863 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1742,11 +1742,11 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
  * Test to see if we have an existing ioend structure that we could append to
  * first; otherwise finish off the current ioend and start another.
  */
-static void
-iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
-        struct iomap_folio_state *ifs, struct iomap_writepage_ctx *wpc,
-        struct writeback_control *wbc, struct list_head *iolist)
+static void iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+        struct writeback_control *wbc, struct folio *folio,
+        struct inode *inode, loff_t pos, struct list_head *iolist)
 {
+    struct iomap_folio_state *ifs = folio->private;
     sector_t sector = iomap_sector(&wpc->iomap, pos);
     unsigned len = i_blocksize(inode);
     size_t poff = offset_in_folio(folio, pos);
@@ -1889,8 +1889,7 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
         }
         if (wpc->iomap.type == IOMAP_HOLE)
             continue;
-        iomap_add_to_ioend(inode, pos, folio, ifs, wpc, wbc,
-                &submit_list);
+        iomap_add_to_ioend(wpc, wbc, folio, inode, pos, &submit_list);
         count++;
     }
     if (count)

From patchwork Sun Nov 26 12:47:15 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468826
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 08/13] iomap: move the iomap_sector sector calculation out of iomap_add_to_ioend
Date: Sun, 26 Nov 2023 13:47:15 +0100
Message-Id: <20231126124720.1249310-9-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

The calculation in iomap_sector is pretty trivial, and most of the time
iomap_add_to_ioend only calls either iomap_can_add_to_ioend or
iomap_alloc_ioend from a single invocation.  Calculate the sector in the
two lower level functions and stop passing it from iomap_add_to_ioend,
and update the iomap_alloc_ioend argument passing order to match that of
iomap_add_to_ioend.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ritesh Harjani (IBM)
---
 fs/iomap/buffered-io.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 7f86d2f90e3863..329e2c342f1c64 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1666,9 +1666,8 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
     return 0;
 }
 
-static struct iomap_ioend *
-iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
-        loff_t offset, sector_t sector, struct writeback_control *wbc)
+static struct iomap_ioend *iomap_alloc_ioend(struct iomap_writepage_ctx *wpc,
+        struct writeback_control *wbc, struct inode *inode, loff_t pos)
 {
     struct iomap_ioend *ioend;
     struct bio *bio;
@@ -1676,7 +1675,7 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
     bio = bio_alloc_bioset(wpc->iomap.bdev, BIO_MAX_VECS,
                    REQ_OP_WRITE | wbc_to_write_flags(wbc),
                    GFP_NOFS, &iomap_ioend_bioset);
-    bio->bi_iter.bi_sector = sector;
+    bio->bi_iter.bi_sector = iomap_sector(&wpc->iomap, pos);
     wbc_init_bio(wbc, bio);
 
     ioend = container_of(bio, struct iomap_ioend, io_inline_bio);
@@ -1685,9 +1684,9 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
     ioend->io_flags = wpc->iomap.flags;
     ioend->io_inode = inode;
     ioend->io_size = 0;
-    ioend->io_offset = offset;
+    ioend->io_offset = pos;
     ioend->io_bio = bio;
-    ioend->io_sector = sector;
+    ioend->io_sector = bio->bi_iter.bi_sector;
 
     wpc->nr_folios = 0;
     return ioend;
@@ -1716,8 +1715,7 @@ iomap_chain_bio(struct bio *prev)
 }
 
 static bool
-iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
-        sector_t sector)
+iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset)
 {
     if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
         (wpc->ioend->io_flags & IOMAP_F_SHARED))
@@ -1726,7 +1724,8 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
         return false;
     if (offset != wpc->ioend->io_offset + wpc->ioend->io_size)
         return false;
-    if (sector != bio_end_sector(wpc->ioend->io_bio))
+    if (iomap_sector(&wpc->iomap, offset) !=
+        bio_end_sector(wpc->ioend->io_bio))
         return false;
     /*
      * Limit ioend bio chain lengths to minimise IO completion latency. This
@@ -1747,14 +1746,13 @@ static void iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
         struct inode *inode, loff_t pos, struct list_head *iolist)
 {
     struct iomap_folio_state *ifs = folio->private;
-    sector_t sector = iomap_sector(&wpc->iomap, pos);
     unsigned len = i_blocksize(inode);
     size_t poff = offset_in_folio(folio, pos);
 
-    if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) {
+    if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos)) {
         if (wpc->ioend)
             list_add(&wpc->ioend->io_list, iolist);
-        wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc);
+        wpc->ioend = iomap_alloc_ioend(wpc, wbc, inode, pos);
     }
 
     if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
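
For context, the helper being recomputed here is a one-liner; to the best
of my reading of this series' baseline, its definition in
include/linux/iomap.h is:

static inline sector_t iomap_sector(const struct iomap *iomap, loff_t pos)
{
    /* byte offset of pos inside the mapping, in 512-byte sector units */
    return (iomap->addr + pos - iomap->offset) >> SECTOR_SHIFT;
}

so calling it again in iomap_can_add_to_ioend and iomap_alloc_ioend costs
only an add, a subtract, and a shift per invocation.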
From patchwork Sun Nov 26 12:47:16 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13468827
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 09/13] iomap: don't chain bios
Date: Sun, 26 Nov 2023 13:47:16 +0100
Message-Id: <20231126124720.1249310-10-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

Back in the days when a single bio could only be filled to the hardware
limits, and we scheduled a work item for each bio completion, chaining
multiple bios for a single ioend made a lot of sense to reduce the number
of completions.  But these days bios can be filled until we reach the
number of vectors or total size limit, which means we can always fit at
least 1 megabyte worth of data in the worst case, but usually a lot more
due to large folios.  The only thing bio chaining is buying us now is to
reduce the size of the allocation from an ioend with an embedded bio into
a plain bio, which is a 52 byte difference on 64-bit systems.

This is not worth the added complexity, so remove the bio chaining and
only use the bio embedded into the ioend.  This will help to simplify
further changes to the iomap writeback code.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 90 +++++++++++-------------------------------
 fs/xfs/xfs_aops.c      |  6 +--
 include/linux/iomap.h  |  8 +++-
 3 files changed, 32 insertions(+), 72 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 329e2c342f1c64..17760f3e67c61e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1489,40 +1489,23 @@ static u32
 iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 {
     struct inode *inode = ioend->io_inode;
-    struct bio *bio = &ioend->io_inline_bio;
-    struct bio *last = ioend->io_bio, *next;
-    u64 start = bio->bi_iter.bi_sector;
-    loff_t offset = ioend->io_offset;
-    bool quiet = bio_flagged(bio, BIO_QUIET);
+    struct bio *bio = &ioend->io_bio;
+    struct folio_iter fi;
     u32 folio_count = 0;
 
-    for (bio = &ioend->io_inline_bio; bio; bio = next) {
-        struct folio_iter fi;
-
-        /*
-         * For the last bio, bi_private points to the ioend, so we
-         * need to explicitly end the iteration here.
-         */
-        if (bio == last)
-            next = NULL;
-        else
-            next = bio->bi_private;
-
-        /* walk all folios in bio, ending page IO on them */
-        bio_for_each_folio_all(fi, bio) {
-            iomap_finish_folio_write(inode, fi.folio, fi.length,
-                    error);
-            folio_count++;
-        }
-        bio_put(bio);
+    /* walk all folios in bio, ending page IO on them */
+    bio_for_each_folio_all(fi, bio) {
+        iomap_finish_folio_write(inode, fi.folio, fi.length, error);
+        folio_count++;
     }
 
-    /* The ioend has been freed by bio_put() */
-    if (unlikely(error && !quiet)) {
+    if (unlikely(error && !bio_flagged(bio, BIO_QUIET))) {
         printk_ratelimited(KERN_ERR
 "%s: writeback error on inode %lu, offset %lld, sector %llu",
-                inode->i_sb->s_id, inode->i_ino, offset, start);
+                inode->i_sb->s_id, inode->i_ino,
+                ioend->io_offset, ioend->io_sector);
     }
 
+    bio_put(bio);   /* frees the ioend */
     return folio_count;
 }
 
@@ -1563,7 +1546,7 @@ EXPORT_SYMBOL_GPL(iomap_finish_ioends);
 static bool
 iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
 {
-    if (ioend->io_bio->bi_status != next->io_bio->bi_status)
+    if (ioend->io_bio.bi_status != next->io_bio.bi_status)
         return false;
     if ((ioend->io_flags & IOMAP_F_SHARED) ^
         (next->io_flags & IOMAP_F_SHARED))
@@ -1628,9 +1611,8 @@ EXPORT_SYMBOL_GPL(iomap_sort_ioends);
 
 static void iomap_writepage_end_bio(struct bio *bio)
 {
-    struct iomap_ioend *ioend = bio->bi_private;
-
-    iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status));
+    iomap_finish_ioend(iomap_ioend_from_bio(bio),
+            blk_status_to_errno(bio->bi_status));
 }
 
 /*
@@ -1645,9 +1627,6 @@ static int
 iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
         int error)
 {
-    ioend->io_bio->bi_private = ioend;
-    ioend->io_bio->bi_end_io = iomap_writepage_end_bio;
-
     if (wpc->ops->prepare_ioend)
         error = wpc->ops->prepare_ioend(ioend, error);
     if (error) {
@@ -1657,12 +1636,12 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
          * as there is only one reference to the ioend at this point in
          * time.
          */
-        ioend->io_bio->bi_status = errno_to_blk_status(error);
-        bio_endio(ioend->io_bio);
+        ioend->io_bio.bi_status = errno_to_blk_status(error);
+        bio_endio(&ioend->io_bio);
         return error;
     }
 
-    submit_bio(ioend->io_bio);
+    submit_bio(&ioend->io_bio);
     return 0;
 }
 
@@ -1676,44 +1655,22 @@ static struct iomap_ioend *iomap_alloc_ioend(struct iomap_writepage_ctx *wpc,
             REQ_OP_WRITE | wbc_to_write_flags(wbc),
             GFP_NOFS, &iomap_ioend_bioset);
     bio->bi_iter.bi_sector = iomap_sector(&wpc->iomap, pos);
+    bio->bi_end_io = iomap_writepage_end_bio;
     wbc_init_bio(wbc, bio);
 
-    ioend = container_of(bio, struct iomap_ioend, io_inline_bio);
+    ioend = iomap_ioend_from_bio(bio);
     INIT_LIST_HEAD(&ioend->io_list);
     ioend->io_type = wpc->iomap.type;
     ioend->io_flags = wpc->iomap.flags;
     ioend->io_inode = inode;
     ioend->io_size = 0;
     ioend->io_offset = pos;
-    ioend->io_bio = bio;
     ioend->io_sector = bio->bi_iter.bi_sector;
 
     wpc->nr_folios = 0;
     return ioend;
 }
 
-/*
- * Allocate a new bio, and chain the old bio to the new one.
- *
- * Note that we have to perform the chaining in this unintuitive order
- * so that the bi_private linkage is set up in the right direction for the
- * traversal in iomap_finish_ioend().
- */
-static struct bio *
-iomap_chain_bio(struct bio *prev)
-{
-    struct bio *new;
-
-    new = bio_alloc(prev->bi_bdev, BIO_MAX_VECS, prev->bi_opf, GFP_NOFS);
-    bio_clone_blkg_association(new, prev);
-    new->bi_iter.bi_sector = bio_end_sector(prev);
-
-    bio_chain(prev, new);
-    bio_get(prev);  /* for iomap_finish_ioend */
-    submit_bio(prev);
-    return new;
-}
-
 static bool
 iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset)
 {
@@ -1725,7 +1682,7 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset)
     if (offset != wpc->ioend->io_offset + wpc->ioend->io_size)
         return false;
     if (iomap_sector(&wpc->iomap, offset) !=
-        bio_end_sector(wpc->ioend->io_bio))
+        bio_end_sector(&wpc->ioend->io_bio))
         return false;
     /*
      * Limit ioend bio chain lengths to minimise IO completion latency. This
@@ -1750,15 +1707,14 @@ static void iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
     size_t poff = offset_in_folio(folio, pos);
 
     if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos)) {
+new_ioend:
         if (wpc->ioend)
             list_add(&wpc->ioend->io_list, iolist);
         wpc->ioend = iomap_alloc_ioend(wpc, wbc, inode, pos);
     }
 
-    if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
-        wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
-        bio_add_folio_nofail(wpc->ioend->io_bio, folio, len, poff);
-    }
+    if (!bio_add_folio(&wpc->ioend->io_bio, folio, len, poff))
+        goto new_ioend;
 
     if (ifs)
         atomic_add(len, &ifs->write_bytes_pending);
@@ -1979,7 +1935,7 @@ EXPORT_SYMBOL_GPL(iomap_writepages);
 static int __init iomap_init(void)
 {
     return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
-               offsetof(struct iomap_ioend, io_inline_bio),
+               offsetof(struct iomap_ioend, io_bio),
                BIOSET_NEED_BVECS);
 }
 fs_initcall(iomap_init);
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 465d7630bb2185..b45ee6cbbdaab2 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -112,7 +112,7 @@ xfs_end_ioend(
      * longer dirty.  If we don't remove delalloc blocks here, they become
      * stale and can corrupt free space accounting on unmount.
      */
-    error = blk_status_to_errno(ioend->io_bio->bi_status);
+    error = blk_status_to_errno(ioend->io_bio.bi_status);
     if (unlikely(error)) {
         if (ioend->io_flags & IOMAP_F_SHARED) {
             xfs_reflink_cancel_cow_range(ip, offset, size, true);
@@ -179,7 +179,7 @@ STATIC void
 xfs_end_bio(
     struct bio      *bio)
 {
-    struct iomap_ioend  *ioend = bio->bi_private;
+    struct iomap_ioend  *ioend = iomap_ioend_from_bio(bio);
     struct xfs_inode    *ip = XFS_I(ioend->io_inode);
     unsigned long       flags;
 
@@ -444,7 +444,7 @@ xfs_prepare_ioend(
     /* send ioends that might require a transaction to the completion wq */
     if (xfs_ioend_is_append(ioend) || ioend->io_type == IOMAP_UNWRITTEN ||
         (ioend->io_flags & IOMAP_F_SHARED))
-        ioend->io_bio->bi_end_io = xfs_end_bio;
+        ioend->io_bio.bi_end_io = xfs_end_bio;
     return status;
 }
 
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index b2a05dff914d0c..b8d3b658ad2b03 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -297,10 +297,14 @@ struct iomap_ioend {
     size_t          io_size;    /* size of the extent */
     loff_t          io_offset;  /* offset in the file */
     sector_t        io_sector;  /* start sector of ioend */
-    struct bio      *io_bio;    /* bio being built */
-    struct bio      io_inline_bio;  /* MUST BE LAST! */
+    struct bio      io_bio;     /* MUST BE LAST! */
 };
 
+static inline struct iomap_ioend *iomap_ioend_from_bio(struct bio *bio)
+{
+    return container_of(bio, struct iomap_ioend, io_bio);
+}
+
 struct iomap_writeback_ops {
     /*
      * Required, maps the blocks so that writeback can be performed on
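
The "at least 1 megabyte" claim above is simple arithmetic; a quick
standalone check, assuming BIO_MAX_VECS is 256 (its value in
include/linux/bio.h at the time of writing) and 4 KiB pages:

#include <stdio.h>

int main(void)
{
    unsigned int bio_max_vecs = 256;    /* BIO_MAX_VECS */
    unsigned int page_size = 4096;      /* assumes 4 KiB pages */

    /* worst case: every bio_vec covers a single, non-contiguous page */
    printf("%u KiB\n", bio_max_vecs * page_size / 1024);    /* 1024 KiB */
    return 0;
}

With physically contiguous pages or large folios each bio_vec can cover
far more than one page, which is where the "usually a lot more" comes
from.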
Wong" , Chandan Babu R , Zhang Yi , Ritesh Harjani , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 10/13] iomap: only call mapping_set_error once for each failed bio Date: Sun, 26 Nov 2023 13:47:17 +0100 Message-Id: <20231126124720.1249310-11-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231126124720.1249310-1-hch@lst.de> References: <20231126124720.1249310-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Instead of clling mapping_set_error once per folio, only do that once per bio, and consolidate all the writeback error handling code in iomap_finish_ioend. Signed-off-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 17760f3e67c61e..e1d5076251702d 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1464,15 +1464,10 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) EXPORT_SYMBOL_GPL(iomap_page_mkwrite); static void iomap_finish_folio_write(struct inode *inode, struct folio *folio, - size_t len, int error) + size_t len) { struct iomap_folio_state *ifs = folio->private; - if (error) { - folio_set_error(folio); - mapping_set_error(inode->i_mapping, error); - } - WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !ifs); WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) <= 0); @@ -1493,18 +1488,24 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error) struct folio_iter fi; u32 folio_count = 0; + if (error) { + mapping_set_error(inode->i_mapping, error); + if (!bio_flagged(bio, BIO_QUIET)) { + pr_err_ratelimited( +"%s: writeback error on inode %lu, offset %lld, sector %llu", + inode->i_sb->s_id, inode->i_ino, + ioend->io_offset, ioend->io_sector); + } + } + /* walk all folios in bio, ending page IO on them */ bio_for_each_folio_all(fi, bio) { - iomap_finish_folio_write(inode, fi.folio, fi.length, error); + if (error) + folio_set_error(fi.folio); + iomap_finish_folio_write(inode, fi.folio, fi.length); folio_count++; } - if (unlikely(error && !bio_flagged(bio, BIO_QUIET))) { - printk_ratelimited(KERN_ERR -"%s: writeback error on inode %lu, offset %lld, sector %llu", - inode->i_sb->s_id, inode->i_ino, - ioend->io_offset, ioend->io_sector); - } bio_put(bio); /* frees the ioend */ return folio_count; } From patchwork Sun Nov 26 12:47:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13468829 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="BJjXnxsM" Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 15F2BEE; Sun, 26 Nov 2023 04:48:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=53uG9fPRAx/CyHUV194Rw5RYICApajAWSzsMOx+ZM2A=; b=BJjXnxsMdWG0tdS31+zIu9J4dH 
WupXdwT5ANsdMjODZuDvIuOBoN4eyJ+AXyMrJW8hWgtpHPLhyZ6Zi6GbcjiJnmt++WTU1+tMek2ci njxJ5rrJGXkipRyE3GivFb5+9sZ/Vu7W4irD3Nw8ndzVO2IJWrK0BkJLaZp95Z0T2xr7rwa8Y7yMZ /nQA6EXjKVSlPzMesv3FOyROXcMulVTPhtAMumh/bCG27i+kdQJoUYqfWaxzaWnNwOfhtrASApCdN z5Cvq9kH2+CfZgw9O0zN2d9BNZgNqcCmoAmJZHVMhuat6IpYI8RyUHFSc7POzqfGSx+qTkraIkdSO 81P/EG/w==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1r7EYK-00BCTI-3B; Sun, 26 Nov 2023 12:48:01 +0000 From: Christoph Hellwig To: Christian Brauner Cc: "Darrick J. Wong" , Chandan Babu R , Zhang Yi , Ritesh Harjani , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 11/13] iomap: factor out a iomap_writepage_map_block helper Date: Sun, 26 Nov 2023 13:47:18 +0100 Message-Id: <20231126124720.1249310-12-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231126124720.1249310-1-hch@lst.de> References: <20231126124720.1249310-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Split the loop body that calls into the file system to map a block and add it to the ioend into a separate helper to prefer for refactoring of the surrounding code. Note that this was the only place in iomap_writepage_map that could return an error, so include the call to ->discard_folio into the new helper as that will help to avoid code duplication in the future. Signed-off-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 72 +++++++++++++++++++++++++----------------- 1 file changed, 43 insertions(+), 29 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index e1d5076251702d..9f223820f60d22 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1723,6 +1723,45 @@ static void iomap_add_to_ioend(struct iomap_writepage_ctx *wpc, wbc_account_cgroup_owner(wbc, &folio->page, len); } +static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc, + struct writeback_control *wbc, struct folio *folio, + struct inode *inode, u64 pos, unsigned *count, + struct list_head *submit_list) +{ + int error; + + error = wpc->ops->map_blocks(wpc, inode, pos); + if (error) + goto fail; + trace_iomap_writepage_map(inode, &wpc->iomap); + + switch (wpc->iomap.type) { + case IOMAP_INLINE: + WARN_ON_ONCE(1); + error = -EIO; + break; + case IOMAP_HOLE: + break; + default: + iomap_add_to_ioend(wpc, wbc, folio, inode, pos, submit_list); + (*count)++; + } + +fail: + /* + * We cannot cancel the ioend directly here on error. We may have + * already set other pages under writeback and hence we have to run I/O + * completion to mark the error state of the pages under writeback + * appropriately. + * + * Just let the file system know what portion of the folio failed to + * map. + */ + if (error && wpc->ops->discard_folio) + wpc->ops->discard_folio(folio, pos); + return error; +} + /* * Check interaction of the folio with the file end. 
* @@ -1807,7 +1846,8 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc, unsigned nblocks = i_blocks_per_folio(inode, folio); u64 pos = folio_pos(folio); u64 end_pos = pos + folio_size(folio); - int error = 0, count = 0, i; + unsigned count = 0; + int error = 0, i; LIST_HEAD(submit_list); trace_iomap_writepage(inode, pos, folio_size(folio)); @@ -1833,19 +1873,10 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc, for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) { if (ifs && !ifs_block_is_dirty(folio, ifs, i)) continue; - - error = wpc->ops->map_blocks(wpc, inode, pos); + error = iomap_writepage_map_blocks(wpc, wbc, folio, inode, pos, + &count, &submit_list); if (error) break; - trace_iomap_writepage_map(inode, &wpc->iomap); - if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE)) { - error = -EIO; - break; - } - if (wpc->iomap.type == IOMAP_HOLE) - continue; - iomap_add_to_ioend(wpc, wbc, folio, inode, pos, &submit_list); - count++; } if (count) wpc->nr_folios++; @@ -1855,23 +1886,6 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc, WARN_ON_ONCE(folio_test_writeback(folio)); WARN_ON_ONCE(folio_test_dirty(folio)); - /* - * We cannot cancel the ioend directly here on error. We may have - * already set other pages under writeback and hence we have to run I/O - * completion to mark the error state of the pages under writeback - * appropriately. - */ - if (unlikely(error)) { - /* - * Let the filesystem know what portion of the current page - * failed to map. If the page hasn't been added to ioend, it - * won't be affected by I/O completion and we must unlock it - * now. - */ - if (wpc->ops->discard_folio) - wpc->ops->discard_folio(folio, pos); - } - /* * We can have dirty bits set past end of file in page_mkwrite path * while mapping the last partial folio. Hence it's better to clear From patchwork Sun Nov 26 12:47:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13468830 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="Dn7ihiYg" Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7992119; Sun, 26 Nov 2023 04:48:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=eD5Xz+PFTSfBhTegZJwcFjh1w0VLsnsbBtL9OWt1ju4=; b=Dn7ihiYgy9KIIFavyI9CuY1ypN JFqAUKWv0/ZV1ScVmTm9BQux6cxBn0q8VSEsKPDic59RNKL6OqthG2k6PPQDQvYQ5femRbMM2fTVQ 5v3o/9a3w8dQJHrbiecCUUdxoCLg49t48k9et/yt+vjw2gIqeM20Sp6hLa268I4MvCyh6OKGWjmp9 0s+9tBqZaIiUH4ku6UrQYVNadll+ysOaLvkVwmow2UfKEn5YfilOnbObwcHH3CnZ8IIJ7c2SKzp9y mPmGpZcvE6AWTcbXsUvJati+OXTgoz0IkN6+5pEwhsCrdPIyZkSM0sngpNXIsagkt9Mjrk+O/Onhd HPbcU2xQ==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1r7EYO-00BCWh-0x; Sun, 26 Nov 2023 12:48:04 +0000 From: Christoph Hellwig To: Christian Brauner Cc: "Darrick J. 
From patchwork Sun Nov 26 12:47:19 2023
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 12/13] iomap: submit ioends immediately
Date: Sun, 26 Nov 2023 13:47:19 +0100
Message-Id: <20231126124720.1249310-13-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

Currently the writeback code delays submitting full ioends until we
reach the end of the folio. The reason is that otherwise the end I/O
handler could clear the writeback bit before we've even finished
submitting all I/O for the folio.

Add a bias to ifs->write_bytes_pending while we are submitting I/O for
a folio so that it never reaches zero until all I/O is completed, which
prevents the early clearing of the writeback bit, and remove the now
superfluous submit_list.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 157 ++++++++++++++++++++---------------------
 1 file changed, 75 insertions(+), 82 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 9f223820f60d22..a01b0686e7c8a0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1620,30 +1620,34 @@ static void iomap_writepage_end_bio(struct bio *bio)
  * Submit the final bio for an ioend.
  *
  * If @error is non-zero, it means that we have a situation where some part of
- * the submission process has failed after we've marked pages for writeback
- * and unlocked them. In this situation, we need to fail the bio instead of
- * submitting it. This typically only happens on a filesystem shutdown.
+ * the submission process has failed after we've marked pages for writeback.
+ * We cannot cancel the ioend directly in that case, so call the bio end I/O
+ * handler with the error status here to run the normal I/O completion
+ * handler to clear the writeback bit and let the file system process the
+ * errors.
  */
-static int
-iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
-		int error)
+static int iomap_submit_ioend(struct iomap_writepage_ctx *wpc, int error)
 {
+	if (!wpc->ioend)
+		return error;
+
+	/*
+	 * Let the file system prepare the I/O submission and hook in an I/O
+	 * completion handler. This also needs to happen after a failure so
+	 * that the file system end I/O handler gets called to clean up.
+	 */
 	if (wpc->ops->prepare_ioend)
-		error = wpc->ops->prepare_ioend(ioend, error);
+		error = wpc->ops->prepare_ioend(wpc->ioend, error);
+
 	if (error) {
-		/*
-		 * If we're failing the IO now, just mark the ioend with an
-		 * error and finish it. This will run IO completion immediately
-		 * as there is only one reference to the ioend at this point in
-		 * time.
-		 */
-		ioend->io_bio.bi_status = errno_to_blk_status(error);
-		bio_endio(&ioend->io_bio);
-		return error;
+		wpc->ioend->io_bio.bi_status = errno_to_blk_status(error);
+		bio_endio(&wpc->ioend->io_bio);
+	} else {
+		submit_bio(&wpc->ioend->io_bio);
 	}
-	submit_bio(&ioend->io_bio);
-	return 0;
+	wpc->ioend = NULL;
+	return error;
 }
 
 static struct iomap_ioend *iomap_alloc_ioend(struct iomap_writepage_ctx *wpc,
@@ -1698,19 +1702,28 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset)
 /*
  * Test to see if we have an existing ioend structure that we could append to
  * first; otherwise finish off the current ioend and start another.
+ *
+ * If a new ioend is created and cached, the old ioend is submitted to the
+ * block layer instantly. Batching optimisations are provided by higher level
+ * block plugging.
+ *
+ * At the end of a writeback pass, there will be a cached ioend remaining on
+ * the writepage context that the caller will need to submit.
  */
-static void iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio,
-		struct inode *inode, loff_t pos, struct list_head *iolist)
+		struct inode *inode, loff_t pos)
 {
 	struct iomap_folio_state *ifs = folio->private;
 	unsigned len = i_blocksize(inode);
 	size_t poff = offset_in_folio(folio, pos);
+	int error;
 
 	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos)) {
 new_ioend:
-		if (wpc->ioend)
-			list_add(&wpc->ioend->io_list, iolist);
+		error = iomap_submit_ioend(wpc, 0);
+		if (error)
+			return error;
 		wpc->ioend = iomap_alloc_ioend(wpc, wbc, inode, pos);
 	}
 
@@ -1721,12 +1734,12 @@ static void iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
 		atomic_add(len, &ifs->write_bytes_pending);
 	wpc->ioend->io_size += len;
 	wbc_account_cgroup_owner(wbc, &folio->page, len);
+	return 0;
 }
 
 static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio,
-		struct inode *inode, u64 pos, unsigned *count,
-		struct list_head *submit_list)
+		struct inode *inode, u64 pos, unsigned *count)
 {
 	int error;
 
@@ -1743,7 +1756,7 @@ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
 	case IOMAP_HOLE:
 		break;
 	default:
-		iomap_add_to_ioend(wpc, wbc, folio, inode, pos, submit_list);
+		iomap_add_to_ioend(wpc, wbc, folio, inode, pos);
 		(*count)++;
 	}
 
@@ -1820,35 +1833,21 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
 	return true;
 }
 
-/*
- * We implement an immediate ioend submission policy here to avoid needing to
- * chain multiple ioends and hence nest mempool allocations which can violate
- * the forward progress guarantees we need to provide. The current ioend we're
- * adding blocks to is cached in the writepage context, and if the new block
- * doesn't append to the cached ioend, it will create a new ioend and cache
- * that instead.
- *
- * If a new ioend is created and cached, the old ioend is returned and queued
- * locally for submission once the entire page is processed or an error has
- * been detected. While ioends are submitted immediately after they are
- * completed, batching optimisations are provided by higher level block
- * plugging.
- *
- * At the end of a writeback pass, there will be a cached ioend remaining on
- * the writepage context that the caller will need to submit.
- */
 static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio)
 {
 	struct iomap_folio_state *ifs = folio->private;
 	struct inode *inode = folio->mapping->host;
-	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	unsigned nblocks = i_blocks_per_folio(inode, folio);
 	u64 pos = folio_pos(folio);
 	u64 end_pos = pos + folio_size(folio);
 	unsigned count = 0;
 	int error = 0, i;
-	LIST_HEAD(submit_list);
+
+	WARN_ON_ONCE(!folio_test_locked(folio));
+	WARN_ON_ONCE(folio_test_dirty(folio));
+	WARN_ON_ONCE(folio_test_writeback(folio));
 
 	trace_iomap_writepage(inode, pos, folio_size(folio));
 
@@ -1858,12 +1857,27 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	}
 	WARN_ON_ONCE(end_pos <= pos);
 
-	if (!ifs && nblocks > 1) {
-		ifs = ifs_alloc(inode, folio, 0);
-		iomap_set_range_dirty(folio, 0, end_pos - pos);
+	if (nblocks > 1) {
+		if (!ifs) {
+			ifs = ifs_alloc(inode, folio, 0);
+			iomap_set_range_dirty(folio, 0, end_pos - pos);
+		}
+
+		/*
+		 * Keep the I/O completion handler from clearing the writeback
+		 * bit until we have submitted all blocks by adding a bias to
+		 * ifs->write_bytes_pending, which is dropped after submitting
+		 * all blocks.
+		 */
+		WARN_ON_ONCE(atomic_read(&ifs->write_bytes_pending) != 0);
+		atomic_inc(&ifs->write_bytes_pending);
 	}
 
-	WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) != 0);
+	/*
+	 * Set the writeback bit ASAP, as the I/O completion for the single
+	 * block per folio case can happen as soon as we submit the bio.
+	 */
+	folio_start_writeback(folio);
 
 	/*
 	 * Walk through the folio to find areas to write back. If we
@@ -1874,18 +1888,13 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		if (ifs && !ifs_block_is_dirty(folio, ifs, i))
 			continue;
 		error = iomap_writepage_map_blocks(wpc, wbc, folio, inode, pos,
-				&count, &submit_list);
+				&count);
 		if (error)
 			break;
 	}
 	if (count)
 		wpc->nr_folios++;
 
-	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
-	WARN_ON_ONCE(!folio_test_locked(folio));
-	WARN_ON_ONCE(folio_test_writeback(folio));
-	WARN_ON_ONCE(folio_test_dirty(folio));
-
 	/*
 	 * We can have dirty bits set past end of file in page_mkwrite path
 	 * while mapping the last partial folio. Hence it's better to clear
@@ -1893,35 +1902,21 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 */
 	iomap_clear_range_dirty(folio, 0, folio_size(folio));
 
-	if (error && !count) {
-		folio_unlock(folio);
-		goto done;
-	}
-
-	folio_start_writeback(folio);
-	folio_unlock(folio);
-
 	/*
-	 * Preserve the original error if there was one; catch
-	 * submission errors here and propagate into subsequent ioend
-	 * submissions.
+	 * Usually the writeback bit is cleared by the I/O completion handler.
+	 * But we may end up either not actually writing any blocks, or (when
+	 * there are multiple blocks in a folio) all I/O might have finished
+	 * already at this point. In that case we need to clear the writeback
+	 * bit ourselves right after unlocking the page.
 	 */
-	list_for_each_entry_safe(ioend, next, &submit_list, io_list) {
-		int error2;
-
-		list_del_init(&ioend->io_list);
-		error2 = iomap_submit_ioend(wpc, ioend, error);
-		if (error2 && !error)
-			error = error2;
+	folio_unlock(folio);
+	if (ifs) {
+		if (atomic_dec_and_test(&ifs->write_bytes_pending))
+			folio_end_writeback(folio);
+	} else {
+		if (!count)
+			folio_end_writeback(folio);
 	}
-
-	/*
-	 * We can end up here with no error and nothing to write only if we
-	 * race with a partial page truncate on a sub-page block sized
-	 * filesystem.
-	 */
-	if (!count)
-		folio_end_writeback(folio);
-done:
 	mapping_set_error(inode->i_mapping, error);
 	return error;
 }
@@ -1941,9 +1936,7 @@ iomap_writepages(struct address_space *mapping, struct writeback_control *wbc,
 	wpc->ops = ops;
 	ret = write_cache_pages(mapping, wbc, iomap_do_writepage, wpc);
-	if (!wpc->ioend)
-		return ret;
-	return iomap_submit_ioend(wpc, wpc->ioend, ret);
+	return iomap_submit_ioend(wpc, ret);
 }
 EXPORT_SYMBOL_GPL(iomap_writepages);
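The heart of this patch is the bias on ifs->write_bytes_pending: holding
an extra count while blocks are being submitted guarantees that no I/O
completion can drop the counter to zero, and hence clear the writeback
bit, before submission of the whole folio has finished. Below is a
minimal userspace model of that technique using C11 atomics; the counter
and the end_writeback/complete_block helpers are invented stand-ins for
the folio machinery, and the "completions" run inline here rather than
from interrupt context.

/* Userspace model of the write_bytes_pending bias. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint write_bytes_pending;

static void end_writeback(void)
{
	puts("writeback bit cleared");
}

/* I/O completion: drops the bytes for one finished block. */
static void complete_block(unsigned len)
{
	/* atomic_fetch_sub returns the old value; old == len means the
	 * counter just hit zero. */
	if (atomic_fetch_sub(&write_bytes_pending, len) == len)
		end_writeback();
}

int main(void)
{
	/* Bias the counter so completions that run while we are still
	 * submitting can never drop it to zero early. */
	atomic_fetch_add(&write_bytes_pending, 1);

	for (int i = 0; i < 4; i++) {
		atomic_fetch_add(&write_bytes_pending, 512);
		complete_block(512);	/* may run "immediately" */
	}

	/* All blocks submitted: drop the bias. Whoever takes the counter
	 * to zero, submitter or completion, ends writeback exactly once. */
	if (atomic_fetch_sub(&write_bytes_pending, 1) == 1)
		end_writeback();
	return 0;
}

Without the initial bias, the first complete_block() call above would see
the counter hit zero and end writeback while submission of the remaining
blocks was still in progress, which is exactly the race the patch closes.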
From patchwork Sun Nov 26 12:47:20 2023
From: Christoph Hellwig
To: Christian Brauner
Cc: "Darrick J. Wong", Chandan Babu R, Zhang Yi, Ritesh Harjani,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 13/13] iomap: map multiple blocks at a time
Date: Sun, 26 Nov 2023 13:47:20 +0100
Message-Id: <20231126124720.1249310-14-hch@lst.de>
In-Reply-To: <20231126124720.1249310-1-hch@lst.de>
References: <20231126124720.1249310-1-hch@lst.de>

The ->map_blocks interface returns a valid range for writeback, but we
still call back into it for every block, which is a bit inefficient.

Change iomap_writepage_map to use the valid range in the map until the
end of the folio or the end of the dirty range inside the folio instead
of calling back into it for every block.

Note that the range is not used over folio boundaries as we need to be
able to check the mapping sequence count under the folio lock.

Signed-off-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 113 ++++++++++++++++++++++++++++-------------
 include/linux/iomap.h  |   7 +++
 2 files changed, 86 insertions(+), 34 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index a01b0686e7c8a0..a98cb38a71ebc8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2010 Red Hat, Inc.
- * Copyright (C) 2016-2019 Christoph Hellwig.
+ * Copyright (C) 2016-2023 Christoph Hellwig.
  */
 #include
 #include
@@ -95,6 +95,42 @@ static inline bool ifs_block_is_dirty(struct folio *folio,
 	return test_bit(block + blks_per_folio, ifs->state);
 }
 
+static unsigned ifs_find_dirty_range(struct folio *folio,
+		struct iomap_folio_state *ifs, u64 *range_start, u64 range_end)
+{
+	struct inode *inode = folio->mapping->host;
+	unsigned start_blk =
+		offset_in_folio(folio, *range_start) >> inode->i_blkbits;
+	unsigned end_blk = min_not_zero(
+		offset_in_folio(folio, range_end) >> inode->i_blkbits,
+		i_blocks_per_folio(inode, folio));
+	unsigned nblks = 1;
+
+	while (!ifs_block_is_dirty(folio, ifs, start_blk))
+		if (++start_blk == end_blk)
+			return 0;
+
+	while (start_blk + nblks < end_blk &&
+	       ifs_block_is_dirty(folio, ifs, start_blk + nblks))
+		nblks++;
+
+	*range_start = folio_pos(folio) + (start_blk << inode->i_blkbits);
+	return nblks << inode->i_blkbits;
+}
+
+static unsigned iomap_find_dirty_range(struct folio *folio, u64 *range_start,
+		u64 range_end)
+{
+	struct iomap_folio_state *ifs = folio->private;
+
+	if (*range_start >= range_end)
+		return 0;
+
+	if (ifs)
+		return ifs_find_dirty_range(folio, ifs, range_start, range_end);
+	return range_end - *range_start;
+}
+
 static void ifs_clear_range_dirty(struct folio *folio,
 		struct iomap_folio_state *ifs, size_t off, size_t len)
 {
@@ -1712,10 +1748,9 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset)
  */
 static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio,
-		struct inode *inode, loff_t pos)
+		struct inode *inode, loff_t pos, unsigned len)
 {
 	struct iomap_folio_state *ifs = folio->private;
-	unsigned len = i_blocksize(inode);
 	size_t poff = offset_in_folio(folio, pos);
 	int error;
 
@@ -1739,28 +1774,41 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
 
 static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio,
-		struct inode *inode, u64 pos, unsigned *count)
+		struct inode *inode, u64 pos, unsigned dirty_len,
+		unsigned *count)
 {
 	int error;
 
-	error = wpc->ops->map_blocks(wpc, inode, pos);
-	if (error)
-		goto fail;
-	trace_iomap_writepage_map(inode, &wpc->iomap);
-
-	switch (wpc->iomap.type) {
-	case IOMAP_INLINE:
-		WARN_ON_ONCE(1);
-		error = -EIO;
-		break;
-	case IOMAP_HOLE:
-		break;
-	default:
-		iomap_add_to_ioend(wpc, wbc, folio, inode, pos);
-		(*count)++;
-	}
+	do {
+		unsigned map_len;
+
+		error = wpc->ops->map_blocks(wpc, inode, pos);
+		if (error)
+			break;
+		trace_iomap_writepage_map(inode, &wpc->iomap);
+
+		map_len = min_t(u64, dirty_len,
+			wpc->iomap.offset + wpc->iomap.length - pos);
+		WARN_ON_ONCE(!folio->private && map_len < dirty_len);
+
+		switch (wpc->iomap.type) {
+		case IOMAP_INLINE:
+			WARN_ON_ONCE(1);
+			error = -EIO;
+			break;
+		case IOMAP_HOLE:
+			break;
+		default:
+			error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos,
+					map_len);
+			if (!error)
+				(*count)++;
+			break;
+		}
+		dirty_len -= map_len;
+		pos += map_len;
+	} while (dirty_len && !error);
 
-fail:
 	/*
 	 * We cannot cancel the ioend directly here on error. We may have
 	 * already set other pages under writeback and hence we have to run I/O
@@ -1827,7 +1875,7 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
 	 * beyond i_size.
 	 */
 	folio_zero_segment(folio, poff, folio_size(folio));
-	*end_pos = isize;
+	*end_pos = round_up(isize, i_blocksize(inode));
 	}
 
 	return true;
@@ -1838,12 +1886,11 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 {
 	struct iomap_folio_state *ifs = folio->private;
 	struct inode *inode = folio->mapping->host;
-	unsigned len = i_blocksize(inode);
-	unsigned nblocks = i_blocks_per_folio(inode, folio);
 	u64 pos = folio_pos(folio);
 	u64 end_pos = pos + folio_size(folio);
 	unsigned count = 0;
-	int error = 0, i;
+	int error = 0;
+	u32 rlen;
 
 	WARN_ON_ONCE(!folio_test_locked(folio));
 	WARN_ON_ONCE(folio_test_dirty(folio));
@@ -1857,7 +1904,7 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	}
 	WARN_ON_ONCE(end_pos <= pos);
 
-	if (nblocks > 1) {
+	if (i_blocks_per_folio(inode, folio) > 1) {
 		if (!ifs) {
 			ifs = ifs_alloc(inode, folio, 0);
 			iomap_set_range_dirty(folio, 0, end_pos - pos);
@@ -1880,18 +1927,16 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	folio_start_writeback(folio);
 
 	/*
-	 * Walk through the folio to find areas to write back. If we
-	 * run off the end of the current map or find the current map
-	 * invalid, grab a new one.
+	 * Walk through the folio to find dirty areas to write back.
 	 */
-	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
-		if (ifs && !ifs_block_is_dirty(folio, ifs, i))
-			continue;
-		error = iomap_writepage_map_blocks(wpc, wbc, folio, inode, pos,
-				&count);
+	while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) {
+		error = iomap_writepage_map_blocks(wpc, wbc, folio, inode,
+				pos, rlen, &count);
 		if (error)
 			break;
+		pos += rlen;
 	}
+
 	if (count)
 		wpc->nr_folios++;
 
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index b8d3b658ad2b03..49d93f53878565 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -309,6 +309,13 @@ struct iomap_writeback_ops {
 	/*
 	 * Required, maps the blocks so that writeback can be performed on
 	 * the range starting at offset.
+	 *
+	 * Can return arbitrarily large regions, but we need to call into it at
+	 * least once per folio to allow the file system to synchronize with
+	 * the write path that could be invalidating mappings.
+	 *
+	 * An existing mapping from a previous call to this method can be
+	 * reused by the file system if it is still valid.
 	 */
 	int (*map_blocks)(struct iomap_writepage_ctx *wpc, struct inode *inode,
 			loff_t offset);
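The new ifs_find_dirty_range is a run-length scan over the per-block
dirty bitmap: skip clean blocks, then extend over consecutive dirty
ones, so ->map_blocks is asked about whole dirty extents rather than one
block at a time. Here is a minimal userspace model of that scan; the
bool array stands in for ifs->state, block indices replace byte offsets,
and find_dirty_range plus the sample bitmap are invented for this
sketch.

/* Userspace model of the dirty-range scan. */
#include <stdbool.h>
#include <stdio.h>

/* Returns the length (in blocks) of the next dirty run at or after
 * *start_blk, advancing *start_blk to the run's first block; returns 0
 * if no dirty block remains before end_blk. */
static unsigned find_dirty_range(const bool *dirty, unsigned *start_blk,
		unsigned end_blk)
{
	unsigned nblks = 1;

	while (!dirty[*start_blk])
		if (++(*start_blk) == end_blk)
			return 0;

	while (*start_blk + nblks < end_blk && dirty[*start_blk + nblks])
		nblks++;
	return nblks;
}

int main(void)
{
	bool dirty[8] = { false, true, true, false, true, false, false, true };
	unsigned blk = 0, len;

	/* Walk the folio: each iteration yields one contiguous dirty
	 * range, like the while loop in iomap_writepage_map. */
	while (blk < 8 && (len = find_dirty_range(dirty, &blk, 8))) {
		printf("write back blocks [%u, %u)\n", blk, blk + len);
		blk += len;
	}
	return 0;
}

For the bitmap above this prints the runs [1, 3), [4, 5) and [7, 8),
i.e. three calls into the mapping path instead of eight per-block calls.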