From patchwork Sun Jan 8 16:56:45 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13092626
From: Christoph Hellwig
To: Andrew Morton, Chris Mason, Josef Bacik, David Sterba, Dave Kleikamp,
	Mark Fasheh, Joel Becker, Joseph Qi, Evgeniy Dushistov,
	"Matthew Wilcox (Oracle)"
Cc: linux-btrfs@vger.kernel.org, jfs-discussion@lists.sourceforge.net,
	ocfs2-devel@oss.oracle.com, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 7/7] mm,jfs: move write_one_page/folio_write_one to jfs
Date: Sun, 8 Jan 2023 17:56:45 +0100
Message-Id: <20230108165645.381077-8-hch@lst.de>
In-Reply-To: <20230108165645.381077-1-hch@lst.de>
References: <20230108165645.381077-1-hch@lst.de>

The last remaining user of folio_write_one through the write_one_page
wrapper is jfs, so move the functionality there and hard-code the call
to metapage_writepage.

Note that the use of the pagecache by the jfs 'metapage' buffer cache
is a bit odd, and we could probably do without VM-level dirty tracking
at all, but that's a change for another time.
Signed-off-by: Christoph Hellwig
---
 fs/jfs/jfs_metapage.c   | 39 ++++++++++++++++++++++++++++++++++-----
 include/linux/pagemap.h |  6 ------
 mm/page-writeback.c     | 40 ----------------------------------------
 3 files changed, 34 insertions(+), 51 deletions(-)

diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 2e8461ce74de69..961569c1115901 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -691,6 +691,35 @@ void grab_metapage(struct metapage * mp)
 	unlock_page(mp->page);
 }
 
+static int metapage_write_one(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio->mapping;
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_ALL,
+		.nr_to_write = folio_nr_pages(folio),
+	};
+	int ret = 0;
+
+	BUG_ON(!folio_test_locked(folio));
+
+	folio_wait_writeback(folio);
+
+	if (folio_clear_dirty_for_io(folio)) {
+		folio_get(folio);
+		ret = metapage_writepage(page, &wbc);
+		if (ret == 0)
+			folio_wait_writeback(folio);
+		folio_put(folio);
+	} else {
+		folio_unlock(folio);
+	}
+
+	if (!ret)
+		ret = filemap_check_errors(mapping);
+	return ret;
+}
+
 void force_metapage(struct metapage *mp)
 {
 	struct page *page = mp->page;
@@ -700,8 +729,8 @@ void force_metapage(struct metapage *mp)
 	get_page(page);
 	lock_page(page);
 	set_page_dirty(page);
-	if (write_one_page(page))
-		jfs_error(mp->sb, "write_one_page() failed\n");
+	if (metapage_write_one(page))
+		jfs_error(mp->sb, "metapage_write_one() failed\n");
 	clear_bit(META_forcewrite, &mp->flag);
 	put_page(page);
 }
@@ -746,9 +775,9 @@ void release_metapage(struct metapage * mp)
 		set_page_dirty(page);
 		if (test_bit(META_sync, &mp->flag)) {
 			clear_bit(META_sync, &mp->flag);
-			if (write_one_page(page))
-				jfs_error(mp->sb, "write_one_page() failed\n");
-			lock_page(page); /* write_one_page unlocks the page */
+			if (metapage_write_one(page))
+				jfs_error(mp->sb, "metapage_write_one() failed\n");
+			lock_page(page);
 		}
 	} else if (mp->lsn)	/* discard_metapage doesn't remove it */
 		remove_from_logsync(mp);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 29e1f9e76eb6dd..4b3a7124c76712 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1062,12 +1062,6 @@ static inline void folio_cancel_dirty(struct folio *folio)
 bool folio_clear_dirty_for_io(struct folio *folio);
 bool clear_page_dirty_for_io(struct page *page);
 void folio_invalidate(struct folio *folio, size_t offset, size_t length);
-int __must_check folio_write_one(struct folio *folio);
-static inline int __must_check write_one_page(struct page *page)
-{
-	return folio_write_one(page_folio(page));
-}
-
 int __set_page_dirty_nobuffers(struct page *page);
 
 bool noop_dirty_folio(struct address_space *mapping, struct folio *folio);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index dfeeceebba0ae0..2430fd09607742 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2581,46 +2581,6 @@ int do_writepages(struct address_space *mapping, struct writeback_control *wbc)
 	return ret;
 }
 
-/**
- * folio_write_one - write out a single folio and wait on I/O.
- * @folio: The folio to write.
- *
- * The folio must be locked by the caller and will be unlocked upon return.
- *
- * Note that the mapping's AS_EIO/AS_ENOSPC flags will be cleared when this
- * function returns.
- *
- * Return: %0 on success, negative error code otherwise
- */
-int folio_write_one(struct folio *folio)
-{
-	struct address_space *mapping = folio->mapping;
-	int ret = 0;
-	struct writeback_control wbc = {
-		.sync_mode = WB_SYNC_ALL,
-		.nr_to_write = folio_nr_pages(folio),
-	};
-
-	BUG_ON(!folio_test_locked(folio));
-
-	folio_wait_writeback(folio);
-
-	if (folio_clear_dirty_for_io(folio)) {
-		folio_get(folio);
-		ret = mapping->a_ops->writepage(&folio->page, &wbc);
-		if (ret == 0)
-			folio_wait_writeback(folio);
-		folio_put(folio);
-	} else {
-		folio_unlock(folio);
-	}
-
-	if (!ret)
-		ret = filemap_check_errors(mapping);
-	return ret;
-}
-EXPORT_SYMBOL(folio_write_one);
-
 /*
  * For address_spaces which do not use buffers nor write back.
  */
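For reference, the calling pattern the new helper expects is the one
force_metapage() above already follows. A minimal sketch of a
hypothetical caller (not part of the patch; "page" and "mp" stand for
a jfs metapage page and its struct metapage, as in the code above):

	/* Keep a reference so the page cannot go away under us. */
	get_page(page);

	/* metapage_write_one() wants the page locked and dirty. */
	lock_page(page);
	set_page_dirty(page);

	/*
	 * Writes the page with WB_SYNC_ALL, waits for writeback to
	 * complete, unlocks the page, and reports any deferred
	 * AS_EIO/AS_ENOSPC error via filemap_check_errors().
	 */
	if (metapage_write_one(page))
		jfs_error(mp->sb, "metapage_write_one() failed\n");

	put_page(page);

One behavioral difference from the removed generic helper: where
folio_write_one() dispatched through mapping->a_ops->writepage and so
worked for any address_space, metapage_write_one() calls
metapage_writepage() directly and is therefore only valid for pages
owned by the jfs metapage cache.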