From patchwork Thu Jul 13 03:55:06 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", David Sterba, linux-fsdevel@vger.kernel.org,
    Pankaj Raghav, Konstantin Komarov, ntfs3@lists.linux.dev,
    "Theodore Ts'o", Jan Kara, linux-ext4@vger.kernel.org
Subject: [PATCH 1/7] highmem: Add memcpy_to_folio() and memcpy_from_folio()
Date: Thu, 13 Jul 2023 04:55:06 +0100
Message-Id: <20230713035512.4139457-2-willy@infradead.org>
In-Reply-To: <20230713035512.4139457-1-willy@infradead.org>
References: <20230713035512.4139457-1-willy@infradead.org>

These are the folio equivalents of memcpy_to_page() and
memcpy_from_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/highmem.h | 44 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 68da30625a6c..0280f57d4744 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -439,6 +439,50 @@ static inline void memzero_page(struct page *page, size_t offset, size_t len)
 	kunmap_local(addr);
 }
 
+static inline void memcpy_from_folio(char *to, struct folio *folio,
+		size_t offset, size_t len)
+{
+	VM_BUG_ON(offset + len > folio_size(folio));
+
+	do {
+		char *from = kmap_local_folio(folio, offset);
+		size_t chunk = len;
+
+		if (folio_test_highmem(folio) &&
+		    (chunk > (PAGE_SIZE - offset_in_page(offset))))
+			chunk = PAGE_SIZE - offset_in_page(offset);
+		memcpy(to, from, chunk);
+		kunmap_local(from);
+
+		to += chunk;
+		offset += chunk;
+		len -= chunk;
+	} while (len > 0);
+}
+
+static inline void memcpy_to_folio(struct folio *folio, size_t offset,
+		const char *from, size_t len)
+{
+	VM_BUG_ON(offset + len > folio_size(folio));
+
+	do {
+		char *to = kmap_local_folio(folio, offset);
+		size_t chunk = len;
+
+		if (folio_test_highmem(folio) &&
+		    (chunk > (PAGE_SIZE - offset_in_page(offset))))
+			chunk = PAGE_SIZE - offset_in_page(offset);
+		memcpy(to, from, chunk);
+		kunmap_local(to);
+
+		from += chunk;
+		offset += chunk;
+		len -= chunk;
+	} while (len > 0);
+
+	flush_dcache_folio(folio);
+}
+
 /**
  * memcpy_from_file_folio - Copy some bytes from a file folio.
  * @to: The destination buffer.
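
[Editor's aside, not part of the patch: a minimal usage sketch of the two
new helpers. The names buf, pos and folio are illustrative; the offset is
chosen to straddle a page boundary inside a large folio, which is exactly
the case the chunked kmap loop above exists to handle.]

	char buf[512];
	size_t pos = PAGE_SIZE - 256;	/* crosses into the folio's second page */

	/* Copy into the folio, then back out; when the folio comes from
	 * highmem, each helper maps at most one page at a time. */
	memcpy_to_folio(folio, pos, buf, sizeof(buf));
	memcpy_from_folio(buf, folio, pos, sizeof(buf));
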
Tso" , Jan Kara , linux-ext4@vger.kernel.org Subject: [PATCH 2/7] affs: Convert affs_symlink_read_folio() to use the folio Date: Thu, 13 Jul 2023 04:55:07 +0100 Message-Id: <20230713035512.4139457-3-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230713035512.4139457-1-willy@infradead.org> References: <20230713035512.4139457-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Remove use of the old page APIs. That includes use of setting PageError on error; simply not setting the uptodate flag is sufficient. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: David Sterba --- fs/affs/symlink.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/fs/affs/symlink.c b/fs/affs/symlink.c index 31d6446dc166..094aec8d17b8 100644 --- a/fs/affs/symlink.c +++ b/fs/affs/symlink.c @@ -13,10 +13,9 @@ static int affs_symlink_read_folio(struct file *file, struct folio *folio) { - struct page *page = &folio->page; struct buffer_head *bh; - struct inode *inode = page->mapping->host; - char *link = page_address(page); + struct inode *inode = folio->mapping->host; + char *link = folio_address(folio); struct slink_front *lf; int i, j; char c; @@ -58,12 +57,11 @@ static int affs_symlink_read_folio(struct file *file, struct folio *folio) } link[i] = '\0'; affs_brelse(bh); - SetPageUptodate(page); - unlock_page(page); + folio_mark_uptodate(folio); + folio_unlock(folio); return 0; fail: - SetPageError(page); - unlock_page(page); + folio_unlock(folio); return -EIO; } From patchwork Thu Jul 13 03:55:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13311251 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CAFCDC04A94 for ; Thu, 13 Jul 2023 03:55:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233772AbjGMDzl (ORCPT ); Wed, 12 Jul 2023 23:55:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233770AbjGMDzh (ORCPT ); Wed, 12 Jul 2023 23:55:37 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A78F1FDA; Wed, 12 Jul 2023 20:55:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=DeCWJbZIEI3UvuMa+3l4cfkxIkNgFoQ48F8WN4E7png=; b=TxD9GhB08ZWtw0Hr2RVcx4E3AB 4qqUDV+cZV4jTdL4GhPuXmr64YTgId79iybVm/wVJ0Lm+UcP0MtU4k94GzKxwPbyO9fZfQngssVtE iGigyXH7jujGSPlLxHLEX26PVTAO6TcnAecExKnOKkFP/9N/OolPoSTxeZ+tUJY05W9yOtli5AxRt WJhPzopW8L5Ri/0KkZnC6XS8ZaXTAr969hhvQUuq7vSFAlqXkxFbQqoEGlcxvXxYajIByimQ1NDCw 1D+z1Pd7Bv0R15YA0rfAVbG6U9whoTVPapm1cohGD1Sglp37efyaZFugge/1m43woXEkaHCTZLw46 UiGBlq5w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1qJnQA-00HMri-Dh; Thu, 13 Jul 2023 03:55:14 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , David Sterba , linux-fsdevel@vger.kernel.org, Pankaj Raghav , Konstantin 
From patchwork Thu Jul 13 03:55:08 2023
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 3/7] affs: Convert data read and write to use folios
Date: Thu, 13 Jul 2023 04:55:08 +0100
Message-Id: <20230713035512.4139457-4-willy@infradead.org>

We still need to convert to/from folios in write_begin & write_end to fit
the API, but this removes a lot of calls to the old page-based functions,
and with them many hidden calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Pankaj Raghav
Acked-by: David Sterba
---
 fs/affs/file.c | 77 +++++++++++++++++++++++++-------------------------
 1 file changed, 38 insertions(+), 39 deletions(-)

diff --git a/fs/affs/file.c b/fs/affs/file.c
index e43f2f007ac1..705e227ff63d 100644
--- a/fs/affs/file.c
+++ b/fs/affs/file.c
@@ -520,21 +520,20 @@ affs_getemptyblk_ino(struct inode *inode, int block)
 	return ERR_PTR(err);
 }
 
-static int
-affs_do_readpage_ofs(struct page *page, unsigned to, int create)
+static int affs_do_read_folio_ofs(struct folio *folio, size_t to, int create)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct super_block *sb = inode->i_sb;
 	struct buffer_head *bh;
-	unsigned pos = 0;
-	u32 bidx, boff, bsize;
+	size_t pos = 0;
+	size_t bidx, boff, bsize;
 	u32 tmp;
 
-	pr_debug("%s(%lu, %ld, 0, %d)\n", __func__, inode->i_ino,
-		 page->index, to);
-	BUG_ON(to > PAGE_SIZE);
+	pr_debug("%s(%lu, %ld, 0, %zu)\n", __func__, inode->i_ino,
+		 folio->index, to);
+	BUG_ON(to > folio_size(folio));
 	bsize = AFFS_SB(sb)->s_data_blksize;
-	tmp = page->index << PAGE_SHIFT;
+	tmp = folio_pos(folio);
 	bidx = tmp / bsize;
 	boff = tmp % bsize;
 
@@ -544,7 +543,7 @@ affs_do_readpage_ofs(struct page *page, unsigned to, int create)
 			return PTR_ERR(bh);
 		tmp = min(bsize - boff, to - pos);
 		BUG_ON(pos + tmp > to || tmp > bsize);
-		memcpy_to_page(page, pos, AFFS_DATA(bh) + boff, tmp);
+		memcpy_to_folio(folio, pos, AFFS_DATA(bh) + boff, tmp);
 		affs_brelse(bh);
 		bidx++;
 		pos += tmp;
@@ -624,25 +623,23 @@ affs_extent_file_ofs(struct inode *inode, u32 newsize)
 	return PTR_ERR(bh);
 }
 
-static int
-affs_read_folio_ofs(struct file *file, struct folio *folio)
+static int affs_read_folio_ofs(struct file *file, struct folio *folio)
 {
-	struct page *page = &folio->page;
-	struct inode *inode = page->mapping->host;
-	u32 to;
+	struct inode *inode = folio->mapping->host;
+	size_t to;
 	int err;
 
-	pr_debug("%s(%lu, %ld)\n", __func__, inode->i_ino, page->index);
-	to = PAGE_SIZE;
-	if (((page->index + 1) << PAGE_SHIFT) > inode->i_size) {
-		to = inode->i_size & ~PAGE_MASK;
-		memset(page_address(page) + to, 0, PAGE_SIZE - to);
+	pr_debug("%s(%lu, %ld)\n", __func__, inode->i_ino, folio->index);
+	to = folio_size(folio);
+	if (folio_pos(folio) + to > inode->i_size) {
+		to = inode->i_size - folio_pos(folio);
+		folio_zero_segment(folio, to, folio_size(folio));
 	}
 
-	err = affs_do_readpage_ofs(page, to, 0);
+	err = affs_do_read_folio_ofs(folio, to, 0);
 	if (!err)
-		SetPageUptodate(page);
-	unlock_page(page);
+		folio_mark_uptodate(folio);
+	folio_unlock(folio);
 	return err;
 }
 
@@ -651,7 +648,7 @@ static int affs_write_begin_ofs(struct file *file, struct address_space *mapping
 				struct page **pagep, void **fsdata)
 {
 	struct inode *inode = mapping->host;
-	struct page *page;
+	struct folio *folio;
 	pgoff_t index;
 	int err = 0;
@@ -667,19 +664,20 @@ static int affs_write_begin_ofs(struct file *file, struct address_space *mapping
 	}
 
 	index = pos >> PAGE_SHIFT;
-	page = grab_cache_page_write_begin(mapping, index);
-	if (!page)
-		return -ENOMEM;
-	*pagep = page;
+	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+			mapping_gfp_mask(mapping));
+	if (IS_ERR(folio))
+		return PTR_ERR(folio);
+	*pagep = &folio->page;
 
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		return 0;
 
 	/* XXX: inefficient but safe in the face of short writes */
-	err = affs_do_readpage_ofs(page, PAGE_SIZE, 1);
+	err = affs_do_read_folio_ofs(folio, folio_size(folio), 1);
 	if (err) {
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 	return err;
 }
@@ -688,6 +686,7 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
 			loff_t pos, unsigned len, unsigned copied,
 			struct page *page, void *fsdata)
 {
+	struct folio *folio = page_folio(page);
 	struct inode *inode = mapping->host;
 	struct super_block *sb = inode->i_sb;
 	struct buffer_head *bh, *prev_bh;
@@ -701,18 +700,18 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
 	to = from + len;
 	/*
 	 * XXX: not sure if this can handle short copies (len < copied), but
-	 * we don't have to, because the page should always be uptodate here,
+	 * we don't have to, because the folio should always be uptodate here,
 	 * due to write_begin.
 	 */
 	pr_debug("%s(%lu, %llu, %llu)\n", __func__, inode->i_ino, pos,
		 pos + len);
 	bsize = AFFS_SB(sb)->s_data_blksize;
-	data = page_address(page);
+	data = folio_address(folio);
 
 	bh = NULL;
 	written = 0;
-	tmp = (page->index << PAGE_SHIFT) + from;
+	tmp = (folio->index << PAGE_SHIFT) + from;
 	bidx = tmp / bsize;
 	boff = tmp % bsize;
 	if (boff) {
@@ -804,11 +803,11 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
 		from += tmp;
 		bidx++;
 	}
-	SetPageUptodate(page);
+	folio_mark_uptodate(folio);
 
 done:
 	affs_brelse(bh);
-	tmp = (page->index << PAGE_SHIFT) + from;
+	tmp = (folio->index << PAGE_SHIFT) + from;
 	if (tmp > inode->i_size)
 		inode->i_size = AFFS_I(inode)->mmu_private = tmp;
 
@@ -819,8 +818,8 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
 	}
 
 err_first_bh:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 
 	return written;
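
[Editor's aside, not part of the patch: the "hidden calls to
compound_head()" mentioned in the commit message come from the legacy
page-flag helpers, which must resolve a possible tail page on every call.
A folio is by definition never a tail page, so the folio helpers skip
that lookup. Schematically:]

	/* old page API: each call resolves a possible tail page */
	SetPageUptodate(page);		/* does compound_head(page) internally */

	/* folio API: convert once up front, then no hidden lookups */
	struct folio *folio = page_folio(page);
	folio_mark_uptodate(folio);
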
From patchwork Thu Jul 13 03:55:09 2023
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 4/7] migrate: Use folio_set_bh() instead of set_bh_page()
Date: Thu, 13 Jul 2023 04:55:09 +0100
Message-Id: <20230713035512.4139457-5-willy@infradead.org>

This function was converted before folio_set_bh() existed.  Catch up to
the new API.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
---
 mm/migrate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index af8557d78549..1363053894ce 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -773,7 +773,7 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 
 	bh = head;
 	do {
-		set_bh_page(bh, &dst->page, bh_offset(bh));
+		folio_set_bh(bh, dst, bh_offset(bh));
 		bh = bh->b_this_page;
 	} while (bh != head);
From patchwork Thu Jul 13 03:55:10 2023
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 5/7] ntfs3: Convert ntfs_get_block_vbo() to use a folio
Date: Thu, 13 Jul 2023 04:55:10 +0100
Message-Id: <20230713035512.4139457-6-willy@infradead.org>

Remove a user of set_bh_page().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ntfs3/inode.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
index dc7e7ab701c6..8ae572aacc69 100644
--- a/fs/ntfs3/inode.c
+++ b/fs/ntfs3/inode.c
@@ -554,7 +554,7 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
 	struct super_block *sb = inode->i_sb;
 	struct ntfs_sb_info *sbi = sb->s_fs_info;
 	struct ntfs_inode *ni = ntfs_i(inode);
-	struct page *page = bh->b_page;
+	struct folio *folio = bh->b_folio;
 	u8 cluster_bits = sbi->cluster_bits;
 	u32 block_size = sb->s_blocksize;
 	u64 bytes, lbo, valid;
@@ -569,7 +569,7 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
 
 	if (is_resident(ni)) {
 		ni_lock(ni);
-		err = attr_data_read_resident(ni, page);
+		err = attr_data_read_resident(ni, &folio->page);
 		ni_unlock(ni);
 
 		if (!err)
@@ -642,17 +642,17 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
 		 */
 		bytes = block_size;
 
-		if (page) {
+		if (folio) {
 			u32 voff = valid - vbo;
 
 			bh->b_size = block_size;
 			off = vbo & (PAGE_SIZE - 1);
-			set_bh_page(bh, page, off);
+			folio_set_bh(bh, folio, off);
 
 			err = bh_read(bh, 0);
 			if (err < 0)
 				goto out;
-			zero_user_segment(page, off + voff, off + block_size);
+			folio_zero_segment(folio, off + voff, off + block_size);
 		}
 	}
From patchwork Thu Jul 13 03:55:11 2023
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 6/7] jbd2: Use a folio in jbd2_journal_write_metadata_buffer()
Date: Thu, 13 Jul 2023 04:55:11 +0100
Message-Id: <20230713035512.4139457-7-willy@infradead.org>

The primary goal here is removing the use of set_bh_page().  Take the
opportunity to switch from kmap_atomic() to kmap_local().  This
simplifies the function as the offset is already added to the pointer.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
---
 fs/jbd2/journal.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index fbce16fedaa4..1b5a45ab62b0 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -341,7 +341,7 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 	int do_escape = 0;
 	char *mapped_data;
 	struct buffer_head *new_bh;
-	struct page *new_page;
+	struct folio *new_folio;
 	unsigned int new_offset;
 	struct buffer_head *bh_in = jh2bh(jh_in);
 	journal_t *journal = transaction->t_journal;
@@ -370,14 +370,14 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 	 */
 	if (jh_in->b_frozen_data) {
 		done_copy_out = 1;
-		new_page = virt_to_page(jh_in->b_frozen_data);
-		new_offset = offset_in_page(jh_in->b_frozen_data);
+		new_folio = virt_to_folio(jh_in->b_frozen_data);
+		new_offset = offset_in_folio(new_folio, jh_in->b_frozen_data);
 	} else {
-		new_page = jh2bh(jh_in)->b_page;
-		new_offset = offset_in_page(jh2bh(jh_in)->b_data);
+		new_folio = jh2bh(jh_in)->b_folio;
+		new_offset = offset_in_folio(new_folio, jh2bh(jh_in)->b_data);
 	}
 
-	mapped_data = kmap_atomic(new_page);
+	mapped_data = kmap_local_folio(new_folio, new_offset);
 	/*
 	 * Fire data frozen trigger if data already wasn't frozen.  Do this
	 * before checking for escaping, as the trigger may modify the magic
@@ -385,18 +385,17 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 	 * data in the buffer.
 	 */
 	if (!done_copy_out)
-		jbd2_buffer_frozen_trigger(jh_in, mapped_data + new_offset,
+		jbd2_buffer_frozen_trigger(jh_in, mapped_data,
 					   jh_in->b_triggers);
 
 	/*
 	 * Check for escaping
 	 */
-	if (*((__be32 *)(mapped_data + new_offset)) ==
-	    cpu_to_be32(JBD2_MAGIC_NUMBER)) {
+	if (*((__be32 *)mapped_data) == cpu_to_be32(JBD2_MAGIC_NUMBER)) {
 		need_copy_out = 1;
 		do_escape = 1;
 	}
-	kunmap_atomic(mapped_data);
+	kunmap_local(mapped_data);
 
 	/*
 	 * Do we need to do a data copy?
@@ -417,12 +416,10 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 		}
 
 		jh_in->b_frozen_data = tmp;
-		mapped_data = kmap_atomic(new_page);
-		memcpy(tmp, mapped_data + new_offset, bh_in->b_size);
-		kunmap_atomic(mapped_data);
+		memcpy_from_folio(tmp, new_folio, new_offset, bh_in->b_size);
 
-		new_page = virt_to_page(tmp);
-		new_offset = offset_in_page(tmp);
+		new_folio = virt_to_folio(tmp);
+		new_offset = offset_in_folio(new_folio, tmp);
 		done_copy_out = 1;
 
 		/*
@@ -438,12 +435,12 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 	 * copying, we can finally do so.
 	 */
 	if (do_escape) {
-		mapped_data = kmap_atomic(new_page);
-		*((unsigned int *)(mapped_data + new_offset)) = 0;
-		kunmap_atomic(mapped_data);
+		mapped_data = kmap_local_folio(new_folio, new_offset);
+		*((unsigned int *)mapped_data) = 0;
+		kunmap_local(mapped_data);
 	}
 
-	set_bh_page(new_bh, new_page, new_offset);
+	folio_set_bh(new_bh, new_folio, new_offset);
 	new_bh->b_size = bh_in->b_size;
 	new_bh->b_bdev = journal->j_dev;
 	new_bh->b_blocknr = blocknr;
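
[Editor's aside, not part of the patch: the simplification the commit
message describes. kmap_atomic() maps a page, so the byte offset had to
be added by hand at every use; kmap_local_folio() takes the offset as an
argument and returns a pointer that already points at it. A sketch, with
use() a hypothetical consumer:]

	/* old */
	mapped_data = kmap_atomic(new_page);
	use(mapped_data + new_offset);
	kunmap_atomic(mapped_data);

	/* new */
	mapped_data = kmap_local_folio(new_folio, new_offset);
	use(mapped_data);
	kunmap_local(mapped_data);
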
From patchwork Thu Jul 13 03:55:12 2023
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 7/7] buffer: Remove set_bh_page()
Date: Thu, 13 Jul 2023 04:55:12 +0100
Message-Id: <20230713035512.4139457-8-willy@infradead.org>

With all users converted to folio_set_bh(), remove this function.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
---
 fs/buffer.c                 | 15 ---------------
 include/linux/buffer_head.h |  2 --
 2 files changed, 17 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 587e4d4af9de..f0563ebae75f 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1539,21 +1539,6 @@ void invalidate_bh_lrus_cpu(void)
 	bh_lru_unlock();
 }
 
-void set_bh_page(struct buffer_head *bh,
-		struct page *page, unsigned long offset)
-{
-	bh->b_page = page;
-	BUG_ON(offset >= PAGE_SIZE);
-	if (PageHighMem(page))
-		/*
-		 * This catches illegal uses and preserves the offset:
-		 */
-		bh->b_data = (char *)(0 + offset);
-	else
-		bh->b_data = page_address(page) + offset;
-}
-EXPORT_SYMBOL(set_bh_page);
-
 void folio_set_bh(struct buffer_head *bh, struct folio *folio,
 		  unsigned long offset)
 {
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index a7377877ff4e..06566aee94ca 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -194,8 +194,6 @@ void buffer_check_dirty_writeback(struct folio *folio,
 void mark_buffer_dirty(struct buffer_head *bh);
 void mark_buffer_write_io_error(struct buffer_head *bh);
 void touch_buffer(struct buffer_head *bh);
-void set_bh_page(struct buffer_head *bh,
-		struct page *page, unsigned long offset);
 void folio_set_bh(struct buffer_head *bh, struct folio *folio,
 		  unsigned long offset);
 bool try_to_free_buffers(struct folio *);
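
[Editor's aside, not part of the patch: for any remaining out-of-tree
callers, the conversion is mechanical, mirroring the change made to
__buffer_migrate_folio() in patch 4 of this series:]

	/* before */
	set_bh_page(bh, page, offset);

	/* after */
	folio_set_bh(bh, page_folio(page), offset);
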