From patchwork Fri Apr 29 17:24:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832473 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE977C433F5 for ; Fri, 29 Apr 2022 17:26:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379511AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42436 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376592AbiD2R3X (ORCPT ); Fri, 29 Apr 2022 13:29:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E131C972F5 for ; Fri, 29 Apr 2022 10:26:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=vbsx+quejBFeLn9wOCiR2ga53RNajVJdqgmVEXZMboY=; b=b8oGoj6wVBJub8eSoP8yBLt8Ol rVsP718EmtPw0YpCM2P1WLPZ7j5u5gOhGb1vcEV5naEPw1hlU7n2PIi7oWay1GNO8y/HpNGbJQT7+ KARPkmuk6LCD5xTqOL426E/UaX5/vOusQ5o8CVM76gLOQOVeBfONkbCBQtV+fqstNl2n4NI2nr2xL yXRORK/1GHPLt0le6PJ6QdNkliFVcoLcDWfTPQR7sfGAkcHzmdRuFNBqDQ5/q1dCc3aomrbQsy17S ogST8abjnPG7zb8WPuakbIKPa3Bo7PPNj47ndvGGNba1Z6ExvCK3MruHFdsZ1V0pFygiopjeuQ0ho Jm5ZbC3A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdX1-4R; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 01/69] scsicam: Fix use of page cache Date: Fri, 29 Apr 2022 18:24:48 +0100 Message-Id: <20220429172556.3011843-2-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Filesystems do not necessarily set PageError; instead they will leave PageUptodate clear on errors. We should also kmap() the page before accessing it in case the page is allocated from HIGHMEM. 
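For reference, the idiom the fix below adopts is the standard one for consuming data from a page returned by read_mapping_page(): treat a page that is not Uptodate as a failed read, and map the page before dereferencing it so the code also works when the page lives in highmem. A minimal sketch of that pattern (buf, offset and n are illustrative, not part of the driver):

	struct page *page = read_mapping_page(mapping, 0, NULL);

	if (IS_ERR(page))
		return NULL;
	if (PageUptodate(page)) {
		char *addr = kmap_local_page(page);	/* safe for HIGHMEM pages */

		memcpy(buf, addr + offset, n);		/* only touch the data while mapped */
		kunmap_local(addr);
	}
	put_page(page);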
Signed-off-by: Matthew Wilcox (Oracle) --- drivers/scsi/scsicam.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c index acdc0aceca5e..baba801895df 100644 --- a/drivers/scsi/scsicam.c +++ b/drivers/scsi/scsicam.c @@ -40,8 +40,12 @@ unsigned char *scsi_bios_ptable(struct block_device *dev) if (IS_ERR(page)) return NULL; - if (!PageError(page)) - res = kmemdup(page_address(page) + 0x1be, 66, GFP_KERNEL); + if (PageUptodate(page)) { + char *addr = kmap_local_page(page); + + res = kmemdup(addr + 0x1be, 66, GFP_KERNEL); + kunmap_local(addr); + } put_page(page); return res; } From patchwork Fri Apr 29 17:24:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B470DC433EF for ; Fri, 29 Apr 2022 17:26:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379647AbiD2R34 (ORCPT ); Fri, 29 Apr 2022 13:29:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1378214AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8955B9F3AD for ; Fri, 29 Apr 2022 10:26:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=+FJqpBnjKvkYcFghyH0RiRG/N81yoH33P8bIUw3ARaI=; b=mkd8v8p8LvF//ofhTfiImSMMxs NCDpcbuKIEPF5/YEvERIfZapwWX1relZn33EoXF9lk6GKlGGqHP+FM6Zv2JQmU9gxllRMG/TE4kxW /JOrHTxyynIGl0M2Ecnr9pd5sOkYPYVn9zvNP9KfphSkhZ1xRJ3RoCEeFr4w11nF1jMg7aeXQNKbI EU18qw7pJHfPv33W9PlVbYP8VqjxbmDJ4RtynjEBFeMoFCiyzgRNj5RZSd1pTlRNIoRQMkp0uL/l/ vib0GGU09JAW8xig04ZSLAA/hkIkk7RBrm4NJQD5sxccyL3zHaPTmdXwBS4gqEXJJdzPBZbsJuTiT dZLtM34Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdX6-9M; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 02/69] ext4: Use page_symlink() instead of __page_symlink() Date: Fri, 29 Apr 2022 18:24:49 +0100 Message-Id: <20220429172556.3011843-3-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org By using the memalloc_nofs_save() functionality, we can call page_symlink(), safe in the knowledge that it won't recurse into the filesystem. 
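The scoped memory API operates on the calling task rather than on individual allocation sites: every allocation made between memalloc_nofs_save() and memalloc_nofs_restore() implicitly has __GFP_FS stripped, including allocations performed deep inside helpers such as page_symlink() that never see an AOP flag. The shape of the conversion, roughly as in the hunk below:

	unsigned int flags;

	flags = memalloc_nofs_save();	/* allocations in this scope are implicitly GFP_NOFS */
	err = page_symlink(inode, disk_link.name, disk_link.len);
	memalloc_nofs_restore(flags);	/* back to the previous allocation context */
	if (err)
		goto err_drop_inode;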
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/ext4/namei.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 767b4bfe39c3..1e7c5deed5e3 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include @@ -3308,6 +3309,8 @@ static int ext4_symlink(struct user_namespace *mnt_userns, struct inode *dir, } if ((disk_link.len > EXT4_N_BLOCKS * 4)) { + unsigned int flags; + if (!IS_ENCRYPTED(inode)) inode->i_op = &ext4_symlink_inode_operations; inode_nohighmem(inode); @@ -3329,7 +3332,9 @@ static int ext4_symlink(struct user_namespace *mnt_userns, struct inode *dir, handle = NULL; if (err) goto err_drop_inode; - err = __page_symlink(inode, disk_link.name, disk_link.len, 1); + flags = memalloc_nofs_save(); + err = page_symlink(inode, disk_link.name, disk_link.len); + memalloc_nofs_restore(flags); if (err) goto err_drop_inode; /* From patchwork Fri Apr 29 17:24:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832546 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08E2CC433F5 for ; Fri, 29 Apr 2022 17:27:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379686AbiD2RbD (ORCPT ); Fri, 29 Apr 2022 13:31:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43146 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379643AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 573A6A8887 for ; Fri, 29 Apr 2022 10:26:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ukNbbRbZxH56Po3ysYnQL8YLpC4stX4G0l+kdkGV3Og=; b=RbZLPtvTCpjE3oqUumnXA3Foog HI/Dwe6bCapy51eVa7MBWFtmJOnaScWN5sKk8kRvk79qp0cpBAdb7nuJ0NVFRJgCxNjT4NQi6dcjk RI9/aNYnV4If2XzgAgmFUlA6RcJrEvG+UhNR+LRMSFRmkyWP4pzE6h4Xl9ODhTkLHuKwVEX+egMVP Uo4CHwCE2SSif5X2C7AKu8IqwgcidRCg5tEVswkGGrMiIvYaywFq5UqS5VjeauOXiAenkT7/mJR/8 guiuZs+IvDalLwfILS8zMUiwO/K7a2Bhvbn/riIf5Qvy4qqJ/8xm8CXQ3eDdkfkucQBCgT3nuNiwe wB6d5SgA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXB-DT; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , Christian Brauner Subject: [PATCH 03/69] namei: Merge page_symlink() and __page_symlink() Date: Fri, 29 Apr 2022 18:24:50 +0100 Message-Id: <20220429172556.3011843-4-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no callers of __page_symlink() left, so we can remove that entry point. 
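With the two entry points merged, page_symlink() infers the nofs behaviour from the address_space itself instead of taking an extra argument. Conceptually (the check is taken from the hunk below; the first line only illustrates how a filesystem would constrain the mapping and is not part of this patch):

	/* filesystem side: forbid __GFP_FS for allocations into this mapping */
	mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);

	/* inside page_symlink(): honour the mapping's gfp constraint */
	bool nofs = !mapping_gfp_constraint(mapping, __GFP_FS);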
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Christian Brauner --- Documentation/filesystems/porting.rst | 2 +- fs/namei.c | 13 ++----------- include/linux/fs.h | 2 -- 3 files changed, 3 insertions(+), 14 deletions(-) diff --git a/Documentation/filesystems/porting.rst b/Documentation/filesystems/porting.rst index 7c1583dbeb59..2e0e4f0e0c6f 100644 --- a/Documentation/filesystems/porting.rst +++ b/Documentation/filesystems/porting.rst @@ -624,7 +624,7 @@ any symlink that might use page_follow_link_light/page_put_link() must have inode_nohighmem(inode) called before anything might start playing with its pagecache. No highmem pages should end up in the pagecache of such symlinks. That includes any preseeding that might be done during symlink -creation. __page_symlink() will honour the mapping gfp flags, so once +creation. page_symlink() will honour the mapping gfp flags, so once you've done inode_nohighmem() it's safe to use, but if you allocate and insert the page manually, make sure to use the right gfp flags. diff --git a/fs/namei.c b/fs/namei.c index 509657fdf4f5..6153581073b1 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -5001,12 +5001,10 @@ int page_readlink(struct dentry *dentry, char __user *buffer, int buflen) } EXPORT_SYMBOL(page_readlink); -/* - * The nofs argument instructs pagecache_write_begin to pass AOP_FLAG_NOFS - */ -int __page_symlink(struct inode *inode, const char *symname, int len, int nofs) +int page_symlink(struct inode *inode, const char *symname, int len) { struct address_space *mapping = inode->i_mapping; + bool nofs = !mapping_gfp_constraint(mapping, __GFP_FS); struct page *page; void *fsdata; int err; @@ -5034,13 +5032,6 @@ int __page_symlink(struct inode *inode, const char *symname, int len, int nofs) fail: return err; } -EXPORT_SYMBOL(__page_symlink); - -int page_symlink(struct inode *inode, const char *symname, int len) -{ - return __page_symlink(inode, symname, len, - !mapping_gfp_constraint(inode->i_mapping, __GFP_FS)); -} EXPORT_SYMBOL(page_symlink); const struct inode_operations page_symlink_inode_operations = { diff --git a/include/linux/fs.h b/include/linux/fs.h index bbde95387a23..e108aff23a28 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -3109,8 +3109,6 @@ extern int page_readlink(struct dentry *, char __user *, int); extern const char *page_get_link(struct dentry *, struct inode *, struct delayed_call *); extern void page_put_link(void *); -extern int __page_symlink(struct inode *inode, const char *symname, int len, - int nofs); extern int page_symlink(struct inode *inode, const char *symname, int len); extern const struct inode_operations page_symlink_inode_operations; extern void kfree_link(void *); From patchwork Fri Apr 29 17:24:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832472 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6361C433EF for ; Fri, 29 Apr 2022 17:26:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379236AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42438 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1359830AbiD2R3X (ORCPT ); Fri, 29 Apr 2022 
13:29:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 365D797BAE for ; Fri, 29 Apr 2022 10:26:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=oOCFSTQjXi3I8TjZeXPYoY2HCI8zdIL+BInag5eurSU=; b=hBU+Wpmqa6wAtE97xy8l3furtm fcJ1GHhvjVXPx3RYOdkuqfWgrDOUd+ZRNfiPJjmIlpcZSbyhoGZ/RT1WLUtYh4NZMRTLpJ7x3/Jvw 1lNJ6q4Qrxx5PfVQiyGKYPsQasWjvOZgwczFnlaOIuFHUE0hJGXTqvZnyCr3CHzaUXukzwoJzQkce t97zDVNAEYJGeqoyYNidqqtccRdyeBUjQdzOKEL3Yi859CSPRQCxUzZY8dF4Xyp1+o2ZWqpPhtelF P/SzuhRk9fxjbXMVw76lc20V4lGoTF2XEaOpFMGKtBuBZr+kZeEOpOxDfppiDl6EiRayYes+Z3bJn SbxomobA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXD-G8; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 04/69] namei: Convert page_symlink() to use memalloc_nofs_save() Date: Fri, 29 Apr 2022 18:24:51 +0100 Message-Id: <20220429172556.3011843-5-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Stop using AOP_FLAG_NOFS in favour of the scoped memory API. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/namei.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/fs/namei.c b/fs/namei.c index 6153581073b1..0c84b4326dc9 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -5008,13 +5009,15 @@ int page_symlink(struct inode *inode, const char *symname, int len) struct page *page; void *fsdata; int err; - unsigned int flags = 0; - if (nofs) - flags |= AOP_FLAG_NOFS; + unsigned int flags; retry: + if (nofs) + flags = memalloc_nofs_save(); err = pagecache_write_begin(NULL, mapping, 0, len-1, - flags, &page, &fsdata); + 0, &page, &fsdata); + if (nofs) + memalloc_nofs_restore(flags); if (err) goto fail; From patchwork Fri Apr 29 17:24:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832544 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 37CEDC433EF for ; Fri, 29 Apr 2022 17:27:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379617AbiD2Ra7 (ORCPT ); Fri, 29 Apr 2022 13:30:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43290 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379650AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 40D17AAB7A for ; Fri, 29 Apr 2022 10:26:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; 
h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=KXbqSvumSVrPe8vjC4VZrIVjZd7wuuA9EJ+sQnxubXQ=; b=EuE++xVHoESbIEJo8/yhksKspe EbccO0QE24LwZogIo4dpVjThAlcqBXCYesKbkgy1eh9OJSbrXesiwJifnLWaFxWUh/E6Hc+D6NQpt QEcMX3UtIkl5XNxz1pEZY1rlj0rNI3BFINVCh4YLy6ts94e9i9rjGokD1SzvXBAjT8aE9fzNij3dD kdouRyENNZc++SbAcHkOJ4iLUpRWTlKqwfqxQc/B3yws0UKUyIsydU943f4TdoP+hgItGCvI4Gmi5 yAEisai4VBokNznkFz9wsDxjYBxHVdDGWXon7NO9MGwYa+tghekvSqvdcDxrpdqB4+ddmjmst1obw oUvFg1hA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXF-IK; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 05/69] f2fs: Convert f2fs_grab_cache_page() to use scoped memory APIs Date: Fri, 29 Apr 2022 18:24:52 +0100 Message-Id: <20220429172556.3011843-6-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Prevent GFP_FS allocations by using memalloc_nofs_save() instead of AOP_FLAG_NOFS. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/f2fs/f2fs.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 8c570de21ed5..74929ade4b5e 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -2654,6 +2655,7 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping, pgoff_t index, bool for_write) { struct page *page; + unsigned int flags; if (IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION)) { if (!for_write) @@ -2673,7 +2675,12 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping, if (!for_write) return grab_cache_page(mapping, index); - return grab_cache_page_write_begin(mapping, index, AOP_FLAG_NOFS); + + flags = memalloc_nofs_save(); + page = grab_cache_page_write_begin(mapping, index, 0); + memalloc_nofs_restore(flags); + + return page; } static inline struct page *f2fs_pagecache_get_page( From patchwork Fri Apr 29 17:24:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832475 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48BEFC433F5 for ; Fri, 29 Apr 2022 17:26:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379513AbiD2R31 (ORCPT ); Fri, 29 Apr 2022 13:29:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376912AbiD2R3X (ORCPT ); Fri, 29 Apr 2022 13:29:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6906898593 for ; Fri, 29 Apr 2022 10:26:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ATnuT7N0epb/e80siA2Nrn8r9Wwz8RTRUGV2l5yaIvs=; b=MfzfO2F8yxV9alwNEz7a8TBUaA IYq5uTh1879Jo6ekmx+r/i9Nb1repBKAW1PAJYQX0bbKkeggi/sa9TDfhCaI7naGYuSjMYUhs+OEC 7NTRhLeunnd8ILSr2+MM1zva2qMOmFB4M5ChTsTDn2tedMGuu3gKVmM3HhDS3x/4qLAkADVTZx9Ao lYIwb8QAgQkNg6Qbdf2d7Kv6RYF0sUesVJqzW9nyfAbON+yl2tTYRK1qt6E1LlaQVeU/q01Z3BZZ3 27GoFt9+lp8uCKRA+bEiuI2waKLteeFEdWSVnCn8Fbp3r4SxUFthvFmVT/JsAiFjfaj92zZUAhg4M CJt3m/Ew==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXH-KC; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Theodore Ts'o Subject: [PATCH 06/69] ext4: Allow GFP_FS allocations in ext4_da_convert_inline_data_to_extent() Date: Fri, 29 Apr 2022 18:24:53 +0100 Message-Id: <20220429172556.3011843-7-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Since commit 8bc1379b82b8, the transaction is stopped before calling ext4_da_convert_inline_data_to_extent(), which means we can do GFP_FS allocations and recurse into the filesystem. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Theodore Ts'o --- fs/ext4/inline.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c index 9c076262770d..93694ceb5a34 100644 --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -848,13 +848,12 @@ ext4_journalled_write_inline_data(struct inode *inode, */ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping, struct inode *inode, - unsigned flags, void **fsdata) { int ret = 0, inline_size; struct page *page; - page = grab_cache_page_write_begin(mapping, 0, flags); + page = grab_cache_page_write_begin(mapping, 0, 0); if (!page) return -ENOMEM; @@ -942,7 +941,6 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping, ext4_journal_stop(handle); ret = ext4_da_convert_inline_data_to_extent(mapping, inode, - flags, fsdata); if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries)) From patchwork Fri Apr 29 17:24:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832503 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96643C4332F for ; Fri, 29 Apr 2022 17:26:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379534AbiD2R3x (ORCPT ); Fri, 29 Apr 2022 13:29:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379530AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B4869F3B5 for ; Fri, 29 Apr 2022 10:26:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=fs3yjOFU7Is8zXjtGhR6ALd+llANqAadapxJ56oler4=; b=blgo+ZKytRcLnVc9HhJl1LG+g4 Jn9nqkOdsl8B3DD6X+agWMPLHevcA6TGki7IAzIW5eGBS5zxjZEV7x3OeY/m5b6rtHqw6GNRlMdFw 9zKS4B0iYyr3XLzbAHOY9+gfaBBLngMliWgGVEufuOfIQ+LKZqkAeLxRwIoYa3haRFKHUUQWm2gvF 5xIN3hJvvbMHWA4WUgSAvy9UD/G8PjDqdbHzlRrhrrm10mQd9r1839y+npwscdp9+lulwfGVHCw4h F85UeF9UJku/BdGDtroTl3Q+wKpZ3BqwFeKb1heK+x44RMXYPcb8I0qK8Z0yM95LUbXpPrP1TCk9i rREZxroQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXQ-Nl; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Theodore Ts'o Subject: [PATCH 07/69] ext4: Use scoped memory API in mext_page_double_lock() Date: Fri, 29 Apr 2022 18:24:54 +0100 Message-Id: <20220429172556.3011843-8-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Replace use of AOP_FLAG_NOFS with calls to memalloc_nofs_save() and memalloc_nofs_restore(). Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Theodore Ts'o --- fs/ext4/move_extent.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c index 95aa212f0863..56f21272fb00 100644 --- a/fs/ext4/move_extent.c +++ b/fs/ext4/move_extent.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "ext4_jbd2.h" #include "ext4.h" #include "ext4_extents.h" @@ -127,7 +128,7 @@ mext_page_double_lock(struct inode *inode1, struct inode *inode2, pgoff_t index1, pgoff_t index2, struct page *page[2]) { struct address_space *mapping[2]; - unsigned fl = AOP_FLAG_NOFS; + unsigned int flags; BUG_ON(!inode1 || !inode2); if (inode1 < inode2) { @@ -139,11 +140,15 @@ mext_page_double_lock(struct inode *inode1, struct inode *inode2, mapping[1] = inode1->i_mapping; } - page[0] = grab_cache_page_write_begin(mapping[0], index1, fl); - if (!page[0]) + flags = memalloc_nofs_save(); + page[0] = grab_cache_page_write_begin(mapping[0], index1, 0); + if (!page[0]) { + memalloc_nofs_restore(flags); return -ENOMEM; + } - page[1] = grab_cache_page_write_begin(mapping[1], index2, fl); + page[1] = grab_cache_page_write_begin(mapping[1], index2, 0); + memalloc_nofs_restore(flags); if (!page[1]) { unlock_page(page[0]); put_page(page[0]); From patchwork Fri Apr 29 17:24:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832474 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BD01EC433FE for ; Fri, 29 Apr 2022 17:26:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379527AbiD2R30 (ORCPT ); Fri, 29 Apr 2022 13:29:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42446 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1376937AbiD2R3X (ORCPT ); Fri, 29 Apr 2022 13:29:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
74B259859C for ; Fri, 29 Apr 2022 10:26:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=trDP4gjjFpmqZvJ+JNrSlXV+L8vVqC1TdJmWMDKM3+I=; b=ruv00QPsH7xIftAVygZ6Uv0yqb 2+o2EFUemLx20PxRuSrmsEHUywVJBm0ZX/utB2XGf1NR7pfBDO9E6zBDeTfxl3MCkYlx8+RsTmJPv y3lmjtYT7rmxhQfPnbJgoDgESZ9F6JBLNNnzZrisjl7cfLBYxjD9mnphJ4njCry7HFhfgGx0jHgWR rlDjepK1nTi/g3H455g0mjmafwaGNY6arr5S5MRSVPjEGJtFGabeWjxdzRDR/gOHH5vUXeSi7OakB NGC950mKkNphQc+rISuQ1aYcrmr6HHB5Gh8AKRHQ/LnlRboqYP7Bv7PfKUG6WhmFXsB0kQ3UxuVQv KVxGZwtA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXS-QH; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Theodore Ts'o Subject: [PATCH 08/69] ext4: Use scoped memory APIs in ext4_da_write_begin() Date: Fri, 29 Apr 2022 18:24:55 +0100 Message-Id: <20220429172556.3011843-9-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Instead of setting AOP_FLAG_NOFS, use memalloc_nofs_save() and memalloc_nofs_restore() to prevent GFP_FS allocations recursing into the filesystem with a journal already started. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Theodore Ts'o --- fs/ext4/ext4.h | 1 - fs/ext4/inline.c | 16 ++++++++-------- fs/ext4/inode.c | 3 +-- 3 files changed, 9 insertions(+), 11 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index a743b1e3b89e..90677e30e52d 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -3604,7 +3604,6 @@ ext4_journalled_write_inline_data(struct inode *inode, extern int ext4_da_write_inline_data_begin(struct address_space *mapping, struct inode *inode, loff_t pos, unsigned len, - unsigned flags, struct page **pagep, void **fsdata); extern int ext4_try_add_inline_entry(handle_t *handle, diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c index 93694ceb5a34..d965ba08f68f 100644 --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -906,7 +906,6 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping, int ext4_da_write_inline_data_begin(struct address_space *mapping, struct inode *inode, loff_t pos, unsigned len, - unsigned flags, struct page **pagep, void **fsdata) { @@ -915,6 +914,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping, struct page *page; struct ext4_iloc iloc; int retries = 0; + unsigned int flags; ret = ext4_get_inode_loc(inode, &iloc); if (ret) @@ -931,12 +931,6 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping, if (ret && ret != -ENOSPC) goto out_journal; - /* - * We cannot recurse into the filesystem as the transaction - * is already started. - */ - flags |= AOP_FLAG_NOFS; - if (ret == -ENOSPC) { ext4_journal_stop(handle); ret = ext4_da_convert_inline_data_to_extent(mapping, @@ -948,7 +942,13 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping, goto out; } - page = grab_cache_page_write_begin(mapping, 0, flags); + /* + * We cannot recurse into the filesystem as the transaction + * is already started. 
+ */ + flags = memalloc_nofs_save(); + page = grab_cache_page_write_begin(mapping, 0, 0); + memalloc_nofs_restore(flags); if (!page) { ret = -ENOMEM; goto out_journal; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 646ece9b3455..21ebcb3c59ba 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -2954,8 +2954,7 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping, trace_ext4_da_write_begin(inode, pos, len, flags); if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { - ret = ext4_da_write_inline_data_begin(mapping, inode, - pos, len, flags, + ret = ext4_da_write_inline_data_begin(mapping, inode, pos, len, pagep, fsdata); if (ret < 0) return ret; From patchwork Fri Apr 29 17:24:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832477 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CB42C433EF for ; Fri, 29 Apr 2022 17:26:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379514AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345884AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB31F985B5 for ; Fri, 29 Apr 2022 10:26:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=dYZKrf5pdSYug9lsibPHpic0LKqrclyIK/b7UwBqBrU=; b=kQWNFmYz1+3AFcoyfS0IlZjr1Q dosLh0R2+GB3csWq2s1cGC2FZtiP0quIfTMlcHfUk7OkRi7xPbjuKZjU/Unfi6goYXPcG1S/0PQev gHBe2KrrbN3I7bk3sH65HBc127mGCLyDZsIHNL8pqbNy37Nd8MxgHOVBa8DBhu8rMfN/H0hCoHMnL O5vLE5H8H0YtA/VOIe/LN4ZJKQ4QpkwYe7DrXh+ocnCEzTDS3jnKMZaiCrGYHUAAwLpTXGxL0oDYB 81yG2qh+ExlK+H/qRCHEaq7HLGdY8CMgNmTbzEbrFpdPFxj8us6h3y8jfdJtGaUslsQIZCbQ1g+u7 DfWeaWHw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNW-00CdXZ-Tj; Fri, 29 Apr 2022 17:26:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Theodore Ts'o Subject: [PATCH 09/69] ext4: Use scoped memory APIs in ext4_write_begin() Date: Fri, 29 Apr 2022 18:24:56 +0100 Message-Id: <20220429172556.3011843-10-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Instead of setting AOP_FLAG_NOFS, use memalloc_nofs_save() and memalloc_nofs_restore() to prevent GFP_FS allocations recursing into the filesystem with a journal already started. 
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Theodore Ts'o --- fs/ext4/ext4.h | 1 - fs/ext4/inline.c | 21 ++++++++++----------- fs/ext4/inode.c | 2 +- 3 files changed, 11 insertions(+), 13 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 90677e30e52d..0c3308bac6c1 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -3591,7 +3591,6 @@ extern int ext4_readpage_inline(struct inode *inode, struct page *page); extern int ext4_try_to_write_inline_data(struct address_space *mapping, struct inode *inode, loff_t pos, unsigned len, - unsigned flags, struct page **pagep); extern int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len, diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c index d965ba08f68f..b2ef5ba568bc 100644 --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -527,13 +527,13 @@ int ext4_readpage_inline(struct inode *inode, struct page *page) } static int ext4_convert_inline_data_to_extent(struct address_space *mapping, - struct inode *inode, - unsigned flags) + struct inode *inode) { int ret, needed_blocks, no_expand; handle_t *handle = NULL; int retries = 0, sem_held = 0; struct page *page = NULL; + unsigned int flags; unsigned from, to; struct ext4_iloc iloc; @@ -562,9 +562,9 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping, /* We cannot recurse into the filesystem as the transaction is already * started */ - flags |= AOP_FLAG_NOFS; - - page = grab_cache_page_write_begin(mapping, 0, flags); + flags = memalloc_nofs_save(); + page = grab_cache_page_write_begin(mapping, 0, 0); + memalloc_nofs_restore(flags); if (!page) { ret = -ENOMEM; goto out; @@ -649,11 +649,11 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping, int ext4_try_to_write_inline_data(struct address_space *mapping, struct inode *inode, loff_t pos, unsigned len, - unsigned flags, struct page **pagep) { int ret; handle_t *handle; + unsigned int flags; struct page *page; struct ext4_iloc iloc; @@ -691,9 +691,9 @@ int ext4_try_to_write_inline_data(struct address_space *mapping, if (ret) goto out; - flags |= AOP_FLAG_NOFS; - - page = grab_cache_page_write_begin(mapping, 0, flags); + flags = memalloc_nofs_save(); + page = grab_cache_page_write_begin(mapping, 0, 0); + memalloc_nofs_restore(flags); if (!page) { ret = -ENOMEM; goto out; @@ -727,8 +727,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping, brelse(iloc.bh); return ret; convert: - return ext4_convert_inline_data_to_extent(mapping, - inode, flags); + return ext4_convert_inline_data_to_extent(mapping, inode); } int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len, diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 21ebcb3c59ba..01a55647c959 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -1156,7 +1156,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping, if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { ret = ext4_try_to_write_inline_data(mapping, inode, pos, len, - flags, pagep); + pagep); if (ret < 0) return ret; if (ret == 1) From patchwork Fri Apr 29 17:24:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832478 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A255C433FE for ; Fri, 29 Apr 2022 17:26:15 
+0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379612AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1377195AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B55FF986D9 for ; Fri, 29 Apr 2022 10:26:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ha8ebKgRGxtayEYqUXJ+XYzoZ7W6fZr5XiakkCOM3DA=; b=OaXJ2LSwFsfEqRUeCbaot+K7Ng 8G4ErGJ+dNymu62YwVsE+mjV/jT4G/hF4J+u729lvWdDHAmweHf5vI9rS5z6uxR58B3L1LpfdQRpb 2bQGSbbxq5CBCvk7Zk/miFidUlANFv7DXeBtuB8LoicH6H60+CYrU+pT8BVdcckpqoViZviZ2d6xy 4+z8Mb8+1heTQx6FVwJYrxTbNyplX+APtE2bCvbN/9rMOdDi9BCaHNRgNul3Mpd2hn/i5sf2x92a4 dc1tIyFJuXL8xWUgkkd+2BRvF1093Q4MnzcEvbACbSBeO4XfRNHX+GjBDFnZZBAIWwsi7urC2TNoH 70nri9Eg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdXf-1D; Fri, 29 Apr 2022 17:26:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 10/69] fs: Remove AOP_FLAG_NOFS Date: Fri, 29 Apr 2022 18:24:57 +0100 Message-Id: <20220429172556.3011843-11-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org With all users of this flag gone, we can stop testing whether it's set. 
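With AOP_FLAG_NOFS gone, the page-cache lookup done on behalf of write_begin always uses the same FGP flags, and a caller that must avoid __GFP_FS brackets the call with memalloc_nofs_save()/memalloc_nofs_restore() as the earlier patches in this series do. A sketch of the fixed lookup, based on the netfs and folio-compat hunks below:

	unsigned int fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
	struct folio *folio;

	folio = __filemap_get_folio(mapping, index, fgp_flags,
				    mapping_gfp_mask(mapping));
	if (!folio)
		return -ENOMEM;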
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/netfs/buffered_read.c | 6 +----- include/linux/fs.h | 4 ---- mm/folio-compat.c | 2 -- 3 files changed, 1 insertion(+), 11 deletions(-) diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 281a88a5b8dc..65c17c5a5567 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -302,7 +302,6 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len, * @mapping: The mapping to read from * @pos: File position at which the write will begin * @len: The length of the write (may extend beyond the end of the folio chosen) - * @aop_flags: AOP_* flags * @_folio: Where to put the resultant folio * @_fsdata: Place for the netfs to store a cookie * @@ -335,16 +334,13 @@ int netfs_write_begin(struct file *file, struct address_space *mapping, struct netfs_io_request *rreq; struct netfs_i_context *ctx = netfs_i_context(file_inode(file )); struct folio *folio; - unsigned int fgp_flags; + unsigned int fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; pgoff_t index = pos >> PAGE_SHIFT; int ret; DEFINE_READAHEAD(ractl, file, NULL, mapping, index); retry: - fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; - if (aop_flags & AOP_FLAG_NOFS) - fgp_flags |= FGP_NOFS; folio = __filemap_get_folio(mapping, index, fgp_flags, mapping_gfp_mask(mapping)); if (!folio) diff --git a/include/linux/fs.h b/include/linux/fs.h index e108aff23a28..f81bc5cbcbb6 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -275,10 +275,6 @@ enum positive_aop_returns { AOP_TRUNCATED_PAGE = 0x80001, }; -#define AOP_FLAG_NOFS 0x0002 /* used by filesystem to direct - * helper code (eg buffer layer) - * to clear GFP_FS from alloc */ - /* * oh the beauties of C type declarations. 
*/ diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 46fa179e32fb..3e42ddb81918 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -135,8 +135,6 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping, { unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; - if (flags & AOP_FLAG_NOFS) - fgp_flags |= FGP_NOFS; return pagecache_get_page(mapping, index, fgp_flags, mapping_gfp_mask(mapping)); } From patchwork Fri Apr 29 17:24:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832479 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 74CF8C433F5 for ; Fri, 29 Apr 2022 17:26:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379521AbiD2R3d (ORCPT ); Fri, 29 Apr 2022 13:29:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1378319AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E986E98F6B for ; Fri, 29 Apr 2022 10:26:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=XGk4D40/83Mk2b0xnEMti2TPyZ0SXFmKONNwBKO+QXQ=; b=RbMRdyntGiYhSBvJF0sn5L4M1s bQJTtWuh7pocQ24MIRgCiUEvkC9c90vXjVoIivP4OxI4h6y9MisVH3tTX3e30dF7cTjty+yK67XIm PtCvwVQsn2BgsyauWLvYt1wp7Jx6FvYRC1+udI+/EReQECvq7a9gFD47pMFMj3C5HTWuRnTsFAd+/ AbPuhnrWeLiT7iwC2CXVG/J/k3aqlHAY53UiJ94HMAjJpVMHGqPZCd4VbLDO7tGBLAzb55wUC5xX/ /7zv1XAVvxymEv0RV/QhCY4JdVo4KxCduuNTpwtTmi7WomI1Lau9t/0iS0pZe14JOFvV+IU5abVb3 f/ykBLHQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdXi-4F; Fri, 29 Apr 2022 17:26:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 11/69] fs: Remove aop_flags parameter from netfs_write_begin() Date: Fri, 29 Apr 2022 18:24:58 +0100 Message-Id: <20220429172556.3011843-12-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no more aop flags left, so remove the parameter. 
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- Documentation/filesystems/netfs_library.rst | 1 - fs/9p/vfs_addr.c | 2 +- fs/afs/write.c | 2 +- fs/ceph/addr.c | 2 +- fs/netfs/buffered_read.c | 4 ++-- include/linux/netfs.h | 2 +- 6 files changed, 6 insertions(+), 7 deletions(-) diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst index 69f00179fdfe..d51c2a5ccf57 100644 --- a/Documentation/filesystems/netfs_library.rst +++ b/Documentation/filesystems/netfs_library.rst @@ -142,7 +142,6 @@ Three read helpers are provided:: struct address_space *mapping, loff_t pos, unsigned int len, - unsigned int flags, struct folio **_folio, void **_fsdata); diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index 501128188343..d311e68e21fd 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -275,7 +275,7 @@ static int v9fs_write_begin(struct file *filp, struct address_space *mapping, * file. We need to do this before we get a lock on the page in case * there's more than one writer competing for the same cache block. */ - retval = netfs_write_begin(filp, mapping, pos, len, flags, &folio, fsdata); + retval = netfs_write_begin(filp, mapping, pos, len, &folio, fsdata); if (retval < 0) return retval; diff --git a/fs/afs/write.c b/fs/afs/write.c index 4763132ca57e..af496c98d394 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -60,7 +60,7 @@ int afs_write_begin(struct file *file, struct address_space *mapping, * file. We need to do this before we get a lock on the page in case * there's more than one writer competing for the same cache block. */ - ret = netfs_write_begin(file, mapping, pos, len, flags, &folio, fsdata); + ret = netfs_write_begin(file, mapping, pos, len, &folio, fsdata); if (ret < 0) return ret; diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index aa25bffd4823..415f0886bc25 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -1318,7 +1318,7 @@ static int ceph_write_begin(struct file *file, struct address_space *mapping, struct folio *folio = NULL; int r; - r = netfs_write_begin(file, inode->i_mapping, pos, len, 0, &folio, NULL); + r = netfs_write_begin(file, inode->i_mapping, pos, len, &folio, NULL); if (r == 0) folio_wait_fscache(folio); if (r < 0) { diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 65c17c5a5567..1d44509455a5 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -328,8 +328,8 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len, * This is usable whether or not caching is enabled. 
*/ int netfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned int len, unsigned int aop_flags, - struct folio **_folio, void **_fsdata) + loff_t pos, unsigned int len, struct folio **_folio, + void **_fsdata) { struct netfs_io_request *rreq; struct netfs_i_context *ctx = netfs_i_context(file_inode(file )); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index c7bf1eaf51d5..1c29f317d907 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -276,7 +276,7 @@ struct readahead_control; extern void netfs_readahead(struct readahead_control *); extern int netfs_readpage(struct file *, struct page *); extern int netfs_write_begin(struct file *, struct address_space *, - loff_t, unsigned int, unsigned int, struct folio **, + loff_t, unsigned int, struct folio **, void **); extern void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); From patchwork Fri Apr 29 17:24:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832492 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0CB6C433F5 for ; Fri, 29 Apr 2022 17:26:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379621AbiD2R3f (ORCPT ); Fri, 29 Apr 2022 13:29:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379239AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0BC86991A8 for ; Fri, 29 Apr 2022 10:26:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=LOIGMDSQvqjbbLhfwrNoP8DhHNVzMWWPwJ1QHy+c/8o=; b=C5zr1ffl0rlYchOLShL8YAtHNR 6tQk1EbWTAenS7r/W1ro2/NducwJ/h+YFAyaI/uezBGtR1yl/WveckI3EyQFdqlLv7ULY6a3zzljV HleemYQwYZFk6VoEQtfVwmjFk+1uEePBnwnrJ6i6j++NeHUZB+8x6X4Vvxt4QCO4wgahK4lFglRmr Dzgi9W3A9dzZvHU4vUko+X0tM/+qDVj3Ddc8f/yPfi64tbIZ5QHBGOMyGgLo6SvOWWQFwggGx2uzA NqDc12AUa0v0q6KSznkwCLuoEoQFeFo3xN1+AbGh6vj2GuVMYyfdZJKMhduAOjTmoEJhEHXdFMsYk PaupfBow==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdXs-9j; Fri, 29 Apr 2022 17:26:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 12/69] fs: Remove aop flags parameter from block_write_begin() Date: Fri, 29 Apr 2022 18:24:59 +0100 Message-Id: <20220429172556.3011843-13-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no more aop flags left, so remove the parameter. 
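After this change a block-based filesystem's ->write_begin() reduces to forwarding pos, len and its get_block callback; the aop still receives the flags argument at this point in the series and simply ignores it. A sketch of the converted shape, mirroring the ext2/bfs/minix hunks below (the myfs_* names are placeholders):

	static int myfs_write_begin(struct file *file, struct address_space *mapping,
				    loff_t pos, unsigned len, unsigned flags,
				    struct page **pagep, void **fsdata)
	{
		int ret;

		ret = block_write_begin(mapping, pos, len, pagep, myfs_get_block);
		if (unlikely(ret))
			myfs_write_failed(mapping, pos + len);	/* illustrative truncate-on-error helper */

		return ret;
	}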
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- block/fops.c | 3 +-- fs/bfs/file.c | 3 +-- fs/buffer.c | 6 +++--- fs/ext2/inode.c | 3 +-- fs/minix/inode.c | 3 +-- fs/nilfs2/inode.c | 3 +-- fs/nilfs2/recovery.c | 2 +- fs/ntfs3/inode.c | 4 ++-- fs/omfs/file.c | 3 +-- fs/sysv/itree.c | 2 +- fs/udf/inode.c | 2 +- fs/ufs/inode.c | 3 +-- include/linux/buffer_head.h | 2 +- 13 files changed, 16 insertions(+), 23 deletions(-) diff --git a/block/fops.c b/block/fops.c index 9f2ecec406b0..b432756570c6 100644 --- a/block/fops.c +++ b/block/fops.c @@ -401,8 +401,7 @@ static int blkdev_write_begin(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, unsigned flags, struct page **pagep, void **fsdata) { - return block_write_begin(mapping, pos, len, flags, pagep, - blkdev_get_block); + return block_write_begin(mapping, pos, len, pagep, blkdev_get_block); } static int blkdev_write_end(struct file *file, struct address_space *mapping, diff --git a/fs/bfs/file.c b/fs/bfs/file.c index 03139344568f..9408f45225cb 100644 --- a/fs/bfs/file.c +++ b/fs/bfs/file.c @@ -174,8 +174,7 @@ static int bfs_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, - bfs_get_block); + ret = block_write_begin(mapping, pos, len, pagep, bfs_get_block); if (unlikely(ret)) bfs_write_failed(mapping, pos + len); diff --git a/fs/buffer.c b/fs/buffer.c index 2b5561ae5d0b..4ec6eb03c0eb 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2104,13 +2104,13 @@ static int __block_commit_write(struct inode *inode, struct page *page, * The filesystem needs to handle block truncation upon failure. */ int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len, - unsigned flags, struct page **pagep, get_block_t *get_block) + struct page **pagep, get_block_t *get_block) { pgoff_t index = pos >> PAGE_SHIFT; struct page *page; int status; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index, 0); if (!page) return -ENOMEM; @@ -2460,7 +2460,7 @@ int cont_write_begin(struct file *file, struct address_space *mapping, (*bytes)++; } - return block_write_begin(mapping, pos, len, flags, pagep, get_block); + return block_write_begin(mapping, pos, len, pagep, get_block); } EXPORT_SYMBOL(cont_write_begin); diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c index 52377a0ee735..97192932ea56 100644 --- a/fs/ext2/inode.c +++ b/fs/ext2/inode.c @@ -892,8 +892,7 @@ ext2_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, - ext2_get_block); + ret = block_write_begin(mapping, pos, len, pagep, ext2_get_block); if (ret < 0) ext2_write_failed(mapping, pos + len); return ret; diff --git a/fs/minix/inode.c b/fs/minix/inode.c index f1a6610e4ee6..5e8d7ba661cf 100644 --- a/fs/minix/inode.c +++ b/fs/minix/inode.c @@ -428,8 +428,7 @@ static int minix_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, - minix_get_block); + ret = block_write_begin(mapping, pos, len, pagep, minix_get_block); if (unlikely(ret)) minix_write_failed(mapping, pos + len); diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index 6045cea21f52..be09a0d10f04 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -258,8 +258,7 @@ static int nilfs_write_begin(struct file *file, struct address_space *mapping, if (unlikely(err)) return err; - err = 
block_write_begin(mapping, pos, len, flags, pagep, - nilfs_get_block); + err = block_write_begin(mapping, pos, len, pagep, nilfs_get_block); if (unlikely(err)) { nilfs_write_failed(mapping, pos + len); nilfs_transaction_abort(inode->i_sb); diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c index 9e2ed76c0f25..0955b657938f 100644 --- a/fs/nilfs2/recovery.c +++ b/fs/nilfs2/recovery.c @@ -511,7 +511,7 @@ static int nilfs_recover_dsync_blocks(struct the_nilfs *nilfs, pos = rb->blkoff << inode->i_blkbits; err = block_write_begin(inode->i_mapping, pos, blocksize, - 0, &page, nilfs_get_block); + &page, nilfs_get_block); if (unlikely(err)) { loff_t isize = inode->i_size; diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c index 9eab11e3b034..3914138fd8ba 100644 --- a/fs/ntfs3/inode.c +++ b/fs/ntfs3/inode.c @@ -894,7 +894,7 @@ static int ntfs_write_begin(struct file *file, struct address_space *mapping, goto out; } - err = block_write_begin(mapping, pos, len, flags, pagep, + err = block_write_begin(mapping, pos, len, pagep, ntfs_get_block_write_begin); out: @@ -975,7 +975,7 @@ int reset_log_file(struct inode *inode) len = pos + PAGE_SIZE > log_size ? (log_size - pos) : PAGE_SIZE; - err = block_write_begin(mapping, pos, len, 0, &page, + err = block_write_begin(mapping, pos, len, &page, ntfs_get_block_write_begin); if (err) goto out; diff --git a/fs/omfs/file.c b/fs/omfs/file.c index 3f297b541713..349b96d89c44 100644 --- a/fs/omfs/file.c +++ b/fs/omfs/file.c @@ -321,8 +321,7 @@ static int omfs_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, - omfs_get_block); + ret = block_write_begin(mapping, pos, len, pagep, omfs_get_block); if (unlikely(ret)) omfs_write_failed(mapping, pos + len); diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c index 409ab5e17803..96b7fd4facf3 100644 --- a/fs/sysv/itree.c +++ b/fs/sysv/itree.c @@ -482,7 +482,7 @@ static int sysv_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, get_block); + ret = block_write_begin(mapping, pos, len, pagep, get_block); if (unlikely(ret)) sysv_write_failed(mapping, pos + len); diff --git a/fs/udf/inode.c b/fs/udf/inode.c index ca4fa710e562..88a95886ce8a 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -209,7 +209,7 @@ static int udf_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, udf_get_block); + ret = block_write_begin(mapping, pos, len, pagep, udf_get_block); if (unlikely(ret)) udf_write_failed(mapping, pos + len); return ret; diff --git a/fs/ufs/inode.c b/fs/ufs/inode.c index d0dda01620f0..bd0e0c66f93d 100644 --- a/fs/ufs/inode.c +++ b/fs/ufs/inode.c @@ -500,8 +500,7 @@ static int ufs_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = block_write_begin(mapping, pos, len, flags, pagep, - ufs_getfrag_block); + ret = block_write_begin(mapping, pos, len, pagep, ufs_getfrag_block); if (unlikely(ret)) ufs_write_failed(mapping, pos + len); diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index bcb4fe9b8575..63e49dfa7738 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -226,7 +226,7 @@ int __block_write_full_page(struct inode *inode, struct page *page, int block_read_full_page(struct page*, get_block_t*); bool block_is_partially_uptodate(struct folio *, size_t from, size_t count); int block_write_begin(struct address_space 
*mapping, loff_t pos, unsigned len, - unsigned flags, struct page **pagep, get_block_t *get_block); + struct page **pagep, get_block_t *get_block); int __block_write_begin(struct page *page, loff_t pos, unsigned len, get_block_t *get_block); int block_write_end(struct file *, struct address_space *, From patchwork Fri Apr 29 17:25:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832480 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FEEAC433FE for ; Fri, 29 Apr 2022 17:26:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379544AbiD2R3e (ORCPT ); Fri, 29 Apr 2022 13:29:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379233AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2337F991AE for ; Fri, 29 Apr 2022 10:26:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=H5CtA24JAJu+NBwusxxz2cXtqAyK3BgCY38iwuz6yt8=; b=hFh8sOoRaCBGV2hLEgxaX1wFWp DunZyDdJTr/8WhqmO7xlcxZR4ONY0OulHDbPCPgg9uXSvko+0RjcKg8j303N94Xq+zYi/OsQ0S5Jc 19pr1wgz51ACEd6uFhFUgHkO8k+KTjQffB3Zpf9eGpwoUvrnGWIi/8gXflYA3YFqaNklXAsxinoR1 Rbz/pxdLbegykwStZJ5C9g6oV1DQvV/7Vt84tdowubg0cR8FT+XItRuJDciyTnOrfGeLNJbttjfCk B+P+4tAefcsI0dLv7lfistpvGBUKLAhKGAiDF1fQihZ3EXjVnTYj75hybPt9TvDBxHpFSHSYP/5Pp F9Cw+7Vg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdXy-E0; Fri, 29 Apr 2022 17:26:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 13/69] fs: Remove aop flags parameter from cont_write_begin() Date: Fri, 29 Apr 2022 18:25:00 +0100 Message-Id: <20220429172556.3011843-14-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no more aop flags left, so remove the parameter. 
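The cont_write_begin() callers follow the same pattern: the flags argument disappears while the get_block callback and the pointer to the filesystem's private on-disk size (which cont_write_begin() uses to zero-fill any gap below pos) remain. Roughly, mirroring the fat/hfs hunks below (MYFS_I and the error helper are placeholders):

	*pagep = NULL;
	err = cont_write_begin(file, mapping, pos, len, pagep, fsdata,
			       myfs_get_block,
			       &MYFS_I(mapping->host)->mmu_private);
	if (err < 0)
		myfs_write_failed(mapping, pos + len);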
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/adfs/inode.c | 2 +- fs/affs/file.c | 2 +- fs/buffer.c | 2 +- fs/exfat/inode.c | 2 +- fs/fat/inode.c | 2 +- fs/hfs/inode.c | 2 +- fs/hfsplus/inode.c | 2 +- fs/hpfs/file.c | 2 +- include/linux/buffer_head.h | 2 +- 9 files changed, 9 insertions(+), 9 deletions(-) diff --git a/fs/adfs/inode.c b/fs/adfs/inode.c index 561bc748c04a..b6912496bb19 100644 --- a/fs/adfs/inode.c +++ b/fs/adfs/inode.c @@ -58,7 +58,7 @@ static int adfs_write_begin(struct file *file, struct address_space *mapping, int ret; *pagep = NULL; - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, adfs_get_block, &ADFS_I(mapping->host)->mmu_private); if (unlikely(ret)) diff --git a/fs/affs/file.c b/fs/affs/file.c index b3f81d84ff4c..704911d6aeba 100644 --- a/fs/affs/file.c +++ b/fs/affs/file.c @@ -420,7 +420,7 @@ static int affs_write_begin(struct file *file, struct address_space *mapping, int ret; *pagep = NULL; - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, affs_get_block, &AFFS_I(mapping->host)->mmu_private); if (unlikely(ret)) diff --git a/fs/buffer.c b/fs/buffer.c index 4ec6eb03c0eb..fb97646d1977 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2441,7 +2441,7 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, * We may have to extend the file. */ int cont_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata, get_block_t *get_block, loff_t *bytes) { diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c index fc0ea1684880..8ed3c4b700cd 100644 --- a/fs/exfat/inode.c +++ b/fs/exfat/inode.c @@ -395,7 +395,7 @@ static int exfat_write_begin(struct file *file, struct address_space *mapping, int ret; *pagep = NULL; - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, exfat_get_block, &EXFAT_I(mapping->host)->i_size_ondisk); diff --git a/fs/fat/inode.c b/fs/fat/inode.c index bf6051bdf1d1..9b34ccef2501 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -232,7 +232,7 @@ static int fat_write_begin(struct file *file, struct address_space *mapping, int err; *pagep = NULL; - err = cont_write_begin(file, mapping, pos, len, flags, + err = cont_write_begin(file, mapping, pos, len, pagep, fsdata, fat_get_block, &MSDOS_I(mapping->host)->mmu_private); if (err < 0) diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c index 55f45e9b4930..396735dd3407 100644 --- a/fs/hfs/inode.c +++ b/fs/hfs/inode.c @@ -56,7 +56,7 @@ static int hfs_write_begin(struct file *file, struct address_space *mapping, int ret; *pagep = NULL; - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, hfs_get_block, &HFS_I(mapping->host)->phys_size); if (unlikely(ret)) diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c index 446a816aa8e1..435b6202532a 100644 --- a/fs/hfsplus/inode.c +++ b/fs/hfsplus/inode.c @@ -50,7 +50,7 @@ static int hfsplus_write_begin(struct file *file, struct address_space *mapping, int ret; *pagep = NULL; - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, hfsplus_get_block, &HFSPLUS_I(mapping->host)->phys_size); if (unlikely(ret)) 
diff --git a/fs/hpfs/file.c b/fs/hpfs/file.c index 99493a23c5d0..8740b4ea0b52 100644 --- a/fs/hpfs/file.c +++ b/fs/hpfs/file.c @@ -200,7 +200,7 @@ static int hpfs_write_begin(struct file *file, struct address_space *mapping, int ret; *pagep = NULL; - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, hpfs_get_block, &hpfs_i(mapping->host)->mmu_private); if (unlikely(ret)) diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index 63e49dfa7738..127b60fad77e 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -238,7 +238,7 @@ int generic_write_end(struct file *, struct address_space *, void page_zero_new_buffers(struct page *page, unsigned from, unsigned to); void clean_page_buffers(struct page *page); int cont_write_begin(struct file *, struct address_space *, loff_t, - unsigned, unsigned, struct page **, void **, + unsigned, struct page **, void **, get_block_t *, loff_t *); int generic_cont_expand_simple(struct inode *inode, loff_t size); int block_commit_write(struct page *page, unsigned from, unsigned to); From patchwork Fri Apr 29 17:25:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60CFEC4332F for ; Fri, 29 Apr 2022 17:26:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379626AbiD2R3m (ORCPT ); Fri, 29 Apr 2022 13:29:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379515AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F5439AE53 for ; Fri, 29 Apr 2022 10:26:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=O6jon2wZcOuRWNjQMdKziInEpsy1o/Bcpi2Jw+DC7f0=; b=rxLYLYEu8TV4xiWKg06V8SdDkF rSh0C5qH7KjUPk0SQ8HGG6TUzHGOzyYiZ7n0iLxo2CQPVIsq8QC2E2kpQeAZ0aEvGhrrtNiJQHA+q khApZYINXX2ui8JGCl0WpfYrrN/pyF2c5FrJQS6QBlx2bVIcaegF2jmVAp93GZvog08MNQjO/+xoA AIbIqdlED/rxup4O+R/vaAigU2SrLTpSh/cN+7dCdyTYR29gT/lIDbBNwenGyNXveYf98EIyhE84r y8NDk+cNnX2LewfVFKXmT3CVavniQedFL6m/jR2sfQNifvLeJx1EZ7TaCCKBGPzps5vYumQSsB8dA tanKqHcA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdY4-IA; Fri, 29 Apr 2022 17:26:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 14/69] fs: Remove aop flags parameter from grab_cache_page_write_begin() Date: Fri, 29 Apr 2022 18:25:01 +0100 Message-Id: <20220429172556.3011843-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no more aop flags 
left, so remove the parameter. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/affs/file.c | 2 +- fs/buffer.c | 4 ++-- fs/cifs/file.c | 2 +- fs/ecryptfs/mmap.c | 2 +- fs/ext4/inline.c | 8 ++++---- fs/ext4/inode.c | 4 ++-- fs/ext4/move_extent.c | 4 ++-- fs/f2fs/f2fs.h | 2 +- fs/fuse/file.c | 4 ++-- fs/hostfs/hostfs_kern.c | 2 +- fs/jffs2/file.c | 2 +- fs/libfs.c | 2 +- fs/nfs/file.c | 2 +- fs/ntfs3/inode.c | 2 +- fs/orangefs/inode.c | 2 +- fs/reiserfs/inode.c | 2 +- fs/ubifs/file.c | 4 ++-- fs/udf/file.c | 2 +- include/linux/pagemap.h | 2 +- mm/folio-compat.c | 2 +- 20 files changed, 28 insertions(+), 28 deletions(-) diff --git a/fs/affs/file.c b/fs/affs/file.c index 704911d6aeba..06645d05c717 100644 --- a/fs/affs/file.c +++ b/fs/affs/file.c @@ -670,7 +670,7 @@ static int affs_write_begin_ofs(struct file *file, struct address_space *mapping } index = pos >> PAGE_SHIFT; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; *pagep = page; diff --git a/fs/buffer.c b/fs/buffer.c index fb97646d1977..01630218c75f 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2110,7 +2110,7 @@ int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len, struct page *page; int status; - page = grab_cache_page_write_begin(mapping, index, 0); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; @@ -2591,7 +2591,7 @@ int nobh_write_begin(struct address_space *mapping, from = pos & (PAGE_SIZE - 1); to = from + len; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; *pagep = page; diff --git a/fs/cifs/file.c b/fs/cifs/file.c index d511a78383c3..91aeae7fced8 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -4695,7 +4695,7 @@ static int cifs_write_begin(struct file *file, struct address_space *mapping, cifs_dbg(FYI, "write_begin from %lld len %d\n", (long long)pos, len); start: - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) { rc = -ENOMEM; goto out; diff --git a/fs/ecryptfs/mmap.c b/fs/ecryptfs/mmap.c index 9ad61b582f07..84e399a921ad 100644 --- a/fs/ecryptfs/mmap.c +++ b/fs/ecryptfs/mmap.c @@ -272,7 +272,7 @@ static int ecryptfs_write_begin(struct file *file, loff_t prev_page_end_size; int rc = 0; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; *pagep = page; diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c index b2ef5ba568bc..6d253edebf9f 100644 --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -563,7 +563,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping, /* We cannot recurse into the filesystem as the transaction is already * started */ flags = memalloc_nofs_save(); - page = grab_cache_page_write_begin(mapping, 0, 0); + page = grab_cache_page_write_begin(mapping, 0); memalloc_nofs_restore(flags); if (!page) { ret = -ENOMEM; @@ -692,7 +692,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping, goto out; flags = memalloc_nofs_save(); - page = grab_cache_page_write_begin(mapping, 0, 0); + page = grab_cache_page_write_begin(mapping, 0); memalloc_nofs_restore(flags); if (!page) { ret = -ENOMEM; @@ -852,7 +852,7 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping, int ret = 0, inline_size; struct page *page; - page = 
grab_cache_page_write_begin(mapping, 0, 0); + page = grab_cache_page_write_begin(mapping, 0); if (!page) return -ENOMEM; @@ -946,7 +946,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping, * is already started. */ flags = memalloc_nofs_save(); - page = grab_cache_page_write_begin(mapping, 0, 0); + page = grab_cache_page_write_begin(mapping, 0); memalloc_nofs_restore(flags); if (!page) { ret = -ENOMEM; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 01a55647c959..512d8143c765 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -1171,7 +1171,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping, * the page (if needed) without using GFP_NOFS. */ retry_grab: - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; unlock_page(page); @@ -2963,7 +2963,7 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping, } retry: - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c index 56f21272fb00..4172a7d22471 100644 --- a/fs/ext4/move_extent.c +++ b/fs/ext4/move_extent.c @@ -141,13 +141,13 @@ mext_page_double_lock(struct inode *inode1, struct inode *inode2, } flags = memalloc_nofs_save(); - page[0] = grab_cache_page_write_begin(mapping[0], index1, 0); + page[0] = grab_cache_page_write_begin(mapping[0], index1); if (!page[0]) { memalloc_nofs_restore(flags); return -ENOMEM; } - page[1] = grab_cache_page_write_begin(mapping[1], index2, 0); + page[1] = grab_cache_page_write_begin(mapping[1], index2); memalloc_nofs_restore(flags); if (!page[1]) { unlock_page(page[0]); diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 74929ade4b5e..18df53ef3d7e 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -2677,7 +2677,7 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping, return grab_cache_page(mapping, index); flags = memalloc_nofs_save(); - page = grab_cache_page_write_begin(mapping, index, 0); + page = grab_cache_page_write_begin(mapping, index); memalloc_nofs_restore(flags); return page; diff --git a/fs/fuse/file.c b/fs/fuse/file.c index f18d14d5fea1..e35e394264ad 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -1174,7 +1174,7 @@ static ssize_t fuse_fill_write_pages(struct fuse_io_args *ia, break; err = -ENOMEM; - page = grab_cache_page_write_begin(mapping, index, 0); + page = grab_cache_page_write_begin(mapping, index); if (!page) break; @@ -2284,7 +2284,7 @@ static int fuse_write_begin(struct file *file, struct address_space *mapping, WARN_ON(!fc->writeback_cache); - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) goto error; diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index 14f9ac973a2e..2bfd316e1bf1 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -468,7 +468,7 @@ static int hostfs_write_begin(struct file *file, struct address_space *mapping, { pgoff_t index = pos >> PAGE_SHIFT; - *pagep = grab_cache_page_write_begin(mapping, index, flags); + *pagep = grab_cache_page_write_begin(mapping, index); if (!*pagep) return -ENOMEM; return 0; diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index bd7d58d27bfc..142d3ba9f0a8 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -213,7 +213,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space 
*mapping, * page in read_cache_page(), which causes a deadlock. */ mutex_lock(&c->alloc_sem); - pg = grab_cache_page_write_begin(mapping, index, flags); + pg = grab_cache_page_write_begin(mapping, index); if (!pg) { ret = -ENOMEM; goto release_sem; diff --git a/fs/libfs.c b/fs/libfs.c index e64bdedef168..d4395e1c6696 100644 --- a/fs/libfs.c +++ b/fs/libfs.c @@ -557,7 +557,7 @@ int simple_write_begin(struct file *file, struct address_space *mapping, index = pos >> PAGE_SHIFT; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; diff --git a/fs/nfs/file.c b/fs/nfs/file.c index 150b7fa8f0a7..d66088dd33e7 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -325,7 +325,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping, file, mapping->host->i_ino, len, (long long) pos); start: - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; *pagep = page; diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c index 3914138fd8ba..16466c8648f3 100644 --- a/fs/ntfs3/inode.c +++ b/fs/ntfs3/inode.c @@ -872,7 +872,7 @@ static int ntfs_write_begin(struct file *file, struct address_space *mapping, *pagep = NULL; if (is_resident(ni)) { struct page *page = grab_cache_page_write_begin( - mapping, pos >> PAGE_SHIFT, flags); + mapping, pos >> PAGE_SHIFT); if (!page) { err = -ENOMEM; diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c index 79c1025d18ea..809690db8be2 100644 --- a/fs/orangefs/inode.c +++ b/fs/orangefs/inode.c @@ -338,7 +338,7 @@ static int orangefs_write_begin(struct file *file, index = pos >> PAGE_SHIFT; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index 36c59b25486c..aa31cf1dbba6 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -2764,7 +2764,7 @@ static int reiserfs_write_begin(struct file *file, inode = mapping->host; index = pos >> PAGE_SHIFT; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (!page) return -ENOMEM; *pagep = page; diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index 0383fbdc95ff..0911fc311434 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -244,7 +244,7 @@ static int write_begin_slow(struct address_space *mapping, if (unlikely(err)) return err; - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (unlikely(!page)) { ubifs_release_budget(c, &req); return -ENOMEM; @@ -437,7 +437,7 @@ static int ubifs_write_begin(struct file *file, struct address_space *mapping, return -EROFS; /* Try out the fast-path part first */ - page = grab_cache_page_write_begin(mapping, index, flags); + page = grab_cache_page_write_begin(mapping, index); if (unlikely(!page)) return -ENOMEM; diff --git a/fs/udf/file.c b/fs/udf/file.c index 0f6bf2504437..724bb3141fda 100644 --- a/fs/udf/file.c +++ b/fs/udf/file.c @@ -94,7 +94,7 @@ static int udf_adinicb_write_begin(struct file *file, if (WARN_ON_ONCE(pos >= PAGE_SIZE)) return -EIO; - page = grab_cache_page_write_begin(mapping, 0, flags); + page = grab_cache_page_write_begin(mapping, 0); if (!page) return -ENOMEM; *pagep = page; diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 993994cd943a..65ae8f96554b 100644 --- a/include/linux/pagemap.h +++ 
b/include/linux/pagemap.h @@ -735,7 +735,7 @@ static inline unsigned find_get_pages_tag(struct address_space *mapping, } struct page *grab_cache_page_write_begin(struct address_space *mapping, - pgoff_t index, unsigned flags); + pgoff_t index); /* * Returns locked page at given index in given cache, creating it if needed. diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 3e42ddb81918..20bc15b57d93 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -131,7 +131,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, EXPORT_SYMBOL(pagecache_get_page); struct page *grab_cache_page_write_begin(struct address_space *mapping, - pgoff_t index, unsigned flags) + pgoff_t index) { unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; From patchwork Fri Apr 29 17:25:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832481 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 47184C433EF for ; Fri, 29 Apr 2022 17:26:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379620AbiD2R3f (ORCPT ); Fri, 29 Apr 2022 13:29:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379288AbiD2R3Y (ORCPT ); Fri, 29 Apr 2022 13:29:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD32B9AE5E for ; Fri, 29 Apr 2022 10:26:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=j+3bGVW2lhzCFFuW8GzV5oNJ7nLQ6ySvvt58EVwwUTQ=; b=klXAgMOy7K0g61XphDkuIqIRqi JcJYY+x4vyHvBxe2fKlU5j2xtxO7qFbBFaGmHo8JAq01q/z6wHP6o64TmxIuBJ3FtOtHAbW/o5bMs XpPwfF7xZLR/l0sca5b7AO/ryphM06rMl6wne8fhWtXMv99LcOh4LLteTgHO2MUp5bwk3dLitcmiV ax0ajMzDAIexcLoODpXm1AKq8KvM/U2iSWgps0skVK8pz7b56fkLVsfLBWy7sWxgbhY8TgDXxbjcb IhGg5BruRx+cQ7vTA28j6ODoanzXJ+lBcK8oO+M2F9qta6HFU+rB8+oqRd/e/o47OrN5QghN8UNtQ 4HOYcsww==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdYA-LK; Fri, 29 Apr 2022 17:26:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 15/69] fs: Remove aop flags parameter from nobh_write_begin() Date: Fri, 29 Apr 2022 18:25:02 +0100 Message-Id: <20220429172556.3011843-16-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no more aop flags left, so remove the parameter. 
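As a sketch of the interface after this change (mirroring the fs/buffer.c and include/linux/buffer_head.h hunks below, nothing beyond them), nobh_write_begin() becomes:

	int nobh_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
			     struct page **pagep, void **fsdata,
			     get_block_t *get_block);

and callers such as ext2_nobh_write_begin() and jfs_write_begin() just stop passing flags, as shown in the hunks that follow.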
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/buffer.c | 3 +-- fs/ext2/inode.c | 2 +- fs/jfs/inode.c | 3 +-- include/linux/buffer_head.h | 2 +- 4 files changed, 4 insertions(+), 6 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 01630218c75f..02b50e3e4fbb 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2568,8 +2568,7 @@ static void attach_nobh_buffers(struct page *page, struct buffer_head *head) * On exit the page is fully uptodate in the areas outside (from,to) * The filesystem needs to handle block truncation upon failure. */ -int nobh_write_begin(struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, +int nobh_write_begin(struct address_space *mapping, loff_t pos, unsigned len, struct page **pagep, void **fsdata, get_block_t *get_block) { diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c index 97192932ea56..bfa69c52ce2c 100644 --- a/fs/ext2/inode.c +++ b/fs/ext2/inode.c @@ -917,7 +917,7 @@ ext2_nobh_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = nobh_write_begin(mapping, pos, len, flags, pagep, fsdata, + ret = nobh_write_begin(mapping, pos, len, pagep, fsdata, ext2_get_block); if (ret < 0) ext2_write_failed(mapping, pos + len); diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c index d1943a7b4b04..e16f77b4e84c 100644 --- a/fs/jfs/inode.c +++ b/fs/jfs/inode.c @@ -319,8 +319,7 @@ static int jfs_write_begin(struct file *file, struct address_space *mapping, { int ret; - ret = nobh_write_begin(mapping, pos, len, flags, pagep, fsdata, - jfs_get_block); + ret = nobh_write_begin(mapping, pos, len, pagep, fsdata, jfs_get_block); if (unlikely(ret)) jfs_write_failed(mapping, pos + len); diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index 127b60fad77e..6e5a64005fef 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -258,7 +258,7 @@ static inline vm_fault_t block_page_mkwrite_return(int err) } sector_t generic_block_bmap(struct address_space *, sector_t, get_block_t *); int block_truncate_page(struct address_space *, loff_t, get_block_t *); -int nobh_write_begin(struct address_space *, loff_t, unsigned, unsigned, +int nobh_write_begin(struct address_space *, loff_t, unsigned len, struct page **, void **, get_block_t*); int nobh_write_end(struct file *, struct address_space *, loff_t, unsigned, unsigned, From patchwork Fri Apr 29 17:25:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F259CC4332F for ; Fri, 29 Apr 2022 17:26:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379680AbiD2RaE (ORCPT ); Fri, 29 Apr 2022 13:30:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42750 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379525AbiD2R30 (ORCPT ); Fri, 29 Apr 2022 13:29:26 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D53149AE6A for ; Fri, 29 Apr 2022 10:26:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=PMMFqnfx33ltUl2cmgc1sH2zu3gIcFs97QY4wzRnWG4=; b=l99ewDm8GryGzsghAZGZVPLm1i /Vk7IL+Bq+UTXmdPNvTrHTTS2B0Mj99Kh6MjVfMIAPj73EATso2MNuoSAsb8przjeyrEg1mzrJnww wnIFRYOOVgb8Oc1afafCnTNKa2aamwRUVC01lzcwjj3sH6CtrxfokK+lossp0QuEXwj9yvXm0dEaX UtxCl1ySmnORBB+z0RQzkHwWCfLBFo0F4GK2+0pxCyr2x+2rJB+7m/D14mv49ZK6HiQHMbIRZZi29 nhAXh8k+GwcIcajxqDldUCX0tTu59OWa4BU+fcbCxNhEUDeGXLuy/XToedFwo6t+EtAOWJL5/FZSt fkOlks/A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNX-00CdYG-Qc; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig Subject: [PATCH 16/69] fs: Remove flags parameter from aops->write_begin Date: Fri, 29 Apr 2022 18:25:03 +0100 Message-Id: <20220429172556.3011843-17-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There are no more aop flags left, so remove the parameter. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- Documentation/filesystems/locking.rst | 2 +- Documentation/filesystems/vfs.rst | 5 +---- block/fops.c | 3 +-- fs/9p/vfs_addr.c | 2 +- fs/adfs/inode.c | 2 +- fs/affs/file.c | 6 +++--- fs/afs/internal.h | 2 +- fs/afs/write.c | 2 +- fs/bfs/file.c | 2 +- fs/ceph/addr.c | 2 +- fs/cifs/file.c | 2 +- fs/ecryptfs/mmap.c | 2 +- fs/exfat/inode.c | 2 +- fs/ext2/inode.c | 6 ++---- fs/ext4/inode.c | 10 +++++----- fs/f2fs/data.c | 5 ++--- fs/f2fs/super.c | 2 +- fs/fat/inode.c | 2 +- fs/fuse/file.c | 3 +-- fs/hfs/inode.c | 2 +- fs/hfsplus/inode.c | 2 +- fs/hostfs/hostfs_kern.c | 2 +- fs/hpfs/file.c | 2 +- fs/hugetlbfs/inode.c | 2 +- fs/jffs2/file.c | 4 ++-- fs/jfs/inode.c | 2 +- fs/libfs.c | 2 +- fs/minix/inode.c | 2 +- fs/nfs/file.c | 2 +- fs/nilfs2/inode.c | 2 +- fs/ntfs3/inode.c | 2 +- fs/ocfs2/aops.c | 2 +- fs/omfs/file.c | 2 +- fs/orangefs/inode.c | 5 ++--- fs/reiserfs/inode.c | 2 +- fs/sysv/itree.c | 2 +- fs/ubifs/file.c | 7 +++---- fs/udf/file.c | 2 +- fs/udf/inode.c | 2 +- fs/ufs/inode.c | 2 +- include/linux/fs.h | 4 ++-- include/trace/events/ext4.h | 21 ++++++++------------- include/trace/events/f2fs.h | 12 ++++-------- mm/filemap.c | 6 ++---- mm/shmem.c | 2 +- 45 files changed, 69 insertions(+), 90 deletions(-) diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst index c26d854275a0..fd9d9caf09ab 100644 --- a/Documentation/filesystems/locking.rst +++ b/Documentation/filesystems/locking.rst @@ -242,7 +242,7 @@ prototypes:: bool (*dirty_folio)(struct address_space *, struct folio *folio); void (*readahead)(struct readahead_control *); int (*write_begin)(struct file *, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); int (*write_end)(struct file *, struct address_space *mapping, loff_t pos, unsigned len, unsigned copied, diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst index 794bd1a66bfb..30f303180a7d 100644 --- a/Documentation/filesystems/vfs.rst +++ b/Documentation/filesystems/vfs.rst @@ -727,7 +727,7 @@ cache in your filesystem. 
The following members are defined: bool (*dirty_folio)(struct address_space *, struct folio *); void (*readahead)(struct readahead_control *); int (*write_begin)(struct file *, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); int (*write_end)(struct file *, struct address_space *mapping, loff_t pos, unsigned len, unsigned copied, @@ -832,9 +832,6 @@ cache in your filesystem. The following members are defined: passed to write_begin is greater than the number of bytes copied into the page). - flags is a field for AOP_FLAG_xxx flags, described in - include/linux/fs.h. - A void * may be returned in fsdata, which then gets passed into write_end. diff --git a/block/fops.c b/block/fops.c index b432756570c6..712affe56e29 100644 --- a/block/fops.c +++ b/block/fops.c @@ -398,8 +398,7 @@ static void blkdev_readahead(struct readahead_control *rac) } static int blkdev_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, struct page **pagep, - void **fsdata) + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { return block_write_begin(mapping, pos, len, pagep, blkdev_get_block); } diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index d311e68e21fd..a2d57112f53e 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -260,7 +260,7 @@ v9fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) } static int v9fs_write_begin(struct file *filp, struct address_space *mapping, - loff_t pos, unsigned int len, unsigned int flags, + loff_t pos, unsigned int len, struct page **subpagep, void **fsdata) { int retval; diff --git a/fs/adfs/inode.c b/fs/adfs/inode.c index b6912496bb19..f7959b1a2d52 100644 --- a/fs/adfs/inode.c +++ b/fs/adfs/inode.c @@ -52,7 +52,7 @@ static void adfs_write_failed(struct address_space *mapping, loff_t to) } static int adfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/affs/file.c b/fs/affs/file.c index 06645d05c717..b952f65c3f06 100644 --- a/fs/affs/file.c +++ b/fs/affs/file.c @@ -414,7 +414,7 @@ affs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) } static int affs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; @@ -650,7 +650,7 @@ affs_readpage_ofs(struct file *file, struct page *page) } static int affs_write_begin_ofs(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode = mapping->host; @@ -887,7 +887,7 @@ affs_truncate(struct inode *inode) loff_t isize = inode->i_size; int res; - res = mapping->a_ops->write_begin(NULL, mapping, isize, 0, 0, &page, &fsdata); + res = mapping->a_ops->write_begin(NULL, mapping, isize, 0, &page, &fsdata); if (!res) res = mapping->a_ops->write_end(NULL, mapping, isize, 0, 0, page, fsdata); else diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 7b7ef945dc78..7a72e9c60423 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -1535,7 +1535,7 @@ bool afs_dirty_folio(struct address_space *, struct folio *); #define afs_dirty_folio filemap_dirty_folio #endif extern int afs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, 
unsigned len, struct page **pagep, void **fsdata); extern int afs_write_end(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, unsigned copied, diff --git a/fs/afs/write.c b/fs/afs/write.c index af496c98d394..5224e346fbad 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -42,7 +42,7 @@ static void afs_folio_start_fscache(bool caching, struct folio *folio) * prepare to perform part of a write to a page */ int afs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **_page, void **fsdata) { struct afs_vnode *vnode = AFS_FS_I(file_inode(file)); diff --git a/fs/bfs/file.c b/fs/bfs/file.c index 9408f45225cb..dc97c9b8f23b 100644 --- a/fs/bfs/file.c +++ b/fs/bfs/file.c @@ -169,7 +169,7 @@ static void bfs_write_failed(struct address_space *mapping, loff_t to) } static int bfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index 415f0886bc25..e65541a51b68 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -1311,7 +1311,7 @@ static int ceph_netfs_check_write_begin(struct file *file, loff_t pos, unsigned * clean, or already dirty within the same snap context. */ static int ceph_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned aop_flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode = file_inode(file); diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 91aeae7fced8..da362b5a0c96 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -4681,7 +4681,7 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file) } static int cifs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int oncethru = 0; diff --git a/fs/ecryptfs/mmap.c b/fs/ecryptfs/mmap.c index 84e399a921ad..47904d40ef88 100644 --- a/fs/ecryptfs/mmap.c +++ b/fs/ecryptfs/mmap.c @@ -264,7 +264,7 @@ static int fill_zeros_to_end_of_page(struct page *page, unsigned int to) */ static int ecryptfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { pgoff_t index = pos >> PAGE_SHIFT; diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c index 8ed3c4b700cd..b9f63113db2d 100644 --- a/fs/exfat/inode.c +++ b/fs/exfat/inode.c @@ -389,7 +389,7 @@ static void exfat_write_failed(struct address_space *mapping, loff_t to) } static int exfat_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned int len, unsigned int flags, + loff_t pos, unsigned int len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c index bfa69c52ce2c..d8ca8050945a 100644 --- a/fs/ext2/inode.c +++ b/fs/ext2/inode.c @@ -887,8 +887,7 @@ static void ext2_readahead(struct readahead_control *rac) static int ext2_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata) + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; @@ -912,8 +911,7 @@ static int ext2_write_end(struct file *file, struct address_space *mapping, static int ext2_nobh_write_begin(struct file *file, struct 
address_space *mapping, - loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata) + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 512d8143c765..d3a7e8581291 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -1130,7 +1130,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len, #endif static int ext4_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode = mapping->host; @@ -1144,7 +1144,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping, if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) return -EIO; - trace_ext4_write_begin(inode, pos, len, flags); + trace_ext4_write_begin(inode, pos, len); /* * Reserve one block more for addition to orphan list in case * we allocate blocks but write fails for some reason @@ -2931,7 +2931,7 @@ static int ext4_nonda_switch(struct super_block *sb) } static int ext4_da_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret, retries = 0; @@ -2948,10 +2948,10 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping, ext4_verity_in_progress(inode)) { *fsdata = (void *)FALL_BACK_TO_NONDELALLOC; return ext4_write_begin(file, mapping, pos, - len, flags, pagep, fsdata); + len, pagep, fsdata); } *fsdata = (void *)0; - trace_ext4_da_write_begin(inode, pos, len, flags); + trace_ext4_da_write_begin(inode, pos, len); if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { ret = ext4_da_write_inline_data_begin(mapping, inode, pos, len, diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 9a1a526f2092..b3cf49136b9f 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -3314,8 +3314,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi, } static int f2fs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata) + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode = mapping->host; struct f2fs_sb_info *sbi = F2FS_I_SB(inode); @@ -3325,7 +3324,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping, block_t blkaddr = NULL_ADDR; int err = 0; - trace_f2fs_write_begin(inode, pos, len, flags); + trace_f2fs_write_begin(inode, pos, len); if (!f2fs_is_checkpoint_ready(sbi)) { err = -ENOSPC; diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index 4368f90571bd..ed3e8b7a8260 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -2483,7 +2483,7 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type, tocopy = min_t(unsigned long, sb->s_blocksize - offset, towrite); retry: - err = a_ops->write_begin(NULL, mapping, off, tocopy, 0, + err = a_ops->write_begin(NULL, mapping, off, tocopy, &page, &fsdata); if (unlikely(err)) { if (err == -ENOMEM) { diff --git a/fs/fat/inode.c b/fs/fat/inode.c index 9b34ccef2501..1f15b0fd1bb0 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -226,7 +226,7 @@ static void fat_write_failed(struct address_space *mapping, loff_t to) } static int fat_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int err; diff --git 
a/fs/fuse/file.c b/fs/fuse/file.c index e35e394264ad..bca8c2135ec5 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -2273,8 +2273,7 @@ static int fuse_writepages(struct address_space *mapping, * but how to implement it without killing performance need more thinking. */ static int fuse_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata) + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { pgoff_t index = pos >> PAGE_SHIFT; struct fuse_conn *fc = get_fuse_conn(file_inode(file)); diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c index 396735dd3407..93d9aa832139 100644 --- a/fs/hfs/inode.c +++ b/fs/hfs/inode.c @@ -50,7 +50,7 @@ static void hfs_write_failed(struct address_space *mapping, loff_t to) } static int hfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c index 435b6202532a..73010aa4623f 100644 --- a/fs/hfsplus/inode.c +++ b/fs/hfsplus/inode.c @@ -44,7 +44,7 @@ static void hfsplus_write_failed(struct address_space *mapping, loff_t to) } static int hfsplus_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index 2bfd316e1bf1..e658d8edde35 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -463,7 +463,7 @@ static int hostfs_readpage(struct file *file, struct page *page) } static int hostfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { pgoff_t index = pos >> PAGE_SHIFT; diff --git a/fs/hpfs/file.c b/fs/hpfs/file.c index 8740b4ea0b52..8b590b3826c3 100644 --- a/fs/hpfs/file.c +++ b/fs/hpfs/file.c @@ -194,7 +194,7 @@ static void hpfs_write_failed(struct address_space *mapping, loff_t to) } static int hpfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c index dd3a088db11d..2de9ca5d260d 100644 --- a/fs/hugetlbfs/inode.c +++ b/fs/hugetlbfs/inode.c @@ -383,7 +383,7 @@ static ssize_t hugetlbfs_read_iter(struct kiocb *iocb, struct iov_iter *to) static int hugetlbfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { return -EINVAL; diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index 142d3ba9f0a8..2b35811772de 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -25,7 +25,7 @@ static int jffs2_write_end(struct file *filp, struct address_space *mapping, loff_t pos, unsigned len, unsigned copied, struct page *pg, void *fsdata); static int jffs2_write_begin(struct file *filp, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); static int jffs2_readpage (struct file *filp, struct page *pg); @@ -130,7 +130,7 @@ static int jffs2_readpage (struct file *filp, struct page *pg) } static int jffs2_write_begin(struct file *filp, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + 
loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct page *pg; diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c index e16f77b4e84c..aa9f112107b2 100644 --- a/fs/jfs/inode.c +++ b/fs/jfs/inode.c @@ -314,7 +314,7 @@ static void jfs_write_failed(struct address_space *mapping, loff_t to) } static int jfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/libfs.c b/fs/libfs.c index d4395e1c6696..a1c10d3163e0 100644 --- a/fs/libfs.c +++ b/fs/libfs.c @@ -549,7 +549,7 @@ static int simple_readpage(struct file *file, struct page *page) } int simple_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct page *page; diff --git a/fs/minix/inode.c b/fs/minix/inode.c index 5e8d7ba661cf..3add78bccedc 100644 --- a/fs/minix/inode.c +++ b/fs/minix/inode.c @@ -423,7 +423,7 @@ static void minix_write_failed(struct address_space *mapping, loff_t to) } static int minix_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/nfs/file.c b/fs/nfs/file.c index d66088dd33e7..314d2d7ba84a 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -313,7 +313,7 @@ static bool nfs_want_read_modify_write(struct file *file, struct page *page, * increment the page use counts until he is done with the page. */ static int nfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index be09a0d10f04..02297ec8dc55 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -248,7 +248,7 @@ void nilfs_write_failed(struct address_space *mapping, loff_t to) } static int nilfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c index 16466c8648f3..1364174cc6c9 100644 --- a/fs/ntfs3/inode.c +++ b/fs/ntfs3/inode.c @@ -862,7 +862,7 @@ static int ntfs_get_block_write_begin(struct inode *inode, sector_t vbn, } static int ntfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, u32 len, u32 flags, struct page **pagep, + loff_t pos, u32 len, struct page **pagep, void **fsdata) { int err; diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index 4b9af65cb61b..7cffe9dcad17 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -1881,7 +1881,7 @@ int ocfs2_write_begin_nolock(struct address_space *mapping, } static int ocfs2_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/omfs/file.c b/fs/omfs/file.c index 349b96d89c44..980b0a72c172 100644 --- a/fs/omfs/file.c +++ b/fs/omfs/file.c @@ -316,7 +316,7 @@ static void omfs_write_failed(struct address_space *mapping, loff_t to) } static int omfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c index 
809690db8be2..bc7ccd15d7a3 100644 --- a/fs/orangefs/inode.c +++ b/fs/orangefs/inode.c @@ -326,9 +326,8 @@ static int orangefs_readpage(struct file *file, struct page *page) } static int orangefs_write_begin(struct file *file, - struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, struct page **pagep, - void **fsdata) + struct address_space *mapping, loff_t pos, unsigned len, + struct page **pagep, void **fsdata) { struct orangefs_write_range *wr; struct folio *folio; diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index aa31cf1dbba6..46ba4892030a 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -2753,7 +2753,7 @@ static void reiserfs_truncate_failed_write(struct inode *inode) static int reiserfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode; diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c index 96b7fd4facf3..96ad24fe0ffb 100644 --- a/fs/sysv/itree.c +++ b/fs/sysv/itree.c @@ -477,7 +477,7 @@ static void sysv_write_failed(struct address_space *mapping, loff_t to) } static int sysv_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index 0911fc311434..81c085c4decf 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -215,8 +215,7 @@ static void release_existing_page_budget(struct ubifs_info *c) } static int write_begin_slow(struct address_space *mapping, - loff_t pos, unsigned len, struct page **pagep, - unsigned flags) + loff_t pos, unsigned len, struct page **pagep) { struct inode *inode = mapping->host; struct ubifs_info *c = inode->i_sb->s_fs_info; @@ -419,7 +418,7 @@ static int allocate_budget(struct ubifs_info *c, struct page *page, * without forcing write-back. The slow path does not make this assumption. 
*/ static int ubifs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode = mapping->host; @@ -493,7 +492,7 @@ static int ubifs_write_begin(struct file *file, struct address_space *mapping, unlock_page(page); put_page(page); - return write_begin_slow(mapping, pos, len, pagep, flags); + return write_begin_slow(mapping, pos, len, pagep); } /* diff --git a/fs/udf/file.c b/fs/udf/file.c index 724bb3141fda..3f4d5c44c784 100644 --- a/fs/udf/file.c +++ b/fs/udf/file.c @@ -87,7 +87,7 @@ static int udf_adinicb_writepage(struct page *page, static int udf_adinicb_write_begin(struct file *file, struct address_space *mapping, loff_t pos, - unsigned len, unsigned flags, struct page **pagep, + unsigned len, struct page **pagep, void **fsdata) { struct page *page; diff --git a/fs/udf/inode.c b/fs/udf/inode.c index 88a95886ce8a..866f9a53248e 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -204,7 +204,7 @@ static void udf_readahead(struct readahead_control *rac) } static int udf_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/fs/ufs/inode.c b/fs/ufs/inode.c index bd0e0c66f93d..6c973b71cab2 100644 --- a/fs/ufs/inode.c +++ b/fs/ufs/inode.c @@ -495,7 +495,7 @@ static void ufs_write_failed(struct address_space *mapping, loff_t to) } static int ufs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; diff --git a/include/linux/fs.h b/include/linux/fs.h index f81bc5cbcbb6..a0e73432526f 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -346,7 +346,7 @@ struct address_space_operations { void (*readahead)(struct readahead_control *); int (*write_begin)(struct file *, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); int (*write_end)(struct file *, struct address_space *mapping, loff_t pos, unsigned len, unsigned copied, @@ -3179,7 +3179,7 @@ extern int noop_fsync(struct file *, loff_t, loff_t, int); extern ssize_t noop_direct_IO(struct kiocb *iocb, struct iov_iter *iter); extern int simple_empty(struct dentry *); extern int simple_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); extern const struct address_space_operations ram_aops; extern int always_delete_dentry(const struct dentry *); diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h index d06ffffad434..229e8fae66a3 100644 --- a/include/trace/events/ext4.h +++ b/include/trace/events/ext4.h @@ -335,17 +335,15 @@ TRACE_EVENT(ext4_begin_ordered_truncate, DECLARE_EVENT_CLASS(ext4__write_begin, - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, - unsigned int flags), + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), - TP_ARGS(inode, pos, len, flags), + TP_ARGS(inode, pos, len), TP_STRUCT__entry( __field( dev_t, dev ) __field( ino_t, ino ) __field( loff_t, pos ) __field( unsigned int, len ) - __field( unsigned int, flags ) ), TP_fast_assign( @@ -353,29 +351,26 @@ DECLARE_EVENT_CLASS(ext4__write_begin, __entry->ino = inode->i_ino; __entry->pos = pos; __entry->len = len; - __entry->flags = flags; ), - 
TP_printk("dev %d,%d ino %lu pos %lld len %u flags %u", + TP_printk("dev %d,%d ino %lu pos %lld len %u", MAJOR(__entry->dev), MINOR(__entry->dev), (unsigned long) __entry->ino, - __entry->pos, __entry->len, __entry->flags) + __entry->pos, __entry->len) ); DEFINE_EVENT(ext4__write_begin, ext4_write_begin, - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, - unsigned int flags), + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), - TP_ARGS(inode, pos, len, flags) + TP_ARGS(inode, pos, len) ); DEFINE_EVENT(ext4__write_begin, ext4_da_write_begin, - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, - unsigned int flags), + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), - TP_ARGS(inode, pos, len, flags) + TP_ARGS(inode, pos, len) ); DECLARE_EVENT_CLASS(ext4__write_end, diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h index 1779e133cea0..bea654a85e6b 100644 --- a/include/trace/events/f2fs.h +++ b/include/trace/events/f2fs.h @@ -1159,17 +1159,15 @@ DEFINE_EVENT_CONDITION(f2fs__bio, f2fs_submit_write_bio, TRACE_EVENT(f2fs_write_begin, - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, - unsigned int flags), + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), - TP_ARGS(inode, pos, len, flags), + TP_ARGS(inode, pos, len), TP_STRUCT__entry( __field(dev_t, dev) __field(ino_t, ino) __field(loff_t, pos) __field(unsigned int, len) - __field(unsigned int, flags) ), TP_fast_assign( @@ -1177,14 +1175,12 @@ TRACE_EVENT(f2fs_write_begin, __entry->ino = inode->i_ino; __entry->pos = pos; __entry->len = len; - __entry->flags = flags; ), - TP_printk("dev = (%d,%d), ino = %lu, pos = %llu, len = %u, flags = %u", + TP_printk("dev = (%d,%d), ino = %lu, pos = %llu, len = %u", show_dev_ino(__entry), (unsigned long long)__entry->pos, - __entry->len, - __entry->flags) + __entry->len) ); TRACE_EVENT(f2fs_write_end, diff --git a/mm/filemap.c b/mm/filemap.c index 9a1eef6c5d35..0751843b052f 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3628,8 +3628,7 @@ int pagecache_write_begin(struct file *file, struct address_space *mapping, { const struct address_space_operations *aops = mapping->a_ops; - return aops->write_begin(file, mapping, pos, len, flags, - pagep, fsdata); + return aops->write_begin(file, mapping, pos, len, pagep, fsdata); } EXPORT_SYMBOL(pagecache_write_begin); @@ -3754,7 +3753,6 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i) const struct address_space_operations *a_ops = mapping->a_ops; long status = 0; ssize_t written = 0; - unsigned int flags = 0; do { struct page *page; @@ -3784,7 +3782,7 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i) break; } - status = a_ops->write_begin(file, mapping, pos, bytes, flags, + status = a_ops->write_begin(file, mapping, pos, bytes, &page, &fsdata); if (unlikely(status < 0)) break; diff --git a/mm/shmem.c b/mm/shmem.c index 4b2fea33158e..0f557a512171 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -2426,7 +2426,7 @@ static int shmem_initxattrs(struct inode *, const struct xattr *, void *); static int shmem_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { struct inode *inode = mapping->host; From patchwork Fri Apr 29 17:25:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832493 Return-Path: X-Spam-Checker-Version: 
SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0788EC433FE for ; Fri, 29 Apr 2022 17:26:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379623AbiD2R3h (ORCPT ); Fri, 29 Apr 2022 13:29:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379510AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4EB209D06D for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=HjnRYAbJ+5kbXyHJ7/0H9QbKM79C+qHmGeEOMq2GrLA=; b=FExHvEErGUlAF3jXdaaVmpctv8 gfc23L6DvOpXS9XYLFr6cDKwTn6dmSBEgPgU/rkZoLdHd7iGh+5iGc8Sys2gtrSGuvVq36LoPzi5f +JtQf7TzsuP1bHLw4ycZ/IS3R9tgEL+2V3YAyDwiTteDtXcCGpTm22x0uJgeuy2AuMHbUUsntL9jv IlaBl5BYk63GI6tzLYZUB7iFSSOmYsdO7Not9V+kkrtyq9Ke7impnFL2mgghKpa0toZ3xZLAZUDAv vFJOhWcxfPym0y65Jj/XmHZA6ljnAIvZA4Gzk8OWym/skRPmzhaxNO1h6TXoISo8MQ90L7BU9ObPc iOzlh57Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNY-00CdYO-8F; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 17/69] buffer: Call aops write_begin() and write_end() directly Date: Fri, 29 Apr 2022 18:25:04 +0100 Message-Id: <20220429172556.3011843-18-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org pagecache_write_begin() and pagecache_write_end() are now trivial wrappers, so call the aops directly. 
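The shape of the conversion (an illustrative sketch of the pattern used in the hunks below, taking generic_cont_expand_simple() as the model) is:

	const struct address_space_operations *aops = mapping->a_ops;
	struct page *page;
	void *fsdata;
	int err;

	/* was: pagecache_write_begin(NULL, mapping, size, 0, 0, &page, &fsdata) */
	err = aops->write_begin(NULL, mapping, size, 0, &page, &fsdata);
	if (err)
		goto out;

	/* was: pagecache_write_end(NULL, mapping, size, 0, 0, page, fsdata) */
	err = aops->write_end(NULL, mapping, size, 0, 0, page, fsdata);

i.e. the wrapper calls are replaced by direct calls through mapping->a_ops, with no other behavioural change.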
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/buffer.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 02b50e3e4fbb..d538495a0553 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2344,6 +2344,7 @@ EXPORT_SYMBOL(block_read_full_page); int generic_cont_expand_simple(struct inode *inode, loff_t size) { struct address_space *mapping = inode->i_mapping; + const struct address_space_operations *aops = mapping->a_ops; struct page *page; void *fsdata; int err; @@ -2352,11 +2353,11 @@ int generic_cont_expand_simple(struct inode *inode, loff_t size) if (err) goto out; - err = pagecache_write_begin(NULL, mapping, size, 0, 0, &page, &fsdata); + err = aops->write_begin(NULL, mapping, size, 0, &page, &fsdata); if (err) goto out; - err = pagecache_write_end(NULL, mapping, size, 0, 0, page, fsdata); + err = aops->write_end(NULL, mapping, size, 0, 0, page, fsdata); BUG_ON(err > 0); out: @@ -2368,6 +2369,7 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, loff_t pos, loff_t *bytes) { struct inode *inode = mapping->host; + const struct address_space_operations *aops = mapping->a_ops; unsigned int blocksize = i_blocksize(inode); struct page *page; void *fsdata; @@ -2387,12 +2389,12 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, } len = PAGE_SIZE - zerofrom; - err = pagecache_write_begin(file, mapping, curpos, len, 0, + err = aops->write_begin(file, mapping, curpos, len, &page, &fsdata); if (err) goto out; zero_user(page, zerofrom, len); - err = pagecache_write_end(file, mapping, curpos, len, len, + err = aops->write_end(file, mapping, curpos, len, len, page, fsdata); if (err < 0) goto out; @@ -2420,12 +2422,12 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, } len = offset - zerofrom; - err = pagecache_write_begin(file, mapping, curpos, len, 0, + err = aops->write_begin(file, mapping, curpos, len, &page, &fsdata); if (err) goto out; zero_user(page, zerofrom, len); - err = pagecache_write_end(file, mapping, curpos, len, len, + err = aops->write_end(file, mapping, curpos, len, len, page, fsdata); if (err < 0) goto out; From patchwork Fri Apr 29 17:25:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832498 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18B0AC433F5 for ; Fri, 29 Apr 2022 17:26:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379648AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379516AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E8EC9D060 for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; 
bh=UFDZIRx1hkM0Ul5+W881mhRPV4GLz1eVueiPIgyeI6o=; b=Xc22LtdzGhIfVWHRgThCkjairv wiYyyaJytaCjqdlqXruQdmnswkg74Y8KFa9caEc5yBNMbCqbbvWOx9VzJZ9m/Eyl2fhpmZySvUBZ/ eT8lYPIGqkqJ521kOsVpQQ4leCnligAqYcZh+axF6VSEbDyvaAPywkku+A2f21GnTIVZlyAMJ2gQO 4GhNyEQR3sScdcILSgqsmLziHPx0/YxuE/Nb7I7UPZE0OAyM7FGEC0+1w1cXSFEgp2A5rnNJrQuWW MMkshUiZzZHNlgHcWYF0KbwpahVap/zR/uMZo+zmteBtVNwkg+vb1UfzLDclUcz5oYT7Xfjg9zvdi Mikct9RQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNY-00CdYV-Cu; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 18/69] namei: Call aops write_begin() and write_end() directly Date: Fri, 29 Apr 2022 18:25:05 +0100 Message-Id: <20220429172556.3011843-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org pagecache_write_begin() and pagecache_write_end() are now trivial wrappers, so call the aops directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/namei.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/namei.c b/fs/namei.c index 0c84b4326dc9..896ade8b7400 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -5005,6 +5005,7 @@ EXPORT_SYMBOL(page_readlink); int page_symlink(struct inode *inode, const char *symname, int len) { struct address_space *mapping = inode->i_mapping; + const struct address_space_operations *aops = mapping->a_ops; bool nofs = !mapping_gfp_constraint(mapping, __GFP_FS); struct page *page; void *fsdata; @@ -5014,8 +5015,7 @@ int page_symlink(struct inode *inode, const char *symname, int len) retry: if (nofs) flags = memalloc_nofs_save(); - err = pagecache_write_begin(NULL, mapping, 0, len-1, - 0, &page, &fsdata); + err = aops->write_begin(NULL, mapping, 0, len-1, &page, &fsdata); if (nofs) memalloc_nofs_restore(flags); if (err) @@ -5023,7 +5023,7 @@ int page_symlink(struct inode *inode, const char *symname, int len) memcpy(page_address(page), symname, len-1); - err = pagecache_write_end(NULL, mapping, 0, len-1, len-1, + err = aops->write_end(NULL, mapping, 0, len-1, len-1, page, fsdata); if (err < 0) goto fail; From patchwork Fri Apr 29 17:25:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832494 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D361C433F5 for ; Fri, 29 Apr 2022 17:26:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379625AbiD2R3j (ORCPT ); Fri, 29 Apr 2022 13:29:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1359830AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F13A972F5 for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=dGvwuPP/1TGONgcNifsHiVFL7k2OtG7nG8HG9AaJDWo=; b=hxYGGnKsentpljxlHrI897bYGY MoSyIjj1pIc0hcpLBYjV7MIVqyTdvRVgu2O1+SNo178dQFhJOil2pIo3sDNVk4mNGHG7HdArm4RDh JM1u6I8jzjrDlNy3x18Afgw2lPEzK1eEPTtcItBtCD5S9GkUdUXRVtdco5iVO4WHZaAirhkHuDH2s Sm6RSUxHMazV3RHdZe1SdzcI0XVXsuKsOIhAZ7cFbl09d8HtGhno2fWGZHBQNWQxk/BL0JwYwReAP c5t4Kpmv6vGhndQIJSGzP1kdNVoNZVtEern3Uh16OQCUr+bDOeJlRWONWwO9OlsGQ6HhjmyHXxJgQ 6e1T4+jQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNY-00CdYb-H0; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 19/69] ntfs3: Call ntfs_write_begin() and ntfs_write_end() directly Date: Fri, 29 Apr 2022 18:25:06 +0100 Message-Id: <20220429172556.3011843-20-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There is only one kind of write_begin/write_end aops, so we don't need to look up which aop it is, just make ntfs_write_begin() and ntfs_write_end() available to this file and call them directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Namjae Jeon Reviewed-by: Christoph Hellwig --- fs/ntfs3/file.c | 5 ++--- fs/ntfs3/inode.c | 12 +++++------- fs/ntfs3/ntfs_fs.h | 5 +++++ 3 files changed, 12 insertions(+), 10 deletions(-) diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c index 787b53b984ee..c2e7e561958a 100644 --- a/fs/ntfs3/file.c +++ b/fs/ntfs3/file.c @@ -157,15 +157,14 @@ static int ntfs_extend_initialized_size(struct file *file, if (pos + len > new_valid) len = new_valid - pos; - err = pagecache_write_begin(file, mapping, pos, len, 0, &page, - &fsdata); + err = ntfs_write_begin(file, mapping, pos, len, &page, &fsdata); if (err) goto out; zero_user_segment(page, zerofrom, PAGE_SIZE); /* This function in any case puts page. */ - err = pagecache_write_end(file, mapping, pos, len, len, page, + err = ntfs_write_end(file, mapping, pos, len, len, page, fsdata); if (err < 0) goto out; diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c index 1364174cc6c9..bfd71f384e21 100644 --- a/fs/ntfs3/inode.c +++ b/fs/ntfs3/inode.c @@ -861,9 +861,8 @@ static int ntfs_get_block_write_begin(struct inode *inode, sector_t vbn, bh_result, create, GET_BLOCK_WRITE_BEGIN); } -static int ntfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, u32 len, struct page **pagep, - void **fsdata) +int ntfs_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, u32 len, struct page **pagep, void **fsdata) { int err; struct inode *inode = mapping->host; @@ -904,10 +903,9 @@ static int ntfs_write_begin(struct file *file, struct address_space *mapping, /* * ntfs_write_end - Address_space_operations::write_end. 
*/ -static int ntfs_write_end(struct file *file, struct address_space *mapping, - loff_t pos, u32 len, u32 copied, struct page *page, - void *fsdata) - +int ntfs_write_end(struct file *file, struct address_space *mapping, + loff_t pos, u32 len, u32 copied, struct page *page, + void *fsdata) { struct inode *inode = mapping->host; struct ntfs_inode *ni = ntfs_i(inode); diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h index fb825059d488..8de129a6419b 100644 --- a/fs/ntfs3/ntfs_fs.h +++ b/fs/ntfs3/ntfs_fs.h @@ -689,6 +689,11 @@ int ntfs_set_size(struct inode *inode, u64 new_size); int reset_log_file(struct inode *inode); int ntfs_get_block(struct inode *inode, sector_t vbn, struct buffer_head *bh_result, int create); +int ntfs_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, u32 len, struct page **pagep, void **fsdata); +int ntfs_write_end(struct file *file, struct address_space *mapping, + loff_t pos, u32 len, u32 copied, struct page *page, + void *fsdata); int ntfs3_write_inode(struct inode *inode, struct writeback_control *wbc); int ntfs_sync_inode(struct inode *inode); int ntfs_flush_inodes(struct super_block *sb, struct inode *i1, From patchwork Fri Apr 29 17:25:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 176EAC433F5 for ; Fri, 29 Apr 2022 17:26:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379627AbiD2R3n (ORCPT ); Fri, 29 Apr 2022 13:29:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379517AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5D4969D06E for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=PAfZABp1iNExCHmAXKO7YXQ6u4PT0MOTepjBvoeGvoE=; b=Q6hnBP8ElC2lTUtltAG/zcIR4P P67M7J5OLof00m01U1PoYp0A9y8riu5SDsZEwzLrf49wd+mMGOwx4mPc4bsPpXkmjerb+6mld2pU0 926bXaNAd1UZxF6W7ThWeHRph9F7nAA8oyQWwKx9ANbJnS1SDOU67Lw41saND8fSGrzUfv/SXs8TL xXlYQEgGLzqDfTrkqx15tdROMutQuCxexOKzqh0zTR90VfHenJ+Y/6XTme/TYrWM5XqXIoLybzk1R 5ONFIQetnVRc65go6CLgPm4LP9IWQ0PN6N1K3qldihO1ItYrQkcLPwLQeW6qcI79qhGpERifTxqtE Z31XSnmg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNY-00CdYh-L1; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 20/69] hfs: Call hfs_write_begin() and generic_write_end() directly Date: Fri, 29 Apr 2022 18:25:07 +0100 Message-Id: <20220429172556.3011843-21-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There is only one kind 
of write_begin/write_end aops, so we don't need to look up which aop it is, just make hfs_write_begin() available to this file and call it directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/hfs/extent.c | 6 +++--- fs/hfs/hfs_fs.h | 2 ++ fs/hfs/inode.c | 5 ++--- 3 files changed, 7 insertions(+), 6 deletions(-) diff --git a/fs/hfs/extent.c b/fs/hfs/extent.c index 263d5028d9d1..3f7e9bef9874 100644 --- a/fs/hfs/extent.c +++ b/fs/hfs/extent.c @@ -491,10 +491,10 @@ void hfs_file_truncate(struct inode *inode) /* XXX: Can use generic_cont_expand? */ size = inode->i_size - 1; - res = pagecache_write_begin(NULL, mapping, size+1, 0, 0, - &page, &fsdata); + res = hfs_write_begin(NULL, mapping, size + 1, 0, &page, + &fsdata); if (!res) { - res = pagecache_write_end(NULL, mapping, size+1, 0, 0, + res = generic_write_end(NULL, mapping, size + 1, 0, 0, page, fsdata); } if (res) diff --git a/fs/hfs/hfs_fs.h b/fs/hfs/hfs_fs.h index b8eb0322a3e5..68d0305880f7 100644 --- a/fs/hfs/hfs_fs.h +++ b/fs/hfs/hfs_fs.h @@ -201,6 +201,8 @@ extern int hfs_get_block(struct inode *, sector_t, struct buffer_head *, int); extern const struct address_space_operations hfs_aops; extern const struct address_space_operations hfs_btree_aops; +int hfs_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); extern struct inode *hfs_new_inode(struct inode *, const struct qstr *, umode_t); extern void hfs_inode_write_fork(struct inode *, struct hfs_extent *, __be32 *, __be32 *); extern int hfs_write_inode(struct inode *, struct writeback_control *); diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c index 93d9aa832139..9a26b9510da0 100644 --- a/fs/hfs/inode.c +++ b/fs/hfs/inode.c @@ -49,9 +49,8 @@ static void hfs_write_failed(struct address_space *mapping, loff_t to) } } -static int hfs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, - struct page **pagep, void **fsdata) +int hfs_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; From patchwork Fri Apr 29 17:25:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832500 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C820C433FE for ; Fri, 29 Apr 2022 17:26:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379677AbiD2R3t (ORCPT ); Fri, 29 Apr 2022 13:29:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379518AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 72AB59D4C7 for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Orqe2oWBOGHVWK0VnqM2RUhWmSeOi4EotrHMHk3kAYg=; b=Av7OHygSIKUlWXFjhOouto8eW2 
OEnIFqHH2ZLo+ORpPGGoNBeizk3BpO0i6ICoRg1hC/K3ZfQhcEYB/OMczJjN6Zl2aw6KNdWLGsKQR 3e5RsQDj/C5wwHTnpN4iJQTZqAfAkOWRl7hMzVEs8fEgPf3xYKr3p9waphJWdyM0gmKvlvzypvZrF oGfmxzQOCR2zjl60qbmnA49bBhOpqK2TDwPlwBu2LrZ/V8nPeFC719GCSOeNHjBYR558R2vbg1A9n vrXIBnsX32U6R9zALwtJH5oyN6rfypbIfELDpoXkH9CTsyj2lf8OJMRgOv6xaJrN0jD8lDAS7+gLz ZTGbSTjg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNY-00CdYn-Of; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 21/69] hfsplus: Call hfsplus_write_begin() and generic_write_end() directly Date: Fri, 29 Apr 2022 18:25:08 +0100 Message-Id: <20220429172556.3011843-22-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org There is only one kind of write_begin/write_end aops, so we don't need to look up which aop it is, just make hfsplus_write_begin() available to this file and call it directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/hfsplus/extents.c | 8 ++++---- fs/hfsplus/hfsplus_fs.h | 2 ++ fs/hfsplus/inode.c | 5 ++--- 3 files changed, 8 insertions(+), 7 deletions(-) diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c index 7054a542689f..721f779b4ec3 100644 --- a/fs/hfsplus/extents.c +++ b/fs/hfsplus/extents.c @@ -557,12 +557,12 @@ void hfsplus_file_truncate(struct inode *inode) void *fsdata; loff_t size = inode->i_size; - res = pagecache_write_begin(NULL, mapping, size, 0, 0, - &page, &fsdata); + res = hfsplus_write_begin(NULL, mapping, size, 0, + &page, &fsdata); if (res) return; - res = pagecache_write_end(NULL, mapping, size, - 0, 0, page, fsdata); + res = generic_write_end(NULL, mapping, size, 0, 0, + page, fsdata); if (res < 0) return; mark_inode_dirty(inode); diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h index 1798949f269b..396e73aa0961 100644 --- a/fs/hfsplus/hfsplus_fs.h +++ b/fs/hfsplus/hfsplus_fs.h @@ -468,6 +468,8 @@ extern const struct address_space_operations hfsplus_aops; extern const struct address_space_operations hfsplus_btree_aops; extern const struct dentry_operations hfsplus_dentry_operations; +int hfsplus_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, unsigned len, struct page **pagep, void **fsdata); struct inode *hfsplus_new_inode(struct super_block *sb, struct inode *dir, umode_t mode); void hfsplus_delete_inode(struct inode *inode); diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c index 73010aa4623f..905ae3660315 100644 --- a/fs/hfsplus/inode.c +++ b/fs/hfsplus/inode.c @@ -43,9 +43,8 @@ static void hfsplus_write_failed(struct address_space *mapping, loff_t to) } } -static int hfsplus_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, - struct page **pagep, void **fsdata) +int hfsplus_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, unsigned len, struct page **pagep, void **fsdata) { int ret; From patchwork Fri Apr 29 17:25:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832497 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
[23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 972FFC433EF for ; Fri, 29 Apr 2022 17:26:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379628AbiD2R3p (ORCPT ); Fri, 29 Apr 2022 13:29:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379519AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C94AA9E9D2 for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=PIbS69XPyfEWg59JWx0WNAEQv+JtVO+L2gtIxkTD4P8=; b=mBSEtIBIQU3fXgad+0Q7oeZWC9 XjzyfFRWlMd2aRUYLlIL/q4j7dVpYLHecvET3Ffx7CG1r32xLmd7Zllm2UhgwUqAeEy4aWrWqOCqQ 7WRpaFzXwb1yHjutDfMO/ahXb0Qmkx/kv332TD4shC4sDWMtEXh2fnHxwE8Mz4rVSX5o1DNg1mJPM hmzE3pSP3Mmyl4AFh3EfOyjBmMkySkhxPSoA5TPjJKDeWtf9YkECdXVOwpWvY0ist90ajPprRkzhA IK3vbvlQFg17lJYhjD/48WfzwH+JT9PzN49BRizG9fIV6ZN8s+zdEsJlGR7uXMYLrzENviaw0QLE0 8IEJyxqw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNY-00CdYs-Sd; Fri, 29 Apr 2022 17:26:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 22/69] ext4: Call aops write_begin() and write_end() directly Date: Fri, 29 Apr 2022 18:25:09 +0100 Message-Id: <20220429172556.3011843-23-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org pagecache_write_begin() and pagecache_write_end() are now trivial wrappers, so call the aops directly. 
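Shown as a self-contained sketch (illustrative only, not taken from this patch: the demo_pagecache_write() name and the exact error handling are assumptions), the write loop that results from calling the aops directly looks like:

#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/minmax.h>
#include <linux/pagemap.h>

static int demo_pagecache_write(struct inode *inode, const void *buf,
                                size_t count, loff_t pos)
{
        struct address_space *mapping = inode->i_mapping;
        const struct address_space_operations *aops = mapping->a_ops;

        while (count) {
                size_t n = min_t(size_t, count,
                                 PAGE_SIZE - offset_in_page(pos));
                struct page *page;
                void *fsdata = NULL;
                int res;

                res = aops->write_begin(NULL, mapping, pos, n, &page, &fsdata);
                if (res)
                        return res;

                /* Copy the payload into the pagecache page just set up. */
                memcpy_to_page(page, offset_in_page(pos), buf, n);

                res = aops->write_end(NULL, mapping, pos, n, n, page, fsdata);
                if (res < 0)
                        return res;
                if (res != n)
                        return -EIO;

                buf += n;
                pos += n;
                count -= n;
        }
        return 0;
}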
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/ext4/verity.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c index eacbd489e3bf..b051d19b5c8a 100644 --- a/fs/ext4/verity.c +++ b/fs/ext4/verity.c @@ -69,6 +69,9 @@ static int pagecache_read(struct inode *inode, void *buf, size_t count, static int pagecache_write(struct inode *inode, const void *buf, size_t count, loff_t pos) { + struct address_space *mapping = inode->i_mapping; + const struct address_space_operations *aops = mapping->a_ops; + if (pos + count > inode->i_sb->s_maxbytes) return -EFBIG; @@ -79,15 +82,13 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count, void *fsdata; int res; - res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0, - &page, &fsdata); + res = aops->write_begin(NULL, mapping, pos, n, &page, &fsdata); if (res) return res; memcpy_to_page(page, offset_in_page(pos), buf, n); - res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n, - page, fsdata); + res = aops->write_end(NULL, mapping, pos, n, n, page, fsdata); if (res < 0) return res; if (res != n) From patchwork Fri Apr 29 17:25:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832499 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F807C433EF for ; Fri, 29 Apr 2022 17:26:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379674AbiD2R3s (ORCPT ); Fri, 29 Apr 2022 13:29:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379523AbiD2R3Z (ORCPT ); Fri, 29 Apr 2022 13:29:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DC54B972FC for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=eBwsOuO41PPe1bFLp/Nt2l81HUxjuF3P4f7UhnknnGo=; b=v5KFB1rXhdkeOCG58gvpdxnIDs r6elDCZk3UvQo0Kb+lgYq9CS3CmA85Qc39A45wP7t9AjExO0PpTrIZzeZDPh2N+JZwKzzLDgrCh6s LkErfWA1MyOz2GTNnfHJ/PwXgs1bKXv84sHl9LVfmFSvv6b33kzZHumgi/1Fda4tOsF3XF2ArZuO0 ChjvuDlWaf8Xhwiu7/cLVlcDWN9avfVg4oN+aN01+6IE2hBxpdEb/7hFLkByTQIS7EszoR9Q9oMPW 9XYjG81vsnRJf7PGoS13zTCNfQfChOCCw2IpUHgmdSgHOqdmnLAwTU3WfnzQq7+f0CMwI0rAjdcqm LsD1Gchw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdYx-0Q; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 23/69] f2fs: Call aops write_begin() and write_end() directly Date: Fri, 29 Apr 2022 18:25:10 +0100 Message-Id: <20220429172556.3011843-24-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org pagecache_write_begin() and 
pagecache_write_end() are now trivial wrappers, so call the aops directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/f2fs/verity.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c index 3d793202cc9f..65395ae188aa 100644 --- a/fs/f2fs/verity.c +++ b/fs/f2fs/verity.c @@ -74,6 +74,9 @@ static int pagecache_read(struct inode *inode, void *buf, size_t count, static int pagecache_write(struct inode *inode, const void *buf, size_t count, loff_t pos) { + struct address_space *mapping = inode->i_mapping; + const struct address_space_operations *aops = mapping->a_ops; + if (pos + count > inode->i_sb->s_maxbytes) return -EFBIG; @@ -85,8 +88,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count, void *addr; int res; - res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0, - &page, &fsdata); + res = aops->write_begin(NULL, mapping, pos, n, &page, &fsdata); if (res) return res; @@ -94,8 +96,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count, memcpy(addr + offset_in_page(pos), buf, n); kunmap_atomic(addr); - res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n, - page, fsdata); + res = aops->write_end(NULL, mapping, pos, n, n, page, fsdata); if (res < 0) return res; if (res != n) From patchwork Fri Apr 29 17:25:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80C19C433EF for ; Fri, 29 Apr 2022 17:26:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379646AbiD2RaC (ORCPT ); Fri, 29 Apr 2022 13:30:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379524AbiD2R30 (ORCPT ); Fri, 29 Apr 2022 13:29:26 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DDF60985B5 for ; Fri, 29 Apr 2022 10:26:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=fNlCZXPKuBm6VOqh/zfrwpxBSeDGW/beXlyZ2VpUwJ0=; b=Q39RMJ17cXlicAVx6LmOlUfYWp iR31she0yAXnXYbsTKyFBFR1rje2wl9yniNKaeA+HRFcrDTcOGVvDOqVA2YNP8Ym2lh2uFc29RdGW 2MfoMZHArQEYXpIxONnAESJCQe1G0HxZZxidpYLlMpOs/PBnaStwFRpFM+QtCY3ZSUEPY0YWjjsJs DFQx2RUlrU6U8GjN3V0xj4/2bV6Gy/soQkPM54c5UVIr0MrZJpp6SvTqFmlSZi29YlqHzsq+6ueKY Jb6t24TrLnQXYyx9lOMjvwHaQrGoGyxbebqqvDMHThHF4j3KK6QJRMAoc9bptPTmj2GLm/+QMIKGQ UpIK5qgA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZ2-4W; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 24/69] i915: Call aops write_begin() and write_end() directly Date: Fri, 29 Apr 2022 18:25:11 +0100 Message-Id: <20220429172556.3011843-25-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> 
References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org pagecache_write_begin() and pagecache_write_end() are now trivial wrappers, so call the aops directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 3a1c782ed791..e92cc9d7257c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -408,6 +408,7 @@ shmem_pwrite(struct drm_i915_gem_object *obj, const struct drm_i915_gem_pwrite *arg) { struct address_space *mapping = obj->base.filp->f_mapping; + const struct address_space_operations *aops = mapping->a_ops; char __user *user_data = u64_to_user_ptr(arg->data_ptr); u64 remain, offset; unsigned int pg; @@ -465,9 +466,8 @@ shmem_pwrite(struct drm_i915_gem_object *obj, if (err) return err; - err = pagecache_write_begin(obj->base.filp, mapping, - offset, len, 0, - &page, &data); + err = aops->write_begin(obj->base.filp, mapping, offset, len, + &page, &data); if (err < 0) return err; @@ -477,9 +477,8 @@ shmem_pwrite(struct drm_i915_gem_object *obj, len); kunmap_atomic(vaddr); - err = pagecache_write_end(obj->base.filp, mapping, - offset, len, len - unwritten, - page, data); + err = aops->write_end(obj->base.filp, mapping, offset, len, + len - unwritten, page, data); if (err < 0) return err; @@ -622,6 +621,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, { struct drm_i915_gem_object *obj; struct file *file; + const struct address_space_operations *aops; resource_size_t offset; int err; @@ -633,15 +633,15 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, GEM_BUG_ON(obj->write_domain != I915_GEM_DOMAIN_CPU); file = obj->base.filp; + aops = file->f_mapping->a_ops; offset = 0; do { unsigned int len = min_t(typeof(size), size, PAGE_SIZE); struct page *page; void *pgdata, *vaddr; - err = pagecache_write_begin(file, file->f_mapping, - offset, len, 0, - &page, &pgdata); + err = aops->write_begin(file, file->f_mapping, offset, len, + &page, &pgdata); if (err < 0) goto fail; @@ -649,9 +649,8 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, memcpy(vaddr, data, len); kunmap(page); - err = pagecache_write_end(file, file->f_mapping, - offset, len, len, - page, pgdata); + err = aops->write_end(file, file->f_mapping, offset, len, len, + page, pgdata); if (err < 0) goto fail; From patchwork Fri Apr 29 17:25:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832507 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7F67C433F5 for ; Fri, 29 Apr 2022 17:26:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379664AbiD2R37 (ORCPT ); Fri, 29 Apr 2022 13:29:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42752 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379526AbiD2R30 (ORCPT ); Fri, 29 Apr 2022 13:29:26 -0400 Received: from casper.infradead.org 
(casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 03E2E9E9D3 for ; Fri, 29 Apr 2022 10:26:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=U3hUOwNygGz1zGvOEmViKYtkKTgTTOWc2Xf9zuBI+cE=; b=fjgI8Z/VqMFxH86T6rInwI/wYQ 0pbwWchwA9TozxH2c36wn2yY/gJc0DZysookR2oz83x0HVm454wbbJ3x4mXF3bD7SShuN7zAai7xs +uNUYByb14RKQ9Pw/gZYy/hwblGAVpIEVjcNC2F9KH5YDBeW85nQicJlZUTCDsTBmi+HYY1ppOd0L cpUcDklnJof/tBfRwc65jt8eebV45epb0+AT6v2OvyFpuYSU+OMh/VHZkMwgDsWMGE8gwAdtDSEmr ls2k4fYrdFyoGTEqQBGCQUUYj5zTTVVUSZfqTR35Bq2q6oQNJiznwTZNQHq+bCpW8XaScPp9q9b56 J1Am1j0Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZ7-8q; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 25/69] fs: Remove pagecache_write_begin() and pagecache_write_end() Date: Fri, 29 Apr 2022 18:25:12 +0100 Message-Id: <20220429172556.3011843-26-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org These wrappers have no more users; remove them. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/fs.h | 12 ------------ mm/filemap.c | 20 -------------------- 2 files changed, 32 deletions(-) diff --git a/include/linux/fs.h b/include/linux/fs.h index a0e73432526f..b35ce086a7a1 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -380,18 +380,6 @@ struct address_space_operations { extern const struct address_space_operations empty_aops; -/* - * pagecache_write_begin/pagecache_write_end must be used by general code - * to write into the pagecache. - */ -int pagecache_write_begin(struct file *, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata); - -int pagecache_write_end(struct file *, struct address_space *mapping, - loff_t pos, unsigned len, unsigned copied, - struct page *page, void *fsdata); - /** * struct address_space - Contents of a cacheable, mappable object. * @host: Owner, either the inode or the block_device. diff --git a/mm/filemap.c b/mm/filemap.c index 0751843b052f..c15cfc28f9ce 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3622,26 +3622,6 @@ struct page *read_cache_page_gfp(struct address_space *mapping, } EXPORT_SYMBOL(read_cache_page_gfp); -int pagecache_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata) -{ - const struct address_space_operations *aops = mapping->a_ops; - - return aops->write_begin(file, mapping, pos, len, pagep, fsdata); -} -EXPORT_SYMBOL(pagecache_write_begin); - -int pagecache_write_end(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned copied, - struct page *page, void *fsdata) -{ - const struct address_space_operations *aops = mapping->a_ops; - - return aops->write_end(file, mapping, pos, len, copied, page, fsdata); -} -EXPORT_SYMBOL(pagecache_write_end); - /* * Warn about a page cache invalidation failure during a direct I/O write. 
*/ From patchwork Fri Apr 29 17:25:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832521 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38B47C433F5 for ; Fri, 29 Apr 2022 17:27:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379574AbiD2RaS (ORCPT ); Fri, 29 Apr 2022 13:30:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42750 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379536AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 989CEA27FA for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ZNKJWFDCUWil8jx/xHZmnIEY4w4yDlyboSDNkzjtICU=; b=Ph992qZNAl0F6XMJcn6N2/BBcY VdH9rl/eDs2xVZys8GqVgXe0P+Nz9+xkduDldVfs80sLR+M0oEoF4Ov2JPts4qot7NIUyMI9+n98U 1/3pOwvi02aEED+HADweMbRyE2dsi9A5BdBWdCxzSRD73e9XyQRg4ttvT6xyH31WRpcixPG4rB+4t 64kj+BTWSbxObvJvykTiEpfzVdOaWSgEF7ljGP7AP6rSUVX5O6dtocyOIy18ArGKqHj130xNZbGfG zU70QI+sa/4Pw8isrY+zawox6VishHklgcI4ZZq8im+C9J5O+Jpfg2dwGM8D63vqcG24qbIbSC7tB eGKg/fZQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZC-D7; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: Miaohe Lin , Matthew Wilcox Subject: [PATCH 26/69] filemap: Remove obsolete comment in lock_page Date: Fri, 29 Apr 2022 18:25:13 +0100 Message-Id: <20220429172556.3011843-27-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Miaohe Lin We no longer need the page's inode pinned. This comment dates back to commit db37648cd6ce ("[PATCH] mm: non syncing lock_page()") which added lock_page_nosync(). That was removed by commit 7eaceaccab5f ("block: remove per-queue plugging") which also made this comment obsolete. Signed-off-by: Miaohe Lin Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- include/linux/pagemap.h | 3 --- 1 file changed, 3 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 65ae8f96554b..ab47579af434 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -908,9 +908,6 @@ static inline void folio_lock(struct folio *folio) __folio_lock(folio); } -/* - * lock_page may only be called if we have the page's inode pinned. 
- */ static inline void lock_page(struct page *page) { struct folio *folio; From patchwork Fri Apr 29 17:25:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A8EFC433EF for ; Fri, 29 Apr 2022 17:26:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379523AbiD2RaA (ORCPT ); Fri, 29 Apr 2022 13:30:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43146 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379529AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 686319F3A8 for ; Fri, 29 Apr 2022 10:26:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=RHxad9HVpNslqwK8CBIckHgoeK7awCd5JTOwD8ARf4E=; b=bZn+9Q5rOkHbqDJuPT/UfmnKxE HvP6MTBx3pwy37EtfiwC3pqDiJOMLne9ZamCXdnyZDkOtR+3re3I6sMVfoWw5zxiKdqLbkv00btFf 64Fumzbyvzc+LtjAp7c0IMJMMjZ/ChbhEyQVfTwuERlPrmV/FypmCjjws3K2ddRDl/ta0fHI8j/iO a2VHxVS5GVy18AVq8c+8rGmorIr1BAr36FOaQF1ZxL1GYk0MGor1snKXTYyXmNnNFn/Hl80umTGZ+ 8jC9tNzGANDQpiaowTf/YUclKopq9vmvLD1bYsLPQL4SMo7Zq4Vw0LgiQ6oxajkueygaiKGjYpxH9 7HcB4qFQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZI-GV; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 27/69] filemap: Update the folio_lock documentation Date: Fri, 29 Apr 2022 18:25:14 +0100 Message-Id: <20220429172556.3011843-28-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Add kernel-doc for several functions related to taking the folio lock. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 59 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 2 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index ab47579af434..60657132080f 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -888,6 +888,18 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm, void unlock_page(struct page *page); void folio_unlock(struct folio *folio); +/** + * folio_trylock() - Attempt to lock a folio. + * @folio: The folio to attempt to lock. + * + * Sometimes it is undesirable to wait for a folio to be unlocked (eg + * when the locks are being taken in the wrong order, or if making + * progress through a batch of folios is more important than processing + * them in order). Usually folio_lock() is the correct function to call. + * + * Context: Any context. + * Return: Whether the lock was successfully acquired.
+ */ static inline bool folio_trylock(struct folio *folio) { return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0))); @@ -901,6 +913,28 @@ static inline int trylock_page(struct page *page) return folio_trylock(page_folio(page)); } +/** + * folio_lock() - Lock this folio. + * @folio: The folio to lock. + * + * The folio lock protects against many things, probably more than it + * should. It is primarily held while a folio is being brought uptodate, + * either from its backing file or from swap. It is also held while a + * folio is being truncated from its address_space, so holding the lock + * is sufficient to keep folio->mapping stable. + * + * The folio lock is also held while write() is modifying the page to + * provide POSIX atomicity guarantees (as long as the write does not + * cross a page boundary). Other modifications to the data in the folio + * do not hold the folio lock and can race with writes, eg DMA and stores + * to mapped pages. + * + * Context: May sleep. If you need to acquire the locks of two or + * more folios, they must be in order of ascending index, if they are + * in the same address_space. If they are in different address_spaces, + * acquire the lock of the folio which belongs to the address_space which + * has the lowest address in memory first. + */ static inline void folio_lock(struct folio *folio) { might_sleep(); @@ -908,6 +942,17 @@ static inline void folio_lock(struct folio *folio) __folio_lock(folio); } +/** + * lock_page() - Lock the folio containing this page. + * @page: The page to lock. + * + * See folio_lock() for a description of what the lock protects. + * This is a legacy function and new code should probably use folio_lock() + * instead. + * + * Context: May sleep. Pages in the same folio share a lock, so do not + * attempt to lock two pages which share a folio. + */ static inline void lock_page(struct page *page) { struct folio *folio; @@ -918,6 +963,16 @@ static inline void lock_page(struct page *page) __folio_lock(folio); } +/** + * folio_lock_killable() - Lock this folio, interruptible by a fatal signal. + * @folio: The folio to lock. + * + * Attempts to lock the folio, like folio_lock(), except that the sleep + * to acquire the lock is interruptible by a fatal signal. + * + * Context: May sleep; see folio_lock(). + * Return: 0 if the lock was acquired; -EINTR if a fatal signal was received. + */ static inline int folio_lock_killable(struct folio *folio) { might_sleep(); @@ -964,8 +1019,8 @@ int folio_wait_bit_killable(struct folio *folio, int bit_nr); * Wait for a folio to be unlocked. * * This must be called with the caller "holding" the folio, - * ie with increased "page->count" so that the folio won't - * go away during the wait.. + * ie with increased folio reference count so that the folio won't + * go away during the wait. 
*/ static inline void folio_wait_locked(struct folio *folio) { From patchwork Fri Apr 29 17:25:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832501 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30BB9C433F5 for ; Fri, 29 Apr 2022 17:26:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379645AbiD2R3t (ORCPT ); Fri, 29 Apr 2022 13:29:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43174 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379533AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B09B9F3B4 for ; Fri, 29 Apr 2022 10:26:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=O1t4085GeyAcPbSh9EMTxNvJaZKXVDwCSXz6nFjuqnI=; b=kBXcCWY1nMi2zzWBRHgzmoedWC kQvrzaM2RTq4M3Xd+h4Oml6FBfxhX7/veuwxWNNIxuZhKqafFgKAlKUGy6x+LbtVtdmvkuBT9DvaC G8G5fcO1XsXOe9vC0TBDjMxV35OBUogiAYUmJ01/6sps3f7nAGFy+DF/lAXT9oGs3dq8Cf3JJWUPe oH0Tx5HzoL7G4p8BrvxZv9CDgicC2eu4fikSnc+kBz0eThjMfJhou0pUxkFWrFUZRMJSx0kQD7/++ vQE0o4FeWa+sj2IAJk5xWdOiMl1o4t2YokfamnzdxXBtwJkOog5HGa3pLhXTMW6CL+ZQTq4zxC1hp kYGZ3BfA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZK-Is; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 28/69] filemap: Update the folio_mark_dirty documentation Date: Fri, 29 Apr 2022 18:25:15 +0100 Message-Id: <20220429172556.3011843-29-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org The previous comment was not terribly helpful. Be a bit more explicit about the necessary locking environment. Signed-off-by: Matthew Wilcox (Oracle) --- mm/page-writeback.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 7e2da284e427..fa1117db4610 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2602,10 +2602,12 @@ EXPORT_SYMBOL(folio_redirty_for_writepage); * folio_mark_dirty - Mark a folio as being modified. * @folio: The folio. * - * For folios with a mapping this should be done with the folio lock held - * for the benefit of asynchronous memory errors who prefer a consistent - * dirty state. This rule can be broken in some special cases, - * but should be better not to. + * The folio may not be truncated while this function is running. + * Holding the folio lock is sufficient to prevent truncation, but some + * callers cannot acquire a sleeping lock. These callers instead hold + * the page table lock for a page table which contains at least one page + * in this folio. 
Truncation will block on the page table lock as it + * unmaps pages before removing the folio from its mapping. * * Return: True if the folio was newly dirtied, false if it was already dirty. */ From patchwork Fri Apr 29 17:25:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832524 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7DC9C433F5 for ; Fri, 29 Apr 2022 17:27:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1358155AbiD2RaZ (ORCPT ); Fri, 29 Apr 2022 13:30:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379595AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 810D3A622F for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=KQBtz+ZwwMq1JJtGAGPM1Be9nShDRCLK9wXs7Y5ncl4=; b=V+6HbnkTPoAsUbRw3EetQwre33 4yOkXR6vJE6zfmXZrrL/zzg8JoE4mPSTtFJvDohXoyMhgeso5dQtNv3DxAQJyScV36IGe5o1fbhx7 Fl4CS2oTGNZ4DCrhYmcsQbMIxwlCFVQ2WxcY+CA4vT/xpNP+RPuMF6Sc+D3yuEX3mh+19PnJKDpw8 ZZIh3ftH5lJ0E8zWcrOHosKAW+bNrFg0szRypWL5re2FFxDXVynk6tQ6rvplxq/gfKIrsngCtckqx dEdXLTK8sozCIXodofUJjGVKDlNu4t0Vgt/KCzPaPaHwztIRIWLqVER8O+Qd+fI58PvXouyP/eD3Y zppxJLyA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZR-NI; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 29/69] readahead: Use a folio in read_pages() Date: Fri, 29 Apr 2022 18:25:16 +0100 Message-Id: <20220429172556.3011843-30-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Handle multi-page folios correctly and removes a few calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- mm/readahead.c | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/mm/readahead.c b/mm/readahead.c index 8e3775829513..947a7a1fd867 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -145,7 +145,7 @@ EXPORT_SYMBOL_GPL(file_ra_state_init); static void read_pages(struct readahead_control *rac) { const struct address_space_operations *aops = rac->mapping->a_ops; - struct page *page; + struct folio *folio; struct blk_plug plug; if (!readahead_count(rac)) @@ -156,24 +156,23 @@ static void read_pages(struct readahead_control *rac) if (aops->readahead) { aops->readahead(rac); /* - * Clean up the remaining pages. The sizes in ->ra + * Clean up the remaining folios. The sizes in ->ra * may be used to size the next readahead, so make sure * they accurately reflect what happened. 
*/ - while ((page = readahead_page(rac))) { - rac->ra->size -= 1; - if (rac->ra->async_size > 0) { - rac->ra->async_size -= 1; - delete_from_page_cache(page); + while ((folio = readahead_folio(rac)) != NULL) { + unsigned long nr = folio_nr_pages(folio); + + rac->ra->size -= nr; + if (rac->ra->async_size >= nr) { + rac->ra->async_size -= nr; + filemap_remove_folio(folio); } - unlock_page(page); - put_page(page); + folio_unlock(folio); } } else { - while ((page = readahead_page(rac))) { - aops->readpage(rac->file, page); - put_page(page); - } + while ((folio = readahead_folio(rac))) + aops->readpage(rac->file, &folio->page); } blk_finish_plug(&plug); From patchwork Fri Apr 29 17:25:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832505 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4669FC433FE for ; Fri, 29 Apr 2022 17:26:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379593AbiD2R3z (ORCPT ); Fri, 29 Apr 2022 13:29:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379531AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A34D8A0BC6 for ; Fri, 29 Apr 2022 10:26:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=vB/FUDznW007D/1dFC/irSstDYRSCilgx1jaSg3ClnQ=; b=KZdZlT9gQJyZpixyUsbYP3rXB0 elGwm107T+HsDmLsqbDob56qvrMC5kd2EzB5fYyCpAW4eKWN1/9mUwykKsMEv8NXN7pv60MxoFnAj mHOUsqfVwmewR8Izgi4Nb84SoOtvZq8cPm5WaF5Yqi/W2AusnPwsS9mzmsyuaEIa9JVXeigbC12Fx WKOVsDWyOSTPV/140J8eRmiM8QBgfrSfOVl7w2rPe6DV84tHKd+H5qr3yLzTy/wOVoBdATYmBbWrX q7wdXrgFOE35dTm0AXIXOPz9LxlN2p4FVjCgBlju4e94iXmDiXnOzBGMoe+wrX330GApO5Z3HQj3k IW1RWOAA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZZ-Qn; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 30/69] fs: Convert is_dirty_writeback() to take a folio Date: Fri, 29 Apr 2022 18:25:17 +0100 Message-Id: <20220429172556.3011843-31-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Pass a folio instead of a page to aops->is_dirty_writeback(). Convert both implementations and the caller. 
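To make the new calling convention concrete, a minimal folio-based implementation might look like the sketch below (illustrative only; demo_is_dirty_writeback() and demo_aops are invented names, and a real filesystem would apply its own criteria):

#include <linux/fs.h>
#include <linux/page-flags.h>

static void demo_is_dirty_writeback(struct folio *folio, bool *dirty,
                                    bool *writeback)
{
        /* Folios still carrying fs-private state are not freeable yet. */
        if (folio_test_private(folio))
                *dirty = true;

        /* Report writeback straight from the folio flag. */
        if (folio_test_writeback(folio))
                *writeback = true;
}

static const struct address_space_operations demo_aops = {
        .is_dirty_writeback     = demo_is_dirty_writeback,
};

Because the hook now receives the folio the VM already holds, implementations no longer need page_file_mapping()/PageSwapCache() style indirection to get back at their own state, as the NFS conversion below shows.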
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- Documentation/filesystems/vfs.rst | 10 +++++----- fs/buffer.c | 16 ++++++++-------- fs/nfs/file.c | 21 +++++++++------------ include/linux/buffer_head.h | 2 +- include/linux/fs.h | 2 +- mm/vmscan.c | 2 +- 6 files changed, 25 insertions(+), 28 deletions(-) diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst index 30f303180a7d..469882f72fc1 100644 --- a/Documentation/filesystems/vfs.rst +++ b/Documentation/filesystems/vfs.rst @@ -747,7 +747,7 @@ cache in your filesystem. The following members are defined: bool (*is_partially_uptodate) (struct folio *, size_t from, size_t count); - void (*is_dirty_writeback) (struct page *, bool *, bool *); + void (*is_dirty_writeback)(struct folio *, bool *, bool *); int (*error_remove_page) (struct mapping *mapping, struct page *page); int (*swap_activate)(struct file *); int (*swap_deactivate)(struct file *); @@ -932,14 +932,14 @@ cache in your filesystem. The following members are defined: without needing I/O to bring the whole page up to date. ``is_dirty_writeback`` - Called by the VM when attempting to reclaim a page. The VM uses + Called by the VM when attempting to reclaim a folio. The VM uses dirty and writeback information to determine if it needs to stall to allow flushers a chance to complete some IO. - Ordinarily it can use PageDirty and PageWriteback but some - filesystems have more complex state (unstable pages in NFS + Ordinarily it can use folio_test_dirty and folio_test_writeback but + some filesystems have more complex state (unstable folios in NFS prevent reclaim) or do not set those flags due to locking problems. This callback allows a filesystem to indicate to the - VM if a page should be treated as dirty or writeback for the + VM if a folio should be treated as dirty or writeback for the purposes of stalling. ``error_remove_page`` diff --git a/fs/buffer.c b/fs/buffer.c index d538495a0553..fb4df259c92d 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -79,26 +79,26 @@ void unlock_buffer(struct buffer_head *bh) EXPORT_SYMBOL(unlock_buffer); /* - * Returns if the page has dirty or writeback buffers. If all the buffers - * are unlocked and clean then the PageDirty information is stale. If - * any of the pages are locked, it is assumed they are locked for IO. + * Returns if the folio has dirty or writeback buffers. If all the buffers + * are unlocked and clean then the folio_test_dirty information is stale. If + * any of the buffers are locked, it is assumed they are locked for IO. 
*/ -void buffer_check_dirty_writeback(struct page *page, +void buffer_check_dirty_writeback(struct folio *folio, bool *dirty, bool *writeback) { struct buffer_head *head, *bh; *dirty = false; *writeback = false; - BUG_ON(!PageLocked(page)); + BUG_ON(!folio_test_locked(folio)); - if (!page_has_buffers(page)) + head = folio_buffers(folio); + if (!head) return; - if (PageWriteback(page)) + if (folio_test_writeback(folio)) *writeback = true; - head = page_buffers(page); bh = head; do { if (buffer_locked(bh)) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index 314d2d7ba84a..f05c4b18b681 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -430,19 +430,16 @@ static int nfs_release_page(struct page *page, gfp_t gfp) return nfs_fscache_release_page(page, gfp); } -static void nfs_check_dirty_writeback(struct page *page, +static void nfs_check_dirty_writeback(struct folio *folio, bool *dirty, bool *writeback) { struct nfs_inode *nfsi; - struct address_space *mapping = page_file_mapping(page); - - if (!mapping || PageSwapCache(page)) - return; + struct address_space *mapping = folio->mapping; /* - * Check if an unstable page is currently being committed and - * if so, have the VM treat it as if the page is under writeback - * so it will not block due to pages that will shortly be freeable. + * Check if an unstable folio is currently being committed and + * if so, have the VM treat it as if the folio is under writeback + * so it will not block due to folios that will shortly be freeable. */ nfsi = NFS_I(mapping->host); if (atomic_read(&nfsi->commit_info.rpcs_out)) { @@ -451,11 +448,11 @@ static void nfs_check_dirty_writeback(struct page *page, } /* - * If PagePrivate() is set, then the page is not freeable and as the - * inode is not being committed, it's not going to be cleaned in the - * near future so treat it as dirty + * If the private flag is set, then the folio is not freeable + * and as the inode is not being committed, it's not going to + * be cleaned in the near future so treat it as dirty */ - if (PagePrivate(page)) + if (folio_test_private(folio)) *dirty = true; } diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index 6e5a64005fef..805c4e12700a 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -146,7 +146,7 @@ BUFFER_FNS(Defer_Completion, defer_completion) #define page_has_buffers(page) PagePrivate(page) #define folio_buffers(folio) folio_get_private(folio) -void buffer_check_dirty_writeback(struct page *page, +void buffer_check_dirty_writeback(struct folio *folio, bool *dirty, bool *writeback); /* diff --git a/include/linux/fs.h b/include/linux/fs.h index b35ce086a7a1..2be852661a29 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -369,7 +369,7 @@ struct address_space_operations { int (*launder_folio)(struct folio *); bool (*is_partially_uptodate) (struct folio *, size_t from, size_t count); - void (*is_dirty_writeback) (struct page *, bool *, bool *); + void (*is_dirty_writeback) (struct folio *, bool *dirty, bool *wb); int (*error_remove_page)(struct address_space *, struct page *); /* swapfile support */ diff --git a/mm/vmscan.c b/mm/vmscan.c index 1678802e03e7..27851232e00c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1451,7 +1451,7 @@ static void folio_check_dirty_writeback(struct folio *folio, mapping = folio_mapping(folio); if (mapping && mapping->a_ops->is_dirty_writeback) - mapping->a_ops->is_dirty_writeback(&folio->page, dirty, writeback); + mapping->a_ops->is_dirty_writeback(folio, dirty, writeback); } static struct page 
*alloc_demote_page(struct page *page, unsigned long node) From patchwork Fri Apr 29 17:25:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832502 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3D5BC433F5 for ; Fri, 29 Apr 2022 17:26:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344266AbiD2R3u (ORCPT ); Fri, 29 Apr 2022 13:29:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379534AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 10685A0BDD for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=5tD+FSQ/nAMq2dGk2GcxZrL+cShNG6uIImA7c0IG2h8=; b=e6+UB+JcZ3y54lrLDg5xyEcvwX hr7s+TH9jw/GMzSM5c4kpRpNBAZsoUr27nAVDEg5rvw007RoAuJnhZ5v5i2efPU8aw/Q1hWiYGgsT FSjriE4kHlL9JG89kHWn80ITUu9hyyKuyNZhryQPXFsJsOh9CjGbUN518/GObaS3xVnnbLmKy46Mr BUXBMAfvnW6dDGUWuZjPUc/T+V8SS7zYgGjPoRis3Hph5wm4Kv/wCLy2kU9MMm5pp+9h8SMj2UJLw 3Q589NdHQ3EpkCsdmdFCZrfNmr36DTiqmCgJDpZ7dAYuLaP287wXcS0sMHvO9nN6MkDMg90pjhZXs nvyNAoSg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNZ-00CdZf-UK; Fri, 29 Apr 2022 17:26:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 31/69] mm/readahead: Convert page_cache_async_readahead to take a folio Date: Fri, 29 Apr 2022 18:25:18 +0100 Message-Id: <20220429172556.3011843-32-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Removes a couple of calls to compound_head and saves a few bytes. Also convert verity's read_file_data_page() to be folio-based. 
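A sketch of the call-site pattern after this change, assuming the usual locals (inode, ra, file, last_index) are in scope as at the btrfs and verity call sites converted below; the readahead trigger now hands the folio straight through, with no page_folio()/compound_head() step:

        /* Hypothetical read loop; 'folio' is the folio just looked up. */
        if (folio_test_readahead(folio))
                page_cache_async_readahead(inode->i_mapping, ra, file,
                                           folio, folio->index,
                                           last_index + 1 - folio->index);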
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/btrfs/relocation.c | 5 +++-- fs/btrfs/send.c | 3 ++- fs/verity/enable.c | 29 ++++++++++++++--------------- include/linux/pagemap.h | 6 +++--- 4 files changed, 22 insertions(+), 21 deletions(-) diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index fdc2c4b411f0..9ae06895ffc9 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -2967,8 +2967,9 @@ static int relocate_one_page(struct inode *inode, struct file_ra_state *ra, goto release_page; if (PageReadahead(page)) - page_cache_async_readahead(inode->i_mapping, ra, NULL, page, - page_index, last_index + 1 - page_index); + page_cache_async_readahead(inode->i_mapping, ra, NULL, + page_folio(page), page_index, + last_index + 1 - page_index); if (!PageUptodate(page)) { btrfs_readpage(NULL, page); diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index 7d1642937274..b327dbe0cbf5 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -4986,7 +4986,8 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len) if (PageReadahead(page)) { page_cache_async_readahead(inode->i_mapping, &sctx->ra, - NULL, page, index, last_index + 1 - index); + NULL, page_folio(page), index, + last_index + 1 - index); } if (!PageUptodate(page)) { diff --git a/fs/verity/enable.c b/fs/verity/enable.c index 60a4372aa4d7..f75d2c010f36 100644 --- a/fs/verity/enable.c +++ b/fs/verity/enable.c @@ -18,27 +18,26 @@ * Read a file data page for Merkle tree construction. Do aggressive readahead, * since we're sequentially reading the entire file. */ -static struct page *read_file_data_page(struct file *filp, pgoff_t index, +static struct page *read_file_data_page(struct file *file, pgoff_t index, struct file_ra_state *ra, unsigned long remaining_pages) { - struct page *page; + DEFINE_READAHEAD(ractl, file, ra, file->f_mapping, index); + struct folio *folio; - page = find_get_page_flags(filp->f_mapping, index, FGP_ACCESSED); - if (!page || !PageUptodate(page)) { - if (page) - put_page(page); + folio = __filemap_get_folio(ractl.mapping, index, FGP_ACCESSED, 0); + if (!folio || !folio_test_uptodate(folio)) { + if (folio) + folio_put(folio); else - page_cache_sync_readahead(filp->f_mapping, ra, filp, - index, remaining_pages); - page = read_mapping_page(filp->f_mapping, index, NULL); - if (IS_ERR(page)) - return page; + page_cache_sync_ra(&ractl, remaining_pages); + folio = read_cache_folio(ractl.mapping, index, NULL, file); + if (IS_ERR(folio)) + return &folio->page; } - if (PageReadahead(page)) - page_cache_async_readahead(filp->f_mapping, ra, filp, page, - index, remaining_pages); - return page; + if (folio_test_readahead(folio)) + page_cache_async_ra(&ractl, folio, remaining_pages); + return folio_file_page(folio, index); } static int build_merkle_tree_level(struct file *filp, unsigned int level, diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 60657132080f..b70192f56454 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -1242,7 +1242,7 @@ void page_cache_sync_readahead(struct address_space *mapping, * @mapping: address_space which holds the pagecache and I/O vectors * @ra: file_ra_state which holds the readahead state * @file: Used by the filesystem for authentication. - * @page: The page at @index which triggered the readahead call. + * @folio: The folio at @index which triggered the readahead call. * @index: Index of first page to be read. * @req_count: Total number of pages being read by the caller. 
* @@ -1254,10 +1254,10 @@ void page_cache_sync_readahead(struct address_space *mapping, static inline void page_cache_async_readahead(struct address_space *mapping, struct file_ra_state *ra, struct file *file, - struct page *page, pgoff_t index, unsigned long req_count) + struct folio *folio, pgoff_t index, unsigned long req_count) { DEFINE_READAHEAD(ractl, file, ra, mapping, index); - page_cache_async_ra(&ractl, page_folio(page), req_count); + page_cache_async_ra(&ractl, folio, req_count); } static inline struct folio *__readahead_folio(struct readahead_control *ractl) From patchwork Fri Apr 29 17:25:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832504 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34A00C433F5 for ; Fri, 29 Apr 2022 17:26:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379617AbiD2R3z (ORCPT ); Fri, 29 Apr 2022 13:29:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379535AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 14162A145F for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=oltpZnUdWxXe1PX5MIFD6kGux3W85Rmiqb/miN+Kgrc=; b=oYzDPOPTsm3kfRBFTeufi+HpF0 W6KaCTYcF+642LBzwq1wZVaTjVkrhlf47iNfCj4HDXbr52IFaabO91bErQWL9Wn/COm1wwOQL7I1Y D+2jFmyqFDi0t3UB8HJ1CBUTTVFEA3Qytv3HYf2lLdqZ7D0wUVGv2NBVKntR3p1cAMajQNC2AFdte x+twqreIC0y0N/zEiRsxsWj0950xhJFuP2i8frIacp1l6DsunrXK7+iE6qs74ITf7Plu8R2vIAmx9 wxl8JeytRbU6UAAGpVxskWMZAhyTR2mJnTf+KAt86Chf/Lgmw/E5bO7eCZX4j51D0582/riGeKxwM JWMFiJAQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdZl-2i; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 32/69] buffer: Rewrite nobh_truncate_page() to use folios Date: Fri, 29 Apr 2022 18:25:19 +0100 Message-Id: <20220429172556.3011843-33-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org - Calculate iblock directly instead of using a while loop - Move has_buffers to the end to remove a backwards jump - Use __filemap_get_folio() instead of grab_cache_page(), which removes a spurious FGP_ACCESSED flag. 
- Eliminate length and pos variables - Use folio APIs where they exist Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/buffer.c | 64 ++++++++++++++++++++++------------------------------- 1 file changed, 27 insertions(+), 37 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index fb4df259c92d..9737e0dbe3ec 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2791,44 +2791,28 @@ int nobh_truncate_page(struct address_space *mapping, loff_t from, get_block_t *get_block) { pgoff_t index = from >> PAGE_SHIFT; - unsigned offset = from & (PAGE_SIZE-1); - unsigned blocksize; - sector_t iblock; - unsigned length, pos; struct inode *inode = mapping->host; - struct page *page; + unsigned blocksize = i_blocksize(inode); + struct folio *folio; struct buffer_head map_bh; + size_t offset; + sector_t iblock; int err; - blocksize = i_blocksize(inode); - length = offset & (blocksize - 1); - /* Block boundary? Nothing to do */ - if (!length) + if (!(from & (blocksize - 1))) return 0; - length = blocksize - length; - iblock = (sector_t)index << (PAGE_SHIFT - inode->i_blkbits); - - page = grab_cache_page(mapping, index); + folio = __filemap_get_folio(mapping, index, FGP_LOCK | FGP_CREAT, + mapping_gfp_mask(mapping)); err = -ENOMEM; - if (!page) + if (!folio) goto out; - if (page_has_buffers(page)) { -has_buffers: - unlock_page(page); - put_page(page); - return block_truncate_page(mapping, from, get_block); - } - - /* Find the buffer that contains "offset" */ - pos = blocksize; - while (offset >= pos) { - iblock++; - pos += blocksize; - } + if (folio_buffers(folio)) + goto has_buffers; + iblock = from >> inode->i_blkbits; map_bh.b_size = blocksize; map_bh.b_state = 0; err = get_block(inode, iblock, &map_bh, 0); @@ -2839,29 +2823,35 @@ int nobh_truncate_page(struct address_space *mapping, goto unlock; /* Ok, it's mapped. 
Make sure it's up-to-date */ - if (!PageUptodate(page)) { - err = mapping->a_ops->readpage(NULL, page); + if (!folio_test_uptodate(folio)) { + err = mapping->a_ops->readpage(NULL, &folio->page); if (err) { - put_page(page); + folio_put(folio); goto out; } - lock_page(page); - if (!PageUptodate(page)) { + folio_lock(folio); + if (!folio_test_uptodate(folio)) { err = -EIO; goto unlock; } - if (page_has_buffers(page)) + if (folio_buffers(folio)) goto has_buffers; } - zero_user(page, offset, length); - set_page_dirty(page); + offset = offset_in_folio(folio, from); + folio_zero_segment(folio, offset, round_up(offset, blocksize)); + folio_mark_dirty(folio); err = 0; unlock: - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); out: return err; + +has_buffers: + folio_unlock(folio); + folio_put(folio); + return block_truncate_page(mapping, from, get_block); } EXPORT_SYMBOL(nobh_truncate_page); From patchwork Fri Apr 29 17:25:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E141C433FE for ; Fri, 29 Apr 2022 17:26:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379681AbiD2RaE (ORCPT ); Fri, 29 Apr 2022 13:30:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43222 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379546AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37268985B6 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ACrrQU8hDoFkSs7Skuwk/HVCT6svdmLE6NpycilctPs=; b=UAB41RWrYQsgAoe6GofjVQ46vU 5zMDjpcMnICYKxcce/ekDk3y47+wQdri5KS6OfOUI4lGJ61tQwTWabANs75NQX1lwyXTW434n5tSr ntHplMe+lh6ZAnMZpr706usrcplYz7sQ8b1hknV1SeWSBMbj2rW3JwjUgb3eGk8h6+Skquvn8G/r2 JfdRbPPZH8yTMw6obRRbZzTXR31kd+egJDfatmYAjov5nx5EI/6BDc0G2JC1bpuIxY4s0V/W0qVWX hcQpK91qA97yDGmuLcpnafzUUm1TWh+NyWaCRpYV/NlCXv2OMKrnDX7Xi612pHw649QVxijtvAzZq fGPLLBMQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdZs-6E; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 33/69] fs: Introduce aops->read_folio Date: Fri, 29 Apr 2022 18:25:20 +0100 Message-Id: <20220429172556.3011843-34-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org The ->readpage and ->read_folio operations are always called with the same set of bits; it's only the type which differs. Use a union to help with the transition and convert all the callers to use ->read_folio. 
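From a filesystem's point of view the transition is roughly the sketch below (myfs_* names are hypothetical): because the two members alias each other in the union, the aops table simply names .read_folio once the callback has been retyped to take a folio, and only one of the pair may be initialised:

static int myfs_read_folio(struct file *file, struct folio *folio);

static const struct address_space_operations myfs_aops = {
        .read_folio     = myfs_read_folio,
        /*
         * .readpage shares storage with .read_folio, so the old
         * member must not also be set in the same table.
         */
};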
Signed-off-by: Matthew Wilcox (Oracle) --- fs/buffer.c | 2 +- include/linux/fs.h | 5 ++++- mm/filemap.c | 6 +++--- mm/readahead.c | 10 +++++----- 4 files changed, 13 insertions(+), 10 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 9737e0dbe3ec..5826ef29fe70 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2824,7 +2824,7 @@ int nobh_truncate_page(struct address_space *mapping, /* Ok, it's mapped. Make sure it's up-to-date */ if (!folio_test_uptodate(folio)) { - err = mapping->a_ops->readpage(NULL, &folio->page); + err = mapping->a_ops->read_folio(NULL, folio); if (err) { folio_put(folio); goto out; diff --git a/include/linux/fs.h b/include/linux/fs.h index 2be852661a29..5ecc4b74204d 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -335,7 +335,10 @@ static inline bool is_sync_kiocb(struct kiocb *kiocb) struct address_space_operations { int (*writepage)(struct page *page, struct writeback_control *wbc); - int (*readpage)(struct file *, struct page *); + union { + int (*readpage)(struct file *, struct page *); + int (*read_folio)(struct file *, struct folio *); + }; /* Write back some dirty pages from this mapping. */ int (*writepages)(struct address_space *, struct writeback_control *); diff --git a/mm/filemap.c b/mm/filemap.c index c15cfc28f9ce..132015e42384 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2419,7 +2419,7 @@ static int filemap_read_folio(struct file *file, struct address_space *mapping, */ folio_clear_error(folio); /* Start the actual read. The read will unlock the page. */ - error = mapping->a_ops->readpage(file, &folio->page); + error = mapping->a_ops->read_folio(file, folio); if (error) return error; @@ -3447,7 +3447,7 @@ int generic_file_mmap(struct file *file, struct vm_area_struct *vma) { struct address_space *mapping = file->f_mapping; - if (!mapping->a_ops->readpage) + if (!mapping->a_ops->read_folio) return -ENOEXEC; file_accessed(file); vma->vm_ops = &generic_file_vm_ops; @@ -3506,7 +3506,7 @@ static struct folio *do_read_cache_folio(struct address_space *mapping, if (filler) err = filler(data, &folio->page); else - err = mapping->a_ops->readpage(data, &folio->page); + err = mapping->a_ops->read_folio(data, folio); if (err < 0) { folio_put(folio); diff --git a/mm/readahead.c b/mm/readahead.c index 947a7a1fd867..2004aa58ae24 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -15,7 +15,7 @@ * explicitly requested by the application. Readahead only ever * attempts to read folios that are not yet in the page cache. If a * folio is present but not up-to-date, readahead will not try to read - * it. In that case a simple ->readpage() will be requested. + * it. In that case a simple ->read_folio() will be requested. * * Readahead is triggered when an application read request (whether a * system call or a page fault) finds that the requested folio is not in @@ -78,7 +78,7 @@ * address space operation, for which mpage_readahead() is a canonical * implementation. ->readahead() should normally initiate reads on all * folios, but may fail to read any or all folios without causing an I/O - * error. The page cache reading code will issue a ->readpage() request + * error. The page cache reading code will issue a ->read_folio() request * for any folio which ->readahead() did not read, and only an error * from this will be final. * @@ -110,7 +110,7 @@ * were not fetched with readahead_folio(). This will allow a * subsequent synchronous readahead request to try them again. 
If they * are left in the page cache, then they will be read individually using - * ->readpage() which may be less efficient. + * ->read_folio() which may be less efficient. */ #include @@ -172,7 +172,7 @@ static void read_pages(struct readahead_control *rac) } } else { while ((folio = readahead_folio(rac))) - aops->readpage(rac->file, &folio->page); + aops->read_folio(rac->file, folio); } blk_finish_plug(&plug); @@ -302,7 +302,7 @@ void force_page_cache_ra(struct readahead_control *ractl, struct backing_dev_info *bdi = inode_to_bdi(mapping->host); unsigned long max_pages, index; - if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readahead)) + if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead)) return; /* From patchwork Fri Apr 29 17:25:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CB36C433F5 for ; Fri, 29 Apr 2022 17:26:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379641AbiD2RaB (ORCPT ); Fri, 29 Apr 2022 13:30:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43224 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379547AbiD2R3a (ORCPT ); Fri, 29 Apr 2022 13:29:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58355A147B for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=vhj5dyAvBi2vnnYXBQVdBQQ6HBrdGJDWlN4F5I/avk8=; b=psgsktlqHOPYXFjbOoQeyNceJA clCW6SkZ08bqg6SsGxnERtFxCKYIghk8r5vlw6cVziIMN5iF4nR+xxosggIf90AJCqAbZaaLFmYxh 0Z5I+Em5Qw9d+95BdHrXRuL/qUen3W64QUwt+nB0Bb/HHOBsqqiK/VUngMNOB9H/W6eujGIoPo1uT g4s+e+OfL9/uGAe2/BZPzc8rYEe8NkhZ6+Xk2Ghk96qhlnJiA4iTBdnrCnddLPzqlktOgnN3AqKuY 4i/qfd5HFMbAypr+vD0pE+w8nqvqfuNhPijXyva61kAwWoaI5dUNl0Aoy01/WdQy7neJiVnM6H3Cm vUP0jSpA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdZy-A3; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 34/69] fs: read_folio documentation Date: Fri, 29 Apr 2022 18:25:21 +0100 Message-Id: <20220429172556.3011843-35-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Convert all the ->readpage documentation to ->read_folio. 
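The contract described by the updated documentation can be illustrated with a minimal sketch (hypothetical myfs_read_folio(), with myfs_fill_folio() standing in for the filesystem's real I/O path): the folio arrives locked and must be unlocked once the read completes, marked uptodate only on success; if the I/O cannot be performed at this time, the folio may instead be unlocked and AOP_TRUNCATED_PAGE returned, as the vfs.rst hunk below documents:

static int myfs_read_folio(struct file *file, struct folio *folio)
{
        int err = myfs_fill_folio(folio);       /* hypothetical I/O helper */

        if (!err)
                folio_mark_uptodate(folio);
        folio_unlock(folio);
        return err;
}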
Signed-off-by: Matthew Wilcox (Oracle) --- Documentation/filesystems/fscrypt.rst | 2 +- Documentation/filesystems/fsverity.rst | 2 +- Documentation/filesystems/locking.rst | 10 +++++----- Documentation/filesystems/netfs_library.rst | 8 ++++---- Documentation/filesystems/vfs.rst | 20 ++++++++++---------- 5 files changed, 21 insertions(+), 21 deletions(-) diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst index 6ccd5efb25b7..2e9aaa295125 100644 --- a/Documentation/filesystems/fscrypt.rst +++ b/Documentation/filesystems/fscrypt.rst @@ -1256,7 +1256,7 @@ inline encryption hardware will encrypt/decrypt the file contents. When inline encryption isn't used, filesystems must encrypt/decrypt the file contents themselves, as described below: -For the read path (->readpage()) of regular files, filesystems can +For the read path (->read_folio()) of regular files, filesystems can read the ciphertext into the page cache and decrypt it in-place. The page lock must be held until decryption has finished, to prevent the page from becoming visible to userspace prematurely. diff --git a/Documentation/filesystems/fsverity.rst b/Documentation/filesystems/fsverity.rst index 8cc536d08f51..36290530e194 100644 --- a/Documentation/filesystems/fsverity.rst +++ b/Documentation/filesystems/fsverity.rst @@ -548,7 +548,7 @@ already verified). Below, we describe how filesystems implement this. Pagecache ~~~~~~~~~ -For filesystems using Linux's pagecache, the ``->readpage()`` and +For filesystems using Linux's pagecache, the ``->read_folio()`` and ``->readahead()`` methods must be modified to verify pages before they are marked Uptodate. Merely hooking ``->read_iter()`` would be insufficient, since ``->read_iter()`` is not used for memory maps. diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst index fd9d9caf09ab..aeba2475a53c 100644 --- a/Documentation/filesystems/locking.rst +++ b/Documentation/filesystems/locking.rst @@ -237,7 +237,7 @@ address_space_operations prototypes:: int (*writepage)(struct page *page, struct writeback_control *wbc); - int (*readpage)(struct file *, struct page *); + int (*read_folio)(struct file *, struct folio *); int (*writepages)(struct address_space *, struct writeback_control *); bool (*dirty_folio)(struct address_space *, struct folio *folio); void (*readahead)(struct readahead_control *); @@ -268,7 +268,7 @@ locking rules: ops PageLocked(page) i_rwsem invalidate_lock ====================== ======================== ========= =============== writepage: yes, unlocks (see below) -readpage: yes, unlocks shared +read_folio: yes, unlocks shared writepages: dirty_folio maybe readahead: yes, unlocks shared @@ -289,13 +289,13 @@ swap_activate: no swap_deactivate: no ====================== ======================== ========= =============== -->write_begin(), ->write_end() and ->readpage() may be called from +->write_begin(), ->write_end() and ->read_folio() may be called from the request handler (/dev/loop). -->readpage() unlocks the page, either synchronously or via I/O +->read_folio() unlocks the folio, either synchronously or via I/O completion. -->readahead() unlocks the pages that I/O is attempted on like ->readpage(). +->readahead() unlocks the folios that I/O is attempted on like ->read_folio(). ->writepage() is used for two purposes: for "memory cleansing" and for "sync". 
These are quite different operations and the behaviour may differ diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst index d51c2a5ccf57..a80a59941d2f 100644 --- a/Documentation/filesystems/netfs_library.rst +++ b/Documentation/filesystems/netfs_library.rst @@ -96,7 +96,7 @@ attached to an inode (or NULL if fscache is disabled):: Buffered Read Helpers ===================== -The library provides a set of read helpers that handle the ->readpage(), +The library provides a set of read helpers that handle the ->read_folio(), ->readahead() and much of the ->write_begin() VM operations and translate them into a common call framework. @@ -136,8 +136,8 @@ Read Helper Functions Three read helpers are provided:: void netfs_readahead(struct readahead_control *ractl); - int netfs_readpage(struct file *file, - struct page *page); + int netfs_read_folio(struct file *file, + struct folio *folio); int netfs_write_begin(struct file *file, struct address_space *mapping, loff_t pos, @@ -148,7 +148,7 @@ Three read helpers are provided:: Each corresponds to a VM address space operation. These operations use the state in the per-inode context. -For ->readahead() and ->readpage(), the network filesystem just point directly +For ->readahead() and ->read_folio(), the network filesystem just point directly at the corresponding read helper; whereas for ->write_begin(), it may be a little more complicated as the network filesystem might want to flush conflicting writes or track dirty data and needs to put the acquired folio if diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst index 469882f72fc1..0919a4ad973a 100644 --- a/Documentation/filesystems/vfs.rst +++ b/Documentation/filesystems/vfs.rst @@ -656,7 +656,7 @@ by memory-mapping the page. Data is written into the address space by the application, and then written-back to storage typically in whole pages, however the address_space has finer control of write sizes. -The read process essentially only requires 'readpage'. The write +The read process essentially only requires 'read_folio'. The write process is more complicated and uses write_begin/write_end or dirty_folio to write data into the address_space, and writepage and writepages to writeback data to storage. @@ -722,7 +722,7 @@ cache in your filesystem. The following members are defined: struct address_space_operations { int (*writepage)(struct page *page, struct writeback_control *wbc); - int (*readpage)(struct file *, struct page *); + int (*read_folio)(struct file *, struct folio *); int (*writepages)(struct address_space *, struct writeback_control *); bool (*dirty_folio)(struct address_space *, struct folio *); void (*readahead)(struct readahead_control *); @@ -772,14 +772,14 @@ cache in your filesystem. The following members are defined: See the file "Locking" for more details. -``readpage`` - called by the VM to read a page from backing store. The page - will be Locked when readpage is called, and should be unlocked - and marked uptodate once the read completes. If ->readpage - discovers that it needs to unlock the page for some reason, it - can do so, and then return AOP_TRUNCATED_PAGE. In this case, - the page will be relocated, relocked and if that all succeeds, - ->readpage will be called again. +``read_folio`` + called by the VM to read a folio from backing store. The folio + will be locked when read_folio is called, and should be unlocked + and marked uptodate once the read completes. 
If ->read_folio + discovers that it cannot perform the I/O at this time, it can + unlock the folio and return AOP_TRUNCATED_PAGE. In this case, + the folio will be looked up again, relocked and if that all succeeds, + ->read_folio will be called again. ``writepages`` called by the VM to write out pages associated with the From patchwork Fri Apr 29 17:25:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F814C433EF for ; Fri, 29 Apr 2022 17:26:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379687AbiD2RaF (ORCPT ); Fri, 29 Apr 2022 13:30:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379549AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5866FA147C for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=kLHvtK2PEJqV3VRmzYKvwtRC8J4WDZZsodUTyXD7yXw=; b=oykdJTp2d5NpASn7vYY6GAVW/a Ov7QgJpIdkhdsB08mMzSGq511s1V/iyNogwoftcPuWvX8I2LG2gJj35JJ+tNluK0J+bGVOWSqDiCL STgq708PE3hY+f5Xc/cW4OmKC55+zG2JPq0vI+Ujfr7HPT5Xy8ufIJryVbbejQsB4AeUjlzL8Lyvi Y1K9shA2ZZTxs+DXiLP+mswgjmLJ2+OSZ5k0txlxOPyULgQ94M+ZP5Ob9z5OWeg5DfX5mImdTBNpW nYDEVODyh1+xYkA4/t1bsCYW0cWfEGMIMLnhDjq6U1hK5+iDf4jzZiGZe6oh/hvf8X68CqPx8QJR3 BYfxQlmw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00Cda3-Db; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 35/69] fs: Convert netfs_readpage to netfs_read_folio Date: Fri, 29 Apr 2022 18:25:22 +0100 Message-Id: <20220429172556.3011843-36-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is straightforward because netfs already worked in terms of folios. 
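For a netfs-based filesystem the wiring is a one-line aops change, sketched here with a hypothetical myfs; the library helper is pointed at directly, with no per-filesystem wrapper, and 9p, afs and ceph make exactly this substitution in the diff below:

const struct address_space_operations myfs_aops = {
        .read_folio     = netfs_read_folio,
        .readahead      = netfs_readahead,
        /* write-side operations omitted from this sketch */
};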
Signed-off-by: Matthew Wilcox (Oracle) --- fs/9p/vfs_addr.c | 2 +- fs/afs/file.c | 2 +- fs/ceph/addr.c | 2 +- fs/netfs/buffered_read.c | 15 +++++++-------- include/linux/netfs.h | 2 +- 5 files changed, 11 insertions(+), 12 deletions(-) diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index a2d57112f53e..3a84167f4893 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -336,7 +336,7 @@ static bool v9fs_dirty_folio(struct address_space *mapping, struct folio *folio) #endif const struct address_space_operations v9fs_addr_operations = { - .readpage = netfs_readpage, + .read_folio = netfs_read_folio, .readahead = netfs_readahead, .dirty_folio = v9fs_dirty_folio, .writepage = v9fs_vfs_writepage, diff --git a/fs/afs/file.c b/fs/afs/file.c index 26292a110a8f..e277fbe55262 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -50,7 +50,7 @@ const struct inode_operations afs_file_inode_operations = { }; const struct address_space_operations afs_file_aops = { - .readpage = netfs_readpage, + .read_folio = netfs_read_folio, .readahead = netfs_readahead, .dirty_folio = afs_dirty_folio, .launder_folio = afs_launder_folio, diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index e65541a51b68..3acd33da6d8c 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -1372,7 +1372,7 @@ static int ceph_write_end(struct file *file, struct address_space *mapping, } const struct address_space_operations ceph_aops = { - .readpage = netfs_readpage, + .read_folio = netfs_read_folio, .readahead = netfs_readahead, .writepage = ceph_writepage, .writepages = ceph_writepages_start, diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c index 1d44509455a5..8742d22dfd2b 100644 --- a/fs/netfs/buffered_read.c +++ b/fs/netfs/buffered_read.c @@ -198,22 +198,21 @@ void netfs_readahead(struct readahead_control *ractl) EXPORT_SYMBOL(netfs_readahead); /** - * netfs_readpage - Helper to manage a readpage request + * netfs_read_folio - Helper to manage a read_folio request * @file: The file to read from - * @subpage: A subpage of the folio to read + * @folio: The folio to read * - * Fulfil a readpage request by drawing data from the cache if possible, or the - * netfs if not. Space beyond the EOF is zero-filled. Multiple I/O requests - * from different sources will get munged together. + * Fulfil a read_folio request by drawing data from the cache if + * possible, or the netfs if not. Space beyond the EOF is zero-filled. + * Multiple I/O requests from different sources will get munged together. * * The calling netfs must initialise a netfs context contiguous to the vfs * inode before calling this. * * This is usable whether or not caching is enabled. 
*/ -int netfs_readpage(struct file *file, struct page *subpage) +int netfs_read_folio(struct file *file, struct folio *folio) { - struct folio *folio = page_folio(subpage); struct address_space *mapping = folio_file_mapping(folio); struct netfs_io_request *rreq; struct netfs_i_context *ctx = netfs_i_context(mapping->host); @@ -245,7 +244,7 @@ int netfs_readpage(struct file *file, struct page *subpage) folio_unlock(folio); return ret; } -EXPORT_SYMBOL(netfs_readpage); +EXPORT_SYMBOL(netfs_read_folio); /* * Prepare a folio for writing without reading first diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 1c29f317d907..4bd5ee709daa 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -274,7 +274,7 @@ struct netfs_cache_ops { struct readahead_control; extern void netfs_readahead(struct readahead_control *); -extern int netfs_readpage(struct file *, struct page *); +int netfs_read_folio(struct file *, struct folio *); extern int netfs_write_begin(struct file *, struct address_space *, loff_t, unsigned int, struct folio **, void **); From patchwork Fri Apr 29 17:25:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832515 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 77404C433F5 for ; Fri, 29 Apr 2022 17:26:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379691AbiD2RaK (ORCPT ); Fri, 29 Apr 2022 13:30:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379553AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6A363A2046 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=vOAIN/In0MbNioyKOxTz1tgWXM+3BlLsUhXZLSiXz50=; b=s1KGUpwl/O/h9nV/Rpd79EL8zA BA++av9j/ddUpHsCWlr5XAdcmYxGGe/8xMYfRXycly4x+/cou77QwhjpX5C5D2WE5TnWdB6IIBuOU W9I7WwIE6GbBQL1rUC8QFxCW3K6hm/mVW4qBOy4iwELOUgKTlir/eNDrm1L2O2h8a5YsfiuAgcC1f Mw96/N+93uONxNrARAMqHlaiQz2LXHokKaUednjNO2C8M4t0xOnyLVSP9DWjjQt5vQ7OGzYspUFSo vr37JLBOJWO3VpV+Yz0HHkauVOyYly4kSGhI5Wfxb2EstGdGwAx0qrLRp3L318EMMMZKHNoAAXtof jp8F9qgQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdaA-He; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 36/69] fs: Convert iomap_readpage to iomap_read_folio Date: Fri, 29 Apr 2022 18:25:23 +0100 Message-Id: <20220429172556.3011843-37-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org A straightforward conversion as iomap_readpage already worked in folios. 
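The per-filesystem wrapper keeps the same shape after the rename, sketched below with a hypothetical myfs and myfs_iomap_ops; erofs, xfs and zonefs follow this pattern in the diff that follows:

static int myfs_read_folio(struct file *file, struct folio *folio)
{
        /* Forward to the iomap library with this filesystem's mapping ops. */
        return iomap_read_folio(folio, &myfs_iomap_ops);
}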
Signed-off-by: Matthew Wilcox (Oracle) --- fs/erofs/data.c | 6 +++--- fs/gfs2/aops.c | 3 ++- fs/iomap/buffered-io.c | 12 +++++------- fs/xfs/xfs_aops.c | 8 ++++---- fs/zonefs/super.c | 6 +++--- include/linux/iomap.h | 2 +- 6 files changed, 18 insertions(+), 19 deletions(-) diff --git a/fs/erofs/data.c b/fs/erofs/data.c index 780db1e5f4b7..2edca5669578 100644 --- a/fs/erofs/data.c +++ b/fs/erofs/data.c @@ -337,9 +337,9 @@ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, * since we dont have write or truncate flows, so no inode * locking needs to be held at the moment. */ -static int erofs_readpage(struct file *file, struct page *page) +static int erofs_read_folio(struct file *file, struct folio *folio) { - return iomap_readpage(page, &erofs_iomap_ops); + return iomap_read_folio(folio, &erofs_iomap_ops); } static void erofs_readahead(struct readahead_control *rac) @@ -394,7 +394,7 @@ static ssize_t erofs_file_read_iter(struct kiocb *iocb, struct iov_iter *to) /* for uncompressed (aligned) files and raw access for other files */ const struct address_space_operations erofs_raw_access_aops = { - .readpage = erofs_readpage, + .read_folio = erofs_read_folio, .readahead = erofs_readahead, .bmap = erofs_bmap, .direct_IO = noop_direct_IO, diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c index 72c9f31ce724..a29eb1e5bfe2 100644 --- a/fs/gfs2/aops.c +++ b/fs/gfs2/aops.c @@ -467,6 +467,7 @@ static int stuffed_readpage(struct gfs2_inode *ip, struct page *page) static int __gfs2_readpage(void *file, struct page *page) { + struct folio *folio = page_folio(page); struct inode *inode = page->mapping->host; struct gfs2_inode *ip = GFS2_I(inode); struct gfs2_sbd *sdp = GFS2_SB(inode); @@ -474,7 +475,7 @@ static int __gfs2_readpage(void *file, struct page *page) if (!gfs2_is_jdata(ip) || (i_blocksize(inode) == PAGE_SIZE && !page_has_buffers(page))) { - error = iomap_readpage(page, &gfs2_iomap_ops); + error = iomap_read_folio(folio, &gfs2_iomap_ops); } else if (gfs2_is_stuffed(ip)) { error = stuffed_readpage(ip, page); unlock_page(page); diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 8ce8720093b9..72f63d719c7c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -320,10 +320,8 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, return pos - orig_pos + plen; } -int -iomap_readpage(struct page *page, const struct iomap_ops *ops) +int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops) { - struct folio *folio = page_folio(page); struct iomap_iter iter = { .inode = folio->mapping->host, .pos = folio_pos(folio), @@ -352,12 +350,12 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops) /* * Just like mpage_readahead and block_read_full_page, we always - * return 0 and just mark the page as PageError on errors. This + * return 0 and just set the folio error flag on errors. This * should be cleaned up throughout the stack eventually. */ return 0; } -EXPORT_SYMBOL_GPL(iomap_readpage); +EXPORT_SYMBOL_GPL(iomap_read_folio); static loff_t iomap_readahead_iter(const struct iomap_iter *iter, struct iomap_readpage_ctx *ctx) @@ -663,10 +661,10 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, /* * The blocks that were entirely written will now be uptodate, so we - * don't have to worry about a readpage reading them and overwriting a + * don't have to worry about a read_folio reading them and overwriting a * partial write. 
However, if we've encountered a short write and only * partially written into a block, it will not be marked uptodate, so a - * readpage might come in and destroy our partial write. + * read_folio might come in and destroy our partial write. * * Do the simplest thing and just treat any short write to a * non-uptodate page as a zero-length write, and force the caller to diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 90b7f4d127de..a9c4bb500d53 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -538,11 +538,11 @@ xfs_vm_bmap( } STATIC int -xfs_vm_readpage( +xfs_vm_read_folio( struct file *unused, - struct page *page) + struct folio *folio) { - return iomap_readpage(page, &xfs_read_iomap_ops); + return iomap_read_folio(folio, &xfs_read_iomap_ops); } STATIC void @@ -564,7 +564,7 @@ xfs_iomap_swapfile_activate( } const struct address_space_operations xfs_address_space_operations = { - .readpage = xfs_vm_readpage, + .read_folio = xfs_vm_read_folio, .readahead = xfs_vm_readahead, .writepages = xfs_vm_writepages, .dirty_folio = filemap_dirty_folio, diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index e20e7c841489..c3a38f711b24 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -124,9 +124,9 @@ static const struct iomap_ops zonefs_iomap_ops = { .iomap_begin = zonefs_iomap_begin, }; -static int zonefs_readpage(struct file *unused, struct page *page) +static int zonefs_read_folio(struct file *unused, struct folio *folio) { - return iomap_readpage(page, &zonefs_iomap_ops); + return iomap_read_folio(folio, &zonefs_iomap_ops); } static void zonefs_readahead(struct readahead_control *rac) @@ -192,7 +192,7 @@ static int zonefs_swap_activate(struct swap_info_struct *sis, } static const struct address_space_operations zonefs_file_aops = { - .readpage = zonefs_readpage, + .read_folio = zonefs_read_folio, .readahead = zonefs_readahead, .writepage = zonefs_writepage, .writepages = zonefs_writepages, diff --git a/include/linux/iomap.h b/include/linux/iomap.h index b76f0dd149fb..5b2aa45ddda3 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -225,7 +225,7 @@ static inline const struct iomap *iomap_iter_srcmap(const struct iomap_iter *i) ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from, const struct iomap_ops *ops); -int iomap_readpage(struct page *page, const struct iomap_ops *ops); +int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops); void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops); bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count); int iomap_releasepage(struct page *page, gfp_t gfp_mask); From patchwork Fri Apr 29 17:25:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832525 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 879FEC433FE for ; Fri, 29 Apr 2022 17:27:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379657AbiD2RaY (ORCPT ); Fri, 29 Apr 2022 13:30:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345884AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org 
[IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0E31A2061 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=8k/MTnVfPcV8IkbozblERVgKuYJ+p1U9hCwSncciu/c=; b=NH6K0IRazCqwWXH0HiD32WFSUm 1AjOT8mXdkV3BIxweXLa+yH7l47n8g+arpH7IKKY7PlEtyd5jbqZ0i7qQwX/OZQ2WFmIGNUVYaHXc ifYLxfsAyVjXykuzUVkuY3TPiOHWrw9bT7Q4o4eEL/j46Ha+8MDBlFM2O1EHF5PCH5vBtPn+tU+Sd zvaN8EEXDAvf5FtWkT3LlPpR7bmGyIVCi64VRd50I7NTV1d4oncHWhUOZ15i6+olBtS9bp13pKSVC OfXb9WB6qcieqx7dg2HWXrdXLUOGlzpqAzhsv9yTYT934h2lKGVDb+IwB9HE+CwpUgB9ce2sYrJJo ppOXJksg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdaH-M7; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 37/69] fs: Convert block_read_full_page() to block_read_full_folio() Date: Fri, 29 Apr 2022 18:25:24 +0100 Message-Id: <20220429172556.3011843-38-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This function is NOT converted to handle large folios, so include an assert that the filesystem isn't passing one in. Otherwise, use the folio functions instead of the page functions, where they exist. Convert all filesystems which use block_read_full_page(). Signed-off-by: Matthew Wilcox (Oracle) --- block/fops.c | 6 ++--- fs/adfs/inode.c | 6 ++--- fs/affs/file.c | 6 ++--- fs/befs/linuxvfs.c | 10 +++---- fs/bfs/file.c | 6 ++--- fs/buffer.c | 53 ++++++++++++++++++++----------------- fs/efs/inode.c | 8 +++--- fs/ext4/readpage.c | 4 +-- fs/freevxfs/vxfs_subr.c | 17 ++++++------ fs/hfs/inode.c | 8 +++--- fs/hfsplus/inode.c | 8 +++--- fs/iomap/buffered-io.c | 2 +- fs/minix/inode.c | 6 ++--- fs/mpage.c | 10 +++---- fs/ntfs/compress.c | 4 +-- fs/ocfs2/aops.c | 6 ++--- fs/ocfs2/refcounttree.c | 6 +++-- fs/omfs/file.c | 6 ++--- fs/qnx4/inode.c | 7 ++--- fs/reiserfs/file.c | 2 +- fs/reiserfs/inode.c | 12 ++++----- fs/sysv/itree.c | 6 ++--- fs/ufs/inode.c | 8 +++--- include/linux/buffer_head.h | 2 +- 24 files changed, 108 insertions(+), 101 deletions(-) diff --git a/block/fops.c b/block/fops.c index 712affe56e29..06feb41d798b 100644 --- a/block/fops.c +++ b/block/fops.c @@ -387,9 +387,9 @@ static int blkdev_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, blkdev_get_block, wbc); } -static int blkdev_readpage(struct file * file, struct page * page) +static int blkdev_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, blkdev_get_block); + return block_read_full_folio(folio, blkdev_get_block); } static void blkdev_readahead(struct readahead_control *rac) @@ -425,7 +425,7 @@ static int blkdev_writepages(struct address_space *mapping, const struct address_space_operations def_blk_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = blkdev_readpage, + .read_folio = blkdev_read_folio, .readahead = blkdev_readahead, .writepage = blkdev_writepage, .write_begin = blkdev_write_begin, diff --git a/fs/adfs/inode.c b/fs/adfs/inode.c index 
f7959b1a2d52..ee22278b0cfc 100644 --- a/fs/adfs/inode.c +++ b/fs/adfs/inode.c @@ -38,9 +38,9 @@ static int adfs_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, adfs_get_block, wbc); } -static int adfs_readpage(struct file *file, struct page *page) +static int adfs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, adfs_get_block); + return block_read_full_folio(folio, adfs_get_block); } static void adfs_write_failed(struct address_space *mapping, loff_t to) @@ -75,7 +75,7 @@ static sector_t _adfs_bmap(struct address_space *mapping, sector_t block) static const struct address_space_operations adfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = adfs_readpage, + .read_folio = adfs_read_folio, .writepage = adfs_writepage, .write_begin = adfs_write_begin, .write_end = generic_write_end, diff --git a/fs/affs/file.c b/fs/affs/file.c index b952f65c3f06..5da562cc7fb7 100644 --- a/fs/affs/file.c +++ b/fs/affs/file.c @@ -375,9 +375,9 @@ static int affs_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, affs_get_block, wbc); } -static int affs_readpage(struct file *file, struct page *page) +static int affs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, affs_get_block); + return block_read_full_folio(folio, affs_get_block); } static void affs_write_failed(struct address_space *mapping, loff_t to) @@ -455,7 +455,7 @@ static sector_t _affs_bmap(struct address_space *mapping, sector_t block) const struct address_space_operations affs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = affs_readpage, + .read_folio = affs_read_folio, .writepage = affs_writepage, .write_begin = affs_write_begin, .write_end = affs_write_end, diff --git a/fs/befs/linuxvfs.c b/fs/befs/linuxvfs.c index b4b3567ac655..25350dd22cda 100644 --- a/fs/befs/linuxvfs.c +++ b/fs/befs/linuxvfs.c @@ -40,7 +40,7 @@ MODULE_LICENSE("GPL"); static int befs_readdir(struct file *, struct dir_context *); static int befs_get_block(struct inode *, sector_t, struct buffer_head *, int); -static int befs_readpage(struct file *file, struct page *page); +static int befs_read_folio(struct file *file, struct folio *folio); static sector_t befs_bmap(struct address_space *mapping, sector_t block); static struct dentry *befs_lookup(struct inode *, struct dentry *, unsigned int); @@ -87,7 +87,7 @@ static const struct inode_operations befs_dir_inode_operations = { }; static const struct address_space_operations befs_aops = { - .readpage = befs_readpage, + .read_folio = befs_read_folio, .bmap = befs_bmap, }; @@ -102,16 +102,16 @@ static const struct export_operations befs_export_operations = { }; /* - * Called by generic_file_read() to read a page of data + * Called by generic_file_read() to read a folio of data * * In turn, simply calls a generic block read function and * passes it the address of befs_get_block, for mapping file * positions to disk blocks. 
*/ static int -befs_readpage(struct file *file, struct page *page) +befs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, befs_get_block); + return block_read_full_folio(folio, befs_get_block); } static sector_t diff --git a/fs/bfs/file.c b/fs/bfs/file.c index dc97c9b8f23b..57ae5ee6deec 100644 --- a/fs/bfs/file.c +++ b/fs/bfs/file.c @@ -155,9 +155,9 @@ static int bfs_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, bfs_get_block, wbc); } -static int bfs_readpage(struct file *file, struct page *page) +static int bfs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, bfs_get_block); + return block_read_full_folio(folio, bfs_get_block); } static void bfs_write_failed(struct address_space *mapping, loff_t to) @@ -189,7 +189,7 @@ static sector_t bfs_bmap(struct address_space *mapping, sector_t block) const struct address_space_operations bfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = bfs_readpage, + .read_folio = bfs_read_folio, .writepage = bfs_writepage, .write_begin = bfs_write_begin, .write_end = generic_write_end, diff --git a/fs/buffer.c b/fs/buffer.c index 5826ef29fe70..786ef5b98c80 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -314,7 +314,7 @@ static void decrypt_bh(struct work_struct *work) } /* - * I/O completion handler for block_read_full_page() - pages + * I/O completion handler for block_read_full_folio() - pages * which come unlocked at the end of I/O. */ static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate) @@ -1060,8 +1060,8 @@ __getblk_slow(struct block_device *bdev, sector_t block, * Also. When blockdev buffers are explicitly read with bread(), they * individually become uptodate. But their backing page remains not * uptodate - even if all of its buffers are uptodate. A subsequent - * block_read_full_page() against that page will discover all the uptodate - * buffers, will set the page uptodate and will perform no I/O. + * block_read_full_folio() against that folio will discover all the uptodate + * buffers, will set the folio uptodate and will perform no I/O. */ /** @@ -2088,7 +2088,7 @@ static int __block_commit_write(struct inode *inode, struct page *page, /* * If this is a partial write which happened to make all buffers - * uptodate then we can optimize away a bogus readpage() for + * uptodate then we can optimize away a bogus read_folio() for * the next read(). Here we 'discover' whether the page went * uptodate as a result of this (potentially partial) write. */ @@ -2137,12 +2137,12 @@ int block_write_end(struct file *file, struct address_space *mapping, if (unlikely(copied < len)) { /* - * The buffers that were written will now be uptodate, so we - * don't have to worry about a readpage reading them and - * overwriting a partial write. However if we have encountered - * a short write and only partially written into a buffer, it - * will not be marked uptodate, so a readpage might come in and - * destroy our partial write. + * The buffers that were written will now be uptodate, so + * we don't have to worry about a read_folio reading them + * and overwriting a partial write. However if we have + * encountered a short write and only partially written + * into a buffer, it will not be marked uptodate, so a + * read_folio might come in and destroy our partial write. 
* * Do the simplest thing, and just treat any short write to a * non uptodate page as a zero-length write, and force the @@ -2245,26 +2245,28 @@ bool block_is_partially_uptodate(struct folio *folio, size_t from, size_t count) EXPORT_SYMBOL(block_is_partially_uptodate); /* - * Generic "read page" function for block devices that have the normal + * Generic "read_folio" function for block devices that have the normal * get_block functionality. This is most of the block device filesystems. - * Reads the page asynchronously --- the unlock_buffer() and + * Reads the folio asynchronously --- the unlock_buffer() and * set/clear_buffer_uptodate() functions propagate buffer state into the - * page struct once IO has completed. + * folio once IO has completed. */ -int block_read_full_page(struct page *page, get_block_t *get_block) +int block_read_full_folio(struct folio *folio, get_block_t *get_block) { - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; sector_t iblock, lblock; struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE]; unsigned int blocksize, bbits; int nr, i; int fully_mapped = 1; - head = create_page_buffers(page, inode, 0); + VM_BUG_ON_FOLIO(folio_test_large(folio), folio); + + head = create_page_buffers(&folio->page, inode, 0); blocksize = head->b_size; bbits = block_size_bits(blocksize); - iblock = (sector_t)page->index << (PAGE_SHIFT - bbits); + iblock = (sector_t)folio->index << (PAGE_SHIFT - bbits); lblock = (i_size_read(inode)+blocksize-1) >> bbits; bh = head; nr = 0; @@ -2282,10 +2284,11 @@ int block_read_full_page(struct page *page, get_block_t *get_block) WARN_ON(bh->b_size != blocksize); err = get_block(inode, iblock, bh, 0); if (err) - SetPageError(page); + folio_set_error(folio); } if (!buffer_mapped(bh)) { - zero_user(page, i * blocksize, blocksize); + folio_zero_range(folio, i * blocksize, + blocksize); if (!err) set_buffer_uptodate(bh); continue; @@ -2301,16 +2304,16 @@ int block_read_full_page(struct page *page, get_block_t *get_block) } while (i++, iblock++, (bh = bh->b_this_page) != head); if (fully_mapped) - SetPageMappedToDisk(page); + folio_set_mappedtodisk(folio); if (!nr) { /* - * All buffers are uptodate - we can set the page uptodate + * All buffers are uptodate - we can set the folio uptodate * as well. But not if get_block() returned an error. */ - if (!PageError(page)) - SetPageUptodate(page); - unlock_page(page); + if (!folio_test_error(folio)) + folio_mark_uptodate(folio); + folio_unlock(folio); return 0; } @@ -2335,7 +2338,7 @@ int block_read_full_page(struct page *page, get_block_t *get_block) } return 0; } -EXPORT_SYMBOL(block_read_full_page); +EXPORT_SYMBOL(block_read_full_folio); /* utility function for filesystems that need to do work on expanding * truncates. 
Uses filesystem pagecache writes to allow the filesystem to diff --git a/fs/efs/inode.c b/fs/efs/inode.c index 89e73a6f0d36..3ba94bb005a6 100644 --- a/fs/efs/inode.c +++ b/fs/efs/inode.c @@ -14,16 +14,18 @@ #include "efs.h" #include -static int efs_readpage(struct file *file, struct page *page) +static int efs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page,efs_get_block); + return block_read_full_folio(folio, efs_get_block); } + static sector_t _efs_bmap(struct address_space *mapping, sector_t block) { return generic_block_bmap(mapping,block,efs_get_block); } + static const struct address_space_operations efs_aops = { - .readpage = efs_readpage, + .read_folio = efs_read_folio, .bmap = _efs_bmap }; diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c index af491e170c4a..e02a5f14e021 100644 --- a/fs/ext4/readpage.c +++ b/fs/ext4/readpage.c @@ -163,7 +163,7 @@ static bool bio_post_read_required(struct bio *bio) * * The mpage code never puts partial pages into a BIO (except for end-of-file). * If a page does not map to a contiguous run of blocks then it simply falls - * back to block_read_full_page(). + * back to block_read_full_folio(). * * Why is this? If a page's completion depends on a number of different BIOs * which can complete in any order (or at the same time) then determining the @@ -394,7 +394,7 @@ int ext4_mpage_readpages(struct inode *inode, bio = NULL; } if (!PageUptodate(page)) - block_read_full_page(page, ext4_get_block); + block_read_full_folio(page_folio(page), ext4_get_block); else unlock_page(page); next_page: diff --git a/fs/freevxfs/vxfs_subr.c b/fs/freevxfs/vxfs_subr.c index e806694d4145..6143ebab940d 100644 --- a/fs/freevxfs/vxfs_subr.c +++ b/fs/freevxfs/vxfs_subr.c @@ -38,11 +38,11 @@ #include "vxfs_extern.h" -static int vxfs_readpage(struct file *, struct page *); +static int vxfs_read_folio(struct file *, struct folio *); static sector_t vxfs_bmap(struct address_space *, sector_t); const struct address_space_operations vxfs_aops = { - .readpage = vxfs_readpage, + .read_folio = vxfs_read_folio, .bmap = vxfs_bmap, }; @@ -141,24 +141,23 @@ vxfs_getblk(struct inode *ip, sector_t iblock, } /** - * vxfs_readpage - read one page synchronously into the pagecache + * vxfs_read_folio - read one page synchronously into the pagecache * @file: file context (unused) - * @page: page frame to fill in. + * @folio: folio to fill in. * * Description: - * The vxfs_readpage routine reads @page synchronously into the + * The vxfs_read_folio routine reads @folio synchronously into the * pagecache. * * Returns: * Zero on success, else a negative error code. * * Locking status: - * @page is locked and will be unlocked. + * @folio is locked and will be unlocked. 
*/ -static int -vxfs_readpage(struct file *file, struct page *page) +static int vxfs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, vxfs_getblk); + return block_read_full_folio(folio, vxfs_getblk); } /** diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c index 9a26b9510da0..ba3ff9cd7cfc 100644 --- a/fs/hfs/inode.c +++ b/fs/hfs/inode.c @@ -34,9 +34,9 @@ static int hfs_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, hfs_get_block, wbc); } -static int hfs_readpage(struct file *file, struct page *page) +static int hfs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, hfs_get_block); + return block_read_full_folio(folio, hfs_get_block); } static void hfs_write_failed(struct address_space *mapping, loff_t to) @@ -160,7 +160,7 @@ static int hfs_writepages(struct address_space *mapping, const struct address_space_operations hfs_btree_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = hfs_readpage, + .read_folio = hfs_read_folio, .writepage = hfs_writepage, .write_begin = hfs_write_begin, .write_end = generic_write_end, @@ -171,7 +171,7 @@ const struct address_space_operations hfs_btree_aops = { const struct address_space_operations hfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = hfs_readpage, + .read_folio = hfs_read_folio, .writepage = hfs_writepage, .write_begin = hfs_write_begin, .write_end = generic_write_end, diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c index 905ae3660315..982b34eefec7 100644 --- a/fs/hfsplus/inode.c +++ b/fs/hfsplus/inode.c @@ -23,9 +23,9 @@ #include "hfsplus_raw.h" #include "xattr.h" -static int hfsplus_readpage(struct file *file, struct page *page) +static int hfsplus_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, hfsplus_get_block); + return block_read_full_folio(folio, hfsplus_get_block); } static int hfsplus_writepage(struct page *page, struct writeback_control *wbc) @@ -157,7 +157,7 @@ static int hfsplus_writepages(struct address_space *mapping, const struct address_space_operations hfsplus_btree_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = hfsplus_readpage, + .read_folio = hfsplus_read_folio, .writepage = hfsplus_writepage, .write_begin = hfsplus_write_begin, .write_end = generic_write_end, @@ -168,7 +168,7 @@ const struct address_space_operations hfsplus_btree_aops = { const struct address_space_operations hfsplus_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = hfsplus_readpage, + .read_folio = hfsplus_read_folio, .writepage = hfsplus_writepage, .write_begin = hfsplus_write_begin, .write_end = generic_write_end, diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 72f63d719c7c..75eb0c27a0e8 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -349,7 +349,7 @@ int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops) } /* - * Just like mpage_readahead and block_read_full_page, we always + * Just like mpage_readahead and block_read_full_folio, we always * return 0 and just set the folio error flag on errors. This * should be cleaned up throughout the stack eventually. 
*/ diff --git a/fs/minix/inode.c b/fs/minix/inode.c index 3add78bccedc..da8bdd1712a7 100644 --- a/fs/minix/inode.c +++ b/fs/minix/inode.c @@ -402,9 +402,9 @@ static int minix_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, minix_get_block, wbc); } -static int minix_readpage(struct file *file, struct page *page) +static int minix_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page,minix_get_block); + return block_read_full_folio(folio, minix_get_block); } int minix_prepare_chunk(struct page *page, loff_t pos, unsigned len) @@ -443,7 +443,7 @@ static sector_t minix_bmap(struct address_space *mapping, sector_t block) static const struct address_space_operations minix_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = minix_readpage, + .read_folio = minix_read_folio, .writepage = minix_writepage, .write_begin = minix_write_begin, .write_end = generic_write_end, diff --git a/fs/mpage.c b/fs/mpage.c index 1fe56f8c495f..a04439b84ae2 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -36,7 +36,7 @@ * * The mpage code never puts partial pages into a BIO (except for end-of-file). * If a page does not map to a contiguous run of blocks then it simply falls - * back to block_read_full_page(). + * back to block_read_full_folio(). * * Why is this? If a page's completion depends on a number of different BIOs * which can complete in any order (or at the same time) then determining the @@ -68,7 +68,7 @@ static struct bio *mpage_bio_submit(struct bio *bio) /* * support function for mpage_readahead. The fs supplied get_block might * return an up to date buffer. This is used to map that buffer into - * the page, which allows readpage to avoid triggering a duplicate call + * the page, which allows read_folio to avoid triggering a duplicate call * to get_block. * * The idea is to avoid adding buffers to pages that don't already have @@ -296,7 +296,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) if (args->bio) args->bio = mpage_bio_submit(args->bio); if (!PageUptodate(page)) - block_read_full_page(page, args->get_block); + block_read_full_folio(page_folio(page), args->get_block); else unlock_page(page); goto out; @@ -425,7 +425,7 @@ static void clean_buffers(struct page *page, unsigned first_unmapped) /* * we cannot drop the bh if the page is not uptodate or a concurrent - * readpage would fail to serialize with the bh and it would read from + * read_folio would fail to serialize with the bh and it would read from * disk before we reach the platter. */ if (buffer_heads_over_limit && PageUptodate(page)) @@ -510,7 +510,7 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc, /* * Page has buffers, but they are all unmapped. The page was * created by pagein or read over a hole which was handled by - * block_read_full_page(). If this address_space is also + * block_read_full_folio(). If this address_space is also * using mpage_readahead then this can rarely happen. */ goto confused; diff --git a/fs/ntfs/compress.c b/fs/ntfs/compress.c index d2f9d6a0ee32..a60f543e7557 100644 --- a/fs/ntfs/compress.c +++ b/fs/ntfs/compress.c @@ -780,12 +780,12 @@ int ntfs_read_compressed_block(struct page *page) /* Uncompressed cb, copy it to the destination pages. 
*/ /* * TODO: As a big optimization, we could detect this case - * before we read all the pages and use block_read_full_page() + * before we read all the pages and use block_read_full_folio() * on all full pages instead (we still have to treat partial * pages especially but at least we are getting rid of the * synchronous io for the majority of pages. * Or if we choose not to do the read-ahead/-behind stuff, we - * could just return block_read_full_page(pages[xpage]) as long + * could just return block_read_full_folio(pages[xpage]) as long * as PAGE_SIZE <= cb_size. */ if (cb_max_ofs) diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index 7cffe9dcad17..7bf4b6fd93bf 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -309,7 +309,7 @@ static int ocfs2_readpage(struct file *file, struct page *page) /* * i_size might have just been updated as we grabed the meta lock. We * might now be discovering a truncate that hit on another node. - * block_read_full_page->get_block freaks out if it is asked to read + * block_read_full_folio->get_block freaks out if it is asked to read * beyond the end of a file, so we check here. Callers * (generic_file_read, vm_ops->fault) are clever enough to check i_size * and notice that the page they just read isn't needed. @@ -326,7 +326,7 @@ static int ocfs2_readpage(struct file *file, struct page *page) if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) ret = ocfs2_readpage_inline(inode, page); else - ret = block_read_full_page(page, ocfs2_get_block); + ret = block_read_full_folio(page_folio(page), ocfs2_get_block); unlock = 0; out_alloc: @@ -1897,7 +1897,7 @@ static int ocfs2_write_begin(struct file *file, struct address_space *mapping, /* * Take alloc sem here to prevent concurrent lookups. That way * the mapping, zeroing and tree manipulation within - * ocfs2_write() will be safe against ->readpage(). This + * ocfs2_write() will be safe against ->read_folio(). This * should also serve to lock out allocation from a shared * writeable region. 
*/ diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c index 7f6355cbb587..e04358a46b68 100644 --- a/fs/ocfs2/refcounttree.c +++ b/fs/ocfs2/refcounttree.c @@ -2961,12 +2961,14 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle, } if (!PageUptodate(page)) { - ret = block_read_full_page(page, ocfs2_get_block); + struct folio *folio = page_folio(page); + + ret = block_read_full_folio(folio, ocfs2_get_block); if (ret) { mlog_errno(ret); goto unlock; } - lock_page(page); + folio_lock(folio); } if (page_has_buffers(page)) { diff --git a/fs/omfs/file.c b/fs/omfs/file.c index 980b0a72c172..fa7fe2393ff6 100644 --- a/fs/omfs/file.c +++ b/fs/omfs/file.c @@ -284,9 +284,9 @@ static int omfs_get_block(struct inode *inode, sector_t block, return ret; } -static int omfs_readpage(struct file *file, struct page *page) +static int omfs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page, omfs_get_block); + return block_read_full_folio(folio, omfs_get_block); } static void omfs_readahead(struct readahead_control *rac) @@ -373,7 +373,7 @@ const struct inode_operations omfs_file_inops = { const struct address_space_operations omfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = omfs_readpage, + .read_folio = omfs_read_folio, .readahead = omfs_readahead, .writepage = omfs_writepage, .writepages = omfs_writepages, diff --git a/fs/qnx4/inode.c b/fs/qnx4/inode.c index a635bb6615e9..391ea402920d 100644 --- a/fs/qnx4/inode.c +++ b/fs/qnx4/inode.c @@ -245,17 +245,18 @@ static void qnx4_kill_sb(struct super_block *sb) } } -static int qnx4_readpage(struct file *file, struct page *page) +static int qnx4_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page,qnx4_get_block); + return block_read_full_folio(folio, qnx4_get_block); } static sector_t qnx4_bmap(struct address_space *mapping, sector_t block) { return generic_block_bmap(mapping,block,qnx4_get_block); } + static const struct address_space_operations qnx4_aops = { - .readpage = qnx4_readpage, + .read_folio = qnx4_read_folio, .bmap = qnx4_bmap }; diff --git a/fs/reiserfs/file.c b/fs/reiserfs/file.c index 203a47232707..6e228bfbe7ef 100644 --- a/fs/reiserfs/file.c +++ b/fs/reiserfs/file.c @@ -227,7 +227,7 @@ int reiserfs_commit_page(struct inode *inode, struct page *page, } /* * If this is a partial write which happened to make all buffers - * uptodate then we can optimize away a bogus readpage() for + * uptodate then we can optimize away a bogus read_folio() for * the next read(). Here we 'discover' whether the page went * uptodate as a result of this (potentially partial) write. */ diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index 46ba4892030a..33a9555f77b9 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -167,10 +167,10 @@ inline void make_le_item_head(struct item_head *ih, const struct cpu_key *key, * cutting the code is fine, since it really isn't in use yet and is easy * to add back in. But, Vladimir has a really good idea here. Think * about what happens for reading a file. For each page, - * The VFS layer calls reiserfs_readpage, who searches the tree to find + * The VFS layer calls reiserfs_read_folio, who searches the tree to find * an indirect item. This indirect item has X number of pointers, where * X is a big number if we've done the block allocation right. 
But, - * we only use one or two of these pointers during each call to readpage, + * we only use one or two of these pointers during each call to read_folio, * needlessly researching again later on. * * The size of the cache could be dynamic based on the size of the file. @@ -966,7 +966,7 @@ int reiserfs_get_block(struct inode *inode, sector_t block, * it is important the set_buffer_uptodate is done * after the direct2indirect. The buffer might * contain valid data newer than the data on disk - * (read by readpage, changed, and then sent here by + * (read by read_folio, changed, and then sent here by * writepage). direct2indirect needs to know if unbh * was already up to date, so it can decide if the * data in unbh needs to be replaced with data from @@ -2733,9 +2733,9 @@ static int reiserfs_write_full_page(struct page *page, goto done; } -static int reiserfs_readpage(struct file *f, struct page *page) +static int reiserfs_read_folio(struct file *f, struct folio *folio) { - return block_read_full_page(page, reiserfs_get_block); + return block_read_full_folio(folio, reiserfs_get_block); } static int reiserfs_writepage(struct page *page, struct writeback_control *wbc) @@ -3421,7 +3421,7 @@ int reiserfs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry, const struct address_space_operations reiserfs_address_space_operations = { .writepage = reiserfs_writepage, - .readpage = reiserfs_readpage, + .read_folio = reiserfs_read_folio, .readahead = reiserfs_readahead, .releasepage = reiserfs_releasepage, .invalidate_folio = reiserfs_invalidate_folio, diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c index 96ad24fe0ffb..d4ec9bb97de9 100644 --- a/fs/sysv/itree.c +++ b/fs/sysv/itree.c @@ -456,9 +456,9 @@ static int sysv_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page,get_block,wbc); } -static int sysv_readpage(struct file *file, struct page *page) +static int sysv_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page,get_block); + return block_read_full_folio(folio, get_block); } int sysv_prepare_chunk(struct page *page, loff_t pos, unsigned len) @@ -497,7 +497,7 @@ static sector_t sysv_bmap(struct address_space *mapping, sector_t block) const struct address_space_operations sysv_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = sysv_readpage, + .read_folio = sysv_read_folio, .writepage = sysv_writepage, .write_begin = sysv_write_begin, .write_end = generic_write_end, diff --git a/fs/ufs/inode.c b/fs/ufs/inode.c index 6c973b71cab2..a873de7dec1c 100644 --- a/fs/ufs/inode.c +++ b/fs/ufs/inode.c @@ -390,7 +390,7 @@ ufs_inode_getblock(struct inode *inode, u64 ind_block, /** * ufs_getfrag_block() - `get_block_t' function, interface between UFS and - * readpage, writepage and so on + * read_folio, writepage and so on */ static int ufs_getfrag_block(struct inode *inode, sector_t fragment, struct buffer_head *bh_result, int create) @@ -472,9 +472,9 @@ static int ufs_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page,ufs_getfrag_block,wbc); } -static int ufs_readpage(struct file *file, struct page *page) +static int ufs_read_folio(struct file *file, struct folio *folio) { - return block_read_full_page(page,ufs_getfrag_block); + return block_read_full_folio(folio, ufs_getfrag_block); } int ufs_prepare_chunk(struct page *page, loff_t pos, unsigned len) @@ -527,7 +527,7 @@ static sector_t ufs_bmap(struct address_space *mapping, 
sector_t block) const struct address_space_operations ufs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = ufs_readpage, + .read_folio = ufs_read_folio, .writepage = ufs_writepage, .write_begin = ufs_write_begin, .write_end = ufs_write_end, diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index 805c4e12700a..31d82fd9abe8 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -223,7 +223,7 @@ int block_write_full_page(struct page *page, get_block_t *get_block, int __block_write_full_page(struct inode *inode, struct page *page, get_block_t *get_block, struct writeback_control *wbc, bh_end_io_t *handler); -int block_read_full_page(struct page*, get_block_t*); +int block_read_full_folio(struct folio *, get_block_t *); bool block_is_partially_uptodate(struct folio *, size_t from, size_t count); int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len, struct page **pagep, get_block_t *get_block); From patchwork Fri Apr 29 17:25:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832519 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BCE4C4332F for ; Fri, 29 Apr 2022 17:26:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379548AbiD2RaP (ORCPT ); Fri, 29 Apr 2022 13:30:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43282 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379570AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0EF0A2067 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=CQhlAxAHQzk+AEjI6LKArIMIixM35AqfoLm5FxZqqHk=; b=JeHab9DKa1eBDLQLfygP4EG0Op hMYpe/7DrW7aq0V5gxpjFVtu5+Jr3GjSMWGAgD4XwDYS1xPZDTeGpVb2XJ31Ymp6WlKLaXXg3S/PQ EA34nyNhqnPpnyb/fQDy1BIUD6bTFdZdruON+zRc+vhdpluoBp+AP5QJU0hiotKCH1LMkeYILWY16 jr4qGiPBMgXslphqN2DlwSZTEEDwQ58CmUjF66o6qHe8pVADx+Ki31dCY+TVIESTgk/4vprjk+kbp B/yMxmdgTrHlL0MSrk6FxdyRE+t7wRq1iCHLk3yTRF4mbMDigbBXw59nM8EPzY8JZAjn4LQeUsf0T rlO9C3EQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdaM-Q8; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 38/69] fs: Convert mpage_readpage to mpage_read_folio Date: Fri, 29 Apr 2022 18:25:25 +0100 Message-Id: <20220429172556.3011843-39-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org mpage_readpage still works in terms of pages, and has not been audited for correctness with large folios, so include an assertion that the filesystem is not passing it large folios. 
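For reference, the assertion in question is the standard folio debug check; a sketch of the pattern, with the unchanged single-page read logic elided:

	int mpage_read_folio(struct folio *folio, get_block_t get_block)
	{
		/* mpage path not yet audited for multi-page folios */
		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);

		/* ... existing single-page mpage read logic ... */
		return 0;
	}
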
Convert all the filesystems to call mpage_read_folio() instead of mpage_readpage(). Signed-off-by: Matthew Wilcox (Oracle) --- fs/exfat/inode.c | 6 +++--- fs/ext2/inode.c | 8 ++++---- fs/fat/inode.c | 6 +++--- fs/gfs2/aops.c | 15 +++++++-------- fs/hpfs/file.c | 6 +++--- fs/iomap/buffered-io.c | 2 +- fs/isofs/inode.c | 6 +++--- fs/jfs/inode.c | 6 +++--- fs/mpage.c | 8 +++++--- fs/nilfs2/inode.c | 10 +++++----- fs/ntfs3/inode.c | 9 +++++---- fs/qnx6/inode.c | 6 +++--- fs/udf/inode.c | 6 +++--- include/linux/mpage.h | 2 +- 14 files changed, 49 insertions(+), 47 deletions(-) diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c index b9f63113db2d..0133d385d8e8 100644 --- a/fs/exfat/inode.c +++ b/fs/exfat/inode.c @@ -357,9 +357,9 @@ static int exfat_get_block(struct inode *inode, sector_t iblock, return err; } -static int exfat_readpage(struct file *file, struct page *page) +static int exfat_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, exfat_get_block); + return mpage_read_folio(folio, exfat_get_block); } static void exfat_readahead(struct readahead_control *rac) @@ -492,7 +492,7 @@ int exfat_block_truncate_page(struct inode *inode, loff_t from) static const struct address_space_operations exfat_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = exfat_readpage, + .read_folio = exfat_read_folio, .readahead = exfat_readahead, .writepage = exfat_writepage, .writepages = exfat_writepages, diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c index d8ca8050945a..9e1ecd89f47f 100644 --- a/fs/ext2/inode.c +++ b/fs/ext2/inode.c @@ -875,9 +875,9 @@ static int ext2_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, ext2_get_block, wbc); } -static int ext2_readpage(struct file *file, struct page *page) +static int ext2_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, ext2_get_block); + return mpage_read_folio(folio, ext2_get_block); } static void ext2_readahead(struct readahead_control *rac) @@ -966,7 +966,7 @@ ext2_dax_writepages(struct address_space *mapping, struct writeback_control *wbc const struct address_space_operations ext2_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = ext2_readpage, + .read_folio = ext2_read_folio, .readahead = ext2_readahead, .writepage = ext2_writepage, .write_begin = ext2_write_begin, @@ -982,7 +982,7 @@ const struct address_space_operations ext2_aops = { const struct address_space_operations ext2_nobh_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = ext2_readpage, + .read_folio = ext2_read_folio, .readahead = ext2_readahead, .writepage = ext2_nobh_writepage, .write_begin = ext2_nobh_write_begin, diff --git a/fs/fat/inode.c b/fs/fat/inode.c index 1f15b0fd1bb0..8a81017f8d60 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -205,9 +205,9 @@ static int fat_writepages(struct address_space *mapping, return mpage_writepages(mapping, wbc, fat_get_block); } -static int fat_readpage(struct file *file, struct page *page) +static int fat_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, fat_get_block); + return mpage_read_folio(folio, fat_get_block); } static void fat_readahead(struct readahead_control *rac) @@ -344,7 +344,7 @@ int fat_block_truncate_page(struct inode *inode, loff_t from) static const struct address_space_operations fat_aops = { .dirty_folio = block_dirty_folio, 
.invalidate_folio = block_invalidate_folio, - .readpage = fat_readpage, + .read_folio = fat_read_folio, .readahead = fat_readahead, .writepage = fat_writepage, .writepages = fat_writepages, diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c index a29eb1e5bfe2..340bf5d0e835 100644 --- a/fs/gfs2/aops.c +++ b/fs/gfs2/aops.c @@ -480,7 +480,7 @@ static int __gfs2_readpage(void *file, struct page *page) error = stuffed_readpage(ip, page); unlock_page(page); } else { - error = mpage_readpage(page, gfs2_block_map); + error = mpage_read_folio(folio, gfs2_block_map); } if (unlikely(gfs2_withdrawn(sdp))) @@ -490,14 +490,13 @@ static int __gfs2_readpage(void *file, struct page *page) } /** - * gfs2_readpage - read a page of a file + * gfs2_read_folio - read a folio from a file * @file: The file to read - * @page: The page of the file + * @folio: The folio in the file */ - -static int gfs2_readpage(struct file *file, struct page *page) +static int gfs2_read_folio(struct file *file, struct folio *folio) { - return __gfs2_readpage(file, page); + return __gfs2_readpage(file, &folio->page); } /** @@ -773,7 +772,7 @@ int gfs2_releasepage(struct page *page, gfp_t gfp_mask) static const struct address_space_operations gfs2_aops = { .writepage = gfs2_writepage, .writepages = gfs2_writepages, - .readpage = gfs2_readpage, + .read_folio = gfs2_read_folio, .readahead = gfs2_readahead, .dirty_folio = filemap_dirty_folio, .releasepage = iomap_releasepage, @@ -788,7 +787,7 @@ static const struct address_space_operations gfs2_aops = { static const struct address_space_operations gfs2_jdata_aops = { .writepage = gfs2_jdata_writepage, .writepages = gfs2_jdata_writepages, - .readpage = gfs2_readpage, + .read_folio = gfs2_read_folio, .readahead = gfs2_readahead, .dirty_folio = jdata_dirty_folio, .bmap = gfs2_bmap, diff --git a/fs/hpfs/file.c b/fs/hpfs/file.c index 8b590b3826c3..f7547a62c81f 100644 --- a/fs/hpfs/file.c +++ b/fs/hpfs/file.c @@ -158,9 +158,9 @@ static const struct iomap_ops hpfs_iomap_ops = { .iomap_begin = hpfs_iomap_begin, }; -static int hpfs_readpage(struct file *file, struct page *page) +static int hpfs_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, hpfs_get_block); + return mpage_read_folio(folio, hpfs_get_block); } static int hpfs_writepage(struct page *page, struct writeback_control *wbc) @@ -247,7 +247,7 @@ static int hpfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, const struct address_space_operations hpfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = hpfs_readpage, + .read_folio = hpfs_read_folio, .writepage = hpfs_writepage, .readahead = hpfs_readahead, .writepages = hpfs_writepages, diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 75eb0c27a0e8..2de087ac87b6 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -297,7 +297,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, /* * If the bio_alloc fails, try it again for a single page to * avoid having to deal with partial page reads. This emulates - * what do_mpage_readpage does. + * what do_mpage_read_folio does. 
*/ if (!ctx->bio) { ctx->bio = bio_alloc(iomap->bdev, 1, REQ_OP_READ, diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c index d7491692aea3..88bf20303466 100644 --- a/fs/isofs/inode.c +++ b/fs/isofs/inode.c @@ -1174,9 +1174,9 @@ struct buffer_head *isofs_bread(struct inode *inode, sector_t block) return sb_bread(inode->i_sb, blknr); } -static int isofs_readpage(struct file *file, struct page *page) +static int isofs_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, isofs_get_block); + return mpage_read_folio(folio, isofs_get_block); } static void isofs_readahead(struct readahead_control *rac) @@ -1190,7 +1190,7 @@ static sector_t _isofs_bmap(struct address_space *mapping, sector_t block) } static const struct address_space_operations isofs_aops = { - .readpage = isofs_readpage, + .read_folio = isofs_read_folio, .readahead = isofs_readahead, .bmap = _isofs_bmap }; diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c index aa9f112107b2..a5dd7e53754a 100644 --- a/fs/jfs/inode.c +++ b/fs/jfs/inode.c @@ -293,9 +293,9 @@ static int jfs_writepages(struct address_space *mapping, return mpage_writepages(mapping, wbc, jfs_get_block); } -static int jfs_readpage(struct file *file, struct page *page) +static int jfs_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, jfs_get_block); + return mpage_read_folio(folio, jfs_get_block); } static void jfs_readahead(struct readahead_control *rac) @@ -359,7 +359,7 @@ static ssize_t jfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) const struct address_space_operations jfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = jfs_readpage, + .read_folio = jfs_read_folio, .readahead = jfs_readahead, .writepage = jfs_writepage, .writepages = jfs_writepages, diff --git a/fs/mpage.c b/fs/mpage.c index a04439b84ae2..6df9c3aa5728 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -364,20 +364,22 @@ EXPORT_SYMBOL(mpage_readahead); /* * This isn't called much at all */ -int mpage_readpage(struct page *page, get_block_t get_block) +int mpage_read_folio(struct folio *folio, get_block_t get_block) { struct mpage_readpage_args args = { - .page = page, + .page = &folio->page, .nr_pages = 1, .get_block = get_block, }; + VM_BUG_ON_FOLIO(folio_test_large(folio), folio); + args.bio = do_mpage_readpage(&args); if (args.bio) mpage_bio_submit(args.bio); return 0; } -EXPORT_SYMBOL(mpage_readpage); +EXPORT_SYMBOL(mpage_read_folio); /* * Writing is not so simple. diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index 02297ec8dc55..26b8065401b0 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -140,14 +140,14 @@ int nilfs_get_block(struct inode *inode, sector_t blkoff, } /** - * nilfs_readpage() - implement readpage() method of nilfs_aops {} + * nilfs_read_folio() - implement read_folio() method of nilfs_aops {} * address_space_operations. 
* @file - file struct of the file to be read - * @page - the page to be read + * @folio - the folio to be read */ -static int nilfs_readpage(struct file *file, struct page *page) +static int nilfs_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, nilfs_get_block); + return mpage_read_folio(folio, nilfs_get_block); } static void nilfs_readahead(struct readahead_control *rac) @@ -298,7 +298,7 @@ nilfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) const struct address_space_operations nilfs_aops = { .writepage = nilfs_writepage, - .readpage = nilfs_readpage, + .read_folio = nilfs_read_folio, .writepages = nilfs_writepages, .dirty_folio = nilfs_dirty_folio, .readahead = nilfs_readahead, diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c index bfd71f384e21..74f60c457f28 100644 --- a/fs/ntfs3/inode.c +++ b/fs/ntfs3/inode.c @@ -676,8 +676,9 @@ static sector_t ntfs_bmap(struct address_space *mapping, sector_t block) return generic_block_bmap(mapping, block, ntfs_get_block_bmap); } -static int ntfs_readpage(struct file *file, struct page *page) +static int ntfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; int err; struct address_space *mapping = page->mapping; struct inode *inode = mapping->host; @@ -701,7 +702,7 @@ static int ntfs_readpage(struct file *file, struct page *page) } /* Normal + sparse files. */ - return mpage_readpage(page, ntfs_get_block); + return mpage_read_folio(folio, ntfs_get_block); } static void ntfs_readahead(struct readahead_control *rac) @@ -1940,7 +1941,7 @@ const struct inode_operations ntfs_link_inode_operations = { }; const struct address_space_operations ntfs_aops = { - .readpage = ntfs_readpage, + .read_folio = ntfs_read_folio, .readahead = ntfs_readahead, .writepage = ntfs_writepage, .writepages = ntfs_writepages, @@ -1952,7 +1953,7 @@ const struct address_space_operations ntfs_aops = { }; const struct address_space_operations ntfs_aops_cmpr = { - .readpage = ntfs_readpage, + .read_folio = ntfs_read_folio, .readahead = ntfs_readahead, }; // clang-format on diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c index 9d8e7e9788a1..b9895afca9d1 100644 --- a/fs/qnx6/inode.c +++ b/fs/qnx6/inode.c @@ -94,9 +94,9 @@ static int qnx6_check_blockptr(__fs32 ptr) return 1; } -static int qnx6_readpage(struct file *file, struct page *page) +static int qnx6_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, qnx6_get_block); + return mpage_read_folio(folio, qnx6_get_block); } static void qnx6_readahead(struct readahead_control *rac) @@ -496,7 +496,7 @@ static sector_t qnx6_bmap(struct address_space *mapping, sector_t block) return generic_block_bmap(mapping, block, qnx6_get_block); } static const struct address_space_operations qnx6_aops = { - .readpage = qnx6_readpage, + .read_folio = qnx6_read_folio, .readahead = qnx6_readahead, .bmap = qnx6_bmap }; diff --git a/fs/udf/inode.c b/fs/udf/inode.c index 866f9a53248e..edc88716751a 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -193,9 +193,9 @@ static int udf_writepages(struct address_space *mapping, return mpage_writepages(mapping, wbc, udf_get_block); } -static int udf_readpage(struct file *file, struct page *page) +static int udf_read_folio(struct file *file, struct folio *folio) { - return mpage_readpage(page, udf_get_block); + return mpage_read_folio(folio, udf_get_block); } static void udf_readahead(struct readahead_control *rac) @@ -237,7 +237,7 @@ static sector_t udf_bmap(struct address_space *mapping, sector_t block) 
const struct address_space_operations udf_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = udf_readpage, + .read_folio = udf_read_folio, .readahead = udf_readahead, .writepage = udf_writepage, .writepages = udf_writepages, diff --git a/include/linux/mpage.h b/include/linux/mpage.h index f4f5e90a6844..43986f7ec4dd 100644 --- a/include/linux/mpage.h +++ b/include/linux/mpage.h @@ -16,7 +16,7 @@ struct writeback_control; struct readahead_control; void mpage_readahead(struct readahead_control *, get_block_t get_block); -int mpage_readpage(struct page *page, get_block_t get_block); +int mpage_read_folio(struct folio *folio, get_block_t get_block); int mpage_writepages(struct address_space *mapping, struct writeback_control *wbc, get_block_t get_block); int mpage_writepage(struct page *page, get_block_t *get_block, From patchwork Fri Apr 29 17:25:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832520 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4CEF8C433EF for ; Fri, 29 Apr 2022 17:27:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379570AbiD2RaQ (ORCPT ); Fri, 29 Apr 2022 13:30:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379555AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEABCA2060 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=bwi7+MWtzB6nmt/BL53YCai3AHddZpQBVdmLo8uttkU=; b=gL3IgrOd9w1aNvGX4ZO8XkXYWD Do0tCP70xzRVz/6ejUTD9vSoOhcEetnoa2uegZg2F4QSmt9jh6SsqiSX4LS3O5oZQXFymUB3ia5uF V+OJSDEoCS+QlmOVyKJwYn4o7nOeGDoAfsMLfbUay+LPIsZZWv2837zVEcN3w1WncU0dENcg7/uOM Vc/p8Pbz/eejDmolMNDZyQdfD51CNljklfPjguA9dv9GEj0WFiThoPlH11d9qDDhHu7OBbM7IGkFN bmkNRjUMpZe+6PiNrbLe4ihaKHirF8fQNrxCg9kZPSoLJtLpD4CH9XlGN9Z8al6gA5Ido9Srs3nGI w4gX0TMg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNa-00CdaR-Ts; Fri, 29 Apr 2022 17:26:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 39/69] fs: Convert simple_readpage to simple_read_folio Date: Fri, 29 Apr 2022 18:25:26 +0100 Message-Id: <20220429172556.3011843-40-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a full folio conversion; it is prepared to handle folios of arbitrary size. 
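The "arbitrary size" part comes from taking the length from the folio itself: folio_zero_range(folio, 0, folio_size(folio)) zeroes however many pages the folio spans. Purely as an illustration, that roughly amounts to the following per-page loop (helper name made up for the sketch):

	/* Illustrative only: per-page equivalent of zeroing a whole folio. */
	static void zero_whole_folio(struct folio *folio)
	{
		long i;

		for (i = 0; i < folio_nr_pages(folio); i++)
			clear_highpage(folio_page(folio, i));
	}

so, for example, an order-2 folio on 4KiB pages has all 16KiB zeroed before the folio is marked uptodate.
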
Signed-off-by: Matthew Wilcox (Oracle) --- fs/libfs.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/fs/libfs.c b/fs/libfs.c index a1c10d3163e0..31b0ddf01c31 100644 --- a/fs/libfs.c +++ b/fs/libfs.c @@ -539,12 +539,12 @@ int simple_setattr(struct user_namespace *mnt_userns, struct dentry *dentry, } EXPORT_SYMBOL(simple_setattr); -static int simple_readpage(struct file *file, struct page *page) +static int simple_read_folio(struct file *file, struct folio *folio) { - clear_highpage(page); - flush_dcache_page(page); - SetPageUptodate(page); - unlock_page(page); + folio_zero_range(folio, 0, folio_size(folio)); + flush_dcache_folio(folio); + folio_mark_uptodate(folio); + folio_unlock(folio); return 0; } @@ -592,7 +592,7 @@ EXPORT_SYMBOL(simple_write_begin); * should extend on what's done here with a call to mark_inode_dirty() in the * case that i_size has changed. * - * Use *ONLY* with simple_readpage() + * Use *ONLY* with simple_read_folio() */ static int simple_write_end(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, unsigned copied, @@ -628,7 +628,7 @@ static int simple_write_end(struct file *file, struct address_space *mapping, * Provides ramfs-style behavior: data in the pagecache, but no writeback. */ const struct address_space_operations ram_aops = { - .readpage = simple_readpage, + .read_folio = simple_read_folio, .write_begin = simple_write_begin, .write_end = simple_write_end, .dirty_folio = noop_dirty_folio, From patchwork Fri Apr 29 17:25:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34580C433F5 for ; Fri, 29 Apr 2022 17:26:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379689AbiD2RaH (ORCPT ); Fri, 29 Apr 2022 13:30:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43174 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379551AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DDA0EA2071 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=BJj177M1qr0mdk+V5h83h2yX7XDhGHky1f+ZyB2GnAE=; b=u8fLA+/Iosdyw8McNYYZIxHcDj I7kRdQJRKR+gKtXZrxS3cMNLDS1aB8YvCBszJOfSD4fFq2zqp+5d78zHAIA5qy7zzNKyel823Ks0O PEl94YyDmECOXioaocAA+O2uXSpaX27kTrffj3dZTEgTllF7h+F+cSVuiCCg/PUjR3pKyYMZlWw3p Ejh+OD++P/JfKiccHeFONw7P32W3kqTvhKXWVBSnH9WIAKc9/JPUygCVpyMN2tHq9FGzzBVqIC3FP LLnShlxLs8fOj+Rnrljn4W39bOWeh5VL2JLmKUclUHFcPmGxAcRtVgbX+mZvowOMVXZ8Qup2UJU+F 9oAsOn3g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00CdaW-1f; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 40/69] affs: Convert affs to read_folio Date: Fri, 29 Apr 2022 18:25:27 +0100 Message-Id: 
<20220429172556.3011843-41-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/affs/file.c | 5 +++-- fs/affs/symlink.c | 5 +++-- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/fs/affs/file.c b/fs/affs/file.c index 5da562cc7fb7..cd00a4c68a12 100644 --- a/fs/affs/file.c +++ b/fs/affs/file.c @@ -629,8 +629,9 @@ affs_extent_file_ofs(struct inode *inode, u32 newsize) } static int -affs_readpage_ofs(struct file *file, struct page *page) +affs_read_folio_ofs(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; u32 to; int err; @@ -837,7 +838,7 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping, const struct address_space_operations affs_aops_ofs = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = affs_readpage_ofs, + .read_folio = affs_read_folio_ofs, //.writepage = affs_writepage_ofs, .write_begin = affs_write_begin_ofs, .write_end = affs_write_end_ofs diff --git a/fs/affs/symlink.c b/fs/affs/symlink.c index a7531b26e8f0..31d6446dc166 100644 --- a/fs/affs/symlink.c +++ b/fs/affs/symlink.c @@ -11,8 +11,9 @@ #include "affs.h" -static int affs_symlink_readpage(struct file *file, struct page *page) +static int affs_symlink_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct buffer_head *bh; struct inode *inode = page->mapping->host; char *link = page_address(page); @@ -67,7 +68,7 @@ static int affs_symlink_readpage(struct file *file, struct page *page) } const struct address_space_operations affs_symlink_aops = { - .readpage = affs_symlink_readpage, + .read_folio = affs_symlink_read_folio, }; const struct inode_operations affs_symlink_inode_operations = { From patchwork Fri Apr 29 17:25:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832517 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6461CC433F5 for ; Fri, 29 Apr 2022 17:26:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379651AbiD2RaO (ORCPT ); Fri, 29 Apr 2022 13:30:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43146 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379557AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E89CAA2079 for ; Fri, 29 Apr 2022 10:26:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=uljn2IWBs8DqnOE3i3bTad2L0Ix1+446tUcqtKovHII=; 
b=GQd3ByjOdq8lsWpJeQbah4rhXZ Oj7JxCeYwwfTrlgrbHXhfr77AbQYO6looN6utlWGSmgL8AQ+IVx8sw9CXV+4f07II2K1SsFFfrqRZ qbaio9LjZoPjPVxTZj0cO07YiGvvyxEieeQb8cx+rwqqrRP8GHgfe7iA39Eratf5e+ajACqnmwFyj Lw7PlEVMAq4GMWjK5Qhn6mNAg4k6iOO7tt88qJuKSK0w6rN/fK3/778Zk+t+LGbdlj5F3yDRDnNZ3 H9VV3biOxOWLto67uQRRXwpLpMAy3B1HpcsYV+c8qbt0eiIdGLywXOj6zmGTl5a1UoFmn9ZajKnOk P80ayonA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00Cdab-5Y; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 41/69] afs: Convert afs_symlink_readpage to afs_symlink_read_folio Date: Fri, 29 Apr 2022 18:25:28 +0100 Message-Id: <20220429172556.3011843-42-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This function mostly used folios already, and only a few minor changes were needed. Signed-off-by: Matthew Wilcox (Oracle) --- fs/afs/file.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/fs/afs/file.c b/fs/afs/file.c index e277fbe55262..65ef69a1f78e 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -19,7 +19,7 @@ #include "internal.h" static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); -static int afs_symlink_readpage(struct file *file, struct page *page); +static int afs_symlink_read_folio(struct file *file, struct folio *folio); static void afs_invalidate_folio(struct folio *folio, size_t offset, size_t length); static int afs_releasepage(struct page *page, gfp_t gfp_flags); @@ -63,7 +63,7 @@ const struct address_space_operations afs_file_aops = { }; const struct address_space_operations afs_symlink_aops = { - .readpage = afs_symlink_readpage, + .read_folio = afs_symlink_read_folio, .releasepage = afs_releasepage, .invalidate_folio = afs_invalidate_folio, }; @@ -332,11 +332,10 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq) afs_put_read(fsreq); } -static int afs_symlink_readpage(struct file *file, struct page *page) +static int afs_symlink_read_folio(struct file *file, struct folio *folio) { - struct afs_vnode *vnode = AFS_FS_I(page->mapping->host); + struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host); struct afs_read *fsreq; - struct folio *folio = page_folio(page); int ret; fsreq = afs_alloc_read(GFP_NOFS); @@ -347,13 +346,13 @@ static int afs_symlink_readpage(struct file *file, struct page *page) fsreq->len = folio_size(folio); fsreq->vnode = vnode; fsreq->iter = &fsreq->def_iter; - iov_iter_xarray(&fsreq->def_iter, READ, &page->mapping->i_pages, + iov_iter_xarray(&fsreq->def_iter, READ, &folio->mapping->i_pages, fsreq->pos, fsreq->len); ret = afs_fetch_data(fsreq->vnode, fsreq); if (ret == 0) - SetPageUptodate(page); - unlock_page(page); + folio_mark_uptodate(folio); + folio_unlock(folio); return ret; } From patchwork Fri Apr 29 17:25:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832550 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6D0AC433F5 for ; Fri, 29 Apr 2022 17:27:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) 
by vger.kernel.org via listexpand id S1379698AbiD2RbJ (ORCPT ); Fri, 29 Apr 2022 13:31:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43286 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379582AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98AD4A5E87 for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=7i+rQsprs4PBXqaChHELvwK4DblnGhyte9L47UaC7aI=; b=b86J6gzV8jBL+LZuTBUfCQ+PCX MnzX9au4CTbYolXUJhOef5+dP9k3aZmErNtcVXfLOBKIpxnGwi4ty4SjrbzUHNNugj2KUs+kfqPCp RkqM35nOxGtw350eXVQQpfFoIDckQlQPq/9RVsF4erTsJUesiPTOak7kfebEX0sD1VKGVccsLdoBE cxIGybB1t+WOk3bUz9h+BJx6KncySlHviV1hhpALRHVM2cGR2+rFDXF9YgcO6d/A0A+r2eoUH1WjO R0pi1j0RpcRcTn+S/yEkN1B8mz1zKZ/7nY2OXsa9g7ihdoWskmka6hZKdt47qbJSrh5/7aRnb2KQa 7K1CNb/A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00Cdaj-9a; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 42/69] befs: Convert befs to read_folio Date: Fri, 29 Apr 2022 18:25:29 +0100 Message-Id: <20220429172556.3011843-43-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/befs/linuxvfs.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/fs/befs/linuxvfs.c b/fs/befs/linuxvfs.c index 25350dd22cda..be383fa46b12 100644 --- a/fs/befs/linuxvfs.c +++ b/fs/befs/linuxvfs.c @@ -48,7 +48,7 @@ static struct inode *befs_iget(struct super_block *, unsigned long); static struct inode *befs_alloc_inode(struct super_block *sb); static void befs_free_inode(struct inode *inode); static void befs_destroy_inodecache(void); -static int befs_symlink_readpage(struct file *, struct page *); +static int befs_symlink_read_folio(struct file *, struct folio *); static int befs_utf2nls(struct super_block *sb, const char *in, int in_len, char **out, int *out_len); static int befs_nls2utf(struct super_block *sb, const char *in, int in_len, @@ -92,7 +92,7 @@ static const struct address_space_operations befs_aops = { }; static const struct address_space_operations befs_symlink_aops = { - .readpage = befs_symlink_readpage, + .read_folio = befs_symlink_read_folio, }; static const struct export_operations befs_export_operations = { @@ -468,8 +468,9 @@ befs_destroy_inodecache(void) * The data stream become link name. Unless the LONG_SYMLINK * flag is set. 
*/ -static int befs_symlink_readpage(struct file *unused, struct page *page) +static int befs_symlink_read_folio(struct file *unused, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct super_block *sb = inode->i_sb; struct befs_inode_info *befs_ino = BEFS_I(inode); From patchwork Fri Apr 29 17:25:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832516 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5041CC433EF for ; Fri, 29 Apr 2022 17:26:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379692AbiD2RaM (ORCPT ); Fri, 29 Apr 2022 13:30:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379556AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 985FFA27F2 for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=EmS/g/wfHHe+anzEX7PHJ19CY4ZCLaPDaACn1QgK1iU=; b=hy/zC0CK9Q2Gt/qnmTeg1wNkuu mEoqlfT3FNEUAUpv0J0L9iKMWzd9bOmZNjXm38lFxc2IeFDK6UffiuReoTbMsPk68+dfa42TsYiSG 83vILfX3z4GilEFeySuhWuwdvX4a6Dha/v8qX4/CxuehSksmigOrIvfrWeHN3djHNtQ0KLwkFVi/a 3RO9sY4OXZF6sMMicSJ0HBOx5zOlOe1HMhYsC5wyG4+1GGYgi3qOzml+Eho29bwnjihJwb4VLsfBV 18loEYSvwtxHt7e027L/xPagha8Y16Pwh2zJ68RAGxf2UWBfY7pVX6prlME3B2gvTM62NTRzDRUO0 ze1OxNxA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00Cdao-DO; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 43/69] btrfs: Convert btrfs to read_folio Date: Fri, 29 Apr 2022 18:25:30 +0100 Message-Id: <20220429172556.3011843-44-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
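Concretely, "weak" here means the ->read_folio entry point is changed to take a folio, but the body immediately recovers the struct page and carries on with the existing page-based code, which is safe while the filesystem only sees order-0 folios. An illustrative sketch (function names hypothetical, not taken from this patch):

	static int foo_read_folio(struct file *file, struct folio *folio)
	{
		struct page *page = &folio->page;

		/* hypothetical pre-existing page-based helper, unchanged */
		return foo_readpage_impl(file, page);
	}

Call sites that still hold a struct page switch over with page_folio(), e.g. foo_read_folio(NULL, page_folio(page)). A full conversion would instead rewrite the body in terms of folio operations, as the simple_read_folio() change earlier in this series does.
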
Signed-off-by: Matthew Wilcox (Oracle) --- fs/btrfs/ctree.h | 2 +- fs/btrfs/file.c | 7 ++++--- fs/btrfs/free-space-cache.c | 2 +- fs/btrfs/inode.c | 7 ++++--- fs/btrfs/ioctl.c | 2 +- fs/btrfs/relocation.c | 8 ++++---- fs/btrfs/send.c | 2 +- 7 files changed, 16 insertions(+), 14 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 077c95e9baa5..8d4b5edd4059 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -3269,7 +3269,7 @@ void btrfs_split_delalloc_extent(struct inode *inode, struct extent_state *orig, u64 split); void btrfs_set_range_writeback(struct btrfs_inode *inode, u64 start, u64 end); vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf); -int btrfs_readpage(struct file *file, struct page *page); +int btrfs_read_folio(struct file *file, struct folio *folio); void btrfs_evict_inode(struct inode *inode); int btrfs_write_inode(struct inode *inode, struct writeback_control *wbc); struct inode *btrfs_alloc_inode(struct super_block *sb); diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c index 380054c94e4b..57fba5abb059 100644 --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -1307,11 +1307,12 @@ static int prepare_uptodate_page(struct inode *inode, struct page *page, u64 pos, bool force_uptodate) { + struct folio *folio = page_folio(page); int ret = 0; if (((pos & (PAGE_SIZE - 1)) || force_uptodate) && !PageUptodate(page)) { - ret = btrfs_readpage(NULL, page); + ret = btrfs_read_folio(NULL, folio); if (ret) return ret; lock_page(page); @@ -1321,7 +1322,7 @@ static int prepare_uptodate_page(struct inode *inode, } /* - * Since btrfs_readpage() will unlock the page before it + * Since btrfs_read_folio() will unlock the folio before it * returns, there is a window where btrfs_releasepage() can be * called to release the page. Here we check both inode * mapping and PagePrivate() to make sure the page was not @@ -2401,7 +2402,7 @@ static int btrfs_file_mmap(struct file *filp, struct vm_area_struct *vma) { struct address_space *mapping = filp->f_mapping; - if (!mapping->a_ops->readpage) + if (!mapping->a_ops->read_folio) return -ENOEXEC; file_accessed(filp); diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index 01a408db5683..829a414a7ecb 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -465,7 +465,7 @@ static int io_ctl_prepare_pages(struct btrfs_io_ctl *io_ctl, bool uptodate) io_ctl->pages[i] = page; if (uptodate && !PageUptodate(page)) { - btrfs_readpage(NULL, page); + btrfs_read_folio(NULL, page_folio(page)); lock_page(page); if (page->mapping != inode->i_mapping) { btrfs_err(BTRFS_I(inode)->root->fs_info, diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 1c8a43ecfb9f..34d452d350d6 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -4725,7 +4725,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len, goto out_unlock; if (!PageUptodate(page)) { - ret = btrfs_readpage(NULL, page); + ret = btrfs_read_folio(NULL, page_folio(page)); lock_page(page); if (page->mapping != mapping) { unlock_page(page); @@ -8124,8 +8124,9 @@ static int btrfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, return extent_fiemap(BTRFS_I(inode), fieinfo, start, len); } -int btrfs_readpage(struct file *file, struct page *page) +int btrfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct btrfs_inode *inode = BTRFS_I(page->mapping->host); u64 start = page_offset(page); u64 end = start + PAGE_SIZE - 1; @@ -11368,7 +11369,7 @@ static const struct file_operations 
btrfs_dir_file_operations = { * For now we're avoiding this by dropping bmap. */ static const struct address_space_operations btrfs_aops = { - .readpage = btrfs_readpage, + .read_folio = btrfs_read_folio, .writepage = btrfs_writepage, .writepages = btrfs_writepages, .readahead = btrfs_readahead, diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index be6c24577dbe..8d0c4d23b743 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -1359,7 +1359,7 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, * make it uptodate. */ if (!PageUptodate(page)) { - btrfs_readpage(NULL, page); + btrfs_read_folio(NULL, page_folio(page)); lock_page(page); if (page->mapping != mapping || !PagePrivate(page)) { unlock_page(page); diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index 9ae06895ffc9..fb16c484bbae 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -1101,7 +1101,7 @@ int replace_file_extents(struct btrfs_trans_handle *trans, continue; /* - * if we are modifying block in fs tree, wait for readpage + * if we are modifying block in fs tree, wait for read_folio * to complete and drop the extent cache */ if (root->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID) { @@ -1563,7 +1563,7 @@ static int invalidate_extent_cache(struct btrfs_root *root, end = (u64)-1; } - /* the lock_extent waits for readpage to complete */ + /* the lock_extent waits for read_folio to complete */ lock_extent(&BTRFS_I(inode)->io_tree, start, end); btrfs_drop_extent_cache(BTRFS_I(inode), start, end, 1); unlock_extent(&BTRFS_I(inode)->io_tree, start, end); @@ -2818,7 +2818,7 @@ static noinline_for_stack int prealloc_file_extent_cluster( * Subpage can't handle page with DIRTY but without UPTODATE * bit as it can lead to the following deadlock: * - * btrfs_readpage() + * btrfs_read_folio() * | Page already *locked* * |- btrfs_lock_and_flush_ordered_range() * |- btrfs_start_ordered_extent() @@ -2972,7 +2972,7 @@ static int relocate_one_page(struct inode *inode, struct file_ra_state *ra, last_index + 1 - page_index); if (!PageUptodate(page)) { - btrfs_readpage(NULL, page); + btrfs_read_folio(NULL, page_folio(page)); lock_page(page); if (!PageUptodate(page)) { ret = -EIO; diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index b327dbe0cbf5..8985d115559d 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -4991,7 +4991,7 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len) } if (!PageUptodate(page)) { - btrfs_readpage(NULL, page); + btrfs_read_folio(NULL, page_folio(page)); lock_page(page); if (!PageUptodate(page)) { unlock_page(page); From patchwork Fri Apr 29 17:25:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832549 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E73DEC433EF for ; Fri, 29 Apr 2022 17:27:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379693AbiD2RbI (ORCPT ); Fri, 29 Apr 2022 13:31:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379578AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id 988EEA27F9 for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ulnnpW7LCqHVAAAx5GiwIjukDKStgvzrHinbZXC+krM=; b=D3FSbBmPq0bC6ueZiZjT02KXst 0zX5TH8EuTVALAt2r1f5RGQ4iI/XEnNg2hxZaX1MDf55v4NUAYy8KlP5sr+GOS5Z9yaN3rlYjMuFt 8WbqhfYPeZc7vO4pVzFk+nOPV4g5jwFZT46SsnsPynqS3IzQh/dVqdEZw1RWvcGFvEm4P9zRUcEk7 p1wLE7E0SUBlL75w7y70tGqRhwlZJKmBGQE6Z2V58jqhcP1FZNYMsB6fWAnr1Q4JewhrcHNQLnKWm S6dk6Gl6PnGHPC0tqWHyGxRSsZK0lTTxaFvBG1cP9znI3AH3vluJquIP58g4ypmu9P1YYCToC7NgE GCbfj2CA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00Cdat-IJ; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 44/69] cifs: Convert cifs to read_folio Date: Fri, 29 Apr 2022 18:25:31 +0100 Message-Id: <20220429172556.3011843-45-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. CIFS should probably be converted to use netfs_read_folio() by someone familiar with it. Signed-off-by: Matthew Wilcox (Oracle) --- fs/cifs/file.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/fs/cifs/file.c b/fs/cifs/file.c index da362b5a0c96..bc6d88e2e672 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -4612,8 +4612,9 @@ static int cifs_readpage_worker(struct file *file, struct page *page, return rc; } -static int cifs_readpage(struct file *file, struct page *page) +static int cifs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; loff_t offset = page_file_offset(page); int rc = -EACCES; unsigned int xid; @@ -4626,7 +4627,7 @@ static int cifs_readpage(struct file *file, struct page *page) return rc; } - cifs_dbg(FYI, "readpage %p at offset %d 0x%x\n", + cifs_dbg(FYI, "read_folio %p at offset %d 0x%x\n", page, (int)offset, (int)offset); rc = cifs_readpage_worker(file, page, &offset); @@ -4965,7 +4966,7 @@ static bool cifs_dirty_folio(struct address_space *mapping, struct folio *folio) #endif const struct address_space_operations cifs_addr_ops = { - .readpage = cifs_readpage, + .read_folio = cifs_read_folio, .readahead = cifs_readahead, .writepage = cifs_writepage, .writepages = cifs_writepages, @@ -4986,12 +4987,12 @@ const struct address_space_operations cifs_addr_ops = { }; /* - * cifs_readpages requires the server to support a buffer large enough to + * cifs_readahead requires the server to support a buffer large enough to * contain the header plus one complete page of data. Otherwise, we need - * to leave cifs_readpages out of the address space operations. + * to leave cifs_readahead out of the address space operations. 
*/ const struct address_space_operations cifs_addr_ops_smallbuf = { - .readpage = cifs_readpage, + .read_folio = cifs_read_folio, .writepage = cifs_writepage, .writepages = cifs_writepages, .write_begin = cifs_write_begin, From patchwork Fri Apr 29 17:25:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832518 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 775CEC433EF for ; Fri, 29 Apr 2022 17:26:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379526AbiD2RaP (ORCPT ); Fri, 29 Apr 2022 13:30:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43290 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379574AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98A39A5E86 for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=TAxcWC6FMiXuwr7R63PsgyvedCcLMofVrsLaAeQ1AwY=; b=fy+nHMZsYK4lfImxWNWGQ/fpbz vzkUsx53jXY0hrjgClRZ1INfAnqp0t5HBPf7q9e/MFsChZOtmxjyKlrjRFDUqkH6MHROMkhApYWDY /I/xcYhlcqeeyDpa38O+1ZBt9hIx+B6Q9QeG0/oQMpqUx3pUQmPtuHo32xt9hr3NNXi2qjQeKjmkD TNjUpaGy9D7d5jH9t2BOR03bIN3S8Zgp5phPUT87UAuftdUoW074afJHXs+KMfVeX/CgiksCTNzki IjOAYtCykQKqafdp0srhZIT2vC5ARQhl3xQvi7FN4C27kCXLwyKQW+O0DJm0IzAUvyeAvfW7ykGCf FUdrLHqA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00Cdb0-M9; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 45/69] coda: Convert coda to read_folio Date: Fri, 29 Apr 2022 18:25:32 +0100 Message-Id: <20220429172556.3011843-46-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/coda/symlink.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/fs/coda/symlink.c b/fs/coda/symlink.c index 8907d0508198..8adf81042498 100644 --- a/fs/coda/symlink.c +++ b/fs/coda/symlink.c @@ -20,9 +20,10 @@ #include "coda_psdev.h" #include "coda_linux.h" -static int coda_symlink_filler(struct file *file, struct page *page) +static int coda_symlink_filler(struct file *file, struct folio *folio) { - struct inode *inode = page->mapping->host; + struct page *page = &folio->page; + struct inode *inode = folio->mapping->host; int error; struct coda_inode_info *cii; unsigned int len = PAGE_SIZE; @@ -44,5 +45,5 @@ static int coda_symlink_filler(struct file *file, struct page *page) } const struct address_space_operations coda_symlink_aops = { - .readpage = coda_symlink_filler, + .read_folio = coda_symlink_filler, }; From patchwork Fri Apr 29 17:25:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832522 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3475BC433EF for ; Fri, 29 Apr 2022 17:27:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379665AbiD2RaW (ORCPT ); Fri, 29 Apr 2022 13:30:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43280 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379571AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 983D1A27E5 for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=XQRW3lZzumjUXQZd6nQRKP1nNWR2SkdAYLiqccqYpe0=; b=buGfuJ/8y/xdCRNIUfe8CSM4n2 HTgUSH16Bw1r3nJgyRzzc+1PxFVrYavBfn/2O8dVVAsD1qYI/7pPiOdjtRlbVS+erntiu3SOIApFc vWr1W41kw6k/rBfb6l7xI+FhWPhr43VdAQy5JJpYDOQ6GTQXS9IQuSu8bjgTtLq3skMYHoLUr7Xef N601wRkqhOQ5sWCyKNUXdZIr2A+sbtSgxfFdvwb7VRd362ZqiyJpgylxZlIQ4BWoF1KfcZ7Xv3DTY JxdNTcuts9VBsJxRZ59T0K8P3PSMlssirAJV22oU1FW+ZvQtUSIJeniTQn4KRN9P99h7mgy5tMXYV npeekr4g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00Cdb7-Pt; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 46/69] cramfs: Convert cramfs to read_folio Date: Fri, 29 Apr 2022 18:25:33 +0100 Message-Id: <20220429172556.3011843-47-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/cramfs/README | 8 ++++---- fs/cramfs/inode.c | 7 ++++--- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/fs/cramfs/README b/fs/cramfs/README index d71b27e0ff15..778df5c4d70b 100644 --- a/fs/cramfs/README +++ b/fs/cramfs/README @@ -115,7 +115,7 @@ Block Size (Block size in cramfs refers to the size of input data that is compressed at a time. It's intended to be somewhere around -PAGE_SIZE for cramfs_readpage's convenience.) +PAGE_SIZE for cramfs_read_folio's convenience.) The superblock ought to indicate the block size that the fs was written for, since comments in indicate that @@ -161,7 +161,7 @@ size. The options are: PAGE_SIZE. It's easy enough to change the kernel to use a smaller value than -PAGE_SIZE: just make cramfs_readpage read multiple blocks. +PAGE_SIZE: just make cramfs_read_folio read multiple blocks. The cost of option 1 is that kernels with a larger PAGE_SIZE value don't get as good compression as they can. @@ -173,9 +173,9 @@ they don't mind their cramfs being inaccessible to kernels with smaller PAGE_SIZE values. Option 3 is easy to implement if we don't mind being CPU-inefficient: -e.g. get readpage to decompress to a buffer of size MAX_BLKSIZE (which +e.g. get read_folio to decompress to a buffer of size MAX_BLKSIZE (which must be no larger than 32KB) and discard what it doesn't need. -Getting readpage to read into all the covered pages is harder. +Getting read_folio to read into all the covered pages is harder. The main advantage of option 3 over 1, 2, is better compression. The cost is greater complexity. Probably not worth it, but I hope someone diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c index 666aa380011e..7ae59a6afc5c 100644 --- a/fs/cramfs/inode.c +++ b/fs/cramfs/inode.c @@ -414,7 +414,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma) /* * Let's create a mixed map if we can't map it all. * The normal paging machinery will take care of the - * unpopulated ptes via cramfs_readpage(). + * unpopulated ptes via cramfs_read_folio(). 
*/ int i; vma->vm_flags |= VM_MIXEDMAP; @@ -814,8 +814,9 @@ static struct dentry *cramfs_lookup(struct inode *dir, struct dentry *dentry, un return d_splice_alias(inode, dentry); } -static int cramfs_readpage(struct file *file, struct page *page) +static int cramfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; u32 maxblock; int bytes_filled; @@ -925,7 +926,7 @@ static int cramfs_readpage(struct file *file, struct page *page) } static const struct address_space_operations cramfs_aops = { - .readpage = cramfs_readpage + .read_folio = cramfs_read_folio }; /* From patchwork Fri Apr 29 17:25:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832551 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9AA5BC433EF for ; Fri, 29 Apr 2022 17:27:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379706AbiD2RbL (ORCPT ); Fri, 29 Apr 2022 13:31:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379572AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5C43A5E8D for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=LMGCME+l7x4GzgKB/yPRpDrmevCiF9Q8eBSQcevd08A=; b=f6d4PCNg2xmxG3XfJnhEYl1c8C Y2nfWsdZc1+S2olVppYeNiGzr3U+EYN/nARU62UB/yTGFY4hE1bbCf210MnDHJ15XDyfUddULG0iB uC/qfTEnTp+DOeLNEhKfAkQVN2PJDJKAxZJQr2LAaKz2gPUrYjnw6hMfh5eHtwWi+oanH3WbD+22N zEwLYvc1NDk1r1SC/PsnX9GIJOsYVg8SwEKelI/LBbkrOldGOqu3hHEe3qC6YPxIsuPtwlroNRcQK 74Z3Mnk0rw+5ZEjfIcgS1U/NxxfdMrjFgKxnMBJh1M8ggYerm2RqqfrOIVY0YHVmlR8uclUq4HdVL ZDXB6F4A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNb-00CdbG-Uo; Fri, 29 Apr 2022 17:26:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 47/69] ecryptfs: Convert ecryptfs to read_folio Date: Fri, 29 Apr 2022 18:25:34 +0100 Message-Id: <20220429172556.3011843-48-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/ecryptfs/mmap.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/fs/ecryptfs/mmap.c b/fs/ecryptfs/mmap.c index 47904d40ef88..19af229eb7ca 100644 --- a/fs/ecryptfs/mmap.c +++ b/fs/ecryptfs/mmap.c @@ -170,16 +170,17 @@ ecryptfs_copy_up_encrypted_with_header(struct page *page, } /** - * ecryptfs_readpage + * ecryptfs_read_folio * @file: An eCryptfs file - * @page: Page from eCryptfs inode mapping into which to stick the read data + * @folio: Folio from eCryptfs inode mapping into which to stick the read data * - * Read in a page, decrypting if necessary. + * Read in a folio, decrypting if necessary. * * Returns zero on success; non-zero on error. */ -static int ecryptfs_readpage(struct file *file, struct page *page) +static int ecryptfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct ecryptfs_crypt_stat *crypt_stat = &ecryptfs_inode_to_private(page->mapping->host)->crypt_stat; int rc = 0; @@ -549,7 +550,7 @@ const struct address_space_operations ecryptfs_aops = { .invalidate_folio = block_invalidate_folio, #endif .writepage = ecryptfs_writepage, - .readpage = ecryptfs_readpage, + .read_folio = ecryptfs_read_folio, .write_begin = ecryptfs_write_begin, .write_end = ecryptfs_write_end, .bmap = ecryptfs_bmap, From patchwork Fri Apr 29 17:25:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832534 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA41AC433FE for ; Fri, 29 Apr 2022 17:27:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379663AbiD2Ran (ORCPT ); Fri, 29 Apr 2022 13:30:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42752 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379587AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5CA5A5E8E for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Cha7rdYhynQkzfrMoZf+OEIc5RWuGXtJ7jbLzk85xcY=; b=vm111IiruDj4EXD/yuURb3sWkR ty6OQnONqeV/dWAQfj0yY2uxOKiHC8JakCVgMYsrS7WGvf5EqCmZRySwGVFYDhL/JiTHyl/r4P60W pEwCIPbCRv4j64YB8oTPIENDTFcU5BKA+6DoPyc6f02Vtq0buobz2IXvm9Hlud8QIjap6S7coZati Hd3ImXnrOaX4srSnic0StgEzM4RyhRWc1VtK2XkuOMwXH3lG1jOD9pqjqIRUQU7bWARJ5nx2JhUdB dT16iLxSH0X2NG4S5KXFJ78TuHow0GUfm9QHDex9UXPCO1mGT+IhueI5V2R826ltxc5hgXzeRZ1uY 0p7xWPZA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00CdbT-2d; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 48/69] efs: Convert efs symlinks to read_folio Date: Fri, 29 Apr 2022 18:25:35 +0100 Message-Id: <20220429172556.3011843-49-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: 
<20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/efs/symlink.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/efs/symlink.c b/fs/efs/symlink.c index 923eb91654d5..3b03a573cb1a 100644 --- a/fs/efs/symlink.c +++ b/fs/efs/symlink.c @@ -12,8 +12,9 @@ #include #include "efs.h" -static int efs_symlink_readpage(struct file *file, struct page *page) +static int efs_symlink_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; char *link = page_address(page); struct buffer_head * bh; struct inode * inode = page->mapping->host; @@ -49,5 +50,5 @@ static int efs_symlink_readpage(struct file *file, struct page *page) } const struct address_space_operations efs_symlink_aops = { - .readpage = efs_symlink_readpage + .read_folio = efs_symlink_read_folio }; From patchwork Fri Apr 29 17:25:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832527 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E142C433F5 for ; Fri, 29 Apr 2022 17:27:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379660AbiD2Ra1 (ORCPT ); Fri, 29 Apr 2022 13:30:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379604AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 27542A5E9F for ; Fri, 29 Apr 2022 10:26:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=5q6pihEh5qHpwkIUMREQkE2pZvOoCqZcQAv/Kv9aXYE=; b=BEznQgZze2Mc5AzggAzzEjKjjF JvSP7w+35WEbePGhuWyI60vw8DSp8ompmTQO4BjY7mgl//TrI+jL7Dik6e0F8b1u++1t0m1KJb9og peGFC2CAd7UZTTVOK9IsLod6/Tyk2paUxEDryh+ohVLBrkdmomk61eFyxFmUjdJZM+pr4L9Jxv7Rz T19ss5WPUGEp0cIBmy5w3RtebgEc3JDm4owf+ojqFX3VH6rOXPeBZuDm61ynYgwGZIrJlWh5U1TEk fkL++JYmnW/hEjC1A4ZUfmt7ggk3bPYJa1K/i8gwITMpt0BnatnV7o/CwkTjmqqFUYbeF07yvMCfG gvK+XBzg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00Cdbr-7b; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 49/69] erofs: Convert erofs zdata to read_folio Date: Fri, 29 Apr 2022 18:25:36 +0100 Message-Id: <20220429172556.3011843-50-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. 
A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/erofs/zdata.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index e6dea6dfca16..95efc127b2ba 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -791,7 +791,7 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe, static bool z_erofs_get_sync_decompress_policy(struct erofs_sb_info *sbi, unsigned int readahead_pages) { - /* auto: enable for readpage, disable for readahead */ + /* auto: enable for read_folio, disable for readahead */ if ((sbi->opt.sync_decompress == EROFS_SYNC_DECOMPRESS_AUTO) && !readahead_pages) return true; @@ -1488,8 +1488,9 @@ static void z_erofs_pcluster_readmore(struct z_erofs_decompress_frontend *f, } } -static int z_erofs_readpage(struct file *file, struct page *page) +static int z_erofs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *const inode = page->mapping->host; struct erofs_sb_info *const sbi = EROFS_I_SB(inode); struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode); @@ -1563,6 +1564,6 @@ static void z_erofs_readahead(struct readahead_control *rac) } const struct address_space_operations z_erofs_aops = { - .readpage = z_erofs_readpage, + .read_folio = z_erofs_read_folio, .readahead = z_erofs_readahead, }; From patchwork Fri Apr 29 17:25:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832523 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BC45FC4332F for ; Fri, 29 Apr 2022 17:27:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379655AbiD2RaX (ORCPT ); Fri, 29 Apr 2022 13:30:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379589AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 275A2A5EAA for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Y6x4tCLM/EtIuFI41ik1GpmwkPE/byLnfHHJAw1Tkrw=; b=Sj67EQ8SYixy67ewGb2QAQsR7/ RCEJLbryPYIETpTdy1B+bP/OuNoS9ovbSv5Yr+3i2JXuE4h2FtQokFDxuBhrV4/niW+z9UrpYG0Sf qE/+Lx6RiBN4ZkxEMQIMOO7rePbrACZsMlS08/fSWRhAvaPJyLonXx7+tjEMWxrUrd+feXUzXKnmn MUDlC8MjDHpa83LTV2Ju+NWH5J0OMBY3RZPj46t+DW3uyex8uI4mBlmxrmby2O+Qkp5HtxQAxWQE/ giLqaGvFqvPXr12B+xU9xqu9GGCPfsEQhOG11BNl2H8T7mKI56wFwy4AtuCquEGPslD6Q40v8F9NF eGNkZhYg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00Cdbx-Bu; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 50/69] ext4: Convert ext4 to read_folio Date: Fri, 29 Apr 2022 18:25:37 +0100 Message-Id: <20220429172556.3011843-51-willy@infradead.org> X-Mailer: 
git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/ext4/inode.c | 9 +++++---- fs/ext4/move_extent.c | 4 ++-- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index d3a7e8581291..c6b8cb4949f1 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3180,8 +3180,9 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block) return iomap_bmap(mapping, block, &ext4_iomap_ops); } -static int ext4_readpage(struct file *file, struct page *page) +static int ext4_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; int ret = -EAGAIN; struct inode *inode = page->mapping->host; @@ -3608,7 +3609,7 @@ static int ext4_iomap_swap_activate(struct swap_info_struct *sis, } static const struct address_space_operations ext4_aops = { - .readpage = ext4_readpage, + .read_folio = ext4_read_folio, .readahead = ext4_readahead, .writepage = ext4_writepage, .writepages = ext4_writepages, @@ -3626,7 +3627,7 @@ static const struct address_space_operations ext4_aops = { }; static const struct address_space_operations ext4_journalled_aops = { - .readpage = ext4_readpage, + .read_folio = ext4_read_folio, .readahead = ext4_readahead, .writepage = ext4_writepage, .writepages = ext4_writepages, @@ -3643,7 +3644,7 @@ static const struct address_space_operations ext4_journalled_aops = { }; static const struct address_space_operations ext4_da_aops = { - .readpage = ext4_readpage, + .read_folio = ext4_read_folio, .readahead = ext4_readahead, .writepage = ext4_writepage, .writepages = ext4_writepages, diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c index 4172a7d22471..701f1d6a217f 100644 --- a/fs/ext4/move_extent.c +++ b/fs/ext4/move_extent.c @@ -669,8 +669,8 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk, * Up semaphore to avoid following problems: * a. transaction deadlock among ext4_journal_start, * ->write_begin via pagefault, and jbd2_journal_commit - * b. racing with ->readpage, ->write_begin, and ext4_get_block - * in move_extent_per_page + * b. 
racing with ->read_folio, ->write_begin, and + * ext4_get_block in move_extent_per_page */ ext4_double_up_write_data_sem(orig_inode, donor_inode); /* Swap original branches with new branches */ From patchwork Fri Apr 29 17:25:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832531 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53268C433FE for ; Fri, 29 Apr 2022 17:27:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379607AbiD2Raf (ORCPT ); Fri, 29 Apr 2022 13:30:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43328 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379605AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A43A6A66CF for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=+XuT2UNZ8tyOhatktPLdCXVQdkBow5gH+ALSLhjgNso=; b=WkS+nmcjDuYlLRrI9nLQ1v3VCW WOGm6lH9KvEasVGKMz1PYuPmHZtZFsFryCTKmfVl1SA1vmsnzxYxNyXPqUISMo6cJm94LDUb9zUhV MlNjgnXImvMap57HhjWrxBQz2NSFvgyUeU5SiVqNpanqnSb0FO1vGu1xZPwZN2f5b1v/tRegOGQbL kRwjih/opZTJW51D3NrZudTguvyxQlDDMrZWzXPsBIYPZMjHnZxZfq5mnl0/FjCGYicWVCqFK8Ck4 QIAtEE3vAM8M0vppIkcPpNYN/ddrI3gGcXD0zv9EwYTmBuZOSs1Ffg4cUWrcP0N7bWsTpROp0Cxu8 fFlMDZAA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00Cdc2-Fj; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 51/69] f2fs: Convert f2fs to read_folio Date: Fri, 29 Apr 2022 18:25:38 +0100 Message-Id: <20220429172556.3011843-52-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/f2fs/data.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index b3cf49136b9f..f894267f0722 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -2372,8 +2372,9 @@ static int f2fs_mpage_readpages(struct inode *inode, return ret; } -static int f2fs_read_data_page(struct file *file, struct page *page) +static int f2fs_read_data_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page_file_mapping(page)->host; int ret = -EAGAIN; @@ -3935,7 +3936,7 @@ static void f2fs_swap_deactivate(struct file *file) #endif const struct address_space_operations f2fs_dblock_aops = { - .readpage = f2fs_read_data_page, + .read_folio = f2fs_read_data_folio, .readahead = f2fs_readahead, .writepage = f2fs_write_data_page, .writepages = f2fs_write_data_pages, From patchwork Fri Apr 29 17:25:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832526 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DA140C433EF for ; Fri, 29 Apr 2022 17:27:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379528AbiD2Ra0 (ORCPT ); Fri, 29 Apr 2022 13:30:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379594AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 81771A6230 for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=dsJD4hWlcYDmhUrlvN64XAtOIu0W3eZefceiArYxrO4=; b=SG49es2Ho6iIMmKSApoMWWHlff UgboG/I1Sf2Cftx5VHfIMfs336CnaoShLivGF3GnbQ5APZtmUpzpPLO0y78n6m6zUXqHB1nFldzEq ADAK70l62W7U8O1s59iXBKruYKm2zWvgfVZ7ZkIkLakSKwLhAQI44PLAxX8ViNM/hAZZeyDW0NpW+ jlSpgfl3I2cwBjxNxQjHs8vyxR3grTvzNSFCYt2Ixha1m0UfO4MFnjUWfOQGCh5BrZgaqfaGjQGMP 8NP01ppSqNMAHfjQSZiH10NkfTiGngemPpAV1ruQc1DL1J+UrnACi32f1O3lnkw+zr5C0pghdqgst IPzFTIDg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00Cdc7-JS; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 52/69] freevxfs: Convert vxfs_immed to read_folio Date: Fri, 29 Apr 2022 18:25:39 +0100 Message-Id: <20220429172556.3011843-53-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/freevxfs/vxfs_immed.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/fs/freevxfs/vxfs_immed.c b/fs/freevxfs/vxfs_immed.c index bfc780c682fb..a37431e443d3 100644 --- a/fs/freevxfs/vxfs_immed.c +++ b/fs/freevxfs/vxfs_immed.c @@ -38,33 +38,34 @@ #include "vxfs_inode.h" -static int vxfs_immed_readpage(struct file *, struct page *); +static int vxfs_immed_read_folio(struct file *, struct folio *); /* * Address space operations for immed files and directories. */ const struct address_space_operations vxfs_immed_aops = { - .readpage = vxfs_immed_readpage, + .read_folio = vxfs_immed_read_folio, }; /** - * vxfs_immed_readpage - read part of an immed inode into pagecache + * vxfs_immed_read_folio - read part of an immed inode into pagecache * @file: file context (unused) - * @page: page frame to fill in. + * @folio: folio to fill in. * * Description: - * vxfs_immed_readpage reads a part of the immed area of the + * vxfs_immed_read_folio reads a part of the immed area of the * file that hosts @pp into the pagecache. * * Returns: * Zero on success, else a negative error code. * * Locking status: - * @page is locked and will be unlocked. + * @folio is locked and will be unlocked. */ static int -vxfs_immed_readpage(struct file *fp, struct page *pp) +vxfs_immed_read_folio(struct file *fp, struct folio *folio) { + struct page *pp = &folio->page; struct vxfs_inode_info *vip = VXFS_INO(pp->mapping->host); u_int64_t offset = (u_int64_t)pp->index << PAGE_SHIFT; caddr_t kaddr; From patchwork Fri Apr 29 17:25:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832529 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BF139C433FE for ; Fri, 29 Apr 2022 17:27:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379666AbiD2Rac (ORCPT ); Fri, 29 Apr 2022 13:30:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379607AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A41C2A66CD for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=dkrajfA2mxp2aowMuf2gUwqeOPobJuO7Ne+a3M1SyHc=; b=VYfmFmk3/i6hORInXI0jd3n+Ki xWYtxWPwfMMYtXtiO5f6Zy+9URO/czU05GIVyYRTYyUQ3EDA1eI6orGbRFR3IyYV8amvqfX2VH3hL d9Uh/w2oQVCDFc56nrJXh0girqg8gvF45g6erjrE/eBCl/nRAmp4N5Ir3YffgzzeHBP1XFppoe4YO G2cwubAb2LbmtaQN/4Mv9aYcv4vz9MUrHaq4tsXuOrdAOkPgybyEP+whV8zAmHSN3Z2VhTP8y+WJ5 kfl/GWVG+Ap0uQ0LCJSh0G8knvwDLIQbtf30a7ZUcjATbyDOyz4Yo7tzMPRKeepnPCWifimAI7u72 j7dHJe9w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00CdcF-PC; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 53/69] fuse: Convert fuse to read_folio Date: Fri, 
29 Apr 2022 18:25:40 +0100 Message-Id: <20220429172556.3011843-54-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/fuse/dir.c | 10 +++++----- fs/fuse/file.c | 5 +++-- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c index 9ff27b8a9782..74303d6e987b 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -1957,20 +1957,20 @@ void fuse_init_dir(struct inode *inode) fi->rdc.version = 0; } -static int fuse_symlink_readpage(struct file *null, struct page *page) +static int fuse_symlink_read_folio(struct file *null, struct folio *folio) { - int err = fuse_readlink_page(page->mapping->host, page); + int err = fuse_readlink_page(folio->mapping->host, &folio->page); if (!err) - SetPageUptodate(page); + folio_mark_uptodate(folio); - unlock_page(page); + folio_unlock(folio); return err; } static const struct address_space_operations fuse_symlink_aops = { - .readpage = fuse_symlink_readpage, + .read_folio = fuse_symlink_read_folio, }; void fuse_init_symlink(struct inode *inode) diff --git a/fs/fuse/file.c b/fs/fuse/file.c index bca8c2135ec5..05caa2b9272e 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -857,8 +857,9 @@ static int fuse_do_readpage(struct file *file, struct page *page) return 0; } -static int fuse_readpage(struct file *file, struct page *page) +static int fuse_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; int err; @@ -3174,7 +3175,7 @@ static const struct file_operations fuse_file_operations = { }; static const struct address_space_operations fuse_file_aops = { - .readpage = fuse_readpage, + .read_folio = fuse_read_folio, .readahead = fuse_readahead, .writepage = fuse_writepage, .writepages = fuse_writepages, From patchwork Fri Apr 29 17:25:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832528 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18F71C433EF for ; Fri, 29 Apr 2022 17:27:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379678AbiD2Ra3 (ORCPT ); Fri, 29 Apr 2022 13:30:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379520AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B864A66D6 for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Zlrw23/9zsoNnmB/ZFg3rEXTUeeweSd78Bi4kp8O0ig=; 
b=JeC/X1JreHV5CXxHbMuWjBuStA PCxvXTXBYcAnJqXHrnQHYnfOaB0hkR39EMyQtN0wR/ODTetJ6wrgLYLsN6/olvITubyLltVpymueU /oK8DFrpz1aaH2D0RWAV/hD3C7Se/4smy0PofsIDV+ad/A2sNArcQgsmFeqbipF0WTVrrTBfmYVrJ DDWfSfIPobiYadM8EfsLiG4dhCprzwXeqJZF80PSnti33VjdwI3oC4HELnmNftLnI6o7godInx82/ +z7bbu32gyzfyt3/6hJsqeQm6LvM0vifn9InmtXEYmOSpAVgIxuSdsNMoTqPtGTRvpQMMg+c+iUu1 knfazqCA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNc-00CdcN-UJ; Fri, 29 Apr 2022 17:26:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 54/69] hostfs: Convert hostfs to read_folio Date: Fri, 29 Apr 2022 18:25:41 +0100 Message-Id: <20220429172556.3011843-55-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/hostfs/hostfs_kern.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index e658d8edde35..cc1bc6f93a01 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -434,8 +434,9 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc) return err; } -static int hostfs_readpage(struct file *file, struct page *page) +static int hostfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; char *buffer; loff_t start = page_offset(page); int bytes_read, ret = 0; @@ -504,7 +505,7 @@ static int hostfs_write_end(struct file *file, struct address_space *mapping, static const struct address_space_operations hostfs_aops = { .writepage = hostfs_writepage, - .readpage = hostfs_readpage, + .read_folio = hostfs_read_folio, .dirty_folio = filemap_dirty_folio, .write_begin = hostfs_write_begin, .write_end = hostfs_write_end, From patchwork Fri Apr 29 17:25:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832530 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50087C433F5 for ; Fri, 29 Apr 2022 17:27:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232003AbiD2Rae (ORCPT ); Fri, 29 Apr 2022 13:30:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43376 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379532AbiD2R3b (ORCPT ); Fri, 29 Apr 2022 13:29:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B9A9A66D7 for ; Fri, 29 Apr 2022 10:26:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=UOrF5uwFh36CRfSuCcl6334HdyNVUD9aDYbL7xJahM8=; b=XS+4aaOjo33V2lGkFhGPYj0yzN 
hPPdXBEayeacPBHLU+Z7P/0m1MIFGBNOm+aLTynUhPEUGKzax/rPg/GxOcvxJ5ONvGvwD9tdE1Zfe ngNf17A88J10hvLwd+nxAK8WLhkv7Uy25z1rxkv/bGAZaG1OIxr0zZYVLmmUiCTOO+N7kuACqbWaC qzmyvq4liGi90A8xPYFpaG8K3nFZpQZ5wB0X1HntPbJo8Sug1AJ6zvQZwA5yl7T6R/sw8sg4I/wmz Yx3L9Z5AfnP5uDd9U6zK6G0vP4Ko0whFablkhLox51F5hzNhoMJZgi0hZsbdmA0DDiKmVproFVqKO 5GEQVVpA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNd-00CdcS-1x; Fri, 29 Apr 2022 17:26:09 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 55/69] hpfs: Convert symlinks to read_folio Date: Fri, 29 Apr 2022 18:25:42 +0100 Message-Id: <20220429172556.3011843-56-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/hpfs/namei.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/hpfs/namei.c b/fs/hpfs/namei.c index d73f8a67168e..15fc63276caa 100644 --- a/fs/hpfs/namei.c +++ b/fs/hpfs/namei.c @@ -479,8 +479,9 @@ static int hpfs_rmdir(struct inode *dir, struct dentry *dentry) return err; } -static int hpfs_symlink_readpage(struct file *file, struct page *page) +static int hpfs_symlink_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; char *link = page_address(page); struct inode *i = page->mapping->host; struct fnode *fnode; @@ -508,7 +509,7 @@ static int hpfs_symlink_readpage(struct file *file, struct page *page) } const struct address_space_operations hpfs_symlink_aops = { - .readpage = hpfs_symlink_readpage + .read_folio = hpfs_symlink_read_folio }; static int hpfs_rename(struct user_namespace *mnt_userns, struct inode *old_dir, From patchwork Fri Apr 29 17:25:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832535 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 632F5C433F5 for ; Fri, 29 Apr 2022 17:27:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1378101AbiD2Rao (ORCPT ); Fri, 29 Apr 2022 13:30:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43226 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379616AbiD2R3c (ORCPT ); Fri, 29 Apr 2022 13:29:32 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61C8AA66E5 for ; Fri, 29 Apr 2022 10:26:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=jk1UW5fp5fPlTLJ/xWRH7xjgxYLQuIENvNwJAeh2LlM=; b=GUn9BVb1bPozHjyfS+qVueU4ym yrm0jSx+58w2CDEjYms0CP9S+Qgfzbqx/W3OHafzDWXn9q2xxPVga3MhZN9qeWiLYDck8KZZ78K8Q 
iGOgzENS8pNQmxS9OBS87QqZPiWqUNVghmVTt/rQnOVBQw2U4LlcB1HP3oDRglGgqRIxJ70FnZvpR sApVrcUWNrWVtipT8RyO4xwaxMiry4cgC96rIi2o37Q4Zg8j6Fx/DfbgnZkHOzITmL0s8zUWJ4Bz5 e98vqPN9FvgccUqozdt/tm/4tc+TcPTlj7qyZDsFOzkpvocwmlWEwrdGzBxyLbIXAUNkwgv3aNO6i X/DW0j8A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNd-00CdcY-A6; Fri, 29 Apr 2022 17:26:09 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 56/69] isofs: Convert symlinks and zisofs to read_folio Date: Fri, 29 Apr 2022 18:25:43 +0100 Message-Id: <20220429172556.3011843-57-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/isofs/compress.c | 5 +++-- fs/isofs/rock.c | 7 ++++--- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/fs/isofs/compress.c b/fs/isofs/compress.c index bc12ac7e2312..95a19f25d61c 100644 --- a/fs/isofs/compress.c +++ b/fs/isofs/compress.c @@ -296,8 +296,9 @@ static int zisofs_fill_pages(struct inode *inode, int full_page, int pcount, * per reference. We inject the additional pages into the page * cache as a form of readahead. */ -static int zisofs_readpage(struct file *file, struct page *page) +static int zisofs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = file_inode(file); struct address_space *mapping = inode->i_mapping; int err; @@ -369,7 +370,7 @@ static int zisofs_readpage(struct file *file, struct page *page) } const struct address_space_operations zisofs_aops = { - .readpage = zisofs_readpage, + .read_folio = zisofs_read_folio, /* No bmap operation supported */ }; diff --git a/fs/isofs/rock.c b/fs/isofs/rock.c index 4880146babaf..48f58c6c9e69 100644 --- a/fs/isofs/rock.c +++ b/fs/isofs/rock.c @@ -687,11 +687,12 @@ int parse_rock_ridge_inode(struct iso_directory_record *de, struct inode *inode, } /* - * readpage() for symlinks: reads symlink contents into the page and either + * read_folio() for symlinks: reads symlink contents into the folio and either * makes it uptodate and returns 0 or returns error (-EIO) */ -static int rock_ridge_symlink_readpage(struct file *file, struct page *page) +static int rock_ridge_symlink_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct iso_inode_info *ei = ISOFS_I(inode); struct isofs_sb_info *sbi = ISOFS_SB(inode->i_sb); @@ -804,5 +805,5 @@ static int rock_ridge_symlink_readpage(struct file *file, struct page *page) } const struct address_space_operations isofs_symlink_aops = { - .readpage = rock_ridge_symlink_readpage + .read_folio = rock_ridge_symlink_read_folio }; From patchwork Fri Apr 29 17:25:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832533 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
8D6F1C433EF for ; Fri, 29 Apr 2022 17:27:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379685AbiD2Rak (ORCPT ); Fri, 29 Apr 2022 13:30:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43222 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379614AbiD2R3c (ORCPT ); Fri, 29 Apr 2022 13:29:32 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A889BA66E8 for ; Fri, 29 Apr 2022 10:26:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MbxjnmP4ZXSg5OMir1jLNbwogQDgOF2ioH/l9iNR8ww=; b=NlAgHbScUk5bi8UBvz0r4nf+co Raot42c/uNzPJKqyJHITt5KJftE5uJT1T41iz9kAZLTAV8NRAjkD2Fy+3VCmc/MG3ij0k+06Ki4+Z yLfB4AlZ/nttBUi+21GE90r4Pt1Zs9XxQMGrG9Euj44stLyC+B6r3936sROepEWOuNzrafTm/HK0m mT2J7VVEibAEOVOYcm8oCjHu8ip1O8sWgvYyKKQE3S52FbeKodw+Z/db12krOPSzxQDH+5C4aHTO0 7tIjBP1yW/9W+Ld2osfO6feQYW5XnpRuNdyd/NKVXilhMiOz4bUpYvH1Kp14U+fhf7CYPy0L5X9dh 6cCrzLjA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNd-00Cdce-EB; Fri, 29 Apr 2022 17:26:09 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 57/69] jffs2: Convert jffs2 to read_folio Date: Fri, 29 Apr 2022 18:25:44 +0100 Message-Id: <20220429172556.3011843-58-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
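Concretely, the "weak" conversions in this series all reduce to the same thin wrapper: the new ->read_folio entry point pulls the single page out of the folio and hands it to the filesystem's existing page-based helper, which remains responsible for unlocking. A minimal sketch of that shape (example_do_readpage, example_read_folio and example_aops are invented names, not code from jffs2 or any other filesystem):

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * The filesystem's existing helper; assumed to unlock the page itself,
 * exactly as the old ->readpage implementations did.
 */
static int example_do_readpage(struct file *file, struct page *page);

static int example_read_folio(struct file *file, struct folio *folio)
{
	/*
	 * Weak conversion: no large folio support is claimed, so the
	 * folio is a single page and &folio->page is a valid shortcut.
	 */
	return example_do_readpage(file, &folio->page);
}

const struct address_space_operations example_aops = {
	.read_folio	= example_read_folio,
};

Nothing about the I/O path changes; only the aop signature does, which is what makes these conversions mechanical.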
Signed-off-by: Matthew Wilcox (Oracle) --- fs/jffs2/file.c | 10 +++++----- fs/jffs2/fs.c | 2 +- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index 2b35811772de..f8616683fbee 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -27,7 +27,7 @@ static int jffs2_write_end(struct file *filp, struct address_space *mapping, static int jffs2_write_begin(struct file *filp, struct address_space *mapping, loff_t pos, unsigned len, struct page **pagep, void **fsdata); -static int jffs2_readpage (struct file *filp, struct page *pg); +static int jffs2_read_folio(struct file *filp, struct folio *folio); int jffs2_fsync(struct file *filp, loff_t start, loff_t end, int datasync) { @@ -72,7 +72,7 @@ const struct inode_operations jffs2_file_inode_operations = const struct address_space_operations jffs2_file_address_operations = { - .readpage = jffs2_readpage, + .read_folio = jffs2_read_folio, .write_begin = jffs2_write_begin, .write_end = jffs2_write_end, }; @@ -118,13 +118,13 @@ int jffs2_do_readpage_unlock(void *data, struct page *pg) } -static int jffs2_readpage (struct file *filp, struct page *pg) +static int jffs2_read_folio(struct file *file, struct folio *folio) { - struct jffs2_inode_info *f = JFFS2_INODE_INFO(pg->mapping->host); + struct jffs2_inode_info *f = JFFS2_INODE_INFO(folio->mapping->host); int ret; mutex_lock(&f->sem); - ret = jffs2_do_readpage_unlock(pg->mapping->host, pg); + ret = jffs2_do_readpage_unlock(folio->mapping->host, &folio->page); mutex_unlock(&f->sem); return ret; } diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c index 71f03a5d36ed..00a110f40e10 100644 --- a/fs/jffs2/fs.c +++ b/fs/jffs2/fs.c @@ -178,7 +178,7 @@ int jffs2_do_setattr (struct inode *inode, struct iattr *iattr) jffs2_complete_reservation(c); /* We have to do the truncate_setsize() without f->sem held, since - some pages may be locked and waiting for it in readpage(). + some pages may be locked and waiting for it in read_folio(). We are protected from a simultaneous write() extending i_size back past iattr->ia_size, because do_truncate() holds the generic inode semaphore. 
*/ From patchwork Fri Apr 29 17:25:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832532 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63DD2C433F5 for ; Fri, 29 Apr 2022 17:27:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379605AbiD2Rah (ORCPT ); Fri, 29 Apr 2022 13:30:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43224 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379613AbiD2R3c (ORCPT ); Fri, 29 Apr 2022 13:29:32 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D76B8A6E06 for ; Fri, 29 Apr 2022 10:26:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=rfqqXMeB9rJaPeF1zuhJSPvJieW4idxHLaMUnr+URVs=; b=sFfWVN5iJYI5mQdbFqxagd0EeD iNCezQP+ZK7eo3uysp6v6aWgazDcZ6wwWuUZX//WcnkAVvVoNYfyXG6NCo9kfQGvGzUOTEWyp4IhN I4TX78h9+cWSRV3sRHTChyTilddduABV3MTya37fK0YMBXNsF1N89P0DXaGryP+xj6e5jxUGuskQp bjSp/+yH3X9A2QXuPRsc9TAr2wseV4qN/z+wB0+osqjkA9vUfxVN4Hfw19wpEsmgz2PBmHOPO112a XowCEO4/ufO6FZPBeOU9g6TAkbwmcbxrP9meb/DdgavMt6Q9HbFBteBNsdrBLMXbj1cz9MAkqAKKd kw2+TL7g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNd-00Cdck-K1; Fri, 29 Apr 2022 17:26:09 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 58/69] jfs: Convert metadata pages to read_folio Date: Fri, 29 Apr 2022 18:25:45 +0100 Message-Id: <20220429172556.3011843-59-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/jfs/jfs_metapage.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c index c4220ccdedef..2fc78405b3f2 100644 --- a/fs/jfs/jfs_metapage.c +++ b/fs/jfs/jfs_metapage.c @@ -467,8 +467,9 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc) return -EIO; } -static int metapage_readpage(struct file *fp, struct page *page) +static int metapage_read_folio(struct file *fp, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct bio *bio = NULL; int block_offset; @@ -563,7 +564,7 @@ static void metapage_invalidate_folio(struct folio *folio, size_t offset, } const struct address_space_operations jfs_metapage_aops = { - .readpage = metapage_readpage, + .read_folio = metapage_read_folio, .writepage = metapage_writepage, .releasepage = metapage_releasepage, .invalidate_folio = metapage_invalidate_folio, From patchwork Fri Apr 29 17:25:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832548 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D9C9C433FE for ; Fri, 29 Apr 2022 17:27:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379688AbiD2RbH (ORCPT ); Fri, 29 Apr 2022 13:31:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379618AbiD2R3c (ORCPT ); Fri, 29 Apr 2022 13:29:32 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4DEA7A6E1D for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=i6g5f+22tHBDTbtx3SYZWrTrLEq/pguWRferaojoIv4=; b=qjce6aUHQPnQp6PnLcOBRRCV8U FQksiTvUr1H2YfTzhkcCjQXcpgyy/4WYDgGj9AweWbxRpU6Li7pzr3ADfVv2boh1pZ44CnIai24xc /a7KNKl+aa0INcFu6RNwuZeS9j4N4RNwApyzTLmszZEuDFulDvgk/vVfYLMvGyb4eFlV1OJTEYeNF LaI6SU8hUVFBIMBGBB5prfK8T2OklwZ3FJH2fFBfSJ36jHFT8jbG86ul/fvmBKkxO+tFy3uCLYaPD yuGpoESzwwV4VaGLJbRJ7oChRIKfhdu4jN2fFe3cLsaR10kNuWMP8Os/5i0H+VVu4LTuHw+ajs9l/ nkt3J14A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNd-00Cdcp-OF; Fri, 29 Apr 2022 17:26:09 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 59/69] nfs: Convert nfs to read_folio Date: Fri, 29 Apr 2022 18:25:46 +0100 Message-Id: <20220429172556.3011843-60-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
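Besides renaming the aop, this patch also fixes up an internal caller (nfs_write_begin()) that still holds a struct page; at such call sites the bridge is simply page_folio(), whether the routine is called directly or through mapping->a_ops. As an illustration of the calling convention only (read_one_page is an invented name, not an NFS function):

/* Bridge page-based code to the folio-taking aop. */
static int read_one_page(struct file *file, struct page *page)
{
	struct address_space *mapping = page->mapping;

	return mapping->a_ops->read_folio(file, page_folio(page));
}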
Signed-off-by: Matthew Wilcox (Oracle) --- fs/nfs/file.c | 4 ++-- fs/nfs/read.c | 3 ++- include/linux/nfs_fs.h | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index f05c4b18b681..4f6d1f90b87f 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -337,7 +337,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping, } else if (!once_thru && nfs_want_read_modify_write(file, page, pos, len)) { once_thru = 1; - ret = nfs_readpage(file, page); + ret = nfs_read_folio(file, page_folio(page)); put_page(page); if (!ret) goto start; @@ -514,7 +514,7 @@ static void nfs_swap_deactivate(struct file *file) } const struct address_space_operations nfs_file_aops = { - .readpage = nfs_readpage, + .read_folio = nfs_read_folio, .readahead = nfs_readahead, .dirty_folio = filemap_dirty_folio, .writepage = nfs_writepage, diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 5e7657374bc3..5a9b043662e9 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -333,8 +333,9 @@ readpage_async_filler(struct nfs_readdesc *desc, struct page *page) * - The error flag is set for this page. This happens only when a * previous async read operation failed. */ -int nfs_readpage(struct file *file, struct page *page) +int nfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct nfs_readdesc desc; struct inode *inode = page_file_mapping(page)->host; int ret; diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h index b48b9259e02c..1bba71757d62 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -594,7 +594,7 @@ static inline bool nfs_have_writebacks(const struct inode *inode) /* * linux/fs/nfs/read.c */ -extern int nfs_readpage(struct file *, struct page *); +int nfs_read_folio(struct file *, struct folio *); void nfs_readahead(struct readahead_control *); /* From patchwork Fri Apr 29 17:25:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832537 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 836F2C433F5 for ; Fri, 29 Apr 2022 17:27:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379690AbiD2Ras (ORCPT ); Fri, 29 Apr 2022 13:30:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44372 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379633AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E094A6E1E for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=JjSBvqXioFat+k9nej9A9hxsku2dzX1ixwNpMvQ6+/c=; b=IHqOYvkO2Vyc+w3/7zLUHlmtDO PDGEt6FcWboTanh6ks6/fVcfshkpWemcl15Xxk7QGrffOqQ5vpC0bisfHLky0WLYgAxQvkmY0iTg0 XuaNx2ZO3txLKwRLXgUAAUO/R/oeNWxvpE8lIGNZVh7zVBp/NF0glyxI7sQ/gb5Gs8v9N2ywfHBaz 28HGTYPcr5sdje5mz6qJ19v7RlZ5eGksn+lQrBjMTsPL2c5T/ugHmBWj5AsxPu2JKxzkD91YHpmZQ 1TGQDHYjq8bLXOjlMkXG3btoVPEQIJnrFK7JuJkF6s2Vr512VGdA76Zv8cWpTrrB3r0nS6On9sh+A 
n1pj9Kpw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNd-00Cdcw-V5; Fri, 29 Apr 2022 17:26:10 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 60/69] ntfs: Convert ntfs to read_folio Date: Fri, 29 Apr 2022 18:25:47 +0100 Message-Id: <20220429172556.3011843-61-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/ntfs/aops.c | 40 +++++++++++++++++++++------------------- fs/ntfs/aops.h | 6 +++--- fs/ntfs/attrib.c | 2 +- fs/ntfs/file.c | 4 ++-- fs/ntfs/inode.c | 4 ++-- fs/ntfs/mft.h | 2 +- 6 files changed, 30 insertions(+), 28 deletions(-) diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c index 90e3dad8ee45..9e3964ea2ea0 100644 --- a/fs/ntfs/aops.c +++ b/fs/ntfs/aops.c @@ -159,7 +159,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate) * * Return 0 on success and -errno on error. * - * Contains an adapted version of fs/buffer.c::block_read_full_page(). + * Contains an adapted version of fs/buffer.c::block_read_full_folio(). */ static int ntfs_read_block(struct page *page) { @@ -358,16 +358,16 @@ static int ntfs_read_block(struct page *page) } /** - * ntfs_readpage - fill a @page of a @file with data from the device - * @file: open file to which the page @page belongs or NULL - * @page: page cache page to fill with data + * ntfs_read_folio - fill a @folio of a @file with data from the device + * @file: open file to which the folio @folio belongs or NULL + * @folio: page cache folio to fill with data * - * For non-resident attributes, ntfs_readpage() fills the @page of the open - * file @file by calling the ntfs version of the generic block_read_full_page() + * For non-resident attributes, ntfs_read_folio() fills the @folio of the open + * file @file by calling the ntfs version of the generic block_read_full_folio() * function, ntfs_read_block(), which in turn creates and reads in the buffers - * associated with the page asynchronously. + * associated with the folio asynchronously. * - * For resident attributes, OTOH, ntfs_readpage() fills @page by copying the + * For resident attributes, OTOH, ntfs_read_folio() fills @folio by copying the * data from the mft record (which at this stage is most likely in memory) and * fills the remainder with zeroes. Thus, in this case, I/O is synchronous, as * even if the mft record is not cached at this point in time, we need to wait @@ -375,8 +375,9 @@ static int ntfs_read_block(struct page *page) * * Return 0 on success and -errno on error. */ -static int ntfs_readpage(struct file *file, struct page *page) +static int ntfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; loff_t i_size; struct inode *vi; ntfs_inode *ni, *base_ni; @@ -458,7 +459,7 @@ static int ntfs_readpage(struct file *file, struct page *page) } /* * If a parallel write made the attribute non-resident, drop the mft - * record and retry the readpage. + * record and retry the read_folio. 
*/ if (unlikely(NInoNonResident(ni))) { unmap_mft_record(base_ni); @@ -637,10 +638,11 @@ static int ntfs_write_block(struct page *page, struct writeback_control *wbc) if (unlikely((block >= iblock) && (initialized_size < i_size))) { /* - * If this page is fully outside initialized size, zero - * out all pages between the current initialized size - * and the current page. Just use ntfs_readpage() to do - * the zeroing transparently. + * If this page is fully outside initialized + * size, zero out all pages between the current + * initialized size and the current page. Just + * use ntfs_read_folio() to do the zeroing + * transparently. */ if (block > iblock) { // TODO: @@ -798,7 +800,7 @@ static int ntfs_write_block(struct page *page, struct writeback_control *wbc) /* For the error case, need to reset bh to the beginning. */ bh = head; - /* Just an optimization, so ->readpage() is not called later. */ + /* Just an optimization, so ->read_folio() is not called later. */ if (unlikely(!PageUptodate(page))) { int uptodate = 1; do { @@ -1329,7 +1331,7 @@ static int ntfs_write_mst_block(struct page *page, * vfs inode dirty code path for the inode the mft record belongs to or via the * vm page dirty code path for the page the mft record is in. * - * Based on ntfs_readpage() and fs/buffer.c::block_write_full_page(). + * Based on ntfs_read_folio() and fs/buffer.c::block_write_full_page(). * * Return 0 on success and -errno on error. */ @@ -1651,7 +1653,7 @@ static sector_t ntfs_bmap(struct address_space *mapping, sector_t block) * attributes. */ const struct address_space_operations ntfs_normal_aops = { - .readpage = ntfs_readpage, + .read_folio = ntfs_read_folio, #ifdef NTFS_RW .writepage = ntfs_writepage, .dirty_folio = block_dirty_folio, @@ -1666,7 +1668,7 @@ const struct address_space_operations ntfs_normal_aops = { * ntfs_compressed_aops - address space operations for compressed inodes */ const struct address_space_operations ntfs_compressed_aops = { - .readpage = ntfs_readpage, + .read_folio = ntfs_read_folio, #ifdef NTFS_RW .writepage = ntfs_writepage, .dirty_folio = block_dirty_folio, @@ -1681,7 +1683,7 @@ const struct address_space_operations ntfs_compressed_aops = { * and attributes */ const struct address_space_operations ntfs_mst_aops = { - .readpage = ntfs_readpage, /* Fill page with data. */ + .read_folio = ntfs_read_folio, /* Fill page with data. */ #ifdef NTFS_RW .writepage = ntfs_writepage, /* Write dirty page to disk. */ .dirty_folio = filemap_dirty_folio, diff --git a/fs/ntfs/aops.h b/fs/ntfs/aops.h index f0962d46bd67..934d5f79b9e7 100644 --- a/fs/ntfs/aops.h +++ b/fs/ntfs/aops.h @@ -37,9 +37,9 @@ static inline void ntfs_unmap_page(struct page *page) * Read a page from the page cache of the address space @mapping at position * @index, where @index is in units of PAGE_SIZE, and not in bytes. * - * If the page is not in memory it is loaded from disk first using the readpage - * method defined in the address space operations of @mapping and the page is - * added to the page cache of @mapping in the process. + * If the page is not in memory it is loaded from disk first using the + * read_folio method defined in the address space operations of @mapping + * and the page is added to the page cache of @mapping in the process. 
* * If the page belongs to an mst protected attribute and it is marked as such * in its ntfs inode (NInoMstProtected()) the mst fixups are applied but no diff --git a/fs/ntfs/attrib.c b/fs/ntfs/attrib.c index 2911c04a33e0..4de597a83b88 100644 --- a/fs/ntfs/attrib.c +++ b/fs/ntfs/attrib.c @@ -1719,7 +1719,7 @@ int ntfs_attr_make_non_resident(ntfs_inode *ni, const u32 data_size) vi->i_blocks = ni->allocated_size >> 9; write_unlock_irqrestore(&ni->size_lock, flags); /* - * This needs to be last since the address space operations ->readpage + * This needs to be last since the address space operations ->read_folio * and ->writepage can run concurrently with us as they are not * serialized on i_mutex. Note, we are not allowed to fail once we flip * this switch, which is another reason to do this last. diff --git a/fs/ntfs/file.c b/fs/ntfs/file.c index 2ae25e48a41a..e1392a9b8ceb 100644 --- a/fs/ntfs/file.c +++ b/fs/ntfs/file.c @@ -251,14 +251,14 @@ static int ntfs_attr_extend_initialized(ntfs_inode *ni, const s64 new_init_size) * * TODO: For sparse pages could optimize this workload by using * the FsMisc / MiscFs page bit as a "PageIsSparse" bit. This - * would be set in readpage for sparse pages and here we would + * would be set in read_folio for sparse pages and here we would * not need to mark dirty any pages which have this bit set. * The only caveat is that we have to clear the bit everywhere * where we allocate any clusters that lie in the page or that * contain the page. * * TODO: An even greater optimization would be for us to only - * call readpage() on pages which are not in sparse regions as + * call read_folio() on pages which are not in sparse regions as * determined from the runlist. This would greatly reduce the * number of pages we read and make dirty in the case of sparse * files. diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c index efe0602b4e51..db0f1995aedd 100644 --- a/fs/ntfs/inode.c +++ b/fs/ntfs/inode.c @@ -1832,7 +1832,7 @@ int ntfs_read_inode_mount(struct inode *vi) /* Need this to sanity check attribute list references to $MFT. */ vi->i_generation = ni->seq_no = le16_to_cpu(m->sequence_number); - /* Provides readpage() for map_mft_record(). */ + /* Provides read_folio() for map_mft_record(). */ vi->i_mapping->a_ops = &ntfs_mst_aops; ctx = ntfs_attr_get_search_ctx(ni, m); @@ -2503,7 +2503,7 @@ int ntfs_truncate(struct inode *vi) * between the old data_size, i.e. old_size, and the new_size * has not been zeroed. Fortunately, we do not need to zero it * either since on one hand it will either already be zero due - * to both readpage and writepage clearing partial page data + * to both read_folio and writepage clearing partial page data * beyond i_size in which case there is nothing to do or in the * case of the file being mmap()ped at the same time, POSIX * specifies that the behaviour is unspecified thus we do not diff --git a/fs/ntfs/mft.h b/fs/ntfs/mft.h index 17bfefc30271..49c001af16ed 100644 --- a/fs/ntfs/mft.h +++ b/fs/ntfs/mft.h @@ -79,7 +79,7 @@ extern int write_mft_record_nolock(ntfs_inode *ni, MFT_RECORD *m, int sync); * paths and via the page cache write back code paths or between writing * neighbouring mft records residing in the same page. * - * Locking the page also serializes us against ->readpage() if the page is not + * Locking the page also serializes us against ->read_folio() if the page is not * uptodate. * * On success, clean the mft record and return 0. 
On error, leave the mft From patchwork Fri Apr 29 17:25:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832547 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49F99C433EF for ; Fri, 29 Apr 2022 17:27:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379643AbiD2RbF (ORCPT ); Fri, 29 Apr 2022 13:31:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379619AbiD2R3c (ORCPT ); Fri, 29 Apr 2022 13:29:32 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E298A6E21 for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=UyiQtrOlJApVo8TqEZs/5bvdj5z9rp2u4At+a4oNQzQ=; b=VhaC8U35yC590pCxrbYl4QtkdD JmFfOkLuvJfWw8yE4w4JgDeD3Ty3vMgvIKbr7JohmcovdNEM+dVNAqmaUy5loFO0V6Ky6T6T3GreA rWF1TeASBH8mzVpr2TtL2mkM5jeeFLQwdw4/US2hxLy/3KEpESGClFL/dS/f4OCt3lEHyjmfOB4FF ME90fu9KcdKK7rn4e/g36y9fR1SqdSp/XSILaAnJIKoWFTTKiABCjg/PzEh4sLaRNsVmJu3gt1GKC dLL8NoFY/5OCfHRbiOH4fs0r8JbSIzTG3KKT42wwktQjVRLAYaafdL5SFLZ24e5ic2jmcHdPfbC3z moQ+FIgA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNe-00Cdd4-DI; Fri, 29 Apr 2022 17:26:10 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 61/69] ocfs2: Convert ocfs2 to read_folio Date: Fri, 29 Apr 2022 18:25:48 +0100 Message-Id: <20220429172556.3011843-62-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/ocfs2/alloc.c | 2 +- fs/ocfs2/aops.c | 5 +++-- fs/ocfs2/file.c | 2 +- fs/ocfs2/symlink.c | 5 +++-- 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c index 49f41074baad..51c93929a146 100644 --- a/fs/ocfs2/alloc.c +++ b/fs/ocfs2/alloc.c @@ -7427,7 +7427,7 @@ int ocfs2_truncate_inline(struct inode *inode, struct buffer_head *di_bh, /* * No need to worry about the data page here - it's been * truncated already and inline data doesn't need it for - * pushing zero's to disk, so we'll let readpage pick it up + * pushing zero's to disk, so we'll let read_folio pick it up * later. 
*/ if (trunc) { diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index 7bf4b6fd93bf..6b1679db9636 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -275,8 +275,9 @@ static int ocfs2_readpage_inline(struct inode *inode, struct page *page) return ret; } -static int ocfs2_readpage(struct file *file, struct page *page) +static int ocfs2_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct ocfs2_inode_info *oi = OCFS2_I(inode); loff_t start = (loff_t)page->index << PAGE_SHIFT; @@ -2454,7 +2455,7 @@ static ssize_t ocfs2_direct_IO(struct kiocb *iocb, struct iov_iter *iter) const struct address_space_operations ocfs2_aops = { .dirty_folio = block_dirty_folio, - .readpage = ocfs2_readpage, + .read_folio = ocfs2_read_folio, .readahead = ocfs2_readahead, .writepage = ocfs2_writepage, .write_begin = ocfs2_write_begin, diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c index 01b7407a8893..7497cd592258 100644 --- a/fs/ocfs2/file.c +++ b/fs/ocfs2/file.c @@ -2526,7 +2526,7 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb, return -EOPNOTSUPP; /* - * buffered reads protect themselves in ->readpage(). O_DIRECT reads + * buffered reads protect themselves in ->read_folio(). O_DIRECT reads * need locks to protect pending reads from racing with truncate. */ if (direct_io) { diff --git a/fs/ocfs2/symlink.c b/fs/ocfs2/symlink.c index f755a4985821..d4c5fdcfa1e4 100644 --- a/fs/ocfs2/symlink.c +++ b/fs/ocfs2/symlink.c @@ -52,8 +52,9 @@ #include "buffer_head_io.h" -static int ocfs2_fast_symlink_readpage(struct file *unused, struct page *page) +static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct buffer_head *bh = NULL; int status = ocfs2_read_inode_block(inode, &bh); @@ -81,7 +82,7 @@ static int ocfs2_fast_symlink_readpage(struct file *unused, struct page *page) } const struct address_space_operations ocfs2_fast_symlink_aops = { - .readpage = ocfs2_fast_symlink_readpage, + .read_folio = ocfs2_fast_symlink_read_folio, }; const struct inode_operations ocfs2_symlink_inode_operations = { From patchwork Fri Apr 29 17:25:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832539 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD6DBC433EF for ; Fri, 29 Apr 2022 17:27:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379682AbiD2Rav (ORCPT ); Fri, 29 Apr 2022 13:30:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379629AbiD2R3p (ORCPT ); Fri, 29 Apr 2022 13:29:45 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7B219A0BE4 for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=aH16BpIhqvfVPG+WJB3hRIv4pV6OIvK+9fdHP6ll06M=; 
b=WO8IFe/uH26sLqxXXZogOgPe8m /hbZtWYgJuEm3RjL1GFG9R0MJxJciwaQFnEE+PRwRGPrU7a2ospgCFhfoSniOv6+A2bnvYK6q/QET qzox9fjHzg0LrwEeWmXGLHLXIwwgxXrZFWM6PTNIiWgJGTvzJ/D1px91fuHCnF4sQkdPvyuTxwVeF NjuvcMsWaHPBjvrm9K8SyJO4lELMWCCMPEiZRFJDoHERlHovPMBJZcuYAIBrzySZqOc7II3qvAvvz tsx1Z4iHUIkDEaRJ2o1p6tPy1L1rgUGbyC0LAdubjbHg1M3iChz9OT8ZJ8ACEH5DW/GVvPPOM23ly xgrIo6cA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNe-00Cdd9-H4; Fri, 29 Apr 2022 17:26:10 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 62/69] orangefs: Convert orangefs to read_folio Date: Fri, 29 Apr 2022 18:25:49 +0100 Message-Id: <20220429172556.3011843-63-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a full conversion which should be large folio ready, although I have not tested it. Signed-off-by: Matthew Wilcox (Oracle) --- fs/orangefs/inode.c | 33 ++++++++++++++++----------------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c index bc7ccd15d7a3..241ac21f527b 100644 --- a/fs/orangefs/inode.c +++ b/fs/orangefs/inode.c @@ -288,40 +288,39 @@ static void orangefs_readahead(struct readahead_control *rac) } } -static int orangefs_readpage(struct file *file, struct page *page) +static int orangefs_read_folio(struct file *file, struct folio *folio) { - struct folio *folio = page_folio(page); - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; struct iov_iter iter; struct bio_vec bv; ssize_t ret; - loff_t off; /* offset into this page */ + loff_t off; /* offset of this folio in the file */ if (folio_test_dirty(folio)) orangefs_launder_folio(folio); - off = page_offset(page); - bv.bv_page = page; - bv.bv_len = PAGE_SIZE; + off = folio_pos(folio); + bv.bv_page = &folio->page; + bv.bv_len = folio_size(folio); bv.bv_offset = 0; - iov_iter_bvec(&iter, READ, &bv, 1, PAGE_SIZE); + iov_iter_bvec(&iter, READ, &bv, 1, folio_size(folio)); ret = wait_for_direct_io(ORANGEFS_IO_READ, inode, &off, &iter, - PAGE_SIZE, inode->i_size, NULL, NULL, file); + folio_size(folio), inode->i_size, NULL, NULL, file); /* this will only zero remaining unread portions of the page data */ iov_iter_zero(~0U, &iter); /* takes care of potential aliasing */ - flush_dcache_page(page); + flush_dcache_folio(folio); if (ret < 0) { - SetPageError(page); + folio_set_error(folio); } else { - SetPageUptodate(page); - if (PageError(page)) - ClearPageError(page); + folio_mark_uptodate(folio); + if (folio_test_error(folio)) + folio_clear_error(folio); ret = 0; } - /* unlock the page after the ->readpage() routine completes */ - unlock_page(page); + /* unlock the folio after the ->read_folio() routine completes */ + folio_unlock(folio); return ret; } @@ -631,7 +630,7 @@ static ssize_t orangefs_direct_IO(struct kiocb *iocb, static const struct address_space_operations orangefs_address_operations = { .writepage = orangefs_writepage, .readahead = orangefs_readahead, - .readpage = orangefs_readpage, + .read_folio = orangefs_read_folio, .writepages = orangefs_writepages, .dirty_folio = filemap_dirty_folio, .write_begin = orangefs_write_begin, From patchwork Fri Apr 29 17:25:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832536 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12230C433EF for ; Fri, 29 Apr 2022 17:27:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230170AbiD2Rar (ORCPT ); Fri, 29 Apr 2022 13:30:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44414 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379635AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7B01A6E36 for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=8H/SPWZE91QCw1tKsAIWypBRoIZkX5Fqb49XpUP2PFg=; b=VI3eZ48LXcQp2H8NqcJWC+U3+6 b+RNC2dlMjB0alnnu7ywzn4VGIhEP7i/zbB3siFHe9ZS98rG6HgJYAlYJleR2mvOMuga0Fu9ARHzi 7H2T/FxLe2Z79dgl3UPSFAGTOmb6aE/tGEb3sbcrbZJD9UIRd6KHmBwTbQu0SfwvfADUJzO10UCJS 9e9BB0Netnh0BPk9/vAhU5E/BjPX5mhuG3x2yq0oC9fheGIVeW/bZayTea5VvT8LsCBRhXGvZFQRU 95l8Q+Cvw1PRy0tcCdVPrHKINZ0qCYYV0BDSzg+dv8FT7vxbChpUh+xavgUdxSRLvfjhCzj5hv2im mi38AF/A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNe-00CddE-KY; Fri, 29 Apr 2022 17:26:10 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 63/69] romfs: Convert romfs to read_folio Date: Fri, 29 Apr 2022 18:25:50 +0100 Message-Id: <20220429172556.3011843-64-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
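For comparison, a "full" conversion (the orangefs patch earlier in this series is one) stops reaching for &folio->page and works entirely in folio terms: folio_pos() and folio_size() instead of page_offset() and PAGE_SIZE, folio_mark_uptodate()/folio_set_error() instead of the page flag helpers, and folio_unlock() at the end. Roughly, and purely as an illustration (example_fill_data() is an invented stand-in for the filesystem's real copy-and-zero routine):

static int example_read_folio(struct file *file, struct folio *folio)
{
	struct inode *inode = folio->mapping->host;
	loff_t pos = folio_pos(folio);
	size_t len = folio_size(folio);
	int err;

	/* Fill the folio from 'pos', zeroing anything past EOF. */
	err = example_fill_data(inode, folio, pos, len);
	if (err)
		folio_set_error(folio);
	else
		folio_mark_uptodate(folio);
	folio_unlock(folio);
	return err;
}

Written this way the routine keeps working unchanged if the filesystem later advertises large folio support.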
Signed-off-by: Matthew Wilcox (Oracle) --- fs/romfs/super.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/fs/romfs/super.c b/fs/romfs/super.c index 9e6bbb4219de..c59b230d55b4 100644 --- a/fs/romfs/super.c +++ b/fs/romfs/super.c @@ -18,7 +18,7 @@ * Changed for 2.1.19 modules * Jan 1997 Initial release * Jun 1997 2.1.43+ changes - * Proper page locking in readpage + * Proper page locking in read_folio * Changed to work with 2.1.45+ fs * Jul 1997 Fixed follow_link * 2.1.47 @@ -41,7 +41,7 @@ * dentries in lookup * clean up page flags setting * (error, uptodate, locking) in - * in readpage + * in read_folio * use init_special_inode for * fifos/sockets (and streamline) in * read_inode, fix _ops table order @@ -99,8 +99,9 @@ static struct inode *romfs_iget(struct super_block *sb, unsigned long pos); /* * read a page worth of data from the image */ -static int romfs_readpage(struct file *file, struct page *page) +static int romfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; loff_t offset, size; unsigned long fillsize, pos; @@ -142,7 +143,7 @@ static int romfs_readpage(struct file *file, struct page *page) } static const struct address_space_operations romfs_aops = { - .readpage = romfs_readpage + .read_folio = romfs_read_folio }; /* From patchwork Fri Apr 29 17:25:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832538 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A162C433FE for ; Fri, 29 Apr 2022 17:27:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379711AbiD2Rau (ORCPT ); Fri, 29 Apr 2022 13:30:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43174 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379519AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA14DA6E23 for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=5hLucSIlWDT7gvjGd1USQbBcvVhzGdatd4MGsucOT5s=; b=F37OYXJBaaL1kbIepcSiiBWZsh QXoTxghyd7mgtoKYEyjNYn7QRQojmz5KeyFkhyrfskbqlkuFKe0IkYMvdGEEkBmsn/amiWklkOZWk NH7njvlSTtj4pPRaCqzqIC/ae8wlmr/p8ojmn7iEpadPmR1NllUKzyckYkyd0cnzWV/EMgwMMKFUJ MGRTCwG4EFOcqgQVlDOe/yfykp9eefE3MM5ds8mbpSqn0ufwXstzqCqFWYgQOx0juGwlog37AW0SD MKgvchLAVj7vHpN4TA0yjBcZkxdbj5IOLMOa8eQW5AEO7z5TZVMTvLYk8LMaxN4w4If+ak+MMKE/Z MLlvgxtQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNe-00CddT-P1; Fri, 29 Apr 2022 17:26:10 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 64/69] squashfs: Convert squashfs to read_folio Date: Fri, 29 Apr 2022 18:25:51 +0100 Message-Id: <20220429172556.3011843-65-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: 
<20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/squashfs/file.c | 5 +++-- fs/squashfs/super.c | 2 +- fs/squashfs/symlink.c | 5 +++-- 3 files changed, 7 insertions(+), 5 deletions(-) diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c index 89d492916dea..a8e495d8eb86 100644 --- a/fs/squashfs/file.c +++ b/fs/squashfs/file.c @@ -444,8 +444,9 @@ static int squashfs_readpage_sparse(struct page *page, int expected) return 0; } -static int squashfs_readpage(struct file *file, struct page *page) +static int squashfs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info; int index = page->index >> (msblk->block_log - PAGE_SHIFT); @@ -496,5 +497,5 @@ static int squashfs_readpage(struct file *file, struct page *page) const struct address_space_operations squashfs_aops = { - .readpage = squashfs_readpage + .read_folio = squashfs_read_folio }; diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c index 4f74abbc1a54..6d594ba2ed28 100644 --- a/fs/squashfs/super.c +++ b/fs/squashfs/super.c @@ -148,7 +148,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc) /* * squashfs provides 'backing_dev_info' in order to disable read-ahead. For - * squashfs, I/O is not deferred, it is done immediately in readpage, + * squashfs, I/O is not deferred, it is done immediately in read_folio, * which means the user would always have to wait their own I/O. So the effect * of readahead is very weak for squashfs. 
squashfs_bdi_init will set * sb->s_bdi->ra_pages and sb->s_bdi->io_pages to 0 and close readahead for diff --git a/fs/squashfs/symlink.c b/fs/squashfs/symlink.c index 1430613183e6..2bf977a52c2c 100644 --- a/fs/squashfs/symlink.c +++ b/fs/squashfs/symlink.c @@ -30,8 +30,9 @@ #include "squashfs.h" #include "xattr.h" -static int squashfs_symlink_readpage(struct file *file, struct page *page) +static int squashfs_symlink_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct super_block *sb = inode->i_sb; struct squashfs_sb_info *msblk = sb->s_fs_info; @@ -101,7 +102,7 @@ static int squashfs_symlink_readpage(struct file *file, struct page *page) const struct address_space_operations squashfs_symlink_aops = { - .readpage = squashfs_symlink_readpage + .read_folio = squashfs_symlink_read_folio }; const struct inode_operations squashfs_symlink_inode_ops = { From patchwork Fri Apr 29 17:25:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832543 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16DCAC433F5 for ; Fri, 29 Apr 2022 17:27:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379672AbiD2Ra6 (ORCPT ); Fri, 29 Apr 2022 13:30:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44422 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379638AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7722A6E31 for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=fYNWAWF0UHuB+svW5UzmqrUpnQ209GYdqP5ggOpbJzM=; b=X8q/B7NbfVv6URviCjtAELqTji LmAk527mPCzgNJBqtsp+XSsuCeZWnj02CkkrSUcugK90BIeN1yoojyg7m8p3wky7fnCTbuCLchjQk LyGufQ32rqeoQ2STs98pYIin1wPaI72es1Q7P6Fr8DmZEBuUzR3ZCsGCKwAYvbnOveuWkbZEN89ha 3rWfJhU1ochFYo+sF6lYJrBwDcrkkFk9Qc4Xqk9+dNobMdHUXELexx9GRKvXRjtGLa7wXQlRxSlnG 8kdBRCbl/KlJ9e4nKTuV2YGbVpNDVVaky2TpKdZrdRAkSrZppsEMfZg6YxYtF977lg2O8kLtleoLS ucVikgXg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNe-00CddY-Vh; Fri, 29 Apr 2022 17:26:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 65/69] ubifs: Convert ubifs to read_folio Date: Fri, 29 Apr 2022 18:25:52 +0100 Message-Id: <20220429172556.3011843-66-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/ubifs/file.c | 12 +++++++----- fs/ubifs/super.c | 2 +- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index 81c085c4decf..7cbf2edf8907 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -31,9 +31,9 @@ * in the "sys_write -> alloc_pages -> direct reclaim path". So, in * 'ubifs_writepage()' we are only guaranteed that the page is locked. * - * Similarly, @i_mutex is not always locked in 'ubifs_readpage()', e.g., the + * Similarly, @i_mutex is not always locked in 'ubifs_read_folio()', e.g., the * read-ahead path does not lock it ("sys_read -> generic_file_aio_read -> - * ondemand_readahead -> readpage"). In case of readahead, @I_SYNC flag is not + * ondemand_readahead -> read_folio"). In case of readahead, @I_SYNC flag is not * set as well. However, UBIFS disables readahead. */ @@ -889,12 +889,14 @@ static int ubifs_bulk_read(struct page *page) return err; } -static int ubifs_readpage(struct file *file, struct page *page) +static int ubifs_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; + if (ubifs_bulk_read(page)) return 0; do_readpage(page); - unlock_page(page); + folio_unlock(folio); return 0; } @@ -1641,7 +1643,7 @@ static int ubifs_symlink_getattr(struct user_namespace *mnt_userns, } const struct address_space_operations ubifs_file_address_operations = { - .readpage = ubifs_readpage, + .read_folio = ubifs_read_folio, .writepage = ubifs_writepage, .write_begin = ubifs_write_begin, .write_end = ubifs_write_end, diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c index bad67455215f..0978d01b0ea4 100644 --- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -2191,7 +2191,7 @@ static int ubifs_fill_super(struct super_block *sb, void *data, int silent) /* * UBIFS provides 'backing_dev_info' in order to disable read-ahead. For - * UBIFS, I/O is not deferred, it is done immediately in readpage, + * UBIFS, I/O is not deferred, it is done immediately in read_folio, * which means the user would have to wait not just for their own I/O * but the read-ahead I/O as well i.e. completely pointless. 
* From patchwork Fri Apr 29 17:25:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B962C433EF for ; Fri, 29 Apr 2022 17:27:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379684AbiD2RbC (ORCPT ); Fri, 29 Apr 2022 13:31:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379637AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ED0DBA6E3A for ; Fri, 29 Apr 2022 10:26:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=YgXlVyE6uLpqizpQn34DWWJORGr525Z9ytETikZF540=; b=k9Uv25E5rpvkW5X2jEjWXNk4Q5 FW5dLhiaczEcS0X1Tic1guJlhdX8XsWH2NDiSgoP6xe5QGlV24/ToM8iLYJGb6pJBZRuzhl3ZYebj yfV5US9KwfOkOKs681JyqKDIGJevbRTivrB1++LpSAqQ15VPXR6YAvDuGet9apuSWLReaz0zmw4d8 HISTYvaZF2i/1eBGOmuYZk5me1eyEjUMu+4JLxr405+yrQCBSbErRPK76FkJPjrMFIKmBGmUaLoag Zj36ZtPrrOXvK5OXCbzHhWpOLADYHiGkra12aNentdywivthTb3ZujBFDagnIgkpQ9DvnsupLVPUr 3MNdYJ9w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNf-00Cdde-3V; Fri, 29 Apr 2022 17:26:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 66/69] udf: Convert adinicb and symlinks to read_folio Date: Fri, 29 Apr 2022 18:25:53 +0100 Message-Id: <20220429172556.3011843-67-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. 
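The symlink conversions in this series (hpfs, isofs Rock Ridge, squashfs and this one) share a single pattern: read_folio decodes the on-disk link into the page backing the folio, marks it uptodate on success and unlocks it before returning. A composite sketch of that pattern, with example_decode_target() standing in for the per-filesystem decoding and no claim to match any of them exactly:

static int example_symlink_read_folio(struct file *file, struct folio *folio)
{
	struct page *page = &folio->page;
	struct inode *inode = page->mapping->host;
	char *link = page_address(page);
	int err;

	/* Produce a NUL-terminated path in the page. */
	err = example_decode_target(inode, link, PAGE_SIZE);
	if (err)
		SetPageError(page);
	else
		SetPageUptodate(page);
	unlock_page(page);
	return err;
}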
Signed-off-by: Matthew Wilcox (Oracle) --- fs/udf/file.c | 10 +++++----- fs/udf/symlink.c | 5 +++-- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/fs/udf/file.c b/fs/udf/file.c index 3f4d5c44c784..09aef77269fe 100644 --- a/fs/udf/file.c +++ b/fs/udf/file.c @@ -57,11 +57,11 @@ static void __udf_adinicb_readpage(struct page *page) kunmap_atomic(kaddr); } -static int udf_adinicb_readpage(struct file *file, struct page *page) +static int udf_adinicb_read_folio(struct file *file, struct folio *folio) { - BUG_ON(!PageLocked(page)); - __udf_adinicb_readpage(page); - unlock_page(page); + BUG_ON(!folio_test_locked(folio)); + __udf_adinicb_readpage(&folio->page); + folio_unlock(folio); return 0; } @@ -127,7 +127,7 @@ static int udf_adinicb_write_end(struct file *file, struct address_space *mappin const struct address_space_operations udf_adinicb_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, - .readpage = udf_adinicb_readpage, + .read_folio = udf_adinicb_read_folio, .writepage = udf_adinicb_writepage, .write_begin = udf_adinicb_write_begin, .write_end = udf_adinicb_write_end, diff --git a/fs/udf/symlink.c b/fs/udf/symlink.c index 9b223421a3c5..f3642f9c23f8 100644 --- a/fs/udf/symlink.c +++ b/fs/udf/symlink.c @@ -101,8 +101,9 @@ static int udf_pc_to_char(struct super_block *sb, unsigned char *from, return 0; } -static int udf_symlink_filler(struct file *file, struct page *page) +static int udf_symlink_filler(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct inode *inode = page->mapping->host; struct buffer_head *bh = NULL; unsigned char *symlink; @@ -183,7 +184,7 @@ static int udf_symlink_getattr(struct user_namespace *mnt_userns, * symlinks can't do much... */ const struct address_space_operations udf_symlink_aops = { - .readpage = udf_symlink_filler, + .read_folio = udf_symlink_filler, }; const struct inode_operations udf_symlink_inode_operations = { From patchwork Fri Apr 29 17:25:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832542 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 971F1C433EF for ; Fri, 29 Apr 2022 17:27:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379653AbiD2Raz (ORCPT ); Fri, 29 Apr 2022 13:30:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379636AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 405F69E9D2 for ; Fri, 29 Apr 2022 10:26:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=FBVTiOMvlPZSoDA0qENoBLUC+UtSA69cBDDlOQEHp1w=; b=AdL4H8B4xUKKWBlVXQWuyZwVot PmZznv6kYzAdsyfD53728z9aBuMv+9iiheVt0JMi8Ynxqdc6dcFqgWOQPsWhkzVMdDpHZ4sXKCkCC 7TH7JjV1IjQ9m50nfkK4TI6Aoc+212OCTIHgka1X3+6jKnt6MVi133ItR3txR85KoFpFYwwXhCnTF 
rGNz5eDCj3SULSCcZE5guwhV6r9T8CuWjZLos8FKzti8L2SvSC7y3ozI8D8b3OwKq8YnGG4BEUVaH 8nWY+TTTJYxMq8xA930p4gQ5+a2lx/zKrtedvBzyWEa6lutZ1SsVAxRDZEqkOiEIqrQGqMS7Lif4l Hkc2s85g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNf-00Cddn-9G; Fri, 29 Apr 2022 17:26:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 67/69] vboxsf: Convert vboxsf to read_folio Date: Fri, 29 Apr 2022 18:25:54 +0100 Message-Id: <20220429172556.3011843-68-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This is a "weak" conversion which converts straight back to using pages. A full conversion should be performed at some point, hopefully by someone familiar with the filesystem. Signed-off-by: Matthew Wilcox (Oracle) --- fs/vboxsf/file.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c index d74e0d336995..572aa1c43b37 100644 --- a/fs/vboxsf/file.c +++ b/fs/vboxsf/file.c @@ -225,8 +225,9 @@ const struct inode_operations vboxsf_reg_iops = { .setattr = vboxsf_setattr }; -static int vboxsf_readpage(struct file *file, struct page *page) +static int vboxsf_read_folio(struct file *file, struct folio *folio) { + struct page *page = &folio->page; struct vboxsf_handle *sf_handle = file->private_data; loff_t off = page_offset(page); u32 nread = PAGE_SIZE; @@ -352,7 +353,7 @@ static int vboxsf_write_end(struct file *file, struct address_space *mapping, * page and it does not call SetPageUptodate for partial writes. 
*/ const struct address_space_operations vboxsf_reg_aops = { - .readpage = vboxsf_readpage, + .read_folio = vboxsf_read_folio, .writepage = vboxsf_writepage, .dirty_folio = filemap_dirty_folio, .write_begin = simple_write_begin, From patchwork Fri Apr 29 17:25:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12832540 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 33119C433F5 for ; Fri, 29 Apr 2022 17:27:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1379683AbiD2Rax (ORCPT ); Fri, 29 Apr 2022 13:30:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44424 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1379640AbiD2R3q (ORCPT ); Fri, 29 Apr 2022 13:29:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4A725A76C3 for ; Fri, 29 Apr 2022 10:26:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=nfgdyrwqFh++TrxoSevQeO/+ZnIqinu9pA/W/va2zkw=; b=JhGMxC4D0GlKIl5yfvAF8i4MxH rG+fY3D25vsM9pqqkv9u6jz/vKWhKrwvBQFZqAOPi8E+dLZedi0Xvx4oNYyobpem7IVDWGaZVBgzr xF5SMaRYYq4m5aTUdvTgWDz/YSzG6GNfu5D6sVrG8Xhvew1cvC5aBkhBBKUTOHLO9yStg+mYAmVvq LGqd9u2zoyI9Kn7Ob8AILgVv3/bKInnO3VqSMobH4mtSZDcoMn6cxqNOPLnhMzgJhHEBBbG939gNw akda0oE0RRJzXBy4iAp9/4shzbVjuew4S8pv8pD+ifeaewuZCwbAbnESkobXtbk/N9EV6B3A60nN7 3kRJQzFg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nkUNf-00Cddt-DI; Fri, 29 Apr 2022 17:26:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 68/69] mm: Convert swap_readpage to call read_folio instead of readpage Date: Fri, 29 Apr 2022 18:25:55 +0100 Message-Id: <20220429172556.3011843-69-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220429172556.3011843-1-willy@infradead.org> References: <20220429172556.3011843-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This commit is split out so it can be dropped when resolving conflicts with Neil Brown's series to stop calling ->readpage in the swap code. 
From patchwork Fri Apr 29 17:25:55 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12832540
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 68/69] mm: Convert swap_readpage to call read_folio instead of readpage
Date: Fri, 29 Apr 2022 18:25:55 +0100
Message-Id: <20220429172556.3011843-69-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220429172556.3011843-1-willy@infradead.org>
References: <20220429172556.3011843-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

This commit is split out so it can be dropped when resolving conflicts
with Neil Brown's series to stop calling ->readpage in the swap code.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 89fbf3cae30f..1ae4be14f9d3 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -336,7 +336,7 @@ int swap_readpage(struct page *page, bool synchronous)
 		struct file *swap_file = sis->swap_file;
 		struct address_space *mapping = swap_file->f_mapping;
 
-		ret = mapping->a_ops->readpage(swap_file, page);
+		ret = mapping->a_ops->read_folio(swap_file, page_folio(page));
 		if (!ret)
 			count_vm_event(PSWPIN);
 		goto out;
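Callers that still hold a struct page can be converted just as
mechanically: page_folio() returns the folio that contains a given page,
so the old ->readpage(file, page) call becomes
->read_folio(file, page_folio(page)), as in the swap_readpage() hunk
above. A minimal caller-side sketch, with a hypothetical example_* helper
name rather than anything from the tree:

/* Hypothetical helper, for illustration only. */
static int example_read_locked_page(struct address_space *mapping,
				    struct file *file, struct page *page)
{
	/*
	 * ->read_folio keeps the contract ->readpage had: it is called
	 * with the folio locked and unlocks it when the read completes.
	 */
	return mapping->a_ops->read_folio(file, page_folio(page));
}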
From patchwork Fri Apr 29 17:25:56 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12832541
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 69/69] mm,fs: Remove stray references to ->readpage
Date: Fri, 29 Apr 2022 18:25:56 +0100
Message-Id: <20220429172556.3011843-70-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220429172556.3011843-1-willy@infradead.org>
References: <20220429172556.3011843-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Get rid of all references to readpage.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ceph/addr.c          | 2 +-
 include/linux/fs.h      | 7 ++-----
 kernel/events/uprobes.c | 7 ++++---
 mm/filemap.c            | 4 ++--
 mm/memory.c             | 4 ++--
 mm/readahead.c          | 4 ++--
 mm/shmem.c              | 2 +-
 mm/swapfile.c           | 2 +-
 8 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 3acd33da6d8c..e040b92bb17c 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1772,7 +1772,7 @@ int ceph_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct address_space *mapping = file->f_mapping;
 
-	if (!mapping->a_ops->readpage)
+	if (!mapping->a_ops->read_folio)
 		return -ENOEXEC;
 	file_accessed(file);
 	vma->vm_ops = &ceph_vmops;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 5ecc4b74204d..f812f5aa07dd 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -262,7 +262,7 @@ struct iattr {
  * trying again. The aop will be taking reasonable
  * precautions not to livelock. If the caller held a page
  * reference, it should drop it before retrying. Returned
- * by readpage().
+ * by read_folio().
  *
  * address_space_operation functions return these large constants to indicate
  * special semantics to the caller. These are much larger than the bytes in a
@@ -335,10 +335,7 @@ static inline bool is_sync_kiocb(struct kiocb *kiocb)
 
 struct address_space_operations {
 	int (*writepage)(struct page *page, struct writeback_control *wbc);
-	union {
-		int (*readpage)(struct file *, struct page *);
-		int (*read_folio)(struct file *, struct folio *);
-	};
+	int (*read_folio)(struct file *, struct folio *);
 
 	/* Write back some dirty pages from this mapping. */
 	int (*writepages)(struct address_space *, struct writeback_control *);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 6418083901d4..a9bc3c98f76a 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -787,10 +787,10 @@ static int __copy_insn(struct address_space *mapping, struct file *filp,
 	struct page *page;
 	/*
 	 * Ensure that the page that has the original instruction is populated
-	 * and in page-cache. If ->readpage == NULL it must be shmem_mapping(),
+	 * and in page-cache. If ->read_folio == NULL it must be shmem_mapping(),
 	 * see uprobe_register().
 	 */
-	if (mapping->a_ops->readpage)
+	if (mapping->a_ops->read_folio)
 		page = read_mapping_page(mapping, offset >> PAGE_SHIFT, filp);
 	else
 		page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
@@ -1143,7 +1143,8 @@ static int __uprobe_register(struct inode *inode, loff_t offset,
 		return -EINVAL;
 
 	/* copy_insn() uses read_mapping_page() or shmem_read_mapping_page() */
-	if (!inode->i_mapping->a_ops->readpage && !shmem_mapping(inode->i_mapping))
+	if (!inode->i_mapping->a_ops->read_folio &&
+	    !shmem_mapping(inode->i_mapping))
 		return -EIO;
 	/* Racy, just to catch the obvious mistakes */
 	if (offset > i_size_read(inode))
diff --git a/mm/filemap.c b/mm/filemap.c
index 132015e42384..079f8cca7959 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2414,7 +2414,7 @@ static int filemap_read_folio(struct file *file, struct address_space *mapping,
 
 	/*
 	 * A previous I/O error may have been due to temporary failures,
-	 * eg. multipath errors. PG_error will be set again if readpage
+	 * eg. multipath errors. PG_error will be set again if read_folio
 	 * fails.
 	 */
 	folio_clear_error(folio);
@@ -2636,7 +2636,7 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
  * @already_read: Number of bytes already read by the caller.
  *
 * Copies data from the page cache. If the data is not currently present,
- * uses the readahead and readpage address_space operations to fetch it.
+ * uses the readahead and read_folio address_space operations to fetch it.
  *
  * Return: Total number of bytes copied, including those already read by
  * the caller. If an error happens before any bytes are copied, returns
diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..2a12028a3749 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -555,11 +555,11 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 	dump_page(page, "bad pte");
 	pr_alert("addr:%px vm_flags:%08lx anon_vma:%px mapping:%px index:%lx\n",
 		 (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
-	pr_alert("file:%pD fault:%ps mmap:%ps readpage:%ps\n",
+	pr_alert("file:%pD fault:%ps mmap:%ps read_folio:%ps\n",
 		 vma->vm_file,
 		 vma->vm_ops ? vma->vm_ops->fault : NULL,
 		 vma->vm_file ? vma->vm_file->f_op->mmap : NULL,
-		 mapping ? mapping->a_ops->readpage : NULL);
+		 mapping ? mapping->a_ops->read_folio : NULL);
 	dump_stack();
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
diff --git a/mm/readahead.c b/mm/readahead.c
index 2004aa58ae24..ef506df2de7f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -253,8 +253,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	}
 
 	/*
-	 * Now start the IO. We ignore I/O errors - if the page is not
-	 * uptodate then the caller will launch readpage again, and
+	 * Now start the IO. We ignore I/O errors - if the folio is not
+	 * uptodate then the caller will launch read_folio again, and
 	 * will then handle the error.
 	 */
 	read_pages(ractl);
diff --git a/mm/shmem.c b/mm/shmem.c
index 0f557a512171..f3e8de8ff75c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -4162,7 +4162,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
  *
  * This behaves as a tmpfs "read_cache_page_gfp(mapping, index, gfp)",
  * with any new page allocations done using the specified allocation flags.
- * But read_cache_page_gfp() uses the ->readpage() method: which does not
+ * But read_cache_page_gfp() uses the ->read_folio() method: which does not
  * suit tmpfs, since it may have pages in swapcache, and needs to find those
  * for itself; although drivers/gpu/drm i915 and ttm rely upon this support.
  *
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 63c61f8b2611..ecd45bdbad9b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3041,7 +3041,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	/*
 	 * Read the swap header.
 	 */
-	if (!mapping->a_ops->readpage) {
+	if (!mapping->a_ops->read_folio) {
 		error = -EINVAL;
 		goto bad_swap_unlock_inode;
 	}
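With the transitional union removed from struct address_space_operations,
any leftover ".readpage =" initializer or a_ops->readpage dereference no
longer compiles, which is why the remaining callers are switched in the
same patch. Code that needs to know whether a mapping can bring data into
the page cache now tests for ->read_folio, as ceph_mmap(),
__uprobe_register() and swapon() do above. A rough sketch of that check,
using a hypothetical example_* name rather than anything in the tree:

/* Hypothetical helper, for illustration only. */
static bool example_mapping_can_populate(struct address_space *mapping)
{
	/*
	 * shmem does not provide ->read_folio; its pages may sit in the
	 * swap cache and are fetched via shmem_read_mapping_page() instead.
	 */
	return mapping->a_ops->read_folio || shmem_mapping(mapping);
}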