From patchwork Wed Jun 8 15:02:33 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-aio@kvack.org, linux-btrfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
    cluster-devel@redhat.com, linux-mm@kvack.org, linux-xfs@vger.kernel.org,
    linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
    ocfs2-devel@oss.oracle.com, linux-mtd@lists.infradead.org,
    virtualization@lists.linux-foundation.org, Christoph Hellwig
Subject: [PATCH v2 03/19] fs: Add aops->migrate_folio
Date: Wed, 8 Jun 2022 16:02:33 +0100
Message-Id: <20220608150249.3033815-4-willy@infradead.org>
In-Reply-To: <20220608150249.3033815-1-willy@infradead.org>
References: <20220608150249.3033815-1-willy@infradead.org>
X-Mailing-List: linux-btrfs@vger.kernel.org

Provide a folio-based replacement for aops->migratepage.  Update the
documentation to document migrate_folio instead of migratepage.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
---
 Documentation/filesystems/locking.rst |  5 ++--
 Documentation/filesystems/vfs.rst     | 13 ++++++-----
 Documentation/vm/page_migration.rst   | 33 ++++++++++++++-------------
 include/linux/fs.h                    |  4 +++-
 mm/compaction.c                       |  4 +++-
 mm/migrate.c                          | 11 +++++----
 6 files changed, 40 insertions(+), 30 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index c0fe711f14d3..3d28b23676bd 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -253,7 +253,8 @@ prototypes::
 	void (*free_folio)(struct folio *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	bool (*isolate_page) (struct page *, isolate_mode_t);
-	int (*migratepage)(struct address_space *, struct page *, struct page *);
+	int (*migrate_folio)(struct address_space *, struct folio *dst,
+			struct folio *src, enum migrate_mode);
 	void (*putback_page) (struct page *);
 	int (*launder_folio)(struct folio *);
 	bool (*is_partially_uptodate)(struct folio *, size_t from, size_t count);
@@ -281,7 +282,7 @@ release_folio:		yes
 free_folio:		yes
 direct_IO:
 isolate_page:		yes
-migratepage:		yes (both)
+migrate_folio:		yes (both)
 putback_page:		yes
 launder_folio:		yes
 is_partially_uptodate:	yes
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index a08c652467d7..3ae1b039b03f 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -740,7 +740,8 @@ cache in your filesystem.  The following members are defined:
 		/* isolate a page for migration */
 		bool (*isolate_page) (struct page *, isolate_mode_t);
 		/* migrate the contents of a page to the specified target */
-		int (*migratepage) (struct page *, struct page *);
+		int (*migrate_folio)(struct mapping *, struct folio *dst,
+				struct folio *src, enum migrate_mode);
 		/* put migration-failed page back to right list */
 		void (*putback_page) (struct page *);
 		int (*launder_folio) (struct folio *);
@@ -935,12 +936,12 @@ cache in your filesystem.  The following members are defined:
 	is successfully isolated, VM marks the page as PG_isolated via
 	__SetPageIsolated.
 
-``migrate_page``
+``migrate_folio``
 	This is used to compact the physical memory usage.  If the VM
-	wants to relocate a page (maybe off a memory card that is
-	signalling imminent failure) it will pass a new page and an old
-	page to this function.  migrate_page should transfer any private
-	data across and update any references that it has to the page.
+	wants to relocate a folio (maybe from a memory device that is
+	signalling imminent failure) it will pass a new folio and an old
+	folio to this function.  migrate_folio should transfer any private
+	data across and update any references that it has to the folio.
 
 ``putback_page``
 	Called by the VM when isolated page's migration fails.
diff --git a/Documentation/vm/page_migration.rst b/Documentation/vm/page_migration.rst
index 8c5cb8147e55..e0f73ddfabb1 100644
--- a/Documentation/vm/page_migration.rst
+++ b/Documentation/vm/page_migration.rst
@@ -181,22 +181,23 @@ which are function pointers of struct address_space_operations.
    Once page is successfully isolated, VM uses page.lru fields so driver
    shouldn't expect to preserve values in those fields.
 
-2. ``int (*migratepage) (struct address_space *mapping,``
-|	``struct page *newpage, struct page *oldpage, enum migrate_mode);``
-
-   After isolation, VM calls migratepage() of driver with the isolated page.
-   The function of migratepage() is to move the contents of the old page to the
-   new page
-   and set up fields of struct page newpage. Keep in mind that you should
-   indicate to the VM the oldpage is no longer movable via __ClearPageMovable()
-   under page_lock if you migrated the oldpage successfully and returned
-   MIGRATEPAGE_SUCCESS. If driver cannot migrate the page at the moment, driver
-   can return -EAGAIN. On -EAGAIN, VM will retry page migration in a short time
-   because VM interprets -EAGAIN as "temporary migration failure". On returning
-   any error except -EAGAIN, VM will give up the page migration without
-   retrying.
-
-   Driver shouldn't touch the page.lru field while in the migratepage() function.
+2. ``int (*migrate_folio) (struct address_space *mapping,``
+|	``struct folio *dst, struct folio *src, enum migrate_mode);``
+
+   After isolation, VM calls the driver's migrate_folio() with the
+   isolated folio.  The purpose of migrate_folio() is to move the contents
+   of the source folio to the destination folio and set up the fields
+   of destination folio.  Keep in mind that you should indicate to the
+   VM the source folio is no longer movable via __ClearPageMovable()
+   under folio if you migrated the source successfully and returned
+   MIGRATEPAGE_SUCCESS.  If driver cannot migrate the folio at the
+   moment, driver can return -EAGAIN.  On -EAGAIN, VM will retry folio
+   migration in a short time because VM interprets -EAGAIN as "temporary
+   migration failure".  On returning any error except -EAGAIN, VM will
+   give up the folio migration without retrying.
+
+   Driver shouldn't touch the folio.lru field while in the migrate_folio()
+   function.
 
 3. ``void (*putback_page)(struct page *);``
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 5d8ee3155ca2..47431cf8fbb3 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -362,9 +362,11 @@ struct address_space_operations {
 	void (*free_folio)(struct folio *folio);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	/*
-	 * migrate the contents of a page to the specified target. If
+	 * migrate the contents of a folio to the specified target. If
 	 * migrate_mode is MIGRATE_ASYNC, it must not block.
 	 */
+	int (*migrate_folio)(struct address_space *, struct folio *dst,
+			struct folio *src, enum migrate_mode);
 	int (*migratepage) (struct address_space *, struct page *,
 			struct page *, enum migrate_mode);
 	int (*launder_folio)(struct folio *);
diff --git a/mm/compaction.c b/mm/compaction.c
index f23efba1d118..458f49f9ab09 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1042,7 +1042,9 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				goto isolate_fail_put;
 
 			mapping = page_mapping(page);
-			migrate_dirty = !mapping || mapping->a_ops->migratepage;
+			migrate_dirty = !mapping ||
+				mapping->a_ops->migrate_folio ||
+				mapping->a_ops->migratepage;
 			unlock_page(page);
 			if (!migrate_dirty)
 				goto isolate_fail_put;
diff --git a/mm/migrate.c b/mm/migrate.c
index 3ce6fee87efa..e064b998ead0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -856,14 +856,17 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 
 	if (!mapping)
 		rc = migrate_page(mapping, &dst->page, &src->page, mode);
-	else if (mapping->a_ops->migratepage)
+	else if (mapping->a_ops->migrate_folio)
 		/*
-		 * Most pages have a mapping and most filesystems
-		 * provide a migratepage callback. Anonymous pages
+		 * Most folios have a mapping and most filesystems
+		 * provide a migrate_folio callback. Anonymous folios
 		 * are part of swap space which also has its own
-		 * migratepage callback. This is the most common path
+		 * migrate_folio callback. This is the most common path
 		 * for page migration.
 		 */
+		rc = mapping->a_ops->migrate_folio(mapping, dst, src,
+				mode);
+	else if (mapping->a_ops->migratepage)
 		rc = mapping->a_ops->migratepage(mapping, &dst->page,
 						&src->page, mode);
 	else
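
For illustration only (not part of this patch), here is a minimal sketch of how
a filesystem might implement and register the new aops->migrate_folio hook.
myfs_migrate_folio and myfs_aops are hypothetical names, and the body assumes
the folio_migrate_mapping() and folio_migrate_copy() helpers that already live
in mm/migrate.c, which is roughly what a filesystem with no private data to
transfer would do:

#include <linux/fs.h>
#include <linux/migrate.h>

/*
 * Hypothetical filesystem callback; sketches the contract described in
 * the documentation above (return MIGRATEPAGE_SUCCESS on success,
 * -EAGAIN for a temporary failure the VM should retry).
 */
static int myfs_migrate_folio(struct address_space *mapping,
		struct folio *dst, struct folio *src, enum migrate_mode mode)
{
	int rc;

	/* Replace src with dst in the page cache and move the refcounts. */
	rc = folio_migrate_mapping(mapping, dst, src, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;	/* e.g. -EAGAIN: the VM retries shortly */

	/* Copy data and flags; private fs state would be moved here too. */
	folio_migrate_copy(dst, src);
	return MIGRATEPAGE_SUCCESS;
}

static const struct address_space_operations myfs_aops = {
	/* ... read_folio, writepages, etc. ... */
	.migrate_folio	= myfs_migrate_folio,
};

Note that the argument order (dst before src) and the enum migrate_mode
parameter match the prototype added to struct address_space_operations above,
and that MIGRATE_ASYNC callers must not block, which this simple path never
does.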