
[f2fs-dev,5/6] userfaultfd: expand folio use in mfill_atomic_install_pte()

Message ID 20240423225552.4113447-6-willy@infradead.org (mailing list archive)
State Accepted
Commit a568b4126b20ebbc01914e12d083379720911799
Series: Remove page_mapping()

Commit Message

Matthew Wilcox April 23, 2024, 10:55 p.m. UTC
Call page_folio() a little earlier so we can use folio_mapping()
instead of page_mapping(), saving a call to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/userfaultfd.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
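
For context on where the saved compound_head() call comes from: page_mapping()
is a folio-compat wrapper that first resolves the page's folio before
delegating to folio_mapping(). A minimal sketch of that relationship
(paraphrased from mm/folio-compat.c, not the verbatim kernel source):

/*
 * Simplified sketch: page_mapping() has to look up the folio (via
 * page_folio(), i.e. compound_head()) on every call.  Calling
 * page_folio() once up front and then using folio_mapping() directly,
 * as this patch does, avoids that extra lookup.
 */
struct address_space *page_mapping(struct page *page)
{
	return folio_mapping(page_folio(page));	/* page_folio() -> compound_head() */
}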

Comments

David Hildenbrand April 24, 2024, 11:55 a.m. UTC | #1
On 24.04.24 00:55, Matthew Wilcox (Oracle) wrote:
> Call page_folio() a little earlier so we can use folio_mapping()
> instead of page_mapping(), saving a call to compound_head().
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index d9dcc7d71a39..e6486923263c 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -180,9 +180,9 @@  int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	pte_t _dst_pte, *dst_pte;
 	bool writable = dst_vma->vm_flags & VM_WRITE;
 	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
-	bool page_in_cache = page_mapping(page);
 	spinlock_t *ptl;
-	struct folio *folio;
+	struct folio *folio = page_folio(page);
+	bool page_in_cache = folio_mapping(folio);
 
 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	_dst_pte = pte_mkdirty(_dst_pte);
@@ -212,7 +212,6 @@  int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	if (!pte_none_mostly(ptep_get(dst_pte)))
 		goto out_unlock;
 
-	folio = page_folio(page);
 	if (page_in_cache) {
 		/* Usually, cache pages are already added to LRU */
 		if (newly_allocated)