diff mbox series

[next] mm/thp: fix collapse_file()'s try_to_unmap(folio,)

Message ID 3f187b6c-e5e8-e66d-e0c0-7455ca6abb4c@google.com (mailing list archive)
State New

Commit Message

Hugh Dickins Feb. 27, 2022, 2:22 a.m. UTC
The foliation of THP collapse_file()'s call to try_to_unmap() is
currently wrong, crashing on a test in rmap_walk() when xas_next()
delivered a value (after which page has been loaded independently).

Fixes: c3b522d9a698 ("mm/rmap: Convert try_to_unmap() to take a folio")
Signed-off-by: Hugh Dickins <hughd@google.com>
---
Please just fold in if you agree.

 mm/khugepaged.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Matthew Wilcox March 1, 2022, 4:18 a.m. UTC | #1
On Sat, Feb 26, 2022 at 06:22:47PM -0800, Hugh Dickins wrote:
> The foliation of THP collapse_file()'s call to try_to_unmap() is
> currently wrong, crashing on a test in rmap_walk() when xas_next()
> delivered a value (after which page has been loaded independently).

Argh.  I have a fear of this exact bug, and I must have missed checking
for it this time.  I hate trying to keep two variables in sync, so my
preferred fix for this is to remove it for this merge window:

+++ b/mm/khugepaged.c
@@ -1699,8 +1699,7 @@ static void collapse_file(struct mm_struct *mm,
 
        xas_set(&xas, start);
        for (index = start; index < end; index++) {
-               struct folio *folio = xas_next(&xas);
-               struct page *page = &folio->page;
+               struct page *page = xas_next(&xas);
 
                VM_BUG_ON(index != xas.xa_index);
                if (is_shmem) {
@@ -1835,7 +1834,8 @@ static void collapse_file(struct mm_struct *mm,
                }
 
                if (page_mapped(page))
-                       try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
+                       try_to_unmap(page_folio(page),
+                                       TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
 
                xas_lock_irq(&xas);
                xas_set(&xas, index);

(ie revert the first hunk).  I'll come back to khugepaged in the next
merge window and convert this function properly.  It's going to take
some surgery to shmem in order to use folios there first ...

Patch

--- mmotm/mm/khugepaged.c
+++ linux/mm/khugepaged.c
@@ -1824,7 +1824,8 @@  static void collapse_file(struct mm_struct *mm,
 		}
 
 		if (page_mapped(page))
-			try_to_unmap(folio, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
+			try_to_unmap(page_folio(page),
+				     TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
 
 		xas_lock_irq(&xas);
 		xas_set(&xas, index);