
[RFC,2/2] mm/migrate: annotate possible unnecessary xas_load()

Message ID 20200501210520.6B29706C@viggo.jf.intel.com (mailing list archive)
State New, archived
Series mm: tweak page cache migration

Commit Message

Dave Hansen May 1, 2020, 9:05 p.m. UTC
From: Dave Hansen <dave.hansen@linux.intel.com>

The xas_load() in question also originated in commit e286781
("mm: speculative page references") as a radix_tree_deref_slot(),
the only one in the tree at the time.

I'm thoroughly confused why it is needed, though.  A page's
slot in the page cache should be stabilized by lock_page()
being held.

So, first of all, add a VM_WARN_ON_ONCE() to make it totally
clear that the page is expected to be locked.

But even if the page was truncated, that is normally caught
by checking:

	page_mapping(page) != mapping

This would seem to imply that we
are looking for some kind of state change that can happen
to the xarray slot for a page, but without changing
page->mapping.

I'm at a loss for what that might be.  Stick a WARN_ON_ONCE()
in there to see if we ever actually hit this.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

 b/mm/migrate.c |    8 ++++++++
 1 file changed, 8 insertions(+)

Patch

diff -puN mm/migrate.c~remove_extra_xas_load_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_xas_load_check	2020-05-01 14:00:43.377525921 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:43.381525921 -0700
@@ -407,6 +407,8 @@  int migrate_page_move_mapping(struct add
 	int dirty;
 	int expected_count = expected_page_refs(mapping, page) + extra_count;
 
+	VM_WARN_ON_ONCE(!PageLocked(page));
+
 	if (!mapping) {
 		/* Anonymous page without mapping */
 		if (page_count(page) != expected_count)
@@ -425,7 +427,13 @@  int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
+	/*
+	 * 'mapping' was established under the page lock, which
+	 * prevents the xarray slot for 'page' from being changed.
+	 * Thus, xas_load() failure here is unexpected.
+	 */
 	if (xas_load(&xas) != page) {
+		WARN_ON_ONCE(1);
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}