mm: migrate: Fix reference check race between __find_get_block() and migration

Commit Message

Mel Gorman July 18, 2019, 9:02 a.m. UTC
From: Jan Kara <>

buffer_migrate_page_norefs() can race with bh users in the following way:

CPU1                                    CPU2
  checks bh refs
                                          grab bh ref
  move page                               do bh work

This can result in various issues such as lost updates to buffers (i.e.
metadata corruption) or use-after-free issues for the old page.

This patch closes the race by holding mapping->private_lock while the page
is being moved. Ordinarily a reference can be taken outside of private_lock
via the per-cpu BH LRU, so the references are checked and the LRU is
invalidated if necessary. Once the references are known to be clear,
private_lock is held across the move, forcing the buffer lookup slow path
to spin on it. With both the page lock and private_lock held, it should be
impossible for other references to be acquired or for updates to happen
during the migration.

A user had reported data corruption issues on a distribution kernel with
a similar page migration implementation as mainline. The data corruption
could not be reproduced with this patch applied. A small number of
migration-intensive tests were run and no performance problems were noted.

[ Changelog, removed tracing]
Fixes: 89cb0888ca14 ("mm: migrate: provide buffer_migrate_page_norefs()")
CC: # v5.0+
Signed-off-by: Jan Kara <>
Signed-off-by: Mel Gorman <>
 mm/migrate.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index e9594bc0d406..a59e4aed6d2e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -771,12 +771,12 @@ static int __buffer_migrate_page(struct address_space *mapping,
 			}
 			bh = bh->b_this_page;
 		} while (bh != head);
-		spin_unlock(&mapping->private_lock);
 		if (busy) {
 			if (invalidated) {
 				rc = -EAGAIN;
 				goto unlock_buffers;
 			}
+			spin_unlock(&mapping->private_lock);
 			invalidate_bh_lrus();
 			invalidated = true;
 			goto recheck_buffers;
@@ -809,6 +809,8 @@ static int __buffer_migrate_page(struct address_space *mapping,
 
 	rc = MIGRATEPAGE_SUCCESS;
 unlock_buffers:
+	if (check_refs)
+		spin_unlock(&mapping->private_lock);
 	bh = head;
 	do {
 		unlock_buffer(bh);