
lib/test_hmm: avoid accessing uninitialized pages

Message ID 20220609130835.35110-1-linmiaohe@huawei.com (mailing list archive)
State New
Series lib/test_hmm: avoid accessing uninitialized pages

Commit Message

Miaohe Lin June 9, 2022, 1:08 p.m. UTC
If make_device_exclusive_range() fails, or marks fewer pages for
exclusive access than requested, the trailing entries of the pages
array are left uninitialized, and dmirror_atomic_map() will then read
those uninitialized entries. Fix this by calling dmirror_atomic_map()
only when all pages are marked for exclusive access (we will break
anyway if mapped is less than required), so the uninitialized entries
are never accessed.

Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 lib/test_hmm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

Patch

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 7930853e7fc5..e3965cafd27c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -797,7 +797,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;
 
 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -806,7 +806,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {
 				unlock_page(pages[i]);
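
The bug reduces to a small, standalone pattern: a producer fills only
the first ret entries of a stack array, and a consumer that walks the
whole array reads indeterminate values. Below is a minimal sketch in
plain C of both the hazard and the guarded fix; fill_some() and
consume_all() are hypothetical stand-ins for
make_device_exclusive_range() and dmirror_atomic_map(), and the counts
are illustrative only.

#include <stdio.h>
#include <stddef.h>

#define NPAGES 8

/*
 * Stand-in for make_device_exclusive_range(): fills only the first
 * 'got' entries of pages[] and returns how many it filled; entries
 * past the return value are never written.
 */
static int fill_some(void *pages[], size_t want, size_t got)
{
	size_t n = got < want ? got : want;
	static int backing[NPAGES];
	size_t i;

	for (i = 0; i < n; i++)
		pages[i] = &backing[i];
	return (int)n;
}

/* Stand-in for dmirror_atomic_map(): reads every entry of pages[]. */
static size_t consume_all(void *pages[], size_t n)
{
	size_t i, mapped = 0;

	for (i = 0; i < n; i++)
		if (pages[i])	/* reads pages[i] unconditionally */
			mapped++;
	return mapped;
}

int main(void)
{
	void *pages[NPAGES];	/* deliberately not zero-initialized */
	size_t mapped = 0;	/* mirrors the patch: start at 0 */
	int ret;

	/* Producer succeeds for only 5 of the 8 requested entries. */
	ret = fill_some(pages, NPAGES, 5);

	/*
	 * Buggy pattern: calling consume_all(pages, NPAGES) here
	 * unconditionally would read pages[5..7], which hold
	 * indeterminate values -- undefined behavior in C, and a
	 * garbage pointer dereference in the real driver.
	 *
	 * Fixed pattern, as in the patch: consume only when every
	 * entry was filled; otherwise 'mapped' keeps its initializer.
	 */
	if (ret == NPAGES)
		mapped = consume_all(pages, NPAGES);

	printf("ret=%d mapped=%zu\n", ret, mapped);
	return 0;
}

Note also that in the guard ret == (next - addr) >> PAGE_SHIFT, the
shift operator binds tighter than the equality comparison in C, so the
expression compares ret against the number of pages in the range, as
intended.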