[v5,38/38] kmsan: block: skip bio block merging logic for KMSAN

Message ID: 20200325161249.55095-39-glider@google.com (mailing list archive)
State: New, archived
Series: Add KernelMemorySanitizer infrastructure

Commit Message

Alexander Potapenko March 25, 2020, 4:12 p.m. UTC
KMSAN doesn't allow treating physically adjacent memory pages as
contiguous if they were allocated by separate alloc_pages() calls.
The block layer, however, does exactly that: physically adjacent pages
end up being merged into a single bio_vec and used together. To prevent
this, make page_is_mergeable() return false when KMSAN is enabled.
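For illustration, a minimal sketch (hypothetical; the function name and
surrounding code are not from the patch) of the pattern the new check
guards against. Two independent alloc_pages() calls may return
physically adjacent pages, which the block layer would otherwise be
allowed to merge into one bio_vec:

/* Hypothetical sketch, not kernel code from this series. */
#include <linux/gfp.h>
#include <linux/mm.h>

static void kmsan_adjacency_example(void)
{
	/* Two independent order-0 allocations. */
	struct page *a = alloc_pages(GFP_KERNEL, 0);
	struct page *b = alloc_pages(GFP_KERNEL, 0);

	/*
	 * Consecutive struct page pointers mean the pages are
	 * physically adjacent, the same condition page_is_mergeable()
	 * tests via pfn_to_page(PFN_DOWN(vec_end_addr)) + 1 == page.
	 */
	if (a && b && a + 1 == b) {
		/*
		 * KMSAN allocates shadow/origin metadata per allocation,
		 * so the metadata for |a| and |b| is not contiguous even
		 * though the pages themselves are; a span covering both
		 * pages cannot be checked as one region.
		 */
	}

	if (b)
		__free_pages(b, 0);
	if (a)
		__free_pages(a, 0);
}

With the patch applied, page_is_mergeable() simply refuses such
cross-allocation merges under CONFIG_KMSAN, trading a little bio
merging efficiency for correct metadata tracking.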

Suggested-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: linux-mm@kvack.org
---

Change-Id: Iff367f421d51fac549e31ed122365b7539642cff
---
 block/bio.c | 2 ++
 1 file changed, 2 insertions(+)
Patch

diff --git a/block/bio.c b/block/bio.c
index 0985f34225561..09503ef00bc20 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -696,6 +696,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
 	if (!*same_page && pfn_to_page(PFN_DOWN(vec_end_addr)) + 1 != page)
 		return false;
+	if (!*same_page && IS_ENABLED(CONFIG_KMSAN))
+		return false;
 	return true;
 }