From patchwork Tue Dec 14 16:20:22 2021
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12676359
Date: Tue, 14 Dec 2021 17:20:22 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-16-glider@google.com>
References: <20211214162050.660953-1-glider@google.com>
X-Mailer: git-send-email 2.34.1.173.g76aa8bc2d0-goog
Subject: [PATCH 15/43] kmsan: mm: maintain KMSAN metadata for page operations
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
 Ard Biesheuvel, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
 Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
 Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe,
 Joonsoo Kim, Kees Cook, Marco Elver, Matthew Wilcox, "Michael S. Tsirkin",
 Pekka Enberg, Peter Zijlstra, Petr Mladek, Steven Rostedt, Thomas Gleixner,
 Vasily Gorbik, Vegard Nossum, Vlastimil Babka, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Insert KMSAN hooks that make the necessary bookkeeping changes:
 - poison page shadow and origins in alloc_pages()/free_page();
 - clear page shadow and origins in clear_page(), copy_user_highpage();
 - copy page metadata in copy_highpage(), wp_page_copy();
 - handle vmap()/vunmap()/iounmap().

Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I6d4f53a0e7eab46fa29f0348f3095d9f2e326850
---
 arch/x86/include/asm/page_64.h | 13 +++++++++++++
 arch/x86/mm/ioremap.c          |  3 +++
 include/linux/highmem.h        |  3 +++
 mm/memory.c                    |  2 ++
 mm/page_alloc.c                | 14 ++++++++++++++
 mm/vmalloc.c                   | 20 ++++++++++++++++++--
 6 files changed, 53 insertions(+), 2 deletions(-)
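A note for readers new to KMSAN before the diff itself: the four bullets
above can be modeled in a few lines of ordinary C. The sketch below is
purely illustrative; the toy_page/model_*() names are invented here, and
the real hooks (kmsan_alloc_page(), kmsan_free_page(),
kmsan_unpoison_memory(), kmsan_copy_page_meta()) track metadata in
dedicated shadow and origin pages, not in an inline array.

/*
 * Userspace model of the KMSAN page bookkeeping rules. Not kernel code;
 * it only mirrors the hooks conceptually.
 */
#include <stdio.h>
#include <string.h>

#define TOY_PAGE_SIZE 64	/* toy page size for the demo */

struct toy_page {
	unsigned char data[TOY_PAGE_SIZE];
	unsigned char shadow[TOY_PAGE_SIZE];	/* 0xff = uninitialized */
};

/* alloc_pages(): fresh pages are poisoned unless __GFP_ZERO defined them. */
static void model_alloc_page(struct toy_page *p, int gfp_zero)
{
	memset(p->shadow, gfp_zero ? 0x00 : 0xff, TOY_PAGE_SIZE);
}

/* free_page(): the contents become undefined again. */
static void model_free_page(struct toy_page *p)
{
	memset(p->shadow, 0xff, TOY_PAGE_SIZE);
}

/* clear_page()/copy_user_highpage(): the destination is fully defined. */
static void model_unpoison_page(struct toy_page *p)
{
	memset(p->data, 0, TOY_PAGE_SIZE);
	memset(p->shadow, 0x00, TOY_PAGE_SIZE);
}

/* copy_highpage()/wp_page_copy(): metadata must travel with the data. */
static void model_copy_page_meta(struct toy_page *to, struct toy_page *from)
{
	memcpy(to->data, from->data, TOY_PAGE_SIZE);
	memcpy(to->shadow, from->shadow, TOY_PAGE_SIZE);
}

int main(void)
{
	struct toy_page a, b;

	model_alloc_page(&a, 0);
	printf("after alloc: shadow=%#x (poisoned)\n", a.shadow[0]);
	model_unpoison_page(&a);		/* e.g. clear_page() */
	printf("after clear: shadow=%#x (defined)\n", a.shadow[0]);
	model_alloc_page(&b, 0);
	model_copy_page_meta(&b, &a);		/* e.g. copy_highpage() */
	printf("after copy:  shadow=%#x (inherited)\n", b.shadow[0]);
	model_free_page(&a);
	return 0;
}

Compiled with any C compiler, it prints the shadow state after each of
the operations the hooks below instrument.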
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 4bde0dc66100c..c10547510f1f4 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -44,14 +44,27 @@ void clear_page_orig(void *page);
 void clear_page_rep(void *page);
 void clear_page_erms(void *page);
 
+/* This is an assembly header, avoid including too much of kmsan.h */
+#ifdef CONFIG_KMSAN
+void kmsan_unpoison_memory(const void *addr, size_t size);
+#endif
+__no_sanitize_memory
 static inline void clear_page(void *page)
 {
+#ifdef CONFIG_KMSAN
+	/* alternative_call_2() changes @page. */
+	void *page_copy = page;
+#endif
 	alternative_call_2(clear_page_orig,
 			   clear_page_rep, X86_FEATURE_REP_GOOD,
 			   clear_page_erms, X86_FEATURE_ERMS,
 			   "=D" (page),
 			   "0" (page)
 			   : "cc", "memory", "rax", "rcx");
+#ifdef CONFIG_KMSAN
+	/* Clear KMSAN shadow for the pages that have it. */
+	kmsan_unpoison_memory(page_copy, PAGE_SIZE);
+#endif
 }
 
 void copy_page(void *to, void *from);
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 026031b3b7829..4d0349ecc7cd7 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -17,6 +17,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/kmsan.h>
 #include <...>
 #include <...>
@@ -474,6 +475,8 @@ void iounmap(volatile void __iomem *addr)
 		return;
 	}
 
+	kmsan_iounmap_page_range((unsigned long)addr,
+				 (unsigned long)addr + get_vm_area_size(p));
 	memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p));
 
 	/* Finally remove it */
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9cd..3e1898a44d7e3 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -6,6 +6,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/kmsan.h>
 #include <...>
 #include <...>
 #include <...>
@@ -277,6 +278,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 	vfrom = kmap_local_page(from);
 	vto = kmap_local_page(to);
 	copy_user_page(vto, vfrom, vaddr, to);
+	kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 }
@@ -292,6 +294,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 	vfrom = kmap_local_page(from);
 	vto = kmap_local_page(to);
 	copy_page(vto, vfrom);
+	kmsan_copy_page_meta(to, from);
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 }
diff --git a/mm/memory.c b/mm/memory.c
index 8f1de811a1dcb..ea9e48daadb15 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/kmsan.h>
 #include <...>
 #include <...>
 #include <...>
@@ -3003,6 +3004,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			put_page(old_page);
 			return 0;
 		}
+		kmsan_copy_page_meta(new_page, old_page);
 	}
 
 	if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40b..fa8029b714a81 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -26,6 +26,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/kmsan.h>
 #include <...>
 #include <...>
 #include <...>
@@ -1288,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
 	trace_mm_page_free(page, order);
+	kmsan_free_page(page, order);
 
 	if (unlikely(PageHWPoison(page)) && !order) {
 		/*
@@ -1734,6 +1736,9 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 {
 	if (early_page_uninitialised(pfn))
 		return;
+	if (!kmsan_memblock_free_pages(page, order))
+		/* KMSAN will take care of these pages. */
+		return;
 	__free_pages_core(page, order);
 }
 
@@ -3663,6 +3668,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 
 /*
  * Allocate a page from the given zone. Use pcplists for order-0 allocations.
  */
+
+/*
+ * Do not instrument rmqueue() with KMSAN. This function may call
+ * __msan_poison_alloca() through a call to set_pfnblock_flags_mask().
+ * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it
+ * may call rmqueue() again, which will result in a deadlock.
+ */
+__no_sanitize_memory
 static inline struct page *rmqueue(struct zone *preferred_zone,
 			struct zone *zone, unsigned int order,
@@ -5389,6 +5402,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 	}
 
 	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
+	kmsan_alloc_page(page, order, alloc_gfp);
 
 	return page;
 }
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd1..333de26b3c56e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -319,6 +319,9 @@ int ioremap_page_range(unsigned long addr, unsigned long end,
 	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
 				 ioremap_max_page_shift);
 	flush_cache_vmap(addr, end);
+	if (!err)
+		kmsan_ioremap_page_range(addr, end, phys_addr, prot,
+					 ioremap_max_page_shift);
 	return err;
 }
 
@@ -418,7 +421,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
  *
  * This is an internal function only. Do not use outside mm/.
  */
-void vunmap_range_noflush(unsigned long start, unsigned long end)
+void __vunmap_range_noflush(unsigned long start, unsigned long end)
 {
 	unsigned long next;
 	pgd_t *pgd;
@@ -440,6 +443,12 @@ void vunmap_range_noflush(unsigned long start, unsigned long end)
 		arch_sync_kernel_mappings(start, end);
 }
 
+void vunmap_range_noflush(unsigned long start, unsigned long end)
+{
+	kmsan_vunmap_range_noflush(start, end);
+	__vunmap_range_noflush(start, end);
+}
+
 /**
  * vunmap_range - unmap kernel virtual addresses
  * @addr: start of the VM area to unmap
@@ -574,7 +583,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
  *
  * This is an internal function only. Do not use outside mm/.
  */
-int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
@@ -600,6 +609,13 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 	return 0;
 }
 
+int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+	kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
+	return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
+}
+
 /**
  * vmap_pages_range - map pages to a kernel virtual address
 * @addr: start of the VM area to map
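A closing note on the mm/vmalloc.c hunks: they all apply one pattern,
where the existing entry point keeps its behavior under a
double-underscore name and the old name becomes a wrapper that runs the
KMSAN hook first. The sketch below restates that pattern outside the
kernel. All toy_* names are invented, and the rationale given in the
comments (letting metadata code reuse the raw path without re-entering
its own hook) is an assumption based on this series' design, not
something this patch states.

#include <stdio.h>

/* The renamed internal: does the real work, runs no hooks. */
static void __toy_unmap_range(unsigned long start, unsigned long end)
{
	printf("unmapping [%#lx, %#lx)\n", start, end);
}

/* Hook stub: here the kernel would update shadow/origin mappings. */
static void toy_kmsan_unmap_hook(unsigned long start, unsigned long end)
{
	printf("kmsan: dropping metadata for [%#lx, %#lx)\n", start, end);
	/*
	 * If the hook itself has to unmap metadata pages, it can call
	 * __toy_unmap_range() directly and avoid recursing into itself.
	 */
}

/* The old name keeps its callers; metadata is updated first. */
static void toy_unmap_range(unsigned long start, unsigned long end)
{
	toy_kmsan_unmap_hook(start, end);
	__toy_unmap_range(start, end);
}

int main(void)
{
	toy_unmap_range(0xffff8000UL, 0xffff9000UL);
	return 0;
}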