From patchwork Wed Mar 25 16:12:30 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458269
Date: Wed, 25 Mar 2020 17:12:30 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-20-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog
Subject: [PATCH v5 19/38] kmsan: mm: maintain KMSAN metadata for page operations
From: glider@google.com
To: Andrew Morton, Greg Kroah-Hartman, Eric Dumazet, Wolfram Sang,
 Petr Mladek, Vegard Nossum, Dmitry Vyukov, Marco Elver, Andrey Konovalov,
 linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
 aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org,
 arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com,
 davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com,
 ericvh@gmail.com, harry.wentland@amd.com, herbert@gondor.apana.org.au,
 iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk,
 m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com,
 schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com,
 monstr@monstr.eu, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com,
 sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com,
 tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com

Insert KMSAN hooks that make the necessary bookkeeping changes:
 - allocate/split/deallocate metadata pages in
   alloc_pages()/split_page()/free_page();
 - clear the page shadow and origins in clear_page() and
   copy_user_highpage();
 - copy page metadata in copy_highpage() and wp_page_copy();
 - handle vmap()/vunmap()/iounmap().

Signed-off-by: Alexander Potapenko <glider@google.com>
To: Alexander Potapenko
Cc: Andrew Morton
Cc: Greg Kroah-Hartman
Cc: Eric Dumazet
Cc: Wolfram Sang
Cc: Petr Mladek
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org
---
This patch was previously called "kmsan: call KMSAN hooks where needed".

v2:
 - dropped the call to kmsan_handle_vprintk(), updated the comment in
   printk.c

v3:
 - put KMSAN_INIT_VALUE on a separate line in vprintk_store()
 - dropped the call to kmsan_handle_i2c_transfer()
 - minor style fixes

v4:
 - split the mm-unrelated bits out to other patches, as requested by
   Andrey Konovalov
 - dropped the changes to mm/compaction.c
 - use kmsan_unpoison_shadow() in page_64.h and highmem.h

Change-Id: I1250a928d9263bf71fdaa067a070bdee686ef47b
---
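For reviewers unfamiliar with KMSAN, the bookkeeping these hooks perform can
be pictured with the small userspace model below. It is only a sketch: the
struct layout and the model_*() helpers are invented for illustration and do
not correspond to the actual mm/kmsan/ code, where shadow/origin storage and
GFP handling are more involved.

/*
 * Toy userspace model of KMSAN per-page metadata bookkeeping.
 * Invented names (the struct page fields and model_*() helpers);
 * not the real mm/kmsan/ implementation.
 */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page {
	void *data;
	/* Bit-for-bit initialization state of the page contents. */
	unsigned char *shadow;
	/* IDs of the stack traces the uninitialized bits came from. */
	unsigned char *origin;
};

/* alloc_pages() hook: every data page gets its metadata pages. */
static int model_alloc_page(struct page *p, int gfp_zero)
{
	p->shadow = malloc(PAGE_SIZE);
	p->origin = malloc(PAGE_SIZE);
	if (!p->shadow || !p->origin)
		return -1;
	/*
	 * A __GFP_ZERO page is fully initialized, so its shadow is all
	 * zeroes; otherwise the page starts out fully poisoned.
	 */
	memset(p->shadow, gfp_zero ? 0x00 : 0xff, PAGE_SIZE);
	memset(p->origin, 0, PAGE_SIZE);
	return 0;
}

/* copy_highpage()/wp_page_copy() hook: metadata follows the data. */
static void model_copy_page_meta(struct page *dst, const struct page *src)
{
	memcpy(dst->shadow, src->shadow, PAGE_SIZE);
	memcpy(dst->origin, src->origin, PAGE_SIZE);
}

/* clear_page() hook: an all-zeroes page is fully initialized. */
static void model_unpoison_page(struct page *p)
{
	memset(p->shadow, 0, PAGE_SIZE);
}

/* free_page() hook: metadata pages go away with the data page. */
static void model_free_page(struct page *p)
{
	free(p->shadow);
	free(p->origin);
	p->shadow = p->origin = NULL;
}

int main(void)
{
	struct page p = { 0 };

	if (model_alloc_page(&p, 0))
		return 1;
	model_unpoison_page(&p);	/* e.g. after clear_page() */
	model_copy_page_meta(&p, &p);	/* no-op self-copy, for illustration */
	model_free_page(&p);
	return 0;
}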
 arch/x86/include/asm/page_64.h | 13 +++++++++++++
 arch/x86/mm/ioremap.c          |  3 +++
 include/linux/highmem.h        |  3 +++
 lib/ioremap.c                  |  5 +++++
 mm/gup.c                       |  3 +++
 mm/memory.c                    |  2 ++
 mm/page_alloc.c                | 17 +++++++++++++++++
 mm/vmalloc.c                   | 24 ++++++++++++++++++++++--
 8 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 939b1cff4a7b7..045856c38f494 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -44,14 +44,27 @@ void clear_page_orig(void *page);
 void clear_page_rep(void *page);
 void clear_page_erms(void *page);
 
+/* This is an assembly header, avoid including too much of kmsan.h */
+#ifdef CONFIG_KMSAN
+void kmsan_unpoison_shadow(const void *addr, size_t size);
+#endif
+__no_sanitize_memory
 static inline void clear_page(void *page)
 {
+#ifdef CONFIG_KMSAN
+	/* alternative_call_2() changes |page|. */
+	void *page_copy = page;
+#endif
 	alternative_call_2(clear_page_orig,
 			   clear_page_rep, X86_FEATURE_REP_GOOD,
 			   clear_page_erms, X86_FEATURE_ERMS,
 			   "=D" (page),
 			   "0" (page)
 			   : "cc", "memory", "rax", "rcx");
+#ifdef CONFIG_KMSAN
+	/* Clear KMSAN shadow for the pages that have it. */
+	kmsan_unpoison_shadow(page_copy, PAGE_SIZE);
+#endif
 }
 
 void copy_page(void *to, void *from);
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 935a91e1fd774..80399defe90aa 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -7,6 +7,7 @@
  * (C) Copyright 1995 1996 Linus Torvalds
  */
 
+#include <linux/kmsan.h>
 #include <linux/memblock.h>
 #include <linux/init.h>
 #include <linux/io.h>
@@ -469,6 +470,8 @@ void iounmap(volatile void __iomem *addr)
 		return;
 	}
 
+	kmsan_iounmap_page_range((unsigned long)addr,
+				 (unsigned long)addr + get_vm_area_size(p));
 	memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p));
 
 	/* Finally remove it */
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ea5cdbd8c2c32..9f6efa26e9b5c 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -5,6 +5,7 @@
 #include <linux/fs.h>
 #include <linux/kernel.h>
 #include <linux/bug.h>
+#include <linux/kmsan.h>
 #include <linux/mm.h>
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
@@ -255,6 +256,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 	vfrom = kmap_atomic(from);
 	vto = kmap_atomic(to);
 	copy_user_page(vto, vfrom, vaddr, to);
+	kmsan_unpoison_shadow(page_address(to), PAGE_SIZE);
 	kunmap_atomic(vto);
 	kunmap_atomic(vfrom);
 }
@@ -270,6 +272,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 	vfrom = kmap_atomic(from);
 	vto = kmap_atomic(to);
 	copy_page(vto, vfrom);
+	kmsan_copy_page_meta(to, from);
 	kunmap_atomic(vto);
 	kunmap_atomic(vfrom);
 }
diff --git a/lib/ioremap.c b/lib/ioremap.c
index 3f0e18543de84..14b0325b6fa9e 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -6,6 +6,7 @@
  *
  * (C) Copyright 1995 1996 Linus Torvalds
  */
+#include <linux/kmsan.h>
 #include <linux/vmalloc.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
@@ -214,6 +215,8 @@ int ioremap_page_range(unsigned long addr,
 	unsigned long start;
 	unsigned long next;
 	int err;
+	unsigned long old_addr = addr;
+	phys_addr_t old_phys_addr = phys_addr;
 
 	might_sleep();
 	BUG_ON(addr >= end);
@@ -228,6 +231,8 @@ int ioremap_page_range(unsigned long addr,
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
 
 	flush_cache_vmap(start, end);
+	if (!err)
+		kmsan_ioremap_page_range(old_addr, end, old_phys_addr, prot);
 
 	return err;
 }
diff --git a/mm/gup.c b/mm/gup.c
index a212305695209..a2546215f165f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -4,6 +4,7 @@
 #include <linux/err.h>
 #include <linux/spinlock.h>
 
+#include <linux/kmsan.h>
 #include <linux/mm.h>
 #include <linux/memremap.h>
 #include <linux/pagemap.h>
@@ -2710,6 +2711,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	    gup_fast_permitted(start, end)) {
 		local_irq_save(flags);
 		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+		kmsan_gup_pgd_range(pages, nr_pinned);
 		local_irq_restore(flags);
 	}
 
@@ -2765,6 +2767,7 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 	    gup_fast_permitted(start, end)) {
 		local_irq_disable();
 		gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
+		kmsan_gup_pgd_range(pages, nr_pinned);
 		local_irq_enable();
 		ret = nr_pinned;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 8d7f387dd0c77..aa9e266449e26 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
 #include <linux/pagemap.h>
 #include <linux/memremap.h>
 #include <linux/ksm.h>
+#include <linux/kmsan.h>
 #include <linux/rmap.h>
 #include <linux/export.h>
 #include <linux/delayacct.h>
@@ -2676,6 +2677,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 				put_page(old_page);
 			return 0;
 		}
+		kmsan_copy_page_meta(new_page, old_page);
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ca1453204e667..869dc64226296 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -26,6 +26,8 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/kasan.h>
+#include <linux/kmsan.h>
+#include <linux/kmsan-checks.h>
 #include <linux/module.h>
 #include <linux/suspend.h>
 #include <linux/pagevec.h>
@@ -1178,6 +1180,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
 	trace_mm_page_free(page, order);
+	kmsan_free_page(page, order);
 
 	/*
 	 * Check tail pages before head page information is cleared to
@@ -3199,6 +3202,7 @@ void split_page(struct page *page, unsigned int order)
 	VM_BUG_ON_PAGE(PageCompound(page), page);
 	VM_BUG_ON_PAGE(!page_count(page), page);
 
+	kmsan_split_page(page, order);
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, order);
@@ -3349,6 +3353,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 /*
  * Allocate a page from the given zone. Use pcplists for order-0 allocations.
  */
+
+/*
+ * Do not instrument rmqueue() with KMSAN. This function may call
+ * __msan_poison_alloca() through a call to set_pfnblock_flags_mask().
+ * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it
+ * may call rmqueue() again, which will result in a deadlock.
+ */
+__no_sanitize_memory
 static inline
 struct page *rmqueue(struct zone *preferred_zone,
 			struct zone *zone, unsigned int order,
@@ -4862,6 +4874,11 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 
 	trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
 
+	if (page)
+		if (kmsan_alloc_page(page, order, gfp_mask)) {
+			__free_pages(page, order);
+			page = NULL;
+		}
 	return page;
 }
 EXPORT_SYMBOL(__alloc_pages_nodemask);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6b8eeb0ecee51..c5577e616c33b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -29,6 +29,7 @@
 #include <linux/rcupdate.h>
 #include <linux/pfn.h>
 #include <linux/kmemleak.h>
+#include <linux/kmsan.h>
 #include <linux/atomic.h>
 #include <linux/compiler.h>
 #include <linux/llist.h>
@@ -127,7 +128,8 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end)
 	} while (p4d++, addr = next, addr != end);
 }
 
-static void vunmap_page_range(unsigned long addr, unsigned long end)
+/* Exported for KMSAN, visible in mm/kmsan/kmsan.h only. */
+void __vunmap_page_range(unsigned long addr, unsigned long end)
 {
 	pgd_t *pgd;
 	unsigned long next;
@@ -141,6 +143,13 @@ static void vunmap_page_range(unsigned long addr, unsigned long end)
 		vunmap_p4d_range(pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
 }
+EXPORT_SYMBOL(__vunmap_page_range);
+
+static void vunmap_page_range(unsigned long addr, unsigned long end)
+{
+	kmsan_vunmap_page_range(addr, end);
+	__vunmap_page_range(addr, end);
+}
 
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, int *nr)
@@ -224,8 +233,11 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
  * will have pfns corresponding to the "pages" array.
  *
  * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N]
+ *
+ * This function is exported for use in KMSAN, but is only declared in KMSAN
+ * headers.
  */
-static int vmap_page_range_noflush(unsigned long start, unsigned long end,
+int __vmap_page_range_noflush(unsigned long start, unsigned long end,
 				   pgprot_t prot, struct page **pages)
 {
 	pgd_t *pgd;
@@ -245,6 +257,14 @@ static int vmap_page_range_noflush(unsigned long start, unsigned long end,
 
 	return nr;
 }
+EXPORT_SYMBOL(__vmap_page_range_noflush);
+
+static int vmap_page_range_noflush(unsigned long start, unsigned long end,
+				   pgprot_t prot, struct page **pages)
+{
+	kmsan_vmap_page_range_noflush(start, end, prot, pages);
+	return __vmap_page_range_noflush(start, end, prot, pages);
+}
 
 static int vmap_page_range(unsigned long start, unsigned long end,
 			   pgprot_t prot, struct page **pages)
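
A closing note on the mm/vmalloc.c changes: they follow a wrap-and-export
pattern. The existing static walkers keep their names but become thin
wrappers that invoke the KMSAN hook and then the renamed __-prefixed
implementation; the latter is exported (declared in mm/kmsan/kmsan.h only)
so that the KMSAN runtime can establish its metadata mappings through the
same page-table code without re-entering its own hook. The sketch below
restates the pattern in miniature; the model_*() names are invented and the
shadow offset is a placeholder, not how KMSAN actually locates metadata.

/* Miniature model of the wrap-and-export pattern; invented names. */
#include <stdio.h>

/* The original walker, renamed with a "__" prefix and exported. */
static int model___vmap_range_noflush(unsigned long start, unsigned long end)
{
	printf("mapping pages at [%#lx, %#lx)\n", start, end);
	return 0;
}

/*
 * KMSAN hook: map the shadow/origin ranges mirroring [start, end).
 * It must call the "__" variant directly: going through the wrapper
 * below would re-enter this hook and recurse without bound.
 */
static void model_kmsan_vmap_hook(unsigned long start, unsigned long end)
{
	const unsigned long shadow_off = 0x100000; /* placeholder only */

	model___vmap_range_noflush(start + shadow_off, end + shadow_off);
}

/* The wrapper keeps the old name, so existing callers are unchanged. */
static int model_vmap_range_noflush(unsigned long start, unsigned long end)
{
	model_kmsan_vmap_hook(start, end);
	return model___vmap_range_noflush(start, end);
}

int main(void)
{
	return model_vmap_range_noflush(0x1000, 0x3000);
}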