From patchwork Wed Jul 31 07:15:48 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11067183
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com
Cc: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v3 1/3] kasan: support backing vmalloc space with real shadow memory
Date: Wed, 31 Jul 2019 17:15:48 +1000
Message-Id: <20190731071550.31814-2-dja@axtens.net>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190731071550.31814-1-dja@axtens.net>
References: <20190731071550.31814-1-dja@axtens.net>

Hook into vmalloc and vmap, and dynamically allocate real shadow memory
to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings.
Allocate a backing page the first time a mapping in vmalloc space uses
a particular page of the shadow region. Keep this page around
regardless of whether the mapping is later freed - in the mean time the
page could have become shared by another vmalloc mapping.

This can in theory lead to unbounded memory growth, but the vmalloc
allocator is pretty good at reusing addresses, so the practical memory
usage grows at first but then stays fairly stable.

This requires architecture support to actually use: arches must stop
mapping the read-only zero page over the portion of the shadow region
that covers the vmalloc space and instead leave it unmapped.

This allows KASAN to work with VMAP_STACK, and will be needed for
architectures that do not have a separate module space (e.g. powerpc64,
which I am currently working on). It also allows relaxing the module
alignment back to PAGE_SIZE.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Signed-off-by: Daniel Axtens
Acked-by: Vasily Gorbik
---

v2: let kasan_unpoison_shadow deal with ranges that do not use a
    full shadow byte.

v3: relax module alignment
    rename to kasan_populate_vmalloc which is a much better name
    deal with concurrency correctly

---
 Documentation/dev-tools/kasan.rst | 60 ++++++++++++++++++++++
 include/linux/kasan.h             | 16 ++++++
 include/linux/moduleloader.h      |  2 +-
 lib/Kconfig.kasan                 | 16 ++++++
 lib/test_kasan.c                  | 26 ++++++++++
 mm/kasan/common.c                 | 83 +++++++++++++++++++++++++++++++
 mm/kasan/generic_report.c         |  3 ++
 mm/kasan/kasan.h                  |  1 +
 mm/vmalloc.c                      | 15 +++++-
 9 files changed, 220 insertions(+), 2 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index b72d07d70239..35fda484a672 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -215,3 +215,63 @@ brk handler is used to print bug reports.
 A potential expansion of this mode is a hardware tag-based mode, which would
 use hardware memory tagging support instead of compiler instrumentation and
 manual shadow memory manipulation.
+
+What memory accesses are sanitised by KASAN?
+--------------------------------------------
+
+The kernel maps memory in a number of different parts of the address
+space. This poses something of a problem for KASAN, which requires
+that all addresses accessed by instrumented code have a valid shadow
+region.
+
+The range of kernel virtual addresses is large: there is not enough
+real memory to support a real shadow region for every address that
+could be accessed by the kernel.
+
+By default
+~~~~~~~~~~
+
+By default, architectures only map real memory over the shadow region
+for the linear mapping (and potentially other small areas). For all
+other areas - such as vmalloc and vmemmap space - a single read-only
+page is mapped over the shadow area. This read-only shadow page
+declares all memory accesses as permitted.
+
+This presents a problem for modules: they do not live in the linear
+mapping, but in a dedicated module space. By hooking in to the module
+allocator, KASAN can temporarily map real shadow memory to cover
+them. This allows detection of invalid accesses to module globals, for
+example.
+
+This also creates an incompatibility with ``VMAP_STACK``: if the stack
+lives in vmalloc space, it will be shadowed by the read-only page, and
+the kernel will fault when trying to set up the shadow data for stack
+variables.
+
+CONFIG_KASAN_VMALLOC
+~~~~~~~~~~~~~~~~~~~~
+
+With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
+cost of greater memory usage. Currently this is only supported on x86.
+
+This works by hooking into vmalloc and vmap, and dynamically
+allocating real shadow memory to back the mappings.
+
+Most mappings in vmalloc space are small, requiring less than a full
+page of shadow space. Allocating a full shadow page per mapping would
+therefore be wasteful. Furthermore, to ensure that different mappings
+use different shadow pages, mappings would have to be aligned to
+``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
+
+Instead, we share backing space across multiple mappings. We allocate
+a backing page the first time a mapping in vmalloc space uses a
+particular page of the shadow region. We keep this page around
+regardless of whether the mapping is later freed - in the mean time
+this page could have become shared by another vmalloc mapping.
+
+This can in theory lead to unbounded memory growth, but the vmalloc
+allocator is pretty good at reusing addresses, so the practical memory
+usage grows at first but then stays fairly stable.
+
+This allows ``VMAP_STACK`` support on x86, and enables support of
+architectures that do not have a fixed module region.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index cc8a03cc9674..ec81113fcee4 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -70,8 +70,18 @@ struct kasan_cache {
 	int free_meta_offset;
 };

+/*
+ * These functions provide a special case to support backing module
+ * allocations with real shadow memory. With KASAN vmalloc, the special
+ * case is unnecessary, as the work is handled in the generic case.
+ */
+#ifndef CONFIG_KASAN_VMALLOC
 int kasan_module_alloc(void *addr, size_t size);
 void kasan_free_shadow(const struct vm_struct *vm);
+#else
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+#endif

 int kasan_add_zero_shadow(void *start, unsigned long size);
 void kasan_remove_zero_shadow(void *start, unsigned long size);
@@ -194,4 +204,10 @@ static inline void *kasan_reset_tag(const void *addr)

 #endif /* CONFIG_KASAN_SW_TAGS */

+#ifdef CONFIG_KASAN_VMALLOC
+void kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area);
+#else
+static inline void kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area) {}
+#endif
+
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 5229c18025e9..ca92aea8a6bd 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod);
 /* Any cleanup before freeing mod->module_init */
 void module_arch_freeing_init(struct module *mod);

-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC)
 #include
 #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
 #else
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 4fafba1a923b..a320dc2e9317 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,9 @@ config HAVE_ARCH_KASAN
 config HAVE_ARCH_KASAN_SW_TAGS
 	bool

+config HAVE_ARCH_KASAN_VMALLOC
+	bool
+
 config CC_HAS_KASAN_GENERIC
 	def_bool $(cc-option, -fsanitize=kernel-address)

@@ -135,6 +138,19 @@ config KASAN_S390_4_LEVEL_PAGING
 	  to 3TB of RAM with KASan enabled). This options allows to force
 	  4-level paging instead.

+config KASAN_VMALLOC
+	bool "Back mappings in vmalloc space with real shadow memory"
+	depends on KASAN && HAVE_ARCH_KASAN_VMALLOC
+	help
+	  By default, the shadow region for vmalloc space is the read-only
+	  zero page. This means that KASAN cannot detect errors involving
+	  vmalloc space.
+
+	  Enabling this option will hook in to vmap/vmalloc and back those
+	  mappings with real shadow memory allocated on demand. This allows
+	  for KASAN to detect more sorts of errors (and to support vmapped
+	  stacks), but at the cost of higher memory usage.
+
 config TEST_KASAN
 	tristate "Module for testing KASAN for bug detection"
 	depends on m && KASAN
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index b63b367a94e8..d375246f5f96 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <linux/vmalloc.h>

 /*
  * Note: test functions are marked noinline so that their names appear in
@@ -709,6 +710,30 @@ static noinline void __init kmalloc_double_kzfree(void)
 	kzfree(ptr);
 }

+#ifdef CONFIG_KASAN_VMALLOC
+static noinline void __init vmalloc_oob(void)
+{
+	void *area;
+
+	pr_info("vmalloc out-of-bounds\n");
+
+	/*
+	 * We have to be careful not to hit the guard page.
+	 * The MMU will catch that and crash us.
+	 */
+	area = vmalloc(3000);
+	if (!area) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	((volatile char *)area)[3100];
+	vfree(area);
+}
+#else
+static void __init vmalloc_oob(void) {}
+#endif
+
 static int __init kmalloc_tests_init(void)
 {
 	/*
@@ -752,6 +777,7 @@ static int __init kmalloc_tests_init(void)
 	kasan_strings();
 	kasan_bitops();
 	kmalloc_double_kzfree();
+	vmalloc_oob();

 	kasan_restore_multi_shot(multishot);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..e1a748c3f3db 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
 	/* The object will be poisoned by page_alloc. */
 }

+#ifndef CONFIG_KASAN_VMALLOC
 int kasan_module_alloc(void *addr, size_t size)
 {
 	void *ret;
@@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
 	if (vm->flags & VM_KASAN)
 		vfree(kasan_mem_to_shadow(vm->addr));
 }
+#endif

 extern void __kasan_report(unsigned long addr, size_t size, bool is_write,
 		unsigned long ip);
@@ -722,3 +724,84 @@ static int __init kasan_memhotplug_init(void)

 core_initcall(kasan_memhotplug_init);
 #endif
+
+#ifdef CONFIG_KASAN_VMALLOC
+void kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
+{
+	unsigned long shadow_alloc_start, shadow_alloc_end;
+	unsigned long addr;
+	unsigned long page;
+	pgd_t *pgdp;
+	p4d_t *p4dp;
+	pud_t *pudp;
+	pmd_t *pmdp;
+	pte_t *ptep;
+	pte_t pte;
+
+	shadow_alloc_start = ALIGN_DOWN(
+		(unsigned long)kasan_mem_to_shadow(area->addr),
+		PAGE_SIZE);
+	shadow_alloc_end = ALIGN(
+		(unsigned long)kasan_mem_to_shadow(area->addr + area->size),
+		PAGE_SIZE);
+
+	addr = shadow_alloc_start;
+	do {
+		pgdp = pgd_offset_k(addr);
+		p4dp = p4d_alloc(&init_mm, pgdp, addr);
+		pudp = pud_alloc(&init_mm, p4dp, addr);
+		pmdp = pmd_alloc(&init_mm, pudp, addr);
+		ptep = pte_alloc_kernel(pmdp, addr);
+
+		/*
+		 * The pte may not be none if we allocated the page earlier to
+		 * use part of it for another allocation.
+		 *
+		 * Because we only ever add to the vmalloc shadow pages and
+		 * never free any, we can optimise here by checking for the pte
+		 * presence outside the lock. It's OK to race with another
+		 * allocation here because we do the 'real' test under the lock.
+		 * This just allows us to save creating/freeing the new shadow
+		 * page in the common case.
+		 */
+		if (!pte_none(*ptep))
+			continue;
+
+		/*
+		 * We're probably going to need to populate the shadow.
+		 * Allocate and poison the shadow page now, outside the lock.
+		 */
+		page = __get_free_page(GFP_KERNEL);
+		memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+		pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
+
+		spin_lock(&init_mm.page_table_lock);
+		if (pte_none(*ptep)) {
+			set_pte_at(&init_mm, addr, ptep, pte);
+			page = 0;
+		}
+		spin_unlock(&init_mm.page_table_lock);
+
+		/* catch the case where we raced and don't need the page */
+		if (page)
+			free_page(page);
+	} while (addr += PAGE_SIZE, addr != shadow_alloc_end);
+
+	kasan_unpoison_shadow(area->addr, requested_size);
+
+	/*
+	 * We have to poison the remainder of the allocation each time, not
+	 * just when the shadow page is first allocated, because vmalloc may
+	 * reuse addresses, and an early large allocation would cause us to
+	 * miss OOBs in future smaller allocations.
+	 *
+	 * The alternative is to poison the shadow on vfree()/vunmap(). We
+	 * don't because unmapping the virtual addresses should be
+	 * sufficient to find most UAFs.
+	 */
+	requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
+	kasan_poison_shadow(area->addr + requested_size,
+			    area->size - requested_size,
+			    KASAN_VMALLOC_INVALID);
+}
+#endif
diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
index 36c645939bc9..2d97efd4954f 100644
--- a/mm/kasan/generic_report.c
+++ b/mm/kasan/generic_report.c
@@ -86,6 +86,9 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info)
 	case KASAN_ALLOCA_RIGHT:
 		bug_type = "alloca-out-of-bounds";
 		break;
+	case KASAN_VMALLOC_INVALID:
+		bug_type = "vmalloc-out-of-bounds";
+		break;
 	}

 	return bug_type;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 014f19e76247..8b1f2fbc780b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -25,6 +25,7 @@
 #endif

 #define KASAN_GLOBAL_REDZONE    0xFA  /* redzone for global variable */
+#define KASAN_VMALLOC_INVALID   0xF9  /* unallocated space in vmapped page */

 /*
  * Stack redzone shadow values
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4fa8d84599b0..406097ff8ced 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2012,6 +2012,15 @@ static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
 	va->vm = vm;
 	va->flags |= VM_VM_AREA;
 	spin_unlock(&vmap_area_lock);
+
+	/*
+	 * If we are in vmalloc space we need to cover the shadow area with
+	 * real memory. If we come here through VM_ALLOC, this is done
+	 * by a higher level function that has access to the true size,
+	 * which might not be a full page.
+	 */
+	if (is_vmalloc_addr(vm->addr) && !(vm->flags & VM_ALLOC))
+		kasan_populate_vmalloc(vm->size, vm);
 }

 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -2483,6 +2492,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!addr)
 		return NULL;

+	kasan_populate_vmalloc(real_size, area);
+
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.
@@ -3324,9 +3335,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	spin_unlock(&vmap_area_lock);

 	/* insert all vm's */
-	for (area = 0; area < nr_vms; area++)
+	for (area = 0; area < nr_vms; area++) {
 		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
 				 pcpu_get_vm_areas);
+		kasan_populate_vmalloc(sizes[area], vms[area]);
+	}

 	kfree(vas);
 	return vms;
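
(Aside: the KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE alignment figure in the
commit message above falls straight out of the generic KASAN
address-to-shadow translation. The fragment below is a stand-alone
user-space illustration, not part of the patch: mem_to_shadow() mirrors
the kernel's kasan_mem_to_shadow(), the scale shift of 3 is the generic
KASAN value, and the shadow offset is assumed to be the x86-64 default;
the vmalloc address is purely hypothetical.)

#include <stdio.h>

/* Assumed for illustration: 1 shadow byte covers 2^3 = 8 bytes of
 * memory, and the shadow offset is the x86-64 default. */
#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL
#define PAGE_SIZE                4096UL

/* Mirrors the kernel's kasan_mem_to_shadow(). */
static unsigned long mem_to_shadow(unsigned long addr)
{
        return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
        /* A hypothetical page-aligned vmalloc address. */
        unsigned long va = 0xffffc90000000000UL;

        /* One page of shadow covers PAGE_SIZE << 3 = 32 KiB of address
         * space, i.e. 8 vmalloc pages share a single shadow page - hence
         * the KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE alignment that
         * per-mapping shadow pages would otherwise have required. */
        printf("shadow(va)           = 0x%lx\n", mem_to_shadow(va));
        printf("shadow(va + 8 pages) = 0x%lx\n",
               mem_to_shadow(va + 8 * PAGE_SIZE));
        return 0;
}
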
From patchwork Wed Jul 31 07:15:49 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11067185
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com
Cc: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v3 2/3] fork: support VMAP_STACK with KASAN_VMALLOC
Date: Wed, 31 Jul 2019 17:15:49 +1000
Message-Id: <20190731071550.31814-3-dja@axtens.net>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190731071550.31814-1-dja@axtens.net>
References: <20190731071550.31814-1-dja@axtens.net>

Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

 - clear the shadow region of vmapped stacks when swapping them in
 - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
 arch/Kconfig  | 9 +++++----
 kernel/fork.c | 4 ++++
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index a7b57dd42c26..e791196005e1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -825,16 +825,17 @@ config HAVE_ARCH_VMAP_STACK
 config VMAP_STACK
 	default y
 	bool "Use a virtually-mapped stack"
-	depends on HAVE_ARCH_VMAP_STACK && !KASAN
+	depends on HAVE_ARCH_VMAP_STACK
+	depends on !KASAN || KASAN_VMALLOC
 	---help---
 	  Enable this if you want the use virtually-mapped kernel stacks
 	  with guard pages. This causes kernel stack overflows to be
 	  caught immediately rather than causing difficult-to-diagnose
 	  corruption.

-	  This is presently incompatible with KASAN because KASAN expects
-	  the stack to map directly to the KASAN shadow map using a formula
-	  that is incorrect if the stack is in vmalloc space.
+	  To use this with KASAN, the architecture must support backing
+	  virtual mappings with real shadow memory, and KASAN_VMALLOC must
+	  be enabled.

 config ARCH_OPTIONAL_KERNEL_RWX
 	def_bool n
diff --git a/kernel/fork.c b/kernel/fork.c
index d8ae0f1b4148..ce3150fe8ff2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include
 #include
 #include
+#include <linux/kasan.h>

 #include
 #include
@@ -215,6 +216,9 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
 		if (!s)
 			continue;

+		/* Clear the KASAN shadow of the stack. */
+		kasan_unpoison_shadow(s->addr, THREAD_SIZE);
+
 		/* Clear stale pointers from reused stack. */
 		memset(s->addr, 0, THREAD_SIZE);
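
(Aside: the kasan_unpoison_shadow() call added above resets the
recycled stack's shadow to "fully accessible" before the stack is wiped
and handed to the new task, since the previous owner may have left
stale poison behind. The toy user-space model below shows the idea; the
shadow array, poison()/unpoison() helpers and sizes are all made up for
illustration and are not the kernel's implementation.)

#include <stdio.h>
#include <string.h>

/* Toy model: one shadow byte per 8 bytes of "memory". 0 means fully
 * accessible; a non-zero value marks the region as poisoned. 0xF9 is
 * the value this series introduces for unallocated vmalloc-backed
 * space (KASAN_VMALLOC_INVALID). */
#define SCALE           8
#define STACK_SIZE      (16 * 1024)        /* stand-in for THREAD_SIZE */
#define SHADOW_SIZE     (STACK_SIZE / SCALE)
#define VMALLOC_INVALID 0xF9

static unsigned char shadow[SHADOW_SIZE];

static void poison(size_t off, size_t len, unsigned char val)
{
        memset(shadow + off / SCALE, val, len / SCALE);
}

static void unpoison(size_t off, size_t len)
{
        poison(off, len, 0);
}

static int access_ok(size_t off)
{
        return shadow[off / SCALE] == 0;
}

int main(void)
{
        /* Pretend a previous user of this cached stack left some
         * shadow poisoned behind. */
        poison(STACK_SIZE - 1024, 1024, VMALLOC_INVALID);
        printf("before unpoison: access_ok(last byte) = %d\n",
               access_ok(STACK_SIZE - 1));

        /* What the fork-path hook does conceptually: make the whole
         * stack accessible again before it is wiped and reused. */
        unpoison(0, STACK_SIZE);
        printf("after unpoison:  access_ok(last byte) = %d\n",
               access_ok(STACK_SIZE - 1));
        return 0;
}
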
From patchwork Wed Jul 31 07:15:50 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11067187

From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com
Cc: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v3 3/3] x86/kasan: support KASAN_VMALLOC
Date: Wed, 31 Jul 2019 17:15:50 +1000
Message-Id: <20190731071550.31814-4-dja@axtens.net>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190731071550.31814-1-dja@axtens.net>
References: <20190731071550.31814-1-dja@axtens.net>

In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---

v2: move from faulting in shadow pgds to prepopulating

---
 arch/x86/Kconfig            |  1 +
 arch/x86/mm/kasan_init_64.c | 61 +++++++++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 222855cc0158..40562cc3771f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -134,6 +134,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
+	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..2f57c4ddff61 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -245,6 +245,52 @@ static void __init kasan_map_early_shadow(pgd_t *pgd)
 	} while (pgd++, addr = next, addr != end);
 }

+static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
+					       unsigned long addr,
+					       unsigned long end,
+					       int nid)
+{
+	p4d_t *p4d;
+	unsigned long next;
+	void *p;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (p4d_none(*p4d)) {
+			p = early_alloc(PAGE_SIZE, nid, true);
+			p4d_populate(&init_mm, p4d, p);
+		}
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_shallow_populate_pgds(void *start, void *end)
+{
+	unsigned long addr, next;
+	pgd_t *pgd;
+	void *p;
+	int nid = early_pfn_to_nid((unsigned long)start);
+
+	addr = (unsigned long)start;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, (unsigned long)end);
+
+		if (pgd_none(*pgd)) {
+			p = early_alloc(PAGE_SIZE, nid, true);
+			pgd_populate(&init_mm, pgd, p);
+		}
+
+		/*
+		 * we need to populate p4ds to be synced when running in
+		 * four level mode - see sync_global_pgds_l4()
+		 */
+		kasan_shallow_populate_p4ds(pgd, addr, next, nid);
+	} while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
+
 #ifdef CONFIG_KASAN_INLINE
 static int kasan_die_handler(struct notifier_block *self,
 			     unsigned long val,
@@ -352,9 +398,24 @@ void __init kasan_init(void)
 	shadow_cpu_entry_end = (void *)round_up(
 			(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);

+	/*
+	 * If we're in full vmalloc mode, don't back vmalloc space with early
+	 * shadow pages. Instead, prepopulate pgds/p4ds so they are synced to
+	 * the global table and we can populate the lower levels on demand.
+	 */
+#ifdef CONFIG_KASAN_VMALLOC
+	kasan_shallow_populate_pgds(
+		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+		kasan_mem_to_shadow((void *)VMALLOC_END));
+
+	kasan_populate_early_shadow(
+		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
+		shadow_cpu_entry_begin);
+#else
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		shadow_cpu_entry_begin);
+#endif

 	kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
 			      (unsigned long)shadow_cpu_entry_end, 0);
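
(Aside: to make the shallow-populated range concrete, the stand-alone
sketch below computes the shadow addresses that kasan_init() passes to
kasan_shallow_populate_pgds() and kasan_populate_early_shadow() above.
The PAGE_OFFSET, MAXMEM, VMALLOC_END and shadow-offset values are
assumptions taken from the documented x86-64 4-level layout; the kernel
of course takes them from its own headers at build time.)

#include <stdio.h>

/* Assumed x86-64 4-level constants (see Documentation/x86/x86_64/mm.txt);
 * hard-coded here only to illustrate which slice of the shadow map is
 * shallow-populated. */
#define PAGE_OFFSET              0xffff888000000000UL
#define MAXMEM                   (1UL << 46)          /* 64 TiB */
#define VMALLOC_END              0xffffe8ffffffffffUL
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL
#define KASAN_SHADOW_SCALE_SHIFT 3

static unsigned long mem_to_shadow(unsigned long addr)
{
        return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
        /* The range handed to kasan_shallow_populate_pgds(): shadow for
         * everything from the end of the linear map's shadow up to the
         * end of the vmalloc area's shadow. */
        unsigned long start = mem_to_shadow(PAGE_OFFSET + MAXMEM);
        unsigned long end   = mem_to_shadow(VMALLOC_END);

        printf("shallow-populated shadow: 0x%016lx - 0x%016lx\n", start, end);
        printf("early zero shadow resumes at 0x%016lx\n",
               mem_to_shadow(VMALLOC_END + 1));
        return 0;
}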