From patchwork Sun Mar 18 12:53:41 2018
X-Patchwork-Submitter: Abbott Liu
X-Patchwork-Id: 10291179
From: Abbott Liu
Subject: [PATCH 6/7] Initialize the mapping of KASan shadow memory
Date: Sun, 18 Mar 2018 20:53:41 +0800
Message-ID: <20180318125342.4278-7-liuwenliang@huawei.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20180318125342.4278-1-liuwenliang@huawei.com>
References: <20180318125342.4278-1-liuwenliang@huawei.com>
Cc: nicolas.pitre@linaro.org, tixy@linaro.org, grygorii.strashko@linaro.org,
 catalin.marinas@arm.com, linux@rasmusvillemoes.dk, dhowells@redhat.com,
 linux-mm@kvack.org, glider@google.com, mark.rutland@arm.com,
 kvmarm@lists.cs.columbia.edu, opendmb@gmail.com, mawilcox@microsoft.com,
 kasan-dev@googlegroups.com, geert@linux-m68k.org,
 linux-arm-kernel@lists.infradead.org, zhichao.huang@linaro.org,
 labbott@redhat.com, vladimir.murzin@arm.com, keescook@chromium.org,
 arnd@arndb.de, philip@cog.systems, jinb.park7@gmail.com,
 tglx@linutronix.de, dvyukov@google.com, ard.biesheuvel@linaro.org,
 linux-kernel@vger.kernel.org, james.morse@arm.com,
 kirill.shutemov@linux.intel.com, pombredanne@nexb.com,
 thgarnie@google.com, christoffer.dall@linaro.org

From: Andrey Ryabinin

This patch initializes the KASan shadow region's page tables and memory.
There are two stages in KASan initialization:

1. At the early boot stage the whole shadow region is mapped to a single
   physical page (kasan_zero_page). This is done by kasan_early_init(),
   which is called from __mmap_switched (arch/arm/kernel/head-common.S).

2. After paging_init() has run, kasan_zero_page is used as the zero
   shadow for memory that KASan does not need to track, and new shadow
   space is allocated for the memory that KASan does track. This is done
   by kasan_init(), which is called from setup_arch().

3. Add ARM LPAE support. ---Abbott Liu
   If LPAE is enabled, the KASan shadow region's mapping table needs to
   be copied in pgd_alloc().

4. On 64-bit machines size_t is unsigned long, but on 32-bit machines
   size_t is unsigned int, so a type conversion is needed in
   kasan_cache_create().
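For readers following along, here is a minimal sketch (not part of the
patch) of the address-to-shadow translation that the shadow region
described above is sized for. It assumes the generic KASan scaling of
KASAN_SHADOW_SCALE_SHIFT == 3, and KASAN_SHADOW_OFFSET names the
per-configuration constant checked by the BUILD_BUG_ON() in
kasan_early_init() below:

/*
 * Illustration only -- mirrors the generic kasan_mem_to_shadow()
 * helper: with KASAN_SHADOW_SCALE_SHIFT == 3 each shadow byte tracks
 * eight bytes of kernel memory, offset by KASAN_SHADOW_OFFSET.
 */
static inline void *mem_to_shadow(const void *addr)
{
    return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
        + KASAN_SHADOW_OFFSET;
}

The shadow for a region is therefore an eighth of its size, which is
why kasan_init() below walks each memblock region and populates real
shadow only for kasan_mem_to_shadow(start) .. kasan_mem_to_shadow(end).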
Cc: Andrey Ryabinin
Co-Developed-by: Abbott Liu
Reviewed-by: Russell King - ARM Linux
Reviewed-by: Florian Fainelli
Tested-by: Florian Fainelli
Signed-off-by: Abbott Liu
---
 arch/arm/include/asm/kasan.h       |  23 +++
 arch/arm/include/asm/pgalloc.h     |   7 +-
 arch/arm/include/asm/thread_info.h |   4 +
 arch/arm/kernel/head-common.S      |   3 +
 arch/arm/kernel/setup.c            |   2 +
 arch/arm/mm/Makefile               |   3 +
 arch/arm/mm/kasan_init.c           | 290 +++++++++++++++++++++++++++++++++++++
 arch/arm/mm/pgd.c                  |  14 ++
 mm/kasan/kasan.c                   |   5 +-
 9 files changed, 347 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan.h
 create mode 100644 arch/arm/mm/kasan_init.c

diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 0000000..5561461
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,23 @@
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 2d7344f..f170659 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -50,8 +50,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
  */
 #define pmd_alloc_one(mm,addr)    ({ BUG(); ((pmd_t *)2); })
 #define pmd_free(mm, pmd)         do { } while (0)
-#define pud_populate(mm,pmd,pte)  BUG()
-
+#ifndef CONFIG_KASAN
+#define pud_populate(mm, pmd, pte)  BUG()
+#else
+#define pud_populate(mm, pmd, pte)  do { } while (0)
+#endif
 #endif /* CONFIG_ARM_LPAE */
 
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index e71cc35..bc681a0 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -16,7 +16,11 @@
 #include
 #include
 
+#ifdef CONFIG_KASAN
+#define THREAD_SIZE_ORDER 2
+#else
 #define THREAD_SIZE_ORDER 1
+#endif
 #define THREAD_SIZE       (PAGE_SIZE << THREAD_SIZE_ORDER)
 #define THREAD_START_SP   (THREAD_SIZE - 8)
 
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index c79b829..20161e2 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -115,6 +115,9 @@ __mmap_switched:
 	str	r8, [r2]	@ Save atags pointer
 	cmp	r3, #0
 	strne	r10, [r3]	@ Save control register values
+#ifdef CONFIG_KASAN
+	bl	kasan_early_init
+#endif
 	mov	lr, #0
 	b	start_kernel
 ENDPROC(__mmap_switched)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index fc40a2b..81c3e9df 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include <asm/kasan.h>
 
 #include "atags.h"
 
@@ -1118,6 +1119,7 @@ void __init setup_arch(char **cmdline_p)
 	early_ioremap_reset();
 
 	paging_init(mdesc);
+	kasan_init();
 	request_standard_resources(mdesc);
 
 	if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 9dbb849..573203e 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -111,3 +111,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU)	+= cache-l2x0-pmu.o
 obj-$(CONFIG_CACHE_XSC3L2)	+= cache-xsc3l2.o
 obj-$(CONFIG_CACHE_TAUROS2)	+= cache-tauros2.o
 obj-$(CONFIG_CACHE_UNIPHIER)	+= cache-uniphier.o
+
+KASAN_SANITIZE_kasan_init.o := n
+obj-$(CONFIG_KASAN) += kasan_init.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 0000000..d316f37
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,290 @@
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "mm.h"
+
+static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(1ULL << 14);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+    return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+                                       BOOTMEM_ALLOC_ACCESSIBLE, node);
+}
+
+static void __init kasan_early_pmd_populate(unsigned long start,
+                                            unsigned long end, pud_t *pud)
+{
+    unsigned long addr;
+    unsigned long next;
+    pmd_t *pmd;
+
+    pmd = pmd_offset(pud, start);
+    for (addr = start; addr < end;) {
+        pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+        next = pmd_addr_end(addr, end);
+        addr = next;
+        flush_pmd_entry(pmd);
+        pmd++;
+    }
+}
+
+static void __init kasan_early_pud_populate(unsigned long start,
+                                            unsigned long end, pgd_t *pgd)
+{
+    unsigned long addr;
+    unsigned long next;
+    pud_t *pud;
+
+    pud = pud_offset(pgd, start);
+    for (addr = start; addr < end;) {
+        next = pud_addr_end(addr, end);
+        kasan_early_pmd_populate(addr, next, pud);
+        addr = next;
+        pud++;
+    }
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgdp)
+{
+    int i;
+    unsigned long start = KASAN_SHADOW_START;
+    unsigned long end = KASAN_SHADOW_END;
+    unsigned long addr;
+    unsigned long next;
+    pgd_t *pgd;
+
+    for (i = 0; i < PTRS_PER_PTE; i++)
+        set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+                   &kasan_zero_pte[i], pfn_pte(
+                        virt_to_pfn(kasan_zero_page),
+                        __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY
+                                 | L_PTE_XN)));
+
+    pgd = pgd_offset_k(start);
+    for (addr = start; addr < end;) {
+        next = pgd_addr_end(addr, end);
+        kasan_early_pud_populate(addr, next, pgd);
+        addr = next;
+        pgd++;
+    }
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+    struct proc_info_list *list;
+
+    /*
+     * locate processor in the list of supported processor
+     * types.
+     * The linker builds this table for us from the
+     * entries in arch/arm/mm/proc-*.S
+     */
+    list = lookup_processor_type(read_cpuid_id());
+    if (list) {
+#ifdef MULTI_CPU
+        processor = *list->proc;
+#endif
+    }
+
+    BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+    kasan_map_early_shadow(swapper_pg_dir);
+}
+
+static void __init clear_pgds(unsigned long start,
+                              unsigned long end)
+{
+    for (; start && start < end; start += PMD_SIZE)
+        pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+    pte_t *pte = pte_offset_kernel(pmd, addr);
+
+    if (pte_none(*pte)) {
+        pte_t entry;
+        void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+        if (!p)
+            return NULL;
+        entry = pfn_pte(virt_to_pfn(p),
+                        __pgprot(pgprot_val(PAGE_KERNEL)));
+        set_pte_at(&init_mm, addr, pte, entry);
+    }
+    return pte;
+}
+
+pmd_t * __meminit kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+    pmd_t *pmd = pmd_offset(pud, addr);
+
+    if (pmd_none(*pmd)) {
+        void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+        if (!p)
+            return NULL;
+        pmd_populate_kernel(&init_mm, pmd, p);
+    }
+    return pmd;
+}
+
+pud_t * __meminit kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+    pud_t *pud = pud_offset(pgd, addr);
+
+    if (pud_none(*pud)) {
+        void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+        if (!p)
+            return NULL;
+        pr_err("populating pud addr %lx\n", addr);
+        pud_populate(&init_mm, pud, p);
+    }
+    return pud;
+}
+
+pgd_t * __meminit kasan_pgd_populate(unsigned long addr, int node)
+{
+    pgd_t *pgd = pgd_offset_k(addr);
+
+    if (pgd_none(*pgd)) {
+        void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+        if (!p)
+            return NULL;
+        pgd_populate(&init_mm, pgd, p);
+    }
+    return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end,
+                                 int node)
+{
+    unsigned long addr = start;
+    pgd_t *pgd;
+    pud_t *pud;
+    pmd_t *pmd;
+    pte_t *pte;
+
+    pr_info("populating shadow for %lx, %lx\n", start, end);
+
+    for (; addr < end; addr += PAGE_SIZE) {
+        pgd = kasan_pgd_populate(addr, node);
+        if (!pgd)
+            return -ENOMEM;
+
+        pud = kasan_pud_populate(pgd, addr, node);
+        if (!pud)
+            return -ENOMEM;
+
+        pmd = kasan_pmd_populate(pud, addr, node);
+        if (!pmd)
+            return -ENOMEM;
+
+        pte = kasan_pte_populate(pmd, addr, node);
+        if (!pte)
+            return -ENOMEM;
+    }
+    return 0;
+}
+
+
+void __init kasan_init(void)
+{
+    struct memblock_region *reg;
+    u64 orig_ttbr0;
+    int i;
+
+    /*
+     * We are going to perform proper setup of shadow memory.
+     * At first we should unmap the early shadow (the clear_pgds() call
+     * below). However, instrumented code can't execute without shadow
+     * memory, so tmp_pgd_table and tmp_pmd_table are used to keep the
+     * early shadow mapped until the full shadow setup is finished.
+     */
+    orig_ttbr0 = get_ttbr0();
+
+#ifdef CONFIG_ARM_LPAE
+    memcpy(tmp_pmd_table,
+           pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+           sizeof(tmp_pmd_table));
+    memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+    set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+            __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+    set_ttbr0(__pa(tmp_pgd_table));
+#else
+    memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+    set_ttbr0((u64)__pa(tmp_pgd_table));
+#endif
+    flush_cache_all();
+    local_flush_bp_all();
+    local_flush_tlb_all();
+
+    clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+    kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+                               kasan_mem_to_shadow((void *)-1UL) + 1);
+
+    for_each_memblock(memory, reg) {
+        void *start = __va(reg->base);
+        void *end = __va(reg->base + reg->size);
+
+        if (reg->base + reg->size > arm_lowmem_limit)
+            end = __va(arm_lowmem_limit);
+        if (start >= end)
+            break;
+
+        create_mapping((unsigned long)kasan_mem_to_shadow(start),
+                       (unsigned long)kasan_mem_to_shadow(end),
+                       NUMA_NO_NODE);
+    }
+
+    /*
+     * 1. Modules' global variables live in MODULES_VADDR ~ MODULES_END,
+     *    so we need a real shadow mapping for them.
+     * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE and the shadow
+     *    of MODULES_VADDR ~ MODULES_END fall in the same PMD_SIZE
+     *    region, so we can't use kasan_populate_zero_shadow here.
+     */
+    create_mapping(
+        (unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+        (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
+                                                    PMD_SIZE)),
+        NUMA_NO_NODE);
+
+    /*
+     * KASan may reuse the contents of kasan_zero_pte directly, so we
+     * should make sure that it maps the zero page read-only.
+     */
+    for (i = 0; i < PTRS_PER_PTE; i++)
+        set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+                   &kasan_zero_pte[i],
+                   pfn_pte(virt_to_pfn(kasan_zero_page),
+                           __pgprot(pgprot_val(PAGE_KERNEL)
+                                    | L_PTE_RDONLY)));
+    memset(kasan_zero_page, 0, PAGE_SIZE);
+    set_ttbr0(orig_ttbr0);
+    flush_cache_all();
+    local_flush_bp_all();
+    local_flush_tlb_all();
+    pr_info("Kernel address sanitizer initialized\n");
+    init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index 61e281c..4644a21 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -64,6 +64,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 	new_pmd = pmd_alloc(mm, new_pud, 0);
 	if (!new_pmd)
 		goto no_pmd;
+#ifdef CONFIG_KASAN
+	/*
+	 * Copy PMD table for KASAN shadow mappings.
+	 */
+	init_pgd = pgd_offset_k(TASK_SIZE);
+	init_pud = pud_offset(init_pgd, TASK_SIZE);
+	init_pmd = pmd_offset(init_pud, TASK_SIZE);
+	new_pmd = pmd_offset(new_pud, TASK_SIZE);
+	memcpy(new_pmd, init_pmd,
+	       (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
+	       * sizeof(pmd_t));
+	clean_dcache_area(new_pmd, PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
 #endif
 
 	if (!vectors_high()) {
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 104839a..af67b64 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -365,8 +365,9 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
-	*size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
-					optimal_redzone(cache->object_size)));
+	*size = min_t(unsigned long, KMALLOC_MAX_SIZE,
+		      max(*size, cache->object_size +
+			  optimal_redzone(cache->object_size)));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.