From patchwork Tue Jun 5 06:14:14 2018
X-Patchwork-Submitter: Abbott Liu
X-Patchwork-Id: 10447743
From: Abbott Liu
Subject: [PATCH v5 4/6] Define the virtual space of KASan's shadow region
Date: Tue, 5 Jun 2018 14:14:14 +0800
Message-ID: <20180605061416.18690-5-liuwenliang@huawei.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20180605061416.18690-1-liuwenliang@huawei.com>
References: <20180605061416.18690-1-liuwenliang@huawei.com>
Cc: a.ryabinin@samsung.com, ryabinin.a.a@gmail.com

Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the ARM kernel address sanitizer.

    +----+ 0xffffffff
    |    |
    |    |
    |    |
    +----+ CONFIG_PAGE_OFFSET
    |    |\
    |    | |-> module virtual address space area.
    |    |/
    +----+ MODULE_VADDR = KASAN_SHADOW_END
    |    |\
    |    | |-> the shadow area of kernel virtual address.
    |    |/
    +----+ TASK_SIZE (start of kernel space) = KASAN_SHADOW_START, the
    |    |\  shadow address of MODULE_VADDR
    |    | ---------------------+
    |    |                      |
    +    + KASAN_SHADOW_OFFSET  |-> the user space area. The kernel address
    |    |                      |   sanitizer does not use this space.
    |    | ---------------------+
    |    |/
    ------ 0

1) KASAN_SHADOW_OFFSET:
   This value is used to map an address to its corresponding shadow
   address by the following formula:
	shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

2) KASAN_SHADOW_START:
   This value is the shadow address of MODULE_VADDR. It is the start of
   the kernel virtual space.

3) KASAN_SHADOW_END:
   This value is the shadow address of 0x100000000. It is the end of the
   kernel address sanitizer's shadow area. It is also the start of the
   module area.

When KASan is enabled, the definition of TASK_SIZE is no longer an
8-bit rotated constant, so we need to modify the TASK_SIZE access code
in the *.S files.

Cc: Andrey Ryabinin
Reported-by: Ard Biesheuvel
Tested-by: Joel Stanley
Tested-by: Florian Fainelli
Tested-by: Abbott Liu
Signed-off-by: Abbott Liu
---
 arch/arm/include/asm/kasan_def.h | 64 ++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/memory.h    |  5 ++++
 arch/arm/kernel/entry-armv.S     |  5 ++--
 arch/arm/kernel/entry-common.S   |  9 ++++--
 arch/arm/mm/init.c               |  6 ++++
 arch/arm/mm/mmu.c                |  7 ++++-
 6 files changed, 90 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm/include/asm/kasan_def.h

diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 0000000..7b7f424
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,64 @@
+/*
+ * arch/arm/include/asm/kasan_def.h
+ *
+ * Copyright (c) 2018 Huawei Technologies Co., Ltd.
+ *
+ * Author: Abbott Liu
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ *    +----+ 0xffffffff
+ *    |    |
+ *    |    |
+ *    |    |
+ *    +----+ CONFIG_PAGE_OFFSET
+ *    |    |\
+ *    |    | |-> module virtual address space area.
+ *    |    |/
+ *    +----+ MODULE_VADDR = KASAN_SHADOW_END
+ *    |    |\
+ *    |    | |-> the shadow area of kernel virtual address.
+ *    |    |/
+ *    +----+ TASK_SIZE (start of kernel space) = KASAN_SHADOW_START, the
+ *    |    |\  shadow address of MODULE_VADDR
+ *    |    | ---------------------+
+ *    |    |                      |
+ *    +    + KASAN_SHADOW_OFFSET  |-> the user space area. The kernel address
+ *    |    |                      |   sanitizer does not use this space.
+ *    |    | ---------------------+
+ *    |    |/
+ *    ------ 0
+ *
+ * 1) KASAN_SHADOW_OFFSET:
+ *    This value is used to map an address to its corresponding shadow
+ *    address by the following formula:
+ *	shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ * 2) KASAN_SHADOW_START:
+ *    This value is the shadow address of MODULE_VADDR. It is the start
+ *    of the kernel virtual space.
+ *
+ * 3) KASAN_SHADOW_END:
+ *    This value is the shadow address of 0x100000000. It is the end of
+ *    the kernel address sanitizer's shadow area. It is also the start
+ *    of the module area.
+ *
+ */
+
+#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_END - (1<<29))
+
+#define KASAN_SHADOW_START	((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#define KASAN_SHADOW_END	(UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d..6e099a5 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -21,6 +21,7 @@
 #ifdef CONFIG_NEED_MACH_MEMORY_H
 #include <mach/memory.h>
 #endif
+#include <asm/kasan_def.h>
 
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
@@ -31,7 +32,11 @@
  * TASK_SIZE - the maximum size of a user space task.
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
  */
+#ifndef CONFIG_KASAN
 #define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE		(KASAN_SHADOW_START)
+#endif
 #define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)
 
 /*
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 1752033..b4de9e4 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -183,7 +183,7 @@ ENDPROC(__und_invalid)
 
 	get_thread_info tsk
 	ldr	r0, [tsk, #TI_ADDR_LIMIT]
-	mov	r1, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
 	str	r1, [tsk, #TI_ADDR_LIMIT]
 	str	r0, [sp, #SVC_ADDR_LIMIT]
 
@@ -437,7 +437,8 @@ ENDPROC(__fiq_abt)
 	@ if it was interrupted in a critical region.  Here we
 	@ perform a quick test inline since it should be false
 	@ 99.9999% of the time.  The rest is done out of line.
-	cmp	r4, #TASK_SIZE
+	ldr	r0, =TASK_SIZE
+	cmp	r4, r0
 	blhs	kuser_cmpxchg64_fixup
 #endif
 #endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 3c4f887..78046de 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -51,7 +51,8 @@ ret_fast_syscall:
 UNWIND(.cantunwind	)
 	disable_irq_notrace			@ disable interrupts
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -81,7 +82,8 @@ ret_fast_syscall:
 	str	r0, [sp, #S_R0 + S_OFF]!	@ save returned r0
 	disable_irq_notrace			@ disable interrupts
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
 	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -116,7 +118,8 @@ ret_slow_syscall:
 	disable_irq_notrace			@ disable interrupts
 ENTRY(ret_to_user_from_irq)
 	ldr	r2, [tsk, #TI_ADDR_LIMIT]
-	cmp	r2, #TASK_SIZE
+	ldr	r1, =TASK_SIZE
+	cmp	r2, r1
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]
 	tst	r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index c186474..9320cf5 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -538,6 +538,9 @@ void __init mem_init(void)
 #ifdef CONFIG_MODULES
 			"    modules : 0x%08lx - 0x%08lx   (%4ld MB)\n"
 #endif
+#ifdef CONFIG_KASAN
+			"    kasan   : 0x%08lx - 0x%08lx   (%4ld MB)\n"
+#endif
 			"      .text : 0x%p" " - 0x%p" "   (%4td kB)\n"
 			"      .init : 0x%p" " - 0x%p" "   (%4td kB)\n"
 			"      .data : 0x%p" " - 0x%p" "   (%4td kB)\n"
@@ -558,6 +561,9 @@ void __init mem_init(void)
 #ifdef CONFIG_MODULES
 			MLM(MODULES_VADDR, MODULES_END),
 #endif
+#ifdef CONFIG_KASAN
+			MLM(KASAN_SHADOW_START, KASAN_SHADOW_END),
+#endif
 
 			MLK_ROUNDUP(_text, _etext),
 			MLK_ROUNDUP(__init_begin, __init_end),
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e46a6a4..f5aa1de 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1251,9 +1251,14 @@ static inline void prepare_page_table(void)
 	/*
 	 * Clear out all the mappings below the kernel image.
 	 */
-	for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+	for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
 
+#ifdef CONFIG_KASAN
+	/* TASK_SIZE ~ MODULES_VADDR is KASan's shadow area -- skip over it */
+	addr = MODULES_VADDR;
+#endif
+
 #ifdef CONFIG_XIP_KERNEL
 	/* The XIP kernel is mapped in the module area -- skip over it */
 	addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
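
For reference, a minimal user-space sketch (not part of this patch) that
checks the shadow-mapping identities described in the commit message. It
assumes the common CONFIG_PAGE_OFFSET of 0xC0000000 (3G/1G split), so the
concrete numbers are illustrative only:

	/*
	 * Sketch only: recomputes the KASan shadow constants for an
	 * assumed PAGE_OFFSET and checks that shadow(MODULE_VADDR) ==
	 * KASAN_SHADOW_START and shadow(0x100000000) == KASAN_SHADOW_END.
	 */
	#include <stdio.h>

	#define PAGE_OFFSET		0xC0000000UL	/* assumed CONFIG_PAGE_OFFSET */
	#define SZ_16M			0x01000000UL
	#define KASAN_SHADOW_END	(PAGE_OFFSET - SZ_16M)
	#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_END - (1UL << 29))
	#define KASAN_SHADOW_START	((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
	#define MODULE_VADDR		KASAN_SHADOW_END	/* per the layout above */

	static unsigned long shadow_of(unsigned long addr)
	{
		/* shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET */
		return (addr >> 3) + KASAN_SHADOW_OFFSET;
	}

	int main(void)
	{
		/* The shadow of MODULE_VADDR is the start of the shadow region... */
		printf("shadow(MODULE_VADDR) = 0x%08lx, KASAN_SHADOW_START = 0x%08lx\n",
		       shadow_of(MODULE_VADDR), (unsigned long)KASAN_SHADOW_START);

		/* ...and the shadow of 0x100000000 is its end. */
		printf("shadow(4GiB)         = 0x%08lx, KASAN_SHADOW_END   = 0x%08lx\n",
		       (unsigned long)((0x100000000ULL >> 3) + KASAN_SHADOW_OFFSET),
		       (unsigned long)KASAN_SHADOW_END);
		return 0;
	}

With these assumed values, KASAN_SHADOW_END works out to 0xBF000000,
KASAN_SHADOW_OFFSET to 0x9F000000, and KASAN_SHADOW_START to 0xB6E00000,
so the shadow region sits directly below the 16 MB module area, as in the
diagram above.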