From patchwork Wed Feb 14 11:36:43 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10218483
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, will.deacon@arm.com, mark.rutland@arm.com
Cc: marc.zyngier@arm.com, catalin.marinas@arm.com, Ard Biesheuvel, suzuki.poulose@arm.com
Subject: [RFC PATCH v3 1/3] arm64/kernel: kaslr: reduce module randomization range to 4 GB
Date: Wed, 14 Feb 2018 11:36:43 +0000
Message-Id: <20180214113645.16793-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20180214113645.16793-1-ard.biesheuvel@linaro.org>
References: <20180214113645.16793-1-ard.biesheuvel@linaro.org>

We currently have to rely on the GCC large code model for KASLR for two
distinct but related reasons:
- if we enable full randomization, modules will be loaded very far away
  from the core kernel, where they are out of range for ADRP instructions,
- even without full randomization, the fact that the 128 MB module region
  is no longer fully reserved for kernel modules means that there is a
  small chance that the normal bottom-up allocation of other vmalloc
  regions collides with it, and uses up the range for other things.

Large model code is suboptimal: each symbol reference involves a literal
load that goes through the D-cache, reducing cache utilization. More
importantly, the literals are not instructions, yet they are part of
.text nonetheless, and hence mapped with executable permissions.

So let's get rid of our dependency on the large model for KASLR, by:
- reducing the full randomization range to 4 GB, thereby ensuring that
  ADRP references between modules and the kernel are always in range
  (the sketch below illustrates where the 4 GB bound comes from),
- reducing the spillover range to 4 GB as well, so that we fall back to
  a region that is still guaranteed to be in range,
- moving the randomization window of the core kernel to the middle of
  the VMALLOC space.

Note that KASAN always uses the module region outside of the vmalloc
space, so we keep the kernel close to that region if KASAN is enabled.
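The 4 GB figure used throughout this series corresponds to the reach of
ADRP: the instruction encodes a signed 21-bit page offset that is scaled
by the 4 KB page size, so a PC-relative reference can span roughly
+/- 4 GB. A minimal user-space sketch of that bound (illustration only,
not part of the patch):

/*
 * Illustration only, not part of the patch: the ADRP reach that the
 * 4 GB window is sized against. ADRP carries a signed 21-bit page
 * offset (immlo:immhi), scaled by the 4 KB page size.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const int imm_bits = 21;		/* 2 (immlo) + 19 (immhi) */
	int64_t max_pages = (1LL << (imm_bits - 1)) - 1;
	int64_t min_pages = -(1LL << (imm_bits - 1));

	/* reach from the page containing the instruction, in bytes */
	printf("ADRP reach: %lld .. %lld (~ +/- 4 GB)\n",
	       (long long)(min_pages * 4096),
	       (long long)(max_pages * 4096));
	return 0;
}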
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig         |  7 +++----
 arch/arm64/kernel/kaslr.c  | 20 ++++++++++++--------
 arch/arm64/kernel/module.c |  7 ++++---
 include/linux/sizes.h      |  2 ++
 4 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7381eeb7ef8e..ae7d3d4c0bbe 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1109,7 +1109,6 @@ config ARM64_MODULE_CMODEL_LARGE
 
 config ARM64_MODULE_PLTS
 	bool
-	select ARM64_MODULE_CMODEL_LARGE
 	select HAVE_MOD_ARCH_SPECIFIC
 
 config RELOCATABLE
@@ -1143,12 +1142,12 @@ config RANDOMIZE_BASE
 	  If unsure, say N.
 
 config RANDOMIZE_MODULE_REGION_FULL
-	bool "Randomize the module region independently from the core kernel"
+	bool "Randomize the module region over a 4 GB range"
 	depends on RANDOMIZE_BASE
 	default y
 	help
-	  Randomizes the location of the module region without considering the
-	  location of the core kernel. This way, it is impossible for modules
+	  Randomizes the location of the module region inside a 4 GB window
+	  covering the core kernel. This way, it is less likely for modules
 	  to leak information about the location of core kernel data structures
 	  but it does imply that function calls between modules and the core
 	  kernel will need to be resolved via veneers in the module PLT.

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 47080c49cc7e..17dbdb055314 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -117,13 +117,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * lower half of the VMALLOC area (VA_BITS - 2).
+	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * the lower and upper quarters to avoid colliding with other
+	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
 	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = seed & mask;
+	offset = BIT(VA_BITS - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
@@ -149,21 +151,23 @@ u64 __init kaslr_early_init(u64 dt_phys)
 		 * vmalloc region, since shadow memory is allocated for each
 		 * module at load time, whereas the vmalloc region is shadowed
 		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled.
+		 * region if KASAN is enabled, and put the kernel well within
+		 * 4 GB of the module region.
 		 */
-		return offset;
+		return offset % SZ_2G;
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
 		/*
-		 * Randomize the module region independently from the core
-		 * kernel. This prevents modules from leaking any information
+		 * Randomize the module region over a 4 GB window covering the
+		 * kernel. This reduces the risk of modules leaking information
 		 * about the address of the kernel itself, but results in
 		 * branches between modules and the core kernel that are
 		 * resolved via PLTs. (Branches between modules will be
 		 * resolved normally.)
 		 */
-		module_range = VMALLOC_END - VMALLOC_START - MODULES_VSIZE;
-		module_alloc_base = VMALLOC_START;
+		module_range = SZ_4G - (u64)(_end - _stext);
+		module_alloc_base = max((u64)_end + offset - SZ_4G,
+					(u64)MODULES_VADDR);
 	} else {
 		/*
 		 * Randomize the module region by setting module_alloc_base to
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index f469e0435903..10c6ab9534e8 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -55,9 +55,10 @@ void *module_alloc(unsigned long size)
 	 * less likely that the module region gets exhausted, so we
 	 * can simply omit this fallback in that case.
 	 */
-	p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
-				VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
-				NUMA_NO_NODE, __builtin_return_address(0));
+	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+				module_alloc_base + SZ_4G, GFP_KERNEL,
+				PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				__builtin_return_address(0));
 
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
diff --git a/include/linux/sizes.h b/include/linux/sizes.h
index ce3e8150c174..bc621db852d9 100644
--- a/include/linux/sizes.h
+++ b/include/linux/sizes.h
@@ -44,4 +44,6 @@
 #define SZ_1G				0x40000000
 #define SZ_2G				0x80000000
 
+#define SZ_4G				0x100000000ULL
+
 #endif /* __LINUX_SIZES_H__ */
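
As a rough illustration of the new window arithmetic in the kaslr.c hunk
above, here is a stand-alone user-space sketch; the MODULES_VADDR and
kernel image addresses in it are made up purely for the example:

/*
 * Illustration only, not part of the patch. The invariant the patch
 * relies on: the lowest possible module address and the end of the
 * kernel image are never more than 4 GB apart, so module-to-kernel
 * ADRP references stay in range.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_4G	0x100000000ULL

int main(void)
{
	/* hypothetical values, chosen only for the sake of the example */
	uint64_t modules_vaddr = 0xffff000008000000ULL;	/* MODULES_VADDR   */
	uint64_t kernel_end    = 0xffff000112000000ULL;	/* randomized _end */
	uint64_t kernel_size   = 0x01800000ULL;		/* _end - _stext   */

	/* same shape as the kaslr.c change: base at most 4 GB below _end */
	uint64_t module_range      = SZ_4G - kernel_size;
	uint64_t module_alloc_base = kernel_end - SZ_4G;

	if (module_alloc_base < modules_vaddr)
		module_alloc_base = modules_vaddr;	/* the max() clamp */

	assert(kernel_end - module_alloc_base <= SZ_4G);

	printf("module_alloc_base = %#llx, module_range = %#llx\n",
	       (unsigned long long)module_alloc_base,
	       (unsigned long long)module_range);
	return 0;
}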