From patchwork Tue Mar 6 17:15:32 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10262281
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, suzuki.poulose@arm.com, marc.zyngier@arm.com,
    catalin.marinas@arm.com, will.deacon@arm.com
Subject: [PATCH v4 2/5] arm64/kernel: kaslr: reduce module randomization range to 4 GB
Date: Tue, 6 Mar 2018 17:15:32 +0000
Message-Id: <20180306171535.25681-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20180306171535.25681-1-ard.biesheuvel@linaro.org>
References: <20180306171535.25681-1-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0

We currently have to rely on the GCC large code model for KASLR for two
distinct but related reasons:
- if we enable full randomization, modules will be loaded very far away
  from the core kernel, where they are out of range for ADRP
  instructions;
- even without full randomization, the 128 MB module region is no
  longer reserved exclusively for kernel modules, so there is a low but
  non-zero likelihood that the normal bottom-up allocation of other
  vmalloc regions collides with it, and uses up the range for other
  things.

Large model code is suboptimal, given that each symbol reference
involves a literal load that goes through the D-cache, reducing cache
utilization. More importantly, literals are not instructions, but they
are part of .text nonetheless, and are hence mapped with executable
permissions.

So let's get rid of our dependency on the large model for KASLR, by:
- reducing the full randomization range to 4 GB, thereby ensuring that
  ADRP references between modules and the kernel are always in range;
- reducing the spillover range to 4 GB as well, so that we fall back to
  a region that is still guaranteed to be in range;
- moving the randomization window of the core kernel to the middle of
  the VMALLOC space.

Note that KASAN always uses the module region outside of the vmalloc
space, so keep the kernel close to that if KASAN is enabled.
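To illustrate the placement arithmetic, here is a small userspace
sketch (not part of the patch) that reproduces the offset calculation
from the patched kaslr_early_init(), assuming VA_BITS=48; the seed
value, BIT() and SZ_2M below are stand-ins for the kernel definitions:

#include <stdint.h>
#include <stdio.h>

#define VA_BITS 48                      /* assumed: 48-bit VA config */
#define SZ_2M   0x200000UL
#define BIT(n)  (1UL << (n))

int main(void)
{
	/* Same arithmetic as the patched kaslr_early_init() */
	uint64_t mask   = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
	uint64_t seed   = 0x1234abcd5678ef00ULL;   /* example seed only */
	uint64_t offset = BIT(VA_BITS - 3) + (seed & mask);

	/* offset is always at least BIT(VA_BITS - 3), so the image stays
	 * clear of the bottom of the vmalloc space, and it is 2 MB
	 * aligned because the low 21 bits of the seed are masked off. */
	printf("min offset = 0x%016llx\n",
	       (unsigned long long)BIT(VA_BITS - 3));
	printf("offset     = 0x%016llx\n", (unsigned long long)offset);
	return 0;
}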
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/Kconfig         |  7 +++----
 arch/arm64/kernel/kaslr.c  | 20 ++++++++++++--------
 arch/arm64/kernel/module.c |  7 ++++---
 include/linux/sizes.h      |  4 ++++
 4 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7381eeb7ef8e..ae7d3d4c0bbe 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1109,7 +1109,6 @@ config ARM64_MODULE_CMODEL_LARGE
 
 config ARM64_MODULE_PLTS
 	bool
-	select ARM64_MODULE_CMODEL_LARGE
 	select HAVE_MOD_ARCH_SPECIFIC
 
 config RELOCATABLE
@@ -1143,12 +1142,12 @@ config RANDOMIZE_BASE
 	  If unsure, say N.
 
 config RANDOMIZE_MODULE_REGION_FULL
-	bool "Randomize the module region independently from the core kernel"
+	bool "Randomize the module region over a 4 GB range"
 	depends on RANDOMIZE_BASE
 	default y
 	help
-	  Randomizes the location of the module region without considering the
-	  location of the core kernel. This way, it is impossible for modules
+	  Randomizes the location of the module region inside a 4 GB window
+	  covering the core kernel. This way, it is less likely for modules
 	  to leak information about the location of core kernel data structures
 	  but it does imply that function calls between modules and the core
 	  kernel will need to be resolved via veneers in the module PLT.
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 47080c49cc7e..17dbdb055314 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -117,13 +117,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * lower half of the VMALLOC area (VA_BITS - 2).
+	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * the lower and upper quarters to avoid colliding with other
+	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
 	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = seed & mask;
+	offset = BIT(VA_BITS - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
@@ -149,21 +151,23 @@ u64 __init kaslr_early_init(u64 dt_phys)
 		 * vmalloc region, since shadow memory is allocated for each
 		 * module at load time, whereas the vmalloc region is shadowed
 		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled.
+		 * region if KASAN is enabled, and put the kernel well within
+		 * 4 GB of the module region.
 		 */
-		return offset;
+		return offset % SZ_2G;
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
 		/*
-		 * Randomize the module region independently from the core
-		 * kernel. This prevents modules from leaking any information
+		 * Randomize the module region over a 4 GB window covering the
+		 * kernel. This reduces the risk of modules leaking information
 		 * about the address of the kernel itself, but results in
 		 * branches between modules and the core kernel that are
 		 * resolved via PLTs. (Branches between modules will be
 		 * resolved normally.)
 		 */
-		module_range = VMALLOC_END - VMALLOC_START - MODULES_VSIZE;
-		module_alloc_base = VMALLOC_START;
+		module_range = SZ_4G - (u64)(_end - _stext);
+		module_alloc_base = max((u64)_end + offset - SZ_4G,
+					(u64)MODULES_VADDR);
 	} else {
 		/*
 		 * Randomize the module region by setting module_alloc_base to
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index c8c6c2828b79..70c3e5518e95 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -55,9 +55,10 @@ void *module_alloc(unsigned long size)
 		 * less likely that the module region gets exhausted, so we
 		 * can simply omit this fallback in that case.
 		 */
-		p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
-				VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
-				NUMA_NO_NODE, __builtin_return_address(0));
+		p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+				module_alloc_base + SZ_4G, GFP_KERNEL,
+				PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				__builtin_return_address(0));
 
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);
diff --git a/include/linux/sizes.h b/include/linux/sizes.h
index ce3e8150c174..fbde0bc7e882 100644
--- a/include/linux/sizes.h
+++ b/include/linux/sizes.h
@@ -8,6 +8,8 @@
 #ifndef __LINUX_SIZES_H__
 #define __LINUX_SIZES_H__
 
+#include <linux/const.h>
+
 #define SZ_1				0x00000001
 #define SZ_2				0x00000002
 #define SZ_4				0x00000004
@@ -44,4 +46,6 @@
 #define SZ_1G				0x40000000
 #define SZ_2G				0x80000000
 
+#define SZ_4G				_AC(0x100000000, ULL)
+
 #endif /* __LINUX_SIZES_H__ */
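For reference, a userspace sketch (again not part of the patch) of the
module window arithmetic above; MODULES_VADDR, the kernel start/size
values and max_u64() are made-up stand-ins for the kernel's linker
symbols and helpers:

#include <stdint.h>
#include <stdio.h>

#define SZ_4G          0x100000000ULL
#define MODULES_VADDR  0xffff000008000000ULL   /* illustrative only */

static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }

int main(void)
{
	uint64_t kernel_start = 0xffff000030000000ULL; /* _stext + offset, made up */
	uint64_t kernel_size  = 0x01400000ULL;         /* _end - _stext, made up */
	uint64_t kernel_end   = kernel_start + kernel_size;

	/* Mirrors the patched kaslr_early_init(): module_range leaves room
	 * for the kernel image itself, and the base is clamped so it never
	 * drops below MODULES_VADDR. A base drawn anywhere inside the range
	 * keeps the modules within ADRP (+/- 4 GB) reach of every kernel
	 * symbol. */
	uint64_t module_range      = SZ_4G - kernel_size;
	uint64_t module_alloc_base = max_u64(kernel_end - SZ_4G,
					     MODULES_VADDR);

	printf("randomization range for base: 0x%llx\n",
	       (unsigned long long)module_range);
	/* The spillover window used by the patched module_alloc() fallback */
	printf("fallback alloc window: [0x%llx, 0x%llx)\n",
	       (unsigned long long)module_alloc_base,
	       (unsigned long long)(module_alloc_base + SZ_4G));
	return 0;
}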