From patchwork Thu Nov 24 12:39:17 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054930
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 04/19] arm64: kaslr: Adjust randomization range dynamically
Date: Thu, 24 Nov 2022 13:39:17 +0100
Message-Id: <20221124123932.2648991-5-ardb@kernel.org>
X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>
MIME-Version: 1.0
Currently, we base the KASLR randomization range on a rough estimate of
the available space in the vmalloc region: the lower 1/4th has the module
region and the upper 1/4th has the fixmap, vmemmap and PCI I/O ranges, and
so we pick a random location in the remaining space in the middle.

Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (whose size is
based on a 48-bit VA space for compatibility with non-LVA hardware) that
the region above the vmalloc region takes up more than a quarter of the
available space.

So instead of a heuristic, let's derive the randomization range from the
actual boundaries of the various regions. Note that this requires some
tweaks to the early fixmap init logic so it can deal with upper
translation levels having already been populated by the time we reach
that function.

Note, however, that the level 3 page table cannot be shared between the
fixmap and the kernel, so this needs to be taken into account when
defining the range.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/pi/kaslr_early.c | 23 +++++++++++++++-----
 arch/arm64/mm/mmu.c                | 21 +++++-------------
 2 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 965515f7f180..51142c4f2659 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -13,7 +13,9 @@
 #include
 #include
+#include
 #include
+#include
 
 #include "pi.h"
 
@@ -40,6 +42,16 @@ static u64 __init get_kaslr_seed(void *fdt, int node)
 
 u64 __init kaslr_early_init(void *fdt, int chosen)
 {
+	/*
+	 * The kernel can be mapped almost anywhere in the vmalloc region,
+	 * although we have to ensure that we don't share a level 3 table with
+	 * the fixmap, which installs its own statically allocated one (bm_pte)
+	 * and manipulates the slots by writing into the array directly.
+	 * We also have to account for the offset modulo 2 MiB resulting from
+	 * the physical placement of the image.
+	 */
+	const u64 range = (VMALLOC_END & PMD_MASK) - MODULES_END -
+			  ((u64)_end - ALIGN_DOWN((u64)_text, MIN_KIMG_ALIGN));
 	u64 seed;
 
 	if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
@@ -56,11 +68,10 @@ u64 __init kaslr_early_init(void *fdt, int chosen)
 	memstart_offset_seed = seed & U16_MAX;
 
 	/*
-	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
-	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
-	 * the lower and upper quarters to avoid colliding with other
-	 * allocations.
+	 * Multiply 'range' by a value in [0 .. U32_MAX], and shift the result
+	 * right by 32 bits, to obtain a value in the range [0 .. range). To
+	 * avoid loss of precision in the multiplication, split the right shift
+	 * in two shifts by 16 (range is 64k aligned in any case)
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 16));
+	return (((range >> 16) * (seed >> 32)) >> 16) & ~(MIN_KIMG_ALIGN - 1);
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6942255056ae..222c1154b550 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1222,23 +1222,14 @@ void __init early_fixmap_init(void)
 	pgdp = pgd_offset_k(addr);
 	p4dp = p4d_offset(pgdp, addr);
 	p4d = READ_ONCE(*p4dp);
-	if (CONFIG_PGTABLE_LEVELS > 3 &&
-	    !(p4d_none(p4d) || p4d_page_paddr(p4d) == __pa_symbol(bm_pud))) {
-		/*
-		 * We only end up here if the kernel mapping and the fixmap
-		 * share the top level pgd entry, which should only happen on
-		 * 16k/4 levels configurations.
-		 */
-		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
-		pudp = pud_offset_kimg(p4dp, addr);
-	} else {
-		if (p4d_none(p4d))
-			__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);
-		pudp = fixmap_pud(addr);
-	}
+	if (p4d_none(p4d))
+		__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);
+
+	pudp = pud_offset_kimg(p4dp, addr);
 	if (pud_none(READ_ONCE(*pudp)))
 		__pud_populate(pudp, __pa_symbol(bm_pmd), PUD_TYPE_TABLE);
-	pmdp = fixmap_pmd(addr);
+
+	pmdp = pmd_offset_kimg(pudp, addr);
 	__pmd_populate(pmdp, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
 
 	/*
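The return expression added to kaslr_early_init() above can be exercised
outside the kernel. Below is a minimal, self-contained userspace sketch of
the same split-shift scaling; it is not part of the patch, MIN_KIMG_ALIGN
is hard-coded to 2 MiB as on arm64, and the 'range' constant is made up
purely for illustration. It shows why the shift by 32 is split: for a range
wider than 32 bits, the naive (range * (seed >> 32)) >> 32 can overflow
64 bits, whereas shifting range down by 16 first keeps the intermediate
product within a u64, and the only bits dropped from range are those below
64 KiB, to which the range is aligned anyway.

/* scale_seed_demo.c - illustration only, not part of the patch */
#include <stdint.h>
#include <stdio.h>

/* 2 MiB, matching arm64's MIN_KIMG_ALIGN */
#define MIN_KIMG_ALIGN	0x200000ULL

/* Map a 64-bit seed onto a 2 MiB aligned offset in [0 .. range) */
static uint64_t scale_seed(uint64_t range, uint64_t seed)
{
	/*
	 * (range * (seed >> 32)) >> 32 can overflow 64 bits for a range
	 * wider than 32 bits, so shift range down by 16 first and fold the
	 * remaining shift by 16 into the final step, as in the patch.
	 */
	return (((range >> 16) * (seed >> 32)) >> 16) & ~(MIN_KIMG_ALIGN - 1);
}

int main(void)
{
	/* made-up range: roughly the size of a 47-bit vmalloc region */
	uint64_t range = 0x0000780000000000ULL;
	uint64_t seeds[] = { 0, 0x123456789abcdef0ULL, ~0ULL };

	for (int i = 0; i < 3; i++) {
		uint64_t off = scale_seed(range, seeds[i]);

		printf("seed %#018llx -> offset %#014llx (in range: %s)\n",
		       (unsigned long long)seeds[i],
		       (unsigned long long)off,
		       off < range ? "yes" : "no");
	}
	return 0;
}

Building this with any C compiler and trying a few seeds shows the
resulting offset always staying below 'range' and remaining 2 MiB aligned.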