From patchwork Thu Jun 30 14:42:25 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier, Pierre-Clément Tosi,
 Quentin Perret, Mark Rutland
Subject: [PATCH 1/6] arm64: lds: reduce effective minimum image alignment to 64k
Date: Thu, 30 Jun 2022 16:42:25 +0200
Message-Id: <20220630144230.2332555-2-ardb@kernel.org>
In-Reply-To: <20220630144230.2332555-1-ardb@kernel.org>
References: <20220630144230.2332555-1-ardb@kernel.org>
Our segment alignment is 64k for all configurations, and coincidentally,
this is the largest alignment supported by the PE/COFF executable format
used by EFI. This means that generally, there is no need to move the
image around in memory after it has been loaded by the firmware, which
can be advantageous as it also permits us to rely on the memory
attributes set by the firmware (R-X for [_text, __inittext_end] and RW-
for [__initdata_begin, _end]). This means we can jump right from the EFI
stub into the image with the MMU and caches enabled.

However, the minimum alignment of the image is actually 128k on 64k
pages configurations with CONFIG_VMAP_STACK=y, due to the existence of a
single 128k aligned object in the image, which is the stack of the init
task.

Let's work around this by adding some padding before the init stack
allocation, so we can round down the stack pointer to a suitably aligned
value if the image is not aligned to 128k in memory.

Note that this does not affect the boot protocol, which still requires
2 MiB alignment for bare metal boot.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/efi.h              |  7 -------
 arch/arm64/kernel/head.S                  |  3 +++
 arch/arm64/kernel/vmlinux.lds.S           | 11 ++++++++++-
 drivers/firmware/efi/libstub/arm64-stub.c |  2 +-
 include/linux/efi.h                       |  6 +-----
 5 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index ad55079abe47..3be3efee8fac 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -57,13 +57,6 @@ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
 
 /* arch specific definitions used by the stub code */
 
-/*
- * In some configurations (e.g. VMAP_STACK && 64K pages), stacks built into the
- * kernel need greater alignment than we require the segments to be padded to.
- */
-#define EFI_KIMG_ALIGN	\
-	(SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)
-
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
  * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5089660788fd..09b0cddf2161 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -394,6 +394,9 @@ SYM_FUNC_END(create_kernel_mapping)
 	msr	sp_el0, \tsk
 
 	ldr	\tmp1, [\tsk, #TSK_STACK]
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	bic	\tmp1, \tmp1, #THREAD_ALIGN - 1
+#endif
 	add	sp, \tmp1, #THREAD_SIZE
 	sub	sp, sp, #PT_REGS_SIZE
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 45131e354e27..0efccdf52be2 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -274,7 +274,16 @@ SECTIONS
 	_data = .;
 	_sdata = .;
 
-	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	/*
+	 * Add some padding for the init stack so we can fix up any potential
+	 * misalignment at runtime. In practice, this can only occur on 64k
+	 * pages configurations with CONFIG_VMAP_STACK=y.
+	 */
+	. += THREAD_ALIGN - SEGMENT_ALIGN;
+	ASSERT(. == init_stack, "init_stack not at start of RW_DATA as expected")
+#endif
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, SEGMENT_ALIGN)
 
 	/*
 	 * Data written with the MMU off but read with the MMU on requires
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 577173ee1f83..ad7392e6c200 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -98,7 +98,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	 * 2M alignment if KASLR was explicitly disabled, even if it was not
 	 * going to be activated to begin with.
 	 */
-	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
+	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : SEGMENT_ALIGN;
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		efi_guid_t li_fixed_proto = LINUX_EFI_LOADED_IMAGE_FIXED_GUID;
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 7d9b0bb47eb3..492497054a5a 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -416,11 +416,7 @@ void efi_native_runtime_setup(void);
 /*
  * This GUID may be installed onto the kernel image's handle as a NULL protocol
  * to signal to the stub that the placement of the image should be respected,
- * and moving the image in physical memory is undesirable. To ensure
- * compatibility with 64k pages kernels with virtually mapped stacks, and to
- * avoid defeating physical randomization, this protocol should only be
- * installed if the image was placed at a randomized 128k aligned address in
- * memory.
+ * and moving the image in physical memory is undesirable.
  */
 #define LINUX_EFI_LOADED_IMAGE_FIXED_GUID	EFI_GUID(0xf5a37b6d, 0x3344, 0x42a5, 0xb6, 0xbb, 0x97, 0x86, 0x48, 0xc1, 0x89, 0x0a)