From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Catalin Marinas,
 Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
 Kees Cook
Subject: [PATCH v3 00/60] arm64: Add support for LPA2 at stage1 and WXN
Date: Tue, 7 Mar 2023 15:04:22 +0100
Message-Id: <20230307140522.2311461-1-ardb@kernel.org>
This is a followup to [0], which was a lot smaller. Thanks to Ryan for
feedback and review. This series is independent from Ryan's work on adding
support for LPA2 to KVM - the only potential source of conflict should be the
patch "arm64: kvm: Limit HYP VA and host S2 range to 48 bits when LPA2 is in
effect", which could simply be dropped in favour of the KVM changes to make
it support LPA2.

The first ~15 patches of this series rework how the kernel VA space is
organized, so that the vmemmap region does not take up more space than
necessary, and so that most of it can be reclaimed when running a build
capable of 52-bit virtual addressing on hardware that is not. This is needed
because the vmemmap region will take up a substantial part of the upper VA
region that it shares with the kernel, modules and vmalloc/vmap mappings once
we enable LPA2 with 4k pages.

The next ~30 patches rework the early init code, reimplementing most of the
page table and relocation handling in C code. There are several reasons why
this is beneficial:
- we generally prefer C code over asm for these things, and the macros that
  currently exist in head.S for creating the kernel page tables are a good
  example why;
- we no longer need to create the kernel mapping in two passes, which means
  we can remove the logic that copies parts of the fixmap and the KAsan
  shadow from one set of page tables to the other; this is especially
  advantageous for KAsan with LPA2, which needs more elaborate shadow
  handling across multiple levels, since the KAsan region cannot be placed
  on exact pgd_t boundaries in that case;
- we can read the ID registers and parse command line overrides before
  creating the page tables, which simplifies the LPA2 case, as flicking the
  global TCR_EL1.DS bit at a later stage would require elaborate repainting
  of all page table descriptors, some of which with the MMU disabled;
- we can use more elaborate logic to create the mappings, which means we can
  use more precise mappings for code and data sections even when using 2 MiB
  granularity, and this is a prerequisite for running with WXN.

As part of the ID map changes, we decouple the ID map size from the kernel VA
size, and switch to a 48-bit VA map for all configurations.

The next 18 patches rework the existing LVA support as a CPU feature, which
simplifies some code and gets rid of the vabits_actual variable. Then, LPA2
support is implemented in the same vein. This requires adding support for
5 level paging as well, given that LPA2 introduces a new paging level '-1'
when using 4k pages.

Combined with the vmemmap changes at the start of the series, the resulting
LPA2/4k pages configuration will have the exact same VA space layout as the
ordinary 4k/4 levels configuration, and so LPA2 support can reasonably be
enabled by default, as the fallback is seamless on non-LPA2 hardware. In the
16k/LPA2 case, the fallback also reduces the number of paging levels,
resulting in a 47-bit VA space. This is based on the assumption that hybrid
LPA2/non-LPA2 16k pages kernels in production use would prefer not to take
the performance hit of 4 level paging to gain only a single additional bit
of VA space. (Note that generic Android kernels use only 3 levels of paging
today.) Bespoke 16k configurations can still configure 48-bit virtual
addressing as before.
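As a side note (not part of the series, just a standalone illustration), the
relation between translation levels and the VA sizes mentioned above follows
from each level resolving PAGE_SHIFT - 3 bits of the VA, since one page holds
2^(PAGE_SHIFT - 3) 8-byte descriptors. The helper name below is made up for
this sketch:

#include <stdio.h>

/* How many translation levels a given VA size needs for a given granule;
 * the top level may resolve fewer bits than the others, hence the round-up. */
static unsigned int levels_for_va_bits(unsigned int page_shift,
                                       unsigned int va_bits)
{
    unsigned int bits_per_level = page_shift - 3;

    return (va_bits - page_shift + bits_per_level - 1) / bits_per_level;
}

int main(void)
{
    /* 4k pages: 48-bit VA needs 4 levels, 52-bit (LPA2) needs 5 ("level -1") */
    printf("4k/48-bit : %u levels\n", levels_for_va_bits(12, 48));
    printf("4k/52-bit : %u levels\n", levels_for_va_bits(12, 52));

    /* 16k pages: 47-bit VA needs only 3 levels, 48-bit and 52-bit need 4 */
    printf("16k/47-bit: %u levels\n", levels_for_va_bits(14, 47));
    printf("16k/48-bit: %u levels\n", levels_for_va_bits(14, 48));
    printf("16k/52-bit: %u levels\n", levels_for_va_bits(14, 52));
    return 0;
}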
Finally, the last two patches enable support for running with the WXN control
enabled. This was previously part of a separate series [1], but given that
the delta is tiny, it is included here as well.
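The intent of the mmap() prot hook can be sketched as follows (the helper
name and wiring below are made up for illustration; the actual interface is
defined by the patches themselves): with WXN enabled, writable mappings are
never executable, so it is better to refuse PROT_WRITE|PROT_EXEC requests up
front than to hand out memory that silently cannot be executed.

#include <stdbool.h>
#include <stdio.h>
#include <sys/mman.h>

static bool wxn_enabled = true;  /* stands in for the WXN control state */

/* Hypothetical validation helper: with WXN in effect, a mapping may be
 * writable or executable, but not both. */
static bool prot_allowed_with_wxn(int prot)
{
    if (wxn_enabled && (prot & PROT_WRITE) && (prot & PROT_EXEC))
        return false;
    return true;
}

int main(void)
{
    printf("PROT_READ|PROT_WRITE           : %s\n",
           prot_allowed_with_wxn(PROT_READ | PROT_WRITE) ? "ok" : "rejected");
    printf("PROT_READ|PROT_EXEC            : %s\n",
           prot_allowed_with_wxn(PROT_READ | PROT_EXEC) ? "ok" : "rejected");
    printf("PROT_READ|PROT_WRITE|PROT_EXEC : %s\n",
           prot_allowed_with_wxn(PROT_READ | PROT_WRITE | PROT_EXEC) ? "ok" : "rejected");
    return 0;
}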
[0] https://lore.kernel.org/all/20221124123932.2648991-1-ardb@kernel.org/
[1] https://lore.kernel.org/all/20221111171201.2088501-1-ardb@kernel.org/

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Mark Rutland
Cc: Ryan Roberts
Cc: Anshuman Khandual
Cc: Kees Cook

Anshuman Khandual (2):
  arm64/mm: Add FEAT_LPA2 specific TCR_EL1.DS field
  arm64/mm: Add FEAT_LPA2 specific ID_AA64MMFR0.TGRAN[2]

Ard Biesheuvel (57):
  // KASLR / vmemmap reorg
  arm64: kernel: Disable latent_entropy GCC plugin in early C runtime
  arm64: mm: Take potential load offset into account when KASLR is off
  arm64: mm: get rid of kimage_vaddr global variable
  arm64: mm: Move PCI I/O emulation region above the vmemmap region
  arm64: mm: Move fixmap region above vmemmap region
  arm64: ptdump: Allow VMALLOC_END to be defined at boot
  arm64: ptdump: Discover start of vmemmap region at runtime
  arm64: vmemmap: Avoid base2 order of struct page size to dimension region
  arm64: mm: Reclaim unused vmemmap region for vmalloc use
  arm64: kaslr: Adjust randomization range dynamically
  arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti()
  arm64: kvm: honour 'nokaslr' command line option for the HYP VA space
  // Reimplement page table creation code in C
  arm64: kernel: Manage absolute relocations in code built under pi/
  arm64: kernel: Don't rely on objcopy to make code under pi/ __init
  arm64: head: move relocation handling to C code
  arm64: idreg-override: Omit non-NULL checks for override pointer
  arm64: idreg-override: Prepare for place relative reloc patching
  arm64: idreg-override: Avoid parameq() and parameqn()
  arm64: idreg-override: avoid strlen() to check for empty strings
  arm64: idreg-override: Avoid sprintf() for simple string concatenation
  arm64: idreg-override: Avoid kstrtou64() to parse a single hex digit
  arm64: idreg-override: Move to early mini C runtime
  arm64: kernel: Remove early fdt remap code
  arm64: head: Clear BSS and the kernel page tables in one go
  arm64: Move feature overrides into the BSS section
  arm64: head: Run feature override detection before mapping the kernel
  arm64: head: move dynamic shadow call stack patching into early C runtime
  arm64: kaslr: Use feature override instead of parsing the cmdline again
  arm64: idreg-override: Create a pseudo feature for rodata=off
  arm64: Add helpers to probe local CPU for PAC/BTI/E0PD support
  arm64: head: allocate more pages for the kernel mapping
  arm64: head: move memstart_offset_seed handling to C code
  arm64: head: Move early kernel mapping routines into C code
  arm64: mm: Use 48-bit virtual addressing for the permanent ID map
  arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels
  arm64: kernel: Create initial ID map from C code
  arm64: mm: avoid fixmap for early swapper_pg_dir updates
  arm64: mm: omit redundant remap of kernel image
  arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()"
  // Implement LPA2 support
  arm64: mm: Handle LVA support as a CPU feature
  arm64: mm: Add feature override support for LVA
  arm64: mm: Wire up TCR.DS bit to PTE shareability fields
  arm64: mm: Add LPA2 support to phys<->pte conversion routines
  arm64: mm: Add definitions to support 5 levels of paging
  arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion
  arm64: Enable LPA2 at boot if supported by the system
  arm64: mm: Add 5 level paging support to fixmap and swapper handling
  arm64: kasan: Reduce minimum shadow alignment and enable 5 level paging
  arm64: mm: Add support for folding PUDs at runtime
  arm64: ptdump: Disregard unaddressable VA space
  arm64: ptdump: Deal with translation levels folded at runtime
  arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels
  arm64: kvm: Limit HYP VA and host S2 range to 48 bits when LPA2 is in effect
  arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs
  arm64: defconfig: Enable LPA2 support
  // Allow WXN control to be enabled at boot
  mm: add arch hook to validate mmap() prot flags
  arm64: mm: add support for WXN memory translation attribute

Marc Zyngier (1):
  arm64: Turn kaslr_feature_override into a generic SW feature override

 arch/arm64/Kconfig | 34 +-
 arch/arm64/configs/defconfig | 2 +-
 arch/arm64/include/asm/assembler.h | 55 +--
 arch/arm64/include/asm/cpufeature.h | 102 +++++
 arch/arm64/include/asm/fixmap.h | 1 +
 arch/arm64/include/asm/kasan.h | 2 -
 arch/arm64/include/asm/kernel-pgtable.h | 104 ++---
 arch/arm64/include/asm/memory.h | 50 +--
 arch/arm64/include/asm/mman.h | 36 ++
 arch/arm64/include/asm/mmu.h | 26 +-
 arch/arm64/include/asm/mmu_context.h | 49 ++-
 arch/arm64/include/asm/pgalloc.h | 53 ++-
 arch/arm64/include/asm/pgtable-hwdef.h | 33 +-
 arch/arm64/include/asm/pgtable-prot.h | 18 +-
 arch/arm64/include/asm/pgtable-types.h | 6 +
 arch/arm64/include/asm/pgtable.h | 229 +++++++++-
 arch/arm64/include/asm/scs.h | 34 +-
 arch/arm64/include/asm/setup.h | 3 -
 arch/arm64/include/asm/sysreg.h | 2 +
 arch/arm64/include/asm/tlb.h | 3 +-
 arch/arm64/kernel/Makefile | 7 +-
 arch/arm64/kernel/cpu_errata.c | 2 +-
 arch/arm64/kernel/cpufeature.c | 90 ++--
 arch/arm64/kernel/head.S | 465 ++------------------
 arch/arm64/kernel/idreg-override.c | 322 --------------
 arch/arm64/kernel/image-vars.h | 32 ++
 arch/arm64/kernel/kaslr.c | 4 +-
 arch/arm64/kernel/module.c | 2 +-
 arch/arm64/kernel/pi/Makefile | 28 +-
 arch/arm64/kernel/pi/idreg-override.c | 396 +++++++++++++++++
 arch/arm64/kernel/pi/kaslr_early.c | 78 +---
 arch/arm64/kernel/pi/map_kernel.c | 284 ++++++++++++
 arch/arm64/kernel/pi/map_range.c | 104 +++++
 arch/arm64/kernel/{ => pi}/patch-scs.c | 36 +-
 arch/arm64/kernel/pi/pi.h | 30 ++
 arch/arm64/kernel/pi/relacheck.c | 130 ++++++
 arch/arm64/kernel/pi/relocate.c | 64 +++
 arch/arm64/kernel/setup.c | 22 -
 arch/arm64/kernel/sleep.S | 3 -
 arch/arm64/kernel/suspend.c | 2 +-
 arch/arm64/kernel/vmlinux.lds.S | 14 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 2 +
 arch/arm64/kvm/mmu.c | 22 +-
 arch/arm64/kvm/va_layout.c | 10 +-
 arch/arm64/mm/init.c | 2 +-
 arch/arm64/mm/kasan_init.c | 154 +++++--
 arch/arm64/mm/mmap.c | 4 +
 arch/arm64/mm/mmu.c | 268 ++++++-----
 arch/arm64/mm/pgd.c | 17 +-
 arch/arm64/mm/proc.S | 106 ++++-
 arch/arm64/mm/ptdump.c | 43 +-
 arch/arm64/tools/cpucaps | 1 +
 include/linux/mman.h | 15 +
 mm/mmap.c | 3 +
 54 files changed, 2259 insertions(+), 1345 deletions(-)
 delete mode 100644 arch/arm64/kernel/idreg-override.c
 create mode 100644 arch/arm64/kernel/pi/idreg-override.c
 create mode 100644 arch/arm64/kernel/pi/map_kernel.c
 create mode 100644 arch/arm64/kernel/pi/map_range.c
 rename arch/arm64/kernel/{ => pi}/patch-scs.c (89%)
 create mode 100644 arch/arm64/kernel/pi/pi.h
 create mode 100644 arch/arm64/kernel/pi/relacheck.c
 create mode 100644 arch/arm64/kernel/pi/relocate.c