From patchwork Thu Nov 17 13:24:17 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 1/7] arm64: ptdump: Disregard unaddressable VA space
Date: Thu, 17 Nov 2022 14:24:17 +0100
Message-Id: <20221117132423.1252942-2-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>
Configurations built with support for 52-bit virtual addressing can also run
on CPUs that only support 48 bits of VA space, in which case only that part
of swapper_pg_dir that represents the 48-bit addressable region is relevant,
and everything else is ignored by the hardware.

In a future patch, we will clone the top 2 pgd_t entries at the bottom, to
support 52-bit VA configurations built for 16k pages and LPA2, where we
cannot simply point TTBR1 to the pgd_t entries where they actually reside.
However, we should avoid misinterpreting those cloned entries as describing
the start of the 52-bit VA space when the hardware does not support that.
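As background for the `_PAGE_OFFSET(vabits_actual)` change in the diff below: on arm64, the kernel VA space occupies the top `1 << va_bits` bytes of the 64-bit address space, so its start address is simply the two's-complement negation of that span. A minimal user-space sketch of the same arithmetic (this reimplements the macro's formula outside the kernel, purely for illustration):

```c
#include <stdint.h>

/* Same arithmetic as the kernel's _PAGE_OFFSET(va) macro: the kernel VA
 * region occupies the top (1 << va_bits) bytes of the address space, so
 * its base is the negation of that size. */
static inline uint64_t page_offset(unsigned int va_bits)
{
	return -(UINT64_C(1) << va_bits);
}
```

With 48 bits this yields 0xffff000000000000, while a 52-bit configuration starts at 0xfff0000000000000; on hardware that only implements 48 bits, the region between the two addresses is exactly the unaddressable space that ptdump should now skip.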
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/ptdump.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 9bc4066c5bf33a72..6c13bf6b7d5df4e4 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -358,7 +358,7 @@ void ptdump_check_wx(void)
 		.ptdump = {
 			.note_page = note_page,
 			.range = (struct ptdump_range[]) {
-				{PAGE_OFFSET, ~0UL},
+				{_PAGE_OFFSET(vabits_actual), ~0UL},
 				{0, 0}
 			}
 		}
@@ -380,6 +380,8 @@ static int __init ptdump_init(void)
 	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
 #endif
 	ptdump_initialize();
+	if (VA_BITS > VA_BITS_MIN)
+		kernel_ptdump_info.base_addr = _PAGE_OFFSET(vabits_actual);
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;
 }

From patchwork Thu Nov 17 13:24:18 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 2/7] arm64: mm: Disable all 52-bit virtual addressing support with arm64.nolva
Date: Thu, 17 Nov 2022 14:24:18 +0100
Message-Id: <20221117132423.1252942-3-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>

The LVA feature only applies to 64k page configurations; for smaller page
sizes, other feature registers describe the virtual addressing capabilities
of the CPU. Let's adhere to the principle of least surprise, and wire up
arm64.nolva so that it disables 52-bit virtual addressing support regardless
of the page size.
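The override mechanism the diff below relies on is a (val, mask) pair: for every bit set in the mask, the sanitised view of the ID register takes that bit from the override value instead of the hardware. A hedged user-space sketch of how the filter in this patch downgrades the TGRAN fields (the struct is a stand-in for the kernel's `struct arm64_ftr_override`; the shifts and field encodings follow my reading of the ID_AA64MMFR0_EL1 layout and should be treated as assumptions here):

```c
#include <stdint.h>

/* Stand-in for the kernel's struct arm64_ftr_override: bits set in .mask
 * are taken from .val rather than the hardware register value. */
struct ftr_override {
	uint64_t val;
	uint64_t mask;
};

/* Field positions per the ID_AA64MMFR0_EL1 layout: TGran4 at [31:28],
 * TGran16 at [23:20]. Encodings assumed: TGran4 0x0 = implemented,
 * 0x1 = implemented with 52-bit; TGran16 0x1 = implemented, 0x2 = 52-bit. */
#define TGRAN4_SHIFT	28
#define TGRAN4_MASK	(UINT64_C(0xf) << TGRAN4_SHIFT)
#define TGRAN4_IMP	UINT64_C(0x0)
#define TGRAN4_52BIT	UINT64_C(0x1)
#define TGRAN16_SHIFT	20
#define TGRAN16_MASK	(UINT64_C(0xf) << TGRAN16_SHIFT)
#define TGRAN16_IMP	UINT64_C(0x1)
#define TGRAN16_52BIT	UINT64_C(0x2)

/* Mirror of the patch's mmfr2_varange_filter() accumulation: when 52-bit
 * VA is disabled, clamp any "52-bit capable" granule field back to plain
 * "implemented" so common code never sees LPA2 support. */
static void clamp_tgran(uint64_t mmfr0, struct ftr_override *ovr)
{
	if (((mmfr0 & TGRAN4_MASK) >> TGRAN4_SHIFT) == TGRAN4_52BIT) {
		ovr->val  |= TGRAN4_IMP << TGRAN4_SHIFT;
		ovr->mask |= TGRAN4_MASK;
	}
	if (((mmfr0 & TGRAN16_MASK) >> TGRAN16_SHIFT) == TGRAN16_52BIT) {
		ovr->val  |= TGRAN16_IMP << TGRAN16_SHIFT;
		ovr->mask |= TGRAN16_MASK;
	}
}

/* Apply an override the way the sanitised-register code does, then return
 * the clamped register view. */
static uint64_t clamp_and_apply(uint64_t mmfr0)
{
	struct ftr_override ovr = { 0, 0 };

	clamp_tgran(mmfr0, &ovr);
	return (mmfr0 & ~ovr.mask) | (ovr.val & ovr.mask);
}
```

A register advertising 52-bit capable granules comes back as merely "implemented", while a register that never advertised 52-bit support passes through untouched.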
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h   |  1 +
 arch/arm64/kernel/cpufeature.c        |  4 ++-
 arch/arm64/kernel/image-vars.h        |  1 +
 arch/arm64/kernel/pi/idreg-override.c | 26 ++++++++++++++++++++
 4 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7aa9cd4fc67f7c61..dbf0186f46ae54ef 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -910,6 +910,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
 
 struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);
 
+extern struct arm64_ftr_override id_aa64mmfr0_override;
 extern struct arm64_ftr_override id_aa64mmfr1_override;
 extern struct arm64_ftr_override id_aa64mmfr2_override;
 extern struct arm64_ftr_override id_aa64pfr0_override;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 469d8b31487e88b6..4a631a6e7e42b981 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -636,6 +636,7 @@ static const struct arm64_ftr_bits ftr_raz[] = {
 #define ARM64_FTR_REG(id, table)		\
 	__ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override)
 
+struct arm64_ftr_override id_aa64mmfr0_override;
 struct arm64_ftr_override id_aa64mmfr1_override;
 struct arm64_ftr_override id_aa64mmfr2_override;
 struct arm64_ftr_override id_aa64pfr0_override;
@@ -701,7 +702,8 @@ static const struct __ftr_reg_entry {
 			       &id_aa64isar2_override),
 
 	/* Op1 = 0, CRn = 0, CRm = 7 */
-	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
+	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0,
+			       &id_aa64mmfr0_override),
 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1,
 			       &id_aa64mmfr1_override),
 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2,
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 3bdf0e7865730213..82bafa1f869c3a8b 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -45,6 +45,7 @@
 PROVIDE(__pi_memstart_offset_seed	= memstart_offset_seed);
 PROVIDE(__pi_id_aa64isar1_override	= id_aa64isar1_override);
 PROVIDE(__pi_id_aa64isar2_override	= id_aa64isar2_override);
+PROVIDE(__pi_id_aa64mmfr0_override	= id_aa64mmfr0_override);
 PROVIDE(__pi_id_aa64mmfr1_override	= id_aa64mmfr1_override);
 PROVIDE(__pi_id_aa64mmfr2_override	= id_aa64mmfr2_override);
 PROVIDE(__pi_id_aa64pfr0_override	= id_aa64pfr0_override);
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index 3be2f887e6cae29f..aeab2198720ac67c 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -139,10 +139,36 @@ DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override,
 		FIELD("nowxn", ARM64_SW_FEATURE_OVERRIDE_NOWXN),
 		{});
 
+asmlinkage bool __init mmfr2_varange_filter(u64 val)
+{
+	u64 mmfr0, tg4, tg16;
+
+	if (val)
+		return false;
+
+	mmfr0 = read_sysreg(id_aa64mmfr0_el1);
+	tg4 = (mmfr0 & ID_AA64MMFR0_EL1_TGRAN4_MASK) >> ID_AA64MMFR0_EL1_TGRAN4_SHIFT;
+	tg16 = (mmfr0 & ID_AA64MMFR0_EL1_TGRAN16_MASK) >> ID_AA64MMFR0_EL1_TGRAN16_SHIFT;
+
+	if (tg4 == ID_AA64MMFR0_EL1_TGRAN4_52_BIT) {
+		id_aa64mmfr0_override.val |=
+			ID_AA64MMFR0_EL1_TGRAN4_IMP << ID_AA64MMFR0_EL1_TGRAN4_SHIFT;
+		id_aa64mmfr0_override.mask |= ID_AA64MMFR0_EL1_TGRAN4_MASK;
+	}
+
+	if (tg16 == ID_AA64MMFR0_EL1_TGRAN16_52_BIT) {
+		id_aa64mmfr0_override.val |=
+			ID_AA64MMFR0_EL1_TGRAN16_IMP << ID_AA64MMFR0_EL1_TGRAN16_SHIFT;
+		id_aa64mmfr0_override.mask |= ID_AA64MMFR0_EL1_TGRAN16_MASK;
+	}
+	return true;
+}
+
 DEFINE_OVERRIDE(7, mmfr2, "id_aa64mmfr2", id_aa64mmfr2_override,
 		FIELD("varange", ID_AA64MMFR2_EL1_VARange_SHIFT),
 		FIELD("e0pd", ID_AA64MMFR2_EL1_E0PD_SHIFT),
 		{});
+DEFINE_OVERRIDE_FILTER(mmfr2, 0, mmfr2_varange_filter);
 
 /*
  * regs[] is populated by R_AARCH64_PREL32 directives invisible to the compiler

From patchwork Thu Nov 17 13:24:19 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 3/7] arm64: mm: Wire up TCR.DS bit to PTE shareability fields
Date: Thu, 17 Nov 2022 14:24:19 +0100
Message-Id: <20221117132423.1252942-4-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>
When LPA2 is enabled, bits 8 and 9 of page and block descriptors become part
of the output address instead of carrying shareability attributes for the
region in question. So avoid setting these bits if TCR.DS == 1, which means
LPA2 is enabled.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 arch/arm64/include/asm/pgtable-prot.h  | 18 ++++++++++++++++--
 arch/arm64/mm/mmap.c                   |  4 ++++
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index f658aafc47dfa29a..c4ad7fbb12c5c07a 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -276,6 +276,7 @@
 #define TCR_E0PD1		(UL(1) << 56)
 #define TCR_TCMA0		(UL(1) << 57)
 #define TCR_TCMA1		(UL(1) << 58)
+#define TCR_DS			(UL(1) << 59)
 
 /*
  * TTBR.
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 9b165117a454595a..15888fa87072f609 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -40,6 +40,20 @@ extern bool arm64_use_ng_mappings;
 #define PTE_MAYBE_NG		(arm64_use_ng_mappings ? PTE_NG : 0)
 #define PMD_MAYBE_NG		(arm64_use_ng_mappings ? PMD_SECT_NG : 0)
 
+#if !defined(CONFIG_ARM64_PA_BITS_52) || defined(CONFIG_ARM64_64K_PAGES)
+#define lpa2_is_enabled()	false
+#define PTE_MAYBE_SHARED	PTE_SHARED
+#define PMD_MAYBE_SHARED	PMD_SECT_S
+#else
+static inline bool lpa2_is_enabled(void)
+{
+	return read_sysreg(tcr_el1) & TCR_DS;
+}
+
+#define PTE_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PTE_SHARED)
+#define PMD_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PMD_SECT_S)
+#endif
+
 /*
  * If we have userspace only BTI we don't want to mark kernel pages
  * guarded even if the system does support BTI.
@@ -50,8 +64,8 @@ extern bool arm64_use_ng_mappings;
 #define PTE_MAYBE_GP		0
 #endif
 
-#define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
-#define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+#define PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_MAYBE_NG | PTE_MAYBE_SHARED | PTE_AF)
+#define PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_MAYBE_NG | PMD_MAYBE_SHARED | PMD_SECT_AF)
 
 #define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 8f5b7ce857ed4a8f..adcf547f74eb8e60 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -73,6 +73,10 @@ static int __init adjust_protection_map(void)
 		protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
 	}
 
+	if (lpa2_is_enabled())
+		for (int i = 0; i < ARRAY_SIZE(protection_map); i++)
+			pgprot_val(protection_map[i]) &= ~PTE_SHARED;
+
 	return 0;
 }
 arch_initcall(adjust_protection_map);
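As a sanity check on the bit positions involved in this patch: the SH field occupies descriptor bits [9:8], which LPA2 repurposes as output-address bits [51:50]. A hedged sketch with the constants written out rather than taken from kernel headers (`SH_INNER` matches the kernel's `PTE_SHARED` inner-shareable encoding to my understanding):

```c
#include <stdint.h>

/* Descriptor bits [9:8]: the SH (shareability) field without LPA2;
 * with LPA2 (TCR.DS == 1) they hold output address bits [51:50]. */
#define SH_FIELD	(UINT64_C(3) << 8)
#define SH_INNER	(UINT64_C(3) << 8)	/* inner shareable encoding */

/* Sketch of the patch's PTE_MAYBE_SHARED: only set the shareability bits
 * when they still mean shareability. */
static uint64_t pte_maybe_shared(int lpa2_enabled)
{
	return lpa2_enabled ? 0 : SH_INNER;
}

/* Under LPA2, PA bits [51:50] move down by 42 bit positions into the
 * descriptor, landing exactly where SH used to live. */
static uint64_t oa_high_bits_in_descriptor(uint64_t phys)
{
	return (phys >> 42) & SH_FIELD;
}
```

A stale inner-shareable encoding would therefore be indistinguishable from setting the top two output-address bits, which is why the default protection values must drop the SH bits when TCR.DS is set.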
From patchwork Thu Nov 17 13:24:20 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 4/7] arm64: mm: Support use of 52-bit pgdirs on 48-bit/16k systems
Date: Thu, 17 Nov 2022 14:24:20 +0100
Message-Id: <20221117132423.1252942-5-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>
X-CRM114-Status: GOOD ( 28.80 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On LVA/64k granule configurations, we simply extend the level 1 root page table to cover 52 bits of VA space, and if the system in question only supports 48 bits, we point TTBR1 to the pgdir entry that covers the start of the 48-bit addressable part of the VA space. Sadly, we cannot use the same trick on LPA2/16k granule configurations. This is due to the fact that TTBR registers require 64 byte aligned addresses, while the 48-bit addressable entries in question will not appear at a 64 byte aligned address if the entire 52-bit VA table is aligned to its size (which is another requirement for TTBR registers). Fortunately, we are only dealing with two entries in this case: one that covers the kernel/vmalloc region and one covering the linear map. This makes it feasible to simply clone those entries into the start of the page table after the first mapping into the respective region is created. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 17 +++++------------ arch/arm64/include/asm/mmu.h | 18 ++++++++++++++++++ arch/arm64/kernel/cpufeature.c | 1 + arch/arm64/kernel/pi/map_kernel.c | 2 +- arch/arm64/mm/mmu.c | 2 ++ 5 files changed, 27 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 4cb84dc6e2205a91..9fa62f102c1c94e9 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -609,11 +609,15 @@ alternative_endif * but we have to add an offset so that the TTBR1 address corresponds with the * pgdir entry that covers the lowest 48-bit addressable VA. 
* + * Note that this trick only works for 64k pages - 4k pages uses an additional + * paging level, and on 16k pages, we would end up with a TTBR address that is + * not 64 byte aligned. + * * orr is used as it can cover the immediate value (and is idempotent). * ttbr: Value of ttbr to set, modified. */ .macro offset_ttbr1, ttbr, tmp -#ifdef CONFIG_ARM64_VA_BITS_52 +#if defined(CONFIG_ARM64_VA_BITS_52) && defined(CONFIG_ARM64_64K_PAGES) mrs \tmp, tcr_el1 and \tmp, \tmp, #TCR_T1SZ_MASK cmp \tmp, #TCR_T1SZ(VA_BITS_MIN) @@ -622,17 +626,6 @@ alternative_endif #endif .endm -/* - * Perform the reverse of offset_ttbr1. - * bic is used as it can cover the immediate value and, in future, won't need - * to be nop'ed out when dealing with 52-bit kernel VAs. - */ - .macro restore_ttbr1, ttbr -#ifdef CONFIG_ARM64_VA_BITS_52 - bic \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET -#endif - .endm - /* * Arrange a physical address in a TTBR register, taking care of 52-bit * addresses. diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index a93d495d6e8c94a2..aa9fdefdb8c8b9e6 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -16,6 +16,7 @@ #include #include +#include typedef struct { atomic64_t id; @@ -72,6 +73,23 @@ extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot); extern void mark_linear_text_alias_ro(void); extern bool kaslr_requires_kpti(void); +static inline void sync_kernel_pgdir_root_entries(pgd_t *pgdir) +{ + /* + * On 16k pages, we cannot advance the TTBR1 address to the pgdir entry + * that covers the start of the 48-bit addressable kernel VA space like + * we do on 64k pages when the hardware does not support LPA2, since the + * resulting address would not be 64 byte aligned. So instead, copy the + * pgdir entry that covers the mapping we just created to the start of + * the page table. 
+ */ + if (IS_ENABLED(CONFIG_ARM64_16K_PAGES) && + VA_BITS > VA_BITS_MIN && !lpa2_is_enabled()) { + pgdir[0] = pgdir[PTRS_PER_PGD - 2]; + pgdir[1] = pgdir[PTRS_PER_PGD - 1]; + } +} + #define INIT_MM_CONTEXT(name) \ .pgd = swapper_pg_dir, diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 4a631a6e7e42b981..d19f9c1a93d9d000 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1768,6 +1768,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc), KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL, kpti_ng_pgd_alloc, 0); + sync_kernel_pgdir_root_entries(kpti_ng_temp_pgd); } cpu_install_idmap(); diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index 6c5d78dcb90e55c5..3b0b3fecf2bd533b 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -217,8 +217,8 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset) map_segment(init_pg_dir, &pgdp, va_offset, __initdata_begin, __initdata_end, data_prot, false); map_segment(init_pg_dir, &pgdp, va_offset, _data, _end, data_prot, true); + sync_kernel_pgdir_root_entries(init_pg_dir); dsb(ishst); - idmap_cpu_replace_ttbr1(init_pg_dir); if (twopass) { diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 63fb62e16a1f8873..90733567f0b89a31 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -665,6 +665,7 @@ static int __init map_entry_trampoline(void) __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, entry_tramp_text_size(), prot, __pgd_pgtable_alloc, NO_BLOCK_MAPPINGS); + sync_kernel_pgdir_root_entries(tramp_pg_dir); /* Map both the text and data into the kernel page table */ for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++) @@ -729,6 +730,7 @@ void __init paging_init(void) idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(VA_BITS_MIN - 1, 0)); map_mem(swapper_pg_dir); + 
sync_kernel_pgdir_root_entries(swapper_pg_dir); memblock_allow_resize();

From patchwork Thu Nov 17 13:24:21 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 5/7] arm64: mm: Add LPA2 support to phys<->pte conversion routines
Date: Thu, 17 Nov 2022 14:24:21 +0100
Message-Id: <20221117132423.1252942-6-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>

In preparation for enabling LPA2 support, introduce the mask values for converting between physical addresses and their representations in a page table descriptor.

While at it, move pte_to_phys into its only user, which gets invoked when system-wide alternatives are applied, which means we can rely on a boot-time alternative here.

For LPA2, the PTE_ADDR_MASK contains two non-adjacent sequences of zero bits, which means it no longer fits into the immediate field of an ordinary ALU instruction. So let's redefine it to include the bits in between as well, and only use it when converting from physical address to PTE representation, where the distinction does not matter. Also update the name accordingly to emphasize this.
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 16 ++-------------- arch/arm64/include/asm/pgtable-hwdef.h | 10 +++++++--- arch/arm64/include/asm/pgtable.h | 5 +++-- arch/arm64/mm/proc.S | 10 ++++++++++ 4 files changed, 22 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 9fa62f102c1c94e9..44a801e1dc4bf027 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -644,25 +644,13 @@ alternative_endif .macro phys_to_pte, pte, phys #ifdef CONFIG_ARM64_PA_BITS_52 - /* - * We assume \phys is 64K aligned and this is guaranteed by only - * supporting this configuration with 64K pages. - */ - orr \pte, \phys, \phys, lsr #36 - and \pte, \pte, #PTE_ADDR_MASK + orr \pte, \phys, \phys, lsr #PTE_ADDR_HIGH_SHIFT + and \pte, \pte, #PHYS_TO_PTE_ADDR_MASK #else mov \pte, \phys #endif .endm - .macro pte_to_phys, phys, pte - and \phys, \pte, #PTE_ADDR_MASK -#ifdef CONFIG_ARM64_PA_BITS_52 - orr \phys, \phys, \phys, lsl #PTE_ADDR_HIGH_SHIFT - and \phys, \phys, GENMASK_ULL(PHYS_MASK_SHIFT - 1, PAGE_SHIFT) -#endif - .endm - /* * tcr_clear_errata_bits - Clear TCR bits that trigger an errata on this CPU. 
*/ diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index c4ad7fbb12c5c07a..b91fe4781b066d54 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -155,13 +155,17 @@ #define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */ #define PTE_UXN (_AT(pteval_t, 1) << 54) /* User XN */ -#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) +#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) #ifdef CONFIG_ARM64_PA_BITS_52 +#ifdef CONFIG_ARM64_64K_PAGES #define PTE_ADDR_HIGH (_AT(pteval_t, 0xf) << 12) -#define PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) #define PTE_ADDR_HIGH_SHIFT 36 +#define PHYS_TO_PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) #else -#define PTE_ADDR_MASK PTE_ADDR_LOW +#define PTE_ADDR_HIGH (_AT(pteval_t, 0x3) << 8) +#define PTE_ADDR_HIGH_SHIFT 42 +#define PHYS_TO_PTE_ADDR_MASK GENMASK_ULL(49, 8) +#endif #endif /* diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index daedd6172227f0ca..666db7173d0f9b66 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -76,15 +76,16 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; #ifdef CONFIG_ARM64_PA_BITS_52 static inline phys_addr_t __pte_to_phys(pte_t pte) { + pte_val(pte) &= ~PTE_MAYBE_SHARED; return (pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT); } static inline pteval_t __phys_to_pte_val(phys_addr_t phys) { - return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK; + return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK; } #else -#define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_MASK) +#define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_LOW) #define __phys_to_pte_val(phys) (phys) #endif diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 02818fa6aded3218..c747a2ef478cabec 100644 --- 
a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -208,6 +208,16 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) .pushsection ".idmap.text", "awx" + .macro pte_to_phys, phys, pte + and \phys, \pte, #PTE_ADDR_LOW +#ifdef CONFIG_ARM64_PA_BITS_52 +alternative_if ARM64_HAS_LVA + orr \phys, \phys, \pte, lsl #PTE_ADDR_HIGH_SHIFT + and \phys, \phys, GENMASK_ULL(PHYS_MASK_SHIFT - 1, PAGE_SHIFT) +alternative_else_nop_endif +#endif + .endm + .macro kpti_mk_tbl_ng, type, num_entries add end_\type\()p, cur_\type\()p, #\num_entries * 8 .Ldo_\type:

From patchwork Thu Nov 17 13:24:22 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 6/7] arm64: Enable LPA2 at boot if supported by the system
Date: Thu, 17 Nov 2022 14:24:22 +0100
Message-Id: <20221117132423.1252942-7-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>

Update the early kernel mapping code to take 52-bit virtual addressing into account based on the LPA2 feature. This is a bit more involved than LVA, given that some page table descriptor bits change meaning in this case.

To keep the handling in asm to a minimum, the initial ID map is still created with 48-bit virtual addressing, which implies that the kernel image must be loaded into 48-bit addressable physical memory. This is currently required by the boot protocol, even though we happen to support placement outside of that for LVA/64k based configurations.
Enabling LPA2 involves more than setting TCR.T1SZ to a lower value, there is also a DS bit in TCR that changes the meaning of bits [9:8] in all page table descriptors. Since we cannot enable DS and every live page table descriptor at the same time, we have to pivot through another temporary mapping. This avoids reintroducing manipulations of the page tables with the MMU and caches disabled, which is something we generally try to avoid. To permit the LPA2 feature to be overridden on the kernel command line, which may be necessary to work around silicon errata, or to deal with mismatched features on heterogeneous SoC designs, test for CPU feature overrides first, and only then enable LPA2. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/image-vars.h | 2 + arch/arm64/kernel/pi/map_kernel.c | 101 +++++++++++++++++++- arch/arm64/mm/proc.S | 3 + 3 files changed, 103 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 82bafa1f869c3a8b..f48b6f09d278d3bf 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -56,6 +56,8 @@ PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings); PROVIDE(__pi__ctype = _ctype); +PROVIDE(__pi_init_idmap_pg_dir = init_idmap_pg_dir); +PROVIDE(__pi_init_idmap_pg_end = init_idmap_pg_end); PROVIDE(__pi_init_pg_dir = init_pg_dir); PROVIDE(__pi_init_pg_end = init_pg_end); PROVIDE(__pi_swapper_pg_dir = swapper_pg_dir); diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index 3b0b3fecf2bd533b..71cc32cd2545b85a 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -137,6 +137,22 @@ static bool __init arm64_early_this_cpu_has_lva(void) ID_AA64MMFR2_EL1_VARange_SHIFT); } +static bool __init arm64_early_this_cpu_has_lpa2(void) +{ + bool gran4k = IS_ENABLED(CONFIG_ARM64_4K_PAGES); + u64 mmfr0; + int feat; + + mmfr0 = 
read_sysreg(id_aa64mmfr0_el1); + mmfr0 &= ~id_aa64mmfr0_override.mask; + mmfr0 |= id_aa64mmfr0_override.val; + feat = cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_EL1_TGRAN_SHIFT, + gran4k /* signed */); + + return gran4k ? feat >= ID_AA64MMFR0_EL1_TGRAN4_52_BIT + : feat >= ID_AA64MMFR0_EL1_TGRAN16_52_BIT; +} + static bool __init arm64_early_this_cpu_has_pac(void) { u64 isar1, isar2; @@ -279,6 +295,74 @@ static void noinline __section(".idmap.text") disable_wxn(void) :: "r"(sctlr & ~SCTLR_ELx_M), "r"(sctlr)); } +static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr) +{ + u64 sctlr = read_sysreg(sctlr_el1); + u64 tcr = read_sysreg(tcr_el1) | TCR_DS; + + asm(" msr sctlr_el1, %0 ;" + " isb ;" + " msr ttbr0_el1, %1 ;" + " msr tcr_el1, %2 ;" + " isb ;" + " tlbi vmalle1 ;" + " dsb nsh ;" + " isb ;" + " msr sctlr_el1, %3 ;" + " isb ;" + :: "r"(sctlr & ~SCTLR_ELx_M), "r"(ttbr), "r"(tcr), "r"(sctlr)); +} + +static void remap_idmap_for_lpa2(void) +{ + extern pgd_t init_idmap_pg_dir[], init_idmap_pg_end[]; + pgd_t *pgdp = (void *)init_pg_dir + PAGE_SIZE; + pgprot_t text_prot = PAGE_KERNEL_ROX; + pgprot_t data_prot = PAGE_KERNEL; + + /* clear the bits that change meaning once LPA2 is turned on */ + pgprot_val(text_prot) &= ~PTE_SHARED; + pgprot_val(data_prot) &= ~PTE_SHARED; + + /* + * We have to clear bits [9:8] in all block or page descriptors in the + * initial ID map, as otherwise they will be (mis)interpreted as + * physical address bits once we flick the LPA2 switch (TCR.DS). Since + * we cannot manipulate live descriptors in that way without creating + * potential TLB conflicts, let's create another temporary ID map in a + * LPA2 compatible fashion, and update the initial ID map while running + * from that. 
+ */ + map_segment(init_pg_dir, &pgdp, 0, _stext, __inittext_end, text_prot, false); + map_segment(init_pg_dir, &pgdp, 0, __initdata_begin, _end, data_prot, false); + dsb(ishst); + set_ttbr0_for_lpa2((u64)init_pg_dir); + + /* + * Recreate the initial ID map with the same granularity as before. + * Don't bother with the FDT, we no longer need it after this. + */ + memset(init_idmap_pg_dir, 0, + (u64)init_idmap_pg_end - (u64)init_idmap_pg_dir); + + pgdp = (void *)init_idmap_pg_dir + PAGE_SIZE; + map_segment(init_idmap_pg_dir, &pgdp, 0, + PTR_ALIGN_DOWN(&_stext[0], INIT_IDMAP_BLOCK_SIZE), + PTR_ALIGN_DOWN(&__bss_start[0], INIT_IDMAP_BLOCK_SIZE), + text_prot, false); + map_segment(init_idmap_pg_dir, &pgdp, 0, + PTR_ALIGN_DOWN(&__bss_start[0], INIT_IDMAP_BLOCK_SIZE), + PTR_ALIGN(&_end[0], INIT_IDMAP_BLOCK_SIZE), + data_prot, false); + dsb(ishst); + + /* switch back to the updated initial ID map */ + set_ttbr0_for_lpa2((u64)init_idmap_pg_dir); + + /* wipe the temporary ID map from memory */ + memset(init_pg_dir, 0, (u64)init_pg_end - (u64)init_pg_dir); +} + asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) { static char const chosen_str[] __initconst = "/chosen"; @@ -292,9 +376,6 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) /* Parse the command line for CPU feature overrides */ init_feature_override(boot_status, fdt, chosen); - if (VA_BITS > VA_BITS_MIN && arm64_early_this_cpu_has_lva()) - sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(VA_BITS)); - if (IS_ENABLED(CONFIG_ARM64_WXN) && cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, ARM64_SW_FEATURE_OVERRIDE_NOWXN)) @@ -322,6 +403,20 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) arm64_use_ng_mappings = true; } + if (VA_BITS > VA_BITS_MIN) { + bool va52 = false; + + if (IS_ENABLED(CONFIG_ARM64_64K_PAGES)) { + va52 = arm64_early_this_cpu_has_lva(); + } else if (arm64_early_this_cpu_has_lpa2()) { + remap_idmap_for_lpa2(); + va52 =
true; + } + if (va52) + sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, + TCR_T1SZ(VA_BITS)); + } + va_base = KIMAGE_VADDR + kaslr_offset; map_kernel(kaslr_offset, va_base - pa_base); } diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index c747a2ef478cabec..8197663a54f63c9d 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -443,6 +443,9 @@ SYM_FUNC_START(__cpu_setup) #if VA_BITS > VA_BITS_MIN alternative_if ARM64_HAS_LVA eor tcr, tcr, #TCR_T1SZ(VA_BITS) ^ TCR_T1SZ(VA_BITS_MIN) +#ifndef CONFIG_ARM64_64K_PAGES + orr tcr, tcr, #TCR_DS +#endif alternative_else_nop_endif #elif VA_BITS < 48 idmap_get_t0sz x9

From patchwork Thu Nov 17 13:24:23 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 7/7] arm64: Enable 52-bit virtual addressing for 16k granule configs
Date: Thu, 17 Nov 2022 14:24:23 +0100
Message-Id: <20221117132423.1252942-8-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>

Update Kconfig to permit 16k granule configurations to be built with 52-bit virtual addressing, now that all the prerequisites are in place.

While at it, update the feature description so it matches on the appropriate feature bits depending on the page size. For simplicity, let's just keep ARM64_HAS_LVA as the feature name.
Signed-off-by: Ard Biesheuvel --- arch/arm64/Kconfig | 6 ++++-- arch/arm64/kernel/cpufeature.c | 22 ++++++++++++++++---- 2 files changed, 22 insertions(+), 6 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 79ec4bc05694acec..aece91a774a84276 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -344,6 +344,7 @@ config PGTABLE_LEVELS default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39 default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47 + default 4 if ARM64_16K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48 config ARCH_SUPPORTS_UPROBES @@ -1197,7 +1198,8 @@ config ARM64_VA_BITS_48 config ARM64_VA_BITS_52 bool "52-bit" - depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN) + depends on ARM64_64K_PAGES || ARM64_16K_PAGES + depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN help Enable 52-bit virtual addressing for userspace when explicitly requested via a hint to mmap(). 
The kernel will also use 52-bit @@ -1247,7 +1249,7 @@ config ARM64_PA_BITS_48 config ARM64_PA_BITS_52 bool "52-bit (ARMv8.2)" - depends on ARM64_64K_PAGES + depends on ARM64_64K_PAGES || ARM64_16K_PAGES depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN help Enable support for a 52-bit physical address space, introduced as diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index d19f9c1a93d9d000..05c46e9c1b5a4c9c 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2663,15 +2663,29 @@ static const struct arm64_cpu_capabilities arm64_features[] = { }, #ifdef CONFIG_ARM64_VA_BITS_52 { - .desc = "52-bit Virtual Addressing (LVA)", .capability = ARM64_HAS_LVA, .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, - .sys_reg = SYS_ID_AA64MMFR2_EL1, - .sign = FTR_UNSIGNED, + .matches = has_cpuid_feature, .field_width = 4, +#ifdef CONFIG_ARM64_64K_PAGES + .desc = "52-bit Virtual Addressing (LVA)", + .sign = FTR_SIGNED, + .sys_reg = SYS_ID_AA64MMFR2_EL1, .field_pos = ID_AA64MMFR2_EL1_VARange_SHIFT, - .matches = has_cpuid_feature, .min_field_value = ID_AA64MMFR2_EL1_VARange_52, +#else + .desc = "52-bit Virtual Addressing (LPA2)", + .sys_reg = SYS_ID_AA64MMFR0_EL1, +#ifdef CONFIG_ARM64_4K_PAGES + .sign = FTR_SIGNED, + .field_pos = ID_AA64MMFR0_EL1_TGRAN4_SHIFT, + .min_field_value = ID_AA64MMFR0_EL1_TGRAN4_52_BIT, +#else + .sign = FTR_UNSIGNED, + .field_pos = ID_AA64MMFR0_EL1_TGRAN16_SHIFT, + .min_field_value = ID_AA64MMFR0_EL1_TGRAN16_52_BIT, +#endif +#endif }, #endif {},