From patchwork Thu Nov 24 12:39:14 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054924
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts, linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/19] arm64/mm: Simplify and document pte_to_phys() for
 52 bit addresses
Date: Thu, 24 Nov 2022 13:39:14 +0100
Message-Id: <20221124123932.2648991-2-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

From: Anshuman Khandual

The assembly definition of pte_to_phys() performs multiple bit field
transformations to derive the physical address embedded inside a page
table entry. Unlike its C counterpart, __pte_to_phys(), it is not very
easy to follow. Simplify these operations via a new macro,
PTE_ADDR_HIGH_SHIFT, which indicates how far the higher address bits
encoded in the pte need to be shifted to the left. While here, also
update __pte_to_phys() and __phys_to_pte_val().

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Brown
Cc: Mark Rutland
Cc: Ard Biesheuvel
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Ard Biesheuvel
Suggested-by: Ard Biesheuvel
Signed-off-by: Anshuman Khandual
Link: https://lore.kernel.org/r/20221107141753.2938621-1-anshuman.khandual@arm.com
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/assembler.h     | 8 +++-----
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/include/asm/pgtable.h       | 4 ++--
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e5957a53be39..89038067ef34 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -660,12 +660,10 @@ alternative_endif
 	.endm

 	.macro	pte_to_phys, phys, pte
-#ifdef CONFIG_ARM64_PA_BITS_52
-	ubfiz	\phys, \pte, #(48 - 16 - 12), #16
-	bfxil	\phys, \pte, #16, #32
-	lsl	\phys, \phys, #16
-#else
 	and	\phys, \pte, #PTE_ADDR_MASK
+#ifdef CONFIG_ARM64_PA_BITS_52
+	orr	\phys, \phys, \phys, lsl #PTE_ADDR_HIGH_SHIFT
+	and	\phys, \phys, GENMASK_ULL(PHYS_MASK_SHIFT - 1, PAGE_SHIFT)
 #endif
 	.endm

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 5ab8d163198f..f658aafc47df 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -159,6 +159,7 @@
 #ifdef CONFIG_ARM64_PA_BITS_52
 #define PTE_ADDR_HIGH		(_AT(pteval_t, 0xf) << 12)
 #define PTE_ADDR_MASK		(PTE_ADDR_LOW | PTE_ADDR_HIGH)
+#define PTE_ADDR_HIGH_SHIFT	36
 #else
 #define PTE_ADDR_MASK		PTE_ADDR_LOW
 #endif

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 71a1af42f0e8..daedd6172227 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -77,11 +77,11 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 static inline phys_addr_t __pte_to_phys(pte_t pte)
 {
 	return (pte_val(pte) & PTE_ADDR_LOW) |
-		((pte_val(pte) & PTE_ADDR_HIGH) << 36);
+		((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
 }

 static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 {
-	return (phys | (phys >> 36)) & PTE_ADDR_MASK;
+	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
 }
 #else
 #define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
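The round trip implemented by the patched helpers can be sanity-checked with a
small stand-alone C sketch. The constants below mirror the 64k-page/52-bit-PA
layout (PA bits [51:48] live in pte bits [15:12], hence the shift by 36); the
sample address is illustrative only, not kernel code.

#include <stdint.h>
#include <stdio.h>

/* Illustrative constants for the 64k-page/52-bit-PA layout */
#define PTE_ADDR_LOW		(((1ULL << (48 - 16)) - 1) << 16)	/* PA bits [47:16] */
#define PTE_ADDR_HIGH		(0xfULL << 12)				/* PA bits [51:48] */
#define PTE_ADDR_HIGH_SHIFT	36
#define PTE_ADDR_MASK		(PTE_ADDR_LOW | PTE_ADDR_HIGH)

static uint64_t pte_to_phys(uint64_t pte)
{
	return (pte & PTE_ADDR_LOW) | ((pte & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}

static uint64_t phys_to_pte(uint64_t phys)
{
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
}

int main(void)
{
	uint64_t pa = 0x000f123456780000ULL;	/* 64k-aligned 52-bit PA (made up) */
	uint64_t pte = phys_to_pte(pa);

	/* PA bits [51:48] (0xf) end up in pte bits [15:12] */
	printf("pte = %#018llx\n", (unsigned long long)pte);
	/* decoding restores the original address */
	printf("roundtrip ok: %d\n", pte_to_phys(pte) == pa);
	return 0;
}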
From patchwork Thu Nov 24 12:39:15 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054925
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 02/19] arm64/mm: Add FEAT_LPA2 specific TCR_EL1.DS field
Date: Thu, 24 Nov 2022 13:39:15 +0100
Message-Id: <20221124123932.2648991-3-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

From: Anshuman Khandual

As per the ARM ARM (0487G.A), the TCR_EL1.DS field controls whether
52-bit input and output addresses are supported on the 4K and 16K page
size configurations when FEAT_LPA2 is implemented. Add the TCR_DS field
definition, which will be used when FEAT_LPA2 gets enabled.

Acked-by: Catalin Marinas
Signed-off-by: Anshuman Khandual
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index f658aafc47df..c4ad7fbb12c5 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -276,6 +276,7 @@
 #define TCR_E0PD1		(UL(1) << 56)
 #define TCR_TCMA0		(UL(1) << 57)
 #define TCR_TCMA1		(UL(1) << 58)
+#define TCR_DS			(UL(1) << 59)

 /*
  * TTBR.
From patchwork Thu Nov 24 12:39:16 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054926
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 03/19] arm64/mm: Add FEAT_LPA2 specific
 ID_AA64MMFR0.TGRAN[2]
Date: Thu, 24 Nov 2022 13:39:16 +0100
Message-Id: <20221124123932.2648991-4-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

From: Anshuman Khandual

PAGE_SIZE support is tested against the possible minimum and maximum
values of its respective ID_AA64MMFR0.TGRAN field, depending on whether
the field is signed or unsigned. In addition, the FEAT_LPA2
implementation needs to be validated for the 4K and 16K page sizes via
feature specific ID_AA64MMFR0.TGRAN values. Hence, add the FEAT_LPA2
specific ID_AA64MMFR0.TGRAN[2] values per the ARM ARM (0487G.A).

Acked-by: Catalin Marinas
Signed-off-by: Anshuman Khandual
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/sysreg.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7d301700d1a9..9547be423e1f 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -800,11 +800,13 @@

 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
+#define ID_AA64MMFR0_EL1_TGRAN_LPA2		ID_AA64MMFR0_EL1_TGRAN4_52_BIT
 #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN
 #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX	ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX
 #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT
 #elif defined(CONFIG_ARM64_16K_PAGES)
 #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN16_SHIFT
+#define ID_AA64MMFR0_EL1_TGRAN_LPA2		ID_AA64MMFR0_EL1_TGRAN16_52_BIT
 #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN
 #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX	ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX
 #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT		ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT
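The TGRAN fields are 4-bit fields in ID_AA64MMFR0_EL1; a minimal user-space
sketch of the unsigned-field extraction the kernel performs (in the style of
cpuid_feature_extract_unsigned_field()) is shown below. The shift and register
contents here are made up for illustration; the extracted value would then be
compared against the ID_AA64MMFR0_EL1_TGRAN_LPA2 constant added by this patch.

#include <stdint.h>
#include <stdio.h>

/* Sketch of a 4-bit unsigned ID register field extraction */
static unsigned int extract_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

int main(void)
{
	uint64_t mmfr0 = 0x1ULL << 40;	/* made-up contents, made-up shift */

	printf("field = %u\n", extract_field(mmfr0, 40));
	return 0;
}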
From patchwork Thu Nov 24 12:39:17 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054930
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 04/19] arm64: kaslr: Adjust randomization range
 dynamically
Date: Thu, 24 Nov 2022 13:39:17 +0100
Message-Id: <20221124123932.2648991-5-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

Currently, we base the KASLR randomization range on a rough estimate of
the available space in the vmalloc region: the lower 1/4th has the
module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O
ranges, and so we pick a random location in the remaining space in the
middle.

Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (whose size is
based on a 48-bit VA space for compatibility with non-LVA hardware)
that the region above the vmalloc region takes up more than a quarter
of the available space.

So instead of a heuristic, let's derive the randomization range from
the actual boundaries of the various regions. Note that this requires
some tweaks to the early fixmap init logic so it can deal with upper
translation levels having already been populated by the time we reach
that function. Note, however, that the level 3 page table cannot be
shared between the fixmap and the kernel, so this needs to be taken
into account when defining the range.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/pi/kaslr_early.c | 23 +++++++++++++++-----
 arch/arm64/mm/mmu.c                | 21 +++++-------------
 2 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 965515f7f180..51142c4f2659 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -13,7 +13,9 @@
 #include <...>
 #include <...>
+#include <...>
 #include <...>
+#include <...>

 #include "pi.h"

@@ -40,6 +42,16 @@ static u64 __init get_kaslr_seed(void *fdt, int node)

 u64 __init kaslr_early_init(void *fdt, int chosen)
 {
+	/*
+	 * The kernel can be mapped almost anywhere in the vmalloc region,
+	 * although we have to ensure that we don't share a level 3 table with
+	 * the fixmap, which installs its own statically allocated one (bm_pte)
+	 * and manipulates the slots by writing into the array directly.
+	 * We also have to account for the offset modulo 2 MiB resulting from
+	 * the physical placement of the image.
+	 */
+	const u64 range = (VMALLOC_END & PMD_MASK) - MODULES_END -
+			  ((u64)_end - ALIGN_DOWN((u64)_text, MIN_KIMG_ALIGN));
 	u64 seed;

 	if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
@@ -56,11 +68,10 @@ u64 __init kaslr_early_init(void *fdt, int chosen)
 	memstart_offset_seed = seed & U16_MAX;

 	/*
-	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
-	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
-	 * the lower and upper quarters to avoid colliding with other
-	 * allocations.
+	 * Multiply 'range' by a value in [0 .. U32_MAX], and shift the result
+	 * right by 32 bits, to obtain a value in the range [0 .. range). To
+	 * avoid loss of precision in the multiplication, split the right shift
+	 * in two shifts by 16 (range is 64k aligned in any case)
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 16));
+	return (((range >> 16) * (seed >> 32)) >> 16) & ~(MIN_KIMG_ALIGN - 1);
 }

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6942255056ae..222c1154b550 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1222,23 +1222,14 @@ void __init early_fixmap_init(void)
 	pgdp = pgd_offset_k(addr);
 	p4dp = p4d_offset(pgdp, addr);
 	p4d = READ_ONCE(*p4dp);
-	if (CONFIG_PGTABLE_LEVELS > 3 &&
-	    !(p4d_none(p4d) || p4d_page_paddr(p4d) == __pa_symbol(bm_pud))) {
-		/*
-		 * We only end up here if the kernel mapping and the fixmap
-		 * share the top level pgd entry, which should only happen on
-		 * 16k/4 levels configurations.
-		 */
-		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
-		pudp = pud_offset_kimg(p4dp, addr);
-	} else {
-		if (p4d_none(p4d))
-			__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);
-		pudp = fixmap_pud(addr);
-	}
+	if (p4d_none(p4d))
+		__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);
+
+	pudp = pud_offset_kimg(p4dp, addr);
 	if (pud_none(READ_ONCE(*pudp)))
 		__pud_populate(pudp, __pa_symbol(bm_pmd), PUD_TYPE_TABLE);
-	pmdp = fixmap_pmd(addr);
+
+	pmdp = pmd_offset_kimg(pudp, addr);
 	__pmd_populate(pmdp, __pa_symbol(bm_pte), PMD_TYPE_TABLE);

 	/*
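The arithmetic in the new return statement is worth spelling out: it maps a
32-bit random value onto [0 .. range) using a widening multiply, with the
right shift split in two so the 64-bit product cannot overflow. A stand-alone
C sketch, with a hypothetical window size:

#include <stdint.h>
#include <stdio.h>

#define MIN_KIMG_ALIGN (2 * 1024 * 1024UL)	/* 2 MiB, as on arm64 */

/* Conceptually (range * rnd32) >> 32, split into two 16-bit shifts so the
 * 64-bit multiplication cannot overflow (range is 64k aligned and well
 * below 2^48 here). */
static uint64_t scale_seed(uint64_t range, uint64_t seed)
{
	uint32_t rnd32 = (uint32_t)(seed >> 32);

	return (((range >> 16) * rnd32) >> 16) & ~(MIN_KIMG_ALIGN - 1);
}

int main(void)
{
	uint64_t range = 1ULL << 40;	/* hypothetical usable window size */

	/* 0 maps to the bottom, ~0 maps to (almost) the top of the window */
	printf("%#llx\n", (unsigned long long)scale_seed(range, 0));
	printf("%#llx\n", (unsigned long long)scale_seed(range, ~0ULL));
	return 0;
}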
From patchwork Thu Nov 24 12:39:18 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054931
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 05/19] arm64: mm: get rid of kimage_vaddr global variable
Date: Thu, 24 Nov 2022 13:39:18 +0100
Message-Id: <20221124123932.2648991-6-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

We store the address of _text in kimage_vaddr, but since commit
09e3c22a86f6889d ("arm64: Use a variable to store non-global mappings
decision"), we no longer reference this variable from modules so we no
longer need to export it.

In fact, we don't need it at all so let's just get rid of it.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h | 6 ++----
 arch/arm64/kernel/head.S        | 2 +-
 arch/arm64/mm/mmu.c             | 3 ---
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 78e5163836a0..a4e1d832a15a 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -182,6 +182,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>

 #if VA_BITS > 48
 extern u64 vabits_actual;
@@ -193,15 +194,12 @@ extern s64 memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })

-/* the virtual base of the kernel image */
-extern u64 kimage_vaddr;
-
 /* the offset between the kernel virtual and physical mappings */
 extern u64 kimage_voffset;

 static inline unsigned long kaslr_offset(void)
 {
-	return kimage_vaddr - KIMAGE_VADDR;
+	return (u64)&_text - KIMAGE_VADDR;
 }

 static inline bool kaslr_enabled(void)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3ea5bf0a6e17..3b3c5e8e84af 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -399,7 +399,7 @@ SYM_FUNC_START_LOCAL(__primary_switched)

 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer

-	ldr_l	x4, kimage_vaddr		// Save the offset between
+	adrp	x4, _text			// Save the offset between
 	sub	x4, x4, x0			// the kernel virtual and
 	str_l	x4, kimage_voffset, x5		// physical mappings

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 222c1154b550..d083ac6a0764 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -50,9 +50,6 @@ u64 vabits_actual __ro_after_init = VA_BITS_MIN;
 EXPORT_SYMBOL(vabits_actual);
 #endif

-u64 kimage_vaddr __ro_after_init = (u64)&_text;
-EXPORT_SYMBOL(kimage_vaddr);
-
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
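The replacement expression relies on the fact that, at runtime, the address of
_text already carries the KASLR displacement, so the offset is simply the
difference with the link-time base. A trivial sketch with made-up numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t kimage_vaddr = 0xffff800008000000ULL;	/* link-time base (illustrative) */
	uint64_t runtime_text = 0xffff800009200000ULL;	/* randomized load address (made up) */

	/* what kaslr_offset() now computes from &_text directly */
	printf("kaslr offset = %#llx\n",
	       (unsigned long long)(runtime_text - kimage_vaddr));
	return 0;
}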
From patchwork Thu Nov 24 12:39:19 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054932
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 06/19] arm64: head: remove order argument from early
 mapping routine
Date: Thu, 24 Nov 2022 13:39:19 +0100
Message-Id: <20221124123932.2648991-7-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

When creating mappings in the upper region of the address space, it is
important to know the order of the table being created, i.e., the
number of bits that are being translated at the level in question. Bits
beyond that number do not contribute to the virtual address, and need
to be masked out.

Now that we no longer use the asm kernel page creation code for
mappings in the upper region, those bits are guaranteed to be zero
anyway, so we don't have to account for them in the masking. This means
we can simply use the maximum order for all tables, including the root
level table.

Doing so will also allow us to transparently use the same routines for
creating the initial ID map covering 4 levels when the VA space is
configured for 5. Note that the root level tables are always statically
allocated as full pages regardless of how many VA bits they translate.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 26 +++++++++-----------
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3b3c5e8e84af..a37525a5ee34 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -158,7 +158,6 @@ SYM_CODE_END(preserve_boot_args)
 * vstart:	virtual address of start of range
 * vend:	virtual address of end of range - we map [vstart, vend]
 * shift:	shift used to transform virtual address into index
- * order:	#imm 2log(number of entries in page table)
 * istart:	index in table corresponding to vstart
 * iend:	index in table corresponding to vend
 * count:	On entry: how many extra entries were required in previous level, scales
@@ -168,10 +167,10 @@ SYM_CODE_END(preserve_boot_args)
 * Preserves:	vstart, vend
 * Returns:	istart, iend, count
 */
-	.macro compute_indices, vstart, vend, shift, order, istart, iend, count
-	ubfx	\istart, \vstart, \shift, \order
-	ubfx	\iend, \vend, \shift, \order
-	add	\iend, \iend, \count, lsl \order
+	.macro compute_indices, vstart, vend, shift, istart, iend, count
+	ubfx	\istart, \vstart, \shift, #PAGE_SHIFT - 3
+	ubfx	\iend, \vend, \shift, #PAGE_SHIFT - 3
+	add	\iend, \iend, \count, lsl #PAGE_SHIFT - 3
 	sub	\count, \iend, \istart
 	.endm

@@ -186,7 +185,6 @@ SYM_CODE_END(preserve_boot_args)
 * vend:	virtual address of end of range - we map [vstart, vend - 1]
 * flags:	flags to use to map last level entries
 * phys:	physical address corresponding to vstart - physical memory is contiguous
- * order:	#imm 2log(number of entries in PGD table)
 *
 * If extra_shift is set, an extra level will be populated if the end address does
 * not fit in 'extra_shift' bits. This assumes vend is in the TTBR0 range.
@@ -195,7 +193,7 @@ SYM_CODE_END(preserve_boot_args)
 * Preserves:	vstart, flags
 * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
 */
-	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv, extra_shift
+	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, istart, iend, tmp, count, sv, extra_shift
 	sub \vend, \vend, #1
 	add \rtbl, \tbl, #PAGE_SIZE
 	mov \count, #0
@@ -203,32 +201,32 @@
 	.ifnb	\extra_shift
 	tst	\vend, #~((1 << (\extra_shift)) - 1)
 	b.eq	.L_\@
-	compute_indices \vstart, \vend, #\extra_shift, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #\extra_shift, \istart, \iend, \count
 	mov \sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 	.endif
 .L_\@:
-	compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count
+	compute_indices \vstart, \vend, #PGDIR_SHIFT, \istart, \iend, \count
 	mov \sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv

 #if INIT_IDMAP_TABLE_LEVELS > 3
-	compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #PUD_SHIFT, \istart, \iend, \count
 	mov \sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 #endif

 #if INIT_IDMAP_TABLE_LEVELS > 2
-	compute_indices \vstart, \vend, #INIT_IDMAP_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #INIT_IDMAP_TABLE_SHIFT, \istart, \iend, \count
 	mov \sv, \rtbl
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 #endif

-	compute_indices \vstart, \vend, #INIT_IDMAP_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
+	compute_indices \vstart, \vend, #INIT_IDMAP_BLOCK_SHIFT, \istart, \iend, \count
 	bic	\rtbl, \phys, #INIT_IDMAP_BLOCK_SIZE - 1
 	populate_entries \tbl, \rtbl, \istart, \iend, \flags, #INIT_IDMAP_BLOCK_SIZE, \tmp
 	.endm
@@ -294,7 +292,6 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 *   requires more than 47 or 48 bits, respectively.
 	 */
 #if (VA_BITS < 48)
-#define IDMAP_PGD_ORDER	(VA_BITS - PGDIR_SHIFT)
 #define EXTRA_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)

 	/*
@@ -308,7 +305,6 @@ SYM_FUNC_START_LOCAL(create_idmap)
 #error "Mismatch between VA_BITS and page size/number of translation levels"
 #endif
 #else
-#define IDMAP_PGD_ORDER	(PHYS_MASK_SHIFT - PGDIR_SHIFT)
 #define EXTRA_SHIFT
 	/*
 	 * If VA_BITS == 48, we don't have to configure an additional
 	 * translation level, but the top-level table has more entries.
 	 */
 #endif
 	adrp	x6, _end + MAX_FDT_SIZE + INIT_IDMAP_BLOCK_SIZE
 	mov	x7, INIT_IDMAP_RX_MMUFLAGS

-	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
+	map_memory x0, x1, x3, x6, x7, x3, x10, x11, x12, x13, x14, EXTRA_SHIFT

 	/* Remap BSS and the kernel page tables r/w in the ID map */
 	adrp	x1, _text
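To see why a fixed field width works, note that each translation table spans
one page of 8-byte descriptors, so every level resolves PAGE_SHIFT - 3 bits.
A sketch of the index extraction, using 4k-page shift values for illustration:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12				/* 4K pages, for illustration */
#define LEVEL_BITS (PAGE_SHIFT - 3)		/* 512 entries -> 9 bits */

/* C equivalent of what the compute_indices asm macro now does: extract a
 * fixed (PAGE_SHIFT - 3)-bit index field at 'shift', instead of taking the
 * field width ('order') as an extra macro argument. */
static unsigned int table_index(uint64_t va, unsigned int shift)
{
	return (va >> shift) & ((1u << LEVEL_BITS) - 1);
}

int main(void)
{
	uint64_t va = 0x0000000040201000ULL;	/* arbitrary example VA */

	printf("pgd/pud/pmd/pte indices: %u %u %u %u\n",
	       table_index(va, 39), table_index(va, 30),
	       table_index(va, 21), table_index(va, 12));
	return 0;
}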
From patchwork Thu Nov 24 12:39:20 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054933
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 07/19] arm64: mm: Handle LVA support as a CPU feature
Date: Thu, 24 Nov 2022 13:39:20 +0100
Message-Id: <20221124123932.2648991-8-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

Currently, we detect CPU support for 52-bit virtual addressing (LVA)
extremely early, before creating the kernel page tables or enabling the
MMU. We cannot override the feature this early, and so large virtual
addressing is always enabled on CPUs that implement support for it if
the software support for it was enabled at build time. It also means we
rely on non-trivial code in asm to deal with this feature.

Given that both the ID map and the TTBR1 mapping of the kernel image
are guaranteed to be 48-bit addressable, it is not actually necessary
to enable support this early, and instead, we can model it as a CPU
feature. That way, we can rely on code patching to get the correct
TCR.T1SZ values programmed on secondary boot and suspend from resume.

On the primary boot path, we simply enable the MMU with 48-bit virtual
addressing initially, and update TCR.T1SZ if LVA is supported from C
code, right before creating the kernel mapping. Given that TTBR1 still
points to reserved_pg_dir at this point, updating TCR.T1SZ should be
safe without the need for explicit TLB maintenance.

Since this gets rid of all accesses to the vabits_actual variable from
asm code that occurred before TCR.T1SZ had been programmed, we no
longer have a need for this variable, and we can replace it with a C
expression that produces the correct value directly, based on the value
of TCR.T1SZ.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h   | 13 ++++++++++-
 arch/arm64/kernel/cpufeature.c    | 13 +++++++++++
 arch/arm64/kernel/head.S          | 24 +++-----------------
 arch/arm64/kernel/pi/map_kernel.c | 12 ++++++++++
 arch/arm64/kernel/sleep.S         |  3 ---
 arch/arm64/mm/mmu.c               |  5 ----
 arch/arm64/mm/proc.S              | 17 +++++++-------
 arch/arm64/tools/cpucaps          |  1 +
 8 files changed, 49 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a4e1d832a15a..b3826ff2e52b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -183,9 +183,20 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
+
+static inline u64 __pure read_tcr(void)
+{
+	u64  tcr;
+
+	// read_sysreg() uses asm volatile, so avoid it here
+	asm("mrs %0, tcr_el1" : "=r"(tcr));
+	return tcr;
+}

 #if VA_BITS > 48
-extern u64 vabits_actual;
+// For reasons of #include hell, we can't use TCR_T1SZ_OFFSET/TCR_T1SZ_MASK here
+#define vabits_actual		(64 - ((read_tcr() >> 16) & 63))
 #else
 #define vabits_actual		((u64)VA_BITS)
 #endif

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index eca9df123a8b..b44aece5024c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2654,6 +2654,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.cpu_enable = cpu_trap_el0_impdef,
 	},
+#ifdef CONFIG_ARM64_VA_BITS_52
+	{
+		.desc = "52-bit Virtual Addressing (LVA)",
+		.capability = ARM64_HAS_LVA,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_width = 4,
+		.field_pos = ID_AA64MMFR2_EL1_VARange_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = ID_AA64MMFR2_EL1_VARange_52,
+	},
+#endif
 	{},
 };

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a37525a5ee34..d423ff78474e 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -80,7 +80,6 @@
 *  x20        primary_entry() .. __primary_switch()    CPU boot mode
 *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
 *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
- *  x25        primary_entry() .. start_kernel()        supported VA size
 *  x28        create_idmap()                           callee preserved temp register
 */
 SYM_CODE_START(primary_entry)
@@ -95,14 +94,6 @@ SYM_CODE_START(primary_entry)
	 * On return, the CPU will be ready for the MMU to be turned on and
	 * the TCR will have been set.
	 */
-#if VA_BITS > 48
-	mrs_s	x0, SYS_ID_AA64MMFR2_EL1
-	tst	x0, #0xf << ID_AA64MMFR2_EL1_VARange_SHIFT
-	mov	x0, #VA_BITS
-	mov	x25, #VA_BITS_MIN
-	csel	x25, x25, x0, eq
-	mov	x0, x25
-#endif
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
@@ -402,11 +393,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag

-#if VA_BITS > 48
-	adr_l	x8, vabits_actual		// Set this early so KASAN early init
-	str	x25, [x8]			// ... observes the correct value
-	dc	civac, x8			// Make visible to booting secondaries
-#endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
 #endif
@@ -521,9 +507,6 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	mov	x20, x0				// preserve boot mode
 	bl	finalise_el2
 	bl	__cpu_secondary_check52bitva
-#if VA_BITS > 48
-	ldr_l	x0, vabits_actual
-#endif
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
 	adrp	x2, idmap_pg_dir
@@ -624,10 +607,9 @@ SYM_FUNC_END(__enable_mmu)

 SYM_FUNC_START(__cpu_secondary_check52bitva)
 #if VA_BITS > 48
-	ldr_l	x0, vabits_actual
-	cmp	x0, #52
-	b.ne	2f
-
+alternative_if_not ARM64_HAS_LVA
+	ret
+alternative_else_nop_endif
 	mrs_s	x0, SYS_ID_AA64MMFR2_EL1
 	and	x0, x0, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT)
 	cbnz	x0, 2f

diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 36537d1b837b..7dd6daee0ffd 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -122,6 +122,15 @@ static bool __init arm64_early_this_cpu_has_e0pd(void)
 					      ID_AA64MMFR2_EL1_E0PD_SHIFT);
 }

+static bool __init arm64_early_this_cpu_has_lva(void)
+{
+	u64 mmfr2;
+
+	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	return cpuid_feature_extract_unsigned_field(mmfr2,
+						    ID_AA64MMFR2_EL1_VARange_SHIFT);
+}
+
 static bool __init arm64_early_this_cpu_has_pac(void)
 {
 	u64 isar1, isar2;
@@ -284,6 +293,9 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 		arm64_use_ng_mappings = true;
 	}

+	if (VA_BITS == 52 && arm64_early_this_cpu_has_lva())
+		sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(VA_BITS));
+
 	va_base = KIMAGE_VADDR + kaslr_offset;
 	map_kernel(kaslr_offset, va_base - pa_base, root_level);
 }

diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 97c9de57725d..617f78ad43a1 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -101,9 +101,6 @@ SYM_FUNC_END(__cpu_suspend_enter)
 SYM_CODE_START(cpu_resume)
 	bl	init_kernel_el
 	bl	finalise_el2
-#if VA_BITS > 48
-	ldr_l	x0, vabits_actual
-#endif
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d083ac6a0764..b0c702e5bf66 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -45,11 +45,6 @@

 int idmap_t0sz __ro_after_init;

-#if VA_BITS > 48
-u64 vabits_actual __ro_after_init = VA_BITS_MIN;
-EXPORT_SYMBOL(vabits_actual);
-#endif
-
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index ae2caab15ac8..e14648ca820b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -400,8 +400,6 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
 *
 *	Initialise the processor for turning the MMU on.
 *
- * Input:
- *	x0 - actual number of VA bits (ignored unless VA_BITS > 48)
 * Output:
 *	Return in x0 the value of the SCTLR_EL1 register.
 */
@@ -426,20 +424,21 @@ SYM_FUNC_START(__cpu_setup)
 	mair	.req	x17
 	tcr	.req	x16
 	mov_q	mair, MAIR_EL1_SET
-	mov_q	tcr, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
+	mov_q	tcr, TCR_TxSZ(VA_BITS_MIN) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
 			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS

 	tcr_clear_errata_bits tcr, x9, x5

-#ifdef CONFIG_ARM64_VA_BITS_52
-	sub		x9, xzr, x0
-	add		x9, x9, #64
-	tcr_set_t1sz	tcr, x9
-#else
+#if VA_BITS < 48
 	idmap_get_t0sz	x9
-#endif
 	tcr_set_t0sz	tcr, x9
+#elif VA_BITS > VA_BITS_MIN
+	mov		x9, #64 - VA_BITS
+alternative_if ARM64_HAS_LVA
+	tcr_set_t1sz	tcr, x9
+alternative_else_nop_endif
+#endif

 	/*
 	 * Set the IPS bits in TCR_EL1.

diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index f1c0347ec31a..ec650a2cf433 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -30,6 +30,7 @@ HAS_GENERIC_AUTH_IMP_DEF
 HAS_IRQ_PRIO_MASKING
 HAS_LDAPR
 HAS_LSE_ATOMICS
+HAS_LVA
 HAS_NO_FPSIMD
 HAS_NO_HW_PREFETCH
 HAS_PAN
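The vabits_actual replacement decodes TCR_EL1.T1SZ (bits [21:16]) as
64 - T1SZ. A quick stand-alone check of that arithmetic, with the TCR value
synthesized rather than read from the register:

#include <stdint.h>
#include <stdio.h>

/* Same expression as the new vabits_actual macro, applied to a plain value */
static uint64_t vabits_from_tcr(uint64_t tcr)
{
	return 64 - ((tcr >> 16) & 63);
}

int main(void)
{
	/* T1SZ = 16 -> 48-bit VAs; T1SZ = 12 -> 52-bit VAs */
	printf("%llu\n", (unsigned long long)vabits_from_tcr(16ULL << 16));
	printf("%llu\n", (unsigned long long)vabits_from_tcr(12ULL << 16));
	return 0;
}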
From patchwork Thu Nov 24 12:39:21 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13054934
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson,
 Ryan Roberts
Subject: [PATCH v2 08/19] arm64: mm: Deal with potential ID map extension if
 VA_BITS > VA_BITS_MIN
Date: Thu, 24 Nov 2022 13:39:21 +0100
Message-Id: <20221124123932.2648991-9-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>
References: <20221124123932.2648991-1-ardb@kernel.org>

On 16k pages with LPA2, we will fall back to 47 bits of VA space in
case LPA2 is not implemented. Since we support loading the kernel
anywhere in the 48-bit addressable PA space, this means we may have to
extend the ID map like we normally do in such cases, even when
VA_BITS >= 48.

Since VA_BITS_MIN will equal 47 in that case, use that symbolic
constant instead to determine whether ID map extension is required.
Also, use vabits_actual to determine whether create_idmap() needs to
add an extra level as well, as this is never needed if LPA2 is enabled
at runtime.

Note that VA_BITS, VA_BITS_MIN and vabits_actual all collapse into the
same compile time constant on configurations that currently support ID
map extension, so this change should have no effect whatsoever there.

Note that the use of PGDIR_SHIFT in the calculation of EXTRA_SHIFT is
no longer appropriate, as it is derived from VA_BITS, not VA_BITS_MIN.
So rephrase the check whether VA_BITS_MIN is consistent with the number
of levels.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/kernel-pgtable.h |  2 +-
 arch/arm64/kernel/head.S                | 40 ++++++++++----------
 arch/arm64/mm/mmu.c                     |  4 +-
 arch/arm64/mm/proc.S                    |  5 ++-
 4 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 4278cd088347..faa11e8b4a0e 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -73,7 +73,7 @@
 #define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))

 /* the initial ID map may need two extra pages if it needs to be extended */
-#if VA_BITS < 48
+#if VA_BITS_MIN < 48
 #define INIT_IDMAP_DIR_SIZE	((INIT_IDMAP_DIR_PAGES + 2) * PAGE_SIZE)
 #else
 #define INIT_IDMAP_DIR_SIZE	(INIT_IDMAP_DIR_PAGES * PAGE_SIZE)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index d423ff78474e..94de42dfe97d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -266,40 +266,40 @@ SYM_FUNC_START_LOCAL(create_idmap)
	 * space for ordinary kernel and user space mappings.
	 *
	 * There are three cases to consider here:
-	 * - 39 <= VA_BITS < 48, and the ID map needs up to 48 VA bits to cover
-	 *   the placement of the image. In this case, we configure one extra
-	 *   level of translation on the fly for the ID map only. (This case
-	 *   also covers 42-bit VA/52-bit PA on 64k pages).
+	 * - 39 <= VA_BITS_MIN < 48, and the ID map needs up to 48 VA bits to
+	 *   cover the placement of the image. In this case, we configure one
+	 *   extra level of translation on the fly for the ID map only. (This
+	 *   case also covers 42-bit VA/52-bit PA on 64k pages).
	 *
-	 * - VA_BITS == 48, and the ID map needs more than 48 VA bits. This can
-	 *   only happen when using 64k pages, in which case we need to extend
-	 *   the root level table rather than add a level. Note that we can
-	 *   treat this case as 'always extended' as long as we take care not
-	 *   to program an unsupported T0SZ value into the TCR register.
+	 * - VA_BITS_MIN == 48, and the ID map needs more than 48 VA bits. This
+	 *   can only happen when using 64k pages, in which case we need to
+	 *   extend the root level table rather than add a level. Note that we
+	 *   can treat this case as 'always extended' as long as we take care
+	 *   not to program an unsupported T0SZ value into the TCR register.
	 *
	 * - Combinations that would require two additional levels of
	 *   translation are not supported, e.g., VA_BITS==36 on 16k pages, or
	 *   VA_BITS==39/4k pages with 5-level paging, where the input address
	 *   requires more than 47 or 48 bits, respectively.
	 */
-#if (VA_BITS < 48)
-#define EXTRA_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)
+#if VA_BITS_MIN < 48
+#define EXTRA_SHIFT	VA_BITS_MIN

	/*
-	 * If VA_BITS < 48, we have to configure an additional table level.
-	 * First, we have to verify our assumption that the current value of
-	 * VA_BITS was chosen such that all translation levels are fully
-	 * utilised, and that lowering T0SZ will always result in an additional
-	 * translation level to be configured.
+	 * If VA_BITS_MIN < 48, we may have to configure an additional table
+	 * level. First, we have to verify our assumption that the current
+	 * value of VA_BITS_MIN was chosen such that all translation levels are
+	 * fully utilised, and that lowering T0SZ will always result in an
+	 * additional translation level to be configured.
	 */
-#if VA_BITS != EXTRA_SHIFT
-#error "Mismatch between VA_BITS and page size/number of translation levels"
+#if ((VA_BITS_MIN - PAGE_SHIFT) % (PAGE_SHIFT - 3)) != 0
+#error "Mismatch between VA_BITS_MIN and page size/number of translation levels"
 #endif
 #else
 #define EXTRA_SHIFT
	/*
-	 * If VA_BITS == 48, we don't have to configure an additional
-	 * translation level, but the top-level table has more entries.
+	 * If VA_BITS_MIN == 48, we don't have to configure an additional
+	 * translation level, but the top-level table may have more entries.
	 */
 #endif
 	adrp	x0, init_idmap_pg_dir

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b0c702e5bf66..bcf617f956cb 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -702,9 +702,9 @@ static void __init create_idmap(void)
 	u64 pgd_phys;

 	/* check if we need an additional level of translation */
-	if (VA_BITS < 48 && idmap_t0sz < (64 - VA_BITS_MIN)) {
+	if (vabits_actual < 48 && idmap_t0sz < (64 - VA_BITS_MIN)) {
 		pgd_phys = early_pgtable_alloc(PAGE_SHIFT);
-		set_pgd(&idmap_pg_dir[start >> VA_BITS],
+		set_pgd(&idmap_pg_dir[start >> VA_BITS_MIN],
 			__pgd(pgd_phys | P4D_TYPE_TABLE));
 		pgd = __va(pgd_phys);
 	}

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index e14648ca820b..873adaccb12f 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -430,10 +430,11 @@ SYM_FUNC_START(__cpu_setup)

 	tcr_clear_errata_bits tcr, x9, x5

-#if VA_BITS < 48
+#if VA_BITS_MIN < 48
 	idmap_get_t0sz	x9
 	tcr_set_t0sz	tcr, x9
-#elif VA_BITS > VA_BITS_MIN
+#endif
+#if VA_BITS > VA_BITS_MIN
 	mov		x9, #64 - VA_BITS
 alternative_if ARM64_HAS_LVA
 	tcr_set_t1sz	tcr, x9
b=R2fcHYb++L4j37iXeWqMmFA6VgeN6MnxJTOL4r0F3KfUWupzoSmrrIqhCtxw22Noa TnjQVrwMwF/Gs/bjjWCBUkXqCuzU+pUBRhdinJp4iJ2Ge+RYvEcmz1w4LAMH55ilCL 487E5Xb/6i5nWHwiX0XgzG9MeHYLo4pUAOH3AlkJh0PXgeseV0DPbNAiKUgKAJCCl2 P71qZFlLt1Ub7iY/AuTheIeIfO0LoNg0WFl10WbRS7m3a79EI+VY/qEjRWjfWezWEc U7wBzVL8LQTZEO30ykEjIrypsoGoDDdnGHp4HgFMMGKClenZgsCB3xEsINN4wvv0Qq cMNlXGi7LqM6g== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown , Anshuman Khandual , Richard Henderson , Ryan Roberts Subject: [PATCH v2 09/19] arm64: mm: Add feature override support for LVA Date: Thu, 24 Nov 2022 13:39:22 +0100 Message-Id: <20221124123932.2648991-10-ardb@kernel.org> X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org> References: <20221124123932.2648991-1-ardb@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221124_044021_822046_91E2A773 X-CRM114-Status: GOOD ( 23.99 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add support for overriding the VARange field of the MMFR2 CPU ID register. This permits the associated LVA feature to be overridden early enough for the boot code that creates the kernel mapping to take it into account. Given that LPA2 implies LVA, disabling the latter should disable the former as well. So override the ID_AA64MMFR0.TGran field of the current page size as well if it advertises support for 52-bit virtual addressing. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 17 +++++++----- arch/arm64/include/asm/cpufeature.h | 2 ++ arch/arm64/kernel/cpufeature.c | 8 ++++-- arch/arm64/kernel/image-vars.h | 2 ++ arch/arm64/kernel/pi/idreg-override.c | 29 +++++++++++++++++++- arch/arm64/kernel/pi/map_kernel.c | 2 ++ 6 files changed, 50 insertions(+), 10 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 89038067ef34..4cb84dc6e220 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -604,18 +604,21 @@ alternative_endif .endm /* - * Offset ttbr1 to allow for 48-bit kernel VAs set with 52-bit PTRS_PER_PGD. + * If the kernel is built for 52-bit virtual addressing but the hardware only + * supports 48 bits, we cannot program the pgdir address into TTBR1 directly, + * but we have to add an offset so that the TTBR1 address corresponds with the + * pgdir entry that covers the lowest 48-bit addressable VA. + * * orr is used as it can cover the immediate value (and is idempotent). - * In future this may be nop'ed out when dealing with 52-bit kernel VAs. * ttbr: Value of ttbr to set, modified. 
*/ .macro offset_ttbr1, ttbr, tmp #ifdef CONFIG_ARM64_VA_BITS_52 - mrs_s \tmp, SYS_ID_AA64MMFR2_EL1 - and \tmp, \tmp, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT) - cbnz \tmp, .Lskipoffs_\@ - orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET -.Lskipoffs_\@ : + mrs \tmp, tcr_el1 + and \tmp, \tmp, #TCR_T1SZ_MASK + cmp \tmp, #TCR_T1SZ(VA_BITS_MIN) + orr \tmp, \ttbr, #TTBR1_BADDR_4852_OFFSET + csel \ttbr, \tmp, \ttbr, eq #endif .endm diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index b8c7a2d13bbe..e1d194350d72 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -909,7 +909,9 @@ static inline unsigned int get_vmid_bits(u64 mmfr1) struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); +extern struct arm64_ftr_override id_aa64mmfr0_override; extern struct arm64_ftr_override id_aa64mmfr1_override; +extern struct arm64_ftr_override id_aa64mmfr2_override; extern struct arm64_ftr_override id_aa64pfr0_override; extern struct arm64_ftr_override id_aa64pfr1_override; extern struct arm64_ftr_override id_aa64zfr0_override; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index b44aece5024c..2ae42db621fe 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -636,7 +636,9 @@ static const struct arm64_ftr_bits ftr_raz[] = { #define ARM64_FTR_REG(id, table) \ __ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override) +struct arm64_ftr_override id_aa64mmfr0_override; struct arm64_ftr_override id_aa64mmfr1_override; +struct arm64_ftr_override id_aa64mmfr2_override; struct arm64_ftr_override id_aa64pfr0_override; struct arm64_ftr_override id_aa64pfr1_override; struct arm64_ftr_override id_aa64zfr0_override; @@ -700,10 +702,12 @@ static const struct __ftr_reg_entry { &id_aa64isar2_override), /* Op1 = 0, CRn = 0, CRm = 7 */ - ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0), + ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0, + &id_aa64mmfr0_override), ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1, &id_aa64mmfr1_override), - ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2), + ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2, + &id_aa64mmfr2_override), /* Op1 = 0, CRn = 1, CRm = 2 */ ARM64_FTR_REG(SYS_ZCR_EL1, ftr_zcr), diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 14787614b2e7..82bafa1f869c 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -45,7 +45,9 @@ PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); PROVIDE(__pi_id_aa64isar1_override = id_aa64isar1_override); PROVIDE(__pi_id_aa64isar2_override = id_aa64isar2_override); +PROVIDE(__pi_id_aa64mmfr0_override = id_aa64mmfr0_override); PROVIDE(__pi_id_aa64mmfr1_override = id_aa64mmfr1_override); +PROVIDE(__pi_id_aa64mmfr2_override = id_aa64mmfr2_override); PROVIDE(__pi_id_aa64pfr0_override = id_aa64pfr0_override); PROVIDE(__pi_id_aa64pfr1_override = id_aa64pfr1_override); PROVIDE(__pi_id_aa64smfr0_override = id_aa64smfr0_override); diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c index d0ce3dc4e07a..3de1fe1e2559 100644 --- a/arch/arm64/kernel/pi/idreg-override.c +++ b/arch/arm64/kernel/pi/idreg-override.c @@ -138,12 +138,38 @@ DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override, FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF), {}); +DEFINE_OVERRIDE(7, mmfr2, "id_aa64mmfr2", id_aa64mmfr2_override, + FIELD("varange", 
ID_AA64MMFR2_EL1_VARange_SHIFT), + {}); + +#ifdef ID_AA64MMFR0_EL1_TGRAN_LPA2 +asmlinkage bool __init mmfr2_varange_filter(u64 val) +{ + u64 mmfr0; + int feat; + + if (val) + return false; + + mmfr0 = read_sysreg(id_aa64mmfr0_el1); + feat = cpuid_feature_extract_signed_field(mmfr0, + ID_AA64MMFR0_EL1_TGRAN_SHIFT); + if (feat >= ID_AA64MMFR0_EL1_TGRAN_LPA2) { + id_aa64mmfr0_override.val |= + (ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT; + id_aa64mmfr0_override.mask |= 0xf << ID_AA64MMFR0_EL1_TGRAN_SHIFT; + } + return true; +} +DEFINE_OVERRIDE_FILTER(mmfr2, 0, mmfr2_varange_filter); +#endif + /* * regs[] is populated by R_AARCH64_PREL32 directives invisible to the compiler * so it cannot be static or const, or the compiler might try to use constant * propagation on the values. */ -asmlinkage s32 regs[7] __initdata = { [0 ... ARRAY_SIZE(regs) - 1] = S32_MAX }; +asmlinkage s32 regs[8] __initdata = { [0 ... ARRAY_SIZE(regs) - 1] = S32_MAX }; static struct arm64_ftr_override * __init reg_override(int i) { @@ -168,6 +194,7 @@ static const struct { { "arm64.nomte", "id_aa64pfr1.mte=0" }, { "nokaslr", "arm64_sw.nokaslr=1" }, { "rodata=off", "arm64_sw.rodataoff=1" }, + { "arm64.nolva", "id_aa64mmfr2.varange=0" }, }; static int __init find_field(const char *cmdline, char *opt, int len, diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index 7dd6daee0ffd..a9472ab8d901 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -127,6 +127,8 @@ static bool __init arm64_early_this_cpu_has_lva(void) u64 mmfr2; mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1); + mmfr2 &= ~id_aa64mmfr2_override.mask; + mmfr2 |= id_aa64mmfr2_override.val; return cpuid_feature_extract_unsigned_field(mmfr2, ID_AA64MMFR2_EL1_VARange_SHIFT); } From patchwork Thu Nov 24 12:39:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13054936 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 30D92C4321E for ; Thu, 24 Nov 2022 12:45:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+3kMOzDj3bJULWfxOsqkqhYhQ8YZH6fcMRgnf8HTPew=; b=IQdl5MbBmYVsKY zaTfxOicUwZzvRYX3YFFl2kulKp+ALMf0WedVZKPLLwz39wDMsZ1cmQD3VLl7IiRN0dmONaThJpeP fMbdkOEQsgxHcOGcqybthwRqqCkDNFoY1eQMq5zaVYt/gxsetaF79rlAbUhqfsifvi20EXAdfZqMi Q63Af4RouvDisFq1BUjs2WNMvMKe39GLGx/I9AAYtM2uVeOT5O3Bggo1v6tuZgQveRxVvB0fuGkHr U5R481MgQAixv6x3VeOpDxYGfsKC94UhxwZObAi+27y+zg24ZoF+ULP4DaFzbFWecX5v7UxhiW3rT g44K4EH3zCETuJaXDhnw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBaW-008QlL-Aa; Thu, 24 Nov 2022 12:44:20 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red 
Hat Linux)) id 1oyBWg-008OXK-Qi for linux-arm-kernel@lists.infradead.org; Thu, 24 Nov 2022 12:40:24 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 689816210B; Thu, 24 Nov 2022 12:40:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3EE2DC433D6; Thu, 24 Nov 2022 12:40:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669293621; bh=pF90lXgtccxdoLzmn1+lmWSi1OrSbQLggAaRT4/wiFg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dMaBx9UUWz/AW7Q+vBLPDVtT/EcnitGgq4HVadszdmuBc9kfJufdth7GR5usENhKT dnMW64QzMzhxEb3ow7IlMiZYkMDOjglrTNAQ36oa1eYwmyG6fFtsXaK3YjAMZZ4YVh +e0EhpFupX/dWDiftpdB4Vie5TR9Elf0gRByAbWRdWpvWGWNWOYulI/G7CB8K82BbY U0Z6LURrIcAnNcF2r9m02mXHQhKoHJrX3hNqzAnsuMuDf+jlAV0Y9k7W3Eb+B49msC vtaxZLnyHj38STLly6IuHNNEZwMpLwFZ41OLQWrXEUcrkC2Zfi5akU6lipJI09Evws /4jX43Q695Omw== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown , Anshuman Khandual , Richard Henderson , Ryan Roberts Subject: [PATCH v2 10/19] arm64: mm: Wire up TCR.DS bit to PTE shareability fields Date: Thu, 24 Nov 2022 13:39:23 +0100 Message-Id: <20221124123932.2648991-11-ardb@kernel.org> X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org> References: <20221124123932.2648991-1-ardb@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221124_044022_987026_542E3810 X-CRM114-Status: GOOD ( 18.52 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When LPA2 is enabled, bits 8 and 9 of page and block descriptors become part of the output address instead of carrying shareability attributes for the region in question. So avoid setting these bits if TCR.DS == 1, which means LPA2 is enabled. Signed-off-by: Ard Biesheuvel --- arch/arm64/Kconfig | 4 ++++ arch/arm64/include/asm/pgtable-prot.h | 18 ++++++++++++++++-- arch/arm64/mm/mmap.c | 4 ++++ arch/arm64/mm/proc.S | 8 ++++++++ 4 files changed, 32 insertions(+), 2 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 170832f31eff..6d299c6c0a56 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1264,6 +1264,10 @@ config ARM64_PA_BITS default 48 if ARM64_PA_BITS_48 default 52 if ARM64_PA_BITS_52 +config ARM64_LPA2 + def_bool y + depends on ARM64_PA_BITS_52 && !ARM64_64K_PAGES + choice prompt "Endianness" default CPU_LITTLE_ENDIAN diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index 9b165117a454..269584d5a2c0 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -40,6 +40,20 @@ extern bool arm64_use_ng_mappings; #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0) #define PMD_MAYBE_NG (arm64_use_ng_mappings ? 
PMD_SECT_NG : 0) +#ifndef CONFIG_ARM64_LPA2 +#define lpa2_is_enabled() false +#define PTE_MAYBE_SHARED PTE_SHARED +#define PMD_MAYBE_SHARED PMD_SECT_S +#else +static inline bool __pure lpa2_is_enabled(void) +{ + return read_tcr() & TCR_DS; +} + +#define PTE_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PTE_SHARED) +#define PMD_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PMD_SECT_S) +#endif + /* * If we have userspace only BTI we don't want to mark kernel pages * guarded even if the system does support BTI. @@ -50,8 +64,8 @@ extern bool arm64_use_ng_mappings; #define PTE_MAYBE_GP 0 #endif -#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG) -#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG) +#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_MAYBE_NG | PTE_MAYBE_SHARED | PTE_AF) +#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_MAYBE_NG | PMD_MAYBE_SHARED | PMD_SECT_AF) #define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE)) #define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE)) diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c index 8f5b7ce857ed..adcf547f74eb 100644 --- a/arch/arm64/mm/mmap.c +++ b/arch/arm64/mm/mmap.c @@ -73,6 +73,10 @@ static int __init adjust_protection_map(void) protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY; } + if (lpa2_is_enabled()) + for (int i = 0; i < ARRAY_SIZE(protection_map); i++) + pgprot_val(protection_map[i]) &= ~PTE_SHARED; + return 0; } arch_initcall(adjust_protection_map); diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 873adaccb12f..0f576d72b847 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -297,6 +297,14 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) mov temp_pte, x5 mov pte_flags, #KPTI_NG_PTE_FLAGS +#ifdef CONFIG_ARM64_LPA2 + /* Omit PTE_SHARED from pte_flags if LPA2 is enabled */ + mrs x5, tcr_el1 + tst x5, #TCR_DS + mov x5, #KPTI_NG_PTE_FLAGS & ~PTE_SHARED + csel pte_flags, x5, pte_flags, ne +#endif + /* Everybody is enjoying the idmap, so we can rewrite swapper. 
*/ /* PGD */ adrp cur_pgdp, swapper_pg_dir From patchwork Thu Nov 24 12:39:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13054937 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6C9F7C43217 for ; Thu, 24 Nov 2022 12:46:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=m+mOYJJvcGAJEgOZ0s+7aYJuZwTkWQjtxXN8Otx6AMo=; b=YZkW9U+SrpcX8P Jgv0h+8chzKB1WpXUGXgQobTPn6XGi5W5HYZP+CL2ZBcFva6p3IUFC003xZGAyafcOOUbtJkMFuck EvcIV0qk26zTDXAZ6p8BvbdsuTOInX25FMc7QAbza9N7gYxtYrus9AZBNZEBzHXutLbTBTP2/KytX sHtRsvoBuvYkS1zYNQbRG5i6+0ywN9mNEYWTumjWCNY7rdeus/vwuPP8ZW+zaCrieYJap8QwTAI+a h+V9B+LTRHErdSI9Y1i+DjpiO7IFfCmUz1ZHMFpK87ShNFz34KfL/FFLSxDGGt+UHyozkJPwo61FJ QXeih/r/xCkFStKFPfgQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBbE-008R8v-Iq; Thu, 24 Nov 2022 12:45:05 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBWj-008OYs-Ue for linux-arm-kernel@lists.infradead.org; Thu, 24 Nov 2022 12:40:27 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7D4B46210B; Thu, 24 Nov 2022 12:40:25 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 53ABCC43145; Thu, 24 Nov 2022 12:40:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669293624; bh=Niw4af6e3FlSytS2iH0FsPOiJDzMll3jlZhbjdn3Sqo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=k+2BmSDSv+xd1YFntYpMdqKCWV9pdK32mSAR5Ebt3jybgMmDtVaU0GoEyGnwbirWH 6H+ORFOTli/hYB7ozJR+h5/hkoNX8JyVkS9sAP34azsa/4oegIindYCyVzS7C2iBWg c2teRRSJE3pZ5Jp/C5O0m9cW7YPOysYoHJUMg4yUA5T6Iwr0+Tx2NeYy+616E+DOpo Vckg/bwnB5VWdJVLAUdhn/LcgdJEKHYxCy00Ry78C34fWM0QF0yNbIMHBijl/A8PZl +InSduKeZX+ViQI6aysj4pbMoCAE77taaBtEUQeyJnQEhiXVxmp2B2SYNueb85HbcO oOkY2uYMgg+MQ== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown , Anshuman Khandual , Richard Henderson , Ryan Roberts Subject: [PATCH v2 11/19] arm64: mm: Add LPA2 support to phys<->pte conversion routines Date: Thu, 24 Nov 2022 13:39:24 +0100 Message-Id: <20221124123932.2648991-12-ardb@kernel.org> X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org> References: <20221124123932.2648991-1-ardb@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221124_044026_091387_6A503933 X-CRM114-Status: 
GOOD ( 17.00 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for enabling LPA2 support, introduce the mask values for converting between physical addresses and their representations in a page table descriptor. While at it, move the pte_to_phys asm macro into its only user, so that we can freely modify it to use its input value register as a temp register. For LPA2, the PTE_ADDR_MASK contains two non-adjacent sequences of zero bits, which means it no longer fits into the immediate field of an ordinary ALU instruction. So let's redefine it to include the bits in between as well, and only use it when converting from physical address to PTE representation, where the distinction does not matter. Also update the name accordingly to emphasize this. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 16 ++-------------- arch/arm64/include/asm/pgtable-hwdef.h | 10 +++++++--- arch/arm64/include/asm/pgtable.h | 5 +++-- arch/arm64/mm/proc.S | 8 ++++++++ 4 files changed, 20 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 4cb84dc6e220..786bf62826a8 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -651,25 +651,13 @@ alternative_endif .macro phys_to_pte, pte, phys #ifdef CONFIG_ARM64_PA_BITS_52 - /* - * We assume \phys is 64K aligned and this is guaranteed by only - * supporting this configuration with 64K pages. - */ - orr \pte, \phys, \phys, lsr #36 - and \pte, \pte, #PTE_ADDR_MASK + orr \pte, \phys, \phys, lsr #PTE_ADDR_HIGH_SHIFT + and \pte, \pte, #PHYS_TO_PTE_ADDR_MASK #else mov \pte, \phys #endif .endm - .macro pte_to_phys, phys, pte - and \phys, \pte, #PTE_ADDR_MASK -#ifdef CONFIG_ARM64_PA_BITS_52 - orr \phys, \phys, \phys, lsl #PTE_ADDR_HIGH_SHIFT - and \phys, \phys, GENMASK_ULL(PHYS_MASK_SHIFT - 1, PAGE_SHIFT) -#endif - .endm - /* * tcr_clear_errata_bits - Clear TCR bits that trigger an errata on this CPU. 
*/ diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index c4ad7fbb12c5..b91fe4781b06 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -155,13 +155,17 @@ #define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */ #define PTE_UXN (_AT(pteval_t, 1) << 54) /* User XN */ -#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) +#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) #ifdef CONFIG_ARM64_PA_BITS_52 +#ifdef CONFIG_ARM64_64K_PAGES #define PTE_ADDR_HIGH (_AT(pteval_t, 0xf) << 12) -#define PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) #define PTE_ADDR_HIGH_SHIFT 36 +#define PHYS_TO_PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) #else -#define PTE_ADDR_MASK PTE_ADDR_LOW +#define PTE_ADDR_HIGH (_AT(pteval_t, 0x3) << 8) +#define PTE_ADDR_HIGH_SHIFT 42 +#define PHYS_TO_PTE_ADDR_MASK GENMASK_ULL(49, 8) +#endif #endif /* diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index daedd6172227..666db7173d0f 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -76,15 +76,16 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; #ifdef CONFIG_ARM64_PA_BITS_52 static inline phys_addr_t __pte_to_phys(pte_t pte) { + pte_val(pte) &= ~PTE_MAYBE_SHARED; return (pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT); } static inline pteval_t __phys_to_pte_val(phys_addr_t phys) { - return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK; + return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK; } #else -#define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_MASK) +#define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_LOW) #define __phys_to_pte_val(phys) (phys) #endif diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 0f576d72b847..6415623b7ebf 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -208,6 +208,14 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) .pushsection ".idmap.text", "awx" + .macro pte_to_phys, phys, pte + and \phys, \pte, #PTE_ADDR_LOW +#ifdef CONFIG_ARM64_PA_BITS_52 + and \pte, \pte, #PTE_ADDR_HIGH + orr \phys, \phys, \pte, lsl #PTE_ADDR_HIGH_SHIFT +#endif + .endm + .macro kpti_mk_tbl_ng, type, num_entries add end_\type\()p, cur_\type\()p, #\num_entries * 8 .Ldo_\type: From patchwork Thu Nov 24 12:39:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13054938 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6E114C43217 for ; Thu, 24 Nov 2022 12:46:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=3N+P5k3M3Wz1f+u0gN9XSeowDS6v7NgPE2TVvS8QTqc=; b=T2mo6ZDM02MU8L 
xFNx8X6V14CKFiqCE+mSmT9AFAGc2WJmHgolNp378qhsEFWQVNqG6Ss9vZqmJrOkROBmnSJk8nkAk l1hGe63DTf4Qkw+2GNSTOg35FwhcJnAUcVhF2aSAY/baxkFP3HQTgq3SGvNfKNWs0ayKkU5Qhb34d UUgrpPrgOWK3IbbhxAmsFoh6hdrt0AwQi8fE0AgwyHDemiq2/lo9ey/bPAbisJGMcNKY8SvSmLMAi EfHZoxgjWaCSGGambDK1qT3zOyeeDjYHz0VSt1Z94ET+M94+g1InjQrBJVRpstuhCSkG3YhrakYZP tNzvcldv+LAfynieCFFg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBbu-008RZ7-V3; Thu, 24 Nov 2022 12:45:47 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBWn-008OaG-1O for linux-arm-kernel@lists.infradead.org; Thu, 24 Nov 2022 12:40:31 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 94BD36210B; Thu, 24 Nov 2022 12:40:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6A5FFC433D6; Thu, 24 Nov 2022 12:40:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669293628; bh=+BA2WGeCjI1duiSkD12tCpSaOJsinLppv/5OnYQGyhE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pvk1/81i0Ltj19u+YRoBBbQDhXZ9z6iSj4wOP+Tt+dKCr3f8SDndA+/3fvTHXGNxi nOfM7/vi6e8toCEQJUEojKgiHh1ayeRwVy+KUUAAAaCWHxKY18U7A4QKc+niAxh3bb 9uhhpCLho8ey6/k6QBk7edVEZ+WB4Dj2imqItSd4iW/aaAfx4hMbTGyKRgeEQOAxTo KKfLFXsLGqgLUgRLdHY96RbVRl74Hhy3ixdcFbzEk9YS6uVsHHT8b77FDJjBRY7jwE AU2vwAez1aoIPh2yNHxQVElCsWf8bGUD0/6DW7yY+2FH8jwPTnWAwztI2WG9hDjVB6 V/FJuAsRcoIjA== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown , Anshuman Khandual , Richard Henderson , Ryan Roberts Subject: [PATCH v2 12/19] arm64: mm: Add definitions to support 5 levels of paging Date: Thu, 24 Nov 2022 13:39:25 +0100 Message-Id: <20221124123932.2648991-13-ardb@kernel.org> X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org> References: <20221124123932.2648991-1-ardb@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221124_044029_185057_F391955B X-CRM114-Status: GOOD ( 25.38 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add the required types and descriptor accessors to support 5 levels of paging in the common code. This is one of the prerequisites for supporting 52-bit virtual addressing with 4k pages. Note that this does not cover the code that handles kernel mappings or the fixmap. 
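As a quick sanity check of the level arithmetic above, the relevant shifts for the 4k/52-bit configuration can be reproduced in a few lines of standalone C. This is a hand-rolled sketch mirroring the ARM64_HW_PGTABLE_LEVEL_SHIFT style macros from pgtable-hwdef.h, not the kernel headers themselves:

  #include <stdio.h>

  #define PAGE_SHIFT	12	/* 4k pages */
  #define PGTABLE_LEVELS	5	/* 52-bit VA with 4k pages */

  /* log2 of the size mapped by a single entry at level n (-1 <= n <= 3) */
  #define LEVEL_SHIFT(n)	((PAGE_SHIFT - 3) * (4 - (n)) + 3)
  #define HW_PGTABLE_LEVELS(va_bits)	(((va_bits) - 4) / (PAGE_SHIFT - 3))

  int main(void)
  {
  	printf("levels(52)   = %d\n", HW_PGTABLE_LEVELS(52));           /* 5  */
  	printf("PGDIR_SHIFT  = %d\n", LEVEL_SHIFT(4 - PGTABLE_LEVELS)); /* 48 */
  	printf("P4D_SHIFT    = %d\n", LEVEL_SHIFT(0));                  /* 39 */
  	printf("PTRS_PER_PGD = %d\n",
  	       1 << (52 - LEVEL_SHIFT(4 - PGTABLE_LEVELS)));            /* 16 */
  	return 0;
  }

In other words, a 52-bit VA space on 4k pages needs 5 levels, and the new level -1 table (the pgd) holds only 16 entries, each covering a 256 TiB chunk.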
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/pgalloc.h | 41 +++++++++++ arch/arm64/include/asm/pgtable-hwdef.h | 22 +++++- arch/arm64/include/asm/pgtable-types.h | 6 ++ arch/arm64/include/asm/pgtable.h | 75 +++++++++++++++++++- arch/arm64/mm/mmu.c | 31 +++++++- arch/arm64/mm/pgd.c | 15 +++- 6 files changed, 181 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h index 237224484d0f..cae8c648f462 100644 --- a/arch/arm64/include/asm/pgalloc.h +++ b/arch/arm64/include/asm/pgalloc.h @@ -60,6 +60,47 @@ static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot) } #endif /* CONFIG_PGTABLE_LEVELS > 3 */ +#if CONFIG_PGTABLE_LEVELS > 4 + +static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot) +{ + if (pgtable_l5_enabled()) + set_pgd(pgdp, __pgd(__phys_to_pgd_val(p4dp) | prot)); +} + +static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp) +{ + pgdval_t pgdval = PGD_TYPE_TABLE; + + pgdval |= (mm == &init_mm) ? PGD_TABLE_UXN : PGD_TABLE_PXN; + __pgd_populate(pgdp, __pa(p4dp), pgdval); +} + +static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr) +{ + gfp_t gfp = GFP_PGTABLE_USER; + + if (mm == &init_mm) + gfp = GFP_PGTABLE_KERNEL; + return (p4d_t *)get_zeroed_page(gfp); +} + +static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) +{ + if (!pgtable_l5_enabled()) + return; + BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); + free_page((unsigned long)p4d); +} + +#define __p4d_free_tlb(tlb, p4d, addr) p4d_free((tlb)->mm, p4d) +#else +static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot) +{ + BUILD_BUG(); +} +#endif /* CONFIG_PGTABLE_LEVELS > 4 */ + extern pgd_t *pgd_alloc(struct mm_struct *mm); extern void pgd_free(struct mm_struct *mm, pgd_t *pgdp); diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index b91fe4781b06..b364b02e696b 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -26,10 +26,10 @@ #define ARM64_HW_PGTABLE_LEVELS(va_bits) (((va_bits) - 4) / (PAGE_SHIFT - 3)) /* - * Size mapped by an entry at level n ( 0 <= n <= 3) + * Size mapped by an entry at level n ( -1 <= n <= 3) * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits * in the final page. The maximum number of translation levels supported by - * the architecture is 4. Hence, starting at level n, we have further + * the architecture is 5. Hence, starting at level n, we have further * ((4 - n) - 1) levels of translation excluding the offset within the page. * So, the total number of bits mapped by an entry at level n is : * @@ -62,9 +62,16 @@ #define PTRS_PER_PUD (1 << (PAGE_SHIFT - 3)) #endif +#if CONFIG_PGTABLE_LEVELS > 4 +#define P4D_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(0) +#define P4D_SIZE (_AC(1, UL) << P4D_SHIFT) +#define P4D_MASK (~(P4D_SIZE-1)) +#define PTRS_PER_P4D (1 << (PAGE_SHIFT - 3)) +#endif + /* * PGDIR_SHIFT determines the size a top-level page table entry can map - * (depending on the configuration, this level can be 0, 1 or 2). + * (depending on the configuration, this level can be -1, 0, 1 or 2). */ #define PGDIR_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - CONFIG_PGTABLE_LEVELS) #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) @@ -87,6 +94,15 @@ /* * Hardware page table definitions. * + * Level -1 descriptor (PGD). 
+ */ +#define PGD_TYPE_TABLE (_AT(pgdval_t, 3) << 0) +#define PGD_TABLE_BIT (_AT(pgdval_t, 1) << 1) +#define PGD_TYPE_MASK (_AT(pgdval_t, 3) << 0) +#define PGD_TABLE_PXN (_AT(pgdval_t, 1) << 59) +#define PGD_TABLE_UXN (_AT(pgdval_t, 1) << 60) + +/* * Level 0 descriptor (P4D). */ #define P4D_TYPE_TABLE (_AT(p4dval_t, 3) << 0) diff --git a/arch/arm64/include/asm/pgtable-types.h b/arch/arm64/include/asm/pgtable-types.h index b8f158ae2527..6d6d4065b0cb 100644 --- a/arch/arm64/include/asm/pgtable-types.h +++ b/arch/arm64/include/asm/pgtable-types.h @@ -36,6 +36,12 @@ typedef struct { pudval_t pud; } pud_t; #define __pud(x) ((pud_t) { (x) } ) #endif +#if CONFIG_PGTABLE_LEVELS > 4 +typedef struct { p4dval_t p4d; } p4d_t; +#define p4d_val(x) ((x).p4d) +#define __p4d(x) ((p4d_t) { (x) } ) +#endif + typedef struct { pgdval_t pgd; } pgd_t; #define pgd_val(x) ((x).pgd) #define __pgd(x) ((pgd_t) { (x) } ) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 666db7173d0f..2f7202d03d98 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -793,7 +793,6 @@ static inline pud_t *p4d_pgtable(p4d_t p4d) #else #define p4d_page_paddr(p4d) ({ BUILD_BUG(); 0;}) -#define pgd_page_paddr(pgd) ({ BUILD_BUG(); 0;}) /* Match pud_offset folding in */ #define pud_set_fixmap(addr) NULL @@ -804,6 +803,80 @@ static inline pud_t *p4d_pgtable(p4d_t p4d) #endif /* CONFIG_PGTABLE_LEVELS > 3 */ +#if CONFIG_PGTABLE_LEVELS > 4 + +static __always_inline bool pgtable_l5_enabled(void) +{ + if (!alternative_has_feature_likely(ARM64_ALWAYS_BOOT)) + return vabits_actual == VA_BITS; + return alternative_has_feature_unlikely(ARM64_HAS_LVA); +} + +static inline bool mm_p4d_folded(struct mm_struct *mm) +{ + return !pgtable_l5_enabled(); +} +#define mm_p4d_folded mm_p4d_folded + +#define p4d_ERROR(e) \ + pr_err("%s:%d: bad p4d %016llx.\n", __FILE__, __LINE__, p4d_val(e)) + +#define pgd_none(pgd) (pgtable_l5_enabled() && !pgd_val(pgd)) +#define pgd_bad(pgd) (pgtable_l5_enabled() && !(pgd_val(pgd) & 2)) +#define pgd_present(pgd) (!pgd_none(pgd)) + +static inline void set_pgd(pgd_t *pgdp, pgd_t pgd) +{ + if (in_swapper_pgdir(pgdp)) { + set_swapper_pgd(pgdp, __pgd(pgd_val(pgd))); + return; + } + + WRITE_ONCE(*pgdp, pgd); + dsb(ishst); + isb(); +} + +static inline void pgd_clear(pgd_t *pgdp) +{ + if (pgtable_l5_enabled()) + set_pgd(pgdp, __pgd(0)); +} + +static inline phys_addr_t pgd_page_paddr(pgd_t pgd) +{ + return __pgd_to_phys(pgd); +} + +#define p4d_index(addr) (((addr) >> P4D_SHIFT) & (PTRS_PER_P4D - 1)) + +static inline p4d_t *pgd_to_folded_p4d(pgd_t *pgdp, unsigned long addr) +{ + return (p4d_t *)PTR_ALIGN_DOWN(pgdp, PAGE_SIZE) + p4d_index(addr); +} + +static inline phys_addr_t p4d_offset_phys(pgd_t *pgdp, unsigned long addr) +{ + BUG_ON(!pgtable_l5_enabled()); + + return pgd_page_paddr(READ_ONCE(*pgdp)) + p4d_index(addr) * sizeof(p4d_t); +} + +static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long addr) +{ + if (!pgtable_l5_enabled()) + return pgd_to_folded_p4d(pgdp, addr); + return (p4d_t *)__va(p4d_offset_phys(pgdp, addr)); +} + +#define pgd_page(pgd) pfn_to_page(__phys_to_pfn(__pgd_to_phys(pgd))) + +#else + +static inline bool pgtable_l5_enabled(void) { return false; } + +#endif /* CONFIG_PGTABLE_LEVELS > 4 */ + #define pgd_ERROR(e) \ pr_err("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e)) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index bcf617f956cb..d089bc78e592 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1049,7 +1049,7 
@@ static void free_empty_pud_table(p4d_t *p4dp, unsigned long addr, if (CONFIG_PGTABLE_LEVELS <= 3) return; - if (!pgtable_range_aligned(start, end, floor, ceiling, PGDIR_MASK)) + if (!pgtable_range_aligned(start, end, floor, ceiling, P4D_MASK)) return; /* @@ -1072,8 +1072,8 @@ static void free_empty_p4d_table(pgd_t *pgdp, unsigned long addr, unsigned long end, unsigned long floor, unsigned long ceiling) { - unsigned long next; p4d_t *p4dp, p4d; + unsigned long i, next, start = addr; do { next = p4d_addr_end(addr, end); @@ -1085,6 +1085,27 @@ static void free_empty_p4d_table(pgd_t *pgdp, unsigned long addr, WARN_ON(!p4d_present(p4d)); free_empty_pud_table(p4dp, addr, next, floor, ceiling); } while (addr = next, addr < end); + + if (!pgtable_l5_enabled()) + return; + + if (!pgtable_range_aligned(start, end, floor, ceiling, PGDIR_MASK)) + return; + + /* + * Check whether we can free the p4d page if the rest of the + * entries are empty. Overlap with other regions have been + * handled by the floor/ceiling check. + */ + p4dp = p4d_offset(pgdp, 0UL); + for (i = 0; i < PTRS_PER_P4D; i++) { + if (!p4d_none(READ_ONCE(p4dp[i]))) + return; + } + + pgd_clear(pgdp); + __flush_tlb_kernel_pgtable(start); + free_hotplug_pgtable_page(virt_to_page(p4dp)); } static void free_empty_tables(unsigned long addr, unsigned long end, @@ -1351,6 +1372,12 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) return 1; } +#ifndef __PAGETABLE_P4D_FOLDED +void p4d_clear_huge(p4d_t *p4dp) +{ +} +#endif + int pud_clear_huge(pud_t *pudp) { if (!pud_sect(READ_ONCE(*pudp))) diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c index 4a64089e5771..3c4f8a279d2b 100644 --- a/arch/arm64/mm/pgd.c +++ b/arch/arm64/mm/pgd.c @@ -17,11 +17,20 @@ static struct kmem_cache *pgd_cache __ro_after_init; +static bool pgdir_is_page_size(void) +{ + if (PGD_SIZE == PAGE_SIZE) + return true; + if (CONFIG_PGTABLE_LEVELS == 5) + return !pgtable_l5_enabled(); + return false; +} + pgd_t *pgd_alloc(struct mm_struct *mm) { gfp_t gfp = GFP_PGTABLE_USER; - if (PGD_SIZE == PAGE_SIZE) + if (pgdir_is_page_size()) return (pgd_t *)__get_free_page(gfp); else return kmem_cache_alloc(pgd_cache, gfp); @@ -29,7 +38,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm) void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - if (PGD_SIZE == PAGE_SIZE) + if (pgdir_is_page_size()) free_page((unsigned long)pgd); else kmem_cache_free(pgd_cache, pgd); @@ -37,7 +46,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd) void __init pgtable_cache_init(void) { - if (PGD_SIZE == PAGE_SIZE) + if (pgdir_is_page_size()) return; #ifdef CONFIG_ARM64_PA_BITS_52 From patchwork Thu Nov 24 12:39:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13054939 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4379EC43217 for ; Thu, 24 Nov 2022 12:47:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: 
Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=dkq78Gsuo2yJjZ23WT/YSabsKrZBH9ZV6ymRS5hm0Bo=; b=GlH0H4b7tJjZWj 1Uk6/77ueCkIBS1W1swhpa3WatqaxVPPRAcyDAKsFGmhMN3Prz59/YEnZJ2+opvgVEzu+rYrNXElp pZhqkdXHL5mHOL1j1yfhNBeMFxYKSGaPXmzUPe6imzRC8zyHZcARRsi60cOs6pkxiyHysqm0Vt55P m7wLdF7OZriT9s1HQ1fSJveCXScNVuy6EdhsbBodgJtMmiOzg/lZ1rraqXyyPhe7M1qOhvYbsUq2Y SvgbvVE/UA4lin0CxN6XEF4YVZ3Vuc8faHaK6Y7MLb+gyZOVBKQwtSmwscZXw7J5p6Me+w493ixGp 8icYaXC+qktKMowj9nxg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBci-008S5S-AZ; Thu, 24 Nov 2022 12:46:36 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBWr-008OcM-In for linux-arm-kernel@lists.infradead.org; Thu, 24 Nov 2022 12:40:35 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 4BB2FB827AF; Thu, 24 Nov 2022 12:40:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7D877C43145; Thu, 24 Nov 2022 12:40:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669293631; bh=YX9wlhZ33Sx9Gdy3HHW1BNJzhd376BwX5Sm8ZygArSA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Rcgpk/LAyiEAA8odPSVoU7RO1MVvMzrKu2N2B8rwYVHoQl5Lww9Nyaadk7VeID3Qv dARnwaqVRBTpi3GOV9IH9j3G7IlCBOBLnypOBrJq2KIhYZM6DeeWyd+HErlquT0IIU 4TPyTl7RCVElnu5To7hes/EEXiiOGAO0Do11OpgrIra4rxiQBNpsiChj5iTwgFj49z Cq+TEIhdz1Ozdoul266zSu83/bEycMDgP1SjWtoHVPSUj6zNqpvGE66tv4nFQIkjp3 +4PH76OCbkUQw6o0tW/ZNxlAsnqaakBh5pYcq6eh4UyZIN34lMPuP8pCSCENV6q6eU 4rGKiI05eRNIQ== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown , Anshuman Khandual , Richard Henderson , Ryan Roberts Subject: [PATCH v2 13/19] arm64: mm: add 5 level paging support to G-to-nG conversion routine Date: Thu, 24 Nov 2022 13:39:26 +0100 Message-Id: <20221124123932.2648991-14-ardb@kernel.org> X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org> References: <20221124123932.2648991-1-ardb@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221124_044033_952568_4993ADC0 X-CRM114-Status: GOOD ( 18.49 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add support for 5 level paging in the G-to-nG routine that creates its own temporary page tables to traverse the swapper page tables. Also add support for running the 5 level configuration with the top level folded at runtime, to support CPUs that do not implement the LPA2 extension. While at it, wire up the level skipping logic so it will also trigger on 4 level configurations with LPA2 enabled at build time but not active at runtime, as we'll fall back to 3 level paging in that case. 
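For reference, the shape of the added traversal logic is roughly the following C. This is purely an illustrative sketch with invented names; the real routine is the asm below, which operates on physical addresses and maps each table via a temporary mapping before rewriting it:

  #include <stdbool.h>
  #include <stddef.h>

  #define PTRS_PER_TABLE 512

  struct table { struct table *next[PTRS_PER_TABLE]; };

  static bool l5_enabled;	/* stand-in for pgtable_l5_enabled() */

  static void visit_p4d(struct table *p4d)
  {
  	/* descend further to pud/pmd/pte and set nG, as the asm does */
  }

  static void walk_pgd(struct table *pgd)
  {
  	if (!l5_enabled) {
  		/* top level folded: the pgd page doubles as the p4d table */
  		visit_p4d(pgd);
  		return;
  	}
  	for (size_t i = 0; i < PTRS_PER_TABLE; i++)
  		if (pgd->next[i])	/* valid table descriptor */
  			visit_p4d(pgd->next[i]);
  }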
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/cpufeature.c | 9 +++-- arch/arm64/mm/proc.S | 40 +++++++++++++++++++- 2 files changed, 44 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 2ae42db621fe..c20c3cbd42ef 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1726,6 +1726,9 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) pgd_t *kpti_ng_temp_pgd; u64 alloc = 0; + if (levels == 5 && !pgtable_l5_enabled()) + levels = 4; + if (__this_cpu_read(this_cpu_vector) == vectors) { const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI); @@ -1753,9 +1756,9 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) // // The physical pages are laid out as follows: // - // +--------+-/-------+-/------ +-\\--------+ - // : PTE[] : | PMD[] : | PUD[] : || PGD[] : - // +--------+-\-------+-\------ +-//--------+ + // +--------+-/-------+-/------ +-/------ +-\\\--------+ + // : PTE[] : | PMD[] : | PUD[] : | P4D[] : ||| PGD[] : + // +--------+-\-------+-\------ +-\------ +-///--------+ // ^ // The first page is mapped into this hierarchy at a PMD_SHIFT // aligned virtual address, so that we can manipulate the PTE diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 6415623b7ebf..179e213bbe2d 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -282,6 +282,8 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) end_ptep .req x15 pte .req x16 valid .req x17 + cur_p4dp .req x19 + end_p4dp .req x20 mov x5, x3 // preserve temp_pte arg mrs swapper_ttb, ttbr1_el1 @@ -289,6 +291,12 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) cbnz cpu, __idmap_kpti_secondary +#if CONFIG_PGTABLE_LEVELS > 4 + stp x29, x30, [sp, #-32]! + mov x29, sp + stp x19, x20, [sp, #16] +#endif + /* We're the boot CPU. Wait for the others to catch up */ sevl 1: wfe @@ -316,6 +324,14 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) /* Everybody is enjoying the idmap, so we can rewrite swapper. 
*/ /* PGD */ adrp cur_pgdp, swapper_pg_dir +#ifdef CONFIG_ARM64_LPA2 +alternative_if_not ARM64_HAS_LVA + /* skip one level of translation if 52-bit VAs are not enabled */ + mov pgd, cur_pgdp + add end_pgdp, cur_pgdp, #8 // stop condition at pgd level + b .Lderef_pgd +alternative_else_nop_endif +#endif kpti_map_pgtbl pgd, 0 kpti_mk_tbl_ng pgd, PTRS_PER_PGD @@ -329,16 +345,33 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) /* Set the flag to zero to indicate that we're all done */ str wzr, [flag_ptr] +#if CONFIG_PGTABLE_LEVELS > 4 + ldp x19, x20, [sp, #16] + ldp x29, x30, [sp], #32 +#endif ret .Lderef_pgd: + /* P4D */ + .if CONFIG_PGTABLE_LEVELS > 4 + p4d .req x30 + pte_to_phys cur_p4dp, pgd + kpti_map_pgtbl p4d, 4 + kpti_mk_tbl_ng p4d, PTRS_PER_P4D + b .Lnext_pgd + .else /* CONFIG_PGTABLE_LEVELS <= 4 */ + p4d .req pgd + .set .Lnext_p4d, .Lnext_pgd + .endif + +.Lderef_p4d: /* PUD */ .if CONFIG_PGTABLE_LEVELS > 3 pud .req x10 - pte_to_phys cur_pudp, pgd + pte_to_phys cur_pudp, p4d kpti_map_pgtbl pud, 1 kpti_mk_tbl_ng pud, PTRS_PER_PUD - b .Lnext_pgd + b .Lnext_p4d .else /* CONFIG_PGTABLE_LEVELS <= 3 */ pud .req pgd .set .Lnext_pud, .Lnext_pgd @@ -382,6 +415,9 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) .unreq end_ptep .unreq pte .unreq valid + .unreq cur_p4dp + .unreq end_p4dp + .unreq p4d /* Secondary CPUs end up here */ __idmap_kpti_secondary: From patchwork Thu Nov 24 12:39:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13054940 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6C57AC43217 for ; Thu, 24 Nov 2022 12:48:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ruDAd4mveVEEkQTKPKXmPZA90+7mYI+1VIgjq/izJ4I=; b=f41/5Vk5C94lDT +fe+NxCblY1XzZsUsEEb9FgyYtuSP14PRv2qhhranp9Wgoy3ML2jDDLEuW/Xuuj5iHsclIL2BkIc8 yphZLN2vMzJlrvl1LhfLXl0Vh1fAlwibkL1DvFGR8L783eYIulVmcDQ2lVBjhEAhYA5ZbzMT7Ae0+ zskf/6Sj9zFz6U8zfiwzrMUM1cynKbzfMzkiUoQfb9aWneS390mBJ4noA99eQ7nOJ2+E+PHjCkyuv LElucpNVI+MKyz9/3FbakEMI19CwY+5imZ0CXsnkgzB5gx/uNhDAU7wPmdJPE9WJ1KYHhhf/1AxmR Yu7BWUVsbgWsbvnTAfiQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBdG-008SMa-RG; Thu, 24 Nov 2022 12:47:11 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oyBWv-008Odr-36 for linux-arm-kernel@lists.infradead.org; Thu, 24 Nov 2022 12:40:40 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 67C13B827CB; Thu, 24 Nov 2022 12:40:35 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 
93D53C433C1; Thu, 24 Nov 2022 12:40:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669293634; bh=pfjxr70zz2WNbPyfGHNV2mhdwABgsZ76CQokZ9dRxDw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KizhI8HRWGvBVeK2O/LQSjx4lkigt/lZwxSlKsEgSyJrjeCvxnlbjhCsPHzwMk5dS G9XYhGV46h5qljn4zM1CMhe6FmXcDn0Vvy/PYe63f3czeRFZDyLVZ3ZpgdW1bHIOLy xiNM+qBhdGDxkuhL2wYqpmy4uL5+I6/TVRxLP8XovpmZRnJ9XODmRfQY9NdilAE2Wr iCFFbnwcgKw6fbZCPqVSfTIv+c3AvF4awIoQXQZrBkXTcd0jKtlTX2w4rVnQ/E59Iz 2IxH7v6jKgrV0dVdVti1FLtlUmBoi/FYnzzPkknDtxWjs6hBFGRnOXgSgaigr+Q+OC H7bJVO0MElqzw== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown , Anshuman Khandual , Richard Henderson , Ryan Roberts Subject: [PATCH v2 14/19] arm64: Enable LPA2 at boot if supported by the system Date: Thu, 24 Nov 2022 13:39:27 +0100 Message-Id: <20221124123932.2648991-15-ardb@kernel.org> X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org> References: <20221124123932.2648991-1-ardb@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221124_044037_545269_2882ECE4 X-CRM114-Status: GOOD ( 38.33 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Update the early kernel mapping code to take 52-bit virtual addressing into account based on the LPA2 feature. This is a bit more involved than LVA (which is supported with 64k pages only), given that some page table descriptor bits change meaning in this case. To keep the handling in asm to a minimum, the initial ID map is still created with 48-bit virtual addressing, which implies that the kernel image must be loaded into 48-bit addressable physical memory. This is currently required by the boot protocol, even though we happen to support placement outside of that for LVA/64k based configurations. Enabling LPA2 involves more than setting TCR.T1SZ to a lower value: there is also a DS bit in TCR that needs to be set, which changes the meaning of bits [9:8] in all page table descriptors. Since we cannot enable DS and update every live page table descriptor at the same time, let's pivot through another temporary mapping. This avoids the need to reintroduce manipulations of the page tables with the MMU and caches disabled. To permit the LPA2 feature to be overridden on the kernel command line, which may be necessary to work around silicon errata, or to deal with mismatched features on heterogeneous SoC designs, test for CPU feature overrides first, and only then enable LPA2.
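The "test for overrides first" ordering follows the same pattern as the VARange override added earlier in the series: the override mask and value are folded into the ID register value before that value is consumed. A simplified C sketch of the idea (types and names condensed for illustration):

  #include <stdint.h>

  struct ftr_override {
  	uint64_t val;	/* field values forced from the command line */
  	uint64_t mask;	/* which bits of the register are overridden */
  };

  static uint64_t apply_override(uint64_t hw_val, const struct ftr_override *o)
  {
  	hw_val &= ~o->mask;	/* drop the hardware's view of the field */
  	hw_val |= o->val;	/* substitute the overridden value */
  	return hw_val;		/* LPA2/LVA decisions are based on this */
  }

Only the post-override value is consulted when deciding whether to program TCR.DS, so an override such as arm64.nolva keeps both LVA and LPA2 disabled.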
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 7 +- arch/arm64/include/asm/kernel-pgtable.h | 25 +++-- arch/arm64/include/asm/memory.h | 4 + arch/arm64/kernel/head.S | 9 +- arch/arm64/kernel/image-vars.h | 2 + arch/arm64/kernel/pi/map_kernel.c | 103 +++++++++++++++++++- arch/arm64/mm/init.c | 2 +- arch/arm64/mm/mmu.c | 8 +- arch/arm64/mm/proc.S | 4 + 9 files changed, 151 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 786bf62826a8..30eee6473cf0 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -609,11 +609,16 @@ alternative_endif * but we have to add an offset so that the TTBR1 address corresponds with the * pgdir entry that covers the lowest 48-bit addressable VA. * + * Note that this trick only works for LVA/64k pages - LPA2/4k pages uses an + * additional paging level, and on LPA2/16k pages, we would end up with a TTBR + * address that is not 64 byte aligned, so there we reduce the number of paging + * levels for the non-LPA2 case. + * * orr is used as it can cover the immediate value (and is idempotent). * ttbr: Value of ttbr to set, modified. */ .macro offset_ttbr1, ttbr, tmp -#ifdef CONFIG_ARM64_VA_BITS_52 +#if defined(CONFIG_ARM64_VA_BITS_52) && !defined(CONFIG_ARM64_LPA2) mrs \tmp, tcr_el1 and \tmp, \tmp, #TCR_T1SZ_MASK cmp \tmp, #TCR_T1SZ(VA_BITS_MIN) diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index faa11e8b4a0e..2359b2af0c4c 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -20,12 +20,16 @@ */ #ifdef CONFIG_ARM64_4K_PAGES #define INIT_IDMAP_USES_PMD_MAPS 1 -#define INIT_IDMAP_TABLE_LEVELS (CONFIG_PGTABLE_LEVELS - 1) #else #define INIT_IDMAP_USES_PMD_MAPS 0 -#define INIT_IDMAP_TABLE_LEVELS (CONFIG_PGTABLE_LEVELS) #endif +/* how many levels of translation are required to cover 'x' bits of VA space */ +#define VA_LEVELS(x) (((x) - 4) / (PAGE_SHIFT - 3)) +#define INIT_IDMAP_TABLE_LEVELS (VA_LEVELS(VA_BITS_MIN) - INIT_IDMAP_USES_PMD_MAPS) + +#define INIT_IDMAP_ROOT_SHIFT (VA_LEVELS(VA_BITS_MIN) * (PAGE_SHIFT - 3) + 3) + /* * If KASLR is enabled, then an offset K is added to the kernel address * space. 
@@ -52,7 +56,14 @@
 #define EARLY_ENTRIES(vstart, vend, shift, add) \
	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + add)

-#define EARLY_PGDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT, add))
+#if CONFIG_PGTABLE_LEVELS > 4
+/* the kernel is covered entirely by the pgd_t at the top of the VA space */
+#define EARLY_PGDS	1
+#else
+#define EARLY_PGDS	0
+#endif
+
+#define EARLY_P4DS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, INIT_IDMAP_ROOT_SHIFT, add))

 #if INIT_IDMAP_TABLE_LEVELS > 3
 #define EARLY_PUDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT, add))
@@ -66,11 +77,13 @@
 #define EARLY_PMDS(vstart, vend, add) (0)
 #endif

-#define EARLY_PAGES(vstart, vend, add) ( 1			/* PGDIR page */				\
-			+ EARLY_PGDS((vstart), (vend), add)	/* each PGDIR needs a next level page table */	\
+#define EARLY_PAGES(vstart, vend, add) ( 1			/* PGDIR/P4D page */				\
+			+ EARLY_P4DS((vstart), (vend), add)	/* each P4D needs a next level page table */	\
			+ EARLY_PUDS((vstart), (vend), add)	/* each PUD needs a next level page table */	\
			+ EARLY_PMDS((vstart), (vend), add))	/* each PMD needs a next level page table */
-#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES))
+
+#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + \
+				    EARLY_SEGMENT_EXTRA_PAGES + EARLY_PGDS))

 /* the initial ID map may need two extra pages if it needs to be extended */
 #if VA_BITS_MIN < 48
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b3826ff2e52b..4f617e271008 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -54,7 +54,11 @@
 #define FIXADDR_TOP		(VMEMMAP_START - SZ_32M)

 #if VA_BITS > 48
+#ifdef CONFIG_ARM64_16K_PAGES
+#define VA_BITS_MIN		(47)
+#else
 #define VA_BITS_MIN		(48)
+#endif
 #else
 #define VA_BITS_MIN		(VA_BITS)
 #endif
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 94de42dfe97d..6be121949c06 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -198,7 +198,7 @@ SYM_CODE_END(preserve_boot_args)
		mov \tbl, \sv
	.endif
.L_\@:
-	compute_indices \vstart, \vend, #PGDIR_SHIFT, \istart, \iend, \count
+	compute_indices \vstart, \vend, #INIT_IDMAP_ROOT_SHIFT, \istart, \iend, \count
	mov \sv, \rtbl
	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
	mov \tbl, \sv
@@ -610,9 +610,16 @@ SYM_FUNC_START(__cpu_secondary_check52bitva)
alternative_if_not ARM64_HAS_LVA
	ret
alternative_else_nop_endif
+#ifndef CONFIG_ARM64_LPA2
	mrs_s	x0, SYS_ID_AA64MMFR2_EL1
	and	x0, x0, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT)
	cbnz	x0, 2f
+#else
	mrs	x0, id_aa64mmfr0_el1
	sbfx	x0, x0, #ID_AA64MMFR0_EL1_TGRAN_SHIFT, 4
	cmp	x0, #ID_AA64MMFR0_EL1_TGRAN_LPA2
	b.ge	2f
+#endif

	update_early_cpu_boot_status \
		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 82bafa1f869c..f48b6f09d278 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -56,6 +56,8 @@ PROVIDE(__pi_arm64_sw_feature_override	= arm64_sw_feature_override);
 PROVIDE(__pi_arm64_use_ng_mappings	= arm64_use_ng_mappings);
 PROVIDE(__pi__ctype			= _ctype);

+PROVIDE(__pi_init_idmap_pg_dir		= init_idmap_pg_dir);
+PROVIDE(__pi_init_idmap_pg_end		= init_idmap_pg_end);
 PROVIDE(__pi_init_pg_dir		= init_pg_dir);
 PROVIDE(__pi_init_pg_end		= init_pg_end);
 PROVIDE(__pi_swapper_pg_dir		= swapper_pg_dir);
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index a9472ab8d901..75d643da56c8 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -133,6 +133,20 @@ static bool __init arm64_early_this_cpu_has_lva(void)
						    ID_AA64MMFR2_EL1_VARange_SHIFT);
 }

+static bool __init arm64_early_this_cpu_has_lpa2(void)
+{
+	u64 mmfr0;
+	int feat;
+
+	mmfr0 = read_sysreg(id_aa64mmfr0_el1);
+	mmfr0 &= ~id_aa64mmfr0_override.mask;
+	mmfr0 |= id_aa64mmfr0_override.val;
+	feat = cpuid_feature_extract_signed_field(mmfr0,
+						  ID_AA64MMFR0_EL1_TGRAN_SHIFT);
+
+	return feat >= ID_AA64MMFR0_EL1_TGRAN_LPA2;
+}
+
 static bool __init arm64_early_this_cpu_has_pac(void)
 {
	u64 isar1, isar2;
@@ -254,11 +268,85 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level)
	}

	/* Copy the root page table to its final location */
-	memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PGD_SIZE);
+	memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PAGE_SIZE);
	dsb(ishst);
	idmap_cpu_replace_ttbr1(swapper_pg_dir);
 }

+static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
+{
+	u64 sctlr = read_sysreg(sctlr_el1);
+	u64 tcr = read_sysreg(tcr_el1) | TCR_DS;
+
+	/* Update TCR.T0SZ in case we entered with a 47-bit ID map */
+	tcr &= ~TCR_T0SZ_MASK;
+	tcr |= TCR_T0SZ(48);
+
+	asm("	msr	sctlr_el1, %0	;"
+	    "	isb			;"
+	    "	msr	ttbr0_el1, %1	;"
+	    "	msr	tcr_el1, %2	;"
+	    "	isb			;"
+	    "	tlbi	vmalle1		;"
+	    "	dsb	nsh		;"
+	    "	isb			;"
+	    "	msr	sctlr_el1, %3	;"
+	    "	isb			;"
+	    :: "r"(sctlr & ~SCTLR_ELx_M), "r"(ttbr), "r"(tcr), "r"(sctlr));
+}
+
+static void remap_idmap_for_lpa2(void)
+{
+	extern pgd_t init_idmap_pg_dir[], init_idmap_pg_end[];
+	pgd_t *pgdp = (void *)init_pg_dir + PAGE_SIZE;
+	pgprot_t text_prot = PAGE_KERNEL_ROX;
+	pgprot_t data_prot = PAGE_KERNEL;
+
+	/* clear the bits that change meaning once LPA2 is turned on */
+	pgprot_val(text_prot) &= ~PTE_SHARED;
+	pgprot_val(data_prot) &= ~PTE_SHARED;
+
+	/*
+	 * We have to clear bits [9:8] in all block or page descriptors in the
+	 * initial ID map, as otherwise they will be (mis)interpreted as
+	 * physical address bits once we flick the LPA2 switch (TCR.DS). Since
+	 * we cannot manipulate live descriptors in that way without creating
+	 * potential TLB conflicts, let's create another temporary ID map in a
+	 * LPA2 compatible fashion, and update the initial ID map while running
+	 * from that.
+	 */
+	map_segment(init_pg_dir, &pgdp, 0, _stext, __inittext_end, text_prot,
+		    false, 0);
+	map_segment(init_pg_dir, &pgdp, 0, __initdata_begin, _end, data_prot,
+		    false, 0);
+	dsb(ishst);
+	set_ttbr0_for_lpa2((u64)init_pg_dir);
+
+	/*
+	 * Recreate the initial ID map with the same granularity as before.
+	 * Don't bother with the FDT, we no longer need it after this.
+	 */
+	memset(init_idmap_pg_dir, 0,
+	       (u64)init_idmap_pg_end - (u64)init_idmap_pg_dir);
+
+	pgdp = (void *)init_idmap_pg_dir + PAGE_SIZE;
+	map_segment(init_idmap_pg_dir, &pgdp, 0,
+		    PTR_ALIGN_DOWN(&_stext[0], INIT_IDMAP_BLOCK_SIZE),
+		    PTR_ALIGN_DOWN(&__bss_start[0], INIT_IDMAP_BLOCK_SIZE),
+		    text_prot, false, 0);
+	map_segment(init_idmap_pg_dir, &pgdp, 0,
+		    PTR_ALIGN_DOWN(&__bss_start[0], INIT_IDMAP_BLOCK_SIZE),
+		    PTR_ALIGN(&_end[0], INIT_IDMAP_BLOCK_SIZE),
+		    data_prot, false, 0);
+	dsb(ishst);
+
+	/* switch back to the updated initial ID map */
+	set_ttbr0_for_lpa2((u64)init_idmap_pg_dir);
+
+	/* wipe the temporary ID map from memory */
+	memset(init_pg_dir, 0, (u64)init_pg_end - (u64)init_pg_dir);
+}
+
 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 {
	static char const chosen_str[] __initconst = "/chosen";
@@ -266,6 +354,7 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
	u64 va_base, pa_base = (u64)&_text;
	u64 kaslr_offset = pa_base % MIN_KIMG_ALIGN;
	int root_level = 4 - CONFIG_PGTABLE_LEVELS;
+	bool va52 = (VA_BITS == 52);

	/* Clear BSS and the initial page tables */
	memset(__bss_start, 0, (u64)init_pg_end - (u64)__bss_start);
@@ -295,7 +384,17 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
		arm64_use_ng_mappings = true;
	}

-	if (VA_BITS == 52 && arm64_early_this_cpu_has_lva())
+	if (IS_ENABLED(CONFIG_ARM64_LPA2)) {
+		if (arm64_early_this_cpu_has_lpa2()) {
+			remap_idmap_for_lpa2();
+		} else {
+			va52 = false;
+			root_level++;
+		}
+	} else if (IS_ENABLED(CONFIG_ARM64_64K_PAGES)) {
+		va52 &= arm64_early_this_cpu_has_lva();
+	}
+	if (va52)
		sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(VA_BITS));

	va_base = KIMAGE_VADDR + kaslr_offset;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 4b4651ee47f2..498d327341b4 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -315,7 +315,7 @@ void __init arm64_memblock_init(void)
	 * physical address of PAGE_OFFSET, we have to *subtract* from it.
	 */
	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
-		memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);
+		memstart_addr -= _PAGE_OFFSET(vabits_actual) - _PAGE_OFFSET(52);

	/*
	 * Apply the memory limit if it was set. Since the kernel may be loaded
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d089bc78e592..ba5423ff7039 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -541,8 +541,12 @@ static void __init map_mem(pgd_t *pgdp)
	 * entries at any level are being shared between the linear region and
	 * the vmalloc region. Check whether this is true for the PGD level, in
	 * which case it is guaranteed to be true for all other levels as well.
+	 * (Unless we are running with support for LPA2, in which case the
+	 * entire reduced VA space is covered by a single pgd_t which will have
+	 * been populated without the PXNTable attribute by the time we get
+	 * here.)
	 */
-	BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
+	BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end) &&
+		     pgd_index(_PAGE_OFFSET(VA_BITS_MIN)) != PTRS_PER_PGD - 1);

	if (can_set_direct_map())
		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
@@ -726,7 +730,7 @@ static void __init create_idmap(void)

 void __init paging_init(void)
 {
-	idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(VA_BITS_MIN - 1, 0));
+	idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(vabits_actual - 1, 0));

	map_mem(swapper_pg_dir);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 179e213bbe2d..d95df732b672 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -489,7 +489,11 @@ SYM_FUNC_START(__cpu_setup)
 #if VA_BITS > VA_BITS_MIN
	mov		x9, #64 - VA_BITS
alternative_if ARM64_HAS_LVA
+	tcr_set_t0sz	tcr, x9
	tcr_set_t1sz	tcr, x9
+#ifdef CONFIG_ARM64_LPA2
+	orr		tcr, tcr, #TCR_DS
+#endif
alternative_else_nop_endif
 #endif

From patchwork Thu Nov 24 12:39:28 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson, Ryan Roberts
Subject: [PATCH v2 15/19] arm64: mm: Add 5 level paging support to fixmap and swapper handling
Date: Thu, 24 Nov 2022 13:39:28 +0100
Message-Id: <20221124123932.2648991-16-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>

Add support for using 5 levels of paging in the fixmap, as well as in
the kernel page table handling code which uses fixmaps internally.
This also handles the case where a 5 level build runs on hardware that
only supports 4 levels of paging.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/fixmap.h  |  1 +
 arch/arm64/include/asm/pgtable.h | 35 +++++++++++
 arch/arm64/mm/mmu.c              | 64 +++++++++++++++++---
 3 files changed, 91 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index d09654af5b12..675e08e98e8b 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -91,6 +91,7 @@ enum fixed_addresses {
	FIX_PTE,
	FIX_PMD,
	FIX_PUD,
+	FIX_P4D,
	FIX_PGD,

	__end_of_fixed_addresses
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2f7202d03d98..057f079bb2c7 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -869,12 +869,47 @@ static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long addr)
	return (p4d_t *)__va(p4d_offset_phys(pgdp, addr));
 }

+static inline p4d_t *p4d_set_fixmap(unsigned long addr)
+{
+	if (!pgtable_l5_enabled())
+		return NULL;
+	return (p4d_t *)set_fixmap_offset(FIX_P4D, addr);
+}
+
+static inline p4d_t *p4d_set_fixmap_offset(pgd_t *pgdp, unsigned long addr)
+{
+	if (!pgtable_l5_enabled())
+		return pgd_to_folded_p4d(pgdp, addr);
+	return p4d_set_fixmap(p4d_offset_phys(pgdp, addr));
+}
+
+static inline void p4d_clear_fixmap(void)
+{
+	if (pgtable_l5_enabled())
+		clear_fixmap(FIX_P4D);
+}
+
+/* use ONLY for statically allocated translation tables */
+static inline p4d_t *p4d_offset_kimg(pgd_t *pgdp, u64 addr)
+{
+	if (!pgtable_l5_enabled())
+		return pgd_to_folded_p4d(pgdp, addr);
+	return (p4d_t *)__phys_to_kimg(p4d_offset_phys(pgdp, addr));
+}
+
 #define pgd_page(pgd)		pfn_to_page(__phys_to_pfn(__pgd_to_phys(pgd)))

 #else

 static inline bool pgtable_l5_enabled(void) { return false; }

+/* Match p4d_offset folding in */
+#define p4d_set_fixmap(addr)			NULL
+#define p4d_set_fixmap_offset(p4dp, addr)	((p4d_t *)p4dp)
+#define p4d_clear_fixmap()
+
+#define p4d_offset_kimg(dir,addr)	((p4d_t *)dir)
+
 #endif  /* CONFIG_PGTABLE_LEVELS > 4 */

 #define pgd_ERROR(e)	\
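As a usage sketch (a hypothetical walker fragment of mine, not code from the
patch), the new p4d fixmap helpers are meant to compose exactly like their
pud/pmd counterparts, folding away on builds with fewer than 5 levels:

	/* assumes pgdp points into a table not covered by the linear map */
	static void walk_p4d_example(pgd_t *pgdp, unsigned long addr)
	{
		p4d_t *p4dp = p4d_set_fixmap_offset(pgdp, addr);

		/* ... inspect or populate READ_ONCE(*p4dp) here ... */

		p4d_clear_fixmap();	/* no-op unless pgtable_l5_enabled() */
	}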
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ba5423ff7039..000ae84da0ef 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -313,15 +313,14 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
	} while (addr = next, addr != end);
 }

-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
			   phys_addr_t phys, pgprot_t prot,
			   phys_addr_t (*pgtable_alloc)(int),
			   int flags)
 {
	unsigned long next;
-	pud_t *pudp;
-	p4d_t *p4dp = p4d_offset(pgdp, addr);
	p4d_t p4d = READ_ONCE(*p4dp);
+	pud_t *pudp;

	if (p4d_none(p4d)) {
		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
@@ -369,6 +368,46 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
	pud_clear_fixmap();
 }

+static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
+			   phys_addr_t phys, pgprot_t prot,
+			   phys_addr_t (*pgtable_alloc)(int),
+			   int flags)
+{
+	unsigned long next;
+	pgd_t pgd = READ_ONCE(*pgdp);
+	p4d_t *p4dp;
+
+	if (pgd_none(pgd)) {
+		pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN;
+		phys_addr_t p4d_phys;
+
+		if (flags & NO_EXEC_MAPPINGS)
+			pgdval |= PGD_TABLE_PXN;
+		BUG_ON(!pgtable_alloc);
+		p4d_phys = pgtable_alloc(P4D_SHIFT);
+		__pgd_populate(pgdp, p4d_phys, pgdval);
+		pgd = READ_ONCE(*pgdp);
+	}
+	BUG_ON(pgd_bad(pgd));
+
+	p4dp = p4d_set_fixmap_offset(pgdp, addr);
+	do {
+		p4d_t old_p4d = READ_ONCE(*p4dp);
+
+		next = p4d_addr_end(addr, end);
+
+		alloc_init_pud(p4dp, addr, next, phys, prot,
+			       pgtable_alloc, flags);
+
+		BUG_ON(p4d_val(old_p4d) != 0 &&
+		       p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp)));
+
+		phys += next - addr;
+	} while (p4dp++, addr = next, addr != end);
+
+	p4d_clear_fixmap();
+}
+
 static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
					unsigned long virt, phys_addr_t size,
					pgprot_t prot,
@@ -391,7 +430,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
	do {
		next = pgd_addr_end(addr, end);
-		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
+		alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
			       flags);
		phys += next - addr;
	} while (pgdp++, addr = next, addr != end);
@@ -1196,10 +1235,19 @@ void vmemmap_free(unsigned long start, unsigned long end,
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */

-static inline pud_t *fixmap_pud(unsigned long addr)
+static inline p4d_t *fixmap_p4d(unsigned long addr)
 {
	pgd_t *pgdp = pgd_offset_k(addr);
-	p4d_t *p4dp = p4d_offset(pgdp, addr);
+	pgd_t pgd = READ_ONCE(*pgdp);
+
+	BUG_ON(pgd_none(pgd) || pgd_bad(pgd));
+
+	return p4d_offset_kimg(pgdp, addr);
+}
+
+static inline pud_t *fixmap_pud(unsigned long addr)
+{
+	p4d_t *p4dp = fixmap_p4d(addr);
	p4d_t p4d = READ_ONCE(*p4dp);

	BUG_ON(p4d_none(p4d) || p4d_bad(p4d));
@@ -1230,14 +1278,12 @@ static inline pte_t *fixmap_pte(unsigned long addr)
 */
 void __init early_fixmap_init(void)
 {
-	pgd_t *pgdp;
	p4d_t *p4dp, p4d;
	pud_t *pudp;
	pmd_t *pmdp;
	unsigned long addr = FIXADDR_START;

-	pgdp = pgd_offset_k(addr);
-	p4dp = p4d_offset(pgdp, addr);
+	p4dp = fixmap_p4d(addr);
	p4d = READ_ONCE(*p4dp);
	if (p4d_none(p4d))
		__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);

From patchwork Thu Nov 24 12:39:29 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson, Ryan Roberts
Subject: [PATCH v2 16/19] arm64: kasan: Reduce minimum shadow alignment and enable 5 level paging
Date: Thu, 24 Nov 2022 13:39:29 +0100
Message-Id: <20221124123932.2648991-17-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>

Allow the KASAN init code to deal with 5 levels of paging, and relax
the requirement that the shadow
region is aligned to the top level pgd_t size. This is necessary for
LPA2 based 52-bit virtual addressing, where the KASAN shadow will never
be aligned to the pgd_t size. Allowing this also enables the 16k/48-bit
case for KASAN, which is a nice bonus.

This involves some hackery to manipulate the root and next level page
tables without having to distinguish all the various configurations,
including 16k/48-bits (which has a two entry pgd_t level), and LPA2
configurations running with one translation level less on non-LPA2
hardware.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig         |   2 +-
 arch/arm64/mm/kasan_init.c | 124 ++++++++++++++++++--
 2 files changed, 112 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d299c6c0a56..901f4d73476d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -153,7 +153,7 @@ config ARM64
	select HAVE_ARCH_HUGE_VMAP
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_JUMP_LABEL_RELATIVE
-	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+	select HAVE_ARCH_KASAN
	select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
	select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 7e32f21fb8e1..c422952e439b 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -23,7 +23,7 @@

 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)

-static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+static pgd_t tmp_pg_dir[PTRS_PER_PTE] __initdata __aligned(PAGE_SIZE);

 /*
 * The p*d_populate functions call virt_to_phys implicitly so they can't be used
@@ -99,6 +99,19 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node,
	return early ? pud_offset_kimg(p4dp, addr) : pud_offset(p4dp, addr);
 }

+static p4d_t *__init kasan_p4d_offset(pgd_t *pgdp, unsigned long addr, int node,
+				      bool early)
+{
+	if (pgd_none(READ_ONCE(*pgdp))) {
+		phys_addr_t p4d_phys = early ?
+				__pa_symbol(kasan_early_shadow_p4d)
+				: kasan_alloc_zeroed_page(node);
+		__pgd_populate(pgdp, p4d_phys, PGD_TYPE_TABLE);
+	}
+
+	return early ? kasan_p4d_offset_result(pgdp, addr);
+	return early ? p4d_offset_kimg(pgdp, addr) : p4d_offset(pgdp, addr);
+}
+
 static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
				      unsigned long end, int node, bool early)
 {
@@ -144,7 +157,7 @@ static void __init kasan_p4d_populate(pgd_t *pgdp, unsigned long addr,
				      unsigned long end, int node, bool early)
 {
	unsigned long next;
-	p4d_t *p4dp = p4d_offset(pgdp, addr);
+	p4d_t *p4dp = kasan_p4d_offset(pgdp, addr, node, early);

	do {
		next = p4d_addr_end(addr, end);
@@ -165,14 +178,20 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
	} while (pgdp++, addr = next, addr != end);
 }

+#if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS > 4
+#define SHADOW_ALIGN	P4D_SIZE
+#else
+#define SHADOW_ALIGN	PUD_SIZE
+#endif
+
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
-	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
-	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
+	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
			   true);
 }
@@ -184,20 +203,86 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end,
	kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
 }

-static void __init clear_pgds(unsigned long start,
-			      unsigned long end)
+/*
+ * Return whether 'addr' is aligned to the size covered by a top level
+ * descriptor.
+ */
+static bool __init top_level_aligned(u64 addr)
+{
+	int shift = (VA_LEVELS(vabits_actual) - 1) * (PAGE_SHIFT - 3);
+
+	return (addr % (PAGE_SIZE << shift)) == 0;
+}
+
+/*
+ * Return the descriptor index of 'addr' in the top level table
+ */
+static int __init top_level_idx(u64 addr)
 {
	/*
-	 * Remove references to kasan page tables from
-	 * swapper_pg_dir. pgd_clear() can't be used
-	 * here because it's nop on 2,3-level pagetable setups
+	 * On 64k pages, the TTBR1 range root tables are extended for 52-bit
+	 * virtual addressing, and TTBR1 will simply point to the pgd_t entry
+	 * that covers the start of the 48-bit addressable VA space if LVA is
+	 * not implemented. This means we need to index the table as usual,
+	 * instead of masking off bits based on vabits_actual.
	 */
-	for (; start < end; start += PGDIR_SIZE)
-		set_pgd(pgd_offset_k(start), __pgd(0));
+	u64 vabits = IS_ENABLED(CONFIG_ARM64_64K_PAGES) ? VA_BITS
+							: vabits_actual;
+	int shift = (VA_LEVELS(vabits) - 1) * (PAGE_SHIFT - 3);
+
+	return (addr & ~_PAGE_OFFSET(vabits)) >> (shift + PAGE_SHIFT);
+}
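To make the index arithmetic above concrete, a worked example (my numbers,
assuming a 4k-page LPA2 kernel booted on non-LPA2 hardware, so
vabits_actual == 48):

	/*
	 * VA_LEVELS(48) == (48 - 4) / 9 == 4 levels.
	 *
	 * top_level_aligned(): shift == (4 - 1) * 9 == 27, so the test is
	 * for alignment to PAGE_SIZE << 27 == 512 GiB, the span of one pgd_t.
	 *
	 * top_level_idx(): not a 64k build, so vabits == vabits_actual and
	 * the index is VA bits [47:39] of the address.
	 *
	 * next_level_idx(): shift == (4 - 2) * 9 == 18, so the index is VA
	 * bits [38:30] modulo PTRS_PER_PTE (512), i.e. one entry per 1 GiB.
	 */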
+
+/*
+ * Clone a next level table from swapper_pg_dir into tmp_pg_dir
+ */
+static void __init clone_next_level(u64 addr, pgd_t *tmp_pg_dir, pud_t *pud)
+{
+	int idx = top_level_idx(addr);
+	pgd_t pgd = READ_ONCE(swapper_pg_dir[idx]);
+	pud_t *pudp = (pud_t *)__phys_to_kimg(__pgd_to_phys(pgd));
+
+	memcpy(pud, pudp, PAGE_SIZE);
+	tmp_pg_dir[idx] = __pgd(__phys_to_pgd_val(__pa_symbol(pud)) |
+				PUD_TYPE_TABLE);
+}
+
+/*
+ * Return the descriptor index of 'addr' in the next level table
+ */
+static int __init next_level_idx(u64 addr)
+{
+	int shift = (VA_LEVELS(vabits_actual) - 2) * (PAGE_SHIFT - 3);
+
+	return (addr >> (shift + PAGE_SHIFT)) % PTRS_PER_PTE;
+}
+
+/*
+ * Dereference the table descriptor at 'pgd_idx' and clear the entries from
+ * 'start' to 'end' from the table.
+ */
+static void __init clear_next_level(int pgd_idx, int start, int end)
+{
+	pgd_t pgd = READ_ONCE(swapper_pg_dir[pgd_idx]);
+	pud_t *pudp = (pud_t *)__phys_to_kimg(__pgd_to_phys(pgd));
+
+	memset(&pudp[start], 0, (end - start) * sizeof(pud_t));
+}
+
+static void __init clear_shadow(u64 start, u64 end)
+{
+	int l = top_level_idx(start), m = top_level_idx(end);
+
+	if (!top_level_aligned(start))
+		clear_next_level(l++, next_level_idx(start), PTRS_PER_PTE);
+	if (!top_level_aligned(end))
+		clear_next_level(m, 0, next_level_idx(end));
+	memset(&swapper_pg_dir[l], 0, (m - l) * sizeof(pgd_t));
 }

 static void __init kasan_init_shadow(void)
 {
+	static pud_t pud[2][PTRS_PER_PUD] __initdata __aligned(PAGE_SIZE);
	u64 kimg_shadow_start, kimg_shadow_end;
	u64 mod_shadow_start, mod_shadow_end;
	u64 vmalloc_shadow_end;
@@ -220,10 +305,23 @@ static void __init kasan_init_shadow(void)
	 * setup will be finished.
	 */
	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
+
+	/*
+	 * If the start or end address of the shadow region is not aligned to
+	 * the top level size, we have to allocate a temporary next-level table
+	 * in each case, clone the next level of descriptors, and install the
+	 * table into tmp_pg_dir. Note that with 5 levels of paging, the next
+	 * level will in fact be p4d_t, but that makes no difference in this
+	 * case.
+	 */
+	if (!top_level_aligned(KASAN_SHADOW_START))
+		clone_next_level(KASAN_SHADOW_START, tmp_pg_dir, pud[0]);
+	if (!top_level_aligned(KASAN_SHADOW_END))
+		clone_next_level(KASAN_SHADOW_END, tmp_pg_dir, pud[1]);
	dsb(ishst);
	cpu_replace_ttbr1(lm_alias(tmp_pg_dir));

-	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+	clear_shadow(KASAN_SHADOW_START, KASAN_SHADOW_END);

	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
			   early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START))));

From patchwork Thu Nov 24 12:39:30 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson, Ryan Roberts
Subject: [PATCH v2 17/19] arm64: mm: Add support for folding PUDs at runtime
Date: Thu, 24 Nov 2022 13:39:30 +0100
Message-Id: <20221124123932.2648991-18-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>

In order to support LPA2 on 16k pages in a way that permits non-LPA2
systems to run the same kernel image, we have to be able to fall back
to at most 48 bits of virtual addressing.

Falling back to 48 bits would result in a level 0 with only 2 entries,
which is suboptimal in terms of TLB utilization. It is also problematic
because we cannot dimension the level 0 table for 52-bit virtual
addressing and simply point TTBR1 to the top 2 entries when using only
48 bits of address space (analogous to how we handle this on 64k
pages), given that those two entries will not appear at a 64 byte
aligned address, which is a requirement for TTBRx address values.

So instead, let's fall back to 47 bits, solving both problems. This
means we need to be able to fold PUDs dynamically, similar to how we
fold P4Ds for 48 bit virtual addressing on LPA2 with 4k pages.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgalloc.h | 12 ++-
 arch/arm64/include/asm/pgtable.h | 80 +++++++++++++++++---
 arch/arm64/include/asm/tlb.h     |  3 +-
 arch/arm64/kernel/cpufeature.c   |  2 +
 arch/arm64/mm/mmu.c              |  2 +-
 arch/arm64/mm/pgd.c              |  2 +
 6 files changed, 87 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index cae8c648f462..aeba2cf15a25 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -14,6 +14,7 @@
 #include

 #define __HAVE_ARCH_PGD_FREE
+#define __HAVE_ARCH_PUD_FREE
 #include

 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
@@ -43,7 +44,8 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)

 static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
 {
-	set_p4d(p4dp, __p4d(__phys_to_p4d_val(pudp) | prot));
+	if (pgtable_l4_enabled())
+		set_p4d(p4dp, __p4d(__phys_to_p4d_val(pudp) | prot));
 }

 static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
@@ -53,6 +55,14 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
	p4dval |= (mm == &init_mm) ?
			P4D_TABLE_UXN : P4D_TABLE_PXN;
	__p4d_populate(p4dp, __pa(pudp), p4dval);
 }
+
+static inline void pud_free(struct mm_struct *mm, pud_t *pud)
+{
+	if (!pgtable_l4_enabled())
+		return;
+	BUG_ON((unsigned long)pud & (PAGE_SIZE-1));
+	free_page((unsigned long)pud);
+}
 #else
 static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
 {
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 057f079bb2c7..3fb75680f3bc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -744,12 +744,27 @@ static inline pmd_t *pud_pgtable(pud_t pud)

 #if CONFIG_PGTABLE_LEVELS > 3

+static __always_inline bool pgtable_l4_enabled(void)
+{
+	if (CONFIG_PGTABLE_LEVELS > 4 || !IS_ENABLED(CONFIG_ARM64_LPA2))
+		return true;
+	if (!alternative_has_feature_likely(ARM64_ALWAYS_BOOT))
+		return vabits_actual == VA_BITS;
+	return alternative_has_feature_unlikely(ARM64_HAS_LVA);
+}
+
+static inline bool mm_pud_folded(struct mm_struct *mm)
+{
+	return !pgtable_l4_enabled();
+}
+#define mm_pud_folded  mm_pud_folded
+
 #define pud_ERROR(e)	\
	pr_err("%s:%d: bad pud %016llx.\n", __FILE__, __LINE__, pud_val(e))

-#define p4d_none(p4d)		(!p4d_val(p4d))
-#define p4d_bad(p4d)		(!(p4d_val(p4d) & 2))
-#define p4d_present(p4d)	(p4d_val(p4d))
+#define p4d_none(p4d)		(pgtable_l4_enabled() && !p4d_val(p4d))
+#define p4d_bad(p4d)		(pgtable_l4_enabled() && !(p4d_val(p4d) & 2))
+#define p4d_present(p4d)	(!p4d_none(p4d))

 static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
@@ -765,7 +780,8 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)

 static inline void p4d_clear(p4d_t *p4dp)
 {
-	set_p4d(p4dp, __p4d(0));
+	if (pgtable_l4_enabled())
+		set_p4d(p4dp, __p4d(0));
 }

 static inline phys_addr_t p4d_page_paddr(p4d_t p4d)
@@ -773,25 +789,67 @@ static inline phys_addr_t p4d_page_paddr(p4d_t p4d)
	return __p4d_to_phys(p4d);
 }

+#define pud_index(addr)		(((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
+
+static inline pud_t *p4d_to_folded_pud(p4d_t *p4dp, unsigned long addr)
+{
+	return (pud_t *)PTR_ALIGN_DOWN(p4dp, PAGE_SIZE) + pud_index(addr);
+}
+
 static inline pud_t *p4d_pgtable(p4d_t p4d)
 {
	return (pud_t *)__va(p4d_page_paddr(p4d));
 }

-/* Find an entry in the first-level page table. */
-#define pud_offset_phys(dir, addr)	(p4d_page_paddr(READ_ONCE(*(dir))) + pud_index(addr) * sizeof(pud_t))
+static inline phys_addr_t pud_offset_phys(p4d_t *p4dp, unsigned long addr)
+{
+	BUG_ON(!pgtable_l4_enabled());

-#define pud_set_fixmap(addr)		((pud_t *)set_fixmap_offset(FIX_PUD, addr))
-#define pud_set_fixmap_offset(p4d, addr)	pud_set_fixmap(pud_offset_phys(p4d, addr))
-#define pud_clear_fixmap()		clear_fixmap(FIX_PUD)
+	return p4d_page_paddr(READ_ONCE(*p4dp)) + pud_index(addr) * sizeof(pud_t);
+}

-#define p4d_page(p4d)		pfn_to_page(__phys_to_pfn(__p4d_to_phys(p4d)))
+static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long addr)
+{
+	if (!pgtable_l4_enabled())
+		return p4d_to_folded_pud(p4dp, addr);
+	return (pud_t *)__va(pud_offset_phys(p4dp, addr));
+}
+#define pud_offset	pud_offset
+
+static inline pud_t *pud_set_fixmap(unsigned long addr)
+{
+	if (!pgtable_l4_enabled())
+		return NULL;
+	return (pud_t *)set_fixmap_offset(FIX_PUD, addr);
+}
+
+static inline pud_t *pud_set_fixmap_offset(p4d_t *p4dp, unsigned long addr)
+{
+	if (!pgtable_l4_enabled())
+		return p4d_to_folded_pud(p4dp, addr);
+	return pud_set_fixmap(pud_offset_phys(p4dp, addr));
+}
+
+static inline void pud_clear_fixmap(void)
+{
+	if (pgtable_l4_enabled())
+		clear_fixmap(FIX_PUD);
+}

 /* use ONLY for statically allocated translation tables */
-#define pud_offset_kimg(dir,addr)	((pud_t *)__phys_to_kimg(pud_offset_phys((dir), (addr))))
+static inline pud_t *pud_offset_kimg(p4d_t *p4dp, u64 addr)
+{
+	if (!pgtable_l4_enabled())
+		return p4d_to_folded_pud(p4dp, addr);
+	return (pud_t *)__phys_to_kimg(pud_offset_phys(p4dp, addr));
+}
+
+#define p4d_page(p4d)		pfn_to_page(__phys_to_pfn(__p4d_to_phys(p4d)))

 #else

+static inline bool pgtable_l4_enabled(void) { return false; }
+
 #define p4d_page_paddr(p4d)	({ BUILD_BUG(); 0;})

 /* Match pud_offset folding in */
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index c995d1f4594f..a23d33b6b56b 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -94,7 +94,8 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
				  unsigned long addr)
 {
-	tlb_remove_table(tlb, virt_to_page(pudp));
+	if (pgtable_l4_enabled())
+		tlb_remove_table(tlb, virt_to_page(pudp));
 }
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c20c3cbd42ef..f1b9c638afa7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1728,6 +1728,8 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)

	if (levels == 5 && !pgtable_l5_enabled())
		levels = 4;
+	else if (levels == 4 && !pgtable_l4_enabled())
+		levels = 3;

	if (__this_cpu_read(this_cpu_vector) == vectors) {
		const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 000ae84da0ef..54273b37808b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1089,7 +1089,7 @@ static void free_empty_pud_table(p4d_t *p4dp, unsigned long addr,
		free_empty_pmd_table(pudp, addr, next, floor, ceiling);
	} while (addr = next, addr < end);

-	if (CONFIG_PGTABLE_LEVELS <= 3)
+	if (!pgtable_l4_enabled())
		return;

	if (!pgtable_range_aligned(start, end, floor, ceiling, P4D_MASK))
diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index 3c4f8a279d2b..0c501cabc238 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -21,6 +21,8 @@ static bool pgdir_is_page_size(void)
 {
	if (PGD_SIZE == PAGE_SIZE)
		return true;
+	if (CONFIG_PGTABLE_LEVELS == 4)
+		return !pgtable_l4_enabled();
	if (CONFIG_PGTABLE_LEVELS == 5)
		return !pgtable_l5_enabled();
	return false;
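Spelling out the alignment arithmetic behind the 47-bit fallback (my numbers,
derived from the granule geometry rather than quoted from the patch):

	/*
	 * 16k granule: PAGE_SHIFT == 14, so each level resolves 11 bits and
	 * a full table holds 2048 8-byte entries.
	 *
	 * 52-bit VA: VA_LEVELS(52) == 4; level 0 indexes VA bits [51:47],
	 *            i.e. a 32-entry (256 byte) table.
	 * 48-bit VA: only bit [47] remains at level 0, i.e. the top 2
	 *            entries of that table, at offset 30 * 8 == 240 bytes,
	 *            which is not the 64-byte aligned address TTBR1 needs.
	 *            (On 64k/LVA, the equivalent slice is 64 entries at
	 *            offset 7680 bytes, which is 64-byte aligned, so the
	 *            trick works there.)
	 * 47-bit VA: VA_LEVELS(47) == 3, with a full 2048-entry root table,
	 *            hence VA_BITS_MIN == 47 and runtime PUD folding.
	 */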
From patchwork Thu Nov 24 12:39:31 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson, Ryan Roberts
Subject: [PATCH v2 18/19] arm64: ptdump: Disregard unaddressable VA space
Date: Thu, 24 Nov 2022 13:39:31 +0100
Message-Id: <20221124123932.2648991-19-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>

Configurations built with support for 52-bit virtual addressing can
also run on CPUs that only support 48 bits of VA space, in which case
only the part of swapper_pg_dir that represents the 48-bit addressable
region is relevant, and everything else is ignored by the hardware.

Our software pagetable walker has little in the way of input address
validation, and so it will happily start a walk from an address that
is not representable by the number of paging levels that are actually
active, resulting in lots of bogus output from the page table dumper
unless we take care to start at a valid address.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/ptdump.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 9bc4066c5bf3..ca38ac2637b5 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -358,7 +358,7 @@ void ptdump_check_wx(void)
		.ptdump = {
			.note_page = note_page,
			.range = (struct ptdump_range[]) {
-				{PAGE_OFFSET, ~0UL},
+				{_PAGE_OFFSET(vabits_actual), ~0UL},
				{0, 0}
			}
		}
@@ -380,6 +380,8 @@ static int __init ptdump_init(void)
	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
 #endif
	ptdump_initialize();
+	if (vabits_actual < VA_BITS)
+		kernel_ptdump_info.base_addr = _PAGE_OFFSET(vabits_actual);
	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
	return 0;
 }
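For concreteness (my arithmetic, using the kernel's definition
_PAGE_OFFSET(va) == -(1UL << (va))):

	/*
	 * _PAGE_OFFSET(52) == 0xfff0000000000000   start of the 52-bit linear map
	 * _PAGE_OFFSET(48) == 0xffff000000000000   start of the 48-bit linear map
	 *
	 * On a 52-bit build running on 48-bit-only hardware, addresses in
	 * [_PAGE_OFFSET(52), _PAGE_OFFSET(48)) cannot be translated with the
	 * active number of levels, so the dump now starts at the higher base.
	 */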
From patchwork Thu Nov 24 12:39:32 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson, Ryan Roberts
Subject: [PATCH v2 19/19] arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs
Date: Thu, 24 Nov 2022 13:39:32 +0100
Message-Id: <20221124123932.2648991-20-ardb@kernel.org>
In-Reply-To: <20221124123932.2648991-1-ardb@kernel.org>

Update Kconfig to permit 4k and 16k granule configurations to be built
with 52-bit virtual addressing, now that all the prerequisites are in
place.

While at it, update the feature description so it matches on the
appropriate feature bits depending on the page size. For simplicity,
let's just keep ARM64_HAS_LVA as the feature name.

Note that LPA2 based 52-bit virtual addressing requires 52-bit physical
addressing support to be enabled as well, as programming TCR.TxSZ to
values below 16 is not allowed unless TCR.DS is set, which is what
activates the 52-bit physical addressing support. While supporting the
converse (52-bit physical addressing without 52-bit virtual addressing)
would be possible in principle, let's keep things simple, and only
allow these features to be enabled at the same time.
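For illustration only (a fragment I constructed from the Kconfig defaults
below, not something shipped with the patch), a 4k-page 52-bit configuration
would now resolve to:

	CONFIG_ARM64_4K_PAGES=y
	CONFIG_ARM64_VA_BITS_52=y
	CONFIG_ARM64_VA_BITS=52
	CONFIG_ARM64_PA_BITS_52=y
	CONFIG_ARM64_PA_BITS=52
	CONFIG_PGTABLE_LEVELS=5

with the LPA2 code paths selected via the CONFIG_ARM64_LPA2 symbol introduced
earlier in this series.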
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig             | 17 ++++++++-------
 arch/arm64/kernel/cpufeature.c | 22 ++++++++++++++++----
 2 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 901f4d73476d..ade7a9a007c0 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -344,7 +344,9 @@ config PGTABLE_LEVELS
	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52)
	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
+	default 4 if ARM64_16K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52)
	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
+	default 5 if ARM64_4K_PAGES && ARM64_VA_BITS_52

config ARCH_SUPPORTS_UPROBES
	def_bool y
@@ -358,13 +360,13 @@ config BROKEN_GAS_INST
config KASAN_SHADOW_OFFSET
	hex
	depends on KASAN_GENERIC || KASAN_SW_TAGS
-	default 0xdfff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS
-	default 0xdfffc00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdfff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && !KASAN_SW_TAGS
+	default 0xdfffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && !KASAN_SW_TAGS
	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS
-	default 0xefffc00000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
@@ -1197,7 +1199,7 @@ config ARM64_VA_BITS_48

config ARM64_VA_BITS_52
	bool "52-bit"
-	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
+	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
	help
	  Enable 52-bit virtual addressing for userspace when explicitly
	  requested via a hint to mmap(). The kernel will also use 52-bit
@@ -1244,10 +1246,11 @@ choice

config ARM64_PA_BITS_48
	bool "48-bit"
+	depends on ARM64_64K_PAGES || !ARM64_VA_BITS_52

config ARM64_PA_BITS_52
-	bool "52-bit (ARMv8.2)"
-	depends on ARM64_64K_PAGES
+	bool "52-bit"
+	depends on ARM64_64K_PAGES || ARM64_VA_BITS_52
	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
	help
	  Enable support for a 52-bit physical address space, introduced as
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f1b9c638afa7..834dc7b76e1c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2665,15 +2665,29 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
	},
#ifdef CONFIG_ARM64_VA_BITS_52
	{
-		.desc = "52-bit Virtual Addressing (LVA)",
		.capability = ARM64_HAS_LVA,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-		.sys_reg = SYS_ID_AA64MMFR2_EL1,
-		.sign = FTR_UNSIGNED,
+		.matches = has_cpuid_feature,
		.field_width = 4,
+#ifdef CONFIG_ARM64_64K_PAGES
+		.desc = "52-bit Virtual Addressing (LVA)",
+		.sign = FTR_SIGNED,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
		.field_pos = ID_AA64MMFR2_EL1_VARange_SHIFT,
-		.matches = has_cpuid_feature,
		.min_field_value = ID_AA64MMFR2_EL1_VARange_52,
+#else
+		.desc = "52-bit Virtual Addressing (LPA2)",
+		.sys_reg = SYS_ID_AA64MMFR0_EL1,
+#ifdef CONFIG_ARM64_4K_PAGES
+		.sign = FTR_SIGNED,
+		.field_pos = ID_AA64MMFR0_EL1_TGRAN4_SHIFT,
+		.min_field_value = ID_AA64MMFR0_EL1_TGRAN4_52_BIT,
+#else
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR0_EL1_TGRAN16_SHIFT,
+		.min_field_value = ID_AA64MMFR0_EL1_TGRAN16_52_BIT,
+#endif
+#endif
	},
#endif
	{},