From patchwork Tue Mar 7 14:04:23 2023
X-Patchwork-Id: 13163636
From: Ard Biesheuvel
Subject: [PATCH v3 01/60] arm64: kernel: Disable latent_entropy GCC plugin in early C runtime
Date: Tue, 7 Mar 2023 15:04:23 +0100
Message-Id: <20230307140522.2311461-2-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

Avoid build issues in the early C code related to the latent_entropy GCC
plugin, by incorporating the C flags fragment that disables it.

Signed-off-by: Ard Biesheuvel
Acked-by: Mark Rutland
---
 arch/arm64/kernel/pi/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile
index 4c0ea3cd4ea406b6..c844a0546d7f0e62 100644
--- a/arch/arm64/kernel/pi/Makefile
+++ b/arch/arm64/kernel/pi/Makefile
@@ -3,6 +3,7 @@
 KBUILD_CFLAGS	:= $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \
 		   -Os -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) \
+		   $(DISABLE_LATENT_ENTROPY_PLUGIN) \
 		   $(call cc-option,-mbranch-protection=none) \
 		   -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \
 		   -include $(srctree)/include/linux/hidden.h \

From patchwork Tue Mar 7 14:04:24 2023
X-Patchwork-Id: 13163642
From: Ard Biesheuvel
Subject: [PATCH v3 02/60] arm64: mm: Take potential load offset into account when KASLR is off
Date: Tue, 7 Mar 2023 15:04:24 +0100
Message-Id: <20230307140522.2311461-3-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

We enable CONFIG_RELOCATABLE even when CONFIG_RANDOMIZE_BASE is
disabled, and this permits the loader (i.e., EFI) to place the kernel
anywhere in physical memory as long as the base address is 64k aligned.

This means that the 'KASLR' case described in the header that defines
the size of the statically allocated page tables could take effect even
when CONFIG_RANDOMIZE_BASE=n. So check for CONFIG_RELOCATABLE instead.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/kernel-pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index fcd14197756f0619..4d13c73171e1e360 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -53,7 +53,7 @@
  * address is just pushed over a boundary and the start address isn't).
  */
-#ifdef CONFIG_RANDOMIZE_BASE
+#ifdef CONFIG_RELOCATABLE
 #define EARLY_KASLR	(1)
 #else
 #define EARLY_KASLR	(0)
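The extra page-table pages the 'KASLR' case reserves follow from the
boundary-crossing argument quoted in the hunk context above. The sketch
below is illustrative only and is not the kernel's actual macro: the
number of early entries needed at a translation level is the number of
distinct table indices the image spans, and an arbitrary 64k-aligned
load offset can push the end address across an index boundary that the
start address does not cross, costing one extra entry (and thus one
extra table page) per level.

/*
 * Illustrative only, not kernel code. With a fixed link address the
 * result is a compile-time constant; with an arbitrary load offset the
 * same-sized image may straddle one more boundary, so one extra entry
 * has to be reserved up front.
 */
static inline unsigned long entries_spanned(unsigned long vstart,
					    unsigned long vend,
					    unsigned int shift)
{
	return ((vend - 1) >> shift) - (vstart >> shift) + 1;
}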
From patchwork Tue Mar 7 14:04:25 2023
X-Patchwork-Id: 13163643
From: Ard Biesheuvel
Subject: [PATCH v3 03/60] arm64: mm: get rid of kimage_vaddr global variable
Date: Tue, 7 Mar 2023 15:04:25 +0100
Message-Id: <20230307140522.2311461-4-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

We store the address of _text in kimage_vaddr, but since commit
09e3c22a86f6889d ("arm64: Use a variable to store non-global mappings
decision"), we no longer reference this variable from modules so we no
longer need to export it. In fact, we don't need it at all so let's
just get rid of it.

Signed-off-by: Ard Biesheuvel
Acked-by: Mark Rutland
---
 arch/arm64/include/asm/memory.h | 6 ++----
 arch/arm64/kernel/head.S        | 2 +-
 arch/arm64/mm/mmu.c             | 3 ---
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 78e5163836a0ab95..a4e1d832a15a2d7a 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -182,6 +182,7 @@
 #include
 #include
 #include
+#include
 
 #if VA_BITS > 48
 extern u64 vabits_actual;
@@ -193,15 +194,12 @@
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
 
-/* the virtual base of the kernel image */
-extern u64			kimage_vaddr;
-
 /* the offset between the kernel virtual and physical mappings */
 extern u64			kimage_voffset;
 
 static inline unsigned long kaslr_offset(void)
 {
-	return kimage_vaddr - KIMAGE_VADDR;
+	return (u64)&_text - KIMAGE_VADDR;
 }
 
 static inline bool kaslr_enabled(void)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b98970907226b36c..65cdaaa2c859418f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -482,7 +482,7 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer
 
-	ldr_l	x4, kimage_vaddr		// Save the offset between
+	adrp	x4, _text			// Save the offset between
 	sub	x4, x4, x0			// the kernel virtual and
 	str_l	x4, kimage_voffset, x5		// physical mappings
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f9d8898a02516f6..81e1420d2cc13246 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -50,9 +50,6 @@
 u64 vabits_actual __ro_after_init = VA_BITS_MIN;
 EXPORT_SYMBOL(vabits_actual);
 #endif
 
-u64 kimage_vaddr __ro_after_init = (u64)&_text;
-EXPORT_SYMBOL(kimage_vaddr);
-
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
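As a reading aid, the two deltas involved here can be restated in a
stand-alone sketch. The helpers below are hypothetical and only spell
out the arithmetic: kaslr_offset() is the distance between the run-time
and link-time virtual addresses of the image (now recomputed from
&_text instead of read from a cached global), while kimage_voffset,
which is kept, is the distance between the image's virtual and physical
addresses that head.S computes above.

/*
 * Hypothetical sketch, not kernel code: 'runtime_text' stands for the
 * virtual address the image actually runs at, 'link_text' for
 * KIMAGE_VADDR, and 'phys_text' for the physical load address.
 */
static unsigned long kaslr_offset_of(unsigned long runtime_text,
				     unsigned long link_text)
{
	return runtime_text - link_text;	/* what kaslr_offset() returns */
}

static unsigned long kimage_voffset_of(unsigned long runtime_text,
				       unsigned long phys_text)
{
	return runtime_text - phys_text;	/* what head.S stores */
}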
From patchwork Tue Mar 7 14:04:26 2023
X-Patchwork-Id: 13163644
From: Ard Biesheuvel
Subject: [PATCH v3 04/60] arm64: mm: Move PCI I/O emulation region above the vmemmap region
Date: Tue, 7 Mar 2023 15:04:26 +0100
Message-Id: <20230307140522.2311461-5-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

Move the PCI I/O region above the vmemmap region in the kernel's VA
space. This will permit us to reclaim the lower part of the vmemmap
region for vmalloc/vmap allocations when running a 52-bit VA capable
build on a 48-bit VA capable system.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h | 4 ++--
 arch/arm64/mm/ptdump.c          | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a4e1d832a15a2d7a..6e321cc06a3c30f0 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,8 +49,8 @@
 #define MODULES_VSIZE		(SZ_128M)
 #define VMEMMAP_START		(-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
 #define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
-#define PCI_IO_END		(VMEMMAP_START - SZ_8M)
-#define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
+#define PCI_IO_START		(VMEMMAP_END + SZ_8M)
+#define PCI_IO_END		(PCI_IO_START + PCI_IO_SIZE)
 #define FIXADDR_TOP		(VMEMMAP_START - SZ_32M)
 
 #if VA_BITS > 48
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 9bc4066c5bf33a72..9d1f4cdc6672ed5f 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -47,10 +47,10 @@ static struct addr_marker address_markers[] = {
 	{ VMALLOC_END,			"vmalloc() end" },
 	{ FIXADDR_START,		"Fixmap start" },
 	{ FIXADDR_TOP,			"Fixmap end" },
-	{ PCI_IO_START,			"PCI I/O start" },
-	{ PCI_IO_END,			"PCI I/O end" },
 	{ VMEMMAP_START,		"vmemmap start" },
 	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ PCI_IO_START,			"PCI I/O start" },
+	{ PCI_IO_END,			"PCI I/O end" },
 	{ -1,				NULL },
 };

From patchwork Tue Mar 7 14:04:27 2023
X-Patchwork-Id: 13163646
From: Ard Biesheuvel
Subject: [PATCH v3 05/60] arm64: mm: Move fixmap region above vmemmap region
Date: Tue, 7 Mar 2023 15:04:27 +0100
Message-Id: <20230307140522.2311461-6-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

Move the fixmap region above the vmemmap region, so that the start of
the vmemmap delineates the end of the region available for vmalloc and
vmap allocations and the randomized placement of the kernel and
modules.

In a subsequent patch, we will take advantage of this to reclaim most
of the vmemmap area when running a 52-bit VA capable build with 52-bit
virtual addressing disabled at runtime.

Note that the existing guard region of 256 MiB covers the fixmap and
PCI I/O regions as well, so we can reduce it to 8 MiB, which is what we
use in other places too.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h  | 2 +-
 arch/arm64/include/asm/pgtable.h | 2 +-
 arch/arm64/mm/ptdump.c           | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 6e321cc06a3c30f0..9b9e52d823beccc6 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -51,7 +51,7 @@
 #define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
 #define PCI_IO_START		(VMEMMAP_END + SZ_8M)
 #define PCI_IO_END		(PCI_IO_START + PCI_IO_SIZE)
-#define FIXADDR_TOP		(VMEMMAP_START - SZ_32M)
+#define FIXADDR_TOP		(ULONG_MAX - SZ_8M + 1)
 
 #if VA_BITS > 48
 #define VA_BITS_MIN		(48)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b6ba466e2e8a3fc7..3eff06c5d0eb73c7 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -22,7 +22,7 @@
  * and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(VMEMMAP_START - SZ_256M)
+#define VMALLOC_END		(VMEMMAP_START - SZ_8M)
 
 #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 9d1f4cdc6672ed5f..76d28056bd14920a 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -45,12 +45,12 @@ static struct addr_marker address_markers[] = {
 	{ MODULES_END,			"Modules end" },
 	{ VMALLOC_START,		"vmalloc() area" },
 	{ VMALLOC_END,			"vmalloc() end" },
-	{ FIXADDR_START,		"Fixmap start" },
-	{ FIXADDR_TOP,			"Fixmap end" },
 	{ VMEMMAP_START,		"vmemmap start" },
 	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
+	{ FIXADDR_START,		"Fixmap start" },
+	{ FIXADDR_TOP,			"Fixmap end" },
 	{ -1,				NULL },
 };
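To make the cumulative effect of patches 4 and 5 easier to follow, here
is a rough reading aid in the form of a comment block. It only restates
the macros touched above (it is not a literal excerpt of memory.h),
highest addresses first:

/*
 * Sketch of the upper end of the kernel VA space after this patch:
 *
 *	FIXADDR_TOP	= ULONG_MAX - SZ_8M + 1		fixmap
 *	PCI_IO_END	= PCI_IO_START + PCI_IO_SIZE	PCI I/O space
 *	PCI_IO_START	= VMEMMAP_END + SZ_8M
 *	VMEMMAP_END / VMEMMAP_START			vmemmap region
 *	VMALLOC_END	= VMEMMAP_START - SZ_8M		top of vmalloc/vmap
 */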
From patchwork Tue Mar 7 14:04:28 2023
X-Patchwork-Id: 13163662
From: Ard Biesheuvel
Subject: [PATCH v3 06/60] arm64: ptdump: Allow VMALLOC_END to be defined at boot
Date: Tue, 7 Mar 2023 15:04:28 +0100
Message-Id: <20230307140522.2311461-7-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

Extend the existing pattern for populating ptdump marker entries at
boot, and add handling of VMALLOC_END, which will cease to be a compile
time constant for configurations that support 52-bit virtual
addressing.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/ptdump.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 76d28056bd14920a..910b35f02280cbdb 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -31,7 +31,12 @@ enum address_markers_idx {
 	PAGE_END_NR,
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	KASAN_START_NR,
+	KASAN_END_NR,
 #endif
+	MODULES_NR,
+	MODULES_END_NR,
+	VMALLOC_START_NR,
+	VMALLOC_END_NR,
 };
 
 static struct addr_marker address_markers[] = {
@@ -44,7 +49,7 @@ static struct addr_marker address_markers[] = {
 	{ MODULES_VADDR,		"Modules start" },
 	{ MODULES_END,			"Modules end" },
 	{ VMALLOC_START,		"vmalloc() area" },
-	{ VMALLOC_END,			"vmalloc() end" },
+	{ 0,				"vmalloc() end" },
 	{ VMEMMAP_START,		"vmemmap start" },
 	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
 	{ PCI_IO_START,			"PCI I/O start" },
@@ -379,6 +384,7 @@ static int __init ptdump_init(void)
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
 #endif
+	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
 	ptdump_initialize();
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;

From patchwork Tue Mar 7 14:04:29 2023
X-Patchwork-Id: 13163645
From: Ard Biesheuvel
Subject: [PATCH v3 07/60] arm64: ptdump: Discover start of vmemmap region at runtime
Date: Tue, 7 Mar 2023 15:04:29 +0100
Message-Id: <20230307140522.2311461-8-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

We will soon reclaim the part of the vmemmap region that covers VA
space that is not addressable by the hardware. To avoid confusion,
ensure that the 'vmemmap start' marker points at the start of the
region that is actually being used for the struct page array, rather
than the start of the region we set aside for it at build time.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/ptdump.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 910b35f02280cbdb..8f37d6d8b5216473 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -37,6 +37,7 @@ enum address_markers_idx {
 	MODULES_END_NR,
 	VMALLOC_START_NR,
 	VMALLOC_END_NR,
+	VMEMMAP_START_NR,
 };
 
 static struct addr_marker address_markers[] = {
@@ -386,6 +387,10 @@ static int __init ptdump_init(void)
 #endif
 	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
 	ptdump_initialize();
+	if (vabits_actual < VA_BITS) {
+		address_markers[VMEMMAP_START_NR].start_address =
+			(unsigned long)virt_to_page(_PAGE_OFFSET(vabits_actual));
+	}
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;
 }
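The 'populate at boot' pattern that the two ptdump patches above extend
can be shown with a stand-alone sketch. All names and addresses below
are made up for the example; the point is simply that entries whose
value is only known at boot stay zeroed in the static table and are
patched in from the init routine before the table is consumed.

#include <stdio.h>

struct addr_marker {
	unsigned long start_address;
	const char *name;
};

enum { VMALLOC_END_NR, VMEMMAP_START_NR, NR_MARKERS };

/* entries left at 0 here are filled in at boot */
static struct addr_marker markers[NR_MARKERS] = {
	[VMALLOC_END_NR]   = { 0, "vmalloc() end" },
	[VMEMMAP_START_NR] = { 0, "vmemmap start" },
};

static void markers_init(unsigned long vmalloc_end, unsigned long vmemmap_start)
{
	markers[VMALLOC_END_NR].start_address = vmalloc_end;
	markers[VMEMMAP_START_NR].start_address = vmemmap_start;
}

int main(void)
{
	markers_init(0xfffff80000000000UL, 0xfffffc0000000000UL);
	for (int i = 0; i < NR_MARKERS; i++)
		printf("%#lx %s\n", markers[i].start_address, markers[i].name);
	return 0;
}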
From patchwork Tue Mar 7 14:04:30 2023
X-Patchwork-Id: 13163663
From: Ard Biesheuvel
Subject: [PATCH v3 08/60] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
Date: Tue, 7 Mar 2023 15:04:30 +0100
Message-Id: <20230307140522.2311461-9-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

The placement and size of the vmemmap region in the kernel virtual
address space is currently derived from the base2 order of the size of
a struct page. This makes for nicely aligned constants with lots of
leading 0xf and trailing 0x0 digits, but given that the actual struct
pages are indexed as an ordinary array, this resulting region is
severely overdimensioned when the size of a struct page is just over a
power of 2.

This doesn't matter today, but once we enable 52-bit virtual addressing
for 4k pages configurations, the vmemmap region may take up almost half
of the upper VA region with the current struct page upper bound at 64
bytes. And once we enable KMSAN or other features that push the size of
a struct page over 64 bytes, we will run out of VMALLOC space entirely.

So instead, let's derive the region size from the actual size of a
struct page, and place the entire region 1 GB from the top of the VA
space, where it still doesn't share any lower level translation table
entries with the fixmap.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9b9e52d823beccc6..830740ff79bab902 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -30,8 +30,8 @@
  * keep a constant PAGE_OFFSET and "fallback" to using the higher end
  * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SHIFT	(PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
-#define VMEMMAP_SIZE	((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
+#define VMEMMAP_RANGE	(_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
+#define VMEMMAP_SIZE	((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map, at the
@@ -47,8 +47,8 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(_PAGE_END(VA_BITS_MIN))
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
-#define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
+#define VMEMMAP_START		(VMEMMAP_END - VMEMMAP_SIZE)
+#define VMEMMAP_END		(ULONG_MAX - SZ_1G + 1)
 #define PCI_IO_START		(VMEMMAP_END + SZ_8M)
 #define PCI_IO_END		(PCI_IO_START + PCI_IO_SIZE)
 #define FIXADDR_TOP		(ULONG_MAX - SZ_8M + 1)
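A worked example helps quantify the 'severely overdimensioned' claim.
The numbers below assume a 52-bit VA capable build with 4k pages and
VA_BITS_MIN = 48, so the vmemmap has to cover 2^52 - 2^47 bytes of
linear map, i.e. 2^40 - 2^35 page frames, out of a 128 TiB upper VA
region. The program is only a back-of-the-envelope check, not kernel
code, and the struct page sizes are illustrative.

#include <stdio.h>

int main(void)
{
	unsigned long long nframes = (1ULL << 40) - (1ULL << 35);
	unsigned long long tib = 1ULL << 40;
	unsigned int sizes[] = { 64, 72, 80 };

	for (int i = 0; i < 3; i++) {
		unsigned int sz = sizes[i];
		unsigned int pow2 = 64;

		while (pow2 < sz)	/* old scheme: round up to a power of two */
			pow2 *= 2;
		printf("struct page = %2u bytes: pow2 sizing %3llu TiB, exact sizing %3llu TiB\n",
		       sz, nframes * pow2 / tib, nframes * sz / tib);
	}
	return 0;
}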
From patchwork Tue Mar 7 14:04:31 2023
X-Patchwork-Id: 13163664
From: Ard Biesheuvel
Subject: [PATCH v3 09/60] arm64: mm: Reclaim unused vmemmap region for vmalloc use
Date: Tue, 7 Mar 2023 15:04:31 +0100
Message-Id: <20230307140522.2311461-10-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

The vmemmap array is statically sized based on the maximum supported
size of the virtual address space, but it is located inside the upper
VA region, which is statically sized based on the *minimum* supported
size of the VA space. This doesn't matter much when using 64k pages,
which is the only configuration that currently supports 52-bit virtual
addressing.

However, upcoming LPA2 support will change this picture somewhat, as in
that case, the vmemmap array will take up more than 25% of the upper VA
region when using 4k pages. Given that most of this space is never used
when running on a system that does not support 52-bit virtual
addressing, let's reclaim the unused vmemmap area in that case.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgtable.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 3eff06c5d0eb73c7..2259898e8c3d990a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -18,11 +18,15 @@
  * VMALLOC range.
  *
  * VMALLOC_START: beginning of the kernel vmalloc space
- * VMALLOC_END: extends to the available space below vmemmap, PCI I/O space
- * and fixed mappings
+ * VMALLOC_END: extends to the available space below vmemmap
  */
 #define VMALLOC_START		(MODULES_END)
+#if VA_BITS == VA_BITS_MIN
 #define VMALLOC_END		(VMEMMAP_START - SZ_8M)
+#else
+#define VMEMMAP_UNUSED_NPAGES	((_PAGE_OFFSET(vabits_actual) - PAGE_OFFSET) >> PAGE_SHIFT)
+#define VMALLOC_END		(VMEMMAP_START + VMEMMAP_UNUSED_NPAGES * sizeof(struct page) - SZ_8M)
+#endif
 
 #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
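Under the same illustrative assumptions as before (52-bit VA capable
build, 4k pages, sizeof(struct page) == 64) running on hardware that
only supports 48-bit VAs, vabits_actual is 48 and the struct pages
covering [-2^52, -2^48) are never referenced. The quick check below,
which is not kernel code, shows roughly how much of the vmemmap region
the new VMALLOC_END hands back to the vmalloc space (ignoring the 8 MiB
guard):

#include <stdio.h>

int main(void)
{
	/* page frames in [-2^52, -2^48): never used on 48-bit hardware */
	unsigned long long unused_npages = (1ULL << 40) - (1ULL << 36);
	/* page frames in [-2^48, -2^47): the linear map actually in use */
	unsigned long long used_npages = 1ULL << 35;
	unsigned long long tib = 1ULL << 40;

	printf("vmemmap slice handed back to vmalloc: %llu TiB\n",
	       unused_npages * 64 / tib);
	printf("vmemmap slice still in use:           %llu TiB\n",
	       used_npages * 64 / tib);
	return 0;
}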
From patchwork Tue Mar 7 14:04:32 2023
X-Patchwork-Id: 13163665
From: Ard Biesheuvel
Subject: [PATCH v3 10/60] arm64: kaslr: Adjust randomization range dynamically
Date: Tue, 7 Mar 2023 15:04:32 +0100
Message-Id: <20230307140522.2311461-11-ardb@kernel.org>
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org>

Currently, we base the KASLR randomization range on a rough estimate of
the available space in the upper VA region: the lower 1/4th has the
module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O
ranges, and so we pick a random location in the remaining space in the
middle.

Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (the size of
which is based on a 48-bit VA space for compatibility with non-LVA
hardware) that the region above the vmalloc region takes up more than a
quarter of the available space.

So instead of a heuristic, let's derive the randomization range from
the actual boundaries of the vmalloc region.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/image-vars.h     |  2 ++
 arch/arm64/kernel/pi/kaslr_early.c | 11 ++++++-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 8309197c0ebd4a8e..b5906f8e18d7eb8d 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -34,6 +34,8 @@
 PROVIDE(__pi___memcpy			= __pi_memcpy);
 PROVIDE(__pi___memmove			= __pi_memmove);
 PROVIDE(__pi___memset			= __pi_memset);
+PROVIDE(__pi_vabits_actual		= vabits_actual);
+
 #ifdef CONFIG_KVM
 
 /*
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 17bff6e399e46b0b..b9e0bb4bc6a9766f 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -14,6 +14,7 @@
 #include
 #include
+#include
 
 /* taken from lib/string.c */
 static char *__strstr(const char *s1, const char *s2)
@@ -87,7 +88,7 @@ static u64 get_kaslr_seed(void *fdt)
 
 asmlinkage u64 kaslr_early_init(void *fdt)
 {
-	u64 seed;
+	u64 seed, range;
 
 	if (is_kaslr_disabled_cmdline(fdt))
 		return 0;
@@ -102,9 +103,9 @@ asmlinkage u64 kaslr_early_init(void *fdt)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
-	 * the lower and upper quarters to avoid colliding with other
-	 * allocations.
+	 * 'middle' half of the VMALLOC area, and stay clear of the lower and
+	 * upper quarters to avoid colliding with other allocations.
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0));
+	range = (VMALLOC_END - KIMAGE_VADDR) / 2;
+	return range / 2 + (((__uint128_t)range * seed) >> 64);
 }
<20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2046; i=ardb@kernel.org; h=from:subject; bh=0KQy78BFjE2XXHrdYNysZXORIwBR4S+/FhQgUDVlHf8=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRaSuaYdKGecRnTbZ/aoPbHZMclcWbp1kdumz2eqCC 6f/e3d3lLIwiHEwyIopsgjM/vtu5+mJUrXOs2Rh5rAygQxh4OIUgIlsncvIcDm1TUglY9JWx6ML b385O7d3eqLEH+4/AXxxfS6Z/xa65jAydK9YMXtCR17AYvM57EpfJO3Z3cxCF/dyTLixLkU4pkG NEQA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060706_561832_ECE98258 X-CRM114-Status: GOOD ( 16.03 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org ThunderX is an obsolete platform that shipped without support for the EFI_RNG_PROTOCOL in its firmware. Now that we no longer misidentify small KASLR offsets as randomization being enabled, we can drop the explicit check for ThunderX as well, given that KASLR is known to be unavailable. Note that we never enable KPTI on these systems, in spite of what this function returns. However, using non-global mappings for code that is executable at EL1 is what tickles the erratum on these cores, regardless of whether KPTI is enabled or not, so non-global mappings should simply never be used here. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/cpu_errata.c | 2 +- arch/arm64/kernel/cpufeature.c | 12 ------------ 2 files changed, 1 insertion(+), 13 deletions(-) diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 307faa2b4395ed9f..530bbd6a2f6331fd 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -261,7 +261,7 @@ static const struct midr_range cavium_erratum_23154_cpus[] = { #endif #ifdef CONFIG_CAVIUM_ERRATUM_27456 -const struct midr_range cavium_erratum_27456_cpus[] = { +static const struct midr_range cavium_erratum_27456_cpus[] = { /* Cavium ThunderX, T88 pass 1.x - 2.1 */ MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), /* Cavium ThunderX, T81 pass 1.0 */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 2e3e5513977733b7..e9a138b7e3b22cc7 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1621,18 +1621,6 @@ bool kaslr_requires_kpti(void) return false; } - /* - * Systems affected by Cavium erratum 24756 are incompatible - * with KPTI. 
- */ - if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) { - extern const struct midr_range cavium_erratum_27456_cpus[]; - - if (is_midr_in_range_list(read_cpuid_id(), - cavium_erratum_27456_cpus)) - return false; - } - return kaslr_enabled(); } From patchwork Tue Mar 7 14:04:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163667 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 65892C6FD1B for ; Tue, 7 Mar 2023 14:23:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=MHYsnuuplrOMdEmEo5nWMDWQfr9s4GEos5UKrt2kFcY=; b=VAk2rIMVu8f5c+ urOAhX1qxeBrcj4Z1eG1+3exIhNSCprs69UzfuPuJPE7RDXpC10FeT2pAm/cugecRZ0LG6xvlYY5L oqAkyUM/i21MNzQ4MafSufwWFykfGdA1ypt/VRjOnGCW0qcoNroSJ5dsaLRPi0ovC63iD01kQpknb ib2UL2PzxxCALG20su+oxZE8H2AHenmBqmaTo5WF1EEz2oRxT5G5zQIdQkpPJVuacOPkjFnh507Xf BWoYkpqz3afpnrfO+NNJf1dk4iHDMsfh6yDD32FWITK0SpZn+yuEQ65jC/MnGKVdmRYjEoWXgzVl7 73KIxjPsgKrwj0MYUOgQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYCt-000kYO-Gi; Tue, 07 Mar 2023 14:22:23 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXy8-000e1b-AV for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:09 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B945A6143A; Tue, 7 Mar 2023 14:07:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 338C3C433A1; Tue, 7 Mar 2023 14:07:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198027; bh=Hq8nHTyQr1XHq5oR1wC0C3a0J3sTD/A+RXL9gidFldA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tasMtYwVLBsYQ2coKtO/UrLuNh5Xk04axrum5aB+CLGXjp2+1xnnfrDABu+WuSdFR sFKVeRhFY/rtzakdiLI9Vx9zHgeju92OdtjnyyWJK9W0lP041NFlw7x5tY9MRtMEVh P45tZmK/0H1DuVTOHtalkzTXpWOAyXQr67p0hNoeXjdshmmlhni9HA8jS+qNWVeFx1 oC+5r5Km1QyyyXZkSaDgc/I5r+SSCvTKxh86NSSTh0fIFG2wmJ35E+RN8ojn5jrVGu TFPUyrz+CMcFWrf3MUW/nRk1HjPHaLaPT+Wudbn63vTeXHaRsssPUhsFNjZw+4ypJ9 riEyjmcFs3NJA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 12/60] arm64: Turn kaslr_feature_override into a generic SW feature override Date: Tue, 7 Mar 2023 15:04:34 +0100 Message-Id: <20230307140522.2311461-13-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: 
<20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=4014; i=ardb@kernel.org; h=from:subject; bh=GSuvLi6RwTLuJqeqKu2UWNgSFMdiiU3kHbNFtROgd2w=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRWzH1bdWNns5bhsHSbeKR5p8q2/fmdl97PnBHRcbO ufbPlveUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACYincnIsH/u4TUfvh0MmHMg 3nyO8enfLZccfx5cPmtCV/EqdY1d0/4wMvy52bzHzZV3gcT6dyr/T0/dm5SWXOzFmckQ0XEowf3 1V3YA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060708_487041_A7D589DF X-CRM114-Status: GOOD ( 18.03 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Marc Zyngier Disabling KASLR from the command line is implemented as a feature override. Repaint it slightly so that it can further be used as more generic infrastructure for SW override purposes. Signed-off-by: Marc Zyngier [ardb: don't apply the override mask to val] Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/cpufeature.h | 4 ++++ arch/arm64/kernel/cpufeature.c | 2 ++ arch/arm64/kernel/idreg-override.c | 16 ++++++---------- arch/arm64/kernel/kaslr.c | 5 ++--- 4 files changed, 14 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 6bf013fb110d7923..bc10098901808c00 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -15,6 +15,8 @@ #define MAX_CPU_FEATURES 128 #define cpu_feature(x) KERNEL_HWCAP_ ## x +#define ARM64_SW_FEATURE_OVERRIDE_NOKASLR 0 + #ifndef __ASSEMBLY__ #include @@ -925,6 +927,8 @@ extern struct arm64_ftr_override id_aa64smfr0_override; extern struct arm64_ftr_override id_aa64isar1_override; extern struct arm64_ftr_override id_aa64isar2_override; +extern struct arm64_ftr_override arm64_sw_feature_override; + u32 get_kvm_ipa_limit(void); void dump_cpu_features(void); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index e9a138b7e3b22cc7..88cec6c14743c4c5 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -657,6 +657,8 @@ struct arm64_ftr_override __ro_after_init id_aa64smfr0_override; struct arm64_ftr_override __ro_after_init id_aa64isar1_override; struct arm64_ftr_override __ro_after_init id_aa64isar2_override; +struct arm64_ftr_override arm64_sw_feature_override; + static const struct __ftr_reg_entry { u32 sys_id; struct arm64_ftr_reg *reg; diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index d833d78a7f313563..434703e4e55cb785 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -138,15 +138,11 @@ static const struct ftr_set_desc smfr0 __initconst = { }, }; -extern struct arm64_ftr_override kaslr_feature_override; - -static const struct ftr_set_desc kaslr __initconst = { - .name = "kaslr", -#ifdef CONFIG_RANDOMIZE_BASE - .override = &kaslr_feature_override, -#endif +static const struct ftr_set_desc sw_features __initconst = { + .name = "arm64_sw", + .override = &arm64_sw_feature_override, .fields = { - FIELD("disabled", 0, NULL), + FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL), {} }, }; @@ -158,7 +154,7 @@ static const struct 
ftr_set_desc * const regs[] __initconst = { &isar1, &isar2, &smfr0, - &kaslr, + &sw_features, }; static const struct { @@ -175,7 +171,7 @@ static const struct { "id_aa64isar1.api=0 id_aa64isar1.apa=0 " "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0" }, { "arm64.nomte", "id_aa64pfr1.mte=0" }, - { "nokaslr", "kaslr.disabled=1" }, + { "nokaslr", "arm64_sw.nokaslr=1" }, }; static int __init find_field(const char *cmdline, diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c index e7477f21a4c9d062..abcd996c747b8c97 100644 --- a/arch/arm64/kernel/kaslr.c +++ b/arch/arm64/kernel/kaslr.c @@ -23,8 +23,6 @@ u64 __ro_after_init module_alloc_base; u16 __initdata memstart_offset_seed; -struct arm64_ftr_override kaslr_feature_override __initdata; - static int __init kaslr_init(void) { u64 module_range; @@ -36,7 +34,8 @@ static int __init kaslr_init(void) */ module_alloc_base = (u64)_etext - MODULES_VSIZE; - if (kaslr_feature_override.val & kaslr_feature_override.mask & 0xf) { + if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, + ARM64_SW_FEATURE_OVERRIDE_NOKASLR)) { pr_info("KASLR disabled on command line\n"); return 0; } From patchwork Tue Mar 7 14:04:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163713 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BA6B9C678D4 for ; Tue, 7 Mar 2023 14:24:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=7ndYMQ/oB6p0cxMRs9keky3xHzfany+OYAV7gtGZ+1w=; b=nd+ya9Om+svWVd fu0r+v3Iztf2186lF/+RGt7urM7QbXdqH38DE00aLJw7ym6pPtm1g0u47mEsx2X5uyTPfioUPQu4E ZYgaA5pNsm1EjwUWVxvh8QCg2dyqXfIainL6EMiEcOVVm6vzZsP+TJwm7dkc9oc8t7opFA9iSxsPq jyRPWWh09hw8xVpMuhbDvEh9HFbROyK0TXwRdoxooP4ls1lX0WRhA2yBndZcay+w6JozCm6llBOAJ xFKwpw0JvAK5XeqPGSwVNjIqlokwk3Q0OQ96wgUmKZzYURFulkFkvaTXOyKa2/Kpagf9wcILTjDsq qvfeLCH3HXMyVr+Q2FDw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYDU-000krg-Ii; Tue, 07 Mar 2023 14:23:00 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyA-000e3L-Bo for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:11 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E4BAB6145E; Tue, 7 Mar 2023 14:07:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 97EF3C4339E; Tue, 7 Mar 2023 14:07:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198029; bh=1oCuWk9VXyzMZa12N2n3i7rjkfl0ioumXcP43D4virk=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LZxoqVBA3o+TyPC758j6soT3BLLlD+0n2bXQHlRjTHq7geYmp6Z8l4AS488r1xkuv f2bgyEgCycl8ABzRlf+0gVsyFwlRKMU4xaCTa4ZzhIxzTgqRzvmpGZmbG0KWEP/HOQ VpbGvaQLWT+EG8nsz+5NDqtZ8YGTFkvfvdpEnfQ0mFitGbXRH8TQtmQEEuobjvxqD3 6eSJVA2Vp5rswFTQ/MX5afnlJaGDDzN/v9bLKW8KWtTHoI3xaTWp5ywNjdsvd9q8r5 Mrw88ipYYgdh1nvW39wimqxkvhcHveIfaM4iUvtPtWqTDxOGyWwrgrkI83Y02kz8Tj zdRYobKGrWuTw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 13/60] arm64: kvm: honour 'nokaslr' command line option for the HYP VA space Date: Tue, 7 Mar 2023 15:04:35 +0100 Message-Id: <20230307140522.2311461-14-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3632; i=ardb@kernel.org; h=from:subject; bh=1oCuWk9VXyzMZa12N2n3i7rjkfl0ioumXcP43D4virk=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRdJbqv235+uE8w374+M/zdWw+XQn9vDuVzFzE5YF2 l3Ivx7cUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACaiuYORod/hRt3pTfbXdvie +zon9n8PzyKlohXsTTab7JbF9f+8tIHhnwGj8hnP7Gev/7+3l1qjZ9am5nk+fWIvq/HTeubDlY8 +MgAA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060710_535372_5AC23140 X-CRM114-Status: GOOD ( 19.85 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Debugging the code that runs in HYP is rather difficult, especially when its placement is randomized. So let's take the 'nokaslr' command line option into account, and allow it to be disabled at boot. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/memory.h | 15 ------------- arch/arm64/include/asm/mmu.h | 23 ++++++++++++++++++++ arch/arm64/kernel/kaslr.c | 3 +-- arch/arm64/kvm/va_layout.c | 3 ++- 4 files changed, 26 insertions(+), 18 deletions(-) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 830740ff79bab902..f96975466ef1b752 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -197,21 +197,6 @@ extern s64 memstart_addr; /* the offset between the kernel virtual and physical mappings */ extern u64 kimage_voffset; -static inline unsigned long kaslr_offset(void) -{ - return (u64)&_text - KIMAGE_VADDR; -} - -static inline bool kaslr_enabled(void) -{ - /* - * The KASLR offset modulo MIN_KIMG_ALIGN is taken from the physical - * placement of the image rather than from the seed, so a displacement - * of less than MIN_KIMG_ALIGN means that no seed was provided. - */ - return kaslr_offset() >= MIN_KIMG_ALIGN; -} - /* * Allow all memory at the discovery stage. We will clip it later. 
*/ diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index 48f8466a4be92ac3..9b9a206e4e9c9d4e 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -72,6 +72,29 @@ extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot); extern void mark_linear_text_alias_ro(void); extern bool kaslr_requires_kpti(void); +static inline unsigned long kaslr_offset(void) +{ + return (u64)&_text - KIMAGE_VADDR; +} + +static inline bool kaslr_enabled(void) +{ + /* + * The KASLR offset modulo MIN_KIMG_ALIGN is taken from the physical + * placement of the image rather than from the seed, so a displacement + * of less than MIN_KIMG_ALIGN means that no seed was provided. + */ + return kaslr_offset() >= MIN_KIMG_ALIGN; +} + +static inline bool kaslr_disabled_cmdline(void) +{ + if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, + ARM64_SW_FEATURE_OVERRIDE_NOKASLR)) + return true; + return false; +} + #define INIT_MM_CONTEXT(name) \ .pgd = init_pg_dir, diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c index abcd996c747b8c97..9e7f0bae5b5fb24b 100644 --- a/arch/arm64/kernel/kaslr.c +++ b/arch/arm64/kernel/kaslr.c @@ -34,8 +34,7 @@ static int __init kaslr_init(void) */ module_alloc_base = (u64)_etext - MODULES_VSIZE; - if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, - ARM64_SW_FEATURE_OVERRIDE_NOKASLR)) { + if (kaslr_disabled_cmdline()) { pr_info("KASLR disabled on command line\n"); return 0; } diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c index 91b22a014610b22f..341b67e2f2514e55 100644 --- a/arch/arm64/kvm/va_layout.c +++ b/arch/arm64/kvm/va_layout.c @@ -72,7 +72,8 @@ __init void kvm_compute_layout(void) va_mask = GENMASK_ULL(tag_lsb - 1, 0); tag_val = hyp_va_msb; - if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && tag_lsb != (vabits_actual - 1)) { + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && tag_lsb != (vabits_actual - 1) && + !kaslr_disabled_cmdline()) { /* We have some free bits to insert a random tag. 
*/ tag_val |= get_random_long() & GENMASK_ULL(vabits_actual - 2, tag_lsb); } From patchwork Tue Mar 7 14:04:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163714 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DD718C678D4 for ; Tue, 7 Mar 2023 14:24:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ZtvPpxDr8YNrTd46pStdmv2DYa9UJ5Kjc+Np4XQL2I8=; b=YJrRXpzcR8O8LZ aZzq/2DebJG9nnQCqD7ly7nQNtYKOon+5Xa00ACBJ3YkbqMZ1Eyka4nV7EG8mLoxETs4ZJWP2uY8x mqPSN24WaDzSjgm8dxhJzQu56tz25EC7+YQMZx19Tb0/ZStoPrUXZKkwb2WtI1nB/dClswAEWOkDP 6xFIKY11YxDfn9ORSmGM1Tzt1WS0xadl5Dzgcwe3hjtwgZ5I948lUJ17iyOQVcrcWFUhfZA4fZasp 2yzYf5fDH6WfOl2naGkE1oZuPHAnYUEF7Hh+q4v0sXxyib3zTAiprL+nwqiFrOafuOXhfJnrjNIXq O/afxSlmLp7/OM0654Ng==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYE0-000l6m-0E; Tue, 07 Mar 2023 14:23:33 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyE-000e6L-Ue for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:17 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 5BDD0B81900; Tue, 7 Mar 2023 14:07:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 070B6C433A0; Tue, 7 Mar 2023 14:07:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198032; bh=nNddimpTffzJ/299GfOhbDILTTlvYfXW/l3wr38dpy8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MwXKEabQb8M2BefvBIsC4AI8Fjk7zHxQXNXYj13EaS45Mj1GQ67so+tc7gpAHNd39 rUeO7Tf4aBgEBdK6HkyLODroiH9+6yI6/uZtMmawwIcNTR12YWaxzMnbfZDAPehRwY FFcvRAbpP3g54gws/uNopwlmXhefzh2EKlyuxAD3kKgnJUt1D64MqcIHVo1r2T9fUh /bomFTiKRw2Z1tTaeeJe1INmpfFH1dzScPXOSGV/VEYOwUV4KptGqRjrdX+ig6QtoF nErqQTKfXjdUNLHU0KfRnnTpGIVFgZwoJFITWt33Opv2NG8WynMIrB5CMsy1UfQnpb j4xone8w9rQUg== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 14/60] arm64: kernel: Manage absolute relocations in code built under pi/ Date: Tue, 7 Mar 2023 15:04:36 +0100 Message-Id: <20230307140522.2311461-15-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6466; i=ardb@kernel.org; h=from:subject; bh=nNddimpTffzJ/299GfOhbDILTTlvYfXW/l3wr38dpy8=; 
b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdReZQhTsr07KvW7jVP1Zx7z8hpdYwkTF+guvSadfXl vEddnPtKGVhEONgkBVTZBGY/ffdztMTpWqdZ8nCzGFlAhnCwMUpABPJ0Gb4Z+fMoLZd1yeoZ/OU 8h7Zt/KzXz4RTGc+/W3x+3kLnZXX7mb47/Zwkanv2Wk/Xbq1HzwNOfPhBXfV23Us7sa7p3OeV3C fyQIA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060715_378401_9982861C X-CRM114-Status: GOOD ( 25.22 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The mini C runtime runs before relocations are processed, and so it cannot rely on statically initialized pointer variables. Add a check to ensure that such code does not get introduced by accident, by going over the relocations in each object, identifying the ones that operate on data sections that are part of the executable image, and raising an error if any relocations of type R_AARCH64_ABS64 exist. Note that such relocations are permitted in other places (e.g., debug sections) and will never occur in compiler generated code sections when using the small code model, so only check sections that have SHF_ALLOC set and SHF_EXECINSTR cleared. To accommodate cases where statically initialized symbol references are unavoidable, introduce a special case for ELF input data sections that have ".rodata.prel64" in their names, and in these cases, instead of rejecting any encountered ABS64 relocations, convert them into PREL64 relocations, which don't require any runtime fixups. Note that the code in question must still be modified to deal with this, as it needs to convert the 64-bit signed offsets into absolute addresses before use. 
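A rough sketch of what that conversion looks like on the consuming side (the struct and function names below are invented for illustration; the patch's real helper is the prel64_to_pointer() routine added to pi.h further down): the 64-bit field ends up holding the signed distance from its own location to the target, and the code re-derives the absolute address by adding the two together.

	/* illustrative sketch only; mirrors the pi.h helper introduced below */
	typedef volatile signed long prel64_t;

	struct example_desc {
		prel64_t target_prel;	/* linker stores: target - &target_prel */
	};

	static void *example_resolve(const struct example_desc *d)
	{
		/* turn the place-relative offset back into an absolute address */
		if (!d->target_prel)
			return NULL;
		return (void *)&d->target_prel + d->target_prel;
	}
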
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/pi/Makefile | 9 +- arch/arm64/kernel/pi/pi.h | 14 +++ arch/arm64/kernel/pi/relacheck.c | 130 ++++++++++++++++++++ 3 files changed, 151 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index c844a0546d7f0e62..bc32a431fe35e6f4 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -22,11 +22,16 @@ KCSAN_SANITIZE := n UBSAN_SANITIZE := n KCOV_INSTRUMENT := n +hostprogs := relacheck + +quiet_cmd_piobjcopy = $(quiet_cmd_objcopy) + cmd_piobjcopy = $(cmd_objcopy) && $(obj)/relacheck $(@) $(<) + $(obj)/%.pi.o: OBJCOPYFLAGS := --prefix-symbols=__pi_ \ --remove-section=.note.gnu.property \ --prefix-alloc-sections=.init -$(obj)/%.pi.o: $(obj)/%.o FORCE - $(call if_changed,objcopy) +$(obj)/%.pi.o: $(obj)/%.o $(obj)/relacheck FORCE + $(call if_changed,piobjcopy) $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE $(call if_changed_rule,cc_o_c) diff --git a/arch/arm64/kernel/pi/pi.h b/arch/arm64/kernel/pi/pi.h new file mode 100644 index 0000000000000000..f455ad385976a664 --- /dev/null +++ b/arch/arm64/kernel/pi/pi.h @@ -0,0 +1,14 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright 2023 Google LLC +// Author: Ard Biesheuvel + +#define __prel64_initconst __section(".init.rodata.prel64") + +typedef volatile signed long prel64_t; + +static inline void *prel64_to_pointer(const prel64_t *offset) +{ + if (!*offset) + return NULL; + return (void *)offset + *offset; +} diff --git a/arch/arm64/kernel/pi/relacheck.c b/arch/arm64/kernel/pi/relacheck.c new file mode 100644 index 0000000000000000..29310c632e9312e7 --- /dev/null +++ b/arch/arm64/kernel/pi/relacheck.c @@ -0,0 +1,130 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022 - Google LLC + * Author: Ard Biesheuvel + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define HOST_ORDER ELFDATA2LSB +#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define HOST_ORDER ELFDATA2MSB +#endif + +static Elf64_Ehdr *ehdr; +static Elf64_Shdr *shdr; +static const char *strtab; +static bool swap; + +static uint64_t swab_elfxword(uint64_t val) +{ + return swap ? __builtin_bswap64(val) : val; +} + +static uint32_t swab_elfword(uint32_t val) +{ + return swap ? __builtin_bswap32(val) : val; +} + +static uint16_t swab_elfhword(uint16_t val) +{ + return swap ? 
__builtin_bswap16(val) : val; +} + +int main(int argc, char *argv[]) +{ + struct stat stat; + int fd, ret; + + if (argc < 3) { + fprintf(stderr, "file arguments missing\n"); + exit(EXIT_FAILURE); + } + + fd = open(argv[1], O_RDWR); + if (fd < 0) { + fprintf(stderr, "failed to open %s\n", argv[1]); + exit(EXIT_FAILURE); + } + + ret = fstat(fd, &stat); + if (ret < 0) { + fprintf(stderr, "failed to stat() %s\n", argv[1]); + exit(EXIT_FAILURE); + } + + ehdr = mmap(0, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); + if (ehdr == MAP_FAILED) { + fprintf(stderr, "failed to mmap() %s\n", argv[1]); + exit(EXIT_FAILURE); + } + + swap = ehdr->e_ident[EI_DATA] != HOST_ORDER; + shdr = (void *)ehdr + swab_elfxword(ehdr->e_shoff); + strtab = (void *)ehdr + + swab_elfxword(shdr[swab_elfhword(ehdr->e_shstrndx)].sh_offset); + + for (int i = 0; i < swab_elfhword(ehdr->e_shnum); i++) { + unsigned long info, flags; + bool prel64 = false; + Elf64_Rela *rela; + int numrela; + + if (swab_elfword(shdr[i].sh_type) != SHT_RELA) + continue; + + /* only consider RELA sections operating on data */ + info = swab_elfword(shdr[i].sh_info); + flags = swab_elfxword(shdr[info].sh_flags); + if ((flags & (SHF_ALLOC | SHF_EXECINSTR)) != SHF_ALLOC) + continue; + + /* + * We generally don't permit ABS64 relocations in the code that + * runs before relocation processing occurs. If statically + * initialized absolute symbol references are unavoidable, they + * may be emitted into a *.rodata.prel64 section and they will + * be converted to place-relative 64-bit references. This + * requires special handling in the referring code. + */ + if (strstr(strtab + swab_elfword(shdr[info].sh_name), + ".rodata.prel64")) { + prel64 = true; + } + + rela = (void *)ehdr + swab_elfxword(shdr[i].sh_offset); + numrela = swab_elfxword(shdr[i].sh_size) / sizeof(*rela); + + for (int j = 0; j < numrela; j++) { + uint64_t info = swab_elfxword(rela[j].r_info); + + if (ELF64_R_TYPE(info) != R_AARCH64_ABS64) + continue; + + if (prel64) { + /* convert ABS64 into PREL64 */ + info ^= R_AARCH64_ABS64 ^ R_AARCH64_PREL64; + rela[j].r_info = swab_elfxword(info); + } else { + fprintf(stderr, + "Unexpected absolute relocations detected in %s\n", + argv[2]); + close(fd); + unlink(argv[1]); + exit(EXIT_FAILURE); + } + } + } + close(fd); + return 0; +} From patchwork Tue Mar 7 14:04:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163715 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3A10DC678D5 for ; Tue, 7 Mar 2023 14:25:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=O9r7B/Ujq2MofWjrg93MnldVBdbaae1aJEqNtQ4NznI=; b=lvTF/Un1gzKsuh 4UvK15LaeKkHRcZ3GX/9iJraI1V2zv1CGpID73D90DBa8aaT1VGclsVIF8YgIVjPXOxqrL1BxEn6R 
sh9CB82U2iwIJiWE5K2paeteJCtltsg/e7fb6qeRtEDX4fc3Fvo60ZaIkFkPCncVtHWgccA6P403W tvXpGyC/M52S5AxRhrM5/btjsJOkS/vW6pQTYMIeqNKgIM06QJPQ6W6Wwo31Vz41KI/1qSBVXJUCo n0r6Cui7igqPmsp8uwXUYO3HafWShX+XCQxN0rs8PrjD2G6GdEHo0UZnTUuysjE4tmRGA0j4Laobo VLofJiJOl9Huz4EWGfYQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYEg-000lRC-Rd; Tue, 07 Mar 2023 14:24:15 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyF-000e74-KQ for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:19 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id ECA936143A; Tue, 7 Mar 2023 14:07:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 68B8CC433A7; Tue, 7 Mar 2023 14:07:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198034; bh=U9Q2LWSJSvk5m1MyFlpjrltj8tlAIY8VDuwltSIfhXY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZIJ/gOJheON8Aji0qDNhQnEuwDW+QhHyhlXMvKQh6vVAZLh0STmdvxvoH5/BNUnwZ xGzDlcNyVp21u+h8JbSnk142i3mHFJukXTNWBJ6aw1RmaIbf8rLIULZPu+HBfwIqhu Ux8ZNB//NMLY5KDo0p5ZavKJVmr3r0z2uCpaOLhxGfnaralV01HeefZvtUIfNl66UF Zi69BEXLblnsIXv6g0g1R6Cgr2EFgYnoavH/m01/ojo4PxGxFloAymQvQOh3oZ2y/E 9VjTrQuuUq+CaQ5sGpFJOC4tgF+pBn/SZc9y3PSsOW4sCEDGjuKONibxqN4ZtfGnTg qTBMLKqMaZ0IQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 15/60] arm64: kernel: Don't rely on objcopy to make code under pi/ __init Date: Tue, 7 Mar 2023 15:04:37 +0100 Message-Id: <20230307140522.2311461-16-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3206; i=ardb@kernel.org; h=from:subject; bh=U9Q2LWSJSvk5m1MyFlpjrltj8tlAIY8VDuwltSIfhXY=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRbG38f/aXp9Pgttn779Utcn/n7TWPZPmGPOFM76wZ uf9NDrbUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACbSGs7wT/G007ec+QdK2F44 ciTFyzPn5e43fbCWc6/Qed7bqjPFYhkZ1qz0b79+Je8Yw8Lwq7FHlSYy5/5bFOnqlXvia35F9LV kTgA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060715_813671_3441FF16 X-CRM114-Status: GOOD ( 13.97 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org We will add some code under pi/ that contains global variables that should not end up in __initdata, as they will not be writable via the initial ID map. So only rely on objcopy for making the libfdt code __init, and use explicit annotations for the rest. 
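As a hedged illustration of the distinction (the identifiers below are examples, not code from this patch): --prefix-alloc-sections=.init renames every allocatable section wholesale, whereas explicit annotation marks only the symbols that are genuinely init-only and leaves ordinary globals in .data/.bss, where the early code can keep using them.

	#include <linux/init.h>
	#include <linux/types.h>

	/* ordinary global: stays in .bss rather than __initdata (example only) */
	static u64 early_state;

	/* genuinely init-only constant and function, annotated explicitly */
	static char const chosen_str[] __initconst = "chosen";

	static const char *__init init_only_name(void)
	{
		early_state++;
		return chosen_str;
	}
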
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/pi/Makefile | 6 ++++-- arch/arm64/kernel/pi/kaslr_early.c | 16 +++++++++------- 2 files changed, 13 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index bc32a431fe35e6f4..2bbe866417d453ff 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -28,11 +28,13 @@ quiet_cmd_piobjcopy = $(quiet_cmd_objcopy) cmd_piobjcopy = $(cmd_objcopy) && $(obj)/relacheck $(@) $(<) $(obj)/%.pi.o: OBJCOPYFLAGS := --prefix-symbols=__pi_ \ - --remove-section=.note.gnu.property \ - --prefix-alloc-sections=.init + --remove-section=.note.gnu.property $(obj)/%.pi.o: $(obj)/%.o $(obj)/relacheck FORCE $(call if_changed,piobjcopy) +# ensure that all the lib- code ends up as __init code and data +$(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init + $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE $(call if_changed_rule,cc_o_c) diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c index b9e0bb4bc6a9766f..167081b30a152d0a 100644 --- a/arch/arm64/kernel/pi/kaslr_early.c +++ b/arch/arm64/kernel/pi/kaslr_early.c @@ -17,7 +17,7 @@ #include /* taken from lib/string.c */ -static char *__strstr(const char *s1, const char *s2) +static char *__init __strstr(const char *s1, const char *s2) { size_t l1, l2; @@ -33,7 +33,7 @@ static char *__strstr(const char *s1, const char *s2) } return NULL; } -static bool cmdline_contains_nokaslr(const u8 *cmdline) +static bool __init cmdline_contains_nokaslr(const u8 *cmdline) { const u8 *str; @@ -41,7 +41,7 @@ static bool cmdline_contains_nokaslr(const u8 *cmdline) return str == cmdline || (str > cmdline && *(str - 1) == ' '); } -static bool is_kaslr_disabled_cmdline(void *fdt) +static bool __init is_kaslr_disabled_cmdline(void *fdt) { if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) { int node; @@ -67,17 +67,19 @@ static bool is_kaslr_disabled_cmdline(void *fdt) return cmdline_contains_nokaslr(CONFIG_CMDLINE); } -static u64 get_kaslr_seed(void *fdt) +static u64 __init get_kaslr_seed(void *fdt) { + static char const chosen_str[] __initconst = "chosen"; + static char const seed_str[] __initconst = "kaslr-seed"; int node, len; fdt64_t *prop; u64 ret; - node = fdt_path_offset(fdt, "/chosen"); + node = fdt_path_offset(fdt, chosen_str); if (node < 0) return 0; - prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len); + prop = fdt_getprop_w(fdt, node, seed_str, &len); if (!prop || len != sizeof(u64)) return 0; @@ -86,7 +88,7 @@ static u64 get_kaslr_seed(void *fdt) return ret; } -asmlinkage u64 kaslr_early_init(void *fdt) +asmlinkage u64 __init kaslr_early_init(void *fdt) { u64 seed, range; From patchwork Tue Mar 7 14:04:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163716 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 87ACAC678D4 for ; Tue, 7 Mar 2023 14:26:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: 
Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=eKAq0TLgim2tQOEOem7I2HqCqlk9GwO6bXqIUkr16hY=; b=sF79M0rGyqiXxM rIytMI1EerYF3tRMWho5UZZX+G7EFfXBOQKBf7yQ2quEmFqlqYszALAtnlxK+GTFI3Mom1O97/jEP gPrXkvAkyytyrcUVm+QyrXs3FP/i3fygUQ4N0khFfqW+Gajhs4c4tR9EeTKSLhlWZMW8GafuhfGak IoU4vEV9kGfvSdNVm+oHads8Z37CK8CP1M6O7JkYsG0rjTub+tR+vvENn4Dh59lRTlNJGKETLacLi 7Xzkemk9aRv4utV21aKssYKuNl53Ip6j4DK+58mENVoqkBKin9KTkL3T/X3aJnei0jbV/+FHOID+4 FIlGneWOaI3AJcOD5SrA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYF7-000leA-Db; Tue, 07 Mar 2023 14:24:41 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyI-000e8X-BV for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:21 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5BA6261458; Tue, 7 Mar 2023 14:07:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CAD4DC4339C; Tue, 7 Mar 2023 14:07:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198036; bh=XnTvDw4ttdVCcfAeL2dMDAGtCMBFr94CQLQQmSGCFiY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iioG0f+xikmcLwqqv55zuuJUYG85WwLmOHchnc+kJlH7ULC9s3GK6OaXARyL2o6S/ qsDiDHjm9S9ectzT2lStvTr3IjA0jmFl0ZfJu5U/tcNVCy0uNA6w5qSCUcYI6sZQTN luBE6dOrPzni/K4wPVCg54bQ4KxRhVnUM9XeSAlvtxXk1YRRp4lYqMp/2TrQvJxlAd bZlAMiPUy4WKwTNCjMAfKAGowQOJLSYCjk24URoe97yQfEPQ3AEoGkhevB9f8caHAi Nu6fB9sIC4ZC1xa38Tq/+l1hHv2WC9VkENVmh9AlzhrM9ArsozwL9/LubPKgMzNcLN UsjyXIJ1KnzbQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 16/60] arm64: head: move relocation handling to C code Date: Tue, 7 Mar 2023 15:04:38 +0100 Message-Id: <20230307140522.2311461-17-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=10151; i=ardb@kernel.org; h=from:subject; bh=XnTvDw4ttdVCcfAeL2dMDAGtCMBFr94CQLQQmSGCFiY=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdReXu2uPdS+6/s5VVaPtz8/m7KVvcSoOO+aQ+dN2Y9 jv561aFjlIWBjEOBlkxRRaB2X/f7Tw9UarWeZYszBxWJpAhDFycAjARU2GG/ynPPnA/CTiuyLvc ZWG4oG6RVE2454H7CrfPbwjgkiz/9oLhf1aNmxv3C8d5OZKnrTadUN9jLm5qcn9zqBHjqYjq0sg ULgA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060718_580158_9A360679 X-CRM114-Status: GOOD ( 29.10 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that we have a mini C runtime before the kernel mapping is up, we can move the non-trivial relocation processing code out of head.S and reimplement it in C. 
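The core of the C replacement is small; as a condensed sketch (function name invented here), the RELA half of the new code boils down to the loop below, while the RELR decoding and the surrounding boot plumbing are in the full relocate.c added by this patch:

	#include <linux/elf.h>
	#include <linux/init.h>
	#include <linux/types.h>

	extern const Elf64_Rela rela_start[], rela_end[];

	static void __init apply_rela(u64 offset)
	{
		/* shift every R_AARCH64_RELATIVE target by the randomization offset */
		for (const Elf64_Rela *rela = rela_start; rela < rela_end; rela++) {
			if (ELF64_R_TYPE(rela->r_info) != R_AARCH64_RELATIVE)
				continue;
			*(u64 *)(rela->r_offset + offset) = rela->r_addend + offset;
		}
	}
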
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/Makefile | 3 +- arch/arm64/kernel/head.S | 104 ++------------------ arch/arm64/kernel/pi/Makefile | 5 +- arch/arm64/kernel/pi/relocate.c | 62 ++++++++++++ arch/arm64/kernel/vmlinux.lds.S | 12 ++- 5 files changed, 82 insertions(+), 104 deletions(-) diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index ceba6792f5b3c473..e27168d6ed2050b9 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -58,7 +58,8 @@ obj-$(CONFIG_ACPI) += acpi.o obj-$(CONFIG_ACPI_NUMA) += acpi_numa.o obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o obj-$(CONFIG_PARAVIRT) += paravirt.o -obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o pi/ +obj-$(CONFIG_RELOCATABLE) += pi/ +obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o obj-$(CONFIG_ELF_CORE) += elfcore.o obj-$(CONFIG_KEXEC_CORE) += machine_kexec.o relocate_kernel.o \ diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 65cdaaa2c859418f..5047a2952ec273f9 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -81,7 +81,7 @@ * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x22 create_idmap() .. start_kernel() ID map VA of the DT blob - * x23 primary_entry() .. start_kernel() physical misalignment/KASLR offset + * x23 __primary_switch() physical misalignment/KASLR offset * x24 __primary_switch() linear map KASLR seed * x25 primary_entry() .. start_kernel() supported VA size * x28 create_idmap() callee preserved temp register @@ -389,7 +389,7 @@ SYM_FUNC_START_LOCAL(create_idmap) /* Remap the kernel page tables r/w in the ID map */ adrp x1, _text adrp x2, init_pg_dir - adrp x3, init_pg_end + adrp x3, _end bic x4, x2, #SWAPPER_BLOCK_SIZE - 1 mov x5, SWAPPER_RW_MMUFLAGS mov x6, #SWAPPER_BLOCK_SHIFT @@ -777,97 +777,6 @@ SYM_FUNC_START_LOCAL(__no_granule_support) b 1b SYM_FUNC_END(__no_granule_support) -#ifdef CONFIG_RELOCATABLE -SYM_FUNC_START_LOCAL(__relocate_kernel) - /* - * Iterate over each entry in the relocation table, and apply the - * relocations in place. - */ - adr_l x9, __rela_start - adr_l x10, __rela_end - mov_q x11, KIMAGE_VADDR // default virtual offset - add x11, x11, x23 // actual virtual offset - -0: cmp x9, x10 - b.hs 1f - ldp x12, x13, [x9], #24 - ldr x14, [x9, #-8] - cmp w13, #R_AARCH64_RELATIVE - b.ne 0b - add x14, x14, x23 // relocate - str x14, [x12, x23] - b 0b - -1: -#ifdef CONFIG_RELR - /* - * Apply RELR relocations. - * - * RELR is a compressed format for storing relative relocations. The - * encoded sequence of entries looks like: - * [ AAAAAAAA BBBBBBB1 BBBBBBB1 ... AAAAAAAA BBBBBB1 ... ] - * - * i.e. start with an address, followed by any number of bitmaps. The - * address entry encodes 1 relocation. The subsequent bitmap entries - * encode up to 63 relocations each, at subsequent offsets following - * the last address entry. - * - * The bitmap entries must have 1 in the least significant bit. The - * assumption here is that an address cannot have 1 in lsb. Odd - * addresses are not supported. Any odd addresses are stored in the RELA - * section, which is handled above. - * - * Excluding the least significant bit in the bitmap, each non-zero - * bit in the bitmap represents a relocation to be applied to - * a corresponding machine word that follows the base address - * word. 
The second least significant bit represents the machine - * word immediately following the initial address, and each bit - * that follows represents the next word, in linear order. As such, - * a single bitmap can encode up to 63 relocations in a 64-bit object. - * - * In this implementation we store the address of the next RELR table - * entry in x9, the address being relocated by the current address or - * bitmap entry in x13 and the address being relocated by the current - * bit in x14. - */ - adr_l x9, __relr_start - adr_l x10, __relr_end - -2: cmp x9, x10 - b.hs 7f - ldr x11, [x9], #8 - tbnz x11, #0, 3f // branch to handle bitmaps - add x13, x11, x23 - ldr x12, [x13] // relocate address entry - add x12, x12, x23 - str x12, [x13], #8 // adjust to start of bitmap - b 2b - -3: mov x14, x13 -4: lsr x11, x11, #1 - cbz x11, 6f - tbz x11, #0, 5f // skip bit if not set - ldr x12, [x14] // relocate bit - add x12, x12, x23 - str x12, [x14] - -5: add x14, x14, #8 // move to next bit's address - b 4b - -6: /* - * Move to the next bitmap's address. 8 is the word size, and 63 is the - * number of significant bits in a bitmap entry. - */ - add x13, x13, #(8 * 63) - b 2b - -7: -#endif - ret - -SYM_FUNC_END(__relocate_kernel) -#endif - SYM_FUNC_START_LOCAL(__primary_switch) adrp x1, reserved_pg_dir adrp x2, init_idmap_pg_dir @@ -875,11 +784,11 @@ SYM_FUNC_START_LOCAL(__primary_switch) #ifdef CONFIG_RELOCATABLE adrp x23, KERNEL_START and x23, x23, MIN_KIMG_ALIGN - 1 -#ifdef CONFIG_RANDOMIZE_BASE - mov x0, x22 - adrp x1, init_pg_end + adrp x1, early_init_stack mov sp, x1 mov x29, xzr +#ifdef CONFIG_RANDOMIZE_BASE + mov x0, x22 bl __pi_kaslr_early_init and x24, x0, #SZ_2M - 1 // capture memstart offset seed bic x0, x0, #SZ_2M - 1 @@ -892,7 +801,8 @@ SYM_FUNC_START_LOCAL(__primary_switch) adrp x1, init_pg_dir load_ttbr1 x1, x1, x2 #ifdef CONFIG_RELOCATABLE - bl __relocate_kernel + mov x0, x23 + bl __pi_relocate_kernel #endif ldr x8, =__primary_switched adrp x0, KERNEL_START // __pa(KERNEL_START) diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index 2bbe866417d453ff..d084c1dcf4165420 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -38,5 +38,6 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE $(call if_changed_rule,cc_o_c) -obj-y := kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o -extra-y := $(patsubst %.pi.o,%.o,$(obj-y)) +obj-y := relocate.pi.o +obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o +extra-y := $(patsubst %.pi.o,%.o,$(obj-y)) diff --git a/arch/arm64/kernel/pi/relocate.c b/arch/arm64/kernel/pi/relocate.c new file mode 100644 index 0000000000000000..1853408ea76b0e5d --- /dev/null +++ b/arch/arm64/kernel/pi/relocate.c @@ -0,0 +1,62 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright 2023 Google LLC +// Authors: Ard Biesheuvel +// Peter Collingbourne + +#include +#include +#include + +extern const Elf64_Rela rela_start[], rela_end[]; +extern const u64 relr_start[], relr_end[]; + +void __init relocate_kernel(u64 offset) +{ + u64 *place = NULL; + + for (const Elf64_Rela *rela = rela_start; rela < rela_end; rela++) { + if (ELF64_R_TYPE(rela->r_info) != R_AARCH64_RELATIVE) + continue; + *(u64 *)(rela->r_offset + offset) = rela->r_addend + offset; + } + + if (!IS_ENABLED(CONFIG_RELR) || !offset) + return; + + /* + * Apply RELR relocations. + * + * RELR is a compressed format for storing relative relocations. 
The + * encoded sequence of entries looks like: + * [ AAAAAAAA BBBBBBB1 BBBBBBB1 ... AAAAAAAA BBBBBB1 ... ] + * + * i.e. start with an address, followed by any number of bitmaps. The + * address entry encodes 1 relocation. The subsequent bitmap entries + * encode up to 63 relocations each, at subsequent offsets following + * the last address entry. + * + * The bitmap entries must have 1 in the least significant bit. The + * assumption here is that an address cannot have 1 in lsb. Odd + * addresses are not supported. Any odd addresses are stored in the + * RELA section, which is handled above. + * + * With the exception of the least significant bit, each bit in the + * bitmap corresponds with a machine word that follows the base address + * word, and the bit value indicates whether or not a relocation needs + * to be applied to it. The second least significant bit represents the + * machine word immediately following the initial address, and each bit + * that follows represents the next word, in linear order. As such, a + * single bitmap can encode up to 63 relocations in a 64-bit object. + */ + for (const u64 *relr = relr_start; relr < relr_end; relr++) { + if ((*relr & 1) == 0) { + place = (u64 *)(*relr + offset); + *place++ += offset; + } else { + for (u64 *p = place, r = *relr >> 1; r; p++, r >>= 1) + if (r & 1) + *p += offset; + place += 63; + } + } +} diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index b9202c2ee18e02d8..ec24b1e70d606ec8 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -271,15 +271,15 @@ SECTIONS HYPERVISOR_RELOC_SECTION .rela.dyn : ALIGN(8) { - __rela_start = .; + __pi_rela_start = .; *(.rela .rela*) - __rela_end = .; + __pi_rela_end = .; } .relr.dyn : ALIGN(8) { - __relr_start = .; + __pi_relr_start = .; *(.relr.dyn) - __relr_end = .; + __pi_relr_end = .; } . = ALIGN(SEGMENT_ALIGN); @@ -318,6 +318,10 @@ SECTIONS init_pg_dir = .; . += INIT_DIR_SIZE; init_pg_end = .; +#ifdef CONFIG_RELOCATABLE + . += SZ_4K; /* stack for the early relocation code */ + early_init_stack = .; +#endif . = ALIGN(SEGMENT_ALIGN); __pecoff_data_size = ABSOLUTE(. 
- __initdata_begin); From patchwork Tue Mar 7 14:04:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163717 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 98B13C678D4 for ; Tue, 7 Mar 2023 14:26:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=tmqyr1bN3w5N9N1zhYxoopUvofOXVoYx0IL5X/25GOw=; b=JXYxfYKq72qCOG kX1/KhcfMjQcB1I+bmW2q0xqF1C1fl2YA1nRw9CiNNyYhVPAX2sbB96THxn28kJazW9cHiMpAqbCP ZXfb7Y6KE/cTr2zLvA161B2Pja52Vsy6N49LO/dqQeFoW8MgcTVjjYHVloTeD59Sy2LUwUaIHXSGb TAmmRUhBJXOFJYRYF86h68Hhq5hFVyvJFbrdmDyga5b8Mvu//UBdKDt8XyPoc3klbdDm9Hfw5zZKE gyQJSJOUG8FKHAoJCw1Q3q/8kia2iVAxxzIqBGX68H/XqCOYmmJAQp/lBx9LJzcEGScAjc3Jdjwz0 IVF6pHc9Qtz9WD+NnzQA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYFt-000lyp-KA; Tue, 07 Mar 2023 14:25:29 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyK-000eA2-HF for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:22 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id BCD6E6145B; Tue, 7 Mar 2023 14:07:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 38B4CC4339E; Tue, 7 Mar 2023 14:07:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198039; bh=7Sb3pK6D439uyI1XL+ABWWcj41jxBle1rc2gq0YdsQ8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tixza3fh2iLkW/0IOjwq8L0ccv12ajKwiWMFqjfERXEu7HSt5xQM4RMYD7XEP+u79 cfhBU+nzvjZBjDx3bgCjXKLyksXOKPU0r00+EubWat9SsXgv5itENFu5pbP8S+Khok +i+XwncHtc+7xzoTav3Op9UyxCrW9h3ACRAAPn7/c519u535tcluevGZlBqZfYgX9B X+gchjw4P+K9x/PifAighK4hf1I3bqYuN8DvEACQkc6nF/m9Rf3N8Rv/ACT2rD4Wx9 S2Sixivicqys7VHcN5FR6vb+p1YnTRFtbefl8HAaPlo4LAKcCdMyLzU6vYx2DNRWe+ A8ZOrhygGINOw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 17/60] arm64: idreg-override: Omit non-NULL checks for override pointer Date: Tue, 7 Mar 2023 15:04:39 +0100 Message-Id: <20230307140522.2311461-18-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1672; i=ardb@kernel.org; h=from:subject; bh=7Sb3pK6D439uyI1XL+ABWWcj41jxBle1rc2gq0YdsQ8=; 
b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRd1lZ7VMw4MrKsxPpF5dipOxfnDdPsrv7sykWMHlj du3/JLpKGVhEONgkBVTZBGY/ffdztMTpWqdZ8nCzGFlAhnCwMUpABMpD2b4p/h4qfT+1/9YMlfO CFV51uRQs1Rtrfrtf+2ZbE4/Zc+EOzL89/g4f2FIxvHNs59XOy5bYpC4UCGBU+7KpFPbGLQa2ZJ kGQA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060720_695234_3D93EB3A X-CRM114-Status: GOOD ( 14.73 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that override pointers are always set, we can drop the various non-NULL checks that we have in the code. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/idreg-override.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index 434703e4e55cb785..b6e90ee6857eb758 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -196,9 +196,6 @@ static void __init match_options(const char *cmdline) for (i = 0; i < ARRAY_SIZE(regs); i++) { int f; - if (!regs[i]->override) - continue; - for (f = 0; strlen(regs[i]->fields[f].name); f++) { u64 shift = regs[i]->fields[f].shift; u64 width = regs[i]->fields[f].width ?: 4; @@ -299,10 +296,8 @@ asmlinkage void __init init_feature_override(u64 boot_status) int i; for (i = 0; i < ARRAY_SIZE(regs); i++) { - if (regs[i]->override) { - regs[i]->override->val = 0; - regs[i]->override->mask = 0; - } + regs[i]->override->val = 0; + regs[i]->override->mask = 0; } __boot_status = boot_status; @@ -310,9 +305,8 @@ asmlinkage void __init init_feature_override(u64 boot_status) parse_cmdline(); for (i = 0; i < ARRAY_SIZE(regs); i++) { - if (regs[i]->override) - dcache_clean_inval_poc((unsigned long)regs[i]->override, - (unsigned long)regs[i]->override + - sizeof(*regs[i]->override)); + dcache_clean_inval_poc((unsigned long)regs[i]->override, + (unsigned long)regs[i]->override + + sizeof(*regs[i]->override)); } } From patchwork Tue Mar 7 14:04:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163718 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 042BAC6FD1B for ; Tue, 7 Mar 2023 14:27:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=iapANg9BGbh6VWe3G1NNfHWy7DP/1eA7WnBkaLy+LdA=; b=xuKx/VC+CYOtgr QPGcacuJqh7pIa/rBmRJAh+/vpRRV2UYXSDQuqA5O4UPbtzSR2624o1su0BoOAoHck42uyo/FN25V UXwIQy54yamN5gygP/v2TUI1J22Rim3KfG2VsbFwiGhcuBXkAaEZO1ub1FZuTOxCEmaoyYCPBghtp 
Xv+9I+K3ovvGgWsULWCIOJejj8hJVpQwAZm+3Q/o3+PBvNAcuqphYGoYDrxK5f88x2b7q4ZqUSNgo thxFmqR15aqjUveZ2U7kMUaIovvw8ZEQAK0GFwyIe8tOiVznX39Gy76wW3T56Yxyb0m6HSx1fGwGl /gRpldC1UFKM25DmxAaQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYGf-000mOX-NN; Tue, 07 Mar 2023 14:26:17 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyN-000eBp-65 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:25 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2BDB661456; Tue, 7 Mar 2023 14:07:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9B308C4339C; Tue, 7 Mar 2023 14:07:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198041; bh=VdvMqNl4JSt5SmirAQSento60Ayr11cxcKTY8Dzuwjc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fP2ted0R9U/fUqsNQvWqEs27N/xb5x7NJ/nMlSTGL/QA7FezlBjpNvyK44mZsySyz /er95ikyQsAh3q5bM+6HjENLnCJkMZkh1M9mxeLidLx/sUhdcIO5jMeRX0f6wDGg7/ TnZX3zm4HPnT8gB+Q5BaZehllTf7Jf5xnOZGJoxPtHgYN+5mHz0RYt2ea8aO+u65Zs Eov+QoD3vNrBWy1phH/z0Kc5TubMmTZ3WENb0wgiljV840iyvDKbetLCSMp6Jljx8r Qd0N2X0FejaNEbwzp31B9BVPV66O1rUQoBO7Ck5XyzSXi5Jwfc15XBv2qXSAg5baM2 2UzUpl4DbAQAw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 18/60] arm64: idreg-override: Prepare for place relative reloc patching Date: Tue, 7 Mar 2023 15:04:40 +0100 Message-Id: <20230307140522.2311461-19-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=7371; i=ardb@kernel.org; h=from:subject; bh=VdvMqNl4JSt5SmirAQSento60Ayr11cxcKTY8Dzuwjc=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRestC9O/ad8POB/+d0bZ7zdX7Y4FN5c9XXpNYLlex IWfthfjOkpZGMQ4GGTFFFkEZv99t/P0RKla51myMHNYmUCGMHBxCsBE/pxg+Ke2XL7kg7St14oj l0Ssj82ck2N4q3oGxzJuzZU3+AwNrvcw/M9ZpnpW+Osl8w25TyaY59WVfe27M9uj8/PDiJjUlFu CNiwA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060723_399006_DD2DC30C X-CRM114-Status: GOOD ( 21.80 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The ID reg override handling code uses a rather elaborate data structure that relies on statically initialized absolute address values in pointer fields. This means that this code cannot run until relocation fixups have been applied, and this is unfortunate, because it means we cannot discover overrides for KASLR or LVA/LPA without creating the kernel mapping and performing the relocations first. This can be solved by switching to place-relative relocations, which can be applied by the linker at build time. 
This means some additional arithmetic is required when dereferencing these pointers, as we can no longer dereference the pointer members directly. So let's implement this for idreg-override.c in a preliminary way, i.e., convert all the references in code to use a special accessor that produces the correct absolute value at runtime. To preserve the strong type checking for the static initializers, use union types for representing the hybrid quantities. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/idreg-override.c | 98 +++++++++++++------- 1 file changed, 65 insertions(+), 33 deletions(-) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index b6e90ee6857eb758..fc9ed722621412bf 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -21,14 +21,32 @@ static u64 __boot_status __initdata; +// temporary __prel64 related definitions +// to be removed when this code is moved under pi/ + +#define __prel64_initconst __initconst + +typedef void *prel64_t; + +static void *prel64_to_pointer(const prel64_t *p) +{ + return *p; +} + struct ftr_set_desc { char name[FTR_DESC_NAME_LEN]; - struct arm64_ftr_override *override; + union { + struct arm64_ftr_override *override; + prel64_t override_prel; + }; struct { char name[FTR_DESC_FIELD_LEN]; u8 shift; u8 width; - bool (*filter)(u64 val); + union { + bool (*filter)(u64 val); + prel64_t filter_prel; + }; } fields[]; }; @@ -46,7 +64,7 @@ static bool __init mmfr1_vh_filter(u64 val) val == 0); } -static const struct ftr_set_desc mmfr1 __initconst = { +static const struct ftr_set_desc mmfr1 __prel64_initconst = { .name = "id_aa64mmfr1", .override = &id_aa64mmfr1_override, .fields = { @@ -70,7 +88,7 @@ static bool __init pfr0_sve_filter(u64 val) return true; } -static const struct ftr_set_desc pfr0 __initconst = { +static const struct ftr_set_desc pfr0 __prel64_initconst = { .name = "id_aa64pfr0", .override = &id_aa64pfr0_override, .fields = { @@ -94,7 +112,7 @@ static bool __init pfr1_sme_filter(u64 val) return true; } -static const struct ftr_set_desc pfr1 __initconst = { +static const struct ftr_set_desc pfr1 __prel64_initconst = { .name = "id_aa64pfr1", .override = &id_aa64pfr1_override, .fields = { @@ -105,7 +123,7 @@ static const struct ftr_set_desc pfr1 __initconst = { }, }; -static const struct ftr_set_desc isar1 __initconst = { +static const struct ftr_set_desc isar1 __prel64_initconst = { .name = "id_aa64isar1", .override = &id_aa64isar1_override, .fields = { @@ -117,7 +135,7 @@ static const struct ftr_set_desc isar1 __initconst = { }, }; -static const struct ftr_set_desc isar2 __initconst = { +static const struct ftr_set_desc isar2 __prel64_initconst = { .name = "id_aa64isar2", .override = &id_aa64isar2_override, .fields = { @@ -127,7 +145,7 @@ static const struct ftr_set_desc isar2 __initconst = { }, }; -static const struct ftr_set_desc smfr0 __initconst = { +static const struct ftr_set_desc smfr0 __prel64_initconst = { .name = "id_aa64smfr0", .override = &id_aa64smfr0_override, .fields = { @@ -138,7 +156,7 @@ static const struct ftr_set_desc smfr0 __initconst = { }, }; -static const struct ftr_set_desc sw_features __initconst = { +static const struct ftr_set_desc sw_features __prel64_initconst = { .name = "arm64_sw", .override = &arm64_sw_feature_override, .fields = { @@ -147,14 +165,17 @@ static const struct ftr_set_desc sw_features __initconst = { }, }; -static const struct ftr_set_desc * const regs[] __initconst = { - &mmfr1, - &pfr0, - &pfr1, - &isar1, - &isar2, - 
&smfr0, - &sw_features, +static const union { + const struct ftr_set_desc *reg; + prel64_t reg_prel; +} regs[] __prel64_initconst = { + { .reg = &mmfr1 }, + { .reg = &pfr0 }, + { .reg = &pfr1 }, + { .reg = &isar1 }, + { .reg = &isar2 }, + { .reg = &smfr0 }, + { .reg = &sw_features }, }; static const struct { @@ -194,15 +215,20 @@ static void __init match_options(const char *cmdline) int i; for (i = 0; i < ARRAY_SIZE(regs); i++) { + const struct ftr_set_desc *reg = prel64_to_pointer(®s[i].reg_prel); + struct arm64_ftr_override *override; int f; - for (f = 0; strlen(regs[i]->fields[f].name); f++) { - u64 shift = regs[i]->fields[f].shift; - u64 width = regs[i]->fields[f].width ?: 4; + override = prel64_to_pointer(®->override_prel); + + for (f = 0; strlen(reg->fields[f].name); f++) { + u64 shift = reg->fields[f].shift; + u64 width = reg->fields[f].width ?: 4; u64 mask = GENMASK_ULL(shift + width - 1, shift); + bool (*filter)(u64 val); u64 v; - if (find_field(cmdline, regs[i], f, &v)) + if (find_field(cmdline, reg, f, &v)) continue; /* @@ -210,16 +236,16 @@ static void __init match_options(const char *cmdline) * it by setting the value to the all-ones while * clearing the mask... Yes, this is fragile. */ - if (regs[i]->fields[f].filter && - !regs[i]->fields[f].filter(v)) { - regs[i]->override->val |= mask; - regs[i]->override->mask &= ~mask; + filter = prel64_to_pointer(®->fields[f].filter_prel); + if (filter && !filter(v)) { + override->val |= mask; + override->mask &= ~mask; continue; } - regs[i]->override->val &= ~mask; - regs[i]->override->val |= (v << shift) & mask; - regs[i]->override->mask |= mask; + override->val &= ~mask; + override->val |= (v << shift) & mask; + override->mask |= mask; return; } @@ -293,11 +319,16 @@ void init_feature_override(u64 boot_status); asmlinkage void __init init_feature_override(u64 boot_status) { + struct arm64_ftr_override *override; + const struct ftr_set_desc *reg; int i; for (i = 0; i < ARRAY_SIZE(regs); i++) { - regs[i]->override->val = 0; - regs[i]->override->mask = 0; + reg = prel64_to_pointer(®s[i].reg_prel); + override = prel64_to_pointer(®->override_prel); + + override->val = 0; + override->mask = 0; } __boot_status = boot_status; @@ -305,8 +336,9 @@ asmlinkage void __init init_feature_override(u64 boot_status) parse_cmdline(); for (i = 0; i < ARRAY_SIZE(regs); i++) { - dcache_clean_inval_poc((unsigned long)regs[i]->override, - (unsigned long)regs[i]->override + - sizeof(*regs[i]->override)); + reg = prel64_to_pointer(®s[i].reg_prel); + override = prel64_to_pointer(®->override_prel); + dcache_clean_inval_poc((unsigned long)override, + (unsigned long)(override + 1)); } } From patchwork Tue Mar 7 14:04:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163719 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id ABEADC678D4 for ; Tue, 7 Mar 2023 14:28:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: 
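
A note for readers following the prel64 conversion in the hunks above (where the mail archive has collapsed "&reg"/"&regs" into "(R)"-style entities): the temporary prel64_to_pointer() stub still stores and returns plain absolute pointers, but the union layout is what later allows the very same 64-bit slots to hold place-relative offsets. A minimal hosted-C sketch of the general technique, as an assumption about the eventual shape rather than the series' final implementation:

#include <stddef.h>
#include <stdint.h>

/* A place-relative pointer stores the signed distance from the field's own
 * location to the object it refers to, so its value does not depend on where
 * the image is loaded and needs no relocation fixup before use. */
typedef int64_t prel64_t;

static inline void *prel64_to_pointer(const prel64_t *offset)
{
	if (!*offset)
		return NULL;			/* treat 0 as a NULL pointer */
	/* absolute address = address of the field itself + stored offset */
	return (void *)((intptr_t)offset + *offset);
}

int main(void)
{
	static int target = 42;
	static prel64_t rel;

	/* The linker would emit this difference at build time; computing it
	 * by hand here merely exercises the accessor. */
	rel = (intptr_t)&target - (intptr_t)&rel;

	return *(int *)prel64_to_pointer(&rel) == 42 ? 0 : 1;
}

Because such fields are position independent, a statically initialized table of them can be dereferenced before any boot-time relocation has been applied, which is exactly what allows the override parsing to run before the kernel mapping is created.
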
Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=3geEek3x1NARgTBArxhnqH4h4pnm5KrZHQmRQtBpY4I=; b=Sz2cOd061OMx72 3thC5jwVFXIE3P9w9IKEebGAIdShY0HfnVApt4anD7RpnHDclrEB2M3K4vEQslAqSWv7/vPQv10fS fkRe89SMIaemXU95QNt3tz23xOgsXq+h7fuQN5qeIZuj0+EHSjRkC7jAB1QLcovEgLhAIqssAe565 cSe+q1VT1S3zG3BkwOSBIN637cNpTgnrNhyVlb8lsn8WzIETspaFcbjT1qf7yZW85Nc5VF6vdumIj 9CmjYnxrynD1mznC1lIJNZnhP2s3XZnkZqvZ1UC3+visbPabX9OiEvMrfzSzuPhuwNDCOzgx3diSY 8o2XgBgQ3qeomtyI1B5Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYHL-000miF-Ja; Tue, 07 Mar 2023 14:26:59 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyO-000eCz-K0 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:26 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3009B61464; Tue, 7 Mar 2023 14:07:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 090FFC433EF; Tue, 7 Mar 2023 14:07:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198044; bh=W8RSmb13M0UD0uiBw0YgqdVpB2dNrJMQsA88S6nGwfI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hNoGrTgRatEitvHDZe6dRhngMdqkE6TSYrZjyagRu+CAcui5laeM+yXG/3f+WrBvp WQZMTMHzssBRfJPVj3QphBI89sgkH+aG1IViRKaijZDAtmxOaGjlb/E9uSmdiLyUBo 3Olj7MpO8msdIObHe08BXSkBuXxvwToUZHEP1mKF+0kxphwIz6Hc4ztqLpp+4JtEa3 i6TBJGPSjI6/WzBhMhiHRFzXcuAfNYC61Kuqcn3FFwKPUa4psBthXOGsR2Y5Ol8//t ErctYB2JSTwRcGPQxwjYKC55bXg6ZbE3NNKByy7VK3SjNhr410n6y2wuu0JGLyeKYe kQv178eK2cxSQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 19/60] arm64: idreg-override: Avoid parameq() and parameqn() Date: Tue, 7 Mar 2023 15:04:41 +0100 Message-Id: <20230307140522.2311461-20-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2747; i=ardb@kernel.org; h=from:subject; bh=W8RSmb13M0UD0uiBw0YgqdVpB2dNrJMQsA88S6nGwfI=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdRUuPQUKqZkZ5qO2sVS/OHP6ds+f3ui9rTm63bdcuv lu0XOl5RwkLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZiIdAzD9xDHX3EvDX+a9a9+ smtG0QQrI+FdWiFT/36+Xf+V5bq0OSPD2wPhcfoB8a7fDSa9Edxy9KlML5/fE8/fi6aJbf1gerK VHQA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060724_781603_7159B632 X-CRM114-Status: GOOD ( 19.15 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The only way parameq() and parameqn() deviate from the ordinary string and memory routines is that they ignore the difference between dashes and underscores. 
Since we copy each command line argument into a buffer before passing it to parameq() and parameqn() numerous times, let's just convert all dashes to underscores just once, and update the alias array accordingly. This also helps reduce the dependency on kernel APIs that are no longer available once we move this code into the early mini C runtime. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/idreg-override.c | 26 ++++++++++++-------- 1 file changed, 16 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index fc9ed722621412bf..23bbbc37ac24ba09 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -182,8 +182,8 @@ static const struct { char alias[FTR_ALIAS_NAME_LEN]; char feature[FTR_ALIAS_OPTION_LEN]; } aliases[] __initconst = { - { "kvm-arm.mode=nvhe", "id_aa64mmfr1.vh=0" }, - { "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" }, + { "kvm_arm.mode=nvhe", "id_aa64mmfr1.vh=0" }, + { "kvm_arm.mode=protected", "id_aa64mmfr1.vh=0" }, { "arm64.nosve", "id_aa64pfr0.sve=0 id_aa64pfr1.sme=0" }, { "arm64.nosme", "id_aa64pfr1.sme=0" }, { "arm64.nobti", "id_aa64pfr1.bt=0" }, @@ -204,7 +204,7 @@ static int __init find_field(const char *cmdline, len = snprintf(opt, ARRAY_SIZE(opt), "%s.%s=", reg->name, reg->fields[f].name); - if (!parameqn(cmdline, opt, len)) + if (memcmp(cmdline, opt, len)) return -1; return kstrtou64(cmdline + len, 0, v); @@ -261,23 +261,29 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases) cmdline = skip_spaces(cmdline); - for (len = 0; cmdline[len] && !isspace(cmdline[len]); len++); + /* terminate on "--" appearing on the command line by itself */ + if (cmdline[0] == '-' && cmdline[1] == '-' && isspace(cmdline[2])) + return; + + for (len = 0; cmdline[len] && !isspace(cmdline[len]); len++) { + if (len >= sizeof(buf) - 1) + break; + if (cmdline[len] == '-') + buf[len] = '_'; + else + buf[len] = cmdline[len]; + } if (!len) return; - len = min(len, ARRAY_SIZE(buf) - 1); - strncpy(buf, cmdline, len); buf[len] = 0; - if (strcmp(buf, "--") == 0) - return; - cmdline += len; match_options(buf); for (i = 0; parse_aliases && i < ARRAY_SIZE(aliases); i++) - if (parameq(buf, aliases[i].alias)) + if (!memcmp(buf, aliases[i].alias, len + 1)) __parse_cmdline(aliases[i].feature, false); } while (1); } From patchwork Tue Mar 7 14:04:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163720 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A1B64C678D5 for ; Tue, 7 Mar 2023 14:29:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=hUAOWg9tuvpbA2wbDRj/i6mA6E9Z2XkIr7ZDic14plc=; b=lnWWqPJdkKIgrR m1SNeAKqH4c/SAlz+TRfujWhu2hxigm3gacfulYYSdraWnDEO4lBRB8JmRWzwzu0xP0tI4DQNlm3K 
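
To make the copy-and-canonicalize step above concrete: dashes are folded to underscores while the token is being copied into the local buffer, after which a plain memcmp() suffices and the alias table only needs the underscore spelling. A small standalone sketch of that step (hosted C; the helper name is made up for illustration):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Copy the next whitespace-delimited token of cmdline into buf, folding '-'
 * to '_' so that later comparisons can use memcmp()/strcmp() instead of a
 * dash-insensitive parameq()-style helper. Returns the length copied. */
static size_t copy_token(const char *cmdline, char *buf, size_t bufsz)
{
	size_t len;

	for (len = 0; cmdline[len] && !isspace((unsigned char)cmdline[len]); len++) {
		if (len >= bufsz - 1)
			break;
		buf[len] = (cmdline[len] == '-') ? '_' : cmdline[len];
	}
	buf[len] = '\0';
	return len;
}

int main(void)
{
	char buf[64];

	copy_token("kvm-arm.mode=nvhe rest", buf, sizeof(buf));
	printf("%s\n", buf);		/* prints "kvm_arm.mode=nvhe" */
	return strcmp(buf, "kvm_arm.mode=nvhe") != 0;
}

Since the folding happens at copy time, the buffer that gets matched always carries underscores, which is why the alias entries in the patch are respelled as "kvm_arm.mode=...".
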
q6igbJA99zVV4U5J1EnY1WYxD6q4BE+jB3P6UcjzXDlzzUlscDx7CW/JL7aIJKKCfHXQwVRc7iKLj luxnV7zHDfb9wc1TS0CcGD4Y3rswyIqC75BKZZRe5cuLTa0d660ghChQdZJxGuZOSm7wKtR66Il6P zUaXKAnCKLIgO7Z3AfACWfjkhR+Wd0Y/PY4KRYomfthyZ6G9ZUS6kO/UM/tzB2yidMkDyGkPUgfjD sllGi3qd2miiGwuolgaw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYII-000nAg-1E; Tue, 07 Mar 2023 14:27:58 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyR-000eF8-F0 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:28 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id F024361455; Tue, 7 Mar 2023 14:07:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6BF1BC4339E; Tue, 7 Mar 2023 14:07:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198046; bh=QDcwBf9EH2tJ8eEyC1Nj4A3MrnVEC+rPwfe1iysJZno=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=f3P88CgXVT+Ugyk0GX6xkiiygfzJAeGNw293YRfGLC057gYzoTHRmK6hjtQSwj4YC Yh37QCaD5DvY0bZCIqX+wZOYIIu0iJnSRermRuZvfn0qgFzZ9jr6MPERqUO4eUpg1k Rpzewkl2SxFKHHrgwew2jxR3UKrbcfI3A4T6iyR2skp+LtvQ2ZtVdPllTpylOgGb9z tROQ+mp1S3i8iC5N0aVSCOT1E74eHKITY7L2K5cQ6/QaHaxPDBqDGK463ghHVtRIaf RQ/e+ZBzvvLANTf2L7eE1CnPvrvp0BXV/SgxE2rsN520wJ0Xk8G7zNYCGIap2cIF2X SWYiIda8F6NuQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 20/60] arm64: idreg-override: avoid strlen() to check for empty strings Date: Tue, 7 Mar 2023 15:04:42 +0100 Message-Id: <20230307140522.2311461-21-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=911; i=ardb@kernel.org; h=from:subject; bh=QDcwBf9EH2tJ8eEyC1Nj4A3MrnVEC+rPwfe1iysJZno=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxWAN6+mCPXEZupER8qbJd7ZNU/Be93y60EaL59uSn 53iNTHoKGVhEONgkBVTZBGY/ffdztMTpWqdZ8nCzGFlAhnCwMUpABMpO8zI8HPWzi0xmf8dy3b6 +ztr3XNmn57R2daXdcHiysRopbDPRgz/ywJybG0DPkQGeW8xOqQwaePOdVVetV9qQ1KYOL7bG4o xAQA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060727_571791_68EF87FD X-CRM114-Status: GOOD ( 15.29 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org strlen() is a costly way to decide whether a string is empty, as in that case, the first character will be NUL so we can check for that directly. 
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/idreg-override.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index 23bbbc37ac24ba09..476dc3f0e9d90e22 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -221,7 +221,7 @@ static void __init match_options(const char *cmdline) override = prel64_to_pointer(®->override_prel); - for (f = 0; strlen(reg->fields[f].name); f++) { + for (f = 0; reg->fields[f].name[0] != '\0'; f++) { u64 shift = reg->fields[f].shift; u64 width = reg->fields[f].width ?: 4; u64 mask = GENMASK_ULL(shift + width - 1, shift); From patchwork Tue Mar 7 14:04:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163753 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 44167C678D4 for ; Tue, 7 Mar 2023 14:29:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=I5n+H//ZymhV3hqV+OkD4UmkOZmo7YQQk1TR1/U/D6g=; b=fo/lUQaU1OKc7H Sp5nNc6NAZ5O4odWwcUgVriynkeezws62Zkl918g8EN4x/+GEfOW/fJjaZtjeSMXm+jff1YUUK7YQ gkYp/pMw5Dv7TX3gCNy8634naG1PomtD5tf0r/zQRVj0EizRN1dBvcO8koKrNBgDI1cr0mp+Nt/mC KxJt7YG3Y2UUrhN5wd8kvE7BjWEN50V7kV5VOGJwqUcYnUqXIAqoNdSyNjfgfN6KxD4lE45WGZHNg zlFlng1XCyZY69d0Uc8RBgkE8UsKX4w+ZKlsy5us3bTswhTNmg9oKA1v0ro0Y5rfbxLTE/V7+276A 4kcNRpkYtpDrR3fyPZXg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYJ1-000nZh-BJ; Tue, 07 Mar 2023 14:28:44 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyU-000eH7-6F for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:31 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C500961465; Tue, 7 Mar 2023 14:07:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CDD83C433A1; Tue, 7 Mar 2023 14:07:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198048; bh=SoZODBy3e6pBd6VVEQFBkwGASrxh7Q1UeOly8/M6eFE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=E6vHoFygj0T9SHRspVnNeZkNyqEvvRmUqh2JLfhWYKsLHPiAq4RLn1hylVd6DyySr l6Bskgb3l8DhedzlzzJs0QE0QGVFMO1cLVgEHyGwPx4a1khXczJotkE6XRMpuQwlGW fg8+HMF7R0OcB71Xao+uUXf6TngveI7iZJR8ucvUmJLtdT4wE0u8b/Xpr9QffbTth7 qZbZPOj0tQEoUhBOHrg3IbRKdEK8tSHsWsIK6eoEbnlNe2QzaLjvb5BXn2gTFN6QQx ER+32leoaVp8UWAFAQ8whZUy4vsQ5z9vPbpV84AV/RnoN3dv0ZcmVi0QJHsnqB6IDp S5ToOELo5vp/w== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard 
Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 21/60] arm64: idreg-override: Avoid sprintf() for simple string concatenation Date: Tue, 7 Mar 2023 15:04:43 +0100 Message-Id: <20230307140522.2311461-22-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2395; i=ardb@kernel.org; h=from:subject; bh=SoZODBy3e6pBd6VVEQFBkwGASrxh7Q1UeOly8/M6eFE=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxcjoZFmWf6zz8v36e+vaVbyFmPSMb+ZOteAwOKEv/ kPq+raOUhYGMQ4GWTFFFoHZf9/tPD1RqtZ5lizMHFYmkCEMXJwCMJGn5xn+qT54KGDTlave6FxY d8JcYOqzzpuT1a+nJlwt5t1/4PeZZwz/E8+6mO/1Ztz187VXi92UufIn1TUWHXmb5Hww8oHGhtf /uAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060730_351697_92E668B4 X-CRM114-Status: GOOD ( 18.43 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Instead of using sprintf() with the "%s.%s=" format, where the first string argument is always the same in the inner loop of match_options(), use simple memcpy() for string concatenation, and move the first copy to the outer loop. This removes the dependency on sprintf(), which will be difficult to fulfil when we move this code into the early mini C runtime. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/idreg-override.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index 476dc3f0e9d90e22..58c02fe20e1050b4 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -195,14 +195,15 @@ static const struct { { "nokaslr", "arm64_sw.nokaslr=1" }, }; -static int __init find_field(const char *cmdline, +static int __init find_field(const char *cmdline, char *opt, int len, const struct ftr_set_desc *reg, int f, u64 *v) { - char opt[FTR_DESC_NAME_LEN + FTR_DESC_FIELD_LEN + 2]; - int len; + int flen = strlen(reg->fields[f].name); - len = snprintf(opt, ARRAY_SIZE(opt), "%s.%s=", - reg->name, reg->fields[f].name); + // append '=' to obtain '.=' + memcpy(opt + len, reg->fields[f].name, flen); + len += flen; + opt[len++] = '='; if (memcmp(cmdline, opt, len)) return -1; @@ -212,15 +213,21 @@ static int __init find_field(const char *cmdline, static void __init match_options(const char *cmdline) { + char opt[FTR_DESC_NAME_LEN + FTR_DESC_FIELD_LEN + 2]; int i; for (i = 0; i < ARRAY_SIZE(regs); i++) { const struct ftr_set_desc *reg = prel64_to_pointer(®s[i].reg_prel); struct arm64_ftr_override *override; + int len = strlen(reg->name); int f; override = prel64_to_pointer(®->override_prel); + // set opt[] to '.' 
+ memcpy(opt, reg->name, len); + opt[len++] = '.'; + for (f = 0; reg->fields[f].name[0] != '\0'; f++) { u64 shift = reg->fields[f].shift; u64 width = reg->fields[f].width ?: 4; @@ -228,7 +235,7 @@ static void __init match_options(const char *cmdline) bool (*filter)(u64 val); u64 v; - if (find_field(cmdline, reg, f, &v)) + if (find_field(cmdline, opt, len, reg, f, &v)) continue; /* From patchwork Tue Mar 7 14:04:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163754 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B36EDC678D5 for ; Tue, 7 Mar 2023 14:30:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=YV2FMo6SbOJv8xq+jHY+/YSqdCwYsZ8GE54kT+ssF0A=; b=EeaPDvIvqCk379 FoLXg7h02gbhXoz3EWsfoEdxiIgkkICcM1jAhYucFEwemRAQTJ32sPF3f16EhVS0zTZeJjj37ztDC +G2Go7IYzR/4rXxnX6YkyFLeGKX0w84p3OCYyLy/7bNy/9DjzBls1r4vTrIiSdfWUK9pw49qhCaTY S+S1MYJRb0EQQwsJaGry57tqbrjJeN8gbZQ8+ri/VoUqYcfTXj1RViQdom4UM+7DgePO5oVqcRfBZ cHgfKGowIk50NO3Cc+bV9gBdvRmIl1VBfG8NG1C9ZCBoI8bblaES1WugZzpXtTa8kp785pa0QOUKo ppiT/V29EMjnjITmWhFA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYJk-000ntQ-BN; Tue, 07 Mar 2023 14:29:29 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyW-000eIm-W1 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:34 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 8221DB818F6; Tue, 7 Mar 2023 14:07:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3BFA7C433A0; Tue, 7 Mar 2023 14:07:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198051; bh=hxezYyQWYIw+0PPMhQw20oQnBobe/ik8sJQ0xwbtFKk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PN/3C4zb71kHR2EC4VJjtXnKKx1XE1/0BQIwxpmI4Sxj9+qp1VKbeKRhbtc3YAbeS C8nfCAr7iCCKwFTrN4Oh04FXik3kVT7LDN9uTZw0+Lk1OXgla25XwYfRCUcwluEryz /L5oexRVHy0Z9i2k868BgOZpWj/b2H/a82nB9wrH7nQMv/s1XjFi2u1QLr5aSPnfnQ +MASJV4gL5qSvvoYDl3yX9YX2CIRFZ7bKraUQhipCfIyJRyIAY6xWfykKoC7KlvYfS Itdtg0RwutZeNXFZsjZOf1ERf3ap3TCr0axn8gYhejqt8n+W569S/elox04drH9bCk 4WDOl8gatJ64w== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 22/60] arm64: idreg-override: Avoid kstrtou64() to parse a single hex digit Date: Tue, 7 Mar 2023 15:04:44 +0100 Message-Id: 
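
(In the two comments above, the mail archive has stripped the angle-bracketed placeholders; judging from the code, they describe setting opt[] to the "<name>." prefix and appending "<fieldname>=" to obtain "<name>.<fieldname>=".) For illustration, a standalone sketch of the same memcpy-based concatenation with the register prefix hoisted out of the inner loop; the register and field names below are placeholders, not the kernel's tables:

#include <stdio.h>
#include <string.h>

int main(void)
{
	static const char *const fields[] = { "vh", "somefield" };
	char opt[64];
	size_t len, flen;

	/* outer loop body: write the "<name>." prefix once per register */
	len = strlen("id_aa64mmfr1");
	memcpy(opt, "id_aa64mmfr1", len);
	opt[len++] = '.';

	/* inner loop: append "<fieldname>=" on top of the fixed prefix */
	for (size_t f = 0; f < sizeof(fields) / sizeof(fields[0]); f++) {
		flen = strlen(fields[f]);
		memcpy(opt + len, fields[f], flen);
		opt[len + flen] = '=';
		opt[len + flen + 1] = '\0';	/* only needed for printing */
		printf("%s\n", opt);		/* "id_aa64mmfr1.vh=" etc. */
	}
	return 0;
}

Note that the kernel version never NUL-terminates opt[]: find_field() compares exactly len bytes with memcmp(), so the terminator above exists purely so the sketch can print the result.
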
<20230307140522.2311461-23-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1644; i=ardb@kernel.org; h=from:subject; bh=hxezYyQWYIw+0PPMhQw20oQnBobe/ik8sJQ0xwbtFKk=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxXTJ4/+G+3m9lvhO8o42u3pI5v863Y/XyvXOtszzz 05/ukGio5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAEzknRvD/zxxTYk8/79ts88/ O2Muq6LueLq+zpBx629WkdAZvWJeYYwMbT/OVrn5yD+Ku7yh+e0MMb+Xx97qOCw7raRkftTqL4M tEwA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060733_224046_6CFC88C4 X-CRM114-Status: GOOD ( 16.92 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org All ID register value overrides are =0 with the exception of the nokaslr pseudo feature which uses =1. In order to remove the dependency on kstrtou64(), which is part of the core kernel and no longer usable once we move idreg-override into the early mini C runtime, let's just parse a single hex digit (with optional leading 0x) and set the output value accordingly. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/idreg-override.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index 58c02fe20e1050b4..758f0e86e2bd2a34 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -195,6 +195,20 @@ static const struct { { "nokaslr", "arm64_sw.nokaslr=1" }, }; +static int __init parse_hexdigit(const char *p, u64 *v) +{ + // skip "0x" if it comes next + if (p[0] == '0' && tolower(p[1]) == 'x') + p += 2; + + // check whether the RHS is a single hex digit + if (!isxdigit(p[0]) || (p[1] && !isspace(p[1]))) + return -EINVAL; + + *v = tolower(*p) - (isdigit(*p) ? 
'0' : 'a' - 10); + return 0; +} + static int __init find_field(const char *cmdline, char *opt, int len, const struct ftr_set_desc *reg, int f, u64 *v) { @@ -208,7 +222,7 @@ static int __init find_field(const char *cmdline, char *opt, int len, if (memcmp(cmdline, opt, len)) return -1; - return kstrtou64(cmdline + len, 0, v); + return parse_hexdigit(cmdline + len, v); } static void __init match_options(const char *cmdline) From patchwork Tue Mar 7 14:04:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E9DEDC6FD1B for ; Tue, 7 Mar 2023 14:32:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=UsLJYOSiKUZa81RyMyfkpRj5jPUpv/vJ8NCWMCdH5KU=; b=4p1HFn4UYfVbcr QGgzCc9ctZrGSPvKszp2DhQp6mh3vDVcQ8xEH6i4WGFLvDhbVwawjscjtV9JniaXCAyfJUdari4M7 NIPg21F57f2t3b9GI7n0d4gHt+1gotZ/y08NmhYcwMLOc5WqvD5SfA26TpuLWdL/mUeBNxs1Xx/iU TmIIi2uSDuCpb0cXUlkgHzTt0zb9VHfcT5pCzaLlG10IzVCVQNOW7fubvd1sF/wU62eJ8iVm82nUg 0z8N2hFiAPuAiPbtuiO+/M03VOGaQfCJLQ55vhE3lOfQbjwszPyIZ8m5//qzMD39PLFqWvuPOgMPh AZ5Bo2mcrG1bRq+O2pzQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYLi-000omO-Bu; Tue, 07 Mar 2023 14:31:31 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyb-000eMc-WE for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:40 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 8BFB8B818F9; Tue, 7 Mar 2023 14:07:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A1C6FC4339E; Tue, 7 Mar 2023 14:07:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198053; bh=v8v/TkqMB9OCCzVoe5QhdocYNnv3zbGgGyZ+eM5YmgQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DXT3451NNJDKpngopF9FhSnKez/+LmLSPb0f8C5FoGyHiY1+Jb69+bZvtp1qCM7fc 6h/PPkb8iMbPpyNCPEQ28i0rF9EKsivkbqV+89rFdyIKAvqeq6io6GVm+qn/+bzhpe EEQR9gezKj65AX2z9Ed+6Jxnx+bRhxMNfto9BmSyfT21nRifPnh219yQmKdkrDoJC1 87y83HBdyR8j45cLoR83hfIxIXcJpiFtTs7kRcoinzvTbHSuGUuR0+asJvTagEBwu9 FcKCb3x2h9M4WStIdUnBsGkbWwBZKXDGnP6aC0e1zUp9gE8kOFqH0MD08iQw+aI3wr KcHXStRRKyYeA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 23/60] arm64: idreg-override: Move to early mini C runtime Date: Tue, 7 Mar 2023 15:04:45 +0100 Message-Id: 
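
A hosted re-implementation of the parser above, adapted only so it builds outside the kernel (plain -1 instead of -EINVAL, unsigned long long instead of u64), to make the accepted and rejected forms concrete:

#include <ctype.h>
#include <stdio.h>

/* Accepts a single hex digit with an optional 0x prefix ("1", "0x1", "f"),
 * rejects anything longer ("123", "0x1f"), since all known overrides are
 * single-digit values. */
static int parse_hexdigit(const char *p, unsigned long long *v)
{
	if (p[0] == '0' && tolower((unsigned char)p[1]) == 'x')
		p += 2;

	if (!isxdigit((unsigned char)p[0]) ||
	    (p[1] && !isspace((unsigned char)p[1])))
		return -1;	/* the kernel version returns -EINVAL */

	*v = tolower((unsigned char)*p) -
	     (isdigit((unsigned char)*p) ? '0' : 'a' - 10);
	return 0;
}

int main(void)
{
	unsigned long long v;

	printf("%d\n", parse_hexdigit("0x1", &v));	/* 0, v == 1  */
	printf("%d\n", parse_hexdigit("f",   &v));	/* 0, v == 15 */
	printf("%d\n", parse_hexdigit("123", &v));	/* -1: too long */
	return 0;
}
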
<20230307140522.2311461-24-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6750; i=ardb@kernel.org; h=from:subject; bh=v8v/TkqMB9OCCzVoe5QhdocYNnv3zbGgGyZ+eM5YmgQ=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxXwxtwOb0Kwq3ab+mJWN9cyOTE29fnwzzT3XnbyqM vfK/GUdpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCJexxj+CjxZ6WQZ/KZuT4vD 1Js7VhyrXZEie8WsRmdjabj9Evkf3xgZdvgu++rzvuBa2kvH88debqy6Y9shLXdshcaz8Akf1kX 3cwMA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060738_375947_A2331822 X-CRM114-Status: GOOD ( 18.63 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org We will want to parse the ID register overrides even earlier, so that we can take them into account before creating the kernel mapping. So migrate the code and make it work in the context of the early C runtime. We will move the invocation to an earlier stage in a subsequent patch. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/Makefile | 4 +-- arch/arm64/kernel/head.S | 5 ++- arch/arm64/kernel/image-vars.h | 9 +++++ arch/arm64/kernel/pi/Makefile | 5 +-- arch/arm64/kernel/{ => pi}/idreg-override.c | 38 ++++++++------------ 5 files changed, 30 insertions(+), 31 deletions(-) diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index e27168d6ed2050b9..4f1fcaebafcfe077 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -33,8 +33,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ return_address.o cpuinfo.o cpu_errata.o \ cpufeature.o alternative.o cacheinfo.o \ smp.o smp_spin_table.o topology.o smccc-call.o \ - syscall.o proton-pack.o idreg-override.o idle.o \ - patching.o + syscall.o proton-pack.o idle.o patching.o pi/ obj-$(CONFIG_COMPAT) += sys32.o signal32.o \ sys_compat.o @@ -58,7 +57,6 @@ obj-$(CONFIG_ACPI) += acpi.o obj-$(CONFIG_ACPI_NUMA) += acpi_numa.o obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o obj-$(CONFIG_PARAVIRT) += paravirt.o -obj-$(CONFIG_RELOCATABLE) += pi/ obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o obj-$(CONFIG_ELF_CORE) += elfcore.o diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 5047a2952ec273f9..0fa44b3188c1e204 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -510,10 +510,9 @@ SYM_FUNC_START_LOCAL(__primary_switched) #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init #endif - mov x0, x21 // pass FDT address in x0 - bl early_fdt_map // Try mapping the FDT early mov x0, x20 // pass the full boot status - bl init_feature_override // Parse cpu feature overrides + mov x1, x22 // pass the low FDT mapping + bl __pi_init_feature_override // Parse cpu feature overrides #ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS bl scs_patch_vmlinux #endif diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index b5906f8e18d7eb8d..5aa914ea919a1149 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -35,6 +35,15 @@ 
PROVIDE(__pi___memmove = __pi_memmove); PROVIDE(__pi___memset = __pi_memset); PROVIDE(__pi_vabits_actual = vabits_actual); +PROVIDE(__pi_id_aa64isar1_override = id_aa64isar1_override); +PROVIDE(__pi_id_aa64isar2_override = id_aa64isar2_override); +PROVIDE(__pi_id_aa64mmfr1_override = id_aa64mmfr1_override); +PROVIDE(__pi_id_aa64pfr0_override = id_aa64pfr0_override); +PROVIDE(__pi_id_aa64pfr1_override = id_aa64pfr1_override); +PROVIDE(__pi_id_aa64smfr0_override = id_aa64smfr0_override); +PROVIDE(__pi_id_aa64zfr0_override = id_aa64zfr0_override); +PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); +PROVIDE(__pi__ctype = _ctype); #ifdef CONFIG_KVM diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index d084c1dcf4165420..7f6dfce893c3b88f 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -38,6 +38,7 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE $(call if_changed_rule,cc_o_c) -obj-y := relocate.pi.o -obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o +obj-y := idreg-override.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o +obj-$(CONFIG_RELOCATABLE) += relocate.pi.o +obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_early.pi.o extra-y := $(patsubst %.pi.o,%.o,$(obj-y)) diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c similarity index 93% rename from arch/arm64/kernel/idreg-override.c rename to arch/arm64/kernel/pi/idreg-override.c index 758f0e86e2bd2a34..4e76db6eb72c2087 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/pi/idreg-override.c @@ -14,6 +14,8 @@ #include #include +#include "pi.h" + #define FTR_DESC_NAME_LEN 20 #define FTR_DESC_FIELD_LEN 10 #define FTR_ALIAS_NAME_LEN 30 @@ -21,18 +23,6 @@ static u64 __boot_status __initdata; -// temporary __prel64 related definitions -// to be removed when this code is moved under pi/ - -#define __prel64_initconst __initconst - -typedef void *prel64_t; - -static void *prel64_to_pointer(const prel64_t *p) -{ - return *p; -} - struct ftr_set_desc { char name[FTR_DESC_NAME_LEN]; union { @@ -309,16 +299,11 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases) } while (1); } -static __init const u8 *get_bootargs_cmdline(void) +static __init const u8 *get_bootargs_cmdline(const void *fdt) { const u8 *prop; - void *fdt; int node; - fdt = get_early_fdt_ptr(); - if (!fdt) - return NULL; - node = fdt_path_offset(fdt, "/chosen"); if (node < 0) return NULL; @@ -330,9 +315,9 @@ static __init const u8 *get_bootargs_cmdline(void) return strlen(prop) ? 
prop : NULL; } -static __init void parse_cmdline(void) +static __init void parse_cmdline(const void *fdt) { - const u8 *prop = get_bootargs_cmdline(); + const u8 *prop = get_bootargs_cmdline(fdt); if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop) __parse_cmdline(CONFIG_CMDLINE, true); @@ -342,9 +327,9 @@ static __init void parse_cmdline(void) } /* Keep checkers quiet */ -void init_feature_override(u64 boot_status); +void init_feature_override(u64 boot_status, const void *fdt); -asmlinkage void __init init_feature_override(u64 boot_status) +asmlinkage void __init init_feature_override(u64 boot_status, const void *fdt) { struct arm64_ftr_override *override; const struct ftr_set_desc *reg; @@ -360,7 +345,7 @@ asmlinkage void __init init_feature_override(u64 boot_status) __boot_status = boot_status; - parse_cmdline(); + parse_cmdline(fdt); for (i = 0; i < ARRAY_SIZE(regs); i++) { reg = prel64_to_pointer(®s[i].reg_prel); @@ -369,3 +354,10 @@ asmlinkage void __init init_feature_override(u64 boot_status) (unsigned long)(override + 1)); } } + +char * __init skip_spaces(const char *str) +{ + while (isspace(*str)) + ++str; + return (char *)str; +} From patchwork Tue Mar 7 14:04:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163755 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A450DC678D4 for ; Tue, 7 Mar 2023 14:31:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=XdAX+FL4c0+TvqF83cYbGVUmXwYekOM3KPA7YLx+4Vw=; b=VMJC5JdBoJdUQo mymmZTIZ9IC5z6lx0qNqD5HjYg+dUYBfGOUDMFy28jPOS92d5TmocVIML6vonZf5V14GUGg1zfibM kxFCEtRcJjkgflJixo8Nnj2YQDvTrRHLQ451Es5vazsZL8Ad1oLp+HrieZryHofYy0IyKX6Dw8s3n a39Ui5Xxg8sItvuK6MxPFKFQw16homSUOj2EAG3ZHBv2LKseqpUTzr9E/Ze4I/Lj0yG0MHUrASXza uVtjt3AdOtqfBIhr2qrbrkBe5nTuAFj1LdSq7fVzRWt28GAOUkhDRSe2jL/vhE+VGGlsbDYphXaSQ etY68wuDYOzCF2QfOWMA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYKP-000oCH-BE; Tue, 07 Mar 2023 14:30:09 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXya-000eM2-Vm for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:38 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 377656145D; Tue, 7 Mar 2023 14:07:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1021EC4339C; Tue, 7 Mar 2023 14:07:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198056; bh=an7RQl0FyYFp7pKvljqsgvYahLoQXlUCxmiwvRE6nxQ=; 
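
Since the command line is now fetched straight from the device tree blob passed in by the caller, the lookup reduces to two standard libfdt calls. A hosted sketch of that lookup against the public libfdt API (the wrapper name is made up; the header sanity check is extra, added for standalone use):

#include <libfdt.h>
#include <stddef.h>

/* Return the /chosen/bootargs string from a flattened device tree, or NULL
 * if it is absent or empty -- the same lookup the relocated code performs on
 * the FDT pointer it now receives as an argument. */
static const char *bootargs_from_fdt(const void *fdt)
{
	const char *prop;
	int node, len;

	if (fdt_check_header(fdt))		/* not a valid DTB */
		return NULL;

	node = fdt_path_offset(fdt, "/chosen");
	if (node < 0)
		return NULL;

	prop = fdt_getprop(fdt, node, "bootargs", &len);
	if (!prop || len < 1 || prop[0] == '\0')
		return NULL;

	return prop;
}
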
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=C1AMjA/Q6Tv3JeIisClS6GOarzAr08r6Bk0wyffkzCepVTonXIzNfn3dUG7iWkgjX dNDCIuawjWf584NQZRvBZECxIqAPoKXI9r7qy+drljZo+dgHMf/UCmCK8xWjij+HpP IDkGd5BGbfTNsE7p2IaTMYkRmc3cGfXtXi4RSKFXXqqUV7nzxJfJciSb6IxyNWR4t3 wxbTPKgzRK7jYLmhqinyjhAs9/fcr25cHfwwd5QOoz/xf0d/NWGDMRuB0EClW/nbad Di5hhFWByMQ4MlVIg8dG3TSpcAiaJm1+WYhpOcdXuQgvh27jS78kxnG1igLHBc6/U+ 4CrBQFoFtJ1Rw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 24/60] arm64: kernel: Remove early fdt remap code Date: Tue, 7 Mar 2023 15:04:46 +0100 Message-Id: <20230307140522.2311461-25-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1349; i=ardb@kernel.org; h=from:subject; bh=an7RQl0FyYFp7pKvljqsgvYahLoQXlUCxmiwvRE6nxQ=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxSqOkXnV6mPWGxZKXlvR5vbwjFFV9H1f3/1+B2PvH dintU+jo5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAEzktiAjw0UG5R2NFRlGBk0u nxcly37OrC0orPc5x+8vbuwidNR2P8M/i/sxShuXJ3Gpiymmr7hrfn/Gih1/LQP+CPgKmJd4XFr DCAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060737_143103_911427E8 X-CRM114-Status: GOOD ( 11.66 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The early FDT remap code is no longer used so let's drop it. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/setup.h | 3 --- arch/arm64/kernel/setup.c | 15 --------------- 2 files changed, 18 deletions(-) diff --git a/arch/arm64/include/asm/setup.h b/arch/arm64/include/asm/setup.h index f4af547ef54caa70..acc5e00bf3b0fafb 100644 --- a/arch/arm64/include/asm/setup.h +++ b/arch/arm64/include/asm/setup.h @@ -7,9 +7,6 @@ #include -void *get_early_fdt_ptr(void); -void early_fdt_map(u64 dt_phys); - /* * These two variables are used in the head.S file. 
*/ diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index b8ec7b3ac9cbe8a8..bda21a9245943c57 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -166,21 +166,6 @@ static void __init smp_build_mpidr_hash(void) pr_warn("Large number of MPIDR hash buckets detected\n"); } -static void *early_fdt_ptr __initdata; - -void __init *get_early_fdt_ptr(void) -{ - return early_fdt_ptr; -} - -asmlinkage void __init early_fdt_map(u64 dt_phys) -{ - int fdt_size; - - early_fixmap_init(); - early_fdt_ptr = fixmap_remap_fdt(dt_phys, &fdt_size, PAGE_KERNEL); -} - static void __init setup_machine_fdt(phys_addr_t dt_phys) { int size; From patchwork Tue Mar 7 14:04:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163756 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 04E63C678D5 for ; Tue, 7 Mar 2023 14:32:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=v/RtFc5/wSTSrK25QKI55r9RVB+M8Uzl1w40PELFbeo=; b=fO/GXE/x8G7lhj XG2WvbXMNnJ3uPdsJtcnwRH42B4JtIhrL40TGg/leuBtnHv9ETXnTIsbgyCqs2YCstGijj1BU44Kj 5XvxswkQex4JRmfXrFVjKXAFPzIvxS8ufOL0CYaLljBRzwTXBh0k5U9FpINrvOMaYbW3LClEa3Aen i8peYcMwYDjIrurEGpDuUVF1xu0r/UVKe0NgiPz9Vu6OFbtZytqXP+v6LAZQqJkZd1k7dzNY/Md07 oC56w5RD/m/YtTMXDdPy0pQRxj1wB+VtTP6GzSSkmdGH8x7UQ8TzxtV8VJ/zh2bhxOUwiZ8WZxYoP KmjqtS8W5/OlSJ3KxpXQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYL2-000oTI-W3; Tue, 07 Mar 2023 14:30:49 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyf-000eOM-2e for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:42 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A7557B818F7; Tue, 7 Mar 2023 14:07:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 72DE3C433A0; Tue, 7 Mar 2023 14:07:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198058; bh=WFiiboUNlXGaUmkdqX5fmc34WsQ0VoU7MZXGSi+88TY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZaLcnLPEg0R+H7zkhz3RRod76KFXo1jHQJ+Z2pIZw8gBUIaW920u2MWBAmmWFdtKP 8dvvK9g9F+GKwTaN5prLbCyhc8eVemUZCWJq1ZGQkECClqtwEUjo+zWRE5nRBkgiXY fzJbMXpG0hH+B2/0kM7PEcSKjnIttGefaKzzJRxe5sXlOT4jyYbnp1QJOpXyu/sme2 tL/ldM3Ry/gF4tHQN2bHSIXOAZDGDRq42ujT/Gg+kWZVWZRe39bY4Bdka6rUSICEoQ lNtKXSQLKwpr157uiwgiwPtjo+A14yZGy3ZXJvZw97A2zMhhtbSfu6vgepvECvLty6 vxtK5iidqEnPw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas 
, Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 25/60] arm64: head: Clear BSS and the kernel page tables in one go Date: Tue, 7 Mar 2023 15:04:47 +0100 Message-Id: <20230307140522.2311461-26-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2710; i=ardb@kernel.org; h=from:subject; bh=WFiiboUNlXGaUmkdqX5fmc34WsQ0VoU7MZXGSi+88TY=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxa4nKujXcpU7UUunp7dPm2V91+ioYHtD9PcSgc0Kl Q6WK4I7SlkYxDgYZMUUWQRm/3238/REqVrnWbIwc1iZQIYwcHEKwESu3GP4zSJrGhGr3XMr3WO3 MePS4n1MhiGmidrP95xlZ0tJs/26hZGhYcsnfq7aKU21ytETP+vEmM9WnKO85YPQ7uiwN60LBcV ZAA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060741_467448_BF5A5BDB X-CRM114-Status: GOOD ( 15.00 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org We will move the CPU feature overrides into BSS in a subsequent patch, and this requires that BSS is zeroed before the feature override detection code runs. So let's map BSS read-write in the ID map, and zero it via this mapping. Since the kernel page tables are right next to it, and also zeroed via the ID map, let's drop the separate clear_page_tables() function, and just zero everything in one go. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 33 +++++++------------- 1 file changed, 11 insertions(+), 22 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 0fa44b3188c1e204..ade0cb99c8a83a3d 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -177,17 +177,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args) ret SYM_CODE_END(preserve_boot_args) -SYM_FUNC_START_LOCAL(clear_page_tables) - /* - * Clear the init page tables. - */ - adrp x0, init_pg_dir - adrp x1, init_pg_end - sub x2, x1, x0 - mov x1, xzr - b __pi_memset // tail call -SYM_FUNC_END(clear_page_tables) - /* * Macro to populate page table entries, these entries can be pointers to the next level * or last level entries pointing to physical memory. @@ -386,9 +375,9 @@ SYM_FUNC_START_LOCAL(create_idmap) map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT - /* Remap the kernel page tables r/w in the ID map */ + /* Remap BSS and the kernel page tables r/w in the ID map */ adrp x1, _text - adrp x2, init_pg_dir + adrp x2, __bss_start adrp x3, _end bic x4, x2, #SWAPPER_BLOCK_SIZE - 1 mov x5, SWAPPER_RW_MMUFLAGS @@ -489,14 +478,6 @@ SYM_FUNC_START_LOCAL(__primary_switched) mov x0, x20 bl set_cpu_boot_mode_flag - // Clear BSS - adr_l x0, __bss_start - mov x1, xzr - adr_l x2, __bss_stop - sub x2, x2, x0 - bl __pi_memset - dsb ishst // Make zero page visible to PTW - #if VA_BITS > 48 adr_l x8, vabits_actual // Set this early so KASAN early init str x25, [x8] // ... 
observes the correct value @@ -780,6 +761,15 @@ SYM_FUNC_START_LOCAL(__primary_switch) adrp x1, reserved_pg_dir adrp x2, init_idmap_pg_dir bl __enable_mmu + + // Clear BSS + adrp x0, __bss_start + mov x1, xzr + adrp x2, init_pg_end + sub x2, x2, x0 + bl __pi_memset + dsb ishst // Make zero page visible to PTW + #ifdef CONFIG_RELOCATABLE adrp x23, KERNEL_START and x23, x23, MIN_KIMG_ALIGN - 1 @@ -794,7 +784,6 @@ SYM_FUNC_START_LOCAL(__primary_switch) orr x23, x23, x0 // record kernel offset #endif #endif - bl clear_page_tables bl create_kernel_mapping adrp x1, init_pg_dir From patchwork Tue Mar 7 14:04:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E1113C678D4 for ; Tue, 7 Mar 2023 14:33:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=TzZVxrCM2rwXObaTOMj2YKNzPlunbAyNWVXqv9ikNe8=; b=F075BgO/zDtgyM PLy3B0nX/bB4Z98L2s/0ZdRFgU/T1iXMDSuwCJfuGKiej3Yvq+pVXuG46+7UYG4IBLaKAvY6+LW64 9nxrZYtUllFHPx11oiJQ5oZLPrCTXvse7JXyllQhg40FCIL92LDe0qNPj0YLIkksw6w+Yyy2un7nz /hASnSDR4m3a62D9bqqFuxYnzwQwIldrwZomHua9D28RPcQyLg+ncdo5XizFRm6rgJnArPYjlr0+3 Pya8V/4R/baQWbzUUdsnzJdX+uQDADsS3m3akTrLomlse0aN8pmXsbnf4H8zY1k90EOpn9lMh7pXO sz8s8mTuD/0YBvgTczyA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYMb-000pES-CI; Tue, 07 Mar 2023 14:32:26 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyg-000eP2-1u for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:43 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 66E7A61457; Tue, 7 Mar 2023 14:07:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D6162C433A4; Tue, 7 Mar 2023 14:07:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198060; bh=NW0PGhf1i6p2eNdylQMSe2x4+utKDJZCyu+uqyONhAY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rcUOF+QHEd1jnWMXrZj5R+EYXNfLxD01pxXkwuWjvh839aQfApV8fuJnSxME6zk2l oQlxQ2hph0Pzgs6zrd92ynyO/BZvewOzML5BHSR7MJHkYMrQ48dqjA4anyKH/YMVeC CIfFy+xzkAeKvyv4J815sjfvnNCux9c86RKfaLG+2oELDINbKSvfs6x2NoxDx9hoew cNRZ2FM+8ehzAhU4394rZ3gUmiSFzkhWpCatrow9G4H8+UIVgjzXW3NdkgIHqWdmui Sxzh3Q+C1Kw8/MfO+frGDXfPgsGRngKVCJ8I6t0Xd4BgdQKvDobPUu7C0miiOJpk1v 77L9HNY8ygkxQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman 
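
In C terms, the new single clearing pass amounts to the following sketch (illustrative only; it relies on the linker script placing the initial page tables directly after BSS, as the commit message notes, and on the usual linker-provided boundary symbols):

#include <string.h>

/* Declarations only: the actual addresses come from the linker script. */
extern char __bss_start[], init_pg_end[];

/* One memset covers both BSS and the initial page tables, because
 * init_pg_dir..init_pg_end sits immediately after BSS in the image. */
static void clear_bss_and_init_pgtables(void)
{
	memset(__bss_start, 0, init_pg_end - __bss_start);
}
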
Khandual , Kees Cook Subject: [PATCH v3 26/60] arm64: Move feature overrides into the BSS section Date: Tue, 7 Mar 2023 15:04:48 +0100 Message-Id: <20230307140522.2311461-27-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1664; i=ardb@kernel.org; h=from:subject; bh=NW0PGhf1i6p2eNdylQMSe2x4+utKDJZCyu+uqyONhAY=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxUHQ01bgxOGkDa94b+7L5XevME6w/Xdn10Mx2flLl 0f3ffnVUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACbS68vIcH/DGqU2Fhe7k0Uy hTNeLbwXfnVCZrLi9d2lC52Dv240mszw33/bhvhT204cqGE/Lui3P3mTo96ClP1fWVsaapfejfy 2ghcA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060742_219837_F537BD58 X-CRM114-Status: GOOD ( 12.74 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In order to allow the CPU feature override detection code to run even earlier, move the feature override global variables into BSS, which is the only part of the static kernel image that is mapped read-write in the initial ID map. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/cpufeature.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 88cec6c14743c4c5..0b16e676b68c6543 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -649,13 +649,13 @@ static const struct arm64_ftr_bits ftr_raz[] = { #define ARM64_FTR_REG(id, table) \ __ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override) -struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override; -struct arm64_ftr_override __ro_after_init id_aa64pfr0_override; -struct arm64_ftr_override __ro_after_init id_aa64pfr1_override; -struct arm64_ftr_override __ro_after_init id_aa64zfr0_override; -struct arm64_ftr_override __ro_after_init id_aa64smfr0_override; -struct arm64_ftr_override __ro_after_init id_aa64isar1_override; -struct arm64_ftr_override __ro_after_init id_aa64isar2_override; +struct arm64_ftr_override id_aa64mmfr1_override; +struct arm64_ftr_override id_aa64pfr0_override; +struct arm64_ftr_override id_aa64pfr1_override; +struct arm64_ftr_override id_aa64zfr0_override; +struct arm64_ftr_override id_aa64smfr0_override; +struct arm64_ftr_override id_aa64isar1_override; +struct arm64_ftr_override id_aa64isar2_override; struct arm64_ftr_override arm64_sw_feature_override; From patchwork Tue Mar 7 14:04:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163775 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 40ED5C6FD1A for ; Tue, 7 Mar 2023 14:34:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; 
s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=fDK6TIHciBBhYxKJWNE3euA8XuqzaKI5EBcqZOX5RM4=; b=eerlTHkX6F6qRO cbuf8xNxgPGt5a1miXWsBso6Jygk+uYUMBe5Jj6aLsfioxroRDJTvhDg0zFZWrmPqU2v5nOSxFABa VQ8mqQspvis/D+T2cf8pFV4lUjEd3t5R2/EiCuGqaQYSyP8WpT6UKeJm9GBg7rqNiOKkCXRHwWodL IDKlceO9EalTpar9QZijkj3WVb21shBcslyI/2zQifcRFhacW8NSV9jPFwJDfduH6erKBxA/YOswJ dS5S5dubyDodDrUEa7OHtqDF1FyuwCB8p2FGIYOa83J3jauyfIFKDeo5NZH2Rdk0MMfJjsUAsonV3 YLLLSAAonPbBi3cOY8Ng==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYNQ-000pdE-K4; Tue, 07 Mar 2023 14:33:16 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyi-000eQW-95 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:45 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C94E861455; Tue, 7 Mar 2023 14:07:43 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 442E9C433A1; Tue, 7 Mar 2023 14:07:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198063; bh=LGPwuGUBE9+D2Ri4mVTa9KzwPrnmKSHtWXBpjrEGvfU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RwzOzBuuV/wfG5j8QfOfX32RG0eTjM+i2oreD47PIfD3xfe6B8MWngtR5XRC0GzhU 5B6cc0APiUj0c+EvQfL1dcFT3tOVcKBJYt3xBd1LA5qKWqjel9ddOwaK9TSr8dlq9S bsQ3ieraqDDj+gClYHPuEXhJ+65Ka1BJ+jAIgphsLHMYA8v1ABeGYMWCoL0H3YLJ+E pphixn4kEPbe/FsBnLjFah7Q4mhll9rULdSHzJhLssrLHSGIe8OZgvk4udC32738SE Gg84NPJJAs94/bytiFf1AoJ+9oQni2qFHObKbvekDheKuCwdKbb5vkqWlnU3hMz6w+ GSwxrNUkJcohA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 27/60] arm64: head: Run feature override detection before mapping the kernel Date: Tue, 7 Mar 2023 15:04:49 +0100 Message-Id: <20230307140522.2311461-28-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2825; i=ardb@kernel.org; h=from:subject; bh=LGPwuGUBE9+D2Ri4mVTa9KzwPrnmKSHtWXBpjrEGvfU=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxenLuW+3y64XPmd4dOOZjIxXtH5B38riC5cW7E5d2 HJqu9zEjlIWBjEOBlkxRRaB2X/f7Tw9UarWeZYszBxWJpAhDFycAjCRlfMY/qd5fVJ2i2BfrprU flVk2TPP4nqFYv73WeI1mbu/zDo7M53hv7vjEWPNV49u3VqyxGqfzLQvKRcjL7fvvH9USoevqpN RiBMA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060744_434125_1CDEB99F X-CRM114-Status: GOOD ( 15.07 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: 
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org To permit the feature overrides to be taken into account before the KASLR init code runs and the kernel mapping is created, move the detection code to an earlier stage in the boot. In a subsequent patch, this will be taken advantage of by merging the preliminary and permanent mappings of the kernel text and data into a single one that gets created and relocated before start_kernel() is called. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 17 +++++++++-------- arch/arm64/kernel/vmlinux.lds.S | 4 +--- 2 files changed, 10 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index ade0cb99c8a83a3d..0a345898a12939af 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -375,9 +375,9 @@ SYM_FUNC_START_LOCAL(create_idmap) map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT - /* Remap BSS and the kernel page tables r/w in the ID map */ + /* Remap [.init].data, BSS and the kernel page tables r/w in the ID map */ adrp x1, _text - adrp x2, __bss_start + adrp x2, __initdata_begin adrp x3, _end bic x4, x2, #SWAPPER_BLOCK_SIZE - 1 mov x5, SWAPPER_RW_MMUFLAGS @@ -491,9 +491,6 @@ SYM_FUNC_START_LOCAL(__primary_switched) #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init #endif - mov x0, x20 // pass the full boot status - mov x1, x22 // pass the low FDT mapping - bl __pi_init_feature_override // Parse cpu feature overrides #ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS bl scs_patch_vmlinux #endif @@ -770,12 +767,16 @@ SYM_FUNC_START_LOCAL(__primary_switch) bl __pi_memset dsb ishst // Make zero page visible to PTW -#ifdef CONFIG_RELOCATABLE - adrp x23, KERNEL_START - and x23, x23, MIN_KIMG_ALIGN - 1 adrp x1, early_init_stack mov sp, x1 mov x29, xzr + mov x0, x20 // pass the full boot status + mov x1, x22 // pass the low FDT mapping + bl __pi_init_feature_override // Parse cpu feature overrides + +#ifdef CONFIG_RELOCATABLE + adrp x23, KERNEL_START + and x23, x23, MIN_KIMG_ALIGN - 1 #ifdef CONFIG_RANDOMIZE_BASE mov x0, x22 bl __pi_kaslr_early_init diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index ec24b1e70d606ec8..6c79ad2945749260 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -318,10 +318,8 @@ SECTIONS init_pg_dir = .; . += INIT_DIR_SIZE; init_pg_end = .; -#ifdef CONFIG_RELOCATABLE - . += SZ_4K; /* stack for the early relocation code */ + . += SZ_4K; /* stack for the early C runtime */ early_init_stack = .; -#endif . = ALIGN(SEGMENT_ALIGN); __pecoff_data_size = ABSOLUTE(. 
- __initdata_begin); From patchwork Tue Mar 7 14:04:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163776 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BB26AC678D4 for ; Tue, 7 Mar 2023 14:35:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=HbcAkGNYuAi4Nd4GAoEJWvpZ4YFKTrv5y1XBi/du5vs=; b=1UhpwCgseMsGos vbDje52tUOEzmEiBELINKIyDYi2IifYy5M7RnBNl9V/BYjtubzXjIMT/yV6emkrSTfhEW1hQ/3SSI 2+UuLIC6IakchxAFZ7ZSnY43HsIw6Q34IZ6DdpTf+tpIh+MAj2GgR++drwBMSb4TjgZ3+nRB3y+vy kLzSg/CgDs6DrbD5lT5Odac28HPEUz/9CBY2vzeUItspLzitvHoS1NARE+hVVvOnRiT0EKIRqWWiR vZBMyoGYr18dYxSV4u8CYPjGzuryT/+/4Rmm4Pbw1xAOORWmPQiTk1N/JPQfqF6xwhwEE10pbFqJX 2OfARR8F8yOTyB3x2gpg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYOH-000pzk-0P; Tue, 07 Mar 2023 14:34:09 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXym-000eRx-7s for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:50 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id D4A99B818FB; Tue, 7 Mar 2023 14:07:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A7D53C4339C; Tue, 7 Mar 2023 14:07:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198065; bh=L2+rgVx2iv3hRHUq+qUrb8b1wtfayhr80bZIYC3PNSI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PJ9rvgoQ6qhXJ4fKL6aSQ01F1mlWPTmSefIFRcnR0S2mgH6H8MNK+FpAtc5ePuJ3+ J2QWqeGt5WfyHS9hj5PT7WM0Tl55bnifu1Xo+uBdIY3+ezR62S1Oh71BPFPbayM3yl mpvoQKS1QSlVN1g0JY+1Hb3j2O2NNT3AaLLU6fcnympJeMAAD/BFuuy6RQvIFvbnzd n3OfigaUPVNVBsKsq5iPvgKFNfyD1f4adD8RLVY50/37J5n7H0H2DZ8kfnUZUO8OPP wtWRkx27kpV+03nBbdhjkuV4s40ya6wfXdegS7BxmITuocuBKc+zmw4WzTSvydevsG TNP9VeQ3fpc6A== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 28/60] arm64: head: move dynamic shadow call stack patching into early C runtime Date: Tue, 7 Mar 2023 15:04:50 +0100 Message-Id: <20230307140522.2311461-29-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6238; i=ardb@kernel.org; h=from:subject; bh=L2+rgVx2iv3hRHUq+qUrb8b1wtfayhr80bZIYC3PNSI=; 
b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxTV+VVgVc1Z3lq6egMSRSVf/ikxRfed15eDkroZVf nlbJh3uKGVhEONgkBVTZBGY/ffdztMTpWqdZ8nCzGFlAhnCwMUpABPZxcbI0HOl7lb6mnN5++5c i98i+G3eg5BfCuGMP6edP3NihoSCvCbDP82Y7eUOT7ZYFM7bbdu49TdPntLq/a3h8cvUQ3c4KCv P4gAA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060748_624455_1CB337CA X-CRM114-Status: GOOD ( 19.25 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Once we update the early kernel mapping code to only map the kernel once with the right permissions, we can no longer perform code patching via this mapping. So move this code to an earlier stage of the boot, right after applying the relocations. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/scs.h | 2 +- arch/arm64/kernel/Makefile | 2 -- arch/arm64/kernel/head.S | 8 +++--- arch/arm64/kernel/module.c | 2 +- arch/arm64/kernel/pi/Makefile | 10 +++++--- arch/arm64/kernel/{ => pi}/patch-scs.c | 26 ++++++++++---------- 6 files changed, 26 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h index 13df982a080805e6..bcf8ad574807b82c 100644 --- a/arch/arm64/include/asm/scs.h +++ b/arch/arm64/include/asm/scs.h @@ -72,7 +72,7 @@ static inline void dynamic_scs_init(void) static inline void dynamic_scs_init(void) {} #endif -int scs_patch(const u8 eh_frame[], int size); +int __pi_scs_patch(const u8 eh_frame[], int size); #endif /* __ASSEMBLY __ */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 4f1fcaebafcfe077..bae6194df6a50479 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -72,8 +72,6 @@ obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o obj-$(CONFIG_ARM64_MTE) += mte.o obj-y += vdso-wrap.o obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o -obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.o -CFLAGS_patch-scs.o += -mbranch-protection=none # Force dependency (vdso*-wrap.S includes vdso.so through incbin) $(obj)/vdso-wrap.o: $(obj)/vdso/vdso.so diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 0a345898a12939af..70ad180eed364906 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -490,9 +490,6 @@ SYM_FUNC_START_LOCAL(__primary_switched) #endif #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init -#endif -#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS - bl scs_patch_vmlinux #endif mov x0, x20 bl finalise_el2 // Prefer VHE if possible @@ -792,6 +789,11 @@ SYM_FUNC_START_LOCAL(__primary_switch) #ifdef CONFIG_RELOCATABLE mov x0, x23 bl __pi_relocate_kernel +#endif +#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS + ldr x0, =__eh_frame_start + ldr x1, =__eh_frame_end + bl __pi_scs_patch_vmlinux #endif ldr x8, =__primary_switched adrp x0, KERNEL_START // __pa(KERNEL_START) diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index 5af4975caeb58ff7..9df01fce6ed528a0 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -516,7 +516,7 @@ int module_finalize(const Elf_Ehdr *hdr, if (scs_is_dynamic()) { s = find_section(hdr, sechdrs, ".init.eh_frame"); if (s) - scs_patch((void *)s->sh_addr, s->sh_size); + __pi_scs_patch((void *)s->sh_addr, 
s->sh_size); } return module_init_ftrace_plt(hdr, sechdrs, me); diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index 7f6dfce893c3b88f..a8b302245f15326a 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -38,7 +38,9 @@ $(obj)/lib-%.pi.o: OBJCOPYFLAGS += --prefix-alloc-sections=.init $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE $(call if_changed_rule,cc_o_c) -obj-y := idreg-override.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o -obj-$(CONFIG_RELOCATABLE) += relocate.pi.o -obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_early.pi.o -extra-y := $(patsubst %.pi.o,%.o,$(obj-y)) +obj-y := idreg-override.pi.o \ + lib-fdt.pi.o lib-fdt_ro.pi.o +obj-$(CONFIG_RELOCATABLE) += relocate.pi.o +obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_early.pi.o +obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.pi.o +extra-y := $(patsubst %.pi.o,%.o,$(obj-y)) diff --git a/arch/arm64/kernel/patch-scs.c b/arch/arm64/kernel/pi/patch-scs.c similarity index 91% rename from arch/arm64/kernel/patch-scs.c rename to arch/arm64/kernel/pi/patch-scs.c index a1fe4b4ff5917670..c65ef40d1e6b6b30 100644 --- a/arch/arm64/kernel/patch-scs.c +++ b/arch/arm64/kernel/pi/patch-scs.c @@ -4,14 +4,11 @@ * Author: Ard Biesheuvel */ -#include #include #include #include -#include #include -#include #include // @@ -81,7 +78,11 @@ static void __always_inline scs_patch_loc(u64 loc) */ return; } - dcache_clean_pou(loc, loc + sizeof(u32)); + if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_CLEAN_CACHE)) + asm("dc civac, %0" :: "r"(loc)); + else + asm(ALTERNATIVE("dc cvau, %0", "nop", ARM64_HAS_CACHE_IDC) + :: "r"(loc)); } /* @@ -128,10 +129,10 @@ struct eh_frame { }; }; -static int noinstr scs_handle_fde_frame(const struct eh_frame *frame, - bool fde_has_augmentation_data, - int code_alignment_factor, - bool dry_run) +static int scs_handle_fde_frame(const struct eh_frame *frame, + bool fde_has_augmentation_data, + int code_alignment_factor, + bool dry_run) { int size = frame->size - offsetof(struct eh_frame, opcodes) + 4; u64 loc = (u64)offset_to_ptr(&frame->initial_loc); @@ -198,14 +199,13 @@ static int noinstr scs_handle_fde_frame(const struct eh_frame *frame, break; default: - pr_err("unhandled opcode: %02x in FDE frame %lx\n", opcode[-1], (uintptr_t)frame); return -ENOEXEC; } } return 0; } -int noinstr scs_patch(const u8 eh_frame[], int size) +int scs_patch(const u8 eh_frame[], int size) { const u8 *p = eh_frame; @@ -251,12 +251,12 @@ int noinstr scs_patch(const u8 eh_frame[], int size) return 0; } -asmlinkage void __init scs_patch_vmlinux(void) +asmlinkage void __init scs_patch_vmlinux(const u8 start[], const u8 end[]) { if (!should_patch_pac_into_scs()) return; - WARN_ON(scs_patch(__eh_frame_start, __eh_frame_end - __eh_frame_start)); - icache_inval_all_pou(); + scs_patch(start, end - start); + asm("ic ialluis"); isb(); } From patchwork Tue Mar 7 14:04:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163777 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B7989C678D5 for ; Tue, 7 Mar 2023 14:36:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WC3pUCJm8ode8Mpqq5Xrrahx/S+/3axxOOP4AbR/w6U=; b=ZlU5Z0kM49Hb1j tuIulKFQR27EieDhguGKKBYP9OEX24CfnNRdCIcITmGGXXlatgLb/MypH9saQ5vEXQSHE/wbI+XLj fM++zPuNBjp5vr6zyaMRUjyx9O4guDZQU+tBB4wqKxCLBm6c8NLX42uzR/ZQ4Pl1hHuI5Q6gcRm/f 1E4byHkuOpy4kaJaN4hGHPoDH17ZoIoCAAET3WyCdnPBigs8pt48H59ah3z54EGV9+zx8nMhXujio RUaKtqDDPcGqr+bsIhI7Y21Cs4g+1cQg40E3cEqyNzhvSTlnlKIW3sdXzisCQ5Vp5cRTi8J76HJI4 RYsbDC6/BIoKLmsLT1dA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYPQ-000qWi-LI; Tue, 07 Mar 2023 14:35:20 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyn-000eSx-7n for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:50 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 9B3676144B; Tue, 7 Mar 2023 14:07:48 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 162C8C433A4; Tue, 7 Mar 2023 14:07:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198068; bh=CV00tA1owwhL5iVLNA3OX5N4V06U/3a14L0hLSw4Z+0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Apj8QK2LYxIPN6TPi3s8XsZjMjAzSIMpZAn/Ry+HFYLF8mn+L4PwOdncNL7QppMfI /MX112JO4GIYJ0BQNtTObyOHSa5vhXH0hvhLq6UWGgDU8fqINKnxDew4OpG/TFsFAX yQOgf48VJpujIsL7479ZKLJ+1YplyNna4lX22nicC4KaC0gV/6wn5I1GxJuqdIKUM2 waJjr1kebV9uS42NBCaCZ9dbI9pc6Skoj6TRNYqWjsJgoAhC/gpGjJQ3LabzCk5xrg HAHIDZ4sYMTysWoMuL2E6cW9h5t0Si7K4i6UfzpcreS8WDb0zxtmdKs5p0qUSItNJi dsGljiMawT+3g== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 29/60] arm64: kaslr: Use feature override instead of parsing the cmdline again Date: Tue, 7 Mar 2023 15:04:51 +0100 Message-Id: <20230307140522.2311461-30-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2065; i=ardb@kernel.org; h=from:subject; bh=CV00tA1owwhL5iVLNA3OX5N4V06U/3a14L0hLSw4Z+0=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxaM6dNJ0j9fn1VSuNdiv/y/8yulq9uHsxeWfdtwxO dlhYHmyo5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAEzkvTzDP2uZq+s4j4peCPzB sHbatzs7jTKWn111LiPdQO6lkU8Ttx7DT8as1lfHnjkEeJQnzWt8rHG9KujOz9a9G0w/H7o93WP aN14A X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060749_386492_CC7E7AF6 X-CRM114-Status: GOOD ( 15.22 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The 
early kaslr code open codes the detection of 'nokaslr' on the kernel command line, and this is no longer necessary now that the feature detection code, which also looks for the same string, executes before this code. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/pi/kaslr_early.c | 53 +------------------- 1 file changed, 1 insertion(+), 52 deletions(-) diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c index 167081b30a152d0a..f2305e276ec36803 100644 --- a/arch/arm64/kernel/pi/kaslr_early.c +++ b/arch/arm64/kernel/pi/kaslr_early.c @@ -16,57 +16,6 @@ #include #include -/* taken from lib/string.c */ -static char *__init __strstr(const char *s1, const char *s2) -{ - size_t l1, l2; - - l2 = strlen(s2); - if (!l2) - return (char *)s1; - l1 = strlen(s1); - while (l1 >= l2) { - l1--; - if (!memcmp(s1, s2, l2)) - return (char *)s1; - s1++; - } - return NULL; -} -static bool __init cmdline_contains_nokaslr(const u8 *cmdline) -{ - const u8 *str; - - str = __strstr(cmdline, "nokaslr"); - return str == cmdline || (str > cmdline && *(str - 1) == ' '); -} - -static bool __init is_kaslr_disabled_cmdline(void *fdt) -{ - if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) { - int node; - const u8 *prop; - - node = fdt_path_offset(fdt, "/chosen"); - if (node < 0) - goto out; - - prop = fdt_getprop(fdt, node, "bootargs", NULL); - if (!prop) - goto out; - - if (cmdline_contains_nokaslr(prop)) - return true; - - if (IS_ENABLED(CONFIG_CMDLINE_EXTEND)) - goto out; - - return false; - } -out: - return cmdline_contains_nokaslr(CONFIG_CMDLINE); -} - static u64 __init get_kaslr_seed(void *fdt) { static char const chosen_str[] __initconst = "chosen"; @@ -92,7 +41,7 @@ asmlinkage u64 __init kaslr_early_init(void *fdt) { u64 seed, range; - if (is_kaslr_disabled_cmdline(fdt)) + if (kaslr_disabled_cmdline()) return 0; seed = get_kaslr_seed(fdt); From patchwork Tue Mar 7 14:04:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163778 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1EDE6C678D5 for ; Tue, 7 Mar 2023 14:37:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=7qR+cY6ZrHE90aIjqY805Rta8sQwAFH08Mj3NshIDug=; b=jJV553+fzma/7y hvEI2lq9LzlLi7YMyzqVCEgWxaL/Czp5jQdwSgEUCz563Xeh3G1dsegVus0xwXozjHN5cTmrkgLVH yGDkuqTMpdkzwUd31a2ca/FjZ1vW0mJUKatHcZBsCL/WCzySYdJED3A+7sPfMB/VINs8pMcxWrUpr Ek/KuRCdWhvvWZNrlPfzu50xOW5MgX1ZDAu54bPNyEWNA2xIrduY6udvqlDeaujslWVxzDAnUWCYF sPq51cDEdoloY+udqkPv5nPvE3nmHiBOLq7bq9WJq43LQJT3L35DcyoR5TnbiCFKZOdXMc+IkBBFN INs0/THEo+q73svMFF0w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYQ6-000qoQ-FU; Tue, 07 Mar 2023 14:36:02 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by 
bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyp-000eU0-3i for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:52 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A0A2561454; Tue, 7 Mar 2023 14:07:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 783E6C4339C; Tue, 7 Mar 2023 14:07:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198070; bh=MgAlosplvEwsGH+1fNbCRsUtWRN8yNAPEnQin/pEaZM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qRF8W2LQV8FQJYsf48awPPQKBb2WBFb3V8LC3NgRdG6rMtcklRKsz4b3DhAdSx0kB UieAWGEFDo/F15RpQrialQSBIZ4A/yFGOmOJm5JakllD6YKbAqsrrLsSZF0XxDbzqW h34C0isa4ikt652AMToJIHGXAxYG4gfkj9B6zrMCRUCjHuBXPPWsH+64wuUg/Gfsl2 v4pwULIaBv/OF80BI+78CMNC2TmMEQZv/EpggJpA73TQPSIlKnB0iZnBYSsOrcHma/ 0Zr82T3auhRB2Hi08WyE8YQsqmvroFGOpkwVOyKzLtYGt/as+Jpb9u/FGnkPRToLtu pkYFUxVCAe75A== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 30/60] arm64: idreg-override: Create a pseudo feature for rodata=off Date: Tue, 7 Mar 2023 15:04:52 +0100 Message-Id: <20230307140522.2311461-31-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1627; i=ardb@kernel.org; h=from:subject; bh=MgAlosplvEwsGH+1fNbCRsUtWRN8yNAPEnQin/pEaZM=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxeubg3x1yZqWt9NVJ1zmKVvi+f2c1RTpQ2lX0l/IZ t/tZhbqKGVhEONgkBVTZBGY/ffdztMTpWqdZ8nCzGFlAhnCwMUpABOZ/4rhf+b3hX7sqaL/Rb63 9b8I230r59bsmGvaHmvvWyUlx3/OOsjwP4/v38b9G/xuOedYuxTHsAe813efwa0ZviyLPePWw8j 7nAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060751_269291_5529C7B7 X-CRM114-Status: GOOD ( 14.08 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add rodata=off to the set of kernel command line options that is parsed early using the CPU feature override detection code, so we can easily refer to it when creating the kernel mapping. 
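
A minimal standalone sketch of the val/mask override encoding this relies on: the 4-bit field layout mirrors the patch (nokaslr at shift 0, rodataoff at shift 4), while set_field() and field_is() are illustrative stand-ins rather than the kernel's own helpers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ARM64_SW_FEATURE_OVERRIDE_NOKASLR    0	/* existing field */
#define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF 4	/* added by this patch */

struct ftr_override {
	uint64_t val;	/* field values forced via the command line */
	uint64_t mask;	/* which 4-bit fields actually carry an override */
};

/* Record a 4-bit field, roughly as the idreg-override parser would for "...=1". */
static void set_field(struct ftr_override *o, unsigned int shift, uint64_t v)
{
	o->val  |= (v & 0xf) << shift;
	o->mask |= 0xfULL << shift;
}

/* True only if the field was overridden and holds the expected value. */
static bool field_is(const struct ftr_override *o, unsigned int shift, uint64_t v)
{
	return ((o->mask >> shift) & 0xf) && ((o->val >> shift) & 0xf) == v;
}

int main(void)
{
	struct ftr_override sw = { 0, 0 };

	/* The patch aliases "rodata=off" to "arm64_sw.rodataoff=1". */
	set_field(&sw, ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, 1);

	printf("rodata disabled on cmdline: %s\n",
	       field_is(&sw, ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, 1) ? "yes" : "no");
	return 0;
}

Built as an ordinary user space program this prints "yes", which is the condition a later mapping patch can test in order to keep the kernel mapping writable when rodata=off is passed.
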
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/cpufeature.h | 1 + arch/arm64/kernel/pi/idreg-override.c | 2 ++ 2 files changed, 3 insertions(+) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index bc10098901808c00..edc7733aa49846b2 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -16,6 +16,7 @@ #define cpu_feature(x) KERNEL_HWCAP_ ## x #define ARM64_SW_FEATURE_OVERRIDE_NOKASLR 0 +#define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF 4 #ifndef __ASSEMBLY__ diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c index 4e76db6eb72c2087..6c547cccaf6a9e9c 100644 --- a/arch/arm64/kernel/pi/idreg-override.c +++ b/arch/arm64/kernel/pi/idreg-override.c @@ -151,6 +151,7 @@ static const struct ftr_set_desc sw_features __prel64_initconst = { .override = &arm64_sw_feature_override, .fields = { FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL), + FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, NULL), {} }, }; @@ -183,6 +184,7 @@ static const struct { "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0" }, { "arm64.nomte", "id_aa64pfr1.mte=0" }, { "nokaslr", "arm64_sw.nokaslr=1" }, + { "rodata=off", "arm64_sw.rodataoff=1" }, }; static int __init parse_hexdigit(const char *p, u64 *v) From patchwork Tue Mar 7 14:04:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163779 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0196C6FD1A for ; Tue, 7 Mar 2023 14:38:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=K6GPLU3HaZsJ+V8mnGxtZwm9m1b+sfsdfO1Cb42wMvs=; b=e52s0LmODFkvm7 eR5U9HjqHGFMw5RsVIeCinIUHoQLoPVQps/fJkWT5UyZf8f6c/BoOEZA2yZkwfL0GWXB0oPt+1nPq UjhBe8+LF70XNpgHXI9nF9g52OXYX8pdi4fMv0hDp6zKqbODKztMUd0OIHZgok44viKpCL4PQG61Q tGjD6oprtLpj5wUQVf5Aq6IJt0IRbsQY0agKCYLBOh/qcuSNxWF1hFHdz3xI5w3Gdg0+bqDJvljBM Coge0NuP0njfPoDetlO9euA/susijo8SInVkTT71Uh2orDJ2dCmren9hs0gqbwB0Lb3OhfMPnLYlF clQ2/EJbKssIDG3qJ0aQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYRH-000rNL-H0; Tue, 07 Mar 2023 14:37:17 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyr-000eVc-ST for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:55 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 6C91461454; Tue, 7 Mar 2023 14:07:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DBE50C4339E; Tue, 7 Mar 2023 14:07:50 +0000 (UTC) DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198072; bh=R6myqakIkwiwpNZGWu/wqgHZyBX0rotu/zhA9jFg+14=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oqDiStOJRd9+NhvoSAIywVDmpCjynLcL+RYWa93nYJQM/oBShIoyF+U+KiBwSIxef HZPWxpJwJ0h8AoW+RPyxOSHEKwbDBL7Zs5urldwQuygv8OA5dXH27JlhVnHdBOcO3Y 4cqEarAi4uvZn6KAOQ2haI2KpetS0thitH4boR8rf4DTjVwhPjTEP5mG8DnJanY4Vp HeD5HuyMbpLDkt4GuFezCNDmvlTli6v5JDrTu9f2BcYtfyAzZ5v+S6+EgailWPFbo3 QUFwMWhTVBbLpuHyN0hcxDFh+jVf8pWv+2kHVoEplNcb475NlItufqevmsIr75Ry6Q 0tKA3kmN1qGSQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 31/60] arm64: Add helpers to probe local CPU for PAC/BTI/E0PD support Date: Tue, 7 Mar 2023 15:04:53 +0100 Message-Id: <20230307140522.2311461-32-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3104; i=ardb@kernel.org; h=from:subject; bh=R6myqakIkwiwpNZGWu/wqgHZyBX0rotu/zhA9jFg+14=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxUeJM+v9htPWPAbaa45Ocnnovf8M67tZp/etWeLDv 83l0I/2jlIWBjEOBlkxRRaB2X/f7Tw9UarWeZYszBxWJpAhDFycAjCRu3KMDLdPXgmcE6l4dO20 3TNdzz3rWNma+qhwzxWDmaFPuErKG4sY/rt++3xvwdHF+27M/Vj0cfuB7VeFZIK+hWTPWNmxZId DaSUXAA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060754_050941_FF3EF001 X-CRM114-Status: GOOD ( 16.61 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add some helpers that will be used by the early kernel mapping code to check feature support on the local CPU. This permits the early kernel mapping to be created with the right attributes, removing the need for tearing it down and recreating it. 
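
As a hedged sketch of how such a probe can feed into the early mapping attributes (this is not the mapping code added later in the series, and early_kernel_text_prot() is a hypothetical wrapper): on a BTI-capable boot CPU the kernel text mapping wants the guarded-page attribute from the outset, e.g.

/* Hypothetical caller: pick text permissions for the early kernel mapping. */
static pgprot_t early_kernel_text_prot(void)
{
	pgprot_t prot = PAGE_KERNEL_ROX;

	/* BTI-capable CPUs need the guarded-page bit set on kernel text. */
	if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && cpu_has_bti())
		prot = __pgprot(pgprot_val(prot) | PTE_GP);

	return prot;
}

PAGE_KERNEL_ROX, PTE_GP, IS_ENABLED() and pgprot_val() are existing arm64/kernel definitions and cpu_has_bti() is introduced by this patch; only the wrapper itself is invented for illustration.
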
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/cpufeature.h | 56 ++++++++++++++++++++ arch/arm64/kernel/cpufeature.c | 12 +---- 2 files changed, 57 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index edc7733aa49846b2..edefe3b36fe5c243 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -933,6 +933,62 @@ extern struct arm64_ftr_override arm64_sw_feature_override; u32 get_kvm_ipa_limit(void); void dump_cpu_features(void); +static inline bool cpu_has_bti(void) +{ + u64 pfr1; + + if (!IS_ENABLED(CONFIG_ARM64_BTI)) + return false; + + pfr1 = read_cpuid(ID_AA64PFR1_EL1); + pfr1 &= ~id_aa64pfr1_override.mask; + pfr1 |= id_aa64pfr1_override.val; + + return cpuid_feature_extract_unsigned_field(pfr1, + ID_AA64PFR1_EL1_BT_SHIFT); +} + +static inline bool cpu_has_e0pd(void) +{ + u64 mmfr2; + + if (!IS_ENABLED(CONFIG_ARM64_E0PD)) + return false; + + mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1); + return cpuid_feature_extract_unsigned_field(mmfr2, + ID_AA64MMFR2_EL1_E0PD_SHIFT); +} + +static inline bool cpu_has_pac(void) +{ + u64 isar1, isar2; + u8 feat; + + if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH)) + return false; + + isar1 = read_cpuid(ID_AA64ISAR1_EL1); + isar1 &= ~id_aa64isar1_override.mask; + isar1 |= id_aa64isar1_override.val; + feat = cpuid_feature_extract_unsigned_field(isar1, + ID_AA64ISAR1_EL1_APA_SHIFT); + if (feat) + return true; + + feat = cpuid_feature_extract_unsigned_field(isar1, + ID_AA64ISAR1_EL1_API_SHIFT); + if (feat) + return true; + + isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1); + isar2 &= ~id_aa64isar2_override.mask; + isar2 |= id_aa64isar2_override.val; + feat = cpuid_feature_extract_unsigned_field(isar2, + ID_AA64ISAR2_EL1_APA3_SHIFT); + return feat; +} + #endif /* __ASSEMBLY__ */ #endif diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 0b16e676b68c6543..9838934fee028bcb 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1609,21 +1609,11 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope) */ bool kaslr_requires_kpti(void) { - if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE)) - return false; - /* * E0PD does a similar job to KPTI so can be used instead * where available. 
*/ - if (IS_ENABLED(CONFIG_ARM64_E0PD)) { - u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1); - if (cpuid_feature_extract_unsigned_field(mmfr2, - ID_AA64MMFR2_EL1_E0PD_SHIFT)) - return false; - } - - return kaslr_enabled(); + return kaslr_enabled() && !cpu_has_e0pd(); } static bool __meltdown_safe = true; From patchwork Tue Mar 7 14:04:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163801 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 45BD1C678D4 for ; Tue, 7 Mar 2023 14:39:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+pqHTp0iCRYFlzhp0RliQjj7dWOq3NiWmM+dN0BMvEQ=; b=z9LAIF3+Nm3Itm Ij4SezwckfgS6nSE8hhCQEcU1mnEUyR8U4NlJk8kG5WyI/zCTx8xcdMtXazam4fHlD4kIvgeD+94j ClUopyYTnA+EJWKhWKKkuob5keO3npt1oxpznp25RrzCzQJK6XtOnW9ZAyxb6FC/Oswy39xv8Pw5w gpAp9MAr+RQlE5qGHLjf3FcwYGftU6UnhUl+Q/WeWFSVhjPz/0lhzuqL3o2EiYsiD1CFdiPgbmaLl z5Bvj8eMzFpAj1iKE83Z+08Q7Pgib+02Lj8P/0qkhrgRM6BoYnjnAZGx/suYppjrWdmisyAgXi4aC 2u5oPkL0T+VCEaQpI8Lg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYSZ-000rwn-HO; Tue, 07 Mar 2023 14:38:35 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyv-000eXD-QT for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:07:59 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 81465B818FA; Tue, 7 Mar 2023 14:07:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4A1C9C433A4; Tue, 7 Mar 2023 14:07:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198075; bh=fcy++RWuRbBBHw56QOT+oXIgL+AugLZxq+IvNCB50sk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hUzHbJjf7kiWWmaqsrzJIjXwiaG8HD504qAbcJqXcK3SWWQQxmDlHqAIcytgLAW40 sMRCt+PKe8J0HHpiGK98GdPjdByLrIscwhPG2PJye2lUYjEdADOZ4doOMUVc1w2VfF zt6fvfJ3QvtrKolEzUbx0QjVE6vL+32NhrmKy4xb2RKDCQ6b4oANRXdvbIW5Grik3D bf+gYK2IoGv1ZypSGeDo/wFSeW0HM8JCU/nLv/r7u54vHarO7cWdSt9O3gE8T1YVCM wSFSLhCSQk+u1Vdc2or8y0buQ3u3hfDl1MU9MtIyiNmob+PR3M4ReWTrPuCU7JDljG fvX1vLaYgBc2A== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 32/60] arm64: head: allocate more pages for the kernel mapping Date: Tue, 7 Mar 2023 15:04:54 +0100 Message-Id: <20230307140522.2311461-33-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: 
<20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1825; i=ardb@kernel.org; h=from:subject; bh=fcy++RWuRbBBHw56QOT+oXIgL+AugLZxq+IvNCB50sk=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdxf+jbe4jxmXOrYUVzf6Klzkn9cy+dCUs8Fz0DX/hv xdNHas6SlkYxDgYZMUUWQRm/3238/REqVrnWbIwc1iZQIYwcHEKwERm3mD4X+G0QT9N4bjxaTfb cxdXH/CbIy+ueSZ4s7gQm1LBrOYP0xkZti5JqlnnNfuFwKR9htP5pt23m7i4yU5C/uDxuQ2PRRN mMwAA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060758_076503_B3A16457 X-CRM114-Status: GOOD ( 15.20 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for switching to an early kernel mapping routine that maps each segment according to its precise boundaries, and with the correct attributes, let's allocate some extra pages for page tables for the 4k page size configuration. This is necessary because the start and end of each segment may not be aligned to the block size, and so we'll need an extra page table at each segment boundary. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/kernel-pgtable.h | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index 4d13c73171e1e360..50b5c145358a5d8e 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -80,7 +80,7 @@ + EARLY_PGDS((vstart), (vend), add) /* each PGDIR needs a next level page table */ \ + EARLY_PUDS((vstart), (vend), add) /* each PUD needs a next level page table */ \ + EARLY_PMDS((vstart), (vend), add)) /* each PMD needs a next level page table */ -#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR)) +#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES)) /* the initial ID map may need two extra pages if it needs to be extended */ #if VA_BITS < 48 @@ -101,6 +101,15 @@ #define SWAPPER_TABLE_SHIFT PMD_SHIFT #endif +/* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */ +#define KERNEL_SEGMENT_COUNT 5 + +#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN +#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1) +#else +#define EARLY_SEGMENT_EXTRA_PAGES 0 +#endif + /* * Initial memory map attributes. 
*/ From patchwork Tue Mar 7 14:04:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163800 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8F3E8C6FD1B for ; Tue, 7 Mar 2023 14:39:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=5ntq+NeBBzVX8oS8Avnj7dJc7aFMO/9+kFZ9qtBWpHs=; b=kc3asWNUISDmnu QCCtcJmsoqcIIsM+S8maK7OXIi3Gg9Otd5uVBJCVelXaTpmC1ZI55EqRu55inBE5SfsShk4Mj9j0H h3e4EueYrrm1VEYWyTSRuRiOgPSc+cyb0NKMh5ObPkx9GJJs3Z37+qe5xOUei4gNG8u4nppPOC3wb QzMM3sUrQBN4dMCOlppKoHrWRa2MggVBE8MPtWTcs1OQsBHhKf7Or6xkhjPb74ZfUXl0i+Cbv5e8s aD6xnBY3qb3ZhsrCQz3Igc4gA07CySNg97OZIpSlmyvyhynqSpFk2ajJ83/RJ2Nf2frmCWFeqEN8/ k7AtK01hhQXL9RKVcRqw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYS6-000rko-60; Tue, 07 Mar 2023 14:38:06 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyw-000eY3-L8 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:00 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3F956613CA; Tue, 7 Mar 2023 14:07:58 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AD50DC433A7; Tue, 7 Mar 2023 14:07:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198077; bh=5NPb4B7DLXdRfGBUFKwsc4EGZ/Nj6237LOVxsKjUH1g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=W3W34T8rh8RZYuV/7rNzp72cLllzIuLaA7QzKPle0lj3WaPJptvdI2zioMoVebCWv bu6NR7/hneCcKdNele7Hxe2Eht+ZulSb4D1LuQJzHiXJKJFmdyexvM3cuRblMpRT93 qJY6/bWJOyq7Gu6ISZwExtgpUT66owY03eGWzjdlWU3lwMCqGaLmNl7zNtBty4kh9X s9nzzGonQVCPxOdL/E/pJ9E6wz3uzPLKxlAnE4o/65mPCbQkx3uQDEcHXj6cEv5Aoy ADhLH5hUPmfpaZm8Z4z4ImiFtbEv44lNXq61crSY0xUWHWiy3vU8qCHqYV97q+71wo WplSaTarwOAvA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 33/60] arm64: head: move memstart_offset_seed handling to C code Date: Tue, 7 Mar 2023 15:04:55 +0100 Message-Id: <20230307140522.2311461-34-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3180; i=ardb@kernel.org; h=from:subject; bh=5NPb4B7DLXdRfGBUFKwsc4EGZ/Nj6237LOVxsKjUH1g=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJbBs68xHV729dDdOMVjnwHtLdeu8HsflazifWT7bG 
53gVbKoo5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAExEjY/hr1j7tCbOj3lvr/Re 3sdScD7jHyeXTj9rdL2LaMtLd8YjUgx/OFvaFKPczIS5K8IX2SU0frvSudJ5Qo3HKvdDfaUvghO 4AA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060758_816714_10124D7E X-CRM114-Status: GOOD ( 15.38 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that we can set BSS variables from the early code running from the ID map, we can set memstart_offset_seed directly from the C code that derives the value instead of passing it back and forth between C and asm code. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 7 ------- arch/arm64/kernel/image-vars.h | 1 + arch/arm64/kernel/pi/kaslr_early.c | 4 ++++ 3 files changed, 5 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 70ad180eed364906..81c2dd06420992ea 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -82,7 +82,6 @@ * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x22 create_idmap() .. start_kernel() ID map VA of the DT blob * x23 __primary_switch() physical misalignment/KASLR offset - * x24 __primary_switch() linear map KASLR seed * x25 primary_entry() .. start_kernel() supported VA size * x28 create_idmap() callee preserved temp register */ @@ -483,11 +482,6 @@ SYM_FUNC_START_LOCAL(__primary_switched) str x25, [x8] // ... observes the correct value dc civac, x8 // Make visible to booting secondaries #endif - -#ifdef CONFIG_RANDOMIZE_BASE - adrp x5, memstart_offset_seed // Save KASLR linear map seed - strh w24, [x5, :lo12:memstart_offset_seed] -#endif #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init #endif @@ -777,7 +771,6 @@ SYM_FUNC_START_LOCAL(__primary_switch) #ifdef CONFIG_RANDOMIZE_BASE mov x0, x22 bl __pi_kaslr_early_init - and x24, x0, #SZ_2M - 1 // capture memstart offset seed bic x0, x0, #SZ_2M - 1 orr x23, x23, x0 // record kernel offset #endif diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 5aa914ea919a1149..b7fa7fbf8fa543a6 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -44,6 +44,7 @@ PROVIDE(__pi_id_aa64smfr0_override = id_aa64smfr0_override); PROVIDE(__pi_id_aa64zfr0_override = id_aa64zfr0_override); PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); PROVIDE(__pi__ctype = _ctype); +PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); #ifdef CONFIG_KVM diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c index f2305e276ec36803..eeecee7ffd6fa125 100644 --- a/arch/arm64/kernel/pi/kaslr_early.c +++ b/arch/arm64/kernel/pi/kaslr_early.c @@ -16,6 +16,8 @@ #include #include +extern u16 memstart_offset_seed; + static u64 __init get_kaslr_seed(void *fdt) { static char const chosen_str[] __initconst = "chosen"; @@ -51,6 +53,8 @@ asmlinkage u64 __init kaslr_early_init(void *fdt) return 0; } + memstart_offset_seed = seed & U16_MAX; + /* * OK, so we are proceeding with KASLR enabled. Calculate a suitable * kernel image offset from the seed. 
Let's place the kernel in the From patchwork Tue Mar 7 14:04:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163802 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 92666C678D4 for ; Tue, 7 Mar 2023 14:40:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WwXc2N/eajLYLMZBUHSa2XHXiho9/GimVtJNINgnk3Q=; b=TxW3ALVjc6pUBO J5Tz/ujtpmjhlQWShsQgHw/YAOYgeqaliPcsdcd7rUKpUqhIhXYY8ETwK/RpCI7blScfSBVhLo4tp EwnyLx2EHMShQckkyovVPnpWi1pcnspITPmw+YGI8cHsA9kjvY214VzMN1xv49HfZCYNeUvygy8FP 1cAkiWDnOqpkHfCvo26sAOm/XB+SUoHZRzwQKpgzTbOCE8J91QH1LTaLraA5A9mPwBqtA7zCLMc0P CVg8aCn3/3zbxB8JsSrMlpZYNVvEP5kKwoTZyuE3Z/qZQs7BNng0yDKpViEMUhrUSIrVUCNHka5r2 LIRptuifNTL7C0UgGYkA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYTW-000sRh-8L; Tue, 07 Mar 2023 14:39:34 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXyz-000eZI-Aw for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:05 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id CBEB761457; Tue, 7 Mar 2023 14:08:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1EDF8C4339C; Tue, 7 Mar 2023 14:07:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198080; bh=YQWbS+XJ/JSBrJ9hB1PhXjdrQkCZwn5zDRPI22xFlB8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Uu2tlBVsoW+wPyD231S1oJeski1njGoCs3V4l0uwcUuaOKRljcgXVsZw6xnqaPGZc T+fwIU7XBdPo+ihvRgPLntoBXTOxCUsyxRqXIddcmArt7y5+tim9Y+x+5SgC0NCetg hkZKiuL7VQwaGwDUMUGcPMBNSCYGYWv1sucqvQMXsT/kDyLTqlpY7L5whRPVvkl/a7 3EoB9DodaQh7+h2XuQ0Q6CDbRpxx250roGSfdmA1/ItFftgHXmYpM8ZFQJmCvXckpm PJjxufWQ6G4zfduOlvklwHtkrI0Mmfi10YnR4rDojosx8B1n3a9iknJIYjUpENxlRA bWD0GTNanerXA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 34/60] arm64: head: Move early kernel mapping routines into C code Date: Tue, 7 Mar 2023 15:04:56 +0100 Message-Id: <20230307140522.2311461-35-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=24410; i=ardb@kernel.org; h=from:subject; bh=YQWbS+XJ/JSBrJ9hB1PhXjdrQkCZwn5zDRPI22xFlB8=; 
b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJbQkX+la1dlXEZOPSa0X2/dZJGf+z293Q67v/2LC0 v327KJJHaUsDGIcDLJiiiwCs/++23l6olSt8yxZmDmsTCBDGLg4BWAiFisZGXYdcjBRW7fqtUHi ZsXiTeU6wlNEXM5nznmcfmz2urcfj+cw/JUOFVsd9bXmwNz+nMkv5rMdfXApTP+G+upbjV4HNGR e5jACAA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060801_518667_42FF46D0 X-CRM114-Status: GOOD ( 29.53 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The asm version of the kernel mapping code works fine for creating a coarse grained identity map, but for mapping the kernel down to its exact boundaries with the right attributes, it is not suitable. This is why we create a preliminary RWX kernel mapping first, and then rebuild it from scratch later on. So let's reimplement this in C, in a way that will make it unnecessary to create the kernel page tables yet another time in paging_init(). Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/mmu.h | 1 - arch/arm64/include/asm/scs.h | 32 +--- arch/arm64/kernel/cpufeature.c | 2 +- arch/arm64/kernel/head.S | 52 +----- arch/arm64/kernel/image-vars.h | 16 ++ arch/arm64/kernel/pi/Makefile | 1 + arch/arm64/kernel/pi/idreg-override.c | 22 ++- arch/arm64/kernel/pi/kaslr_early.c | 12 +- arch/arm64/kernel/pi/map_kernel.c | 171 ++++++++++++++++++++ arch/arm64/kernel/pi/map_range.c | 87 ++++++++++ arch/arm64/kernel/pi/patch-scs.c | 16 +- arch/arm64/kernel/pi/pi.h | 12 ++ arch/arm64/kernel/pi/relocate.c | 2 + arch/arm64/kernel/setup.c | 7 - arch/arm64/kernel/vmlinux.lds.S | 4 +- arch/arm64/mm/proc.S | 1 + 16 files changed, 317 insertions(+), 121 deletions(-) diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index 9b9a206e4e9c9d4e..e74dfae8e48214c3 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -70,7 +70,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys, pgprot_t prot, bool page_mappings_only); extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot); extern void mark_linear_text_alias_ro(void); -extern bool kaslr_requires_kpti(void); static inline unsigned long kaslr_offset(void) { diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h index bcf8ad574807b82c..f246b1515fd59507 100644 --- a/arch/arm64/include/asm/scs.h +++ b/arch/arm64/include/asm/scs.h @@ -33,37 +33,11 @@ #include #ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS -static inline bool should_patch_pac_into_scs(void) -{ - u64 reg; - - /* - * We only enable the shadow call stack dynamically if we are running - * on a system that does not implement PAC or BTI. PAC and SCS provide - * roughly the same level of protection, and BTI relies on the PACIASP - * instructions serving as landing pads, preventing us from patching - * those instructions into something else. 
- */ - reg = read_sysreg_s(SYS_ID_AA64ISAR1_EL1); - if (SYS_FIELD_GET(ID_AA64ISAR1_EL1, APA, reg) | - SYS_FIELD_GET(ID_AA64ISAR1_EL1, API, reg)) - return false; - - reg = read_sysreg_s(SYS_ID_AA64ISAR2_EL1); - if (SYS_FIELD_GET(ID_AA64ISAR2_EL1, APA3, reg)) - return false; - - if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) { - reg = read_sysreg_s(SYS_ID_AA64PFR1_EL1); - if (reg & (0xf << ID_AA64PFR1_EL1_BT_SHIFT)) - return false; - } - return true; -} - static inline void dynamic_scs_init(void) { - if (should_patch_pac_into_scs()) { + extern bool __pi_dynamic_scs_is_enabled; + + if (__pi_dynamic_scs_is_enabled) { pr_info("Enabling dynamic shadow call stack\n"); static_branch_enable(&dynamic_scs_enabled); } diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 9838934fee028bcb..e9788671be044a47 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1607,7 +1607,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope) * state once the SMP CPUs are up and thus make the switch to non-global * mappings if required. */ -bool kaslr_requires_kpti(void) +static bool kaslr_requires_kpti(void) { /* * E0PD does a similar job to KPTI so can be used instead diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 81c2dd06420992ea..e45fd99e8ab4272a 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -81,7 +81,6 @@ * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x22 create_idmap() .. start_kernel() ID map VA of the DT blob - * x23 __primary_switch() physical misalignment/KASLR offset * x25 primary_entry() .. start_kernel() supported VA size * x28 create_idmap() callee preserved temp register */ @@ -408,24 +407,6 @@ SYM_FUNC_START_LOCAL(create_idmap) 0: ret x28 SYM_FUNC_END(create_idmap) -SYM_FUNC_START_LOCAL(create_kernel_mapping) - adrp x0, init_pg_dir - mov_q x5, KIMAGE_VADDR // compile time __va(_text) -#ifdef CONFIG_RELOCATABLE - add x5, x5, x23 // add KASLR displacement -#endif - adrp x6, _end // runtime __pa(_end) - adrp x3, _text // runtime __pa(_text) - sub x6, x6, x3 // _end - _text - add x6, x6, x5 // runtime __va(_end) - mov x7, SWAPPER_RW_MMUFLAGS - - map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14 - - dsb ishst // sync with page table walker - ret -SYM_FUNC_END(create_kernel_mapping) - /* * Initialize CPU registers with task-specific and cpu-specific context. 
* @@ -750,44 +731,13 @@ SYM_FUNC_START_LOCAL(__primary_switch) adrp x2, init_idmap_pg_dir bl __enable_mmu - // Clear BSS - adrp x0, __bss_start - mov x1, xzr - adrp x2, init_pg_end - sub x2, x2, x0 - bl __pi_memset - dsb ishst // Make zero page visible to PTW - adrp x1, early_init_stack mov sp, x1 mov x29, xzr mov x0, x20 // pass the full boot status mov x1, x22 // pass the low FDT mapping - bl __pi_init_feature_override // Parse cpu feature overrides - -#ifdef CONFIG_RELOCATABLE - adrp x23, KERNEL_START - and x23, x23, MIN_KIMG_ALIGN - 1 -#ifdef CONFIG_RANDOMIZE_BASE - mov x0, x22 - bl __pi_kaslr_early_init - bic x0, x0, #SZ_2M - 1 - orr x23, x23, x0 // record kernel offset -#endif -#endif - bl create_kernel_mapping + bl __pi_early_map_kernel // Map and relocate the kernel - adrp x1, init_pg_dir - load_ttbr1 x1, x1, x2 -#ifdef CONFIG_RELOCATABLE - mov x0, x23 - bl __pi_relocate_kernel -#endif -#ifdef CONFIG_UNWIND_PATCH_PAC_INTO_SCS - ldr x0, =__eh_frame_start - ldr x1, =__eh_frame_end - bl __pi_scs_patch_vmlinux -#endif ldr x8, =__primary_switched adrp x0, KERNEL_START // __pa(KERNEL_START) br x8 diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index b7fa7fbf8fa543a6..6eeb2a09c8d2441d 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -43,9 +43,25 @@ PROVIDE(__pi_id_aa64pfr1_override = id_aa64pfr1_override); PROVIDE(__pi_id_aa64smfr0_override = id_aa64smfr0_override); PROVIDE(__pi_id_aa64zfr0_override = id_aa64zfr0_override); PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); +PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings); PROVIDE(__pi__ctype = _ctype); PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); +PROVIDE(__pi_init_pg_dir = init_pg_dir); +PROVIDE(__pi_init_pg_end = init_pg_end); + +PROVIDE(__pi__text = _text); +PROVIDE(__pi__stext = _stext); +PROVIDE(__pi__etext = _etext); +PROVIDE(__pi___start_rodata = __start_rodata); +PROVIDE(__pi___inittext_begin = __inittext_begin); +PROVIDE(__pi___inittext_end = __inittext_end); +PROVIDE(__pi___initdata_begin = __initdata_begin); +PROVIDE(__pi___initdata_end = __initdata_end); +PROVIDE(__pi__data = _data); +PROVIDE(__pi___bss_start = __bss_start); +PROVIDE(__pi__end = _end); + #ifdef CONFIG_KVM /* diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index a8b302245f15326a..8c2f80a46b9387b0 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -39,6 +39,7 @@ $(obj)/lib-%.o: $(srctree)/lib/%.c FORCE $(call if_changed_rule,cc_o_c) obj-y := idreg-override.pi.o \ + map_kernel.pi.o map_range.pi.o \ lib-fdt.pi.o lib-fdt_ro.pi.o obj-$(CONFIG_RELOCATABLE) += relocate.pi.o obj-$(CONFIG_RANDOMIZE_BASE) += kaslr_early.pi.o diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c index 6c547cccaf6a9e9c..265b35b09dd488f1 100644 --- a/arch/arm64/kernel/pi/idreg-override.c +++ b/arch/arm64/kernel/pi/idreg-override.c @@ -301,37 +301,35 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases) } while (1); } -static __init const u8 *get_bootargs_cmdline(const void *fdt) +static __init const u8 *get_bootargs_cmdline(const void *fdt, int node) { + static char const bootargs[] __initconst = "bootargs"; const u8 *prop; - int node; - node = fdt_path_offset(fdt, "/chosen"); if (node < 0) return NULL; - prop = fdt_getprop(fdt, node, "bootargs", NULL); + prop = fdt_getprop(fdt, node, bootargs, NULL); if (!prop) return NULL; return strlen(prop) ? 
prop : NULL; } -static __init void parse_cmdline(const void *fdt) +static __init void parse_cmdline(const void *fdt, int chosen) { - const u8 *prop = get_bootargs_cmdline(fdt); + static char const cmdline[] __initconst = CONFIG_CMDLINE; + const u8 *prop = get_bootargs_cmdline(fdt, chosen); if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop) - __parse_cmdline(CONFIG_CMDLINE, true); + __parse_cmdline(cmdline, true); if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop) __parse_cmdline(prop, true); } -/* Keep checkers quiet */ -void init_feature_override(u64 boot_status, const void *fdt); - -asmlinkage void __init init_feature_override(u64 boot_status, const void *fdt) +void __init init_feature_override(u64 boot_status, const void *fdt, + int chosen) { struct arm64_ftr_override *override; const struct ftr_set_desc *reg; @@ -347,7 +345,7 @@ asmlinkage void __init init_feature_override(u64 boot_status, const void *fdt) __boot_status = boot_status; - parse_cmdline(fdt); + parse_cmdline(fdt, chosen); for (i = 0; i < ARRAY_SIZE(regs); i++) { reg = prel64_to_pointer(®s[i].reg_prel); diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c index eeecee7ffd6fa125..0257b43819db85eb 100644 --- a/arch/arm64/kernel/pi/kaslr_early.c +++ b/arch/arm64/kernel/pi/kaslr_early.c @@ -16,17 +16,17 @@ #include #include +#include "pi.h" + extern u16 memstart_offset_seed; -static u64 __init get_kaslr_seed(void *fdt) +static u64 __init get_kaslr_seed(void *fdt, int node) { - static char const chosen_str[] __initconst = "chosen"; static char const seed_str[] __initconst = "kaslr-seed"; - int node, len; fdt64_t *prop; u64 ret; + int len; - node = fdt_path_offset(fdt, chosen_str); if (node < 0) return 0; @@ -39,14 +39,14 @@ static u64 __init get_kaslr_seed(void *fdt) return ret; } -asmlinkage u64 __init kaslr_early_init(void *fdt) +u64 __init kaslr_early_init(void *fdt, int chosen) { u64 seed, range; if (kaslr_disabled_cmdline()) return 0; - seed = get_kaslr_seed(fdt); + seed = get_kaslr_seed(fdt, chosen); if (!seed) { if (!__early_cpu_has_rndr() || !__arm64_rndr((unsigned long *)&seed)) diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c new file mode 100644 index 0000000000000000..b573c964c7d88d1b --- /dev/null +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -0,0 +1,171 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright 2023 Google LLC +// Author: Ard Biesheuvel + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "pi.h" + +extern const u8 __eh_frame_start[], __eh_frame_end[]; + +extern void idmap_cpu_replace_ttbr1(void *pgdir); + +static void map_segment(pgd_t *pg_dir, u64 *pgd, u64 va_offset, + void *start, void *end, pgprot_t prot, + bool may_use_cont, int root_level) +{ + map_range(pgd, ((u64)start + va_offset) & ~PAGE_OFFSET, + ((u64)end + va_offset) & ~PAGE_OFFSET, (u64)start, + prot, root_level, (pte_t *)pg_dir, may_use_cont, 0); +} + +static void unmap_segment(pgd_t *pg_dir, u64 va_offset, void *start, + void *end, int root_level) +{ + map_segment(pg_dir, NULL, va_offset, start, end, __pgprot(0), + false, root_level); +} + +static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level) +{ + bool enable_scs = IS_ENABLED(CONFIG_UNWIND_PATCH_PAC_INTO_SCS); + bool twopass = IS_ENABLED(CONFIG_RELOCATABLE); + u64 pgdp = (u64)init_pg_dir + PAGE_SIZE; + pgprot_t text_prot = PAGE_KERNEL_ROX; + pgprot_t data_prot = PAGE_KERNEL; + pgprot_t prot; + + /* + * External debuggers may need to write 
directly to the text mapping to + * install SW breakpoints. Allow this (only) when explicitly requested + * with rodata=off. + */ + if (cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, + ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF)) + text_prot = PAGE_KERNEL_EXEC; + + /* + * We only enable the shadow call stack dynamically if we are running + * on a system that does not implement PAC or BTI. PAC and SCS provide + * roughly the same level of protection, and BTI relies on the PACIASP + * instructions serving as landing pads, preventing us from patching + * those instructions into something else. + */ + if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) && cpu_has_pac()) + enable_scs = false; + + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && cpu_has_bti()) { + enable_scs = false; + + /* + * If we have a CPU that supports BTI and a kernel built for + * BTI then mark the kernel executable text as guarded pages + * now so we don't have to rewrite the page tables later. + */ + text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP); + } + + /* Map all code read-write on the first pass if needed */ + twopass |= enable_scs; + prot = twopass ? data_prot : text_prot; + + map_segment(init_pg_dir, &pgdp, va_offset, _stext, _etext, prot, + !twopass, root_level); + map_segment(init_pg_dir, &pgdp, va_offset, __start_rodata, + __inittext_begin, data_prot, false, root_level); + map_segment(init_pg_dir, &pgdp, va_offset, __inittext_begin, + __inittext_end, prot, false, root_level); + map_segment(init_pg_dir, &pgdp, va_offset, __initdata_begin, + __initdata_end, data_prot, false, root_level); + map_segment(init_pg_dir, &pgdp, va_offset, _data, _end, data_prot, + true, root_level); + dsb(ishst); + + idmap_cpu_replace_ttbr1(init_pg_dir); + + if (twopass) { + if (IS_ENABLED(CONFIG_RELOCATABLE)) + relocate_kernel(kaslr_offset); + + if (enable_scs) { + scs_patch(__eh_frame_start + va_offset, + __eh_frame_end - __eh_frame_start); + asm("ic ialluis"); + + dynamic_scs_is_enabled = true; + } + + /* + * Unmap the text region before remapping it, to avoid + * potential TLB conflicts when creating the contiguous + * descriptors. + */ + unmap_segment(init_pg_dir, va_offset, _stext, _etext, + root_level); + dsb(ishst); + isb(); + __tlbi(vmalle1); + isb(); + + /* + * Remap these segments with different permissions + * No new page table allocations should be needed + */ + map_segment(init_pg_dir, NULL, va_offset, _stext, _etext, + text_prot, true, root_level); + map_segment(init_pg_dir, NULL, va_offset, __inittext_begin, + __inittext_end, text_prot, false, root_level); + dsb(ishst); + } +} + +asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) +{ + static char const chosen_str[] __initconst = "/chosen"; + u64 va_base, pa_base = (u64)&_text; + u64 kaslr_offset = pa_base % MIN_KIMG_ALIGN; + int root_level = 4 - CONFIG_PGTABLE_LEVELS; + int chosen; + + /* Clear BSS and the initial page tables */ + memset(__bss_start, 0, (u64)init_pg_end - (u64)__bss_start); + + /* Parse the command line for CPU feature overrides */ + chosen = fdt_path_offset(fdt, chosen_str); + init_feature_override(boot_status, fdt, chosen); + + /* + * The virtual KASLR displacement modulo 2MiB is decided by the + * physical placement of the image, as otherwise, we might not be able + * to create the early kernel mapping using 2 MiB block descriptors. So + * take the low bits of the KASLR offset from the physical address, and + * fill in the high bits from the seed. 
+ */ + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) { + u64 kaslr_seed = kaslr_early_init(fdt, chosen); + + /* + * Assume that any CPU that does not implement E0PD needs KPTI + * to ensure that KASLR randomized addresses will not leak. + * This means we need to use non-global mappings for the kernel + * text and data. + */ + if (kaslr_seed && !cpu_has_e0pd()) + arm64_use_ng_mappings = true; + + kaslr_offset |= kaslr_seed & ~(MIN_KIMG_ALIGN - 1); + } + + va_base = KIMAGE_VADDR + kaslr_offset; + map_kernel(kaslr_offset, va_base - pa_base, root_level); +} diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c new file mode 100644 index 0000000000000000..61cbd6e82418c033 --- /dev/null +++ b/arch/arm64/kernel/pi/map_range.c @@ -0,0 +1,87 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright 2023 Google LLC +// Author: Ard Biesheuvel + +#include +#include + +#include +#include +#include + +#include "pi.h" + +/** + * map_range - Map a contiguous range of physical pages into virtual memory + * + * @pte: Address of physical pointer to array of pages to + * allocate page tables from + * @start: Virtual address of the start of the range + * @end: Virtual address of the end of the range (exclusive) + * @pa: Physical address of the start of the range + * @level: Translation level for the mapping + * @tbl: The level @level page table to create the mappings in + * @may_use_cont: Whether the use of the contiguous attribute is allowed + * @va_offset: Offset between a physical page and its current mapping + * in the VA space + */ +void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot, + int level, pte_t *tbl, bool may_use_cont, u64 va_offset) +{ + u64 cmask = (level == 3) ? CONT_PTE_SIZE - 1 : U64_MAX; + u64 protval = pgprot_val(prot) & ~PTE_TYPE_MASK; + int lshift = (3 - level) * (PAGE_SHIFT - 3); + u64 lmask = (PAGE_SIZE << lshift) - 1; + + start &= PAGE_MASK; + pa &= PAGE_MASK; + + /* Advance tbl to the entry that covers start */ + tbl += (start >> (lshift + PAGE_SHIFT)) % PTRS_PER_PTE; + + /* + * Set the right block/page bits for this level unless we are + * clearing the mapping + */ + if (protval) + protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE; + + while (start < end) { + u64 next = min((start | lmask) + 1, PAGE_ALIGN(end)); + + if (level < 3 && (start | next | pa) & lmask) { + /* + * This chunk needs a finer grained mapping. Create a + * table mapping if necessary and recurse. 
+ */ + if (pte_none(*tbl)) { + *tbl = __pte(__phys_to_pte_val(*pte) | + PMD_TYPE_TABLE | PMD_TABLE_UXN); + *pte += PTRS_PER_PTE * sizeof(pte_t); + } + map_range(pte, start, next, pa, prot, level + 1, + (pte_t *)(__pte_to_phys(*tbl) + va_offset), + may_use_cont, va_offset); + } else { + /* + * Start a contiguous range if start and pa are + * suitably aligned + */ + if (((start | pa) & cmask) == 0 && may_use_cont) + protval |= PTE_CONT; + + /* + * Clear the contiguous attribute if the remaining + * range does not cover a contiguous block + */ + if ((end & ~cmask) <= start) + protval &= ~PTE_CONT; + + /* Put down a block or page mapping */ + *tbl = __pte(__phys_to_pte_val(pa) | protval); + } + pa += next - start; + start = next; + tbl++; + } +} diff --git a/arch/arm64/kernel/pi/patch-scs.c b/arch/arm64/kernel/pi/patch-scs.c index c65ef40d1e6b6b30..49d8b40e61bc050d 100644 --- a/arch/arm64/kernel/pi/patch-scs.c +++ b/arch/arm64/kernel/pi/patch-scs.c @@ -11,6 +11,10 @@ #include +#include "pi.h" + +bool dynamic_scs_is_enabled; + // // This minimal DWARF CFI parser is partially based on the code in // arch/arc/kernel/unwind.c, and on the document below: @@ -46,8 +50,6 @@ #define DW_CFA_GNU_negative_offset_extended 0x2f #define DW_CFA_hi_user 0x3f -extern const u8 __eh_frame_start[], __eh_frame_end[]; - enum { PACIASP = 0xd503233f, AUTIASP = 0xd50323bf, @@ -250,13 +252,3 @@ int scs_patch(const u8 eh_frame[], int size) } return 0; } - -asmlinkage void __init scs_patch_vmlinux(const u8 start[], const u8 end[]) -{ - if (!should_patch_pac_into_scs()) - return; - - scs_patch(start, end - start); - asm("ic ialluis"); - isb(); -} diff --git a/arch/arm64/kernel/pi/pi.h b/arch/arm64/kernel/pi/pi.h index f455ad385976a664..dfc70828ad0ad2a8 100644 --- a/arch/arm64/kernel/pi/pi.h +++ b/arch/arm64/kernel/pi/pi.h @@ -2,6 +2,8 @@ // Copyright 2023 Google LLC // Author: Ard Biesheuvel +#include + #define __prel64_initconst __section(".init.rodata.prel64") typedef volatile signed long prel64_t; @@ -12,3 +14,13 @@ static inline void *prel64_to_pointer(const prel64_t *offset) return NULL; return (void *)offset + *offset; } + +extern bool dynamic_scs_is_enabled; + +void init_feature_override(u64 boot_status, const void *fdt, int chosen); +u64 kaslr_early_init(void *fdt, int chosen); +void relocate_kernel(u64 offset); +int scs_patch(const u8 eh_frame[], int size); + +void map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot, + int level, pte_t *tbl, bool may_use_cont, u64 va_offset); diff --git a/arch/arm64/kernel/pi/relocate.c b/arch/arm64/kernel/pi/relocate.c index 1853408ea76b0e5d..2407d269639870fc 100644 --- a/arch/arm64/kernel/pi/relocate.c +++ b/arch/arm64/kernel/pi/relocate.c @@ -7,6 +7,8 @@ #include #include +#include "pi.h" + extern const Elf64_Rela rela_start[], rela_end[]; extern const u64 relr_start[], relr_end[]; diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index bda21a9245943c57..a4578c823ab57b6f 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -281,13 +281,6 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) *cmdline_p = boot_command_line; - /* - * If know now we are going to need KPTI then use non-global - * mappings from the start, avoiding the cost of rewriting - * everything later. 
- */ - arm64_use_ng_mappings = kaslr_requires_kpti(); - early_fixmap_init(); early_ioremap_init(); diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 6c79ad2945749260..e22bf17209c80ff9 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -126,9 +126,9 @@ jiffies = jiffies_64; #ifdef CONFIG_UNWIND_TABLES #define UNWIND_DATA_SECTIONS \ .eh_frame : { \ - __eh_frame_start = .; \ + __pi___eh_frame_start = .; \ *(.eh_frame) \ - __eh_frame_end = .; \ + __pi___eh_frame_end = .; \ } #else #define UNWIND_DATA_SECTIONS diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 91410f48809000a0..82e88f4521737c0e 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -195,6 +195,7 @@ SYM_TYPED_FUNC_START(idmap_cpu_replace_ttbr1) ret SYM_FUNC_END(idmap_cpu_replace_ttbr1) +SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) .popsection #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 From patchwork Tue Mar 7 14:04:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163803 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id ED419C678D5 for ; Tue, 7 Mar 2023 14:41:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=kC5HP/UFo747ZcqxG1ZKTuumj+9UaNCj7UXiCgZOQws=; b=zzsMMuhFPYGLm1 uz/ml7fk09PR3UpcKWQc39c+i5QFlZZMhA17gX+iTUISPVSS7kx3T9zTHxaFPtAKPxsq0pEkR89m2 usce4dvRLnmkQPtPcr+AAPMzxj9dXXXNSWCSfAcF2EoATDk5XDX7iujgyoaEaPVEBHNboyhS0JhK1 WGCkxJC/zgI2rWgvnNb1NAEO4UShmHj9344u7nK86X7Qr8LEo2vSKND3aDKczciMcarYcA3fzL+I3 vY23kx/vzuqdh2SZ/ClZRLp3BRpZ3vgCFK0/eGZ+Y4aa5LIJdSVOiEDUWCTHApJNB9qRTgG8XLbKu ZUXb9wCu35kVAmUBgZMQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYUI-000sp9-F0; Tue, 07 Mar 2023 14:40:22 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXz1-000eaY-FP for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:09 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D032A61467; Tue, 7 Mar 2023 14:08:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A9D9EC433A4; Tue, 7 Mar 2023 14:08:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198082; bh=kbYzdg/ANQsS7ngQoT911Nj1bIm5cv48RbrebVBchjw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fZgxQhmcrVNgnVrsbNXRVw+FDzPCuJZuDXZ+5ad53CleSSQvziZ9FMEoUrNmye8yW z7ZBLeFfD9UHitW4U/hZ4gxCYza8CkCDGSXTOl3WKBC8NmB7pKf7O9WqvhJjVyLpDa 
elWnGzWeJqBiVVdHivmS9NGCoNEFWhBtTgvDVEb0oAMvRLKHbcDutWgcpCCZpHdo3T 8SZoIy55MdweDzE7ZH1dNADTzX0QButankOP5poMTAPZDHcPb/5az97X0oMATGKiFh cQD77PPvi9Ozkg9XfTqEHlCKAYqTpFahQWJuSoJmw6wr16N2VFBLAQ/6J1HXo0wwlZ tgdA3SKfCiC4w== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 35/60] arm64: mm: Use 48-bit virtual addressing for the permanent ID map Date: Tue, 7 Mar 2023 15:04:57 +0100 Message-Id: <20230307140522.2311461-36-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6948; i=ardb@kernel.org; h=from:subject; bh=kbYzdg/ANQsS7ngQoT911Nj1bIm5cv48RbrebVBchjw=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJWLvpdk/t2jukrm01uLgYqYZr8sSDq0y/cAvdtRFI MzpaYdSRykLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZiIcyjD/+jdZzZYr71ucz8+ 8J7z22+6UVURabcenKsQmbZqXtv+jG0Mf/h+3hdjmdASNePyZFlrwfc8P87ecdRVuqM0hSGu/m/ FFC4A X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060803_636253_8DFD18BA X-CRM114-Status: GOOD ( 25.31 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Even though we support loading kernels anywhere in 48-bit addressable physical memory, we create the ID maps based on the number of levels that we happened to configure for the kernel VA and user VA spaces. The reason for this is that the PGD/PUD/PMD based classification of translation levels, along with the associated folding when the number of levels is less than 5, does not permit creating a page table hierarchy of a set number of levels. This means that, for instance, on 39-bit VA kernels we need to configure an additional level above PGD level on the fly, and 36-bit VA kernels still only support 47-bit virtual addressing with this trick applied. Now that we have a separate helper to populate page table hierarchies that does not define the levels in terms of PUDS/PMDS/etc at all, let's reuse it to create the permanent ID map with a fixed VA size of 48 bits. 
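To make that concrete, here is a standalone sketch (not part of the patch) of how the level count for a fixed 48-bit ID map falls out of the page size alone. The local HW_PGTABLE_LEVELS() macro mirrors what ARM64_HW_PGTABLE_LEVELS() is assumed to compute, i.e. (va_bits - 4) / (PAGE_SHIFT - 3); the names are illustrative only.

#include <stdio.h>

/* stand-in for ARM64_HW_PGTABLE_LEVELS(); assumed definition, for illustration */
#define HW_PGTABLE_LEVELS(va_bits, page_shift)	(((va_bits) - 4) / ((page_shift) - 3))

int main(void)
{
	int page_shift[] = { 12, 14, 16 };	/* 4K, 16K and 64K granules */

	for (int i = 0; i < 3; i++) {
		int levels = HW_PGTABLE_LEVELS(48, page_shift[i]);

		/* level 3 is the PTE level, so an N-level walk is rooted at 4 - N */
		printf("%3dK pages: IDMAP_LEVELS = %d, IDMAP_ROOT_LEVEL = %d\n",
		       1 << (page_shift[i] - 10), levels, 4 - levels);
	}
	return 0;
}

With 4K and 16K pages this yields a 4-level table rooted at level 0, and with 64K pages a 3-level table rooted at level 1; in every case the result is independent of the VA_BITS chosen for the kernel proper, so no extra translation level ever has to be configured on the fly.
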
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/kernel-pgtable.h | 2 ++ arch/arm64/kernel/head.S | 5 +++ arch/arm64/kvm/mmu.c | 15 +++------ arch/arm64/mm/mmu.c | 32 +++++++++++--------- arch/arm64/mm/proc.S | 9 ++---- 5 files changed, 31 insertions(+), 32 deletions(-) diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index 50b5c145358a5d8e..2a2c80ffe59e5307 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -35,6 +35,8 @@ #define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS) #endif +#define IDMAP_LEVELS ARM64_HW_PGTABLE_LEVELS(48) +#define IDMAP_ROOT_LEVEL (4 - IDMAP_LEVELS) /* * If KASLR is enabled, then an offset K is added to the kernel address diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index e45fd99e8ab4272a..fc6a4076d826b728 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -727,6 +727,11 @@ SYM_FUNC_START_LOCAL(__no_granule_support) SYM_FUNC_END(__no_granule_support) SYM_FUNC_START_LOCAL(__primary_switch) + mrs x1, tcr_el1 + mov x2, #64 - VA_BITS + tcr_set_t0sz x1, x2 + msr tcr_el1, x1 + adrp x1, reserved_pg_dir adrp x2, init_idmap_pg_dir bl __enable_mmu diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 7113587222ffe8e1..d64be7b5f6692e8b 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1687,16 +1687,9 @@ int __init kvm_mmu_init(u32 *hyp_va_bits) BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK); /* - * The ID map may be configured to use an extended virtual address - * range. This is only the case if system RAM is out of range for the - * currently configured page size and VA_BITS_MIN, in which case we will - * also need the extended virtual range for the HYP ID map, or we won't - * be able to enable the EL2 MMU. - * - * However, in some cases the ID map may be configured for fewer than - * the number of VA bits used by the regular kernel stage 1. This - * happens when VA_BITS=52 and the kernel image is placed in PA space - * below 48 bits. + * The ID map is always configured for 48 bits of translation, which + * may be fewer than the number of VA bits used by the regular kernel + * stage 1, when VA_BITS=52. * * At EL2, there is only one TTBR register, and we can't switch between * translation tables *and* update TCR_EL2.T0SZ at the same time. Bottom @@ -1707,7 +1700,7 @@ int __init kvm_mmu_init(u32 *hyp_va_bits) * 1 VA bits to assure that the hypervisor can both ID map its code page * and map any kernel memory. 
*/ - idmap_bits = 64 - ((idmap_t0sz & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET); + idmap_bits = 48; kernel_bits = vabits_actual; *hyp_va_bits = max(idmap_bits, kernel_bits); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 81e1420d2cc13246..a59433ae4f5f8d02 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -762,22 +762,21 @@ static void __init map_kernel(pgd_t *pgdp) kasan_copy_shadow(pgdp); } +void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot, + int level, pte_t *tbl, bool may_use_cont, u64 va_offset); + +static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init, + kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init; + static void __init create_idmap(void) { u64 start = __pa_symbol(__idmap_text_start); - u64 size = __pa_symbol(__idmap_text_end) - start; - pgd_t *pgd = idmap_pg_dir; - u64 pgd_phys; - - /* check if we need an additional level of translation */ - if (VA_BITS < 48 && idmap_t0sz < (64 - VA_BITS_MIN)) { - pgd_phys = early_pgtable_alloc(PAGE_SHIFT); - set_pgd(&idmap_pg_dir[start >> VA_BITS], - __pgd(pgd_phys | P4D_TYPE_TABLE)); - pgd = __va(pgd_phys); - } - __create_pgd_mapping(pgd, start, start, size, PAGE_KERNEL_ROX, - early_pgtable_alloc, 0); + u64 end = __pa_symbol(__idmap_text_end); + u64 ptep = __pa_symbol(idmap_ptes); + + __pi_map_range(&ptep, start, end, start, PAGE_KERNEL_ROX, + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false, + __phys_to_virt(ptep) - ptep); if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) { extern u32 __idmap_kpti_flag; @@ -787,8 +786,10 @@ static void __init create_idmap(void) * The KPTI G-to-nG conversion code needs a read-write mapping * of its synchronization flag in the ID map. */ - __create_pgd_mapping(pgd, pa, pa, sizeof(u32), PAGE_KERNEL, - early_pgtable_alloc, 0); + ptep = __pa_symbol(kpti_ptes); + __pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL, + IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false, + __phys_to_virt(ptep) - ptep); } } @@ -813,6 +814,7 @@ void __init paging_init(void) memblock_allow_resize(); create_idmap(); + idmap_t0sz = TCR_T0SZ(48); } #ifdef CONFIG_MEMORY_HOTPLUG diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 82e88f4521737c0e..c7129b21bfd5191f 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -422,9 +422,9 @@ SYM_FUNC_START(__cpu_setup) mair .req x17 tcr .req x16 mov_q mair, MAIR_EL1_SET - mov_q tcr, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \ - TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ - TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS + mov_q tcr, TCR_T0SZ(48) | TCR_T1SZ(VA_BITS) | TCR_CACHE_FLAGS | \ + TCR_SMP_FLAGS | TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ + TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS tcr_clear_errata_bits tcr, x9, x5 @@ -432,10 +432,7 @@ SYM_FUNC_START(__cpu_setup) sub x9, xzr, x0 add x9, x9, #64 tcr_set_t1sz tcr, x9 -#else - idmap_get_t0sz x9 #endif - tcr_set_t0sz tcr, x9 /* * Set the IPS bits in TCR_EL1. 
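For reference, the TCR arithmetic behind the hunks above can be sketched in a few lines of standalone C (not taken from the kernel headers; the T0SZ/T1SZ field positions of bit 0 and bit 16 are assumed here): the TxSZ fields encode "64 minus the VA width", so a 48-bit ID map always programs T0SZ = 16, while T1SZ still follows the configured VA_BITS, and EL2 simply has to cover the larger of the two ranges.

#include <stdio.h>

/* assumed TCR_EL1 field offsets, for illustration only */
#define T0SZ(va_bits)	((64UL - (va_bits)) << 0)
#define T1SZ(va_bits)	((64UL - (va_bits)) << 16)

int main(void)
{
	unsigned long idmap_bits = 48;	/* fixed by the patch above */
	unsigned long kernel_bits = 39;	/* example kernel VA configuration */

	printf("TCR.T0SZ = %lu\n", T0SZ(idmap_bits));		/* 16 */
	printf("TCR.T1SZ = %lu\n", T1SZ(kernel_bits) >> 16);	/* 25 */

	/* mirrors the kvm_mmu_init() change: the hyp VA space must cover both */
	printf("hyp_va_bits = %lu\n",
	       idmap_bits > kernel_bits ? idmap_bits : kernel_bits);
	return 0;
}

Running it prints 16, 25 and 48 for this example configuration.
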
From patchwork Tue Mar 7 14:04:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163835 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8A176C678D4 for ; Tue, 7 Mar 2023 14:44:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=H7zcWTeArc9N9Z2gudM9zJ8IwYtxRaP/prkgMH4gLq8=; b=4bfJHZTKwv3Q2F acBgx+joqxvyQVnHIjz+Ffm2ObnzA9CsyvaAO1JBx8u/S7MZKYEe053YMP9xiC1UF0Tuib6XfRqsp QFDmrWctqz/l8fVSrsO51XiFNtWyYgZQ6AoKz+aY+hFn9Z5xslpgb5Jg9AQWywmo6FoLbX0JhKmlA vgzgU0xAJCNFli6O8GZdCfJAEHACI/rXsK5Bs5F4RNjhap24vFZon5Od1Q2G1ifylAQsHEOFBEpyD iQmvujle7bnA9IPTZ2puta4oP0970U4eDtFBnDdpEGn2MpT3UJBR4itaRMUU7XNPJGDvSYCJMckuq +VIahx4valx9qfjcg9gg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYWx-000u8U-4X; Tue, 07 Mar 2023 14:43:07 +0000 Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzF-000ed4-9q for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:19 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id E9859CE1BDC; Tue, 7 Mar 2023 14:08:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 17EE7C433D2; Tue, 7 Mar 2023 14:08:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198085; bh=Rev1o3EJYNY+Qk3vzsOG2Q4FRpOLgCRqmirTRXa/+Fw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=esk9bra8qrRMgQfAH9gkmvsQ739xbRtgA+ATtz54Q2d5RnnMqFplF4M9iKFuiSjYF Wj4z12gV0HB7nOlVDZ+HvfHDq0ZIOFakR8i4gM/FWfxaYFVYQX0Hgp72w7YdzVobk5 FOvGjMpSfMzS9nHG8sE91OtkZ4LXMmwaerad2pxf5nOgynlI2qf5u8cMRNtBzb0L4u /kOB5uT39MXR9v+ScMHKxR6nh39ZEXsA474HfDsfURy7qHWdxjt8GDQ+MCcHuMdnbI 9uq3bYAB46yH/l1m71fYhM9FRhIbLRwd64lH7+GJuOo4qdtOFrayI36PsUo9Do43i9 ZoBy26z6/Scaw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 36/60] arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels Date: Tue, 7 Mar 2023 15:04:58 +0100 Message-Id: <20230307140522.2311461-37-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5172; i=ardb@kernel.org; h=from:subject; bh=Rev1o3EJYNY+Qk3vzsOG2Q4FRpOLgCRqmirTRXa/+Fw=; 
b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJSZe+mDU4Z2Oa4594em6Oktg6ydpyfgVmmfXix+7V +os9VSjo5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAEykPZbhf+xVW4G/Spel1zXb N7wQ6HifPqvotvYcR5HzRpsbZm1ZfpSR4VfhyonXRe76pWQoBrgeXVT6aH6Hxt7tVgmunNnlW8u YmQE= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060817_735550_A9B4295A X-CRM114-Status: GOOD ( 19.94 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The mapping from PGD/PUD/PMD to levels and shifts is very confusing, given that, due to folding, the shifts may be equal for different levels, if the macros are even #define'd to begin with. In a subsequent patch, we will modify the ID mapping code to decouple the number of levels from the kernel's view of how these types are folded, so prepare for this by reformulating the macros without the use of these types. Instead, use SWAPPER_BLOCK_SHIFT as the base quantity, and derive it from either PAGE_SHIFT or PMD_SHIFT, which -if defined at all- are defined unambiguously for a given page size, regardless of the number of configured levels. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/kernel-pgtable.h | 65 ++++++-------------- 1 file changed, 19 insertions(+), 46 deletions(-) diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index 2a2c80ffe59e5307..bfc35944a5304e8e 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -13,27 +13,22 @@ #include /* - * The linear mapping and the start of memory are both 2M aligned (per - * the arm64 booting.txt requirements). Hence we can use section mapping - * with 4K (section size = 2M) but not with 16K (section size = 32M) or - * 64K (section size = 512M). + * The physical and virtual addresses of the start of the kernel image are + * equal modulo 2 MiB (per the arm64 booting.txt requirements). Hence we can + * use section mapping with 4K (section size = 2M) but not with 16K (section + * size = 32M) or 64K (section size = 512M). */ - -/* - * The idmap and swapper page tables need some space reserved in the kernel - * image. Both require pgd, pud (4 levels only) and pmd tables to (section) - * map the kernel. With the 64K page configuration, swapper and idmap need to - * map to pte level. The swapper also maps the FDT (see __create_page_tables - * for more information). Note that the number of ID map translation levels - * could be increased on the fly if system RAM is out of reach for the default - * VA range, so pages required to map highest possible PA are reserved in all - * cases. 
- */ -#ifdef CONFIG_ARM64_4K_PAGES -#define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS - 1) +#if defined(PMD_SIZE) && PMD_SIZE <= MIN_KIMG_ALIGN +#define SWAPPER_BLOCK_SHIFT PMD_SHIFT +#define SWAPPER_SKIP_LEVEL 1 #else -#define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS) +#define SWAPPER_BLOCK_SHIFT PAGE_SHIFT +#define SWAPPER_SKIP_LEVEL 0 #endif +#define SWAPPER_BLOCK_SIZE (UL(1) << SWAPPER_BLOCK_SHIFT) +#define SWAPPER_TABLE_SHIFT (SWAPPER_BLOCK_SHIFT + PAGE_SHIFT - 3) + +#define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS - SWAPPER_SKIP_LEVEL) #define IDMAP_LEVELS ARM64_HW_PGTABLE_LEVELS(48) #define IDMAP_ROOT_LEVEL (4 - IDMAP_LEVELS) @@ -64,24 +59,13 @@ #define EARLY_ENTRIES(vstart, vend, shift, add) \ ((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + add) -#define EARLY_PGDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT, add)) - -#if SWAPPER_PGTABLE_LEVELS > 3 -#define EARLY_PUDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT, add)) -#else -#define EARLY_PUDS(vstart, vend, add) (0) -#endif +#define EARLY_LEVEL(l, vstart, vend, add) \ + (SWAPPER_PGTABLE_LEVELS > l ? EARLY_ENTRIES(vstart, vend, SWAPPER_BLOCK_SHIFT + l * (PAGE_SHIFT - 3), add) : 0) -#if SWAPPER_PGTABLE_LEVELS > 2 -#define EARLY_PMDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, SWAPPER_TABLE_SHIFT, add)) -#else -#define EARLY_PMDS(vstart, vend, add) (0) -#endif - -#define EARLY_PAGES(vstart, vend, add) ( 1 /* PGDIR page */ \ - + EARLY_PGDS((vstart), (vend), add) /* each PGDIR needs a next level page table */ \ - + EARLY_PUDS((vstart), (vend), add) /* each PUD needs a next level page table */ \ - + EARLY_PMDS((vstart), (vend), add)) /* each PMD needs a next level page table */ +#define EARLY_PAGES(vstart, vend, add) (1 /* PGDIR page */ \ + + EARLY_LEVEL(3, (vstart), (vend), add) /* each entry needs a next level page table */ \ + + EARLY_LEVEL(2, (vstart), (vend), add) /* each entry needs a next level page table */ \ + + EARLY_LEVEL(1, (vstart), (vend), add))/* each entry needs a next level page table */ #define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES)) /* the initial ID map may need two extra pages if it needs to be extended */ @@ -92,17 +76,6 @@ #endif #define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1) -/* Initial memory map size */ -#ifdef CONFIG_ARM64_4K_PAGES -#define SWAPPER_BLOCK_SHIFT PMD_SHIFT -#define SWAPPER_BLOCK_SIZE PMD_SIZE -#define SWAPPER_TABLE_SHIFT PUD_SHIFT -#else -#define SWAPPER_BLOCK_SHIFT PAGE_SHIFT -#define SWAPPER_BLOCK_SIZE PAGE_SIZE -#define SWAPPER_TABLE_SHIFT PMD_SHIFT -#endif - /* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */ #define KERNEL_SEGMENT_COUNT 5 From patchwork Tue Mar 7 14:04:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163805 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9AF08C678D4 for ; Tue, 7 Mar 2023 14:42:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=iRNwloAsOIf9CCZd/sTxrtZG6DGUFEpndvsh+17Kajk=; b=kX5om0DMCAyIBI NvUJwB/RqfTOY2yq4D87TX8vGpqtIAfKFkssmekD4I6wK/uRdTE2g87dbgEJvVghVqnchu2+Wrqns +xbO6zlRThdbXjffStOo+OdYi/zWdUUxSSFmjjJ/e8aNyt7chk6MfoXEfZ8CowkFGFI+o9mrhHkpO SWSQo6p+uq+dU0N4oRzNpEsmCraoM57oynDuPhuYMtfJtKtTnt8U4JcCArRAAVCnhIZ484DlXP5v+ o1Uc4/lyEq2P7M2TCmL/Ucpu216FQikAja6iS/Gtrwrcwl7LEAyYXXADXaGW+N5ZKvAR4OFYt0xYm 6p19706XNI8Q6loAGfAw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYVa-000tVp-2N; Tue, 07 Mar 2023 14:41:42 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXz6-000ech-Ar for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:14 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A0D4261464; Tue, 7 Mar 2023 14:08:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7B0CFC4339E; Tue, 7 Mar 2023 14:08:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198087; bh=EtxR6WdIX1pAf3FgtWNSOEDemqq7GCr4fk8qBVA/F5I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=vN0hgK1ANHWYOwx1zgggu6iVhL/dJd9wsiweC2lh83fAc+e/BX4LddRXN5OB0eN8j aOaHUCoRT14fqk3BGy8TF1dv2Sm20ycKx/0ElJF9FTG2TSREPcoisRYCFjaX01cP87 /9M9h4/wo9TQAbSbQUSE6u0909Hcv7iyilsGYc143tfrIyojWfZWRrW+m2g9nN6jgU H+Q9ioYqFPHlufhY8HUlFweoApJhZgVUVnakd0djFlVEMkhrVHCI622DktNKh2nlji Oll/chaISrOOrUzSPZQurxhhXimLaFE5KcsR3G+rNk5g/1cduMwMypT1Sr2YPWYqh5 GivOiIYPG60XQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 37/60] arm64: kernel: Create initial ID map from C code Date: Tue, 7 Mar 2023 15:04:59 +0100 Message-Id: <20230307140522.2311461-38-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=24376; i=ardb@kernel.org; h=from:subject; bh=EtxR6WdIX1pAf3FgtWNSOEDemqq7GCr4fk8qBVA/F5I=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJe7ZpJsTjkvFlZ2z/F0pUbEmZtIRBqerDz3amC5UL jv87fjdjlIWBjEOBlkxRRaB2X/f7Tw9UarWeZYszBxWJpAhDFycAjCROfsZGb79XVpeyjihYcJd Wd3b5+bo/nFfyjfxsEHasaRDla9Vgq4x/DPPdLwxUS1tD4uamfrMc7xPLxqVFNZZ8YYW55Qd0Mw tYQMA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060808_539792_9C2BF5CB X-CRM114-Status: GOOD ( 33.56 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The asm code that creates the 
initial ID map is rather intricate and hard to follow. This is problematic because it makes adding support for things like LPA2 or WXN more difficult than necessary. Also, it is parameterized like the rest of the MM code to run with a configurable number of levels, which is rather pointless, given that all AArch64 CPUs implement support for 48-bit virtual addressing, and that many systems exist with DRAM located outside of the 39-bit addressable range, which is the only smaller VA size that is widely used, and we need additional tricks to make things work in that combination. So let's bite the bullet, and rip out all the asm macros, and fiddly code, and replace it with a C implementation based on the newly added routines for creating the early kernel VA mappings. And while at it, create the initial ID map based on 48-bit virtual addressing as well, regardless of the number of configured levels for the kernel proper. Note that this code may execute with the MMU and caches disabled, and is therefore not permitted to make unaligned accesses. This shouldn't generally happen in any case for the algorithm as implemented, but to be sure, let's pass -mstrict-align to the compiler just in case. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 14 - arch/arm64/include/asm/kernel-pgtable.h | 50 ++-- arch/arm64/include/asm/mmu_context.h | 6 +- arch/arm64/kernel/head.S | 267 ++------------------ arch/arm64/kernel/image-vars.h | 1 + arch/arm64/kernel/pi/Makefile | 3 + arch/arm64/kernel/pi/map_kernel.c | 18 ++ arch/arm64/kernel/pi/map_range.c | 12 + arch/arm64/kernel/pi/pi.h | 4 + arch/arm64/mm/mmu.c | 5 - arch/arm64/mm/proc.S | 2 +- 11 files changed, 87 insertions(+), 295 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 376a980f2bad08bb..beb53bbd8c19bb1c 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -345,20 +345,6 @@ alternative_cb_end bfi \valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH .endm -/* - * idmap_get_t0sz - get the T0SZ value needed to cover the ID map - * - * Calculate the maximum allowed value for TCR_EL1.T0SZ so that the - * entire ID map region can be mapped. As T0SZ == (64 - #bits used), - * this number conveniently equals the number of leading zeroes in - * the physical address of _end. - */ - .macro idmap_get_t0sz, reg - adrp \reg, _end - orr \reg, \reg, #(1 << VA_BITS_MIN) - 1 - clz \reg, \reg - .endm - /* * tcr_compute_pa_size - set TCR.(I)PS to the highest supported * ID_AA64MMFR0_EL1.PARange value diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index bfc35944a5304e8e..f4d5858d2f66821d 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -29,6 +29,7 @@ #define SWAPPER_TABLE_SHIFT (SWAPPER_BLOCK_SHIFT + PAGE_SHIFT - 3) #define SWAPPER_PGTABLE_LEVELS (CONFIG_PGTABLE_LEVELS - SWAPPER_SKIP_LEVEL) +#define INIT_IDMAP_PGTABLE_LEVELS (IDMAP_LEVELS - SWAPPER_SKIP_LEVEL) #define IDMAP_LEVELS ARM64_HW_PGTABLE_LEVELS(48) #define IDMAP_ROOT_LEVEL (4 - IDMAP_LEVELS) @@ -59,44 +60,39 @@ #define EARLY_ENTRIES(vstart, vend, shift, add) \ ((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + add) -#define EARLY_LEVEL(l, vstart, vend, add) \ - (SWAPPER_PGTABLE_LEVELS > l ? EARLY_ENTRIES(vstart, vend, SWAPPER_BLOCK_SHIFT + l * (PAGE_SHIFT - 3), add) : 0) +#define EARLY_LEVEL(l, lvls, vstart, vend, add) \ + (lvls > l ? 
EARLY_ENTRIES(vstart, vend, SWAPPER_BLOCK_SHIFT + l * (PAGE_SHIFT - 3), add) : 0) -#define EARLY_PAGES(vstart, vend, add) (1 /* PGDIR page */ \ - + EARLY_LEVEL(3, (vstart), (vend), add) /* each entry needs a next level page table */ \ - + EARLY_LEVEL(2, (vstart), (vend), add) /* each entry needs a next level page table */ \ - + EARLY_LEVEL(1, (vstart), (vend), add))/* each entry needs a next level page table */ -#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR) + EARLY_SEGMENT_EXTRA_PAGES)) +#define EARLY_PAGES(lvls, vstart, vend, add) (1 /* PGDIR page */ \ + + EARLY_LEVEL(3, (lvls), (vstart), (vend), add) /* each entry needs a next level page table */ \ + + EARLY_LEVEL(2, (lvls), (vstart), (vend), add) /* each entry needs a next level page table */ \ + + EARLY_LEVEL(1, (lvls), (vstart), (vend), add))/* each entry needs a next level page table */ +#define INIT_DIR_SIZE (PAGE_SIZE * (EARLY_PAGES(SWAPPER_PGTABLE_LEVELS, KIMAGE_VADDR, _end, EARLY_KASLR) \ + + EARLY_SEGMENT_EXTRA_PAGES)) -/* the initial ID map may need two extra pages if it needs to be extended */ -#if VA_BITS < 48 -#define INIT_IDMAP_DIR_SIZE ((INIT_IDMAP_DIR_PAGES + 2) * PAGE_SIZE) -#else -#define INIT_IDMAP_DIR_SIZE (INIT_IDMAP_DIR_PAGES * PAGE_SIZE) -#endif -#define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1) +#define INIT_IDMAP_DIR_PAGES (EARLY_PAGES(INIT_IDMAP_PGTABLE_LEVELS, KIMAGE_VADDR, _end, 1)) +#define INIT_IDMAP_DIR_SIZE ((INIT_IDMAP_DIR_PAGES + EARLY_IDMAP_EXTRA_PAGES) * PAGE_SIZE) + +#define INIT_IDMAP_FDT_PAGES (EARLY_PAGES(INIT_IDMAP_PGTABLE_LEVELS, 0UL, UL(MAX_FDT_SIZE), 1) - 1) +#define INIT_IDMAP_FDT_SIZE ((INIT_IDMAP_FDT_PAGES + EARLY_IDMAP_EXTRA_FDT_PAGES) * PAGE_SIZE) /* The number of segments in the kernel image (text, rodata, inittext, initdata, data+bss) */ #define KERNEL_SEGMENT_COUNT 5 #if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN #define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1) -#else -#define EARLY_SEGMENT_EXTRA_PAGES 0 -#endif - /* - * Initial memory map attributes. + * The initial ID map consists of the kernel image, mapped as two separate + * segments, and may appear misaligned wrt the swapper block size. This means + * we need 3 additional pages. The DT could straddle a swapper block boundary, + * so it may need 2. */ -#define SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) -#define SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S) - -#ifdef CONFIG_ARM64_4K_PAGES -#define SWAPPER_RW_MMUFLAGS (PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS) -#define SWAPPER_RX_MMUFLAGS (SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY) +#define EARLY_IDMAP_EXTRA_PAGES 3 +#define EARLY_IDMAP_EXTRA_FDT_PAGES 2 #else -#define SWAPPER_RW_MMUFLAGS (PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS) -#define SWAPPER_RX_MMUFLAGS (SWAPPER_RW_MMUFLAGS | PTE_RDONLY) +#define EARLY_SEGMENT_EXTRA_PAGES 0 +#define EARLY_IDMAP_EXTRA_PAGES 0 +#define EARLY_IDMAP_EXTRA_FDT_PAGES 0 #endif /* diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 72dbd6400549c185..f9ae2891e4c72c7f 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -57,11 +57,9 @@ static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm) } /* - * TCR.T0SZ value to use when the ID map is active. Usually equals - * TCR_T0SZ(VA_BITS), unless system RAM is positioned very high in - * physical memory, in which case it will be smaller. + * TCR.T0SZ value to use when the ID map is active. 
*/ -extern int idmap_t0sz; +#define idmap_t0sz TCR_T0SZ(48) /* * Ensure TCR.T0SZ is set to the provided value. diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index fc6a4076d826b728..c6f8c3b1f026c07b 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -80,26 +80,42 @@ * x19 primary_entry() .. start_kernel() whether we entered with the MMU on * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 - * x22 create_idmap() .. start_kernel() ID map VA of the DT blob * x25 primary_entry() .. start_kernel() supported VA size - * x28 create_idmap() callee preserved temp register */ SYM_CODE_START(primary_entry) bl record_mmu_state bl preserve_boot_args - bl create_idmap + + adrp x1, early_init_stack + mov sp, x1 + mov x29, xzr + adrp x0, init_idmap_pg_dir + bl __pi_create_init_idmap + + /* + * If the page tables have been populated with non-cacheable + * accesses (MMU disabled), invalidate those tables again to + * remove any speculatively loaded cache lines. + */ + cbnz x19, 0f + dmb sy + mov x1, x0 // end of used region + adrp x0, init_idmap_pg_dir + adr_l x2, dcache_inval_poc + blr x2 + b 1f /* * If we entered with the MMU and caches on, clean the ID mapped part * of the primary boot code to the PoC so we can safely execute it with * the MMU off. */ - cbz x19, 0f - adrp x0, __idmap_text_start +0: adrp x0, __idmap_text_start adr_l x1, __idmap_text_end adr_l x2, dcache_clean_poc blr x2 -0: mov x0, x19 + +1: mov x0, x19 bl init_kernel_el // w0=cpu_boot_mode mov x20, x0 @@ -175,238 +191,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args) ret SYM_CODE_END(preserve_boot_args) -/* - * Macro to populate page table entries, these entries can be pointers to the next level - * or last level entries pointing to physical memory. - * - * tbl: page table address - * rtbl: pointer to page table or physical memory - * index: start index to write - * eindex: end index to write - [index, eindex] written to - * flags: flags for pagetable entry to or in - * inc: increment to rtbl between each entry - * tmp1: temporary variable - * - * Preserves: tbl, eindex, flags, inc - * Corrupts: index, tmp1 - * Returns: rtbl - */ - .macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1 -.Lpe\@: phys_to_pte \tmp1, \rtbl - orr \tmp1, \tmp1, \flags // tmp1 = table entry - str \tmp1, [\tbl, \index, lsl #3] - add \rtbl, \rtbl, \inc // rtbl = pa next level - add \index, \index, #1 - cmp \index, \eindex - b.ls .Lpe\@ - .endm - -/* - * Compute indices of table entries from virtual address range. If multiple entries - * were needed in the previous page table level then the next page table level is assumed - * to be composed of multiple pages. (This effectively scales the end index). - * - * vstart: virtual address of start of range - * vend: virtual address of end of range - we map [vstart, vend] - * shift: shift used to transform virtual address into index - * order: #imm 2log(number of entries in page table) - * istart: index in table corresponding to vstart - * iend: index in table corresponding to vend - * count: On entry: how many extra entries were required in previous level, scales - * our end index. 
- * On exit: returns how many extra entries required for next page table level - * - * Preserves: vstart, vend - * Returns: istart, iend, count - */ - .macro compute_indices, vstart, vend, shift, order, istart, iend, count - ubfx \istart, \vstart, \shift, \order - ubfx \iend, \vend, \shift, \order - add \iend, \iend, \count, lsl \order - sub \count, \iend, \istart - .endm - -/* - * Map memory for specified virtual address range. Each level of page table needed supports - * multiple entries. If a level requires n entries the next page table level is assumed to be - * formed from n pages. - * - * tbl: location of page table - * rtbl: address to be used for first level page table entry (typically tbl + PAGE_SIZE) - * vstart: virtual address of start of range - * vend: virtual address of end of range - we map [vstart, vend - 1] - * flags: flags to use to map last level entries - * phys: physical address corresponding to vstart - physical memory is contiguous - * order: #imm 2log(number of entries in PGD table) - * - * If extra_shift is set, an extra level will be populated if the end address does - * not fit in 'extra_shift' bits. This assumes vend is in the TTBR0 range. - * - * Temporaries: istart, iend, tmp, count, sv - these need to be different registers - * Preserves: vstart, flags - * Corrupts: tbl, rtbl, vend, istart, iend, tmp, count, sv - */ - .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv, extra_shift - sub \vend, \vend, #1 - add \rtbl, \tbl, #PAGE_SIZE - mov \count, #0 - - .ifnb \extra_shift - tst \vend, #~((1 << (\extra_shift)) - 1) - b.eq .L_\@ - compute_indices \vstart, \vend, #\extra_shift, #(PAGE_SHIFT - 3), \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv - .endif -.L_\@: - compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv - -#if SWAPPER_PGTABLE_LEVELS > 3 - compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv -#endif - -#if SWAPPER_PGTABLE_LEVELS > 2 - compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv -#endif - - compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count - bic \rtbl, \phys, #SWAPPER_BLOCK_SIZE - 1 - populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp - .endm - -/* - * Remap a subregion created with the map_memory macro with modified attributes - * or output address. The entire remapped region must have been covered in the - * invocation of map_memory. 
- * - * x0: last level table address (returned in first argument to map_memory) - * x1: start VA of the existing mapping - * x2: start VA of the region to update - * x3: end VA of the region to update (exclusive) - * x4: start PA associated with the region to update - * x5: attributes to set on the updated region - * x6: order of the last level mappings - */ -SYM_FUNC_START_LOCAL(remap_region) - sub x3, x3, #1 // make end inclusive - - // Get the index offset for the start of the last level table - lsr x1, x1, x6 - bfi x1, xzr, #0, #PAGE_SHIFT - 3 - - // Derive the start and end indexes into the last level table - // associated with the provided region - lsr x2, x2, x6 - lsr x3, x3, x6 - sub x2, x2, x1 - sub x3, x3, x1 - - mov x1, #1 - lsl x6, x1, x6 // block size at this level - - populate_entries x0, x4, x2, x3, x5, x6, x7 - ret -SYM_FUNC_END(remap_region) - -SYM_FUNC_START_LOCAL(create_idmap) - mov x28, lr - /* - * The ID map carries a 1:1 mapping of the physical address range - * covered by the loaded image, which could be anywhere in DRAM. This - * means that the required size of the VA (== PA) space is decided at - * boot time, and could be more than the configured size of the VA - * space for ordinary kernel and user space mappings. - * - * There are three cases to consider here: - * - 39 <= VA_BITS < 48, and the ID map needs up to 48 VA bits to cover - * the placement of the image. In this case, we configure one extra - * level of translation on the fly for the ID map only. (This case - * also covers 42-bit VA/52-bit PA on 64k pages). - * - * - VA_BITS == 48, and the ID map needs more than 48 VA bits. This can - * only happen when using 64k pages, in which case we need to extend - * the root level table rather than add a level. Note that we can - * treat this case as 'always extended' as long as we take care not - * to program an unsupported T0SZ value into the TCR register. - * - * - Combinations that would require two additional levels of - * translation are not supported, e.g., VA_BITS==36 on 16k pages, or - * VA_BITS==39/4k pages with 5-level paging, where the input address - * requires more than 47 or 48 bits, respectively. - */ -#if (VA_BITS < 48) -#define IDMAP_PGD_ORDER (VA_BITS - PGDIR_SHIFT) -#define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3) - - /* - * If VA_BITS < 48, we have to configure an additional table level. - * First, we have to verify our assumption that the current value of - * VA_BITS was chosen such that all translation levels are fully - * utilised, and that lowering T0SZ will always result in an additional - * translation level to be configured. - */ -#if VA_BITS != EXTRA_SHIFT -#error "Mismatch between VA_BITS and page size/number of translation levels" -#endif -#else -#define IDMAP_PGD_ORDER (PHYS_MASK_SHIFT - PGDIR_SHIFT) -#define EXTRA_SHIFT - /* - * If VA_BITS == 48, we don't have to configure an additional - * translation level, but the top-level table has more entries. 
- */ -#endif - adrp x0, init_idmap_pg_dir - adrp x3, _text - adrp x6, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE - mov x7, SWAPPER_RX_MMUFLAGS - - map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT - - /* Remap [.init].data, BSS and the kernel page tables r/w in the ID map */ - adrp x1, _text - adrp x2, __initdata_begin - adrp x3, _end - bic x4, x2, #SWAPPER_BLOCK_SIZE - 1 - mov x5, SWAPPER_RW_MMUFLAGS - mov x6, #SWAPPER_BLOCK_SHIFT - bl remap_region - - /* Remap the FDT after the kernel image */ - adrp x1, _text - adrp x22, _end + SWAPPER_BLOCK_SIZE - bic x2, x22, #SWAPPER_BLOCK_SIZE - 1 - bfi x22, x21, #0, #SWAPPER_BLOCK_SHIFT // remapped FDT address - add x3, x2, #MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE - bic x4, x21, #SWAPPER_BLOCK_SIZE - 1 - mov x5, SWAPPER_RW_MMUFLAGS - mov x6, #SWAPPER_BLOCK_SHIFT - bl remap_region - - /* - * Since the page tables have been populated with non-cacheable - * accesses (MMU disabled), invalidate those tables again to - * remove any speculatively loaded cache lines. - */ - cbnz x19, 0f // skip cache invalidation if MMU is on - dmb sy - - adrp x0, init_idmap_pg_dir - adrp x1, init_idmap_pg_end - bl dcache_inval_poc -0: ret x28 -SYM_FUNC_END(create_idmap) - /* * Initialize CPU registers with task-specific and cpu-specific context. * @@ -727,11 +511,6 @@ SYM_FUNC_START_LOCAL(__no_granule_support) SYM_FUNC_END(__no_granule_support) SYM_FUNC_START_LOCAL(__primary_switch) - mrs x1, tcr_el1 - mov x2, #64 - VA_BITS - tcr_set_t0sz x1, x2 - msr tcr_el1, x1 - adrp x1, reserved_pg_dir adrp x2, init_idmap_pg_dir bl __enable_mmu @@ -740,7 +519,7 @@ SYM_FUNC_START_LOCAL(__primary_switch) mov sp, x1 mov x29, xzr mov x0, x20 // pass the full boot status - mov x1, x22 // pass the low FDT mapping + mov x1, x21 // pass the FDT bl __pi_early_map_kernel // Map and relocate the kernel ldr x8, =__primary_switched diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 6eeb2a09c8d2441d..6ca235b09a30d5d3 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -47,6 +47,7 @@ PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings); PROVIDE(__pi__ctype = _ctype); PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); +PROVIDE(__pi_init_idmap_pg_dir = init_idmap_pg_dir); PROVIDE(__pi_init_pg_dir = init_pg_dir); PROVIDE(__pi_init_pg_end = init_pg_end); diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile index 8c2f80a46b9387b0..4393b41f0b7133da 100644 --- a/arch/arm64/kernel/pi/Makefile +++ b/arch/arm64/kernel/pi/Makefile @@ -11,6 +11,9 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \ -fno-asynchronous-unwind-tables -fno-unwind-tables \ $(call cc-option,-fno-addrsig) +# this code may run with the MMU off so disable unaligned accesses +CFLAGS_map_range.o += -mstrict-align + # remove SCS flags from all objects in this directory KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS)) # disable LTO diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index b573c964c7d88d1b..a718714eb671f290 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -129,6 +129,22 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level) } } +static void map_fdt(u64 fdt) +{ + static u8 ptes[INIT_IDMAP_FDT_SIZE] __initdata __aligned(PAGE_SIZE); + u64 efdt = fdt + MAX_FDT_SIZE; + u64 ptep = (u64)ptes; + + /* + * Map up to MAX_FDT_SIZE bytes, but avoid overlap with + * the kernel 
image. + */ + map_range(&ptep, fdt, (u64)_text > fdt ? min((u64)_text, efdt) : efdt, + fdt, PAGE_KERNEL, IDMAP_ROOT_LEVEL, + (pte_t *)init_idmap_pg_dir, false, 0); + dsb(ishst); +} + asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) { static char const chosen_str[] __initconst = "/chosen"; @@ -137,6 +153,8 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) int root_level = 4 - CONFIG_PGTABLE_LEVELS; int chosen; + map_fdt((u64)fdt); + /* Clear BSS and the initial page tables */ memset(__bss_start, 0, (u64)init_pg_end - (u64)__bss_start); diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c index 61cbd6e82418c033..8de3fe2c1cb55722 100644 --- a/arch/arm64/kernel/pi/map_range.c +++ b/arch/arm64/kernel/pi/map_range.c @@ -85,3 +85,15 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot, tbl++; } } + +asmlinkage u64 __init create_init_idmap(pgd_t *pg_dir) +{ + u64 ptep = (u64)pg_dir + PAGE_SIZE; + + map_range(&ptep, (u64)_stext, (u64)__initdata_begin, (u64)_stext, + PAGE_KERNEL_ROX, IDMAP_ROOT_LEVEL, (pte_t *)pg_dir, false, 0); + map_range(&ptep, (u64)__initdata_begin, (u64)_end, (u64)__initdata_begin, + PAGE_KERNEL, IDMAP_ROOT_LEVEL, (pte_t *)pg_dir, false, 0); + + return ptep; +} diff --git a/arch/arm64/kernel/pi/pi.h b/arch/arm64/kernel/pi/pi.h index dfc70828ad0ad2a8..a8fe79c9b111140f 100644 --- a/arch/arm64/kernel/pi/pi.h +++ b/arch/arm64/kernel/pi/pi.h @@ -17,6 +17,8 @@ static inline void *prel64_to_pointer(const prel64_t *offset) extern bool dynamic_scs_is_enabled; +extern pgd_t init_idmap_pg_dir[]; + void init_feature_override(u64 boot_status, const void *fdt, int chosen); u64 kaslr_early_init(void *fdt, int chosen); void relocate_kernel(u64 offset); @@ -24,3 +26,5 @@ int scs_patch(const u8 eh_frame[], int size); void map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot, int level, pte_t *tbl, bool may_use_cont, u64 va_offset); + +asmlinkage u64 create_init_idmap(pgd_t *pgd); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index a59433ae4f5f8d02..13f46c911558f21f 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -43,8 +43,6 @@ #define NO_CONT_MAPPINGS BIT(1) #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */ -int idmap_t0sz __ro_after_init; - #if VA_BITS > 48 u64 vabits_actual __ro_after_init = VA_BITS_MIN; EXPORT_SYMBOL(vabits_actual); @@ -798,8 +796,6 @@ void __init paging_init(void) pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir)); extern pgd_t init_idmap_pg_dir[]; - idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(VA_BITS_MIN - 1, 0)); - map_kernel(pgdp); map_mem(pgdp); @@ -814,7 +810,6 @@ void __init paging_init(void) memblock_allow_resize(); create_idmap(); - idmap_t0sz = TCR_T0SZ(48); } #ifdef CONFIG_MEMORY_HOTPLUG diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index c7129b21bfd5191f..d0748f18b2abdf0e 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -200,7 +200,7 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 -#define KPTI_NG_PTE_FLAGS (PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS) +#define KPTI_NG_PTE_FLAGS (PTE_ATTRINDX(MT_NORMAL) | PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) .pushsection ".idmap.text", "awx" From patchwork Tue Mar 7 14:05:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163804 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3247EC678D5 for ; Tue, 7 Mar 2023 14:42:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=JyFhIYMOokFvm9cIdCre25Dl1mt+GqCzsxy2ue7bUWk=; b=F2H/lG14rOmmIr d8DKpcvL5+siUmBkduMycvZ443/25sJ9Z/7eKytksP2xYi3hAwR/+6GEdhZ/7+A6/W+qd0YCEWzUB x/QGGRcWYJK64qCOPGXMazzOPUKh/btImNYxmgvl+1XJwIn2dkPwoHlrTsVZt452uj2bJV2j+aHnl x3+Tqf/chBHtmx2k784Ha6ihETOtDWI5ddaPGMq/6T5LJcT7G8ZOHFvsM2qnrhuT+buecNaQwBTo8 GrYHxyD1nsDzIwOLoidkrtx1bcFzyGKoRo2iUf1PSE9DH7BJWPkL/u0ePdsBCBKSg7/sqEavbCNqq uriMJZ/xHKQaYqODH2ow==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYV7-000tFw-Ru; Tue, 07 Mar 2023 14:41:14 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXz8-000edz-Px for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:14 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3C06D61458; Tue, 7 Mar 2023 14:08:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DD895C433D2; Tue, 7 Mar 2023 14:08:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198089; bh=uWLIu3yqNCwcrGkRr50TGOXKaa6P6eE4gao1D3xoCEo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iwFAh6VGmHFEMKpps+Sndm/kF/KFeIpQ5F+mfHpWHoLpyPyy0AIil6DnL38nbFaz0 V/32tC5/+fBAMEk+RZ2/vg0XSKJbkhKBcFX4YPu+k4wvmV92qSFaqhmqVq0LD4IFWw LEz8nRAndNcOb+gm8NBBZOR9KnCTMz4YpytDpORhYCJnjgXVnu7nLFbuuEL3wRByBf XZsOO9mpXMsv//CgtIK+BHVsC/zfucJu1GAEXsTPGKjNqYN9nM12m63Z10/NW2k83E ZqG56RV2YbmVmRIKZXlDQyogUck/gpMYJF4ubL0xk1S5EJ4mIkHLqq2CHFyeOApyxJ wUXmsvFEEUf2A== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 38/60] arm64: mm: avoid fixmap for early swapper_pg_dir updates Date: Tue, 7 Mar 2023 15:05:00 +0100 Message-Id: <20230307140522.2311461-39-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2146; i=ardb@kernel.org; h=from:subject; bh=uWLIu3yqNCwcrGkRr50TGOXKaa6P6eE4gao1D3xoCEo=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJengQ+5ILZ+4yZ9msfx3jPXdZhPU7u7Ol/7ZgPlZy TbFj3wdJSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCIz6xi+afk8mqa8y2apu9bv Wx7fft8tTHfYlfzF7qrwPS+Nls1KDH/lNktwyi7Z2i/UPnWCSf+SHNYd0mekTDTrvptY1De4LOI CAA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 
X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060810_974161_80D63D1E X-CRM114-Status: GOOD ( 16.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Early in the boot, when .rodata is still writable, we can poke swapper_pg_dir entries directly, and there is no need to go through the fixmap. After a future patch, we will enter the kernel with swapper_pg_dir already active, and early swapper_pg_dir updates for creating the fixmap page table hierarchy itself cannot go through the fixmap for obvious reaons. So let's keep track of whether rodata is writable, and update the descriptor directly in that case. As the same reasoning applies to early KASAN init, make the function noinstr as well. Signed-off-by: Ard Biesheuvel --- arch/arm64/mm/mmu.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 13f46c911558f21f..3f631f3bc2f80b2b 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -53,6 +53,8 @@ EXPORT_SYMBOL(kimage_voffset); u32 __boot_cpu_mode[] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 }; +static bool rodata_is_rw __ro_after_init = true; + /* * The booting CPU updates the failed status @__early_cpu_boot_status, * with MMU turned off. @@ -73,10 +75,21 @@ static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused; static DEFINE_SPINLOCK(swapper_pgdir_lock); static DEFINE_MUTEX(fixmap_lock); -void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd) +void noinstr set_swapper_pgd(pgd_t *pgdp, pgd_t pgd) { pgd_t *fixmap_pgdp; + /* + * Don't bother with the fixmap if swapper_pg_dir is still mapped + * writable in the kernel mapping. + */ + if (rodata_is_rw) { + WRITE_ONCE(*pgdp, pgd); + dsb(ishst); + isb(); + return; + } + spin_lock(&swapper_pgdir_lock); fixmap_pgdp = pgd_set_fixmap(__pa_symbol(pgdp)); WRITE_ONCE(*fixmap_pgdp, pgd); @@ -614,6 +627,7 @@ void mark_rodata_ro(void) * to cover NOTES and EXCEPTION_TABLE. 
*/ section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata; + WRITE_ONCE(rodata_is_rw, false); update_mapping_prot(__pa_symbol(__start_rodata), (unsigned long)__start_rodata, section_size, PAGE_KERNEL_RO); From patchwork Tue Mar 7 14:05:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163806 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 826F0C678D4 for ; Tue, 7 Mar 2023 14:43:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=LkRYTBX6dQ7vJbivbC5UJJU13MzELoj/i9nCOnrEWuQ=; b=aod1fsIMaP1597 19yJuGCrwU3GlKZd6b5mNOtJDUnuxat5AtHoLoIzTUvb7tnn2dyC9yb978Lr3Woh7bdjgbiNjGXnw OyKVmU7pdOn4b9DTDNqGJoKE/9BlvVKFmiFZvvULuhfVOm9JRr4nqc1bq+YEC+VAobBKH6jPi6JX7 YHg0QuhXE3x9XOmRRcpZDNmwmGIPvWwy2YnFZl40v672FSAQdskLVFKFuZt7cgmHREEmS2yOKHv74 TSnnnqchq4Lpy6pwNxEC5B+UxgxdjzAnI6Je6kxhUbOhKuStXNXId3sb9+GJRB8LWEDesfCjQxkIi k0cimRcRn1C0AP+rfWQA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYWV-000tv6-G4; Tue, 07 Mar 2023 14:42:39 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzD-000efJ-0O for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:17 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 93814B81903; Tue, 7 Mar 2023 14:08:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4CE34C433A8; Tue, 7 Mar 2023 14:08:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198092; bh=X9Ek6XmryrGuZuPxCkSVXUt6q5oJSf6iQG3CXGNKcYU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NXFkLomw7g6jIgPo/aXn1Yg6xIqW1a6HGbT1Gln303rNFhIYofW3qduzp/8iDECUV bkuzoxc+lpzab4iQ3g8zXgaMR5iUOCj3y4S+FFf61U8IXt2u2/NMYlr6dkIgG/2cS+ ekJePay1/wARy8k0WzwNlQARXRz/UAFJYeF4veuwTau2kEMqWQtnTDp0uLp/4w43vA OsqjfpP5xDSFWPc1h1oE0dlflV/TEw1NW8/8TwXJ7Q2qB9MKdXyOjbJwnswHdm+kuS YabWkbTlX7pZzVDhZ2+OTg2MVY1eA7WSlmZwnjjhBwhO7ndUrtmhl2bgOHHQzjsggi PGquE9fjlqc5g== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 39/60] arm64: mm: omit redundant remap of kernel image Date: Tue, 7 Mar 2023 15:05:01 +0100 Message-Id: <20230307140522.2311461-40-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 
X-Developer-Signature: v=1; a=openpgp-sha256; l=9110; i=ardb@kernel.org; h=from:subject; bh=X9Ek6XmryrGuZuPxCkSVXUt6q5oJSf6iQG3CXGNKcYU=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJfVHdVds+ypenp3dSbmRacv+rnQ6coN1zoInF8UK1 5WfX/axo5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAEzkogvDPx1BsYjo9T7PZ289 nxFuXvdr58VTbGbNz53v8zO7lXdzyzL8T5M8pLd6j9V0y8T0sxJNP1Kzt7PmFmr8Zjvzd1PnQkc GXgA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060815_393168_511CD544 X-CRM114-Status: GOOD ( 26.33 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that the early kernel mapping is created with all the right attributes and segment boundaries, there is no longer a need to recreate it and switch to it. This also means we no longer have to copy the kasan shadow or some parts of the fixmap from one set of page tables to the other. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/kasan.h | 2 - arch/arm64/include/asm/mmu.h | 2 +- arch/arm64/kernel/image-vars.h | 1 + arch/arm64/kernel/pi/map_kernel.c | 6 +- arch/arm64/mm/kasan_init.c | 15 --- arch/arm64/mm/mmu.c | 110 +++----------------- 6 files changed, 20 insertions(+), 116 deletions(-) diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h index 12d5f47f7dbec628..ab52688ac4bd43b6 100644 --- a/arch/arm64/include/asm/kasan.h +++ b/arch/arm64/include/asm/kasan.h @@ -36,12 +36,10 @@ void kasan_init(void); #define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT))) #define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual) -void kasan_copy_shadow(pgd_t *pgdir); asmlinkage void kasan_early_init(void); #else static inline void kasan_init(void) { } -static inline void kasan_copy_shadow(pgd_t *pgdir) { } #endif #endif diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index e74dfae8e48214c3..fce956cd721ba64f 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -95,7 +95,7 @@ static inline bool kaslr_disabled_cmdline(void) } #define INIT_MM_CONTEXT(name) \ - .pgd = init_pg_dir, + .pgd = swapper_pg_dir, #endif /* !__ASSEMBLY__ */ #endif diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 6ca235b09a30d5d3..1c0e920a2466f851 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -50,6 +50,7 @@ PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); PROVIDE(__pi_init_idmap_pg_dir = init_idmap_pg_dir); PROVIDE(__pi_init_pg_dir = init_pg_dir); PROVIDE(__pi_init_pg_end = init_pg_end); +PROVIDE(__pi_swapper_pg_dir = swapper_pg_dir); PROVIDE(__pi__text = _text); PROVIDE(__pi__stext = _stext); diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index a718714eb671f290..a90c4d6fc75c35d0 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -125,8 +125,12 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level) text_prot, true, root_level); map_segment(init_pg_dir, NULL, va_offset, __inittext_begin, __inittext_end, text_prot, false, root_level); - dsb(ishst); } + + /* Copy the root page table to its final location */ + memcpy((void 
*)swapper_pg_dir + va_offset, init_pg_dir, PGD_SIZE); + dsb(ishst); + idmap_cpu_replace_ttbr1(swapper_pg_dir); } static void map_fdt(u64 fdt) diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index e969e68de005fd2a..df98f496539f0e39 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -184,21 +184,6 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end, kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false); } -/* - * Copy the current shadow region into a new pgdir. - */ -void __init kasan_copy_shadow(pgd_t *pgdir) -{ - pgd_t *pgdp, *pgdp_new, *pgdp_end; - - pgdp = pgd_offset_k(KASAN_SHADOW_START); - pgdp_end = pgd_offset_k(KASAN_SHADOW_END); - pgdp_new = pgd_offset_pgd(pgdir, KASAN_SHADOW_START); - do { - set_pgd(pgdp_new, READ_ONCE(*pgdp)); - } while (pgdp++, pgdp_new++, pgdp != pgdp_end); -} - static void __init clear_pgds(unsigned long start, unsigned long end) { diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 3f631f3bc2f80b2b..81634ff5f6a67476 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -634,9 +634,9 @@ void mark_rodata_ro(void) debug_checkwx(); } -static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end, - pgprot_t prot, struct vm_struct *vma, - int flags, unsigned long vm_flags) +static void __init declare_vma(struct vm_struct *vma, + void *va_start, void *va_end, + unsigned long vm_flags) { phys_addr_t pa_start = __pa_symbol(va_start); unsigned long size = va_end - va_start; @@ -644,9 +644,6 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end, BUG_ON(!PAGE_ALIGNED(pa_start)); BUG_ON(!PAGE_ALIGNED(size)); - __create_pgd_mapping(pgdp, pa_start, (unsigned long)va_start, size, prot, - early_pgtable_alloc, flags); - if (!(vm_flags & VM_NO_GUARD)) size += PAGE_SIZE; @@ -691,87 +688,17 @@ core_initcall(map_entry_trampoline); #endif /* - * Open coded check for BTI, only for use to determine configuration - * for early mappings for before the cpufeature code has run. + * Declare the VMA areas for the kernel */ -static bool arm64_early_this_cpu_has_bti(void) +static void __init declare_kernel_vmas(void) { - u64 pfr1; - - if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) - return false; - - pfr1 = __read_sysreg_by_encoding(SYS_ID_AA64PFR1_EL1); - return cpuid_feature_extract_unsigned_field(pfr1, - ID_AA64PFR1_EL1_BT_SHIFT); -} - -/* - * Create fine-grained mappings for the kernel. - */ -static void __init map_kernel(pgd_t *pgdp) -{ - static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext, - vmlinux_initdata, vmlinux_data; - - /* - * External debuggers may need to write directly to the text - * mapping to install SW breakpoints. Allow this (only) when - * explicitly requested with rodata=off. - */ - pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; - - /* - * If we have a CPU that supports BTI and a kernel built for - * BTI then mark the kernel executable text as guarded pages - * now so we don't have to rewrite the page tables later. - */ - if (arm64_early_this_cpu_has_bti()) - text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP); + static struct vm_struct vmlinux_seg[KERNEL_SEGMENT_COUNT]; - /* - * Only rodata will be remapped with different permissions later on, - * all other segments are allowed to use contiguous mappings. 
- */ - map_kernel_segment(pgdp, _stext, _etext, text_prot, &vmlinux_text, 0, - VM_NO_GUARD); - map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL, - &vmlinux_rodata, NO_CONT_MAPPINGS, VM_NO_GUARD); - map_kernel_segment(pgdp, __inittext_begin, __inittext_end, text_prot, - &vmlinux_inittext, 0, VM_NO_GUARD); - map_kernel_segment(pgdp, __initdata_begin, __initdata_end, PAGE_KERNEL, - &vmlinux_initdata, 0, VM_NO_GUARD); - map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0); - - if (!READ_ONCE(pgd_val(*pgd_offset_pgd(pgdp, FIXADDR_START)))) { - /* - * The fixmap falls in a separate pgd to the kernel, and doesn't - * live in the carveout for the swapper_pg_dir. We can simply - * re-use the existing dir for the fixmap. - */ - set_pgd(pgd_offset_pgd(pgdp, FIXADDR_START), - READ_ONCE(*pgd_offset_k(FIXADDR_START))); - } else if (CONFIG_PGTABLE_LEVELS > 3) { - pgd_t *bm_pgdp; - p4d_t *bm_p4dp; - pud_t *bm_pudp; - /* - * The fixmap shares its top level pgd entry with the kernel - * mapping. This can really only occur when we are running - * with 16k/4 levels, so we can simply reuse the pud level - * entry instead. - */ - BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES)); - bm_pgdp = pgd_offset_pgd(pgdp, FIXADDR_START); - bm_p4dp = p4d_offset(bm_pgdp, FIXADDR_START); - bm_pudp = pud_set_fixmap_offset(bm_p4dp, FIXADDR_START); - pud_populate(&init_mm, bm_pudp, lm_alias(bm_pmd)); - pud_clear_fixmap(); - } else { - BUG(); - } - - kasan_copy_shadow(pgdp); + declare_vma(&vmlinux_seg[0], _stext, _etext, VM_NO_GUARD); + declare_vma(&vmlinux_seg[1], __start_rodata, __inittext_begin, VM_NO_GUARD); + declare_vma(&vmlinux_seg[2], __inittext_begin, __inittext_end, VM_NO_GUARD); + declare_vma(&vmlinux_seg[3], __initdata_begin, __initdata_end, VM_NO_GUARD); + declare_vma(&vmlinux_seg[4], _data, _end, 0); } void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot, @@ -807,23 +734,12 @@ static void __init create_idmap(void) void __init paging_init(void) { - pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir)); - extern pgd_t init_idmap_pg_dir[]; - - map_kernel(pgdp); - map_mem(pgdp); - - pgd_clear_fixmap(); - - cpu_replace_ttbr1(lm_alias(swapper_pg_dir), init_idmap_pg_dir); - init_mm.pgd = swapper_pg_dir; - - memblock_phys_free(__pa_symbol(init_pg_dir), - __pa_symbol(init_pg_end) - __pa_symbol(init_pg_dir)); + map_mem(swapper_pg_dir); memblock_allow_resize(); create_idmap(); + declare_kernel_vmas(); } #ifdef CONFIG_MEMORY_HOTPLUG From patchwork Tue Mar 7 14:05:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163836 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 303FFC678D4 for ; Tue, 7 Mar 2023 14:44:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; 
bh=TCMDjfoP+l+MZjkVwATw8WVFhzLp61v2wjr3iGlSXg0=; b=1bjcnix9Wekwcs NpPEKaSsXdpCYZKrfzjU4zLjQHW3/Cxo0Ab1Xbj5IOByWLzpJ7XaVDSkVIb/DqhaHhINc9u8ueQB5 Cha2GS8FrOlKkTwm+sgm+tsROwtdulIqrophWfofFXtS/y0QEpVhhEoxiKCVsEacWjtOe5sNxBV03 +o2LkwN1IKBtjeKCBs8raUe9bSptO/pHxnff+uzFOQ1qeHaO4v90NDsBY+Zr4v8zXkHWRhcLofcRG xakQHCS6cDav9QoUVbGzbMvsuaUzwfqRWLvHcimSgA8RV8Hw6KFIOy2MqaimCOhwKKTbn7WP71SCt 67Orr6PcCCHtLanR5fDQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYXh-000uVv-ST; Tue, 07 Mar 2023 14:43:54 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzF-000ega-HZ for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:21 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E912CB818E9; Tue, 7 Mar 2023 14:08:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AFAAFC4339C; Tue, 7 Mar 2023 14:08:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198094; bh=PQI4coC3/ibnm3/mD+JXkBCOdMxxUJnhmcG2A1VOyNA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WFMQQf+WtXmiw+m8jB4brIlFnzMWsBhXfd1g7Fi+I27gwyc4e+efcJWcx+zrK/lnd dqWFRmz/WUSohHI9wDlpJ4OcVTuFAX4oyxaWH5bo4lzuROE1XH/BDIB7fN/HNTz/ch aXBb/8ZFPa3KGsH8ApheDzEGNp16+JhGsdI8VWTqtIGs5BJZTAa6qYcex2EKfqWyc7 ekLRxzbTkVH0ywzgZuvE1uy3qefbdZGvncYUmYv4oqiIigy/7MIjwDDiKivY4xVE4V a5sVneCSA0o0ulki0bdQ1N9SxrDjvu0scgqXv24OcAbbirW80Idvu7tpxwlGZgR1O+ 8zoWg774s3Y/g== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 40/60] arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()" Date: Tue, 7 Mar 2023 15:05:02 +0100 Message-Id: <20230307140522.2311461-41-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3767; i=ardb@kernel.org; h=from:subject; bh=PQI4coC3/ibnm3/mD+JXkBCOdMxxUJnhmcG2A1VOyNA=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJd2wI3XZxHcbg78XCtwqTV+vbr7kpe/3iLab5bumb WZrPcDQUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACZSzM3wV2zxykVWmnOXrzXh LrtUMvFc6C95va8cShmx+tl33kX4izL8dz/y46OtypvzP1qfVqxOT/m8pvNTAbdWVI757YMWBb4 6LAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060817_969290_E1683E22 X-CRM114-Status: GOOD ( 15.75 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This reverts commit 1682c45b920643c, which is no longer needed now that we create the permanent kernel mapping directly during early boot. 
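For callers the change is purely mechanical: cpu_replace_ttbr1() drops its second argument and cpu_install_idmap() always installs idmap_pg_dir again. A condensed before/after of a typical call site, mirroring the hunks below (illustration only, not an additional hunk):

	/* before this revert */
	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);

	/* after this revert */
	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
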
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/mmu_context.h | 13 ++++--------- arch/arm64/kernel/cpufeature.c | 2 +- arch/arm64/kernel/suspend.c | 2 +- arch/arm64/mm/kasan_init.c | 4 ++-- 4 files changed, 8 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index f9ae2891e4c72c7f..bc1cef5002d60e02 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -104,18 +104,13 @@ static inline void cpu_uninstall_idmap(void) cpu_switch_mm(mm->pgd, mm); } -static inline void __cpu_install_idmap(pgd_t *idmap) +static inline void cpu_install_idmap(void) { cpu_set_reserved_ttbr0(); local_flush_tlb_all(); cpu_set_idmap_tcr_t0sz(); - cpu_switch_mm(lm_alias(idmap), &init_mm); -} - -static inline void cpu_install_idmap(void) -{ - __cpu_install_idmap(idmap_pg_dir); + cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm); } /* @@ -146,7 +141,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz) * Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD, * avoiding the possibility of conflicting TLB entries being allocated. */ -static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap) +static inline void cpu_replace_ttbr1(pgd_t *pgdp) { typedef void (ttbr_replace_func)(phys_addr_t); extern ttbr_replace_func idmap_cpu_replace_ttbr1; @@ -170,7 +165,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap) replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1); - __cpu_install_idmap(idmap); + cpu_install_idmap(); /* * We really don't want to take *any* exceptions while TTBR1 is diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index e9788671be044a47..b206de4758ce6fb3 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -3456,7 +3456,7 @@ subsys_initcall_sync(init_32bit_el0_mask); static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap) { - cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir); + cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); } /* diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c index 0fbdf5fe64d8da08..7c2391851db6c198 100644 --- a/arch/arm64/kernel/suspend.c +++ b/arch/arm64/kernel/suspend.c @@ -55,7 +55,7 @@ void notrace __cpu_suspend_exit(void) /* Restore CnP bit in TTBR1_EL1 */ if (system_supports_cnp()) - cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir); + cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); /* * PSTATE was not saved over suspend/resume, re-enable any detected diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index df98f496539f0e39..7e32f21fb8e1e227 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -221,7 +221,7 @@ static void __init kasan_init_shadow(void) */ memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir)); dsb(ishst); - cpu_replace_ttbr1(lm_alias(tmp_pg_dir), idmap_pg_dir); + cpu_replace_ttbr1(lm_alias(tmp_pg_dir)); clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END); @@ -265,7 +265,7 @@ static void __init kasan_init_shadow(void) PAGE_KERNEL_RO)); memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); - cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir); + cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); } static void __init kasan_init_depth(void) From patchwork Tue Mar 7 14:05:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 
13163837 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 870E6C678D4 for ; Tue, 7 Mar 2023 14:45:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=7hCANmuC2cM1QPAIWCJTOeMlGsz0c0VRy69dtOiziVk=; b=pM8Xg+mN4H5sZs pMRi/mpRHBpJo2yZYri0eo8EWnH2XjcY3RaNaA2LDhaz7YapCC7ZVCwWQZXHFogdpDxlCE/wA0A5a AkJzdONhoyzeF3gL2OuE4UXDCOYpeBNR/NB8OweI/h8HwldyYYOaUtTDK3GnKIj3u5q0pFu24kafY W0N69MD1epmvjmAaCyI4CCNPcbi8OoHo6IisNoHcPxgQHiJ4E7yydcKN6FnIbzUMtMn/bYX000/kK Z8zmV4lRz6yjQ8UdJg20ru82MZdnVXqHDx+ApdxCyrNUvx+xgdxYCvY2KotvBQmXtq8LXjA9SxICC jMVOI6JPrG54/3zZvoWg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYYE-000ujg-34; Tue, 07 Mar 2023 14:44:26 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzG-000ehN-84 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:21 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A38D961463; Tue, 7 Mar 2023 14:08:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1E93CC433D2; Tue, 7 Mar 2023 14:08:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198097; bh=KyeVOWyCVtFoTF1LJWtSXOxn+egSaaC3be0AkldlV1c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JMooe098dHSvS1lQPK7GmT7Datna6P62s4X6DVa9UodJ25o24vy8IvaPovzlhZhgG DIeE4wDoRcH6WNduY/7q7KGv8XTiqORo/6W+CHdDx1uxzfmzdpmSv+CdLTAFceAz20 6tpY1cp4859he3ucXZgwPCMVYwWeZFrnel7Lkefp4Lc1WGwgADULkfgx4VMVW2MhrR vdg8ExweFZYZwhCNN0mjkKWfUR+r9xhBYxzvX0scTbnA6NgTWnqAGs0bujVcPQAQco CZlrATLopOmqR7jPD4OjQGl2kHgmjZIJppgtTszZJl/giF9ybnm7QQee/ECV1syOwy sRAtUk8wQn7dw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 41/60] arm64/mm: Add FEAT_LPA2 specific TCR_EL1.DS field Date: Tue, 7 Mar 2023 15:05:03 +0100 Message-Id: <20230307140522.2311461-42-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=961; i=ardb@kernel.org; h=from:subject; bh=FGpJVT/P4QQvSVy9l1/wAhcmiWJoU30A2wDupoSxpoU=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJfvh8eUxgbdFyxc4HVixJr01y795zapFhS8Ckox+s ky5dWRJRykLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZiIUhTD/7AN6ZNfPoxb19Pz 7aDW9+ucZb91gw/ze8w6t1jw1N4E5vMM/z32PtH7UHRtf9nVLfOyLrmkMGzes3uN5A3FjP9KLgy hUfwA X-Developer-Key: i=ardb@kernel.org; a=openpgp; 
fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060818_374181_D80D0904 X-CRM114-Status: GOOD ( 11.70 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Anshuman Khandual As per ARM ARM (0487G.A) TCR_EL1.DS fields controls whether 52 bit input and output address get supported on 4K and 16K page size configuration, when FEAT_LPA2 is known to have been implemented. This adds TCR_DS field definition which would be used when FEAT_LPA2 gets enabled. Acked-by: Catalin Marinas Signed-off-by: Anshuman Khandual Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/pgtable-hwdef.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index f658aafc47dfa29a..c4ad7fbb12c5c07a 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -276,6 +276,7 @@ #define TCR_E0PD1 (UL(1) << 56) #define TCR_TCMA0 (UL(1) << 57) #define TCR_TCMA1 (UL(1) << 58) +#define TCR_DS (UL(1) << 59) /* * TTBR. From patchwork Tue Mar 7 14:05:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163838 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3895AC6FD1B for ; Tue, 7 Mar 2023 14:46:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=NAl8MNIR406xvSE9cb9sbsfF49QOJPuAc8ckyoFhMF4=; b=41pZUSdMFf/5pt 3P3lSWY+jlqf/UEOflgfyieeufI/Lm49mtarahmM6Djb1eSviOfDEGjAlY9adx+oe4d2+NJ7AaTNw 9bnCSxn4TKl90CRui29s545copgL7kntnwqsHHAUMBsuWI6wGpYASPTEhGSqt1T7magTv2vwELJk5 URVV14oybbwot6fkwZzMAukLfvMxYo/gq/c1+H7sqg7kE8v6IzmSGEFJmpGc0ixzTChxUA+z0ewbo AmhFunPJQiq3FZzQGUokd2qf47vg/0Kwikb7GmgpdJX/t4TVZjUzwfV1Te1AOK336iLWhN24xIuZx Tmy5iGe1rRp17bVK+vMg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYYj-000v0P-Sp; Tue, 07 Mar 2023 14:44:58 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzJ-000ej5-AR for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:22 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 792C16146C; Tue, 7 Mar 2023 14:08:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 80D9EC433A4; Tue, 
7 Mar 2023 14:08:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198099; bh=1Pv62B1bCqgAyF8KhSQC3DpGFth7DQJv+JqkdwKA/P8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ub98OoZAczjx5df73sZsoMHG9cjxGO/uF8C5atheU3gDlrpNMJU1glWwS7UYuwabn e9o+31uQNbmEy5UFteDabiQoUWOe1wG/YShLWi6prDCQKpUitSLbABA0TYdduW8EzV v1u/hfN8ORwsX/1Fgp6p2MvElkpRnhOm7jCeBZuitf/NLspLbIWNlT0x+RbAh6UpSc wRKfAdDf6KO3zKwI5gAjvB5SAzXnKpT6ZbhydJF+ojlk6cvtoW/p8lhRJLTk5ipi4x i9UftwaJy7Pl0/aF/xB7tuRhgTDHXWsqFxEZ7S9USFoOsxJpWYrBAT/k/EBta/L4wT bAJmJGExIvxiw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 42/60] arm64/mm: Add FEAT_LPA2 specific ID_AA64MMFR0.TGRAN[2] Date: Tue, 7 Mar 2023 15:05:04 +0100 Message-Id: <20230307140522.2311461-43-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1728; i=ardb@kernel.org; h=from:subject; bh=86yHI17InXfad8HaTxgE2bN7oLspaaBMytfB+uKkgcI=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdJS+O8eZEv2X/L/Y5MbWeW521bcuZeZfunv06I2wxw 5Zpyn7HOkpZGMQ4GGTFFFkEZv99t/P0RKla51myMHNYmUCGMHBxCsBEJAoZ/ideMRLnjWy/udRo fXFwTMlL/78FVRWmyscmb3TIEn/NcYGR4fSpLbL1k9bMDer1u/C4uXxvwLOi9uUNz5g7iv+yi7N M4AAA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060821_483645_0FFE3DF3 X-CRM114-Status: GOOD ( 11.49 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Anshuman Khandual PAGE_SIZE support is tested against possible minimum and maximum values for its respective ID_AA64MMFR0.TGRAN field, depending on whether it is signed or unsigned. But then FEAT_LPA2 implementation needs to be validated for 4K and 16K page sizes via feature specific ID_AA64MMFR0.TGRAN values. Hence it adds FEAT_LPA2 specific ID_AA64MMFR0.TGRAN[2] values per ARM ARM (0487G.A). 
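As a rough sketch of how such a definition is consumed (illustration only, not part of this patch — read_sysreg_s() and cpuid_feature_extract_unsigned_field() are existing kernel helpers, and the equality check glosses over the signedness caveat mentioned above):

	static bool cpu_has_lpa2_granule(void)
	{
		u64 mmfr0 = read_sysreg_s(SYS_ID_AA64MMFR0_EL1);
		unsigned int tgran;

		/* TGRAN field for the configured page size (4K or 16K) */
		tgran = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_EL1_TGRAN_SHIFT);

		/* the LPA2-specific value advertises 52-bit addressing */
		return tgran == ID_AA64MMFR0_EL1_TGRAN_LPA2;
	}
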
Acked-by: Catalin Marinas Signed-off-by: Anshuman Khandual Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/sysreg.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 9e3ecba3c4e67936..21f2ceeb2d152469 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -630,11 +630,13 @@ #if defined(CONFIG_ARM64_4K_PAGES) #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT +#define ID_AA64MMFR0_EL1_TGRAN_LPA2 ID_AA64MMFR0_EL1_TGRAN4_52_BIT #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT #elif defined(CONFIG_ARM64_16K_PAGES) #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN16_SHIFT +#define ID_AA64MMFR0_EL1_TGRAN_LPA2 ID_AA64MMFR0_EL1_TGRAN16_52_BIT #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT From patchwork Tue Mar 7 14:05:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163840 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 03C2EC678D4 for ; Tue, 7 Mar 2023 14:47:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=sRsTz525h8psR5TOM7Seq4Ez3ewUTcIfBHmnFDOSqeI=; b=M7GyL3uvbnH13k teFe3DEAJtXDvuiWjlYb01uPiwd+1rDGidcASIa6wXv6LCeN0U4aeGX2vDwPN4vwIS5YuPqu+uYM2 1+O85Gii0h5DvpUryEiQetsoOH95LaQr126dRum5nbvLl1pivRVLtiJEwQ6x6SZnLNhQWfp55cFbh r25fojU9et9FAqU6vmbdSSgvYR7mR+AJ5F85j5HEtyzaFY8RvKzyf7JH8Zw1Kzk9+Qw+dJt7GvoX7 toCwB68LiubWSLdSlQiWJgiWB0D39e1dgkPdU30Vqpxao4NAY4i/qmIRHgGlELT2z0Z6a3LIeyiTO 2ZvBxeVTqf/CjQafsMhQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYaQ-000vwG-Ob; Tue, 07 Mar 2023 14:46:43 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzL-000ek0-Su for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:29 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 73D1E6145A; Tue, 7 Mar 2023 14:08:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E3720C433D2; Tue, 7 Mar 2023 14:08:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198101; 
bh=0PDBzg8wzgrWAIdBRQh7oJRLRwPNXTACA1QXrN62KoY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bSjJrjZrOyi5N8vBSHTP3EUIbXM2M6fNQP2xzAi5tBc0sPscgvRr+/NjFSoZfQCCk Oli2XrA16eygeNmOqFLUbMYkl/G9vdg40hlywgUKoQ9qQ1FuKISZk15fd8qjIp5Yy/ w6jF1um8p/ZVrMAw3iDw3tt0ixz/EzvwE5+Rj4OslXVxbczM14P0Aa9S4jLWfp4hDS RkFsUQEXmmtfEh6fZGdT3QHzkBEMZxjzcQpvOg6hx1AqAeLjrmfh+2N+XJxv+PmJPK oW5IhIdPSQFmcWTw5qgt+BBh4QzCfELU6DNUolujxQDSklsZpaddw9pYet0dXFPxre oGrDuauncxUXA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 43/60] arm64: mm: Handle LVA support as a CPU feature Date: Tue, 7 Mar 2023 15:05:05 +0100 Message-Id: <20230307140522.2311461-44-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=10247; i=ardb@kernel.org; h=from:subject; bh=0PDBzg8wzgrWAIdBRQh7oJRLRwPNXTACA1QXrN62KoY=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpUD+zW72xNf/H74SuMLyw0gj46rMeXdhj4zcOK8X+ 9Yvcl7QUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACbCu4CRYe7280J/Qgxf1eb/ 5jyUIrhn4iWL9FdrwnUqdm/7dyLmlAnD/6ynEVz6cyJ0r4qu+yJ3hvPAj7ccvJXsSvv8/yjd/hv mzQUA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060824_109806_E02AF77B X-CRM114-Status: GOOD ( 26.30 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently, we detect CPU support for 52-bit virtual addressing (LVA) extremely early, before creating the kernel page tables or enabling the MMU. We cannot override the feature this early, and so large virtual addressing is always enabled on CPUs that implement support for it if the software support for it was enabled at build time. It also means we rely on non-trivial code in asm to deal with this feature. Given that both the ID map and the TTBR1 mapping of the kernel image are guaranteed to be 48-bit addressable, it is not actually necessary to enable support this early, and instead, we can model it as a CPU feature. That way, we can rely on code patching to get the correct TCR.T1SZ values programmed on secondary boot and suspend from resume. On the primary boot path, we simply enable the MMU with 48-bit virtual addressing initially, and update TCR.T1SZ if LVA is supported from C code, right before creating the kernel mapping. Given that TTBR1 still points to reserved_pg_dir at this point, updating TCR.T1SZ should be safe without the need for explicit TLB maintenance. Since this gets rid of all accesses to the vabits_actual variable from asm code that occurred before TCR.T1SZ had been programmed, we no longer have a need for this variable, and we can replace it with a C expression that produces the correct value directly, based on the value of TCR.T1SZ. 
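Condensed, the replacement boils down to reading TCR_EL1 back and deriving the VA size from T1SZ (bits [21:16]); this simply mirrors the asm/memory.h hunk further down:

	static inline u64 __pure read_tcr(void)
	{
		u64 tcr;

		// read_sysreg() uses asm volatile, so avoid it here
		asm("mrs %0, tcr_el1" : "=r"(tcr));
		return tcr;
	}

	/* T1SZ == 16 -> 48 VA bits, T1SZ == 12 -> 52 VA bits */
	#define vabits_actual	(64 - ((read_tcr() >> 16) & 63))
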
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/cpufeature.h | 9 ++++++ arch/arm64/include/asm/memory.h | 13 ++++++++- arch/arm64/kernel/cpufeature.c | 13 +++++++++ arch/arm64/kernel/head.S | 29 +++++--------------- arch/arm64/kernel/image-vars.h | 1 - arch/arm64/kernel/pi/map_kernel.c | 3 ++ arch/arm64/kernel/sleep.S | 3 -- arch/arm64/mm/mmu.c | 5 ---- arch/arm64/mm/proc.S | 9 +++--- arch/arm64/tools/cpucaps | 1 + 10 files changed, 49 insertions(+), 37 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index edefe3b36fe5c243..a37f4956d5a7ef6e 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -989,6 +989,15 @@ static inline bool cpu_has_pac(void) return feat; } +static inline bool cpu_has_lva(void) +{ + u64 mmfr2; + + mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1); + return cpuid_feature_extract_unsigned_field(mmfr2, + ID_AA64MMFR2_EL1_VARange_SHIFT); +} + #endif /* __ASSEMBLY__ */ #endif diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index f96975466ef1b752..3e32d957aadcb2bb 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -183,9 +183,20 @@ #include #include #include +#include + +static inline u64 __pure read_tcr(void) +{ + u64 tcr; + + // read_sysreg() uses asm volatile, so avoid it here + asm("mrs %0, tcr_el1" : "=r"(tcr)); + return tcr; +} #if VA_BITS > 48 -extern u64 vabits_actual; +// For reasons of #include hell, we can't use TCR_T1SZ_OFFSET/TCR_T1SZ_MASK here +#define vabits_actual (64 - ((read_tcr() >> 16) & 63)) #else #define vabits_actual ((u64)VA_BITS) #endif diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index b206de4758ce6fb3..660dedcae173841a 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2752,6 +2752,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_cpuid_feature, .cpu_enable = cpu_enable_dit, }, +#ifdef CONFIG_ARM64_VA_BITS_52 + { + .desc = "52-bit Virtual Addressing (LVA)", + .capability = ARM64_HAS_VA52, + .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, + .sys_reg = SYS_ID_AA64MMFR2_EL1, + .sign = FTR_UNSIGNED, + .field_width = 4, + .field_pos = ID_AA64MMFR2_EL1_VARange_SHIFT, + .matches = has_cpuid_feature, + .min_field_value = ID_AA64MMFR2_EL1_VARange_52, + }, +#endif {}, }; diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index c6f8c3b1f026c07b..47ebe3242d7feb7e 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -80,7 +80,6 @@ * x19 primary_entry() .. start_kernel() whether we entered with the MMU on * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 - * x25 primary_entry() .. start_kernel() supported VA size */ SYM_CODE_START(primary_entry) bl record_mmu_state @@ -125,14 +124,6 @@ SYM_CODE_START(primary_entry) * On return, the CPU will be ready for the MMU to be turned on and * the TCR will have been set. */ -#if VA_BITS > 48 - mrs_s x0, SYS_ID_AA64MMFR2_EL1 - tst x0, #0xf << ID_AA64MMFR2_EL1_VARange_SHIFT - mov x0, #VA_BITS - mov x25, #VA_BITS_MIN - csel x25, x25, x0, eq - mov x0, x25 -#endif bl __cpu_setup // initialise processor b __primary_switch SYM_CODE_END(primary_entry) @@ -242,11 +233,6 @@ SYM_FUNC_START_LOCAL(__primary_switched) mov x0, x20 bl set_cpu_boot_mode_flag -#if VA_BITS > 48 - adr_l x8, vabits_actual // Set this early so KASAN early init - str x25, [x8] // ... 
observes the correct value - dc civac, x8 // Make visible to booting secondaries -#endif #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init #endif @@ -374,10 +360,13 @@ SYM_FUNC_START_LOCAL(secondary_startup) * Common entry point for secondary CPUs. */ mov x20, x0 // preserve boot mode + +#ifdef CONFIG_ARM64_VA_BITS_52 +alternative_if ARM64_HAS_VA52 bl __cpu_secondary_check52bitva -#if VA_BITS > 48 - ldr_l x0, vabits_actual +alternative_else_nop_endif #endif + bl __cpu_setup // initialise processor adrp x1, swapper_pg_dir adrp x2, idmap_pg_dir @@ -480,12 +469,8 @@ SYM_FUNC_START(__enable_mmu) ret SYM_FUNC_END(__enable_mmu) +#ifdef CONFIG_ARM64_VA_BITS_52 SYM_FUNC_START(__cpu_secondary_check52bitva) -#if VA_BITS > 48 - ldr_l x0, vabits_actual - cmp x0, #52 - b.ne 2f - mrs_s x0, SYS_ID_AA64MMFR2_EL1 and x0, x0, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT) cbnz x0, 2f @@ -496,9 +481,9 @@ SYM_FUNC_START(__cpu_secondary_check52bitva) wfi b 1b -#endif 2: ret SYM_FUNC_END(__cpu_secondary_check52bitva) +#endif SYM_FUNC_START_LOCAL(__no_granule_support) /* Indicate that this CPU can't boot and is stuck in the kernel */ diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 1c0e920a2466f851..61d1d283a69ba5d8 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -34,7 +34,6 @@ PROVIDE(__pi___memcpy = __pi_memcpy); PROVIDE(__pi___memmove = __pi_memmove); PROVIDE(__pi___memset = __pi_memset); -PROVIDE(__pi_vabits_actual = vabits_actual); PROVIDE(__pi_id_aa64isar1_override = id_aa64isar1_override); PROVIDE(__pi_id_aa64isar2_override = id_aa64isar2_override); PROVIDE(__pi_id_aa64mmfr1_override = id_aa64mmfr1_override); diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index a90c4d6fc75c35d0..c1a5bef4e10a49d7 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -166,6 +166,9 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) chosen = fdt_path_offset(fdt, chosen_str); init_feature_override(boot_status, fdt, chosen); + if (VA_BITS > VA_BITS_MIN && cpu_has_lva()) + sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(VA_BITS)); + /* * The virtual KASLR displacement modulo 2MiB is decided by the * physical placement of the image, as otherwise, we might not be able diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S index 2ae7cff1953aaf87..353e71bd40a1e1e5 100644 --- a/arch/arm64/kernel/sleep.S +++ b/arch/arm64/kernel/sleep.S @@ -102,9 +102,6 @@ SYM_CODE_START(cpu_resume) mov x0, xzr bl init_kernel_el mov x19, x0 // preserve boot mode -#if VA_BITS > 48 - ldr_l x0, vabits_actual -#endif bl __cpu_setup /* enable the MMU early - so we can access sleep_save_stash by va */ adrp x1, swapper_pg_dir diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 81634ff5f6a67476..914745697fb8b30c 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -43,11 +43,6 @@ #define NO_CONT_MAPPINGS BIT(1) #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */ -#if VA_BITS > 48 -u64 vabits_actual __ro_after_init = VA_BITS_MIN; -EXPORT_SYMBOL(vabits_actual); -#endif - u64 kimage_voffset __ro_after_init; EXPORT_SYMBOL(kimage_voffset); diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index d0748f18b2abdf0e..a5dc009b7dd5c141 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -396,8 +396,6 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings) * * Initialise the processor for turning the MMU on. 
* - * Input: - * x0 - actual number of VA bits (ignored unless VA_BITS > 48) * Output: * Return in x0 the value of the SCTLR_EL1 register. */ @@ -422,16 +420,17 @@ SYM_FUNC_START(__cpu_setup) mair .req x17 tcr .req x16 mov_q mair, MAIR_EL1_SET - mov_q tcr, TCR_T0SZ(48) | TCR_T1SZ(VA_BITS) | TCR_CACHE_FLAGS | \ + mov_q tcr, TCR_T0SZ(48) | TCR_T1SZ(VA_BITS_MIN) | TCR_CACHE_FLAGS | \ TCR_SMP_FLAGS | TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS tcr_clear_errata_bits tcr, x9, x5 #ifdef CONFIG_ARM64_VA_BITS_52 - sub x9, xzr, x0 - add x9, x9, #64 + mov x9, #64 - VA_BITS +alternative_if ARM64_HAS_VA52 tcr_set_t1sz tcr, x9 +alternative_else_nop_endif #endif /* diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps index 37b1340e96466411..fce10148f000a8a4 100644 --- a/arch/arm64/tools/cpucaps +++ b/arch/arm64/tools/cpucaps @@ -43,6 +43,7 @@ HAS_SB HAS_STAGE2_FWB HAS_TIDCP1 HAS_TLB_RANGE +HAS_VA52 HAS_VIRT_HOST_EXTN HAS_WFXT HW_DBM From patchwork Tue Mar 7 14:05:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163839 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CE3F9C678D5 for ; Tue, 7 Mar 2023 14:46:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=PsVGusg6tomyxt5Jq76xNfkXqwl5k0PP4Ry6ZpL8OlY=; b=aNW7X3aznZRUZY ePWmr4eonhbtoHl8wOouKiZCrf5i8UvyVvzGssqIJ4Ryc38gLpyPfbd8voOClUQuczp49kboTITIg WsI6g6FtkC/DY1vo9dI8LxfOpEvRbCej58bQVLqVQIZiIlowp+IcBBhYCDHKLeoYJlGPUWKwjeB/7 2xSyYGee9sU43tU8Zj3MMZelagVNC8wM85v/WTum27WHX0quUg7nk3wrLVdmwmywVj6tXC7bD2wtM d41Xkvs8WmaETtvqF5TNJdayjwFrv4eRkZ2y65Yn121hDF45f74LotONP9YZAdE6tJAg6+ItbSVBA t/2zauEiIEQzEJzgqJYw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYZW-000vPo-G6; Tue, 07 Mar 2023 14:45:47 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzN-000elT-ST for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:28 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 272B061469; Tue, 7 Mar 2023 14:08:25 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 52117C433A4; Tue, 7 Mar 2023 14:08:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198104; bh=cITyNqDHqP2MM3JyZX0FhteueW9phnpQCLtZyPbhWNs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uMJwLq0LCPwhiAjjMW4AY+5bLy27Tvgq5jAk+4mhMjQl49koLByrC2VNQ6UkY5m01 
VFaW231zd/9C4AXiMrS1FlTFKamQGl+YmvKHKDPY59bV0MiFVxXkJF1NkHtvoveqkH tiJUcwlAYpfJaMm1nJU10lXXclo/wRh/iLH2dnVopSG+x3fF8vycis0tlOOTJa4MAO Vbq2knjTuBdZiRRsA6nF1qud+Z9hh27i53La4cWjzVqdsC90dE0RJiPwJ/roCM1xq9 bkgNrpXK/ylznhc1NlOoy0uqYb1wSllgF/a9bZluTdVXhdWsPhozqeowflcFl9Fhcc wTdY2hQVd7dig== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 44/60] arm64: mm: Add feature override support for LVA Date: Tue, 7 Mar 2023 15:05:06 +0100 Message-Id: <20230307140522.2311461-45-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6883; i=ardb@kernel.org; h=from:subject; bh=cITyNqDHqP2MM3JyZX0FhteueW9phnpQCLtZyPbhWNs=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpbj107xtLxJ7T+TMELy/+PM36Tmhu+SLtpS4PmLnf nzrUoNxRykLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZiItBjD/7B97rpLQvJ3H1d5 b3vsCp/uEy/eVZUN5kfSWT5XTjzcu4Phv2OP7I3tv5bMD5E7WH/Uqmyun8N2zcYcF96jPdL7vO/ FcgEA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060826_102596_850F6A96 X-CRM114-Status: GOOD ( 23.42 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add support for overriding the VARange field of the MMFR2 CPU ID register. This permits the associated LVA feature to be overridden early enough for the boot code that creates the kernel mapping to take it into account. Given that LPA2 implies LVA, disabling the latter should disable the former as well. So override the ID_AA64MMFR0.TGran field of the current page size as well if it advertises support for 52-bit addressing. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 17 ++++++----- arch/arm64/include/asm/cpufeature.h | 4 +++ arch/arm64/kernel/cpufeature.c | 8 +++-- arch/arm64/kernel/image-vars.h | 2 ++ arch/arm64/kernel/pi/idreg-override.c | 31 ++++++++++++++++++++ 5 files changed, 53 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index beb53bbd8c19bb1c..0710c17800a49b75 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -576,18 +576,21 @@ alternative_endif .endm /* - * Offset ttbr1 to allow for 48-bit kernel VAs set with 52-bit PTRS_PER_PGD. + * If the kernel is built for 52-bit virtual addressing but the hardware only + * supports 48 bits, we cannot program the pgdir address into TTBR1 directly, + * but we have to add an offset so that the TTBR1 address corresponds with the + * pgdir entry that covers the lowest 48-bit addressable VA. + * * orr is used as it can cover the immediate value (and is idempotent). - * In future this may be nop'ed out when dealing with 52-bit kernel VAs. * ttbr: Value of ttbr to set, modified. 
*/ .macro offset_ttbr1, ttbr, tmp #ifdef CONFIG_ARM64_VA_BITS_52 - mrs_s \tmp, SYS_ID_AA64MMFR2_EL1 - and \tmp, \tmp, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT) - cbnz \tmp, .Lskipoffs_\@ - orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET -.Lskipoffs_\@ : + mrs \tmp, tcr_el1 + and \tmp, \tmp, #TCR_T1SZ_MASK + cmp \tmp, #TCR_T1SZ(VA_BITS_MIN) + orr \tmp, \ttbr, #TTBR1_BADDR_4852_OFFSET + csel \ttbr, \tmp, \ttbr, eq #endif .endm diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index a37f4956d5a7ef6e..7faf9a48339e7c8c 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -920,7 +920,9 @@ static inline unsigned int get_vmid_bits(u64 mmfr1) struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); +extern struct arm64_ftr_override id_aa64mmfr0_override; extern struct arm64_ftr_override id_aa64mmfr1_override; +extern struct arm64_ftr_override id_aa64mmfr2_override; extern struct arm64_ftr_override id_aa64pfr0_override; extern struct arm64_ftr_override id_aa64pfr1_override; extern struct arm64_ftr_override id_aa64zfr0_override; @@ -994,6 +996,8 @@ static inline bool cpu_has_lva(void) u64 mmfr2; mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1); + mmfr2 &= ~id_aa64mmfr2_override.mask; + mmfr2 |= id_aa64mmfr2_override.val; return cpuid_feature_extract_unsigned_field(mmfr2, ID_AA64MMFR2_EL1_VARange_SHIFT); } diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 660dedcae173841a..f8e3f37accdddc86 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -649,7 +649,9 @@ static const struct arm64_ftr_bits ftr_raz[] = { #define ARM64_FTR_REG(id, table) \ __ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override) +struct arm64_ftr_override id_aa64mmfr0_override; struct arm64_ftr_override id_aa64mmfr1_override; +struct arm64_ftr_override id_aa64mmfr2_override; struct arm64_ftr_override id_aa64pfr0_override; struct arm64_ftr_override id_aa64pfr1_override; struct arm64_ftr_override id_aa64zfr0_override; @@ -713,10 +715,12 @@ static const struct __ftr_reg_entry { &id_aa64isar2_override), /* Op1 = 0, CRn = 0, CRm = 7 */ - ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0), + ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0, + &id_aa64mmfr0_override), ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1, &id_aa64mmfr1_override), - ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2), + ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2, + &id_aa64mmfr2_override), /* Op1 = 0, CRn = 1, CRm = 2 */ ARM64_FTR_REG(SYS_ZCR_EL1, ftr_zcr), diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 61d1d283a69ba5d8..79a7e0e3edd1aa21 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -36,7 +36,9 @@ PROVIDE(__pi___memset = __pi_memset); PROVIDE(__pi_id_aa64isar1_override = id_aa64isar1_override); PROVIDE(__pi_id_aa64isar2_override = id_aa64isar2_override); +PROVIDE(__pi_id_aa64mmfr0_override = id_aa64mmfr0_override); PROVIDE(__pi_id_aa64mmfr1_override = id_aa64mmfr1_override); +PROVIDE(__pi_id_aa64mmfr2_override = id_aa64mmfr2_override); PROVIDE(__pi_id_aa64pfr0_override = id_aa64pfr0_override); PROVIDE(__pi_id_aa64pfr1_override = id_aa64pfr1_override); PROVIDE(__pi_id_aa64smfr0_override = id_aa64smfr0_override); diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c index 265b35b09dd488f1..c4ae5ffe5cb0c999 100644 --- a/arch/arm64/kernel/pi/idreg-override.c +++ 
b/arch/arm64/kernel/pi/idreg-override.c @@ -63,6 +63,35 @@ static const struct ftr_set_desc mmfr1 __prel64_initconst = { }, }; + +static bool __init mmfr2_varange_filter(u64 val) +{ + int __maybe_unused feat; + + if (val) + return false; + +#ifdef CONFIG_ARM64_LPA2 + feat = cpuid_feature_extract_signed_field(read_sysreg(id_aa64mmfr0_el1), + ID_AA64MMFR0_EL1_TGRAN_SHIFT); + if (feat >= ID_AA64MMFR0_EL1_TGRAN_LPA2) { + id_aa64mmfr0_override.val |= + (ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT; + id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_TGRAN_SHIFT; + } +#endif + return true; +} + +static const struct ftr_set_desc mmfr2 __prel64_initconst = { + .name = "id_aa64mmfr2", + .override = &id_aa64mmfr2_override, + .fields = { + FIELD("varange", ID_AA64MMFR2_EL1_VARange_SHIFT, mmfr2_varange_filter), + {} + }, +}; + static bool __init pfr0_sve_filter(u64 val) { /* @@ -161,6 +190,7 @@ static const union { prel64_t reg_prel; } regs[] __prel64_initconst = { { .reg = &mmfr1 }, + { .reg = &mmfr2 }, { .reg = &pfr0 }, { .reg = &pfr1 }, { .reg = &isar1 }, @@ -185,6 +215,7 @@ static const struct { { "arm64.nomte", "id_aa64pfr1.mte=0" }, { "nokaslr", "arm64_sw.nokaslr=1" }, { "rodata=off", "arm64_sw.rodataoff=1" }, + { "arm64.nolva", "id_aa64mmfr2.varange=0" }, }; static int __init parse_hexdigit(const char *p, u64 *v) From patchwork Tue Mar 7 14:05:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E7FE2C678D5 for ; Tue, 7 Mar 2023 14:48:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=y/kp2PsWY7wwIgTmeSrmvVOAP9U26Bb/LHA8owccCVI=; b=QMjeUzWEJi3zOh U5d1ul3ZGjj/v9DyzYElrARtFG0jlaOQlRw+RkrhsFKApIJDArgnB8O2csO/Z9GVJrmAMnlc2X8eC MoDS75nRh9laLvIIvn1JvDpIZE1JIQfOQ92pZz1OBEanck3tGCva4Yi3kxw16a5gnXj1rKrcdWO9H Izgz/y3GbA4VQ9n+KrbTvOX0NTC/7CU5lZxGU0SqWjPAH+EXN3ZNfSDk3tHgaHsL57eda0+VqWn7q GJGMP7bwTh4UgwCNcfYOvNawCXQEiBirxvozr6AwIGOUOUeKl7TqsXsPEZ0bjfn29OrmTXQLZMR2i vv1aULuLF7fEtEtS+3pw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYbB-000wMS-ES; Tue, 07 Mar 2023 14:47:30 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzQ-000ema-1i for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:29 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 44A8F61465; Tue, 7 Mar 2023 14:08:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B4F62C433D2; Tue, 7 Mar 2023 
14:08:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198106; bh=IooEg3M7Xp8n6fFz2ry7RA2wM5641yTnqrKiOOpwF8I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=d+uJPAJ5Tmrt2XLa4LwUWbI3E6c05+YTuH0eIiMVX0N76LVrsgRFqGc7FXnR10oMs KrdfJ7APfiu765sYJ4LdDNjLnkoxTNerwpZu77p5URLx4vaJZpGKTZfzUysycbmtWF YFmt+IqvwkPCfVZrXESjfWAg5tnWrYXUGGJ5L+O/bUNPpw7WMoemYWf7236SrOSjHe /8asHrreZFw8/sJN1tyUyZw+EQDfsW3LXVj9xsb3gqCrzH+4VMTaXcHoUpkAdByuCi tpaCfzsHVoPepxoXXsgua7Dl0osV7qUDuf8jC1z8ail8RlVF/1/RauRHJqXw9qEl8C BdpbYsg2QxtJQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 45/60] arm64: mm: Wire up TCR.DS bit to PTE shareability fields Date: Tue, 7 Mar 2023 15:05:07 +0100 Message-Id: <20230307140522.2311461-46-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2969; i=ardb@kernel.org; h=from:subject; bh=IooEg3M7Xp8n6fFz2ry7RA2wM5641yTnqrKiOOpwF8I=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpTR3+pcUlnIXNWM+Rr9fFSn3L/GkFn0W+hOmbXpzx rsYvo6OUhYGMQ4GWTFFFoHZf9/tPD1RqtZ5lizMHFYmkCEMXJwCMBHmk4wMB1/yXZwumVc4Y/fG xwL1k1MWLvHbli2158l3BV/3Hs5wJUaGkz77CopVEhey2flmv77/bYLexYescTpfbi49b1vYvDG IBwA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060828_201903_06D52FB9 X-CRM114-Status: GOOD ( 16.79 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When LPA2 is enabled, bits 8 and 9 of page and block descriptors become part of the output address instead of carrying shareability attributes for the region in question. So avoid setting these bits if TCR.DS == 1, which means LPA2 is enabled. Signed-off-by: Ard Biesheuvel --- arch/arm64/Kconfig | 4 ++++ arch/arm64/include/asm/pgtable-prot.h | 18 ++++++++++++++++-- arch/arm64/mm/mmap.c | 4 ++++ 3 files changed, 24 insertions(+), 2 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 1023e896d46b8969..d287dad29198c843 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1287,6 +1287,10 @@ config ARM64_PA_BITS default 48 if ARM64_PA_BITS_48 default 52 if ARM64_PA_BITS_52 +config ARM64_LPA2 + def_bool y + depends on ARM64_PA_BITS_52 && !ARM64_64K_PAGES + choice prompt "Endianness" default CPU_LITTLE_ENDIAN diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index 9b165117a454595a..269584d5a2c017fc 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -40,6 +40,20 @@ extern bool arm64_use_ng_mappings; #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0) #define PMD_MAYBE_NG (arm64_use_ng_mappings ? 
PMD_SECT_NG : 0) +#ifndef CONFIG_ARM64_LPA2 +#define lpa2_is_enabled() false +#define PTE_MAYBE_SHARED PTE_SHARED +#define PMD_MAYBE_SHARED PMD_SECT_S +#else +static inline bool __pure lpa2_is_enabled(void) +{ + return read_tcr() & TCR_DS; +} + +#define PTE_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PTE_SHARED) +#define PMD_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PMD_SECT_S) +#endif + /* * If we have userspace only BTI we don't want to mark kernel pages * guarded even if the system does support BTI. @@ -50,8 +64,8 @@ extern bool arm64_use_ng_mappings; #define PTE_MAYBE_GP 0 #endif -#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG) -#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG) +#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_MAYBE_NG | PTE_MAYBE_SHARED | PTE_AF) +#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_MAYBE_NG | PMD_MAYBE_SHARED | PMD_SECT_AF) #define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE)) #define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE)) diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c index 8f5b7ce857ed4a8f..adcf547f74eb8e60 100644 --- a/arch/arm64/mm/mmap.c +++ b/arch/arm64/mm/mmap.c @@ -73,6 +73,10 @@ static int __init adjust_protection_map(void) protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY; } + if (lpa2_is_enabled()) + for (int i = 0; i < ARRAY_SIZE(protection_map); i++) + pgprot_val(protection_map[i]) &= ~PTE_SHARED; + return 0; } arch_initcall(adjust_protection_map); From patchwork Tue Mar 7 14:05:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163863 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 538EBC678D5 for ; Tue, 7 Mar 2023 14:49:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=x7vhkWSuJvafDGyEgOqAJLfKQtDbBmxmniz8D7c7/2I=; b=oG0eN/AuXcHjQE WPBCFo8jBj6xhlBLS4J8YjPofu+VYgjG4lCoGn1YuGuPZ8CO8lzLmyERMzqlzz2LH0wuSpBEUtjR6 79neLR+pezuK0PFVwunsaMSJJfJuwd5aiOlexlvv+W0zUUfQoEgV+t6uV/PDqA1abaAOTa9Dnr2pm I6cLAtKthqWxj8cLVGodIy+nDzj8DeoM7Oa/lOdQcj9f/bV52Y8H0qUYMuFUrjEPsUHsKEyVibLa5 aTPDki2wL5mXY14ySvQL8ugRpiR6WjfYyOZR6rjbEkoiNV/6dLT6/0KqIYSyXD7I5LfUmWqILIlO3 ZO8xi9L1aq5cBYgbpA/Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYbo-000wgA-UJ; Tue, 07 Mar 2023 14:48:09 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzS-000eo8-OL for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:32 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5874461468; Tue, 7 Mar 2023 14:08:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 227C8C4339C; Tue, 7 Mar 2023 14:08:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198109; bh=fi/Mt46HTgtFSr//58GfJTVreCP10GiauildLa+dfPw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=D74pKz2SK0GhwT6OXSOFZBxCrCxnbr/Zik+TUtE7ZX7XEblwOUjevezzdHkkjMLJf YoW8jHmHJKFsfrAsFgMSo5TVDp5Kp8EgwpfnsEQ9VkNxYxct6u0yaTl1QqLrjDYdGM z+O4QmTkvctELq707bPzwvKD4tzcXF0/sCMregCkknVelfAtEj2h8MgCuSmrRQyQf/ 2VpbSXVHY9k/AWNYSoEhNEBPpY+XJZvzbeWc4KHbCkJFf2eT0wtz40oUOL6Kh9O6PV iUfyVVbmj6CiQG29BJQUuPNe0JWGlpnZapQJXpzoh4pHlzjERJUDzn7DQwuCLnAXqg vil8E9paMZopg== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 46/60] arm64: mm: Add LPA2 support to phys<->pte conversion routines Date: Tue, 7 Mar 2023 15:05:08 +0100 Message-Id: <20230307140522.2311461-47-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=4658; i=ardb@kernel.org; h=from:subject; bh=fi/Mt46HTgtFSr//58GfJTVreCP10GiauildLa+dfPw=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpXzrUrues+869ZyDDzNHNyWnXli/1mdapIehgYVHf NKqaxYdpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCK9YQz/zDQfFb4xZF5+W0tn rpjCtvPzb3zSXLXLcpcKYzjr3oM3njAyTHcq+Ga8L/3nL9nUUxW3p7KlLUjqUmHS/peasUjp90U fdgA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060830_903917_2132864C X-CRM114-Status: GOOD ( 16.74 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for enabling LPA2 support, introduce the mask values for converting between physical addresses and their representations in a page table descriptor. While at it, move the pte_to_phys asm macro into its only user, so that we can freely modify it to use its input value register as a temp register. For LPA2, the PTE_ADDR_MASK contains two non-adjacent sequences of zero bits, which means it no longer fits into the immediate field of an ordinary ALU instruction. So let's redefine it to include the bits in between as well, and only use it when converting from physical address to PTE representation, where the distinction does not matter. Also update the name accordingly to emphasize this. 
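As a rough illustration of the layout described above (not part of the patch), the 4k/LPA2 arrangement keeps PA bits [49:12] in place in the descriptor, folds PA bits [51:50] into descriptor bits [9:8] (a shift of 42), and widens the phys-to-PTE mask to cover the bits in between. The mask names below mirror the patch's pgtable-hwdef.h values, but GENMASK_ULL, the helpers and the example address are redefined locally as a standalone user-space sketch, not kernel code:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GENMASK_ULL(h, l)  (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

    #define PAGE_SHIFT            12
    #define PTE_ADDR_LOW          GENMASK_ULL(49, PAGE_SHIFT)  /* PA[49:12] */
    #define PTE_ADDR_HIGH         GENMASK_ULL(9, 8)            /* PA[51:50] */
    #define PTE_ADDR_HIGH_SHIFT   42
    #define PHYS_TO_PTE_ADDR_MASK GENMASK_ULL(49, 8)

    static uint64_t phys_to_pte(uint64_t phys)
    {
            /* fold PA[51:50] down into descriptor bits [9:8], keep PA[49:12] in place */
            return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK;
    }

    static uint64_t pte_to_phys(uint64_t pte)
    {
            /* reassemble the 52-bit physical address from the two fields */
            return (pte & PTE_ADDR_LOW) |
                   ((pte & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
    }

    int main(void)
    {
            /* an arbitrary page-aligned 52-bit physical address for the demo */
            uint64_t pa = 0x3ffff5000ULL | (3ULL << 50);

            assert(pte_to_phys(phys_to_pte(pa)) == pa);
            printf("pa=%#llx pte=%#llx\n",
                   (unsigned long long)pa,
                   (unsigned long long)phys_to_pte(pa));
            return 0;
    }

The round trip holds in this toy model; the real __pte_to_phys() additionally strips the shareability bits first, since bits [9:8] only carry address bits when TCR.DS is set, as covered by the preceding patch.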
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 16 ++-------------- arch/arm64/include/asm/pgtable-hwdef.h | 10 +++++++--- arch/arm64/include/asm/pgtable.h | 5 +++-- arch/arm64/mm/proc.S | 8 ++++++++ 4 files changed, 20 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 0710c17800a49b75..55e8731844cf7eb7 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -612,25 +612,13 @@ alternative_endif .macro phys_to_pte, pte, phys #ifdef CONFIG_ARM64_PA_BITS_52 - /* - * We assume \phys is 64K aligned and this is guaranteed by only - * supporting this configuration with 64K pages. - */ - orr \pte, \phys, \phys, lsr #36 - and \pte, \pte, #PTE_ADDR_MASK + orr \pte, \phys, \phys, lsr #PTE_ADDR_HIGH_SHIFT + and \pte, \pte, #PHYS_TO_PTE_ADDR_MASK #else mov \pte, \phys #endif .endm - .macro pte_to_phys, phys, pte - and \phys, \pte, #PTE_ADDR_MASK -#ifdef CONFIG_ARM64_PA_BITS_52 - orr \phys, \phys, \phys, lsl #PTE_ADDR_HIGH_SHIFT - and \phys, \phys, GENMASK_ULL(PHYS_MASK_SHIFT - 1, PAGE_SHIFT) -#endif - .endm - /* * tcr_clear_errata_bits - Clear TCR bits that trigger an errata on this CPU. */ diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index c4ad7fbb12c5c07a..b91fe4781b066d54 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -155,13 +155,17 @@ #define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */ #define PTE_UXN (_AT(pteval_t, 1) << 54) /* User XN */ -#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) +#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) #ifdef CONFIG_ARM64_PA_BITS_52 +#ifdef CONFIG_ARM64_64K_PAGES #define PTE_ADDR_HIGH (_AT(pteval_t, 0xf) << 12) -#define PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) #define PTE_ADDR_HIGH_SHIFT 36 +#define PHYS_TO_PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH) #else -#define PTE_ADDR_MASK PTE_ADDR_LOW +#define PTE_ADDR_HIGH (_AT(pteval_t, 0x3) << 8) +#define PTE_ADDR_HIGH_SHIFT 42 +#define PHYS_TO_PTE_ADDR_MASK GENMASK_ULL(49, 8) +#endif #endif /* diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 2259898e8c3d990a..c8666d5c31fd1e52 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -80,15 +80,16 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; #ifdef CONFIG_ARM64_PA_BITS_52 static inline phys_addr_t __pte_to_phys(pte_t pte) { + pte_val(pte) &= ~PTE_MAYBE_SHARED; return (pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT); } static inline pteval_t __phys_to_pte_val(phys_addr_t phys) { - return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK; + return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK; } #else -#define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_MASK) +#define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_LOW) #define __phys_to_pte_val(phys) (phys) #endif diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index a5dc009b7dd5c141..74b846d77b88e843 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -204,6 +204,14 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) .pushsection ".idmap.text", "awx" + .macro pte_to_phys, phys, pte + and \phys, \pte, #PTE_ADDR_LOW +#ifdef CONFIG_ARM64_PA_BITS_52 + and \pte, \pte, #PTE_ADDR_HIGH + orr \phys, \phys, \pte, lsl 
#PTE_ADDR_HIGH_SHIFT +#endif + .endm + .macro kpti_mk_tbl_ng, type, num_entries add end_\type\()p, cur_\type\()p, #\num_entries * 8 .Ldo_\type: From patchwork Tue Mar 7 14:05:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163865 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CAA09C678D4 for ; Tue, 7 Mar 2023 14:51:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=qC6ZHJdF6q7JVZTAEfwq2YfH595yxnJPLiijKT3BO5U=; b=LKc5CmS5A/wNze MHLSIEs6rlwdBwyYEKjyrBFxZkZidVJusxboI+wS+hXEAS1XFFtvQ4+TBHYRCgDUHZbKh0hFEqxM2 4geoQm/RAOR4Sba29jPw+NCY798O0yLiDVBauXW5khFkhjOg8qn56NhhL9WXzD+tcWcd/OqQCT33o bIFueir515NTE1fccOF6sIFHymAP3chTeRXnu9vBziuoJv2ZsPPfzhXsFwUg/4IGoC6LHhjsb5aTR UpYtJXdoeZxBgXYbuHXtxfH+5xj5ov7cGa2/tD+XNmoPvlOndDcfU2BzwlBTnmPfuGh7QakU9oPAK ejKmdt6WjKa0px2wvinQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYdO-000xQT-2u; Tue, 07 Mar 2023 14:49:46 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzW-000epj-8z for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:36 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id B7DE3B818E9; Tue, 7 Mar 2023 14:08:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 87415C4339E; Tue, 7 Mar 2023 14:08:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198111; bh=f14g6En1j1H8/46XCGRBb0cJRRltFQikhUYuv4JwJX0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=N7NvPlCJVfkdnM7hIujijRqq8necan7cqiSKanTUuwHNgfSn56P8MSrXXN62bzK30 yP/lIX90qRW72tlCLF6vCMsRhvOWoSnZiAWgc+54+dJZyfINVswgh9fIfq1R5gtA8j fE6LKADpyF3fkc7qcCBUj4azRwnw3xx+6IB9XJJKSfEaj7YB5KU/UPUigmNxPrrcLd UZtqncpzet1umF3S9UjyK7MrQtxZ34sSFCfpZlzqZ6nonEgVDyxS0A4DHTvnrtx7+G tu5mZyAwgbY4D7VjJ+RcqFZ8LLdTPoWw6cLzjiUB3CzsCAfJlGPWOB+hK4S4kekLrH EuVQSZ0qO20FA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 47/60] arm64: mm: Add definitions to support 5 levels of paging Date: Tue, 7 Mar 2023 15:05:09 +0100 Message-Id: <20230307140522.2311461-48-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=10817; i=ardb@kernel.org; 
h=from:subject; bh=f14g6En1j1H8/46XCGRBb0cJRRltFQikhUYuv4JwJX0=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdper8epZzvExqYtqBjLfNXxrGOf29eOxxYI5F3Zkua 0PB5EUdpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCL2xxj+8KhISm1y8XLzX5S2 7vvdmp51mrbbgxj7zA78S/p024GZjeGfytG7n/iOq4T5f7bUb9+402/zdkPf0Mg7tn1PH98MXZT ACAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060834_654967_616DAC46 X-CRM114-Status: GOOD ( 25.12 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add the required types and descriptor accessors to support 5 levels of paging in the common code. This is one of the prerequisites for supporting 52-bit virtual addressing with 4k pages. Note that this does not cover the code that handles kernel mappings or the fixmap. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/pgalloc.h | 41 ++++++++++ arch/arm64/include/asm/pgtable-hwdef.h | 22 +++++- arch/arm64/include/asm/pgtable-types.h | 6 ++ arch/arm64/include/asm/pgtable.h | 82 +++++++++++++++++++- arch/arm64/mm/mmu.c | 31 +++++++- arch/arm64/mm/pgd.c | 15 +++- 6 files changed, 188 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h index 237224484d0f6f11..cae8c648f4628709 100644 --- a/arch/arm64/include/asm/pgalloc.h +++ b/arch/arm64/include/asm/pgalloc.h @@ -60,6 +60,47 @@ static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot) } #endif /* CONFIG_PGTABLE_LEVELS > 3 */ +#if CONFIG_PGTABLE_LEVELS > 4 + +static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot) +{ + if (pgtable_l5_enabled()) + set_pgd(pgdp, __pgd(__phys_to_pgd_val(p4dp) | prot)); +} + +static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp) +{ + pgdval_t pgdval = PGD_TYPE_TABLE; + + pgdval |= (mm == &init_mm) ? PGD_TABLE_UXN : PGD_TABLE_PXN; + __pgd_populate(pgdp, __pa(p4dp), pgdval); +} + +static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr) +{ + gfp_t gfp = GFP_PGTABLE_USER; + + if (mm == &init_mm) + gfp = GFP_PGTABLE_KERNEL; + return (p4d_t *)get_zeroed_page(gfp); +} + +static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) +{ + if (!pgtable_l5_enabled()) + return; + BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); + free_page((unsigned long)p4d); +} + +#define __p4d_free_tlb(tlb, p4d, addr) p4d_free((tlb)->mm, p4d) +#else +static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot) +{ + BUILD_BUG(); +} +#endif /* CONFIG_PGTABLE_LEVELS > 4 */ + extern pgd_t *pgd_alloc(struct mm_struct *mm); extern void pgd_free(struct mm_struct *mm, pgd_t *pgdp); diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index b91fe4781b066d54..b364b02e696b8172 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -26,10 +26,10 @@ #define ARM64_HW_PGTABLE_LEVELS(va_bits) (((va_bits) - 4) / (PAGE_SHIFT - 3)) /* - * Size mapped by an entry at level n ( 0 <= n <= 3) + * Size mapped by an entry at level n ( -1 <= n <= 3) * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits * in the final page. 
The maximum number of translation levels supported by - * the architecture is 4. Hence, starting at level n, we have further + * the architecture is 5. Hence, starting at level n, we have further * ((4 - n) - 1) levels of translation excluding the offset within the page. * So, the total number of bits mapped by an entry at level n is : * @@ -62,9 +62,16 @@ #define PTRS_PER_PUD (1 << (PAGE_SHIFT - 3)) #endif +#if CONFIG_PGTABLE_LEVELS > 4 +#define P4D_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(0) +#define P4D_SIZE (_AC(1, UL) << P4D_SHIFT) +#define P4D_MASK (~(P4D_SIZE-1)) +#define PTRS_PER_P4D (1 << (PAGE_SHIFT - 3)) +#endif + /* * PGDIR_SHIFT determines the size a top-level page table entry can map - * (depending on the configuration, this level can be 0, 1 or 2). + * (depending on the configuration, this level can be -1, 0, 1 or 2). */ #define PGDIR_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - CONFIG_PGTABLE_LEVELS) #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) @@ -87,6 +94,15 @@ /* * Hardware page table definitions. * + * Level -1 descriptor (PGD). + */ +#define PGD_TYPE_TABLE (_AT(pgdval_t, 3) << 0) +#define PGD_TABLE_BIT (_AT(pgdval_t, 1) << 1) +#define PGD_TYPE_MASK (_AT(pgdval_t, 3) << 0) +#define PGD_TABLE_PXN (_AT(pgdval_t, 1) << 59) +#define PGD_TABLE_UXN (_AT(pgdval_t, 1) << 60) + +/* * Level 0 descriptor (P4D). */ #define P4D_TYPE_TABLE (_AT(p4dval_t, 3) << 0) diff --git a/arch/arm64/include/asm/pgtable-types.h b/arch/arm64/include/asm/pgtable-types.h index b8f158ae25273679..6d6d4065b0cb4ed1 100644 --- a/arch/arm64/include/asm/pgtable-types.h +++ b/arch/arm64/include/asm/pgtable-types.h @@ -36,6 +36,12 @@ typedef struct { pudval_t pud; } pud_t; #define __pud(x) ((pud_t) { (x) } ) #endif +#if CONFIG_PGTABLE_LEVELS > 4 +typedef struct { p4dval_t p4d; } p4d_t; +#define p4d_val(x) ((x).p4d) +#define __p4d(x) ((p4d_t) { (x) } ) +#endif + typedef struct { pgdval_t pgd; } pgd_t; #define pgd_val(x) ((x).pgd) #define __pgd(x) ((pgd_t) { (x) } ) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index c8666d5c31fd1e52..c667073e3f56755d 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -800,7 +800,6 @@ static inline pud_t *p4d_pgtable(p4d_t p4d) #else #define p4d_page_paddr(p4d) ({ BUILD_BUG(); 0;}) -#define pgd_page_paddr(pgd) ({ BUILD_BUG(); 0;}) /* Match pud_offset folding in */ #define pud_set_fixmap(addr) NULL @@ -811,6 +810,87 @@ static inline pud_t *p4d_pgtable(p4d_t p4d) #endif /* CONFIG_PGTABLE_LEVELS > 3 */ +#if CONFIG_PGTABLE_LEVELS > 4 + +static __always_inline bool pgtable_l5_enabled(void) +{ + if (!alternative_has_feature_likely(ARM64_ALWAYS_BOOT)) + return vabits_actual == VA_BITS; + return alternative_has_feature_unlikely(ARM64_HAS_VA52); +} + +static inline bool mm_p4d_folded(const struct mm_struct *mm) +{ + return !pgtable_l5_enabled(); +} +#define mm_p4d_folded mm_p4d_folded + +#define p4d_ERROR(e) \ + pr_err("%s:%d: bad p4d %016llx.\n", __FILE__, __LINE__, p4d_val(e)) + +#define pgd_none(pgd) (pgtable_l5_enabled() && !pgd_val(pgd)) +#define pgd_bad(pgd) (pgtable_l5_enabled() && !(pgd_val(pgd) & 2)) +#define pgd_present(pgd) (!pgd_none(pgd)) + +static inline void set_pgd(pgd_t *pgdp, pgd_t pgd) +{ + if (in_swapper_pgdir(pgdp)) { + set_swapper_pgd(pgdp, __pgd(pgd_val(pgd))); + return; + } + + WRITE_ONCE(*pgdp, pgd); + dsb(ishst); + isb(); +} + +static inline void pgd_clear(pgd_t *pgdp) +{ + if (pgtable_l5_enabled()) + set_pgd(pgdp, __pgd(0)); +} + +static inline phys_addr_t pgd_page_paddr(pgd_t pgd) +{ + return 
__pgd_to_phys(pgd); +} + +#define p4d_index(addr) (((addr) >> P4D_SHIFT) & (PTRS_PER_P4D - 1)) + +static inline p4d_t *pgd_to_folded_p4d(pgd_t *pgdp, unsigned long addr) +{ + return (p4d_t *)PTR_ALIGN_DOWN(pgdp, PAGE_SIZE) + p4d_index(addr); +} + +static inline phys_addr_t p4d_offset_phys(pgd_t *pgdp, unsigned long addr) +{ + BUG_ON(!pgtable_l5_enabled()); + + return pgd_page_paddr(READ_ONCE(*pgdp)) + p4d_index(addr) * sizeof(p4d_t); +} + +static inline +p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long addr) +{ + if (!pgtable_l5_enabled()) + return pgd_to_folded_p4d(pgdp, addr); + return (p4d_t *)__va(pgd_page_paddr(pgd)) + p4d_index(addr); +} +#define p4d_offset_lockless p4d_offset_lockless + +static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long addr) +{ + return p4d_offset_lockless(pgdp, READ_ONCE(*pgdp), addr); +} + +#define pgd_page(pgd) pfn_to_page(__phys_to_pfn(__pgd_to_phys(pgd))) + +#else + +static inline bool pgtable_l5_enabled(void) { return false; } + +#endif /* CONFIG_PGTABLE_LEVELS > 4 */ + #define pgd_ERROR(e) \ pr_err("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e)) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 914745697fb8b30c..a16bcfba2e500600 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1003,7 +1003,7 @@ static void free_empty_pud_table(p4d_t *p4dp, unsigned long addr, if (CONFIG_PGTABLE_LEVELS <= 3) return; - if (!pgtable_range_aligned(start, end, floor, ceiling, PGDIR_MASK)) + if (!pgtable_range_aligned(start, end, floor, ceiling, P4D_MASK)) return; /* @@ -1026,8 +1026,8 @@ static void free_empty_p4d_table(pgd_t *pgdp, unsigned long addr, unsigned long end, unsigned long floor, unsigned long ceiling) { - unsigned long next; p4d_t *p4dp, p4d; + unsigned long i, next, start = addr; do { next = p4d_addr_end(addr, end); @@ -1039,6 +1039,27 @@ static void free_empty_p4d_table(pgd_t *pgdp, unsigned long addr, WARN_ON(!p4d_present(p4d)); free_empty_pud_table(p4dp, addr, next, floor, ceiling); } while (addr = next, addr < end); + + if (!pgtable_l5_enabled()) + return; + + if (!pgtable_range_aligned(start, end, floor, ceiling, PGDIR_MASK)) + return; + + /* + * Check whether we can free the p4d page if the rest of the + * entries are empty. Overlap with other regions have been + * handled by the floor/ceiling check. 
+ */ + p4dp = p4d_offset(pgdp, 0UL); + for (i = 0; i < PTRS_PER_P4D; i++) { + if (!p4d_none(READ_ONCE(p4dp[i]))) + return; + } + + pgd_clear(pgdp); + __flush_tlb_kernel_pgtable(start); + free_hotplug_pgtable_page(virt_to_page(p4dp)); } static void free_empty_tables(unsigned long addr, unsigned long end, @@ -1283,6 +1304,12 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) return 1; } +#ifndef __PAGETABLE_P4D_FOLDED +void p4d_clear_huge(p4d_t *p4dp) +{ +} +#endif + int pud_clear_huge(pud_t *pudp) { if (!pud_sect(READ_ONCE(*pudp))) diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c index 4a64089e5771c1e2..3c4f8a279d2bc76a 100644 --- a/arch/arm64/mm/pgd.c +++ b/arch/arm64/mm/pgd.c @@ -17,11 +17,20 @@ static struct kmem_cache *pgd_cache __ro_after_init; +static bool pgdir_is_page_size(void) +{ + if (PGD_SIZE == PAGE_SIZE) + return true; + if (CONFIG_PGTABLE_LEVELS == 5) + return !pgtable_l5_enabled(); + return false; +} + pgd_t *pgd_alloc(struct mm_struct *mm) { gfp_t gfp = GFP_PGTABLE_USER; - if (PGD_SIZE == PAGE_SIZE) + if (pgdir_is_page_size()) return (pgd_t *)__get_free_page(gfp); else return kmem_cache_alloc(pgd_cache, gfp); @@ -29,7 +38,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm) void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - if (PGD_SIZE == PAGE_SIZE) + if (pgdir_is_page_size()) free_page((unsigned long)pgd); else kmem_cache_free(pgd_cache, pgd); @@ -37,7 +46,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd) void __init pgtable_cache_init(void) { - if (PGD_SIZE == PAGE_SIZE) + if (pgdir_is_page_size()) return; #ifdef CONFIG_ARM64_PA_BITS_52 From patchwork Tue Mar 7 14:05:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163864 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D6F25C678D4 for ; Tue, 7 Mar 2023 14:50:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=0iylB2iuspXbl5mhwXtd7dq8MoXLB4vB+GWDEtgXJ5o=; b=aSA6fHpwdb80fa /HyUYRDHTlL4GBh56AHISTQxZU+DPg4gOG7HOcFWepIk+S6f9bLA9y64yTMFJGK+d45UuWh0HzDso UX/0qKd8isgo3qUOoQ9pKU1JDwMi2JFnEZibmrW9bpcCKfpHQpkb0tJY9tCPbDlw7FMHbNzRcHWCG r7Im9kz3KWgUDMxGszdqVFJcSBRSkprFayz9rvfj+6F2HWFpIZgKVRK6TFT+fBv9kRNUp8AdJN1X9 BLUCpvp0HSXVvs4qBmXxJg5dAgvpwsdNSh90C/9PmI9zOVwFSPxugLSPiBTIVsoO2M8LXT+F/teVg jhLCjAFa4gLDOcbhRTmA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYce-000x3N-0Y; Tue, 07 Mar 2023 14:49:00 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzX-000eqP-18 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:37 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7C37F61457; Tue, 7 Mar 2023 14:08:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E972AC4339C; Tue, 7 Mar 2023 14:08:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198113; bh=8hH3Uc2UhkT3/0ty5o1TlACCwFDCvfgL5+D5fFfUrRw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SgjjhnR4pTHZpaj/6ZDOwtohlCgzHpgWtCMrmDY487+DK61hEpB0pKS9D6MrpiI2x /FBOCQF9lVgZno6QOJ5wK9dEpT9v7eQW3Z+eNATCmvGkL616zfS6Kfc8nR8mnSPTpM +Gf1v9yTp3b3nzm7775uBCgxI2YHDK2BGYJdCMU0rtt7NekHAQJvjWk4Um0Lzc7XgE JuSYku7jBB5t/EsH4yrwtMFBSDWqUFDaD5uV8iJLTzbXnkDQMzHElLP3hu2s3VXEhS UQSGdMZV1Dm7ZNRb6s+/HwttDRbNXh1tdGqbU4y2RZ/Kockdolw80TdlvhY77A6Apf eCbH4a2sjs+Ow== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 48/60] arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion Date: Tue, 7 Mar 2023 15:05:10 +0100 Message-Id: <20230307140522.2311461-49-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6585; i=ardb@kernel.org; h=from:subject; bh=8hH3Uc2UhkT3/0ty5o1TlACCwFDCvfgL5+D5fFfUrRw=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpfZPfbnckdXbp75p25549WuFMbs7w6XrG4NXGQSlS C9U6GLqKGVhEONgkBVTZBGY/ffdztMTpWqdZ8nCzGFlAhnCwMUpABP5IMPwh2vHfYXND/ckrzpt pa0x5VK6m7vCKaULv5Ou8O/gucx2s4CRYe3u2luObIs+Hg1KXRVnxnieS3Wi495rNzM4p/18N72 bmRcA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060835_219352_D6D64951 X-CRM114-Status: GOOD ( 21.90 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add support for 5 level paging in the G-to-nG routine that creates its own temporary page tables to traverse the swapper page tables. Also add support for running the 5 level configuration with the top level folded at runtime, to support CPUs that do not implement the LPA2 extension. While at it, wire up the level skipping logic so it will also trigger on 4 level configurations with LPA2 enabled at build time but not active at runtime, as we'll fall back to 3 level paging in that case. 
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/cpufeature.c | 9 ++- arch/arm64/mm/proc.S | 70 +++++++++++++++++--- 2 files changed, 66 insertions(+), 13 deletions(-) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index f8e3f37accdddc86..b0fee11886e54ca2 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1729,6 +1729,9 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) pgd_t *kpti_ng_temp_pgd; u64 alloc = 0; + if (levels == 5 && !pgtable_l5_enabled()) + levels = 4; + if (__this_cpu_read(this_cpu_vector) == vectors) { const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI); @@ -1756,9 +1759,9 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) // // The physical pages are laid out as follows: // - // +--------+-/-------+-/------ +-\\--------+ - // : PTE[] : | PMD[] : | PUD[] : || PGD[] : - // +--------+-\-------+-\------ +-//--------+ + // +--------+-/-------+-/------ +-/------ +-\\\--------+ + // : PTE[] : | PMD[] : | PUD[] : | P4D[] : ||| PGD[] : + // +--------+-\-------+-\------ +-\------ +-///--------+ // ^ // The first page is mapped into this hierarchy at a PMD_SHIFT // aligned virtual address, so that we can manipulate the PTE diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 74b846d77b88e843..f425cfc3e4dad188 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -215,16 +215,15 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) .macro kpti_mk_tbl_ng, type, num_entries add end_\type\()p, cur_\type\()p, #\num_entries * 8 .Ldo_\type: - ldr \type, [cur_\type\()p] // Load the entry + ldr \type, [cur_\type\()p], #8 // Load the entry and advance tbz \type, #0, .Lnext_\type // Skip invalid and tbnz \type, #11, .Lnext_\type // non-global entries orr \type, \type, #PTE_NG // Same bit for blocks and pages - str \type, [cur_\type\()p] // Update the entry + str \type, [cur_\type\()p, #-8] // Update the entry .ifnc \type, pte tbnz \type, #1, .Lderef_\type .endif .Lnext_\type: - add cur_\type\()p, cur_\type\()p, #8 cmp cur_\type\()p, end_\type\()p b.ne .Ldo_\type .endm @@ -234,18 +233,18 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1) * fixmap slot associated with the current level. */ .macro kpti_map_pgtbl, type, level - str xzr, [temp_pte, #8 * (\level + 1)] // break before make + str xzr, [temp_pte, #8 * (\level + 2)] // break before make dsb nshst - add pte, temp_pte, #PAGE_SIZE * (\level + 1) + add pte, temp_pte, #PAGE_SIZE * (\level + 2) lsr pte, pte, #12 tlbi vaae1, pte dsb nsh isb phys_to_pte pte, cur_\type\()p - add cur_\type\()p, temp_pte, #PAGE_SIZE * (\level + 1) + add cur_\type\()p, temp_pte, #PAGE_SIZE * (\level + 2) orr pte, pte, pte_flags - str pte, [temp_pte, #8 * (\level + 1)] + str pte, [temp_pte, #8 * (\level + 2)] dsb nshst .endm @@ -278,6 +277,8 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) end_ptep .req x15 pte .req x16 valid .req x17 + cur_p4dp .req x19 + end_p4dp .req x20 mov x5, x3 // preserve temp_pte arg mrs swapper_ttb, ttbr1_el1 @@ -285,6 +286,12 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) cbnz cpu, __idmap_kpti_secondary +#if CONFIG_PGTABLE_LEVELS > 4 + stp x29, x30, [sp, #-32]! + mov x29, sp + stp x19, x20, [sp, #16] +#endif + /* We're the boot CPU. 
Wait for the others to catch up */ sevl 1: wfe @@ -302,9 +309,32 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) mov pte_flags, #KPTI_NG_PTE_FLAGS /* Everybody is enjoying the idmap, so we can rewrite swapper. */ + +#ifdef CONFIG_ARM64_LPA2 + /* + * If LPA2 support is configured, but 52-bit virtual addressing is not + * enabled at runtime, we will fall back to one level of paging less, + * and so we have to walk swapper_pg_dir as if we dereferenced its + * address from a PGD level entry, and terminate the PGD level loop + * right after. + */ + adrp pgd, swapper_pg_dir // walk &swapper_pg_dir at the next level + mov cur_pgdp, end_pgdp // must be equal to terminate the PGD loop +alternative_if_not ARM64_HAS_VA52 + b .Lderef_pgd // skip to the next level +alternative_else_nop_endif + /* + * LPA2 based 52-bit virtual addressing requires 52-bit physical + * addressing to be enabled as well. In this case, the shareability + * bits are repurposed as physical address bits, and should not be + * set in pte_flags. + */ + bic pte_flags, pte_flags, #PTE_SHARED +#endif + /* PGD */ adrp cur_pgdp, swapper_pg_dir - kpti_map_pgtbl pgd, 0 + kpti_map_pgtbl pgd, -1 kpti_mk_tbl_ng pgd, PTRS_PER_PGD /* Ensure all the updated entries are visible to secondary CPUs */ @@ -317,16 +347,33 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) /* Set the flag to zero to indicate that we're all done */ str wzr, [flag_ptr] +#if CONFIG_PGTABLE_LEVELS > 4 + ldp x19, x20, [sp, #16] + ldp x29, x30, [sp], #32 +#endif ret .Lderef_pgd: + /* P4D */ + .if CONFIG_PGTABLE_LEVELS > 4 + p4d .req x30 + pte_to_phys cur_p4dp, pgd + kpti_map_pgtbl p4d, 0 + kpti_mk_tbl_ng p4d, PTRS_PER_P4D + b .Lnext_pgd + .else /* CONFIG_PGTABLE_LEVELS <= 4 */ + p4d .req pgd + .set .Lnext_p4d, .Lnext_pgd + .endif + +.Lderef_p4d: /* PUD */ .if CONFIG_PGTABLE_LEVELS > 3 pud .req x10 - pte_to_phys cur_pudp, pgd + pte_to_phys cur_pudp, p4d kpti_map_pgtbl pud, 1 kpti_mk_tbl_ng pud, PTRS_PER_PUD - b .Lnext_pgd + b .Lnext_p4d .else /* CONFIG_PGTABLE_LEVELS <= 3 */ pud .req pgd .set .Lnext_pud, .Lnext_pgd @@ -370,6 +417,9 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings) .unreq end_ptep .unreq pte .unreq valid + .unreq cur_p4dp + .unreq end_p4dp + .unreq p4d /* Secondary CPUs end up here */ __idmap_kpti_secondary: From patchwork Tue Mar 7 14:05:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163866 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 88873C678D5 for ; Tue, 7 Mar 2023 14:51:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=8cUn44bHRQ49aZqzDiPd2wy8Zm26737qoF38Wlv/+lw=; b=Kf0J3RFPPk1F90 /0CgJrv5KPNxnFTj2ro5yG0FQFVh/d/DYXYOMHP7r/EYNTBAtr4nH8bwXIRGEJGowyeOPShuGxVku 
Ms12uaYooPsH9DlWu7xG4bFiZbsdET5TR7NKBPn8tXfojtMjBbN6BYUAFWybLuDA7k75OSWEoL0jb ums23+LZIHEswi9hU3Y3WVLII9RByzcn9JAP0atWKxezbmJoihNHVt4WbN1YjaD/4dRpfQrAeofZq HqcEWvMdfPSyjtuyB7E6SFJK8W7ISk3Ni4OhfGCjc7s9JVni9NEcivPrUpTQVrzHM9DhB58IXnhmH 3OXYfvDh99WIhwymr/8Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYdv-000xf8-WC; Tue, 07 Mar 2023 14:50:21 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzZ-000erN-An for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:40 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7EA2861460; Tue, 7 Mar 2023 14:08:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 584A1C433A7; Tue, 7 Mar 2023 14:08:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198116; bh=uqt4IdnYXpiB6CSkE9eZAiNwpiziaGeY81Q1yCJ946w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BC3Y5e2MYeAU6DTvADsWLXprN6x9wRbh2ZEgEu6ytUqI0LdRxXBxWQOKb76aG+kVF mO8IqZNU0JuotC7I6FA22ca3exI2hdnnWwz2SDYs6+TSO/iKKGX9fe3myr6yerQmof ojiWi3+gKbyG8ZUwjWZsc7aspR2750I7eCs5R3YtfCXzwGlSirQijFiszh4hi6XjJ9 K8iofrYpN04JnsrGyNyiYyXZLuLCbH0BrgCXO01oAfnN/I62YoTfTVhYCF+rLSBKUf BkaAvUmFp0pZ8Y7Psml/mX+tRMwk9aJG3RFIr/bR8KS3zyZPcr+gp3djb5EIj2xBeb 9tdWACaDMSN5g== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 49/60] arm64: Enable LPA2 at boot if supported by the system Date: Tue, 7 Mar 2023 15:05:11 +0100 Message-Id: <20230307140522.2311461-50-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=13392; i=ardb@kernel.org; h=from:subject; bh=uqt4IdnYXpiB6CSkE9eZAiNwpiziaGeY81Q1yCJ946w=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpWHfi7pZB290+LaLf56Q9+HXx+m3+9j4Nii1W3nuD w0NinraUcrCIMbBICumyCIw+++7nacnStU6z5KFmcPKBDKEgYtTACYylY3hf5xsSNrOd0euH3VL O8W06oLHvQmFzg+3bPpw33ifnIMl10NGhiNPlVfpv/q/ROWg0d4NTW8brka5zItbNFvoeKK4I8e C3TwA X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060837_516359_ED4F80C6 X-CRM114-Status: GOOD ( 35.80 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Update the early kernel mapping code to take 52-bit virtual addressing into account based on the LPA2 feature. This is a bit more involved than LVA (which is supported with 64k pages only), given that some page table descriptor bits change meaning in this case. To keep the handling in asm to a minimum, the initial ID map is still created with 48-bit virtual addressing, which implies that the kernel image must be loaded into 48-bit addressable physical memory. 
This is currently required by the boot protocol, even though we happen to support placement outside of that for LVA/64k based configurations. Enabling LPA2 involves more than setting TCR.T1SZ to a lower value, there is also a DS bit in TCR that needs to be set, and which changes the meaning of bits [9:8] in all page table descriptors. Since we cannot enable DS and every live page table descriptor at the same time, let's pivot through another temporary mapping. This avoids the need to reintroduce manipulations of the page tables with the MMU and caches disabled. To permit the LPA2 feature to be overridden on the kernel command line, which may be necessary to work around silicon errata, or to deal with mismatched features on heterogeneous SoC designs, test for CPU feature overrides first, and only then enable LPA2. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/assembler.h | 8 ++- arch/arm64/include/asm/cpufeature.h | 18 +++++ arch/arm64/include/asm/memory.h | 4 ++ arch/arm64/kernel/head.S | 8 +++ arch/arm64/kernel/image-vars.h | 1 + arch/arm64/kernel/pi/map_kernel.c | 70 +++++++++++++++++++- arch/arm64/kernel/pi/map_range.c | 11 ++- arch/arm64/kernel/pi/pi.h | 4 +- arch/arm64/mm/init.c | 2 +- arch/arm64/mm/mmu.c | 6 +- arch/arm64/mm/proc.S | 3 + 11 files changed, 124 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 55e8731844cf7eb7..d5e139ce0820479b 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -581,11 +581,17 @@ alternative_endif * but we have to add an offset so that the TTBR1 address corresponds with the * pgdir entry that covers the lowest 48-bit addressable VA. * + * Note that this trick is only used for LVA/64k pages - LPA2/4k pages uses an + * additional paging level, and on LPA2/16k pages, we would end up with a root + * level table with only 2 entries, which is suboptimal in terms of TLB + * utilization, so there we fall back to 47 bits of translation if LPA2 is not + * supported. + * * orr is used as it can cover the immediate value (and is idempotent). * ttbr: Value of ttbr to set, modified. 
*/ .macro offset_ttbr1, ttbr, tmp -#ifdef CONFIG_ARM64_VA_BITS_52 +#if defined(CONFIG_ARM64_VA_BITS_52) && !defined(CONFIG_ARM64_LPA2) mrs \tmp, tcr_el1 and \tmp, \tmp, #TCR_T1SZ_MASK cmp \tmp, #TCR_T1SZ(VA_BITS_MIN) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 7faf9a48339e7c8c..170e18cb2b4faf11 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -1002,6 +1002,24 @@ static inline bool cpu_has_lva(void) ID_AA64MMFR2_EL1_VARange_SHIFT); } +static inline bool cpu_has_lpa2(void) +{ +#ifdef CONFIG_ARM64_LPA2 + u64 mmfr0; + int feat; + + mmfr0 = read_sysreg(id_aa64mmfr0_el1); + mmfr0 &= ~id_aa64mmfr0_override.mask; + mmfr0 |= id_aa64mmfr0_override.val; + feat = cpuid_feature_extract_signed_field(mmfr0, + ID_AA64MMFR0_EL1_TGRAN_SHIFT); + + return feat >= ID_AA64MMFR0_EL1_TGRAN_LPA2; +#else + return false; +#endif +} + #endif /* __ASSEMBLY__ */ #endif diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 3e32d957aadcb2bb..3163f0748d90b313 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -54,7 +54,11 @@ #define FIXADDR_TOP (ULONG_MAX - SZ_8M + 1) #if VA_BITS > 48 +#ifdef CONFIG_ARM64_16K_PAGES +#define VA_BITS_MIN (47) +#else #define VA_BITS_MIN (48) +#endif #else #define VA_BITS_MIN (VA_BITS) #endif diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 47ebe3242d7feb7e..17b2a5518f5b7f3b 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -89,6 +89,7 @@ SYM_CODE_START(primary_entry) mov sp, x1 mov x29, xzr adrp x0, init_idmap_pg_dir + mov x1, xzr bl __pi_create_init_idmap /* @@ -471,9 +472,16 @@ SYM_FUNC_END(__enable_mmu) #ifdef CONFIG_ARM64_VA_BITS_52 SYM_FUNC_START(__cpu_secondary_check52bitva) +#ifndef CONFIG_ARM64_LPA2 mrs_s x0, SYS_ID_AA64MMFR2_EL1 and x0, x0, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT) cbnz x0, 2f +#else + mrs x0, id_aa64mmfr0_el1 + sbfx x0, x0, #ID_AA64MMFR0_EL1_TGRAN_SHIFT, 4 + cmp x0, #ID_AA64MMFR0_EL1_TGRAN_LPA2 + b.ge 2f +#endif update_early_cpu_boot_status \ CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_52_BIT_VA, x0, x1 diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 79a7e0e3edd1aa21..15e396c9113189f9 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -49,6 +49,7 @@ PROVIDE(__pi__ctype = _ctype); PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); PROVIDE(__pi_init_idmap_pg_dir = init_idmap_pg_dir); +PROVIDE(__pi_init_idmap_pg_end = init_idmap_pg_end); PROVIDE(__pi_init_pg_dir = init_pg_dir); PROVIDE(__pi_init_pg_end = init_pg_end); PROVIDE(__pi_swapper_pg_dir = swapper_pg_dir); diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index c1a5bef4e10a49d7..a14b7c1236b5707c 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -128,11 +128,64 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level) } /* Copy the root page table to its final location */ - memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PGD_SIZE); + memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PAGE_SIZE); dsb(ishst); idmap_cpu_replace_ttbr1(swapper_pg_dir); } +static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr) +{ + u64 sctlr = read_sysreg(sctlr_el1); + u64 tcr = read_sysreg(tcr_el1) | TCR_DS; + + asm(" msr sctlr_el1, %0 ;" + " isb ;" + " msr ttbr0_el1, %1 ;" + " msr tcr_el1, %2 ;" + " isb ;" + " tlbi vmalle1 ;" + " dsb 
nsh ;" + " isb ;" + " msr sctlr_el1, %3 ;" + " isb ;" + :: "r"(sctlr & ~SCTLR_ELx_M), "r"(ttbr), "r"(tcr), "r"(sctlr)); +} + +static void __init remap_idmap_for_lpa2(void) +{ + /* clear the bits that change meaning once LPA2 is turned on */ + pteval_t mask = PTE_SHARED; + + /* + * We have to clear bits [9:8] in all block or page descriptors in the + * initial ID map, as otherwise they will be (mis)interpreted as + * physical address bits once we flick the LPA2 switch (TCR.DS). Since + * we cannot manipulate live descriptors in that way without creating + * potential TLB conflicts, let's create another temporary ID map in a + * LPA2 compatible fashion, and update the initial ID map while running + * from that. + */ + create_init_idmap(init_pg_dir, mask); + dsb(ishst); + set_ttbr0_for_lpa2((u64)init_pg_dir); + + /* + * Recreate the initial ID map with the same granularity as before. + * Don't bother with the FDT, we no longer need it after this. + */ + memset(init_idmap_pg_dir, 0, + (u64)init_idmap_pg_dir - (u64)init_idmap_pg_end); + + create_init_idmap(init_idmap_pg_dir, mask); + dsb(ishst); + + /* switch back to the updated initial ID map */ + set_ttbr0_for_lpa2((u64)init_idmap_pg_dir); + + /* wipe the temporary ID map from memory */ + memset(init_pg_dir, 0, (u64)init_pg_end - (u64)init_pg_dir); +} + static void map_fdt(u64 fdt) { static u8 ptes[INIT_IDMAP_FDT_SIZE] __initdata __aligned(PAGE_SIZE); @@ -155,6 +208,7 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) u64 va_base, pa_base = (u64)&_text; u64 kaslr_offset = pa_base % MIN_KIMG_ALIGN; int root_level = 4 - CONFIG_PGTABLE_LEVELS; + int va_bits = VA_BITS; int chosen; map_fdt((u64)fdt); @@ -166,8 +220,15 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) chosen = fdt_path_offset(fdt, chosen_str); init_feature_override(boot_status, fdt, chosen); - if (VA_BITS > VA_BITS_MIN && cpu_has_lva()) - sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(VA_BITS)); + if (IS_ENABLED(CONFIG_ARM64_64K_PAGES) && !cpu_has_lva()) { + va_bits = VA_BITS_MIN; + } else if (IS_ENABLED(CONFIG_ARM64_LPA2) && !cpu_has_lpa2()) { + va_bits = VA_BITS_MIN; + root_level++; + } + + if (va_bits > VA_BITS_MIN) + sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(va_bits)); /* * The virtual KASLR displacement modulo 2MiB is decided by the @@ -191,6 +252,9 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) kaslr_offset |= kaslr_seed & ~(MIN_KIMG_ALIGN - 1); } + if (IS_ENABLED(CONFIG_ARM64_LPA2) && va_bits > VA_BITS_MIN) + remap_idmap_for_lpa2(); + va_base = KIMAGE_VADDR + kaslr_offset; map_kernel(kaslr_offset, va_base - pa_base, root_level); } diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c index 8de3fe2c1cb55722..8a55c7775db8390f 100644 --- a/arch/arm64/kernel/pi/map_range.c +++ b/arch/arm64/kernel/pi/map_range.c @@ -86,14 +86,19 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot, } } -asmlinkage u64 __init create_init_idmap(pgd_t *pg_dir) +asmlinkage u64 __init create_init_idmap(pgd_t *pg_dir, pteval_t clrmask) { u64 ptep = (u64)pg_dir + PAGE_SIZE; + pgprot_t text_prot = PAGE_KERNEL_ROX; + pgprot_t data_prot = PAGE_KERNEL; + + pgprot_val(text_prot) &= ~clrmask; + pgprot_val(data_prot) &= ~clrmask; map_range(&ptep, (u64)_stext, (u64)__initdata_begin, (u64)_stext, - PAGE_KERNEL_ROX, IDMAP_ROOT_LEVEL, (pte_t *)pg_dir, false, 0); + text_prot, IDMAP_ROOT_LEVEL, (pte_t *)pg_dir, false, 0); map_range(&ptep, (u64)__initdata_begin, (u64)_end, 
(u64)__initdata_begin, - PAGE_KERNEL, IDMAP_ROOT_LEVEL, (pte_t *)pg_dir, false, 0); + data_prot, IDMAP_ROOT_LEVEL, (pte_t *)pg_dir, false, 0); return ptep; } diff --git a/arch/arm64/kernel/pi/pi.h b/arch/arm64/kernel/pi/pi.h index a8fe79c9b111140f..93d7457dd10d0380 100644 --- a/arch/arm64/kernel/pi/pi.h +++ b/arch/arm64/kernel/pi/pi.h @@ -17,7 +17,7 @@ static inline void *prel64_to_pointer(const prel64_t *offset) extern bool dynamic_scs_is_enabled; -extern pgd_t init_idmap_pg_dir[]; +extern pgd_t init_idmap_pg_dir[], init_idmap_pg_end[]; void init_feature_override(u64 boot_status, const void *fdt, int chosen); u64 kaslr_early_init(void *fdt, int chosen); @@ -27,4 +27,4 @@ int scs_patch(const u8 eh_frame[], int size); void map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot, int level, pte_t *tbl, bool may_use_cont, u64 va_offset); -asmlinkage u64 create_init_idmap(pgd_t *pgd); +asmlinkage u64 create_init_idmap(pgd_t *pgd, pteval_t clrmask); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 58a0bb2c17f18cf5..362b6884427c083f 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -334,7 +334,7 @@ void __init arm64_memblock_init(void) * physical address of PAGE_OFFSET, we have to *subtract* from it. */ if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52)) - memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52); + memstart_addr -= _PAGE_OFFSET(vabits_actual) - _PAGE_OFFSET(52); /* * Apply the memory limit if it was set. Since the kernel may be loaded diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index a16bcfba2e500600..6426df5211456cfe 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -543,8 +543,12 @@ static void __init map_mem(pgd_t *pgdp) * entries at any level are being shared between the linear region and * the vmalloc region. Check whether this is true for the PGD level, in * which case it is guaranteed to be true for all other levels as well. + * (Unless we are running with support for LPA2, in which case the + * entire reduced VA space is covered by a single pgd_t which will have + * been populated without the PXNTable attribute by the time we get here.) 
*/ - BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end)); + BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end) && + pgd_index(_PAGE_OFFSET(VA_BITS_MIN)) != PTRS_PER_PGD - 1); if (can_set_direct_map()) flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS; diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index f425cfc3e4dad188..68f8337cd8bf6ab9 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -488,6 +488,9 @@ SYM_FUNC_START(__cpu_setup) mov x9, #64 - VA_BITS alternative_if ARM64_HAS_VA52 tcr_set_t1sz tcr, x9 +#ifdef CONFIG_ARM64_LPA2 + orr tcr, tcr, #TCR_DS +#endif alternative_else_nop_endif #endif From patchwork Tue Mar 7 14:05:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163867 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 571C3C678D4 for ; Tue, 7 Mar 2023 14:52:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=FkpB5k2T3wFyUJ3NZZ0Q0xnjPv6Qyx4nX0fftFN2E00=; b=Bp0YPDoEaYQBeE eugdcm34CQINrmCELZWzA4gUe+CjCQonsq1JXM0Bku21mdddf/GPEHsG1iPITM/bsjTvfEmWsHjjq b9JWJ/nBTyIXCTH9QxS3p523ao0WeKBS/dF5XJEM1zwzZ0eWvdMfJsRwxRZXrCo+lennOQHFjog08 NdtsPEliENNWKt5zdjw30rzn7Z9ZoMveUK/mGgyBr3DKS8TXP2l6LNKCp4J6ac4vl7d4saBKYfvNM NUeWe1bStXQybhOIZbC+a3PJiSuSOGjOXe2mUoA0suht/fuctEE2P1zCroG3/aB+mgNHiOxU6/oJw rLctmVTwo4swmGYYQNMA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYee-000xyo-Hl; Tue, 07 Mar 2023 14:51:04 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzb-000esd-RK for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:41 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4C4026145C; Tue, 7 Mar 2023 14:08:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BA955C4339E; Tue, 7 Mar 2023 14:08:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198118; bh=jq0ZfsvwfvGD7dc7+4ei8T6xxeCdjcLuVSzc5ozEYYU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=P/g1FxQvtQ6Djb0efLquHx4sC5zwNZTzw6ha7e7dDSnGZtiJkcdGLhAJOIbO7Udrr c/pOatQglTTG4auyFwSaRThUdUosTsqq8d0NXzEt3QV9CzO9jtP37D1/KPzds7wH+X iP0lQkhdXjUMc+gIAoZpf0kTBUVYuSr0BYnrH6c2QUDuAl/7jsVa/FpkBUsHMTLxM1 VuXoIvbr6ldY+JSpVneCzpiOuUWgYh6w2DJ8eLlEmMyTGEe6LsmKBzDyoOSkp9EjIV 7kIVdgnn0PwRwNNqe9Jz7RuZdjmXLDXFUrz41yln2eVd24ywrvCeY5wWzD1qdtalDJ /RcDxpv06whmg== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin 
Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 50/60] arm64: mm: Add 5 level paging support to fixmap and swapper handling Date: Tue, 7 Mar 2023 15:05:12 +0100 Message-Id: <20230307140522.2311461-51-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6571; i=ardb@kernel.org; h=from:subject; bh=jq0ZfsvwfvGD7dc7+4ei8T6xxeCdjcLuVSzc5ozEYYU=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpUX48+cq3RMuB7/WPMm/4j+p8uWBmeqSBgce7tNIk Um+O+FrRykLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZjIG3aG/4UHXR/4de2Ysdhj Btuf1Jipx7veRyn9luW25rl0RXo1635Ghmtid2bd9fz36cClqTu/2B3bu2/eKtOSN4zfXx36c39 ShgMHAA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060840_017169_B152788E X-CRM114-Status: GOOD ( 16.82 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add support for using 5 levels of paging in the fixmap, as well as in the kernel page table handling code which uses fixmaps internally. This also handles the case where a 5 level build runs on hardware that only supports 4 levels of paging. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/fixmap.h | 1 + arch/arm64/include/asm/pgtable.h | 45 ++++++++++++-- arch/arm64/mm/mmu.c | 64 +++++++++++++++++--- 3 files changed, 96 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h index 71ed5fdf718bd0fd..8d4ec7bf74afb7a6 100644 --- a/arch/arm64/include/asm/fixmap.h +++ b/arch/arm64/include/asm/fixmap.h @@ -90,6 +90,7 @@ enum fixed_addresses { FIX_PTE, FIX_PMD, FIX_PUD, + FIX_P4D, FIX_PGD, __end_of_fixed_addresses diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index c667073e3f56755d..a286ecc447d33b24 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -613,12 +613,12 @@ static inline bool pud_table(pud_t pud) { return true; } PUD_TYPE_TABLE) #endif -extern pgd_t init_pg_dir[PTRS_PER_PGD]; +extern pgd_t init_pg_dir[]; extern pgd_t init_pg_end[]; -extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; -extern pgd_t idmap_pg_dir[PTRS_PER_PGD]; -extern pgd_t tramp_pg_dir[PTRS_PER_PGD]; -extern pgd_t reserved_pg_dir[PTRS_PER_PGD]; +extern pgd_t swapper_pg_dir[]; +extern pgd_t idmap_pg_dir[]; +extern pgd_t tramp_pg_dir[]; +extern pgd_t reserved_pg_dir[]; extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd); @@ -883,12 +883,47 @@ static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long addr) return p4d_offset_lockless(pgdp, READ_ONCE(*pgdp), addr); } +static inline p4d_t *p4d_set_fixmap(unsigned long addr) +{ + if (!pgtable_l5_enabled()) + return NULL; + return (p4d_t *)set_fixmap_offset(FIX_P4D, addr); +} + +static inline p4d_t *p4d_set_fixmap_offset(pgd_t *pgdp, unsigned long addr) +{ + if (!pgtable_l5_enabled()) + return pgd_to_folded_p4d(pgdp, addr); + return p4d_set_fixmap(p4d_offset_phys(pgdp, addr)); +} + +static inline void p4d_clear_fixmap(void) +{ + if (pgtable_l5_enabled()) + clear_fixmap(FIX_P4D); +} + +/* use ONLY for 
statically allocated translation tables */ +static inline p4d_t *p4d_offset_kimg(pgd_t *pgdp, u64 addr) +{ + if (!pgtable_l5_enabled()) + return pgd_to_folded_p4d(pgdp, addr); + return (p4d_t *)__phys_to_kimg(p4d_offset_phys(pgdp, addr)); +} + #define pgd_page(pgd) pfn_to_page(__phys_to_pfn(__pgd_to_phys(pgd))) #else static inline bool pgtable_l5_enabled(void) { return false; } +/* Match p4d_offset folding in */ +#define p4d_set_fixmap(addr) NULL +#define p4d_set_fixmap_offset(p4dp, addr) ((p4d_t *)p4dp) +#define p4d_clear_fixmap() + +#define p4d_offset_kimg(dir,addr) ((p4d_t *)dir) + #endif /* CONFIG_PGTABLE_LEVELS > 4 */ #define pgd_ERROR(e) \ diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 6426df5211456cfe..77cd163375124c6a 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -315,15 +315,14 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, } while (addr = next, addr != end); } -static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, +static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end, phys_addr_t phys, pgprot_t prot, phys_addr_t (*pgtable_alloc)(int), int flags) { unsigned long next; - pud_t *pudp; - p4d_t *p4dp = p4d_offset(pgdp, addr); p4d_t p4d = READ_ONCE(*p4dp); + pud_t *pudp; if (p4d_none(p4d)) { p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN; @@ -371,6 +370,46 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, pud_clear_fixmap(); } +static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end, + phys_addr_t phys, pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), + int flags) +{ + unsigned long next; + pgd_t pgd = READ_ONCE(*pgdp); + p4d_t *p4dp; + + if (pgd_none(pgd)) { + pgdval_t pgdval = PGD_TYPE_TABLE | PGD_TABLE_UXN; + phys_addr_t p4d_phys; + + if (flags & NO_EXEC_MAPPINGS) + pgdval |= PGD_TABLE_PXN; + BUG_ON(!pgtable_alloc); + p4d_phys = pgtable_alloc(P4D_SHIFT); + __pgd_populate(pgdp, p4d_phys, pgdval); + pgd = READ_ONCE(*pgdp); + } + BUG_ON(pgd_bad(pgd)); + + p4dp = p4d_set_fixmap_offset(pgdp, addr); + do { + p4d_t old_p4d = READ_ONCE(*p4dp); + + next = p4d_addr_end(addr, end); + + alloc_init_pud(p4dp, addr, next, phys, prot, + pgtable_alloc, flags); + + BUG_ON(p4d_val(old_p4d) != 0 && + p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp))); + + phys += next - addr; + } while (p4dp++, addr = next, addr != end); + + p4d_clear_fixmap(); +} + static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys, unsigned long virt, phys_addr_t size, pgprot_t prot, @@ -393,7 +432,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys, do { next = pgd_addr_end(addr, end); - alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc, + alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc, flags); phys += next - addr; } while (pgdp++, addr = next, addr != end); @@ -1120,10 +1159,19 @@ void vmemmap_free(unsigned long start, unsigned long end, } #endif /* CONFIG_MEMORY_HOTPLUG */ -static inline pud_t *fixmap_pud(unsigned long addr) +static inline p4d_t *fixmap_p4d(unsigned long addr) { pgd_t *pgdp = pgd_offset_k(addr); - p4d_t *p4dp = p4d_offset(pgdp, addr); + pgd_t pgd = READ_ONCE(*pgdp); + + BUG_ON(pgd_none(pgd) || pgd_bad(pgd)); + + return p4d_offset_kimg(pgdp, addr); +} + +static inline pud_t *fixmap_pud(unsigned long addr) +{ + p4d_t *p4dp = fixmap_p4d(addr); p4d_t p4d = READ_ONCE(*p4dp); BUG_ON(p4d_none(p4d) || p4d_bad(p4d)); @@ -1154,14 +1202,12 @@ static inline pte_t *fixmap_pte(unsigned long addr) */ void __init 
early_fixmap_init(void) { - pgd_t *pgdp; p4d_t *p4dp, p4d; pud_t *pudp; pmd_t *pmdp; unsigned long addr = FIXADDR_START; - pgdp = pgd_offset_k(addr); - p4dp = p4d_offset(pgdp, addr); + p4dp = fixmap_p4d(addr); p4d = READ_ONCE(*p4dp); if (CONFIG_PGTABLE_LEVELS > 3 && !(p4d_none(p4d) || p4d_page_paddr(p4d) == __pa_symbol(bm_pud))) { From patchwork Tue Mar 7 14:05:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163868 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A1C7BC678D4 for ; Tue, 7 Mar 2023 14:52:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=DvADP/Ob2H1JKO0SIZ1XchDckGsWQslChWveH7HEjl0=; b=Gh6c7GMqDx8wqx Td169EKinFm5f4wl7i/x5t8SeNuDvZSm+pcI6+HOt2DL/y/BNbhH4dv+WSceH6Tpx/6PMYYN65K6e EteOo7OiduDNLgmhfFVr1vifH3zE6zizJZrleWJ3clzqByTIRRw8hs5GrypsO6AAYhvcdunhihd9v j9K/Fkw6RNffyLL9lxIdT81d1bCwX9aVJBCpAhHTFPv9TTv5r7pp46ucrzn6ZDE/C8jYlc6SOj257 nCAvRzlfPcv0khPEFziQ1XYUDWAzh0G5VO82XPCcFcooE+p2ibCaXhZ1IX1rSo343zPxuGW1M0X61 4jwD4DbN/d9h9T5uEX0w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYf1-000yAT-Tm; Tue, 07 Mar 2023 14:51:28 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzf-000eu0-Oe for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:46 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 607F3B818FF; Tue, 7 Mar 2023 14:08:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 29C8BC433A0; Tue, 7 Mar 2023 14:08:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198121; bh=gZ6gT92QoewG67ylJsd53DevrQCpgBq5112YtMXuQRo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TRNBdv0NFEKBnS2lFqek4SATtENdEPGBRch+JF41rd9uNvJhbI17ES1Td3abUFDER Ke6ptVpzc28DJ1gF0r+NHw1mFqi2YHyp7EDnh6fOpae2aHwCBDKhUWx4vYGX32vHgA 9ObQl4V/08s/85qs1tljGmIOYbsd99LB4S3a6A8DNFcuLnXl3Z/jPimstby0RP1CGl eE5ZSVjT1mweedTOg7ca8Ig3cdZ7h/f+hU6o1NVlBnK/JCSb1NjlMrerDZuMvgw93K Bj0yluZ6KFNOUtnqGvTaIMcYp9gT/WedJDVZe7Wrca4HkHdsEbJ7E5DyhfIEit5Nfr CWRK8+qbRq5Ng== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 51/60] arm64: kasan: Reduce minimum shadow alignment and enable 5 level paging Date: Tue, 7 Mar 2023 15:05:13 +0100 Message-Id: <20230307140522.2311461-52-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 
In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=9265; i=ardb@kernel.org; h=from:subject; bh=gZ6gT92QoewG67ylJsd53DevrQCpgBq5112YtMXuQRo=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpS2Xof1souYOwccVnnP/nZX7HPXzp8WNVaXc3Y1ru pkreQo6SlkYxDgYZMUUWQRm/3238/REqVrnWbIwc1iZQIYwcHEKwEQSEhgZOvK/PnTfUsn9ItrP vE9HPLs8NSbG5P2Oo1V6Fs3bQrS3MTIcFXq0xyzk0wO3XSemrC/hsX9QkbFLTiX2y6yHb2XCFFz 5AA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060844_148670_BBE1C864 X-CRM114-Status: GOOD ( 26.02 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Allow the KASAN init code to deal with 5 levels of paging, and relax the requirement that the shadow region is aligned to the top level pgd_t size. This is necessary for LPA2 based 52-bit virtual addressing, where the KASAN shadow will never be aligned to the pgd_t size. Allowing this also enables the 16k/48-bit case for KASAN, which is a nice bonus. This involves some hackery to manipulate the root and next level page tables without having to distinguish all the various configurations, including 16k/48-bits (which has a two entry pgd_t level), and LPA2 configurations running with one translation level less on non-LPA2 hardware. Signed-off-by: Ard Biesheuvel --- arch/arm64/Kconfig | 2 +- arch/arm64/mm/kasan_init.c | 143 ++++++++++++++++++-- 2 files changed, 130 insertions(+), 15 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index d287dad29198c843..52aac583823863e4 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -157,7 +157,7 @@ config ARM64 select HAVE_ARCH_HUGE_VMAP select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_JUMP_LABEL_RELATIVE - select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) + select HAVE_ARCH_KASAN select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE) diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index 7e32f21fb8e1e227..7ab7520133946e91 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -23,7 +23,7 @@ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) -static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE); +static pgd_t tmp_pg_dir[PTRS_PER_PTE] __initdata __aligned(PAGE_SIZE); /* * The p*d_populate functions call virt_to_phys implicitly so they can't be used @@ -99,6 +99,19 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node, return early ? pud_offset_kimg(p4dp, addr) : pud_offset(p4dp, addr); } +static p4d_t *__init kasan_p4d_offset(pgd_t *pgdp, unsigned long addr, int node, + bool early) +{ + if (pgd_none(READ_ONCE(*pgdp))) { + phys_addr_t p4d_phys = early ? + __pa_symbol(kasan_early_shadow_p4d) + : kasan_alloc_zeroed_page(node); + __pgd_populate(pgdp, p4d_phys, PGD_TYPE_TABLE); + } + + return early ? 
p4d_offset_kimg(pgdp, addr) : p4d_offset(pgdp, addr); +} + static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr, unsigned long end, int node, bool early) { @@ -144,12 +157,12 @@ static void __init kasan_p4d_populate(pgd_t *pgdp, unsigned long addr, unsigned long end, int node, bool early) { unsigned long next; - p4d_t *p4dp = p4d_offset(pgdp, addr); + p4d_t *p4dp = kasan_p4d_offset(pgdp, addr, node, early); do { next = p4d_addr_end(addr, end); kasan_pud_populate(p4dp, addr, next, node, early); - } while (p4dp++, addr = next, addr != end); + } while (p4dp++, addr = next, addr != end && p4d_none(READ_ONCE(*p4dp))); } static void __init kasan_pgd_populate(unsigned long addr, unsigned long end, @@ -165,14 +178,48 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end, } while (pgdp++, addr = next, addr != end); } +#if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS > 4 +#define SHADOW_ALIGN P4D_SIZE +#else +#define SHADOW_ALIGN PUD_SIZE +#endif + +/* + * Return whether 'addr' is aligned to the size covered by a root level + * descriptor. + */ +static bool __init root_level_aligned(u64 addr) +{ + int shift = (ARM64_HW_PGTABLE_LEVELS(vabits_actual) - 1) * (PAGE_SHIFT - 3); + + return (addr % (PAGE_SIZE << shift)) == 0; +} + /* The early shadow maps everything to a single page of zeroes */ asmlinkage void __init kasan_early_init(void) { BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT))); - BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE)); - BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE)); - BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE)); + BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN)); + BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN)); + BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN)); + + if (!root_level_aligned(KASAN_SHADOW_START)) { + /* + * The start address is misaligned, and so the next level table + * will be shared with the linear region. This can happen with + * 4 or 5 level paging, so install a generic pte_t[] as the + * next level. This prevents the kasan_pgd_populate call below + * from inserting an entry that refers to the shared KASAN zero + * shadow pud_t[]/p4d_t[], which could end up getting corrupted + * when the linear region is mapped. + */ + static pte_t tbl[PTRS_PER_PTE] __page_aligned_bss; + pgd_t *pgdp = pgd_offset_k(KASAN_SHADOW_START); + + set_pgd(pgdp, __pgd(__pa_symbol(tbl) | PGD_TYPE_TABLE)); + } + kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE, true); } @@ -184,20 +231,75 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end, kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false); } -static void __init clear_pgds(unsigned long start, - unsigned long end) +/* + * Return the descriptor index of 'addr' in the root level table + */ +static int __init root_level_idx(u64 addr) { /* - * Remove references to kasan page tables from - * swapper_pg_dir. pgd_clear() can't be used - * here because it's nop on 2,3-level pagetable setups + * On 64k pages, the TTBR1 range root tables are extended for 52-bit + * virtual addressing, and TTBR1 will simply point to the pgd_t entry + * that covers the start of the 48-bit addressable VA space if LVA is + * not implemented. This means we need to index the table as usual, + * instead of masking off bits based on vabits_actual. 
*/ - for (; start < end; start += PGDIR_SIZE) - set_pgd(pgd_offset_k(start), __pgd(0)); + u64 vabits = IS_ENABLED(CONFIG_ARM64_64K_PAGES) ? VA_BITS + : vabits_actual; + int shift = (ARM64_HW_PGTABLE_LEVELS(vabits) - 1) * (PAGE_SHIFT - 3); + + return (addr & ~_PAGE_OFFSET(vabits)) >> (shift + PAGE_SHIFT); +} + +/* + * Clone a next level table from swapper_pg_dir into tmp_pg_dir + */ +static void __init clone_next_level(u64 addr, pgd_t *tmp_pg_dir, pud_t *pud) +{ + int idx = root_level_idx(addr); + pgd_t pgd = READ_ONCE(swapper_pg_dir[idx]); + pud_t *pudp = (pud_t *)__phys_to_kimg(__pgd_to_phys(pgd)); + + memcpy(pud, pudp, PAGE_SIZE); + tmp_pg_dir[idx] = __pgd(__phys_to_pgd_val(__pa_symbol(pud)) | + PUD_TYPE_TABLE); +} + +/* + * Return the descriptor index of 'addr' in the next level table + */ +static int __init next_level_idx(u64 addr) +{ + int shift = (ARM64_HW_PGTABLE_LEVELS(vabits_actual) - 2) * (PAGE_SHIFT - 3); + + return (addr >> (shift + PAGE_SHIFT)) % PTRS_PER_PTE; +} + +/* + * Dereference the table descriptor at 'pgd_idx' and clear the entries from + * 'start' to 'end' (exclusive) from the table. + */ +static void __init clear_next_level(int pgd_idx, int start, int end) +{ + pgd_t pgd = READ_ONCE(swapper_pg_dir[pgd_idx]); + pud_t *pudp = (pud_t *)__phys_to_kimg(__pgd_to_phys(pgd)); + + memset(&pudp[start], 0, (end - start) * sizeof(pud_t)); +} + +static void __init clear_shadow(u64 start, u64 end) +{ + int l = root_level_idx(start), m = root_level_idx(end); + + if (!root_level_aligned(start)) + clear_next_level(l++, next_level_idx(start), PTRS_PER_PTE); + if (!root_level_aligned(end)) + clear_next_level(m, 0, next_level_idx(end)); + memset(&swapper_pg_dir[l], 0, (m - l) * sizeof(pgd_t)); } static void __init kasan_init_shadow(void) { + static pud_t pud[2][PTRS_PER_PUD] __initdata __aligned(PAGE_SIZE); u64 kimg_shadow_start, kimg_shadow_end; u64 mod_shadow_start, mod_shadow_end; u64 vmalloc_shadow_end; @@ -220,10 +322,23 @@ static void __init kasan_init_shadow(void) * setup will be finished. */ memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir)); + + /* + * If the start or end address of the shadow region is not aligned to + * the root level size, we have to allocate a temporary next-level table + * in each case, clone the next level of descriptors, and install the + * table into tmp_pg_dir. Note that with 5 levels of paging, the next + * level will in fact be p4d_t, but that makes no difference in this + * case. 
+ */ + if (!root_level_aligned(KASAN_SHADOW_START)) + clone_next_level(KASAN_SHADOW_START, tmp_pg_dir, pud[0]); + if (!root_level_aligned(KASAN_SHADOW_END)) + clone_next_level(KASAN_SHADOW_END, tmp_pg_dir, pud[1]); dsb(ishst); cpu_replace_ttbr1(lm_alias(tmp_pg_dir)); - clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END); + clear_shadow(KASAN_SHADOW_START, KASAN_SHADOW_END); kasan_map_populate(kimg_shadow_start, kimg_shadow_end, early_pfn_to_nid(virt_to_pfn(lm_alias(KERNEL_START)))); From patchwork Tue Mar 7 14:05:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163869 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B00D3C678D4 for ; Tue, 7 Mar 2023 14:53:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=0T61IsBAixUo43IocF+EvOkAzoFpn2o/qFyr56lZPj0=; b=mnUkvZLIq/3etY /MLwqlww8QqMXMbz7hHi/Bq/dWsS8G3w6ZE8Rlgglwyww9+8kAPKu0OFZfuod4T1kj0C9PWaDjlxo niPYCuuGi0LwgihItk3c1z4G4pLlzGnKGEv1AiR9gQI8ED1ZZMyOZw9nNmVa54bU2G6LWLdrfjK+R IO/2EeCK2uypELg0eKEQf82JWe2y8dK6yEQOH0hEjba0n4S4Wsitl2KyE04A0CO56h4859YiNF70x pptwAHqhgz2MIkcPhtIuz8lD8EzoxZDIDV+Q1UefQ9ACHIVNkeDcf3++SIgF9wgStQgltmca9uHrq 2WK/tmTwKx6EaZLUtqqg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYfQ-000yLY-Ml; Tue, 07 Mar 2023 14:51:53 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzg-000eup-KS for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:46 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1D89461457; Tue, 7 Mar 2023 14:08:44 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8C47BC433A4; Tue, 7 Mar 2023 14:08:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198123; bh=M+Dxbzwxo1zhfbcCFC0Rc6KRuDeSFFODIUXYwuEc0NQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HR0Ne7eLliaReh5F1ElF6ihdhEvIHi6XJT51ROG5g7YmaOMCoRQNnTc/Ht/yWa1Ea WtU8h46WTfYbwdXFKA/THdbA5b6azNWHqN1A6polWgXkwc33m+xCuEhDEIg8RhIRCa 6US23ctHhX4qt+sBavKW18bMbTKXu8c6uHWM1XGBIFQNHkwkl0tBoZYf8nndtEfFLW h2DtLgfNCUWACUyXX0WA8GqJ918mrWYp39N7Zczl/vuMoaKfp6zGWCoovNTm3hXGbs kjvSM0juOjczrAUcePSXHfakKpmFllB083rDe5OMQEYFKlOr0Ko01PtOyrKnEgAZJG YOWvUv/3EOZMw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 52/60] arm64: mm: Add support for folding PUDs at runtime Date: 
Tue, 7 Mar 2023 15:05:14 +0100 Message-Id: <20230307140522.2311461-53-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=8426; i=ardb@kernel.org; h=from:subject; bh=M+Dxbzwxo1zhfbcCFC0Rc6KRuDeSFFODIUXYwuEc0NQ=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpXPjq4MMiSZP7l/JVLq0/rvf+tincVIXayTlHzC/P 7xzUrBoRykLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZiIbSzDX/muB7K31SeUe7ef Mk7mf2H9fuvnPVOa/8gxNtxZtCi2+TYjw6GNRYfrlISUlxoVa2rMKJlW72H5asHiOzwcSzlT5IT quQE= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060844_835260_8EC372FC X-CRM114-Status: GOOD ( 20.68 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In order to support LPA2 on 16k pages in a way that permits non-LPA2 systems to run the same kernel image, we have to be able to fall back to at most 48 bits of virtual addressing. Falling back to 48 bits would result in a level 0 with only 2 entries, which is suboptimal in terms of TLB utilization. So instead, let's fall back to 47 bits in that case. This means we need to be able to fold PUDs dynamically, similar to how we fold P4Ds for 48 bit virtual addressing on LPA2 with 4k pages. Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/pgalloc.h | 12 ++- arch/arm64/include/asm/pgtable.h | 87 +++++++++++++++++--- arch/arm64/include/asm/tlb.h | 3 +- arch/arm64/kernel/cpufeature.c | 2 + arch/arm64/mm/mmu.c | 2 +- arch/arm64/mm/pgd.c | 2 + 6 files changed, 94 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h index cae8c648f4628709..aeba2cf15a253ee1 100644 --- a/arch/arm64/include/asm/pgalloc.h +++ b/arch/arm64/include/asm/pgalloc.h @@ -14,6 +14,7 @@ #include #define __HAVE_ARCH_PGD_FREE +#define __HAVE_ARCH_PUD_FREE #include #define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t)) @@ -43,7 +44,8 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot) static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot) { - set_p4d(p4dp, __p4d(__phys_to_p4d_val(pudp) | prot)); + if (pgtable_l4_enabled()) + set_p4d(p4dp, __p4d(__phys_to_p4d_val(pudp) | prot)); } static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp) @@ -53,6 +55,14 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp) p4dval |= (mm == &init_mm) ? 
P4D_TABLE_UXN : P4D_TABLE_PXN; __p4d_populate(p4dp, __pa(pudp), p4dval); } + +static inline void pud_free(struct mm_struct *mm, pud_t *pud) +{ + if (!pgtable_l4_enabled()) + return; + BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); + free_page((unsigned long)pud); +} #else static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index a286ecc447d33b24..02ba59a8ede7e0fd 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -751,12 +751,27 @@ static inline pmd_t *pud_pgtable(pud_t pud) #if CONFIG_PGTABLE_LEVELS > 3 +static __always_inline bool pgtable_l4_enabled(void) +{ + if (CONFIG_PGTABLE_LEVELS > 4 || !IS_ENABLED(CONFIG_ARM64_LPA2)) + return true; + if (!alternative_has_feature_likely(ARM64_ALWAYS_BOOT)) + return vabits_actual == VA_BITS; + return alternative_has_feature_unlikely(ARM64_HAS_VA52); +} + +static inline bool mm_pud_folded(const struct mm_struct *mm) +{ + return !pgtable_l4_enabled(); +} +#define mm_pud_folded mm_pud_folded + #define pud_ERROR(e) \ pr_err("%s:%d: bad pud %016llx.\n", __FILE__, __LINE__, pud_val(e)) -#define p4d_none(p4d) (!p4d_val(p4d)) -#define p4d_bad(p4d) (!(p4d_val(p4d) & 2)) -#define p4d_present(p4d) (p4d_val(p4d)) +#define p4d_none(p4d) (pgtable_l4_enabled() && !p4d_val(p4d)) +#define p4d_bad(p4d) (pgtable_l4_enabled() && !(p4d_val(p4d) & 2)) +#define p4d_present(p4d) (!p4d_none(p4d)) static inline void set_p4d(p4d_t *p4dp, p4d_t p4d) { @@ -772,7 +787,8 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d) static inline void p4d_clear(p4d_t *p4dp) { - set_p4d(p4dp, __p4d(0)); + if (pgtable_l4_enabled()) + set_p4d(p4dp, __p4d(0)); } static inline phys_addr_t p4d_page_paddr(p4d_t p4d) @@ -780,25 +796,74 @@ static inline phys_addr_t p4d_page_paddr(p4d_t p4d) return __p4d_to_phys(p4d); } +#define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)) + +static inline pud_t *p4d_to_folded_pud(p4d_t *p4dp, unsigned long addr) +{ + return (pud_t *)PTR_ALIGN_DOWN(p4dp, PAGE_SIZE) + pud_index(addr); +} + static inline pud_t *p4d_pgtable(p4d_t p4d) { return (pud_t *)__va(p4d_page_paddr(p4d)); } -/* Find an entry in the first-level page table. 
*/ -#define pud_offset_phys(dir, addr) (p4d_page_paddr(READ_ONCE(*(dir))) + pud_index(addr) * sizeof(pud_t)) +static inline phys_addr_t pud_offset_phys(p4d_t *p4dp, unsigned long addr) +{ + BUG_ON(!pgtable_l4_enabled()); -#define pud_set_fixmap(addr) ((pud_t *)set_fixmap_offset(FIX_PUD, addr)) -#define pud_set_fixmap_offset(p4d, addr) pud_set_fixmap(pud_offset_phys(p4d, addr)) -#define pud_clear_fixmap() clear_fixmap(FIX_PUD) + return p4d_page_paddr(READ_ONCE(*p4dp)) + pud_index(addr) * sizeof(pud_t); +} -#define p4d_page(p4d) pfn_to_page(__phys_to_pfn(__p4d_to_phys(p4d))) +static inline +pud_t *pud_offset_lockless(p4d_t *p4dp, p4d_t p4d, unsigned long addr) +{ + if (!pgtable_l4_enabled()) + return p4d_to_folded_pud(p4dp, addr); + return (pud_t *)__va(p4d_page_paddr(p4d)) + pud_index(addr); +} +#define pud_offset_lockless pud_offset_lockless + +static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long addr) +{ + return pud_offset_lockless(p4dp, READ_ONCE(*p4dp), addr); +} +#define pud_offset pud_offset + +static inline pud_t *pud_set_fixmap(unsigned long addr) +{ + if (!pgtable_l4_enabled()) + return NULL; + return (pud_t *)set_fixmap_offset(FIX_PUD, addr); +} + +static inline pud_t *pud_set_fixmap_offset(p4d_t *p4dp, unsigned long addr) +{ + if (!pgtable_l4_enabled()) + return p4d_to_folded_pud(p4dp, addr); + return pud_set_fixmap(pud_offset_phys(p4dp, addr)); +} + +static inline void pud_clear_fixmap(void) +{ + if (pgtable_l4_enabled()) + clear_fixmap(FIX_PUD); +} /* use ONLY for statically allocated translation tables */ -#define pud_offset_kimg(dir,addr) ((pud_t *)__phys_to_kimg(pud_offset_phys((dir), (addr)))) +static inline pud_t *pud_offset_kimg(p4d_t *p4dp, u64 addr) +{ + if (!pgtable_l4_enabled()) + return p4d_to_folded_pud(p4dp, addr); + return (pud_t *)__phys_to_kimg(pud_offset_phys(p4dp, addr)); +} + +#define p4d_page(p4d) pfn_to_page(__phys_to_pfn(__p4d_to_phys(p4d))) #else +static inline bool pgtable_l4_enabled(void) { return false; } + #define p4d_page_paddr(p4d) ({ BUILD_BUG(); 0;}) /* Match pud_offset folding in */ diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index c995d1f4594f6692..a23d33b6b56b8c35 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -94,7 +94,8 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, unsigned long addr) { - tlb_remove_table(tlb, virt_to_page(pudp)); + if (pgtable_l4_enabled()) + tlb_remove_table(tlb, virt_to_page(pudp)); } #endif diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index b0fee11886e54ca2..62f6d397f27b07c0 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1731,6 +1731,8 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) if (levels == 5 && !pgtable_l5_enabled()) levels = 4; + else if (levels == 4 && !pgtable_l4_enabled()) + levels = 3; if (__this_cpu_read(this_cpu_vector) == vectors) { const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 77cd163375124c6a..955c7c3341fbc9c2 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1043,7 +1043,7 @@ static void free_empty_pud_table(p4d_t *p4dp, unsigned long addr, free_empty_pmd_table(pudp, addr, next, floor, ceiling); } while (addr = next, addr < end); - if (CONFIG_PGTABLE_LEVELS <= 3) + if (!pgtable_l4_enabled()) return; if (!pgtable_range_aligned(start, end, floor, ceiling, 
P4D_MASK)) diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c index 3c4f8a279d2bc76a..0c501cabc23846c4 100644 --- a/arch/arm64/mm/pgd.c +++ b/arch/arm64/mm/pgd.c @@ -21,6 +21,8 @@ static bool pgdir_is_page_size(void) { if (PGD_SIZE == PAGE_SIZE) return true; + if (CONFIG_PGTABLE_LEVELS == 4) + return !pgtable_l4_enabled(); if (CONFIG_PGTABLE_LEVELS == 5) return !pgtable_l5_enabled(); return false; From patchwork Tue Mar 7 14:05:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163870 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 43896C678D4 for ; Tue, 7 Mar 2023 14:53:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=rGhA8XwNKTU0tRntgCKcfREXqHNewodJucZJLWHc17U=; b=GtVHHO5EKcYT+K 2KdHeJ3KJLQA1rptwtTnP0C8AbLvijND/Djox7gesmAdakMW96pnRM3/1YMy1kLakqY3g4HnhgeaV W2tK+q/Bjt5uQ6HcTmmqTYEWkHpgHpSHJB/9OAIhL46VxRBIlRo1irPebpnxbsdqESweS63lsKWwr 7ksFPBPWMv/5yOBzlaPhCG6HI1Nez4AHcI6npBK1BT6ozlqhBivtO0w5sDcWIEbnWmnH/hwdZDj4A ViBmiTHElL/gTUeitWcgvfgOeREIS6Qyn6A2YMqgQpn3OySbrtF0K73Usj62sB5oN2BMeqWRqTADr dFojG5yYCqC17Q2ww82Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYgK-000yif-IL; Tue, 07 Mar 2023 14:52:48 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzi-000evg-Vz for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:48 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7FF0B6144B; Tue, 7 Mar 2023 14:08:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EE89AC433D2; Tue, 7 Mar 2023 14:08:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198125; bh=RiTy0vYA92racswYmJC1M4uj15gXwTRNr7hKOo4kKIg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tHi6giKRrBTBP69pl5D/Ze+suO+NTYUMuAAkvqG+E3EbEgSNRY/WwDhaWs2uKPUnZ oQQU1KFNMBUIPklIVCIfsZB23Adk9umPOVHqVb1hThs/suZ0BVi79cjcZMQsVMYDT5 CA2mZe4OxqRdzMOvM6iA2/g12i1TedUc+jgOZ0UB+LZN4061JpXJmqFsf5uFNxj3z0 1R+DQHfJOofRBRr5+MQvPBSKJh2Gmrzb9gLdXjyU+gmmLUEAFxK+Jrhq418+Z6dto2 UqRUsRLju7U+QD/O2WivTgxMbO+/75sfHkQvd+6ZlPxFvrGibbl9ezNpPnCsL37G0p pjG9zwq1jEHKw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 53/60] arm64: ptdump: Disregard unaddressable VA space Date: Tue, 7 Mar 2023 15:05:15 +0100 Message-Id: 
<20230307140522.2311461-54-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2007; i=ardb@kernel.org; h=from:subject; bh=RiTy0vYA92racswYmJC1M4uj15gXwTRNr7hKOo4kKIg=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpfsPQ7PLPGfjNrOZH1MdPO5I7ODnu/6iolxCYGrTm woL1+0dpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCKb3BgZFtkJLsjQfO3/sduE /xnjqQOc9+Nfz2xfY+J0ruU/w1Phewz/ffe3H7eZc+7Qe/36jy97S785Moq7va2vv78787Pob8Z b/AA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060847_147613_35F1C111 X-CRM114-Status: GOOD ( 17.36 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Configurations built with support for 52-bit virtual addressing can also run on CPUs that only support 48 bits of VA space, in which case only that part of swapper_pg_dir that represents the 48-bit addressable region is relevant, and everything else is ignored by the hardware. Our software pagetable walker has little in the way of input address validation, and so it will happily start a walk from an address that is not representable by the number of paging levels that are actually active, resulting in lots of bogus output from the page table dumper unless we take care to start at a valid address. Signed-off-by: Ard Biesheuvel --- arch/arm64/mm/ptdump.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c index 8f37d6d8b5216473..8aee5d25f3d8cbe6 100644 --- a/arch/arm64/mm/ptdump.c +++ b/arch/arm64/mm/ptdump.c @@ -345,7 +345,7 @@ static void __init ptdump_initialize(void) pg_level[i].mask |= pg_level[i].bits[j].mask; } -static struct ptdump_info kernel_ptdump_info = { +static struct ptdump_info kernel_ptdump_info __ro_after_init = { .mm = &init_mm, .markers = address_markers, .base_addr = PAGE_OFFSET, @@ -364,7 +364,7 @@ void ptdump_check_wx(void) .ptdump = { .note_page = note_page, .range = (struct ptdump_range[]) { - {PAGE_OFFSET, ~0UL}, + {_PAGE_OFFSET(vabits_actual), ~0UL}, {0, 0} } } @@ -388,8 +388,9 @@ static int __init ptdump_init(void) address_markers[VMALLOC_END_NR].start_address = VMALLOC_END; ptdump_initialize(); if (vabits_actual < VA_BITS) { + kernel_ptdump_info.base_addr = _PAGE_OFFSET(vabits_actual); address_markers[VMEMMAP_START_NR].start_address = - (unsigned long)virt_to_page(_PAGE_OFFSET(vabits_actual)); + (unsigned long)virt_to_page(kernel_ptdump_info.base_addr); } ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables"); return 0; From patchwork Tue Mar 7 14:05:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163871 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8DF37C678D4 for ; Tue, 7 
Mar 2023 14:54:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=4rskNZpqs3G2FS3yufBM48VDKQU/wAhjTKkQ35MSU/A=; b=rZ73lzx/m9P7zx Lfzdx2ndfE4DzDVofguxxv/zudrTkhsTTciTAz1TD3W48E+JfGf4gNfnVRy2Jbp5Vzzt1j7CATqzA /G5PqlcaEGSEa/AY2TCNaRVwOE24jCgrqPObdKgjGgzb3nnZtKbbZEAAzV6lmMUuVRK9bCWO1q9G8 OrMyR6j5rIxdPIDFjxeVDGUcWMqoxMQIcAwQlToexu2ZWwMyYuaK3dDHS4FytXgC7GKJwae47ogHc iWzxFmh4f9ZjktTl7CHUxctphrcfrmbuJrUPdOgYfax6eRteZXQMA040iqAU8KIa37uvMMBX+Ikr9 EceUWtnuRWALP3OUfBBA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYh7-000z3v-SW; Tue, 07 Mar 2023 14:53:39 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzn-000exW-10 for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:52 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A104DB818FF; Tue, 7 Mar 2023 14:08:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5D588C433A4; Tue, 7 Mar 2023 14:08:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198128; bh=4DGW9eGKgpeMklFY1cuDqUCfnQybL1/dGwNaCUqrdDg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SR+HtuolschcP7wXkMAy4zoFJ1gpgbHOegTuQNBBwkD9tkHCB395oYEljPGy+zhXk nSFqKTdh3BaGJljGPk7ZGsPU66YCZGqdl2Sjb/qvIazPANm7vWnlTMbsJi1zkWeXH9 S/AKRXaJBwCeBPB/ml4bklMQZODezZiyVNqgfZnWiSBe/fiEnq56N0yR3xJcmI18zC t8nm0yuEOtA8x0c96HIfKzlIH/wbKm7G7Zy66l829WZgNARaYG5Mstj2dGYKTCb6w0 1dk6gPwiHoXJ9U6EBkyDIKepzzZU4CKsr+4wZH6oBHlo5lSewbh6ubfuWyFvGyqd35 e1dYpfU8JHUjg== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 54/60] arm64: ptdump: Deal with translation levels folded at runtime Date: Tue, 7 Mar 2023 15:05:16 +0100 Message-Id: <20230307140522.2311461-55-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2878; i=ardb@kernel.org; h=from:subject; bh=4DGW9eGKgpeMklFY1cuDqUCfnQybL1/dGwNaCUqrdDg=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdpe9BgHJGzYns1KV9rzumrspbnteh7vdYvPeYd27MX zVB2dUdpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCIRnQz/9K/GbE2PvVD11szR LuVW3Q6P98qlV5puRU1YZ3jh9G2XlYwMqzW0fjq1TpD3bnNZMm/vvhXrVnH9V/5ZtDmLKaBr5dK XvAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060851_437015_31A3A984 X-CRM114-Status: GOOD ( 20.92 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently, the ptdump code deals with folded PMD or PUD levels at build time, by omitting those levels when invoking note_page. IOW, note_page() is never invoked with level == 1 if P4Ds are folded in the build configuration. With the introduction of LPA2 support, we will defer some of these folding decisions to runtime, so let's take care of this by overriding the 'level' argument when this condition triggers. Substituting the PUD or PMD strings for "PGD" when the level in question is folded at build time is no longer necessary, and so the conditional expressions can be simplified. This also makes the indirection of the 'name' field unnecessary, so change that into a char[] array, and make the whole thing __ro_after_init. Note that the mm_p?d_folded() functions currently ignore their mm pointer arguments, but let's wire them up correctly anyway. Signed-off-by: Ard Biesheuvel --- arch/arm64/mm/ptdump.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c index 8aee5d25f3d8cbe6..0e0ad6a5a12e6f04 100644 --- a/arch/arm64/mm/ptdump.c +++ b/arch/arm64/mm/ptdump.c @@ -82,6 +82,7 @@ struct pg_state { struct ptdump_state ptdump; struct seq_file *seq; const struct addr_marker *marker; + const struct mm_struct *mm; unsigned long start_address; int level; u64 current_prot; @@ -178,12 +179,12 @@ static const struct prot_bits pte_bits[] = { struct pg_level { const struct prot_bits *bits; - const char *name; - size_t num; + char name[4]; + int num; u64 mask; }; -static struct pg_level pg_level[] = { +static struct pg_level pg_level[] __ro_after_init = { { /* pgd */ .name = "PGD", .bits = pte_bits, @@ -193,11 +194,11 @@ static struct pg_level pg_level[] = { .bits = pte_bits, .num = ARRAY_SIZE(pte_bits), }, { /* pud */ - .name = (CONFIG_PGTABLE_LEVELS > 3) ? "PUD" : "PGD", + .name = "PUD", .bits = pte_bits, .num = ARRAY_SIZE(pte_bits), }, { /* pmd */ - .name = (CONFIG_PGTABLE_LEVELS > 2) ? 
"PMD" : "PGD", + .name = "PMD", .bits = pte_bits, .num = ARRAY_SIZE(pte_bits), }, { /* pte */ @@ -261,6 +262,11 @@ static void note_page(struct ptdump_state *pt_st, unsigned long addr, int level, static const char units[] = "KMGTPE"; u64 prot = 0; + /* check if the current level has been folded dynamically */ + if ((level == 1 && mm_p4d_folded(st->mm)) || + (level == 2 && mm_pud_folded(st->mm))) + level = 0; + if (level >= 0) prot = val & pg_level[level].mask; @@ -322,6 +328,7 @@ void ptdump_walk(struct seq_file *s, struct ptdump_info *info) st = (struct pg_state){ .seq = s, .marker = info->markers, + .mm = info->mm, .level = -1, .ptdump = { .note_page = note_page, From patchwork Tue Mar 7 14:05:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163872 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5D0C8C678D4 for ; Tue, 7 Mar 2023 14:55:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Gn5tAeJq/j1ekiKeMTWs063lGCv4BmJ46C4iTLpUilw=; b=Pyld2W4v75uPsJ xsaKiynYUo7cj1xYMCAC0ZKgnTF3MJWsal9RmZWn4CuSc0Qw0kcth0dYVwWWOxCgAz+2hppNQZL0S /JZJ94dMqXts3BcJx8KWEwxFXVvdSo1/x9SBBv35ObOdmopC0aYL1ZVOYJK1gexBz2yfSHl6nggS8 CbntZWhH8kb6uUbfENYq2cBKdG30z/u/csJQj5HGI74PRS0KOAa1qRdCwl6qevr10t7JMkToeiyhS GM3CbGlBABYmKYC3VbYrIpM5wlLaj+1OCXCTRDY4sr55HdwyfEQe2WuF/AbYG0p5K9Y4jiNnwUR0h utZcy5nLVKyn4FsAb4kA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYhh-000zK9-5U; Tue, 07 Mar 2023 14:54:14 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzn-000eyJ-PF for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:54 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E60256145F; Tue, 7 Mar 2023 14:08:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BF498C433EF; Tue, 7 Mar 2023 14:08:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198130; bh=LnIMWXn/NUQhEHogfVRPC6N8nPTaYTu5SzPxylVFFFA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dxt7vxDMrMeQJsTf2yCYbSQzPG4oUdjdzgCKdK8cy5uKjVjUBociHes0crp2TZ85Q 6ytey9H+NGpX2hcoXUItOOkym5riAqUnlaswFY16dhlC1EkretpkYelmEVj+MDsCMD L7P7JdIY7GR9Ep9zRDjiTybMUodqoRtw5XzVyBX23bHExKL/xkTcETbJwn7PCEHOvB OIcWAOB7MIsigEhku29GgdbFcg7dyBeKsHGwDNpr4Bg3K/q6CCH5QZfEHYYJdrR2ul zmUcsDKn3Nd33/ywDSxvs/Zf8X51lfhfiCD9Et5r5TwWMLX50BgOUHteDczk9uFe21 gIHG3unpgBDfQ== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard 
Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 55/60] arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels Date: Tue, 7 Mar 2023 15:05:17 +0100 Message-Id: <20230307140522.2311461-56-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1031; i=ardb@kernel.org; h=from:subject; bh=LnIMWXn/NUQhEHogfVRPC6N8nPTaYTu5SzPxylVFFFA=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdZbKb65Y0U7+zrwseKHzQe7TBcakxty1f+VSJoMU8E hURwaIdpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCLJ+xkZTv7LehPcuoNHgeOb k2HvaRvudrOfKhv0Wju4Hl0R7H9Sz/C/dAHbM9H+BxqrE6eHLjn74nHoO73ctltzrEI3V5yYMfU bMwA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060851_912252_613228BC X-CRM114-Status: GOOD ( 14.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to provide the starting point for a table walk. This is fine for LVA, as the number of translation levels is the same regardless of whether LVA is enabled. However, with LPA2, this will no longer be the case, so let's derive the number of levels from the number of VA bits directly. Signed-off-by: Ard Biesheuvel --- arch/arm64/kvm/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index d64be7b5f6692e8b..4e7c0f9a9c286c09 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -663,7 +663,7 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr) .pgd = (kvm_pteref_t)kvm->mm->pgd, .ia_bits = vabits_actual, .start_level = (KVM_PGTABLE_MAX_LEVELS - - CONFIG_PGTABLE_LEVELS), + ARM64_HW_PGTABLE_LEVELS(pgt.ia_bits)), .mm_ops = &kvm_user_mm_ops, }; kvm_pte_t pte = 0; /* Keep GCC quiet... 
*/ From patchwork Tue Mar 7 14:05:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163873 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AB2B4C678D4 for ; Tue, 7 Mar 2023 14:56:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WDrLFt2w0mGEwYR4iwFhMB9LNRALpUZtiIne+oTpScQ=; b=ecZgNan+6E7cgK HprnxrEcV79aTO9x7z0XQKEyJlGmpW+mcAkuEZswRCuwVObX2jrm3YtX3Pb7uWglwPl1wXqXCAc4C gHzWNgNqzL5m2kzBlEMv75upIMglN4EDQSgH1B5RC9X8EipwGZBqNZUztMkFBaHDNYAI2h38TcnDq /P2vv4iYU4lu0yffy78cjMuBy6uT/m/4ysyDPgIlxvDtIitxX37SxzGEgadfIOFtMRgIe0/4TsKxh VLlhBbeEf4D5ryVUSlRPHRr7h4sPEJPx1ofQzd5t5+737XufkZSHQ7f7XADeyZiKIk0VKWkYac+9w d87dDSZGD7qHLFWs2EXQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYid-000zjA-SA; Tue, 07 Mar 2023 14:55:12 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzr-000f0C-Sb for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:58 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B513F61461; Tue, 7 Mar 2023 14:08:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2D269C433A0; Tue, 7 Mar 2023 14:08:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198133; bh=ltQ/YiIke0+sXL8Rr299HB1L+76Qb5rdyScY2DfhQis=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Pwi6ZGc9uiOfW12N3hmOy3TkZ8WYrqP3d8EKMbB9r2KXKqStCbxO2kLGZ0A/d0zBr evN4HcZWcm3wQTfmNwnCH1/MHbcIybNdlFhkV215BuaWgA1IpFfzN/nW90XNkpx7pg MmIinHRXNQYROIkLCoFQedYkVBkfQS45hSh/z1jZQs4z3o0YFBRcPs77Dh9LbycgED PwIOuWDzVCZ5ImqkyI+CDY99tO/vvG5Ki4RuCBJcHmjwPm790UOKOcX8f0PGPWnXE8 ZJ2p0NIzi7BId+OocRZLwVgZXuildH0SZh8EbbOcYfupwAnVUzLo3kQrWlaCMaPo3m mjNRGxwuGpWVA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 56/60] arm64: kvm: Limit HYP VA and host S2 range to 48 bits when LPA2 is in effect Date: Tue, 7 Mar 2023 15:05:18 +0100 Message-Id: <20230307140522.2311461-57-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3442; i=ardb@kernel.org; h=from:subject; bh=ltQ/YiIke0+sXL8Rr299HB1L+76Qb5rdyScY2DfhQis=; 
b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdZfq5Vavm1mX9fnaAz6wwr7m94rztkk+GvIUPVSS/L P6ZsTS7o5SFQYyDQVZMkUVg9t93O09PlKp1niULM4eVCWQIAxenAExkgwvDf/cHaRVvReZwdZV/ idMQWbzx7BPuoqJyB6d4XZ3/nKqdhxgZbq/nfN5xTMI5IzjfouNBe2KPi1SRomNZa9tuneXqMd1 8AA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060856_114146_24B09DE8 X-CRM114-Status: GOOD ( 18.12 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The KVM code needs more work to support 5 level paging with LPA2, so for the time being, limit KVM to 48 bit addressing on 4k and 16k pagesize configurations. This can be reverted once the LPA2 support for KVM is merged. Signed-off-by: Ard Biesheuvel --- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 2 ++ arch/arm64/kvm/mmu.c | 5 ++++- arch/arm64/kvm/va_layout.c | 9 +++++---- 3 files changed, 11 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 552653fa18be34b2..e00b87ed4a8400f6 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -128,6 +128,8 @@ static void prepare_host_vtcr(void) /* The host stage 2 is id-mapped, so use parange for T0SZ */ parange = kvm_get_parange(id_aa64mmfr0_el1_sys_val); phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange); + if (IS_ENABLED(CONFIG_ARM64_LPA2) && phys_shift > 48) + phys_shift = 48; // not implemented yet host_mmu.arch.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val, id_aa64mmfr1_el1_sys_val, phys_shift); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 4e7c0f9a9c286c09..2ad9e6f1e101e52d 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -661,7 +661,8 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr) { struct kvm_pgtable pgt = { .pgd = (kvm_pteref_t)kvm->mm->pgd, - .ia_bits = vabits_actual, + .ia_bits = IS_ENABLED(CONFIG_ARM64_LPA2) ? 48 + : vabits_actual, .start_level = (KVM_PGTABLE_MAX_LEVELS - ARM64_HW_PGTABLE_LEVELS(pgt.ia_bits)), .mm_ops = &kvm_user_mm_ops, @@ -1703,6 +1704,8 @@ int __init kvm_mmu_init(u32 *hyp_va_bits) idmap_bits = 48; kernel_bits = vabits_actual; *hyp_va_bits = max(idmap_bits, kernel_bits); + if (IS_ENABLED(CONFIG_ARM64_LPA2)) + *hyp_va_bits = 48; // LPA2 is not yet supported in KVM kvm_debug("Using %u-bit virtual addresses at EL2\n", *hyp_va_bits); kvm_debug("IDMAP page: %lx\n", hyp_idmap_start); diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c index 341b67e2f2514e55..ac87d0c39c38f7d9 100644 --- a/arch/arm64/kvm/va_layout.c +++ b/arch/arm64/kvm/va_layout.c @@ -59,12 +59,13 @@ static void init_hyp_physvirt_offset(void) */ __init void kvm_compute_layout(void) { + u64 vabits = IS_ENABLED(CONFIG_ARM64_LPA2) ? 48 : vabits_actual; // not yet phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start); u64 hyp_va_msb; /* Where is my RAM region? 
*/ - hyp_va_msb = idmap_addr & BIT(vabits_actual - 1); - hyp_va_msb ^= BIT(vabits_actual - 1); + hyp_va_msb = idmap_addr & BIT(vabits - 1); + hyp_va_msb ^= BIT(vabits - 1); tag_lsb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^ (u64)(high_memory - 1)); @@ -72,10 +73,10 @@ __init void kvm_compute_layout(void) va_mask = GENMASK_ULL(tag_lsb - 1, 0); tag_val = hyp_va_msb; - if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && tag_lsb != (vabits_actual - 1) && + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && tag_lsb != (vabits - 1) && !kaslr_disabled_cmdline()) { /* We have some free bits to insert a random tag. */ - tag_val |= get_random_long() & GENMASK_ULL(vabits_actual - 2, tag_lsb); + tag_val |= get_random_long() & GENMASK_ULL(vabits - 2, tag_lsb); } tag_val >>= tag_lsb; From patchwork Tue Mar 7 14:05:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163874 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AAD28C678D4 for ; Tue, 7 Mar 2023 14:57:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=r6tdKqaN5Fpxf3xlJ6cZaajKtuB3uMi+xBF57MW2tXM=; b=A0kq0KtJwlobFL f6xKXp7R8arxOqfgVm8M4k9JAwISAhe5jwGnQtt7x7BcXI4HBaowyRO/HNvisTmnvWgYAUA+DwD32 Hy5zzOOXsIVNpbkwxZdSg9e2dXq2yCcYGkf8yKDZtbgUUkDhBxN2pcB9c256maXUhO0nH8fVET/F/ U82LJhUzQUImRcLRVAqSQ46OaAhAK1a4UgPGPKtX7Zkyybt4viKbqlN4BhK1P56uGmSI37lnr/v+O Zzct0OWPothPkzOjVV6EV6eGLE0nzm9XxevZwKzhM0Sjz87QtR+KOFslKTOjDObRf15+I0RwlOo2Z CvJNezckXH2HIuaAbKzg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYjC-001005-CG; Tue, 07 Mar 2023 14:55:46 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzs-000f0b-Ch for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:08:59 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B926D61460; Tue, 7 Mar 2023 14:08:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90D80C4331F; Tue, 7 Mar 2023 14:08:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198135; bh=YIW4J/8Ja06Oc4Jt2bgT8fgcTe+XjUtudi32T81ok3g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=m0DJw20FkK7dxCUGjAykqaaKXLPuSoiEyjrQYvx+ChLW+pl57DQg42bh69fCi4pcg H6qqtWN1x6uMObJ8ymnBiUSa4WsnEK1FYolGSHa/22V3Mc6InVr33kHacQ3vgp1A9u 8BW5ilCifaEec1GClCX2LMr60u3cjeziWbD8DeqQ3oizhlQ4By9aeCr0yBdiW/4K5G e7+y/HkcynYocGxZ7Ay+67JfPcuC0GPInGgT9QEUitCRxgii3cWMUrJw1KCmlT6WU8 SpU0MKJx+G2liWyEafch4Rojm2bFlWuKY+IA9krCNt654sRuNmipGU3C7ZnQk8D3wA lQtiEXGAfA2lg== From: 
Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 57/60] arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs Date: Tue, 7 Mar 2023 15:05:19 +0100 Message-Id: <20230307140522.2311461-58-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=4995; i=ardb@kernel.org; h=from:subject; bh=YIW4J/8Ja06Oc4Jt2bgT8fgcTe+XjUtudi32T81ok3g=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdZVbj7ZnTH1/9fX5yZ9Ry+9k7v6m/Kpwq6rraO81hY rpEu8ryjlIWBjEOBlkxRRaB2X/f7Tw9UarWeZYszBxWJpAhDFycAjARtQyG/2l5q02yyiT3PXgf UJtY4p0cI8sVVpu/4oqnYIrAma37+hn+VzpWbjg+5en7wms6Ogw3JnLfevJkkm/XFJUUvSm6jgr VLAA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060856_565223_A15ACC40 X-CRM114-Status: GOOD ( 17.95 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Update Kconfig to permit 4k and 16k granule configurations to be built with 52-bit virtual addressing, now that all the prerequisites are in place. While at it, update the feature description so it matches on the appropriate feature bits depending on the page size. For simplicity, let's just keep ARM64_HAS_VA52 as the feature name. Note that LPA2 based 52-bit virtual addressing requires 52-bit physical addressing support to be enabled as well, as programming TCR.TxSZ to values below 16 is not allowed unless TCR.DS is set, which is what activates the 52-bit physical addressing support. While supporting the converse (52-bit physical addressing without 52-bit virtual addressing) would be possible in principle, let's keep things simple, by only allowing these features to be enabled at the same time. 
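To illustrate the TxSZ/DS constraint described above (a sketch only, not code
from this patch; the helper names are made up for illustration): TxSZ is simply
64 minus the number of VA bits, so a 52-bit VA space needs TxSZ == 12, which
the architecture only accepts when TCR_ELx.DS is set, i.e. when the 52-bit
physical addressing handling is enabled as well.

    #include <linux/types.h>

    /* Illustrative sketch only -- not kernel code from this series. */
    static inline unsigned int tcr_txsz(unsigned int va_bits)
    {
            return 64 - va_bits;            /* 52-bit VA -> TxSZ == 12 */
    }

    static inline bool txsz_requires_ds(unsigned int txsz)
    {
            /* TxSZ values below 16 are only valid when TCR_ELx.DS == 1 */
            return txsz < 16;
    }
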
Signed-off-by: Ard Biesheuvel --- arch/arm64/Kconfig | 17 ++++++++------- arch/arm64/kernel/cpufeature.c | 22 ++++++++++++++++---- 2 files changed, 28 insertions(+), 11 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 52aac583823863e4..938fe1d090a5bb4e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -351,7 +351,9 @@ config PGTABLE_LEVELS default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39 default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47 + default 4 if ARM64_16K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48 + default 5 if ARM64_4K_PAGES && ARM64_VA_BITS_52 config ARCH_SUPPORTS_UPROBES def_bool y @@ -365,13 +367,13 @@ config BROKEN_GAS_INST config KASAN_SHADOW_OFFSET hex depends on KASAN_GENERIC || KASAN_SW_TAGS - default 0xdfff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS - default 0xdfffc00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS + default 0xdfff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && !KASAN_SW_TAGS + default 0xdfffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && !KASAN_SW_TAGS default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS - default 0xefff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS - default 0xefffc00000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS + default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS + default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS @@ -1220,7 +1222,7 @@ config ARM64_VA_BITS_48 config ARM64_VA_BITS_52 bool "52-bit" - depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN) + depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN help Enable 52-bit virtual addressing for userspace when explicitly requested via a hint to mmap(). 
The kernel will also use 52-bit @@ -1267,10 +1269,11 @@ choice config ARM64_PA_BITS_48 bool "48-bit" + depends on ARM64_64K_PAGES || !ARM64_VA_BITS_52 config ARM64_PA_BITS_52 - bool "52-bit (ARMv8.2)" - depends on ARM64_64K_PAGES + bool "52-bit" + depends on ARM64_64K_PAGES || ARM64_VA_BITS_52 depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN help Enable support for a 52-bit physical address space, introduced as diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 62f6d397f27b07c0..4487745e9b660c5f 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2763,15 +2763,29 @@ static const struct arm64_cpu_capabilities arm64_features[] = { }, #ifdef CONFIG_ARM64_VA_BITS_52 { - .desc = "52-bit Virtual Addressing (LVA)", .capability = ARM64_HAS_VA52, .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, - .sys_reg = SYS_ID_AA64MMFR2_EL1, - .sign = FTR_UNSIGNED, + .matches = has_cpuid_feature, .field_width = 4, +#ifdef CONFIG_ARM64_64K_PAGES + .desc = "52-bit Virtual Addressing (LVA)", + .sign = FTR_SIGNED, + .sys_reg = SYS_ID_AA64MMFR2_EL1, .field_pos = ID_AA64MMFR2_EL1_VARange_SHIFT, - .matches = has_cpuid_feature, .min_field_value = ID_AA64MMFR2_EL1_VARange_52, +#else + .desc = "52-bit Virtual Addressing (LPA2)", + .sys_reg = SYS_ID_AA64MMFR0_EL1, +#ifdef CONFIG_ARM64_4K_PAGES + .sign = FTR_SIGNED, + .field_pos = ID_AA64MMFR0_EL1_TGRAN4_SHIFT, + .min_field_value = ID_AA64MMFR0_EL1_TGRAN4_52_BIT, +#else + .sign = FTR_UNSIGNED, + .field_pos = ID_AA64MMFR0_EL1_TGRAN16_SHIFT, + .min_field_value = ID_AA64MMFR0_EL1_TGRAN16_52_BIT, +#endif +#endif }, #endif {}, From patchwork Tue Mar 7 14:05:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163875 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2849CC6FD1E for ; Tue, 7 Mar 2023 14:58:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=gM5vDfnV2k1GBk/yAmPxOMVTHwD7XPzNSu8ulscG4Sc=; b=Y3PnQxYTGRhY1V J/Yxbo1oK3EB3kHcxnHV5mKKmy8k/kpxZ1oUKNB7aBVQlAu5/JqhwKlaovFjB7OBOcMC6ggkE7ilQ 1pofYsWsCadZWpYo5VYTEl9v81B6QIJ3/LlZLawkZPxkz6/nZkEYK2AOcZRlHEB5xcHhVbwD+S4yz 0bVfWFd9zwU2Jt0YJYMZXiWgwxELihbVbv1Q6IkchQOeYvmTIOqUWzj8A8xhI1P8Ap/2Yhi3nmH2G JuUQ14TWDscpAQrghw3VEa47vzWfZIg/Ub21dT9Z6rSyQ3ponMoA7ivJ+vG1/EtNr5zhzEAa/Jx1g Nl0dNoQC4iRrHCmAZaOw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYkI-0010XK-AO; Tue, 07 Mar 2023 14:56:55 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzw-000f2B-1V for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:09:01 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id CF35761467; Tue, 7 Mar 2023 14:08:58 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F3870C4339E; Tue, 7 Mar 2023 14:08:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198138; bh=roqr6Q+/LaCDBS0wdH1j7SIax2HpGRyxNIxGqchdc0Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qcBZHpheeGEgqeu9h7lOW0O1leSo000RLluqbtqGMnIUTfeN6SjVzECb8VaYa1qUr E4AJDNnjARA2jTPupgk7HgTZAUjR4A7kHjhiXh9rr4CzbplQL1zoG630uqMAfWXfp3 DXVsaXwInlb8fSY2s+GqUJGLqLZOArUZszoWi1eGtrWwmaPNePOY0JYvLHP7dDLmmh DuGGGtyGCDdZKfzp3qntSv5+FekBZjl5dYrE5ySgQYPcu31bq4m8GezeNgiQWE3xel gI1CZzri3Mps0bJ5OQWsJqvJ3mzqqv8/nlFfl3aLJpR55sAeBmW5cbo1Cocxw42Pxn DjWV7klsej/wg== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 58/60] arm64: defconfig: Enable LPA2 support Date: Tue, 7 Mar 2023 15:05:20 +0100 Message-Id: <20230307140522.2311461-59-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=946; i=ardb@kernel.org; h=from:subject; bh=roqr6Q+/LaCDBS0wdH1j7SIax2HpGRyxNIxGqchdc0Q=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdZc6Zv6fn1lid3HjxBbdmUDLjI+FWzvr7XU4VcXp7h br6dHs7SlkYxDgYZMUUWQRm/3238/REqVrnWbIwc1iZQIYwcHEKwESMnRn+B+Z+nRnu07U7IqI+ VaorSVB8bdJWtc/zrZhlHl7zOPzZk+GfrlqJzYVdz4X87Fffu1prcvHT2Q3hlVOtrx1+HxOnVa3 BDgA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060900_173329_401C0251 X-CRM114-Status: GOOD ( 13.44 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org We typically enable support in defconfig for all architectural features for which we can detect at runtime if the hardware actually supports them. Now that we have implemented support for LPA2 based 52-bit virtual addressing in a way that should not impact 48-bit operation on non-LPA2 CPU, we can do the same, and enable 52-bit virtual addressing by default. 
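For reference, the runtime check that gates this on actual hardware support
roughly boils down to the following for 4k pages. This is a sketch only, using
the existing <asm/cpufeature.h>/<asm/sysreg.h> helpers; the real code goes
through the ARM64_HAS_VA52 capability and has_cpuid_feature() as updated in
the previous patch, and the wrapper name below is made up.

    #include <asm/cpufeature.h>
    #include <asm/sysreg.h>

    /* Sketch only: the wrapper name is illustrative, not part of the series. */
    static bool this_cpu_supports_lpa2_4k(void)
    {
            u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
            int tgran4 = cpuid_feature_extract_signed_field(mmfr0,
                                    ID_AA64MMFR0_EL1_TGRAN4_SHIFT);

            /* TGRAN4 is a signed field: a negative value means 4k granule
             * support is absent altogether */
            return tgran4 >= ID_AA64MMFR0_EL1_TGRAN4_52_BIT;
    }
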
Signed-off-by: Ard Biesheuvel --- arch/arm64/configs/defconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig index 7790ee42c68a88d7..728b29549e058e94 100644 --- a/arch/arm64/configs/defconfig +++ b/arch/arm64/configs/defconfig @@ -69,7 +69,7 @@ CONFIG_ARCH_VEXPRESS=y CONFIG_ARCH_VISCONTI=y CONFIG_ARCH_XGENE=y CONFIG_ARCH_ZYNQMP=y -CONFIG_ARM64_VA_BITS_48=y +CONFIG_ARM64_VA_BITS_52=y CONFIG_SCHED_MC=y CONFIG_SCHED_SMT=y CONFIG_NUMA=y From patchwork Tue Mar 7 14:05:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163884 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 00C12C678D5 for ; Tue, 7 Mar 2023 14:58:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=GNZea61NYiFsMA4RqIwL+DBP4aFbDAAkUrCxmZwB/GU=; b=D4VTFJXTcINgsC IB7Gw1LsCg1ulYHiZXd6LHc/8aERu8V8C57fL1BU4qbhN91YJpTaIWDccIa9K4E+PB9I/J2wtBZXR 7SW2SeNdrAiSFraMDMIB1f2hcpjngxuKJaIX/bdqdW4UcIGMf52zDmS80KgSXHCnwcOUob4Ho3S6o 9mTtQUSsqnVZkmrICepEvONFLi/qf76zJxPErMy1DvcoLhSqfEwkV5M6MKghiv1e7HTx9UWHLBL+w tJB+nchnTn0GJftvgiiOvj6v6jI3lVGpxh1uMvifdHajPuoTMCzn1jwZz0P9sXW3XQsyQ1feJE4J9 DKw9YW3hQqQjW2bVgdpw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYlA-0010vV-Nk; Tue, 07 Mar 2023 14:57:50 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZXzx-000f3J-Nd for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:09:03 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E69B561453; Tue, 7 Mar 2023 14:09:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 61930C433A7; Tue, 7 Mar 2023 14:08:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198140; bh=xuULnbPfbdttGoFOmg13d9wHbRDgqXccUG7zc0Xws00=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ngCYk9bgPrkWxLrO/DwctYGOfpl/Q05CONePs/WxVd+HIvSDWTC3xFQ8ZcQFBAKT5 12xahHMOJ0OLIEJ8t3h0RxasgdkSgkaURegHIqbtzUAl6dfZ9dQ6qewtnRxvM2mshP oqp3nvPUxlXRcdhABrMVAn9DvTvUyEzpY4VDQytcYkKnKIU1wOMoeRENPGT3vRiuTC m1QkKryos2bl6wPHy0pQlORMSgRLAOJeYHCF9BQhCIFG9o/NW+zAKNJ2/87CqLuDYG WJmOEE1+W0Hh54MaowkecB17GiGnFHzVaFmWXKZCrzZOa9igvtuZYIEg5qW2nb9lxf QlHH6kjJU5OvA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 59/60] mm: add arch 
hook to validate mmap() prot flags Date: Tue, 7 Mar 2023 15:05:21 +0100 Message-Id: <20230307140522.2311461-60-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1745; i=ardb@kernel.org; h=from:subject; bh=xuULnbPfbdttGoFOmg13d9wHbRDgqXccUG7zc0Xws00=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdZeGyK2eU0oUmac6b8DmnMD7ezH3OIq2tu4KmLvTJX f1uWr50RykLgxgHg6yYIovA7L/vdp6eKFXrPEsWZg4rE8gQBi5OAZgIlyLD/6qfL5S3a57yOmL8 6ewT7hB97bz6Y07vEsNkHuWGzFNhDGL4pydebBxtee/HmRUXhZfp9srMFLqW+lNEvCjoVXJR7J9 ePgA= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060901_895160_5506BD51 X-CRM114-Status: GOOD ( 14.91 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add a hook to permit architectures to perform validation on the prot flags passed to mmap(), like arch_validate_prot() does for mprotect(). This will be used by arm64 to reject PROT_WRITE+PROT_EXEC mappings on configurations that run with WXN enabled. Reviewed-by: Kees Cook Signed-off-by: Ard Biesheuvel --- include/linux/mman.h | 15 +++++++++++++++ mm/mmap.c | 3 +++ 2 files changed, 18 insertions(+) diff --git a/include/linux/mman.h b/include/linux/mman.h index cee1e4b566d80095..25ffe6948fa21ff8 100644 --- a/include/linux/mman.h +++ b/include/linux/mman.h @@ -120,6 +120,21 @@ static inline bool arch_validate_flags(unsigned long flags) #define arch_validate_flags arch_validate_flags #endif +#ifndef arch_validate_mmap_prot +/* + * This is called from mmap(), which ignores unknown prot bits so the default + * is to accept anything. + * + * Returns true if the prot flags are valid + */ +static inline bool arch_validate_mmap_prot(unsigned long prot, + unsigned long addr) +{ + return true; +} +#define arch_validate_mmap_prot arch_validate_mmap_prot +#endif + /* * Optimisation macro. It is equivalent to: * (x & bit1) ? 
bit2 : 0 diff --git a/mm/mmap.c b/mm/mmap.c index 740b54be3ed4140f..02ccd45df0194796 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1214,6 +1214,9 @@ unsigned long do_mmap(struct file *file, unsigned long addr, if (!(file && path_noexec(&file->f_path))) prot |= PROT_EXEC; + if (!arch_validate_mmap_prot(prot, addr)) + return -EACCES; + /* force arch specific MAP_FIXED handling in get_unmapped_area */ if (flags & MAP_FIXED_NOREPLACE) flags |= MAP_FIXED; From patchwork Tue Mar 7 14:05:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 13163885 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 40990C678D4 for ; Tue, 7 Mar 2023 15:00:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=8N6BbhabDrikF1fXmtPsRDGZflVrwEH4MX/3S4Oi3W0=; b=OT+budBuUeQwjv e8ljncJaglO2Lfi5vFiuMP6qJ1T7PAK7bRpI+kHSn05AECaSpRwPqtqMbelN+Yv0LKK1Zf15xNbAW AIISEziRH96F0FKftkxL0shgVM/ORaqlXK0BwAyOa0eY2NJNgpiIQLFyNH5nFXQKURxuffxy+Z4t+ bTORbm8RmIzKe+sT3H2iRulOqtnjQ6Q6yZlwCDoz1gbuc6sXaVmL2rOA9eo+0Wo09rdm0AWBvF2CF RNQBCGAJasPdsHymQ6Mm4U7lMFqPKx9NOLdDD0dT3GvY/MJ8ihvLVySsdh+m5xgAGXkJiXPMgStuD MCG9L2dgTkavjhP+mmCQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZYm6-0011Jg-9T; Tue, 07 Mar 2023 14:58:47 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pZY00-000f4W-4b for linux-arm-kernel@lists.infradead.org; Tue, 07 Mar 2023 14:09:08 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 55A2761457; Tue, 7 Mar 2023 14:09:03 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C45E5C433EF; Tue, 7 Mar 2023 14:09:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1678198142; bh=vKxI+oe9fpy8fzPj2uKjjtIoIWIZL1hoi7ZQRmkNSz4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nxErkHaE8GoD+KVEsftvLI1sxxyJJUqZLJQkHZ6G2hiDP7UewjhE6CsXhPXRq9xJF nEfef0dpS6h9QtrM4Oltr0hnxbzjYo5yirmThX4R6Y3bVTgf7eeQDIL2NKqsq/NB+5 ySyhpAWd2sl+Ni7byLtC6WhXimL9Xxxsl8RU+FQ8MMDjPkMJje0yC4C6t9Bw5OF4Hb 6cGb3RvP4XHpXMTsv/hcgJyFoqxaeZ+zOwa692Xp76D7i6sV1SYZqCnL0+5g2i2/Hl ban3r82Z4g0g6jgW0C6pLU9xSfRr8xkfwoIPxZV8coEcDAmT4fIsBNH976zEqs42gp s6Q3fr007BMkA== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Catalin Marinas , Will Deacon , Marc Zyngier , Mark Rutland , Ryan Roberts , Anshuman Khandual , Kees Cook Subject: [PATCH v3 60/60] arm64: mm: add support for WXN memory translation attribute Date: Tue, 7 Mar 2023 
15:05:22 +0100 Message-Id: <20230307140522.2311461-61-ardb@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230307140522.2311461-1-ardb@kernel.org> References: <20230307140522.2311461-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=10060; i=ardb@kernel.org; h=from:subject; bh=vKxI+oe9fpy8fzPj2uKjjtIoIWIZL1hoi7ZQRmkNSz4=; b=owGbwMvMwCFmkMcZplerG8N4Wi2JIYXdZdEjGUPpD4euRzm8rKmpLLG0u7KbXX966EEfozucs srO5nwdpSwMYhwMsmKKLAKz/77beXqiVK3zLFmYOaxMIEMYuDgFYCJf7jIydCuKPhbquqopdvd8 r/X0pCtWvhXVbSkvdzcwX+n9sd/jBCNDp4pb9KKYd4FrtSY9KVfS2BFaLNb66IL95q6c4gkRIe7 MAA== X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230307_060904_322210_AE66B115 X-CRM114-Status: GOOD ( 30.83 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The AArch64 virtual memory system supports a global WXN control, which can be enabled to make all writable mappings implicitly no-exec. This is a useful hardening feature, as it prevents mistakes in managing page table permissions from being exploited to attack the system. When enabled at EL1, the restrictions apply to both EL1 and EL0. EL1 is completely under our control, and has been cleaned up to allow WXN to be enabled from boot onwards. EL0 is not under our control, but given that widely deployed security features such as selinux or PaX already limit the ability of user space to create mappings that are writable and executable at the same time, the impact of enabling this for EL0 is expected to be limited. (For this reason, common user space libraries that have a legitimate need for manipulating executable code already carry fallbacks such as [0].) If enabled at compile time, the feature can still be disabled at boot if needed, by passing arm64.nowxn on the kernel command line. [0] https://github.com/libffi/libffi/blob/master/src/closures.c#L440 Signed-off-by: Ard Biesheuvel Reviewed-by: Kees Cook --- arch/arm64/Kconfig | 11 ++++++ arch/arm64/include/asm/cpufeature.h | 10 ++++++ arch/arm64/include/asm/mman.h | 36 ++++++++++++++++++++ arch/arm64/include/asm/mmu_context.h | 30 +++++++++++++++- arch/arm64/kernel/pi/idreg-override.c | 4 ++- arch/arm64/kernel/pi/map_kernel.c | 24 +++++++++++++ arch/arm64/mm/proc.S | 6 ++++ 7 files changed, 119 insertions(+), 2 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 938fe1d090a5bb4e..4262f3f784696d94 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1552,6 +1552,17 @@ config RODATA_FULL_DEFAULT_ENABLED This requires the linear region to be mapped down to pages, which may adversely affect performance in some cases. +config ARM64_WXN + bool "Enable WXN attribute so all writable mappings are non-exec" + help + Set the WXN bit in the SCTLR system register so that all writable + mappings are treated as if the PXN/UXN bit is set as well. + If this is set to Y, it can still be disabled at runtime by + passing 'arm64.nowxn' on the kernel command line. + + This should only be set if no software needs to be supported that + relies on being able to execute from writable mappings. 
+ config ARM64_SW_TTBR0_PAN bool "Emulate Privileged Access Never using TTBR0_EL1 switching" help diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 170e18cb2b4faf11..9a5a373a3fda7f58 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -17,6 +17,7 @@ #define ARM64_SW_FEATURE_OVERRIDE_NOKASLR 0 #define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF 4 +#define ARM64_SW_FEATURE_OVERRIDE_NOWXN 8 #ifndef __ASSEMBLY__ @@ -932,6 +933,15 @@ extern struct arm64_ftr_override id_aa64isar2_override; extern struct arm64_ftr_override arm64_sw_feature_override; +static inline bool arm64_wxn_enabled(void) +{ + if (!IS_ENABLED(CONFIG_ARM64_WXN) || + cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, + ARM64_SW_FEATURE_OVERRIDE_NOWXN)) + return false; + return true; +} + u32 get_kvm_ipa_limit(void); void dump_cpu_features(void); diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h index 5966ee4a61542edf..6d4940342ba73060 100644 --- a/arch/arm64/include/asm/mman.h +++ b/arch/arm64/include/asm/mman.h @@ -35,11 +35,40 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags) } #define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags) +static inline bool arm64_check_wx_prot(unsigned long prot, + struct task_struct *tsk) +{ + /* + * When we are running with SCTLR_ELx.WXN==1, writable mappings are + * implicitly non-executable. This means we should reject such mappings + * when user space attempts to create them using mmap() or mprotect(). + */ + if (arm64_wxn_enabled() && + ((prot & (PROT_WRITE | PROT_EXEC)) == (PROT_WRITE | PROT_EXEC))) { + /* + * User space libraries such as libffi carry elaborate + * heuristics to decide whether it is worth it to even attempt + * to create writable executable mappings, as PaX or selinux + * enabled systems will outright reject it. They will usually + * fall back to something else (e.g., two separate shared + * mmap()s of a temporary file) on failure. 
+ */ + pr_info_ratelimited( + "process %s (%d) attempted to create PROT_WRITE+PROT_EXEC mapping\n", + tsk->comm, tsk->pid); + return false; + } + return true; +} + static inline bool arch_validate_prot(unsigned long prot, unsigned long addr __always_unused) { unsigned long supported = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM; + if (!arm64_check_wx_prot(prot, current)) + return false; + if (system_supports_bti()) supported |= PROT_BTI; @@ -50,6 +79,13 @@ static inline bool arch_validate_prot(unsigned long prot, } #define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr) +static inline bool arch_validate_mmap_prot(unsigned long prot, + unsigned long addr) +{ + return arm64_check_wx_prot(prot, current); +} +#define arch_validate_mmap_prot arch_validate_mmap_prot + static inline bool arch_validate_flags(unsigned long vm_flags) { if (!system_supports_mte()) diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index bc1cef5002d60e02..910c6e009515745d 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -20,13 +20,41 @@ #include #include #include -#include #include #include #include extern bool rodata_full; +static inline int arch_dup_mmap(struct mm_struct *oldmm, + struct mm_struct *mm) +{ + return 0; +} + +static inline void arch_exit_mmap(struct mm_struct *mm) +{ +} + +static inline void arch_unmap(struct mm_struct *mm, + unsigned long start, unsigned long end) +{ +} + +static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, + bool write, bool execute, bool foreign) +{ + if (IS_ENABLED(CONFIG_ARM64_WXN) && execute && + (vma->vm_flags & (VM_WRITE | VM_EXEC)) == (VM_WRITE | VM_EXEC)) { + pr_warn_ratelimited( + "process %s (%d) attempted to execute from writable memory\n", + current->comm, current->pid); + /* disallow unless the nowxn override is set */ + return !arm64_wxn_enabled(); + } + return true; +} + static inline void contextidr_thread_switch(struct task_struct *next) { if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR)) diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c index c4ae5ffe5cb0c999..0ceec20669e50913 100644 --- a/arch/arm64/kernel/pi/idreg-override.c +++ b/arch/arm64/kernel/pi/idreg-override.c @@ -181,6 +181,7 @@ static const struct ftr_set_desc sw_features __prel64_initconst = { .fields = { FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL), FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, NULL), + FIELD("nowxn", ARM64_SW_FEATURE_OVERRIDE_NOWXN, NULL), {} }, }; @@ -214,8 +215,9 @@ static const struct { "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0" }, { "arm64.nomte", "id_aa64pfr1.mte=0" }, { "nokaslr", "arm64_sw.nokaslr=1" }, - { "rodata=off", "arm64_sw.rodataoff=1" }, + { "rodata=off", "arm64_sw.rodataoff=1 arm64_sw.nowxn=1" }, { "arm64.nolva", "id_aa64mmfr2.varange=0" }, + { "arm64.nowxn", "arm64_sw.nowxn=1" }, }; static int __init parse_hexdigit(const char *p, u64 *v) diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c index a14b7c1236b5707c..2d02fa94e48c92e1 100644 --- a/arch/arm64/kernel/pi/map_kernel.c +++ b/arch/arm64/kernel/pi/map_kernel.c @@ -133,6 +133,25 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level) idmap_cpu_replace_ttbr1(swapper_pg_dir); } +static void noinline __section(".idmap.text") disable_wxn(void) +{ + u64 sctlr = read_sysreg(sctlr_el1) & ~SCTLR_ELx_WXN; + + /* + * We cannot safely clear the WXN bit while the MMU and caches are on, + * so 
turn the MMU off, flush the TLBs and turn it on again but with + * the WXN bit cleared this time. + */ + asm(" msr sctlr_el1, %0 ;" + " isb ;" + " tlbi vmalle1 ;" + " dsb nsh ;" + " isb ;" + " msr sctlr_el1, %1 ;" + " isb ;" + :: "r"(sctlr & ~SCTLR_ELx_M), "r"(sctlr)); +} + static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr) { u64 sctlr = read_sysreg(sctlr_el1); @@ -230,6 +249,11 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) if (va_bits > VA_BITS_MIN) sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(va_bits)); + if (IS_ENABLED(CONFIG_ARM64_WXN) && + cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val, + ARM64_SW_FEATURE_OVERRIDE_NOWXN)) + disable_wxn(); + /* * The virtual KASLR displacement modulo 2MiB is decided by the * physical placement of the image, as otherwise, we might not be able diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 68f8337cd8bf6ab9..1df7a031bcb40341 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -516,6 +516,12 @@ alternative_else_nop_endif * Prepare SCTLR */ mov_q x0, INIT_SCTLR_EL1_MMU_ON +#ifdef CONFIG_ARM64_WXN + ldr_l x1, arm64_sw_feature_override + FTR_OVR_VAL_OFFSET + tst x1, #0xf << ARM64_SW_FEATURE_OVERRIDE_NOWXN + orr x1, x0, #SCTLR_ELx_WXN + csel x0, x0, x1, ne +#endif ret // return to head.S .unreq mair
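
As a usage illustration of the combined effect of the last two patches (not
part of the series itself): on a kernel built with CONFIG_ARM64_WXN=y and
booted without arm64.nowxn, a request for a writable and executable anonymous
mapping should now be rejected with EACCES by the arch_validate_mmap_prot()
hook, whereas it succeeds on other configurations. A minimal user space test,
assuming an arm64 target:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            /*
             * With WXN in effect, arm64_check_wx_prot() rejects any
             * PROT_WRITE+PROT_EXEC request, so do_mmap() returns -EACCES.
             */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    printf("W+X mmap() rejected: %s\n", strerror(errno));
            else
                    printf("W+X mmap() succeeded at %p\n", p);
            return 0;
    }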