From patchwork Tue Mar 12 12:52:17 2024
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13589950
From: Linus Walleij
Date: Tue, 12 Mar 2024 13:52:17 +0100
Subject: [PATCH v3 1/4] ARM: Add TTBCR_* definitions to pgtable-3level-hwdef.h
Message-Id: <20240312-arm32-lpae-pan-v3-1-532647afcd38@linaro.org>
References: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
In-Reply-To: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren, Kees Cook, Geert Uytterhoeven
Cc: linux-arm-kernel@lists.infradead.org, Linus Walleij, Catalin Marinas

From: Catalin Marinas

These macros will be used in a subsequent patch. At one point these were
part of the ARM32 KVM but that is no longer the case. Since these macros
are only relevant to LPAE kernel builds, they are added to
pgtable-3level-hwdef.h.

Signed-off-by: Catalin Marinas
Reviewed-by: Kees Cook
Signed-off-by: Linus Walleij
---
 arch/arm/include/asm/pgtable-3level-hwdef.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h
index 2f35b4eddaa8..19da7753a0b8 100644
--- a/arch/arm/include/asm/pgtable-3level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-3level-hwdef.h
@@ -94,4 +94,21 @@
 
 #define TTBR1_SIZE	(((PAGE_OFFSET >> 30) - 1) << 16)
 
+/*
+ * TTBCR register bits.
+ */
+#define TTBCR_EAE		(1 << 31)
+#define TTBCR_IMP		(1 << 30)
+#define TTBCR_SH1_MASK		(3 << 28)
+#define TTBCR_ORGN1_MASK	(3 << 26)
+#define TTBCR_IRGN1_MASK	(3 << 24)
+#define TTBCR_EPD1		(1 << 23)
+#define TTBCR_A1		(1 << 22)
+#define TTBCR_T1SZ_MASK		(7 << 16)
+#define TTBCR_SH0_MASK		(3 << 12)
+#define TTBCR_ORGN0_MASK	(3 << 10)
+#define TTBCR_IRGN0_MASK	(3 << 8)
+#define TTBCR_EPD0		(1 << 7)
+#define TTBCR_T0SZ_MASK		(7 << 0)
+
 #endif
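[Editorial note] For reference, a user-space style sketch (not part of the patch) of what the field masks above encode. decode_ttbcr() and its output are purely illustrative; the macro values are copied from the hunk above.

/* Illustrative only: decode a raw TTBCR value using the new field masks. */
#include <stdio.h>

#define TTBCR_EPD1		(1 << 23)
#define TTBCR_A1		(1 << 22)
#define TTBCR_T1SZ_MASK		(7 << 16)
#define TTBCR_EPD0		(1 << 7)
#define TTBCR_T0SZ_MASK		(7 << 0)

static void decode_ttbcr(unsigned int ttbcr)
{
	/* TnSZ shrinks the range covered by TTBRn: 2^(32 - T0SZ) bytes for TTBR0 */
	unsigned int t0sz = ttbcr & TTBCR_T0SZ_MASK;
	unsigned int t1sz = (ttbcr & TTBCR_T1SZ_MASK) >> 16;

	printf("TTBR0 walks %s, T0SZ=%u (2^%u bytes)\n",
	       (ttbcr & TTBCR_EPD0) ? "disabled" : "enabled", t0sz, 32 - t0sz);
	printf("TTBR1 walks %s, T1SZ=%u\n",
	       (ttbcr & TTBCR_EPD1) ? "disabled" : "enabled", t1sz);
	printf("ASID taken from %s\n", (ttbcr & TTBCR_A1) ? "TTBR1" : "TTBR0");
}

int main(void)
{
	/* The "uaccess disabled" shape used later in the series: EPD0 set, A1 set, T0SZ = 7 */
	decode_ttbcr(TTBCR_EPD0 | TTBCR_A1 | 7);
	return 0;
}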
From patchwork Tue Mar 12 12:52:18 2024
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13589951
From: Linus Walleij
Date: Tue, 12 Mar 2024 13:52:18 +0100
Subject: [PATCH v3 2/4] ARM: Move asm statements accessing TTBCR into C functions
Message-Id: <20240312-arm32-lpae-pan-v3-2-532647afcd38@linaro.org>
References: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
In-Reply-To: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren, Kees Cook, Geert Uytterhoeven
Cc: linux-arm-kernel@lists.infradead.org, Linus Walleij, Catalin Marinas

From: Catalin Marinas

This patch implements cpu_get_ttbcr() and cpu_set_ttbcr() and replaces
the corresponding asm statements.

Signed-off-by: Catalin Marinas
Reviewed-by: Kees Cook
Signed-off-by: Linus Walleij
---
ChangeLog v1->v3:
- Drop unnecessary volatile from the asm(mcr) call.
---
 arch/arm/include/asm/proc-fns.h | 12 ++++++++++++
 arch/arm/mm/mmu.c               |  7 +++----
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 280396483f5d..9b3105a2a5e0 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -178,6 +178,18 @@ extern void cpu_resume(void);
 	})
 #endif
 
+static inline unsigned int cpu_get_ttbcr(void)
+{
+	unsigned int ttbcr;
+	asm("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr));
+	return ttbcr;
+}
+
+static inline void cpu_set_ttbcr(unsigned int ttbcr)
+{
+	asm("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr));
+}
+
 #else	/*!CONFIG_MMU */
 
 #define cpu_switch_mm(pgd,mm)	{ }

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 674ed71573a8..9a780da6a4e1 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1687,9 +1687,8 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
 	 */
 	cr = get_cr();
 	set_cr(cr & ~(CR_I | CR_C));
-	asm("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr));
-	asm volatile("mcr p15, 0, %0, c2, c0, 2"
-		: : "r" (ttbcr & ~(3 << 8 | 3 << 10)));
+	ttbcr = cpu_get_ttbcr();
+	cpu_set_ttbcr(ttbcr & ~(3 << 8 | 3 << 10));
 	flush_cache_all();
 
 	/*
@@ -1701,7 +1700,7 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
 	lpae_pgtables_remap(offset, pa_pgd);
 
 	/* Re-enable the caches and cacheable TLB walks */
-	asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr));
+	cpu_set_ttbcr(ttbcr);
 	set_cr(cr);
 }
 
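[Editorial note] On the ChangeLog item: an asm statement with no output operands is implicitly volatile, so dropping the explicit qualifier from the mcr does not change code generation. Below is an illustrative sketch (not part of the patch) of the save/modify/restore pattern the new helpers express; the wrapper function is made up, and the 3 << 8 | 3 << 10 mask clears the TTBCR IRGN0/ORGN0 fields exactly as in early_paging_init() above.

/* Illustrative only: save/modify/restore TTBCR via the new helpers. */
static void example_uncached_walk_window(void)
{
	unsigned int ttbcr = cpu_get_ttbcr();	/* mrc p15, 0, <Rt>, c2, c0, 2 */

	/* Clear IRGN0/ORGN0 so TTBR0 table walks become non-cacheable */
	cpu_set_ttbcr(ttbcr & ~(3 << 8 | 3 << 10));

	/* ... remap the page tables while walks are uncached ... */

	cpu_set_ttbcr(ttbcr);			/* restore the saved value */
}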
From patchwork Tue Mar 12 12:52:19 2024
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13589949
From: Linus Walleij
Date: Tue, 12 Mar 2024 13:52:19 +0100
Subject: [PATCH v3 3/4] ARM: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN
Message-Id: <20240312-arm32-lpae-pan-v3-3-532647afcd38@linaro.org>
References: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
In-Reply-To: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren, Kees Cook, Geert Uytterhoeven
Cc: linux-arm-kernel@lists.infradead.org, Linus Walleij, Catalin Marinas

From: Catalin Marinas

This is a clean-up patch aimed at reducing the number of checks on
CONFIG_CPU_SW_DOMAIN_PAN, together with some empty lines for better
clarity once CONFIG_CPU_TTBR0_PAN is introduced.

Signed-off-by: Catalin Marinas
Reviewed-by: Kees Cook
Signed-off-by: Linus Walleij
---
ChangeLog v2->v3:
- Don't randomly change ifdef to if defined(); let's save that for the
  last patch.
--- arch/arm/include/asm/uaccess-asm.h | 16 ++++++++++++---- arch/arm/include/asm/uaccess.h | 21 +++++++++++++++------ 2 files changed, 27 insertions(+), 10 deletions(-) diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h index 65da32e1f1c1..ea42ba25920f 100644 --- a/arch/arm/include/asm/uaccess-asm.h +++ b/arch/arm/include/asm/uaccess-asm.h @@ -39,8 +39,9 @@ #endif .endm - .macro uaccess_disable, tmp, isb=1 #ifdef CONFIG_CPU_SW_DOMAIN_PAN + + .macro uaccess_disable, tmp, isb=1 /* * Whenever we re-enter userspace, the domains should always be * set appropriately. @@ -50,11 +51,9 @@ .if \isb instr_sync .endif -#endif .endm .macro uaccess_enable, tmp, isb=1 -#ifdef CONFIG_CPU_SW_DOMAIN_PAN /* * Whenever we re-enter userspace, the domains should always be * set appropriately. @@ -64,9 +63,18 @@ .if \isb instr_sync .endif -#endif .endm +#else + + .macro uaccess_disable, tmp, isb=1 + .endm + + .macro uaccess_enable, tmp, isb=1 + .endm + +#endif + #if defined(CONFIG_CPU_SW_DOMAIN_PAN) || defined(CONFIG_CPU_USE_DOMAINS) #define DACR(x...) x #else diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 9556d04387f7..2278769f1156 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -24,9 +24,10 @@ * perform such accesses (eg, via list poison values) which could then * be exploited for priviledge escalation. */ +#ifdef CONFIG_CPU_SW_DOMAIN_PAN + static __always_inline unsigned int uaccess_save_and_enable(void) { -#ifdef CONFIG_CPU_SW_DOMAIN_PAN unsigned int old_domain = get_domain(); /* Set the current domain access to permit user accesses */ @@ -34,19 +35,27 @@ static __always_inline unsigned int uaccess_save_and_enable(void) domain_val(DOMAIN_USER, DOMAIN_CLIENT)); return old_domain; -#else - return 0; -#endif } static __always_inline void uaccess_restore(unsigned int flags) { -#ifdef CONFIG_CPU_SW_DOMAIN_PAN /* Restore the user access mask */ set_domain(flags); -#endif } +#else + +static inline unsigned int uaccess_save_and_enable(void) +{ + return 0; +} + +static inline void uaccess_restore(unsigned int flags) +{ +} + +#endif + /* * These two are intentionally not defined anywhere - if the kernel * code generates any references to them, that's a bug. 
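[Editorial note] The restructuring above follows a common kernel idiom: select whole alternative definitions per configuration option, with trivial stubs on the disabled side, instead of placing #ifdef inside each body. A standalone sketch of the idiom follows; every name in it is invented for illustration and none comes from the patch.

/* Illustrative only: whole per-config definitions instead of in-body #ifdefs. */
#include <stdio.h>

#define CONFIG_FEATURE_X 1		/* set to 0 to compile the stubs instead */

static unsigned int hw_state;		/* stand-in for a CP15 register */

#if CONFIG_FEATURE_X

static inline unsigned int feature_save_and_enable(void)
{
	unsigned int old = hw_state;

	hw_state = old | 1u;
	return old;
}

static inline void feature_restore(unsigned int old)
{
	hw_state = old;
}

#else /* !CONFIG_FEATURE_X */

/* Stubs keep every caller free of #ifdefs when the feature is compiled out. */
static inline unsigned int feature_save_and_enable(void) { return 0; }
static inline void feature_restore(unsigned int old) { (void)old; }

#endif

int main(void)
{
	unsigned int saved = feature_save_and_enable();

	printf("state while enabled: %u\n", hw_state);
	feature_restore(saved);
	printf("state after restore: %u\n", hw_state);
	return 0;
}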
From patchwork Tue Mar 12 12:52:20 2024
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13589952
From: Linus Walleij
Date: Tue, 12 Mar 2024 13:52:20 +0100
Subject: [PATCH v3 4/4] ARM: Implement PAN for LPAE by TTBR0 page table walks disablement
Message-Id: <20240312-arm32-lpae-pan-v3-4-532647afcd38@linaro.org>
References: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
In-Reply-To: <20240312-arm32-lpae-pan-v3-0-532647afcd38@linaro.org>
To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren, Kees Cook, Geert Uytterhoeven
Cc: linux-arm-kernel@lists.infradead.org, Linus Walleij, Catalin Marinas

From: Catalin Marinas

With LPAE enabled, privileged no-access cannot be enforced using CPU
domains as that feature is not available. This patch implements PAN by
disabling TTBR0 page table walks while in kernel mode.

The ARM architecture allows page table walks to be split between TTBR0
and TTBR1. With LPAE enabled, the split is defined by a combination of
the TTBCR T0SZ and T1SZ bits. Currently, an LPAE-enabled kernel uses
TTBR0 for user addresses and TTBR1 for kernel addresses with the
VMSPLIT_2G and VMSPLIT_3G configurations. The main advantage of the 3:1
split is that TTBR1 is reduced to 2 levels, so potentially faster TLB
refill (though usually the first level entries are already cached in
the TLB).

The PAN support on LPAE-enabled kernels uses TTBR0 when running in user
space or in kernel space during user access routines (TTBCR T0SZ and
T1SZ are both 0). When user accesses are disabled in kernel mode, TTBR0
page table walks are disabled by setting TTBCR.EPD0. TTBR1 is used for
kernel accesses (including loadable modules; anything covered by
swapper_pg_dir) by reducing the TTBR0 range to its minimum
(TTBCR.T0SZ = 7, i.e. 2^(32-7) = 32MB). To avoid user accesses
potentially hitting stale TLB entries, the ASID is switched to 0
(reserved) by setting TTBCR.A1 and using the ASID value in TTBR1. The
difference from a non-PAN kernel is that with the 3:1 memory split,
TTBR1 always uses 3 levels of page tables.

As part of the change we use preprocessor #elif defined() clauses, so
balance these by converting the relevant preceding #ifdef clauses to
#if defined().

Signed-off-by: Catalin Marinas
Reviewed-by: Kees Cook
Signed-off-by: Linus Walleij
---
ChangeLog v2->v3:
- Drop leftover uaccess_disabled() stub.
- Consistently change ifdef to if defined() in this patch instead of
  the previous patch.
- Convert a missing ifdef over to if defined() for consistency.
ChangeLog v1->v2:
- Make the SVC mode TTBCR a separate field in struct svc_pt_regs as
  requested by Russell.
- Push the MM page fault permission check into a local function and avoid the too generic uaccess_disabled() as requested by Ard. --- arch/arm/Kconfig | 22 +++++++++++++-- arch/arm/include/asm/assembler.h | 1 + arch/arm/include/asm/pgtable-3level-hwdef.h | 9 ++++++ arch/arm/include/asm/ptrace.h | 1 + arch/arm/include/asm/uaccess-asm.h | 44 ++++++++++++++++++++++++++++- arch/arm/include/asm/uaccess.h | 26 ++++++++++++++++- arch/arm/kernel/asm-offsets.c | 1 + arch/arm/kernel/suspend.c | 8 ++++++ arch/arm/lib/csumpartialcopyuser.S | 20 ++++++++++++- arch/arm/mm/fault.c | 29 +++++++++++++++++++ 10 files changed, 155 insertions(+), 6 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 0af6709570d1..3d97a15a3e2d 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1231,9 +1231,9 @@ config HIGHPTE consumed by page tables. Setting this option will allow user-space 2nd level page tables to reside in high memory. -config CPU_SW_DOMAIN_PAN - bool "Enable use of CPU domains to implement privileged no-access" - depends on MMU && !ARM_LPAE +config ARM_PAN + bool "Enable privileged no-access" + depends on MMU default y help Increase kernel security by ensuring that normal kernel accesses @@ -1242,10 +1242,26 @@ config CPU_SW_DOMAIN_PAN by ensuring that magic values (such as LIST_POISON) will always fault when dereferenced. + The implementation uses CPU domains when !CONFIG_ARM_LPAE and + disabling of TTBR0 page table walks with CONFIG_ARM_LPAE. + +config CPU_SW_DOMAIN_PAN + def_bool y + depends on ARM_PAN && !ARM_LPAE + help + Enable use of CPU domains to implement privileged no-access. + CPUs with low-vector mappings use a best-efforts implementation. Their lower 1MB needs to remain accessible for the vectors, but the remainder of userspace will become appropriately inaccessible. +config CPU_TTBR0_PAN + def_bool y + depends on ARM_PAN && ARM_LPAE + help + Enable privileged no-access by disabling TTBR0 page table walks when + running in kernel mode. + config HW_PERF_EVENTS def_bool y depends on ARM_PMU diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index aebe2c8f6a68..d33c1e24e00b 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -21,6 +21,7 @@ #include #include #include +#include #include #include diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h index 19da7753a0b8..323ad811732e 100644 --- a/arch/arm/include/asm/pgtable-3level-hwdef.h +++ b/arch/arm/include/asm/pgtable-3level-hwdef.h @@ -74,6 +74,7 @@ #define PHYS_MASK_SHIFT (40) #define PHYS_MASK ((1ULL << PHYS_MASK_SHIFT) - 1) +#ifndef CONFIG_CPU_TTBR0_PAN /* * TTBR0/TTBR1 split (PAGE_OFFSET): * 0x40000000: T0SZ = 2, T1SZ = 0 (not used) @@ -93,6 +94,14 @@ #endif #define TTBR1_SIZE (((PAGE_OFFSET >> 30) - 1) << 16) +#else +/* + * With CONFIG_CPU_TTBR0_PAN enabled, TTBR1 is only used during uaccess + * disabled regions when TTBR0 is disabled. + */ +#define TTBR1_OFFSET 0 /* pointing to swapper_pg_dir */ +#define TTBR1_SIZE 0 /* TTBR1 size controlled via TTBCR.T0SZ */ +#endif /* * TTBCR register bits. 
diff --git a/arch/arm/include/asm/ptrace.h b/arch/arm/include/asm/ptrace.h index 7f44e88d1f25..f064252498c7 100644 --- a/arch/arm/include/asm/ptrace.h +++ b/arch/arm/include/asm/ptrace.h @@ -19,6 +19,7 @@ struct pt_regs { struct svc_pt_regs { struct pt_regs regs; u32 dacr; + u32 ttbcr; }; #define to_svc_pt_regs(r) container_of(r, struct svc_pt_regs, regs) diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h index ea42ba25920f..4bccd895d954 100644 --- a/arch/arm/include/asm/uaccess-asm.h +++ b/arch/arm/include/asm/uaccess-asm.h @@ -39,7 +39,7 @@ #endif .endm -#ifdef CONFIG_CPU_SW_DOMAIN_PAN +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) .macro uaccess_disable, tmp, isb=1 /* @@ -65,6 +65,37 @@ .endif .endm +#elif defined(CONFIG_CPU_TTBR0_PAN) + + .macro uaccess_disable, tmp, isb=1 + /* + * Disable TTBR0 page table walks (EDP0 = 1), use the reserved ASID + * from TTBR1 (A1 = 1) and enable TTBR1 page table walks for kernel + * addresses by reducing TTBR0 range to 32MB (T0SZ = 7). + */ + mrc p15, 0, \tmp, c2, c0, 2 @ read TTBCR + orr \tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK + orr \tmp, \tmp, #TTBCR_A1 + mcr p15, 0, \tmp, c2, c0, 2 @ write TTBCR + .if \isb + instr_sync + .endif + .endm + + .macro uaccess_enable, tmp, isb=1 + /* + * Enable TTBR0 page table walks (T0SZ = 0, EDP0 = 0) and ASID from + * TTBR0 (A1 = 0). + */ + mrc p15, 0, \tmp, c2, c0, 2 @ read TTBCR + bic \tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK + bic \tmp, \tmp, #TTBCR_A1 + mcr p15, 0, \tmp, c2, c0, 2 @ write TTBCR + .if \isb + instr_sync + .endif + .endm + #else .macro uaccess_disable, tmp, isb=1 @@ -79,6 +110,12 @@ #define DACR(x...) x #else #define DACR(x...) +#endif + +#ifdef CONFIG_CPU_TTBR0_PAN +#define PAN(x...) x +#else +#define PAN(x...) #endif /* @@ -94,6 +131,8 @@ .macro uaccess_entry, tsk, tmp0, tmp1, tmp2, disable DACR( mrc p15, 0, \tmp0, c3, c0, 0) DACR( str \tmp0, [sp, #SVC_DACR]) + PAN( mrc p15, 0, \tmp0, c2, c0, 2) + PAN( str \tmp0, [sp, #SVC_TTBCR]) .if \disable && IS_ENABLED(CONFIG_CPU_SW_DOMAIN_PAN) /* kernel=client, user=no access */ mov \tmp2, #DACR_UACCESS_DISABLE @@ -112,8 +151,11 @@ .macro uaccess_exit, tsk, tmp0, tmp1 DACR( ldr \tmp0, [sp, #SVC_DACR]) DACR( mcr p15, 0, \tmp0, c3, c0, 0) + PAN( ldr \tmp0, [sp, #SVC_TTBCR]) + PAN( mcr p15, 0, \tmp0, c2, c0, 2) .endm #undef DACR +#undef PAN #endif /* __ASM_UACCESS_ASM_H__ */ diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 2278769f1156..25d21d7d6e3e 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -14,6 +14,8 @@ #include #include #include +#include +#include #include #include @@ -24,7 +26,7 @@ * perform such accesses (eg, via list poison values) which could then * be exploited for priviledge escalation. */ -#ifdef CONFIG_CPU_SW_DOMAIN_PAN +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) static __always_inline unsigned int uaccess_save_and_enable(void) { @@ -43,6 +45,28 @@ static __always_inline void uaccess_restore(unsigned int flags) set_domain(flags); } +#elif defined(CONFIG_CPU_TTBR0_PAN) + +static inline unsigned int uaccess_save_and_enable(void) +{ + unsigned int old_ttbcr = cpu_get_ttbcr(); + + /* + * Enable TTBR0 page table walks (T0SZ = 0, EDP0 = 0) and ASID from + * TTBR0 (A1 = 0). 
+ */ + cpu_set_ttbcr(old_ttbcr & ~(TTBCR_A1 | TTBCR_EPD0 | TTBCR_T0SZ_MASK)); + isb(); + + return old_ttbcr; +} + +static inline void uaccess_restore(unsigned int flags) +{ + cpu_set_ttbcr(flags); + isb(); +} + #else static inline unsigned int uaccess_save_and_enable(void) diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c index 219cbc7e5d13..dd2567ba987f 100644 --- a/arch/arm/kernel/asm-offsets.c +++ b/arch/arm/kernel/asm-offsets.c @@ -83,6 +83,7 @@ int main(void) DEFINE(S_OLD_R0, offsetof(struct pt_regs, ARM_ORIG_r0)); DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs)); DEFINE(SVC_DACR, offsetof(struct svc_pt_regs, dacr)); + DEFINE(SVC_TTBCR, offsetof(struct svc_pt_regs, ttbcr)); DEFINE(SVC_REGS_SIZE, sizeof(struct svc_pt_regs)); BLANK(); DEFINE(SIGFRAME_RC3_OFFSET, offsetof(struct sigframe, retcode[3])); diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c index c3ec3861dd07..58a6441b58c4 100644 --- a/arch/arm/kernel/suspend.c +++ b/arch/arm/kernel/suspend.c @@ -12,6 +12,7 @@ #include #include #include +#include extern int __cpu_suspend(unsigned long, int (*)(unsigned long), u32 cpuid); extern void cpu_resume_mmu(void); @@ -26,6 +27,13 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) if (!idmap_pgd) return -EINVAL; + /* + * Needed for the MMU disabling/enabing code to be able to run from + * TTBR0 addresses. + */ + if (IS_ENABLED(CONFIG_CPU_TTBR0_PAN)) + uaccess_save_and_enable(); + /* * Function graph tracer state gets incosistent when the kernel * calls functions that never return (aka suspend finishers) hence diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S index 6928781e6bee..c289bde04743 100644 --- a/arch/arm/lib/csumpartialcopyuser.S +++ b/arch/arm/lib/csumpartialcopyuser.S @@ -13,7 +13,8 @@ .text -#ifdef CONFIG_CPU_SW_DOMAIN_PAN +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) + .macro save_regs mrc p15, 0, ip, c3, c0, 0 stmfd sp!, {r1, r2, r4 - r8, ip, lr} @@ -25,7 +26,23 @@ mcr p15, 0, ip, c3, c0, 0 ret lr .endm + +#elif defined(CONFIG_CPU_TTBR0_PAN) + + .macro save_regs + mrc p15, 0, ip, c2, c0, 2 @ read TTBCR + stmfd sp!, {r1, r2, r4 - r8, ip, lr} + uaccess_enable ip + .endm + + .macro load_regs + ldmfd sp!, {r1, r2, r4 - r8, ip, lr} + mcr p15, 0, ip, c2, c0, 2 @ restore TTBCR + ret lr + .endm + #else + .macro save_regs stmfd sp!, {r1, r2, r4 - r8, lr} .endm @@ -33,6 +50,7 @@ .macro load_regs ldmfd sp!, {r1, r2, r4 - r8, pc} .endm + #endif .macro load1b, reg1 diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index e96fb40b9cc3..7d262a819ad1 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -235,6 +235,27 @@ static inline bool is_permission_fault(unsigned int fsr) return false; } +#ifdef CONFIG_CPU_TTBR0_PAN +static inline bool ttbr0_usermode_access_allowed(struct pt_regs *regs) +{ + struct svc_pt_regs *svcregs; + + /* If we are in user mode: permission granted */ + if (user_mode(regs)) + return true; + + /* uaccess state saved above pt_regs on SVC exception entry */ + svcregs = to_svc_pt_regs(regs); + + return !(svcregs->ttbcr & TTBCR_EPD0); +} +#else +static inline bool ttbr0_usermode_access_allowed(struct pt_regs *regs) +{ + return true; +} +#endif + static int __kprobes do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) { @@ -278,6 +299,14 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); + /* + * Privileged access aborts with CONFIG_CPU_TTBR0_PAN enabled are + * routed via the 
translation fault mechanism. Check whether uaccess + * is disabled while in kernel mode. + */ + if (!ttbr0_usermode_access_allowed(regs)) + goto no_context; + if (!(flags & FAULT_FLAG_USER)) goto lock_mmap;
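[Editorial note] To tie the series together, an illustrative sketch (not part of the series) of how a kernel-mode user access is bracketed by the TTBR0-PAN window added above. uaccess_save_and_enable() and uaccess_restore() are the CONFIG_CPU_TTBR0_PAN helpers from this patch; example_read_user_word() and its direct call to raw_copy_from_user() are made up for illustration, since in the real kernel get_user()/copy_from_user() already open and close this window internally.

/*
 * Illustrative only: bracketing a user access with the TTBR0-PAN window.
 * The helpers come from this patch; the wrapper below is invented.
 */
static long example_read_user_word(unsigned int __user *uptr, unsigned int *val)
{
	unsigned int ua_flags;
	unsigned long left;

	ua_flags = uaccess_save_and_enable();	/* clear EPD0/T0SZ/A1: TTBR0 walks allowed */
	left = raw_copy_from_user(val, uptr, sizeof(*val));
	uaccess_restore(ua_flags);		/* EPD0 set again: user pointers fault */

	return left ? -EFAULT : 0;
}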