From patchwork Thu Dec 5 15:02:31 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13895513
Date: Thu, 5 Dec 2024 16:02:31 +0100
Subject: [PATCH v2 1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Will Deacon,
 Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook,
 Quentin Perret, stable@vger.kernel.org
Message-ID: <20241205150229.3510177-9-ardb+git@google.com>
In-Reply-To: <20241205150229.3510177-8-ardb+git@google.com>

From: Ard Biesheuvel

Currently, LPA2 kernel support implies support for up to 52 bits of
physical addressing, and this is reflected in global definitions such as
PHYS_MASK_SHIFT and MAX_PHYSMEM_BITS.

This is potentially problematic, given that LPA2 hardware support is
modeled as a CPU feature which can be overridden, and with LPA2 hardware
support turned off, attempting to map physical regions with address bits
[51:48] set (which may exist on LPA2 capable systems booting with
arm64.nolva) will result in corrupted mappings with a truncated output
address and bogus shareability attributes.

This means that the accepted physical address range in the mapping
routines should be at most 48 bits wide when LPA2 support is configured
but not enabled at runtime.

Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs")
Cc:
Reviewed-by: Anshuman Khandual
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgtable-hwdef.h | 6 ------
 arch/arm64/include/asm/pgtable-prot.h  | 7 +++++++
 arch/arm64/include/asm/sparsemem.h     | 4 +++-
 3 files changed, 10 insertions(+), 7 deletions(-)
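A stand-alone C sketch of the behaviour described above (illustrative only,
not part of the patch; phys_mask() and the lpa2_enabled flag are stand-ins
for the kernel's PHYS_MASK and lpa2_is_enabled()): with LPA2 disabled there
is nowhere in the descriptor for PA bits [51:48], so such addresses must be
rejected rather than silently truncated.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's PHYS_MASK: 52 bits only when LPA2 is enabled. */
static uint64_t phys_mask(bool lpa2_enabled)
{
	return (UINT64_C(1) << (lpa2_enabled ? 52 : 48)) - 1;
}

int main(void)
{
	/* A physical address with bits [51:48] set, as may exist on LPA2-capable systems. */
	uint64_t pa = UINT64_C(0x000f0000abcd0000);

	printf("fits with LPA2 enabled:  %d\n", (pa & ~phys_mask(true)) == 0);
	printf("fits with LPA2 disabled: %d\n", (pa & ~phys_mask(false)) == 0);
	return 0;
}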
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index c78a988cca93..a9136cc551cc 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -222,12 +222,6 @@
  */
 #define S1_TABLE_AP		(_AT(pmdval_t, 3) << 61)

-/*
- * Highest possible physical address supported.
- */
-#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
-#define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
-
 #define TTBR_CNP_BIT		(UL(1) << 0)

 /*
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 9f9cf13bbd95..a95f1f77bb39 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -81,6 +81,7 @@ extern unsigned long prot_ns_shared;
 #define lpa2_is_enabled()	false
 #define PTE_MAYBE_SHARED	PTE_SHARED
 #define PMD_MAYBE_SHARED	PMD_SECT_S
+#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
 #else
 static inline bool __pure lpa2_is_enabled(void)
 {
@@ -89,8 +90,14 @@ static inline bool __pure lpa2_is_enabled(void)
 #define PTE_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PTE_SHARED)
 #define PMD_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PMD_SECT_S)
+#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
 #endif

+/*
+ * Highest possible physical address supported.
+ */
+#define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
+
 /*
  * If we have userspace only BTI we don't want to mark kernel pages
  * guarded even if the system does support BTI.
diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8a8acc220371..035e0ca74e88 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -5,7 +5,9 @@
 #ifndef __ASM_SPARSEMEM_H
 #define __ASM_SPARSEMEM_H

-#define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
+#include
+
+#define MAX_PHYSMEM_BITS	PHYS_MASK_SHIFT

 /*
  * Section size must be at least 512MB for 64K base

From patchwork Thu Dec 5 15:02:32 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13895514
Date: Thu, 5 Dec 2024 16:02:32 +0100
Subject: [PATCH v2 2/6] arm64/mm: Override PARange for !LPA2 and use it consistently
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Will Deacon,
 Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook,
 Quentin Perret, stable@vger.kernel.org
Message-ID: <20241205150229.3510177-10-ardb+git@google.com>
In-Reply-To: <20241205150229.3510177-8-ardb+git@google.com>

From: Ard Biesheuvel

When FEAT_LPA{,2} are not implemented, the ID_AA64MMFR0_EL1.PARange and
TCR.IPS values corresponding with 52-bit physical addressing are
reserved.

Setting the TCR.IPS field to 0b110 (52-bit physical addressing) has side
effects, such as how the TTBRn_ELx.BADDR fields are interpreted, and so
it is important that disabling FEAT_LPA2 (by overriding the
ID_AA64MMFR0.TGran fields) also presents a PARange field consistent with
that.

So limit the field to 48 bits unless LPA2 is enabled, and update
existing references to use the override consistently.

Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs")
Cc:
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/assembler.h    | 5 +++++
 arch/arm64/kernel/cpufeature.c        | 2 +-
 arch/arm64/kernel/pi/idreg-override.c | 9 +++++++++
 arch/arm64/kernel/pi/map_kernel.c     | 6 ++++++
 arch/arm64/mm/init.c                  | 7 ++++++-
 5 files changed, 27 insertions(+), 2 deletions(-)
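For illustration only, a user-space sketch of the clamping that the
override implements (not the kernel code; the function name and lpa2 flag
are inventions of this sketch). It uses the architectural PARange
encodings 0b0101 (48-bit) and 0b0110 (52-bit); the clamped value is what
ends up in TCR_ELx.IPS, matching the assembler.h and map_kernel.c hunks
below.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PARANGE_48	0x5	/* ID_AA64MMFR0_EL1.PARange encoding for 48 bits */
#define PARANGE_52	0x6	/* 52 bits; reserved unless FEAT_LPA/LPA2 is present */

/* Cap the PARange field (mmfr0 bits [3:0]) at 48 bits unless LPA2 is enabled. */
static uint64_t clamp_parange(uint64_t mmfr0, bool lpa2_enabled)
{
	uint64_t parange = mmfr0 & 0xf;
	uint64_t limit = lpa2_enabled ? PARANGE_52 : PARANGE_48;

	return parange > limit ? limit : parange;
}

int main(void)
{
	/* An LPA2-capable CPU reports PARange == 0b0110 (52 bits). */
	printf("LPA2 on:  IPS encoding %#llx\n", (unsigned long long)clamp_parange(0x6, true));
	printf("LPA2 off: IPS encoding %#llx\n", (unsigned long long)clamp_parange(0x6, false));
	return 0;
}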
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 3d8d534a7a77..ad63457a05c5 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -343,6 +343,11 @@ alternative_cb_end
 	// Narrow PARange to fit the PS field in TCR_ELx
 	ubfx	\tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
 	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
+#ifdef CONFIG_ARM64_LPA2
+alternative_if_not ARM64_HAS_VA52
+	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
+alternative_else_nop_endif
+#endif
 	cmp	\tmp0, \tmp1
 	csel	\tmp0, \tmp1, \tmp0, hi
 	bfi	\tcr, \tmp0, \pos, #3
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6ce71f444ed8..f8cb8a6ab98a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3478,7 +3478,7 @@ static void verify_hyp_capabilities(void)
 		return;

 	safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
-	mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);

 	/* Verify VMID bits */
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index 22159251eb3a..c6b185b885f7 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -83,6 +83,15 @@ static bool __init mmfr2_varange_filter(u64 val)
 		id_aa64mmfr0_override.val |=
 			(ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
 		id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
+
+		/*
+		 * Override PARange to 48 bits - the override will just be
+		 * ignored if the actual PARange is smaller, but this is
+		 * unlikely to be the case for LPA2 capable silicon.
+		 */
+		id_aa64mmfr0_override.val |=
+			ID_AA64MMFR0_EL1_PARANGE_48 << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
+		id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
 	}
 #endif
 	return true;
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index f374a3e5a5fe..e57b043f324b 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -136,6 +136,12 @@ static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
 {
 	u64 sctlr = read_sysreg(sctlr_el1);
 	u64 tcr = read_sysreg(tcr_el1) | TCR_DS;
+	u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
+	u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
+					ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+
+	tcr &= ~TCR_IPS_MASK;
+	tcr |= parange << TCR_IPS_SHIFT;

 	asm("	msr	sctlr_el1, %0	;"
 	    "	isb			;"
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d21f67d67cf5..2b2289d55eaa 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -280,7 +280,12 @@ void __init arm64_memblock_init(void)

 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+
+		/*
+		 * Use the sanitised version of id_aa64mmfr0_el1 so that linear
+		 * map randomization can be enabled by shrinking the IPA space.
+		 */
+		u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 		int parange = cpuid_feature_extract_unsigned_field(
 					mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
 		s64 range = linear_region_size -

From patchwork Thu Dec 5 15:02:33 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13895515
Date: Thu, 5 Dec 2024 16:02:33 +0100
Subject: [PATCH v2 3/6] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Will Deacon,
 Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook,
 Quentin Perret, stable@vger.kernel.org
Message-ID: <20241205150229.3510177-11-ardb+git@google.com>
In-Reply-To: <20241205150229.3510177-8-ardb+git@google.com>

From: Ard Biesheuvel

When the host stage1 is configured for LPA2, the value currently being
programmed into TCR_EL2.T0SZ may be invalid unless LPA2 is configured
at HYP as well. This means kvm_lpa2_is_enabled() is not the right
condition to test when setting TCR_EL2.DS, as it will return false if
LPA2 is only available for stage 1 but not for stage 2.

Similarly, programming TCR_EL2.PS based on a limited IPA range due to
lack of stage2 LPA2 support could potentially result in problems.

So use lpa2_is_enabled() instead, and set the PS field according to the
host's IPS, which is capped at 48 bits if LPA2 support is absent or
disabled. Whether or not we can make meaningful use of such a
configuration is a different question.

Cc:
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kvm/arm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
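To make the PS/DS derivation concrete, a stand-alone sketch in plain C
(field positions follow the architected TCR_EL1 and non-VHE TCR_EL2
layouts; the function is illustrative only, not kernel API): PS is copied
from the host's TCR_EL1.IPS rather than re-derived from ID_AA64MMFR0_EL1,
and DS follows the stage-1 LPA2 state.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TCR_EL1_IPS_SHIFT	32			/* TCR_EL1.IPS is bits [34:32] */
#define TCR_EL1_IPS_MASK	(UINT64_C(0x7) << TCR_EL1_IPS_SHIFT)
#define TCR_EL2_PS_SHIFT	16			/* TCR_EL2.PS is bits [18:16] */
#define TCR_EL2_PS_MASK		(UINT64_C(0x7) << TCR_EL2_PS_SHIFT)
#define TCR_EL2_DS		(UINT64_C(1) << 32)	/* non-VHE TCR_EL2.DS */

static uint64_t hyp_tcr_ps_ds(uint64_t tcr_el1, uint64_t tcr_el2, bool lpa2_enabled)
{
	uint64_t ips = (tcr_el1 & TCR_EL1_IPS_MASK) >> TCR_EL1_IPS_SHIFT;

	tcr_el2 &= ~TCR_EL2_PS_MASK;
	tcr_el2 |= ips << TCR_EL2_PS_SHIFT;	/* host IPS, capped at 48 bits without LPA2 */
	if (lpa2_enabled)
		tcr_el2 |= TCR_EL2_DS;
	return tcr_el2;
}

int main(void)
{
	/* Host reports IPS = 0b101 (48 bits) and LPA2 disabled. */
	uint64_t tcr_el2 = hyp_tcr_ps_ds(UINT64_C(5) << TCR_EL1_IPS_SHIFT, 0, false);

	printf("TCR_EL2 PS/DS bits: %#llx\n", (unsigned long long)tcr_el2);
	return 0;
}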
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..7b2735ad32e9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1990,8 +1990,7 @@ static int kvm_init_vector_slots(void)
 static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 {
 	struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
-	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-	unsigned long tcr;
+	unsigned long tcr, ips;

 	/*
 	 * Calculate the raw per-cpu offset without a translation from the
@@ -2005,6 +2004,7 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 	params->mair_el2 = read_sysreg(mair_el1);

 	tcr = read_sysreg(tcr_el1);
+	ips = FIELD_GET(TCR_IPS_MASK, tcr);
 	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
 		tcr |= TCR_EPD1_MASK;
 	} else {
@@ -2014,8 +2014,8 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 	tcr &= ~TCR_T0SZ_MASK;
 	tcr |= TCR_T0SZ(hyp_va_bits);
 	tcr &= ~TCR_EL2_PS_MASK;
-	tcr |= FIELD_PREP(TCR_EL2_PS_MASK, kvm_get_parange(mmfr0));
-	if (kvm_lpa2_is_enabled())
+	tcr |= FIELD_PREP(TCR_EL2_PS_MASK, ips);
+	if (lpa2_is_enabled())
 		tcr |= TCR_EL2_DS;
 	params->tcr_el2 = tcr;

From patchwork Thu Dec 5 15:02:34 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13895516
Date: Thu, 5 Dec 2024 16:02:34 +0100
Subject: [PATCH v2 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Will Deacon,
 Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook,
 Quentin Perret
Message-ID: <20241205150229.3510177-12-ardb+git@google.com>
In-Reply-To: <20241205150229.3510177-8-ardb+git@google.com>
From: Ard Biesheuvel

The pKVM stage2 mapping code relies on an invalid physical address to
signal to the internal API that only the owner_id fields of descriptors
should be updated, and these are stored in the high bits of invalid
descriptors covering memory that has been donated to protected guests,
and is therefore unmapped from the host stage-2 page tables.

Given that these invalid PAs are never stored into the descriptors, it
is better to rely on an explicit flag, to clarify the API and to avoid
confusion regarding whether or not the output address of a descriptor
can ever be invalid to begin with (which is not the case with LPA2).

That removes a dependency on the logic that reasons about the maximum PA
range, which differs on LPA2 capable CPUs based on whether LPA2 is
enabled or not, and will be further clarified in subsequent patches.

Cc: Quentin Perret
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kvm/hyp/pgtable.c | 33 ++++++--------------
 1 file changed, 10 insertions(+), 23 deletions(-)
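As a simplified illustration of the API change (types, names and bit
placements below are stand-ins, not the kvm_pgtable internals): the walker
data carries an explicit owner_update flag, and descriptor construction
branches on that flag instead of on whether the PA happens to be out of
range.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct map_data {
	uint64_t phys;		/* only meaningful when owner_update is false */
	uint8_t  owner_id;
	bool	 owner_update;	/* replaces the old invalid-PA sentinel */
};

static uint64_t make_desc(const struct map_data *d, uint64_t attr)
{
	if (d->owner_update)
		/* invalid descriptor (bit 0 clear), owner id parked in unused high bits (placement simplified) */
		return (uint64_t)d->owner_id << 56;

	/* valid leaf: output address bits plus attributes and the valid bit */
	return (d->phys & UINT64_C(0x0000fffffffff000)) | attr | 0x1;
}

int main(void)
{
	struct map_data owner_only = { .owner_id = 3, .owner_update = true };
	struct map_data mapping = { .phys = UINT64_C(0x80000000) };

	printf("owner-only desc: %#llx\n", (unsigned long long)make_desc(&owner_only, 0));
	printf("valid leaf desc: %#llx\n", (unsigned long long)make_desc(&mapping, 0x700));
	return 0;
}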
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..0569e1d97c38 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -35,14 +35,6 @@ static bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx)
 	return unlikely(ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO);
 }

-static bool kvm_phys_is_valid(u64 phys)
-{
-	u64 parange_max = kvm_get_parange_max();
-	u8 shift = id_aa64mmfr0_parange_to_phys_shift(parange_max);
-
-	return phys < BIT(shift);
-}
-
 static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx, u64 phys)
 {
 	u64 granule = kvm_granule_size(ctx->level);
@@ -53,7 +45,7 @@ static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx,
 	if (granule > (ctx->end - ctx->addr))
 		return false;

-	if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
+	if (!IS_ALIGNED(phys, granule))
 		return false;

 	return IS_ALIGNED(ctx->addr, granule);
@@ -587,6 +579,9 @@ struct stage2_map_data {

 	/* Force mappings to page granularity */
 	bool				force_pte;
+
+	/* Walk should update owner_id only */
+	bool				owner_update;
 };

 u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
@@ -885,18 +880,7 @@ static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
 {
 	u64 phys = data->phys;

-	/*
-	 * Stage-2 walks to update ownership data are communicated to the map
-	 * walker using an invalid PA. Avoid offsetting an already invalid PA,
-	 * which could overflow and make the address valid again.
-	 */
-	if (!kvm_phys_is_valid(phys))
-		return phys;
-
-	/*
-	 * Otherwise, work out the correct PA based on how far the walk has
-	 * gotten.
-	 */
+	/* Work out the correct PA based on how far the walk has gotten */
 	return phys + (ctx->addr - ctx->start);
 }

@@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
 	if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
 		return false;

+	if (data->owner_update && ctx->level == KVM_PGTABLE_LAST_LEVEL)
+		return true;
+
 	return kvm_block_mapping_supported(ctx, phys);
 }

@@ -923,7 +910,7 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	if (!stage2_leaf_mapping_allowed(ctx, data))
 		return -E2BIG;

-	if (kvm_phys_is_valid(phys))
+	if (!data->owner_update)
 		new = kvm_init_valid_leaf_pte(phys, data->attr, ctx->level);
 	else
 		new = kvm_init_invalid_leaf_owner(data->owner_id);
@@ -1085,11 +1072,11 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 {
 	int ret;
 	struct stage2_map_data map_data = {
-		.phys		= KVM_PHYS_INVALID,
 		.mmu		= pgt->mmu,
 		.memcache	= mc,
 		.owner_id	= owner_id,
 		.force_pte	= true,
+		.owner_update	= true,
 	};
 	struct kvm_pgtable_walker walker = {
 		.cb		= stage2_map_walker,

From patchwork Thu Dec 5 15:02:35 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13895524
Date: Thu, 5 Dec 2024 16:02:35 +0100
Subject: [PATCH v2 5/6] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Will Deacon,
 Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook,
 Quentin Perret
Message-ID: <20241205150229.3510177-13-ardb+git@google.com>
In-Reply-To: <20241205150229.3510177-8-ardb+git@google.com>

From: Ard Biesheuvel

There are a couple of instances of Kconfig constraints where PAN must be
enabled too if TTBR0 sw PAN is enabled, primarily to avoid dealing with
the modified TTBR0_EL1 sysreg format that is used when 52-bit physical
addressing and/or CnP are enabled (support for either implies support
for hardware PAN as well, which will supersede PAN emulation if both are
available).

Let's simplify this, and always enable ARM64_PAN when enabling TTBR0 sw
PAN. This decouples the PAN configuration from the VA size selection,
permitting us to simplify the latter in subsequent patches. (Note that
PAN and TTBR0 sw PAN can still be disabled after this patch, but not
independently.)

To avoid a convoluted circular Kconfig dependency involving KCSAN, make
ARM64_MTE select ARM64_PAN too, instead of depending on it.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 100570a048c5..c1ca21adddc1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1379,7 +1379,6 @@ config ARM64_VA_BITS_48

 config ARM64_VA_BITS_52
 	bool "52-bit"
-	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
 	  requested via a hint to mmap(). The kernel will also use 52-bit
@@ -1431,7 +1430,6 @@ config ARM64_PA_BITS_48
 config ARM64_PA_BITS_52
 	bool "52-bit"
 	depends on ARM64_64K_PAGES || ARM64_VA_BITS_52
-	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
 	help
 	  Enable support for a 52-bit physical address space, introduced as
 	  part of the ARMv8.2-LPA extension.
@@ -1681,6 +1679,7 @@ config RODATA_FULL_DEFAULT_ENABLED
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	depends on !KCSAN
+	select ARM64_PAN
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved
@@ -1937,7 +1936,6 @@ config ARM64_RAS_EXTN
 config ARM64_CNP
 	bool "Enable support for Common Not Private (CNP) translations"
 	default y
-	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
 	help
 	  Common Not Private (CNP) allows translation table entries to
 	  be shared between different PEs in the same inner shareable
@@ -2132,7 +2130,7 @@ config ARM64_MTE
 	depends on AS_HAS_ARMV8_5
 	depends on AS_HAS_LSE_ATOMICS
 	# Required for tag checking in the uaccess routines
-	depends on ARM64_PAN
+	select ARM64_PAN
 	select ARCH_HAS_SUBPAGE_FAULTS
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select ARCH_USES_PG_ARCH_2

From patchwork Thu Dec 5 15:02:36 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13895525
Date: Thu, 5 Dec 2024 16:02:36 +0100
Subject: [PATCH v2 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Will Deacon,
 Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook,
 Quentin Perret
Message-ID: <20241205150229.3510177-14-ardb+git@google.com>
In-Reply-To: <20241205150229.3510177-8-ardb+git@google.com>

From: Ard Biesheuvel

Currently, the maximum supported physical address space can be
configured as either 48 bits or 52 bits. The only remaining difference
between these in practice is that the former omits the masking and
shifting required to construct TTBR and PTE values, which carry bits #48
and higher disjoint from the rest of the physical address.

The overhead of performing these additional calculations is negligible,
and so there is little reason to retain support for two different
configurations, and we can simply support whatever the hardware
supports.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig                     | 31 +-------------------
 arch/arm64/include/asm/assembler.h     | 13 ++-------
 arch/arm64/include/asm/cpufeature.h    |  3 +-
 arch/arm64/include/asm/kvm_pgtable.h   |  3 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  6 +---
 arch/arm64/include/asm/pgtable-prot.h  |  4 +--
 arch/arm64/include/asm/pgtable.h       | 11 +------
 arch/arm64/include/asm/sysreg.h        |  6 ----
 arch/arm64/mm/pgd.c                    |  9 +++---
 arch/arm64/mm/proc.S                   |  2 --
 scripts/gdb/linux/constants.py.in      |  1 -
 tools/arch/arm64/include/asm/sysreg.h  |  6 ----
 12 files changed, 14 insertions(+), 81 deletions(-)
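To see what the masking and shifting amounts to in practice, a stand-alone
C version of the now-unconditional PTE packing, using the LPA2 (4K/16K)
layout from the pgtable-hwdef.h hunk below, where PA bits [51:50] live in
descriptor bits [9:8]. This is plain C for illustration, not the kernel
helpers; on hardware without 52-bit PAs the high bits are zero, so the
extra OR/AND have no effect.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define PTE_ADDR_LOW		(((UINT64_C(1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
#define PTE_ADDR_HIGH		(UINT64_C(0x3) << 8)
#define PTE_ADDR_HIGH_SHIFT	42
#define PHYS_TO_PTE_ADDR_MASK	(((UINT64_C(1) << 42) - 1) << 8)	/* bits [49:8] */

static uint64_t phys_to_pte_val(uint64_t phys)
{
	/* Fold PA bits [51:50] down into bits [9:8] of the descriptor. */
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK;
}

static uint64_t pte_to_phys(uint64_t pte)
{
	/* Inverse: move descriptor bits [9:8] back up to PA bits [51:50]. */
	return (pte & PTE_ADDR_LOW) | ((pte & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}

int main(void)
{
	uint64_t pa = UINT64_C(0x000c000012345000);	/* 52-bit PA, page aligned */
	uint64_t pte = phys_to_pte_val(pa);

	printf("pte address bits %#llx -> phys %#llx\n",
	       (unsigned long long)pte, (unsigned long long)pte_to_phys(pte));
	return 0;
}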
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c1ca21adddc1..7ebd0ba32a32 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1416,38 +1416,9 @@ config ARM64_VA_BITS
 	default 48 if ARM64_VA_BITS_48
 	default 52 if ARM64_VA_BITS_52

-choice
-	prompt "Physical address space size"
-	default ARM64_PA_BITS_48
-	help
-	  Choose the maximum physical address range that the kernel will
-	  support.
-
-config ARM64_PA_BITS_48
-	bool "48-bit"
-	depends on ARM64_64K_PAGES || !ARM64_VA_BITS_52
-
-config ARM64_PA_BITS_52
-	bool "52-bit"
-	depends on ARM64_64K_PAGES || ARM64_VA_BITS_52
-	help
-	  Enable support for a 52-bit physical address space, introduced as
-	  part of the ARMv8.2-LPA extension.
-
-	  With this enabled, the kernel will also continue to work on CPUs that
-	  do not support ARMv8.2-LPA, but with some added memory overhead (and
-	  minor performance overhead).
-
-endchoice
-
-config ARM64_PA_BITS
-	int
-	default 48 if ARM64_PA_BITS_48
-	default 52 if ARM64_PA_BITS_52
-
 config ARM64_LPA2
 	def_bool y
-	depends on ARM64_PA_BITS_52 && !ARM64_64K_PAGES
+	depends on !ARM64_64K_PAGES

 choice
 	prompt "Endianness"
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index ad63457a05c5..01a1e3c16283 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -342,14 +342,13 @@ alternative_cb_end
 	mrs	\tmp0, ID_AA64MMFR0_EL1
 	// Narrow PARange to fit the PS field in TCR_ELx
 	ubfx	\tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
-	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
 #ifdef CONFIG_ARM64_LPA2
 alternative_if_not ARM64_HAS_VA52
 	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
-alternative_else_nop_endif
-#endif
 	cmp	\tmp0, \tmp1
 	csel	\tmp0, \tmp1, \tmp0, hi
+alternative_else_nop_endif
+#endif
 	bfi	\tcr, \tmp0, \pos, #3
 .endm

@@ -599,21 +598,13 @@ alternative_endif
 * ttbr: returns the TTBR value
 */
 	.macro	phys_to_ttbr, ttbr, phys
-#ifdef CONFIG_ARM64_PA_BITS_52
 	orr	\ttbr, \phys, \phys, lsr #46
 	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
-#else
-	mov	\ttbr, \phys
-#endif
 	.endm

 	.macro	phys_to_pte, pte, phys
-#ifdef CONFIG_ARM64_PA_BITS_52
 	orr	\pte, \phys, \phys, lsr #PTE_ADDR_HIGH_SHIFT
 	and	\pte, \pte, #PHYS_TO_PTE_ADDR_MASK
-#else
-	mov	\pte, \phys
-#endif
 	.endm

 /*
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b64e49bd9d10..ed327358e734 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -885,9 +885,8 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	 * However, by the "D10.1.4 Principles of the ID scheme
 	 * for fields in ID registers", ARM DDI 0487C.a, any new
 	 * value is guaranteed to be higher than what we know already.
-	 * As a safe limit, we return the limit supported by the kernel.
 	 */
-	default: return CONFIG_ARM64_PA_BITS;
+	default: return 52;
 	}
 }
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index aab04097b505..525aef178cb4 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -30,8 +30,7 @@
 static inline u64 kvm_get_parange_max(void)
 {
-	if (kvm_lpa2_is_enabled() ||
-	    (IS_ENABLED(CONFIG_ARM64_PA_BITS_52) && PAGE_SHIFT == 16))
+	if (kvm_lpa2_is_enabled() || PAGE_SHIFT == 16)
 		return ID_AA64MMFR0_EL1_PARANGE_52;
 	else
 		return ID_AA64MMFR0_EL1_PARANGE_48;
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index a9136cc551cc..9b34180042b2 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -176,7 +176,6 @@
 #define PTE_SWBITS_MASK		_AT(pteval_t, (BIT(63) | GENMASK(58, 55)))

 #define PTE_ADDR_LOW		(((_AT(pteval_t, 1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
-#ifdef CONFIG_ARM64_PA_BITS_52
 #ifdef CONFIG_ARM64_64K_PAGES
 #define PTE_ADDR_HIGH		(_AT(pteval_t, 0xf) << 12)
 #define PTE_ADDR_HIGH_SHIFT	36
@@ -186,7 +185,6 @@
 #define PTE_ADDR_HIGH_SHIFT	42
 #define PHYS_TO_PTE_ADDR_MASK	GENMASK_ULL(49, 8)
 #endif
-#endif

 /*
 * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
@@ -327,12 +325,10 @@
 /*
 * TTBR.
 */
-#ifdef CONFIG_ARM64_PA_BITS_52
 /*
- * TTBR_ELx[1] is RES0 in this configuration.
+ * TTBR_ELx[1] is RES0 when using 52-bit physical addressing
 */
 #define TTBR_BADDR_MASK_52	GENMASK_ULL(47, 2)
-#endif

 #ifdef CONFIG_ARM64_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index a95f1f77bb39..b73acf25341f 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -81,7 +81,7 @@ extern unsigned long prot_ns_shared;
 #define lpa2_is_enabled()	false
 #define PTE_MAYBE_SHARED	PTE_SHARED
 #define PMD_MAYBE_SHARED	PMD_SECT_S
-#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
+#define PHYS_MASK_SHIFT		(52)
 #else
 static inline bool __pure lpa2_is_enabled(void)
 {
@@ -90,7 +90,7 @@ static inline bool __pure lpa2_is_enabled(void)
 #define PTE_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PTE_SHARED)
 #define PMD_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PMD_SECT_S)
-#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
+#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? 52 : 48)
 #endif

 /*
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 6986345b537a..ec8124d66b9c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -69,10 +69,9 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 	pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))

 /*
- * Macros to convert between a physical address and its placement in a
+ * Helpers to convert between a physical address and its placement in a
 * page table entry, taking care of 52-bit addresses.
 */
-#ifdef CONFIG_ARM64_PA_BITS_52
 static inline phys_addr_t __pte_to_phys(pte_t pte)
 {
 	pte_val(pte) &= ~PTE_MAYBE_SHARED;
@@ -83,10 +82,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 {
 	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK;
 }
-#else
-#define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_LOW)
-#define __phys_to_pte_val(phys)	(phys)
-#endif

 #define pte_pfn(pte)		(__pte_to_phys(pte) >> PAGE_SHIFT)
 #define pfn_pte(pfn,prot)	\
@@ -1495,11 +1490,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)

-#ifdef CONFIG_ARM64_PA_BITS_52
 #define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
-#else
-#define phys_to_ttbr(addr)	(addr)
-#endif

 /*
 * On arm64 without hardware Access Flag, copying from user will fail because
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b8303a83c0bf..f902893ec903 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -916,12 +916,6 @@
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2	0x3
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX	0x7

-#ifdef CONFIG_ARM64_PA_BITS_52
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_52
-#else
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_48
-#endif
-
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
 #define ID_AA64MMFR0_EL1_TGRAN_LPA2		ID_AA64MMFR0_EL1_TGRAN4_52_BIT
diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index 0c501cabc238..8722ab6d4b1c 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -48,20 +48,21 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd)

 void __init pgtable_cache_init(void)
 {
+	unsigned int pgd_size = PGD_SIZE;
+
 	if (pgdir_is_page_size())
 		return;

-#ifdef
-#ifdef CONFIG_ARM64_PA_BITS_52
 	/*
 	 * With 52-bit physical addresses, the architecture requires the
 	 * top-level table to be aligned to at least 64 bytes.
 	 */
-	BUILD_BUG_ON(PGD_SIZE < 64);
-#endif
+	if (PHYS_MASK_SHIFT >= 52)
+		pgd_size = max(pgd_size, 64);

 	/*
 	 * Naturally aligned pgds required by the architecture.
 	 */
-	pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_SIZE,
+	pgd_cache = kmem_cache_create("pgd_cache", pgd_size, pgd_size,
 				      SLAB_PANIC, NULL);
 }
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index b8edc5765441..51ed0e9d0a0d 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -197,10 +197,8 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)

 	.macro	pte_to_phys, phys, pte
 	and	\phys, \pte, #PTE_ADDR_LOW
-#ifdef CONFIG_ARM64_PA_BITS_52
 	and	\pte, \pte, #PTE_ADDR_HIGH
 	orr	\phys, \phys, \pte, lsl #PTE_ADDR_HIGH_SHIFT
-#endif
 	.endm

 	.macro	kpti_mk_tbl_ng, type, num_entries
diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
index fd6bd69c5096..05034c0b8fd7 100644
--- a/scripts/gdb/linux/constants.py.in
+++ b/scripts/gdb/linux/constants.py.in
@@ -141,7 +141,6 @@ LX_CONFIG(CONFIG_ARM64_4K_PAGES)
 LX_CONFIG(CONFIG_ARM64_16K_PAGES)
 LX_CONFIG(CONFIG_ARM64_64K_PAGES)
 if IS_BUILTIN(CONFIG_ARM64):
-    LX_VALUE(CONFIG_ARM64_PA_BITS)
     LX_VALUE(CONFIG_ARM64_VA_BITS)
     LX_VALUE(CONFIG_PAGE_SHIFT)
     LX_VALUE(CONFIG_ARCH_FORCE_MAX_ORDER)
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index cd8420e8c3ad..daeecb1a5366 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -574,12 +574,6 @@
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN	0x2
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX	0x7

-#ifdef CONFIG_ARM64_PA_BITS_52
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_52
-#else
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_48
-#endif
-
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
 #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN