From patchwork Fri Jan 10 11:00:17 2025
X-Patchwork-Id: 13934262
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH v2 1/7] arm64: Update the values to binary from hex
Date: Fri, 10 Jan 2025 16:30:17 +0530
Message-ID: <20250110110023.2963795-2-aneesh.kumar@kernel.org>
In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
References: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
This matches the ARM ARM representation. No functional change in this
patch.

Signed-off-by: Aneesh Kumar K.V (Arm)
Acked-by: Catalin Marinas
---
 arch/arm64/include/asm/memory.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8b9f33cf561b..cb244668954c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -178,17 +178,17 @@
 /*
  * Memory types for Stage-2 translation
  */
-#define MT_S2_NORMAL            0xf
-#define MT_S2_NORMAL_NC         0x5
-#define MT_S2_DEVICE_nGnRE      0x1
+#define MT_S2_NORMAL            0b1111
+#define MT_S2_NORMAL_NC         0b0101
+#define MT_S2_DEVICE_nGnRE      0b0001
 
 /*
  * Memory types for Stage-2 translation when ID_AA64MMFR2_EL1.FWB is 0001
  * Stage-2 enforces Normal-WB and Device-nGnRE
  */
-#define MT_S2_FWB_NORMAL        6
-#define MT_S2_FWB_NORMAL_NC     5
-#define MT_S2_FWB_DEVICE_nGnRE  1
+#define MT_S2_FWB_NORMAL        0b0110
+#define MT_S2_FWB_NORMAL_NC     0b0101
+#define MT_S2_FWB_DEVICE_nGnRE  0b0001
 
 #ifdef CONFIG_ARM64_4K_PAGES
 #define IOREMAP_MAX_ORDER       (PUD_SHIFT)
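Since the commit message claims no functional change, a quick compile-time
check confirms the new literals equal the old values. This is a reviewer's
sketch only, not part of the patch; it relies on the binary-literal
extension the kernel already builds with:

/* Sketch only: verify the binary rewrite of the Stage-2 MemAttr values is
 * a no-op relative to the old hex/decimal definitions. */
_Static_assert(0b1111 == 0xf, "MT_S2_NORMAL unchanged");
_Static_assert(0b0101 == 0x5, "MT_S2_NORMAL_NC / MT_S2_FWB_NORMAL_NC unchanged");
_Static_assert(0b0001 == 0x1, "MT_S2_DEVICE_nGnRE / MT_S2_FWB_DEVICE_nGnRE unchanged");
_Static_assert(0b0110 == 6, "MT_S2_FWB_NORMAL unchanged");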
From patchwork Fri Jan 10 11:00:18 2025
X-Patchwork-Id: 13934268
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH v2 2/7] KVM: arm64: MTE: Update code comments
Date: Fri, 10 Jan 2025 16:30:18 +0530
Message-ID: <20250110110023.2963795-3-aneesh.kumar@kernel.org>
In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
References: <20250110110023.2963795-1-aneesh.kumar@kernel.org>

commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag initialisation")
updated the locking such that the kernel now allows VM_SHARED mappings with
MTE. Update the code comment to reflect this.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/kvm/mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..eb8220a409e1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1391,11 +1391,11 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * able to see the page's tags and therefore they must be initialised first. If
  * PG_mte_tagged is set, tags have already been initialised.
  *
- * The race in the test/set of the PG_mte_tagged flag is handled by:
- * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
- *   racing to santise the same page
- * - mmap_lock protects between a VM faulting a page in and the VMM performing
- *   an mprotect() to add VM_MTE
+ * The race in the test/set of the PG_mte_tagged flag is handled by using
+ * PG_mte_lock and PG_mte_tagged together. If PG_mte_lock is found unset, we
+ * can go ahead and clear the page tags. If PG_mte_lock is found set, the page
+ * tags are either already cleared or a parallel tag clearing is in progress;
+ * we wait for it to finish by waiting on the PG_mte_tagged bit.
  */
 static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
                               unsigned long size)
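For context, the locking scheme the new comment refers to follows this
pattern. A simplified sketch, modelled on the try_page_mte_tagging() and
set_page_mte_tagged() helpers that commit d77e59a8fccd introduced; details
may differ from the exact mte.h code:

/* First caller wins PG_mte_lock and must initialise the tags;
 * losers wait until the winner publishes PG_mte_tagged. */
static bool try_lock_for_tag_init(struct page *page)
{
        if (!test_and_set_bit(PG_mte_lock, &page->flags))
                return true;    /* we own tag initialisation */
        smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
        return false;           /* tags already initialised */
}

static void finish_tag_init(struct page *page)
{
        smp_wmb();              /* order the tag writes before the flag */
        set_bit(PG_mte_tagged, &page->flags);
}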
From patchwork Fri Jan 10 11:00:19 2025
X-Patchwork-Id: 13934269
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH v2 3/7] arm64: cpufeature: add Allocation Tag Access Permission (MTE_PERM) feature
Date: Fri, 10 Jan 2025 16:30:19 +0530
Message-ID: <20250110110023.2963795-4-aneesh.kumar@kernel.org>
In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
References: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
This capability indicates whether the system supports MTE_PERM. It will be
used by KVM for stage-2 mappings. It is an ARM64_CPUCAP_SYSTEM_FEATURE
because, once the feature is enabled, all CPUs must have it.

Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Aneesh Kumar K.V (Arm)
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/cpufeature.h | 5 +++++
 arch/arm64/include/asm/memory.h     | 2 ++
 arch/arm64/kernel/cpufeature.c      | 9 +++++++++
 arch/arm64/tools/cpucaps            | 1 +
 4 files changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b4e5a3cd24c..d70d60ca1cf7 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -813,6 +813,11 @@ static inline bool system_supports_mte(void)
         return alternative_has_cap_unlikely(ARM64_MTE);
 }
 
+static inline bool system_supports_notagaccess(void)
+{
+        return alternative_has_cap_unlikely(ARM64_MTE_PERM);
+}
+
 static inline bool system_has_prio_mask_debugging(void)
 {
         return IS_ENABLED(CONFIG_ARM64_DEBUG_PRIORITY_MASKING) &&
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index cb244668954c..6939e4700a5e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -179,6 +179,7 @@
  * Memory types for Stage-2 translation
  */
 #define MT_S2_NORMAL                    0b1111
+#define MT_S2_NORMAL_NOTAGACCESS        0b0100
 #define MT_S2_NORMAL_NC                 0b0101
 #define MT_S2_DEVICE_nGnRE              0b0001
 
@@ -187,6 +188,7 @@
  * Stage-2 enforces Normal-WB and Device-nGnRE
  */
 #define MT_S2_FWB_NORMAL                0b0110
+#define MT_S2_FWB_NORMAL_NOTAGACCESS    0b1110
 #define MT_S2_FWB_NORMAL_NC             0b0101
 #define MT_S2_FWB_DEVICE_nGnRE          0b0001
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6ce71f444ed8..c9cd0735aaf5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -309,6 +309,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr2[] = {
         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_FPMR_SHIFT, 4, 0),
+        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_MTEPERM_SHIFT, 4, 0),
         ARM64_FTR_END,
 };
 
@@ -2818,6 +2819,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
                 .matches = has_cpuid_feature,
                 ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE3)
         },
+        {
+                .desc = "MTE Allocation Tag Access Permission",
+                .capability = ARM64_MTE_PERM,
+                .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+                .matches = has_cpuid_feature,
+                ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTEPERM, IMP)
+        },
+
 #endif /* CONFIG_ARM64_MTE */
         {
                 .desc = "RCpc load-acquire (LDAPR)",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index eb17f59e543c..10e01cf8ad96 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -66,6 +66,7 @@ MPAM
 MPAM_HCR
 MTE
 MTE_ASYMM
+MTE_PERM
 SME
 SME_FA64
 SME2
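As a usage sketch, mirroring how patch 5 of this series consumes the
capability in stage2_set_prot_attr(): consumers must guard the NoTagAccess
stage-2 encoding on this system-wide cap, since the MemAttr value is only
valid when every CPU implements MTE_PERM.

/* Sketch (see patch 5): reject NoTagAccess mappings on systems
 * where the capability was not established at boot. */
if (system_supports_notagaccess())
        attr = KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS);
else
        return -EINVAL;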
From patchwork Fri Jan 10 11:00:20 2025
X-Patchwork-Id: 13934270
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH v2 4/7] KVM: arm64: MTE: Add KVM_CAP_ARM_MTE_PERM
Date: Fri, 10 Jan 2025 16:30:20 +0530
Message-ID: <20250110110023.2963795-5-aneesh.kumar@kernel.org>
In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
References: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
This will be used by the VMM to enable the use of the NoTagAccess memory
attribute when mapping pages that do not support allocation tags into the
guest IPA space.

Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Aneesh Kumar K.V (Arm)
---
 Documentation/virt/kvm/api.rst    | 14 ++++++++++++++
 arch/arm64/include/asm/kvm_host.h |  7 +++++++
 arch/arm64/kvm/arm.c              | 11 +++++++++++
 include/uapi/linux/kvm.h          |  1 +
 4 files changed, 33 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 454c2aaa155e..e954fca76c27 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -9017,6 +9017,20 @@ Do not use KVM_X86_SW_PROTECTED_VM for "real" VMs, and especially not in
 production.  The behavior and effective ABI for software-protected VMs is
 unstable.
 
+8.42 KVM_CAP_ARM_MTE_PERM
+-------------------------
+
+:Capability: KVM_CAP_ARM_MTE_PERM
+:Architectures: arm64
+:Type: vm
+
+This capability, if KVM_CHECK_EXTENSION indicates that it is available, means
+that the kernel has support for mapping memory regions not supporting
+allocation tags into a guest which enables the KVM_CAP_ARM_MTE capability.
+
+In order to use this, it has to be activated by setting this capability via
+the KVM_ENABLE_CAP ioctl on the VM fd.
+
 9. Known KVM API problems
 =========================
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e18e9244d17a..ad2b488b99d5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -331,6 +331,9 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED       7
         /* Fine-Grained UNDEF initialised */
 #define KVM_ARCH_FLAG_FGU_INITIALIZED           8
+        /* Memory Tagging Extension NoTagAccess check enabled for the guest */
+#define KVM_ARCH_FLAG_MTE_PERM_ENABLED          9
+
         unsigned long flags;
 
         /* VM-wide vCPU feature set */
@@ -1417,6 +1420,10 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_vm_has_ran_once(kvm)                                        \
         (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &(kvm)->arch.flags))
 
+#define kvm_has_mte_perm(kvm)                                           \
+        (system_supports_notagaccess() &&                               \
+         test_bit(KVM_ARCH_FLAG_MTE_PERM_ENABLED, &(kvm)->arch.flags))
+
 static inline bool __vcpu_has_feature(const struct kvm_arch *ka, int feature)
 {
         return test_bit(feature, ka->vcpu_features);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..fdcd2c1605d5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -150,6 +150,14 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
                 }
                 mutex_unlock(&kvm->slots_lock);
                 break;
+        case KVM_CAP_ARM_MTE_PERM:
+                mutex_lock(&kvm->lock);
+                if (system_supports_notagaccess() && !kvm->created_vcpus) {
+                        r = 0;
+                        set_bit(KVM_ARCH_FLAG_MTE_PERM_ENABLED, &kvm->arch.flags);
+                }
+                mutex_unlock(&kvm->lock);
+                break;
         default:
                 break;
         }
@@ -418,6 +426,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
         case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
                 r = BIT(0);
                 break;
+        case KVM_CAP_ARM_MTE_PERM:
+                r = system_supports_notagaccess();
+                break;
         default:
                 r = 0;
         }
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 502ea63b5d2e..4900ff577819 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -933,6 +933,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_PRE_FAULT_MEMORY 236
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_MTE_PERM 239
 
 struct kvm_irq_routing_irqchip {
         __u32 irqchip;
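A minimal VMM-side sketch of how this capability is meant to be consumed
(hypothetical userspace code, not part of the patch; vm_fd is an open VM
file descriptor):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper: opt the VM into NoTagAccess mappings.  Must run
 * before any vCPU is created, per kvm_vm_ioctl_enable_cap() above. */
static int enable_mte_perm(int vm_fd)
{
        struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE_PERM };

        if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_MTE_PERM) <= 0)
                return -1;      /* kernel or CPU lacks MTE_PERM support */
        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}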
"Aneesh Kumar K.V" X-Patchwork-Id: 13934271 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 34E40E77188 for ; Fri, 10 Jan 2025 11:08:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=bfnWtsE5XFkJqK0Cjf634QpZ99f9UU8Q0TQQnu5Pq34=; b=jcFxeHKwGEQIN/Z/IG+K+IuzHG 2Cc+haR8MQfc4mEVrRiYm8ASQ0EVwS3YBeb+iPwc36dZU9504OwvRUZ94JE//YBk1qVEa3S/R2+sE 5exkEYnvZfMVMvzSlcYR58SsiZuI4jEOOyjBfK8m+DX1R+TIby33aUHfxM1KKwVQtAfmcvineGB1a BQRXrnbSa2mnz5tPz2bVIMcsC0ZpzhpRXV6ZwBqmZ1Wg1Tsp7cpJP3VNYOtgGwcGaGGjJw+KgooSt it767W8jX+6orga92TyeqIAL39ZzdEFROSVhcRhFDVkffV/sNLyxe1BmYsyEERn5DJDooAtNWAYBA UZjywxlw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tWCrr-0000000F3eP-30vI; Fri, 10 Jan 2025 11:07:55 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tWCl9-0000000F2iJ-2rMY for linux-arm-kernel@lists.infradead.org; Fri, 10 Jan 2025 11:01:00 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 2D8EDA41D3B; Fri, 10 Jan 2025 10:59:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45BF9C4CED6; Fri, 10 Jan 2025 11:00:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1736506858; bh=9TRUXlBD9cnMC5DbsHqNqQnojwDs/LFT/n1O8JicJ8Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=forSODSfA5wvDsad6NgM9MiLxYKOq6M8GDI68srPidmKv7oTJb2PAHUauu7I2ZqQY XPSmKzbVYufufRIM12advoubbPsvl8dp95bcKzcGfsAcuNoe/58hTPHzJ2Dh3azeuO w0WxFL/Wvf5lhhVl3GMIIUB1/QIna7pyBs90iQ8BzSBtF0+L9qpLWaN2OOo3WZSl7S K0dulV+Edoo4ecmXF0cQB325pRJ7s6t+0TKN7oY+/wpZylt3fCaTQyTmDgbW1D2Aip Bubrdcj6roBNw+1V8zpuwdntLDwNSlvE9Wmy1wD7piOK9BzQYlUGn0+t4KHRxKYYr8 5Bz4MeUYeTuzw== From: "Aneesh Kumar K.V (Arm)" To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Cc: Suzuki K Poulose , Steven Price , Will Deacon , Catalin Marinas , Marc Zyngier , Mark Rutland , Oliver Upton , Joey Gouly , Zenghui Yu , "Aneesh Kumar K.V (Arm)" Subject: [PATCH v2 5/7] KVM: arm64: MTE: Use stage-2 NoTagAccess memory attribute if supported Date: Fri, 10 Jan 2025 16:30:21 +0530 Message-ID: <20250110110023.2963795-6-aneesh.kumar@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org> References: <20250110110023.2963795-1-aneesh.kumar@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250110_030059_859164_33330FF6 X-CRM114-Status: GOOD ( 24.61 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: 
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently, the kernel won't start a guest if the MTE feature is enabled and the guest RAM is backed by memory which doesn't support access tags. Update this such that the kernel uses the NoTagAccess memory attribute while mapping pages from VMAs for which MTE is not allowed. The fault from accessing the access tags with such pages is forwarded to VMM so that VMM can decide to kill the guest or take any corrective actions Signed-off-by: Aneesh Kumar K.V (Arm) --- Documentation/virt/kvm/api.rst | 3 +++ arch/arm64/include/asm/kvm_emulate.h | 5 +++++ arch/arm64/include/asm/kvm_pgtable.h | 1 + arch/arm64/kvm/hyp/pgtable.c | 16 +++++++++++++--- arch/arm64/kvm/mmu.c | 17 ++++++++++++++--- include/linux/kvm_host.h | 10 ++++++++++ include/uapi/linux/kvm.h | 1 + 7 files changed, 47 insertions(+), 6 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index e954fca76c27..3b357f9b76d6 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -7115,6 +7115,9 @@ describes properties of the faulting access that are likely pertinent: - KVM_MEMORY_EXIT_FLAG_PRIVATE - When set, indicates the memory fault occurred on a private memory access. When clear, indicates the fault occurred on a shared access. + - KVM_MEMORY_EXIT_FLAG_NOTAGACCESS - When set, indicates the memory fault + occurred due to allocation tag access on a memory region that doesn't support + allocation tags. Note! KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it accompanies a return code of '-1', not '0'! errno will always be set to EFAULT diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index cf811009a33c..609ed6a5ffce 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -378,6 +378,11 @@ static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu) return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu); } +static inline bool kvm_vcpu_trap_is_tagaccess(const struct kvm_vcpu *vcpu) +{ + return !!(ESR_ELx_ISS2(kvm_vcpu_get_esr(vcpu)) & ESR_ELx_TagAccess); +} + static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu) { return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC; diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index aab04097b505..0daf4ffedc99 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -252,6 +252,7 @@ enum kvm_pgtable_prot { KVM_PGTABLE_PROT_DEVICE = BIT(3), KVM_PGTABLE_PROT_NORMAL_NC = BIT(4), + KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS = BIT(5), KVM_PGTABLE_PROT_SW0 = BIT(55), KVM_PGTABLE_PROT_SW1 = BIT(56), diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 40bd55966540..4eb6e9345c12 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -677,9 +677,11 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p { kvm_pte_t attr; u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS; + unsigned long prot_mask = KVM_PGTABLE_PROT_DEVICE | + KVM_PGTABLE_PROT_NORMAL_NC | + KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS; - switch (prot & (KVM_PGTABLE_PROT_DEVICE | - KVM_PGTABLE_PROT_NORMAL_NC)) { + switch (prot & prot_mask) { case KVM_PGTABLE_PROT_DEVICE | KVM_PGTABLE_PROT_NORMAL_NC: return -EINVAL; case KVM_PGTABLE_PROT_DEVICE: @@ -692,6 +694,12 @@ static int stage2_set_prot_attr(struct 
@@ -692,6 +694,12 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
                         return -EINVAL;
                 attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
                 break;
+        case KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS:
+                if (system_supports_notagaccess())
+                        attr = KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS);
+                else
+                        return -EINVAL;
+                break;
         default:
                 attr = KVM_S2_MEMATTR(pgt, NORMAL);
         }
@@ -872,7 +880,9 @@ static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
 static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte)
 {
         u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
-        return kvm_pte_valid(pte) && memattr == KVM_S2_MEMATTR(pgt, NORMAL);
+        return kvm_pte_valid(pte) &&
+                ((memattr == KVM_S2_MEMATTR(pgt, NORMAL)) ||
+                 (memattr == KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS)));
 }
 
 static bool stage2_pte_executable(kvm_pte_t pte)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index eb8220a409e1..3610bea7607d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1660,9 +1660,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
         if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
                 /* Check the VMM hasn't introduced a new disallowed VMA */
-                if (mte_allowed) {
+                if (mte_allowed)
                         sanitise_mte_tags(kvm, pfn, vma_pagesize);
-                } else {
+                else if (kvm_has_mte_perm(kvm))
+                        prot |= KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;
+                else {
                         ret = -EFAULT;
                         goto out_unlock;
                 }
@@ -1840,6 +1842,14 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 
         gfn = ipa >> PAGE_SHIFT;
         memslot = gfn_to_memslot(vcpu->kvm, gfn);
+
+        if (kvm_vcpu_trap_is_tagaccess(vcpu)) {
+                /* exit to host and handle the error */
+                kvm_prepare_notagaccess_exit(vcpu, gfn << PAGE_SHIFT, PAGE_SIZE);
+                ret = 0;
+                goto out;
+        }
+
         hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
         write_fault = kvm_is_write_fault(vcpu);
         if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
@@ -2152,7 +2162,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
                 if (!vma)
                         break;
 
-                if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
+                if (kvm_has_mte(kvm) &&
+                    !kvm_has_mte_perm(kvm) && !kvm_vma_mte_allowed(vma)) {
                         ret = -EINVAL;
                         break;
                 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 401439bb21e3..8a270f658f36 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2471,6 +2471,16 @@ static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
                 vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
 }
 
+static inline void kvm_prepare_notagaccess_exit(struct kvm_vcpu *vcpu,
+                                                gpa_t gpa, gpa_t size)
+{
+        vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
+        vcpu->run->memory_fault.flags = KVM_MEMORY_EXIT_FLAG_NOTAGACCESS;
+        vcpu->run->memory_fault.gpa = gpa;
+        vcpu->run->memory_fault.size = size;
+}
+
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
 static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
 {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 4900ff577819..7136d28eb307 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -442,6 +442,7 @@ struct kvm_run {
                 /* KVM_EXIT_MEMORY_FAULT */
                 struct {
 #define KVM_MEMORY_EXIT_FLAG_PRIVATE    (1ULL << 3)
+#define KVM_MEMORY_EXIT_FLAG_NOTAGACCESS        (1ULL << 4)
                         __u64 flags;
                         __u64 gpa;
                         __u64 size;
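On the userspace side, the VMM then sees a KVM_EXIT_MEMORY_FAULT carrying
the new flag. A hypothetical handler fragment (only the UAPI names are
real; the helper and its policy are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <linux/kvm.h>

/* Hypothetical helper: called when KVM_RUN returned -1 with errno == EFAULT
 * and run->exit_reason == KVM_EXIT_MEMORY_FAULT. */
static void handle_memory_fault(struct kvm_run *run)
{
        if (run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_NOTAGACCESS) {
                /* The guest made a tag access to [gpa, gpa + size), which is
                 * backed by memory without allocation-tag storage.  Policy
                 * is up to the VMM; here we just report and stop the guest. */
                fprintf(stderr, "MTE tag access to untagged memory at 0x%llx\n",
                        (unsigned long long)run->memory_fault.gpa);
                exit(EXIT_FAILURE);
        }
}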
From patchwork Fri Jan 10 11:00:22 2025
X-Patchwork-Id: 13934274
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH v2 6/7] KVM: arm64: MTE: Nested guest support
Date: Fri, 10 Jan 2025 16:30:22 +0530
Message-ID: <20250110110023.2963795-7-aneesh.kumar@kernel.org>
In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
References: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
Currently the MTE feature is not enabled for an EL1 guest, and we similarly
disable MTE_PERM. This patch does, however, add the code to allow using
KVM_CAP_ARM_MTE_PERM with an EL1 guest. This will allow the use of MTE in a
nested guest even if some of the memory backing the nested guest's RAM is
not MTE capable (e.g. page cache pages).

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/include/asm/kvm_nested.h | 10 ++++++++++
 arch/arm64/kvm/mmu.c                | 10 ++++++++++
 arch/arm64/kvm/nested.c             | 28 ++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c           | 15 +++++++++++----
 4 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 233e65522716..4d6c0df3ef48 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -86,6 +86,8 @@ struct kvm_s2_trans {
         bool writable;
         bool readable;
         int level;
+        int s2_fwb;
+        int mem_attr;
         u32 esr;
         u64 desc;
 };
@@ -120,10 +122,18 @@ static inline bool kvm_s2_trans_executable(struct kvm_s2_trans *trans)
         return !(trans->desc & BIT(54));
 }
 
+static inline bool kvm_s2_trans_tagaccess(struct kvm_s2_trans *trans)
+{
+        if (trans->s2_fwb)
+                return (trans->mem_attr & MT_S2_FWB_NORMAL_NOTAGACCESS) != MT_S2_FWB_NORMAL_NOTAGACCESS;
+        return (trans->mem_attr & MT_S2_NORMAL_NOTAGACCESS) != MT_S2_NORMAL_NOTAGACCESS;
+}
+
 extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
                               struct kvm_s2_trans *result);
 extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
                                     struct kvm_s2_trans *trans);
+int kvm_s2_handle_notagaccess_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans);
 extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
 extern void kvm_nested_s2_wp(struct kvm *kvm);
 extern void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3610bea7607d..54e5bfe4f126 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1640,6 +1640,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                 goto out_unlock;
         }
 
+        if (nested && !kvm_s2_trans_tagaccess(nested))
+                mte_allowed = false;
+
         /*
          * If we are not forced to use page mapping, check if we are
          * backed by a THP and thus use block mapping if possible.
@@ -1836,6 +1839,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
                         goto out_unlock;
                 }
 
+                ret = kvm_s2_handle_notagaccess_fault(vcpu, &nested_trans);
+                if (ret) {
+                        esr = kvm_s2_trans_esr(&nested_trans);
+                        kvm_inject_s2_fault(vcpu, esr);
+                        goto out_unlock;
+                }
+
                 ipa = kvm_s2_trans_output(&nested_trans);
                 nested = &nested_trans;
         }
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 9b36218b48de..5867e0376444 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -290,6 +290,7 @@ static int walk_nested_s2_pgd(phys_addr_t ipa,
         out->writable = desc & (0b10 << 6);
         out->level = level;
         out->desc = desc;
+        out->mem_attr = desc & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
         return 0;
 }
 
@@ -340,6 +341,7 @@ int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 
         wi.be = vcpu_read_sys_reg(vcpu, SCTLR_EL2) & SCTLR_ELx_EE;
 
+        result->s2_fwb = !!(*vcpu_hcr(vcpu) & HCR_FWB);
         ret = walk_nested_s2_pgd(gipa, &wi, result);
         if (ret)
                 result->esr |= (kvm_vcpu_get_esr(vcpu) & ~ESR_ELx_FSC);
@@ -733,6 +735,27 @@ int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans)
         return forward_fault;
 }
 
+int kvm_s2_handle_notagaccess_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans)
+{
+        bool forward_fault = false;
+
+        trans->esr = 0;
+
+        if (!kvm_vcpu_trap_is_tagaccess(vcpu))
+                return 0;
+
+        if (!kvm_s2_trans_tagaccess(trans))
+                forward_fault = true;
+        else
+                forward_fault = false;
+
+        /* forward it as a permission fault with tag access set in ISS2 */
+        if (forward_fault)
+                trans->esr = esr_s2_fault(vcpu, trans->level, ESR_ELx_FSC_PERM);
+
+        return forward_fault;
+}
+
 int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
 {
         vcpu_write_sys_reg(vcpu, vcpu->arch.fault.far_el2, FAR_EL2);
@@ -844,6 +867,11 @@ static void limit_nv_id_regs(struct kvm *kvm)
                         NV_FTR(PFR1, CSV2_frac));
         kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR1_EL1, val);
 
+        /* For now no MTE_PERM support because MTE is disabled above */
+        val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64PFR2_EL1);
+        val &= ~NV_FTR(PFR2, MTEPERM);
+        kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR2_EL1, val);
+
         /* Hide ECV, ExS, Secure Memory */
         val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64MMFR0_EL1);
         val &= ~(NV_FTR(MMFR0, ECV) |
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e2a5c2918d9e..cb7d4d32179c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1557,7 +1557,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
                                        const struct sys_reg_desc *r)
 {
         u32 id = reg_to_encoding(r);
-        u64 val;
+        u64 val, mask;
 
         if (sysreg_visible_as_raz(vcpu, r))
                 return 0;
@@ -1587,8 +1587,14 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
                 val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
                 break;
         case SYS_ID_AA64PFR2_EL1:
-                /* We only expose FPMR */
-                val &= ID_AA64PFR2_EL1_FPMR;
+                mask = ID_AA64PFR2_EL1_FPMR;
+                /*
+                 * Since this is a stage-2 specific feature, only expose it
+                 * if the vcpu can run in vEL2.
+                 */
+                if (vcpu_has_nv(vcpu))
+                        mask |= ID_AA64PFR2_EL1_MTEPERM;
+                val &= mask;
                 break;
         case SYS_ID_AA64ISAR1_EL1:
                 if (!vcpu_has_ptrauth(vcpu))
@@ -2566,7 +2572,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
                                        ID_AA64PFR1_EL1_MPAM_frac |
                                        ID_AA64PFR1_EL1_RAS_frac |
                                        ID_AA64PFR1_EL1_MTE)),
-        ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
+        ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR |
+                                     ID_AA64PFR2_EL1_MTEPERM),
         ID_UNALLOCATED(4,3),
         ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
         ID_HIDDEN(ID_AA64SMFR0_EL1),
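Putting patches 5 and 6 together, the stage-2 decision for an MTE guest
page reduces to the sketch below, paraphrasing the user_mem_abort() hunks
above rather than adding anything new:

/* Tags are exposed only when the host VMA allows MTE *and*, for a nested
 * guest, the L1 hypervisor did not map the IPA as NoTagAccess. */
if (nested && !kvm_s2_trans_tagaccess(nested))
        mte_allowed = false;

if (mte_allowed)
        sanitise_mte_tags(kvm, pfn, vma_pagesize);      /* tag-capable mapping */
else if (kvm_has_mte_perm(kvm))
        prot |= KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;    /* data-only mapping */
else
        return -EFAULT;                                 /* MTE VM, no fallback */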
From patchwork Fri Jan 10 11:00:23 2025
X-Patchwork-Id: 13934275
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH v2 7/7] KVM: arm64: Split some of the kvm_pgtable_prot bits into separate defines
Date: Fri, 10 Jan 2025 16:30:23 +0530
Message-ID: <20250110110023.2963795-8-aneesh.kumar@kernel.org>
In-Reply-To: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
References: <20250110110023.2963795-1-aneesh.kumar@kernel.org>
Some of the kvm_pgtable_prot values are mutually exclusive, like
KVM_PGTABLE_PROT_NORMAL_NC and KVM_PGTABLE_PROT_DEVICE. This patch splits
the Normal memory non-cacheable and NoTagAccess attributes into separate
#defines. With this change, the kvm_pgtable_prot bits only indicate whether
it is a device or normal memory mapping. There are no functional changes in
this patch.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/include/asm/kvm_pgtable.h  | 10 +++---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  2 +-
 arch/arm64/kvm/hyp/pgtable.c          | 47 +++++++++++++--------------
 arch/arm64/kvm/mmu.c                  | 10 +++---
 4 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 0daf4ffedc99..9443a8ad9343 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -239,7 +239,6 @@ enum kvm_pgtable_stage2_flags {
  * @KVM_PGTABLE_PROT_W:                 Write permission.
  * @KVM_PGTABLE_PROT_R:                 Read permission.
  * @KVM_PGTABLE_PROT_DEVICE:            Device attributes.
- * @KVM_PGTABLE_PROT_NORMAL_NC:         Normal noncacheable attributes.
  * @KVM_PGTABLE_PROT_SW0:               Software bit 0.
  * @KVM_PGTABLE_PROT_SW1:               Software bit 1.
  * @KVM_PGTABLE_PROT_SW2:               Software bit 2.
@@ -251,8 +250,6 @@ enum kvm_pgtable_prot {
         KVM_PGTABLE_PROT_R                      = BIT(2),
 
         KVM_PGTABLE_PROT_DEVICE                 = BIT(3),
-        KVM_PGTABLE_PROT_NORMAL_NC              = BIT(4),
-        KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS     = BIT(5),
 
         KVM_PGTABLE_PROT_SW0                    = BIT(55),
         KVM_PGTABLE_PROT_SW1                    = BIT(56),
@@ -263,6 +260,11 @@ enum kvm_pgtable_prot {
 #define KVM_PGTABLE_PROT_RW     (KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W)
 #define KVM_PGTABLE_PROT_RWX    (KVM_PGTABLE_PROT_RW | KVM_PGTABLE_PROT_X)
 
+/* different memory attribute requested */
+#define KVM_PGTABLE_ATTR_NORMAL_NC              0x1
+#define KVM_PGTABLE_ATTR_NORMAL_NOTAGACCESS     0x2
+
+
 #define PKVM_HOST_MEM_PROT      KVM_PGTABLE_PROT_RWX
 #define PKVM_HOST_MMIO_PROT     KVM_PGTABLE_PROT_RW
 
@@ -606,7 +608,7 @@ kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
-                           u64 phys, enum kvm_pgtable_prot prot,
+                           u64 phys, enum kvm_pgtable_prot prot, int mem_attr,
                            void *mc, enum kvm_pgtable_walk_flags flags);
 
 /**
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index caba3e4bd09e..25c8b2fbce15 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -411,7 +411,7 @@ static inline int __host_stage2_idmap(u64 start, u64 end,
                                       enum kvm_pgtable_prot prot)
 {
         return kvm_pgtable_stage2_map(&host_mmu.pgt, start, end - start, start,
-                                      prot, &host_s2_pool, 0);
+                                      prot, 0, &host_s2_pool, 0);
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4eb6e9345c12..9dd93ae8bb97 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -673,35 +673,34 @@ void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
 #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
 
 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
-                                kvm_pte_t *ptep)
+                                int mem_attr, kvm_pte_t *ptep)
 {
         kvm_pte_t attr;
         u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
-        unsigned long prot_mask = KVM_PGTABLE_PROT_DEVICE |
-                                  KVM_PGTABLE_PROT_NORMAL_NC |
-                                  KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;
+        bool device = prot & KVM_PGTABLE_PROT_DEVICE;
 
-        switch (prot & prot_mask) {
-        case KVM_PGTABLE_PROT_DEVICE | KVM_PGTABLE_PROT_NORMAL_NC:
-                return -EINVAL;
-        case KVM_PGTABLE_PROT_DEVICE:
+        if (device) {
                 if (prot & KVM_PGTABLE_PROT_X)
                         return -EINVAL;
                 attr = KVM_S2_MEMATTR(pgt, DEVICE_nGnRE);
-                break;
-        case KVM_PGTABLE_PROT_NORMAL_NC:
-                if (prot & KVM_PGTABLE_PROT_X)
+                if (!mem_attr)
                         return -EINVAL;
-                attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
-                break;
-        case KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS:
-                if (system_supports_notagaccess())
-                        attr = KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS);
-                else
-                        return -EINVAL;
-                break;
-        default:
-                attr = KVM_S2_MEMATTR(pgt, NORMAL);
+        } else {
+                switch (mem_attr) {
+                case KVM_PGTABLE_ATTR_NORMAL_NC:
+                        if (prot & KVM_PGTABLE_PROT_X)
+                                return -EINVAL;
+                        attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
+                        break;
+                case KVM_PGTABLE_ATTR_NORMAL_NOTAGACCESS:
+                        if (system_supports_notagaccess())
+                                attr = KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS);
+                        else
+                                return -EINVAL;
+                        break;
+                default:
+                        attr = KVM_S2_MEMATTR(pgt, NORMAL);
+                }
         }
 
         if (!(prot & KVM_PGTABLE_PROT_X))
@@ -1060,7 +1059,7 @@ static int stage2_map_walker(const struct kvm_pgtable_visit_ctx *ctx,
 }
 
 int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
-                           u64 phys, enum kvm_pgtable_prot prot,
+                           u64 phys, enum kvm_pgtable_prot prot, int mem_attr,
                            void *mc, enum kvm_pgtable_walk_flags flags)
 {
         int ret;
@@ -1081,7 +1080,7 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
         if (WARN_ON((pgt->flags & KVM_PGTABLE_S2_IDMAP) && (addr != phys)))
                 return -EINVAL;
 
-        ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
+        ret = stage2_set_prot_attr(pgt, prot, mem_attr, &map_data.attr);
         if (ret)
                 return ret;
 
@@ -1408,7 +1407,7 @@ kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
         if (!IS_ALIGNED(phys, kvm_granule_size(level)))
                 return ERR_PTR(-EINVAL);
 
-        ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
+        ret = stage2_set_prot_attr(pgt, prot, 0, &map_data.attr);
         if (ret)
                 return ERR_PTR(ret);
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 54e5bfe4f126..87afc8862459 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1130,8 +1130,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
                         break;
 
                 write_lock(&kvm->mmu_lock);
-                ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-                                             &cache, 0);
+                ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE,
+                                             pa, prot, 0, &cache, 0);
                 write_unlock(&kvm->mmu_lock);
                 if (ret)
                         break;
@@ -1452,6 +1452,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
         struct kvm_pgtable *pgt;
         struct page *page;
+        int normal_memattr = 0;
 
         if (fault_is_perm)
                 fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1666,7 +1667,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                 if (mte_allowed)
                         sanitise_mte_tags(kvm, pfn, vma_pagesize);
                 else if (kvm_has_mte_perm(kvm))
-                        prot |= KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;
+                        normal_memattr = KVM_PGTABLE_ATTR_NORMAL_NOTAGACCESS;
                 else {
                         ret = -EFAULT;
                         goto out_unlock;
@@ -1681,7 +1682,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
         if (device) {
                 if (vfio_allow_any_uc)
-                        prot |= KVM_PGTABLE_PROT_NORMAL_NC;
+                        normal_memattr = KVM_PGTABLE_ATTR_NORMAL_NC;
                 else
                         prot |= KVM_PGTABLE_PROT_DEVICE;
         } else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC) &&
@@ -1704,6 +1705,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         } else {
                 ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
                                              __pfn_to_phys(pfn), prot,
+                                             normal_memattr,
                                              memcache, KVM_PGTABLE_WALK_HANDLE_FAULT |
                                                        KVM_PGTABLE_WALK_SHARED);
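To summarize the calling convention this patch introduces, a hypothetical
call site would now look as follows (the values shown are illustrative,
taken from the user_mem_abort() hunk above):

/* Permission bits and the memory-attribute override now travel separately:
 * prot carries R/W/X/DEVICE, while mem_attr selects an alternative Normal
 * memory attribute (0 means plain Normal write-back). */
ret = kvm_pgtable_stage2_map(pgt, fault_ipa, size, __pfn_to_phys(pfn),
                             KVM_PGTABLE_PROT_RW,                 /* prot */
                             KVM_PGTABLE_ATTR_NORMAL_NOTAGACCESS, /* mem_attr */
                             memcache, KVM_PGTABLE_WALK_HANDLE_FAULT);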