From patchwork Mon Aug 19 13:19:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768345 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 6660716C68B; Mon, 19 Aug 2024 13:19:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073587; cv=none; b=A0MVruIXOZlsvdsbIm480d38Pzla+T/iBkSe3+h+eQKeixpH0VvXoosaELC6uUBS7LGFxI2cCuE8WciMa55cM+T25eUnMCOgOTDrzaB7PfOmEDYo+Zkx7UBoNrtJfw/gyHOmv43mguKNt+Vsbo18KH6TcihVaE3IUKBJTdq0pNE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073587; c=relaxed/simple; bh=JsStGkCgNb6gy2BWmOepJzcY7eBBGSwzX/ny0Aat0xU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=b0E9D1pgTHhIQDSM/x/3X13HHpdmnbmtJzBMiEWrFdZaBg1SQyhD+Sy2fAGgxCOFNBcwjxEB3YKKdAos9kfPvuZySZNF7VVDRQbNCkFNMk15QTGRXv3ZW7MinCetFiM2l398d+FMNmnU85ynDN4WPsqRzpTxFCsQhOkYPEwmQ/I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D68C11063; Mon, 19 Aug 2024 06:20:11 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A21603F73B; Mon, 19 Aug 2024 06:19:41 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 01/19] arm64: mm: Add top-level dispatcher for internal mem_encrypt API Date: Mon, 19 Aug 2024 14:19:06 +0100 Message-Id: <20240819131924.372366-2-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Will Deacon Implementing the internal mem_encrypt API for arm64 depends entirely on the Confidential Computing environment in which the kernel is running. Introduce a simple dispatcher so that backend hooks can be registered depending upon the environment in which the kernel finds itself. 
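As a rough illustration of how this dispatcher is meant to be used (a sketch only, with hypothetical names; the actual Arm CCA registration is added later in this series), a Confidential Computing backend registers its hooks once during boot:

	/* Hypothetical backend, for illustration only. */
	static int example_encrypt(unsigned long addr, int numpages)
	{
		/* Transition the range back to the protected (encrypted) state. */
		return 0;
	}

	static int example_decrypt(unsigned long addr, int numpages)
	{
		/* Transition the range to the shared (decrypted) state. */
		return 0;
	}

	static const struct arm64_mem_crypt_ops example_crypt_ops = {
		.encrypt = example_encrypt,
		.decrypt = example_decrypt,
	};

	static int __init example_init(void)
	{
		/* Only one backend may register; a second attempt returns -EBUSY. */
		return arm64_mem_crypt_ops_register(&example_crypt_ops);
	}

When no backend is registered, set_memory_encrypted() and set_memory_decrypted() remain no-ops that return 0, so existing callers are unaffected.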
Signed-off-by: Will Deacon Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- Patch 'borrowed' from Will's series for pKVM: https://lore.kernel.org/r/20240730151113.1497-4-will%40kernel.org --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/mem_encrypt.h | 15 +++++++++ arch/arm64/include/asm/set_memory.h | 1 + arch/arm64/mm/Makefile | 2 +- arch/arm64/mm/mem_encrypt.c | 50 ++++++++++++++++++++++++++++ 5 files changed, 68 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/include/asm/mem_encrypt.h create mode 100644 arch/arm64/mm/mem_encrypt.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index b3fc891f1544..68d77a2f4d1a 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -34,6 +34,7 @@ config ARM64 select ARCH_HAS_KERNEL_FPU_SUPPORT if KERNEL_MODE_NEON select ARCH_HAS_KEEPINITRD select ARCH_HAS_MEMBARRIER_SYNC_CORE + select ARCH_HAS_MEM_ENCRYPT select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE select ARCH_HAS_PTE_DEVMAP diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h new file mode 100644 index 000000000000..b0c9a86b13a4 --- /dev/null +++ b/arch/arm64/include/asm/mem_encrypt.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_MEM_ENCRYPT_H +#define __ASM_MEM_ENCRYPT_H + +struct arm64_mem_crypt_ops { + int (*encrypt)(unsigned long addr, int numpages); + int (*decrypt)(unsigned long addr, int numpages); +}; + +int arm64_mem_crypt_ops_register(const struct arm64_mem_crypt_ops *ops); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASM_MEM_ENCRYPT_H */ diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h index 0f740b781187..917761feeffd 100644 --- a/arch/arm64/include/asm/set_memory.h +++ b/arch/arm64/include/asm/set_memory.h @@ -3,6 +3,7 @@ #ifndef _ASM_ARM64_SET_MEMORY_H #define _ASM_ARM64_SET_MEMORY_H +#include #include bool can_set_direct_map(void); diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 60454256945b..2fc8c6dd0407 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-y := dma-mapping.o extable.o fault.o init.o \ cache.o copypage.o flush.o \ - ioremap.o mmap.o pgd.o mmu.o \ + ioremap.o mmap.o pgd.o mem_encrypt.o mmu.o \ context.o proc.o pageattr.o fixmap.o obj-$(CONFIG_ARM64_CONTPTE) += contpte.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o diff --git a/arch/arm64/mm/mem_encrypt.c b/arch/arm64/mm/mem_encrypt.c new file mode 100644 index 000000000000..ee3c0ab04384 --- /dev/null +++ b/arch/arm64/mm/mem_encrypt.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Implementation of the memory encryption/decryption API. + * + * Since the low-level details of the operation depend on the + * Confidential Computing environment (e.g. pKVM, CCA, ...), this just + * acts as a top-level dispatcher to whatever hooks may have been + * registered. + * + * Author: Will Deacon + * Copyright (C) 2024 Google LLC + * + * "Hello, boils and ghouls!" 
+ */ + +#include +#include +#include +#include + +#include + +static const struct arm64_mem_crypt_ops *crypt_ops; + +int arm64_mem_crypt_ops_register(const struct arm64_mem_crypt_ops *ops) +{ + if (WARN_ON(crypt_ops)) + return -EBUSY; + + crypt_ops = ops; + return 0; +} + +int set_memory_encrypted(unsigned long addr, int numpages) +{ + if (likely(!crypt_ops) || WARN_ON(!PAGE_ALIGNED(addr))) + return 0; + + return crypt_ops->encrypt(addr, numpages); +} +EXPORT_SYMBOL_GPL(set_memory_encrypted); + +int set_memory_decrypted(unsigned long addr, int numpages) +{ + if (likely(!crypt_ops) || WARN_ON(!PAGE_ALIGNED(addr))) + return 0; + + return crypt_ops->decrypt(addr, numpages); +} +EXPORT_SYMBOL_GPL(set_memory_decrypted); From patchwork Mon Aug 19 13:19:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768346 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 397D016B749; Mon, 19 Aug 2024 13:19:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073592; cv=none; b=bm8v1BYoAOx8oGMjNiqCou0MoRLtko1YZmL6fSB3Q8JM/5/Mwg0hyZoVgTjIuoTJY2dDMGHvlHMraIKXUN3drhpS02t5BeGKriTarY7MJ3DQiF63n/HemcavQJbYwoid5JcRGw9MriJzsU4fDe0+0gKzyy0xIdHW85dl8BRAHuA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073592; c=relaxed/simple; bh=sjUJf8BQgocGkc1+uAdqLPxolc2B2H6WtIz9gFiTbXQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=izfxjFmmwZXuJ/fsLQnfh6olUtFrgD8GS1vnzjAXY3RkM7DCAZ5WqqjJufL/UChGuwkRd3PYwgY0QQvSjAow1F3RyGU64mpk/bPgKEStz8ZHvb2vXLaacAfVc/czCrxK5ZzQaD+EqOUNPUIIzkn5c7zaTtkmNJ0v5OIu6v2u9AM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A8E5A113E; Mon, 19 Aug 2024 06:20:16 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 36DA83F73B; Mon, 19 Aug 2024 06:19:46 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 02/19] arm64: mm: Add confidential computing hook to ioremap_prot() Date: Mon, 19 Aug 2024 14:19:07 +0100 Message-Id: <20240819131924.372366-3-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Will Deacon Confidential Computing environments such as pKVM and Arm's CCA distinguish between shared (i.e. 
emulated) and private (i.e. assigned) MMIO regions. Introduce a hook into our implementation of ioremap_prot() so that MMIO regions can be shared if necessary. Signed-off-by: Will Deacon Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- Patch 'borrowed' from Will's series for pKVM: https://lore.kernel.org/r/20240730151113.1497-6-will%40kernel.org --- arch/arm64/include/asm/io.h | 4 ++++ arch/arm64/mm/ioremap.c | 23 ++++++++++++++++++++++- 2 files changed, 26 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h index 41fd90895dfc..1ada23a6ec19 100644 --- a/arch/arm64/include/asm/io.h +++ b/arch/arm64/include/asm/io.h @@ -271,6 +271,10 @@ __iowrite64_copy(void __iomem *to, const void *from, size_t count) * I/O memory mapping functions. */ +typedef int (*ioremap_prot_hook_t)(phys_addr_t phys_addr, size_t size, + pgprot_t *prot); +int arm64_ioremap_prot_hook_register(const ioremap_prot_hook_t hook); + #define ioremap_prot ioremap_prot #define _PAGE_IOREMAP PROT_DEVICE_nGnRE diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c index 269f2f63ab7d..6cc0b7e7eb03 100644 --- a/arch/arm64/mm/ioremap.c +++ b/arch/arm64/mm/ioremap.c @@ -3,10 +3,22 @@ #include #include +static ioremap_prot_hook_t ioremap_prot_hook; + +int arm64_ioremap_prot_hook_register(ioremap_prot_hook_t hook) +{ + if (WARN_ON(ioremap_prot_hook)) + return -EBUSY; + + ioremap_prot_hook = hook; + return 0; +} + void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, unsigned long prot) { unsigned long last_addr = phys_addr + size - 1; + pgprot_t pgprot = __pgprot(prot); /* Don't allow outside PHYS_MASK */ if (last_addr & ~PHYS_MASK) @@ -16,7 +28,16 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr)))) return NULL; - return generic_ioremap_prot(phys_addr, size, __pgprot(prot)); + /* + * If a hook is registered (e.g. for confidential computing + * purposes), call that now and barf if it fails. 
+ */ + if (unlikely(ioremap_prot_hook) && + WARN_ON(ioremap_prot_hook(phys_addr, size, &pgprot))) { + return NULL; + } + + return generic_ioremap_prot(phys_addr, size, pgprot); } EXPORT_SYMBOL(ioremap_prot); From patchwork Mon Aug 19 13:19:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768347 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id DA61816B749; Mon, 19 Aug 2024 13:19:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073597; cv=none; b=sUJ9yuEBupQfgpjCeTxOOSA+jO8KqGS5cZrlVOWTqTKVpsLorsAA37PZXuj3PiQp04cLWt4Rxz6cVGJDH59n7/7LUQ9n0GgR/Av5PDZzzhwmN5vIncAIJykCAeuPUt1jLP+jBBvCClgOVFnhEwe5vuRi1QhVpy+jYrSz1Csncwg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073597; c=relaxed/simple; bh=R6Z8Qca/byQwHCHVSqrW/aV1emJzDZKceavF/l2e+14=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=ETjv69yqmtn6bHIICBZl0sMQ2vTlFm4BUeeKQGNus7NjwjRkCk+7R1mXL1VIBC26yC/9hDEDibI3wcRNylURGuPG4XXl4KcHxSHWlGxAA5x+3lXAArtwr17n6fT7nqN8i/po3UgTby4L/TzR6D3d1Y9q17yNpAgfXTUFZMVLnNI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5DE90339; Mon, 19 Aug 2024 06:20:21 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1461B3F73B; Mon, 19 Aug 2024 06:19:50 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 03/19] arm64: rsi: Add RSI definitions Date: Mon, 19 Aug 2024 14:19:08 +0100 Message-Id: <20240819131924.372366-4-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose The RMM (Realm Management Monitor) provides functionality that can be accessed by a realm guest through SMC (Realm Services Interface) calls. The SMC definitions are based on DEN0137[1] version 1.0-rel0-rc1. [1] https://developer.arm.com/-/cdn-downloads/permalink/PDF/Architectures/DEN0137_1.0-rel0-rc1_rmm-arch_external.pdf Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price Acked-by: Catalin Marinas Reviewed-by: Gavin Shan --- Changes since v4: * Update to match the latest RMM spec version 1.0-rel0-rc1. * Make use of the ARM_SMCCC_CALL_VAL macro. * Cast using (_UL macro) various values to unsigned long. 
Changes since v3: * Drop invoke_rsi_fn_smc_with_res() function and call arm_smccc_smc() directly instead. * Rename header guard in rsi_smc.h to be consistent. Changes since v2: * Rename rsi_get_version() to rsi_request_version() * Fix size/alignment of struct realm_config --- arch/arm64/include/asm/rsi_cmds.h | 136 +++++++++++++++++++++ arch/arm64/include/asm/rsi_smc.h | 189 ++++++++++++++++++++++++++++++ 2 files changed, 325 insertions(+) create mode 100644 arch/arm64/include/asm/rsi_cmds.h create mode 100644 arch/arm64/include/asm/rsi_smc.h diff --git a/arch/arm64/include/asm/rsi_cmds.h b/arch/arm64/include/asm/rsi_cmds.h new file mode 100644 index 000000000000..968b03f4e703 --- /dev/null +++ b/arch/arm64/include/asm/rsi_cmds.h @@ -0,0 +1,136 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2023 ARM Ltd. + */ + +#ifndef __ASM_RSI_CMDS_H +#define __ASM_RSI_CMDS_H + +#include + +#include + +#define RSI_GRANULE_SHIFT 12 +#define RSI_GRANULE_SIZE (_AC(1, UL) << RSI_GRANULE_SHIFT) + +enum ripas { + RSI_RIPAS_EMPTY = 0, + RSI_RIPAS_RAM = 1, + RSI_RIPAS_DESTROYED = 2, + RSI_RIPAS_IO = 3, +}; + +static inline unsigned long rsi_request_version(unsigned long req, + unsigned long *out_lower, + unsigned long *out_higher) +{ + struct arm_smccc_res res; + + arm_smccc_smc(SMC_RSI_ABI_VERSION, req, 0, 0, 0, 0, 0, 0, &res); + + if (out_lower) + *out_lower = res.a1; + if (out_higher) + *out_higher = res.a2; + + return res.a0; +} + +static inline unsigned long rsi_get_realm_config(struct realm_config *cfg) +{ + struct arm_smccc_res res; + + arm_smccc_smc(SMC_RSI_REALM_CONFIG, virt_to_phys(cfg), + 0, 0, 0, 0, 0, 0, &res); + return res.a0; +} + +static inline unsigned long rsi_set_addr_range_state(phys_addr_t start, + phys_addr_t end, + enum ripas state, + unsigned long flags, + phys_addr_t *top) +{ + struct arm_smccc_res res; + + arm_smccc_smc(SMC_RSI_IPA_STATE_SET, start, end, state, + flags, 0, 0, 0, &res); + + if (top) + *top = res.a1; + + return res.a0; +} + +/** + * rsi_attestation_token_init - Initialise the operation to retrieve an + * attestation token. + * + * @challenge: The challenge data to be used in the attestation token + * generation. + * @size: Size of the challenge data in bytes. + * + * Initialises the attestation token generation and returns an upper bound + * on the attestation token size that can be used to allocate an adequate + * buffer. The caller is expected to subsequently call + * rsi_attestation_token_continue() to retrieve the attestation token data on + * the same CPU. + * + * Returns: + * On success, returns the upper limit of the attestation report size. + * Otherwise, -EINVAL + */ +static inline unsigned long +rsi_attestation_token_init(const u8 *challenge, unsigned long size) +{ + struct arm_smccc_1_2_regs regs = { 0 }; + + /* The challenge must be at least 32bytes and at most 64bytes */ + if (!challenge || size < 32 || size > 64) + return -EINVAL; + + regs.a0 = SMC_RSI_ATTESTATION_TOKEN_INIT; + memcpy(®s.a1, challenge, size); + arm_smccc_1_2_smc(®s, ®s); + + if (regs.a0 == RSI_SUCCESS) + return regs.a1; + + return -EINVAL; +} + +/** + * rsi_attestation_token_continue - Continue the operation to retrieve an + * attestation token. + * + * @granule: {I}PA of the Granule to which the token will be written. + * @offset: Offset within Granule to start of buffer in bytes. + * @size: The size of the buffer. + * @len: The number of bytes written to the buffer. + * + * Retrieves up to a RSI_GRANULE_SIZE worth of token data per call. 
The caller + * is expected to call rsi_attestation_token_init() before calling this + * function to retrieve the attestation token. + * + * Return: + * * %RSI_SUCCESS - Attestation token retrieved successfully. + * * %RSI_INCOMPLETE - Token generation is not complete. + * * %RSI_ERROR_INPUT - A parameter was not valid. + * * %RSI_ERROR_STATE - Attestation not in progress. + */ +static inline int rsi_attestation_token_continue(phys_addr_t granule, + unsigned long offset, + unsigned long size, + unsigned long *len) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_invoke(SMC_RSI_ATTESTATION_TOKEN_CONTINUE, + granule, offset, size, 0, &res); + + if (len) + *len = res.a1; + return res.a0; +} + +#endif /* __ASM_RSI_CMDS_H */ diff --git a/arch/arm64/include/asm/rsi_smc.h b/arch/arm64/include/asm/rsi_smc.h new file mode 100644 index 000000000000..b76b03a8fea8 --- /dev/null +++ b/arch/arm64/include/asm/rsi_smc.h @@ -0,0 +1,189 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2023 ARM Ltd. + */ + +#ifndef __ASM_RSI_SMC_H_ +#define __ASM_RSI_SMC_H_ + +#include + +/* + * This file describes the Realm Services Interface (RSI) Application Binary + * Interface (ABI) for SMC calls made from within the Realm to the RMM and + * serviced by the RMM. + */ + +/* + * The major version number of the RSI implementation. This is increased when + * the binary format or semantics of the SMC calls change. + */ +#define RSI_ABI_VERSION_MAJOR UL(1) + +/* + * The minor version number of the RSI implementation. This is increased when + * a bug is fixed, or a feature is added without breaking binary compatibility. + */ +#define RSI_ABI_VERSION_MINOR UL(0) + +#define RSI_ABI_VERSION ((RSI_ABI_VERSION_MAJOR << 16) | \ + RSI_ABI_VERSION_MINOR) + +#define RSI_ABI_VERSION_GET_MAJOR(_version) ((_version) >> 16) +#define RSI_ABI_VERSION_GET_MINOR(_version) ((_version) & 0xFFFF) + +#define RSI_SUCCESS UL(0) +#define RSI_ERROR_INPUT UL(1) +#define RSI_ERROR_STATE UL(2) +#define RSI_INCOMPLETE UL(3) +#define RSI_ERROR_UNKNOWN UL(4) + +#define SMC_RSI_FID(n) ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_64, \ + ARM_SMCCC_OWNER_STANDARD, \ + n) + +/* + * Returns RSI version. + * + * arg1 == Requested interface revision + * ret0 == Status /error + * ret1 == Lower implemented interface revision + * ret2 == Higher implemented interface revision + */ +#define SMC_RSI_ABI_VERSION SMC_RSI_FID(0x190) + +/* + * Read feature register. + * + * arg1 == Feature register index + * ret0 == Status /error + * ret1 == Feature register value + */ +#define SMC_RSI_FEATURES SMC_RSI_FID(0x191) + +/* + * Read measurement for the current Realm. + * + * arg1 == Index, which measurements slot to read + * ret0 == Status / error + * ret1 == Measurement value, bytes: 0 - 7 + * ret2 == Measurement value, bytes: 7 - 15 + * ret3 == Measurement value, bytes: 16 - 23 + * ret4 == Measurement value, bytes: 24 - 31 + * ret5 == Measurement value, bytes: 32 - 39 + * ret6 == Measurement value, bytes: 40 - 47 + * ret7 == Measurement value, bytes: 48 - 55 + * ret8 == Measurement value, bytes: 56 - 63 + */ +#define SMC_RSI_MEASUREMENT_READ SMC_RSI_FID(0x192) + +/* + * Extend Realm Extensible Measurement (REM) value. 
+ * + * arg1 == Index, which measurements slot to extend + * arg2 == Size of realm measurement in bytes, max 64 bytes + * arg3 == Measurement value, bytes: 0 - 7 + * arg4 == Measurement value, bytes: 7 - 15 + * arg5 == Measurement value, bytes: 16 - 23 + * arg6 == Measurement value, bytes: 24 - 31 + * arg7 == Measurement value, bytes: 32 - 39 + * arg8 == Measurement value, bytes: 40 - 47 + * arg9 == Measurement value, bytes: 48 - 55 + * arg10 == Measurement value, bytes: 56 - 63 + * ret0 == Status / error + */ +#define SMC_RSI_MEASUREMENT_EXTEND SMC_RSI_FID(0x193) + +/* + * Initialize the operation to retrieve an attestation token. + * + * arg1 == Challenge value, bytes: 0 - 7 + * arg2 == Challenge value, bytes: 7 - 15 + * arg3 == Challenge value, bytes: 16 - 23 + * arg4 == Challenge value, bytes: 24 - 31 + * arg5 == Challenge value, bytes: 32 - 39 + * arg6 == Challenge value, bytes: 40 - 47 + * arg7 == Challenge value, bytes: 48 - 55 + * arg8 == Challenge value, bytes: 56 - 63 + * ret0 == Status / error + * ret1 == Upper bound of token size in bytes + */ +#define SMC_RSI_ATTESTATION_TOKEN_INIT SMC_RSI_FID(0x194) + +/* + * Continue the operation to retrieve an attestation token. + * + * arg1 == The IPA of token buffer + * arg2 == Offset within the granule of the token buffer + * arg3 == Size of the granule buffer + * ret0 == Status / error + * ret1 == Length of token bytes copied to the granule buffer + */ +#define SMC_RSI_ATTESTATION_TOKEN_CONTINUE SMC_RSI_FID(0x195) + +#ifndef __ASSEMBLY__ + +struct realm_config { + union { + struct { + unsigned long ipa_bits; /* Width of IPA in bits */ + unsigned long hash_algo; /* Hash algorithm */ + }; + u8 pad[0x200]; + }; + union { + u8 rpv[64]; /* Realm Personalization Value */ + u8 pad2[0xe00]; + }; + /* + * The RMM requires the configuration structure to be aligned to a 4k + * boundary, ensure this happens by aligning this structure. + */ +} __aligned(0x1000); + +#endif /* __ASSEMBLY__ */ + +/* + * Read configuration for the current Realm. + * + * arg1 == struct realm_config addr + * ret0 == Status / error + */ +#define SMC_RSI_REALM_CONFIG SMC_RSI_FID(0x196) + +/* + * Request RIPAS of a target IPA range to be changed to a specified value. + * + * arg1 == Base IPA address of target region + * arg2 == Top of the region + * arg3 == RIPAS value + * arg4 == flags + * ret0 == Status / error + * ret1 == Top of modified IPA range + */ +#define SMC_RSI_IPA_STATE_SET SMC_RSI_FID(0x197) + +#define RSI_NO_CHANGE_DESTROYED UL(0) +#define RSI_CHANGE_DESTROYED UL(1) + +/* + * Get RIPAS of a target IPA range. + * + * arg1 == Base IPA of target region + * arg2 == End of target IPA region + * ret0 == Status / error + * ret1 == Top of IPA region which has the reported RIPAS value + * ret2 == RIPAS value + */ +#define SMC_RSI_IPA_STATE_GET SMC_RSI_FID(0x198) + +/* + * Make a Host call. 
+ * + * arg1 == IPA of host call structure + * ret0 == Status / error + */ +#define SMC_RSI_HOST_CALL SMC_RSI_FID(0x199) + +#endif /* __ASM_RSI_SMC_H_ */ From patchwork Mon Aug 19 13:19:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768348 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 4278016D33E; Mon, 19 Aug 2024 13:20:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073601; cv=none; b=IyZTBmO39wfSfmFpRltqY5al/UpnOHTZVViA0vD04ZbpQo2CSZDqeeCvY9nnWmrwuJyZN42sSWZXggVIk6wShe+gJcfPCPABDKBjqiTPt6ppDqqeqcNFbE3xdeGU/SJAxwQF7rSlHOoypJK/TPBweckmKSJJJU2Mo5vnOAvnyQw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073601; c=relaxed/simple; bh=sGP6LPmkwqwf/MUAO6MKlNTjTOPIr6zL5P31Exl3W2Q=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Y3aomtChzczXkITDUgVG9LaDGu8fadPsGenC/X4qwBn0QO6gTWQhKFMfebfkwuyHzbankp/9WaQC5shJQgf2sgG+dgfPc7t4Y493loykxCQLg7FAYa4g7oA8qnim/Im6IPrPWj1sz7Oa81P2MXDXOF8q/W35EP9yvzTqqErrw2s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BCDE9113E; Mon, 19 Aug 2024 06:20:25 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C20DC3F73B; Mon, 19 Aug 2024 06:19:55 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun , Jean-Philippe Brucker Subject: [PATCH v5 04/19] firmware/psci: Add psci_early_test_conduit() Date: Mon, 19 Aug 2024 14:19:09 +0100 Message-Id: <20240819131924.372366-5-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Jean-Philippe Brucker Add a function to test early if PSCI is present and what conduit it uses. Because the PSCI conduit corresponds to the SMCCC one, this will let the kernel know whether it can use SMC instructions to discuss with the Realm Management Monitor (RMM), early enough to enable RAM and serial access when running in a Realm. 
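The intended early usage is roughly as follows (a sketch only; the function name below is made up, and the real caller is the RSI initialisation added later in this series):

	void __init example_early_probe(void)
	{
		/*
		 * If PSCI does not use SMC then there is no firmware at EL3
		 * prepared to handle one, so issuing an SMC-based probe could
		 * be UNDEFINED. Bail out early in that case.
		 */
		if (!psci_early_test_conduit(SMCCC_CONDUIT_SMC))
			return;

		/* Safe to probe SMC-based interfaces such as RSI from here. */
	}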
Signed-off-by: Jean-Philippe Brucker Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- v4: New patch --- drivers/firmware/psci/psci.c | 25 +++++++++++++++++++++++++ include/linux/psci.h | 5 +++++ 2 files changed, 30 insertions(+) diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c index 2328ca58bba6..2b308f97ef2c 100644 --- a/drivers/firmware/psci/psci.c +++ b/drivers/firmware/psci/psci.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -769,6 +770,30 @@ int __init psci_dt_init(void) return ret; } +/* + * Test early if PSCI is supported, and if its conduit matches @conduit + */ +bool __init psci_early_test_conduit(enum arm_smccc_conduit conduit) +{ + int len; + int psci_node; + const char *method; + unsigned long dt_root; + + /* DT hasn't been unflattened yet, we have to work with the flat blob */ + dt_root = of_get_flat_dt_root(); + psci_node = of_get_flat_dt_subnode_by_name(dt_root, "psci"); + if (psci_node <= 0) + return false; + + method = of_get_flat_dt_prop(psci_node, "method", &len); + if (!method) + return false; + + return (conduit == SMCCC_CONDUIT_SMC && strncmp(method, "smc", len) == 0) || + (conduit == SMCCC_CONDUIT_HVC && strncmp(method, "hvc", len) == 0); +} + #ifdef CONFIG_ACPI /* * We use PSCI 0.2+ when ACPI is deployed on ARM64 and it's diff --git a/include/linux/psci.h b/include/linux/psci.h index 4ca0060a3fc4..a1fc1703ba20 100644 --- a/include/linux/psci.h +++ b/include/linux/psci.h @@ -45,8 +45,13 @@ struct psci_0_1_function_ids get_psci_0_1_function_ids(void); #if defined(CONFIG_ARM_PSCI_FW) int __init psci_dt_init(void); +bool __init psci_early_test_conduit(enum arm_smccc_conduit conduit); #else static inline int psci_dt_init(void) { return 0; } +static inline bool psci_early_test_conduit(enum arm_smccc_conduit conduit) +{ + return false; +} #endif #if defined(CONFIG_ARM_PSCI_FW) && defined(CONFIG_ACPI) From patchwork Mon Aug 19 13:19:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768349 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 883F816C6BC; Mon, 19 Aug 2024 13:20:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073606; cv=none; b=GeUDHztths2sXAXbd9aBD/wpQO6um07+PxVLPrUEO3PaJZ2lT5JoHzGtJq0IiTJeIkUFFYS9H3Up+aQWpC52TP5P/Jk1uHDYt/+jrJaxvSQgsLSOvUfmbEjdtf95tKEnn8k+ioYSgJboM++uJH7SA45toJC6zT2jYrAri6piLNg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073606; c=relaxed/simple; bh=onsiJRYeWVpV6ubK1t7LO3lwEYbyiDNOOCkXtz72r0c=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=cYE57g7v47CQsTAttLNEx9rVMjg4Rptncf/l7L02qC5g5TzOIpI60QdZRSMkQYEyFaq0YI3CeGEpmvu14I/9bnN6yn3TGhmNu+y6ueggPIrduEhbsQiOrUBMnWUufwroanfCSvzlfaVnIj51zKIrO0iOqKha+sTG/rfy8POnSkQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0167A1063; 
Mon, 19 Aug 2024 06:20:30 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0A7103F73B; Mon, 19 Aug 2024 06:19:59 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 05/19] arm64: Detect if in a realm and set RIPAS RAM Date: Mon, 19 Aug 2024 14:19:10 +0100 Message-Id: <20240819131924.372366-6-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose Detect that the VM is a realm guest by the presence of the RSI interface. If in a realm then all memory needs to be marked as RIPAS RAM initially, the loader may or may not have done this for us. To be sure iterate over all RAM and mark it as such. Any failure is fatal as that implies the RAM regions passed to Linux are incorrect - which would mean failing later when attempting to access non-existent RAM. Signed-off-by: Suzuki K Poulose Co-developed-by: Steven Price Signed-off-by: Steven Price --- Changes since v4: * Minor tidy ups. Changes since v3: * Provide safe/unsafe versions for converting memory to protected, using the safer version only for the early boot. * Use the new psci_early_test_conduit() function to avoid calling an SMC if EL3 is not present (or not configured to handle an SMC). Changes since v2: * Use DECLARE_STATIC_KEY_FALSE rather than "extern struct static_key_false". * Rename set_memory_range() to rsi_set_memory_range(). * Downgrade some BUG()s to WARN()s and handle the condition by propagating up the stack. Comment the remaining case that ends in a BUG() to explain why. * Rely on the return from rsi_request_version() rather than checking the version the RMM claims to support. * Rename the generic sounding arm64_setup_memory() to arm64_rsi_setup_memory() and move the call site to setup_arch(). --- arch/arm64/include/asm/rsi.h | 65 ++++++++++++++++++++++++++++++ arch/arm64/kernel/Makefile | 3 +- arch/arm64/kernel/rsi.c | 78 ++++++++++++++++++++++++++++++++++++ arch/arm64/kernel/setup.c | 8 ++++ 4 files changed, 153 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/include/asm/rsi.h create mode 100644 arch/arm64/kernel/rsi.c diff --git a/arch/arm64/include/asm/rsi.h b/arch/arm64/include/asm/rsi.h new file mode 100644 index 000000000000..2bc013badbc3 --- /dev/null +++ b/arch/arm64/include/asm/rsi.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2024 ARM Ltd. 
+ */ + +#ifndef __ASM_RSI_H_ +#define __ASM_RSI_H_ + +#include +#include + +DECLARE_STATIC_KEY_FALSE(rsi_present); + +void __init arm64_rsi_init(void); +void __init arm64_rsi_setup_memory(void); +static inline bool is_realm_world(void) +{ + return static_branch_unlikely(&rsi_present); +} + +static inline int rsi_set_memory_range(phys_addr_t start, phys_addr_t end, + enum ripas state, unsigned long flags) +{ + unsigned long ret; + phys_addr_t top; + + while (start != end) { + ret = rsi_set_addr_range_state(start, end, state, flags, &top); + if (WARN_ON(ret || top < start || top > end)) + return -EINVAL; + start = top; + } + + return 0; +} + +/* + * Convert the specified range to RAM. Do not use this if you rely on the + * contents of a page that may already be in RAM state. + */ +static inline int rsi_set_memory_range_protected(phys_addr_t start, + phys_addr_t end) +{ + return rsi_set_memory_range(start, end, RSI_RIPAS_RAM, + RSI_CHANGE_DESTROYED); +} + +/* + * Convert the specified range to RAM. Do not convert any pages that may have + * been DESTROYED, without our permission. + */ +static inline int rsi_set_memory_range_protected_safe(phys_addr_t start, + phys_addr_t end) +{ + return rsi_set_memory_range(start, end, RSI_RIPAS_RAM, + RSI_NO_CHANGE_DESTROYED); +} + +static inline int rsi_set_memory_range_shared(phys_addr_t start, + phys_addr_t end) +{ + return rsi_set_memory_range(start, end, RSI_RIPAS_EMPTY, + RSI_NO_CHANGE_DESTROYED); +} +#endif /* __ASM_RSI_H_ */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 2b112f3b7510..71c29a2a2f19 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -33,7 +33,8 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ return_address.o cpuinfo.o cpu_errata.o \ cpufeature.o alternative.o cacheinfo.o \ smp.o smp_spin_table.o topology.o smccc-call.o \ - syscall.o proton-pack.o idle.o patching.o pi/ + syscall.o proton-pack.o idle.o patching.o pi/ \ + rsi.o obj-$(CONFIG_COMPAT) += sys32.o signal32.o \ sys_compat.o diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c new file mode 100644 index 000000000000..128a9a05a96b --- /dev/null +++ b/arch/arm64/kernel/rsi.c @@ -0,0 +1,78 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 ARM Ltd. + */ + +#include +#include +#include +#include + +DEFINE_STATIC_KEY_FALSE_RO(rsi_present); +EXPORT_SYMBOL(rsi_present); + +static bool rsi_version_matches(void) +{ + unsigned long ver_lower, ver_higher; + unsigned long ret = rsi_request_version(RSI_ABI_VERSION, + &ver_lower, + &ver_higher); + + if (ret == SMCCC_RET_NOT_SUPPORTED) + return false; + + if (ret != RSI_SUCCESS) { + pr_err("RME: RMM doesn't support RSI version %lu.%lu. Supported range: %lu.%lu-%lu.%lu\n", + RSI_ABI_VERSION_MAJOR, RSI_ABI_VERSION_MINOR, + RSI_ABI_VERSION_GET_MAJOR(ver_lower), + RSI_ABI_VERSION_GET_MINOR(ver_lower), + RSI_ABI_VERSION_GET_MAJOR(ver_higher), + RSI_ABI_VERSION_GET_MINOR(ver_higher)); + return false; + } + + pr_info("RME: Using RSI version %lu.%lu\n", + RSI_ABI_VERSION_GET_MAJOR(ver_lower), + RSI_ABI_VERSION_GET_MINOR(ver_lower)); + + return true; +} + +void __init arm64_rsi_setup_memory(void) +{ + u64 i; + phys_addr_t start, end; + + if (!is_realm_world()) + return; + + /* + * Iterate over the available memory ranges and convert the state to + * protected memory. We should take extra care to ensure that we DO NOT + * permit any "DESTROYED" pages to be converted to "RAM". 
+ * + * BUG_ON is used because if the attempt to switch the memory to + * protected has failed here, then future accesses to the memory are + * simply going to be reflected as a SEA (Synchronous External Abort) + * which we can't handle. Bailing out early prevents the guest limping + * on and dying later. + */ + for_each_mem_range(i, &start, &end) { + BUG_ON(rsi_set_memory_range_protected_safe(start, end)); + } +} + +void __init arm64_rsi_init(void) +{ + /* + * If PSCI isn't using SMC, RMM isn't present. Don't try to execute an + * SMC as it could be UNDEFINED. + */ + if (!psci_early_test_conduit(SMCCC_CONDUIT_SMC)) + return; + if (!rsi_version_matches()) + return; + + static_branch_enable(&rsi_present); +} + diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index a096e2451044..143f87615af0 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -43,6 +43,7 @@ #include #include #include +#include #include #include #include @@ -293,6 +294,11 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) * cpufeature code and early parameters. */ jump_label_init(); + /* + * Init RSI before early param so that "earlycon" console uses the + * shared alias when in a realm + */ + arm64_rsi_init(); parse_early_param(); dynamic_scs_init(); @@ -328,6 +334,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) arm64_memblock_init(); + arm64_rsi_setup_memory(); + paging_init(); acpi_table_upgrade(); From patchwork Mon Aug 19 13:19:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768350 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 7E62B16DC35; Mon, 19 Aug 2024 13:20:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073609; cv=none; b=jkBK/kGUiAtM/pp2AEpJHl7CU2ExYv3kwzFsyhHCR7aE/u9URv+hIshOHNDcM8USmuDDLb8Sz+TrUqkKr+YAxDeNdxOcDh/0fFuDFfSvLKulh7+0kNdAgBfdzfH2yROHdhok/PWwMfTfKz65pkNaKlFMUJN7NpTGG2ECuMMTWWY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073609; c=relaxed/simple; bh=P7ON8kDElagaOXZ+UsP/vUuQd+zYHfvOT4Rh8IKD3lo=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=pL4yhuyl4KeD23KytyZLxVLxVPYHm+BTFO5caHWVS0x+ArKIVy+saaMyvLSank5RantSBzlMY/HyCJCHd8X7wF0RTLWSKAW6jtHTfIbW14CFDbi0THPeKhwTcyexWmuvkgRLLZ6RmCp6yzM/6P3YGm7QJqPlpbQCGn9FyjPBM8E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 23B8611FB; Mon, 19 Aug 2024 06:20:34 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 475B63F73B; Mon, 19 Aug 2024 06:20:04 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, 
linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 06/19] arm64: realm: Query IPA size from the RMM Date: Mon, 19 Aug 2024 14:19:11 +0100 Message-Id: <20240819131924.372366-7-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The top bit of the configured IPA size is used as an attribute to control whether the address is protected or shared. Query the configuration from the RMM to assertain which bit this is. Co-developed-by: Suzuki K Poulose Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price Reviewed-by: Catalin Marinas Reviewed-by: Gavin Shan --- Changes since v4: * Make PROT_NS_SHARED check is_realm_world() to reduce impact on non-CCA systems. Changes since v2: * Drop unneeded extra brackets from PROT_NS_SHARED. * Drop the explicit alignment from 'config' as struct realm_config now specifies the alignment. --- arch/arm64/include/asm/pgtable-prot.h | 4 ++++ arch/arm64/kernel/rsi.c | 8 ++++++++ 2 files changed, 12 insertions(+) diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index b11cfb9fdd37..5e578274a3b7 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -68,8 +68,12 @@ #include #include +#include extern bool arm64_use_ng_mappings; +extern unsigned long prot_ns_shared; + +#define PROT_NS_SHARED (is_realm_world() ? prot_ns_shared : 0) #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0) #define PMD_MAYBE_NG (arm64_use_ng_mappings ? 
PMD_SECT_NG : 0) diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c index 128a9a05a96b..e968a5c9929e 100644 --- a/arch/arm64/kernel/rsi.c +++ b/arch/arm64/kernel/rsi.c @@ -8,6 +8,11 @@ #include #include +struct realm_config config; + +unsigned long prot_ns_shared; +EXPORT_SYMBOL(prot_ns_shared); + DEFINE_STATIC_KEY_FALSE_RO(rsi_present); EXPORT_SYMBOL(rsi_present); @@ -72,6 +77,9 @@ void __init arm64_rsi_init(void) return; if (!rsi_version_matches()) return; + if (rsi_get_realm_config(&config)) + return; + prot_ns_shared = BIT(config.ipa_bits - 1); static_branch_enable(&rsi_present); } From patchwork Mon Aug 19 13:19:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768351 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id BC73C16C865; Mon, 19 Aug 2024 13:20:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073614; cv=none; b=eFBRCkpiVwxKFJdyum0ZOBCMLryVPxGvyn2yrozgvDgdr0cgODt+asbaow4LPpU7Oa4HsEYl7AxuM5QPKQA2RYF+ni6iO7XMuNKyBkAP1+gH1GEzLowUpZfzIdNXQbMSiof4Fqh3zPrQmfH8W/y7SWD7FhxR0cuR/d/OQYEhyBc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073614; c=relaxed/simple; bh=H4ZI12wh5KC8ZiyjSPbUMmWL1N8wmgFC/Kz4zTDDNGg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=BIbLBadxG8UpkrIIisgji/WC+fu+stHtyAJnNNKgBpt3g1yJCJebhkJqLGHhe3bcmU3zv/pSxDRxT6+owNutE3XDfE+curB0VZ2eUCPLD6BLMKzeNWzOZdJr2X+zXav/yamiJLR165imlU43TgJGgtb4VUkmFAfw1+qsO8mDs6w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 19A051063; Mon, 19 Aug 2024 06:20:38 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5EDD83F73B; Mon, 19 Aug 2024 06:20:08 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 07/19] arm64: rsi: Add support for checking whether an MMIO is protected Date: Mon, 19 Aug 2024 14:19:12 +0100 Message-Id: <20240819131924.372366-8-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose On Arm CCA, with RMM-v1.0, all MMIO regions are shared. However, in the future, an Arm CCA-v1.0 compliant guest may be run in a lesser privileged partition in the Realm World (with Arm CCA-v1.1 Planes feature). 
In this case, some of the MMIO regions may be emulated by a higher privileged component in the Realm world, i.e, protected. Thus the guest must decide today, whether a given MMIO region is shared vs Protected and create the stage1 mapping accordingly. On Arm CCA, this detection is based on the "IPA State" (RIPAS == RIPAS_IO). Provide a helper to run this check on a given range of MMIO. Also, provide a arm64 helper which may be hooked in by other solutions. Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- New patch for v5 --- arch/arm64/include/asm/io.h | 8 ++++++++ arch/arm64/include/asm/rsi.h | 3 +++ arch/arm64/include/asm/rsi_cmds.h | 21 +++++++++++++++++++++ arch/arm64/kernel/rsi.c | 26 ++++++++++++++++++++++++++ 4 files changed, 58 insertions(+) diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h index 1ada23a6ec19..a6c551c5e44e 100644 --- a/arch/arm64/include/asm/io.h +++ b/arch/arm64/include/asm/io.h @@ -17,6 +17,7 @@ #include #include #include +#include /* * Generic IO read/write. These perform native-endian accesses. @@ -318,4 +319,11 @@ extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size, unsigned long flags); #define arch_memremap_can_ram_remap arch_memremap_can_ram_remap +static inline bool arm64_is_iomem_private(phys_addr_t phys_addr, size_t size) +{ + if (unlikely(is_realm_world())) + return arm64_rsi_is_protected_mmio(phys_addr, size); + return false; +} + #endif /* __ASM_IO_H */ diff --git a/arch/arm64/include/asm/rsi.h b/arch/arm64/include/asm/rsi.h index 2bc013badbc3..e31231b50b6a 100644 --- a/arch/arm64/include/asm/rsi.h +++ b/arch/arm64/include/asm/rsi.h @@ -13,6 +13,9 @@ DECLARE_STATIC_KEY_FALSE(rsi_present); void __init arm64_rsi_init(void); void __init arm64_rsi_setup_memory(void); + +bool arm64_rsi_is_protected_mmio(phys_addr_t base, size_t size); + static inline bool is_realm_world(void) { return static_branch_unlikely(&rsi_present); diff --git a/arch/arm64/include/asm/rsi_cmds.h b/arch/arm64/include/asm/rsi_cmds.h index 968b03f4e703..c2363f36d167 100644 --- a/arch/arm64/include/asm/rsi_cmds.h +++ b/arch/arm64/include/asm/rsi_cmds.h @@ -45,6 +45,27 @@ static inline unsigned long rsi_get_realm_config(struct realm_config *cfg) return res.a0; } +static inline unsigned long rsi_ipa_state_get(phys_addr_t start, + phys_addr_t end, + enum ripas *state, + phys_addr_t *top) +{ + struct arm_smccc_res res; + + arm_smccc_smc(SMC_RSI_IPA_STATE_GET, + start, end, 0, 0, 0, 0, 0, + &res); + + if (res.a0 == RSI_SUCCESS) { + if (top) + *top = res.a1; + if (state) + *state = res.a2; + } + + return res.a0; +} + static inline unsigned long rsi_set_addr_range_state(phys_addr_t start, phys_addr_t end, enum ripas state, diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c index e968a5c9929e..381a5b9a5333 100644 --- a/arch/arm64/kernel/rsi.c +++ b/arch/arm64/kernel/rsi.c @@ -67,6 +67,32 @@ void __init arm64_rsi_setup_memory(void) } } +bool arm64_rsi_is_protected_mmio(phys_addr_t base, size_t size) +{ + enum ripas ripas; + phys_addr_t end, top; + + /* Overflow ? 
*/ + if (WARN_ON(base + size < base)) + return false; + + end = ALIGN(base + size, RSI_GRANULE_SIZE); + base = ALIGN_DOWN(base, RSI_GRANULE_SIZE); + + while (base < end) { + if (WARN_ON(rsi_ipa_state_get(base, end, &ripas, &top))) + break; + if (WARN_ON(top <= base)) + break; + if (ripas != RSI_RIPAS_IO) + break; + base = top; + } + + return (size && base >= end); +} +EXPORT_SYMBOL(arm64_rsi_is_protected_mmio); + void __init arm64_rsi_init(void) { /* From patchwork Mon Aug 19 13:19:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768352 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 5ADD116C865; Mon, 19 Aug 2024 13:20:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073617; cv=none; b=UlpA3QmIUrXIM0KkRdU8JYQV7OYIHdpfdgc2oPn/OlCuDem6qJo+N/uwnSJIVFVGkiju5gIKjSNpksCFWKyDlIK3aAl1uuuk73899ihPZCRzb4zPl/PNclrOkQPawCbymSI0YSYsKhUMZoBi8GpizA2iSj/TIYcvd8HTxNbent0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073617; c=relaxed/simple; bh=TSPywSLsjMo97jZ+kUql1y4frPVKBelvm4qNdHTkfBA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Yn6lyXUsvZTBeB4qOVsFobrQbUkUfTZcwxKSuZuYdKPOKphlHol/7dC+5X7eDBWnBPPMo4vAQhjq5JtCspi1VGnNwtB/CfetM6fopJwre2b29r5pTDcFYStsO02CT4zfgByZ9iOhvXNKWQhl8mJCoFDQ+tc/Uk+15Mekxott8MY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E6F71339; Mon, 19 Aug 2024 06:20:41 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 43F873F73B; Mon, 19 Aug 2024 06:20:12 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 08/19] fixmap: Allow architecture overriding set_fixmap_io Date: Mon, 19 Aug 2024 14:19:13 +0100 Message-Id: <20240819131924.372366-9-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose For a realm guest it will be necessary to ensure IO mappings are shared so that the VMM can emulate the device. The following patch will provide an implementation of set_fixmap_io for arm64 setting the shared bit (if in a realm). 
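With this change, an architecture opts out of the generic definition by providing its own set_fixmap_io before including asm-generic/fixmap.h, along these lines (a sketch mirroring the arm64 override added later in this series):

	/* In the architecture's asm/fixmap.h: */
	#define set_fixmap_io set_fixmap_io
	void set_fixmap_io(enum fixed_addresses idx, phys_addr_t phys);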
Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price --- New patch for v5 --- include/asm-generic/fixmap.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/include/asm-generic/fixmap.h b/include/asm-generic/fixmap.h index 29cab7947980..9b75fe2bd8fd 100644 --- a/include/asm-generic/fixmap.h +++ b/include/asm-generic/fixmap.h @@ -94,8 +94,10 @@ static inline unsigned long virt_to_fix(const unsigned long vaddr) /* * Some fixmaps are for IO */ +#ifndef set_fixmap_io #define set_fixmap_io(idx, phys) \ __set_fixmap(idx, phys, FIXMAP_PAGE_IO) +#endif #endif /* __ASSEMBLY__ */ #endif /* __ASM_GENERIC_FIXMAP_H */ From patchwork Mon Aug 19 13:19:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768353 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 6572E16C865; Mon, 19 Aug 2024 13:20:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073621; cv=none; b=daP1iVlGOA2WM3WLD5naXPOAb4ODXnmsV1q0+OcKW3Io7SI01XDn/5aw3iynApCyvNfMhnp9mzJo8HljDOLEg3JTxxu4DAqpQI2W1XMhVjRrgXOJfPkdEF3DW8BV1CPDVta7k7DY74pU8pdNjZA2vux81INv7LJRgjN0/FgmJ4U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073621; c=relaxed/simple; bh=suH1Ez+pQ/HxRlZPjt7BZpKltP0lINstJ4+k9YpV2XU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Yxx/Rt9Sy98Cp426N7FzZLamJcyzYNH2QjC8IlsJ9Lr5aV/895dOg0Ys7SlO8X2ccDiVkXB8+udyF6DrzfxhIFRP639dY3rsrwluhG75IW6r3csNZtIqbPrZlVgX2ZCS3tu7PB+kYyO28bkzKocQJEPKLRfm1oYOx0J1hTAJ2iA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 14792113E; Mon, 19 Aug 2024 06:20:46 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 361163F73B; Mon, 19 Aug 2024 06:20:16 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 09/19] fixmap: Pass down the full phys address for set_fixmap_io Date: Mon, 19 Aug 2024 14:19:14 +0100 Message-Id: <20240819131924.372366-10-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose For early I/O mapping using fixmap, we mask the address by PAGE_MASK base and then map it to the FIXMAP slot. 
However, with confidential computing, the granularity at which "protections" (encrypted vs decrypted) are applied may be finer than the PAGE_SIZE. e.g., for Arm CCA it is 4K while an arm64 kernel could be using 64K pagesize. However we need to know the exact address being mapped in. Thus in-order to calculate the accurate protection, pass down the exact phys address to the helpers. This would be later used by arm64 to detect if the MMIO address is shared vs protected. The users of such drivers already cope with running the same code with "4K" page size, thus mapping a PAGE_SIZE covering the address range is considered acceptable. Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price --- New patch for v5 --- drivers/tty/serial/earlycon.c | 2 +- include/asm-generic/fixmap.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/tty/serial/earlycon.c b/drivers/tty/serial/earlycon.c index a5fbb6ed38ae..c8414b648d47 100644 --- a/drivers/tty/serial/earlycon.c +++ b/drivers/tty/serial/earlycon.c @@ -40,7 +40,7 @@ static void __iomem * __init earlycon_map(resource_size_t paddr, size_t size) { void __iomem *base; #ifdef CONFIG_FIX_EARLYCON_MEM - set_fixmap_io(FIX_EARLYCON_MEM_BASE, paddr & PAGE_MASK); + set_fixmap_io(FIX_EARLYCON_MEM_BASE, paddr); base = (void __iomem *)__fix_to_virt(FIX_EARLYCON_MEM_BASE); base += paddr & ~PAGE_MASK; #else diff --git a/include/asm-generic/fixmap.h b/include/asm-generic/fixmap.h index 9b75fe2bd8fd..8d2222035ed2 100644 --- a/include/asm-generic/fixmap.h +++ b/include/asm-generic/fixmap.h @@ -96,7 +96,7 @@ static inline unsigned long virt_to_fix(const unsigned long vaddr) */ #ifndef set_fixmap_io #define set_fixmap_io(idx, phys) \ - __set_fixmap(idx, phys, FIXMAP_PAGE_IO) + __set_fixmap(idx, phys & PAGE_MASK, FIXMAP_PAGE_IO) #endif #endif /* __ASSEMBLY__ */ From patchwork Mon Aug 19 13:19:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768354 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 6C8DB1741F8; Mon, 19 Aug 2024 13:20:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073625; cv=none; b=mjM81N6Q4tI3E1JdIXqXX/eqES2K2ML83ECNyDTGQCPtjmCD7dkWss93gjwBLtEqUxvLWy4a9STvVBA9y534cs1GyNSY/sk4vq/MRr73Iesmh6ZvZCABDOROSgHGLxNGaSVgBSMUz8zWEtpZyneBH0DdEOU2+bpokn0z7Tsf8DY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073625; c=relaxed/simple; bh=X9rSIQC9SH0+joSsR6x+uTW2R5jX6hBOUSDo1zmSjFk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=dtyFHziGKLof1FSj3pd2Sf6U5/vb0WF/voyLfDC5cr1K2e26j8Z5MoJ279lK6Tpsnh/5gMbst++VqTgLE5kvd7mNrHHibIxvMzqheqqrq/TuoGZ5YhXAN1vtcn1JfGVH7MH3x8nUSk2teogVSDQRILgW+UQzWhr7vc20PagRfNA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1229D339; Mon, 19 Aug 2024 06:20:50 -0700 (PDT) Received: from e122027.arm.com (unknown 
[10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 69F013F73B; Mon, 19 Aug 2024 06:20:20 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 10/19] arm64: Override set_fixmap_io Date: Mon, 19 Aug 2024 14:19:15 +0100 Message-Id: <20240819131924.372366-11-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose Override set_fixmap_io() to set the shared permission for the host in the case of a CC guest. For now we mark it shared unconditionally. If/when support for device assignment and device emulation within the realm is added in the future, this will need to filter on the physical address and make the decision accordingly. Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price --- New patch for v5 --- arch/arm64/include/asm/fixmap.h | 2 ++ arch/arm64/mm/mmu.c | 17 +++++++++++++++ 2 files changed, 19 insertions(+) diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h index 87e307804b99..2c20da3a468c 100644 --- a/arch/arm64/include/asm/fixmap.h +++ b/arch/arm64/include/asm/fixmap.h @@ -108,6 +108,8 @@ void __init early_fixmap_init(void); #define __late_clear_fixmap(idx) __set_fixmap((idx), 0, FIXMAP_PAGE_CLEAR) extern void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot); +#define set_fixmap_io set_fixmap_io +void set_fixmap_io(enum fixed_addresses idx, phys_addr_t phys); #include diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 353ea5dc32b8..06b66c23c124 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1193,6 +1193,23 @@ void vmemmap_free(unsigned long start, unsigned long end, } #endif /* CONFIG_MEMORY_HOTPLUG */ +void set_fixmap_io(enum fixed_addresses idx, phys_addr_t phys) +{ + pgprot_t prot = FIXMAP_PAGE_IO; + + /* + * set_fixmap_io() maps a single page covering phys. + * To make an accurate decision, we check protection at the + * smallest supported page size (4K).
+ */ + if (!arm64_is_iomem_private(phys, SZ_4K)) + prot = pgprot_decrypted(prot); + else + prot = pgprot_encrypted(prot); + + __set_fixmap(idx, phys, prot); +} + int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) { pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot)); From patchwork Mon Aug 19 13:19:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768355 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id C158D16CD11; Mon, 19 Aug 2024 13:20:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073630; cv=none; b=klxq09lt1ppMfAGnoetVNxPE6pSOEemCGoA8D7LcBsIWHlQT9SgAryXpuMWjTlRNP4/rzHowlEW+BbrjldSDgBajQf/F66qcn4tsGDXgFHFYNN8WlvLuHMnfkSOCZh0uBn9jXpIFkX2W43k9dRoM0p5bKxyhytqtBFaaTXCF2Uw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073630; c=relaxed/simple; bh=Q9AA0bsOlUx9dNmprQGHuNvhKFPzaftwoNqmHHPLeVQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=TAlXc/SZLl9fq5zZaybonKb04TlAHZWw5P5IwYCTnD+2Y128M7WCq5PAJ1Y/xPh3+yIuTbIzBa9QSLeMnhpF4raMSXi+HJ1jy6evOvsDSVi5AKPfLCfsz6mIXdMjZPRnJrXqO1C0jLmCl6tJdz6Am8bc+7Ztg7Dno0iyNDntCBk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 344AE1063; Mon, 19 Aug 2024 06:20:54 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6999C3F73B; Mon, 19 Aug 2024 06:20:24 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 11/19] arm64: rsi: Map unprotected MMIO as decrypted Date: Mon, 19 Aug 2024 14:19:16 +0100 Message-Id: <20240819131924.372366-12-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose Instead of marking every MMIO as shared, check if the given region is "Protected" and apply the permissions accordingly. 
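As a worked example (the numbers are illustrative only, not taken from this series): with config.ipa_bits = 40 the shared alias is selected by BIT(39). An MMIO region that the RMM does not report as protected - say a VMM-emulated UART at 0x09000000 - gets pgprot_decrypted() from the hook below, so the mapping resolves to 0x09000000 | BIT(39) in the shared (upper) half of the IPA space that the host can service; a region reported as protected keeps pgprot_encrypted() and stays in the protected (lower) half.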
Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price --- New patch for v5 --- arch/arm64/kernel/rsi.c | 15 +++++++++++++++ arch/arm64/mm/mmu.c | 2 +- 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c index 381a5b9a5333..672dd6862298 100644 --- a/arch/arm64/kernel/rsi.c +++ b/arch/arm64/kernel/rsi.c @@ -6,6 +6,8 @@ #include #include #include + +#include #include struct realm_config config; @@ -93,6 +95,16 @@ bool arm64_rsi_is_protected_mmio(phys_addr_t base, size_t size) } EXPORT_SYMBOL(arm64_rsi_is_protected_mmio); +static int realm_ioremap_hook(phys_addr_t phys, size_t size, pgprot_t *prot) +{ + if (arm64_rsi_is_protected_mmio(phys, size)) + *prot = pgprot_encrypted(*prot); + else + *prot = pgprot_decrypted(*prot); + + return 0; +} + void __init arm64_rsi_init(void) { /* @@ -107,6 +119,9 @@ void __init arm64_rsi_init(void) return; prot_ns_shared = BIT(config.ipa_bits - 1); + if (arm64_ioremap_prot_hook_register(realm_ioremap_hook)) + return; + static_branch_enable(&rsi_present); } diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 06b66c23c124..0c2fa35beca0 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1207,7 +1207,7 @@ void set_fixmap_io(enum fixed_addresses idx, phys_addr_t phys) else prot = pgprot_encrypted(prot); - __set_fixmap(idx, phys, prot); + __set_fixmap(idx, phys & PAGE_MASK, prot); } int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) From patchwork Mon Aug 19 13:19:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768356 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id D02D016CD13; Mon, 19 Aug 2024 13:20:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073634; cv=none; b=hY3CwGJXUf5+qhjBqiYPW89d7q7X+ePYt0NCXI2hfBHnRJ27UfMQCYe/d0E79yY91y9H2fmLR+bvYF+mUDTUAoUMPQsQeiiunaawKWfDOdiEPKRLH5t69Flb1pU2Sw6+KkaOt9ii/QlMzjOw/St5Bc/unl/Qvv4M2WfFMXfMomA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073634; c=relaxed/simple; bh=bp8rqBj6LCd3mK/jCglM2RuDmO8yLEV86BM/0PH4GOk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=YvUlFZ/2aBSy/HDsK6YTEDwTPb05sGbPf1ygmJvELkUCdblAIeZ3wxaQQqyPP/MvkH5ul5aiJhvPNbAyYX586CX+nW9ZD9K7XO+p5B1uDJq6QHsWVTaBgw5svgvtD+r6ZkWESm2tfeKYQFW6+k9SlU7osxp/RJogxz4h7MwyaWw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4A8E8339; Mon, 19 Aug 2024 06:20:58 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7BCBD3F73B; Mon, 19 Aug 2024 06:20:28 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, 
linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 12/19] efi: arm64: Map Device with Prot Shared Date: Mon, 19 Aug 2024 14:19:17 +0100 Message-Id: <20240819131924.372366-13-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose Device mappings need to be emualted by the VMM so must be mapped shared with the host. Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price --- Changes since v4: * Reworked to use arm64_is_iomem_private() to decide whether the memory needs to be decrypted or not. --- arch/arm64/kernel/efi.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c index 712718aed5dd..95f8e8bf07f8 100644 --- a/arch/arm64/kernel/efi.c +++ b/arch/arm64/kernel/efi.c @@ -34,8 +34,16 @@ static __init pteval_t create_mapping_protection(efi_memory_desc_t *md) u64 attr = md->attribute; u32 type = md->type; - if (type == EFI_MEMORY_MAPPED_IO) - return PROT_DEVICE_nGnRE; + if (type == EFI_MEMORY_MAPPED_IO) { + pgprot_t prot = __pgprot(PROT_DEVICE_nGnRE); + + if (arm64_is_iomem_private(md->phys_addr, + md->num_pages << EFI_PAGE_SHIFT)) + prot = pgprot_encrypted(prot); + else + prot = pgprot_decrypted(prot); + return pgprot_val(prot); + } if (region_is_misaligned(md)) { static bool __initdata code_is_misaligned; From patchwork Mon Aug 19 13:19:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768357 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 38C2E16CD23; Mon, 19 Aug 2024 13:20:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073639; cv=none; b=W26gk9jNdFjV0DnuWLiMf9YbfPN7rofLZvPITfy9IEQkA0eyqC1gZQy+rU8iszRAxzaSXBN5jb6lAymXNwzWk9q1nNqICMJhln36ueOCBhOrEA6mgOkkIXOXJ02F2gWzDMRJIFTmfpksB89bFWNqibyAHnkV6RBYvZqVmifNuxA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073639; c=relaxed/simple; bh=dPj5BNjMCh0vGPz3a3398xG7ognkHWZkFlYEYV7KQg0=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=pPKB08aoOflwZJbSx7eeKTSb7e6dPWdGIvk4NSlkoRtKhbL6xbR/dqme2Z4v+mp2XYYqMVnrhKXVAPKF4u0OO/4A0Q75l4v+AaEOfTUX+jNvzBkayTVgRCwNqu5+R+a4Il96p30vbCi8lnXWGiimPdyEPXH/uca0PARmV4O/Xi0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D59F51063; Mon, 19 Aug 2024 06:21:02 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9D7913F73B; Mon, 19 Aug 2024 06:20:32 -0700 (PDT) From: 
Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 13/19] arm64: Make the PHYS_MASK_SHIFT dynamic Date: Mon, 19 Aug 2024 14:19:18 +0100 Message-Id: <20240819131924.372366-14-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Make the PHYS_MASK_SHIFT dynamic for Realms. This is only required for masking the PFN from a pte entry. For a realm, phys_mask_shift is reduced if the RMM reports a smaller configured IPA size for the guest. The realm configuration splits the address space in two, with the top half being memory shared with the host and the bottom half being protected memory. We treat the bit which controls this split as an attribute bit and hence exclude it (and any higher bits) from the mask. Co-developed-by: Suzuki K Poulose Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price --- v3: Drop the MAX_PHYS_MASK{,_SHIFT} definitions as they are no longer needed. --- arch/arm64/include/asm/pgtable-hwdef.h | 6 ------ arch/arm64/include/asm/pgtable.h | 5 +++++ arch/arm64/kernel/rsi.c | 5 +++++ 3 files changed, 10 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index 1f60aa1bc750..183431ec8f7d 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -204,12 +204,6 @@ */ #define PTE_S2_MEMATTR(t) (_AT(pteval_t, (t)) << 2) -/* - * Highest possible physical address supported.
- */ -#define PHYS_MASK_SHIFT (CONFIG_ARM64_PA_BITS) -#define PHYS_MASK ((UL(1) << PHYS_MASK_SHIFT) - 1) - #define TTBR_CNP_BIT (UL(1) << 0) /* diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 7a4f5604be3f..f39a4cbbf73a 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -39,6 +39,11 @@ #include #include +extern unsigned int phys_mask_shift; + +#define PHYS_MASK_SHIFT (phys_mask_shift) +#define PHYS_MASK ((1UL << PHYS_MASK_SHIFT) - 1) + #ifdef CONFIG_TRANSPARENT_HUGEPAGE #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c index 672dd6862298..5c2c977a50fb 100644 --- a/arch/arm64/kernel/rsi.c +++ b/arch/arm64/kernel/rsi.c @@ -15,6 +15,8 @@ struct realm_config config; unsigned long prot_ns_shared; EXPORT_SYMBOL(prot_ns_shared); +unsigned int phys_mask_shift = CONFIG_ARM64_PA_BITS; + DEFINE_STATIC_KEY_FALSE_RO(rsi_present); EXPORT_SYMBOL(rsi_present); @@ -119,6 +121,9 @@ void __init arm64_rsi_init(void) return; prot_ns_shared = BIT(config.ipa_bits - 1); + if (config.ipa_bits - 1 < phys_mask_shift) + phys_mask_shift = config.ipa_bits - 1; + if (arm64_ioremap_prot_hook_register(realm_ioremap_hook)) return; From patchwork Mon Aug 19 13:19:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768358 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id C530616CD18; Mon, 19 Aug 2024 13:20:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073643; cv=none; b=QCZ+8BFm2VE0Ej6rKF4G2Ac4ferhlWWvRpEIeV4INOHj+hROFT81+Ni+cg8AMBJQ0QKbaSLKXzvzx9jhMymkEqkF6n2U5bkNoDuH4NXBOohu8p9xucUb8r76YSbKfhCRDiihQpLPcxIdYzDBjDq2rdk5jnJWbY9KXZRKmHjzh9k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073643; c=relaxed/simple; bh=KaLWr6YZb+PWvDZSNXabheOdGWtip51iowgdcysufRw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=YDkFVGJ26CN/kRkY/wIF9S7rwiB8Fiif6rwrqFwN9YbIZrIauNty2oosY9C1vCKBJvI1twKNZPwRNyhEE5vFaQKJ8QH6yzc2zoQUsjpHxMO22y6R6KX4r8tkLSdm+bR4NnD8iuOnLrL/ouWOY0WYYhfhniXZGYNW39QZZ6i3OT4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3EE82339; Mon, 19 Aug 2024 06:21:07 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2AAAE3F73B; Mon, 19 Aug 2024 06:20:37 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 14/19] arm64: Enforce bounce buffers for 
realm DMA Date: Mon, 19 Aug 2024 14:19:19 +0100 Message-Id: <20240819131924.372366-15-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Within a realm guest it's not possible for a device emulated by the VMM to access arbitrary guest memory. So force the use of bounce buffers to ensure that the memory the emulated devices are accessing is in memory which is explicitly shared with the host. This adds a call to swiotlb_update_mem_attributes() which calls set_memory_decrypted() to ensure the bounce buffer memory is shared with the host. For non-realm guests or hosts this is a no-op. Co-developed-by: Suzuki K Poulose Signed-off-by: Suzuki K Poulose Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- v3: Simplify mem_init() by using a 'flags' variable. --- arch/arm64/kernel/rsi.c | 1 + arch/arm64/mm/init.c | 10 +++++++++- 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c index 5c2c977a50fb..69d8d9791c65 100644 --- a/arch/arm64/kernel/rsi.c +++ b/arch/arm64/kernel/rsi.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 9b5ab6818f7f..1d595b63da71 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include #include @@ -369,8 +370,14 @@ void __init bootmem_init(void) */ void __init mem_init(void) { + unsigned int flags = SWIOTLB_VERBOSE; bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit); + if (is_realm_world()) { + swiotlb = true; + flags |= SWIOTLB_FORCE; + } + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) { /* * If no bouncing needed for ZONE_DMA, reduce the swiotlb @@ -382,7 +389,8 @@ void __init mem_init(void) swiotlb = true; } - swiotlb_init(swiotlb, SWIOTLB_VERBOSE); + swiotlb_init(swiotlb, flags); + swiotlb_update_mem_attributes(); /* this will put all unused low memory onto the freelists */ memblock_free_all(); From patchwork Mon Aug 19 13:19:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768359 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 3134016CD36; Mon, 19 Aug 2024 13:20:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073647; cv=none; b=r/wNnjjXxoY4Snhtl+n5U+6D+l1eho/eT+KCjsawrjlq5dnZDhh7AJchgd2M3FmA/Pj83yWsvCIJ7Voz5xMj7jGnwZfeu6NYS+d/vtXnNz21X8Zkl9YDbrFzf6yyrbDQcG4sw886tn2r+3SLy6PAXkiQxdOt1E5Jefv6KTij9x0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073647; c=relaxed/simple; bh=UbbovNi33/o7xauGKxX994se+VOXHy9SSou9Bs4VBZE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=r8RbfL1Jj+1SfGHdw8V0ZDnJuv6mQEo51cxuiSanpYq7JOoLuIjvuipZ4zHlmJBR2d9F8DQgbo16lmf4T6q0VowswumHijfOqckeNQkws+bwV01aMCMGnTIsGJDH5Xfo7cNj18wzGn5zcMYC51ek04fmDy7lpH4JToNgk3H8wNI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C5BC2339; Mon, 19 Aug 2024 06:21:11 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 846A33F73B; Mon, 19 Aug 2024 06:20:41 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 15/19] arm64: mm: Avoid TLBI when marking pages as valid Date: Mon, 19 Aug 2024 14:19:20 +0100 Message-Id: <20240819131924.372366-16-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 When __change_memory_common() is purely setting the valid bit on a PTE (e.g. via the set_memory_valid() call) there is no need for a TLBI as either the entry isn't changing (the valid bit was already set) or the entry was invalid and so should not have been cached in the TLB. Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- v4: New patch --- arch/arm64/mm/pageattr.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 0e270a1c51e6..547a9e0b46c2 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -60,7 +60,13 @@ static int __change_memory_common(unsigned long start, unsigned long size, ret = apply_to_page_range(&init_mm, start, size, change_page_range, &data); - flush_tlb_kernel_range(start, start + size); + /* + * If the memory is being made valid without changing any other bits + * then a TLBI isn't required as a non-valid entry cannot be cached in + * the TLB. 
+ */ + if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask)) + flush_tlb_kernel_range(start, start + size); return ret; } From patchwork Mon Aug 19 13:19:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768360 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 6854E16D300; Mon, 19 Aug 2024 13:20:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073652; cv=none; b=VfsMCadaHI3fEcLnlQuIuhkGo81N8ec4JvHi1VyjVIAPfcJFimdKNWVzVRu738QrVHThBEnn/U1ocHLBIeMT7ep7eAzAey1LFjLhW3lJrd9Ubk6Gs/1mN8mqm2an2DszVp2B4BwmK5EF+nwVcLqB1wYD8A+DbQ61MocKuef8uPE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073652; c=relaxed/simple; bh=6o6USzuetgUicKKrZ2hyyo8VvWPJ8tHPDLLhQsjqUXM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=D6mt8+A9rjv15lZ7lHbj4YqAxvnmkLUtTPWbnYqh8qd/WuykmlMRHp8OuXaEAwXWWd54p+ZCs7lu/BxXPbj6zZpSprK5p2Nku/jpaZevSTs4hwD70+gJyg7ze6eh168nIsRSmdeireSFiDQ8eMDm8QNgfAJOX69Ae7TCxaeExj4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 14EFB1063; Mon, 19 Aug 2024 06:21:16 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 182F93F73B; Mon, 19 Aug 2024 06:20:45 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 16/19] arm64: Enable memory encrypt for Realms Date: Mon, 19 Aug 2024 14:19:21 +0100 Message-Id: <20240819131924.372366-17-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Suzuki K Poulose Use the memory encryption APIs to trigger a RSI call to request a transition between protected memory and shared memory (or vice versa) and updating the kernel's linear map of modified pages to flip the top bit of the IPA. This requires that block mappings are not used in the direct map for realm guests. Signed-off-by: Suzuki K Poulose Co-developed-by: Steven Price Signed-off-by: Steven Price Reviewed-by: Catalin Marinas --- Changed since v4: * Reworked to use the new dispatcher for the mem_encrypt API Changes since v3: * Provide pgprot_{de,en}crypted() macros * Rename __set_memory_encrypted() to __set_memory_enc_dec() since it both encrypts and decrypts. 
Changes since v2: * Fix location of set_memory_{en,de}crypted() and export them. * Break-before-make when changing the top bit of the IPA for transitioning to/from shared. --- arch/arm64/Kconfig | 3 ++ arch/arm64/include/asm/mem_encrypt.h | 9 ++++ arch/arm64/include/asm/pgtable.h | 5 ++ arch/arm64/include/asm/set_memory.h | 3 ++ arch/arm64/kernel/rsi.c | 16 ++++++ arch/arm64/mm/pageattr.c | 76 ++++++++++++++++++++++++++-- 6 files changed, 109 insertions(+), 3 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 68d77a2f4d1a..03d3dae34277 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -21,6 +21,7 @@ config ARM64 select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2 select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE select ARCH_HAS_CACHE_LINE_SIZE + select ARCH_HAS_CC_PLATFORM select ARCH_HAS_CURRENT_STACK_POINTER select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEBUG_VM_PGTABLE @@ -43,6 +44,8 @@ config ARM64 select ARCH_HAS_SETUP_DMA_OPS select ARCH_HAS_SET_DIRECT_MAP select ARCH_HAS_SET_MEMORY + select ARCH_HAS_MEM_ENCRYPT + select ARCH_HAS_FORCE_DMA_UNENCRYPTED select ARCH_STACKWALK select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_STRICT_MODULE_RWX diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h index b0c9a86b13a4..f8f78f622dd2 100644 --- a/arch/arm64/include/asm/mem_encrypt.h +++ b/arch/arm64/include/asm/mem_encrypt.h @@ -2,6 +2,8 @@ #ifndef __ASM_MEM_ENCRYPT_H #define __ASM_MEM_ENCRYPT_H +#include + struct arm64_mem_crypt_ops { int (*encrypt)(unsigned long addr, int numpages); int (*decrypt)(unsigned long addr, int numpages); @@ -12,4 +14,11 @@ int arm64_mem_crypt_ops_register(const struct arm64_mem_crypt_ops *ops); int set_memory_encrypted(unsigned long addr, int numpages); int set_memory_decrypted(unsigned long addr, int numpages); +int realm_register_memory_enc_ops(void); + +static inline bool force_dma_unencrypted(struct device *dev) +{ + return is_realm_world(); +} + #endif /* __ASM_MEM_ENCRYPT_H */ diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index f39a4cbbf73a..4e8c648f5213 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -636,6 +636,11 @@ static inline void set_pud_at(struct mm_struct *mm, unsigned long addr, #define pgprot_nx(prot) \ __pgprot_modify(prot, PTE_MAYBE_GP, PTE_PXN) +#define pgprot_decrypted(prot) \ + __pgprot_modify(prot, PROT_NS_SHARED, PROT_NS_SHARED) +#define pgprot_encrypted(prot) \ + __pgprot_modify(prot, PROT_NS_SHARED, 0) + /* * Mark the prot value as uncacheable and unbufferable. 
*/ diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h index 917761feeffd..37774c793006 100644 --- a/arch/arm64/include/asm/set_memory.h +++ b/arch/arm64/include/asm/set_memory.h @@ -15,4 +15,7 @@ int set_direct_map_invalid_noflush(struct page *page); int set_direct_map_default_noflush(struct page *page); bool kernel_page_present(struct page *page); +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + #endif /* _ASM_ARM64_SET_MEMORY_H */ diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c index 69d8d9791c65..9cb3353e5cbf 100644 --- a/arch/arm64/kernel/rsi.c +++ b/arch/arm64/kernel/rsi.c @@ -7,8 +7,10 @@ #include #include #include +#include #include +#include #include struct realm_config config; @@ -21,6 +23,17 @@ unsigned int phys_mask_shift = CONFIG_ARM64_PA_BITS; DEFINE_STATIC_KEY_FALSE_RO(rsi_present); EXPORT_SYMBOL(rsi_present); +bool cc_platform_has(enum cc_attr attr) +{ + switch (attr) { + case CC_ATTR_MEM_ENCRYPT: + return is_realm_world(); + default: + return false; + } +} +EXPORT_SYMBOL_GPL(cc_platform_has); + static bool rsi_version_matches(void) { unsigned long ver_lower, ver_higher; @@ -128,6 +141,9 @@ void __init arm64_rsi_init(void) if (arm64_ioremap_prot_hook_register(realm_ioremap_hook)) return; + if (realm_register_memory_enc_ops()) + return; + static_branch_enable(&rsi_present); } diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 547a9e0b46c2..d11f5dc4c2c5 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -5,10 +5,12 @@ #include #include #include +#include #include #include #include +#include #include #include #include @@ -23,14 +25,16 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED bool can_set_direct_map(void) { /* - * rodata_full and DEBUG_PAGEALLOC require linear map to be - * mapped at page granularity, so that it is possible to + * rodata_full, DEBUG_PAGEALLOC and a Realm guest all require linear + * map to be mapped at page granularity, so that it is possible to * protect/unprotect single pages. * * KFENCE pool requires page-granular mapping if initialized late. + * + * Realms need to make pages shared/protected at page granularity. 
*/ return rodata_full || debug_pagealloc_enabled() || - arm64_kfence_can_set_direct_map(); + arm64_kfence_can_set_direct_map() || is_realm_world(); } static int change_page_range(pte_t *ptep, unsigned long addr, void *data) @@ -198,6 +202,72 @@ int set_direct_map_default_noflush(struct page *page) PAGE_SIZE, change_page_range, &data); } +static int __set_memory_enc_dec(unsigned long addr, + int numpages, + bool encrypt) +{ + unsigned long set_prot = 0, clear_prot = 0; + phys_addr_t start, end; + int ret; + + if (!is_realm_world()) + return 0; + + if (!__is_lm_address(addr)) + return -EINVAL; + + start = __virt_to_phys(addr); + end = start + numpages * PAGE_SIZE; + + if (encrypt) + clear_prot = PROT_NS_SHARED; + else + set_prot = PROT_NS_SHARED; + + /* + * Break the mapping before we make any changes to avoid stale TLB + * entries or Synchronous External Aborts caused by RIPAS_EMPTY + */ + ret = __change_memory_common(addr, PAGE_SIZE * numpages, + __pgprot(set_prot), + __pgprot(clear_prot | PTE_VALID)); + + if (ret) + return ret; + + if (encrypt) + ret = rsi_set_memory_range_protected(start, end); + else + ret = rsi_set_memory_range_shared(start, end); + + if (ret) + return ret; + + return __change_memory_common(addr, PAGE_SIZE * numpages, + __pgprot(PTE_VALID), + __pgprot(0)); +} + +static int realm_set_memory_encrypted(unsigned long addr, int numpages) +{ + return __set_memory_enc_dec(addr, numpages, true); +} + +static int realm_set_memory_decrypted(unsigned long addr, int numpages) +{ + return __set_memory_enc_dec(addr, numpages, false); +} + +static const struct arm64_mem_crypt_ops realm_crypt_ops = { + .encrypt = realm_set_memory_encrypted, + .decrypt = realm_set_memory_decrypted, +}; + +int realm_register_memory_enc_ops(void) +{ + return arm64_mem_crypt_ops_register(&realm_crypt_ops); +} + #ifdef CONFIG_DEBUG_PAGEALLOC void __kernel_map_pages(struct page *page, int numpages, int enable) { From patchwork Mon Aug 19 13:19:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768361 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 925B1188CD8; Mon, 19 Aug 2024 13:20:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073656; cv=none; b=UPNovRBr1lv1Cwu+8Vq/vGJ9QBLjMVH8VVyLmAFadWasLLStgRLgZhh1lW9DtXCR9KkHe0Fain+Gn0aMKnyEaZ6uKBXQphIkPYx90kU9Ur2uajXxhvcl3HVrXCyR+ggcsbZewPtURo85suZhaKeMMB4sfpe8iD9e/+Fd70qHYpI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073656; c=relaxed/simple; bh=L7d0QOsIblt5jTLM0//EA/WHUOT46VMmtEiZ4akbngE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=De05BA1HQTQeZrWUngv+WPL0uhC78rEXIygFmLoxvZjLDhVgB6GSDX0PMcoDW8a7WsONHroFr2vxKTOFbxS/c9pE231rFe3KHrHVDy/GRvMyqRKoGy884KqaS4iWozP87/FX66eCQc5QC3SZhP8BAQhzbPb8g/uCjZzS+B3jZ5s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 43792339; 
Mon, 19 Aug 2024 06:21:20 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6CD523F73B; Mon, 19 Aug 2024 06:20:50 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 17/19] irqchip/gic-v3-its: Share ITS tables with a non-trusted hypervisor Date: Mon, 19 Aug 2024 14:19:22 +0100 Message-Id: <20240819131924.372366-18-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Within a realm guest the ITS is emulated by the host. This means the allocations must have been made available to the host by a call to set_memory_decrypted(). Introduce an allocation function which performs this extra call. For the ITT use a custom genpool-based allocator that calls set_memory_decrypted() for each page allocated, but then suballocates the size needed for each ITT. Note that there is no mechanism implemented to return pages from the genpool, but it is unlikely the peak number of devices will be so much larger than the normal level - so this isn't expected to be an issue. Co-developed-by: Suzuki K Poulose Signed-off-by: Suzuki K Poulose Tested-by: Will Deacon Signed-off-by: Steven Price Reviewed-by: Marc Zyngier --- Changes since v3: * Use BIT() macro. * Use a genpool based allocator in its_create_device() to avoid allocating a full page. * Fix subject to drop "realm" and use gic-v3-its. * Add error handling to ITS alloc/free. Changes since v2: * Drop 'shared' from the new its_xxx function names as they are used for non-realm guests too. * Don't handle the NUMA_NO_NODE case specially - alloc_pages_node() should do the right thing. * Drop a pointless (void *) cast.
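[Editorial sketch, condensed from the diff below for readability; the diff itself is authoritative.] The page-level wrappers pair each allocation with the sharing transition, and deliberately leak pages that cannot be transitioned back rather than return them to the page allocator in an unknown state:

	/* allocate, then share the pages with the host */
	page = alloc_pages_node(node, gfp, order);
	if (page && WARN_ON(set_memory_decrypted((unsigned long)page_address(page),
						  1 << order)))
		return NULL;

	/* on free: un-share first; bail out (leaking the pages) if that fails */
	if (WARN_ON(set_memory_encrypted((unsigned long)addr, 1 << order)))
		return;
	free_pages((unsigned long)addr, order);

The genpool then sits on top of these helpers so that an ITT, which is typically much smaller than a page, does not consume a whole decrypted page: a page is shared with the host once, added to the pool, and sub-allocated per device.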
--- drivers/irqchip/irq-gic-v3-its.c | 139 ++++++++++++++++++++++++++----- 1 file changed, 116 insertions(+), 23 deletions(-) diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 9b34596b3542..557214c774c3 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -12,12 +12,14 @@ #include #include #include +#include #include #include #include #include #include #include +#include #include #include #include @@ -27,6 +29,7 @@ #include #include #include +#include #include #include @@ -164,6 +167,7 @@ struct its_device { struct its_node *its; struct event_lpi_map event_map; void *itt; + u32 itt_sz; u32 nr_ites; u32 device_id; bool shared; @@ -199,6 +203,81 @@ static DEFINE_IDA(its_vpeid_ida); #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base) #define gic_data_rdist_vlpi_base() (gic_data_rdist_rd_base() + SZ_128K) +static struct page *its_alloc_pages_node(int node, gfp_t gfp, + unsigned int order) +{ + struct page *page; + int ret = 0; + + page = alloc_pages_node(node, gfp, order); + + if (!page) + return NULL; + + ret = set_memory_decrypted((unsigned long)page_address(page), + 1 << order); + if (WARN_ON(ret)) + return NULL; + + return page; +} + +static struct page *its_alloc_pages(gfp_t gfp, unsigned int order) +{ + return its_alloc_pages_node(NUMA_NO_NODE, gfp, order); +} + +static void its_free_pages(void *addr, unsigned int order) +{ + if (WARN_ON(set_memory_encrypted((unsigned long)addr, 1 << order))) + return; + free_pages((unsigned long)addr, order); +} + +static struct gen_pool *itt_pool; + +static void *itt_alloc_pool(int node, int size) +{ + unsigned long addr; + struct page *page; + + if (size >= PAGE_SIZE) { + page = its_alloc_pages_node(node, + GFP_KERNEL | __GFP_ZERO, + get_order(size)); + + return page_address(page); + } + + do { + addr = gen_pool_alloc(itt_pool, size); + if (addr) + break; + + page = its_alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 1); + if (!page) + break; + + gen_pool_add(itt_pool, (unsigned long)page_address(page), + PAGE_SIZE, node); + } while (!addr); + + return (void *)addr; +} + +static void itt_free_pool(void *addr, int size) +{ + if (!addr) + return; + + if (size >= PAGE_SIZE) { + its_free_pages(addr, get_order(size)); + return; + } + + gen_pool_free(itt_pool, (unsigned long)addr, size); +} + /* * Skip ITSs that have no vLPIs mapped, unless we're on GICv4.1, as we * always have vSGIs mapped. 
@@ -2187,7 +2266,8 @@ static struct page *its_allocate_prop_table(gfp_t gfp_flags) { struct page *prop_page; - prop_page = alloc_pages(gfp_flags, get_order(LPI_PROPBASE_SZ)); + prop_page = its_alloc_pages(gfp_flags, + get_order(LPI_PROPBASE_SZ)); if (!prop_page) return NULL; @@ -2198,8 +2278,8 @@ static struct page *its_allocate_prop_table(gfp_t gfp_flags) static void its_free_prop_table(struct page *prop_page) { - free_pages((unsigned long)page_address(prop_page), - get_order(LPI_PROPBASE_SZ)); + its_free_pages(page_address(prop_page), + get_order(LPI_PROPBASE_SZ)); } static bool gic_check_reserved_range(phys_addr_t addr, unsigned long size) @@ -2321,7 +2401,8 @@ static int its_setup_baser(struct its_node *its, struct its_baser *baser, order = get_order(GITS_BASER_PAGES_MAX * psz); } - page = alloc_pages_node(its->numa_node, GFP_KERNEL | __GFP_ZERO, order); + page = its_alloc_pages_node(its->numa_node, + GFP_KERNEL | __GFP_ZERO, order); if (!page) return -ENOMEM; @@ -2334,7 +2415,7 @@ static int its_setup_baser(struct its_node *its, struct its_baser *baser, /* 52bit PA is supported only when PageSize=64K */ if (psz != SZ_64K) { pr_err("ITS: no 52bit PA support when psz=%d\n", psz); - free_pages((unsigned long)base, order); + its_free_pages(base, order); return -ENXIO; } @@ -2390,7 +2471,7 @@ static int its_setup_baser(struct its_node *its, struct its_baser *baser, pr_err("ITS@%pa: %s doesn't stick: %llx %llx\n", &its->phys_base, its_base_type_string[type], val, tmp); - free_pages((unsigned long)base, order); + its_free_pages(base, order); return -ENXIO; } @@ -2529,8 +2610,8 @@ static void its_free_tables(struct its_node *its) for (i = 0; i < GITS_BASER_NR_REGS; i++) { if (its->tables[i].base) { - free_pages((unsigned long)its->tables[i].base, - its->tables[i].order); + its_free_pages(its->tables[i].base, + its->tables[i].order); its->tables[i].base = NULL; } } @@ -2796,7 +2877,8 @@ static bool allocate_vpe_l2_table(int cpu, u32 id) /* Allocate memory for 2nd level table */ if (!table[idx]) { - page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(psz)); + page = its_alloc_pages(GFP_KERNEL | __GFP_ZERO, + get_order(psz)); if (!page) return false; @@ -2915,7 +2997,8 @@ static int allocate_vpe_l1_table(void) pr_debug("np = %d, npg = %lld, psz = %d, epp = %d, esz = %d\n", np, npg, psz, epp, esz); - page = alloc_pages(GFP_ATOMIC | __GFP_ZERO, get_order(np * PAGE_SIZE)); + page = its_alloc_pages(GFP_ATOMIC | __GFP_ZERO, + get_order(np * PAGE_SIZE)); if (!page) return -ENOMEM; @@ -2961,8 +3044,8 @@ static struct page *its_allocate_pending_table(gfp_t gfp_flags) { struct page *pend_page; - pend_page = alloc_pages(gfp_flags | __GFP_ZERO, - get_order(LPI_PENDBASE_SZ)); + pend_page = its_alloc_pages(gfp_flags | __GFP_ZERO, + get_order(LPI_PENDBASE_SZ)); if (!pend_page) return NULL; @@ -2974,7 +3057,7 @@ static struct page *its_allocate_pending_table(gfp_t gfp_flags) static void its_free_pending_table(struct page *pt) { - free_pages((unsigned long)page_address(pt), get_order(LPI_PENDBASE_SZ)); + its_free_pages(page_address(pt), get_order(LPI_PENDBASE_SZ)); } /* @@ -3309,8 +3392,9 @@ static bool its_alloc_table_entry(struct its_node *its, /* Allocate memory for 2nd level table */ if (!table[idx]) { - page = alloc_pages_node(its->numa_node, GFP_KERNEL | __GFP_ZERO, - get_order(baser->psz)); + page = its_alloc_pages_node(its->numa_node, + GFP_KERNEL | __GFP_ZERO, + get_order(baser->psz)); if (!page) return false; @@ -3405,7 +3489,6 @@ static struct its_device *its_create_device(struct its_node *its, u32 
dev_id, if (WARN_ON(!is_power_of_2(nvecs))) nvecs = roundup_pow_of_two(nvecs); - dev = kzalloc(sizeof(*dev), GFP_KERNEL); /* * Even if the device wants a single LPI, the ITT must be * sized as a power of two (and you need at least one bit...). @@ -3413,7 +3496,11 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id, nr_ites = max(2, nvecs); sz = nr_ites * (FIELD_GET(GITS_TYPER_ITT_ENTRY_SIZE, its->typer) + 1); sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1; - itt = kzalloc_node(sz, GFP_KERNEL, its->numa_node); + + itt = itt_alloc_pool(its->numa_node, sz); + + dev = kzalloc(sizeof(*dev), GFP_KERNEL); + if (alloc_lpis) { lpi_map = its_lpi_alloc(nvecs, &lpi_base, &nr_lpis); if (lpi_map) @@ -3425,9 +3512,9 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id, lpi_base = 0; } - if (!dev || !itt || !col_map || (!lpi_map && alloc_lpis)) { + if (!dev || !itt || !col_map || (!lpi_map && alloc_lpis)) { kfree(dev); - kfree(itt); + itt_free_pool(itt, sz); bitmap_free(lpi_map); kfree(col_map); return NULL; @@ -3437,6 +3524,7 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id, dev->its = its; dev->itt = itt; + dev->itt_sz = sz; dev->nr_ites = nr_ites; dev->event_map.lpi_map = lpi_map; dev->event_map.col_map = col_map; @@ -3464,7 +3552,7 @@ static void its_free_device(struct its_device *its_dev) list_del(&its_dev->entry); raw_spin_unlock_irqrestore(&its_dev->its->lock, flags); kfree(its_dev->event_map.col_map); - kfree(its_dev->itt); + itt_free_pool(its_dev->itt, its_dev->itt_sz); kfree(its_dev); } @@ -5112,8 +5200,9 @@ static int __init its_probe_one(struct its_node *its) } } - page = alloc_pages_node(its->numa_node, GFP_KERNEL | __GFP_ZERO, - get_order(ITS_CMD_QUEUE_SZ)); + page = its_alloc_pages_node(its->numa_node, + GFP_KERNEL | __GFP_ZERO, + get_order(ITS_CMD_QUEUE_SZ)); if (!page) { err = -ENOMEM; goto out_unmap_sgir; @@ -5177,7 +5266,7 @@ static int __init its_probe_one(struct its_node *its) out_free_tables: its_free_tables(its); out_free_cmd: - free_pages((unsigned long)its->cmd_base, get_order(ITS_CMD_QUEUE_SZ)); + its_free_pages(its->cmd_base, get_order(ITS_CMD_QUEUE_SZ)); out_unmap_sgir: if (its->sgir_base) iounmap(its->sgir_base); @@ -5663,6 +5752,10 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists, bool has_v4_1 = false; int err; + itt_pool = gen_pool_create(get_order(ITS_ITT_ALIGN), -1); + if (!itt_pool) + return -ENOMEM; + gic_rdists = rdists; lpi_prop_prio = irq_prio; From patchwork Mon Aug 19 13:19:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768362 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 9A4521891DA; Mon, 19 Aug 2024 13:20:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073659; cv=none; b=a2aAVMlPnsXC3vXoaw8viq/m0HgFLEmCLygO42traKAtFJpuHkhkOmapCagqOnM6/wPVWnB08KqilTaBRmEVvKla0xQ9CDbOdNjaEQOF6j+es8wbVyFQIYQrdhZL1Pc5KfyCWW7avZV7BXx1edQiM+p6RXQu9shQ00fE/OYWI/o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073659; c=relaxed/simple; bh=gbNnBXviOVEzHpXaEqTSotoOkdunzKEEjpsbEuVZdhU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
b=KWTM0LV2EuC65VybeoT2nDsOHK/axkW4twxEE+Subqcu4nEuH1OWWNpQE3w48LPW5e8983cwo5R3NsH+gsQTPAwUIyTscoRgZGgkQXmS1W7KI9Aa3/qTLTxrumr9GPnBgMJnHiXwu4HbJRoBW1z7bMB8dQ6/R5P6jn0p3+kPv+w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4E5C5113E; Mon, 19 Aug 2024 06:21:24 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A5F483F73B; Mon, 19 Aug 2024 06:20:54 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun Subject: [PATCH v5 18/19] irqchip/gic-v3-its: Rely on genpool alignment Date: Mon, 19 Aug 2024 14:19:23 +0100 Message-Id: <20240819131924.372366-19-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: <20240819131924.372366-1-steven.price@arm.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 its_create_device() over-allocated by ITS_ITT_ALIGN - 1 bytes to ensure that an aligned area was available within the allocation. The new genpool allocator has its min_alloc_order set to get_order(ITS_ITT_ALIGN) so all allocations from it should be appropriately aligned. Remove the over-allocation from its_create_device() and alignment from its_build_mapd_cmd(). 
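Background note (general gen_pool behaviour, not something this patch adds): backing chunks are added to the pool page-aligned and are carved up in units of 1 << min_alloc_order bytes, so every address returned by gen_pool_alloc() is a multiple of that unit. Provided the unit covers ITS_ITT_ALIGN, the ITT base handed to the MAPD command is aligned by construction, which is why both the explicit ALIGN() and the over-allocation can be dropped.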
Tested-by: Will Deacon Signed-off-by: Steven Price --- drivers/irqchip/irq-gic-v3-its.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 557214c774c3..49aacf96ade2 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -700,7 +700,6 @@ static struct its_collection *its_build_mapd_cmd(struct its_node *its, u8 size = ilog2(desc->its_mapd_cmd.dev->nr_ites); itt_addr = virt_to_phys(desc->its_mapd_cmd.dev->itt); - itt_addr = ALIGN(itt_addr, ITS_ITT_ALIGN); its_encode_cmd(cmd, GITS_CMD_MAPD); its_encode_devid(cmd, desc->its_mapd_cmd.dev->device_id); @@ -3495,7 +3494,7 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id, */ nr_ites = max(2, nvecs); sz = nr_ites * (FIELD_GET(GITS_TYPER_ITT_ENTRY_SIZE, its->typer) + 1); - sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1; + sz = max(sz, ITS_ITT_ALIGN); itt = itt_alloc_pool(its->numa_node, sz); From patchwork Mon Aug 19 13:19:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 13768363 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by smtp.subspace.kernel.org (Postfix) with ESMTP id DCE931891DA; Mon, 19 Aug 2024 13:21:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073664; cv=none; b=IHuIg2DCOH3I9UWwVpS/4AvJrZXJafpLynMOqrqqw6jE8ifDP6HL/VjDlIfVICVOIW0rNDze41Rh3mxJvGdpRZTuD2l8nGy3NGDD5QFZQzkPvvUtgWhzIm9ZofDZKItvvaqaXkpNsb1TZQGEq1YoWNz6BxLb+fNClAzvM8Y48gg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724073664; c=relaxed/simple; bh=jUprL2sAqbh0LVwWLO8Z36rWofENEdpbrluOVkH/5/8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=bWW2q1DToRRCOJRHMJf3/wJUIhpk7kbPj96qXRdC7eq5YoHeYp9WnVpsipNtfwcN/huG0sQFWq4Z4f2joDgD5K87OVyaNCk+vwlPwy5VnHVfVVdsw/lRNWF8iKEiWiejdP28wMJ5nH7PmMQfr8PI9M1Ylsva/iulnljSAi7PFp0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com; spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=arm.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8FB6312FC; Mon, 19 Aug 2024 06:21:28 -0700 (PDT) Received: from e122027.arm.com (unknown [10.57.85.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 93ADB3F73B; Mon, 19 Aug 2024 06:20:58 -0700 (PDT) From: Steven Price To: kvm@vger.kernel.org, kvmarm@lists.linux.dev Cc: Steven Price , Catalin Marinas , Marc Zyngier , Will Deacon , James Morse , Oliver Upton , Suzuki K Poulose , Zenghui Yu , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly , Alexandru Elisei , Christoffer Dall , Fuad Tabba , linux-coco@lists.linux.dev, Ganapatrao Kulkarni , Gavin Shan , Shanker Donthineni , Alper Gun , Sami Mujawar Subject: [PATCH v5 19/19] virt: arm-cca-guest: TSM_REPORT support for realms Date: Mon, 19 Aug 2024 14:19:24 +0100 Message-Id: <20240819131924.372366-20-steven.price@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240819131924.372366-1-steven.price@arm.com> References: 
From: Sami Mujawar

Introduce an arm-cca-guest driver that registers with the configfs-tsm
module to provide user interfaces for retrieving an attestation token.
When a new report is requested, the arm-cca-guest driver invokes the
appropriate RSI interfaces to query an attestation token.

The steps to retrieve an attestation token are as follows:

1. Mount the configfs filesystem if not already mounted

   mount -t configfs none /sys/kernel/config

2. Generate an attestation token

   report=/sys/kernel/config/tsm/report/report0
   mkdir $report
   dd if=/dev/urandom bs=64 count=1 > $report/inblob
   hexdump -C $report/outblob
   rmdir $report

(A short userspace C sketch of the same flow is included after the driver
source at the end of this patch.)

Signed-off-by: Sami Mujawar
Signed-off-by: Suzuki K Poulose
Signed-off-by: Steven Price
---
v3: Minor improvements to comments and adapt to the renaming of
    GRANULE_SIZE to RSI_GRANULE_SIZE.
---
 drivers/virt/coco/Kconfig | 2 +
 drivers/virt/coco/Makefile | 1 +
 drivers/virt/coco/arm-cca-guest/Kconfig | 11 +
 drivers/virt/coco/arm-cca-guest/Makefile | 2 +
 .../virt/coco/arm-cca-guest/arm-cca-guest.c | 211 ++++++++++++++++++
 5 files changed, 227 insertions(+)
 create mode 100644 drivers/virt/coco/arm-cca-guest/Kconfig
 create mode 100644 drivers/virt/coco/arm-cca-guest/Makefile
 create mode 100644 drivers/virt/coco/arm-cca-guest/arm-cca-guest.c

diff --git a/drivers/virt/coco/Kconfig b/drivers/virt/coco/Kconfig
index 87d142c1f932..4fb69804b622 100644
--- a/drivers/virt/coco/Kconfig
+++ b/drivers/virt/coco/Kconfig
@@ -12,3 +12,5 @@ source "drivers/virt/coco/efi_secret/Kconfig"
 source "drivers/virt/coco/sev-guest/Kconfig"
 
 source "drivers/virt/coco/tdx-guest/Kconfig"
+
+source "drivers/virt/coco/arm-cca-guest/Kconfig"
diff --git a/drivers/virt/coco/Makefile b/drivers/virt/coco/Makefile
index 18c1aba5edb7..a6228a1bf992 100644
--- a/drivers/virt/coco/Makefile
+++ b/drivers/virt/coco/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_TSM_REPORTS) += tsm.o
 obj-$(CONFIG_EFI_SECRET) += efi_secret/
 obj-$(CONFIG_SEV_GUEST) += sev-guest/
 obj-$(CONFIG_INTEL_TDX_GUEST) += tdx-guest/
+obj-$(CONFIG_ARM_CCA_GUEST) += arm-cca-guest/
diff --git a/drivers/virt/coco/arm-cca-guest/Kconfig b/drivers/virt/coco/arm-cca-guest/Kconfig
new file mode 100644
index 000000000000..9dd27c3ee215
--- /dev/null
+++ b/drivers/virt/coco/arm-cca-guest/Kconfig
@@ -0,0 +1,11 @@
+config ARM_CCA_GUEST
+	tristate "Arm CCA Guest driver"
+	depends on ARM64
+	default m
+	select TSM_REPORTS
+	help
+	  The driver provides a userspace interface to request an
+	  attestation report from the Realm Management Monitor (RMM).
+
+	  If you choose 'M' here, this module will be called
+	  arm-cca-guest.
diff --git a/drivers/virt/coco/arm-cca-guest/Makefile b/drivers/virt/coco/arm-cca-guest/Makefile
new file mode 100644
index 000000000000..69eeba08e98a
--- /dev/null
+++ b/drivers/virt/coco/arm-cca-guest/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_ARM_CCA_GUEST) += arm-cca-guest.o
diff --git a/drivers/virt/coco/arm-cca-guest/arm-cca-guest.c b/drivers/virt/coco/arm-cca-guest/arm-cca-guest.c
new file mode 100644
index 000000000000..7f724a03676f
--- /dev/null
+++ b/drivers/virt/coco/arm-cca-guest/arm-cca-guest.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+/**
+ * struct arm_cca_token_info - a descriptor for the token buffer.
+ * @granule: PA of the page to which the token will be written
+ * @offset: Offset within granule to start of buffer in bytes
+ * @result: result of rsi_attestation_token_continue operation
+ */
+struct arm_cca_token_info {
+	phys_addr_t	granule;
+	unsigned long	offset;
+	int		result;
+};
+
+/**
+ * arm_cca_attestation_continue - Retrieve the attestation token data.
+ *
+ * @param: pointer to the arm_cca_token_info
+ *
+ * Attestation token generation is a long running operation and therefore
+ * the token data may not be retrieved in a single call. Moreover, the
+ * token retrieval operation must be requested on the same CPU on which the
+ * attestation token generation was initialised.
+ * This helper function is therefore scheduled on the same CPU multiple
+ * times until the entire token data is retrieved.
+ */
+static void arm_cca_attestation_continue(void *param)
+{
+	unsigned long len;
+	unsigned long size;
+	struct arm_cca_token_info *info;
+
+	if (!param)
+		return;
+
+	info = (struct arm_cca_token_info *)param;
+
+	size = RSI_GRANULE_SIZE - info->offset;
+	info->result = rsi_attestation_token_continue(info->granule,
+						      info->offset, size, &len);
+	info->offset += len;
+}
+
+/**
+ * arm_cca_report_new - Generate a new attestation token.
+ *
+ * @report: pointer to the TSM report context information.
+ * @data: pointer to the context specific data for this module.
+ *
+ * Initialise the attestation token generation using the challenge data
+ * passed in the TSM descriptor. Allocate memory for the attestation token
+ * and schedule calls to retrieve the attestation token on the same CPU
+ * on which the attestation token generation was initialised.
+ *
+ * The challenge data must be at least 32 bytes and no more than 64 bytes. If
+ * less than 64 bytes are provided it will be zero padded to 64 bytes.
+ *
+ * Return:
+ * * %0 - Attestation token generated successfully.
+ * * %-EINVAL - A parameter was not valid.
+ * * %-ENOMEM - Out of memory.
+ * * %-EFAULT - Failed to get IPA for memory page(s).
+ * * A negative status code as returned by smp_call_function_single().
+ */
+static int arm_cca_report_new(struct tsm_report *report, void *data)
+{
+	int ret;
+	int cpu;
+	long max_size;
+	unsigned long token_size;
+	struct arm_cca_token_info info;
+	void *buf;
+	u8 *token __free(kvfree) = NULL;
+	struct tsm_desc *desc = &report->desc;
+
+	if (!report)
+		return -EINVAL;
+
+	if (desc->inblob_len < 32 || desc->inblob_len > 64)
+		return -EINVAL;
+
+	/*
+	 * Get a CPU on which the attestation token generation will be
+	 * scheduled and initialise the attestation token generation.
+	 */
+	cpu = get_cpu();
+	max_size = rsi_attestation_token_init(desc->inblob, desc->inblob_len);
+	put_cpu();
+
+	if (max_size <= 0)
+		return -EINVAL;
+
+	/* Allocate outblob */
+	token = kvzalloc(max_size, GFP_KERNEL);
+	if (!token)
+		return -ENOMEM;
+
+	/*
+	 * Since the outblob may not be physically contiguous, use a page
+	 * to bounce the buffer from RMM.
+	 */
+	buf = alloc_pages_exact(RSI_GRANULE_SIZE, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	/* Get the PA of the memory page(s) that were allocated. */
+	info.granule = (unsigned long)virt_to_phys(buf);
+
+	token_size = 0;
+	/* Loop until the token is ready or there is an error. */
+	do {
+		/* Retrieve one RSI_GRANULE_SIZE data per loop iteration. */
+		info.offset = 0;
+		do {
+			/*
+			 * Schedule a call to retrieve a sub-granule chunk
+			 * of data per loop iteration.
+			 */
+			ret = smp_call_function_single(cpu,
+						       arm_cca_attestation_continue,
+						       (void *)&info, true);
+			if (ret != 0) {
+				token_size = 0;
+				goto exit_free_granule_page;
+			}
+
+			ret = info.result;
+		} while ((ret == RSI_INCOMPLETE) &&
+			 (info.offset < RSI_GRANULE_SIZE));
+
+		/*
+		 * Copy the retrieved token data from the granule
+		 * to the token buffer, ensuring that the RMM doesn't
+		 * overflow the buffer.
+		 */
+		if (WARN_ON(token_size + info.offset > max_size))
+			break;
+		memcpy(&token[token_size], buf, info.offset);
+		token_size += info.offset;
+	} while (ret == RSI_INCOMPLETE);
+
+	if (ret != RSI_SUCCESS) {
+		ret = -ENXIO;
+		token_size = 0;
+		goto exit_free_granule_page;
+	}
+
+	report->outblob = no_free_ptr(token);
+exit_free_granule_page:
+	report->outblob_len = token_size;
+	free_pages_exact(buf, RSI_GRANULE_SIZE);
+	return ret;
+}
+
+static const struct tsm_ops arm_cca_tsm_ops = {
+	.name = KBUILD_MODNAME,
+	.report_new = arm_cca_report_new,
+};
+
+/**
+ * arm_cca_guest_init - Register with the Trusted Security Module (TSM)
+ * interface.
+ *
+ * Return:
+ * * %0 - Registered successfully with the TSM interface.
+ * * %-ENODEV - The execution context is not an Arm Realm.
+ * * %-EINVAL - A parameter was not valid.
+ * * %-EBUSY - Already registered.
+ */
+static int __init arm_cca_guest_init(void)
+{
+	int ret;
+
+	if (!is_realm_world())
+		return -ENODEV;
+
+	ret = tsm_register(&arm_cca_tsm_ops, NULL);
+	if (ret < 0)
+		pr_err("Failed to register with TSM.\n");
+
+	return ret;
+}
+module_init(arm_cca_guest_init);
+
+/**
+ * arm_cca_guest_exit - unregister with the Trusted Security Module (TSM)
+ * interface.
+ */
+static void __exit arm_cca_guest_exit(void)
+{
+	tsm_unregister(&arm_cca_tsm_ops);
+}
+module_exit(arm_cca_guest_exit);
+
+MODULE_AUTHOR("Sami Mujawar ");
+MODULE_DESCRIPTION("Arm CCA Guest TSM Driver.");
+MODULE_LICENSE("GPL");
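To complement the shell steps in the commit message above, here is a minimal userspace sketch of the same configfs-tsm flow in C. It is illustrative only: the report name "report0", the 16 KiB token buffer and the fixed challenge pattern are arbitrary choices, error handling is kept short, and it assumes configfs is already mounted at /sys/kernel/config.

/* Hedged sketch of the configfs-tsm report flow; not part of the patch. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define REPORT_DIR "/sys/kernel/config/tsm/report/report0"

static int write_file(const char *path, const void *buf, size_t len)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, buf, len);
	close(fd);
	return n == (ssize_t)len ? 0 : -1;
}

int main(void)
{
	unsigned char challenge[64];
	unsigned char token[16384];
	size_t total = 0;
	ssize_t n;
	int fd;

	/* Create the report instance (the "mkdir $report" step). */
	if (mkdir(REPORT_DIR, 0755) && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	/* 32-64 bytes of challenge data; a fixed pattern for brevity. */
	memset(challenge, 0xA5, sizeof(challenge));
	if (write_file(REPORT_DIR "/inblob", challenge, sizeof(challenge))) {
		perror("inblob");
		return 1;
	}

	/* Reading outblob returns the freshly generated attestation token. */
	fd = open(REPORT_DIR "/outblob", O_RDONLY);
	if (fd < 0) {
		perror("outblob");
		return 1;
	}
	while ((n = read(fd, token + total, sizeof(token) - total)) > 0)
		total += n;
	close(fd);

	printf("attestation token: %zu bytes\n", total);

	/* Tear the report instance down again. */
	rmdir(REPORT_DIR);
	return 0;
}

Functionally this mirrors the mkdir/dd/hexdump/rmdir sequence in the commit message; it is the read of outblob that prompts the TSM core to invoke the provider's report_new hook, which for a Realm is arm_cca_report_new() from this patch.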