From patchwork Mon Jul 1 09:54:53 2024
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 13717730
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
    James Morse, Oliver Upton, Zenghui Yu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
    linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Steven Price
Subject: [PATCH v4 03/15] arm64: Detect if in a realm and set RIPAS RAM
Date: Mon, 1 Jul 2024 10:54:53 +0100
Message-Id: <20240701095505.165383-4-steven.price@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240701095505.165383-1-steven.price@arm.com>
References: <20240701095505.165383-1-steven.price@arm.com>

From: Suzuki K Poulose

Detect that the VM is a realm guest by the presence of the RSI
interface.

If in a realm, then all memory needs to be marked as RIPAS RAM
initially; the loader may or may not have done this for us. To be sure,
iterate over all RAM and mark it as such. Any failure is fatal, as that
implies the RAM regions passed to Linux are incorrect - which would mean
failing later when attempting to access non-existent RAM.

Signed-off-by: Suzuki K Poulose
Co-developed-by: Steven Price
Signed-off-by: Steven Price
---
Changes since v3:
 * Provide safe/unsafe versions for converting memory to protected,
   using the safer version only for the early boot.
 * Use the new psci_early_test_conduit() function to avoid calling an
   SMC if EL3 is not present (or not configured to handle an SMC).
Changes since v2:
 * Use DECLARE_STATIC_KEY_FALSE rather than "extern struct
   static_key_false".
 * Rename set_memory_range() to rsi_set_memory_range().
 * Downgrade some BUG()s to WARN()s and handle the condition by
   propagating up the stack. Comment the remaining case that ends in a
   BUG() to explain why.
 * Rely on the return from rsi_request_version() rather than checking
   the version the RMM claims to support.
 * Rename the generic-sounding arm64_setup_memory() to
   arm64_rsi_setup_memory() and move the call site to setup_arch().
---
 arch/arm64/include/asm/rsi.h      | 64 +++++++++++++++++++++++++
 arch/arm64/include/asm/rsi_cmds.h | 22 +++++++++
 arch/arm64/kernel/Makefile        |  3 +-
 arch/arm64/kernel/rsi.c           | 77 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/setup.c         |  8 ++++
 5 files changed, 173 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/rsi.h
 create mode 100644 arch/arm64/kernel/rsi.c

diff --git a/arch/arm64/include/asm/rsi.h b/arch/arm64/include/asm/rsi.h
new file mode 100644
index 000000000000..29fdc194d27b
--- /dev/null
+++ b/arch/arm64/include/asm/rsi.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 ARM Ltd.
+ */
+
+#ifndef __ASM_RSI_H_
+#define __ASM_RSI_H_
+
+#include <linux/jump_label.h>
+#include <asm/rsi_cmds.h>
+
+DECLARE_STATIC_KEY_FALSE(rsi_present);
+
+void __init arm64_rsi_init(void);
+void __init arm64_rsi_setup_memory(void);
+static inline bool is_realm_world(void)
+{
+        return static_branch_unlikely(&rsi_present);
+}
+
+static inline int rsi_set_memory_range(phys_addr_t start, phys_addr_t end,
+                                       enum ripas state, unsigned long flags)
+{
+        unsigned long ret;
+        phys_addr_t top;
+
+        while (start != end) {
+                ret = rsi_set_addr_range_state(start, end, state, flags, &top);
+                if (WARN_ON(ret || top < start || top > end))
+                        return -EINVAL;
+                start = top;
+        }
+
+        return 0;
+}
+
+/*
+ * Convert the specified range to RAM. Do not use this if you rely on the
+ * contents of a page that may already be in RAM state.
+ */
+static inline int rsi_set_memory_range_protected(phys_addr_t start,
+                                                 phys_addr_t end)
+{
+        return rsi_set_memory_range(start, end, RSI_RIPAS_RAM,
+                                    RSI_CHANGE_DESTROYED);
+}
+
+/*
+ * Convert the specified range to RAM. Do not convert any pages that may have
+ * been DESTROYED, without our permission.
+ */
+static inline int rsi_set_memory_range_protected_safe(phys_addr_t start,
+                                                      phys_addr_t end)
+{
+        return rsi_set_memory_range(start, end, RSI_RIPAS_RAM,
+                                    RSI_NO_CHANGE_DESTROYED);
+}
+
+static inline int rsi_set_memory_range_shared(phys_addr_t start,
+                                              phys_addr_t end)
+{
+        return rsi_set_memory_range(start, end, RSI_RIPAS_EMPTY, 0);
+}
+#endif
diff --git a/arch/arm64/include/asm/rsi_cmds.h b/arch/arm64/include/asm/rsi_cmds.h
index 89e907f3af0c..acb557dd4b88 100644
--- a/arch/arm64/include/asm/rsi_cmds.h
+++ b/arch/arm64/include/asm/rsi_cmds.h
@@ -10,6 +10,11 @@
 
 #include <asm/rsi_smc.h>
 
+enum ripas {
+        RSI_RIPAS_EMPTY,
+        RSI_RIPAS_RAM,
+};
+
 static inline unsigned long rsi_request_version(unsigned long req,
                                                 unsigned long *out_lower,
                                                 unsigned long *out_higher)
@@ -35,4 +40,21 @@ static inline unsigned long rsi_get_realm_config(struct realm_config *cfg)
         return res.a0;
 }
 
+static inline unsigned long rsi_set_addr_range_state(phys_addr_t start,
+                                                     phys_addr_t end,
+                                                     enum ripas state,
+                                                     unsigned long flags,
+                                                     phys_addr_t *top)
+{
+        struct arm_smccc_res res;
+
+        arm_smccc_smc(SMC_RSI_IPA_STATE_SET, start, end, state,
+                      flags, 0, 0, 0, &res);
+
+        if (top)
+                *top = res.a1;
+
+        return res.a0;
+}
+
 #endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 763824963ed1..a483b916ed11 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -33,7 +33,8 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
                            return_address.o cpuinfo.o cpu_errata.o      \
                            cpufeature.o alternative.o cacheinfo.o       \
                            smp.o smp_spin_table.o topology.o smccc-call.o \
-                           syscall.o proton-pack.o idle.o patching.o pi/
+                           syscall.o proton-pack.o idle.o patching.o pi/ \
+                           rsi.o
 
 obj-$(CONFIG_COMPAT)                    += sys32.o signal32.o   \
                                            sys_compat.o
diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c
new file mode 100644
index 000000000000..f01bff9dab04
--- /dev/null
+++ b/arch/arm64/kernel/rsi.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#include <linux/jump_label.h>
+#include <linux/memblock.h>
+#include <linux/psci.h>
+#include <asm/rsi.h>
+
+DEFINE_STATIC_KEY_FALSE_RO(rsi_present);
+EXPORT_SYMBOL(rsi_present);
+
+static bool rsi_version_matches(void)
+{
+        unsigned long ver_lower, ver_higher;
+        unsigned long ret = rsi_request_version(RSI_ABI_VERSION,
+                                                &ver_lower,
+                                                &ver_higher);
+
+        if (ret == SMCCC_RET_NOT_SUPPORTED)
+                return false;
+
+        if (ret != RSI_SUCCESS) {
+                pr_err("RME: RMM doesn't support RSI version %u.%u. Supported range: %lu.%lu-%lu.%lu\n",
+                       RSI_ABI_VERSION_MAJOR, RSI_ABI_VERSION_MINOR,
+                       RSI_ABI_VERSION_GET_MAJOR(ver_lower),
+                       RSI_ABI_VERSION_GET_MINOR(ver_lower),
+                       RSI_ABI_VERSION_GET_MAJOR(ver_higher),
+                       RSI_ABI_VERSION_GET_MINOR(ver_higher));
+                return false;
+        }
+
+        pr_info("RME: Using RSI version %lu.%lu\n",
+                RSI_ABI_VERSION_GET_MAJOR(ver_lower),
+                RSI_ABI_VERSION_GET_MINOR(ver_lower));
+
+        return true;
+}
+
+void __init arm64_rsi_setup_memory(void)
+{
+        u64 i;
+        phys_addr_t start, end;
+
+        if (!is_realm_world())
+                return;
+
+        /*
+         * Iterate over the available memory ranges and convert the state to
+         * protected memory. We should take extra care to ensure that we DO NOT
+         * permit any "DESTROYED" pages to be converted to "RAM".
+         *
+         * BUG_ON is used because if the attempt to switch the memory to
+         * protected has failed here, then future accesses to the memory are
+         * simply going to be reflected as a fault which we can't handle.
+         * Bailing out early prevents the guest limping on and dying later.
+         */
+        for_each_mem_range(i, &start, &end) {
+                BUG_ON(rsi_set_memory_range_protected_safe(start, end));
+        }
+}
+
+void __init arm64_rsi_init(void)
+{
+        /*
+         * If PSCI isn't using SMC, RMM isn't present. Don't try to execute an
+         * SMC as it could be UNDEFINED.
+         */
+        if (!psci_early_test_conduit(SMCCC_CONDUIT_SMC))
+                return;
+        if (!rsi_version_matches())
+                return;
+
+        static_branch_enable(&rsi_present);
+}
+
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index a096e2451044..143f87615af0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -43,6 +43,7 @@
 #include <asm/cpu_ops.h>
 #include <asm/kasan.h>
 #include <asm/numa.h>
+#include <asm/rsi.h>
 #include <asm/scs.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
@@ -293,6 +294,11 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
          * cpufeature code and early parameters.
          */
         jump_label_init();
+        /*
+         * Init RSI before early param so that "earlycon" console uses the
+         * shared alias when in a realm
+         */
+        arm64_rsi_init();
         parse_early_param();
 
         dynamic_scs_init();
@@ -328,6 +334,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 
         arm64_memblock_init();
 
+        arm64_rsi_setup_memory();
+
         paging_init();
 
         acpi_table_upgrade();
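
For context, here is a minimal sketch of how code elsewhere in the kernel might
use the helpers this patch introduces. It is an illustration only, not part of
the patch; the function name example_make_range_shared() is made up. It checks
is_realm_world() before issuing any RSI call and relies on the
rsi_set_memory_range_*() helpers added in asm/rsi.h above.

/* Illustration only - not part of this patch. */
#include <linux/types.h>
#include <asm/rsi.h>

static int example_make_range_shared(phys_addr_t start, phys_addr_t end)
{
        /* Outside a realm there is no RSI, so there is nothing to do. */
        if (!is_realm_world())
                return 0;

        /*
         * Ask the RMM to set the range's RIPAS to EMPTY. The helper loops
         * until the whole range has been converted and returns -EINVAL if
         * the RMM rejects or truncates the request.
         */
        return rsi_set_memory_range_shared(start, end);
}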