From patchwork Wed Oct 16 09:21:55 2024
X-Patchwork-Submitter: Bertrand Marquis
X-Patchwork-Id: 13838017
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
 Julien Grall, Michal Orzel
Subject: [RFC PATCH 1/4] xen/arm: ffa: Introduce VM to VM support
Date: Wed, 16 Oct 2024 11:21:55 +0200
Message-ID: <0475e48ace0acd862224e7ff628d11db94392871.1729069025.git.bertrand.marquis@arm.com>

Create a CONFIG_FFA_VM_TO_VM parameter to activate FF-A communication
between VMs. When it is enabled, list the VMs in the system that have
FF-A support in part_info_get.

WARNING: There is no filtering for now, so all VMs are listed!

Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/Kconfig        |  11 +++
 xen/arch/arm/tee/ffa_partinfo.c | 135 ++++++++++++++++++++++++++------
 xen/arch/arm/tee/ffa_private.h  |  12 +++
 3 files changed, 135 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
index c5b0f88d7522..88a4c4c99154 100644
--- a/xen/arch/arm/tee/Kconfig
+++ b/xen/arch/arm/tee/Kconfig
@@ -28,5 +28,16 @@ config FFA

 	  [1] https://developer.arm.com/documentation/den0077/latest

+config FFA_VM_TO_VM
+	bool "Enable FF-A between VMs (UNSUPPORTED)" if UNSUPPORTED
+	default n
+	depends on FFA
+	help
+	  This option enables use of FF-A between VMs.
+	  This is experimental and there is no access control, so any
+	  guest can communicate with any other guest.
+
+	  If unsure, say N.
+
 endmenu

diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index fde187dba4e5..d699a267cc76 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -77,7 +77,21 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
     };
     uint32_t src_size, dst_size;
     void *dst_buf;
-    uint32_t ffa_sp_count = 0;
+    uint32_t ffa_vm_count = 0, ffa_sp_count = 0;
+#ifdef CONFIG_FFA_VM_TO_VM
+    struct domain *dom;
+
+    /* Count the number of VMs with FF-A support */
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain( dom )
+    {
+        struct ffa_ctx *vm = dom->arch.tee;
+
+        if ( dom != d && vm != NULL && vm->guest_vers != 0 )
+            ffa_vm_count++;
+    }
+    rcu_read_unlock(&domlist_read_lock);
+#endif

     /*
      * If the guest is v1.0, it does not get back the entry size so we must
@@ -127,33 +141,38 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)

     dst_buf = ctx->rx;

-    if ( !ffa_rx )
+    /* If not supported, we have ffa_sp_count = 0 */
+    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
     {
-        ret = FFA_RET_DENIED;
-        goto out_rx_release;
-    }
+        if ( !ffa_rx )
+        {
+            ret = FFA_RET_DENIED;
+            goto out_rx_release;
+        }

-    spin_lock(&ffa_rx_buffer_lock);
+        spin_lock(&ffa_rx_buffer_lock);

-    ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
+        ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);

-    if ( ret )
-        goto out_rx_hyp_unlock;
+        if ( ret )
+            goto out_rx_hyp_unlock;

-    /*
-     * ffa_partition_info_get() succeeded so we now own the RX buffer we
-     * share with the SPMC. We must give it back using ffa_hyp_rx_release()
-     * once we've copied the content.
-     */
+        /*
+         * ffa_partition_info_get() succeeded so we now own the RX buffer we
+         * share with the SPMC. We must give it back using
+         * ffa_hyp_rx_release() once we've copied the content.
+         */

-    /* we cannot have a size smaller than the 1.0 structure */
-    if ( src_size < sizeof(struct ffa_partition_info_1_0) )
-    {
-        ret = FFA_RET_NOT_SUPPORTED;
-        goto out_rx_hyp_release;
+        /* we cannot have a size smaller than the 1.0 structure */
+        if ( src_size < sizeof(struct ffa_partition_info_1_0) )
+        {
+            ret = FFA_RET_NOT_SUPPORTED;
+            goto out_rx_hyp_release;
+        }
     }

-    if ( ctx->page_count * FFA_PAGE_SIZE < ffa_sp_count * dst_size )
+    if ( ctx->page_count * FFA_PAGE_SIZE <
+         (ffa_sp_count + ffa_vm_count) * dst_size )
     {
         ret = FFA_RET_NO_MEMORY;
         goto out_rx_hyp_release;
@@ -185,18 +204,88 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
         }
     }

+    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
+    {
+        ffa_hyp_rx_release();
+        spin_unlock(&ffa_rx_buffer_lock);
+    }
+
+#ifdef CONFIG_FFA_VM_TO_VM
+    if ( ffa_vm_count )
+    {
+        uint32_t curr = 0;
+
+        /* add the VM information if any */
+        rcu_read_lock(&domlist_read_lock);
+        for_each_domain( dom )
+        {
+            struct ffa_ctx *vm = dom->arch.tee;
+
+            /*
+             * We do not add the calling VM to the list, and we only add
+             * VMs with FF-A support.
+             */
+            if ( dom != d && vm != NULL && vm->guest_vers != 0 )
+            {
+                /*
+                 * We do not have UUID info for VMs so use the 1.0
+                 * structure so that we set UUIDs to zero using memset.
+                 */
+                struct ffa_partition_info_1_0 srcvm;
+
+                if ( curr == ffa_vm_count )
+                {
+                    /*
+                     * The number of domains changed since we counted them.
+                     * We can add new ones if there is enough space in the
+                     * destination buffer, so check it or bail out with an
+                     * error.
+                     */
+                    ffa_vm_count++;
+                    if ( ctx->page_count * FFA_PAGE_SIZE <
+                         (ffa_sp_count + ffa_vm_count) * dst_size )
+                    {
+                        ret = FFA_RET_NO_MEMORY;
+                        rcu_read_unlock(&domlist_read_lock);
+                        goto out_rx_release;
+                    }
+                }
+
+                srcvm.id = ffa_get_vm_id(dom);
+                srcvm.execution_context = dom->max_vcpus;
+                srcvm.partition_properties = FFA_PART_VM_PROP;
+                if ( is_64bit_domain(dom) )
+                    srcvm.partition_properties |= FFA_PART_PROP_AARCH64_STATE;
+
+                memcpy(dst_buf, &srcvm, MIN(sizeof(srcvm), dst_size));
+
+                if ( dst_size > sizeof(srcvm) )
+                    memset(dst_buf + sizeof(srcvm), 0,
+                           dst_size - sizeof(srcvm));
+
+                dst_buf += dst_size;
+                curr++;
+            }
+        }
+        rcu_read_unlock(&domlist_read_lock);
+
+        /* the number of domains could have shrunk since the initial count */
+        ffa_vm_count = curr;
+    }
+#endif
+
+    goto out;
+
 out_rx_hyp_release:
     ffa_hyp_rx_release();
 out_rx_hyp_unlock:
     spin_unlock(&ffa_rx_buffer_lock);
 out_rx_release:
-    if ( ret != FFA_RET_OK )
-        ffa_rx_release(d);
+    ffa_rx_release(d);
 out:
     if ( ret )
         ffa_set_regs_error(regs, ret);
     else
-        ffa_set_regs_success(regs, ffa_sp_count, dst_size);
+        ffa_set_regs_success(regs, ffa_sp_count + ffa_vm_count, dst_size);
 }

 static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,

diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index d441c0ca5598..47dd6b5fadea 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -187,6 +187,18 @@
  */
 #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)

+/*
+ * Partition properties we give for a normal world VM:
+ * - can send direct messages but not receive them
+ * - can handle indirect messages
+ * - can receive notifications
+ * The 32/64 bit flag is set depending on the VM.
+ */
+#define FFA_PART_VM_PROP (FFA_PART_PROP_DIRECT_REQ_SEND | \
+                          FFA_PART_PROP_INDIRECT_MSGS | \
+                          FFA_PART_PROP_RECV_NOTIF | \
+                          FFA_PART_PROP_IS_PE_ID)
+
 /* Flags used in calls to FFA_NOTIFICATION_GET interface */
 #define FFA_NOTIF_FLAG_BITMAP_SP BIT(0, U)
 #define FFA_NOTIF_FLAG_BITMAP_VM BIT(1, U)

From patchwork Wed Oct 16 09:21:56 2024
X-Patchwork-Submitter: Bertrand Marquis
X-Patchwork-Id: 13838018
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
 Julien Grall, Michal Orzel
Subject: [RFC PATCH 2/4] xen/arm: ffa: Add buffer full notification support
Date: Wed, 16 Oct 2024 11:21:56 +0200
Message-ID: <70a1fd32542901791fef0d528b0fb0fa94f8e814.1729069025.git.bertrand.marquis@arm.com>

Add support for raising an Rx buffer full notification to a VM. This
function will be used for indirect message support between VMs and is
only activated if CONFIG_FFA_VM_TO_VM is selected.

Even though 32 framework notifications are possible, only one is
defined right now, so the implementation is simplified to handle only
the buffer full notification, using a boolean. If other framework
notifications have to be supported one day, the design will have to be
modified to handle them properly.
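As a side note for reviewers, the intended edge-triggered behaviour of the single boolean can be sketched in plain C. The helpers below are illustrative stand-ins for Xen's test_and_set_bool()/test_and_clear_bool() and vgic_inject_irq(), not the real implementations: only the false-to-true transition of the pending flag injects an interrupt, and a get consumes the flag.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the per-guest notification state. */
struct notif_state {
    bool buff_full_pending;
    int irqs_injected;      /* counts would-be vgic_inject_irq() calls */
};

/* Mirrors test_and_set_bool(): sets the flag, returns the old value. */
static bool test_and_set_bool_stub(bool *v)
{
    bool old = *v;

    *v = true;
    return old;
}

/* Raise the buffer full notification: inject only on the 0 -> 1 edge. */
static void raise_rx_buffer_full(struct notif_state *s)
{
    if ( !test_and_set_bool_stub(&s->buff_full_pending) )
        s->irqs_injected++;
}

/* FFA_NOTIFICATION_GET side: consume the pending flag. */
static bool notification_get(struct notif_state *s)
{
    bool was_pending = s->buff_full_pending;

    s->buff_full_pending = false;
    return was_pending;
}
```

Raising twice before the guest calls FFA_NOTIFICATION_GET injects a single interrupt, which matches the simplification described above.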
Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa_notif.c   | 26 +++++++++++++++++++++-----
 xen/arch/arm/tee/ffa_private.h | 13 +++++++++++++
 2 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
index 3c6418e62e2b..052b3e364a70 100644
--- a/xen/arch/arm/tee/ffa_notif.c
+++ b/xen/arch/arm/tee/ffa_notif.c
@@ -93,6 +93,7 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
 void ffa_handle_notification_get(struct cpu_user_regs *regs)
 {
     struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
     uint32_t recv = get_user_reg(regs, 1);
     uint32_t flags = get_user_reg(regs, 2);
     uint32_t w2 = 0;
@@ -132,11 +133,7 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
      */
     if ( ( flags & FFA_NOTIF_FLAG_BITMAP_SP ) &&
          ( flags & FFA_NOTIF_FLAG_BITMAP_SPM ) )
-    {
-        struct ffa_ctx *ctx = d->arch.tee;
-
-        ACCESS_ONCE(ctx->notif.secure_pending) = false;
-    }
+        ACCESS_ONCE(ctx->notif.secure_pending) = false;

     arm_smccc_1_2_smc(&arg, &resp);
     e = ffa_get_ret_code(&resp);
@@ -156,6 +153,12 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
         w6 = resp.a6;
     }

+#ifdef CONFIG_FFA_VM_TO_VM
+    if ( flags & FFA_NOTIF_FLAG_BITMAP_HYP &&
+         test_and_clear_bool(ctx->notif.buff_full_pending) )
+        w7 = FFA_NOTIF_RX_BUFFER_FULL;
+#endif
+
     ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
 }

@@ -178,6 +181,19 @@ int ffa_handle_notification_set(struct cpu_user_regs *regs)
                            bitmap_hi);
 }

+#ifdef CONFIG_FFA_VM_TO_VM
+void ffa_raise_rx_buffer_full(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !ctx )
+        return;
+
+    if ( !test_and_set_bool(ctx->notif.buff_full_pending) )
+        vgic_inject_irq(d, d->vcpu[0], notif_sri_irq, true);
+}
+#endif
+
 /*
  * Extract a 16-bit ID (index n) from the successful return value from
  * FFA_NOTIFICATION_INFO_GET_64 or FFA_NOTIFICATION_INFO_GET_32. IDs are

diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index 47dd6b5fadea..ad1dd04aeb7c 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -210,6 +210,8 @@
 #define FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT 7
 #define FFA_NOTIF_INFO_GET_ID_COUNT_MASK  0x1F

+#define FFA_NOTIF_RX_BUFFER_FULL BIT(0, U)
+
 /* Feature IDs used with FFA_FEATURES */
 #define FFA_FEATURE_NOTIF_PEND_INTR    0x1U
 #define FFA_FEATURE_SCHEDULE_RECV_INTR 0x2U
@@ -298,6 +300,13 @@ struct ffa_ctx_notif {
      * pending global notifications.
      */
     bool secure_pending;
+
+#ifdef CONFIG_FFA_VM_TO_VM
+    /*
+     * Pending hypervisor framework notifications
+     */
+    bool buff_full_pending;
+#endif
 };

 struct ffa_ctx {
@@ -370,6 +379,10 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs);
 void ffa_handle_notification_get(struct cpu_user_regs *regs);
 int ffa_handle_notification_set(struct cpu_user_regs *regs);

+#ifdef CONFIG_FFA_VM_TO_VM
+void ffa_raise_rx_buffer_full(struct domain *d);
+#endif
+
 void ffa_handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid);
 int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs);

From patchwork Wed Oct 16 09:21:57 2024
X-Patchwork-Submitter: Bertrand Marquis
X-Patchwork-Id: 13838020
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
 Julien Grall, Michal Orzel
Subject: [RFC PATCH 3/4] xen/arm: ffa: Add indirect messages between VMs
Date: Wed, 16 Oct 2024 11:21:57 +0200
Message-ID: <52d9809a114965832ee632756152d9125e93d4ea.1729069025.git.bertrand.marquis@arm.com>

Add support for indirect messages between VMs. This is only enabled if
CONFIG_FFA_VM_TO_VM is selected.
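As an illustrative aside, the destination-buffer space check done before the copy can be sketched in plain C. The struct layout below is a hypothetical mirror of struct ffa_part_msg_rxtx (five 32-bit fields), used only to make the arithmetic concrete; the real definition lives in ffa_private.h:

```c
#include <stddef.h>
#include <stdint.h>

#define FFA_PAGE_SIZE 4096u

/* Hypothetical mirror of the FF-A partition message header layout. */
struct ffa_part_msg_rxtx {
    uint32_t flags;
    uint32_t reserved;
    uint32_t msg_offset;
    uint32_t send_recv_id;
    uint32_t msg_size;
};

/*
 * The destination RX buffer must hold the header plus the payload,
 * matching the check performed before memcpy() in ffa_handle_msg_send2().
 * Widening to 64 bits avoids overflow for large msg_size values.
 */
static int msg_fits(uint32_t rx_page_count, uint32_t msg_size)
{
    return (uint64_t)rx_page_count * FFA_PAGE_SIZE >=
           sizeof(struct ffa_part_msg_rxtx) + (uint64_t)msg_size;
}
```

With a 20-byte header and a single 4 KiB RX page, the largest payload that fits is 4076 bytes; anything larger yields FFA_RET_NO_MEMORY in the handler.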
Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa_msg.c | 96 ++++++++++++++++++++++++++++++++++----
 1 file changed, 88 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_msg.c b/xen/arch/arm/tee/ffa_msg.c
index 335f246ba657..25f184a06546 100644
--- a/xen/arch/arm/tee/ffa_msg.c
+++ b/xen/arch/arm/tee/ffa_msg.c
@@ -95,9 +95,12 @@ int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs)
     const struct ffa_part_msg_rxtx *src_msg;
     uint16_t dst_id, src_id;
     int32_t ret;
-
-    if ( !ffa_fw_supports_fid(FFA_MSG_SEND2) )
-        return FFA_RET_NOT_SUPPORTED;
+#ifdef CONFIG_FFA_VM_TO_VM
+    struct domain *dst_d;
+    struct ffa_ctx *dst_ctx;
+    struct ffa_part_msg_rxtx *dst_msg;
+    int err;
+#endif

     if ( !spin_trylock(&src_ctx->tx_lock) )
         return FFA_RET_BUSY;
@@ -106,10 +109,10 @@ int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs)
     src_id = src_msg->send_recv_id >> 16;
     dst_id = src_msg->send_recv_id & GENMASK(15, 0);

-    if ( src_id != ffa_get_vm_id(src_d) || !FFA_ID_IS_SECURE(dst_id) )
+    if ( src_id != ffa_get_vm_id(src_d) )
     {
         ret = FFA_RET_INVALID_PARAMETERS;
-        goto out_unlock_tx;
+        goto out;
     }

     /* check source message fits in buffer */
@@ -118,12 +121,89 @@ int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs)
          src_msg->msg_offset < sizeof(struct ffa_part_msg_rxtx) )
     {
         ret = FFA_RET_INVALID_PARAMETERS;
-        goto out_unlock_tx;
+        goto out;
     }

-    ret = ffa_simple_call(FFA_MSG_SEND2, ((uint32_t)src_id) << 16, 0, 0, 0);
+    if ( FFA_ID_IS_SECURE(dst_id) )
+    {
+        /* Message for a secure partition */
+        if ( !ffa_fw_supports_fid(FFA_MSG_SEND2) )
+        {
+            ret = FFA_RET_NOT_SUPPORTED;
+            goto out;
+        }
+
+        ret = ffa_simple_call(FFA_MSG_SEND2, ((uint32_t)src_id) << 16, 0, 0,
+                              0);
+        goto out;
+    }

-out_unlock_tx:
+#ifndef CONFIG_FFA_VM_TO_VM
+    ret = FFA_RET_INVALID_PARAMETERS;
+#else
+    /* Message for a VM */
+    if ( dst_id == 0 )
+    {
+        /* FF-A ID 0 is the hypervisor, this is not valid */
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    /* This is also checking that dest is not src */
+    err = rcu_lock_live_remote_domain_by_id(dst_id - 1, &dst_d);
+    if ( err )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    if ( dst_d->arch.tee == NULL )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    dst_ctx = dst_d->arch.tee;
+    if ( !dst_ctx->guest_vers )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    /* This also checks that the destination has set an Rx buffer */
+    ret = ffa_rx_acquire(dst_d);
+    if ( ret )
+        goto out_unlock;
+
+    /* we need to have enough space in the destination buffer */
+    if ( dst_ctx->page_count * FFA_PAGE_SIZE <
+         (sizeof(struct ffa_part_msg_rxtx) + src_msg->msg_size) )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        ffa_rx_release(dst_d);
+        goto out_unlock;
+    }
+
+    dst_msg = dst_ctx->rx;
+
+    /* prepare destination header */
+    dst_msg->flags = 0;
+    dst_msg->reserved = 0;
+    dst_msg->msg_offset = sizeof(struct ffa_part_msg_rxtx);
+    dst_msg->send_recv_id = src_msg->send_recv_id;
+    dst_msg->msg_size = src_msg->msg_size;
+
+    memcpy(dst_ctx->rx + sizeof(struct ffa_part_msg_rxtx),
+           src_ctx->tx + src_msg->msg_offset, src_msg->msg_size);
+
+    /* the receiver's Rx buffer will be released by the receiver */
+
+out_unlock:
+    if ( !ret )
+        ffa_raise_rx_buffer_full(dst_d);
+    rcu_unlock_domain(dst_d);
+#endif
+out:
+    spin_unlock(&src_ctx->tx_lock);
     return ret;
 }

From patchwork Wed Oct 16 09:21:58 2024
X-Patchwork-Submitter: Bertrand Marquis
X-Patchwork-Id: 13838019
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
 Julien Grall, Michal Orzel
Subject: [RFC PATCH 4/4] xen/arm: ffa: Enable VM to VM without firmware
Date: Wed, 16 Oct 2024 11:21:58 +0200
Message-ID: <57c59cae4141dd9601d7b4e9260030a16809b764.1729069025.git.bertrand.marquis@arm.com>
When VM to VM support is activated and there is no suitable FF-A
support in the firmware, enable FF-A support for VMs so that it can be
used for VM to VM communication.

If OP-TEE is running in the secure world using the non-FF-A
communication system, enabling CONFIG_FFA_VM_TO_VM could leave either
this support non-functional (if OP-TEE is probed first) or OP-TEE
non-functional (if FF-A is probed first), so activating the
configuration option is not recommended on such systems.

To make the buffer full notification work between VMs when there is no
firmware, rework the notification handling and make the global flag
record only firmware notification support.

Modify part_info_get to return the list of VMs when there is no
firmware support.

Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa.c          |  11 +++
 xen/arch/arm/tee/ffa_notif.c    | 118 ++++++++++++++++----------------
 xen/arch/arm/tee/ffa_partinfo.c |   2 +
 3 files changed, 73 insertions(+), 58 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 21d41b452dc9..6d427864f3da 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -324,8 +324,11 @@ static int ffa_domain_init(struct domain *d)
     struct ffa_ctx *ctx;
     int ret;

+#ifndef CONFIG_FFA_VM_TO_VM
     if ( !ffa_fw_version )
         return -ENODEV;
+#endif
+
     /*
      * We are using the domain_id + 1 as the FF-A ID for VMs as FF-A ID 0 is
      * reserved for the hypervisor and we only support secure endpoints using
@@ -549,7 +552,15 @@ err_no_fw:
     bitmap_zero(ffa_fw_abi_supported, FFA_ABI_BITMAP_SIZE);
     printk(XENLOG_WARNING "ARM FF-A No firmware support\n");

+#ifdef CONFIG_FFA_VM_TO_VM
+    INIT_LIST_HEAD(&ffa_teardown_head);
+    init_timer(&ffa_teardown_timer, ffa_teardown_timer_callback, NULL, 0);
+
+    printk(XENLOG_INFO "ARM FF-A only available between VMs\n");
+    return true;
+#else
     return false;
+#endif
 }

 static const struct tee_mediator_ops ffa_ops =

diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
index 052b3e364a70..f2c87d1320de 100644
--- a/xen/arch/arm/tee/ffa_notif.c
+++ b/xen/arch/arm/tee/ffa_notif.c
@@ -16,7 +16,7 @@

 #include "ffa_private.h"

-static bool __ro_after_init notif_enabled;
+static bool __ro_after_init fw_notif_enabled;
 static unsigned int __ro_after_init notif_sri_irq;

 int ffa_handle_notification_bind(struct cpu_user_regs *regs)
@@ -27,21 +27,17 @@ int ffa_handle_notification_bind(struct cpu_user_regs *regs)
     uint32_t bitmap_lo = get_user_reg(regs, 3);
     uint32_t bitmap_hi = get_user_reg(regs, 4);

-    if ( !notif_enabled )
-        return FFA_RET_NOT_SUPPORTED;
-
     if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
         return FFA_RET_INVALID_PARAMETERS;

     if ( flags ) /* Only global notifications are supported */
         return FFA_RET_DENIED;

-    /*
-     * We only support notifications from SP so no need to check the sender
-     * endpoint ID, the SPMC will take care of that for us.
-     */
-    return ffa_simple_call(FFA_NOTIFICATION_BIND, src_dst, flags, bitmap_hi,
-                           bitmap_lo);
+    if ( FFA_ID_IS_SECURE(src_dst >> 16) && fw_notif_enabled )
+        return ffa_simple_call(FFA_NOTIFICATION_BIND, src_dst, flags,
+                               bitmap_hi, bitmap_lo);
+
+    return FFA_RET_NOT_SUPPORTED;
 }

 int ffa_handle_notification_unbind(struct cpu_user_regs *regs)
@@ -51,32 +47,36 @@ int ffa_handle_notification_unbind(struct cpu_user_regs *regs)
     uint32_t bitmap_lo = get_user_reg(regs, 3);
     uint32_t bitmap_hi = get_user_reg(regs, 4);

-    if ( !notif_enabled )
-        return FFA_RET_NOT_SUPPORTED;
-
     if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
         return FFA_RET_INVALID_PARAMETERS;

-    /*
-     * We only support notifications from SP so no need to check the
-     * destination endpoint ID, the SPMC will take care of that for us.
-     */
-    return ffa_simple_call(FFA_NOTIFICATION_UNBIND, src_dst, 0, bitmap_hi,
-                           bitmap_lo);
+    if ( FFA_ID_IS_SECURE(src_dst >> 16) && fw_notif_enabled )
+        return ffa_simple_call(FFA_NOTIFICATION_UNBIND, src_dst, 0, bitmap_hi,
+                               bitmap_lo);
+
+    return FFA_RET_NOT_SUPPORTED;
 }

 void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
 {
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
+    bool notif_pending = false;

-    if ( !notif_enabled )
+#ifndef CONFIG_FFA_VM_TO_VM
+    if ( !fw_notif_enabled )
     {
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         return;
     }
+#endif

-    if ( test_and_clear_bool(ctx->notif.secure_pending) )
+    notif_pending = ctx->notif.secure_pending;
+#ifdef CONFIG_FFA_VM_TO_VM
+    notif_pending |= ctx->notif.buff_full_pending;
+#endif
+
+    if ( notif_pending )
     {
         /* A pending global notification for the guest */
         ffa_set_regs(regs, FFA_SUCCESS_64, 0,
@@ -103,11 +103,13 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
     uint32_t w6 = 0;
     uint32_t w7 = 0;

-    if ( !notif_enabled )
+#ifndef CONFIG_FFA_VM_TO_VM
+    if ( !fw_notif_enabled )
     {
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         return;
     }
+#endif

     if ( (recv & 0xFFFFU) != ffa_get_vm_id(d) )
     {
@@ -115,7 +117,8 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
         return;
     }

-    if ( flags & ( FFA_NOTIF_FLAG_BITMAP_SP | FFA_NOTIF_FLAG_BITMAP_SPM ) )
+    if ( fw_notif_enabled && (flags & ( FFA_NOTIF_FLAG_BITMAP_SP |
+                                        FFA_NOTIF_FLAG_BITMAP_SPM )) )
     {
         struct arm_smccc_1_2_regs arg = {
             .a0 = FFA_NOTIFICATION_GET,
@@ -170,15 +173,14 @@ int ffa_handle_notification_set(struct cpu_user_regs *regs)
     uint32_t bitmap_lo = get_user_reg(regs, 3);
     uint32_t bitmap_hi = get_user_reg(regs, 4);

-    if ( !notif_enabled )
-        return FFA_RET_NOT_SUPPORTED;
-
     if ( (src_dst >> 16) != ffa_get_vm_id(d) )
         return FFA_RET_INVALID_PARAMETERS;

-    /* Let the SPMC check the destination of the notification */
-    return ffa_simple_call(FFA_NOTIFICATION_SET, src_dst, flags, bitmap_lo,
-                           bitmap_hi);
+    if ( FFA_ID_IS_SECURE(src_dst & 0xFFFFU) && fw_notif_enabled )
+        return ffa_simple_call(FFA_NOTIFICATION_SET, src_dst, flags,
+                               bitmap_lo, bitmap_hi);
+
+    return FFA_RET_NOT_SUPPORTED;
 }

 #ifdef CONFIG_FFA_VM_TO_VM
@@ -190,7 +192,7 @@ void ffa_raise_rx_buffer_full(struct domain *d)
         return;

     if ( !test_and_set_bool(ctx->notif.buff_full_pending) )
-        vgic_inject_irq(d, d->vcpu[0], notif_sri_irq, true);
+        vgic_inject_irq(d, d->vcpu[0], GUEST_FFA_NOTIF_PEND_INTR_ID, true);
 }
 #endif

@@ -363,7 +365,7 @@ void ffa_notif_init_interrupt(void)
 {
     int ret;

-    if ( notif_enabled && notif_sri_irq < NR_GIC_SGI )
+    if ( fw_notif_enabled && notif_sri_irq < NR_GIC_SGI )
     {
         /*
          * An error here is unlikely since the primary CPU has already
@@ -394,47 +396,47 @@ void ffa_notif_init(void)
     int ret;

     /* Only enable fw notification if all ABIs we need are supported */
-    if ( !(ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_CREATE) &&
-           ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_DESTROY) &&
-           ffa_fw_supports_fid(FFA_NOTIFICATION_GET) &&
-           ffa_fw_supports_fid(FFA_NOTIFICATION_INFO_GET_64)) )
-        return;
-
-    arm_smccc_1_2_smc(&arg, &resp);
-    if ( resp.a0 != FFA_SUCCESS_32 )
-        return;
-
-    irq = resp.a2;
-    notif_sri_irq = irq;
-    if ( irq >= NR_GIC_SGI )
-        irq_set_type(irq, IRQ_TYPE_EDGE_RISING);
-    ret = request_irq(irq, 0, notif_irq_handler, "FF-A notif", NULL);
-    if ( ret )
+    if ( ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_CREATE) &&
+         ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_DESTROY) &&
+         ffa_fw_supports_fid(FFA_NOTIFICATION_GET) &&
+         ffa_fw_supports_fid(FFA_NOTIFICATION_INFO_GET_64) )
     {
-        printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
-               irq, ret);
-        return;
-    }
+        arm_smccc_1_2_smc(&arg, &resp);
+        if ( resp.a0 != FFA_SUCCESS_32 )
+            return;

-    notif_enabled = true;
+        irq = resp.a2;
+        notif_sri_irq = irq;
+        if ( irq >= NR_GIC_SGI )
+            irq_set_type(irq, IRQ_TYPE_EDGE_RISING);
+        ret = request_irq(irq, 0, notif_irq_handler, "FF-A notif", NULL);
+        if ( ret )
+        {
+            printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+                   irq, ret);
+            return;
+        }
+
+        fw_notif_enabled = true;
+    }
 }

 int ffa_notif_domain_init(struct domain *d)
 {
     int32_t res;

-    if ( !notif_enabled )
-        return 0;
+    if ( fw_notif_enabled )
+    {
+        res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
+        if ( res )
+            return -ENOMEM;
+    }

-    res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
-    if ( res )
-        return -ENOMEM;
     return 0;
 }

 void ffa_notif_domain_destroy(struct domain *d)
 {
-    if ( notif_enabled )
+    if ( fw_notif_enabled )
         ffa_notification_bitmap_destroy(ffa_get_vm_id(d));
 }

diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index d699a267cc76..2e09440fe6c2 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -128,12 +128,14 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
         goto out;
     }

+#ifndef CONFIG_FFA_VM_TO_VM
     if ( !ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
     {
         /* Just give an empty partition list to the caller */
         ret = FFA_RET_OK;
         goto out;
     }
+#endif

     ret = ffa_rx_acquire(d);
     if ( ret != FFA_RET_OK )
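As a closing aside for reviewers, the ID scheme the series relies on (FF-A ID 0 reserved for the hypervisor, VMs using domain_id + 1, as noted in ffa_domain_init(), and dst_id - 1 used to look up the destination domain in ffa_handle_msg_send2()) can be sketched in plain C. The secure-ID test below assumes bit 15 marks secure-world endpoints, mirroring what FFA_ID_IS_SECURE appears to check; treat it as an illustration, not the Xen definition:

```c
#include <stdint.h>

/* Assumption for illustration: secure endpoint IDs have bit 15 set. */
#define FFA_ID_IS_SECURE(id) (((id) & 0x8000u) != 0)

/* FF-A ID 0 is the hypervisor, so VMs are assigned domain_id + 1. */
static uint16_t vm_ffa_id(uint16_t domain_id)
{
    return domain_id + 1;
}

/* Inverse mapping; the caller must reject ffa_id == 0 first, as the
 * indirect-message handler does before rcu_lock_live_remote_domain_by_id(). */
static uint16_t ffa_id_to_domid(uint16_t ffa_id)
{
    return ffa_id - 1;
}
```

This is why the message path branches on FFA_ID_IS_SECURE(dst_id): secure IDs go to the SPMC via FFA_MSG_SEND2, while non-secure, non-zero IDs are translated back to a domain ID and handled entirely inside Xen.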