From patchwork Mon Mar 25 09:38:59 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13601702
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk,
 Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH 1/6] xen/arm: ffa: rename functions to use ffa_ prefix
Date: Mon, 25 Mar 2024 10:38:59 +0100
Message-Id: <20240325093904.3466092-2-jens.wiklander@linaro.org>
In-Reply-To: <20240325093904.3466092-1-jens.wiklander@linaro.org>
References: <20240325093904.3466092-1-jens.wiklander@linaro.org>

Prepare to separate ffa.c into modules by renaming the functions that
will need new names when they become non-static in the following
commit.
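To make the naming convention concrete, here is the pattern in
miniature (an illustration only, using one of the helpers the diff
below actually renames; this patch keeps the functions static, the
visibility change comes later in the series):

    /* Before: file-local, so a short generic name cannot clash. */
    static uint16_t get_vm_id(const struct domain *d);

    /* After: carries the subsystem prefix, ready to be exported from
     * the FF-A mediator without polluting Xen's global namespace. */
    static uint16_t ffa_get_vm_id(const struct domain *d);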
Signed-off-by: Jens Wiklander Reviewed-by: Bertrand Marquis --- xen/arch/arm/tee/ffa.c | 125 +++++++++++++++++++++-------------------- 1 file changed, 65 insertions(+), 60 deletions(-) diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c index 9a05dcede17a..0344a0f17e72 100644 --- a/xen/arch/arm/tee/ffa.c +++ b/xen/arch/arm/tee/ffa.c @@ -4,7 +4,7 @@ * * Arm Firmware Framework for ARMv8-A (FF-A) mediator * - * Copyright (C) 2023 Linaro Limited + * Copyright (C) 2023-2024 Linaro Limited * * References: * FF-A-1.0-REL: FF-A specification version 1.0 available at @@ -473,7 +473,7 @@ static bool ffa_get_version(uint32_t *vers) return true; } -static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp) +static int32_t ffa_get_ret_code(const struct arm_smccc_1_2_regs *resp) { switch ( resp->a0 ) { @@ -504,7 +504,7 @@ static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2, arm_smccc_1_2_smc(&arg, &resp); - return get_ffa_ret_code(&resp); + return ffa_get_ret_code(&resp); } static int32_t ffa_features(uint32_t id) @@ -546,7 +546,7 @@ static int32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, arm_smccc_1_2_smc(&arg, &resp); - ret = get_ffa_ret_code(&resp); + ret = ffa_get_ret_code(&resp); if ( !ret ) { *count = resp.a2; @@ -654,15 +654,16 @@ static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id, return res; } -static uint16_t get_vm_id(const struct domain *d) +static uint16_t ffa_get_vm_id(const struct domain *d) { /* +1 since 0 is reserved for the hypervisor in FF-A */ return d->domain_id + 1; } -static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1, - register_t v2, register_t v3, register_t v4, register_t v5, - register_t v6, register_t v7) +static void ffa_set_regs(struct cpu_user_regs *regs, register_t v0, + register_t v1, register_t v2, register_t v3, + register_t v4, register_t v5, register_t v6, + register_t v7) { set_user_reg(regs, 0, v0); set_user_reg(regs, 1, v1); @@ -674,15 +675,15 @@ static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1, set_user_reg(regs, 7, v7); } -static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code) +static void ffa_set_regs_error(struct cpu_user_regs *regs, uint32_t error_code) { - set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0); + ffa_set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0); } -static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2, +static void ffa_set_regs_success(struct cpu_user_regs *regs, uint32_t w2, uint32_t w3) { - set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0); + ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0); } static void handle_version(struct cpu_user_regs *regs) @@ -697,11 +698,11 @@ static void handle_version(struct cpu_user_regs *regs) vers = FFA_VERSION_1_1; ctx->guest_vers = vers; - set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0); + ffa_set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0); } -static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr, - register_t rx_addr, uint32_t page_count) +static uint32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr, + register_t rx_addr, uint32_t page_count) { uint32_t ret = FFA_RET_INVALID_PARAMETERS; struct domain *d = current->domain; @@ -789,7 +790,7 @@ static void rxtx_unmap(struct ffa_ctx *ctx) ctx->rx_is_free = false; } -static uint32_t handle_rxtx_unmap(void) +static uint32_t ffa_handle_rxtx_unmap(void) { struct domain *d = current->domain; struct ffa_ctx *ctx = d->arch.tee; @@ -802,9 +803,10 @@ static 
uint32_t handle_rxtx_unmap(void) return FFA_RET_OK; } -static int32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, - uint32_t w4, uint32_t w5, - uint32_t *count, uint32_t *fpi_size) +static int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, + uint32_t w3, uint32_t w4, + uint32_t w5, uint32_t *count, + uint32_t *fpi_size) { int32_t ret = FFA_RET_DENIED; struct domain *d = current->domain; @@ -883,7 +885,7 @@ out: return ret; } -static int32_t handle_rx_release(void) +static int32_t ffa_handle_rx_release(void) { int32_t ret = FFA_RET_DENIED; struct domain *d = current->domain; @@ -916,7 +918,7 @@ static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid) mask = GENMASK_ULL(31, 0); src_dst = get_user_reg(regs, 1); - if ( (src_dst >> 16) != get_vm_id(d) ) + if ( (src_dst >> 16) != ffa_get_vm_id(d) ) { resp.a0 = FFA_ERROR; resp.a2 = FFA_RET_INVALID_PARAMETERS; @@ -949,8 +951,9 @@ static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid) } out: - set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask, - resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask); + ffa_set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask, + resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, + resp.a7 & mask); } /* @@ -1249,7 +1252,7 @@ static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen, return 0; } -static void handle_mem_share(struct cpu_user_regs *regs) +static void ffa_handle_mem_share(struct cpu_user_regs *regs) { uint32_t tot_len = get_user_reg(regs, 1); uint32_t frag_len = get_user_reg(regs, 2); @@ -1318,7 +1321,7 @@ static void handle_mem_share(struct cpu_user_regs *regs) goto out_unlock; } - if ( trans.sender_id != get_vm_id(d) ) + if ( trans.sender_id != ffa_get_vm_id(d) ) { ret = FFA_RET_INVALID_PARAMETERS; goto out_unlock; @@ -1402,9 +1405,9 @@ out_unlock: out_set_ret: if ( ret == 0) - set_regs_success(regs, handle_lo, handle_hi); + ffa_set_regs_success(regs, handle_lo, handle_hi); else - set_regs_error(regs, ret); + ffa_set_regs_error(regs, ret); } /* Must only be called with ctx->lock held */ @@ -1419,7 +1422,7 @@ static struct ffa_shm_mem *find_shm_mem(struct ffa_ctx *ctx, uint64_t handle) return NULL; } -static int handle_mem_reclaim(uint64_t handle, uint32_t flags) +static int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags) { struct domain *d = current->domain; struct ffa_ctx *ctx = d->arch.tee; @@ -1471,41 +1474,42 @@ static bool ffa_handle_call(struct cpu_user_regs *regs) handle_version(regs); return true; case FFA_ID_GET: - set_regs_success(regs, get_vm_id(d), 0); + ffa_set_regs_success(regs, ffa_get_vm_id(d), 0); return true; case FFA_RXTX_MAP_32: case FFA_RXTX_MAP_64: - e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2), - get_user_reg(regs, 3)); + e = ffa_handle_rxtx_map(fid, get_user_reg(regs, 1), + get_user_reg(regs, 2), get_user_reg(regs, 3)); if ( e ) - set_regs_error(regs, e); + ffa_set_regs_error(regs, e); else - set_regs_success(regs, 0, 0); + ffa_set_regs_success(regs, 0, 0); return true; case FFA_RXTX_UNMAP: - e = handle_rxtx_unmap(); + e = ffa_handle_rxtx_unmap(); if ( e ) - set_regs_error(regs, e); + ffa_set_regs_error(regs, e); else - set_regs_success(regs, 0, 0); + ffa_set_regs_success(regs, 0, 0); return true; case FFA_PARTITION_INFO_GET: - e = handle_partition_info_get(get_user_reg(regs, 1), - get_user_reg(regs, 2), - get_user_reg(regs, 3), - get_user_reg(regs, 4), - get_user_reg(regs, 5), &count, &fpi_size); 
+ e = ffa_handle_partition_info_get(get_user_reg(regs, 1), + get_user_reg(regs, 2), + get_user_reg(regs, 3), + get_user_reg(regs, 4), + get_user_reg(regs, 5), &count, + &fpi_size); if ( e ) - set_regs_error(regs, e); + ffa_set_regs_error(regs, e); else - set_regs_success(regs, count, fpi_size); + ffa_set_regs_success(regs, count, fpi_size); return true; case FFA_RX_RELEASE: - e = handle_rx_release(); + e = ffa_handle_rx_release(); if ( e ) - set_regs_error(regs, e); + ffa_set_regs_error(regs, e); else - set_regs_success(regs, 0, 0); + ffa_set_regs_success(regs, 0, 0); return true; case FFA_MSG_SEND_DIRECT_REQ_32: case FFA_MSG_SEND_DIRECT_REQ_64: @@ -1513,21 +1517,21 @@ static bool ffa_handle_call(struct cpu_user_regs *regs) return true; case FFA_MEM_SHARE_32: case FFA_MEM_SHARE_64: - handle_mem_share(regs); + ffa_handle_mem_share(regs); return true; case FFA_MEM_RECLAIM: - e = handle_mem_reclaim(regpair_to_uint64(get_user_reg(regs, 2), - get_user_reg(regs, 1)), - get_user_reg(regs, 3)); + e = ffa_handle_mem_reclaim(regpair_to_uint64(get_user_reg(regs, 2), + get_user_reg(regs, 1)), + get_user_reg(regs, 3)); if ( e ) - set_regs_error(regs, e); + ffa_set_regs_error(regs, e); else - set_regs_success(regs, 0, 0); + ffa_set_regs_success(regs, 0, 0); return true; default: gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid); - set_regs_error(regs, FFA_RET_NOT_SUPPORTED); + ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED); return true; } } @@ -1593,12 +1597,12 @@ static int ffa_domain_init(struct domain *d) for ( n = 0; n < subscr_vm_created_count; n++ ) { - res = ffa_direct_req_send_vm(subscr_vm_created[n], get_vm_id(d), + res = ffa_direct_req_send_vm(subscr_vm_created[n], ffa_get_vm_id(d), FFA_MSG_SEND_VM_CREATED); if ( res ) { printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n", - get_vm_id(d), subscr_vm_created[n], res); + ffa_get_vm_id(d), subscr_vm_created[n], res); break; } } @@ -1620,13 +1624,13 @@ static void send_vm_destroyed(struct domain *d) if ( !test_bit(n, ctx->vm_destroy_bitmap) ) continue; - res = ffa_direct_req_send_vm(subscr_vm_destroyed[n], get_vm_id(d), + res = ffa_direct_req_send_vm(subscr_vm_destroyed[n], ffa_get_vm_id(d), FFA_MSG_SEND_VM_DESTROYED); if ( res ) { printk(XENLOG_ERR "%pd: ffa: Failed to report destruction of vm_id %u to %u: res %d\n", - d, get_vm_id(d), subscr_vm_destroyed[n], res); + d, ffa_get_vm_id(d), subscr_vm_destroyed[n], res); } /* @@ -1640,7 +1644,7 @@ static void send_vm_destroyed(struct domain *d) } } -static void reclaim_shms(struct domain *d) +static void ffa_reclaim_shms(struct domain *d) { struct ffa_ctx *ctx = d->arch.tee; struct ffa_shm_mem *shm, *tmp; @@ -1699,7 +1703,7 @@ static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time) struct ffa_ctx *next_ctx = NULL; send_vm_destroyed(ctx->teardown_d); - reclaim_shms(ctx->teardown_d); + ffa_reclaim_shms(ctx->teardown_d); if ( ctx->shm_count || !bitmap_empty(ctx->vm_destroy_bitmap, subscr_vm_destroyed_count) ) @@ -1719,7 +1723,8 @@ static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time) { /* * domain_destroy() might have been called (via put_domain() in - * reclaim_shms()), so we can't touch the domain structure anymore. + * ffa_reclaim_shms()), so we can't touch the domain structure + * anymore. 
          */
         xfree(ctx);

From patchwork Mon Mar 25 09:39:00 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13601703
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk,
 Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH 2/6] xen/arm: ffa: move common things to ffa_private.h
Date: Mon, 25 Mar 2024 10:39:00 +0100
Message-Id: <20240325093904.3466092-3-jens.wiklander@linaro.org>
In-Reply-To: <20240325093904.3466092-1-jens.wiklander@linaro.org>
References: <20240325093904.3466092-1-jens.wiklander@linaro.org>

Prepare to separate ffa.c into modules by moving common things into the
new internal header file ffa_private.h.
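For orientation, the intended include graph is sketched below. The
ffa_shm.c file only appears later in this series and is named here
purely to illustrate why the internal header exists:

    /* ffa.c today, and ffa_shm.c etc. in the following patches: */
    #include "ffa_private.h"

    /* Callers compile unchanged because small helpers such as
     * ffa_simple_call() move into the header as static inlines: */
    static int32_t ffa_features(uint32_t id)
    {
        return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
    }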
Signed-off-by: Jens Wiklander Reviewed-by: Bertrand Marquis --- xen/arch/arm/tee/ffa.c | 298 +----------------------------- xen/arch/arm/tee/ffa_private.h | 318 +++++++++++++++++++++++++++++++++ 2 files changed, 319 insertions(+), 297 deletions(-) create mode 100644 xen/arch/arm/tee/ffa_private.h diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c index 0344a0f17e72..259851f20bdb 100644 --- a/xen/arch/arm/tee/ffa.c +++ b/xen/arch/arm/tee/ffa.c @@ -63,204 +63,7 @@ #include #include -/* Error codes */ -#define FFA_RET_OK 0 -#define FFA_RET_NOT_SUPPORTED -1 -#define FFA_RET_INVALID_PARAMETERS -2 -#define FFA_RET_NO_MEMORY -3 -#define FFA_RET_BUSY -4 -#define FFA_RET_INTERRUPTED -5 -#define FFA_RET_DENIED -6 -#define FFA_RET_RETRY -7 -#define FFA_RET_ABORTED -8 - -/* FFA_VERSION helpers */ -#define FFA_VERSION_MAJOR_SHIFT 16U -#define FFA_VERSION_MAJOR_MASK 0x7FFFU -#define FFA_VERSION_MINOR_SHIFT 0U -#define FFA_VERSION_MINOR_MASK 0xFFFFU -#define MAKE_FFA_VERSION(major, minor) \ - ((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \ - ((minor) & FFA_VERSION_MINOR_MASK)) - -#define FFA_VERSION_1_0 MAKE_FFA_VERSION(1, 0) -#define FFA_VERSION_1_1 MAKE_FFA_VERSION(1, 1) -/* The minimal FF-A version of the SPMC that can be supported */ -#define FFA_MIN_SPMC_VERSION FFA_VERSION_1_1 - -/* - * This is the version we want to use in communication with guests and SPs. - * During negotiation with a guest or a SP we may need to lower it for - * that particular guest or SP. - */ -#define FFA_MY_VERSION_MAJOR 1U -#define FFA_MY_VERSION_MINOR 1U -#define FFA_MY_VERSION MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \ - FFA_MY_VERSION_MINOR) - -/* - * The FF-A specification explicitly works with 4K pages as a measure of - * memory size, for example, FFA_RXTX_MAP takes one parameter "RX/TX page - * count" which is the number of contiguous 4K pages allocated. Xen may use - * a different page size depending on the configuration to avoid confusion - * with PAGE_SIZE use a special define when it's a page size as in the FF-A - * specification. - */ -#define FFA_PAGE_SIZE SZ_4K - -/* - * The number of pages used for each of the RX and TX buffers shared with - * the SPMC. - */ -#define FFA_RXTX_PAGE_COUNT 1 - -/* - * Limit the number of pages RX/TX buffers guests can map. - * TODO support a larger number. - */ -#define FFA_MAX_RXTX_PAGE_COUNT 1 - -/* - * Limit for shared buffer size. Please note that this define limits - * number of pages. - * - * FF-A doesn't have any direct requirements on GlobalPlatform or vice - * versa, but an implementation can very well use FF-A in order to provide - * a GlobalPlatform interface on top. - * - * Global Platform specification for TEE requires that any TEE - * implementation should allow to share buffers with size of at least - * 512KB, defined in TEEC-1.0C page 24, Table 4-1, - * TEEC_CONFIG_SHAREDMEM_MAX_SIZE. - * Due to overhead which can be hard to predict exactly, double this number - * to give a safe margin. - */ -#define FFA_MAX_SHM_PAGE_COUNT (2 * SZ_512K / FFA_PAGE_SIZE) - -/* - * Limits the number of shared buffers that guest can have at once. This - * is to prevent case, when guests trick XEN into exhausting its own - * memory by allocating many small buffers. This value has been chosen - * arbitrarily. - */ -#define FFA_MAX_SHM_COUNT 32 - -/* - * The time we wait until trying to tear down a domain again if it was - * blocked initially. 
- */ -#define FFA_CTX_TEARDOWN_DELAY SECONDS(1) - -/* FF-A-1.1-REL0 section 10.9.2 Memory region handle, page 167 */ -#define FFA_HANDLE_HYP_FLAG BIT(63, ULL) -#define FFA_HANDLE_INVALID 0xffffffffffffffffULL - -/* - * Memory attributes: Normal memory, Write-Back cacheable, Inner shareable - * Defined in FF-A-1.1-REL0 Table 10.18 at page 175. - */ -#define FFA_NORMAL_MEM_REG_ATTR 0x2fU -/* - * Memory access permissions: Read-write - * Defined in FF-A-1.1-REL0 Table 10.15 at page 168. - */ -#define FFA_MEM_ACC_RW 0x2U - -/* FF-A-1.1-REL0 section 10.11.4 Flags usage, page 184-187 */ -/* Clear memory before mapping in receiver */ -#define FFA_MEMORY_REGION_FLAG_CLEAR BIT(0, U) -/* Relayer may time slice this operation */ -#define FFA_MEMORY_REGION_FLAG_TIME_SLICE BIT(1, U) -/* Clear memory after receiver relinquishes it */ -#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH BIT(2, U) -/* Share memory transaction */ -#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (1U << 3) - -/* - * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP: - * BIT(31): Framework or partition message - * BIT(7-0): Message type for frameworks messages - */ -#define FFA_MSG_FLAG_FRAMEWORK BIT(31, U) -#define FFA_MSG_TYPE_MASK 0xFFU; -#define FFA_MSG_PSCI 0x0U -#define FFA_MSG_SEND_VM_CREATED 0x4U -#define FFA_MSG_RESP_VM_CREATED 0x5U -#define FFA_MSG_SEND_VM_DESTROYED 0x6U -#define FFA_MSG_RESP_VM_DESTROYED 0x7U - -/* - * Flags to determine partition properties in FFA_PARTITION_INFO_GET return - * message: - * BIT(0): Supports receipt of direct requests - * BIT(1): Can send direct requests - * BIT(2): Can send and receive indirect messages - * BIT(3): Supports receipt of notifications - * BIT(4-5): Partition ID is a PE endpoint ID - * BIT(6): Partition must be informed about each VM that is created by - * the Hypervisor - * BIT(7): Partition must be informed about each VM that is destroyed by - * the Hypervisor - * BIT(8): Partition runs in the AArch64 execution state else AArch32 - * execution state - */ -#define FFA_PART_PROP_DIRECT_REQ_RECV BIT(0, U) -#define FFA_PART_PROP_DIRECT_REQ_SEND BIT(1, U) -#define FFA_PART_PROP_INDIRECT_MSGS BIT(2, U) -#define FFA_PART_PROP_RECV_NOTIF BIT(3, U) -#define FFA_PART_PROP_IS_TYPE_MASK (3U << 4) -#define FFA_PART_PROP_IS_PE_ID (0U << 4) -#define FFA_PART_PROP_IS_SEPID_INDEP (1U << 4) -#define FFA_PART_PROP_IS_SEPID_DEP (2U << 4) -#define FFA_PART_PROP_IS_AUX_ID (3U << 4) -#define FFA_PART_PROP_NOTIF_CREATED BIT(6, U) -#define FFA_PART_PROP_NOTIF_DESTROYED BIT(7, U) -#define FFA_PART_PROP_AARCH64_STATE BIT(8, U) - -/* - * Flag used as parameter to FFA_PARTITION_INFO_GET to return partition - * count only. 
- */ -#define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U) - -/* Function IDs */ -#define FFA_ERROR 0x84000060U -#define FFA_SUCCESS_32 0x84000061U -#define FFA_SUCCESS_64 0xC4000061U -#define FFA_INTERRUPT 0x84000062U -#define FFA_VERSION 0x84000063U -#define FFA_FEATURES 0x84000064U -#define FFA_RX_ACQUIRE 0x84000084U -#define FFA_RX_RELEASE 0x84000065U -#define FFA_RXTX_MAP_32 0x84000066U -#define FFA_RXTX_MAP_64 0xC4000066U -#define FFA_RXTX_UNMAP 0x84000067U -#define FFA_PARTITION_INFO_GET 0x84000068U -#define FFA_ID_GET 0x84000069U -#define FFA_SPM_ID_GET 0x84000085U -#define FFA_MSG_WAIT 0x8400006BU -#define FFA_MSG_YIELD 0x8400006CU -#define FFA_RUN 0x8400006DU -#define FFA_MSG_SEND2 0x84000086U -#define FFA_MSG_SEND_DIRECT_REQ_32 0x8400006FU -#define FFA_MSG_SEND_DIRECT_REQ_64 0xC400006FU -#define FFA_MSG_SEND_DIRECT_RESP_32 0x84000070U -#define FFA_MSG_SEND_DIRECT_RESP_64 0xC4000070U -#define FFA_MEM_DONATE_32 0x84000071U -#define FFA_MEM_DONATE_64 0xC4000071U -#define FFA_MEM_LEND_32 0x84000072U -#define FFA_MEM_LEND_64 0xC4000072U -#define FFA_MEM_SHARE_32 0x84000073U -#define FFA_MEM_SHARE_64 0xC4000073U -#define FFA_MEM_RETRIEVE_REQ_32 0x84000074U -#define FFA_MEM_RETRIEVE_REQ_64 0xC4000074U -#define FFA_MEM_RETRIEVE_RESP 0x84000075U -#define FFA_MEM_RELINQUISH 0x84000076U -#define FFA_MEM_RECLAIM 0x84000077U -#define FFA_MEM_FRAG_RX 0x8400007AU -#define FFA_MEM_FRAG_TX 0x8400007BU -#define FFA_MSG_SEND 0x8400006EU -#define FFA_MSG_POLL 0x8400006AU +#include "ffa_private.h" /* * Structs below ending with _1_0 are defined in FF-A-1.0-REL and @@ -382,39 +185,6 @@ struct ffa_endpoint_rxtx_descriptor_1_1 { uint32_t tx_region_offs; }; -struct ffa_ctx { - void *rx; - const void *tx; - struct page_info *rx_pg; - struct page_info *tx_pg; - /* Number of 4kB pages in each of rx/rx_pg and tx/tx_pg */ - unsigned int page_count; - /* FF-A version used by the guest */ - uint32_t guest_vers; - bool rx_is_free; - /* Used shared memory objects, struct ffa_shm_mem */ - struct list_head shm_list; - /* Number of allocated shared memory object */ - unsigned int shm_count; - /* - * tx_lock is used to serialize access to tx - * rx_lock is used to serialize access to rx - * lock is used for the rest in this struct - */ - spinlock_t tx_lock; - spinlock_t rx_lock; - spinlock_t lock; - /* Used if domain can't be torn down immediately */ - struct domain *teardown_d; - struct list_head teardown_list; - s_time_t teardown_expire; - /* - * Used for ffa_domain_teardown() to keep track of which SPs should be - * notified that this guest is being destroyed. 
- */ - unsigned long vm_destroy_bitmap[]; -}; - struct ffa_shm_mem { struct list_head list; uint16_t sender_id; @@ -473,40 +243,6 @@ static bool ffa_get_version(uint32_t *vers) return true; } -static int32_t ffa_get_ret_code(const struct arm_smccc_1_2_regs *resp) -{ - switch ( resp->a0 ) - { - case FFA_ERROR: - if ( resp->a2 ) - return resp->a2; - else - return FFA_RET_NOT_SUPPORTED; - case FFA_SUCCESS_32: - case FFA_SUCCESS_64: - return FFA_RET_OK; - default: - return FFA_RET_NOT_SUPPORTED; - } -} - -static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2, - register_t a3, register_t a4) -{ - const struct arm_smccc_1_2_regs arg = { - .a0 = fid, - .a1 = a1, - .a2 = a2, - .a3 = a3, - .a4 = a4, - }; - struct arm_smccc_1_2_regs resp; - - arm_smccc_1_2_smc(&arg, &resp); - - return ffa_get_ret_code(&resp); -} - static int32_t ffa_features(uint32_t id) { return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0); @@ -654,38 +390,6 @@ static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id, return res; } -static uint16_t ffa_get_vm_id(const struct domain *d) -{ - /* +1 since 0 is reserved for the hypervisor in FF-A */ - return d->domain_id + 1; -} - -static void ffa_set_regs(struct cpu_user_regs *regs, register_t v0, - register_t v1, register_t v2, register_t v3, - register_t v4, register_t v5, register_t v6, - register_t v7) -{ - set_user_reg(regs, 0, v0); - set_user_reg(regs, 1, v1); - set_user_reg(regs, 2, v2); - set_user_reg(regs, 3, v3); - set_user_reg(regs, 4, v4); - set_user_reg(regs, 5, v5); - set_user_reg(regs, 6, v6); - set_user_reg(regs, 7, v7); -} - -static void ffa_set_regs_error(struct cpu_user_regs *regs, uint32_t error_code) -{ - ffa_set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0); -} - -static void ffa_set_regs_success(struct cpu_user_regs *regs, uint32_t w2, - uint32_t w3) -{ - ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0); -} - static void handle_version(struct cpu_user_regs *regs) { struct domain *d = current->domain; diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h new file mode 100644 index 000000000000..8352b6b55a9a --- /dev/null +++ b/xen/arch/arm/tee/ffa_private.h @@ -0,0 +1,318 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2023 Linaro Limited + */ + +#ifndef __FFA_PRIVATE_H__ +#define __FFA_PRIVATE_H__ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* Error codes */ +#define FFA_RET_OK 0 +#define FFA_RET_NOT_SUPPORTED -1 +#define FFA_RET_INVALID_PARAMETERS -2 +#define FFA_RET_NO_MEMORY -3 +#define FFA_RET_BUSY -4 +#define FFA_RET_INTERRUPTED -5 +#define FFA_RET_DENIED -6 +#define FFA_RET_RETRY -7 +#define FFA_RET_ABORTED -8 + +/* FFA_VERSION helpers */ +#define FFA_VERSION_MAJOR_SHIFT 16U +#define FFA_VERSION_MAJOR_MASK 0x7FFFU +#define FFA_VERSION_MINOR_SHIFT 0U +#define FFA_VERSION_MINOR_MASK 0xFFFFU +#define MAKE_FFA_VERSION(major, minor) \ + ((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \ + ((minor) & FFA_VERSION_MINOR_MASK)) + +#define FFA_VERSION_1_0 MAKE_FFA_VERSION(1, 0) +#define FFA_VERSION_1_1 MAKE_FFA_VERSION(1, 1) +/* The minimal FF-A version of the SPMC that can be supported */ +#define FFA_MIN_SPMC_VERSION FFA_VERSION_1_1 + +/* + * This is the version we want to use in communication with guests and SPs. + * During negotiation with a guest or a SP we may need to lower it for + * that particular guest or SP. 
+ */ +#define FFA_MY_VERSION_MAJOR 1U +#define FFA_MY_VERSION_MINOR 1U +#define FFA_MY_VERSION MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \ + FFA_MY_VERSION_MINOR) + +/* + * The FF-A specification explicitly works with 4K pages as a measure of + * memory size, for example, FFA_RXTX_MAP takes one parameter "RX/TX page + * count" which is the number of contiguous 4K pages allocated. Xen may use + * a different page size depending on the configuration to avoid confusion + * with PAGE_SIZE use a special define when it's a page size as in the FF-A + * specification. + */ +#define FFA_PAGE_SIZE SZ_4K + +/* + * The number of pages used for each of the RX and TX buffers shared with + * the SPMC. + */ +#define FFA_RXTX_PAGE_COUNT 1 + +/* + * Limit the number of pages RX/TX buffers guests can map. + * TODO support a larger number. + */ +#define FFA_MAX_RXTX_PAGE_COUNT 1 + +/* + * Limit for shared buffer size. Please note that this define limits + * number of pages. + * + * FF-A doesn't have any direct requirements on GlobalPlatform or vice + * versa, but an implementation can very well use FF-A in order to provide + * a GlobalPlatform interface on top. + * + * Global Platform specification for TEE requires that any TEE + * implementation should allow to share buffers with size of at least + * 512KB, defined in TEEC-1.0C page 24, Table 4-1, + * TEEC_CONFIG_SHAREDMEM_MAX_SIZE. + * Due to overhead which can be hard to predict exactly, double this number + * to give a safe margin. + */ +#define FFA_MAX_SHM_PAGE_COUNT (2 * SZ_512K / FFA_PAGE_SIZE) + +/* + * Limits the number of shared buffers that guest can have at once. This + * is to prevent case, when guests trick XEN into exhausting its own + * memory by allocating many small buffers. This value has been chosen + * arbitrarily. + */ +#define FFA_MAX_SHM_COUNT 32 + +/* + * The time we wait until trying to tear down a domain again if it was + * blocked initially. + */ +#define FFA_CTX_TEARDOWN_DELAY SECONDS(1) + +/* FF-A-1.1-REL0 section 10.9.2 Memory region handle, page 167 */ +#define FFA_HANDLE_HYP_FLAG BIT(63, ULL) +#define FFA_HANDLE_INVALID 0xffffffffffffffffULL + +/* + * Memory attributes: Normal memory, Write-Back cacheable, Inner shareable + * Defined in FF-A-1.1-REL0 Table 10.18 at page 175. + */ +#define FFA_NORMAL_MEM_REG_ATTR 0x2fU +/* + * Memory access permissions: Read-write + * Defined in FF-A-1.1-REL0 Table 10.15 at page 168. 
+ */ +#define FFA_MEM_ACC_RW 0x2U + +/* FF-A-1.1-REL0 section 10.11.4 Flags usage, page 184-187 */ +/* Clear memory before mapping in receiver */ +#define FFA_MEMORY_REGION_FLAG_CLEAR BIT(0, U) +/* Relayer may time slice this operation */ +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE BIT(1, U) +/* Clear memory after receiver relinquishes it */ +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH BIT(2, U) +/* Share memory transaction */ +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (1U << 3) + +/* + * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP: + * BIT(31): Framework or partition message + * BIT(7-0): Message type for frameworks messages + */ +#define FFA_MSG_FLAG_FRAMEWORK BIT(31, U) +#define FFA_MSG_TYPE_MASK 0xFFU; +#define FFA_MSG_PSCI 0x0U +#define FFA_MSG_SEND_VM_CREATED 0x4U +#define FFA_MSG_RESP_VM_CREATED 0x5U +#define FFA_MSG_SEND_VM_DESTROYED 0x6U +#define FFA_MSG_RESP_VM_DESTROYED 0x7U + +/* + * Flags to determine partition properties in FFA_PARTITION_INFO_GET return + * message: + * BIT(0): Supports receipt of direct requests + * BIT(1): Can send direct requests + * BIT(2): Can send and receive indirect messages + * BIT(3): Supports receipt of notifications + * BIT(4-5): Partition ID is a PE endpoint ID + * BIT(6): Partition must be informed about each VM that is created by + * the Hypervisor + * BIT(7): Partition must be informed about each VM that is destroyed by + * the Hypervisor + * BIT(8): Partition runs in the AArch64 execution state else AArch32 + * execution state + */ +#define FFA_PART_PROP_DIRECT_REQ_RECV BIT(0, U) +#define FFA_PART_PROP_DIRECT_REQ_SEND BIT(1, U) +#define FFA_PART_PROP_INDIRECT_MSGS BIT(2, U) +#define FFA_PART_PROP_RECV_NOTIF BIT(3, U) +#define FFA_PART_PROP_IS_TYPE_MASK (3U << 4) +#define FFA_PART_PROP_IS_PE_ID (0U << 4) +#define FFA_PART_PROP_IS_SEPID_INDEP (1U << 4) +#define FFA_PART_PROP_IS_SEPID_DEP (2U << 4) +#define FFA_PART_PROP_IS_AUX_ID (3U << 4) +#define FFA_PART_PROP_NOTIF_CREATED BIT(6, U) +#define FFA_PART_PROP_NOTIF_DESTROYED BIT(7, U) +#define FFA_PART_PROP_AARCH64_STATE BIT(8, U) + +/* + * Flag used as parameter to FFA_PARTITION_INFO_GET to return partition + * count only. 
+ */ +#define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U) + +/* Function IDs */ +#define FFA_ERROR 0x84000060U +#define FFA_SUCCESS_32 0x84000061U +#define FFA_SUCCESS_64 0xC4000061U +#define FFA_INTERRUPT 0x84000062U +#define FFA_VERSION 0x84000063U +#define FFA_FEATURES 0x84000064U +#define FFA_RX_ACQUIRE 0x84000084U +#define FFA_RX_RELEASE 0x84000065U +#define FFA_RXTX_MAP_32 0x84000066U +#define FFA_RXTX_MAP_64 0xC4000066U +#define FFA_RXTX_UNMAP 0x84000067U +#define FFA_PARTITION_INFO_GET 0x84000068U +#define FFA_ID_GET 0x84000069U +#define FFA_SPM_ID_GET 0x84000085U +#define FFA_MSG_WAIT 0x8400006BU +#define FFA_MSG_YIELD 0x8400006CU +#define FFA_RUN 0x8400006DU +#define FFA_MSG_SEND2 0x84000086U +#define FFA_MSG_SEND_DIRECT_REQ_32 0x8400006FU +#define FFA_MSG_SEND_DIRECT_REQ_64 0xC400006FU +#define FFA_MSG_SEND_DIRECT_RESP_32 0x84000070U +#define FFA_MSG_SEND_DIRECT_RESP_64 0xC4000070U +#define FFA_MEM_DONATE_32 0x84000071U +#define FFA_MEM_DONATE_64 0xC4000071U +#define FFA_MEM_LEND_32 0x84000072U +#define FFA_MEM_LEND_64 0xC4000072U +#define FFA_MEM_SHARE_32 0x84000073U +#define FFA_MEM_SHARE_64 0xC4000073U +#define FFA_MEM_RETRIEVE_REQ_32 0x84000074U +#define FFA_MEM_RETRIEVE_REQ_64 0xC4000074U +#define FFA_MEM_RETRIEVE_RESP 0x84000075U +#define FFA_MEM_RELINQUISH 0x84000076U +#define FFA_MEM_RECLAIM 0x84000077U +#define FFA_MEM_FRAG_RX 0x8400007AU +#define FFA_MEM_FRAG_TX 0x8400007BU +#define FFA_MSG_SEND 0x8400006EU +#define FFA_MSG_POLL 0x8400006AU + +struct ffa_ctx { + void *rx; + const void *tx; + struct page_info *rx_pg; + struct page_info *tx_pg; + /* Number of 4kB pages in each of rx/rx_pg and tx/tx_pg */ + unsigned int page_count; + /* FF-A version used by the guest */ + uint32_t guest_vers; + bool rx_is_free; + /* Used shared memory objects, struct ffa_shm_mem */ + struct list_head shm_list; + /* Number of allocated shared memory object */ + unsigned int shm_count; + /* + * tx_lock is used to serialize access to tx + * rx_lock is used to serialize access to rx + * lock is used for the rest in this struct + */ + spinlock_t tx_lock; + spinlock_t rx_lock; + spinlock_t lock; + /* Used if domain can't be torn down immediately */ + struct domain *teardown_d; + struct list_head teardown_list; + s_time_t teardown_expire; + /* + * Used for ffa_domain_teardown() to keep track of which SPs should be + * notified that this guest is being destroyed. 
+     */
+    unsigned long vm_destroy_bitmap[];
+};
+
+static inline uint16_t ffa_get_vm_id(const struct domain *d)
+{
+    /* +1 since 0 is reserved for the hypervisor in FF-A */
+    return d->domain_id + 1;
+}
+
+static inline void ffa_set_regs(struct cpu_user_regs *regs, register_t v0,
+                                register_t v1, register_t v2, register_t v3,
+                                register_t v4, register_t v5, register_t v6,
+                                register_t v7)
+{
+    set_user_reg(regs, 0, v0);
+    set_user_reg(regs, 1, v1);
+    set_user_reg(regs, 2, v2);
+    set_user_reg(regs, 3, v3);
+    set_user_reg(regs, 4, v4);
+    set_user_reg(regs, 5, v5);
+    set_user_reg(regs, 6, v6);
+    set_user_reg(regs, 7, v7);
+}
+
+static inline void ffa_set_regs_error(struct cpu_user_regs *regs,
+                                      uint32_t error_code)
+{
+    ffa_set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
+}
+
+static inline void ffa_set_regs_success(struct cpu_user_regs *regs,
+                                        uint32_t w2, uint32_t w3)
+{
+    ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
+}
+
+static inline int32_t ffa_get_ret_code(const struct arm_smccc_1_2_regs *resp)
+{
+    switch ( resp->a0 )
+    {
+    case FFA_ERROR:
+        if ( resp->a2 )
+            return resp->a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+    case FFA_SUCCESS_64:
+        return FFA_RET_OK;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static inline int32_t ffa_simple_call(uint32_t fid, register_t a1,
+                                      register_t a2, register_t a3,
+                                      register_t a4)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = fid,
+        .a1 = a1,
+        .a2 = a2,
+        .a3 = a3,
+        .a4 = a4,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    return ffa_get_ret_code(&resp);
+}
+
+#endif /*__FFA_PRIVATE_H__*/

From patchwork Mon Mar 25 09:39:01 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13601707
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk,
 Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH 3/6] xen/arm: ffa: separate memory sharing routines
Date: Mon, 25 Mar 2024 10:39:01 +0100
Message-Id: <20240325093904.3466092-4-jens.wiklander@linaro.org>
In-Reply-To: <20240325093904.3466092-1-jens.wiklander@linaro.org>
References: <20240325093904.3466092-1-jens.wiklander@linaro.org>

Move the memory sharing routines into a separate file for easier
navigation in the source code. Add ffa_shm_domain_destroy() to isolate
the ffa_shm things in ffa_domain_teardown_continue().
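Condensed from the ffa_domain_teardown_continue() hunk further down (a
sketch, not the complete function): the shm module now reports through
its return value whether every shared-memory handle could be reclaimed,
and the caller schedules a retry when it could not:

    send_vm_destroyed(ctx->teardown_d);
    if ( !ffa_shm_domain_destroy(ctx->teardown_d) )
        retry = true;

    if ( retry ||
         !bitmap_empty(ctx->vm_destroy_bitmap, subscr_vm_destroyed_count) )
    {
        /* Keep the context on the teardown list and try again later. */
    }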
Signed-off-by: Jens Wiklander Reviewed-by: Bertrand Marquis INT32_MAX ) /* Impossible value */ - return FFA_RET_ABORTED; - return resp.a3 & INT32_MAX; - default: - return FFA_RET_NOT_SUPPORTED; - } -} - -static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi, - uint32_t flags) -{ - return ffa_simple_call(FFA_MEM_RECLAIM, handle_lo, handle_hi, flags, 0); -} - static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id, uint8_t msg) { @@ -660,506 +524,6 @@ out: resp.a7 & mask); } -/* - * Gets all page and assigns them to the supplied shared memory object. If - * this function fails then the caller is still expected to call - * put_shm_pages() as a cleanup. - */ -static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm, - const struct ffa_address_range *range, - uint32_t range_count, unsigned int start_page_idx, - unsigned int *last_page_idx) -{ - unsigned int pg_idx = start_page_idx; - gfn_t gfn; - unsigned int n; - unsigned int m; - p2m_type_t t; - uint64_t addr; - uint64_t page_count; - - for ( n = 0; n < range_count; n++ ) - { - page_count = read_atomic(&range[n].page_count); - addr = read_atomic(&range[n].address); - for ( m = 0; m < page_count; m++ ) - { - if ( pg_idx >= shm->page_count ) - return FFA_RET_INVALID_PARAMETERS; - - gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE); - shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t, - P2M_ALLOC); - if ( !shm->pages[pg_idx] ) - return FFA_RET_DENIED; - /* Only normal RW RAM for now */ - if ( t != p2m_ram_rw ) - return FFA_RET_DENIED; - pg_idx++; - } - } - - *last_page_idx = pg_idx; - - return FFA_RET_OK; -} - -static void put_shm_pages(struct ffa_shm_mem *shm) -{ - unsigned int n; - - for ( n = 0; n < shm->page_count && shm->pages[n]; n++ ) - { - put_page(shm->pages[n]); - shm->pages[n] = NULL; - } -} - -static bool inc_ctx_shm_count(struct domain *d, struct ffa_ctx *ctx) -{ - bool ret = true; - - spin_lock(&ctx->lock); - - if ( ctx->shm_count >= FFA_MAX_SHM_COUNT ) - { - ret = false; - } - else - { - /* - * If this is the first shm added, increase the domain reference - * counter as we need to keep domain around a bit longer to reclaim - * the shared memory in the teardown path. - */ - if ( !ctx->shm_count ) - get_knownalive_domain(d); - - ctx->shm_count++; - } - - spin_unlock(&ctx->lock); - - return ret; -} - -static void dec_ctx_shm_count(struct domain *d, struct ffa_ctx *ctx) -{ - bool drop_ref; - - spin_lock(&ctx->lock); - - ASSERT(ctx->shm_count > 0); - ctx->shm_count--; - - /* - * If this was the last shm removed, let go of the domain reference we - * took in inc_ctx_shm_count() above. 
- */ - drop_ref = !ctx->shm_count; - - spin_unlock(&ctx->lock); - - if ( drop_ref ) - put_domain(d); -} - -static struct ffa_shm_mem *alloc_ffa_shm_mem(struct domain *d, - unsigned int page_count) -{ - struct ffa_ctx *ctx = d->arch.tee; - struct ffa_shm_mem *shm; - - if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ) - return NULL; - if ( !inc_ctx_shm_count(d, ctx) ) - return NULL; - - shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count); - if ( shm ) - shm->page_count = page_count; - else - dec_ctx_shm_count(d, ctx); - - return shm; -} - -static void free_ffa_shm_mem(struct domain *d, struct ffa_shm_mem *shm) -{ - struct ffa_ctx *ctx = d->arch.tee; - - if ( !shm ) - return; - - dec_ctx_shm_count(d, ctx); - put_shm_pages(shm); - xfree(shm); -} - -static void init_range(struct ffa_address_range *addr_range, - paddr_t pa) -{ - memset(addr_range, 0, sizeof(*addr_range)); - addr_range->address = pa; - addr_range->page_count = 1; -} - -/* - * This function uses the ffa_tx buffer to transmit the memory transaction - * descriptor. The function depends ffa_tx_buffer_lock to be used to guard - * the buffer from concurrent use. - */ -static int share_shm(struct ffa_shm_mem *shm) -{ - const uint32_t max_frag_len = FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE; - struct ffa_mem_access *mem_access_array; - struct ffa_mem_transaction_1_1 *descr; - struct ffa_address_range *addr_range; - struct ffa_mem_region *region_descr; - const unsigned int region_count = 1; - void *buf = ffa_tx; - uint32_t frag_len; - uint32_t tot_len; - paddr_t last_pa; - unsigned int n; - paddr_t pa; - - ASSERT(spin_is_locked(&ffa_tx_buffer_lock)); - ASSERT(shm->page_count); - - descr = buf; - memset(descr, 0, sizeof(*descr)); - descr->sender_id = shm->sender_id; - descr->handle = shm->handle; - descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR; - descr->mem_access_count = 1; - descr->mem_access_size = sizeof(*mem_access_array); - descr->mem_access_offs = MEM_ACCESS_OFFSET(0); - - mem_access_array = buf + descr->mem_access_offs; - memset(mem_access_array, 0, sizeof(*mem_access_array)); - mem_access_array[0].access_perm.endpoint_id = shm->ep_id; - mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW; - mem_access_array[0].region_offs = REGION_OFFSET(descr->mem_access_count, 0); - - region_descr = buf + mem_access_array[0].region_offs; - memset(region_descr, 0, sizeof(*region_descr)); - region_descr->total_page_count = shm->page_count; - - region_descr->address_range_count = 1; - last_pa = page_to_maddr(shm->pages[0]); - for ( n = 1; n < shm->page_count; last_pa = pa, n++ ) - { - pa = page_to_maddr(shm->pages[n]); - if ( last_pa + FFA_PAGE_SIZE == pa ) - continue; - region_descr->address_range_count++; - } - - tot_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, - region_descr->address_range_count); - if ( tot_len > max_frag_len ) - return FFA_RET_NOT_SUPPORTED; - - addr_range = region_descr->address_range_array; - frag_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, 1); - last_pa = page_to_maddr(shm->pages[0]); - init_range(addr_range, last_pa); - for ( n = 1; n < shm->page_count; last_pa = pa, n++ ) - { - pa = page_to_maddr(shm->pages[n]); - if ( last_pa + FFA_PAGE_SIZE == pa ) - { - addr_range->page_count++; - continue; - } - - frag_len += sizeof(*addr_range); - addr_range++; - init_range(addr_range, pa); - } - - return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle); -} - -static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen, - struct ffa_mem_transaction_int *trans) -{ - 
uint16_t mem_reg_attr; - uint32_t flags; - uint32_t count; - uint32_t offs; - uint32_t size; - - if ( ffa_vers >= FFA_VERSION_1_1 ) - { - const struct ffa_mem_transaction_1_1 *descr; - - if ( blen < sizeof(*descr) ) - return FFA_RET_INVALID_PARAMETERS; - - descr = buf; - trans->sender_id = descr->sender_id; - mem_reg_attr = descr->mem_reg_attr; - flags = descr->flags; - trans->handle = descr->handle; - trans->tag = descr->tag; - - count = descr->mem_access_count; - size = descr->mem_access_size; - offs = descr->mem_access_offs; - } - else - { - const struct ffa_mem_transaction_1_0 *descr; - - if ( blen < sizeof(*descr) ) - return FFA_RET_INVALID_PARAMETERS; - - descr = buf; - trans->sender_id = descr->sender_id; - mem_reg_attr = descr->mem_reg_attr; - flags = descr->flags; - trans->handle = descr->handle; - trans->tag = descr->tag; - - count = descr->mem_access_count; - size = sizeof(struct ffa_mem_access); - offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array); - } - /* - * Make sure that "descr" which is shared with the guest isn't accessed - * again after this point. - */ - barrier(); - - /* - * We're doing a rough check to see that no information is lost when - * tranfering the values into a struct ffa_mem_transaction_int below. - * The fields in struct ffa_mem_transaction_int are wide enough to hold - * any valid value so being out of range means that something is wrong. - */ - if ( mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX || - count > UINT8_MAX || offs > UINT16_MAX ) - return FFA_RET_INVALID_PARAMETERS; - - /* Check that the endpoint memory access descriptor array fits */ - if ( size * count + offs > blen ) - return FFA_RET_INVALID_PARAMETERS; - - trans->mem_reg_attr = mem_reg_attr; - trans->flags = flags; - trans->mem_access_size = size; - trans->mem_access_count = count; - trans->mem_access_offs = offs; - - return 0; -} - -static void ffa_handle_mem_share(struct cpu_user_regs *regs) -{ - uint32_t tot_len = get_user_reg(regs, 1); - uint32_t frag_len = get_user_reg(regs, 2); - uint64_t addr = get_user_reg(regs, 3); - uint32_t page_count = get_user_reg(regs, 4); - const struct ffa_mem_region *region_descr; - const struct ffa_mem_access *mem_access; - struct ffa_mem_transaction_int trans; - struct domain *d = current->domain; - struct ffa_ctx *ctx = d->arch.tee; - struct ffa_shm_mem *shm = NULL; - unsigned int last_page_idx = 0; - register_t handle_hi = 0; - register_t handle_lo = 0; - int ret = FFA_RET_DENIED; - uint32_t range_count; - uint32_t region_offs; - - /* - * We're only accepting memory transaction descriptors via the rx/tx - * buffer. 
- */ - if ( addr ) - { - ret = FFA_RET_NOT_SUPPORTED; - goto out_set_ret; - } - - /* Check that fragment length doesn't exceed total length */ - if ( frag_len > tot_len ) - { - ret = FFA_RET_INVALID_PARAMETERS; - goto out_set_ret; - } - - /* We currently only support a single fragment */ - if ( frag_len != tot_len ) - { - ret = FFA_RET_NOT_SUPPORTED; - goto out_set_ret; - } - - if ( !spin_trylock(&ctx->tx_lock) ) - { - ret = FFA_RET_BUSY; - goto out_set_ret; - } - - if ( frag_len > ctx->page_count * FFA_PAGE_SIZE ) - goto out_unlock; - - ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans); - if ( ret ) - goto out_unlock; - - if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR ) - { - ret = FFA_RET_NOT_SUPPORTED; - goto out_unlock; - } - - /* Only supports sharing it with one SP for now */ - if ( trans.mem_access_count != 1 ) - { - ret = FFA_RET_NOT_SUPPORTED; - goto out_unlock; - } - - if ( trans.sender_id != ffa_get_vm_id(d) ) - { - ret = FFA_RET_INVALID_PARAMETERS; - goto out_unlock; - } - - /* Check that it fits in the supplied data */ - if ( trans.mem_access_offs + trans.mem_access_size > frag_len ) - goto out_unlock; - - mem_access = ctx->tx + trans.mem_access_offs; - if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW ) - { - ret = FFA_RET_NOT_SUPPORTED; - goto out_unlock; - } - - region_offs = read_atomic(&mem_access->region_offs); - if ( sizeof(*region_descr) + region_offs > frag_len ) - { - ret = FFA_RET_NOT_SUPPORTED; - goto out_unlock; - } - - region_descr = ctx->tx + region_offs; - range_count = read_atomic(®ion_descr->address_range_count); - page_count = read_atomic(®ion_descr->total_page_count); - - if ( !page_count ) - { - ret = FFA_RET_INVALID_PARAMETERS; - goto out_unlock; - } - - shm = alloc_ffa_shm_mem(d, page_count); - if ( !shm ) - { - ret = FFA_RET_NO_MEMORY; - goto out_unlock; - } - shm->sender_id = trans.sender_id; - shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id); - - /* - * Check that the Composite memory region descriptor fits. 
- */ - if ( sizeof(*region_descr) + region_offs + - range_count * sizeof(struct ffa_address_range) > frag_len ) - { - ret = FFA_RET_INVALID_PARAMETERS; - goto out; - } - - ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count, - 0, &last_page_idx); - if ( ret ) - goto out; - if ( last_page_idx != shm->page_count ) - { - ret = FFA_RET_INVALID_PARAMETERS; - goto out; - } - - /* Note that share_shm() uses our tx buffer */ - spin_lock(&ffa_tx_buffer_lock); - ret = share_shm(shm); - spin_unlock(&ffa_tx_buffer_lock); - if ( ret ) - goto out; - - spin_lock(&ctx->lock); - list_add_tail(&shm->list, &ctx->shm_list); - spin_unlock(&ctx->lock); - - uint64_to_regpair(&handle_hi, &handle_lo, shm->handle); - -out: - if ( ret ) - free_ffa_shm_mem(d, shm); -out_unlock: - spin_unlock(&ctx->tx_lock); - -out_set_ret: - if ( ret == 0) - ffa_set_regs_success(regs, handle_lo, handle_hi); - else - ffa_set_regs_error(regs, ret); -} - -/* Must only be called with ctx->lock held */ -static struct ffa_shm_mem *find_shm_mem(struct ffa_ctx *ctx, uint64_t handle) -{ - struct ffa_shm_mem *shm; - - list_for_each_entry(shm, &ctx->shm_list, list) - if ( shm->handle == handle ) - return shm; - - return NULL; -} - -static int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags) -{ - struct domain *d = current->domain; - struct ffa_ctx *ctx = d->arch.tee; - struct ffa_shm_mem *shm; - register_t handle_hi; - register_t handle_lo; - int ret; - - spin_lock(&ctx->lock); - shm = find_shm_mem(ctx, handle); - if ( shm ) - list_del(&shm->list); - spin_unlock(&ctx->lock); - if ( !shm ) - return FFA_RET_INVALID_PARAMETERS; - - uint64_to_regpair(&handle_hi, &handle_lo, handle); - ret = ffa_mem_reclaim(handle_lo, handle_hi, flags); - - if ( ret ) - { - spin_lock(&ctx->lock); - list_add_tail(&shm->list, &ctx->shm_list); - spin_unlock(&ctx->lock); - } - else - { - free_ffa_shm_mem(d, shm); - } - - return ret; -} - static bool ffa_handle_call(struct cpu_user_regs *regs) { uint32_t fid = get_user_reg(regs, 0); @@ -1284,8 +648,8 @@ static int ffa_domain_init(struct domain *d) if ( !ffa_version ) return -ENODEV; /* - * We can't use that last possible domain ID or get_vm_id() would cause - * an overflow. + * We can't use that last possible domain ID or ffa_get_vm_id() would + * cause an overflow. */ if ( d->domain_id >= UINT16_MAX) return -ERANGE; @@ -1348,68 +712,16 @@ static void send_vm_destroyed(struct domain *d) } } -static void ffa_reclaim_shms(struct domain *d) -{ - struct ffa_ctx *ctx = d->arch.tee; - struct ffa_shm_mem *shm, *tmp; - int32_t res; - - list_for_each_entry_safe(shm, tmp, &ctx->shm_list, list) - { - register_t handle_hi; - register_t handle_lo; - - uint64_to_regpair(&handle_hi, &handle_lo, shm->handle); - res = ffa_mem_reclaim(handle_lo, handle_hi, 0); - switch ( res ) { - case FFA_RET_OK: - printk(XENLOG_G_DEBUG "%pd: ffa: Reclaimed handle %#lx\n", - d, shm->handle); - list_del(&shm->list); - free_ffa_shm_mem(d, shm); - break; - case FFA_RET_DENIED: - /* - * A temporary error that may get resolved a bit later, it's - * worth retrying. - */ - printk(XENLOG_G_INFO "%pd: ffa: Failed to reclaim handle %#lx : %d\n", - d, shm->handle, res); - break; /* We will retry later */ - default: - /* - * The rest of the error codes are not expected and are assumed - * to be of a permanent nature. It not in our control to handle - * the error properly so the object in this case is to try to - * minimize the damage. 
- * - * FFA_RET_NO_MEMORY might be a temporary error as it it could - * succeed if retried later, but treat it as permanent for now. - */ - printk(XENLOG_G_INFO "%pd: ffa: Permanent failure to reclaim handle %#lx : %d\n", - d, shm->handle, res); - - /* - * Remove the shm from the list and free it, but don't drop - * references. This results in having the shared physical pages - * permanently allocate and also keeps the domain as a zombie - * domain. - */ - list_del(&shm->list); - xfree(shm); - break; - } - } -} - static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time) { struct ffa_ctx *next_ctx = NULL; + bool retry = false; send_vm_destroyed(ctx->teardown_d); - ffa_reclaim_shms(ctx->teardown_d); + if ( !ffa_shm_domain_destroy(ctx->teardown_d) ) + retry = true; - if ( ctx->shm_count || + if ( retry || !bitmap_empty(ctx->vm_destroy_bitmap, subscr_vm_destroyed_count) ) { printk(XENLOG_G_INFO "%pd: ffa: Remaining cleanup, retrying\n", ctx->teardown_d); diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h index 8352b6b55a9a..f3e2f42e573e 100644 --- a/xen/arch/arm/tee/ffa_private.h +++ b/xen/arch/arm/tee/ffa_private.h @@ -247,6 +247,16 @@ struct ffa_ctx { unsigned long vm_destroy_bitmap[]; }; +extern void *ffa_rx; +extern void *ffa_tx; +extern spinlock_t ffa_rx_buffer_lock; +extern spinlock_t ffa_tx_buffer_lock; + +bool ffa_shm_domain_destroy(struct domain *d); +void ffa_handle_mem_share(struct cpu_user_regs *regs); +int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags); + + static inline uint16_t ffa_get_vm_id(const struct domain *d) { /* +1 since 0 is reserved for the hypervisor in FF-A */ diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c new file mode 100644 index 000000000000..13dc44683b45 --- /dev/null +++ b/xen/arch/arm/tee/ffa_shm.c @@ -0,0 +1,708 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2023 Linaro Limited + */ + +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "ffa_private.h" + +/* Constituent memory region descriptor */ +struct ffa_address_range { + uint64_t address; + uint32_t page_count; + uint32_t reserved; +}; + +/* Composite memory region descriptor */ +struct ffa_mem_region { + uint32_t total_page_count; + uint32_t address_range_count; + uint64_t reserved; + struct ffa_address_range address_range_array[]; +}; + +/* Memory access permissions descriptor */ +struct ffa_mem_access_perm { + uint16_t endpoint_id; + uint8_t perm; + uint8_t flags; +}; + +/* Endpoint memory access descriptor */ +struct ffa_mem_access { + struct ffa_mem_access_perm access_perm; + uint32_t region_offs; + uint64_t reserved; +}; + +/* Lend, donate or share memory transaction descriptor */ +struct ffa_mem_transaction_1_0 { + uint16_t sender_id; + uint8_t mem_reg_attr; + uint8_t reserved0; + uint32_t flags; + uint64_t handle; + uint64_t tag; + uint32_t reserved1; + uint32_t mem_access_count; + struct ffa_mem_access mem_access_array[]; +}; + +struct ffa_mem_transaction_1_1 { + uint16_t sender_id; + uint16_t mem_reg_attr; + uint32_t flags; + uint64_t handle; + uint64_t tag; + uint32_t mem_access_size; + uint32_t mem_access_count; + uint32_t mem_access_offs; + uint8_t reserved[12]; +}; + +/* Calculate offset of struct ffa_mem_access from start of buffer */ +#define MEM_ACCESS_OFFSET(access_idx) \ + ( sizeof(struct ffa_mem_transaction_1_1) + \ + ( access_idx ) * sizeof(struct ffa_mem_access) ) + +/* Calculate offset of struct ffa_mem_region from start of buffer */ 
+#define REGION_OFFSET(access_count, region_idx) \ + ( MEM_ACCESS_OFFSET(access_count) + \ + ( region_idx ) * sizeof(struct ffa_mem_region) ) + +/* Calculate offset of struct ffa_address_range from start of buffer */ +#define ADDR_RANGE_OFFSET(access_count, region_count, range_idx) \ + ( REGION_OFFSET(access_count, region_count) + \ + ( range_idx ) * sizeof(struct ffa_address_range) ) + +/* + * The parts needed from struct ffa_mem_transaction_1_0 or struct + * ffa_mem_transaction_1_1, used to provide an abstraction of the + * differences in data structures between version 1.0 and 1.1. This is + * just an internal interface and can be changed without changing any ABI. + */ +struct ffa_mem_transaction_int { + uint16_t sender_id; + uint8_t mem_reg_attr; + uint8_t flags; + uint8_t mem_access_size; + uint8_t mem_access_count; + uint16_t mem_access_offs; + uint64_t handle; + uint64_t tag; +}; + +struct ffa_shm_mem { + struct list_head list; + uint16_t sender_id; + uint16_t ep_id; /* endpoint, the one lending */ + uint64_t handle; /* FFA_HANDLE_INVALID if not set yet */ + unsigned int page_count; + struct page_info *pages[]; +}; + +static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len, + register_t addr, uint32_t pg_count, + uint64_t *handle) +{ + struct arm_smccc_1_2_regs arg = { + .a0 = FFA_MEM_SHARE_64, + .a1 = tot_len, + .a2 = frag_len, + .a3 = addr, + .a4 = pg_count, + }; + struct arm_smccc_1_2_regs resp; + + arm_smccc_1_2_smc(&arg, &resp); + + switch ( resp.a0 ) + { + case FFA_ERROR: + if ( resp.a2 ) + return resp.a2; + else + return FFA_RET_NOT_SUPPORTED; + case FFA_SUCCESS_32: + *handle = regpair_to_uint64(resp.a3, resp.a2); + return FFA_RET_OK; + case FFA_MEM_FRAG_RX: + *handle = regpair_to_uint64(resp.a2, resp.a1); + if ( resp.a3 > INT32_MAX ) /* Impossible value */ + return FFA_RET_ABORTED; + return resp.a3 & INT32_MAX; + default: + return FFA_RET_NOT_SUPPORTED; + } +} + +static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi, + uint32_t flags) +{ + return ffa_simple_call(FFA_MEM_RECLAIM, handle_lo, handle_hi, flags, 0); +} + +/* + * Gets all pages and assigns them to the supplied shared memory object. If + * this function fails then the caller is still expected to call + * put_shm_pages() as a cleanup.
+ */ +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm, + const struct ffa_address_range *range, + uint32_t range_count, unsigned int start_page_idx, + unsigned int *last_page_idx) +{ + unsigned int pg_idx = start_page_idx; + gfn_t gfn; + unsigned int n; + unsigned int m; + p2m_type_t t; + uint64_t addr; + uint64_t page_count; + + for ( n = 0; n < range_count; n++ ) + { + page_count = read_atomic(&range[n].page_count); + addr = read_atomic(&range[n].address); + for ( m = 0; m < page_count; m++ ) + { + if ( pg_idx >= shm->page_count ) + return FFA_RET_INVALID_PARAMETERS; + + gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE); + shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t, + P2M_ALLOC); + if ( !shm->pages[pg_idx] ) + return FFA_RET_DENIED; + /* Only normal RW RAM for now */ + if ( t != p2m_ram_rw ) + return FFA_RET_DENIED; + pg_idx++; + } + } + + *last_page_idx = pg_idx; + + return FFA_RET_OK; +} + +static void put_shm_pages(struct ffa_shm_mem *shm) +{ + unsigned int n; + + for ( n = 0; n < shm->page_count && shm->pages[n]; n++ ) + { + put_page(shm->pages[n]); + shm->pages[n] = NULL; + } +} + +static bool inc_ctx_shm_count(struct domain *d, struct ffa_ctx *ctx) +{ + bool ret = true; + + spin_lock(&ctx->lock); + + if ( ctx->shm_count >= FFA_MAX_SHM_COUNT ) + { + ret = false; + } + else + { + /* + * If this is the first shm added, increase the domain reference + * counter as we need to keep the domain around a bit longer to + * reclaim the shared memory in the teardown path. + */ + if ( !ctx->shm_count ) + get_knownalive_domain(d); + + ctx->shm_count++; + } + + spin_unlock(&ctx->lock); + + return ret; +} + +static void dec_ctx_shm_count(struct domain *d, struct ffa_ctx *ctx) +{ + bool drop_ref; + + spin_lock(&ctx->lock); + + ASSERT(ctx->shm_count > 0); + ctx->shm_count--; + + /* + * If this was the last shm removed, let go of the domain reference we + * took in inc_ctx_shm_count() above. + */ + drop_ref = !ctx->shm_count; + + spin_unlock(&ctx->lock); + + if ( drop_ref ) + put_domain(d); +} + +static struct ffa_shm_mem *alloc_ffa_shm_mem(struct domain *d, + unsigned int page_count) +{ + struct ffa_ctx *ctx = d->arch.tee; + struct ffa_shm_mem *shm; + + if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ) + return NULL; + if ( !inc_ctx_shm_count(d, ctx) ) + return NULL; + + shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count); + if ( shm ) + shm->page_count = page_count; + else + dec_ctx_shm_count(d, ctx); + + return shm; +} + +static void free_ffa_shm_mem(struct domain *d, struct ffa_shm_mem *shm) +{ + struct ffa_ctx *ctx = d->arch.tee; + + if ( !shm ) + return; + + dec_ctx_shm_count(d, ctx); + put_shm_pages(shm); + xfree(shm); +} + +static void init_range(struct ffa_address_range *addr_range, + paddr_t pa) +{ + memset(addr_range, 0, sizeof(*addr_range)); + addr_range->address = pa; + addr_range->page_count = 1; +} + +/* + * This function uses the ffa_tx buffer to transmit the memory transaction + * descriptor. The function depends on ffa_tx_buffer_lock being held to + * guard the buffer from concurrent use.
+ */ +static int share_shm(struct ffa_shm_mem *shm) +{ + const uint32_t max_frag_len = FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE; + struct ffa_mem_access *mem_access_array; + struct ffa_mem_transaction_1_1 *descr; + struct ffa_address_range *addr_range; + struct ffa_mem_region *region_descr; + const unsigned int region_count = 1; + void *buf = ffa_tx; + uint32_t frag_len; + uint32_t tot_len; + paddr_t last_pa; + unsigned int n; + paddr_t pa; + + ASSERT(spin_is_locked(&ffa_tx_buffer_lock)); + ASSERT(shm->page_count); + + descr = buf; + memset(descr, 0, sizeof(*descr)); + descr->sender_id = shm->sender_id; + descr->handle = shm->handle; + descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR; + descr->mem_access_count = 1; + descr->mem_access_size = sizeof(*mem_access_array); + descr->mem_access_offs = MEM_ACCESS_OFFSET(0); + + mem_access_array = buf + descr->mem_access_offs; + memset(mem_access_array, 0, sizeof(*mem_access_array)); + mem_access_array[0].access_perm.endpoint_id = shm->ep_id; + mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW; + mem_access_array[0].region_offs = REGION_OFFSET(descr->mem_access_count, 0); + + region_descr = buf + mem_access_array[0].region_offs; + memset(region_descr, 0, sizeof(*region_descr)); + region_descr->total_page_count = shm->page_count; + + region_descr->address_range_count = 1; + last_pa = page_to_maddr(shm->pages[0]); + for ( n = 1; n < shm->page_count; last_pa = pa, n++ ) + { + pa = page_to_maddr(shm->pages[n]); + if ( last_pa + FFA_PAGE_SIZE == pa ) + continue; + region_descr->address_range_count++; + } + + tot_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, + region_descr->address_range_count); + if ( tot_len > max_frag_len ) + return FFA_RET_NOT_SUPPORTED; + + addr_range = region_descr->address_range_array; + frag_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, 1); + last_pa = page_to_maddr(shm->pages[0]); + init_range(addr_range, last_pa); + for ( n = 1; n < shm->page_count; last_pa = pa, n++ ) + { + pa = page_to_maddr(shm->pages[n]); + if ( last_pa + FFA_PAGE_SIZE == pa ) + { + addr_range->page_count++; + continue; + } + + frag_len += sizeof(*addr_range); + addr_range++; + init_range(addr_range, pa); + } + + return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle); +} + +static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen, + struct ffa_mem_transaction_int *trans) +{ + uint16_t mem_reg_attr; + uint32_t flags; + uint32_t count; + uint32_t offs; + uint32_t size; + + if ( ffa_vers >= FFA_VERSION_1_1 ) + { + const struct ffa_mem_transaction_1_1 *descr; + + if ( blen < sizeof(*descr) ) + return FFA_RET_INVALID_PARAMETERS; + + descr = buf; + trans->sender_id = descr->sender_id; + mem_reg_attr = descr->mem_reg_attr; + flags = descr->flags; + trans->handle = descr->handle; + trans->tag = descr->tag; + + count = descr->mem_access_count; + size = descr->mem_access_size; + offs = descr->mem_access_offs; + } + else + { + const struct ffa_mem_transaction_1_0 *descr; + + if ( blen < sizeof(*descr) ) + return FFA_RET_INVALID_PARAMETERS; + + descr = buf; + trans->sender_id = descr->sender_id; + mem_reg_attr = descr->mem_reg_attr; + flags = descr->flags; + trans->handle = descr->handle; + trans->tag = descr->tag; + + count = descr->mem_access_count; + size = sizeof(struct ffa_mem_access); + offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array); + } + /* + * Make sure that "descr" which is shared with the guest isn't accessed + * again after this point. 
+ */ + barrier(); + + /* + * We're doing a rough check to see that no information is lost when + * transferring the values into a struct ffa_mem_transaction_int below. + * The fields in struct ffa_mem_transaction_int are wide enough to hold + * any valid value so being out of range means that something is wrong. + */ + if ( mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX || + count > UINT8_MAX || offs > UINT16_MAX ) + return FFA_RET_INVALID_PARAMETERS; + + /* Check that the endpoint memory access descriptor array fits */ + if ( size * count + offs > blen ) + return FFA_RET_INVALID_PARAMETERS; + + trans->mem_reg_attr = mem_reg_attr; + trans->flags = flags; + trans->mem_access_size = size; + trans->mem_access_count = count; + trans->mem_access_offs = offs; + + return 0; +} + +void ffa_handle_mem_share(struct cpu_user_regs *regs) +{ + uint32_t tot_len = get_user_reg(regs, 1); + uint32_t frag_len = get_user_reg(regs, 2); + uint64_t addr = get_user_reg(regs, 3); + uint32_t page_count = get_user_reg(regs, 4); + const struct ffa_mem_region *region_descr; + const struct ffa_mem_access *mem_access; + struct ffa_mem_transaction_int trans; + struct domain *d = current->domain; + struct ffa_ctx *ctx = d->arch.tee; + struct ffa_shm_mem *shm = NULL; + unsigned int last_page_idx = 0; + register_t handle_hi = 0; + register_t handle_lo = 0; + int ret = FFA_RET_DENIED; + uint32_t range_count; + uint32_t region_offs; + + /* + * We're only accepting memory transaction descriptors via the rx/tx + * buffer. + */ + if ( addr ) + { + ret = FFA_RET_NOT_SUPPORTED; + goto out_set_ret; + } + + /* Check that fragment length doesn't exceed total length */ + if ( frag_len > tot_len ) + { + ret = FFA_RET_INVALID_PARAMETERS; + goto out_set_ret; + } + + /* We currently only support a single fragment */ + if ( frag_len != tot_len ) + { + ret = FFA_RET_NOT_SUPPORTED; + goto out_set_ret; + } + + if ( !spin_trylock(&ctx->tx_lock) ) + { + ret = FFA_RET_BUSY; + goto out_set_ret; + } + + if ( frag_len > ctx->page_count * FFA_PAGE_SIZE ) + goto out_unlock; + + ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans); + if ( ret ) + goto out_unlock; + + if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR ) + { + ret = FFA_RET_NOT_SUPPORTED; + goto out_unlock; + } + + /* Only supports sharing it with one SP for now */ + if ( trans.mem_access_count != 1 ) + { + ret = FFA_RET_NOT_SUPPORTED; + goto out_unlock; + } + + if ( trans.sender_id != ffa_get_vm_id(d) ) + { + ret = FFA_RET_INVALID_PARAMETERS; + goto out_unlock; + } + + /* Check that it fits in the supplied data */ + if ( trans.mem_access_offs + trans.mem_access_size > frag_len ) + goto out_unlock; + + mem_access = ctx->tx + trans.mem_access_offs; + if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW ) + { + ret = FFA_RET_NOT_SUPPORTED; + goto out_unlock; + } + + region_offs = read_atomic(&mem_access->region_offs); + if ( sizeof(*region_descr) + region_offs > frag_len ) + { + ret = FFA_RET_NOT_SUPPORTED; + goto out_unlock; + } + + region_descr = ctx->tx + region_offs; + range_count = read_atomic(&region_descr->address_range_count); + page_count = read_atomic(&region_descr->total_page_count); + + if ( !page_count ) + { + ret = FFA_RET_INVALID_PARAMETERS; + goto out_unlock; + } + + shm = alloc_ffa_shm_mem(d, page_count); + if ( !shm ) + { + ret = FFA_RET_NO_MEMORY; + goto out_unlock; + } + shm->sender_id = trans.sender_id; + shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id); + + /* + * Check that the Composite memory region
descriptor fits. + */ + if ( sizeof(*region_descr) + region_offs + + range_count * sizeof(struct ffa_address_range) > frag_len ) + { + ret = FFA_RET_INVALID_PARAMETERS; + goto out; + } + + ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count, + 0, &last_page_idx); + if ( ret ) + goto out; + if ( last_page_idx != shm->page_count ) + { + ret = FFA_RET_INVALID_PARAMETERS; + goto out; + } + + /* Note that share_shm() uses our tx buffer */ + spin_lock(&ffa_tx_buffer_lock); + ret = share_shm(shm); + spin_unlock(&ffa_tx_buffer_lock); + if ( ret ) + goto out; + + spin_lock(&ctx->lock); + list_add_tail(&shm->list, &ctx->shm_list); + spin_unlock(&ctx->lock); + + uint64_to_regpair(&handle_hi, &handle_lo, shm->handle); + +out: + if ( ret ) + free_ffa_shm_mem(d, shm); +out_unlock: + spin_unlock(&ctx->tx_lock); + +out_set_ret: + if ( ret == 0 ) + ffa_set_regs_success(regs, handle_lo, handle_hi); + else + ffa_set_regs_error(regs, ret); +} + +/* Must only be called with ctx->lock held */ +static struct ffa_shm_mem *find_shm_mem(struct ffa_ctx *ctx, uint64_t handle) +{ + struct ffa_shm_mem *shm; + + list_for_each_entry(shm, &ctx->shm_list, list) + if ( shm->handle == handle ) + return shm; + + return NULL; +} + +int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags) +{ + struct domain *d = current->domain; + struct ffa_ctx *ctx = d->arch.tee; + struct ffa_shm_mem *shm; + register_t handle_hi; + register_t handle_lo; + int ret; + + spin_lock(&ctx->lock); + shm = find_shm_mem(ctx, handle); + if ( shm ) + list_del(&shm->list); + spin_unlock(&ctx->lock); + if ( !shm ) + return FFA_RET_INVALID_PARAMETERS; + + uint64_to_regpair(&handle_hi, &handle_lo, handle); + ret = ffa_mem_reclaim(handle_lo, handle_hi, flags); + + if ( ret ) + { + spin_lock(&ctx->lock); + list_add_tail(&shm->list, &ctx->shm_list); + spin_unlock(&ctx->lock); + } + else + { + free_ffa_shm_mem(d, shm); + } + + return ret; +} + +bool ffa_shm_domain_destroy(struct domain *d) +{ + struct ffa_ctx *ctx = d->arch.tee; + struct ffa_shm_mem *shm, *tmp; + int32_t res; + + list_for_each_entry_safe(shm, tmp, &ctx->shm_list, list) + { + register_t handle_hi; + register_t handle_lo; + + uint64_to_regpair(&handle_hi, &handle_lo, shm->handle); + res = ffa_mem_reclaim(handle_lo, handle_hi, 0); + switch ( res ) { + case FFA_RET_OK: + printk(XENLOG_G_DEBUG "%pd: ffa: Reclaimed handle %#lx\n", + d, shm->handle); + list_del(&shm->list); + free_ffa_shm_mem(d, shm); + break; + case FFA_RET_DENIED: + /* + * A temporary error that may get resolved a bit later, it's + * worth retrying. + */ + printk(XENLOG_G_INFO "%pd: ffa: Failed to reclaim handle %#lx : %d\n", + d, shm->handle, res); + break; /* We will retry later */ + default: + /* + * The rest of the error codes are not expected and are assumed + * to be of a permanent nature. It is not in our control to + * handle the error properly, so the objective in this case is + * to try to minimize the damage. + * + * FFA_RET_NO_MEMORY might be a temporary error as it could + * succeed if retried later, but treat it as permanent for now. + */ + printk(XENLOG_G_INFO "%pd: ffa: Permanent failure to reclaim handle %#lx : %d\n", + d, shm->handle, res); + + /* + * Remove the shm from the list and free it, but don't drop + * references. This results in having the shared physical pages + * permanently allocated and also keeps the domain as a zombie + * domain.
+ */ + list_del(&shm->list); + xfree(shm); + break; + } + } + + return !ctx->shm_count; +} From patchwork Mon Mar 25 09:39:02 2024 X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13601705
From: Jens Wiklander To: xen-devel@lists.xenproject.org Cc: patches@linaro.org, Jens Wiklander , Volodymyr Babchuk , Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel Subject: [XEN PATCH 4/6] xen/arm: ffa: separate partition info get routines Date: Mon, 25 Mar 2024 10:39:02 +0100 Message-Id: <20240325093904.3466092-5-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240325093904.3466092-1-jens.wiklander@linaro.org> References: <20240325093904.3466092-1-jens.wiklander@linaro.org> MIME-Version: 1.0 Move partition info get routines into a separate file for easier navigation in the source code. Add ffa_partinfo_init(), ffa_partinfo_domain_init(), and ffa_partinfo_domain_destroy() to handle the ffa_partinfo internal state on initialization and teardown.
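For reference, here is a minimal sketch (not part of the patch; the *_sketch wrapper names are hypothetical) of how ffa.c is expected to drive the three new hooks, matching the call sites in ffa_probe(), ffa_domain_init() and ffa_domain_teardown_continue() in the diff below:

/* Sketch only: condensed view of the ffa_partinfo lifecycle. */
static bool probe_sketch(void)
{
    /* Query the SPMC for SPs and build the VM_CREATED/VM_DESTROYED
     * subscriber lists once, at mediator probe time. */
    return ffa_partinfo_init();
}

static int domain_init_sketch(struct domain *d)
{
    /* Signal FFA_MSG_SEND_VM_CREATED to subscribed SPs; false means at
     * least one SP could not be notified, failing the domain init. */
    if ( !ffa_partinfo_domain_init(d) )
        return -EIO;

    return 0;
}

static bool teardown_sketch(struct ffa_ctx *ctx)
{
    /* Signal FFA_MSG_SEND_VM_DESTROYED; false means some SPs have not
     * been reached yet, so the teardown timer must retry later. */
    return ffa_partinfo_domain_destroy(ctx->teardown_d);
}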
Signed-off-by: Jens Wiklander Reviewed-by: Bertrand Marquis --- xen/arch/arm/tee/Makefile | 1 + xen/arch/arm/tee/ffa.c | 359 +----------------------------- xen/arch/arm/tee/ffa_partinfo.c | 373 ++++++++++++++++++++++++++++++++ xen/arch/arm/tee/ffa_private.h | 14 +- 4 files changed, 398 insertions(+), 349 deletions(-) create mode 100644 xen/arch/arm/tee/ffa_partinfo.c diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile index 0e683d23aa9d..be644fba8055 100644 --- a/xen/arch/arm/tee/Makefile +++ b/xen/arch/arm/tee/Makefile @@ -1,4 +1,5 @@ obj-$(CONFIG_FFA) += ffa.o obj-$(CONFIG_FFA) += ffa_shm.o +obj-$(CONFIG_FFA) += ffa_partinfo.o obj-y += tee.o obj-$(CONFIG_OPTEE) += optee.o diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c index db36292dc52f..7a2803881420 100644 --- a/xen/arch/arm/tee/ffa.c +++ b/xen/arch/arm/tee/ffa.c @@ -70,20 +70,6 @@ * structs ending with _1_1 are defined in FF-A-1.1-REL0. */ -/* Partition information descriptor */ -struct ffa_partition_info_1_0 { - uint16_t id; - uint16_t execution_context; - uint32_t partition_properties; -}; - -struct ffa_partition_info_1_1 { - uint16_t id; - uint16_t execution_context; - uint32_t partition_properties; - uint8_t uuid[16]; -}; - /* Endpoint RX/TX descriptor */ struct ffa_endpoint_rxtx_descriptor_1_0 { uint16_t sender_id; @@ -102,11 +88,6 @@ struct ffa_endpoint_rxtx_descriptor_1_1 { /* Negotiated FF-A version to use with the SPMC */ static uint32_t __ro_after_init ffa_version; -/* SPs subscribing to VM_CREATE and VM_DESTROYED events */ -static uint16_t *subscr_vm_created __read_mostly; -static uint16_t subscr_vm_created_count __read_mostly; -static uint16_t *subscr_vm_destroyed __read_mostly; -static uint16_t subscr_vm_destroyed_count __read_mostly; /* * Our rx/tx buffers shared with the SPMC. FFA_RXTX_PAGE_COUNT is the @@ -170,90 +151,6 @@ static int32_t ffa_rxtx_map(paddr_t tx_addr, paddr_t rx_addr, return ffa_simple_call(FFA_RXTX_MAP_64, tx_addr, rx_addr, page_count, 0); } -static int32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, - uint32_t w4, uint32_t w5, - uint32_t *count, uint32_t *fpi_size) -{ - const struct arm_smccc_1_2_regs arg = { - .a0 = FFA_PARTITION_INFO_GET, - .a1 = w1, - .a2 = w2, - .a3 = w3, - .a4 = w4, - .a5 = w5, - }; - struct arm_smccc_1_2_regs resp; - uint32_t ret; - - arm_smccc_1_2_smc(&arg, &resp); - - ret = ffa_get_ret_code(&resp); - if ( !ret ) - { - *count = resp.a2; - *fpi_size = resp.a3; - } - - return ret; -} - -static int32_t ffa_rx_release(void) -{ - return ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0); -} - -static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id, - uint8_t msg) -{ - uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK; - unsigned int retry_count = 0; - int32_t res; - - if ( msg == FFA_MSG_SEND_VM_CREATED ) - exp_resp |= FFA_MSG_RESP_VM_CREATED; - else if ( msg == FFA_MSG_SEND_VM_DESTROYED ) - exp_resp |= FFA_MSG_RESP_VM_DESTROYED; - else - return FFA_RET_INVALID_PARAMETERS; - - do { - const struct arm_smccc_1_2_regs arg = { - .a0 = FFA_MSG_SEND_DIRECT_REQ_32, - .a1 = sp_id, - .a2 = FFA_MSG_FLAG_FRAMEWORK | msg, - .a5 = vm_id, - }; - struct arm_smccc_1_2_regs resp; - - arm_smccc_1_2_smc(&arg, &resp); - if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp ) - { - /* - * This is an invalid response, likely due to some error in the - * implementation of the ABI. 
- */ - return FFA_RET_INVALID_PARAMETERS; - } - res = resp.a3; - if ( ++retry_count > 10 ) - { - /* - * TODO - * FFA_RET_INTERRUPTED means that the SPMC has a pending - * non-secure interrupt, we need a way of delivering that - * non-secure interrupt. - * FFA_RET_RETRY is the SP telling us that it's temporarily - * blocked from handling the direct request, we need a generic - * way to deal with this. - * For now in both cases, give up after a few retries. - */ - return res; - } - } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY ); - - return res; -} - static void handle_version(struct cpu_user_regs *regs) { struct domain *d = current->domain; @@ -371,88 +268,6 @@ static uint32_t ffa_handle_rxtx_unmap(void) return FFA_RET_OK; } -static int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, - uint32_t w3, uint32_t w4, - uint32_t w5, uint32_t *count, - uint32_t *fpi_size) -{ - int32_t ret = FFA_RET_DENIED; - struct domain *d = current->domain; - struct ffa_ctx *ctx = d->arch.tee; - - /* - * FF-A v1.0 has w5 MBZ while v1.1 allows - * FFA_PARTITION_INFO_GET_COUNT_FLAG to be non-zero. - * - * FFA_PARTITION_INFO_GET_COUNT is only using registers and not the - * rxtx buffer so do the partition_info_get directly. - */ - if ( w5 == FFA_PARTITION_INFO_GET_COUNT_FLAG && - ctx->guest_vers == FFA_VERSION_1_1 ) - return ffa_partition_info_get(w1, w2, w3, w4, w5, count, fpi_size); - if ( w5 ) - return FFA_RET_INVALID_PARAMETERS; - - if ( !ffa_rx ) - return FFA_RET_DENIED; - - if ( !spin_trylock(&ctx->rx_lock) ) - return FFA_RET_BUSY; - - if ( !ctx->page_count || !ctx->rx_is_free ) - goto out; - spin_lock(&ffa_rx_buffer_lock); - ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count, fpi_size); - if ( ret ) - goto out_rx_buf_unlock; - /* - * ffa_partition_info_get() succeeded so we now own the RX buffer we - * share with the SPMC. We must give it back using ffa_rx_release() - * once we've copied the content. - */ - - if ( ctx->guest_vers == FFA_VERSION_1_0 ) - { - size_t n; - struct ffa_partition_info_1_1 *src = ffa_rx; - struct ffa_partition_info_1_0 *dst = ctx->rx; - - if ( ctx->page_count * FFA_PAGE_SIZE < *count * sizeof(*dst) ) - { - ret = FFA_RET_NO_MEMORY; - goto out_rx_release; - } - - for ( n = 0; n < *count; n++ ) - { - dst[n].id = src[n].id; - dst[n].execution_context = src[n].execution_context; - dst[n].partition_properties = src[n].partition_properties; - } - } - else - { - size_t sz = *count * *fpi_size; - - if ( ctx->page_count * FFA_PAGE_SIZE < sz ) - { - ret = FFA_RET_NO_MEMORY; - goto out_rx_release; - } - - memcpy(ctx->rx, ffa_rx, sz); - } - ctx->rx_is_free = false; -out_rx_release: - ffa_rx_release(); -out_rx_buf_unlock: - spin_unlock(&ffa_rx_buffer_lock); -out: - spin_unlock(&ctx->rx_lock); - - return ret; -} - static int32_t ffa_handle_rx_release(void) { int32_t ret = FFA_RET_DENIED; @@ -604,46 +419,9 @@ static bool ffa_handle_call(struct cpu_user_regs *regs) } } -static bool is_in_subscr_list(const uint16_t *subscr, uint16_t start, - uint16_t end, uint16_t sp_id) -{ - unsigned int n; - - for ( n = start; n < end; n++ ) - { - if ( subscr[n] == sp_id ) - return true; - } - - return false; -} - -static void vm_destroy_bitmap_init(struct ffa_ctx *ctx, - unsigned int create_signal_count) -{ - unsigned int n; - - for ( n = 0; n < subscr_vm_destroyed_count; n++ ) - { - /* - * Skip SPs subscribed to the VM created event that never was - * notified of the VM creation due to an error during - * ffa_domain_init(). 
- */ - if ( is_in_subscr_list(subscr_vm_created, create_signal_count, - subscr_vm_created_count, - subscr_vm_destroyed[n]) ) - continue; - - set_bit(n, ctx->vm_destroy_bitmap); - } -} - static int ffa_domain_init(struct domain *d) { struct ffa_ctx *ctx; - unsigned int n; - int32_t res; if ( !ffa_version ) return -ENODEV; @@ -654,8 +432,7 @@ static int ffa_domain_init(struct domain *d) if ( d->domain_id >= UINT16_MAX) return -ERANGE; - ctx = xzalloc_flex_struct(struct ffa_ctx, vm_destroy_bitmap, - BITS_TO_LONGS(subscr_vm_destroyed_count)); + ctx = xzalloc(struct ffa_ctx); if ( !ctx ) return -ENOMEM; @@ -663,66 +440,28 @@ static int ffa_domain_init(struct domain *d) ctx->teardown_d = d; INIT_LIST_HEAD(&ctx->shm_list); - for ( n = 0; n < subscr_vm_created_count; n++ ) - { - res = ffa_direct_req_send_vm(subscr_vm_created[n], ffa_get_vm_id(d), - FFA_MSG_SEND_VM_CREATED); - if ( res ) - { - printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n", - ffa_get_vm_id(d), subscr_vm_created[n], res); - break; - } - } - vm_destroy_bitmap_init(ctx, n); - if ( n != subscr_vm_created_count ) + /* + * ffa_domain_teardown() will be called if ffa_domain_init() returns an + * error, so no need for cleanup in this function. + */ + + if ( !ffa_partinfo_domain_init(d) ) return -EIO; return 0; } -static void send_vm_destroyed(struct domain *d) -{ - struct ffa_ctx *ctx = d->arch.tee; - unsigned int n; - int32_t res; - - for ( n = 0; n < subscr_vm_destroyed_count; n++ ) - { - if ( !test_bit(n, ctx->vm_destroy_bitmap) ) - continue; - - res = ffa_direct_req_send_vm(subscr_vm_destroyed[n], ffa_get_vm_id(d), - FFA_MSG_SEND_VM_DESTROYED); - - if ( res ) - { - printk(XENLOG_ERR "%pd: ffa: Failed to report destruction of vm_id %u to %u: res %d\n", - d, ffa_get_vm_id(d), subscr_vm_destroyed[n], res); - } - - /* - * For these two error codes the hypervisor is expected to resend - * the destruction message. For the rest it is expected that the - * error is permanent and that is doesn't help to resend the - * destruction message. 
- */ - if ( res != FFA_RET_INTERRUPTED && res != FFA_RET_RETRY ) - clear_bit(n, ctx->vm_destroy_bitmap); - } -} - static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time) { struct ffa_ctx *next_ctx = NULL; bool retry = false; - send_vm_destroyed(ctx->teardown_d); + if ( !ffa_partinfo_domain_destroy(ctx->teardown_d) ) + retry = true; if ( !ffa_shm_domain_destroy(ctx->teardown_d) ) retry = true; - if ( retry || - !bitmap_empty(ctx->vm_destroy_bitmap, subscr_vm_destroyed_count) ) + if ( retry ) { printk(XENLOG_G_INFO "%pd: ffa: Remaining cleanup, retrying\n", ctx->teardown_d); @@ -796,82 +535,6 @@ static int ffa_relinquish_resources(struct domain *d) return 0; } -static void uninit_subscribers(void) -{ - subscr_vm_created_count = 0; - subscr_vm_destroyed_count = 0; - XFREE(subscr_vm_created); - XFREE(subscr_vm_destroyed); -} - -static bool init_subscribers(struct ffa_partition_info_1_1 *fpi, uint16_t count) -{ - uint16_t n; - uint16_t c_pos; - uint16_t d_pos; - - subscr_vm_created_count = 0; - subscr_vm_destroyed_count = 0; - for ( n = 0; n < count; n++ ) - { - if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED ) - subscr_vm_created_count++; - if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED ) - subscr_vm_destroyed_count++; - } - - if ( subscr_vm_created_count ) - subscr_vm_created = xzalloc_array(uint16_t, subscr_vm_created_count); - if ( subscr_vm_destroyed_count ) - subscr_vm_destroyed = xzalloc_array(uint16_t, - subscr_vm_destroyed_count); - if ( (subscr_vm_created_count && !subscr_vm_created) || - (subscr_vm_destroyed_count && !subscr_vm_destroyed) ) - { - printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n"); - uninit_subscribers(); - return false; - } - - for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ ) - { - if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED ) - subscr_vm_created[c_pos++] = fpi[n].id; - if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED ) - subscr_vm_destroyed[d_pos++] = fpi[n].id; - } - - return true; -} - -static bool init_sps(void) -{ - bool ret = false; - uint32_t fpi_size; - uint32_t count; - int e; - - e = ffa_partition_info_get(0, 0, 0, 0, 0, &count, &fpi_size); - if ( e ) - { - printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", e); - goto out; - } - - if ( count >= UINT16_MAX ) - { - printk(XENLOG_ERR "ffa: Impossible number of SPs: %u\n", count); - goto out; - } - - ret = init_subscribers(ffa_rx, count); - -out: - ffa_rx_release(); - - return ret; -} - static bool ffa_probe(void) { uint32_t vers; @@ -949,7 +612,7 @@ static bool ffa_probe(void) } ffa_version = vers; - if ( !init_sps() ) + if ( !ffa_partinfo_init() ) goto err_free_ffa_tx; INIT_LIST_HEAD(&ffa_teardown_head); diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c new file mode 100644 index 000000000000..dc1059584828 --- /dev/null +++ b/xen/arch/arm/tee/ffa_partinfo.c @@ -0,0 +1,373 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2024 Linaro Limited + */ + +#include +#include +#include + +#include +#include + +#include "ffa_private.h" + +/* Partition information descriptor defined in FF-A-1.0-REL */ +struct ffa_partition_info_1_0 { + uint16_t id; + uint16_t execution_context; + uint32_t partition_properties; +}; + +/* Partition information descriptor defined in FF-A-1.1-REL0 */ +struct ffa_partition_info_1_1 { + uint16_t id; + uint16_t execution_context; + uint32_t partition_properties; + uint8_t uuid[16]; +}; + +/* SPs subscribing to VM_CREATE and 
VM_DESTROYED events */ +static uint16_t *subscr_vm_created __read_mostly; +static uint16_t subscr_vm_created_count __read_mostly; +static uint16_t *subscr_vm_destroyed __read_mostly; +static uint16_t subscr_vm_destroyed_count __read_mostly; + +static int32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, + uint32_t w4, uint32_t w5, uint32_t *count, + uint32_t *fpi_size) +{ + const struct arm_smccc_1_2_regs arg = { + .a0 = FFA_PARTITION_INFO_GET, + .a1 = w1, + .a2 = w2, + .a3 = w3, + .a4 = w4, + .a5 = w5, + }; + struct arm_smccc_1_2_regs resp; + uint32_t ret; + + arm_smccc_1_2_smc(&arg, &resp); + + ret = ffa_get_ret_code(&resp); + if ( !ret ) + { + *count = resp.a2; + *fpi_size = resp.a3; + } + + return ret; +} + +int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, + uint32_t w4, uint32_t w5, uint32_t *count, + uint32_t *fpi_size) +{ + int32_t ret = FFA_RET_DENIED; + struct domain *d = current->domain; + struct ffa_ctx *ctx = d->arch.tee; + + /* + * FF-A v1.0 has w5 MBZ while v1.1 allows + * FFA_PARTITION_INFO_GET_COUNT_FLAG to be non-zero. + * + * FFA_PARTITION_INFO_GET_COUNT is only using registers and not the + * rxtx buffer so do the partition_info_get directly. + */ + if ( w5 == FFA_PARTITION_INFO_GET_COUNT_FLAG && + ctx->guest_vers == FFA_VERSION_1_1 ) + return ffa_partition_info_get(w1, w2, w3, w4, w5, count, fpi_size); + if ( w5 ) + return FFA_RET_INVALID_PARAMETERS; + + if ( !ffa_rx ) + return FFA_RET_DENIED; + + if ( !spin_trylock(&ctx->rx_lock) ) + return FFA_RET_BUSY; + + if ( !ctx->page_count || !ctx->rx_is_free ) + goto out; + spin_lock(&ffa_rx_buffer_lock); + ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count, fpi_size); + if ( ret ) + goto out_rx_buf_unlock; + /* + * ffa_partition_info_get() succeeded so we now own the RX buffer we + * share with the SPMC. We must give it back using ffa_rx_release() + * once we've copied the content. 
+ */ + + if ( ctx->guest_vers == FFA_VERSION_1_0 ) + { + size_t n; + struct ffa_partition_info_1_1 *src = ffa_rx; + struct ffa_partition_info_1_0 *dst = ctx->rx; + + if ( ctx->page_count * FFA_PAGE_SIZE < *count * sizeof(*dst) ) + { + ret = FFA_RET_NO_MEMORY; + goto out_rx_release; + } + + for ( n = 0; n < *count; n++ ) + { + dst[n].id = src[n].id; + dst[n].execution_context = src[n].execution_context; + dst[n].partition_properties = src[n].partition_properties; + } + } + else + { + size_t sz = *count * *fpi_size; + + if ( ctx->page_count * FFA_PAGE_SIZE < sz ) + { + ret = FFA_RET_NO_MEMORY; + goto out_rx_release; + } + + memcpy(ctx->rx, ffa_rx, sz); + } + ctx->rx_is_free = false; +out_rx_release: + ffa_rx_release(); +out_rx_buf_unlock: + spin_unlock(&ffa_rx_buffer_lock); +out: + spin_unlock(&ctx->rx_lock); + + return ret; +} + +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id, + uint8_t msg) +{ + uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK; + unsigned int retry_count = 0; + int32_t res; + + if ( msg == FFA_MSG_SEND_VM_CREATED ) + exp_resp |= FFA_MSG_RESP_VM_CREATED; + else if ( msg == FFA_MSG_SEND_VM_DESTROYED ) + exp_resp |= FFA_MSG_RESP_VM_DESTROYED; + else + return FFA_RET_INVALID_PARAMETERS; + + do { + const struct arm_smccc_1_2_regs arg = { + .a0 = FFA_MSG_SEND_DIRECT_REQ_32, + .a1 = sp_id, + .a2 = FFA_MSG_FLAG_FRAMEWORK | msg, + .a5 = vm_id, + }; + struct arm_smccc_1_2_regs resp; + + arm_smccc_1_2_smc(&arg, &resp); + if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp ) + { + /* + * This is an invalid response, likely due to some error in the + * implementation of the ABI. + */ + return FFA_RET_INVALID_PARAMETERS; + } + res = resp.a3; + if ( ++retry_count > 10 ) + { + /* + * TODO + * FFA_RET_INTERRUPTED means that the SPMC has a pending + * non-secure interrupt, we need a way of delivering that + * non-secure interrupt. + * FFA_RET_RETRY is the SP telling us that it's temporarily + * blocked from handling the direct request, we need a generic + * way to deal with this. + * For now in both cases, give up after a few retries. 
+ */ + return res; + } + } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY ); + + return res; +} + +static void uninit_subscribers(void) +{ + subscr_vm_created_count = 0; + subscr_vm_destroyed_count = 0; + XFREE(subscr_vm_created); + XFREE(subscr_vm_destroyed); +} + +static bool init_subscribers(struct ffa_partition_info_1_1 *fpi, uint16_t count) +{ + uint16_t n; + uint16_t c_pos; + uint16_t d_pos; + + subscr_vm_created_count = 0; + subscr_vm_destroyed_count = 0; + for ( n = 0; n < count; n++ ) + { + if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED ) + subscr_vm_created_count++; + if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED ) + subscr_vm_destroyed_count++; + } + + if ( subscr_vm_created_count ) + subscr_vm_created = xzalloc_array(uint16_t, subscr_vm_created_count); + if ( subscr_vm_destroyed_count ) + subscr_vm_destroyed = xzalloc_array(uint16_t, + subscr_vm_destroyed_count); + if ( (subscr_vm_created_count && !subscr_vm_created) || + (subscr_vm_destroyed_count && !subscr_vm_destroyed) ) + { + printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n"); + uninit_subscribers(); + return false; + } + + for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ ) + { + if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED ) + subscr_vm_created[c_pos++] = fpi[n].id; + if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED ) + subscr_vm_destroyed[d_pos++] = fpi[n].id; + } + + return true; +} + +bool ffa_partinfo_init(void) +{ + bool ret = false; + uint32_t fpi_size; + uint32_t count; + int e; + + e = ffa_partition_info_get(0, 0, 0, 0, 0, &count, &fpi_size); + if ( e ) + { + printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", e); + goto out; + } + + if ( count >= UINT16_MAX ) + { + printk(XENLOG_ERR "ffa: Impossible number of SPs: %u\n", count); + goto out; + } + + ret = init_subscribers(ffa_rx, count); + +out: + ffa_rx_release(); + + return ret; +} + +static bool is_in_subscr_list(const uint16_t *subscr, uint16_t start, + uint16_t end, uint16_t sp_id) +{ + unsigned int n; + + for ( n = start; n < end; n++ ) + { + if ( subscr[n] == sp_id ) + return true; + } + + return false; +} + +static void vm_destroy_bitmap_init(struct ffa_ctx *ctx, + unsigned int create_signal_count) +{ + unsigned int n; + + for ( n = 0; n < subscr_vm_destroyed_count; n++ ) + { + /* + * Skip SPs subscribed to the VM created event that were never + * notified of the VM creation due to an error during + * ffa_domain_init().
+ */ + if ( is_in_subscr_list(subscr_vm_created, create_signal_count, + subscr_vm_created_count, + subscr_vm_destroyed[n]) ) + continue; + + set_bit(n, ctx->vm_destroy_bitmap); + } +} + +bool ffa_partinfo_domain_init(struct domain *d) +{ + unsigned int count = BITS_TO_LONGS(subscr_vm_destroyed_count); + struct ffa_ctx *ctx = d->arch.tee; + unsigned int n; + int32_t res; + + ctx->vm_destroy_bitmap = xzalloc_array(unsigned long, count); + if ( !ctx->vm_destroy_bitmap ) + return false; + + for ( n = 0; n < subscr_vm_created_count; n++ ) + { + res = ffa_direct_req_send_vm(subscr_vm_created[n], ffa_get_vm_id(d), + FFA_MSG_SEND_VM_CREATED); + if ( res ) + { + printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n", + ffa_get_vm_id(d), subscr_vm_created[n], res); + break; + } + } + vm_destroy_bitmap_init(ctx, n); + + return n == subscr_vm_created_count; +} + +bool ffa_partinfo_domain_destroy(struct domain *d) +{ + struct ffa_ctx *ctx = d->arch.tee; + unsigned int n; + int32_t res; + + if ( !ctx->vm_destroy_bitmap ) + return true; + + for ( n = 0; n < subscr_vm_destroyed_count; n++ ) + { + if ( !test_bit(n, ctx->vm_destroy_bitmap) ) + continue; + + res = ffa_direct_req_send_vm(subscr_vm_destroyed[n], ffa_get_vm_id(d), + FFA_MSG_SEND_VM_DESTROYED); + + if ( res ) + { + printk(XENLOG_ERR "%pd: ffa: Failed to report destruction of vm_id %u to %u: res %d\n", + d, ffa_get_vm_id(d), subscr_vm_destroyed[n], res); + } + + /* + * For these two error codes the hypervisor is expected to resend + * the destruction message. For the rest it is expected that the + * error is permanent and that it doesn't help to resend the + * destruction message. + */ + if ( res != FFA_RET_INTERRUPTED && res != FFA_RET_RETRY ) + clear_bit(n, ctx->vm_destroy_bitmap); + } + + if ( bitmap_empty(ctx->vm_destroy_bitmap, subscr_vm_destroyed_count) ) + XFREE(ctx->vm_destroy_bitmap); + + return !ctx->vm_destroy_bitmap; +} diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h index f3e2f42e573e..6b32b69cfe90 100644 --- a/xen/arch/arm/tee/ffa_private.h +++ b/xen/arch/arm/tee/ffa_private.h @@ -244,7 +244,7 @@ struct ffa_ctx { * Used for ffa_domain_teardown() to keep track of which SPs should be * notified that this guest is being destroyed.
*/ - unsigned long vm_destroy_bitmap[]; + unsigned long *vm_destroy_bitmap; }; extern void *ffa_rx; @@ -256,6 +256,13 @@ bool ffa_shm_domain_destroy(struct domain *d); void ffa_handle_mem_share(struct cpu_user_regs *regs); int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags); +bool ffa_partinfo_init(void); +bool ffa_partinfo_domain_init(struct domain *d); +bool ffa_partinfo_domain_destroy(struct domain *d); +int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, + uint32_t w4, uint32_t w5, uint32_t *count, + uint32_t *fpi_size); + static inline uint16_t ffa_get_vm_id(const struct domain *d) { @@ -325,4 +332,9 @@ static inline int32_t ffa_simple_call(uint32_t fid, register_t a1, return ffa_get_ret_code(&resp); } +static inline int32_t ffa_rx_release(void) +{ + return ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0); +} + #endif /*__FFA_PRIVATE_H__*/ From patchwork Mon Mar 25 09:39:03 2024 X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13601704
From: Jens Wiklander To: xen-devel@lists.xenproject.org Cc: patches@linaro.org, Jens Wiklander , Volodymyr Babchuk , Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel Subject: [XEN PATCH 5/6] xen/arm: ffa: separate rxtx buffer routines Date: Mon, 25 Mar 2024 10:39:03 +0100 Message-Id: <20240325093904.3466092-6-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240325093904.3466092-1-jens.wiklander@linaro.org> References: <20240325093904.3466092-1-jens.wiklander@linaro.org> MIME-Version: 1.0 Move rxtx buffer routines into a separate file for easier navigation in the source code. Add ffa_rxtx_init(), ffa_rxtx_destroy(), and ffa_rxtx_domain_destroy() to handle the ffa_rxtx internal state on initialization and teardown.
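As a rough illustration (not part of the patch; rxtx_init_sketch is a hypothetical name and error handling is condensed), the init path allocates Xen's global rx/tx buffers and registers them with the SPMC, while ffa_rxtx_destroy() reverses whatever part of that succeeded:

/* Sketch only: the buffer lifecycle that ffa_rxtx.c implements below. */
static bool rxtx_init_sketch(void)
{
    ffa_rx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
    ffa_tx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);

    /* Register the pair with the SPMC using the 64-bit RXTX_MAP ABI */
    if ( !ffa_rx || !ffa_tx ||
         ffa_simple_call(FFA_RXTX_MAP_64, __pa(ffa_tx), __pa(ffa_rx),
                         FFA_RXTX_PAGE_COUNT, 0) != FFA_RET_OK )
    {
        /* Frees the pages and, if both had been allocated, unmaps them
         * from the SPMC again. */
        ffa_rxtx_destroy();
        return false;
    }

    return true;
}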
Signed-off-by: Jens Wiklander Reviewed-by: Bertrand Marquis --- xen/arch/arm/tee/Makefile | 1 + xen/arch/arm/tee/ffa.c | 174 +------------------------- xen/arch/arm/tee/ffa_private.h | 7 ++ xen/arch/arm/tee/ffa_rxtx.c | 216 +++++++++++++++++++++++++++++++++ 4 files changed, 229 insertions(+), 169 deletions(-) create mode 100644 xen/arch/arm/tee/ffa_rxtx.c diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile index be644fba8055..f0112a2f922d 100644 --- a/xen/arch/arm/tee/Makefile +++ b/xen/arch/arm/tee/Makefile @@ -1,5 +1,6 @@ obj-$(CONFIG_FFA) += ffa.o obj-$(CONFIG_FFA) += ffa_shm.o obj-$(CONFIG_FFA) += ffa_partinfo.o +obj-$(CONFIG_FFA) += ffa_rxtx.o obj-y += tee.o obj-$(CONFIG_OPTEE) += optee.o diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c index 7a2803881420..4f7775b8c890 100644 --- a/xen/arch/arm/tee/ffa.c +++ b/xen/arch/arm/tee/ffa.c @@ -65,26 +65,6 @@ #include "ffa_private.h" -/* - * Structs below ending with _1_0 are defined in FF-A-1.0-REL and - * structs ending with _1_1 are defined in FF-A-1.1-REL0. - */ - -/* Endpoint RX/TX descriptor */ -struct ffa_endpoint_rxtx_descriptor_1_0 { - uint16_t sender_id; - uint16_t reserved; - uint32_t rx_range_count; - uint32_t tx_range_count; -}; - -struct ffa_endpoint_rxtx_descriptor_1_1 { - uint16_t sender_id; - uint16_t reserved; - uint32_t rx_region_offs; - uint32_t tx_region_offs; -}; - /* Negotiated FF-A version to use with the SPMC */ static uint32_t __ro_after_init ffa_version; @@ -145,12 +125,6 @@ static bool check_mandatory_feature(uint32_t id) return !ret; } -static int32_t ffa_rxtx_map(paddr_t tx_addr, paddr_t rx_addr, - uint32_t page_count) -{ - return ffa_simple_call(FFA_RXTX_MAP_64, tx_addr, rx_addr, page_count, 0); -} - static void handle_version(struct cpu_user_regs *regs) { struct domain *d = current->domain; @@ -166,127 +140,6 @@ static void handle_version(struct cpu_user_regs *regs) ffa_set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0); } -static uint32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr, - register_t rx_addr, uint32_t page_count) -{ - uint32_t ret = FFA_RET_INVALID_PARAMETERS; - struct domain *d = current->domain; - struct ffa_ctx *ctx = d->arch.tee; - struct page_info *tx_pg; - struct page_info *rx_pg; - p2m_type_t t; - void *rx; - void *tx; - - if ( !smccc_is_conv_64(fid) ) - { - /* - * Calls using the 32-bit calling convention must ignore the upper - * 32 bits in the argument registers. 
- */ - tx_addr &= UINT32_MAX; - rx_addr &= UINT32_MAX; - } - - if ( page_count > FFA_MAX_RXTX_PAGE_COUNT ) - { - printk(XENLOG_ERR "ffa: RXTX_MAP: error: %u pages requested (limit %u)\n", - page_count, FFA_MAX_RXTX_PAGE_COUNT); - return FFA_RET_INVALID_PARAMETERS; - } - - /* Already mapped */ - if ( ctx->rx ) - return FFA_RET_DENIED; - - tx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(tx_addr)), &t, P2M_ALLOC); - if ( !tx_pg ) - return FFA_RET_INVALID_PARAMETERS; - - /* Only normal RW RAM for now */ - if ( t != p2m_ram_rw ) - goto err_put_tx_pg; - - rx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(rx_addr)), &t, P2M_ALLOC); - if ( !tx_pg ) - goto err_put_tx_pg; - - /* Only normal RW RAM for now */ - if ( t != p2m_ram_rw ) - goto err_put_rx_pg; - - tx = __map_domain_page_global(tx_pg); - if ( !tx ) - goto err_put_rx_pg; - - rx = __map_domain_page_global(rx_pg); - if ( !rx ) - goto err_unmap_tx; - - ctx->rx = rx; - ctx->tx = tx; - ctx->rx_pg = rx_pg; - ctx->tx_pg = tx_pg; - ctx->page_count = page_count; - ctx->rx_is_free = true; - return FFA_RET_OK; - -err_unmap_tx: - unmap_domain_page_global(tx); -err_put_rx_pg: - put_page(rx_pg); -err_put_tx_pg: - put_page(tx_pg); - - return ret; -} - -static void rxtx_unmap(struct ffa_ctx *ctx) -{ - unmap_domain_page_global(ctx->rx); - unmap_domain_page_global(ctx->tx); - put_page(ctx->rx_pg); - put_page(ctx->tx_pg); - ctx->rx = NULL; - ctx->tx = NULL; - ctx->rx_pg = NULL; - ctx->tx_pg = NULL; - ctx->page_count = 0; - ctx->rx_is_free = false; -} - -static uint32_t ffa_handle_rxtx_unmap(void) -{ - struct domain *d = current->domain; - struct ffa_ctx *ctx = d->arch.tee; - - if ( !ctx->rx ) - return FFA_RET_INVALID_PARAMETERS; - - rxtx_unmap(ctx); - - return FFA_RET_OK; -} - -static int32_t ffa_handle_rx_release(void) -{ - int32_t ret = FFA_RET_DENIED; - struct domain *d = current->domain; - struct ffa_ctx *ctx = d->arch.tee; - - if ( !spin_trylock(&ctx->rx_lock) ) - return FFA_RET_BUSY; - - if ( !ctx->page_count || ctx->rx_is_free ) - goto out; - ret = FFA_RET_OK; - ctx->rx_is_free = true; -out: - spin_unlock(&ctx->rx_lock); - - return ret; -} - static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid) { struct arm_smccc_1_2_regs arg = { .a0 = fid, }; @@ -522,8 +375,7 @@ static int ffa_domain_teardown(struct domain *d) if ( !ctx ) return 0; - if ( ctx->rx ) - rxtx_unmap(ctx); + ffa_rxtx_domain_destroy(d); ffa_domain_teardown_continue(ctx, true /* first_time */); @@ -538,7 +390,6 @@ static int ffa_relinquish_resources(struct domain *d) static bool ffa_probe(void) { uint32_t vers; - int e; unsigned int major_vers; unsigned int minor_vers; @@ -596,36 +447,21 @@ static bool ffa_probe(void) !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) ) return false; - ffa_rx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0); - if ( !ffa_rx ) + if ( !ffa_rxtx_init() ) return false; - ffa_tx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0); - if ( !ffa_tx ) - goto err_free_ffa_rx; - - e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), FFA_RXTX_PAGE_COUNT); - if ( e ) - { - printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", e); - goto err_free_ffa_tx; - } ffa_version = vers; if ( !ffa_partinfo_init() ) - goto err_free_ffa_tx; + goto err_rxtx_destroy; INIT_LIST_HEAD(&ffa_teardown_head); init_timer(&ffa_teardown_timer, ffa_teardown_timer_callback, NULL, 0); return true; -err_free_ffa_tx: - free_xenheap_pages(ffa_tx, 0); - ffa_tx = NULL; -err_free_ffa_rx: - free_xenheap_pages(ffa_rx, 0); - ffa_rx = NULL; 
+err_rxtx_destroy:
+    ffa_rxtx_destroy();
 
     ffa_version = 0;
 
     return false;
diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index 6b32b69cfe90..98236cbf14a3 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -263,6 +263,13 @@ int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
                                       uint32_t w4, uint32_t w5, uint32_t *count,
                                       uint32_t *fpi_size);
 
+bool ffa_rxtx_init(void);
+void ffa_rxtx_destroy(void);
+void ffa_rxtx_domain_destroy(struct domain *d);
+uint32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
+                             register_t rx_addr, uint32_t page_count);
+uint32_t ffa_handle_rxtx_unmap(void);
+int32_t ffa_handle_rx_release(void);
 
 static inline uint16_t ffa_get_vm_id(const struct domain *d)
 {
diff --git a/xen/arch/arm/tee/ffa_rxtx.c b/xen/arch/arm/tee/ffa_rxtx.c
new file mode 100644
index 000000000000..661764052e67
--- /dev/null
+++ b/xen/arch/arm/tee/ffa_rxtx.c
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 Linaro Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+#include "ffa_private.h"
+
+/* Endpoint RX/TX descriptor defined in FF-A-1.0-REL */
+struct ffa_endpoint_rxtx_descriptor_1_0 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_range_count;
+    uint32_t tx_range_count;
+};
+
+/* Endpoint RX/TX descriptor defined in FF-A-1.1-REL0 */
+struct ffa_endpoint_rxtx_descriptor_1_1 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_region_offs;
+    uint32_t tx_region_offs;
+};
+
+uint32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
+                             register_t rx_addr, uint32_t page_count)
+{
+    uint32_t ret = FFA_RET_INVALID_PARAMETERS;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+    struct page_info *tx_pg;
+    struct page_info *rx_pg;
+    p2m_type_t t;
+    void *rx;
+    void *tx;
+
+    if ( !smccc_is_conv_64(fid) )
+    {
+        /*
+         * Calls using the 32-bit calling convention must ignore the upper
+         * 32 bits in the argument registers.
+         */
+        tx_addr &= UINT32_MAX;
+        rx_addr &= UINT32_MAX;
+    }
+
+    if ( page_count > FFA_MAX_RXTX_PAGE_COUNT )
+    {
+        printk(XENLOG_ERR "ffa: RXTX_MAP: error: %u pages requested (limit %u)\n",
+               page_count, FFA_MAX_RXTX_PAGE_COUNT);
+        return FFA_RET_INVALID_PARAMETERS;
+    }
+
+    /* Already mapped */
+    if ( ctx->rx )
+        return FFA_RET_DENIED;
+
+    tx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(tx_addr)), &t, P2M_ALLOC);
+    if ( !tx_pg )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Only normal RW RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_tx_pg;
+
+    rx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(rx_addr)), &t, P2M_ALLOC);
+    if ( !rx_pg )
+        goto err_put_tx_pg;
+
+    /* Only normal RW RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_rx_pg;
+
+    tx = __map_domain_page_global(tx_pg);
+    if ( !tx )
+        goto err_put_rx_pg;
+
+    rx = __map_domain_page_global(rx_pg);
+    if ( !rx )
+        goto err_unmap_tx;
+
+    ctx->rx = rx;
+    ctx->tx = tx;
+    ctx->rx_pg = rx_pg;
+    ctx->tx_pg = tx_pg;
+    ctx->page_count = page_count;
+    ctx->rx_is_free = true;
+    return FFA_RET_OK;
+
+err_unmap_tx:
+    unmap_domain_page_global(tx);
+err_put_rx_pg:
+    put_page(rx_pg);
+err_put_tx_pg:
+    put_page(tx_pg);
+
+    return ret;
+}
+
+static void rxtx_unmap(struct ffa_ctx *ctx)
+{
+    unmap_domain_page_global(ctx->rx);
+    unmap_domain_page_global(ctx->tx);
+    put_page(ctx->rx_pg);
+    put_page(ctx->tx_pg);
+    ctx->rx = NULL;
+    ctx->tx = NULL;
+    ctx->rx_pg = NULL;
+    ctx->tx_pg = NULL;
+    ctx->page_count = 0;
+    ctx->rx_is_free = false;
+}
+
+uint32_t ffa_handle_rxtx_unmap(void)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !ctx->rx )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    rxtx_unmap(ctx);
+
+    return FFA_RET_OK;
+}
+
+int32_t ffa_handle_rx_release(void)
+{
+    int32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !spin_trylock(&ctx->rx_lock) )
+        return FFA_RET_BUSY;
+
+    if ( !ctx->page_count || ctx->rx_is_free )
+        goto out;
+    ret = FFA_RET_OK;
+    ctx->rx_is_free = true;
+out:
+    spin_unlock(&ctx->rx_lock);
+
+    return ret;
+}
+
+static int32_t ffa_rxtx_map(paddr_t tx_addr, paddr_t rx_addr,
+                            uint32_t page_count)
+{
+    return ffa_simple_call(FFA_RXTX_MAP_64, tx_addr, rx_addr, page_count, 0);
+}
+
+static int32_t ffa_rxtx_unmap(void)
+{
+    return ffa_simple_call(FFA_RXTX_UNMAP, 0, 0, 0, 0);
+}
+
+void ffa_rxtx_domain_destroy(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( ctx->rx )
+        rxtx_unmap(ctx);
+}
+
+void ffa_rxtx_destroy(void)
+{
+    bool need_unmap = ffa_tx && ffa_rx;
+
+    if ( ffa_tx )
+    {
+        free_xenheap_pages(ffa_tx, 0);
+        ffa_tx = NULL;
+    }
+    if ( ffa_rx )
+    {
+        free_xenheap_pages(ffa_rx, 0);
+        ffa_rx = NULL;
+    }
+
+    if ( need_unmap )
+        ffa_rxtx_unmap();
+}
+
+bool ffa_rxtx_init(void)
+{
+    int e;
+
+    ffa_rx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
+    if ( !ffa_rx )
+        return false;
+
+    ffa_tx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
+    if ( !ffa_tx )
+        goto err;
+
+    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), FFA_RXTX_PAGE_COUNT);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", e);
+        goto err;
+    }
+    return true;
+
+err:
+    ffa_rxtx_destroy();
+
+    return false;
+}
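For readers tracking the new module boundary: a guest drives the relocated
handlers roughly as below. This is a minimal sketch, assuming a hypothetical
ffa_smc() SMCCC wrapper and identity-mapped buffers (so their addresses are
valid guest-physical addresses); it is illustrative only and not part of the
patch. The function IDs are the standard FF-A values, matching ffa_private.h.

    #include <stdint.h>

    #define FFA_SUCCESS_32   0x84000061U
    #define FFA_RX_RELEASE   0x84000065U
    #define FFA_RXTX_MAP_64  0xC4000066U
    #define FFA_RXTX_UNMAP   0x84000067U

    /* Hypothetical: issues the SMC and returns the function ID left in w0. */
    extern uint64_t ffa_smc(uint64_t fid, uint64_t a1, uint64_t a2, uint64_t a3);

    static uint8_t tx_buf[4096] __attribute__((__aligned__(4096)));
    static uint8_t rx_buf[4096] __attribute__((__aligned__(4096)));

    static int ffa_setup_rxtx(void)
    {
        /*
         * One 4K page per buffer: ffa_handle_rxtx_map() rejects counts above
         * FFA_MAX_RXTX_PAGE_COUNT, and a second MAP without an intervening
         * FFA_RXTX_UNMAP returns FFA_RET_DENIED.
         */
        if ( ffa_smc(FFA_RXTX_MAP_64, (uintptr_t)tx_buf, (uintptr_t)rx_buf,
                     1) != FFA_SUCCESS_32 )
            return -1;

        /* ... consume a message from rx_buf, then hand the buffer back ... */
        ffa_smc(FFA_RX_RELEASE, 0, 0, 0);

        return 0;
    }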
From patchwork Mon Mar 25 09:39:04 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13601706
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk,
 Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH 6/6] xen/arm: ffa: support FFA_FEATURES
Date: Mon, 25 Mar 2024 10:39:04 +0100
Message-Id: <20240325093904.3466092-7-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240325093904.3466092-1-jens.wiklander@linaro.org>
References: <20240325093904.3466092-1-jens.wiklander@linaro.org>

Add support for the mandatory FF-A ABI function FFA_FEATURES.

Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
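With this handler in place, a guest can probe each ABI before relying on it.
Below is a minimal sketch reusing the hypothetical ffa_smc() wrapper from the
note after patch 1/6; it is illustrative only. As the handler below enforces,
any non-zero value in registers w2-w7 makes the query fail with
FFA_RET_NOT_SUPPORTED, so only w1 carries the queried function ID.

    #define FFA_FEATURES     0x84000064U

    /* Non-zero when Xen advertises support for func_id. */
    static int ffa_func_supported(uint32_t func_id)
    {
        return ffa_smc(FFA_FEATURES, func_id, 0, 0) == FFA_SUCCESS_32;
    }

    /* Example: only set up RX/TX buffers when the ABI is advertised. */
    /* if ( ffa_func_supported(FFA_RXTX_MAP_64) ) ffa_setup_rxtx(); */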
 xen/arch/arm/tee/ffa.c | 57 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 4f7775b8c890..8665201e34a9 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -192,6 +192,60 @@ out:
                  resp.a7 & mask);
 }
 
+static void handle_features(struct cpu_user_regs *regs)
+{
+    uint32_t a1 = get_user_reg(regs, 1);
+    unsigned int n;
+
+    for ( n = 2; n <= 7; n++ )
+    {
+        if ( get_user_reg(regs, n) )
+        {
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+            return;
+        }
+    }
+
+    switch ( a1 )
+    {
+    case FFA_ERROR:
+    case FFA_VERSION:
+    case FFA_SUCCESS_32:
+    case FFA_SUCCESS_64:
+    case FFA_FEATURES:
+    case FFA_ID_GET:
+    case FFA_RX_RELEASE:
+    case FFA_RXTX_UNMAP:
+    case FFA_MEM_RECLAIM:
+    case FFA_PARTITION_INFO_GET:
+    case FFA_MSG_SEND_DIRECT_REQ_32:
+    case FFA_MSG_SEND_DIRECT_REQ_64:
+        ffa_set_regs_success(regs, 0, 0);
+        break;
+    case FFA_MEM_SHARE_64:
+    case FFA_MEM_SHARE_32:
+        /*
+         * We currently don't support dynamically allocated buffers. Report
+         * that with 0 in bit[0] of w2.
+         */
+        ffa_set_regs_success(regs, 0, 0);
+        break;
+    case FFA_RXTX_MAP_64:
+    case FFA_RXTX_MAP_32:
+        /*
+         * We currently support 4k pages only, report that as 0b00 in
+         * bits [1:0] of w2. This needs to be revised if Xen page size
+         * differs from FFA_PAGE_SIZE (SZ_4K).
+         */
+        BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
+        ffa_set_regs_success(regs, 0, 0);
+        break;
+    default:
+        ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
+    }
+}
+
 static bool ffa_handle_call(struct cpu_user_regs *regs)
 {
     uint32_t fid = get_user_reg(regs, 0);
@@ -212,6 +266,9 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_ID_GET:
         ffa_set_regs_success(regs, ffa_get_vm_id(d), 0);
         return true;
+    case FFA_FEATURES:
+        handle_features(regs);
+        return true;
     case FFA_RXTX_MAP_32:
     case FFA_RXTX_MAP_64:
         e = ffa_handle_rxtx_map(fid, get_user_reg(regs, 1),