From patchwork Tue Sep 24 15:46:45 2019
X-Patchwork-Submitter: Volodymyr Babchuk
X-Patchwork-Id: 11159183
From: Volodymyr Babchuk
To: "xen-devel@lists.xenproject.org"
Date: Tue, 24 Sep 2019 15:46:45 +0000
Message-ID: <20190924154633.852828-2-volodymyr_babchuk@epam.com>
References: <20190924154633.852828-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20190924154633.852828-1-volodymyr_babchuk@epam.com>
Subject: [Xen-devel] [PATCH v3 1/3] xen/arm: optee: handle shared buffer
 translation error
Cc: "tee-dev@lists.linaro.org", Julien Grall, Stefano Stabellini,
 Volodymyr Babchuk

There is a possible case when OP-TEE asks the guest to allocate a
shared buffer, but Xen for some reason cannot translate the buffer's
addresses. In this situation we should do two things:

1. Tell the guest to free the allocated buffer, so there will be no
   memory leak in the guest.

2. Tell OP-TEE that the buffer allocation failed.

To ask the guest to free the allocated buffer we do the same thing
OP-TEE does: issue an RPC request. This is done by filling the request
buffer (luckily, we can reuse the same buffer that OP-TEE used to
issue the original request) and then returning to the guest with a
special return code. We then need to handle the next call from the
guest in a special way: as the RPC was issued by Xen, not by OP-TEE,
it should be handled by Xen. Basically, this is the mechanism to
preempt the OP-TEE mediator.

The same mechanism can be used in the future to preempt the mediator
during translation of large (>512 pages) shared buffers.

Signed-off-by: Volodymyr Babchuk
Acked-by: Julien Grall
---

Changes from v1:
 - Renamed OPTEEM_CALL_* to OPTEE_CALL_*
 - Fixed comments
 - Added ASSERT() in handle_xen_rpc_return()

Changes from v2:
 - ASSERT() in handle_xen_rpc_return() is replaced with domain_crash()
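Below is a minimal standalone sketch of the preemption mechanism
described above, for illustration only (it is not part of the patch).
The enum values mirror the ones this patch adds; everything else
(function names, the struct, the printf "transport") is hypothetical
and merely simulates the real flow in xen/arch/arm/tee/optee.c:

#include <stdbool.h>
#include <stdio.h>

enum optee_call_state {
    OPTEE_CALL_NORMAL,  /* a pending RPC (if any) was issued by OP-TEE */
    OPTEE_CALL_XEN_RPC, /* the pending RPC was injected by the mediator */
};

struct std_call {
    enum optee_call_state state;
};

/*
 * Buffer translation failed: reuse the RPC request buffer to ask the
 * guest to free its allocation, and remember that this RPC is ours.
 */
static void issue_rpc_cmd_free_sketch(struct std_call *call)
{
    call->state = OPTEE_CALL_XEN_RPC;
    printf("to guest: RPC OPTEE_RPC_CMD_SHM_FREE\n");
}

/* The guest resumes the call; route the RPC reply by its origin. */
static void handle_rpc_return_sketch(struct std_call *call)
{
    if ( call->state == OPTEE_CALL_XEN_RPC )
    {
        /*
         * Xen issued this RPC, so Xen consumes the reply itself and
         * can now report the allocation failure to OP-TEE.
         */
        call->state = OPTEE_CALL_NORMAL;
        printf("to OP-TEE: TEEC_ERROR_GENERIC (allocation failed)\n");
    }
    else
        printf("to OP-TEE: forward the guest's RPC reply\n");
}

int main(void)
{
    struct std_call call = { .state = OPTEE_CALL_NORMAL };

    issue_rpc_cmd_free_sketch(&call); /* the translation error path */
    handle_rpc_return_sketch(&call);  /* the guest freed the buffer */

    return 0;
}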
---
 xen/arch/arm/tee/optee.c | 173 ++++++++++++++++++++++++++++++++-------
 1 file changed, 142 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 350af87d90..6a035355db 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -98,6 +98,11 @@
                         OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
                         OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
 
+enum optee_call_state {
+    OPTEE_CALL_NORMAL,
+    OPTEE_CALL_XEN_RPC,
+};
+
 static unsigned int __read_mostly max_optee_threads;
 
 /*
@@ -114,6 +119,9 @@ struct optee_std_call {
     paddr_t guest_arg_ipa;
     int optee_thread_id;
     int rpc_op;
+    /* Saved buffer type for the current buffer allocate request */
+    unsigned int rpc_buffer_type;
+    enum optee_call_state state;
     uint64_t rpc_data_cookie;
     bool in_flight;
     register_t rpc_params[2];
@@ -301,6 +309,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx)
 
     call->optee_thread_id = -1;
     call->in_flight = true;
+    call->state = OPTEE_CALL_NORMAL;
 
     spin_lock(&ctx->lock);
     list_add_tail(&call->list, &ctx->call_list);
@@ -1086,6 +1095,10 @@ static int handle_rpc_return(struct optee_domain *ctx,
         ret = -ERESTART;
     }
 
+    /* Save the buffer type in case we will want to free it */
+    if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
+        call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
+
     unmap_domain_page(shm_rpc->xen_arg);
 }
 
@@ -1250,18 +1263,108 @@ err:
     return;
 }
 
+/*
+ * Prepare an RPC request to free the shared buffer in the same way as
+ * OP-TEE does.
+ *
+ * Return values:
+ *  true  - successfully prepared the RPC request
+ *  false - there was an error
+ */
+static bool issue_rpc_cmd_free(struct optee_domain *ctx,
+                               struct cpu_user_regs *regs,
+                               struct optee_std_call *call,
+                               struct shm_rpc *shm_rpc,
+                               uint64_t cookie)
+{
+    register_t r1, r2;
+
+    /* In case the guest forgets to update it with a meaningful value */
+    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+    shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE;
+    shm_rpc->xen_arg->num_params = 1;
+    shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
+    shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type;
+    shm_rpc->xen_arg->params[0].u.value.b = cookie;
+
+    if ( access_guest_memory_by_ipa(current->domain,
+                                    gfn_to_gaddr(shm_rpc->gfn),
+                                    shm_rpc->xen_arg,
+                                    OPTEE_MSG_GET_ARG_SIZE(1),
+                                    true) )
+    {
+        /*
+         * Well, this is quite bad. We have an error in the error
+         * path. This can happen only if the guest misbehaves, so all
+         * we can do is return an error to OP-TEE and leave the
+         * guest's memory leaked. We have already freed all resources
+         * allocated for this buffer, but the guest will never receive
+         * the OPTEE_RPC_CMD_SHM_FREE request, so it will not know
+         * that it can release the allocated buffer.
+         */
+        shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+        shm_rpc->xen_arg->num_params = 0;
+
+        return false;
+    }
+
+    uint64_to_regpair(&r1, &r2, shm_rpc->cookie);
+
+    call->state = OPTEE_CALL_XEN_RPC;
+    call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD;
+    call->rpc_params[0] = r1;
+    call->rpc_params[1] = r2;
+    call->optee_thread_id = get_user_reg(regs, 3);
+
+    set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD);
+    set_user_reg(regs, 1, r1);
+    set_user_reg(regs, 2, r2);
+
+    return true;
+}
+
+/* Handles return from a Xen-issued RPC */
+static void handle_xen_rpc_return(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs,
+                                  struct optee_std_call *call,
+                                  struct shm_rpc *shm_rpc)
+{
+    call->state = OPTEE_CALL_NORMAL;
+
+    /*
+     * Right now there is only one reason to be here: we asked the
+     * guest to free a shared buffer and it did so. Now we can tell
+     * OP-TEE that the buffer allocation failed. We are not storing
+     * the exact command type, only the type of the RPC return, so
+     * this is the only check we can perform here.
+     */
+    if ( call->rpc_op != OPTEE_SMC_RPC_FUNC_CMD )
+        domain_crash(current->domain);
+
+    /*
+     * We are not checking the return value from the guest because we
+     * assume that OPTEE_RPC_CMD_SHM_FREE never fails.
+     */
+    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+    shm_rpc->xen_arg->num_params = 0;
+}
+
 /*
  * This function is called when guest is finished processing RPC
  * request from OP-TEE and wished to resume the interrupted standard
  * call.
+ *
+ * Return values:
+ *  false - there was an error, do not call OP-TEE
+ *  true  - success, proceed as normal
  */
-static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
+static bool handle_rpc_cmd_alloc(struct optee_domain *ctx,
                                  struct cpu_user_regs *regs,
                                  struct optee_std_call *call,
                                  struct shm_rpc *shm_rpc)
 {
     if ( shm_rpc->xen_arg->ret || shm_rpc->xen_arg->num_params != 1 )
-        return;
+        return true;
 
     if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
                                               OPTEE_MSG_ATTR_NONCONTIG) )
@@ -1269,7 +1372,7 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
         gdprintk(XENLOG_WARNING,
                  "Invalid attrs for shared mem buffer: %"PRIx64"\n",
                  shm_rpc->xen_arg->params[0].attr);
-        return;
+        return true;
     }
 
     /* Free pg list for buffer */
@@ -1285,21 +1388,14 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
     {
         call->rpc_data_cookie = 0;
         /*
-         * Okay, so there was problem with guest's buffer and we need
-         * to tell about this to OP-TEE.
-         */
-        shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
-        shm_rpc->xen_arg->num_params = 0;
-        /*
-         * TODO: With current implementation, OP-TEE will not issue
-         * RPC to free this buffer. Guest and OP-TEE will be out of
-         * sync: guest believes that it provided buffer to OP-TEE,
-         * while OP-TEE thinks of opposite. Ideally, we need to
-         * emulate RPC with OPTEE_MSG_RPC_CMD_SHM_FREE command.
+         * We are unable to translate the guest's buffer, so we need to
+         * tell the guest to free it before reporting an error to OP-TEE.
          */
-        gprintk(XENLOG_WARNING,
-                "translate_noncontig() failed, OP-TEE/guest state is out of sync.\n");
+        return !issue_rpc_cmd_free(ctx, regs, call, shm_rpc,
+                                   shm_rpc->xen_arg->params[0].u.tmem.shm_ref);
     }
+
+    return true;
 }
 
 static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
@@ -1349,22 +1445,37 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
         goto out;
     }
 
-    switch (shm_rpc->xen_arg->cmd)
+    if ( call->state == OPTEE_CALL_NORMAL )
     {
-    case OPTEE_RPC_CMD_GET_TIME:
-    case OPTEE_RPC_CMD_WAIT_QUEUE:
-    case OPTEE_RPC_CMD_SUSPEND:
-        break;
-    case OPTEE_RPC_CMD_SHM_ALLOC:
-        handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc);
-        break;
-    case OPTEE_RPC_CMD_SHM_FREE:
-        free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
-        if ( call->rpc_data_cookie == shm_rpc->xen_arg->params[0].u.value.b )
-            call->rpc_data_cookie = 0;
-        break;
-    default:
-        break;
+        switch (shm_rpc->xen_arg->cmd)
+        {
+        case OPTEE_RPC_CMD_GET_TIME:
+        case OPTEE_RPC_CMD_WAIT_QUEUE:
+        case OPTEE_RPC_CMD_SUSPEND:
+            break;
+        case OPTEE_RPC_CMD_SHM_ALLOC:
+            if ( !handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc) )
+            {
+                /* We failed to translate the buffer, report back to the guest */
+                unmap_domain_page(shm_rpc->xen_arg);
+                put_std_call(ctx, call);
+
+                return;
+            }
+            break;
+        case OPTEE_RPC_CMD_SHM_FREE:
+            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
+            if ( call->rpc_data_cookie ==
+                 shm_rpc->xen_arg->params[0].u.value.b )
+                call->rpc_data_cookie = 0;
+            break;
+        default:
+            break;
+        }
+    }
+    else
+    {
+        handle_xen_rpc_return(ctx, regs, call, shm_rpc);
     }
 
 out: