From patchwork Tue Feb 18 14:34:50 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980108 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 431D2C02198 for ; Tue, 18 Feb 2025 14:35:46 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C381F10E6D9; Tue, 18 Feb 2025 14:35:45 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="XioigvQK"; dkim-atps=neutral Received: from mail-lj1-f180.google.com (mail-lj1-f180.google.com [209.85.208.180]) by gabe.freedesktop.org (Postfix) with ESMTPS id AC1EE10E6D9 for ; Tue, 18 Feb 2025 14:35:39 +0000 (UTC) Received: by mail-lj1-f180.google.com with SMTP id 38308e7fff4ca-30a317e4c27so19959761fa.2 for ; Tue, 18 Feb 2025 06:35:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1739889338; x=1740494138; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=aSw2Z+vDYpMcPS9QNEoQcbq77hOYHck6fzByPwaDKgI=; b=XioigvQKwdoTlIcMDXYtgy2rXeHGWFvzHbqmCXmcuPyUBwM4dsYRwvmUzJV7oONX0a vAdayZgQjDIZoJ/lf6l2YAFi1J49yvZjktE88Ddq8fA6bZXDd1KO0KfePW0/JllweqNS lkLUffkZXPVoy5SaclKnVpWphaI0R7dtYteeDRJRb8x/7p8Ul11g5ruuinRPHid7R1gq A5rdDTa8LfGuEImZ+GV3Qoe+4Gn7Q5ZtvXPRJ24KoTV+LIrGcYUYNjq8yp1BDg2PZquO FXa+vkgLj2xqZqLRW1a9amiKg+RuGo7h2fQrhTsnCeuyKZujw2JH68eoJiPL7vtEF9xp VeqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1739889338; x=1740494138; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=aSw2Z+vDYpMcPS9QNEoQcbq77hOYHck6fzByPwaDKgI=; b=ropkyAYty3tpvdA7YanrjYiF9y66Y8X6zBjFgBKYk8bgXabDiPC6G4GW87iuJIJoBD ctO/yH1nMWJyoX61rNBbIPvuyXkYDbCOGUPabuevV1JHVqPI60aMZ268698zeeKn20Wo 0zyx3BJCtmjR50upEINfwPHB98uiZAkoAbNVFqO46HS964XGo98NdOmGRNqXA6nEwID8 Lx/oaALSyV4yJ8bQy+BjhHINOV+iaEZVI1JLI48z8VjiYwsItFrkKqAgp6oNPeOud1V2 NMe9X9pG6pSzo25KD8Ym62CTIJZfE692IIHLib10Ac1E/WpZ1GtDgItWphdLxyVKzPMK ty4w== X-Forwarded-Encrypted: i=1; AJvYcCUbQ5D2VFaUjsikc6bMAyAlY9eKE9Rl1uLMpxMEt3VfqF8gUfE7opFEZyb/ZWZ1YFc7dHtJAzd8EmM=@lists.freedesktop.org X-Gm-Message-State: AOJu0YwpIaE1S4+xViBaDuVgM6IcquwjI6GmW/ONQpVX1wy1h0/MI9Td zntTJ6M2l0kvZ9ZPtk0tcjs0U7RpFYZRZR4gYeIvP3nI9xT4nvqVJQ6uMnroBlU= X-Gm-Gg: ASbGncsKrbn0pbZsYwP+P/ezMJLfXJIAhDst+Y4qy2RfE3RV1D4wgPkukSEYzDidxQp RRiVH5iqqLC2xnp0MkQUV26vIjkU+xz5EggTKnAISVAdOX+XkEv+A37dFoU8/QuRXyvlstm1/s2 cwILBxlzTQhNkqEb/1pFAKPq20F1hAPCEExbEN8RhwZlsWQsIQfZCXN8LK4Hem+s77s9fz63ABR AktiQ8WBFbC/LEdoeUs7aHtRead9Mp2JGkuVceAO+SjFAogcTzQMDlV1bDXzWfUlnK/BIY+c65h P/Yd34Y4pAA/68+Kz+EZG6wRNK83b8X7189nZrtfKssI3cSZBSe456MDObBv4LrniQwX X-Google-Smtp-Source: AGHT+IF7f9ec48KaZv7M3tutHQN+6tTXNf+jaL1lxy33GRxn28Otk9evT4b6BnrHCYuKUlHmWKLqmQ== X-Received: by 2002:a2e:9f54:0:b0:2fa:d2c3:a7e8 with SMTP id 38308e7fff4ca-30927a474a1mr39521641fa.13.1739889337817; Tue, 18 Feb 2025 06:35:37 -0800 (PST) Received: from rayden.urgonet 
(h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 38308e7fff4ca-309311777a8sm12360831fa.25.2025.02.18.06.35.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 18 Feb 2025 06:35:36 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 1/7] tee: add restricted memory allocation Date: Tue, 18 Feb 2025 15:34:50 +0100 Message-ID: <20250218143527.1236668-2-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add restricted memory allocation to the TEE subsystem. Restricted memory refers to memory buffers behind a hardware-enforced firewall. It is not accessible to the kernel under normal circumstances, but only to certain hardware IPs or CPUs executing in a higher privileged mode than the kernel itself. This interface allows such restricted memory buffers to be allocated and managed via interaction with a TEE implementation. A new ioctl, TEE_IOC_RSTMEM_ALLOC, is added to allocate these restricted memory buffers. The restricted memory is allocated for a specific use-case, like Secure Video Playback, Trusted UI, or Secure Video Recording, where certain hardware devices can access the memory. More use-cases can be added to the userspace ABI, but it is up to the backend drivers to provide the implementation.
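For illustration only (not part of this patch), a minimal userspace sketch of how the new ioctl might be used; the /dev/tee0 device path, the buffer size, and the Secure Video Playback use-case are assumptions:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

int main(void)
{
	struct tee_ioctl_rstmem_alloc_data data;
	int teefd, dmabuf_fd;

	teefd = open("/dev/tee0", O_RDWR);	/* assumed device node */
	if (teefd < 0)
		return 1;

	memset(&data, 0, sizeof(data));
	data.size = 4 * 1024 * 1024;			/* 4 MiB buffer */
	data.use_case = TEE_IOC_UC_SECURE_VIDEO_PLAY;	/* Secure Video Playback */

	/* On success the ioctl returns a dma-buf fd and fills in data.id */
	dmabuf_fd = ioctl(teefd, TEE_IOC_RSTMEM_ALLOC, &data);
	if (dmabuf_fd < 0) {
		close(teefd);
		return 1;
	}
	printf("restricted buffer: dma-buf fd %d, shm id %d\n",
	       dmabuf_fd, data.id);

	close(dmabuf_fd);
	close(teefd);
	return 0;
}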
Signed-off-by: Jens Wiklander --- drivers/tee/Makefile | 1 + drivers/tee/tee_core.c | 38 +++++++- drivers/tee/tee_private.h | 2 + drivers/tee/tee_rstmem.c | 180 +++++++++++++++++++++++++++++++++++++ drivers/tee/tee_shm.c | 2 + drivers/tee/tee_shm_pool.c | 69 +++++++++++++- include/linux/tee_core.h | 15 ++++ include/linux/tee_drv.h | 2 + include/uapi/linux/tee.h | 44 ++++++++- 9 files changed, 349 insertions(+), 4 deletions(-) create mode 100644 drivers/tee/tee_rstmem.c diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 5488cba30bd2..a4c6b55444b9 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -3,6 +3,7 @@ obj-$(CONFIG_TEE) += tee.o tee-objs += tee_core.o tee-objs += tee_shm.o tee-objs += tee_shm_pool.o +tee-objs += tee_rstmem.o obj-$(CONFIG_OPTEE) += optee/ obj-$(CONFIG_AMDTEE) += amdtee/ obj-$(CONFIG_ARM_TSTEE) += tstee/ diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index d113679b1e2d..f4a45b77753b 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -1,12 +1,13 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * Copyright (c) 2015-2016, Linaro Limited + * Copyright (c) 2015-2022, 2024, Linaro Limited */ #define pr_fmt(fmt) "%s: " fmt, __func__ #include #include +#include #include #include #include @@ -815,6 +816,38 @@ static int tee_ioctl_supp_send(struct tee_context *ctx, return rc; } +static int +tee_ioctl_rstmem_alloc(struct tee_context *ctx, + struct tee_ioctl_rstmem_alloc_data __user *udata) +{ + struct tee_ioctl_rstmem_alloc_data data; + struct dma_buf *dmabuf; + int id; + int fd; + + if (copy_from_user(&data, udata, sizeof(data))) + return -EFAULT; + + if (data.use_case == TEE_IOC_UC_RESERVED) + return -EINVAL; + + dmabuf = tee_rstmem_alloc(ctx, data.flags, data.use_case, data.size, + &id); + if (IS_ERR(dmabuf)) + return PTR_ERR(dmabuf); + if (put_user(id, &udata->id)) { + fd = -EFAULT; + goto err; + } + fd = dma_buf_fd(dmabuf, O_CLOEXEC); + if (fd < 0) + goto err; + return fd; +err: + dma_buf_put(dmabuf); + return fd; +} + static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) { struct tee_context *ctx = filp->private_data; @@ -839,6 +872,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return tee_ioctl_supp_recv(ctx, uarg); case TEE_IOC_SUPPL_SEND: return tee_ioctl_supp_send(ctx, uarg); + case TEE_IOC_RSTMEM_ALLOC: + return tee_ioctl_rstmem_alloc(ctx, uarg); default: return -EINVAL; } @@ -1286,3 +1321,4 @@ MODULE_AUTHOR("Linaro"); MODULE_DESCRIPTION("TEE Driver"); MODULE_VERSION("1.0"); MODULE_LICENSE("GPL v2"); +MODULE_IMPORT_NS("DMA_BUF"); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 9bc50605227c..bf97796909c0 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -23,5 +23,7 @@ void teedev_ctx_put(struct tee_context *ctx); struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +struct dma_buf *tee_rstmem_alloc(struct tee_context *ctx, u32 flags, + u32 use_case, size_t size, int *shm_id); #endif /*TEE_PRIVATE_H*/ diff --git a/drivers/tee/tee_rstmem.c b/drivers/tee/tee_rstmem.c new file mode 100644 index 000000000000..3b27594ec30b --- /dev/null +++ b/drivers/tee/tee_rstmem.c @@ -0,0 +1,180 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2024 Linaro Limited + */ +#include +#include +#include +#include +#include +#include +#include "tee_private.h" + +struct tee_rstmem_attachment 
{ + struct sg_table table; + struct device *dev; +}; + +static int rstmem_dma_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_shm *shm = dmabuf->priv; + struct tee_rstmem_attachment *a; + int rc; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + if (shm->pages) { + rc = sg_alloc_table_from_pages(&a->table, shm->pages, + shm->num_pages, 0, + shm->num_pages * PAGE_SIZE, + GFP_KERNEL); + if (rc) + goto err; + } else { + rc = sg_alloc_table(&a->table, 1, GFP_KERNEL); + if (rc) + goto err; + sg_set_page(a->table.sgl, phys_to_page(shm->paddr), shm->size, + 0); + } + + a->dev = attachment->dev; + attachment->priv = a; + + return 0; +err: + kfree(a); + return rc; +} + +static void rstmem_dma_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_rstmem_attachment *a = attachment->priv; + + sg_free_table(&a->table); + kfree(a); +} + +static struct sg_table * +rstmem_dma_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct tee_rstmem_attachment *a = attachment->priv; + int ret; + + ret = dma_map_sgtable(attachment->dev, &a->table, direction, + DMA_ATTR_SKIP_CPU_SYNC); + if (ret) + return ERR_PTR(ret); + + return &a->table; +} + +static void rstmem_dma_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct tee_rstmem_attachment *a = attachment->priv; + + WARN_ON(&a->table != table); + + dma_unmap_sgtable(attachment->dev, table, direction, + DMA_ATTR_SKIP_CPU_SYNC); +} + +static void rstmem_dma_buf_free(struct dma_buf *dmabuf) +{ + struct tee_shm *shm = dmabuf->priv; + + tee_shm_put(shm); +} + +static const struct dma_buf_ops rstmem_generic_buf_ops = { + .attach = rstmem_dma_attach, + .detach = rstmem_dma_detach, + .map_dma_buf = rstmem_dma_map_dma_buf, + .unmap_dma_buf = rstmem_dma_unmap_dma_buf, + .release = rstmem_dma_buf_free, +}; + +struct dma_buf *tee_rstmem_alloc(struct tee_context *ctx, u32 flags, + u32 use_case, size_t size, int *shm_id) +{ + struct tee_device *teedev = ctx->teedev; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct dma_buf *dmabuf; + struct tee_shm *shm; + void *ret; + int rc; + + if (!tee_device_get(teedev)) + return ERR_PTR(-EINVAL); + + if (!teedev->desc->ops->rstmem_alloc || + !teedev->desc->ops->rstmem_free) { + dmabuf = ERR_PTR(-EINVAL); + goto err; + } + + shm = kzalloc(sizeof(*shm), GFP_KERNEL); + if (!shm) { + dmabuf = ERR_PTR(-ENOMEM); + goto err; + } + + refcount_set(&shm->refcount, 1); + shm->flags = TEE_SHM_RESTRICTED; + shm->use_case = use_case; + shm->ctx = ctx; + + mutex_lock(&teedev->mutex); + shm->id = idr_alloc(&teedev->idr, NULL, 1, 0, GFP_KERNEL); + mutex_unlock(&teedev->mutex); + if (shm->id < 0) { + dmabuf = ERR_PTR(shm->id); + goto err_kfree; + } + + rc = teedev->desc->ops->rstmem_alloc(ctx, shm, flags, use_case, size); + if (rc) { + dmabuf = ERR_PTR(rc); + goto err_idr_remove; + } + + mutex_lock(&teedev->mutex); + ret = idr_replace(&teedev->idr, shm, shm->id); + mutex_unlock(&teedev->mutex); + if (IS_ERR(ret)) { + dmabuf = ret; + goto err_rstmem_free; + } + teedev_ctx_get(ctx); + + exp_info.ops = &rstmem_generic_buf_ops; + exp_info.size = shm->size; + exp_info.priv = shm; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + tee_shm_put(shm); + return dmabuf; + } + + *shm_id = shm->id; + return dmabuf; + +err_rstmem_free: + teedev->desc->ops->rstmem_free(ctx, shm); +err_idr_remove: + mutex_lock(&teedev->mutex); + 
idr_remove(&teedev->idr, shm->id); + mutex_unlock(&teedev->mutex); +err_kfree: + kfree(shm); +err: + tee_device_put(teedev); + return dmabuf; +} diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index daf6e5cfd59a..416f7f25d885 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -55,6 +55,8 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) "unregister shm %p failed: %d", shm, rc); release_registered_pages(shm); + } else if (shm->flags & TEE_SHM_RESTRICTED) { + teedev->desc->ops->rstmem_free(shm->ctx, shm); } teedev_ctx_put(shm->ctx); diff --git a/drivers/tee/tee_shm_pool.c b/drivers/tee/tee_shm_pool.c index 80004b55628d..ee57ef157a77 100644 --- a/drivers/tee/tee_shm_pool.c +++ b/drivers/tee/tee_shm_pool.c @@ -1,9 +1,8 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * Copyright (c) 2015, 2017, 2022 Linaro Limited + * Copyright (c) 2015, 2017, 2022, 2024 Linaro Limited */ #include -#include #include #include #include @@ -90,3 +89,69 @@ struct tee_shm_pool *tee_shm_pool_alloc_res_mem(unsigned long vaddr, return ERR_PTR(rc); } EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem); + +static int rstmem_pool_op_gen_alloc(struct tee_shm_pool *pool, + struct tee_shm *shm, size_t size, + size_t align) +{ + size_t sz = ALIGN(size, PAGE_SIZE); + phys_addr_t pa; + + pa = gen_pool_alloc(pool->private_data, sz); + if (!pa) + return -ENOMEM; + + shm->size = sz; + shm->paddr = pa; + + return 0; +} + +static void rstmem_pool_op_gen_free(struct tee_shm_pool *pool, + struct tee_shm *shm) +{ + gen_pool_free(pool->private_data, shm->paddr, shm->size); + shm->paddr = 0; +} + +static struct tee_shm_pool_ops rstmem_pool_ops_generic = { + .alloc = rstmem_pool_op_gen_alloc, + .free = rstmem_pool_op_gen_free, + .destroy_pool = pool_op_gen_destroy_pool, +}; + +struct tee_shm_pool *tee_rstmem_gen_pool_alloc(phys_addr_t paddr, size_t size) +{ + const size_t page_mask = PAGE_SIZE - 1; + struct tee_shm_pool *pool; + int rc; + + /* Check it's page aligned */ + if ((paddr | size) & page_mask) + return ERR_PTR(-EINVAL); + + pool = kzalloc(sizeof(*pool), GFP_KERNEL); + if (!pool) + return ERR_PTR(-ENOMEM); + + pool->private_data = gen_pool_create(PAGE_SHIFT, -1); + if (!pool->private_data) { + rc = -ENOMEM; + goto err_free; + } + + rc = gen_pool_add(pool->private_data, paddr, size, -1); + if (rc) + goto err_free_pool; + + pool->ops = &rstmem_pool_ops_generic; + return pool; + +err_free_pool: + gen_pool_destroy(pool->private_data); +err_free: + kfree(pool); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_rstmem_gen_pool_alloc); diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index a38494d6b5f4..608302f494fe 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -26,6 +26,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_RESTRICTED BIT(4) /* Restricted memory */ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 @@ -76,6 +77,8 @@ struct tee_device { * @supp_send: called for supplicant to send a response * @shm_register: register shared memory buffer in TEE * @shm_unregister: unregister shared memory buffer in TEE + * @rstmem_alloc: allocate restricted memory + * @rstmem_free: free restricted memory */ struct tee_driver_ops { void (*get_version)(struct tee_device *teedev, @@ -99,6 +102,9 @@ struct tee_driver_ops { struct page **pages, size_t num_pages, unsigned long 
start); int (*shm_unregister)(struct tee_context *ctx, struct tee_shm *shm); + int (*rstmem_alloc)(struct tee_context *ctx, struct tee_shm *shm, + u32 flags, u32 use_case, size_t size); + void (*rstmem_free)(struct tee_context *ctx, struct tee_shm *shm); }; /** @@ -229,6 +235,15 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool) pool->ops->destroy_pool(pool); } +/** + * tee_rstmem_gen_pool_alloc() - Create a restricted memory manager + * @paddr: Physical address of start of pool + * @size: Size in bytes of the pool + * + * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure. + */ +struct tee_shm_pool *tee_rstmem_gen_pool_alloc(phys_addr_t paddr, size_t size); + /** * tee_get_drvdata() - Return driver_data pointer * @returns the driver_data pointer supplied to tee_register(). diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index a54c203000ed..cba067715d14 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -55,6 +55,7 @@ struct tee_context { * @pages: locked pages from userspace * @num_pages: number of locked pages * @refcount: reference counter + * @use_case: defined by TEE_IOC_UC_* in tee.h * @flags: defined by TEE_SHM_* in tee_core.h * @id: unique id of a shared memory object on this device, shared * with user space @@ -71,6 +72,7 @@ struct tee_shm { struct page **pages; size_t num_pages; refcount_t refcount; + u32 use_case; u32 flags; int id; u64 sec_world_id; diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index d0430bee8292..88834448debb 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2015-2016, Linaro Limited + * Copyright (c) 2015-2017, 2020, 2024, Linaro Limited * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -48,6 +48,7 @@ #define TEE_GEN_CAP_PRIVILEGED (1 << 1)/* Privileged device (for supplicant) */ #define TEE_GEN_CAP_REG_MEM (1 << 2)/* Supports registering shared memory */ #define TEE_GEN_CAP_MEMREF_NULL (1 << 3)/* NULL MemRef support */ +#define TEE_GEN_CAP_RSTMEM (1 << 4)/* Supports restricted memory */ #define TEE_MEMREF_NULL (__u64)(-1) /* NULL MemRef Buffer */ @@ -389,6 +390,47 @@ struct tee_ioctl_shm_register_data { */ #define TEE_IOC_SHM_REGISTER _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 9, \ struct tee_ioctl_shm_register_data) + +#define TEE_IOC_UC_RESERVED 0 +#define TEE_IOC_UC_SECURE_VIDEO_PLAY 1 +#define TEE_IOC_UC_TRUSTED_UI 2 +#define TEE_IOC_UC_SECURE_VIDEO_RECORD 3 + +/** + * struct tee_ioctl_rstmem_alloc_data - Restricted memory allocate argument + * @size: [in/out] Size of restricted memory to allocate + * @flags: [in/out] Flags to/from allocate + * @use_case [in] Restricted memory use case, TEE_IOC_UC_* + * @id: [out] Identifier of the restricted memory + */ +struct tee_ioctl_rstmem_alloc_data { + __u64 size; + __u32 flags; + __u32 use_case; + __s32 id; +}; + +/** + * TEE_IOC_RSTMEM_ALLOC - allocate restricted memory + * + * Allocates restricted physically memory normally not accessible by the + * kernel. + * + * Restricted memory refers to memory buffers behind a hardware enforced + * firewall. It is not accessible to the kernel during normal circumstances + * but rather only accessible to certain hardware IPs or CPUs executing in + * higher privileged mode than the kernel itself. This interface allows to + * allocate and manage such restricted memory buffers via interaction with + * a TEE implementation. 
+ * + * Returns a file descriptor on success or < 0 on failure + * + * The returned file descriptor is a dma-buf that can be attached and + * mapped for device with permission to access the physical memory. + */ +#define TEE_IOC_RSTMEM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 10, \ + struct tee_ioctl_rstmem_alloc_data) + /* * Five syscalls are used when communicating with the TEE driver. * open(): opens the device associated with the driver From patchwork Tue Feb 18 14:34:51 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980107 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 53359C021AA for ; Tue, 18 Feb 2025 14:35:43 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BDA9010E0A3; Tue, 18 Feb 2025 14:35:42 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="jq4JNSyD"; dkim-atps=neutral Received: from mail-lf1-f49.google.com (mail-lf1-f49.google.com [209.85.167.49]) by gabe.freedesktop.org (Postfix) with ESMTPS id 60D4910E0A3 for ; Tue, 18 Feb 2025 14:35:41 +0000 (UTC) Received: by mail-lf1-f49.google.com with SMTP id 2adb3069b0e04-545284eac3bso4737150e87.0 for ; Tue, 18 Feb 2025 06:35:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1739889340; x=1740494140; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=5crKMExrr5c3Q5dXwzY6LzWcqdtMg4U+Mqhx5XRVtIk=; b=jq4JNSyDEVZEvCQ52RNh6fL1cjDqhp8slmKFubsl0Qx4PJQebeR+snaLWKqWWgqgsC x75XCPB39L5qNWZrF/v1/kYrcrQRRaVAvjpJHr1dH7vVrfoKkwCOA3e1fl7W50gN/eOj g4WOVngqdD5ugBZjO+HewlKwUb20Wep+xgm0dxzuCODlVYcbU5NULnLnuI49bHOd6JgK 9qTjYV5SW08DeD2eVYrEAUINj7s/viFngwlLGwEPUs/bSEkTxEnP3lH3BYlwWy9dc0As sf8rCXNgTPj7OqD2dSNvNfzhRqV+05Fe1Pq+Eo/cAxMl3GcZlwHv8oUiHcOTbg1YsulA pLWQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1739889340; x=1740494140; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=5crKMExrr5c3Q5dXwzY6LzWcqdtMg4U+Mqhx5XRVtIk=; b=n3pCr1bjMqWwkL1oGUL1y00koChkR6WovSs0YUi+n4p7XKO3jRSr8/80vTw2Lx6bqN n5UeSrAadSB0JAANUxOZGZ/gNfldmsibrDTf5nV02QmWIX6zBHIY/LlARxYmfoHzAVLD qNTOEtNyqWzyjMW6x0FDBCAIV4NWxmU5ex6zi/Zfsiuv6DDNmHED1dUH0fML72ojbXIY 4i3GWxrMXbwe0vHqS34wwtZ24pd/nbMJI9k/0hRQhuaOQMQQBam2+qBctLaqH3u755R2 Rzj5hcOblYfduOw8ZacqkkJ/V936kQsSPCGYYh3kWcwTrnW+RxCE6CKxdgMA7e10cYAG mlbA== X-Forwarded-Encrypted: i=1; AJvYcCXYzX2CZMGL852+NuxXZGF9uUul2q6lHLC2PHbf4/S9jwJ280uDSMp4/TBczOG7FP6s3B0ptDqqFzI=@lists.freedesktop.org X-Gm-Message-State: AOJu0Yw3SfuV9fRfWiVSjwalHYXCxVIqvCDa+GPq4ToexBNUPTExeV0k t3FJh3efByTjDRArWfNYwtjUbFt/xQKViv7OhiOzXT9yXROUHRTbcVcNLvwyu6I= X-Gm-Gg: ASbGnctL8al8Pgk981GQepn+jWSFd+iSQs9SpUHZnbYjOIZbAOgdSi9XuRdwKj64ZI3 OEmdCBDiFz5Vy7k5/lF09Xoy/6AL2cCHUVOj+LyFQvrTHiWlabbttoMsEEnu04tbCYGeoIS169/ 
NMpiSrQA8sMPSaESCUEKIHn3GzsrmXcnfVDinzzpEvVtgAkGfyX0/5f3bR9aIV4xLaivkeIWBz5 x1ScTM6KIxmUTCwYW0nEeTpJkeOvnOj1Gg8BQfd3wPTOfoALRqPCEXO+btadM3PnbZlwK2L099e nPVam5BVKNosGamDTm6f8xnpLKeOr3EHcmdICH9XcSuiRq/WXEwfkURaW3VLJsfOQmTy X-Google-Smtp-Source: AGHT+IGHuwCuQFYm4XQIVFkYY86WrwdSnRz+8agLRLCIecYLWINXh1qm1vFaOFrES19B9Zbg/l5wSA== X-Received: by 2002:a05:6512:1092:b0:545:a1a:556b with SMTP id 2adb3069b0e04-5452fdb9988mr5179716e87.0.1739889339564; Tue, 18 Feb 2025 06:35:39 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 38308e7fff4ca-309311777a8sm12360831fa.25.2025.02.18.06.35.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 18 Feb 2025 06:35:38 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 2/7] tee: add TEE_IOC_RSTMEM_FD_INFO Date: Tue, 18 Feb 2025 15:34:51 +0100 Message-ID: <20250218143527.1236668-3-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add TEE_IOC_RSTMEM_FD_INFO to retrieve information about a previously allocated restricted memory dma-buf file descriptor. This is needed if the file descriptor from a restricted memory allocation has been saved due to limitations in the application. 
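For illustration only (not part of this patch), a minimal userspace sketch of querying a saved restricted memory dma-buf file descriptor; the helper name and the already-open TEE device fd are assumptions:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

/* teefd: open TEE device, dmabuf_fd: fd saved from TEE_IOC_RSTMEM_ALLOC */
int print_rstmem_info(int teefd, int dmabuf_fd)
{
	struct tee_ioctl_rstmem_fd_info info;

	memset(&info, 0, sizeof(info));
	info.fd = dmabuf_fd;

	if (ioctl(teefd, TEE_IOC_RSTMEM_FD_INFO, &info) < 0)
		return -1;

	printf("use_case %u id %d size %llu flags %#x\n",
	       info.use_case, info.id, (unsigned long long)info.size,
	       info.flags);
	return 0;
}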
Signed-off-by: Jens Wiklander --- drivers/tee/tee_core.c | 31 +++++++++++++++++++++++++++++++ drivers/tee/tee_private.h | 2 ++ drivers/tee/tee_rstmem.c | 8 ++++++++ include/uapi/linux/tee.h | 27 +++++++++++++++++++++++++++ 4 files changed, 68 insertions(+) diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index f4a45b77753b..01a2a9513578 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -848,6 +848,35 @@ tee_ioctl_rstmem_alloc(struct tee_context *ctx, return fd; } +static int +tee_ioctl_rstmem_fd_info(struct tee_context *ctx, + struct tee_ioctl_rstmem_fd_info __user *udata) +{ + struct tee_ioctl_rstmem_fd_info data; + struct dma_buf *dmabuf; + struct tee_shm *shm; + + if (copy_from_user(&data, udata, sizeof(data))) + return -EFAULT; + + dmabuf = dma_buf_get(data.fd); + if (IS_ERR(dmabuf)) + return PTR_ERR(dmabuf); + shm = tee_rstmem_dmabuf_to_shm(ctx, dmabuf); + if (!IS_ERR(shm)) { + data.flags = 0; + data.use_case = shm->use_case; + data.id = shm->id; + data.size = shm->size; + } + dma_buf_put(dmabuf); + if (IS_ERR(shm)) + return PTR_ERR(shm); + if (copy_to_user(udata, &data, sizeof(data))) + return -EFAULT; + return 0; +} + static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) { struct tee_context *ctx = filp->private_data; @@ -874,6 +903,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return tee_ioctl_supp_send(ctx, uarg); case TEE_IOC_RSTMEM_ALLOC: return tee_ioctl_rstmem_alloc(ctx, uarg); + case TEE_IOC_RSTMEM_FD_INFO: + return tee_ioctl_rstmem_fd_info(ctx, uarg); default: return -EINVAL; } diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index bf97796909c0..b076089b2512 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -25,5 +25,7 @@ struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); struct dma_buf *tee_rstmem_alloc(struct tee_context *ctx, u32 flags, u32 use_case, size_t size, int *shm_id); +struct tee_shm *tee_rstmem_dmabuf_to_shm(struct tee_context *ctx, + struct dma_buf *dmabuf); #endif /*TEE_PRIVATE_H*/ diff --git a/drivers/tee/tee_rstmem.c b/drivers/tee/tee_rstmem.c index 3b27594ec30b..5108772f3ca0 100644 --- a/drivers/tee/tee_rstmem.c +++ b/drivers/tee/tee_rstmem.c @@ -178,3 +178,11 @@ struct dma_buf *tee_rstmem_alloc(struct tee_context *ctx, u32 flags, tee_device_put(teedev); return dmabuf; } + +struct tee_shm *tee_rstmem_dmabuf_to_shm(struct tee_context *ctx, + struct dma_buf *dmabuf) +{ + if (dmabuf->ops != &rstmem_generic_buf_ops) + return ERR_PTR(-EINVAL); + return dmabuf->priv; +} diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index 88834448debb..30ab5bd80a55 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -431,6 +431,33 @@ struct tee_ioctl_rstmem_alloc_data { #define TEE_IOC_RSTMEM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 10, \ struct tee_ioctl_rstmem_alloc_data) +/** + * struct tee_ioctl_rstmem_fd_info - Restricted memory information + * @fd: [in] File descriptor returned from the previous allocation + * @flags: [out] Flags from the allocation + * @use_case: [out] Restricted memory use case, TEE_IOC_UC_* + * @id: [out] Identifier of the restricted memory + * @size: [out] Size of the restricted memory + */ +struct tee_ioctl_rstmem_fd_info { + __s32 fd; + __u32 flags; + __u32 use_case; + __s32 id; + __u64 size; +}; + +/** + * TEE_IOC_RSTMEM_FD_INFO - get restricted memory information from an fd + * + * Returns information about a previously allocated 
restricted memory + * dma-buf file descriptor. + * + * Returns 0 on success or < 0 on failure + */ +#define TEE_IOC_RSTMEM_FD_INFO _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 11, \ + struct tee_ioctl_rstmem_fd_info) + /* * Five syscalls are used when communicating with the TEE driver. * open(): opens the device associated with the driver From patchwork Tue Feb 18 14:34:52 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980109 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 230E5C021AD for ; Tue, 18 Feb 2025 14:35:47 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 25E8E10E6DE; Tue, 18 Feb 2025 14:35:46 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="ctk2MLQ6"; dkim-atps=neutral Received: from mail-lj1-f175.google.com (mail-lj1-f175.google.com [209.85.208.175]) by gabe.freedesktop.org (Postfix) with ESMTPS id 9879410E6F3 for ; Tue, 18 Feb 2025 14:35:44 +0000 (UTC) Received: by mail-lj1-f175.google.com with SMTP id 38308e7fff4ca-30795988ebeso57843331fa.3 for ; Tue, 18 Feb 2025 06:35:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1739889343; x=1740494143; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yCOqlw5SJhExh29sVZ94GbjzSF1NVDuk8gA2hGj4wos=; b=ctk2MLQ68juiAN7hYJewTIwMXbuDyoUHKFSUwbrp8DP5Ty3ySkDZwkUSCV0raPcmSe caKL4LRjoArVP9pJbHUKVjXR3JTcS7mvn5Fw16cLOk9BEBcN3A5k8mxrZrkAW9UOwxa7 yMigCj5bmn/VxRGwkGowV1Yk1505cwUNJKr4vt+4MgtO0xX5/7EvnC3U9T3Oh+F3TPHa hqaL4qGXEDq4ejzothO8jUZwL1PFszsyZFdzLudpl5cg0wWWiOVn3af4uRBjdgdesdJW HYy7hc2ha2uxnWzXnj79hu3BsEVz9xsTtB08dyZNy9Oqj5p/58YWwisRpuyFLe7P9QS6 l8/A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1739889343; x=1740494143; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yCOqlw5SJhExh29sVZ94GbjzSF1NVDuk8gA2hGj4wos=; b=eq/pk5nkp0dVBmHiTXVWlDgVlweKwnwVtC6sk6xe4Wnx4MggxHF4su8nLPFOzE0mx+ WldiRNeTUV4wq+USXbPjgn9u2GZ+/liJX1EK1lW82UF5L4nlr2lREHbDe2899tjyi2eQ P7/tWTUg3Boos3vz0O0ckZVz+Q67a3K5nSrPNwafAjsb6SzT5IsJcyGoei6s5pqJThAs XR0FfJ0ky7/XI+odomhz0DRkKyMmvyYd+0w5Z62ldtFwt8BClFiVNkp7F+uKF2Fto63T zlyqi7ykiFPbhxTMDDKwcrjkttHWMLqmn9rpgmuE2ly4nXkH5zVLDUaJiIzK+i4IMTTI sT0Q== X-Forwarded-Encrypted: i=1; AJvYcCUzIqPxK0kQI3bafnKTXUyY2nVEZRDZgygFMMT4kXhNo26gakkqi99ZLVZ9Ai2nOQehGr0OlTPEshg=@lists.freedesktop.org X-Gm-Message-State: AOJu0YxrZJyI6rsCUagov+JW0eNeYaBh3Oqec9+wk4dN2WUYcM/wf12u k1BAKPV6cVBRO7yGUtXgfXM2abPkL0G/tREcJ2hc7bcCqSPMbTe3DeX3P31BA8s= X-Gm-Gg: ASbGnctLnFURn4a1R0moNVX8KNi0FivSpeaGnuRgNHJbOgE+j8MPSXD4B6Q535usi/p L1+hEPzMwNt04NJJ5rN2cmxsgR/6chOAnvxwgA97WdDALrnTzZMO+jZdML3hY2p0Y0nZ0ioIeZH hCGD0dYtyFPmkbj+BTS/kl58DS8InP8ps6vIgTAYg1RZ57C4Iia9/QM0JXlJmvsbCRvtOIrFR2y e0HLWFPH80NWJHk6RDd6DuIj1WK2xGyErqZQH47psIFhnLbPbDtcSmCN3k8rkT6TsSx8a4Bq0m0 
mSH/vTI/pIrKrm/kBNEj43F5NHn2XoLSJOgjFvKD27DGp+CohTiuKeHRGuiECHhg8cjk X-Google-Smtp-Source: AGHT+IHTqLXj8KrQJfUcqjPqa2+78JW5Su5sszooT/EHqLfvzSpgRY3reR5nug5n9thcwrL6s+iQAA== X-Received: by 2002:a2e:9f54:0:b0:2fa:d2c3:a7e8 with SMTP id 38308e7fff4ca-30927a474a1mr39522581fa.13.1739889342768; Tue, 18 Feb 2025 06:35:42 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 38308e7fff4ca-309311777a8sm12360831fa.25.2025.02.18.06.35.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 18 Feb 2025 06:35:41 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 3/7] optee: account for direction while converting parameters Date: Tue, 18 Feb 2025 15:34:52 +0100 Message-ID: <20250218143527.1236668-4-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param. The conversion is done from one of the types to the other, which is then involved in some operation and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but then converting back, it's enough to update only out-values and out-sizes. So, an update_out parameter is added to the conversion functions to tell if all or only some fields must be copied. This is needed in a later patch where it might get confusing when converting back in from_msg_param() callback since an allocated restricted SHM can be using the sec_world_id of the used restricted memory pool and that doesn't translate back well. 
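For illustration only (not part of this patch), a simplified sketch of the resulting calling convention around a secure world call; the helper name is made up and the actual call is elided:

static int convert_round_trip(struct optee *optee,
			      struct optee_msg_param *msg_params,
			      struct tee_param *params, size_t num_params)
{
	int rc;

	/* Going in: attributes, values and memrefs are all filled in */
	rc = optee->ops->to_msg_param(optee, msg_params, num_params, params,
				      false /*!update_out*/);
	if (rc)
		return rc;

	/* ... the call to secure world happens here ... */

	/* Coming back: only out-values and out-sizes are updated */
	return optee->ops->from_msg_param(optee, params, num_params,
					  msg_params, true /*update_out*/);
}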
Signed-off-by: Jens Wiklander --- drivers/tee/optee/call.c | 10 ++-- drivers/tee/optee/ffa_abi.c | 43 +++++++++++++---- drivers/tee/optee/optee_private.h | 42 +++++++++++------ drivers/tee/optee/rpc.c | 31 +++++++++---- drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++-------- 5 files changed, 144 insertions(+), 58 deletions(-) diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index 16eb953e14bb..f1533b894726 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, export_uuid(msg_arg->params[1].u.octets, &client_uuid); rc = optee->ops->to_msg_param(optee, msg_arg->params + 2, - arg->num_params, param); + arg->num_params, param, + false /*!update_out*/); if (rc) goto out; @@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, } if (optee->ops->from_msg_param(optee, param, arg->num_params, - msg_arg->params + 2)) { + msg_arg->params + 2, + true /*update_out*/)) { arg->ret = TEEC_ERROR_COMMUNICATION; arg->ret_origin = TEEC_ORIGIN_COMMS; /* Close session again to avoid leakage */ @@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, msg_arg->cancel_id = arg->cancel_id; rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params, - param); + param, false /*!update_out*/); if (rc) goto out; @@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, } if (optee->ops->from_msg_param(optee, param, arg->num_params, - msg_arg->params)) { + msg_arg->params, true /*update_out*/)) { msg_arg->ret = TEEC_ERROR_COMMUNICATION; msg_arg->ret_origin = TEEC_ORIGIN_COMMS; } diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index f3af5666bb11..02e6175ac5f0 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, - u32 attr, const struct optee_msg_param *mp) + u32 attr, const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT; - p->u.memref.size = mp->u.fmem.size; if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id); @@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out: + p->u.memref.size = mp->u.fmem.size; } /** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * * Returns 0 on success or <0 on failure */ static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { size_t n; @@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = 
TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT: - from_msg_param_ffa_mem(optee, p, attr, mp); + from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL; @@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, } static int to_msg_param_ffa_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { struct tee_shm *shm = p->u.memref.shm; + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; @@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size; return 0; @@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, * @optee: main service struct * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays - * @params: subsystem itnernal parameter representation + * @params: subsystem internal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params, - const struct tee_param *params) + const struct tee_param *params, + bool update_out) { size_t n; @@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: - if (to_msg_param_ffa_mem(mp, p)) + if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default: diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index dc0f355ef72a..20eda508dbac 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -185,10 +185,12 @@ struct optee_ops { bool system_thread); int (*to_msg_param)(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params); + size_t num_params, const struct tee_param *params, + bool update_out); int (*from_msg_param)(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params); + const struct optee_msg_param *msg_params, + bool update_out); }; /** @@ -316,23 +318,35 @@ void optee_release(struct tee_context *ctx); void optee_release_supp(struct tee_context *ctx); static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { - p->attr = 
TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + - attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; - p->u.value.a = mp->u.value.a; - p->u.value.b = mp->u.value.b; - p->u.value.c = mp->u.value.c; + if (!update_out) + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + + attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + + if (attr == OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT || + attr == OPTEE_MSG_ATTR_TYPE_VALUE_INOUT || !update_out) { + p->u.value.a = mp->u.value.a; + p->u.value.b = mp->u.value.b; + p->u.value.c = mp->u.value.c; + } } static inline void optee_to_msg_param_value(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, + bool update_out) { - mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - - TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; - mp->u.value.a = p->u.value.a; - mp->u.value.b = p->u.value.b; - mp->u.value.c = p->u.value.c; + if (!update_out) + mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - + TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || + p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT || !update_out) { + mp->u.value.a = p->u.value.a; + mp->u.value.b = p->u.value.b; + mp->u.value.c = p->u.value.c; + } } void optee_cq_init(struct optee_call_queue *cq, int thread_count); diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c index ebbbd42b0e3e..580e6b9b0606 100644 --- a/drivers/tee/optee/rpc.c +++ b/drivers/tee/optee/rpc.c @@ -63,7 +63,7 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) + arg->params, false /*!update_out*/)) goto bad; for (i = 0; i < arg->num_params; i++) { @@ -107,7 +107,8 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } else { params[3].u.value.a = msg.len; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) + arg->num_params, params, + true /*update_out*/)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; else arg->ret = TEEC_SUCCESS; @@ -188,6 +189,7 @@ static void handle_rpc_func_cmd_wait(struct optee_msg_arg *arg) static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, struct optee_msg_arg *arg) { + bool update_out = false; struct tee_param *params; arg->ret_origin = TEEC_ORIGIN_COMMS; @@ -200,15 +202,21 @@ static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, } if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) { + arg->params, update_out)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } arg->ret = optee_supp_thrd_req(ctx, arg->cmd, arg->num_params, params); + /* + * Special treatment for OPTEE_RPC_CMD_SHM_ALLOC since input is a + * value type, but the output is a memref type. 
+ */ + if (arg->cmd != OPTEE_RPC_CMD_SHM_ALLOC) + update_out = true; if (optee->ops->to_msg_param(optee, arg->params, arg->num_params, - params)) + params, update_out)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; out: kfree(params); @@ -270,7 +278,7 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; @@ -280,7 +288,8 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, params[0].u.value.b = 0; params[0].u.value.c = 0; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -324,7 +333,7 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -358,7 +367,8 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, params[0].u.value.b = rdev->descr.capacity; params[0].u.value.c = rdev->descr.reliable_wr_count; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -384,7 +394,7 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -401,7 +411,8 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, goto out; } if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index f0c3ac1103bb..e5b190d64a49 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -81,20 +81,26 @@ static int optee_cpuhp_disable_pcpu_irq(unsigned int cpu) */ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; phys_addr_t pa; int rc; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_TMEM_INPUT) + return 0; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; - p->u.memref.size = mp->u.tmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref; if (!shm) { p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; - return 0; + goto out; } rc = tee_shm_get_pa(shm, 0, &pa); @@ -103,18 +109,25 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa; p->u.memref.shm = shm; - +out: + p->u.memref.size = mp->u.tmem.size; return 0; } static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) 
+ const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_RMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; - p->u.memref.size = mp->u.rmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref; if (shm) { @@ -124,6 +137,8 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; } +out: + p->u.memref.size = mp->u.rmem.size; } /** @@ -133,11 +148,13 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { int rc; size_t n; @@ -149,25 +166,27 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT: - rc = from_msg_param_tmp_mem(p, attr, mp); + rc = from_msg_param_tmp_mem(p, attr, mp, update_out); if (rc) return rc; break; case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT: - from_msg_param_reg_mem(p, attr, mp); + from_msg_param_reg_mem(p, attr, mp, update_out); break; default: @@ -178,20 +197,25 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, } static int to_msg_param_tmp_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { int rc; phys_addr_t pa; + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.tmem.size = p->u.memref.size; if (!p->u.memref.shm) { mp->u.tmem.buf_ptr = 0; - return 0; + goto out; } rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa); @@ -201,19 +225,27 @@ static int to_msg_param_tmp_mem(struct optee_msg_param *mp, mp->u.tmem.buf_ptr = pa; mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED << OPTEE_MSG_ATTR_CACHE_SHIFT; - +out: + mp->u.tmem.size = p->u.memref.size; return 0; } static int to_msg_param_reg_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.rmem.size = p->u.memref.size; mp->u.rmem.offs = p->u.memref.shm_offs; +out: + mp->u.rmem.size = p->u.memref.size; return 0; } @@ -223,11 +255,13 @@ static int 
to_msg_param_reg_mem(struct optee_msg_param *mp, * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays * @params: subsystem itnernal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params) + size_t num_params, const struct tee_param *params, + bool update_out) { int rc; size_t n; @@ -238,21 +272,23 @@ static int optee_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (tee_shm_is_dynamic(p->u.memref.shm)) - rc = to_msg_param_reg_mem(mp, p); + rc = to_msg_param_reg_mem(mp, p, update_out); else - rc = to_msg_param_tmp_mem(mp, p); + rc = to_msg_param_tmp_mem(mp, p, update_out); if (rc) return rc; break; From patchwork Tue Feb 18 14:34:53 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980111 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6A209C021AD for ; Tue, 18 Feb 2025 14:35:52 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BB4F610E6F4; Tue, 18 Feb 2025 14:35:51 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="UObQ1mNn"; dkim-atps=neutral Received: from mail-lj1-f169.google.com (mail-lj1-f169.google.com [209.85.208.169]) by gabe.freedesktop.org (Postfix) with ESMTPS id 05A9F10E6F3 for ; Tue, 18 Feb 2025 14:35:47 +0000 (UTC) Received: by mail-lj1-f169.google.com with SMTP id 38308e7fff4ca-30737db1aa9so52552051fa.1 for ; Tue, 18 Feb 2025 06:35:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1739889345; x=1740494145; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=nSDQWd9UZqV/STl4BJjbppPEGE1hH0ufO/G2aXdmVa4=; b=UObQ1mNn53VcEFeyVIjtOcOSA/cumSjTUqii+GKStCYKd1aid5c/XUbDO8x0Yk85Nd hFFALkqg9qRTxhemwi8IgKL2tAvWgCsBXkvVWgsekoe6LIwYS9PrBhAQXbkBDoKhRve3 7ykSZ/rzR3kcCENSNjK2E3MORkjBTgjqoZDeh2xRF+vqOPV1doXivFi+WbSOUdwl68HZ j37RyA+1jhs3lRkan9stA8tW/kzIFdIvB8dia8/7MPh6mLUq7Ntx3fq6FLrvFwSv2jpq hW3jbMZJVSQtoCOMiV2t5fVP1BtrVJQQBrWZyqFMaHXTyxmsCGBNOpzm7t84UWV0xXlA fySQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1739889345; x=1740494145; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=nSDQWd9UZqV/STl4BJjbppPEGE1hH0ufO/G2aXdmVa4=; b=P6F6BWNGY/pjJ4JGimVyQWdORMzPGrPX02g2S2hMuVad9XYIGrXrLkdDgM0ady+0BB WkTwuM//VqmGuIkI4E6rDg7oQkxlQ5IIsn7qQcSHvSGNleoMKIjTyDxMl1FsLa2LWQXL vsMOY8P7MygNx0aVhhqGCfEFK5RKYAkvUv11dYFZefXLyxLoQ36oqRiSYFnzhAjcf0f+ Gq8FMJjdw77s5ZB8C1cGfczzo4ExH3sInt3z4css+EBesIxUT/qgRkVXuGmiL5qS4289 pWUC8q6xwkeZIaaJnb0NnLgwzacU5JwY/P/BRs6KT0DXiEAYWqCTq5hcuEqXOH8pfW8k gZeA== X-Forwarded-Encrypted: i=1; AJvYcCUvC+t3G/7919Qk3FMMJ91LN5awnbpBbIvmd08Bo9RvVQztqnmS0yNWXgL1c/XPKphHld18F0CL0xU=@lists.freedesktop.org X-Gm-Message-State: AOJu0YwFT0yZII68Nq8h1IQXW7lqY4hJpALcaiE3LWGaVROd8UGTvBDS wdTBODptpCtf7yK+XS0cD6JHEao4f84ekY0GnT+H/VosN8Z5NYO/kLYoxvj0CRs= X-Gm-Gg: ASbGnctGiD2PLsCWwBsa//XkEGS56GRHOedGXRqpEnO2hRXmTxpBzEMFHlUWNKuMpXT Ij9uoRM18o4PlYsx9l2tN03uSd23ICgzUtMQhgLFOu65APsbYAQNp2i9rs+BJd23Ur5pQVl2kf+ ZSXq5P4op9LMmP60KCSdk0Q/+wngA4/zRX5+eyW8DRaEm1yeqEEyFMfB0bv6aAN6BQq41jQq1lj GGaD79S69IALlKC10czxjIMqm1pVmXxqU8yQ955j0rFpAl5BEiXNn0Mogr+ESTJIrCC5GmMgCsq e7fhV5UUSRpwzKm2rzamHJaQs75QYAXNyeezYI24CgXz9X4J/MeKrMu6DHODHjV1LGBq X-Google-Smtp-Source: AGHT+IF9vO7JxagmmEJhta7Zxm6e8mrTw9C6huFeIBooCIxbmBYjTz6cKR26jv51+niVmF52hX1/gw== X-Received: by 2002:a2e:8757:0:b0:308:db61:34cf with SMTP id 38308e7fff4ca-30927a72551mr42596281fa.14.1739889345220; Tue, 18 Feb 2025 06:35:45 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 38308e7fff4ca-309311777a8sm12360831fa.25.2025.02.18.06.35.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 18 Feb 2025 06:35:43 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 4/7] optee: sync secure world ABI headers Date: Tue, 18 Feb 2025 15:34:53 +0100 Message-ID: <20250218143527.1236668-5-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Update the header files describing the secure world ABI, both with and without FF-A. The ABI is extended to deal with restricted memory, but as usual backward compatible. Signed-off-by: Jens Wiklander --- drivers/tee/optee/optee_ffa.h | 27 ++++++++++--- drivers/tee/optee/optee_msg.h | 65 ++++++++++++++++++++++++++++++-- drivers/tee/optee/optee_smc.h | 71 ++++++++++++++++++++++++++++++++++- 3 files changed, 154 insertions(+), 9 deletions(-) diff --git a/drivers/tee/optee/optee_ffa.h b/drivers/tee/optee/optee_ffa.h index 257735ae5b56..7bd037200343 100644 --- a/drivers/tee/optee/optee_ffa.h +++ b/drivers/tee/optee/optee_ffa.h @@ -81,7 +81,7 @@ * as the second MSG arg struct for * OPTEE_FFA_YIELDING_CALL_WITH_ARG. 
* Bit[31:8]: Reserved (MBZ) - * w5: Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below, + * w5: Bitfield of OP-TEE capabilities OPTEE_FFA_SEC_CAP_* * w6: The maximum secure world notification number * w7: Not used (MBZ) */ @@ -94,6 +94,8 @@ #define OPTEE_FFA_SEC_CAP_ASYNC_NOTIF BIT(1) /* OP-TEE supports probing for RPMB device if needed */ #define OPTEE_FFA_SEC_CAP_RPMB_PROBE BIT(2) +/* OP-TEE supports Restricted Memory for secure data path */ +#define OPTEE_FFA_SEC_CAP_RSTMEM BIT(3) #define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2) @@ -108,7 +110,7 @@ * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_UNREGISTER_SHM OPTEE_FFA_BLOCKING_CALL(3) @@ -119,16 +121,31 @@ * Call register usage: * w3: Service ID, OPTEE_FFA_ENABLE_ASYNC_NOTIF * w4: Notification value to request bottom half processing, should be - * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE. + * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE * w5-w7: Not used (MBZ) * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_ENABLE_ASYNC_NOTIF OPTEE_FFA_BLOCKING_CALL(5) -#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 +#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 + +/* + * Release Restricted memory + * + * Call register usage: + * w3: Service ID, OPTEE_FFA_RELEASE_RSTMEM + * w4: Shared memory handle, lower bits + * w5: Shared memory handle, higher bits + * w6-w7: Not used (MBZ) + * + * Return register usage: + * w3: Error code, 0 on success + * w4-w7: Not used (MBZ) + */ +#define OPTEE_FFA_RELEASE_RSTMEM OPTEE_FFA_BLOCKING_CALL(8) /* * Call with struct optee_msg_arg as argument in the supplied shared memory diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h index e8840a82b983..1b558526e7d9 100644 --- a/drivers/tee/optee/optee_msg.h +++ b/drivers/tee/optee/optee_msg.h @@ -133,13 +133,13 @@ struct optee_msg_param_rmem { }; /** - * struct optee_msg_param_fmem - ffa memory reference parameter + * struct optee_msg_param_fmem - FF-A memory reference parameter * @offs_lower: Lower bits of offset into shared memory reference * @offs_upper: Upper bits of offset into shared memory reference * @internal_offs: Internal offset into the first page of shared memory * reference * @size: Size of the buffer - * @global_id: Global identifier of Shared memory + * @global_id: Global identifier of the shared memory */ struct optee_msg_param_fmem { u32 offs_low; @@ -165,7 +165,7 @@ struct optee_msg_param_value { * @attr: attributes * @tmem: parameter by temporary memory reference * @rmem: parameter by registered memory reference - * @fmem: parameter by ffa registered memory reference + * @fmem: parameter by FF-A registered memory reference * @value: parameter by opaque value * @octets: parameter by octet string * @@ -296,6 +296,18 @@ struct optee_msg_arg { */ #define OPTEE_MSG_FUNCID_GET_OS_REVISION 0x0001 +/* + * Values used in OPTEE_MSG_CMD_LEND_RSTMEM below + * OPTEE_MSG_RSTMEM_RESERVED Reserved + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY Secure Video Playback + * OPTEE_MSG_RSTMEM_TRUSTED_UI Trusted UI + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD Secure Video Recording + */ +#define OPTEE_MSG_RSTMEM_RESERVED 0 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY 1 +#define OPTEE_MSG_RSTMEM_TRUSTED_UI 2 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD 3 + /* * Do a secure call with struct optee_msg_arg as argument * The OPTEE_MSG_CMD_* below defines what goes in struct
optee_msg_arg::cmd @@ -337,6 +349,49 @@ struct optee_msg_arg { * OPTEE_MSG_CMD_STOP_ASYNC_NOTIF informs secure world that from now is * normal world unable to process asynchronous notifications. Typically * used when the driver is shut down. + * + * OPTEE_MSG_CMD_LEND_RSTMEM lends restricted memory. The passed normal + * physical memory is restricted from normal world access. The memory + * should be unmapped prior to this call since it becomes inaccessible + * during the request. + * Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a OPTEE_MSG_RSTMEM_* defined above + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + * [in] param[1].u.tmem.buf_ptr physical address + * [in] param[1].u.tmem.size size + * [in] param[1].u.tmem.shm_ref holds restricted memory reference + * + * OPTEE_MSG_CMD_RECLAIM_RSTMEM reclaims a previously lent restricted + * memory reference. The physical memory is accessible by the normal world + * after this function has returned and can be mapped again. The information + * is passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * + * OPTEE_MSG_CMD_GET_RSTMEM_CONFIG gets the configuration for a specific + * restricted memory use case. Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INOUT + * [in] param[0].value.a OPTEE_MSG_RSTMEM_* + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_{R,F}MEM_OUTPUT + * [in] param[1].u.{r,f}mem Buffer or NULL + * [in] param[1].u.{r,f}mem.size Provided size of buffer or 0 for query + * output for the restricted use case: + * [out] param[0].value.a Minimal size of SDP memory + * [out] param[0].value.b Required alignment of size and start of + * restricted memory + * [out] param[1].{r,f}mem.size Size of output data + * [out] param[1].{r,f}mem If non-NULL, contains an array of + * uint16_t holding endpoints that + * must be included when lending + * memory for this use case + * + * OPTEE_MSG_CMD_ASSIGN_RSTMEM assigns use-case to restricted memory + * previously lent using the FFA_LEND framework ABI.
Parameters are passed + * as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * [in] param[0].u.value.b OPTEE_MSG_RSTMEM_* defined above */ #define OPTEE_MSG_CMD_OPEN_SESSION 0 #define OPTEE_MSG_CMD_INVOKE_COMMAND 1 @@ -346,6 +401,10 @@ struct optee_msg_arg { #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 #define OPTEE_MSG_CMD_DO_BOTTOM_HALF 6 #define OPTEE_MSG_CMD_STOP_ASYNC_NOTIF 7 +#define OPTEE_MSG_CMD_LEND_RSTMEM 8 +#define OPTEE_MSG_CMD_RECLAIM_RSTMEM 9 +#define OPTEE_MSG_CMD_GET_RSTMEM_CONFIG 10 +#define OPTEE_MSG_CMD_ASSIGN_RSTMEM 11 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004 #endif /* _OPTEE_MSG_H */ diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 879426300821..abc379ce190c 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h @@ -264,7 +264,6 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0) /* Secure world can communicate via previously unregistered shared memory */ #define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1) - /* * Secure world supports commands "register/unregister shared memory", * secure world accepts command buffers located in any parts of non-secure RAM @@ -280,6 +279,10 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6) /* Secure world supports probing for RPMB device if needed */ #define OPTEE_SMC_SEC_CAP_RPMB_PROBE BIT(7) +/* Secure world supports Secure Data Path */ +#define OPTEE_SMC_SEC_CAP_SDP BIT(8) +/* Secure world supports dynamic restricted memory */ +#define OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM BIT(9) #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \ @@ -451,6 +454,72 @@ struct optee_smc_disable_shm_cache_result { /* See OPTEE_SMC_CALL_WITH_REGD_ARG above */ #define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG 19 +/* + * Get Secure Data Path memory config + * + * Returns the Secure Data Path memory config. + * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_SDP_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Physical address of start of SDP memory + * a2 Size of SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ +#define OPTEE_SMC_FUNCID_GET_SDP_CONFIG 20 +#define OPTEE_SMC_GET_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SDP_CONFIG) + +struct optee_smc_get_sdp_config_result { + unsigned long status; + unsigned long start; + unsigned long size; + unsigned long flags; +}; + +/* + * Get Secure Data Path dynamic memory config + * + * Returns the Secure Data Path dynamic memory config. 
+ * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_DYN_SDP_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Minimal size of SDP memory + * a2 Required alignment of size and start of registered SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ + +#define OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG 21 +#define OPTEE_SMC_GET_DYN_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG) + +struct optee_smc_get_dyn_sdp_config_result { + unsigned long status; + unsigned long size; + unsigned long align; + unsigned long flags; +}; /* * Resume from RPC (for example after processing a foreign interrupt)
From patchwork Tue Feb 18 14:34:54 2025 X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980110
From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 5/7] optee: support restricted memory allocation Date: Tue, 18 Feb 2025 15:34:54 +0100 Message-ID: <20250218143527.1236668-6-jens.wiklander@linaro.org> In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org>
Add support in the OP-TEE backend driver for restricted memory allocation. The support is limited to the SMC ABI and to secure video buffers. OP-TEE is probed for the range of restricted physical memory and a memory pool allocator is initialized if OP-TEE has support for such memory.
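For readers following the series, a condensed sketch of the probe flow this patch adds is given here. It mirrors optee_sdp_pool_init() in the diff below; the function name is only illustrative, and error unwinding of the pool registry is dropped for brevity.

/*
 * Sketch only: condensed from optee_sdp_pool_init() in the diff below.
 */
static int sdp_carveout_probe_sketch(struct optee *optee)
{
	union {
		struct arm_smccc_res smccc;
		struct optee_smc_get_sdp_config_result result;
	} res;
	struct tee_shm_pool *pool;

	/* Only relevant when secure world advertises a static SDP carveout */
	if (!(optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP))
		return 0;

	/* Ask OP-TEE for the physical range of the restricted carveout */
	optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, 0,
			     &res.smccc);
	if (res.result.status != OPTEE_SMC_RETURN_OK)
		return -ENOENT;

	/* Wrap the carveout in a gen_pool based restricted memory pool */
	pool = tee_rstmem_gen_pool_alloc(res.result.start, res.result.size);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* Register it as the allocator for the Secure Video Playback use case */
	return xa_insert(&optee->rstmem_pools->xa, TEE_IOC_UC_SECURE_VIDEO_PLAY,
			 pool, GFP_KERNEL);
}

The pool ends up in the driver's restricted memory pool registry keyed by use case, so TEE_IOC_RSTMEM_ALLOC requests for secure video playback are served straight from the static carveout.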
Signed-off-by: Jens Wiklander --- drivers/tee/optee/Makefile | 1 + drivers/tee/optee/core.c | 1 + drivers/tee/optee/optee_private.h | 23 ++++++++++ drivers/tee/optee/rstmem.c | 76 +++++++++++++++++++++++++++++++ drivers/tee/optee/smc_abi.c | 69 ++++++++++++++++++++++++++-- 5 files changed, 167 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c diff --git a/drivers/tee/optee/Makefile b/drivers/tee/optee/Makefile index a6eff388d300..498969fb8e40 100644 --- a/drivers/tee/optee/Makefile +++ b/drivers/tee/optee/Makefile @@ -4,6 +4,7 @@ optee-objs += core.o optee-objs += call.o optee-objs += notif.o optee-objs += rpc.o +optee-objs += rstmem.o optee-objs += supp.o optee-objs += device.o optee-objs += smc_abi.o diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index c75fddc83576..f4fa494789a4 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c @@ -182,6 +182,7 @@ void optee_remove_common(struct optee *optee) tee_device_unregister(optee->teedev); tee_shm_pool_free(optee->pool); + optee_rstmem_pools_uninit(optee); optee_supp_uninit(&optee->supp); mutex_destroy(&optee->call_queue.mutex); rpmb_dev_put(optee->rpmb_dev); diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 20eda508dbac..0491889e5b0e 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -193,6 +193,20 @@ struct optee_ops { bool update_out); }; +/** + * struct optee_rstmem_pools - restricted memory pools + * @mutex: serializes write access to @xa when adding a new pool. + * @xa: XArray of struct tee_shm_pool where the index is the + * use case ID TEE_IOC_UC_* supplied for TEE_IOC_RSTMEM_ALLOC. + */ +struct optee_rstmem_pools { + /* + * Serializes write access to @xa when adding a new pool. 
+ */ + struct mutex mutex; + struct xarray xa; +}; + /** * struct optee - main service struct * @supp_teedev: supplicant device @@ -206,6 +220,7 @@ struct optee_ops { * @notif: notification synchronization struct * @supp: supplicant synchronization struct for RPC to supplicant * @pool: shared memory pool + * @rstmem_pool: restricted memory pool for secure data path * @mutex: mutex protecting @rpmb_dev * @rpmb_dev: current RPMB device or NULL * @rpmb_scan_bus_done flag if device registation of RPMB dependent devices @@ -230,6 +245,7 @@ struct optee { struct optee_notif notif; struct optee_supp supp; struct tee_shm_pool *pool; + struct optee_rstmem_pools *rstmem_pools; /* Protects rpmb_dev pointer */ struct mutex rpmb_dev_mutex; struct rpmb_dev *rpmb_dev; @@ -286,6 +302,9 @@ void optee_supp_init(struct optee_supp *supp); void optee_supp_uninit(struct optee_supp *supp); void optee_supp_release(struct optee_supp *supp); +int optee_rstmem_pools_init(struct optee *optee); +void optee_rstmem_pools_uninit(struct optee *optee); + int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params, struct tee_param *param); int optee_supp_send(struct tee_context *ctx, u32 ret, u32 num_params, @@ -378,6 +397,10 @@ void optee_rpc_cmd(struct tee_context *ctx, struct optee *optee, int optee_do_bottom_half(struct tee_context *ctx); int optee_stop_async_notif(struct tee_context *ctx); +int optee_rstmem_alloc(struct tee_context *ctx, struct tee_shm *shm, + u32 flags, u32 use_case, size_t size); +void optee_rstmem_free(struct tee_context *ctx, struct tee_shm *shm); + /* * Small helpers */ diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..01456bc3e2f6 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2024, Linaro Limited + */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include +#include "optee_private.h" + +int optee_rstmem_alloc(struct tee_context *ctx, struct tee_shm *shm, + u32 flags, u32 use_case, size_t size) +{ + struct optee *optee = tee_get_drvdata(ctx->teedev); + struct tee_shm_pool *pool; + + if (!optee->rstmem_pools) + return -EINVAL; + if (flags) + return -EINVAL; + + pool = xa_load(&optee->rstmem_pools->xa, use_case); + if (!pool) + return -EINVAL; + + return pool->ops->alloc(pool, shm, size, 0); +} + +void optee_rstmem_free(struct tee_context *ctx, struct tee_shm *shm) +{ + struct optee *optee = tee_get_drvdata(ctx->teedev); + struct tee_shm_pool *pool; + + pool = xa_load(&optee->rstmem_pools->xa, shm->use_case); + if (pool) + pool->ops->free(pool, shm); + else + pr_err("Can't find pool for use_case %u\n", shm->use_case); +} + +int optee_rstmem_pools_init(struct optee *optee) +{ + struct optee_rstmem_pools *pools; + + pools = kmalloc(sizeof(*pools), GFP_KERNEL); + if (!pools) + return -ENOMEM; + + mutex_init(&pools->mutex); + xa_init(&pools->xa); + optee->rstmem_pools = pools; + return 0; +} + +void optee_rstmem_pools_uninit(struct optee *optee) +{ + if (optee->rstmem_pools) { + struct tee_shm_pool *pool; + u_long idx; + + xa_for_each(&optee->rstmem_pools->xa, idx, pool) { + xa_erase(&optee->rstmem_pools->xa, idx); + pool->ops->destroy_pool(pool); + } + + xa_destroy(&optee->rstmem_pools->xa); + mutex_destroy(&optee->rstmem_pools->mutex); + kfree(optee->rstmem_pools); + optee->rstmem_pools = NULL; + } +} diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index 
e5b190d64a49..11589e5120c9 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1201,6 +1201,8 @@ static void optee_get_version(struct tee_device *teedev, v.gen_caps |= TEE_GEN_CAP_REG_MEM; if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_MEMREF_NULL) v.gen_caps |= TEE_GEN_CAP_MEMREF_NULL; + if (optee->rstmem_pools) + v.gen_caps |= TEE_GEN_CAP_RSTMEM; *vers = v; } @@ -1223,6 +1225,8 @@ static const struct tee_driver_ops optee_clnt_ops = { .cancel_req = optee_cancel_req, .shm_register = optee_shm_register, .shm_unregister = optee_shm_unregister, + .rstmem_alloc = optee_rstmem_alloc, + .rstmem_free = optee_rstmem_free, }; static const struct tee_desc optee_clnt_desc = { @@ -1239,6 +1243,8 @@ static const struct tee_driver_ops optee_supp_ops = { .supp_send = optee_supp_send, .shm_register = optee_shm_register_supp, .shm_unregister = optee_shm_unregister_supp, + .rstmem_alloc = optee_rstmem_alloc, + .rstmem_free = optee_rstmem_free, }; static const struct tee_desc optee_supp_desc = { @@ -1620,6 +1626,57 @@ static inline int optee_load_fw(struct platform_device *pdev, } #endif +static int optee_sdp_pool_init(struct optee *optee) +{ + bool sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP; + struct tee_shm_pool *pool; + int rc; + + /* + * optee_sdp_pools_init() must be called if secure world has any + * SDP capability. If the static carvout is available initialize + * and add a pool for that. + */ + if (!sdp) + return 0; + + rc = optee_rstmem_pools_init(optee); + if (rc) + return rc; + + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) { + union { + struct arm_smccc_res smccc; + struct optee_smc_get_sdp_config_result result; + } res; + + optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, + 0, &res.smccc); + if (res.result.status != OPTEE_SMC_RETURN_OK) { + pr_err("Secure Data Path service not available\n"); + goto err; + } + + pool = tee_rstmem_gen_pool_alloc(res.result.start, + res.result.size); + if (IS_ERR(pool)) { + rc = PTR_ERR(pool); + goto err; + } + rc = xa_insert(&optee->rstmem_pools->xa, + TEE_IOC_UC_SECURE_VIDEO_PLAY, pool, GFP_KERNEL); + if (rc) { + pool->ops->destroy_pool(pool); + goto err; + } + } + + return 0; +err: + optee_rstmem_pools_uninit(optee); + return rc; +} + static int optee_probe(struct platform_device *pdev) { optee_invoke_fn *invoke_fn; @@ -1715,7 +1772,7 @@ static int optee_probe(struct platform_device *pdev) optee = kzalloc(sizeof(*optee), GFP_KERNEL); if (!optee) { rc = -ENOMEM; - goto err_free_pool; + goto err_free_shm_pool; } optee->ops = &optee_ops; @@ -1727,10 +1784,14 @@ static int optee_probe(struct platform_device *pdev) (sec_caps & OPTEE_SMC_SEC_CAP_RPMB_PROBE)) optee->in_kernel_rpmb_routing = true; + rc = optee_sdp_pool_init(optee); + if (rc) + goto err_free_optee; + teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev); - goto err_free_optee; + goto err_rstmem_pools_uninit; } optee->teedev = teedev; @@ -1837,9 +1898,11 @@ static int optee_probe(struct platform_device *pdev) tee_device_unregister(optee->supp_teedev); err_unreg_teedev: tee_device_unregister(optee->teedev); +err_rstmem_pools_uninit: + optee_rstmem_pools_uninit(optee); err_free_optee: kfree(optee); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); if (memremaped_shm) memunmap(memremaped_shm); From patchwork Tue Feb 18 14:34:55 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980113 
From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 6/7] optee: FF-A: dynamic restricted memory allocation Date: Tue, 18 Feb 2025 15:34:55 +0100 Message-ID: <20250218143527.1236668-7-jens.wiklander@linaro.org> In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org>
Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A. The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user-space. This instantiation can fail if OP-TEE doesn't support the requested use-case of restricted memory. Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A.
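The on-demand instantiation mentioned above can be summarized with the following sketch. It mirrors optee_rstmem_alloc() in the diff below; the function name is illustrative and locking around the pool registry is omitted for brevity.

/*
 * Sketch only: condensed from optee_rstmem_alloc() in the diff below.
 * The first allocation for a use case creates the CMA backed pool and
 * lends its memory to secure world; later allocations reuse the pool.
 */
static int rstmem_alloc_sketch(struct optee *optee, struct tee_shm *shm,
			       u32 use_case, size_t size)
{
	struct tee_shm_pool *pool;
	int rc;

	pool = xa_load(&optee->rstmem_pools->xa, use_case);
	if (!pool) {
		/*
		 * No pool yet for this use case: query OP-TEE for minimal
		 * size, alignment and FF-A endpoints, then set up a pool
		 * that lends a CMA range on its first allocation. This is
		 * where an unsupported use case makes the allocation fail.
		 */
		pool = alloc_rstmem_pool(optee, use_case);
		if (IS_ERR(pool))
			return PTR_ERR(pool);

		rc = xa_insert(&optee->rstmem_pools->xa, use_case, pool,
			       GFP_KERNEL);
		if (rc) {
			pool->ops->destroy_pool(pool);
			return rc;
		}
	}

	/* Sub-allocate the requested buffer from the restricted range */
	return pool->ops->alloc(pool, shm, size, 0);
}

When the last buffer from such a pool is freed, the CMA range is reclaimed from secure world and released again, see put_cma_rstmem() in the diff below.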
Signed-off-by: Jens Wiklander --- drivers/tee/optee/ffa_abi.c | 136 ++++++++++++- drivers/tee/optee/optee_private.h | 10 +- drivers/tee/optee/rstmem.c | 317 +++++++++++++++++++++++++++++- 3 files changed, 459 insertions(+), 4 deletions(-) diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index 02e6175ac5f0..b933f12d096e 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -672,6 +672,122 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, return optee_ffa_yielding_call(ctx, &data, rpc_arg, system_thread); } +static int do_call_lend_rstmem(struct optee *optee, u64 cookie, u32 use_case) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_ASSIGN_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = cookie; + msg_arg->params[0].u.value.b = use_case; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_ffa_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + struct ffa_send_direct_data data; + struct ffa_mem_region_attributes *mem_attr; + struct ffa_mem_ops_args args = { + .use_txbuf = true, + .tag = rstmem->use_case, + }; + struct page *page; + struct scatterlist sgl; + unsigned int n; + int rc; + + mem_attr = kcalloc(ep_count, sizeof(*mem_attr), GFP_KERNEL); + for (n = 0; n < ep_count; n++) { + mem_attr[n].receiver = end_points[n]; + mem_attr[n].attrs = FFA_MEM_RW; + } + args.attrs = mem_attr; + args.nattrs = ep_count; + + page = phys_to_page(rstmem->paddr); + sg_init_table(&sgl, 1); + sg_set_page(&sgl, page, rstmem->size, 0); + + args.sg = &sgl; + rc = mem_ops->memory_lend(&args); + kfree(mem_attr); + if (rc) + return rc; + + rc = do_call_lend_rstmem(optee, args.g_handle, rstmem->use_case); + if (rc) + goto err_reclaim; + + rc = optee_shm_add_ffa_handle(optee, rstmem, args.g_handle); + if (rc) + goto err_unreg; + + rstmem->sec_world_id = args.g_handle; + + return 0; + +err_unreg: + data = (struct ffa_send_direct_data){ + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)args.g_handle, + .data2 = (u32)(args.g_handle >> 32), + }; + msg_ops->sync_send_receive(ffa_dev, &data); +err_reclaim: + mem_ops->memory_reclaim(args.g_handle, 0); + return rc; +} + +static int optee_ffa_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + u64 global_handle = rstmem->sec_world_id; + struct ffa_send_direct_data data = { + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)global_handle, + .data2 = (u32)(global_handle >> 32) + }; + int rc; + + optee_shm_rem_ffa_handle(optee, global_handle); + rstmem->sec_world_id = 0; + + rc = msg_ops->sync_send_receive(ffa_dev, &data); + if (rc) + pr_err("Release SHM id 0x%llx rc %d\n", global_handle, rc); + + rc = mem_ops->memory_reclaim(global_handle, 0); 
+ if (rc) + pr_err("mem_reclaim: 0x%llx %d", global_handle, rc); + + return rc; +} + /* * 6. Driver initialization * @@ -785,7 +901,10 @@ static void optee_ffa_get_version(struct tee_device *teedev, .gen_caps = TEE_GEN_CAP_GP | TEE_GEN_CAP_REG_MEM | TEE_GEN_CAP_MEMREF_NULL, }; + struct optee *optee = tee_get_drvdata(teedev); + if (optee->rstmem_pools) + v.gen_caps |= TEE_GEN_CAP_RSTMEM; *vers = v; } @@ -804,6 +923,8 @@ static const struct tee_driver_ops optee_ffa_clnt_ops = { .cancel_req = optee_cancel_req, .shm_register = optee_ffa_shm_register, .shm_unregister = optee_ffa_shm_unregister, + .rstmem_alloc = optee_rstmem_alloc, + .rstmem_free = optee_rstmem_free, }; static const struct tee_desc optee_ffa_clnt_desc = { @@ -820,6 +941,8 @@ static const struct tee_driver_ops optee_ffa_supp_ops = { .supp_send = optee_supp_send, .shm_register = optee_ffa_shm_register, /* same as for clnt ops */ .shm_unregister = optee_ffa_shm_unregister_supp, + .rstmem_alloc = optee_rstmem_alloc, + .rstmem_free = optee_rstmem_free, }; static const struct tee_desc optee_ffa_supp_desc = { @@ -833,6 +956,8 @@ static const struct optee_ops optee_ffa_ops = { .do_call_with_arg = optee_ffa_do_call_with_arg, .to_msg_param = optee_ffa_to_msg_param, .from_msg_param = optee_ffa_from_msg_param, + .lend_rstmem = optee_ffa_lend_rstmem, + .reclaim_rstmem = optee_ffa_reclaim_rstmem, }; static void optee_ffa_remove(struct ffa_device *ffa_dev) @@ -937,11 +1062,18 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) (sec_caps & OPTEE_FFA_SEC_CAP_RPMB_PROBE)) optee->in_kernel_rpmb_routing = true; + if (IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) && + (sec_caps & OPTEE_FFA_SEC_CAP_RSTMEM)) { + rc = optee_rstmem_pools_init(optee); + if (rc) + goto err_free_pool; + } + teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev); - goto err_free_pool; + goto err_free_rstmem_pools; } optee->teedev = teedev; @@ -1020,6 +1152,8 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) tee_device_unregister(optee->teedev); err_free_pool: tee_shm_pool_free(pool); +err_free_rstmem_pools: + optee_rstmem_pools_uninit(optee); err_free_optee: kfree(optee); return rc; diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 0491889e5b0e..dd2a3a2224bc 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -174,9 +174,14 @@ struct optee; * @do_call_with_arg: enters OP-TEE in secure world * @to_msg_param: converts from struct tee_param to OPTEE_MSG parameters * @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param + * @lend_rstmem: lends physically contiguous memory as restricted + * memory, inaccessible by the kernel + * @reclaim_rstmem: reclaims restricted memory previously lent with + * @lend_rstmem() and makes it accessible by the + * kernel again * * These OPs are only supposed to be used internally in the OP-TEE driver - * as a way of abstracting the different methogs of entering OP-TEE in + * as a way of abstracting the different methods of entering OP-TEE in * secure world. 
*/ struct optee_ops { @@ -191,6 +196,9 @@ struct optee_ops { size_t num_params, const struct optee_msg_param *msg_params, bool update_out); + int (*lend_rstmem)(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count); + int (*reclaim_rstmem)(struct optee *optee, struct tee_shm *rstmem); }; /** diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c index 01456bc3e2f6..9a5013e3cc5f 100644 --- a/drivers/tee/optee/rstmem.c +++ b/drivers/tee/optee/rstmem.c @@ -4,6 +4,7 @@ */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include #include #include #include @@ -13,11 +14,314 @@ #include #include "optee_private.h" +#if IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) +struct optee_rstmem_cma_pool { + struct tee_shm_pool pool; + struct page *page; + struct optee *optee; + size_t page_count; + u16 *end_points; + u_int end_point_count; + u_int align; + refcount_t refcount; + struct tee_shm rstmem; + /* Protects when initializing and tearing down this struct */ + struct mutex mutex; +}; + +static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_shm_pool *pool) +{ + return container_of(pool, struct optee_rstmem_cma_pool, pool); +} + +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + struct cma *cma = dev_get_cma_area(&rp->optee->teedev->dev); + int rc; + + rp->page = cma_alloc(cma, rp->page_count, rp->align, true/*no_warn*/); + if (!rp->page) + return -ENOMEM; + + /* + * TODO unmap the memory range since the physical memory will + * become inaccesible after the lend_rstmem() call. + */ + + rp->rstmem.paddr = page_to_phys(rp->page); + rp->rstmem.size = rp->page_count * PAGE_SIZE; + rc = rp->optee->ops->lend_rstmem(rp->optee, &rp->rstmem, + rp->end_points, rp->end_point_count); + if (rc) + goto err_release; + + rp->pool.private_data = gen_pool_create(PAGE_SHIFT, -1); + if (!rp->pool.private_data) { + rc = -ENOMEM; + goto err_reclaim; + } + + rc = gen_pool_add(rp->pool.private_data, rp->rstmem.paddr, + rp->rstmem.size, -1); + if (rc) + goto err_free_pool; + + refcount_set(&rp->refcount, 1); + return 0; + +err_free_pool: + gen_pool_destroy(rp->pool.private_data); +err_reclaim: + rp->optee->ops->reclaim_rstmem(rp->optee, &rp->rstmem); +err_release: + cma_release(cma, rp->page, rp->page_count); + rp->rstmem.paddr = 0; + rp->rstmem.size = 0; + rp->rstmem.sec_world_id = 0; + return rc; +} + +static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc = 0; + + if (!refcount_inc_not_zero(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->pool.private_data) { + /* + * Another thread has already initialized the pool + * before us, or the pool was just about to be torn + * down. Either way we only need to increase the + * refcount and we're done. 
+ */ + refcount_inc(&rp->refcount); + } else { + rc = init_cma_rstmem(rp); + } + mutex_unlock(&rp->mutex); + } + + return rc; +} + +static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + gen_pool_destroy(rp->pool.private_data); + rp->optee->ops->reclaim_rstmem(rp->optee, &rp->rstmem); + cma_release(dev_get_cma_area(&rp->optee->teedev->dev), rp->page, + rp->page_count); + + rp->pool.private_data = NULL; + rp->page = NULL; + rp->rstmem.paddr = 0; + rp->rstmem.size = 0; + rp->rstmem.sec_world_id = 0; +} + +static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + if (refcount_dec_and_test(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->pool.private_data) + release_cma_rstmem(rp); + mutex_unlock(&rp->mutex); + } +} + +static int rstmem_pool_op_cma_alloc(struct tee_shm_pool *pool, + struct tee_shm *shm, size_t size, + size_t align) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + size_t sz = ALIGN(size, PAGE_SIZE); + phys_addr_t pa; + int rc; + + rc = get_cma_rstmem(rp); + if (rc) + return rc; + + pa = gen_pool_alloc(rp->pool.private_data, sz); + if (!pa) { + put_cma_rstmem(rp); + return -ENOMEM; + } + + shm->size = sz; + shm->paddr = pa; + shm->offset = pa - page_to_phys(rp->page); + shm->sec_world_id = rp->rstmem.sec_world_id; + + return 0; +} + +static void rstmem_pool_op_cma_free(struct tee_shm_pool *pool, + struct tee_shm *shm) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + gen_pool_free(rp->pool.private_data, shm->paddr, shm->size); + shm->size = 0; + shm->paddr = 0; + shm->offset = 0; + shm->sec_world_id = 0; + put_cma_rstmem(rp); +} + +static void pool_op_cma_destroy_pool(struct tee_shm_pool *pool) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + mutex_destroy(&rp->mutex); + kfree(rp); +} + +static struct tee_shm_pool_ops rstmem_pool_ops_cma = { + .alloc = rstmem_pool_op_cma_alloc, + .free = rstmem_pool_op_cma_free, + .destroy_pool = pool_op_cma_destroy_pool, +}; + +static int get_rstmem_config(struct optee *optee, u32 use_case, + size_t *min_size, u_int *min_align, + u16 *end_points, u_int *ep_count) +{ + struct tee_param params[2] = { + [0] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT, + .u.value.a = use_case, + }, + [1] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT, + }, + }; + struct optee_shm_arg_entry *entry; + struct tee_shm *shm_param = NULL; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + if (end_points && *ep_count) { + params[1].u.memref.size = *ep_count * sizeof(*end_points); + shm_param = tee_shm_alloc_priv_buf(optee->ctx, + params[1].u.memref.size); + if (IS_ERR(shm_param)) + return PTR_ERR(shm_param); + params[1].u.memref.shm = shm_param; + } + + msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry, + &shm, &offs); + if (IS_ERR(msg_arg)) { + rc = PTR_ERR(msg_arg); + goto out_free_shm; + } + msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG; + + rc = optee->ops->to_msg_param(optee, msg_arg->params, + ARRAY_SIZE(params), params, + false /*!update_out*/); + if (rc) + goto out_free_msg; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out_free_msg; + if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) { + rc = -EINVAL; + goto out_free_msg; + } + + rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params), + msg_arg->params, true /*update_out*/); + if (rc) + goto out_free_msg; + + if (!msg_arg->ret && end_points && + *ep_count < params[1].u.memref.size / sizeof(u16)) { + 
rc = -EINVAL; + goto out_free_msg; + } + + *min_size = params[0].u.value.a; + *min_align = params[0].u.value.b; + *ep_count = params[1].u.memref.size / sizeof(u16); + + if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) { + rc = -ENOSPC; + goto out_free_msg; + } + + if (end_points) + memcpy(end_points, tee_shm_get_va(shm_param, 0), + params[1].u.memref.size); + +out_free_msg: + optee_free_msg_arg(optee->ctx, entry, offs); +out_free_shm: + if (shm_param) + tee_shm_free(shm_param); + return rc; +} + +static struct tee_shm_pool *alloc_rstmem_pool(struct optee *optee, u32 use_case) +{ + struct optee_rstmem_cma_pool *rp; + size_t min_size; + int rc; + + rp = kzalloc(sizeof(*rp), GFP_KERNEL); + if (!rp) + return ERR_PTR(-ENOMEM); + rp->rstmem.use_case = use_case; + + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL, + &rp->end_point_count); + if (rc) { + if (rc != -ENOSPC) + goto err; + rp->end_points = kcalloc(rp->end_point_count, + sizeof(*rp->end_points), GFP_KERNEL); + if (!rp->end_points) { + rc = -ENOMEM; + goto err; + } + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, + rp->end_points, &rp->end_point_count); + if (rc) + goto err_kfree_eps; + } + + rp->pool.ops = &rstmem_pool_ops_cma; + rp->optee = optee; + rp->page_count = min_size / PAGE_SIZE; + mutex_init(&rp->mutex); + + return &rp->pool; + +err_kfree_eps: + kfree(rp->end_points); +err: + kfree(rp); + return ERR_PTR(rc); +} +#else /*CONFIG_CMA*/ +static struct tee_shm_pool * +alloc_rstmem_pool(struct optee *optee __always_unused, + u32 use_case __always_unused) +{ + return ERR_PTR(-EINVAL); +} +#endif /*CONFIG_CMA*/ + int optee_rstmem_alloc(struct tee_context *ctx, struct tee_shm *shm, u32 flags, u32 use_case, size_t size) { struct optee *optee = tee_get_drvdata(ctx->teedev); struct tee_shm_pool *pool; + int rc; if (!optee->rstmem_pools) return -EINVAL; @@ -25,8 +329,17 @@ int optee_rstmem_alloc(struct tee_context *ctx, struct tee_shm *shm, return -EINVAL; pool = xa_load(&optee->rstmem_pools->xa, use_case); - if (!pool) - return -EINVAL; + if (!pool) { + pool = alloc_rstmem_pool(optee, use_case); + if (IS_ERR(pool)) + return PTR_ERR(pool); + rc = xa_insert(&optee->rstmem_pools->xa, use_case, pool, + GFP_KERNEL); + if (rc) { + pool->ops->destroy_pool(pool); + return rc; + } + } return pool->ops->alloc(pool, shm, size, 0); } From patchwork Tue Feb 18 14:34:56 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 13980112 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A7032C021AD for ; Tue, 18 Feb 2025 14:35:57 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E332410E707; Tue, 18 Feb 2025 14:35:56 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="mPiXGJr6"; dkim-atps=neutral Received: from mail-lf1-f53.google.com (mail-lf1-f53.google.com [209.85.167.53]) by gabe.freedesktop.org (Postfix) with ESMTPS id C546510E32E for ; Tue, 18 Feb 2025 14:35:53 +0000 (UTC) Received: by mail-lf1-f53.google.com with SMTP id 2adb3069b0e04-5452efeb87aso4241383e87.3 for ; 
From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J .
Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Jens Wiklander Subject: [PATCH v5 7/7] optee: smc abi: dynamic restricted memory allocation Date: Tue, 18 Feb 2025 15:34:56 +0100 Message-ID: <20250218143527.1236668-8-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250218143527.1236668-1-jens.wiklander@linaro.org> References: <20250218143527.1236668-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add support in the OP-TEE backend driver for dynamic restricted memory allocation using the SMC ABI. Signed-off-by: Jens Wiklander --- drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++++++++++++++-- 1 file changed, 73 insertions(+), 3 deletions(-) diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index 11589e5120c9..ca0cb5045f5b 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1001,6 +1001,67 @@ static int optee_smc_do_call_with_arg(struct tee_context *ctx, return rc; } +static int optee_smc_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 2, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_LEND_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY; + msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; + msg_arg->params[1].u.tmem.buf_ptr = rstmem->paddr; + msg_arg->params[1].u.tmem.size = rstmem->size; + msg_arg->params[1].u.tmem.shm_ref = (u_long)rstmem; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_smc_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_RECLAIM_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; + msg_arg->params[0].u.rmem.shm_ref = (u_long)rstmem; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) + rc = -EINVAL; + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + /* * 5. 
Asynchronous notification */ @@ -1258,6 +1319,8 @@ static const struct optee_ops optee_ops = { .do_call_with_arg = optee_smc_do_call_with_arg, .to_msg_param = optee_to_msg_param, .from_msg_param = optee_from_msg_param, + .lend_rstmem = optee_smc_lend_rstmem, + .reclaim_rstmem = optee_smc_reclaim_rstmem, }; static int enable_async_notif(optee_invoke_fn *invoke_fn) @@ -1628,6 +1691,9 @@ static inline int optee_load_fw(struct platform_device *pdev, static int optee_sdp_pool_init(struct optee *optee) { + bool dyn_sdp = (optee->smc.sec_caps & + OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM) && + IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE); bool sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP; struct tee_shm_pool *pool; int rc; @@ -1635,9 +1701,11 @@ static int optee_sdp_pool_init(struct optee *optee) /* * optee_sdp_pools_init() must be called if secure world has any * SDP capability. If the static carvout is available initialize - * and add a pool for that. + * and add a pool for that. If there's an error from secure world + * we complain but don't call optee_sdp_pools_uninit() unless we + * know that there is no SDP capability left. */ - if (!sdp) + if (!dyn_sdp && !sdp) return 0; rc = optee_rstmem_pools_init(optee); @@ -1654,7 +1722,9 @@ static int optee_sdp_pool_init(struct optee *optee) 0, &res.smccc); if (res.result.status != OPTEE_SMC_RETURN_OK) { pr_err("Secure Data Path service not available\n"); - goto err; + if (!dyn_sdp) + goto err; + return 0; } pool = tee_rstmem_gen_pool_alloc(res.result.start,