From patchwork Wed Mar 5 13:04:07 2025
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002623
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard,
 Brian Starkey, John Stultz, "T. J. Mercier", Christian König, Sumit Garg,
 Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com,
 Simona Vetter, Daniel Stone, Jens Wiklander
Subject: [PATCH v6 01/10] tee: tee_device_alloc(): copy dma_mask from parent device
Date: Wed, 5 Mar 2025 14:04:07 +0100
Message-ID: <20250305130634.1850178-2-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

If a parent device is supplied to tee_device_alloc(), copy the dma_mask
field into the new device. This avoids future warnings when mapping a
DMA-buf for the device.
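[ Illustration only, not part of the patch: a minimal, hypothetical importer
  path showing where the parent's dma_mask ends up mattering. Mapping a
  DMA-buf attachment typically lands in dma_map_sgtable() on the attaching
  device, and the DMA mapping core warns when that device has no dma_mask
  set. The function and variable names below are made up for the example. ]

  #include <linux/device.h>
  #include <linux/dma-buf.h>
  #include <linux/dma-direction.h>
  #include <linux/err.h>
  #include <linux/scatterlist.h>

  /* Hypothetical helper: map a DMA-buf for the TEE device. */
  static struct sg_table *example_map_for_teedev(struct dma_buf *dmabuf,
						 struct device *teedev_dev)
  {
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(dmabuf, teedev_dev);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* Warns and fails if teedev_dev->dma_mask was never set. */
	sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		dma_buf_detach(dmabuf, attach);

	return sgt;
  }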
Signed-off-by: Jens Wiklander
Reviewed-by: Sumit Garg
---
 drivers/tee/tee_core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index d113679b1e2d..685afcaa3ea1 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -922,6 +922,8 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc,
 	teedev->dev.class = &tee_class;
 	teedev->dev.release = tee_release_device;
 	teedev->dev.parent = dev;
+	if (dev)
+		teedev->dev.dma_mask = dev->dma_mask;
 	teedev->dev.devt = MKDEV(MAJOR(tee_devt), teedev->id);

From patchwork Wed Mar 5 13:04:08 2025
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002624
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard,
 Brian Starkey, John Stultz, "T. J. Mercier", Christian König, Sumit Garg,
 Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com,
 Simona Vetter, Daniel Stone, Jens Wiklander
Subject: [PATCH v6 02/10] optee: pass parent device to tee_device_alloc()
Date: Wed, 5 Mar 2025 14:04:08 +0100
Message-ID: <20250305130634.1850178-3-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

During probing of the OP-TEE driver, pass the parent device to
tee_device_alloc() so the dma_mask of the new devices can be updated
accordingly.
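[ Illustration only, editor's sketch tying this patch to the previous one;
  not part of the commit message. With both changes applied, the DMA mask
  propagates from the parent at probe time roughly like this: ]

  /* Patch 2: the OP-TEE SMC probe passes the platform device as parent. */
  teedev = tee_device_alloc(&optee_clnt_desc, &pdev->dev, pool, optee);

  /* Patch 1: inside tee_device_alloc(), the mask is copied from the parent. */
  teedev->dev.parent = dev;
  if (dev)
	teedev->dev.dma_mask = dev->dma_mask;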
Signed-off-by: Jens Wiklander
Reviewed-by: Sumit Garg
---
 drivers/tee/optee/ffa_abi.c | 8 ++++----
 drivers/tee/optee/smc_abi.c | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
index f3af5666bb11..4ca1d5161b82 100644
--- a/drivers/tee/optee/ffa_abi.c
+++ b/drivers/tee/optee/ffa_abi.c
@@ -914,16 +914,16 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
 	    (sec_caps & OPTEE_FFA_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;
 
-	teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_clnt_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_pool;
 	}
 	optee->teedev = teedev;
 
-	teedev = tee_device_alloc(&optee_ffa_supp_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_supp_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index f0c3ac1103bb..165fadd9abc9 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -1691,14 +1691,14 @@ static int optee_probe(struct platform_device *pdev)
 	    (sec_caps & OPTEE_SMC_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;
 
-	teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_clnt_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_optee;
 	}
 	optee->teedev = teedev;
 
-	teedev = tee_device_alloc(&optee_supp_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_supp_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;

From patchwork Wed Mar 5 13:04:09 2025
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002625
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal ,
 Benjamin Gaignard , Brian Starkey , John Stultz , "T . J .
Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 03/10] optee: account for direction while converting parameters Date: Wed, 5 Mar 2025 14:04:09 +0100 Message-ID: <20250305130634.1850178-4-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> Precedence: bulk X-Mailing-List: linux-media@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param. The conversion is done from one of the types to the other, which is then involved in some operation and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but then converting back, it's enough to update only out-values and out-sizes. So, an update_out parameter is added to the conversion functions to tell if all or only some fields must be copied. This is needed in a later patch where it might get confusing when converting back in from_msg_param() callback since an allocated restricted SHM can be using the sec_world_id of the used restricted memory pool and that doesn't translate back well. Signed-off-by: Jens Wiklander --- drivers/tee/optee/call.c | 10 ++-- drivers/tee/optee/ffa_abi.c | 43 +++++++++++++---- drivers/tee/optee/optee_private.h | 42 +++++++++++------ drivers/tee/optee/rpc.c | 31 +++++++++---- drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++-------- 5 files changed, 144 insertions(+), 58 deletions(-) diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index 16eb953e14bb..f1533b894726 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, export_uuid(msg_arg->params[1].u.octets, &client_uuid); rc = optee->ops->to_msg_param(optee, msg_arg->params + 2, - arg->num_params, param); + arg->num_params, param, + false /*!update_out*/); if (rc) goto out; @@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, } if (optee->ops->from_msg_param(optee, param, arg->num_params, - msg_arg->params + 2)) { + msg_arg->params + 2, + true /*update_out*/)) { arg->ret = TEEC_ERROR_COMMUNICATION; arg->ret_origin = TEEC_ORIGIN_COMMS; /* Close session again to avoid leakage */ @@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, msg_arg->cancel_id = arg->cancel_id; rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params, - param); + param, false /*!update_out*/); if (rc) goto out; @@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, } if (optee->ops->from_msg_param(optee, param, arg->num_params, - msg_arg->params)) { + msg_arg->params, true /*update_out*/)) { msg_arg->ret = TEEC_ERROR_COMMUNICATION; msg_arg->ret_origin = TEEC_ORIGIN_COMMS; } diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index 4ca1d5161b82..e4b08cd195f3 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, - u32 
attr, const struct optee_msg_param *mp) + u32 attr, const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT; - p->u.memref.size = mp->u.fmem.size; if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id); @@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out: + p->u.memref.size = mp->u.fmem.size; } /** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * * Returns 0 on success or <0 on failure */ static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { size_t n; @@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT: - from_msg_param_ffa_mem(optee, p, attr, mp); + from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL; @@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, } static int to_msg_param_ffa_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { struct tee_shm *shm = p->u.memref.shm; + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; @@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size; return 0; @@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, * @optee: main service struct * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays - * @params: subsystem itnernal parameter representation + * @params: subsystem internal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params, - const struct tee_param *params) + const struct tee_param *params, + bool update_out) { size_t n; @@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, 
sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: - if (to_msg_param_ffa_mem(mp, p)) + if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default: diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index dc0f355ef72a..20eda508dbac 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -185,10 +185,12 @@ struct optee_ops { bool system_thread); int (*to_msg_param)(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params); + size_t num_params, const struct tee_param *params, + bool update_out); int (*from_msg_param)(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params); + const struct optee_msg_param *msg_params, + bool update_out); }; /** @@ -316,23 +318,35 @@ void optee_release(struct tee_context *ctx); void optee_release_supp(struct tee_context *ctx); static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { - p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + - attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; - p->u.value.a = mp->u.value.a; - p->u.value.b = mp->u.value.b; - p->u.value.c = mp->u.value.c; + if (!update_out) + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + + attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + + if (attr == OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT || + attr == OPTEE_MSG_ATTR_TYPE_VALUE_INOUT || !update_out) { + p->u.value.a = mp->u.value.a; + p->u.value.b = mp->u.value.b; + p->u.value.c = mp->u.value.c; + } } static inline void optee_to_msg_param_value(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, + bool update_out) { - mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - - TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; - mp->u.value.a = p->u.value.a; - mp->u.value.b = p->u.value.b; - mp->u.value.c = p->u.value.c; + if (!update_out) + mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - + TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || + p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT || !update_out) { + mp->u.value.a = p->u.value.a; + mp->u.value.b = p->u.value.b; + mp->u.value.c = p->u.value.c; + } } void optee_cq_init(struct optee_call_queue *cq, int thread_count); diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c index ebbbd42b0e3e..580e6b9b0606 100644 --- a/drivers/tee/optee/rpc.c +++ b/drivers/tee/optee/rpc.c @@ -63,7 +63,7 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) + arg->params, false /*!update_out*/)) goto bad; for (i = 0; i < arg->num_params; i++) { @@ -107,7 +107,8 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } else { params[3].u.value.a = msg.len; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) + arg->num_params, params, + true /*update_out*/)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; else arg->ret = TEEC_SUCCESS; @@ -188,6 +189,7 @@ static void handle_rpc_func_cmd_wait(struct 
optee_msg_arg *arg) static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, struct optee_msg_arg *arg) { + bool update_out = false; struct tee_param *params; arg->ret_origin = TEEC_ORIGIN_COMMS; @@ -200,15 +202,21 @@ static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, } if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) { + arg->params, update_out)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } arg->ret = optee_supp_thrd_req(ctx, arg->cmd, arg->num_params, params); + /* + * Special treatment for OPTEE_RPC_CMD_SHM_ALLOC since input is a + * value type, but the output is a memref type. + */ + if (arg->cmd != OPTEE_RPC_CMD_SHM_ALLOC) + update_out = true; if (optee->ops->to_msg_param(optee, arg->params, arg->num_params, - params)) + params, update_out)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; out: kfree(params); @@ -270,7 +278,7 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; @@ -280,7 +288,8 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, params[0].u.value.b = 0; params[0].u.value.c = 0; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -324,7 +333,7 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -358,7 +367,8 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, params[0].u.value.b = rdev->descr.capacity; params[0].u.value.c = rdev->descr.reliable_wr_count; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -384,7 +394,7 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -401,7 +411,8 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, goto out; } if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index 165fadd9abc9..cfdae266548b 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -81,20 +81,26 @@ static int optee_cpuhp_disable_pcpu_irq(unsigned int cpu) */ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; phys_addr_t pa; int rc; + if (update_out) { + if (attr == 
OPTEE_MSG_ATTR_TYPE_TMEM_INPUT) + return 0; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; - p->u.memref.size = mp->u.tmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref; if (!shm) { p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; - return 0; + goto out; } rc = tee_shm_get_pa(shm, 0, &pa); @@ -103,18 +109,25 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa; p->u.memref.shm = shm; - +out: + p->u.memref.size = mp->u.tmem.size; return 0; } static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_RMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; - p->u.memref.size = mp->u.rmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref; if (shm) { @@ -124,6 +137,8 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; } +out: + p->u.memref.size = mp->u.rmem.size; } /** @@ -133,11 +148,13 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { int rc; size_t n; @@ -149,25 +166,27 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT: - rc = from_msg_param_tmp_mem(p, attr, mp); + rc = from_msg_param_tmp_mem(p, attr, mp, update_out); if (rc) return rc; break; case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT: - from_msg_param_reg_mem(p, attr, mp); + from_msg_param_reg_mem(p, attr, mp, update_out); break; default: @@ -178,20 +197,25 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, } static int to_msg_param_tmp_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { int rc; phys_addr_t pa; + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.tmem.size = p->u.memref.size; if (!p->u.memref.shm) { mp->u.tmem.buf_ptr = 0; - return 0; + goto out; } rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa); @@ -201,19 +225,27 @@ static int to_msg_param_tmp_mem(struct optee_msg_param *mp, mp->u.tmem.buf_ptr = pa; mp->attr |= 
OPTEE_MSG_ATTR_CACHE_PREDEFINED << OPTEE_MSG_ATTR_CACHE_SHIFT; - +out: + mp->u.tmem.size = p->u.memref.size; return 0; } static int to_msg_param_reg_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.rmem.size = p->u.memref.size; mp->u.rmem.offs = p->u.memref.shm_offs; +out: + mp->u.rmem.size = p->u.memref.size; return 0; } @@ -223,11 +255,13 @@ static int to_msg_param_reg_mem(struct optee_msg_param *mp, * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays * @params: subsystem itnernal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params) + size_t num_params, const struct tee_param *params, + bool update_out) { int rc; size_t n; @@ -238,21 +272,23 @@ static int optee_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (tee_shm_is_dynamic(p->u.memref.shm)) - rc = to_msg_param_reg_mem(mp, p); + rc = to_msg_param_reg_mem(mp, p, update_out); else - rc = to_msg_param_tmp_mem(mp, p); + rc = to_msg_param_tmp_mem(mp, p, update_out); if (rc) return rc; break; From patchwork Wed Mar 5 13:04:10 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002626 Received: from mail-ed1-f49.google.com (mail-ed1-f49.google.com [209.85.208.49]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6B73324CED5 for ; Wed, 5 Mar 2025 13:06:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.49 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180015; cv=none; b=ieAYUjVrqPVsmMm4FjhoIifT5kLxrfoRSeRPbIp18eTi0ZhVFidpePVvgAQWly0nmi4yL3c/GTZhsE68PyVoeEB2K//5YvUnS3kSQeTdUIJiLNEkImQKWGQlzIyQisCEtLUsI74lxOJ/P+3Hw6KVXFkAVK6w1455zdl2+AaIwyI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180015; c=relaxed/simple; bh=YB2dQ1rBZRLx3aY/9FJdGsn3BIuWnqmUrluD4l+qzqU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=vBFkaCd4raE78Fo4lG7EHYMdyqg8YRb8ZsQdFOF7MGF8SGJxYEGFPTqwt/hTte63cinf7UfkgXboY9MCSbxbxuPyQ810mB0OKDl6IileOGk6tEg3rkL5+KZbRmyfNilSvMeETd1Z7XxCJkmmYyy1YHqylwDyQ1rXZyZzU3qcYY0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=OMFmZ87T; arc=none 
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal ,
 Benjamin Gaignard , Brian Starkey , John Stultz , "T . J .
Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 04/10] optee: sync secure world ABI headers Date: Wed, 5 Mar 2025 14:04:10 +0100 Message-ID: <20250305130634.1850178-5-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> Precedence: bulk X-Mailing-List: linux-media@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Update the header files describing the secure world ABI, both with and without FF-A. The ABI is extended to deal with restricted memory, but as usual backward compatible. Signed-off-by: Jens Wiklander --- drivers/tee/optee/optee_ffa.h | 27 ++++++++++--- drivers/tee/optee/optee_msg.h | 65 ++++++++++++++++++++++++++++++-- drivers/tee/optee/optee_smc.h | 71 ++++++++++++++++++++++++++++++++++- 3 files changed, 154 insertions(+), 9 deletions(-) diff --git a/drivers/tee/optee/optee_ffa.h b/drivers/tee/optee/optee_ffa.h index 257735ae5b56..7bd037200343 100644 --- a/drivers/tee/optee/optee_ffa.h +++ b/drivers/tee/optee/optee_ffa.h @@ -81,7 +81,7 @@ * as the second MSG arg struct for * OPTEE_FFA_YIELDING_CALL_WITH_ARG. * Bit[31:8]: Reserved (MBZ) - * w5: Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below, + * w5: Bitfield of OP-TEE capabilities OPTEE_FFA_SEC_CAP_* * w6: The maximum secure world notification number * w7: Not used (MBZ) */ @@ -94,6 +94,8 @@ #define OPTEE_FFA_SEC_CAP_ASYNC_NOTIF BIT(1) /* OP-TEE supports probing for RPMB device if needed */ #define OPTEE_FFA_SEC_CAP_RPMB_PROBE BIT(2) +/* OP-TEE supports Restricted Memory for secure data path */ +#define OPTEE_FFA_SEC_CAP_RSTMEM BIT(3) #define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2) @@ -108,7 +110,7 @@ * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_UNREGISTER_SHM OPTEE_FFA_BLOCKING_CALL(3) @@ -119,16 +121,31 @@ * Call register usage: * w3: Service ID, OPTEE_FFA_ENABLE_ASYNC_NOTIF * w4: Notification value to request bottom half processing, should be - * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE. 
+ * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE * w5-w7: Not used (MBZ) * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_ENABLE_ASYNC_NOTIF OPTEE_FFA_BLOCKING_CALL(5) -#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 +#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 + +/* + * Release Restricted memory + * + * Call register usage: + * w3: Service ID, OPTEE_FFA_RECLAIM_RSTMEM + * w4: Shared memory handle, lower bits + * w5: Shared memory handle, higher bits + * w6-w7: Not used (MBZ) + * + * Return register usage: + * w3: Error code, 0 on success + * w4-w7: Note used (MBZ) + */ +#define OPTEE_FFA_RELEASE_RSTMEM OPTEE_FFA_BLOCKING_CALL(8) /* * Call with struct optee_msg_arg as argument in the supplied shared memory diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h index e8840a82b983..1b558526e7d9 100644 --- a/drivers/tee/optee/optee_msg.h +++ b/drivers/tee/optee/optee_msg.h @@ -133,13 +133,13 @@ struct optee_msg_param_rmem { }; /** - * struct optee_msg_param_fmem - ffa memory reference parameter + * struct optee_msg_param_fmem - FF-A memory reference parameter * @offs_lower: Lower bits of offset into shared memory reference * @offs_upper: Upper bits of offset into shared memory reference * @internal_offs: Internal offset into the first page of shared memory * reference * @size: Size of the buffer - * @global_id: Global identifier of Shared memory + * @global_id: Global identifier of the shared memory */ struct optee_msg_param_fmem { u32 offs_low; @@ -165,7 +165,7 @@ struct optee_msg_param_value { * @attr: attributes * @tmem: parameter by temporary memory reference * @rmem: parameter by registered memory reference - * @fmem: parameter by ffa registered memory reference + * @fmem: parameter by FF-A registered memory reference * @value: parameter by opaque value * @octets: parameter by octet string * @@ -296,6 +296,18 @@ struct optee_msg_arg { */ #define OPTEE_MSG_FUNCID_GET_OS_REVISION 0x0001 +/* + * Values used in OPTEE_MSG_CMD_LEND_RSTMEM below + * OPTEE_MSG_RSTMEM_RESERVED Reserved + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY Secure Video Playback + * OPTEE_MSG_RSTMEM_TRUSTED_UI Trused UI + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD Secure Video Recording + */ +#define OPTEE_MSG_RSTMEM_RESERVED 0 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY 1 +#define OPTEE_MSG_RSTMEM_TRUSTED_UI 2 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD 3 + /* * Do a secure call with struct optee_msg_arg as argument * The OPTEE_MSG_CMD_* below defines what goes in struct optee_msg_arg::cmd @@ -337,6 +349,49 @@ struct optee_msg_arg { * OPTEE_MSG_CMD_STOP_ASYNC_NOTIF informs secure world that from now is * normal world unable to process asynchronous notifications. Typically * used when the driver is shut down. + * + * OPTEE_MSG_CMD_LEND_RSTMEM lends restricted memory. The passed normal + * physical memory is restricted from normal world access. The memory + * should be unmapped prior to this call since it becomes inaccessible + * during the request. + * Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a OPTEE_MSG_RSTMEM_* defined above + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + * [in] param[1].u.tmem.buf_ptr physical address + * [in] param[1].u.tmem.size size + * [in] param[1].u.tmem.shm_ref holds restricted memory reference + * + * OPTEE_MSG_CMD_RECLAIM_RSTMEM reclaims a previously lent restricted + * memory reference. 
The physical memory is accessible by the normal world + * after this function has return and can be mapped again. The information + * is passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * + * OPTEE_MSG_CMD_GET_RSTMEM_CONFIG get configuration for a specific + * restricted memory use case. Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INOUT + * [in] param[0].value.a OPTEE_MSG_RSTMEM_* + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_{R,F}MEM_OUTPUT + * [in] param[1].u.{r,f}mem Buffer or NULL + * [in] param[1].u.{r,f}mem.size Provided size of buffer or 0 for query + * output for the restricted use case: + * [out] param[0].value.a Minimal size of SDP memory + * [out] param[0].value.b Required alignment of size and start of + * restricted memory + * [out] param[1].{r,f}mem.size Size of output data + * [out] param[1].{r,f}mem If non-NULL, contains an array of + * uint16_t holding endpoints that + * must be included when lending + * memory for this use case + * + * OPTEE_MSG_CMD_ASSIGN_RSTMEM assigns use-case to restricted memory + * previously lent using the FFA_LEND framework ABI. Parameters are passed + * as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * [in] param[0].u.value.b OPTEE_MSG_RSTMEM_* defined above */ #define OPTEE_MSG_CMD_OPEN_SESSION 0 #define OPTEE_MSG_CMD_INVOKE_COMMAND 1 @@ -346,6 +401,10 @@ struct optee_msg_arg { #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 #define OPTEE_MSG_CMD_DO_BOTTOM_HALF 6 #define OPTEE_MSG_CMD_STOP_ASYNC_NOTIF 7 +#define OPTEE_MSG_CMD_LEND_RSTMEM 8 +#define OPTEE_MSG_CMD_RECLAIM_RSTMEM 9 +#define OPTEE_MSG_CMD_GET_RSTMEM_CONFIG 10 +#define OPTEE_MSG_CMD_ASSIGN_RSTMEM 11 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004 #endif /* _OPTEE_MSG_H */ diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 879426300821..abc379ce190c 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h @@ -264,7 +264,6 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0) /* Secure world can communicate via previously unregistered shared memory */ #define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1) - /* * Secure world supports commands "register/unregister shared memory", * secure world accepts command buffers located in any parts of non-secure RAM @@ -280,6 +279,10 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6) /* Secure world supports probing for RPMB device if needed */ #define OPTEE_SMC_SEC_CAP_RPMB_PROBE BIT(7) +/* Secure world supports Secure Data Path */ +#define OPTEE_SMC_SEC_CAP_SDP BIT(8) +/* Secure world supports dynamic restricted memory */ +#define OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM BIT(9) #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \ @@ -451,6 +454,72 @@ struct optee_smc_disable_shm_cache_result { /* See OPTEE_SMC_CALL_WITH_REGD_ARG above */ #define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG 19 +/* + * Get Secure Data Path memory config + * + * Returns the Secure Data Path memory config. 
+ * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_SDP_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Physical address of start of SDP memory + * a2 Size of SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ +#define OPTEE_SMC_FUNCID_GET_SDP_CONFIG 20 +#define OPTEE_SMC_GET_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SDP_CONFIG) + +struct optee_smc_get_sdp_config_result { + unsigned long status; + unsigned long start; + unsigned long size; + unsigned long flags; +}; + +/* + * Get Secure Data Path dynamic memory config + * + * Returns the Secure Data Path dynamic memory config. + * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_DYN_SHM_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Minamal size of SDP memory + * a2 Required alignment of size and start of registered SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ + +#define OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG 21 +#define OPTEE_SMC_GET_DYN_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG) + +struct optee_smc_get_dyn_sdp_config_result { + unsigned long status; + unsigned long size; + unsigned long align; + unsigned long flags; +}; /* * Resume from RPC (for example after processing a foreign interrupt) From patchwork Wed Mar 5 13:04:11 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002627 Received: from mail-ed1-f53.google.com (mail-ed1-f53.google.com [209.85.208.53]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8B6FB24DFF9 for ; Wed, 5 Mar 2025 13:06:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.53 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180019; cv=none; b=IIdpSpROu0JO67EZ7BtkKFIgYmYUbXnQQuGNy3OEfOzFuX3wghDzXRsnUv8HRYTdJTf2BmlFdvrj9t7mnukawfQi/vLIDxtOkvQOwaJjUKG/8JPutxkaJ5S3+PIVaPP3WI5oWXZTsg8g1vAFRdf3PzCQUCYyuEDOutbzpfjhDYQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180019; c=relaxed/simple; bh=3shnqngrEyjrgHR4EUbJ3v8C6gwyfXgvQ3zrDaUBzcE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uEj8o+UWnf2dKNrIfvpkF3w6rRE4N4KSKft9qBZdOCb7zKaN8sorPWGDEhsaPANWsKBQvbq8lIjIz5aCO5AEQyv+2QAa6F4dU5EztflOSwV4tUNhjTfNl40zX+Ufdt0fzx7I/3teWLi4xBJHTpaYhE0GIz0TQ34zta5+pgEnZ7I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=fdNpWnH/; arc=none smtp.client-ip=209.85.208.53 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="fdNpWnH/" Received: by 
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal ,
 Benjamin Gaignard , Brian Starkey , John Stultz , "T . J .
Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 05/10] tee: implement restricted DMA-heap Date: Wed, 5 Mar 2025 14:04:11 +0100 Message-ID: <20250305130634.1850178-6-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> Precedence: bulk X-Mailing-List: linux-media@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Implement DMA heap for restricted DMA-buf allocation in the TEE subsystem. Restricted memory refers to memory buffers behind a hardware enforced firewall. It is not accessible to the kernel during normal circumstances but rather only accessible to certain hardware IPs or CPUs executing in higher or differently privileged mode than the kernel itself. This interface allows to allocate and manage such restricted memory buffers via interaction with a TEE implementation. The restricted memory is allocated for a specific use-case, like Secure Video Playback, Trusted UI, or Secure Video Recording where certain hardware devices can access the memory. The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE backend drivers needs to implement restricted memory pool to manage the restricted memory. Signed-off-by: Jens Wiklander --- drivers/tee/Makefile | 1 + drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ drivers/tee/tee_private.h | 6 + include/linux/tee_core.h | 62 +++++ 4 files changed, 539 insertions(+) create mode 100644 drivers/tee/tee_heap.c diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 5488cba30bd2..949a6a79fb06 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_TEE) += tee.o tee-objs += tee_core.o +tee-objs += tee_heap.o tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c new file mode 100644 index 000000000000..476ab2e27260 --- /dev/null +++ b/drivers/tee/tee_heap.c @@ -0,0 +1,470 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025, Linaro Limited + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "tee_private.h" + +struct tee_dma_heap { + struct dma_heap *heap; + enum tee_dma_heap_id id; + struct tee_rstmem_pool *pool; + struct tee_device *teedev; + /* Protects pool and teedev above */ + struct mutex mu; +}; + +struct tee_heap_buffer { + struct tee_rstmem_pool *pool; + struct tee_device *teedev; + size_t size; + size_t offs; + struct sg_table table; +}; + +struct tee_heap_attachment { + struct sg_table table; + struct device *dev; +}; + +struct tee_rstmem_static_pool { + struct tee_rstmem_pool pool; + struct gen_pool *gen_pool; + phys_addr_t pa_base; +}; + +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS) +static DEFINE_XARRAY_ALLOC(tee_dma_heap); + +static int copy_sg_table(struct sg_table *dst, struct sg_table *src) +{ + struct scatterlist *dst_sg; + struct scatterlist *src_sg; + int ret; + int i; + + ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL); + if (ret) + return ret; + + dst_sg = dst->sgl; + for_each_sgtable_sg(src, src_sg, i) { + sg_set_page(dst_sg, sg_page(src_sg), src_sg->length, + src_sg->offset); + dst_sg = sg_next(dst_sg); + } + + return 0; +} + +static 
int tee_heap_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_heap_buffer *buf = dmabuf->priv; + struct tee_heap_attachment *a; + int ret; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + ret = copy_sg_table(&a->table, &buf->table); + if (ret) { + kfree(a); + return ret; + } + + a->dev = attachment->dev; + attachment->priv = a; + + return 0; +} + +static void tee_heap_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_heap_attachment *a = attachment->priv; + + sg_free_table(&a->table); + kfree(a); +} + +static struct sg_table * +tee_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct tee_heap_attachment *a = attachment->priv; + int ret; + + ret = dma_map_sgtable(attachment->dev, &a->table, direction, + DMA_ATTR_SKIP_CPU_SYNC); + if (ret) + return ERR_PTR(ret); + + return &a->table; +} + +static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct tee_heap_attachment *a = attachment->priv; + + WARN_ON(&a->table != table); + + dma_unmap_sgtable(attachment->dev, table, direction, + DMA_ATTR_SKIP_CPU_SYNC); +} + +static void tee_heap_buf_free(struct dma_buf *dmabuf) +{ + struct tee_heap_buffer *buf = dmabuf->priv; + struct tee_device *teedev = buf->teedev; + + buf->pool->ops->free(buf->pool, &buf->table); + tee_device_put(teedev); +} + +static const struct dma_buf_ops tee_heap_buf_ops = { + .attach = tee_heap_attach, + .detach = tee_heap_detach, + .map_dma_buf = tee_heap_map_dma_buf, + .unmap_dma_buf = tee_heap_unmap_dma_buf, + .release = tee_heap_buf_free, +}; + +static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap, + unsigned long len, u32 fd_flags, + u64 heap_flags) +{ + struct tee_dma_heap *h = dma_heap_get_drvdata(heap); + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct tee_device *teedev = NULL; + struct tee_heap_buffer *buf; + struct tee_rstmem_pool *pool; + struct dma_buf *dmabuf; + int rc; + + mutex_lock(&h->mu); + if (tee_device_get(h->teedev)) { + teedev = h->teedev; + pool = h->pool; + } + mutex_unlock(&h->mu); + + if (!teedev) + return ERR_PTR(-EINVAL); + + buf = kzalloc(sizeof(*buf), GFP_KERNEL); + if (!buf) { + dmabuf = ERR_PTR(-ENOMEM); + goto err; + } + buf->size = len; + buf->pool = pool; + buf->teedev = teedev; + + rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs); + if (rc) { + dmabuf = ERR_PTR(rc); + goto err_kfree; + } + + exp_info.ops = &tee_heap_buf_ops; + exp_info.size = len; + exp_info.priv = buf; + exp_info.flags = fd_flags; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) + goto err_rstmem_free; + + return dmabuf; + +err_rstmem_free: + pool->ops->free(pool, &buf->table); +err_kfree: + kfree(buf); +err: + tee_device_put(h->teedev); + return dmabuf; +} + +static const struct dma_heap_ops tee_dma_heap_ops = { + .allocate = tee_dma_heap_alloc, +}; + +static const char *heap_id_2_name(enum tee_dma_heap_id id) +{ + switch (id) { + case TEE_DMA_HEAP_SECURE_VIDEO_PLAY: + return "restricted,secure-video"; + case TEE_DMA_HEAP_TRUSTED_UI: + return "restricted,trusted-ui"; + case TEE_DMA_HEAP_SECURE_VIDEO_RECORD: + return "restricted,secure-video-record"; + default: + return NULL; + } +} + +static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool) +{ + struct dma_heap_export_info exp_info = { + .ops = &tee_dma_heap_ops, + .name = heap_id_2_name(id), + }; + 
struct tee_dma_heap *h; + int rc; + + if (!exp_info.name) + return -EINVAL; + + if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) { + if (!xa_load(&tee_dma_heap, id)) + return -EEXIST; + return -ENOMEM; + } + + h = kzalloc(sizeof(*h), GFP_KERNEL); + if (!h) + return -ENOMEM; + h->id = id; + h->teedev = teedev; + h->pool = pool; + mutex_init(&h->mu); + + exp_info.priv = h; + h->heap = dma_heap_add(&exp_info); + if (IS_ERR(h->heap)) { + rc = PTR_ERR(h->heap); + kfree(h); + + return rc; + } + + /* "can't fail" due to the call to xa_reserve() above */ + return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL), + "xa_store() failed"); +} + +int tee_device_register_dma_heap(struct tee_device *teedev, + enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool) +{ + struct tee_dma_heap *h; + int rc; + + h = xa_load(&tee_dma_heap, id); + if (h) { + mutex_lock(&h->mu); + if (h->teedev) { + rc = -EBUSY; + } else { + h->teedev = teedev; + h->pool = pool; + rc = 0; + } + mutex_unlock(&h->mu); + } else { + rc = alloc_dma_heap(teedev, id, pool); + } + + if (rc) + dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n", + id, heap_id_2_name(id)); + + return rc; +} + +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev) +{ + struct tee_rstmem_pool *pool; + struct tee_dma_heap *h; + u_long i; + + xa_for_each(&tee_dma_heap, i, h) { + if (h) { + pool = NULL; + mutex_lock(&h->mu); + if (h->teedev == teedev) { + pool = h->pool; + h->teedev = NULL; + h->pool = NULL; + } + mutex_unlock(&h->mu); + if (pool) + pool->ops->destroy_pool(pool); + } + } +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev, + struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_heap_buffer *buf; + int rc; + + /* The DMA-buf must be from our heap */ + if (dmabuf->ops != &tee_heap_buf_ops) + return -EINVAL; + + buf = dmabuf->priv; + /* The buffer must be from the same teedev */ + if (buf->teedev != teedev) + return -EINVAL; + + shm->size = buf->size; + + rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm, + parent_shm); + if (!rc && *parent_shm) + *offset = buf->offs; + + return rc; +} +#else +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused, + enum tee_dma_heap_id id __always_unused, + struct tee_rstmem_pool *pool __always_unused) +{ + return -EINVAL; +} +EXPORT_SYMBOL_GPL(tee_device_register_dma_heap); + +void +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused) +{ +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused, + struct dma_buf *dmabuf __always_unused, + size_t *offset __always_unused, + struct tee_shm *shm __always_unused, + struct tee_shm **parent_shm __always_unused) +{ + return -EINVAL; +} +#endif + +static struct tee_rstmem_static_pool * +to_rstmem_static_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct tee_rstmem_static_pool, pool); +} + +static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + phys_addr_t pa; + int ret; + + pa = gen_pool_alloc(stp->gen_pool, size); + if (!pa) + return -ENOMEM; + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (ret) { + gen_pool_free(stp->gen_pool, pa, size); + return ret; + } + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = 
pa - stp->pa_base; + + return 0; +} + +static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); +} + +static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + shm->paddr = stp->pa_base + offs; + *parent_shm = NULL; + + return 0; +} + +static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + gen_pool_destroy(stp->gen_pool); + kfree(stp); +} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_static = { + .alloc = rstmem_pool_op_static_alloc, + .free = rstmem_pool_op_static_free, + .update_shm = rstmem_pool_op_static_update_shm, + .destroy_pool = rstmem_pool_op_static_destroy_pool, +}; + +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size) +{ + const size_t page_mask = PAGE_SIZE - 1; + struct tee_rstmem_static_pool *stp; + int rc; + + /* Check it's page aligned */ + if ((paddr | size) & page_mask) + return ERR_PTR(-EINVAL); + + stp = kzalloc(sizeof(*stp), GFP_KERNEL); + if (!stp) + return ERR_PTR(-ENOMEM); + + stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!stp->gen_pool) { + rc = -ENOMEM; + goto err_free; + } + + rc = gen_pool_add(stp->gen_pool, paddr, size, -1); + if (rc) + goto err_free_pool; + + stp->pool.ops = &rstmem_pool_ops_static; + stp->pa_base = paddr; + return &stp->pool; + +err_free_pool: + gen_pool_destroy(stp->gen_pool); +err_free: + kfree(stp); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 9bc50605227c..6c6ff5d5eed2 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +int tee_heap_update_from_dma_buf(struct tee_device *teedev, + struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm); + #endif /*TEE_PRIVATE_H*/ diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index a38494d6b5f4..16ef078247ae 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -8,9 +8,11 @@ #include #include +#include #include #include #include +#include #include #include #include @@ -30,6 +32,12 @@ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 +enum tee_dma_heap_id { + TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1, + TEE_DMA_HEAP_TRUSTED_UI, + TEE_DMA_HEAP_SECURE_VIDEO_RECORD, +}; + /** * struct tee_device - TEE Device representation * @name: name of device @@ -116,6 +124,33 @@ struct tee_desc { u32 flags; }; +/** + * struct tee_rstmem_pool - restricted memory pool + * @ops: operations + * + * This is an abstract interface where this struct is expected to be + * embedded in another struct specific to the implementation. 
+ */ +struct tee_rstmem_pool { + const struct tee_rstmem_pool_ops *ops; +}; + +/** + * struct tee_rstmem_pool_ops - restricted memory pool operations + * @alloc: called when allocating restricted memory + * @free: called when freeing restricted memory + * @destroy_pool: called when destroying the pool + */ +struct tee_rstmem_pool_ops { + int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt, + size_t size, size_t *offs); + void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt); + int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt, + size_t offs, struct tee_shm *shm, + struct tee_shm **parent_shm); + void (*destroy_pool)(struct tee_rstmem_pool *pool); +}; + /** * tee_device_alloc() - Allocate a new struct tee_device instance * @teedesc: Descriptor for this driver @@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev); */ void tee_device_unregister(struct tee_device *teedev); +int tee_device_register_dma_heap(struct tee_device *teedev, + enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool); +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev); + /** * tee_device_set_dev_groups() - Set device attribute groups * @teedev: Device to register @@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool) pool->ops->destroy_pool(pool); } +/** + * tee_rstmem_static_pool_alloc() - Create a restricted memory manager + * @paddr: Physical address of start of pool + * @size: Size in bytes of the pool + * + * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure. + */ +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size); + +/** + * tee_rstmem_pool_free() - Free a restricted memory pool + * @pool: The restricted memory pool to free + * + * There must be no remaining restricted memory allocated from this pool + * when this function is called. + */ +static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool) +{ + pool->ops->destroy_pool(pool); +} + /** * tee_get_drvdata() - Return driver_data pointer * @returns the driver_data pointer supplied to tee_register(). 
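[Editor's note] To illustrate how a TEE backend driver is expected to use the interface added in this patch, here is a minimal sketch that wires a static carveout into a DMA heap. It is an illustration only, not code from the series; the carveout base and size are made-up placeholders and error handling is trimmed:

#include <linux/err.h>
#include <linux/tee_core.h>

/* Hypothetical carveout; a real driver gets this from firmware or DT. */
#define EXAMPLE_SDP_BASE	0xde000000UL
#define EXAMPLE_SDP_SIZE	0x01000000UL

static int example_register_sdp_heap(struct tee_device *teedev)
{
	struct tee_rstmem_pool *pool;
	int rc;

	/* Manage the carveout with the static pool helper added above. */
	pool = tee_rstmem_static_pool_alloc(EXAMPLE_SDP_BASE, EXAMPLE_SDP_SIZE);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* Expose it to user space as the "restricted,secure-video" DMA heap. */
	rc = tee_device_register_dma_heap(teedev, TEE_DMA_HEAP_SECURE_VIDEO_PLAY,
					  pool);
	if (rc)
		tee_rstmem_pool_free(pool);

	return rc;
}

On driver removal, a single call to tee_device_unregister_all_dma_heaps(teedev) detaches the device from its heaps and hands each pool back to its destroy_pool() op.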
From patchwork Wed Mar 5 13:04:12 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002628
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz, "T . J . Mercier", Christian König, Sumit Garg, Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com, Simona Vetter, Daniel Stone, Etienne Carriere, Jens Wiklander
Subject: [PATCH v6 06/10] tee: new ioctl to register a tee_shm from a dmabuf file descriptor
Date: Wed, 5 Mar 2025 14:04:12 +0100
Message-ID: <20250305130634.1850178-7-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

From: Etienne Carriere

Enable userspace to create a tee_shm object that refers to a dmabuf reference. Userspace registers the dmabuf file descriptor in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace. Userspace is then free to close the dmabuf file descriptor, since all the resources are now held via the tee_shm object. Closing the tee_shm file descriptor releases all resources used by the tee_shm object. This change only supports dmabuf references that relate to physically contiguous memory buffers. A new tee_shm flag, TEE_SHM_DMA_BUF, identifies tee_shm objects built from a registered dmabuf.
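[Editor's note] A rough user-space sketch of the flow described above: register an existing DMA-buf file descriptor and get a tee_shm file descriptor back. The /dev/tee0 path and the already-open dmabuf_fd are assumptions made for illustration:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/tee.h>

/* dmabuf_fd is assumed to come from a DMA heap or another exporter. */
static int register_dmabuf_with_tee(int dmabuf_fd)
{
	struct tee_ioctl_shm_register_fd_data data;
	int tee_fd, shm_fd;

	tee_fd = open("/dev/tee0", O_RDWR);
	if (tee_fd < 0)
		return -1;

	memset(&data, 0, sizeof(data));
	data.fd = dmabuf_fd;	/* flags must be zero on input */

	shm_fd = ioctl(tee_fd, TEE_IOC_SHM_REGISTER_FD, &data);
	if (shm_fd >= 0) {
		/* data.id can now be used as a memref in TEE_IOC_INVOKE. */
		printf("tee_shm id %d, size %llu\n", data.id,
		       (unsigned long long)data.size);
		/* Safe to close the dmabuf fd; the tee_shm holds a reference. */
		close(dmabuf_fd);
	}

	close(tee_fd);
	return shm_fd;
}

Closing the returned shm_fd releases the registration and, with it, the reference this path holds on the underlying DMA-buf.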
Signed-off-by: Etienne Carriere Signed-off-by: Olivier Masse Signed-off-by: Jens Wiklander --- drivers/tee/tee_core.c | 145 ++++++++++++++++++++++++++----------- drivers/tee/tee_private.h | 1 + drivers/tee/tee_shm.c | 146 ++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 1 + include/linux/tee_drv.h | 10 +++ include/uapi/linux/tee.h | 29 ++++++++ 6 files changed, 288 insertions(+), 44 deletions(-) diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 685afcaa3ea1..3a71643766d5 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx, return ret; } +static int +tee_ioctl_shm_register_fd(struct tee_context *ctx, + struct tee_ioctl_shm_register_fd_data __user *udata) +{ + struct tee_ioctl_shm_register_fd_data data; + struct tee_shm *shm; + long ret; + + if (copy_from_user(&data, udata, sizeof(data))) + return -EFAULT; + + /* Currently no input flags are supported */ + if (data.flags) + return -EINVAL; + + shm = tee_shm_register_fd(ctx, data.fd); + if (IS_ERR(shm)) + return -EINVAL; + + data.id = shm->id; + data.flags = shm->flags; + data.size = shm->size; + + if (copy_to_user(udata, &data, sizeof(data))) + ret = -EFAULT; + else + ret = tee_shm_get_fd(shm); + + /* + * When user space closes the file descriptor the shared memory + * should be freed or if tee_shm_get_fd() failed then it will + * be freed immediately. + */ + tee_shm_put(shm); + return ret; +} + +static int param_from_user_memref(struct tee_context *ctx, + struct tee_param_memref *memref, + struct tee_ioctl_param *ip) +{ + struct tee_shm *shm; + size_t offs = 0; + + /* + * If a NULL pointer is passed to a TA in the TEE, + * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL + * indicating a NULL memory reference. + */ + if (ip->c != TEE_MEMREF_NULL) { + /* + * If we fail to get a pointer to a shared + * memory object (and increase the ref count) + * from an identifier we return an error. All + * pointers that has been added in params have + * an increased ref count. It's the callers + * responibility to do tee_shm_put() on all + * resolved pointers. + */ + shm = tee_shm_get_from_id(ctx, ip->c); + if (IS_ERR(shm)) + return PTR_ERR(shm); + + /* + * Ensure offset + size does not overflow + * offset and does not overflow the size of + * the referred shared memory object. 
+ */ + if ((ip->a + ip->b) < ip->a || + (ip->a + ip->b) > shm->size) { + tee_shm_put(shm); + return -EINVAL; + } + + if (shm->flags & TEE_SHM_DMA_BUF) { + struct tee_shm *parent_shm; + + parent_shm = tee_shm_get_parent_shm(shm, &offs); + if (parent_shm) { + tee_shm_put(shm); + shm = parent_shm; + } + } + } else if (ctx->cap_memref_null) { + /* Pass NULL pointer to OP-TEE */ + shm = NULL; + } else { + return -EINVAL; + } + + memref->shm_offs = ip->a + offs; + memref->size = ip->b; + memref->shm = shm; + + return 0; +} + static int params_from_user(struct tee_context *ctx, struct tee_param *params, size_t num_params, struct tee_ioctl_param __user *uparams) @@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params, size_t n; for (n = 0; n < num_params; n++) { - struct tee_shm *shm; struct tee_ioctl_param ip; + int rc; if (copy_from_user(&ip, uparams + n, sizeof(ip))) return -EFAULT; @@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params, case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: - /* - * If a NULL pointer is passed to a TA in the TEE, - * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL - * indicating a NULL memory reference. - */ - if (ip.c != TEE_MEMREF_NULL) { - /* - * If we fail to get a pointer to a shared - * memory object (and increase the ref count) - * from an identifier we return an error. All - * pointers that has been added in params have - * an increased ref count. It's the callers - * responibility to do tee_shm_put() on all - * resolved pointers. - */ - shm = tee_shm_get_from_id(ctx, ip.c); - if (IS_ERR(shm)) - return PTR_ERR(shm); - - /* - * Ensure offset + size does not overflow - * offset and does not overflow the size of - * the referred shared memory object. 
- */ - if ((ip.a + ip.b) < ip.a || - (ip.a + ip.b) > shm->size) { - tee_shm_put(shm); - return -EINVAL; - } - } else if (ctx->cap_memref_null) { - /* Pass NULL pointer to OP-TEE */ - shm = NULL; - } else { - return -EINVAL; - } - - params[n].u.memref.shm_offs = ip.a; - params[n].u.memref.size = ip.b; - params[n].u.memref.shm = shm; + rc = param_from_user_memref(ctx, ¶ms[n].u.memref, + &ip); + if (rc) + return rc; break; default: /* Unknown attribute */ @@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return tee_ioctl_shm_alloc(ctx, uarg); case TEE_IOC_SHM_REGISTER: return tee_ioctl_shm_register(ctx, uarg); + case TEE_IOC_SHM_REGISTER_FD: + return tee_ioctl_shm_register_fd(ctx, uarg); case TEE_IOC_OPEN_SESSION: return tee_ioctl_open_session(ctx, uarg); case TEE_IOC_INVOKE: @@ -1288,3 +1352,4 @@ MODULE_AUTHOR("Linaro"); MODULE_DESCRIPTION("TEE Driver"); MODULE_VERSION("1.0"); MODULE_LICENSE("GPL v2"); +MODULE_IMPORT_NS("DMA_BUF"); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 6c6ff5d5eed2..aad7f6c7e0f0 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx); struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs); int tee_heap_update_from_dma_buf(struct tee_device *teedev, struct dma_buf *dmabuf, size_t *offset, diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index daf6e5cfd59a..8b79918468b5 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -4,6 +4,7 @@ */ #include #include +#include #include #include #include @@ -15,6 +16,16 @@ #include #include "tee_private.h" +/* extra references appended to shm object for registered shared memory */ +struct tee_shm_dmabuf_ref { + struct tee_shm shm; + size_t offset; + struct dma_buf *dmabuf; + struct dma_buf_attachment *attach; + struct sg_table *sgt; + struct tee_shm *parent_shm; +}; + static void shm_put_kernel_pages(struct page **pages, size_t page_count) { size_t n; @@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm) static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) { - if (shm->flags & TEE_SHM_POOL) { + struct tee_shm *parent_shm = NULL; + void *p = shm; + + if (shm->flags & TEE_SHM_DMA_BUF) { + struct tee_shm_dmabuf_ref *ref; + + ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); + parent_shm = ref->parent_shm; + p = ref; + if (ref->attach) { + dma_buf_unmap_attachment(ref->attach, ref->sgt, + DMA_BIDIRECTIONAL); + + dma_buf_detach(ref->dmabuf, ref->attach); + } + dma_buf_put(ref->dmabuf); + } else if (shm->flags & TEE_SHM_POOL) { teedev->pool->ops->free(teedev->pool, shm); } else if (shm->flags & TEE_SHM_DYNAMIC) { int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm); @@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) release_registered_pages(shm); } - teedev_ctx_put(shm->ctx); + if (shm->ctx) + teedev_ctx_put(shm->ctx); - kfree(shm); + kfree(p); tee_device_put(teedev); } @@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size) * tee_client_invoke_func(). The memory allocated is later freed with a * call to tee_shm_free(). 
* - * @returns a pointer to 'struct tee_shm' + * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure */ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) { @@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf); +struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd) +{ + struct tee_shm_dmabuf_ref *ref; + int rc; + + if (!tee_device_get(ctx->teedev)) + return ERR_PTR(-EINVAL); + + teedev_ctx_get(ctx); + + ref = kzalloc(sizeof(*ref), GFP_KERNEL); + if (!ref) { + rc = -ENOMEM; + goto err_put_tee; + } + + refcount_set(&ref->shm.refcount, 1); + ref->shm.ctx = ctx; + ref->shm.id = -1; + ref->shm.flags = TEE_SHM_DMA_BUF; + + ref->dmabuf = dma_buf_get(fd); + if (IS_ERR(ref->dmabuf)) { + rc = PTR_ERR(ref->dmabuf); + goto err_kfree_ref; + } + + rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf, + &ref->offset, &ref->shm, + &ref->parent_shm); + if (!rc) + goto out; + if (rc != -EINVAL) + goto err_put_dmabuf; + + ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev); + if (IS_ERR(ref->attach)) { + rc = PTR_ERR(ref->attach); + goto err_put_dmabuf; + } + + ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL); + if (IS_ERR(ref->sgt)) { + rc = PTR_ERR(ref->sgt); + goto err_detach; + } + + if (sg_nents(ref->sgt->sgl) != 1) { + rc = PTR_ERR(ref->sgt->sgl); + goto err_unmap_attachement; + } + + ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl)); + ref->shm.size = ref->sgt->sgl->length; + +out: + mutex_lock(&ref->shm.ctx->teedev->mutex); + ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm, + 1, 0, GFP_KERNEL); + mutex_unlock(&ref->shm.ctx->teedev->mutex); + if (ref->shm.id < 0) { + rc = ref->shm.id; + if (ref->attach) + goto err_unmap_attachement; + goto err_put_dmabuf; + } + + return &ref->shm; + +err_unmap_attachement: + dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL); +err_detach: + dma_buf_detach(ref->dmabuf, ref->attach); +err_put_dmabuf: + dma_buf_put(ref->dmabuf); +err_kfree_ref: + kfree(ref); +err_put_tee: + teedev_ctx_put(ctx); + tee_device_put(ctx->teedev); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_shm_register_fd); + +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs) +{ + struct tee_shm *parent_shm = NULL; + + if (shm->flags & TEE_SHM_DMA_BUF) { + struct tee_shm_dmabuf_ref *ref; + + ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); + if (ref->parent_shm) { + /* + * the shm already has one reference to + * ref->parent_shm so we should be clear of 0. + * We're getting another reference since the caller + * of this function expects to put the returned + * parent_shm when it's done with it. 
+ */ + parent_shm = ref->parent_shm; + refcount_inc(&parent_shm->refcount); + *offs = ref->offset; + } + } + + return parent_shm; +} + /** * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared * kernel buffer diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 16ef078247ae..6bd833b6d0e1 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -28,6 +28,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index a54c203000ed..824f1251de60 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -116,6 +116,16 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, void *addr, size_t length); +/** + * tee_shm_register_fd() - Register shared memory from file descriptor + * + * @ctx: Context that allocates the shared memory + * @fd: Shared memory file descriptor reference + * + * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure + */ +struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd); + /** * tee_shm_free() - Free shared memory * @shm: Handle to shared memory to free diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index d0430bee8292..1f9a4ac2b211 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data { #define TEE_IOC_SHM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \ struct tee_ioctl_shm_alloc_data) +/** + * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument + * @fd: [in] File descriptor identifying the shared memory + * @size: [out] Size of shared memory to allocate + * @flags: [in] Flags to/from allocation. + * @id: [out] Identifier of the shared memory + * + * The flags field should currently be zero as input. Updated by the call + * with actual flags as defined by TEE_IOCTL_SHM_* above. + * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below. + */ +struct tee_ioctl_shm_register_fd_data { + __s64 fd; + __u64 size; + __u32 flags; + __s32 id; +}; + +/** + * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor + * + * Returns a file descriptor on success or < 0 on failure + * + * The returned file descriptor refers to the shared memory object in kernel + * land. The shared memory is freed when the descriptor is closed. 
+ */ +#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \ + struct tee_ioctl_shm_register_fd_data) + /** * struct tee_ioctl_buf_data - Variable sized buffer * @buf_ptr: [in] A __user pointer to a buffer

From patchwork Wed Mar 5 13:04:13 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002629
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz, "T . J . Mercier", Christian König, Sumit Garg, Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com, Simona Vetter, Daniel Stone, Jens Wiklander
Subject: [PATCH v6 07/10] tee: add tee_shm_alloc_cma_phys_mem()
Date: Wed, 5 Mar 2025 14:04:13 +0100
Message-ID: <20250305130634.1850178-8-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

Add tee_shm_alloc_cma_phys_mem() to allocate physically contiguous memory from the default CMA pool. The memory is represented by a tee_shm object using the new flag TEE_SHM_CMA_BUF to identify it as physical memory from CMA.
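[Editor's note] A short in-kernel sketch of the new helper, assuming a valid struct tee_context *ctx. Since the align argument is forwarded to cma_alloc(), it is treated here as a page order; get_order(SZ_2M) requests 2 MiB alignment (an assumption based on the implementation below):

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/sizes.h>
#include <linux/tee_core.h>

static struct tee_shm *example_alloc_contig(struct tee_context *ctx)
{
	/* 2 MiB of physically contiguous memory from the default CMA area. */
	struct tee_shm *shm = tee_shm_alloc_cma_phys_mem(ctx, SZ_2M / PAGE_SIZE,
							 get_order(SZ_2M));

	if (IS_ERR(shm))
		return shm;

	/* shm->paddr and shm->size describe the contiguous range. */
	pr_debug("CMA tee_shm at %pa, %zu bytes\n", &shm->paddr, shm->size);

	/* When done, tee_shm_free(shm) hands the pages back to CMA. */
	return shm;
}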
Signed-off-by: Jens Wiklander --- drivers/tee/tee_shm.c | 55 ++++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 4 +++ 2 files changed, 57 insertions(+), 2 deletions(-) diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index 8b79918468b5..8d8341f8ebd7 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -3,8 +3,11 @@ * Copyright (c) 2015-2017, 2019-2021 Linaro Limited */ #include +#include #include #include +#include +#include #include #include #include @@ -13,7 +16,6 @@ #include #include #include -#include #include "tee_private.h" /* extra references appended to shm object for registered shared memory */ @@ -59,7 +61,14 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) struct tee_shm *parent_shm = NULL; void *p = shm; - if (shm->flags & TEE_SHM_DMA_BUF) { + if (shm->flags & TEE_SHM_CMA_BUF) { +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA) + struct page *page = phys_to_page(shm->paddr); + struct cma *cma = dev_get_cma_area(&shm->ctx->teedev->dev); + + cma_release(cma, page, shm->size / PAGE_SIZE); +#endif + } else if (shm->flags & TEE_SHM_DMA_BUF) { struct tee_shm_dmabuf_ref *ref; ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); @@ -341,6 +350,48 @@ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_priv_buf); +struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx, + size_t page_count, size_t align) +{ +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA) + struct tee_device *teedev = ctx->teedev; + struct cma *cma = dev_get_cma_area(&teedev->dev); + struct tee_shm *shm; + struct page *page; + + if (!tee_device_get(teedev)) + return ERR_PTR(-EINVAL); + + page = cma_alloc(cma, page_count, align, true/*no_warn*/); + if (!page) + goto err_put_teedev; + + shm = kzalloc(sizeof(*shm), GFP_KERNEL); + if (!shm) + goto err_cma_crelease; + + refcount_set(&shm->refcount, 1); + shm->ctx = ctx; + shm->paddr = page_to_phys(page); + shm->size = page_count * PAGE_SIZE; + shm->flags = TEE_SHM_CMA_BUF; + + teedev_ctx_get(ctx); + + return shm; + +err_cma_crelease: + cma_release(cma, page, page_count); +err_put_teedev: + tee_device_put(teedev); + + return ERR_PTR(-ENOMEM); +#else + return ERR_PTR(-EINVAL); +#endif +} +EXPORT_SYMBOL_GPL(tee_shm_alloc_cma_phys_mem); + int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 6bd833b6d0e1..b6727d9a3556 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -29,6 +29,7 @@ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ #define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */ +#define TEE_SHM_CMA_BUF BIT(5) /* CMA allocated memory */ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 @@ -307,6 +308,9 @@ void *tee_get_drvdata(struct tee_device *teedev); */ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size); +struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx, + size_t page_count, size_t align); + int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, From patchwork Wed Mar 5 13:04:14 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens 
Wiklander
X-Patchwork-Id: 14002630
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz, "T . J . Mercier", Christian König, Sumit Garg, Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com, Simona Vetter, Daniel Stone, Jens Wiklander
Subject: [PATCH v6 08/10] optee: support restricted memory allocation
Date: Wed, 5 Mar 2025 14:04:14 +0100
Message-ID: <20250305130634.1850178-9-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

Add support in the OP-TEE backend driver for restricted memory allocation. The support is limited to the SMC ABI and to secure video buffers. OP-TEE is probed for the range of restricted physical memory, and a memory pool allocator is initialized if OP-TEE has support for such memory.
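[Editor's note] Once the probe below succeeds and the pool is registered, the carveout should be reachable from user space through the standard DMA-heap uapi under the heap name added earlier in this series. A rough sketch, assuming the usual /dev/dma_heap/<name> device naming:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Allocate one restricted buffer from the "restricted,secure-video" heap. */
static int alloc_secure_video_buf(size_t len)
{
	struct dma_heap_allocation_data alloc = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/restricted,secure-video", O_RDONLY);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
	close(heap_fd);
	if (ret < 0)
		return ret;

	return alloc.fd;	/* DMA-buf fd, usable with TEE_IOC_SHM_REGISTER_FD */
}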
Signed-off-by: Jens Wiklander --- drivers/tee/optee/core.c | 1 + drivers/tee/optee/smc_abi.c | 44 +++++++++++++++++++++++++++++++++++-- 2 files changed, 43 insertions(+), 2 deletions(-) diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index c75fddc83576..c7fd8040480e 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c @@ -181,6 +181,7 @@ void optee_remove_common(struct optee *optee) tee_device_unregister(optee->supp_teedev); tee_device_unregister(optee->teedev); + tee_device_unregister_all_dma_heaps(optee->teedev); tee_shm_pool_free(optee->pool); optee_supp_uninit(&optee->supp); mutex_destroy(&optee->call_queue.mutex); diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index cfdae266548b..a14ff0b7d3b3 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1620,6 +1620,41 @@ static inline int optee_load_fw(struct platform_device *pdev, } #endif +static int optee_sdp_pool_init(struct optee *optee) +{ + enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; + struct tee_rstmem_pool *pool; + int rc; + + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) { + union { + struct arm_smccc_res smccc; + struct optee_smc_get_sdp_config_result result; + } res; + + optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, + 0, &res.smccc); + if (res.result.status != OPTEE_SMC_RETURN_OK) { + pr_err("Secure Data Path service not available\n"); + return 0; + } + + pool = tee_rstmem_static_pool_alloc(res.result.start, + res.result.size); + if (IS_ERR(pool)) + return PTR_ERR(pool); + + rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool); + if (rc) + goto err; + } + + return 0; +err: + pool->ops->destroy_pool(pool); + return rc; +} + static int optee_probe(struct platform_device *pdev) { optee_invoke_fn *invoke_fn; @@ -1715,7 +1750,7 @@ static int optee_probe(struct platform_device *pdev) optee = kzalloc(sizeof(*optee), GFP_KERNEL); if (!optee) { rc = -ENOMEM; - goto err_free_pool; + goto err_free_shm_pool; } optee->ops = &optee_ops; @@ -1788,6 +1823,10 @@ static int optee_probe(struct platform_device *pdev) pr_info("Asynchronous notifications enabled\n"); } + rc = optee_sdp_pool_init(optee); + if (rc) + goto err_notif_uninit; + /* * Ensure that there are no pre-existing shm objects before enabling * the shm cache so that there's no chance of receiving an invalid @@ -1823,6 +1862,7 @@ static int optee_probe(struct platform_device *pdev) optee_disable_shm_cache(optee); optee_smc_notif_uninit_irq(optee); optee_unregister_devices(); + tee_device_unregister_all_dma_heaps(optee->teedev); err_notif_uninit: optee_notif_uninit(optee); err_close_ctx: @@ -1839,7 +1879,7 @@ static int optee_probe(struct platform_device *pdev) tee_device_unregister(optee->teedev); err_free_optee: kfree(optee); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); if (memremaped_shm) memunmap(memremaped_shm); From patchwork Wed Mar 5 13:04:15 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002631 Received: from mail-ed1-f44.google.com (mail-ed1-f44.google.com [209.85.208.44]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6032324F594 for ; Wed, 5 Mar 2025 13:07:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.44 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; 
s=arc-20240116; t=1741180029; cv=none; b=evdJn2C08IvQT80korq49ovHOLtSpEJ4xEBOlPcc8tMDP5wc57ClbubwdJkGWfLaPHU4TN6WRYfRFjuXrEoRu3mmD2T8gHA5e5ATWc1fsRB13VAc1M8WEifQYkdtsdl+NLaXReoPXjfGrhtcY6WYl7iqhzIUx5QCJQeMQIVe6eE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180029; c=relaxed/simple; bh=uaDU2c9FCb/hwcEUY5xm+kLs30U+GgsVCYpLCod7Kyk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pwvwnf+UmUU5bmAFsrshNP6EGduLJgrGL4Mpez0IHGMioMSCeGK/jh3x8fqN9uqNyuXIjYX0gvRPYeyS1J919MoHvQX3q75L/1Gy09d9rYftIKXbTeJy1H9mg+7nWhJYJavPL4nmwKzieJPW5X6bciErLgenZg9C1wM9gAl7s/c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=P+KP1+ev; arc=none smtp.client-ip=209.85.208.44 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="P+KP1+ev" Received: by mail-ed1-f44.google.com with SMTP id 4fb4d7f45d1cf-5e4c0c12bccso11915676a12.1 for ; Wed, 05 Mar 2025 05:07:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180024; x=1741784824; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=OqKSZBP+OJWF5gJmf7xM2I9adV5PGtGazVdFpn+CQSw=; b=P+KP1+evO9rFH5oD0uOVeHXfbVXLyll/N4qVNuH3ZwSQQAonKxxSbMOosgx/bFKqPx FqTE1hu5d9FOK+B4czE1hADtj/sb0myM3OlSsBOy1BA8JN23hOHH6ghyhHOOSR/S/RDY NVHIPCZtur49aVaMxHpXRh67Wn9EgtJB32sgrXPpUrbyQCHYA5ikDiK0GMZIc6EHXHeO JcLuCgeqb4+thGeubZJi7L//Bm8T9Nstr4H382txg05nqAwn1rtdILBikQgJbN1mwJ8z K/Yk0a7Ap9bPZkUPIvs3dzZrTD3FdGrC+d95N2ubzG31soID1c5S56qoRak6FKnKT3ex 3tuw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741180024; x=1741784824; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OqKSZBP+OJWF5gJmf7xM2I9adV5PGtGazVdFpn+CQSw=; b=XWIZuoo503WGJsWWiyFSnOTQMnuxrsxlKyRzPzqlIshDeCzJNr9ae9Xx5r+UUOs/Xm uWwPO6dieC4srIcXL6H3F70X/Ykk0T6HIJzYATzq507sEJiehwDFc6GntunBzGSOzv2o C9+yZ1Zy0lHitEHYOFDQn/H7WRNXnQ9tRxW5X47BTkft3MdwsXvot5gyiPYw4b86EP9E N+ohe7s5q0jW4LevMoykh7/hqAD8jZBmIkJ073wBW5tWErkpdL/FwhyPHaf6ZiHX649w jWiIN3Y3o/BjIwE7ZJg3q3s3ua+VazxA9PJCE8WpnACViLq9bvr0Abmz5MFJqdyJKXp2 CyeA== X-Forwarded-Encrypted: i=1; AJvYcCWUAPE172x0AVllRxJMdmI6OiPcK920TGEDn8rGNNiMuSVemtwNaCRIAWW8meUo5OOLj5RF3a/JIcIj+g==@vger.kernel.org X-Gm-Message-State: AOJu0YwU/dMt27vfwtj0DbYnvgzQ0gcGbeZKeRkX5fGUV7Hg82soZEya 5w5nXis5fYORcBrNVToRm2ObyYS93tz1BCnnWSDvuEBOQ3jHkDS3IiGIL8jDP78= X-Gm-Gg: ASbGncsAs2PAI9rKDYlJCNCCcMYqgJ9qznbwbM9oCdA2to0Cc1xO6lECvKe344CN7nh 1kTh+1JzhttibwUox1uc4+cyziI93lDMaUlZktIxaT5byb4dJ9eoyf3u40XDM62RksKBFRWGgbq CrsjCOUeALkN6YBpZmMlimRBdSSAHi4QyjoT6MUk/C4ykYtMEpkRCtxhpZhz869Dx43aUzTt85N 0hJueEGWoTOveyKuhsO7WEIVNSf8I9e5i+3pQRwdnE2ROY1Iwrl+C2Cpw1GBK+mF25HUEk2QT3o 1PH2Q+pwdnyHU85oWpBooOB+ZiG2ikI0wLZrCivYR0/c7oAyFp5AvgfMqcgGYqlYuNRzYVQKTVM mgSZh+k/SVeshVFPNShoaOQ== X-Google-Smtp-Source: AGHT+IE15Fnv9zCGL44EN+x0x28iZKYTfWnu4C8Tigs1ik2W/BgqyNM9ZszaJHzYmQDtEqzGDoT2bw== X-Received: by 2002:a05:6402:2344:b0:5e5:827d:bb1c with 
From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , Christian König , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 09/10] optee: FF-A: dynamic restricted memory allocation Date: Wed, 5 Mar 2025 14:04:15 +0100 Message-ID: <20250305130634.1850178-10-jens.wiklander@linaro.org> In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> MIME-Version: 1.0 Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A. The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user-space. This instantiation can fail if OP-TEE doesn't support the requested restricted memory use-case. Restricted memory pools based on a static carveout or on dynamic allocation can coexist for different use-cases; with FF-A, only dynamic allocation is used.
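The pool lifetime this implies is: the first allocation from the heap instantiates the pool and lends its backing memory to the secure world, later allocations only take another reference, and freeing the last allocation reclaims the memory again. Below is a minimal sketch of that pattern in isolation; the demo_* names are invented for illustration only (the real driver keeps a refcount_t fast path in front of the mutex, see get_cma_rstmem()/put_cma_rstmem() in the diff).

/*
 * Simplified sketch of the on-demand pool lifetime, with made-up names
 * (demo_*); not driver code. In the driver, "setup" corresponds to
 * tee_shm_alloc_cma_phys_mem() followed by ops->lend_rstmem(), and
 * "teardown" to ops->reclaim_rstmem() followed by tee_shm_put().
 */
#include <linux/mutex.h>
#include <linux/types.h>

struct demo_pool {
	struct mutex mutex;	/* serializes setup and teardown */
	unsigned int users;	/* number of outstanding allocations */
};

static int demo_pool_setup(struct demo_pool *p)
{
	/* Allocate the CMA backing memory and lend it to the secure world. */
	return 0;
}

static void demo_pool_teardown(struct demo_pool *p)
{
	/* Reclaim the memory from the secure world and free it. */
}

/* Called for every allocation from the heap. */
static int demo_pool_get(struct demo_pool *p)
{
	int rc = 0;

	mutex_lock(&p->mutex);
	if (p->users) {
		p->users++;			/* pool already live, count the user */
	} else {
		rc = demo_pool_setup(p);	/* first user instantiates the pool */
		if (!rc)
			p->users = 1;
	}
	mutex_unlock(&p->mutex);
	return rc;
}

/* Called when an allocation is freed. */
static void demo_pool_put(struct demo_pool *p)
{
	mutex_lock(&p->mutex);
	if (p->users && !--p->users)
		demo_pool_teardown(p);		/* last user tears the pool down */
	mutex_unlock(&p->mutex);
}

Keeping the setup out of driver probe this way means that a platform where OP-TEE rejects the use-case only fails at the first allocation, which matches the "instantiation can fail" behaviour described above.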
Signed-off-by: Jens Wiklander --- drivers/tee/optee/Makefile | 1 + drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- drivers/tee/optee/optee_private.h | 13 +- drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ 4 files changed, 483 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c diff --git a/drivers/tee/optee/Makefile b/drivers/tee/optee/Makefile index a6eff388d300..498969fb8e40 100644 --- a/drivers/tee/optee/Makefile +++ b/drivers/tee/optee/Makefile @@ -4,6 +4,7 @@ optee-objs += core.o optee-objs += call.o optee-objs += notif.o optee-objs += rpc.o +optee-objs += rstmem.o optee-objs += supp.o optee-objs += device.o optee-objs += smc_abi.o diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index e4b08cd195f3..6a55114232ef 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -672,6 +672,123 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, return optee_ffa_yielding_call(ctx, &data, rpc_arg, system_thread); } +static int do_call_lend_rstmem(struct optee *optee, u64 cookie, u32 use_case) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_ASSIGN_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = cookie; + msg_arg->params[0].u.value.b = use_case; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_ffa_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + struct ffa_send_direct_data data; + struct ffa_mem_region_attributes *mem_attr; + struct ffa_mem_ops_args args = { + .use_txbuf = true, + .tag = use_case, + }; + struct page *page; + struct scatterlist sgl; + unsigned int n; + int rc; + + mem_attr = kcalloc(ep_count, sizeof(*mem_attr), GFP_KERNEL); + for (n = 0; n < ep_count; n++) { + mem_attr[n].receiver = end_points[n]; + mem_attr[n].attrs = FFA_MEM_RW; + } + args.attrs = mem_attr; + args.nattrs = ep_count; + + page = phys_to_page(rstmem->paddr); + sg_init_table(&sgl, 1); + sg_set_page(&sgl, page, rstmem->size, 0); + + args.sg = &sgl; + rc = mem_ops->memory_lend(&args); + kfree(mem_attr); + if (rc) + return rc; + + rc = do_call_lend_rstmem(optee, args.g_handle, use_case); + if (rc) + goto err_reclaim; + + rc = optee_shm_add_ffa_handle(optee, rstmem, args.g_handle); + if (rc) + goto err_unreg; + + rstmem->sec_world_id = args.g_handle; + + return 0; + +err_unreg: + data = (struct ffa_send_direct_data){ + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)args.g_handle, + .data2 = (u32)(args.g_handle >> 32), + }; + msg_ops->sync_send_receive(ffa_dev, &data); +err_reclaim: + mem_ops->memory_reclaim(args.g_handle, 0); + return rc; +} + +static int optee_ffa_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + u64 
global_handle = rstmem->sec_world_id; + struct ffa_send_direct_data data = { + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)global_handle, + .data2 = (u32)(global_handle >> 32) + }; + int rc; + + optee_shm_rem_ffa_handle(optee, global_handle); + rstmem->sec_world_id = 0; + + rc = msg_ops->sync_send_receive(ffa_dev, &data); + if (rc) + pr_err("Release SHM id 0x%llx rc %d\n", global_handle, rc); + + rc = mem_ops->memory_reclaim(global_handle, 0); + if (rc) + pr_err("mem_reclaim: 0x%llx %d", global_handle, rc); + + return rc; +} + /* * 6. Driver initialization * @@ -833,6 +950,8 @@ static const struct optee_ops optee_ffa_ops = { .do_call_with_arg = optee_ffa_do_call_with_arg, .to_msg_param = optee_ffa_to_msg_param, .from_msg_param = optee_ffa_from_msg_param, + .lend_rstmem = optee_ffa_lend_rstmem, + .reclaim_rstmem = optee_ffa_reclaim_rstmem, }; static void optee_ffa_remove(struct ffa_device *ffa_dev) @@ -941,7 +1060,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) optee->pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev); - goto err_free_pool; + goto err_free_shm_pool; } optee->teedev = teedev; @@ -988,6 +1107,24 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) rc); } + if (IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) && + (sec_caps & OPTEE_FFA_SEC_CAP_RSTMEM)) { + enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; + struct tee_rstmem_pool *pool; + + pool = optee_rstmem_alloc_cma_pool(optee, id); + if (IS_ERR(pool)) { + rc = PTR_ERR(pool); + goto err_notif_uninit; + } + + rc = tee_device_register_dma_heap(optee->teedev, id, pool); + if (rc) { + pool->ops->destroy_pool(pool); + goto err_notif_uninit; + } + } + rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); if (rc) goto err_unregister_devices; @@ -1001,6 +1138,8 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) err_unregister_devices: optee_unregister_devices(); + tee_device_unregister_all_dma_heaps(optee->teedev); +err_notif_uninit: if (optee->ffa.bottom_half_value != U32_MAX) notif_ops->notify_relinquish(ffa_dev, optee->ffa.bottom_half_value); @@ -1018,7 +1157,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) tee_device_unregister(optee->supp_teedev); err_unreg_teedev: tee_device_unregister(optee->teedev); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); err_free_optee: kfree(optee); diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 20eda508dbac..faab31ad7c52 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -174,9 +174,14 @@ struct optee; * @do_call_with_arg: enters OP-TEE in secure world * @to_msg_param: converts from struct tee_param to OPTEE_MSG parameters * @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param + * @lend_rstmem: lends physically contiguous memory as restricted + * memory, inaccessible by the kernel + * @reclaim_rstmem: reclaims restricted memory previously lent with + * @lend_rstmem() and makes it accessible by the + * kernel again * * These OPs are only supposed to be used internally in the OP-TEE driver - * as a way of abstracting the different methogs of entering OP-TEE in + * as a way of abstracting the different methods of entering OP-TEE in * secure world. 
*/ struct optee_ops { @@ -191,6 +196,10 @@ struct optee_ops { size_t num_params, const struct optee_msg_param *msg_params, bool update_out); + int (*lend_rstmem)(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case); + int (*reclaim_rstmem)(struct optee *optee, struct tee_shm *rstmem); }; /** @@ -285,6 +294,8 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params, void optee_supp_init(struct optee_supp *supp); void optee_supp_uninit(struct optee_supp *supp); void optee_supp_release(struct optee_supp *supp); +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, + enum tee_dma_heap_id id); int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params, struct tee_param *param); diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..ea27769934d4 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025, Linaro Limited + */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include "optee_private.h" + +struct optee_rstmem_cma_pool { + struct tee_rstmem_pool pool; + struct gen_pool *gen_pool; + struct optee *optee; + size_t page_count; + u16 *end_points; + u_int end_point_count; + u_int align; + refcount_t refcount; + u32 use_case; + struct tee_shm *rstmem; + /* Protects when initializing and tearing down this struct */ + struct mutex mutex; +}; + +static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct optee_rstmem_cma_pool, pool); +} + +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc; + + rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count, + rp->align); + if (IS_ERR(rp->rstmem)) { + rc = PTR_ERR(rp->rstmem); + goto err_null_rstmem; + } + + /* + * TODO unmap the memory range since the physical memory will + * become inaccesible after the lend_rstmem() call. + */ + rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points, + rp->end_point_count, rp->use_case); + if (rc) + goto err_put_shm; + rp->rstmem->flags |= TEE_SHM_DYNAMIC; + + rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!rp->gen_pool) { + rc = -ENOMEM; + goto err_reclaim; + } + + rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr, + rp->rstmem->size, -1); + if (rc) + goto err_free_pool; + + refcount_set(&rp->refcount, 1); + return 0; + +err_free_pool: + gen_pool_destroy(rp->gen_pool); + rp->gen_pool = NULL; +err_reclaim: + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); +err_put_shm: + tee_shm_put(rp->rstmem); +err_null_rstmem: + rp->rstmem = NULL; + return rc; +} + +static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc = 0; + + if (!refcount_inc_not_zero(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->gen_pool) { + /* + * Another thread has already initialized the pool + * before us, or the pool was just about to be torn + * down. Either way we only need to increase the + * refcount and we're done. 
+ */ + refcount_inc(&rp->refcount); + } else { + rc = init_cma_rstmem(rp); + } + mutex_unlock(&rp->mutex); + } + + return rc; +} + +static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + gen_pool_destroy(rp->gen_pool); + rp->gen_pool = NULL; + + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); + rp->rstmem->flags &= ~TEE_SHM_DYNAMIC; + + WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount"); + tee_shm_put(rp->rstmem); + rp->rstmem = NULL; +} + +static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + if (refcount_dec_and_test(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->gen_pool) + release_cma_rstmem(rp); + mutex_unlock(&rp->mutex); + } +} + +static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + size_t sz = ALIGN(size, PAGE_SIZE); + phys_addr_t pa; + int rc; + + rc = get_cma_rstmem(rp); + if (rc) + return rc; + + pa = gen_pool_alloc(rp->gen_pool, sz); + if (!pa) { + rc = -ENOMEM; + goto err_put; + } + + rc = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (rc) + goto err_free; + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = pa - rp->rstmem->paddr; + + return 0; +err_free: + gen_pool_free(rp->gen_pool, pa, size); +err_put: + put_cma_rstmem(rp); + + return rc; +} + +static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); + put_cma_rstmem(rp); +} + +static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + *parent_shm = rp->rstmem; + + return 0; +} + +static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + mutex_destroy(&rp->mutex); + kfree(rp); +} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = { + .alloc = rstmem_pool_op_cma_alloc, + .free = rstmem_pool_op_cma_free, + .update_shm = rstmem_pool_op_cma_update_shm, + .destroy_pool = pool_op_cma_destroy_pool, +}; + +static int get_rstmem_config(struct optee *optee, u32 use_case, + size_t *min_size, u_int *min_align, + u16 *end_points, u_int *ep_count) +{ + struct tee_param params[2] = { + [0] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT, + .u.value.a = use_case, + }, + [1] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT, + }, + }; + struct optee_shm_arg_entry *entry; + struct tee_shm *shm_param = NULL; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + if (end_points && *ep_count) { + params[1].u.memref.size = *ep_count * sizeof(*end_points); + shm_param = tee_shm_alloc_priv_buf(optee->ctx, + params[1].u.memref.size); + if (IS_ERR(shm_param)) + return PTR_ERR(shm_param); + params[1].u.memref.shm = shm_param; + } + + msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry, + &shm, &offs); + if (IS_ERR(msg_arg)) { + rc = PTR_ERR(msg_arg); + goto out_free_shm; + } + msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG; + + rc = optee->ops->to_msg_param(optee, msg_arg->params, + ARRAY_SIZE(params), params, + false /*!update_out*/); + if (rc) + goto out_free_msg; + + rc = 
optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out_free_msg; + if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) { + rc = -EINVAL; + goto out_free_msg; + } + + rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params), + msg_arg->params, true /*update_out*/); + if (rc) + goto out_free_msg; + + if (!msg_arg->ret && end_points && + *ep_count < params[1].u.memref.size / sizeof(u16)) { + rc = -EINVAL; + goto out_free_msg; + } + + *min_size = params[0].u.value.a; + *min_align = params[0].u.value.b; + *ep_count = params[1].u.memref.size / sizeof(u16); + + if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) { + rc = -ENOSPC; + goto out_free_msg; + } + + if (end_points) + memcpy(end_points, tee_shm_get_va(shm_param, 0), + params[1].u.memref.size); + +out_free_msg: + optee_free_msg_arg(optee->ctx, entry, offs); +out_free_shm: + if (shm_param) + tee_shm_free(shm_param); + return rc; +} + +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, + enum tee_dma_heap_id id) +{ + struct optee_rstmem_cma_pool *rp; + u32 use_case = id; + size_t min_size; + int rc; + + rp = kzalloc(sizeof(*rp), GFP_KERNEL); + if (!rp) + return ERR_PTR(-ENOMEM); + rp->use_case = use_case; + + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL, + &rp->end_point_count); + if (rc) { + if (rc != -ENOSPC) + goto err; + rp->end_points = kcalloc(rp->end_point_count, + sizeof(*rp->end_points), GFP_KERNEL); + if (!rp->end_points) { + rc = -ENOMEM; + goto err; + } + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, + rp->end_points, &rp->end_point_count); + if (rc) + goto err_kfree_eps; + } + + rp->pool.ops = &rstmem_pool_ops_cma; + rp->optee = optee; + rp->page_count = min_size / PAGE_SIZE; + mutex_init(&rp->mutex); + + return &rp->pool; + +err_kfree_eps: + kfree(rp->end_points); +err: + kfree(rp); + return ERR_PTR(rc); +} From patchwork Wed Mar 5 13:04:16 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002632 Received: from mail-ed1-f52.google.com (mail-ed1-f52.google.com [209.85.208.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9DAF324BBE4 for ; Wed, 5 Mar 2025 13:07:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.52 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180031; cv=none; b=JAiqxz0VZbQ3voqeTfntWfMw34YVlcONrB7IK8NSvcvQIzthRpKbp8Lw8xz44VJBff/6VdYZd01JMmx6Fuuo7IqJAUHWmN61tyB0Eo4bbfqUYrGRF/J1jt9C8GU/0BvC+qc7LE5oTDdUhRiJK7LP2YSuMerbgHDycjdL+s3ZShg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741180031; c=relaxed/simple; bh=XAtG5u7idPVpuMFTd9Koq5IM2tTyJz37vBrZlH8EmTU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=BoHAJ9WUoL/G77Xpnh43CWlrA/gKc9VfYyWuboWSYVGbnBjGWtKGYb1icLoTRtzLk2zpZPTbhQvHyF9WOi4+2tpI8eWxQ0HhQKj9WbjJ+XAArwap7WeXC5I9xlPdquCmSflQjtZ125LAcEemzXjdh0MZEUZ/Iz6TsvK10/AT9y8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org; spf=pass smtp.mailfrom=linaro.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b=YSAVbsum; arc=none smtp.client-ip=209.85.208.52 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: 
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linaro.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="YSAVbsum" Received: by mail-ed1-f52.google.com with SMTP id 4fb4d7f45d1cf-5e4bed34bccso9604976a12.3 for ; Wed, 05 Mar 2025 05:07:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180028; x=1741784828; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SZWcwQgxJYl5xqresVhiMdi9+ZPx9Y4Kk8m/1lcEjeM=; b=YSAVbsum5y6ZoMUHvf6pcru8XH33NDSgO/v/BmUbbtawygmXjUwd0psizBuMUEcE6e t8uBf6igQyQqOBBcaZ4H50NrxJ3w+rw/Zqxn0cjO5ThpGBsKJRmcL2u6BX+pLK32ZUXH 8rwc7LLeaB2TXGMWZ2Iza32IqSsyHlifFPedFToWDDjvdry6nxLuI2IeqLzg9ZB0T/ft uO2YSeiD3IrIo1s37fPn9NLn0r3tGrp2EMR2t186lIaqd/PWpY8aXxz7OQwXOGOnRoqU m39BB1tHYvAb6BlMxW4LHcox9UYfZ7MEhm/4ZJGW4wGMkT86gxvLS1LQIHNNCBKdvyoc aWug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741180028; x=1741784828; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SZWcwQgxJYl5xqresVhiMdi9+ZPx9Y4Kk8m/1lcEjeM=; b=tDZOqE+kr/1vJLcwDjKx997cCDIgWj7sSOVu6hvVghQPF2JaHNtwntNQFUdfvulAwE 40n8KzH/1Ao7aOWEjzh4HkUXs6wmoHaz7cdQ4J8d2ZQP3jgaclogyyeb8cbKo18ctLU3 IerAVpEDyfQiXYuv/q4ckBabHtXoSLbjoOTkJqjtpItCG8g6oVfqAUMywAB1k4G7vMec kJS/cPvPt5cdKtJRnOP/H3j6yzz6pHMmtz2q7BSx6C7gVgHY9T7bNdzIieZJ9XjNm5bj eakUBQIzyaUrXsKr4spTcqMARnBNdapIIPRR5MkC2V3eBNqkF6ahtng+/1X7C51US07M PaOQ== X-Forwarded-Encrypted: i=1; AJvYcCUQrW9ehQ9r+XPNHTHhh3ZvgZjT7SZNGUaWFRUWadqRCI2pXrzB3TvLf1esKbdmhaFX9hitjiWXLUjRbA==@vger.kernel.org X-Gm-Message-State: AOJu0YxyY9ZCZkYfjNj6mgsHfxKR3Sjl2C1+maP7c+HtRrixYXVORKPq +KUf+vyeXT8T0DkVwZfsK6J7QlUWqCe78GF9uNFqjSHC+hWDjiQQ+98kXkn/fO4= X-Gm-Gg: ASbGncuOXhDrf3HyoRrb3xQ8467cf24LeOS2Z002VWVx1CGdo9VHlYPuGUnNJUU7Mnx zJXz18Pk8mI/Tl/01mSk4uquN1c8c+y1rlTkU6ze06/6YVU9R/svDe33n5PwZ/tOih8I2PmDTL9 PrLE7asfOet46Dp/il6Mi1joMIkUF26lRJeM1zgJgqYrcKnUbwRyQJxUjjAypiEQ0p8bwB6GmJr zHV7Rdh69NVxZweHeAMyWyp1KcBVPGbxGVXL6DwYsi6oGnyZtUh1aE1jQEUkIuDj8DfKkYPsoKD 5M7YhsT4mMIiMvhZsM5tv4JgtagJ+LyTQmP9H81KMqvRh4XyHQOh9BPQhERzW223X6uf1a4Y1qv /al00eI5rqRdLvzorNvDPqQ== X-Google-Smtp-Source: AGHT+IEU8qhj3jZr5pC33H3BKUPYoPOJwyx39WdATtL+5/RDhfCQ1+ckzcmgDPihiiWE9zWeUGcUMw== X-Received: by 2002:a05:6402:13d1:b0:5e4:99af:b7c with SMTP id 4fb4d7f45d1cf-5e59f384aebmr3091724a12.9.1741180027839; Wed, 05 Mar 2025 05:07:07 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 4fb4d7f45d1cf-5e5bcd1595bsm65714a12.42.2025.03.05.05.07.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 05 Mar 2025 05:07:06 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . 
Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 10/10] optee: smc abi: dynamic restricted memory allocation Date: Wed, 5 Mar 2025 14:04:16 +0100 Message-ID: <20250305130634.1850178-11-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> Precedence: bulk X-Mailing-List: linux-media@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add support in the OP-TEE backend driver for dynamic restricted memory allocation using the SMC ABI. Signed-off-by: Jens Wiklander --- drivers/tee/optee/smc_abi.c | 96 +++++++++++++++++++++++++++++++------ 1 file changed, 81 insertions(+), 15 deletions(-) diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index a14ff0b7d3b3..aa574ee6e277 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1001,6 +1001,69 @@ static int optee_smc_do_call_with_arg(struct tee_context *ctx, return rc; } +static int optee_smc_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 2, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_LEND_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = use_case; + msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; + msg_arg->params[1].u.tmem.buf_ptr = rstmem->paddr; + msg_arg->params[1].u.tmem.size = rstmem->size; + msg_arg->params[1].u.tmem.shm_ref = (u_long)rstmem; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + rstmem->sec_world_id = (u_long)rstmem; + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_smc_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_RECLAIM_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; + msg_arg->params[0].u.rmem.shm_ref = (u_long)rstmem; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) + rc = -EINVAL; + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + /* * 5. 
Asynchronous notification */ @@ -1252,6 +1315,8 @@ static const struct optee_ops optee_ops = { .do_call_with_arg = optee_smc_do_call_with_arg, .to_msg_param = optee_to_msg_param, .from_msg_param = optee_from_msg_param, + .lend_rstmem = optee_smc_lend_rstmem, + .reclaim_rstmem = optee_smc_reclaim_rstmem, }; static int enable_async_notif(optee_invoke_fn *invoke_fn) @@ -1622,11 +1687,13 @@ static inline int optee_load_fw(struct platform_device *pdev, static int optee_sdp_pool_init(struct optee *optee) { + bool sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP; + bool dyn_sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM; enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; - struct tee_rstmem_pool *pool; - int rc; + struct tee_rstmem_pool *pool = ERR_PTR(-EINVAL); + int rc = -EINVAL; - if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) { + if (sdp) { union { struct arm_smccc_res smccc; struct optee_smc_get_sdp_config_result result; @@ -1634,25 +1701,24 @@ static int optee_sdp_pool_init(struct optee *optee) optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, 0, &res.smccc); - if (res.result.status != OPTEE_SMC_RETURN_OK) { - pr_err("Secure Data Path service not available\n"); - return 0; - } + if (res.result.status == OPTEE_SMC_RETURN_OK) + pool = tee_rstmem_static_pool_alloc(res.result.start, + res.result.size); + } - pool = tee_rstmem_static_pool_alloc(res.result.start, - res.result.size); - if (IS_ERR(pool)) - return PTR_ERR(pool); + if (dyn_sdp && IS_ERR(pool)) + pool = optee_rstmem_alloc_cma_pool(optee, heap_id); + if (!IS_ERR(pool)) { rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool); if (rc) - goto err; + pool->ops->destroy_pool(pool); } + if (rc && (sdp || dyn_sdp)) + pr_err("Secure Data Path service not available\n"); + return 0; -err: - pool->ops->destroy_pool(pool); - return rc; } static int optee_probe(struct platform_device *pdev)
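For completeness, once a backend has registered such a pool, user-space sees it as an ordinary DMA-buf heap and allocates from it with the standard ioctl from <linux/dma-heap.h>. The sketch below is hedged: the allocation ABI is the upstream dma-heap one, but the heap name under /dev/dma_heap/ is a placeholder, since the actual name depends on what the TEE subsystem registers for TEE_DMA_HEAP_SECURE_VIDEO_PLAY in this series.

/*
 * Hedged user-space sketch: allocate a DMA-buf from a restricted DMA-buf
 * heap. The heap path below is a placeholder; substitute the name the TEE
 * driver actually registers for the secure-video-play use-case.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 8 * 1024 * 1024,		/* 8 MiB buffer */
		.fd_flags = O_RDWR | O_CLOEXEC,	/* flags for the returned dma-buf fd */
	};
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/restricted-heap", O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0) {
		perror("open heap");
		return 1;
	}

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
	if (ret < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		close(heap_fd);
		return 1;
	}

	/*
	 * alloc.fd is now a dma-buf fd. The backing pages are not usefully
	 * CPU-accessible once restricted, but the fd can be passed to a
	 * device (e.g. a video decoder) that is allowed to access them.
	 */
	printf("allocated restricted dma-buf fd %d\n", alloc.fd);

	close(alloc.fd);
	close(heap_fd);
	return 0;
}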