From patchwork Thu Jun 8 13:23:26 2023
X-Patchwork-Submitter: Donald Robson
X-Patchwork-Id: 13272360
From: Donald Robson
To: dri-devel@lists.freedesktop.org
Subject: [PATCH] drm/sched: Add native dependency support to drm_sched
Date: Thu, 8 Jun 2023 13:23:26 +0000
Message-ID: <7ced7c0a101cb2467c34b69d2b686c429f64d8c2.camel@imgtec.com>
Cc: Sarah Walker, sumit.semwal@linaro.org, luben.tuikov@amd.com,
 boris.brezillon@collabora.com, christian.koenig@amd.com

This patch adds support for 'native' dependencies to the DRM scheduler.

In drivers that use a firmware-based scheduler there are performance gains
to be had by allowing waits to happen in the firmware, as this reduces the
latency between signalling and job submission. Dependencies that can be
awaited by the firmware scheduler are termed 'native dependencies'.

In the new PowerVR driver we delegate the waits to the firmware, but it is
still necessary to expose these fences within the DRM scheduler. This is
because, when a job is cancelled, drm_sched_entity_kill() registers a
callback on each of the job's dependencies to ensure the job's finished
fence is not signalled before all of those dependencies are met. If native
dependencies were not exposed, the core could no longer guarantee this and
might signal the finished fence too early, leading to potential invalid
accesses by anything that depends on that fence.

All dependencies are handled in the same way up to the point where a job's
dependencies are checked. At that stage, the DRM scheduler will now allow
job submission to proceed once it encounters the first native dependency
in the list, the dependencies having been sorted beforehand in
drm_sched_job_arm() so that native ones appear last. The list is sorted
during drm_sched_job_arm() because the scheduler isn't known until that
point, and whether a dependency is native is determined via a new
drm_gpu_scheduler backend operation.

Native fences are just simple counters that get incremented every time
some specific execution point is reached, such as when a GPU job is done.
The firmware is in charge of both waiting on and updating these fences, so
it can easily unblock any waiters it has internally. The CPU also has
access to the counters, so it can check for progress.

TODO: When operating normally the CPU is not supposed to update the
counters itself, but there is one specific situation where this is needed:
when a GPU hang has occurred and some contexts were declared faulty
because they had unfinished or blocked jobs pending. In that situation,
when we reset the GPU we evict the faulty contexts so they can't submit
jobs anymore, and we cancel the jobs that were in flight at the time of
the reset. That alone is not enough, because jobs on other, non-faulty
contexts might have native dependencies on jobs that never completed on
the faulty context. If we asked the firmware to wait on those native
fences, it would block indefinitely, because no one would ever update the
counter. So, in that case, and that case only, we want the CPU to
force-update the counter and set it to the last issued sequence number.
We do not currently have a helper for this and we welcome any suggestions
for how best to implement it.
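
To illustrate how a driver might plug into this (the sketch below is not
part of this patch or of the PowerVR driver; struct my_fw_fence,
my_fw_fence_ops and my_sched_dependency_is_native are invented names, and
only the bool (*dependency_is_native)(struct dma_fence *fence) hook itself
comes from this series): a counter-based firmware fence implementation
could report a dependency as native simply by recognising its own fence
type.

#include <linux/dma-fence.h>

/*
 * Hypothetical driver-side fence: a counter that the firmware increments
 * when a specific execution point is reached (e.g. a GPU job completing).
 */
struct my_fw_fence {
	struct dma_fence base;
	u64 seqno_target;	/* counter value that signals this fence */
	u64 *counter;		/* CPU-visible view of the firmware counter */
};

/* .get_driver_name etc. omitted from this sketch. */
static const struct dma_fence_ops my_fw_fence_ops;

/*
 * Candidate implementation of the new backend operation: a dependency is
 * 'native' when it is one of this driver's firmware fences, i.e. something
 * the firmware can wait on by watching the counter itself.
 */
static bool my_sched_dependency_is_native(struct dma_fence *fence)
{
	return fence->ops == &my_fw_fence_ops;
}

With such a hook wired into the driver's drm_sched_backend_ops,
drm_sched_job_arm() can mark and sort these fences, and
drm_sched_job_dependency() stops erasing entries once it reaches the first
native one, leaving them in job->dependencies for the driver to turn into
firmware-side waits.
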
Signed-off-by: Donald Robson
Cc: Luben Tuikov
Cc: David Airlie
Cc: Daniel Vetter
Cc: Sumit Semwal
Cc: "Christian König"
Cc: Boris Brezillon
Cc: Frank Binns
Cc: Sarah Walker
---
 drivers/gpu/drm/scheduler/sched_entity.c | 60 +++++++++++++--
 drivers/gpu/drm/scheduler/sched_main.c   | 96 ++++++++++++++++++++++++
 include/drm/gpu_scheduler.h              | 11 +++
 3 files changed, 161 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 15d04a0ec623..2685805a5e05 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -155,13 +155,14 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
 {
 	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
 						 finish_cb);
+	unsigned long idx;
 	int r;
 
 	dma_fence_put(f);
 
 	/* Wait for all dependencies to avoid data corruptions */
-	while (!xa_empty(&job->dependencies)) {
-		f = xa_erase(&job->dependencies, job->last_dependency++);
+	xa_for_each(&job->dependencies, idx, f) {
+		xa_erase(&job->dependencies, idx);
 		r = dma_fence_add_callback(f, &job->finish_cb,
 					   drm_sched_entity_kill_jobs_cb);
 		if (!r)
@@ -390,12 +391,59 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
 	return false;
 }
 
+/**
+ * dep_is_native - indicates that native dependencies are supported and that the
+ * dependency at @index is marked.
+ * @job: Scheduler job.
+ * @index: Index into the @job->dependencies xarray.
+ *
+ * Must only be used after calling drm_sched_job_arm().
+ *
+ * Returns true if both these conditions are met.
+ */
+static bool dep_is_native(struct drm_sched_job *job, unsigned long index)
+{
+	return job->sched->ops->dependency_is_native &&
+	       xa_get_mark(&job->dependencies, job->last_dependency, XA_MARK_0);
+}
+
 static struct dma_fence *
-drm_sched_job_dependency(struct drm_sched_job *job,
-			 struct drm_sched_entity *entity)
+drm_sched_job_dependency(struct drm_sched_job *job, struct drm_sched_entity *entity)
 {
-	if (!xa_empty(&job->dependencies))
-		return xa_erase(&job->dependencies, job->last_dependency++);
+	struct dma_fence *fence;
+	unsigned long dep_index;
+
+	if (!dep_is_native(job, job->last_dependency)) {
+		fence = xa_erase(&job->dependencies, job->last_dependency++);
+		if (fence)
+			return fence;
+	}
+
+	xa_for_each_start(&job->dependencies, dep_index, fence,
+			  job->last_dependency) {
+		/*
+		 * Encountered first native dependency. Since these were
+		 * previously sorted to the end of the array in
+		 * drm_sched_sort_native_deps(), all remaining entries
+		 * will be native too, so we can just iterate through
+		 * them.
+		 *
+		 * Native entries cannot be erased, as they need to be
+		 * accessed by the driver's native scheduler.
+		 *
+		 * If the native fence is a drm_sched_fence object, we
+		 * ensure the job has been submitted so drm_sched_fence
+		 * ::parent points to a valid dma_fence object.
+		 */
+		struct drm_sched_fence *s_fence = to_drm_sched_fence(fence);
+		struct dma_fence *scheduled_fence =
+			s_fence ? dma_fence_get_rcu(&s_fence->scheduled) : NULL;
+
+		job->last_dependency = dep_index + 1;
+
+		if (scheduled_fence)
+			return scheduled_fence;
+	}
 
 	if (job->sched->ops->prepare_job)
 		return job->sched->ops->prepare_job(job, entity);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 214364fccb71..08dcc33ec690 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -643,6 +643,92 @@ int drm_sched_job_init(struct drm_sched_job *job,
 }
 EXPORT_SYMBOL(drm_sched_job_init);
 
+/**
+ * drm_sched_sort_native_deps - relocates all native dependencies to the
+ * tail end of @job->dependencies.
+ * @job: target scheduler job.
+ *
+ * Starts by marking all of the native dependencies, then, in a quick-sort
+ * like manner it swaps entries using a head and tail index counter. Only
+ * a single partition is required, as there are only two values.
+ */
+static void drm_sched_sort_native_deps(struct drm_sched_job *job)
+{
+	struct dma_fence *entry, *head = NULL, *tail = NULL;
+	unsigned long h = 0, t = 0, native_dep_count = 0;
+	XA_STATE(xas_head, &job->dependencies, 0);
+	XA_STATE(xas_tail, &job->dependencies, 0);
+	bool already_sorted = true;
+
+	if (!job->sched->ops->dependency_is_native)
+		/* Driver doesn't support native deps. */
+		return;
+
+	/* Mark all the native dependencies as we walk xas_tail to the end. */
+	xa_lock(&job->dependencies);
+	xas_for_each(&xas_tail, entry, ULONG_MAX) {
+		/* Keep track of the index. */
+		t++;
+
+		if (job->sched->ops->dependency_is_native(entry)) {
+			xas_set_mark(&xas_tail, XA_MARK_0);
+			native_dep_count++;
+		} else if (native_dep_count) {
+			/*
+			 * As a native dep has been encountered before, we can
+			 * infer the array is not already sorted.
+			 */
+			already_sorted = false;
+		}
+	}
+	xa_unlock(&job->dependencies);
+
+	if (already_sorted)
+		return;
+
+	/* xas_tail and t are now at the end of the array. */
+	xa_lock(&job->dependencies);
+	while (h < t) {
+		if (!head) {
+			/* Find a marked entry. */
+			if (xas_get_mark(&xas_head, XA_MARK_0)) {
+				head = xas_load(&xas_head);
+			} else {
+				xas_next(&xas_head);
+				h++;
+			}
+		}
+		if (!tail) {
+			/* Find an unmarked entry. */
+			if (xas_get_mark(&xas_tail, XA_MARK_0)) {
+				xas_prev(&xas_tail);
+				t--;
+			} else {
+				tail = xas_load(&xas_tail);
+			}
+		}
+		if (head && tail) {
+			/*
+			 * Swap!
+			 * These stores should never allocate, since they both
+			 * already exist, hence they never fail.
+			 */
+			xas_store(&xas_head, tail);
+			xas_store(&xas_tail, head);
+
+			/* Also swap the mark. */
+			xas_clear_mark(&xas_head, XA_MARK_0);
+			xas_set_mark(&xas_tail, XA_MARK_0);
+
+			head = NULL;
+			tail = NULL;
+			h++;
+			t--;
+		}
+	}
+	xa_unlock(&job->dependencies);
+}
+
 /**
  * drm_sched_job_arm - arm a scheduler job for execution
  * @job: scheduler job to arm
@@ -669,6 +755,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
 	job->s_priority = entity->rq - sched->sched_rq;
 	job->id = atomic64_inc_return(&sched->job_id_count);
 
+	drm_sched_sort_native_deps(job);
 	drm_sched_fence_init(job->s_fence, job->entity);
 }
 EXPORT_SYMBOL(drm_sched_job_arm);
@@ -1045,6 +1132,15 @@ static int drm_sched_main(void *param)
 		trace_drm_run_job(sched_job, entity);
 		fence = sched->ops->run_job(sched_job);
 		complete_all(&entity->entity_idle);
+
+		/* We need to set the parent before signaling the scheduled
+		 * fence if we want native dependency to work properly. If we
+		 * don't, the driver might try to access the parent before
+		 * it's set.
+		 */
+		if (!IS_ERR_OR_NULL(fence))
+			drm_sched_fence_set_parent(s_fence, fence);
+
 		drm_sched_fence_scheduled(s_fence);
 
 		if (!IS_ERR_OR_NULL(fence)) {
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 898608f87b96..dca6be35e517 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -455,6 +455,17 @@ struct drm_sched_backend_ops {
 	 * and it's time to clean it up.
 	 */
 	void (*free_job)(struct drm_sched_job *sched_job);
+
+	/**
+	 * @dependency_is_native: When arming a job for this scheduler, this
+	 * function will be called to determine whether to treat it as a
+	 * native dependency. A native dependency is awaited and cleaned up
+	 * when the job is cancelled, but responsibility is otherwise delegated
+	 * to a native scheduler in the calling driver code.
+	 *
+	 * Optional - implies support for native dependencies.
+	 */
+	bool (*dependency_is_native)(struct dma_fence *fence);
 };
 
 /**
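
For completeness, a sketch of the consumer side, using the same invented
my_* names as in the earlier sketch (none of this is in the patch; helper
types and functions such as struct my_job, to_my_job(), my_fw_emit_wait()
and my_fw_submit() are assumed to exist in the hypothetical driver). After
drm_sched_job_arm() has run, native dependencies are still present in
job->dependencies and carry XA_MARK_0, so a driver's ->run_job() could
translate them into firmware waits along these lines:

static struct dma_fence *my_sched_run_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = to_my_job(sched_job);	/* hypothetical wrapper */
	struct dma_fence *fence;
	unsigned long index;

	/*
	 * Native dependencies were sorted to the tail of the xarray and
	 * marked by drm_sched_sort_native_deps(); drm_sched left them in
	 * place instead of erasing them, so the driver can still see them.
	 */
	xa_for_each_marked(&sched_job->dependencies, index, fence, XA_MARK_0) {
		struct my_fw_fence *fw_fence =
			container_of(fence, struct my_fw_fence, base);

		/* Queue a firmware-side wait for the counter to reach the target. */
		my_fw_emit_wait(job->fw_ctx, fw_fence->counter,
				fw_fence->seqno_target);
	}

	/* Returns the firmware 'job done' fence for this submission. */
	return my_fw_submit(job);
}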