From patchwork Thu Jan 16 12:55:36 2025
From: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
To: dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
    Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [PATCH v2] drm/sched: Avoid double re-lock on the job free path
Date: Thu, 16 Jan 2025 12:55:36 +0000
Message-ID: <20250116125536.12124-1-tvrtko.ursulin@igalia.com>
In-Reply-To: <20250114105954.64847-1-tvrtko.ursulin@igalia.com>
References: <20250114105954.64847-1-tvrtko.ursulin@igalia.com>

Currently the job free work item locks sched->job_list_lock a first time
to see whether there are any finished jobs, frees a single job, and then
locks it again to decide whether to re-queue itself if more finished jobs
are present.

Since drm_sched_get_finished_job() already looks at the second job in the
queue, we can simply extend its timestamp check into a full signaled
check and have it report to the caller whether there are more jobs left
to free. That way the work item does not have to take the lock again and
repeat a nearly identical check.
v2:
 * Consolidate to single dma_fence_is_signaled check.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 40 ++++++++++----------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index d172e3158880..af63a14f5001 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -370,22 +370,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
 	queue_work(sched->submit_wq, &sched->work_free_job);
 }
 
-/**
- * drm_sched_run_free_queue - enqueue free-job work if ready
- * @sched: scheduler instance
- */
-static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
-{
-	struct drm_sched_job *job;
-
-	spin_lock(&sched->job_list_lock);
-	job = list_first_entry_or_null(&sched->pending_list,
-				       struct drm_sched_job, list);
-	if (job && dma_fence_is_signaled(&job->s_fence->finished))
-		__drm_sched_run_free_queue(sched);
-	spin_unlock(&sched->job_list_lock);
-}
-
 /**
  * drm_sched_job_done - complete a job
  * @s_job: pointer to the job which is done
@@ -1065,12 +1049,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  * drm_sched_get_finished_job - fetch the next finished job to be destroyed
  *
  * @sched: scheduler instance
+ * @have_more: are there more finished jobs on the list
  *
  * Returns the next finished job from the pending list (if there is one)
  * ready for it to be destroyed.
  */
 static struct drm_sched_job *
-drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
+drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
 {
 	struct drm_sched_job *job, *next;
 
@@ -1078,22 +1063,24 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
 
 	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
-
 	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
 		/* remove job from pending_list */
 		list_del_init(&job->list);
 
 		/* cancel this job's TO timer */
 		cancel_delayed_work(&sched->work_tdr);
-		/* make the scheduled timestamp more accurate */
+
+		*have_more = false;
 		next = list_first_entry_or_null(&sched->pending_list,
 						typeof(*next), list);
-
 		if (next) {
-			if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
-				     &next->s_fence->scheduled.flags))
+			if (dma_fence_is_signaled(&next->s_fence->finished)) {
+				*have_more = true;
+				/* make the scheduled timestamp more accurate */
 				next->s_fence->scheduled.timestamp =
 					dma_fence_timestamp(&job->s_fence->finished);
+			}
+
 			/* start TO timer for next job */
 			drm_sched_start_timeout(sched);
 		}
@@ -1152,12 +1139,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
 	struct drm_gpu_scheduler *sched =
 		container_of(w, struct drm_gpu_scheduler, work_free_job);
 	struct drm_sched_job *job;
+	bool have_more;
 
-	job = drm_sched_get_finished_job(sched);
-	if (job)
+	job = drm_sched_get_finished_job(sched, &have_more);
+	if (job) {
 		sched->ops->free_job(job);
+		if (have_more)
+			__drm_sched_run_free_queue(sched);
+	}
 
-	drm_sched_run_free_queue(sched);
 	drm_sched_run_job_queue(sched);
 }
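
For readers who want the locking pattern in isolation: below is a minimal
userspace sketch of the idea, not part of the patch. struct job,
get_finished_job() and free_job_work() are made-up names standing in for
the scheduler types, and a pthread mutex stands in for job_list_lock. The
point it demonstrates is that the fetch helper answers "is there more
work?" under the same lock acquisition that pops the finished job, so the
worker never has to take the lock a second time just to repeat the check.

/*
 * Minimal sketch of the single-lock free path (illustrative only).
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct job {
	struct job *next;
	bool finished;
};

static pthread_mutex_t job_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct job *pending_list;

/* Pop the first job if finished; report whether the new head is finished too. */
static struct job *get_finished_job(bool *have_more)
{
	struct job *job;

	pthread_mutex_lock(&job_list_lock);
	job = pending_list;
	if (job && job->finished) {
		pending_list = job->next;	/* remove from pending list */
		*have_more = pending_list && pending_list->finished;
	} else {
		job = NULL;
	}
	pthread_mutex_unlock(&job_list_lock);

	return job;
}

/* Counterpart of drm_sched_free_job_work(): one lock, one free, maybe re-queue. */
static void free_job_work(void)
{
	bool have_more = false;
	struct job *job = get_finished_job(&have_more);

	if (job) {
		printf("freed a job; more finished jobs pending: %s\n",
		       have_more ? "yes" : "no");
		if (have_more)
			printf("  -> would re-queue this work item\n");
	}
}

int main(void)
{
	struct job b = { .next = NULL, .finished = true };
	struct job a = { .next = &b, .finished = true };

	pending_list = &a;
	free_job_work();	/* pops a, sees b is already finished */
	free_job_work();	/* pops b, list now empty */

	return 0;
}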