From patchwork Tue Jan 21 15:15:43 2025
X-Patchwork-Submitter: Philipp Stanner
X-Patchwork-Id: 13946412
From: Philipp Stanner
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v2 1/3] drm/sched: Document run_job() refcount hazard
Date: Tue, 21 Jan 2025 16:15:43 +0100
Message-ID: <20250121151544.44949-3-phasta@kernel.org>
In-Reply-To: <20250121151544.44949-2-phasta@kernel.org>
References: <20250121151544.44949-2-phasta@kernel.org>

From: Philipp Stanner

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware has completed
the associated job. The scheduler does not increment the reference
count on that fence; it implicitly expects to inherit that reference
from run_job(). This is subtle and prone to misunderstandings: to keep
a reference for itself, a driver needs to call dma_fence_get() in
addition to dma_fence_init() in that callback.
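To make the documented contract concrete, below is a minimal sketch of
a driver-side run_job() that honors it. All foo_* names, the separately
allocated hardware fence and foo_submit_to_hw() are hypothetical
placeholders for illustration, not code from any real driver:

#include <linux/dma-fence.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical driver job; the base scheduler job is embedded as usual. */
struct foo_job {
	struct drm_sched_job base;
	struct dma_fence *hw_fence; /* signalled once the hardware is done */
};

void foo_submit_to_hw(struct foo_job *job); /* hypothetical HW submission */

static const char *foo_fence_get_driver_name(struct dma_fence *f)
{
	return "foo";
}

static const char *foo_fence_get_timeline_name(struct dma_fence *f)
{
	return "foo-ring";
}

static const struct dma_fence_ops foo_fence_ops = {
	.get_driver_name = foo_fence_get_driver_name,
	.get_timeline_name = foo_fence_get_timeline_name,
};

/* The fence lock must outlive the fence, so it lives at file scope here. */
static DEFINE_SPINLOCK(foo_fence_lock);

static struct dma_fence *foo_run_job(struct drm_sched_job *sched_job)
{
	struct foo_job *job = container_of(sched_job, struct foo_job, base);
	struct dma_fence *fence;

	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
	if (!fence)
		return ERR_PTR(-ENOMEM);

	/*
	 * dma_fence_init() starts the refcount at 1 (kref_init()). A real
	 * driver would allocate one fence context per ring, not per job
	 * as done here for brevity.
	 */
	dma_fence_init(fence, &foo_fence_ops, &foo_fence_lock,
		       dma_fence_context_alloc(1), 1);

	/*
	 * The reference created by dma_fence_init() is the one the
	 * scheduler inherits through the return value below. Since the
	 * driver keeps its own pointer in order to signal the fence
	 * later, it must take an additional reference for itself.
	 */
	job->hw_fence = dma_fence_get(fence);

	foo_submit_to_hw(job);

	return fence;
}

After run_job() returns, the refcount is thus two: one reference owned
by the scheduler (which the sched_main.c hunk below drops only after
its last use of the fence) and one owned by the driver until the
hardware signals.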
The situation is further complicated by the fact that the scheduler
also decrements the refcount in drm_sched_run_job_work(), relying on
the new reference it created in drm_sched_fence_scheduled(). It does,
however, still use its pointer to the fence after calling
dma_fence_put() - which is safe because of the aforementioned new
reference, but which still violates the refcounting rules.

Move the call to dma_fence_put() to after the last usage of the fence.

Document the necessity of incrementing the reference count in
drm_sched_backend_ops.run_job().

Suggested-by: Danilo Krummrich
Signed-off-by: Philipp Stanner
Reviewed-by: Danilo Krummrich
---
 drivers/gpu/drm/scheduler/sched_main.c |  5 ++---
 include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 57da84908752..7e69ebc09513 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1218,15 +1218,14 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 95e17504e46a..d5cd2a78f27c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-	 * @run_job: Called to execute the job once all of the dependencies
-	 * have been resolved. This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved. This may be called multiple times, if
+	 * timedout_job() has happened and drm_sched_job_recovery() decides to
+	 * try it again.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * Returns: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
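For completeness, the driver-side completion path matching the
hypothetical foo_run_job() sketch above would then signal the fence and
drop the driver's own reference, e.g. from an interrupt handler:

#include <linux/interrupt.h>

static irqreturn_t foo_irq_handler(int irq, void *data)
{
	struct foo_job *job = data;

	/* The hardware is done; signal the fence returned by run_job(). */
	dma_fence_signal(job->hw_fence);

	/*
	 * Drop the reference foo_run_job() took for the driver itself.
	 * The scheduler's inherited reference is dropped independently
	 * in drm_sched_run_job_work().
	 */
	dma_fence_put(job->hw_fence);
	job->hw_fence = NULL;

	return IRQ_HANDLED;
}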