From patchwork Tue Feb 18 11:12:45 2025
From: Philipp Stanner
To: Matthew Brost, Danilo Krummrich, Philipp Stanner, Christian König,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Simona Vetter, Sumit Semwal
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
    Philipp Stanner
Subject: [PATCH v4 1/3] drm/sched: Document run_job() refcount hazard
Date: Tue, 18 Feb 2025 12:12:45 +0100
Message-ID: <20250218111246.108266-3-phasta@kernel.org>
In-Reply-To: <20250218111246.108266-2-phasta@kernel.org>
References: <20250218111246.108266-2-phasta@kernel.org>

From: Philipp Stanner

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware has completed
the associated job. The scheduler does not increment the reference
count on that fence, but implicitly expects to inherit this fence from
run_job().

This is relatively subtle and prone to misunderstandings. It implies
that, to keep a reference for itself, a driver needs to call
dma_fence_get() in addition to dma_fence_init() in that callback.
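
For illustration, here is a minimal, hypothetical sketch of that
driver-side contract. The my_job structure and the to_my_job(),
my_hw_submit() and my_fence_ops names are made up for this example
only; they are not taken from any real driver:

#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
        /* Hypothetical container_of() helper around drm_sched_job. */
        struct my_job *job = to_my_job(sched_job);
        struct dma_fence *fence = &job->hw_fence;

        /*
         * dma_fence_init() sets the refcount to 1. That initial reference
         * is the one the scheduler "inherits" via the return value below.
         */
        dma_fence_init(fence, &my_fence_ops, &job->fence_lock,
                       job->fence_context, job->fence_seqno);

        /*
         * The driver will signal the fence from its completion interrupt
         * later, so it must take an additional reference for itself.
         */
        dma_fence_get(fence);

        /* Hypothetical: push the job to the hardware ring. */
        my_hw_submit(job);

        return fence;
}
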
It's further complicated by the fact that the scheduler even decrements
the refcount in drm_sched_run_job_work(), since it created a new
reference in drm_sched_fence_scheduled(). It does, however, still use
its pointer to the fence after calling dma_fence_put() - which is safe
because of the aforementioned new reference, but still violates the
refcounting rules.

Move the call to dma_fence_put() so that it comes after the last usage
of the fence.

Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().

Suggested-by: Danilo Krummrich
Signed-off-by: Philipp Stanner
Reviewed-by: Danilo Krummrich
---
 drivers/gpu/drm/scheduler/sched_main.c |  5 ++---
 include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8c36a59afb72..02af3f89099d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1222,15 +1222,14 @@ static void drm_sched_run_job_work(struct work_struct *w)
 
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 6bf458dbce84..916279b5aa00 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-	 * @run_job: Called to execute the job once all of the dependencies
-	 * have been resolved. This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved. This may be called multiple times, if
+	 * timedout_job() has happened and drm_sched_job_recovery() decides to
+	 * try it again.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * Returns: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
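
For completeness, a matching and equally hypothetical completion path
for the driver-side sketch above. It shows where the driver's own
reference from run_job() is eventually dropped; the scheduler releases
its inherited reference independently in drm_sched_run_job_work(), now
after the last access to the fence:

/* Hypothetical handler, e.g. called from the driver's IRQ path. */
static void my_job_complete(struct my_job *job)
{
        struct dma_fence *fence = &job->hw_fence;

        dma_fence_signal(fence);

        /*
         * Drop the reference that my_run_job() took for the driver
         * itself; the scheduler's inherited reference is not ours to put.
         */
        dma_fence_put(fence);
}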