From patchwork Mon Dec 30 16:52:57 2024
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13923366
From: Tvrtko Ursulin
To: dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, Tvrtko Ursulin, Christian König,
 Danilo Krummrich, Matthew Brost, Philipp Stanner
Subject: [RFC 12/14] drm/sched: Embed run queue singleton into the scheduler
Date: Mon, 30 Dec 2024 16:52:57 +0000
Message-ID: <20241230165259.95855-13-tursulin@igalia.com>
In-Reply-To: <20241230165259.95855-1-tursulin@igalia.com>
References: <20241230165259.95855-1-tursulin@igalia.com>

From: Tvrtko Ursulin

Now that the run queue to scheduler relationship is always 1:1 we can
embed the run queue directly into the scheduler struct and drop the
separate allocation together with its error handling code.
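For illustration, the conversion below repeats one pattern: every user
which previously chased the rq->sched back-pointer now recovers the
scheduler from the embedded member with container_of(). A minimal sketch
of the before and after, simplified from the hunks that follow:

	/* Before: the rq was a separate allocation with a back-pointer,
	 * so the owning scheduler was reached via rq->sched.
	 */
	struct drm_gpu_scheduler *sched = entity->rq->sched;

	/* After: the rq is embedded as sched->rq, so the owning scheduler
	 * is computed from the rq address alone.
	 */
	struct drm_gpu_scheduler *sched =
		container_of(entity->rq, typeof(*sched), rq);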
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c      |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |  5 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h   |  8 ++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c |  8 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c     |  8 +++---
 drivers/gpu/drm/scheduler/sched_entity.c    | 27 ++++++++----------
 drivers/gpu/drm/scheduler/sched_fence.c     |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c      | 31 ++++-----------------
 drivers/gpu/drm/scheduler/sched_rq.c        | 17 +++++------
 include/drm/gpu_scheduler.h                 | 11 ++------
 11 files changed, 56 insertions(+), 73 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d891ab779ca7..25028ac48844 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1108,7 +1108,8 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
 	if (p->gang_size > 1 && !p->adev->vm_manager.concurrent_flush) {
 		for (i = 0; i < p->gang_size; ++i) {
 			struct drm_sched_entity *entity = p->entities[i];
-			struct drm_gpu_scheduler *sched = entity->rq->sched;
+			struct drm_gpu_scheduler *sched =
+				container_of(entity->rq, typeof(*sched), rq);
 			struct amdgpu_ring *ring = to_amdgpu_ring(sched);
 
 			if (amdgpu_vmid_uses_reserved(adev, vm, ring->vm_hub))
@@ -1233,7 +1234,8 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 			return r;
 	}
 
-	sched = p->gang_leader->base.entity->rq->sched;
+	sched = container_of(p->gang_leader->base.entity->rq, typeof(*sched),
+			     rq);
 	while ((fence = amdgpu_sync_get_fence(&p->sync))) {
 		struct drm_sched_fence *s_fence = to_drm_sched_fence(fence);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 918b6d4919e1..f7abe413044e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -349,7 +349,9 @@ static struct dma_fence *
 amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
 		       struct drm_sched_entity *s_entity)
 {
-	struct amdgpu_ring *ring = to_amdgpu_ring(s_entity->rq->sched);
+	struct drm_gpu_scheduler *sched =
+		container_of(s_entity->rq, typeof(*sched), rq);
+	struct amdgpu_ring *ring = to_amdgpu_ring(sched);
 	struct amdgpu_job *job = to_amdgpu_job(sched_job);
 	struct dma_fence *fence = NULL;
 	int r;
@@ -418,7 +420,7 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
 
 void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
 {
-	struct drm_sched_rq *rq = sched->rq;
+	struct drm_sched_rq *rq = &sched->rq;
 	struct drm_sched_entity *s_entity;
 	struct drm_sched_job *s_job;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index ce6b9ba967ff..d6872baeba1e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -85,7 +85,10 @@ struct amdgpu_job {
 
 static inline struct amdgpu_ring *amdgpu_job_ring(struct amdgpu_job *job)
 {
-	return to_amdgpu_ring(job->base.entity->rq->sched);
+	struct drm_gpu_scheduler *sched =
+		container_of(job->base.entity->rq, typeof(*sched), rq);
+
+	return to_amdgpu_ring(sched);
 }
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index 383fce40d4dd..a3819ed20d27 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
@@ -145,6 +145,7 @@ TRACE_EVENT(amdgpu_cs,
 		     struct amdgpu_ib *ib),
 	    TP_ARGS(p, job, ib),
 	    TP_STRUCT__entry(
+			     __field(struct drm_gpu_scheduler *, sched)
 			     __field(struct amdgpu_bo_list *, bo_list)
 			     __field(u32, ring)
 			     __field(u32, dw)
@@ -152,11 +153,14 @@ TRACE_EVENT(amdgpu_cs,
 			     ),
 
 	    TP_fast_assign(
+			   __entry->sched = container_of(job->base.entity->rq,
+							 typeof(*__entry->sched),
+							 rq);
 			   __entry->bo_list = p->bo_list;
-			   __entry->ring = to_amdgpu_ring(job->base.entity->rq->sched)->idx;
+			   __entry->ring = to_amdgpu_ring(__entry->sched)->idx;
 			   __entry->dw = ib->length_dw;
 			   __entry->fences = amdgpu_fence_count_emitted(
-					to_amdgpu_ring(job->base.entity->rq->sched));
+					to_amdgpu_ring(__entry->sched));
 	    ),
 	    TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
 		      __entry->bo_list, __entry->ring, __entry->dw,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index 46d9fb433ab2..42f2bfb30af1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -105,13 +105,13 @@ static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p,
 static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
 				 struct dma_fence **fence)
 {
+	struct drm_gpu_scheduler *sched =
+		container_of(p->vm->delayed.rq, typeof(*sched), rq);
+	struct amdgpu_ring *ring =
+		container_of(sched, struct amdgpu_ring, sched);
 	struct amdgpu_ib *ib = p->job->ibs;
-	struct amdgpu_ring *ring;
 	struct dma_fence *f;
 
-	ring = container_of(p->vm->delayed.rq->sched, struct amdgpu_ring,
-			    sched);
-
 	WARN_ON(ib->length_dw == 0);
 	amdgpu_ring_pad_ib(ring, ib);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c
index e209b5e101df..182744c5f0cf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c
@@ -420,15 +420,15 @@ int amdgpu_xcp_open_device(struct amdgpu_device *adev,
 void amdgpu_xcp_release_sched(struct amdgpu_device *adev,
 			      struct amdgpu_ctx_entity *entity)
 {
-	struct drm_gpu_scheduler *sched;
-	struct amdgpu_ring *ring;
+	struct drm_gpu_scheduler *sched =
+		container_of(entity->entity.rq, typeof(*sched), rq);
 
 	if (!adev->xcp_mgr)
 		return;
 
-	sched = entity->entity.rq->sched;
 	if (sched->ready) {
-		ring = to_amdgpu_ring(entity->entity.rq->sched);
+		struct amdgpu_ring *ring = to_amdgpu_ring(sched);
+
 		atomic_dec(&adev->xcp_mgr->xcp[ring->xcp_id].ref_cnt);
 	}
 }
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index c5a4c04b2455..dc5105ca8381 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -77,19 +77,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
 	 * is initialized itself.
 	 */
 	entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
+	if (num_sched_list) {
+		entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
+		entity->rq = &sched_list[0]->rq;
+	}
 	RCU_INIT_POINTER(entity->last_scheduled, NULL);
 	RB_CLEAR_NODE(&entity->rb_tree_node);
-
-	if (num_sched_list && !sched_list[0]->rq) {
-		/* Since every entry covered by num_sched_list
-		 * should be non-NULL and therefore we warn drivers
-		 * not to do this and to fix their DRM calling order.
-		 */
-		pr_warn("%s: called with uninitialized scheduler\n", __func__);
-	} else if (num_sched_list) {
-		entity->rq = sched_list[0]->rq;
-	}
-
 	init_completion(&entity->entity_idle);
 
 	/* We start in an idle state. */
@@ -279,7 +272,7 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
 	if (!entity->rq)
 		return 0;
 
-	sched = entity->rq->sched;
+	sched = container_of(entity->rq, typeof(*sched), rq);
 	/**
 	 * The client will not queue more IBs during this fini, consume existing
 	 * queued IBs or discard them on SIGKILL
@@ -414,10 +407,12 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
 				    struct dma_fence_cb *cb)
 {
 	struct drm_sched_entity *entity =
 		container_of(cb, struct drm_sched_entity, cb);
+	struct drm_gpu_scheduler *sched =
+		container_of(entity->rq, typeof(*sched), rq);
 
 	entity->dependency = NULL;
 	dma_fence_put(f);
-	drm_sched_wakeup(entity->rq->sched);
+	drm_sched_wakeup(sched);
 }
 
 /*
@@ -531,7 +526,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 	spin_lock(&entity->lock);
 	sched = drm_sched_pick_best(entity->sched_list, entity->num_sched_list);
-	rq = sched ? sched->rq : NULL;
+	rq = sched ? &sched->rq : NULL;
 	if (rq != entity->rq) {
 		drm_sched_rq_remove_entity(entity->rq, entity);
 		entity->rq = rq;
@@ -556,12 +551,14 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 	struct drm_sched_fence *s_fence = sched_job->s_fence;
 	struct drm_sched_entity *entity = sched_job->entity;
 	struct dma_fence *fence = &s_fence->finished;
+	struct drm_gpu_scheduler *sched =
+		container_of(entity->rq, typeof(*sched), rq);
 	ktime_t submit_ts = ktime_get();
 	ktime_t fence_deadline;
 	bool first;
 
 	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(entity->rq->sched->score);
+	atomic_inc(sched->score);
 	WRITE_ONCE(entity->last_user, current->group_leader);
 
 	drm_sched_job_prepare_dependecies(sched_job);
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index dfc7f50d4e0d..a0f8fbba6d7e 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -226,7 +226,7 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
 	unsigned seq;
 
 	fence->entity = entity;
-	fence->sched = entity->rq->sched;
+	fence->sched = container_of(entity->rq, typeof(*fence->sched), rq);
 	seq = atomic_inc_return(&entity->fence_seq);
 	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
 		       &fence->lock, entity->fence_context, seq);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 6f4ea8a2ca17..67bf0bec3309 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -590,7 +590,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
 
 	BUG_ON(!entity);
 	drm_sched_entity_select_rq(entity);
-	sched = entity->rq->sched;
+	sched = container_of(entity->rq, typeof(*sched), rq);
 	job->sched = sched;
 	job->s_priority = entity->priority;
 
@@ -936,7 +936,7 @@ static void drm_sched_run_job_work(struct work_struct *w)
 
 	for (;;) {
 		/* Find entity with a ready job */
-		entity = drm_sched_rq_select_entity(sched, sched->rq);
+		entity = drm_sched_rq_select_entity(sched);
 		if (!entity)
 			break;	/* No more work */
 
@@ -1028,15 +1028,6 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 	sched->score = score ? score : &sched->_score;
 	sched->dev = dev;
 
-	if (sched->rq) {
-		/* Not an error, but warn anyway so drivers can
-		 * fine-tune their DRM calling order, and return all
-		 * is good.
-		 */
-		drm_warn(sched, "%s: scheduler already initialized!\n", __func__);
-		return 0;
-	}
-
 	if (submit_wq) {
 		sched->submit_wq = submit_wq;
 		sched->own_submit_wq = false;
@@ -1054,11 +1045,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		sched->own_submit_wq = true;
 	}
 
-	sched->rq = kmalloc(sizeof(*sched->rq), GFP_KERNEL | __GFP_ZERO);
-	if (!sched->rq)
-		goto Out_check_own;
-
-	drm_sched_rq_init(sched, sched->rq);
+	drm_sched_rq_init(sched);
 
 	init_waitqueue_head(&sched->job_scheduled);
 	INIT_LIST_HEAD(&sched->pending_list);
@@ -1072,12 +1059,6 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 	sched->pause_submit = false;
 	sched->ready = true;
 	return 0;
-
-Out_check_own:
-	if (sched->own_submit_wq)
-		destroy_workqueue(sched->submit_wq);
-	drm_err(sched, "%s: Failed to setup GPU scheduler--out of memory\n", __func__);
-	return -ENOMEM;
 }
 EXPORT_SYMBOL(drm_sched_init);
 
@@ -1104,7 +1085,7 @@ EXPORT_SYMBOL(drm_sched_init);
 
 void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
-	struct drm_sched_rq *rq = sched->rq;
+	struct drm_sched_rq *rq = &sched->rq;
 	struct drm_sched_entity *s_entity;
 
 	drm_sched_wqueue_stop(sched);
 
@@ -1128,8 +1109,6 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 	if (sched->own_submit_wq)
 		destroy_workqueue(sched->submit_wq);
 	sched->ready = false;
-	kfree(sched->rq);
-	sched->rq = NULL;
 }
 EXPORT_SYMBOL(drm_sched_fini);
 
@@ -1146,7 +1125,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad)
 {
 	struct drm_gpu_scheduler *sched = bad->sched;
 	struct drm_sched_entity *entity, *tmp;
-	struct drm_sched_rq *rq = sched->rq;
+	struct drm_sched_rq *rq = &sched->rq;
 
 	/* don't change @bad's karma if it's from KERNEL RQ,
 	 * because sometimes GPU hang would cause kernel jobs (like VM updating jobs)
diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c
index e96c8ca9c54b..2956d719c42d 100644
--- a/drivers/gpu/drm/scheduler/sched_rq.c
+++ b/drivers/gpu/drm/scheduler/sched_rq.c
@@ -47,13 +47,13 @@ static void __drm_sched_rq_add_tree_locked(struct drm_sched_entity *entity,
  *
  * Initializes a scheduler runqueue.
  */
-void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
-		       struct drm_sched_rq *rq)
+void drm_sched_rq_init(struct drm_gpu_scheduler *sched)
 {
+	struct drm_sched_rq *rq = &sched->rq;
+
 	spin_lock_init(&rq->lock);
 	INIT_LIST_HEAD(&rq->entities);
 	rq->rb_tree_root = RB_ROOT_CACHED;
-	rq->sched = sched;
 }
 
 /**
@@ -71,7 +71,7 @@ drm_sched_rq_add_entity(struct drm_sched_rq *rq,
 			struct drm_sched_entity *entity,
 			ktime_t ts)
 {
-	struct drm_gpu_scheduler *sched;
+	struct drm_gpu_scheduler *sched = container_of(rq, typeof(*sched), rq);
 
 	if (entity->stopped) {
 		DRM_ERROR("Trying to push to a killed entity\n");
@@ -81,7 +81,6 @@ drm_sched_rq_add_entity(struct drm_sched_rq *rq,
 	spin_lock(&entity->lock);
 	spin_lock(&rq->lock);
 
-	sched = rq->sched;
 	atomic_inc(sched->score);
 
 	if (!list_empty(&entity->list))
@@ -108,6 +107,8 @@ drm_sched_rq_add_entity(struct drm_sched_rq *rq,
 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 				struct drm_sched_entity *entity)
 {
+	struct drm_gpu_scheduler *sched = container_of(rq, typeof(*sched), rq);
+
 	lockdep_assert_held(&entity->lock);
 
 	if (list_empty(&entity->list))
@@ -115,7 +116,7 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 
 	spin_lock(&rq->lock);
 
-	atomic_dec(rq->sched->score);
+	atomic_dec(sched->score);
 	list_del_init(&entity->list);
 
 	if (!RB_EMPTY_NODE(&entity->rb_tree_node))
@@ -175,10 +176,10 @@ void drm_sched_rq_update_deadline(struct drm_sched_rq *rq,
  * Return an entity if one is found or NULL if no ready entity was found.
  */
 struct drm_sched_entity *
-drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched,
-			   struct drm_sched_rq *rq)
+drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched)
 {
 	struct drm_sched_entity *entity = NULL;
+	struct drm_sched_rq *rq = &sched->rq;
 	struct rb_node *rb;
 
 	spin_lock(&rq->lock);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c68dce8af063..7b29f45aa1da 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -238,7 +238,6 @@ struct drm_sched_entity {
 /**
  * struct drm_sched_rq - queue of entities to be scheduled.
  *
- * @sched: the scheduler to which this rq belongs to.
  * @lock: protects @entities, @rb_tree_root and @rr_deadline.
  * @entities: list of the entities to be scheduled.
  * @rb_tree_root: root of time based priority queue of entities for FIFO scheduling
@@ -248,8 +247,6 @@ struct drm_sched_entity {
  * the next entity to emit commands from.
  */
 struct drm_sched_rq {
-	struct drm_gpu_scheduler	*sched;
-
 	spinlock_t			lock;
 	/* Following members are protected by the @lock: */
 	ktime_t				rr_deadline;
@@ -518,7 +515,7 @@ struct drm_gpu_scheduler {
 	atomic_t			credit_count;
 	long				timeout;
 	const char			*name;
-	struct drm_sched_rq		*rq;
+	struct drm_sched_rq		rq;
 	wait_queue_head_t		job_scheduled;
 	atomic64_t			job_id_count;
 	struct workqueue_struct		*submit_wq;
@@ -585,8 +582,7 @@ bool drm_sched_dependency_optimized(struct dma_fence* fence,
 				    struct drm_sched_entity *entity);
 void drm_sched_fault(struct drm_gpu_scheduler *sched);
 
-void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
-		       struct drm_sched_rq *rq);
+void drm_sched_rq_init(struct drm_gpu_scheduler *sched);
 
 struct drm_gpu_scheduler *
 drm_sched_rq_add_entity(struct drm_sched_rq *rq,
@@ -598,8 +594,7 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 void drm_sched_rq_pop_entity(struct drm_sched_rq *rq,
 			     struct drm_sched_entity *entity);
 struct drm_sched_entity *
-drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched,
-			   struct drm_sched_rq *rq);
+drm_sched_rq_select_entity(struct drm_gpu_scheduler *sched);
 void drm_sched_rq_update_deadline(struct drm_sched_rq *rq,
 				  struct drm_sched_entity *entity,
 				  ktime_t deadline);
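As an aside, the container_of() lookup above is open-coded at every call
site; a small inline helper could centralise it. A hypothetical sketch
only; the helper name and placement are illustrative and not part of
this patch:

	/* Hypothetical helper, not added by this series: map an entity's
	 * current run queue back to the scheduler that embeds it.
	 */
	static inline struct drm_gpu_scheduler *
	drm_sched_entity_to_scheduler(struct drm_sched_entity *entity)
	{
		return container_of(entity->rq, struct drm_gpu_scheduler, rq);
	}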