From patchwork Tue Aug 1 20:50:56 2023
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13337254
From: Matthew Brost
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 1/8] drm/sched: Convert drm scheduler to use a work queue rather than kthread
Date: Tue, 1 Aug 2023 13:50:56 -0700
Message-Id: <20230801205103.627779-2-matthew.brost@intel.com>
In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com>
References: <20230801205103.627779-1-matthew.brost@intel.com>
Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com, Matthew Brost, sarah.walker@imgtec.com, ketil.johnsen@arm.com, Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com

In XE, the new Intel GPU driver, a choice has been made to have a 1 to 1 mapping between a drm_gpu_scheduler and a drm_sched_entity. At first this seems a bit odd, but let us explain the reasoning below.

1. In XE the submission order from multiple drm_sched_entity instances is not guaranteed to match the completion order, even when targeting the same hardware engine. This is because XE has a firmware scheduler, the GuC, which is allowed to reorder, timeslice, and preempt submissions. If a drm_gpu_scheduler is shared across multiple drm_sched_entity instances, the TDR falls apart, as the TDR expects submission order == completion order. Using a dedicated drm_gpu_scheduler per drm_sched_entity solves this problem.

2. In XE submissions are done by programming a ring buffer (circular buffer), and a drm_gpu_scheduler provides a limit on the number of in-flight jobs. If that limit is set to RING_SIZE / MAX_SIZE_PER_JOB, we get flow control on the ring for free.

A problem with this design is that a drm_gpu_scheduler currently uses a kthread for submission / job cleanup, which doesn't scale when a large number of drm_gpu_scheduler instances are used. To work around the scaling issue, use a worker rather than a kthread for submission / job cleanup.
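To make the flow-control point concrete, here is a minimal sketch of a driver-side init with this series applied. Everything prefixed example_ plus the ring-size constants are hypothetical placeholders, not taken from this series; only the drm_sched_init() signature (with the new run_wq argument) matches the patch below.

#include <drm/gpu_scheduler.h>
#include <linux/jiffies.h>
#include <linux/sizes.h>

#define EXAMPLE_RING_SIZE		SZ_16K	/* assumed ring size */
#define EXAMPLE_MAX_SIZE_PER_JOB	SZ_1K	/* assumed worst-case job size */

struct example_exec_queue {
	struct drm_gpu_scheduler sched;		/* one scheduler... */
	struct drm_sched_entity entity;		/* ...per entity */
	struct workqueue_struct *submit_wq;	/* driver-owned run workqueue */
	struct device *dev;
	const char *name;
};

static int example_queue_init(struct example_exec_queue *q,
			      const struct drm_sched_backend_ops *ops)
{
	/*
	 * hw_submission = RING_SIZE / MAX_SIZE_PER_JOB caps the number of
	 * in-flight jobs, so the ring buffer can never be overrun.
	 */
	return drm_sched_init(&q->sched, ops, q->submit_wq,
			      EXAMPLE_RING_SIZE / EXAMPLE_MAX_SIZE_PER_JOB,
			      0, msecs_to_jiffies(5000), NULL, NULL,
			      q->name, q->dev);
}

Because run_wq may be shared by many schedulers (or left NULL to fall back to system_wq), thousands of such queues no longer cost a kthread each.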
v2:
 - (Rob Clark) Fix msm build
 - Pass in run work queue
v3:
 - (Boris) don't have loop in worker

Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c     |   2 +-
 drivers/gpu/drm/lima/lima_sched.c           |   2 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c  |   6 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c        |   2 +-
 drivers/gpu/drm/panfrost/panfrost_job.c     |   2 +-
 drivers/gpu/drm/scheduler/sched_main.c      | 136 +++++++++++---------
 drivers/gpu/drm/v3d/v3d_sched.c             |  10 +-
 include/drm/gpu_scheduler.h                 |  14 +-
 10 files changed, 113 insertions(+), 89 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c index f60753f97ac5..9c2a10aeb0b3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c @@ -1489,9 +1489,9 @@ static int amdgpu_debugfs_test_ib_show(struct seq_file *m, void *unused) for (i = 0; i < AMDGPU_MAX_RINGS; i++) { struct amdgpu_ring *ring = adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; - kthread_park(ring->sched.thread); + drm_sched_run_wq_stop(&ring->sched); } seq_printf(m, "run ib test:\n"); @@ -1505,9 +1505,9 @@ static int amdgpu_debugfs_test_ib_show(struct seq_file *m, void *unused) for (i = 0; i < AMDGPU_MAX_RINGS; i++) { struct amdgpu_ring *ring = adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; - kthread_unpark(ring->sched.thread); + drm_sched_run_wq_start(&ring->sched); } up_write(&adev->reset_domain->sem); @@ -1727,7 +1727,7 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val) ring = adev->rings[val]; - if (!ring || !ring->funcs->preempt_ib || !ring->sched.thread) + if (!ring || !ring->funcs->preempt_ib || !ring->sched.ready) return -EINVAL; /* the last preemption failed */ @@ -1745,7 +1745,7 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val) goto pro_end; /* stop the scheduler */ - kthread_park(ring->sched.thread); + drm_sched_run_wq_stop(&ring->sched); /* preempt the IB */ r = amdgpu_ring_preempt_ib(ring); @@ -1779,7 +1779,7 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val) failure: /* restart the scheduler */ - kthread_unpark(ring->sched.thread); + drm_sched_run_wq_start(&ring->sched); up_read(&adev->reset_domain->sem); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index fac9312b1695..00c9c03c8f94 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -2364,7 +2364,7 @@ static int
amdgpu_device_init_schedulers(struct amdgpu_device *adev) break; } - r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, + r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, NULL, ring->num_hw_submission, amdgpu_job_hang_limit, timeout, adev->reset_domain->wq, ring->sched_score, ring->name, @@ -4627,7 +4627,7 @@ bool amdgpu_device_has_job_running(struct amdgpu_device *adev) for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; spin_lock(&ring->sched.job_list_lock); @@ -4753,7 +4753,7 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; /*clear job fence from fence drv to avoid force_completion @@ -5294,7 +5294,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev, for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = tmp_adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; drm_sched_stop(&ring->sched, job ? &job->base : NULL); @@ -5369,7 +5369,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev, for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = tmp_adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; drm_sched_start(&ring->sched, true); @@ -5696,7 +5696,7 @@ pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev *pdev, pci_channel_sta for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; drm_sched_stop(&ring->sched, NULL); @@ -5824,7 +5824,7 @@ void amdgpu_pci_resume(struct pci_dev *pdev) for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = adev->rings[i]; - if (!ring || !ring->sched.thread) + if (!ring || !ring->sched.ready) continue; drm_sched_start(&ring->sched, true); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 1ae87dfd19c4..8486a2923f1b 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -133,7 +133,7 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu) { int ret; - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, + ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL, etnaviv_hw_jobs_limit, etnaviv_job_hang_limit, msecs_to_jiffies(500), NULL, NULL, dev_name(gpu->dev), gpu->dev); diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index ff003403fbbc..54f53bece27c 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -488,7 +488,7 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name) INIT_WORK(&pipe->recover_work, lima_sched_recover_work); - return drm_sched_init(&pipe->base, &lima_sched_ops, 1, + return drm_sched_init(&pipe->base, &lima_sched_ops, NULL, 1, lima_job_hang_limit, msecs_to_jiffies(timeout), NULL, NULL, name, pipe->ldev->dev); diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index c5c4c93b3689..f76ce11a5384 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -662,7 +662,8 @@ static void suspend_scheduler(struct msm_gpu *gpu) */ for (i = 0; i < gpu->nr_rings; i++) { struct drm_gpu_scheduler *sched = &gpu->rb[i]->sched; - 
kthread_park(sched->thread); + + drm_sched_run_wq_stop(sched); } } @@ -672,7 +673,8 @@ static void resume_scheduler(struct msm_gpu *gpu) for (i = 0; i < gpu->nr_rings; i++) { struct drm_gpu_scheduler *sched = &gpu->rb[i]->sched; - kthread_unpark(sched->thread); + + drm_sched_run_wq_start(sched); } } diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 57a8e9564540..5879fc262047 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -95,7 +95,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, /* currently managing hangcheck ourselves: */ sched_timeout = MAX_SCHEDULE_TIMEOUT; - ret = drm_sched_init(&ring->sched, &msm_sched_ops, + ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL, num_hw_submissions, 0, sched_timeout, NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev); if (ret) { diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index dbc597ab46fb..f48b07056a16 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -815,7 +815,7 @@ int panfrost_job_init(struct panfrost_device *pfdev) js->queue[j].fence_context = dma_fence_context_alloc(1); ret = drm_sched_init(&js->queue[j].sched, - &panfrost_sched_ops, + &panfrost_sched_ops, NULL, nentries, 0, msecs_to_jiffies(JOB_TIMEOUT_MS), pfdev->reset.wq, diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index a18c8f5e8cc0..c3eed9e8062a 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -44,7 +44,6 @@ * The jobs in a entity are always scheduled in the order that they were pushed. */ -#include #include #include #include @@ -252,6 +251,47 @@ drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq) return rb ? rb_entry(rb, struct drm_sched_entity, rb_tree_node) : NULL; } +/** + * drm_sched_run_wq_stop - stop scheduler run worker + * + * @sched: scheduler instance to stop run worker + */ +void drm_sched_run_wq_stop(struct drm_gpu_scheduler *sched) +{ + WRITE_ONCE(sched->pause_run_wq, true); + cancel_work_sync(&sched->work_run); +} +EXPORT_SYMBOL(drm_sched_run_wq_stop); + +/** + * drm_sched_run_wq_start - start scheduler run worker + * + * @sched: scheduler instance to start run worker + */ +void drm_sched_run_wq_start(struct drm_gpu_scheduler *sched) +{ + WRITE_ONCE(sched->pause_run_wq, false); + queue_work(sched->run_wq, &sched->work_run); +} +EXPORT_SYMBOL(drm_sched_run_wq_start); + +/** + * drm_sched_run_wq_queue - queue scheduler run worker + * + * @sched: scheduler instance to queue run worker + */ +static void drm_sched_run_wq_queue(struct drm_gpu_scheduler *sched) +{ + /* + * Try not to schedule work if pause_run_wq set but not the end of world + * if we do as either it will be cancelled by the above + * cancel_work_sync, or drm_sched_main turns into a NOP while + * pause_run_wq is set. 
+ */ + if (!READ_ONCE(sched->pause_run_wq)) + queue_work(sched->run_wq, &sched->work_run); +} + /** * drm_sched_job_done - complete a job * @s_job: pointer to the job which is done @@ -271,7 +311,7 @@ static void drm_sched_job_done(struct drm_sched_job *s_job) dma_fence_get(&s_fence->finished); drm_sched_fence_finished(s_fence); dma_fence_put(&s_fence->finished); - wake_up_interruptible(&sched->wake_up_worker); + drm_sched_run_wq_queue(sched); } /** @@ -434,7 +474,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad) { struct drm_sched_job *s_job, *tmp; - kthread_park(sched->thread); + drm_sched_run_wq_stop(sched); /* * Reinsert back the bad job here - now it's safe as @@ -547,7 +587,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) spin_unlock(&sched->job_list_lock); } - kthread_unpark(sched->thread); + drm_sched_run_wq_start(sched); } EXPORT_SYMBOL(drm_sched_start); @@ -864,7 +904,7 @@ static bool drm_sched_ready(struct drm_gpu_scheduler *sched) void drm_sched_wakeup(struct drm_gpu_scheduler *sched) { if (drm_sched_ready(sched)) - wake_up_interruptible(&sched->wake_up_worker); + drm_sched_run_wq_queue(sched); } /** @@ -974,61 +1014,42 @@ drm_sched_pick_best(struct drm_gpu_scheduler **sched_list, } EXPORT_SYMBOL(drm_sched_pick_best); -/** - * drm_sched_blocked - check if the scheduler is blocked - * - * @sched: scheduler instance - * - * Returns true if blocked, otherwise false. - */ -static bool drm_sched_blocked(struct drm_gpu_scheduler *sched) -{ - if (kthread_should_park()) { - kthread_parkme(); - return true; - } - - return false; -} - /** * drm_sched_main - main scheduler thread * * @param: scheduler instance - * - * Returns 0. */ -static int drm_sched_main(void *param) +static void drm_sched_main(struct work_struct *w) { - struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param; + struct drm_gpu_scheduler *sched = + container_of(w, struct drm_gpu_scheduler, work_run); + struct drm_sched_entity *entity; + struct drm_sched_job *cleanup_job; int r; - sched_set_fifo_low(current); + if (READ_ONCE(sched->pause_run_wq)) + return; - while (!kthread_should_stop()) { - struct drm_sched_entity *entity = NULL; - struct drm_sched_fence *s_fence; - struct drm_sched_job *sched_job; - struct dma_fence *fence; - struct drm_sched_job *cleanup_job = NULL; + cleanup_job = drm_sched_get_cleanup_job(sched); + entity = drm_sched_select_entity(sched); - wait_event_interruptible(sched->wake_up_worker, - (cleanup_job = drm_sched_get_cleanup_job(sched)) || - (!drm_sched_blocked(sched) && - (entity = drm_sched_select_entity(sched))) || - kthread_should_stop()); + if (!entity && !cleanup_job) + return; /* No more work */ - if (cleanup_job) - sched->ops->free_job(cleanup_job); + if (cleanup_job) + sched->ops->free_job(cleanup_job); - if (!entity) - continue; + if (entity) { + struct dma_fence *fence; + struct drm_sched_fence *s_fence; + struct drm_sched_job *sched_job; sched_job = drm_sched_entity_pop_job(entity); - if (!sched_job) { complete_all(&entity->entity_idle); - continue; + if (!cleanup_job) + return; /* No more work */ + goto again; } s_fence = sched_job->s_fence; @@ -1055,14 +1076,17 @@ static int drm_sched_main(void *param) r); } else { if (IS_ERR(fence)) - dma_fence_set_error(&s_fence->finished, PTR_ERR(fence)); + dma_fence_set_error(&s_fence->finished, + PTR_ERR(fence)); drm_sched_job_done(sched_job); } wake_up(&sched->job_scheduled); } - return 0; + +again: + drm_sched_run_wq_queue(sched); } /** @@ -1070,6 +1094,7 @@ static 
int drm_sched_main(void *param) * * @sched: scheduler instance * @ops: backend operations for this scheduler + * @run_wq: workqueue to use for run work. If NULL, the system_wq is used * @hw_submission: number of hw submissions that can be in flight * @hang_limit: number of times to allow a job to hang before dropping it * @timeout: timeout value in jiffies for the scheduler @@ -1083,14 +1108,16 @@ static int drm_sched_main(void *param) */ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_backend_ops *ops, + struct workqueue_struct *run_wq, unsigned hw_submission, unsigned hang_limit, long timeout, struct workqueue_struct *timeout_wq, atomic_t *score, const char *name, struct device *dev) { - int i, ret; + int i; sched->ops = ops; sched->hw_submission_limit = hw_submission; sched->name = name; + sched->run_wq = run_wq ? : system_wq; sched->timeout = timeout; sched->timeout_wq = timeout_wq ? : system_wq; sched->hang_limit = hang_limit; @@ -1099,23 +1126,15 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++) drm_sched_rq_init(sched, &sched->sched_rq[i]); - init_waitqueue_head(&sched->wake_up_worker); init_waitqueue_head(&sched->job_scheduled); INIT_LIST_HEAD(&sched->pending_list); spin_lock_init(&sched->job_list_lock); atomic_set(&sched->hw_rq_count, 0); INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout); + INIT_WORK(&sched->work_run, drm_sched_main); atomic_set(&sched->_score, 0); atomic64_set(&sched->job_id_count, 0); - - /* Each scheduler will run on a seperate kernel thread */ - sched->thread = kthread_run(drm_sched_main, sched, sched->name); - if (IS_ERR(sched->thread)) { - ret = PTR_ERR(sched->thread); - sched->thread = NULL; - DRM_DEV_ERROR(sched->dev, "Failed to create scheduler for %s.\n", name); - return ret; - } + sched->pause_run_wq = false; sched->ready = true; return 0; @@ -1134,8 +1153,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) struct drm_sched_entity *s_entity; int i; - if (sched->thread) - kthread_stop(sched->thread); + drm_sched_run_wq_stop(sched); for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { struct drm_sched_rq *rq = &sched->sched_rq[i]; diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index 06238e6d7f5c..38e092ea41e6 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -388,7 +388,7 @@ v3d_sched_init(struct v3d_dev *v3d) int ret; ret = drm_sched_init(&v3d->queue[V3D_BIN].sched, - &v3d_bin_sched_ops, + &v3d_bin_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, NULL, "v3d_bin", v3d->drm.dev); @@ -396,7 +396,7 @@ v3d_sched_init(struct v3d_dev *v3d) return ret; ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched, - &v3d_render_sched_ops, + &v3d_render_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, NULL, "v3d_render", v3d->drm.dev); @@ -404,7 +404,7 @@ v3d_sched_init(struct v3d_dev *v3d) goto fail; ret = drm_sched_init(&v3d->queue[V3D_TFU].sched, - &v3d_tfu_sched_ops, + &v3d_tfu_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, NULL, "v3d_tfu", v3d->drm.dev); @@ -413,7 +413,7 @@ v3d_sched_init(struct v3d_dev *v3d) if (v3d_has_csd(v3d)) { ret = drm_sched_init(&v3d->queue[V3D_CSD].sched, - &v3d_csd_sched_ops, + &v3d_csd_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, NULL, "v3d_csd", v3d->drm.dev); @@ -421,7 +421,7 @@ 
v3d_sched_init(struct v3d_dev *v3d) goto fail; ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, - &v3d_cache_clean_sched_ops, + &v3d_cache_clean_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, NULL, "v3d_cache_clean", v3d->drm.dev); diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index c0586d832260..98fb5f85eba6 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -473,17 +473,16 @@ struct drm_sched_backend_ops { * @timeout: the time after which a job is removed from the scheduler. * @name: name of the ring for which this scheduler is being used. * @sched_rq: priority wise array of run queues. - * @wake_up_worker: the wait queue on which the scheduler sleeps until a job - * is ready to be scheduled. * @job_scheduled: once @drm_sched_entity_do_release is called the scheduler * waits on this wait queue until all the scheduled jobs are * finished. * @hw_rq_count: the number of jobs currently in the hardware queue. * @job_id_count: used to assign unique id to the each job. + * @run_wq: workqueue used to queue @work_run * @timeout_wq: workqueue used to queue @work_tdr + * @work_run: schedules jobs and cleans up entities * @work_tdr: schedules a delayed call to @drm_sched_job_timedout after the * timeout interval is over. - * @thread: the kthread on which the scheduler which run. * @pending_list: the list of jobs which are currently in the job queue. * @job_list_lock: lock to protect the pending_list. * @hang_limit: once the hangs by a job crosses this limit then it is marked @@ -492,6 +491,7 @@ struct drm_sched_backend_ops { * @_score: score used when the driver doesn't provide one * @ready: marks if the underlying HW is ready to work * @free_guilty: A hit to time out handler to free the guilty job. + * @pause_run_wq: pause queuing of @work_run on @run_wq * @dev: system &struct device * * One scheduler is implemented for each hardware ring. 
@@ -502,13 +502,13 @@ struct drm_gpu_scheduler { long timeout; const char *name; struct drm_sched_rq sched_rq[DRM_SCHED_PRIORITY_COUNT]; - wait_queue_head_t wake_up_worker; wait_queue_head_t job_scheduled; atomic_t hw_rq_count; atomic64_t job_id_count; + struct workqueue_struct *run_wq; struct workqueue_struct *timeout_wq; + struct work_struct work_run; struct delayed_work work_tdr; - struct task_struct *thread; struct list_head pending_list; spinlock_t job_list_lock; int hang_limit; @@ -516,11 +516,13 @@ struct drm_gpu_scheduler { atomic_t _score; bool ready; bool free_guilty; + bool pause_run_wq; struct device *dev; }; int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_backend_ops *ops, + struct workqueue_struct *run_wq, uint32_t hw_submission, unsigned hang_limit, long timeout, struct workqueue_struct *timeout_wq, atomic_t *score, const char *name, struct device *dev); @@ -550,6 +552,8 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, void drm_sched_job_cleanup(struct drm_sched_job *job); void drm_sched_wakeup(struct drm_gpu_scheduler *sched); +void drm_sched_run_wq_stop(struct drm_gpu_scheduler *sched); +void drm_sched_run_wq_start(struct drm_gpu_scheduler *sched); void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad); void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery); void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);

From patchwork Tue Aug 1 20:50:57 2023
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13337247
From: Matthew Brost
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 2/8] drm/sched: Move schedule policy to scheduler / entity
Date: Tue, 1 Aug 2023 13:50:57 -0700
Message-Id: <20230801205103.627779-3-matthew.brost@intel.com>
In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com>
References: <20230801205103.627779-1-matthew.brost@intel.com>
Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com, Matthew Brost, sarah.walker@imgtec.com, ketil.johnsen@arm.com, Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com

Rather than a global modparam for the scheduling policy, move the scheduling policy into the scheduler / entity so users can control the policy of each scheduler / entity.
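A hedged sketch of what this looks like at an init call site; the example_ names and the hw_submission value of 64 are hypothetical, only the drm_sched_init() signature and the drm_sched_policy values come from this patch:

struct example_device {
	struct drm_gpu_scheduler kernel_sched;
	struct drm_gpu_scheduler user_sched;
	struct device *dev;
};

static int example_init_schedulers(struct example_device *edev,
				   const struct drm_sched_backend_ops *ops)
{
	int ret;

	/* DRM_SCHED_POLICY_DEFAULT falls back to the modparam default */
	ret = drm_sched_init(&edev->kernel_sched, ops, NULL, 64, 0,
			     msecs_to_jiffies(500), NULL, NULL,
			     "example-default", DRM_SCHED_POLICY_DEFAULT,
			     edev->dev);
	if (ret)
		return ret;

	/* This scheduler is pinned to FIFO regardless of the modparam */
	return drm_sched_init(&edev->user_sched, ops, NULL, 64, 0,
			      msecs_to_jiffies(500), NULL, NULL,
			      "example-fifo", DRM_SCHED_POLICY_FIFO,
			      edev->dev);
}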
v2:
 - s/DRM_SCHED_POLICY_MAX/DRM_SCHED_POLICY_COUNT (Luben)
 - Only include policy in scheduler (Luben)

Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  1 +
 drivers/gpu/drm/etnaviv/etnaviv_sched.c    |  3 ++-
 drivers/gpu/drm/lima/lima_sched.c          |  3 ++-
 drivers/gpu/drm/msm/msm_ringbuffer.c       |  3 ++-
 drivers/gpu/drm/panfrost/panfrost_job.c    |  3 ++-
 drivers/gpu/drm/scheduler/sched_entity.c   | 24 ++++++++++++++++++----
 drivers/gpu/drm/scheduler/sched_main.c     | 23 +++++++++++++++------
 drivers/gpu/drm/v3d/v3d_sched.c            | 15 +++++++++-----
 include/drm/gpu_scheduler.h                | 20 ++++++++++++------
 9 files changed, 70 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 00c9c03c8f94..4df0fca5a74c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -2368,6 +2368,7 @@ static int amdgpu_device_init_schedulers(struct amdgpu_device *adev) ring->num_hw_submission, amdgpu_job_hang_limit, timeout, adev->reset_domain->wq, ring->sched_score, ring->name, + DRM_SCHED_POLICY_DEFAULT, adev->dev); if (r) { DRM_ERROR("Failed to create scheduler on ring %s.\n", diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 8486a2923f1b..61204a3f8b0b 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -136,7 +136,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu) ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL, etnaviv_hw_jobs_limit, etnaviv_job_hang_limit, msecs_to_jiffies(500), NULL, NULL, - dev_name(gpu->dev), gpu->dev); + dev_name(gpu->dev), DRM_SCHED_POLICY_DEFAULT, + gpu->dev); if (ret) return ret; diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index 54f53bece27c..33042ba6ae93 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -491,7 +491,8 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name) return drm_sched_init(&pipe->base, &lima_sched_ops, NULL, 1, lima_job_hang_limit, msecs_to_jiffies(timeout), NULL, - NULL, name, pipe->ldev->dev); + NULL, name, DRM_SCHED_POLICY_DEFAULT, + pipe->ldev->dev); } void lima_sched_pipe_fini(struct lima_sched_pipe *pipe) diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
b/drivers/gpu/drm/msm/msm_ringbuffer.c index 5879fc262047..f408a9097315 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -97,7 +97,8 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL, num_hw_submissions, 0, sched_timeout, - NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev); + NULL, NULL, to_msm_bo(ring->bo)->name, + DRM_SCHED_POLICY_DEFAULT, gpu->dev->dev); if (ret) { goto fail; } diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index f48b07056a16..effa48b33dce 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -819,7 +819,8 @@ int panfrost_job_init(struct panfrost_device *pfdev) nentries, 0, msecs_to_jiffies(JOB_TIMEOUT_MS), pfdev->reset.wq, - NULL, "pan_js", pfdev->dev); + NULL, "pan_js", DRM_SCHED_POLICY_DEFAULT, + pfdev->dev); if (ret) { dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret); goto err_sched; diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 15d04a0ec623..941ea8edead2 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -33,6 +33,20 @@ #define to_drm_sched_job(sched_job) \ container_of((sched_job), struct drm_sched_job, queue_node) +static bool bad_policies(struct drm_gpu_scheduler **sched_list, + unsigned int num_sched_list) +{ + enum drm_sched_policy sched_policy = sched_list[0]->sched_policy; + unsigned int i; + + /* All schedule policies must match */ + for (i = 1; i < num_sched_list; ++i) + if (sched_policy != sched_list[i]->sched_policy) + return true; + + return false; +} + /** * drm_sched_entity_init - Init a context entity used by scheduler when * submit to HW ring. @@ -62,7 +76,8 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, unsigned int num_sched_list, atomic_t *guilty) { - if (!(entity && sched_list && (num_sched_list == 0 || sched_list[0]))) + if (!(entity && sched_list && (num_sched_list == 0 || sched_list[0])) || + bad_policies(sched_list, num_sched_list)) return -EINVAL; memset(entity, 0, sizeof(struct drm_sched_entity)); @@ -440,7 +455,7 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) * Update the entity's location in the min heap according to * the timestamp of the next job, if any. 
*/ - if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) { + if (entity->rq->sched->sched_policy == DRM_SCHED_POLICY_FIFO) { struct drm_sched_job *next; next = to_drm_sched_job(spsc_queue_peek(&entity->job_queue)); @@ -506,7 +521,8 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) void drm_sched_entity_push_job(struct drm_sched_job *sched_job) { struct drm_sched_entity *entity = sched_job->entity; - bool first; + bool first, fifo = entity->rq->sched->sched_policy == + DRM_SCHED_POLICY_FIFO; trace_drm_sched_job(sched_job, entity); atomic_inc(entity->rq->sched->score); @@ -528,7 +544,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job) drm_sched_rq_add_entity(entity->rq, entity); spin_unlock(&entity->rq_lock); - if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) + if (fifo) drm_sched_rq_update_fifo(entity, sched_job->submit_ts); drm_sched_wakeup(entity->rq->sched); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index c3eed9e8062a..27989345889d 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -62,14 +62,14 @@ #define to_drm_sched_job(sched_job) \ container_of((sched_job), struct drm_sched_job, queue_node) -int drm_sched_policy = DRM_SCHED_POLICY_FIFO; +int default_drm_sched_policy = DRM_SCHED_POLICY_FIFO; /** * DOC: sched_policy (int) * Used to override default entities scheduling policy in a run queue. */ -MODULE_PARM_DESC(sched_policy, "Specify the scheduling policy for entities on a run-queue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin, " __stringify(DRM_SCHED_POLICY_FIFO) " = FIFO (default)."); -module_param_named(sched_policy, drm_sched_policy, int, 0444); +MODULE_PARM_DESC(sched_policy, "Specify the default scheduling policy for entities on a run-queue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin, " __stringify(DRM_SCHED_POLICY_FIFO) " = FIFO (default)."); +module_param_named(sched_policy, default_drm_sched_policy, int, 0444); static __always_inline bool drm_sched_entity_compare_before(struct rb_node *a, const struct rb_node *b) @@ -173,7 +173,7 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq, if (rq->current_entity == entity) rq->current_entity = NULL; - if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) + if (rq->sched->sched_policy == DRM_SCHED_POLICY_FIFO) drm_sched_rq_remove_fifo_locked(entity); spin_unlock(&rq->lock); @@ -925,7 +925,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) /* Kernel run queue has higher priority than normal run queue*/ for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { - entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? + entity = sched->sched_policy == DRM_SCHED_POLICY_FIFO ? drm_sched_rq_select_entity_fifo(&sched->sched_rq[i]) : drm_sched_rq_select_entity_rr(&sched->sched_rq[i]); if (entity) @@ -1102,6 +1102,7 @@ static void drm_sched_main(struct work_struct *w) * used * @score: optional score atomic shared with other schedulers * @name: name used for debugging + * @sched_policy: schedule policy * @dev: target &struct device * * Return 0 on success, otherwise error code. 
@@ -1111,9 +1112,15 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, struct workqueue_struct *run_wq, unsigned hw_submission, unsigned hang_limit, long timeout, struct workqueue_struct *timeout_wq, - atomic_t *score, const char *name, struct device *dev) + atomic_t *score, const char *name, + enum drm_sched_policy sched_policy, + struct device *dev) { int i; + + if (sched_policy >= DRM_SCHED_POLICY_COUNT) + return -EINVAL; + sched->ops = ops; sched->hw_submission_limit = hw_submission; sched->name = name; @@ -1123,6 +1130,10 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, sched->hang_limit = hang_limit; sched->score = score ? score : &sched->_score; sched->dev = dev; + if (sched_policy == DRM_SCHED_POLICY_DEFAULT) + sched->sched_policy = default_drm_sched_policy; + else + sched->sched_policy = sched_policy; for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++) drm_sched_rq_init(sched, &sched->sched_rq[i]); diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index 38e092ea41e6..5e3fe77fa991 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -391,7 +391,8 @@ v3d_sched_init(struct v3d_dev *v3d) &v3d_bin_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, - NULL, "v3d_bin", v3d->drm.dev); + NULL, "v3d_bin", DRM_SCHED_POLICY_DEFAULT, + v3d->drm.dev); if (ret) return ret; @@ -399,7 +400,8 @@ v3d_sched_init(struct v3d_dev *v3d) &v3d_render_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, - NULL, "v3d_render", v3d->drm.dev); + NULL, "v3d_render", DRM_SCHED_POLICY_DEFAULT, + v3d->drm.dev); if (ret) goto fail; @@ -407,7 +409,8 @@ v3d_sched_init(struct v3d_dev *v3d) &v3d_tfu_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, - NULL, "v3d_tfu", v3d->drm.dev); + NULL, "v3d_tfu", DRM_SCHED_POLICY_DEFAULT, + v3d->drm.dev); if (ret) goto fail; @@ -416,7 +419,8 @@ v3d_sched_init(struct v3d_dev *v3d) &v3d_csd_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, - NULL, "v3d_csd", v3d->drm.dev); + NULL, "v3d_csd", DRM_SCHED_POLICY_DEFAULT, + v3d->drm.dev); if (ret) goto fail; @@ -424,7 +428,8 @@ v3d_sched_init(struct v3d_dev *v3d) &v3d_cache_clean_sched_ops, NULL, hw_jobs_limit, job_hang_limit, msecs_to_jiffies(hang_limit_ms), NULL, - NULL, "v3d_cache_clean", v3d->drm.dev); + NULL, "v3d_cache_clean", + DRM_SCHED_POLICY_DEFAULT, v3d->drm.dev); if (ret) goto fail; } diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 98fb5f85eba6..7474142daca6 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -72,11 +72,15 @@ enum drm_sched_priority { DRM_SCHED_PRIORITY_UNSET = -2 }; -/* Used to chose between FIFO and RR jobs scheduling */ -extern int drm_sched_policy; - -#define DRM_SCHED_POLICY_RR 0 -#define DRM_SCHED_POLICY_FIFO 1 +/* Used to choose the default scheduling policy */ +extern int default_drm_sched_policy; + +enum drm_sched_policy { + DRM_SCHED_POLICY_DEFAULT, + DRM_SCHED_POLICY_RR, + DRM_SCHED_POLICY_FIFO, + DRM_SCHED_POLICY_COUNT, +}; /** * struct drm_sched_entity - A wrapper around a job queue (typically @@ -489,6 +493,7 @@ struct drm_sched_backend_ops { * guilty and it will no longer be considered for scheduling.
* @score: score to help loadbalancer pick a idle sched * @_score: score used when the driver doesn't provide one + * @sched_policy: Schedule policy for scheduler * @ready: marks if the underlying HW is ready to work * @free_guilty: A hit to time out handler to free the guilty job. * @pause_run_wq: pause queuing of @work_run on @run_wq @@ -514,6 +519,7 @@ struct drm_gpu_scheduler { int hang_limit; atomic_t *score; atomic_t _score; + enum drm_sched_policy sched_policy; bool ready; bool free_guilty; bool pause_run_wq; struct device *dev; }; @@ -525,7 +531,9 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, struct workqueue_struct *run_wq, uint32_t hw_submission, unsigned hang_limit, long timeout, struct workqueue_struct *timeout_wq, - atomic_t *score, const char *name, struct device *dev); + atomic_t *score, const char *name, + enum drm_sched_policy sched_policy, + struct device *dev); void drm_sched_fini(struct drm_gpu_scheduler *sched); int drm_sched_job_init(struct drm_sched_job *job,

From patchwork Tue Aug 1 20:50:58 2023
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13337248
From: Matthew Brost
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 3/8] drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy
Date: Tue, 1 Aug 2023 13:50:58 -0700
Message-Id: <20230801205103.627779-4-matthew.brost@intel.com>
In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com>
References: <20230801205103.627779-1-matthew.brost@intel.com>
Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com, Matthew Brost, sarah.walker@imgtec.com, ketil.johnsen@arm.com, Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com

DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between scheduler and entity. No priorities or run queues are used in this mode. Intended for devices with firmware schedulers.
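A hedged sketch of the intended 1:1 usage, reusing the hypothetical example_exec_queue from the sketch under patch 1 (with assumed ring_jobs, name, and dev fields); only drm_sched_init(), drm_sched_entity_init(), and the -EINVAL rules come from this patch:

static int example_single_entity_init(struct example_exec_queue *q,
				      const struct drm_sched_backend_ops *ops)
{
	struct drm_gpu_scheduler *sched_list[] = { &q->sched };
	int ret;

	ret = drm_sched_init(&q->sched, ops, q->submit_wq, q->ring_jobs,
			     0, msecs_to_jiffies(5000), NULL, NULL,
			     q->name, DRM_SCHED_POLICY_SINGLE_ENTITY,
			     q->dev);
	if (ret)
		return ret;

	/*
	 * With this policy, num_sched_list must be 1 and the scheduler
	 * must not already have a single_entity, otherwise
	 * drm_sched_entity_init() returns -EINVAL.
	 */
	return drm_sched_entity_init(&q->entity, DRM_SCHED_PRIORITY_NORMAL,
				     sched_list, 1, NULL);
}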
v2:
 - Drop sched / rq union (Luben)

Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/scheduler/sched_entity.c | 69 ++++++++++++++++++------
 drivers/gpu/drm/scheduler/sched_fence.c  |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c   | 62 ++++++++++++++++++---
 include/drm/gpu_scheduler.h              |  8 +++
 4 files changed, 118 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 941ea8edead2..59c1ca578256 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -83,6 +83,7 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, memset(entity, 0, sizeof(struct drm_sched_entity)); INIT_LIST_HEAD(&entity->list); entity->rq = NULL; + entity->single_sched = NULL; entity->guilty = guilty; entity->num_sched_list = num_sched_list; entity->priority = priority; @@ -90,8 +91,17 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, entity->last_scheduled = NULL; RB_CLEAR_NODE(&entity->rb_tree_node); - if(num_sched_list) - entity->rq = &sched_list[0]->sched_rq[entity->priority]; + if (num_sched_list) { + if (sched_list[0]->sched_policy != + DRM_SCHED_POLICY_SINGLE_ENTITY) { + entity->rq = &sched_list[0]->sched_rq[entity->priority]; + } else { + if (num_sched_list != 1 || sched_list[0]->single_entity) + return -EINVAL; + sched_list[0]->single_entity = entity; + entity->single_sched = sched_list[0]; + } + } init_completion(&entity->entity_idle); @@ -124,7 +134,8 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list) { - WARN_ON(!num_sched_list || !sched_list); + WARN_ON(!num_sched_list || !sched_list || + !!entity->single_sched); entity->sched_list = sched_list; entity->num_sched_list = num_sched_list; @@ -194,13 +205,15 @@ static void drm_sched_entity_kill(struct drm_sched_entity *entity) { struct drm_sched_job *job; struct dma_fence *prev; + bool single_entity = !!entity->single_sched; - if (!entity->rq) + if (!entity->rq && !single_entity) return; spin_lock(&entity->rq_lock); entity->stopped = true; - drm_sched_rq_remove_entity(entity->rq, entity); + if (!single_entity) + drm_sched_rq_remove_entity(entity->rq, entity); spin_unlock(&entity->rq_lock); /* Make sure this entity is not used by the scheduler at the moment */ @@ -222,6 +235,20 @@ static void drm_sched_entity_kill(struct drm_sched_entity *entity) dma_fence_put(prev); } +/** + * drm_sched_entity_to_scheduler - Schedule entity to GPU scheduler + * @entity: scheduler entity + * + * Returns GPU scheduler for the entity + */ +struct drm_gpu_scheduler * +drm_sched_entity_to_scheduler(struct drm_sched_entity *entity) +{ + bool single_entity = !!entity->single_sched; + + return single_entity ?
entity->single_sched : entity->rq->sched; +} + /** * drm_sched_entity_flush - Flush a context entity * @@ -239,11 +266,12 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout) struct drm_gpu_scheduler *sched; struct task_struct *last_user; long ret = timeout; + bool single_entity = !!entity->single_sched; - if (!entity->rq) + if (!entity->rq && !single_entity) return 0; - sched = entity->rq->sched; + sched = drm_sched_entity_to_scheduler(entity); /** * The client will not queue more IBs during this fini, consume existing * queued IBs or discard them on SIGKILL @@ -336,7 +364,7 @@ static void drm_sched_entity_wakeup(struct dma_fence *f, container_of(cb, struct drm_sched_entity, cb); drm_sched_entity_clear_dep(f, cb); - drm_sched_wakeup(entity->rq->sched); + drm_sched_wakeup(drm_sched_entity_to_scheduler(entity)); } /** @@ -350,6 +378,8 @@ static void drm_sched_entity_wakeup(struct dma_fence *f, void drm_sched_entity_set_priority(struct drm_sched_entity *entity, enum drm_sched_priority priority) { + WARN_ON(!!entity->single_sched); + spin_lock(&entity->rq_lock); entity->priority = priority; spin_unlock(&entity->rq_lock); @@ -362,7 +392,7 @@ EXPORT_SYMBOL(drm_sched_entity_set_priority); */ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) { - struct drm_gpu_scheduler *sched = entity->rq->sched; + struct drm_gpu_scheduler *sched = drm_sched_entity_to_scheduler(entity); struct dma_fence *fence = entity->dependency; struct drm_sched_fence *s_fence; @@ -455,7 +485,8 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) * Update the entity's location in the min heap according to * the timestamp of the next job, if any. */ - if (entity->rq->sched->sched_policy == DRM_SCHED_POLICY_FIFO) { + if (drm_sched_entity_to_scheduler(entity)->sched_policy == + DRM_SCHED_POLICY_FIFO) { struct drm_sched_job *next; next = to_drm_sched_job(spsc_queue_peek(&entity->job_queue)); @@ -472,6 +503,8 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) struct drm_gpu_scheduler *sched; struct drm_sched_rq *rq; + WARN_ON(!!entity->single_sched); + /* single possible engine and already selected */ if (!entity->sched_list) return; @@ -521,17 +554,22 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) void drm_sched_entity_push_job(struct drm_sched_job *sched_job) { struct drm_sched_entity *entity = sched_job->entity; - bool first, fifo = entity->rq->sched->sched_policy == - DRM_SCHED_POLICY_FIFO; + bool single_entity = !!entity->single_sched; + bool first; trace_drm_sched_job(sched_job, entity); - atomic_inc(entity->rq->sched->score); + if (!single_entity) + atomic_inc(entity->rq->sched->score); WRITE_ONCE(entity->last_user, current->group_leader); first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node); sched_job->submit_ts = ktime_get(); /* first job wakes up scheduler */ if (first) { + struct drm_gpu_scheduler *sched = + drm_sched_entity_to_scheduler(entity); + bool fifo = sched->sched_policy == DRM_SCHED_POLICY_FIFO; + /* Add the entity to the run queue */ spin_lock(&entity->rq_lock); if (entity->stopped) { @@ -541,13 +579,14 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job) return; } - drm_sched_rq_add_entity(entity->rq, entity); + if (!single_entity) + drm_sched_rq_add_entity(entity->rq, entity); spin_unlock(&entity->rq_lock); if (fifo) drm_sched_rq_update_fifo(entity, sched_job->submit_ts); - drm_sched_wakeup(entity->rq->sched); + drm_sched_wakeup(sched); } } 
EXPORT_SYMBOL(drm_sched_entity_push_job); diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c index fe9c6468e440..d7cfc0441885 100644 --- a/drivers/gpu/drm/scheduler/sched_fence.c +++ b/drivers/gpu/drm/scheduler/sched_fence.c @@ -213,7 +213,7 @@ void drm_sched_fence_init(struct drm_sched_fence *fence, { unsigned seq; - fence->sched = entity->rq->sched; + fence->sched = drm_sched_entity_to_scheduler(entity); seq = atomic_inc_return(&entity->fence_seq); dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled, &fence->lock, entity->fence_context, seq); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 27989345889d..2597fb298733 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -32,7 +32,8 @@ * backend operations to the scheduler like submitting a job to hardware run queue, * returning the dependencies of a job etc. * - * The organisation of the scheduler is the following: + * The organisation of the scheduler is the following for scheduling policies + * DRM_SCHED_POLICY_RR and DRM_SCHED_POLICY_FIFO: * * 1. Each hw run queue has one scheduler * 2. Each scheduler has multiple run queues with different priorities @@ -42,6 +43,22 @@ * the hardware. * * The jobs in a entity are always scheduled in the order that they were pushed. + * + * The organisation of the scheduler is the following for scheduling policy + * DRM_SCHED_POLICY_SINGLE_ENTITY: + * + * 1. One to one relationship between scheduler and entity + * 2. No priorities implemented per scheduler (single job queue) + * 3. No run queues in scheduler; rather, jobs are directly dequeued from entity + * 4. The entity maintains a queue of jobs that will be scheduled on the + * hardware + * + * The jobs in an entity are always scheduled in the order that they were pushed, + * regardless of scheduling policy. + * + * A policy of DRM_SCHED_POLICY_RR or DRM_SCHED_POLICY_FIFO is expected to be used + * when the KMD is scheduling directly on the hardware, while a scheduling policy + * of DRM_SCHED_POLICY_SINGLE_ENTITY is expected to be used when there is a + * firmware scheduler.
*/ #include @@ -92,6 +109,8 @@ static inline void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *enti void drm_sched_rq_update_fifo(struct drm_sched_entity *entity, ktime_t ts) { + WARN_ON(!!entity->single_sched); + /* * Both locks need to be grabbed, one to protect from entity->rq change * for entity from within concurrent drm_sched_entity_select_rq and the @@ -122,6 +141,8 @@ void drm_sched_rq_update_fifo(struct drm_sched_entity *entity, ktime_t ts) static void drm_sched_rq_init(struct drm_gpu_scheduler *sched, struct drm_sched_rq *rq) { + WARN_ON(sched->sched_policy == DRM_SCHED_POLICY_SINGLE_ENTITY); + spin_lock_init(&rq->lock); INIT_LIST_HEAD(&rq->entities); rq->rb_tree_root = RB_ROOT_CACHED; @@ -140,6 +161,8 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched, void drm_sched_rq_add_entity(struct drm_sched_rq *rq, struct drm_sched_entity *entity) { + WARN_ON(!!entity->single_sched); + if (!list_empty(&entity->list)) return; @@ -162,6 +185,8 @@ void drm_sched_rq_add_entity(struct drm_sched_rq *rq, void drm_sched_rq_remove_entity(struct drm_sched_rq *rq, struct drm_sched_entity *entity) { + WARN_ON(!!entity->single_sched); + if (list_empty(&entity->list)) return; @@ -667,7 +692,7 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner) { - if (!entity->rq) + if (!entity->rq && !entity->single_sched) return -ENOENT; job->entity = entity; @@ -700,13 +725,16 @@ void drm_sched_job_arm(struct drm_sched_job *job) { struct drm_gpu_scheduler *sched; struct drm_sched_entity *entity = job->entity; + bool single_entity = !!entity->single_sched; BUG_ON(!entity); - drm_sched_entity_select_rq(entity); - sched = entity->rq->sched; + if (!single_entity) + drm_sched_entity_select_rq(entity); + sched = drm_sched_entity_to_scheduler(entity); job->sched = sched; - job->s_priority = entity->rq - sched->sched_rq; + if (!single_entity) + job->s_priority = entity->rq - sched->sched_rq; job->id = atomic64_inc_return(&sched->job_id_count); drm_sched_fence_init(job->s_fence, job->entity); @@ -923,6 +951,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) if (!drm_sched_ready(sched)) return NULL; + if (sched->single_entity) { + if (drm_sched_entity_is_ready(sched->single_entity)) + return sched->single_entity; + + return NULL; + } + /* Kernel run queue has higher priority than normal run queue*/ for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { entity = sched->sched_policy == DRM_SCHED_POLICY_FIFO ? @@ -1122,6 +1157,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, return -EINVAL; sched->ops = ops; + sched->single_entity = NULL; sched->hw_submission_limit = hw_submission; sched->name = name; sched->run_wq = run_wq ? 
: system_wq; @@ -1134,7 +1170,9 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, sched->sched_policy = default_drm_sched_policy; else sched->sched_policy = sched_policy; - for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++) + for (i = DRM_SCHED_PRIORITY_MIN; sched_policy != + DRM_SCHED_POLICY_SINGLE_ENTITY && i < DRM_SCHED_PRIORITY_COUNT; + i++) drm_sched_rq_init(sched, &sched->sched_rq[i]); init_waitqueue_head(&sched->job_scheduled); @@ -1166,7 +1204,15 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) drm_sched_run_wq_stop(sched); - for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { + if (sched->single_entity) { + spin_lock(&sched->single_entity->rq_lock); + sched->single_entity->stopped = true; + spin_unlock(&sched->single_entity->rq_lock); + } + + for (i = DRM_SCHED_PRIORITY_COUNT - 1; sched->sched_policy != + DRM_SCHED_POLICY_SINGLE_ENTITY && i >= DRM_SCHED_PRIORITY_MIN; + i--) { struct drm_sched_rq *rq = &sched->sched_rq[i]; if (!rq) @@ -1210,6 +1256,8 @@ void drm_sched_increase_karma(struct drm_sched_job *bad) struct drm_sched_entity *entity; struct drm_gpu_scheduler *sched = bad->sched; + WARN_ON(sched->sched_policy == DRM_SCHED_POLICY_SINGLE_ENTITY); + /* don't change @bad's karma if it's from KERNEL RQ, * because sometimes GPU hang would cause kernel jobs (like VM updating jobs) * corrupt but keep in mind that kernel jobs always considered good. diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 7474142daca6..df1993dd44ae 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -79,6 +79,7 @@ enum drm_sched_policy { DRM_SCHED_POLICY_DEFAULT, DRM_SCHED_POLICY_RR, DRM_SCHED_POLICY_FIFO, + DRM_SCHED_POLICY_SINGLE_ENTITY, DRM_SCHED_POLICY_COUNT, }; @@ -112,6 +113,9 @@ struct drm_sched_entity { */ struct drm_sched_rq *rq; + /** @single_sched: Single scheduler */ + struct drm_gpu_scheduler *single_sched; + /** * @sched_list: * @@ -473,6 +477,7 @@ struct drm_sched_backend_ops { * struct drm_gpu_scheduler - scheduler instance-specific data * * @ops: backend operations provided by the driver. + * @single_entity: Single entity for the scheduler * @hw_submission_limit: the max size of the hardware queue. * @timeout: the time after which a job is removed from the scheduler. * @name: name of the ring for which this scheduler is being used. 
@@ -503,6 +508,7 @@ struct drm_gpu_scheduler { const struct drm_sched_backend_ops *ops; + struct drm_sched_entity *single_entity; uint32_t hw_submission_limit; long timeout; const char *name; @@ -584,6 +590,8 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list, atomic_t *guilty); +struct drm_gpu_scheduler * +drm_sched_entity_to_scheduler(struct drm_sched_entity *entity); long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout); void drm_sched_entity_fini(struct drm_sched_entity *entity); void drm_sched_entity_destroy(struct drm_sched_entity *entity);

From patchwork Tue Aug 1 20:50:59 2023
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13337249
From: Matthew Brost
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 4/8] drm/sched: Add generic scheduler message interface
Date: Tue, 1 Aug 2023 13:50:59 -0700
Message-Id: <20230801205103.627779-5-matthew.brost@intel.com>
In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com>
References: <20230801205103.627779-1-matthew.brost@intel.com>
Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com, Matthew Brost, sarah.walker@imgtec.com, ketil.johnsen@arm.com, Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com

Add a generic scheduler message interface which sends messages to the backend from the drm_gpu_scheduler main submission thread. The idea is that some of these messages modify state in a drm_sched_entity which is also modified during submission. By processing these messages and submissions in the same thread, there is no race when changing state in a drm_sched_entity.

This interface will be used in XE, the new Intel GPU driver, to clean up, suspend, resume, and change scheduling properties of a drm_sched_entity. The interface is designed to be generic and extendable, with only the backend understanding the messages.
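A hedged sketch of a backend using this interface; the opcodes and all example_ names (including the suspend/resume helpers) are hypothetical, while struct drm_sched_msg, drm_sched_add_msg(), the process_msg hook, and the rule that process_msg frees dynamically allocated messages come from the patch below. Usual includes (linux/slab.h, drm/gpu_scheduler.h) are assumed.

enum example_msg_opcode {
	EXAMPLE_MSG_SUSPEND,
	EXAMPLE_MSG_RESUME,
};

static void example_process_msg(struct drm_sched_msg *msg)
{
	struct example_exec_queue *q = msg->private_data;

	/*
	 * Runs in the scheduler's submission worker, so it cannot race
	 * with job submission against the same entity.
	 */
	switch (msg->opcode) {
	case EXAMPLE_MSG_SUSPEND:
		example_queue_suspend(q);	/* hypothetical helper */
		break;
	case EXAMPLE_MSG_RESUME:
		example_queue_resume(q);	/* hypothetical helper */
		break;
	}
	kfree(msg);	/* process_msg owns dynamically allocated messages */
}

static const struct drm_sched_backend_ops example_sched_ops = {
	/* .run_job, .timedout_job, .free_job elided */
	.process_msg = example_process_msg,
};

static int example_queue_suspend_async(struct example_exec_queue *q)
{
	struct drm_sched_msg *msg = kzalloc(sizeof(*msg), GFP_KERNEL);

	if (!msg)
		return -ENOMEM;

	msg->opcode = EXAMPLE_MSG_SUSPEND;
	msg->private_data = q;
	drm_sched_add_msg(&q->sched, msg);
	return 0;
}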
Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net,
 donald.robson@imgtec.com, boris.brezillon@collabora.com,
 christian.koenig@amd.com, faith.ekstrand@collabora.com
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"

Add a generic scheduler message interface which sends messages to the
backend from the drm_gpu_scheduler main submission thread. The idea is
that some of these messages modify state in a drm_sched_entity which is
also modified during submission. By handling these messages and
submission in the same thread there is no race when changing state in a
drm_sched_entity.

This interface will be used in XE, the new Intel GPU driver, to clean
up, suspend, resume, and change the scheduling properties of a
drm_sched_entity.

The interface is designed to be generic and extensible, with only the
backend understanding the messages.
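A rough sketch of the intended backend usage follows. The xe_* names
below are hypothetical; only struct drm_sched_msg, drm_sched_add_msg(),
and the process_msg() hook are added by this patch:

/* Hypothetical backend-defined opcodes; the scheduler never interprets them. */
enum xe_sched_msg_opcode {
	XE_SCHED_MSG_SUSPEND,
	XE_SCHED_MSG_RESUME,
};

struct xe_exec_queue;	/* hypothetical backend object */

/*
 * Runs in the scheduler's main submission thread, so it is naturally
 * serialized against run_job() on the same scheduler.
 */
static void xe_sched_process_msg(struct drm_sched_msg *msg)
{
	struct xe_exec_queue *q = msg->private_data;

	switch (msg->opcode) {
	case XE_SCHED_MSG_SUSPEND:
		xe_exec_queue_suspend(q);	/* hypothetical helper */
		break;
	case XE_SCHED_MSG_RESUME:
		xe_exec_queue_resume(q);	/* hypothetical helper */
		break;
	}

	kfree(msg);	/* process_msg must free dynamically allocated messages */
}

static const struct drm_sched_backend_ops xe_sched_ops = {
	/* .run_job, .timedout_job, .free_job as usual */
	.process_msg = xe_sched_process_msg,
};

/* Caller side: hand a message to the submission thread. */
static int xe_exec_queue_suspend_async(struct drm_gpu_scheduler *sched,
				       struct xe_exec_queue *q)
{
	struct drm_sched_msg *msg = kzalloc(sizeof(*msg), GFP_KERNEL);

	if (!msg)
		return -ENOMEM;

	msg->opcode = XE_SCHED_MSG_SUSPEND;
	msg->private_data = q;
	drm_sched_add_msg(sched, msg);

	return 0;
}

Because the message is consumed by the same thread that calls
run_job(), the backend can change entity state from process_msg()
without additional locking against in-flight submission.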
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/scheduler/sched_main.c | 52 +++++++++++++++++++++++++-
 include/drm/gpu_scheduler.h | 29 +++++++++++++-
 2 files changed, 78 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 2597fb298733..84821a124ca2 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1049,6 +1049,49 @@ drm_sched_pick_best(struct drm_gpu_scheduler **sched_list,
 }
 EXPORT_SYMBOL(drm_sched_pick_best);
 
+/**
+ * drm_sched_add_msg - add scheduler message
+ *
+ * @sched: scheduler instance
+ * @msg: message to be added
+ *
+ * Can and will pass any jobs waiting on dependencies or in a runnable queue.
+ * Message processing will stop if the scheduler run wq is stopped and resume
+ * when the run wq is started.
+ */
+void drm_sched_add_msg(struct drm_gpu_scheduler *sched,
+		       struct drm_sched_msg *msg)
+{
+	spin_lock(&sched->job_list_lock);
+	list_add_tail(&msg->link, &sched->msgs);
+	spin_unlock(&sched->job_list_lock);
+
+	drm_sched_run_wq_queue(sched);
+}
+EXPORT_SYMBOL(drm_sched_add_msg);
+
+/**
+ * drm_sched_get_msg - get scheduler message
+ *
+ * @sched: scheduler instance
+ *
+ * Returns a message or NULL if no messages are pending.
+ */
+static struct drm_sched_msg *
+drm_sched_get_msg(struct drm_gpu_scheduler *sched)
+{
+	struct drm_sched_msg *msg;
+
+	spin_lock(&sched->job_list_lock);
+	msg = list_first_entry_or_null(&sched->msgs,
+				       struct drm_sched_msg, link);
+	if (msg)
+		list_del(&msg->link);
+	spin_unlock(&sched->job_list_lock);
+
+	return msg;
+}
+
 /**
  * drm_sched_main - main scheduler thread
  *
@@ -1060,6 +1103,7 @@ static void drm_sched_main(struct work_struct *w)
 		container_of(w, struct drm_gpu_scheduler, work_run);
 	struct drm_sched_entity *entity;
 	struct drm_sched_job *cleanup_job;
+	struct drm_sched_msg *msg;
 	int r;
 
 	if (READ_ONCE(sched->pause_run_wq))
@@ -1067,12 +1111,15 @@
 	cleanup_job = drm_sched_get_cleanup_job(sched);
 	entity = drm_sched_select_entity(sched);
+	msg = drm_sched_get_msg(sched);
 
-	if (!entity && !cleanup_job)
+	if (!entity && !cleanup_job && !msg)
 		return;	/* No more work */
 
 	if (cleanup_job)
 		sched->ops->free_job(cleanup_job);
+	if (msg)
+		sched->ops->process_msg(msg);
 
 	if (entity) {
 		struct dma_fence *fence;
@@ -1082,7 +1129,7 @@
 		sched_job = drm_sched_entity_pop_job(entity);
 		if (!sched_job) {
 			complete_all(&entity->entity_idle);
-			if (!cleanup_job)
+			if (!cleanup_job && !msg)
 				return;	/* No more work */
 			goto again;
 		}
@@ -1177,6 +1224,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 
 	init_waitqueue_head(&sched->job_scheduled);
 	INIT_LIST_HEAD(&sched->pending_list);
+	INIT_LIST_HEAD(&sched->msgs);
 	spin_lock_init(&sched->job_list_lock);
 	atomic_set(&sched->hw_rq_count, 0);
 	INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index df1993dd44ae..267bd060d178 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -394,6 +394,23 @@ enum drm_gpu_sched_stat {
 	DRM_GPU_SCHED_STAT_ENODEV,
 };
 
+/**
+ * struct drm_sched_msg - an in-band (relative to GPU scheduler run queue)
+ * message
+ *
+ * Generic enough for backend-defined messages; the backend can extend this
+ * if needed.
+ */
+struct drm_sched_msg {
+	/** @link: list link into the gpu scheduler list of messages */
+	struct list_head link;
+	/**
+	 * @private_data: opaque pointer to message private data (backend defined)
+	 */
+	void *private_data;
+	/** @opcode: opcode of message (backend defined) */
+	unsigned int opcode;
+};
+
 /**
  * struct drm_sched_backend_ops - Define the backend operations
  *	called by the scheduler
@@ -471,6 +488,12 @@ struct drm_sched_backend_ops {
 	 * and it's time to clean it up.
 	 */
 	void (*free_job)(struct drm_sched_job *sched_job);
+
+	/**
+	 * @process_msg: Process a message. Allowed to block; it is this
+	 * function's responsibility to free the message if it was dynamically
+	 * allocated.
+	 */
+	void (*process_msg)(struct drm_sched_msg *msg);
 };
 
 /**
@@ -482,6 +505,7 @@ struct drm_sched_backend_ops {
  * @timeout: the time after which a job is removed from the scheduler.
  * @name: name of the ring for which this scheduler is being used.
  * @sched_rq: priority wise array of run queues.
+ * @msgs: list of messages to be processed in @work_run
  * @job_scheduled: once @drm_sched_entity_do_release is called the scheduler
  *                 waits on this wait queue until all the scheduled jobs are
  *                 finished.
@@ -489,7 +513,7 @@ struct drm_sched_backend_ops {
  * @job_id_count: used to assign unique id to the each job.
  * @run_wq: workqueue used to queue @work_run
  * @timeout_wq: workqueue used to queue @work_tdr
- * @work_run: schedules jobs and cleans up entities
+ * @work_run: schedules jobs, cleans up jobs, and processes messages
 * @work_tdr: schedules a delayed call to @drm_sched_job_timedout after the
 *            timeout interval is over.
 * @pending_list: the list of jobs which are currently in the job queue.
@@ -513,6 +537,7 @@ struct drm_gpu_scheduler { long timeout; const char *name; struct drm_sched_rq sched_rq[DRM_SCHED_PRIORITY_COUNT]; + struct list_head msgs; wait_queue_head_t job_scheduled; atomic_t hw_rq_count; atomic64_t job_id_count; @@ -566,6 +591,8 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, void drm_sched_job_cleanup(struct drm_sched_job *job); void drm_sched_wakeup(struct drm_gpu_scheduler *sched); +void drm_sched_add_msg(struct drm_gpu_scheduler *sched, + struct drm_sched_msg *msg); void drm_sched_run_wq_stop(struct drm_gpu_scheduler *sched); void drm_sched_run_wq_start(struct drm_gpu_scheduler *sched); void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad); From patchwork Tue Aug 1 20:51:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 13337252 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CDD61C0015E for ; Tue, 1 Aug 2023 20:51:28 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D4D7D10E429; Tue, 1 Aug 2023 20:51:18 +0000 (UTC) Received: from mgamail.intel.com (unknown [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 976BA10E1E1; Tue, 1 Aug 2023 20:51:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690923073; x=1722459073; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JDicnaBt5OskT5ZGlrFYCSlynmRxlR1jYycrLI7/7pE=; b=lFYgHZMyD3yWZ4KoDGL29j/S5EozkvUu3DAFHFyfB37K/BmuaABmlHMh TkEI22t/O06CGe+dZwJF7f2Wgtu45rDZJsOPyHhEMwp0QZKj6xff5H897 G/bajWO032f0mcahhOf/LIx6yQYOY3szvp5qZMJBsnjO9FJ/mSmiUzPm3 LfY8zV/GtkjHfb3OFbT4C6s52qCTaZOc1XocpnbZjB5TNxvCiIcWxAlHc RBIw7GMeH6lrpoogFttgQgS/fjgYC4NtYKqrDX5sWjzfAbx1inHpCHA/O KiBAYN/kzN1Jj5iVwThfYkEx6Np4FWcuHrl73cC0LDhBEhmnXSChTAxkq Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10789"; a="373051744" X-IronPort-AV: E=Sophos;i="6.01,248,1684825200"; d="scan'208";a="373051744" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Aug 2023 13:51:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="872215411" Received: from lstrano-desk.jf.intel.com ([10.54.39.91]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Aug 2023 13:51:13 -0700 From: Matthew Brost To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org Subject: [PATCH 5/8] drm/sched: Add drm_sched_start_timeout_unlocked helper Date: Tue, 1 Aug 2023 13:51:00 -0700 Message-Id: <20230801205103.627779-6-matthew.brost@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com> References: <20230801205103.627779-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com, Matthew Brost , 
sarah.walker@imgtec.com, ketil.johnsen@arm.com, Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Also add a lockdep assert to drm_sched_start_timeout. Signed-off-by: Matthew Brost --- drivers/gpu/drm/scheduler/sched_main.c | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 84821a124ca2..be963d68a733 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -360,11 +360,20 @@ static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb) */ static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched) { + lockdep_assert_held(&sched->job_list_lock); + if (sched->timeout != MAX_SCHEDULE_TIMEOUT && !list_empty(&sched->pending_list)) queue_delayed_work(sched->timeout_wq, &sched->work_tdr, sched->timeout); } +static void drm_sched_start_timeout_unlocked(struct drm_gpu_scheduler *sched) +{ + spin_lock(&sched->job_list_lock); + drm_sched_start_timeout(sched); + spin_unlock(&sched->job_list_lock); +} + /** * drm_sched_fault - immediately start timeout handler * @@ -476,11 +485,8 @@ static void drm_sched_job_timedout(struct work_struct *work) spin_unlock(&sched->job_list_lock); } - if (status != DRM_GPU_SCHED_STAT_ENODEV) { - spin_lock(&sched->job_list_lock); - drm_sched_start_timeout(sched); - spin_unlock(&sched->job_list_lock); - } + if (status != DRM_GPU_SCHED_STAT_ENODEV) + drm_sched_start_timeout_unlocked(sched); } /** @@ -606,11 +612,8 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) drm_sched_job_done(s_job); } - if (full_recovery) { - spin_lock(&sched->job_list_lock); - drm_sched_start_timeout(sched); - spin_unlock(&sched->job_list_lock); - } + if (full_recovery) + drm_sched_start_timeout_unlocked(sched); drm_sched_run_wq_start(sched); } From patchwork Tue Aug 1 20:51:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 13337253 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3BD49C04A6A for ; Tue, 1 Aug 2023 20:51:30 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4631710E436; Tue, 1 Aug 2023 20:51:18 +0000 (UTC) Received: from mgamail.intel.com (unknown [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4803210E1E1; Tue, 1 Aug 2023 20:51:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690923073; x=1722459073; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=FdGRfPvC9A7+2edmXqNAAKQwJfk4O4+s/Hgwkl1xXh0=; b=gFv8S5iHzEdZuxC26hAZmEcrctYCusqzR/ID+lR7ZuDJOGvReAjoGmVn Agp1oVSUPGhq5VltfrJIhGTbaAq6hCJ6Ofq9gkoZFMCKATYk+WjxSDV9/ qpZJFjZQ1z3mASJhBTuFMBkGuiJNSzk75tSSNVpO8PZDE6ZuZE+iUAegK UZXnr2Ivzwz2QkUD3zo6KDndjHHB+ITxcxGyyK4oYv00a2fJpwNiMYujT 
PmKvnmwZaSvPomb8Dnt4UJ3zR58Glsmwxf0xjYlrpx6Wy2rbE3BBl4+fK
 8UETqm7pVigJwpXtMaOLoQWxgV117ExiYKHOIjwGjwgsU+n+7kqVycxdb
 Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10789"; a="373051759"
X-IronPort-AV: E=Sophos;i="6.01,248,1684825200"; d="scan'208";a="373051759"
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 01 Aug 2023 13:51:12 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="872215414"
Received: from lstrano-desk.jf.intel.com ([10.54.39.91])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 01 Aug 2023 13:51:14 -0700
From: Matthew Brost
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 6/8] drm/sched: Start run wq before TDR in drm_sched_start
Date: Tue, 1 Aug 2023 13:51:01 -0700
Message-Id: <20230801205103.627779-7-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com>
References: <20230801205103.627779-1-matthew.brost@intel.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Direct Rendering Infrastructure - Development
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com,
 Matthew Brost , sarah.walker@imgtec.com, ketil.johnsen@arm.com,
 Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net,
 donald.robson@imgtec.com, boris.brezillon@collabora.com,
 christian.koenig@amd.com, faith.ekstrand@collabora.com
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"

If the TDR is set to a very small value it can fire before the run wq
is started in the function drm_sched_start. The run wq is expected to
be running when the TDR fires; fix this ordering so this expectation is
always met.
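Concretely, the tail of drm_sched_start becomes the following (a sketch
of the ordering established by the diff below, not additional code):

	drm_sched_run_wq_start(sched);	/* submission worker may run again */

	if (full_recovery)
		drm_sched_start_timeout_unlocked(sched);	/* TDR armed only now */

	/*
	 * With the old order the TDR was armed first, so a very small
	 * sched->timeout could expire while the run wq was still stopped.
	 */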
Signed-off-by: Matthew Brost --- drivers/gpu/drm/scheduler/sched_main.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index be963d68a733..2e404a6542ad 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -612,10 +612,10 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) drm_sched_job_done(s_job); } + drm_sched_run_wq_start(sched); + if (full_recovery) drm_sched_start_timeout_unlocked(sched); - - drm_sched_run_wq_start(sched); } EXPORT_SYMBOL(drm_sched_start); From patchwork Tue Aug 1 20:51:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 13337251 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5FD1FC04A6A for ; Tue, 1 Aug 2023 20:51:27 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0723910E433; Tue, 1 Aug 2023 20:51:18 +0000 (UTC) Received: from mgamail.intel.com (unknown [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id C551310E41A; Tue, 1 Aug 2023 20:51:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690923073; x=1722459073; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DQoIpnI1x5g9Tu0v4cJDT9GXQ8upQSDJsJaCp/suKlo=; b=HpQHtgpce/nSGG+LPkvNRnGRMX9czNYuUCGGuDFCFLmePYcWLwxADQov AptiELNjH/70e1GucD+gc6sCLsFRYiv7mUKOf2yASEjD5rEXBz+PoAR/E XpF8S7yNerP0Z/NBnBc98On76VFvMecAt4lTeiFnbvW0LAaZhl4FgaMF+ Xm1uFXSuAyHLmWmhWARUo1PsCv34Hm0h5QinnCMFiyAY5Cu9gQDKfQz6c HU0q9bSunFOCKZLYcU7m+7BStQ3hQSedP1JzYFFnjJKn5U0cllQTi5a+U QoUtVwSIW9NmyCMOHXK8SzJuIMbilnSRi294KJO6+wpWlxEEbjWaFKY87 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10789"; a="373051769" X-IronPort-AV: E=Sophos;i="6.01,248,1684825200"; d="scan'208";a="373051769" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Aug 2023 13:51:12 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="872215419" Received: from lstrano-desk.jf.intel.com ([10.54.39.91]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Aug 2023 13:51:14 -0700 From: Matthew Brost To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org Subject: [PATCH 7/8] drm/sched: Submit job before starting TDR Date: Tue, 1 Aug 2023 13:51:02 -0700 Message-Id: <20230801205103.627779-8-matthew.brost@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com> References: <20230801205103.627779-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com, Matthew Brost , sarah.walker@imgtec.com, ketil.johnsen@arm.com, Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net, 
donald.robson@imgtec.com, boris.brezillon@collabora.com,
 christian.koenig@amd.com, faith.ekstrand@collabora.com
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"

If the TDR is set to a small enough value, it can fire before a job is
submitted in drm_sched_main. The job should always be submitted before
the TDR fires; fix this ordering.

v2:
 - Add to pending list before run_job, start TDR after (Luben, Boris)
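With this change the per-job sequence in drm_sched_main is the
following (a sketch; only the drm_sched_start_timeout_unlocked() call
is new in this patch):

	drm_sched_job_begin(sched_job);		/* job on pending_list; no longer arms the TDR */
	fence = sched->ops->run_job(sched_job);	/* job handed to the backend */
	complete_all(&entity->entity_idle);
	drm_sched_fence_scheduled(s_fence);
	drm_sched_start_timeout_unlocked(sched);	/* TDR armed only after submission */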
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/scheduler/sched_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 2e404a6542ad..9573f13f8459 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -445,7 +445,6 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 
 	spin_lock(&sched->job_list_lock);
 	list_add_tail(&s_job->list, &sched->pending_list);
-	drm_sched_start_timeout(sched);
 	spin_unlock(&sched->job_list_lock);
 }
 
@@ -1146,6 +1145,7 @@ static void drm_sched_main(struct work_struct *w)
 		fence = sched->ops->run_job(sched_job);
 		complete_all(&entity->entity_idle);
 		drm_sched_fence_scheduled(s_fence);
+		drm_sched_start_timeout_unlocked(sched);
 
 		if (!IS_ERR_OR_NULL(fence)) {
 			drm_sched_fence_set_parent(s_fence, fence);

From patchwork Tue Aug 1 20:51:03 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13337250
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
 aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by smtp.lore.kernel.org (Postfix) with ESMTPS id 69550C0015E
 for ; Tue, 1 Aug 2023 20:51:25 +0000 (UTC)
Received: from gabe.freedesktop.org (localhost [127.0.0.1])
 by gabe.freedesktop.org (Postfix) with ESMTP id 9990210E432;
 Tue, 1 Aug 2023 20:51:17 +0000 (UTC)
Received: from mgamail.intel.com (unknown [134.134.136.65])
 by gabe.freedesktop.org (Postfix) with ESMTPS id 3B8D210E429;
 Tue, 1 Aug 2023 20:51:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
 d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690923074; x=1722459074;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=Dd+fx2XJT/5S633RvpLfZ/sglev7L2POlqePydFod/g=;
 b=TwZcmgM/+lafCggaA4LCPBWjVzrpayZuJ1EkV5vBvqGWXCfOQy+Vg6wL
 jvOajcpxZOks6fGl6LpJwrsRbz9RkGe/lN+mLRbaWyUDEnWKWwGXR0KCd
 yT+hJ+hG0icliwXZDvOsxP5qP5VsIrPUlL85o7Vyh7daDW8tJ2N/1+qLc
 aHQhVTTAC3W45uBvMZrmikVzBZECBZg7KolpTaCQETLPNkF1F3T0wRwli
 flKBbEPs/T1JH02ImIegAlhqN1GMwihQtP02moTSnyHlsg7xqoTj5h3rz
 qf3V3DGWKDUTQAx30cx4bbZ/ln6x7tUHWYYv3joTeQb2hg0SdCQQx7nj+
 Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10789"; a="373051785"
X-IronPort-AV: E=Sophos;i="6.01,248,1684825200"; d="scan'208";a="373051785"
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 01 Aug 2023 13:51:13 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="872215422"
Received: from lstrano-desk.jf.intel.com ([10.54.39.91])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 01 Aug 2023 13:51:14 -0700
From: Matthew Brost
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 8/8] drm/sched: Add helper to set TDR timeout
Date: Tue, 1 Aug 2023 13:51:03 -0700
Message-Id: <20230801205103.627779-9-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230801205103.627779-1-matthew.brost@intel.com>
References: <20230801205103.627779-1-matthew.brost@intel.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Direct Rendering Infrastructure - Development
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Cc: robdclark@chromium.org, thomas.hellstrom@linux.intel.com,
 Matthew Brost , sarah.walker@imgtec.com, ketil.johnsen@arm.com,
 Liviu.Dudau@arm.com, luben.tuikov@amd.com, lina@asahilina.net,
 donald.robson@imgtec.com, boris.brezillon@collabora.com,
 christian.koenig@amd.com, faith.ekstrand@collabora.com
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"

Add a helper to set the TDR timeout and restart the TDR with the new
timeout value. This will be used in XE, the new Intel GPU driver, to
trigger the TDR to clean up a drm_sched_entity that encounters errors.

Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/scheduler/sched_main.c | 18 ++++++++++++++++++
 include/drm/gpu_scheduler.h | 1 +
 2 files changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 9573f13f8459..19ec0cb5caee 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -374,6 +374,24 @@ static void drm_sched_start_timeout_unlocked(struct drm_gpu_scheduler *sched)
 	spin_unlock(&sched->job_list_lock);
 }
 
+/**
+ * drm_sched_set_timeout - set timeout for reset worker
+ *
+ * @sched: scheduler instance to set and (re)-start the worker for
+ * @timeout: timeout period
+ *
+ * Set and (re)-start the timeout for the given scheduler.
+ */
+void drm_sched_set_timeout(struct drm_gpu_scheduler *sched, long timeout)
+{
+	spin_lock(&sched->job_list_lock);
+	sched->timeout = timeout;
+	cancel_delayed_work(&sched->work_tdr);
+	drm_sched_start_timeout(sched);
+	spin_unlock(&sched->job_list_lock);
+}
+EXPORT_SYMBOL(drm_sched_set_timeout);
+
 /**
  * drm_sched_fault - immediately start timeout handler
  *
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 267bd060d178..f4af856aebd9 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -589,6 +589,7 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
 				   struct drm_gpu_scheduler **sched_list,
 				   unsigned int num_sched_list);
 
+void drm_sched_set_timeout(struct drm_gpu_scheduler *sched, long timeout);
 void drm_sched_job_cleanup(struct drm_sched_job *job);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 void drm_sched_add_msg(struct drm_gpu_scheduler *sched,