[v4,00/10] DRM scheduler changes for Xe

Message ID 20230919050155.2647172-1-matthew.brost@intel.com

Message

Matthew Brost Sept. 19, 2023, 5:01 a.m. UTC
As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
have been asked to merge our common DRM scheduler patches first.

This is a continuation of an RFC [3] with all comments addressed, ready for
a full review, and hopefully in a state which can be merged in the near
future. More details of this series can be found in the cover letter of the
RFC [3].

These changes have been tested with the Xe driver.

v2:
 - Break run job, free job, and process message into their own work items
   (see the sketch below)
 - This might break other drivers as run job and free job can now run in
   parallel; we can fix this up if needed
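
As a rough illustration of the work-item split above, a minimal sketch
with hypothetical names (sched_work_ctx, my_run_next_job,
my_reap_finished_jobs); this is not code from the series:

#include <linux/container_of.h>
#include <linux/workqueue.h>

struct sched_work_ctx;

/* Hypothetical stand-ins for the real scheduler logic. */
void my_run_next_job(struct sched_work_ctx *ctx);       /* calls run_job() */
void my_reap_finished_jobs(struct sched_work_ctx *ctx); /* calls free_job() */

/* Each scheduler operation becomes its own work item on a driver-owned
 * workqueue.  Unless that workqueue is ordered, the run-job and
 * free-job items may execute concurrently, which is the behavior
 * change called out above.
 */
struct sched_work_ctx {
        struct workqueue_struct *submit_wq;
        struct work_struct run_job_work;   /* INIT_WORK(..., run_job_work_fn) */
        struct work_struct free_job_work;  /* INIT_WORK(..., free_job_work_fn) */
};

static void run_job_work_fn(struct work_struct *w)
{
        struct sched_work_ctx *ctx =
                container_of(w, struct sched_work_ctx, run_job_work);

        my_run_next_job(ctx);
}

static void free_job_work_fn(struct work_struct *w)
{
        struct sched_work_ctx *ctx =
                container_of(w, struct sched_work_ctx, free_job_work);

        my_reap_finished_jobs(ctx);
}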

v3:
 - Include missing patch 'drm/sched: Add drm_sched_submit_* helpers'
 - Fix issue with setting the timestamp too early
 - Don't dequeue jobs for single entity after calling entity fini
 - Flush pending jobs on entity fini
 - Add documentation for entity teardown
 - Add Matthew Brost to maintainers of DRM scheduler

v4:
 - Drop message interface
 - Drop 'Flush pending jobs on entity fini'
 - Drop 'Add documentation for entity teardown'
 - Address all feedback

Matt

Matthew Brost (10):
  drm/sched: Add drm_sched_submit_* helpers
  drm/sched: Convert drm scheduler to use a work queue rather than
    kthread
  drm/sched: Move schedule policy to scheduler
  drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy
  drm/sched: Split free_job into own work item
  drm/sched: Add drm_sched_start_timeout_unlocked helper
  drm/sched: Start submission before TDR in drm_sched_start
  drm/sched: Submit job before starting TDR
  drm/sched: Add helper to queue TDR immediately for current and future
    jobs
  drm/sched: Update maintainers of GPU scheduler
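
For a sense of how the headline changes combine, a sketch of a driver
pausing submission around a reset with the drm_sched_submit_* helpers
the first patch adds (the helper names come from the patch subject;
the reset flow itself is illustrative, not code from the series):

#include <drm/gpu_scheduler.h>

/* With the scheduler running on a work queue instead of a kthread,
 * stopping and restarting submission parks and re-queues work items
 * rather than parking a thread.
 */
static void my_driver_reset(struct drm_gpu_scheduler *sched)
{
        drm_sched_submit_stop(sched);   /* pause submission work */
        /* ... reset the hardware and handle pending jobs ... */
        drm_sched_submit_start(sched);  /* re-queue submission work */
}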

 MAINTAINERS                                   |   1 +
 .../drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c   |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   |  15 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  15 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c       |   5 +-
 drivers/gpu/drm/lima/lima_sched.c             |   5 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c    |   6 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c          |   5 +-
 drivers/gpu/drm/nouveau/nouveau_sched.c       |   5 +-
 drivers/gpu/drm/panfrost/panfrost_job.c       |   5 +-
 drivers/gpu/drm/scheduler/sched_entity.c      |  85 ++-
 drivers/gpu/drm/scheduler/sched_fence.c       |   2 +-
 drivers/gpu/drm/scheduler/sched_main.c        | 491 ++++++++++++------
 drivers/gpu/drm/v3d/v3d_sched.c               |  25 +-
 include/drm/gpu_scheduler.h                   |  48 +-
 15 files changed, 495 insertions(+), 220 deletions(-)

Comments

Danilo Krummrich Sept. 19, 2023, 11:44 a.m. UTC | #1
Hi Matt,

On 9/19/23 07:01, Matthew Brost wrote:
> As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
> have been asked to merge our common DRM scheduler patches first.
> 
> This is a continuation of an RFC [3] with all comments addressed, ready for
> a full review, and hopefully in a state which can be merged in the near
> future. More details of this series can be found in the cover letter of the
> RFC [3].
> 
> These changes have been tested with the Xe driver.
> 
> v2:
>   - Break run job, free job, and process message into their own work items
>   - This might break other drivers as run job and free job can now run in
>     parallel; we can fix this up if needed
> 
> v3:
>   - Include missing patch 'drm/sched: Add drm_sched_submit_* helpers'
>   - Fix issue with setting the timestamp too early
>   - Don't dequeue jobs for single entity after calling entity fini
>   - Flush pending jobs on entity fini
>   - Add documentation for entity teardown
>   - Add Matthew Brost to maintainers of DRM scheduler
> 
> v4:
>   - Drop message interface
>   - Drop 'Flush pending jobs on entity fini'
>   - Drop 'Add documentation for entity teardown'
>   - Address all feedback

There is some feedback from V3 that doesn't seem to be addressed yet.

(1) Document tear down of struct drm_gpu_scheduler. [1]
(2) Implement helpers to tear down struct drm_gpu_scheduler. [2]
(3) Document the implications of using a workqueue with respect to whether
     free_job() is part of the fence signaling path. [3]

I think at least (1) and (3) should be part of this series. I think (2) could
also happen subsequently. Christian's idea [2] of how to address this is quite
interesting, but it might exceed the scope of this series.
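
Concretely, the ordering at stake in (1) and (2) is something like the
sketch below, assuming drm_sched_fini() keeps its current behavior of
not calling free_job() for jobs still on the pending list;
wait_for_pending_jobs() and struct my_ctx are hypothetical stand-ins:

#include <drm/gpu_scheduler.h>

struct my_ctx {
        struct drm_gpu_scheduler sched;
        struct drm_sched_entity entity;
};

void wait_for_pending_jobs(struct my_ctx *ctx);  /* hypothetical */

/* Tear-down order a driver has to get right today:
 * 1) destroy the entity so no new jobs reach the scheduler,
 * 2) wait until in-flight jobs have signaled and free_job() has run,
 * 3) only then call drm_sched_fini(), otherwise pending jobs leak.
 */
static void my_sched_teardown(struct my_ctx *ctx)
{
        drm_sched_entity_destroy(&ctx->entity);
        wait_for_pending_jobs(ctx);
        drm_sched_fini(&ctx->sched);
}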

I will try to rebase my Nouveau changes onto your V4 tomorrow for a quick test.

- Danilo

[1] https://lore.kernel.org/all/20230912021615.2086698-1-matthew.brost@intel.com/T/#m2e8c1c1e68e8127d5dd62509b5e424a12217300b
[2] https://lore.kernel.org/all/20230912021615.2086698-1-matthew.brost@intel.com/T/#m16a0d6fa2e617383776515af45d3c6b9db543d47
[3] https://lore.kernel.org/all/20230912021615.2086698-1-matthew.brost@intel.com/T/#m807ff95284089fdb51985f1c187666772314bd8a

Danilo Krummrich Sept. 25, 2023, 9:47 p.m. UTC | #2
On 9/19/23 13:44, Danilo Krummrich wrote:
> Hi Matt,
> 
> There is some feedback from V3 that doesn't seem to be addressed yet.
> 
> (1) Document tear down of struct drm_gpu_scheduler. [1]
> (2) Implement helpers to tear down struct drm_gpu_scheduler. [2]
> (3) Document the implications of using a workqueue with respect to whether
>      free_job() is part of the fence signaling path. [3]
> 
> I think at least (1) and (3) should be part of this series. I think (2) could
> also happen subsequently. Christian's idea [2] of how to address this is quite
> interesting, but it might exceed the scope of this series.
> 
> I will try to rebase my Nouveau changes onto your V4 tomorrow for a quick test.

Tested-by: Danilo Krummrich <dakr@redhat.com>

> 
> - Danilo
> 
> [1] https://lore.kernel.org/all/20230912021615.2086698-1-matthew.brost@intel.com/T/#m2e8c1c1e68e8127d5dd62509b5e424a12217300b
> [2] https://lore.kernel.org/all/20230912021615.2086698-1-matthew.brost@intel.com/T/#m16a0d6fa2e617383776515af45d3c6b9db543d47
> [3] https://lore.kernel.org/all/20230912021615.2086698-1-matthew.brost@intel.com/T/#m807ff95284089fdb51985f1c187666772314bd8a
Boris Brezillon Sept. 27, 2023, 7:33 a.m. UTC | #3
On Mon, 18 Sep 2023 22:01:45 -0700
Matthew Brost <matthew.brost@intel.com> wrote:

> As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
> have been asked to merge our common DRM scheduler patches first.
> 
> This is a continuation of an RFC [3] with all comments addressed, ready for
> a full review, and hopefully in a state which can be merged in the near
> future. More details of this series can be found in the cover letter of the
> RFC [3].
> 
> These changes have been tested with the Xe driver.
> 
> v2:
>  - Break run job, free job, and process message into their own work items
>  - This might break other drivers as run job and free job can now run in
>    parallel; we can fix this up if needed
> 
> v3:
>  - Include missing patch 'drm/sched: Add drm_sched_submit_* helpers'
>  - Fix issue with setting the timestamp too early
>  - Don't dequeue jobs for single entity after calling entity fini
>  - Flush pending jobs on entity fini
>  - Add documentation for entity teardown
>  - Add Matthew Brost to maintainers of DRM scheduler
> 
> v4:
>  - Drop message interface
>  - Drop 'Flush pending jobs on entity fini'
>  - Drop 'Add documentation for entity teardown'
>  - Address all feedback
> 
> Matt
> 
> Matthew Brost (10):
>   drm/sched: Add drm_sched_submit_* helpers
>   drm/sched: Convert drm scheduler to use a work queue rather than
>     kthread
>   drm/sched: Move schedule policy to scheduler
>   drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy
>   drm/sched: Split free_job into own work item
>   drm/sched: Add drm_sched_start_timeout_unlocked helper
>   drm/sched: Start submission before TDR in drm_sched_start
>   drm/sched: Submit job before starting TDR
>   drm/sched: Add helper to queue TDR immediately for current and future
>     jobs
>   drm/sched: Update maintainers of GPU scheduler

Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
