[00/35] Add HMM-based SVM memory manager to KFD

Message ID: 20210107030127.20393-1-Felix.Kuehling@amd.com

Message

Felix Kuehling Jan. 7, 2021, 3 a.m. UTC
This is the first version of our HMM based shared virtual memory manager
for KFD. There are still a number of known issues that we're working through
(see below). This will likely lead to some pretty significant changes in
MMU notifier handling and locking on the migration code paths. So don't
get hung up on those details yet.

But I think this is a good time to start getting feedback. We're pretty
confident about the ioctl API, which is both simple and extensible for the
future. (see patches 4,16) The user mode side of the API can be found here:
https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
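
For reference, the ioctl takes a virtual address range plus a
variable-length attribute list, which is what keeps it extensible: new
attribute types can be added later without changing the struct layout.
A rough sketch (illustrative only; the exact names and attribute types
are defined in patches 4 and 16):

#include <linux/types.h>

/* Illustrative sketch, not the literal uAPI from the patches. */
struct kfd_svm_attribute {
        __u32 type;     /* e.g. preferred location, access flags, ... */
        __u32 value;
};

struct kfd_svm_args {
        __u64 start_addr;       /* page-aligned start of the range */
        __u64 size;             /* range size in bytes */
        __u32 op;               /* set or get attributes */
        __u32 nattr;            /* number of entries in attrs[] */
        struct kfd_svm_attribute attrs[];
};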

I'd also like another pair of eyes on how we're interfacing with the GPU VM
code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
and some retry IRQ handling changes (32).


Known issues:
* won't work with IOMMU enabled; we need to dma_map all pages properly
  (see the sketch after this list)
* still working on some race conditions and random bugs
* performance is not great yet
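
For the IOMMU item, a minimal sketch of the missing step (the svm_
helper name is made up; dma_map_page() and friends are the real
dma-mapping API):

#include <linux/dma-mapping.h>

/* Map pages gathered by hmm_range_fault() for DMA by one GPU device,
 * instead of assuming a direct physical mapping.
 */
static int svm_dma_map_pages(struct device *dev, struct page **pages,
                             dma_addr_t *addrs, unsigned long npages)
{
        unsigned long i;

        for (i = 0; i < npages; i++) {
                addrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
                                        DMA_BIDIRECTIONAL);
                if (dma_mapping_error(dev, addrs[i]))
                        goto unwind;
        }
        return 0;

unwind:
        while (i--)
                dma_unmap_page(dev, addrs[i], PAGE_SIZE,
                               DMA_BIDIRECTIONAL);
        return -ENOMEM;
}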

Alex Sierra (12):
  drm/amdgpu: replace per_device_list by array
  drm/amdkfd: helper to convert gpu id and idx
  drm/amdkfd: add xnack enabled flag to kfd_process
  drm/amdkfd: add ioctl to configure and query xnack retries
  drm/amdkfd: invalidate tables on page retry fault
  drm/amdkfd: page table restore through svm API
  drm/amdkfd: SVM API call to restore page tables
  drm/amdkfd: add svm_bo reference for eviction fence
  drm/amdgpu: add param bit flag to create SVM BOs
  drm/amdkfd: add svm_bo eviction mechanism support
  drm/amdgpu: svm bo enable_signal call condition
  drm/amdgpu: add svm_bo eviction to enable_signal cb

Philip Yang (23):
  drm/amdkfd: select kernel DEVICE_PRIVATE option
  drm/amdkfd: add svm ioctl API
  drm/amdkfd: Add SVM API support capability bits
  drm/amdkfd: register svm range
  drm/amdkfd: add svm ioctl GET_ATTR op
  drm/amdgpu: add common HMM get pages function
  drm/amdkfd: validate svm range system memory
  drm/amdkfd: register overlap system memory range
  drm/amdkfd: deregister svm range
  drm/amdgpu: export vm update mapping interface
  drm/amdkfd: map svm range to GPUs
  drm/amdkfd: svm range eviction and restore
  drm/amdkfd: register HMM device private zone
  drm/amdkfd: validate vram svm range from TTM
  drm/amdkfd: support xgmi same hive mapping
  drm/amdkfd: copy memory through gart table
  drm/amdkfd: HMM migrate ram to vram
  drm/amdkfd: HMM migrate vram to ram
  drm/amdgpu: reserve fence slot to update page table
  drm/amdgpu: enable retry fault wptr overflow
  drm/amdkfd: refine migration policy with xnack on
  drm/amdkfd: add svm range validate timestamp
  drm/amdkfd: multiple gpu migrate vram to vram

 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |    3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h    |    4 +-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c  |   16 +-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |   83 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h        |    7 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |    5 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |   90 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |   47 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h        |   10 +
 drivers/gpu/drm/amd/amdgpu/vega10_ih.c        |   32 +-
 drivers/gpu/drm/amd/amdgpu/vega20_ih.c        |   32 +-
 drivers/gpu/drm/amd/amdkfd/Kconfig            |    1 +
 drivers/gpu/drm/amd/amdkfd/Makefile           |    4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      |  170 +-
 drivers/gpu/drm/amd/amdkfd/kfd_iommu.c        |    8 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  866 ++++++
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.h      |   59 +
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |   52 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |  200 +-
 .../amd/amdkfd/kfd_process_queue_manager.c    |    6 +-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c          | 2564 +++++++++++++++++
 drivers/gpu/drm/amd/amdkfd/kfd_svm.h          |  135 +
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c     |    1 +
 drivers/gpu/drm/amd/amdkfd/kfd_topology.h     |   10 +-
 include/uapi/linux/kfd_ioctl.h                |  169 +-
 26 files changed, 4296 insertions(+), 291 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_svm.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_svm.h

Comments

Daniel Vetter Jan. 7, 2021, 9:23 a.m. UTC | #1
On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> This is the first version of our HMM based shared virtual memory manager
> for KFD. There are still a number of known issues that we're working through
> (see below). This will likely lead to some pretty significant changes in
> MMU notifier handling and locking on the migration code paths. So don't
> get hung up on those details yet.
> 
> But I think this is a good time to start getting feedback. We're pretty
> confident about the ioctl API, which is both simple and extensible for the
> future. (see patches 4,16) The user mode side of the API can be found here:
> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
> 
> I'd also like another pair of eyes on how we're interfacing with the GPU VM
> code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
> and some retry IRQ handling changes (32).
> 
> 
> Known issues:
> * won't work with IOMMU enabled, we need to dma_map all pages properly
> * still working on some race conditions and random bugs
> * performance is not great yet

Still catching up, but I think there's another one for your list:

 * hmm gpu context preempt vs page fault handling. I've had a short
   discussion about this one with Christian before the holidays, and also
   some private chats with Jerome. It's nasty since no easy fix, much less
   a good idea what's the best approach here.

I'll try to look at this more in-depth when I'm catching up on mails.
-Daniel

> [patch lists and diffstat snipped]
Felix Kuehling Jan. 7, 2021, 4:25 p.m. UTC | #2
On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>> This is the first version of our HMM based shared virtual memory manager
>> for KFD. There are still a number of known issues that we're working through
>> (see below). This will likely lead to some pretty significant changes in
>> MMU notifier handling and locking on the migration code paths. So don't
>> get hung up on those details yet.
>>
>> But I think this is a good time to start getting feedback. We're pretty
>> confident about the ioctl API, which is both simple and extensible for the
>> future. (see patches 4,16) The user mode side of the API can be found here:
>> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
>>
>> I'd also like another pair of eyes on how we're interfacing with the GPU VM
>> code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
>> and some retry IRQ handling changes (32).
>>
>>
>> Known issues:
>> * won't work with IOMMU enabled, we need to dma_map all pages properly
>> * still working on some race conditions and random bugs
>> * performance is not great yet
> Still catching up, but I think there's another one for your list:
>
>  * hmm gpu context preempt vs page fault handling. I've had a short
>    discussion about this one with Christian before the holidays, and also
>    some private chats with Jerome. It's nasty since no easy fix, much less
>    a good idea what's the best approach here.

Do you have a pointer to that discussion or any more details?

Thanks,
  Felix


>
> I'll try to look at this more in-depth when I'm catching up on mails.
> -Daniel
>
>> [patch lists and diffstat snipped]
Daniel Vetter Jan. 8, 2021, 2:40 p.m. UTC | #3
On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
> > On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> >> [cover letter snipped]
> > Still catching up, but I think there's another one for your list:
> >
> >  * hmm gpu context preempt vs page fault handling. I've had a short
> >    discussion about this one with Christian before the holidays, and also
> >    some private chats with Jerome. It's nasty since no easy fix, much less
> >    a good idea what's the best approach here.
> 
> Do you have a pointer to that discussion or any more details?

Essentially if you're handling an hmm page fault from the gpu, you can
deadlock by calling dma_fence_wait on (possibly a chain of) other command
submissions or compute contexts. Which deadlocks if
you can't preempt while you have that page fault pending. Two solutions:

- your hw can (at least for compute ctx) preempt even when a page fault is
  pending

- lots of screaming in trying to come up with an alternate solution. They
  all suck.

Note that the dma_fence_wait is a hard requirement, because we need that for
mmu notifiers and shrinkers; disallowing that would disable dynamic memory
management. Which is the current "ttm is self-limited to 50% of system
memory" limitation Christian is trying to lift. So that's really not
a restriction we can lift, at least not in upstream where we need to also
support old style hardware which doesn't have page fault support and
really has no other option to handle memory management than
dma_fence_wait.
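
Spelled out, the cycle looks like this:

1. A compute job on engine E takes a gpu page fault; the hw can't
   preempt E while the fault is pending.
2. Servicing the fault needs memory, which can enter direct reclaim,
   a shrinker or an mmu notifier.
3. The shrinker/notifier has to dma_fence_wait() on some fence F
   before the memory can be taken away.
4. F only signals once the work behind it runs, and that work is
   (directly, or through a chain of fences) queued behind E.
5. E is still stuck on the fault from step 1, so F never signals.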

Thread was here:

https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=edA@mail.gmail.com/

There's a few ways to resolve this (without having preempt-capable
hardware), but they're all supremely nasty.
-Daniel

> 
> Thanks,
>   Felix
> 
> 
> >
> > I'll try to look at this more in-depth when I'm catching up on mails.
> > -Daniel
> >
> >> [patch lists and diffstat snipped]
Christian König Jan. 8, 2021, 2:45 p.m. UTC | #4
On 08.01.21 at 15:40, Daniel Vetter wrote:
> [quoted text snipped]
Felix Kuehling Jan. 8, 2021, 3:58 p.m. UTC | #5
On 2021-01-08 at 9:40 a.m., Daniel Vetter wrote:
> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>> On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>> [cover letter snipped]
>>> Still catching up, but I think there's another one for your list:
>>>
>>>  * hmm gpu context preempt vs page fault handling. I've had a short
>>>    discussion about this one with Christian before the holidays, and also
>>>    some private chats with Jerome. It's nasty since no easy fix, much less
>>>    a good idea what's the best approach here.
>> Do you have a pointer to that discussion or any more details?
> Essentially if you're handling an hmm page fault from the gpu, you can
> deadlock by calling dma_fence_wait on (possibly a chain of) other command
> submissions or compute contexts. Which deadlocks if
> you can't preempt while you have that page fault pending. Two solutions:
>
> - your hw can (at least for compute ctx) preempt even when a page fault is
>   pending

Our GFXv9 GPUs can do this. GFXv10 cannot.


>
> - lots of screaming in trying to come up with an alternate solution. They
>   all suck.

My idea for GFXv10 is to avoid preemption for memory management purposes
and rely 100% on page faults instead. That is, if the memory manager
needs to prevent GPU access to certain memory, just invalidate the GPU
page table entries pointing to that memory. No waiting for fences is
necessary, except for the SDMA job that invalidates the PTEs, which runs
on a special high-priority queue that should never deadlock. That should
prevent the CPU getting involved in deadlocks in kernel mode. But you
can still deadlock the GPU in user mode if all compute units get stuck
in page faults and can't switch to any useful work any more. So it's
possible that we won't be able to use GPU page faults on our GFXv10 GPUs.
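
A minimal sketch of that eviction path (svm_invalidate_ptes() is a
made-up placeholder for queuing the PTE-invalidate job on that
high-priority SDMA queue; the dma_fence calls are the real API):

/* Evict a range from a fault-capable context without waiting on any
 * user fences: zap the PTEs and let the next GPU access retry-fault
 * and migrate.
 */
static int svm_evict_faultable_range(struct amdgpu_device *adev,
                                     struct svm_range *prange)
{
        struct dma_fence *f;

        f = svm_invalidate_ptes(adev, prange);
        if (IS_ERR(f))
                return PTR_ERR(f);

        /* Safe to wait here: the high-priority SDMA queue never
         * depends on user-mode work, so this cannot deadlock.
         */
        dma_fence_wait(f, false);
        dma_fence_put(f);
        return 0;
}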

Regards,
  Felix

> [snip]
Daniel Vetter Jan. 8, 2021, 4:06 p.m. UTC | #6
On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>
> On 2021-01-08 at 9:40 a.m., Daniel Vetter wrote:
> > On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> >> On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
> >>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> >>>> [cover letter snipped]
> >>> Still catching up, but I think there's another one for your list:
> >>>
> >>>  * hmm gpu context preempt vs page fault handling. I've had a short
> >>>    discussion about this one with Christian before the holidays, and also
> >>>    some private chats with Jerome. It's nasty since no easy fix, much less
> >>>    a good idea what's the best approach here.
> >> Do you have a pointer to that discussion or any more details?
> > Essentially if you're handling an hmm page fault from the gpu, you can
> > deadlock by calling dma_fence_wait on (possibly a chain of) other command
> > submissions or compute contexts. Which deadlocks if
> > you can't preempt while you have that page fault pending. Two solutions:
> >
> > - your hw can (at least for compute ctx) preempt even when a page fault is
> >   pending
>
> Our GFXv9 GPUs can do this. GFXv10 cannot.

Uh, why did your hw guys drop this :-/

> > - lots of screaming in trying to come up with an alternate solution. They
> >   all suck.
>
> My idea for GFXv10 is to avoid preemption for memory management purposes
> and rely 100% on page faults instead. That is, if the memory manager
> needs to prevent GPU access to certain memory, just invalidate the GPU
> page table entries pointing to that memory. No waiting for fences is
> necessary, except for the SDMA job that invalidates the PTEs, which runs
> on a special high-priority queue that should never deadlock. That should
> prevent the CPU getting involved in deadlocks in kernel mode. But you
> can still deadlock the GPU in user mode if all compute units get stuck
> in page faults and can't switch to any useful work any more. So it's
> possible that we won't be able to use GPU page faults on our GFXv10 GPUs.

This only works if _everything_ in the system works like this, since
you're de facto breaking the cross-driver contract. As soon as there's
some legacy gl workload (userptr) or another driver involved, this
approach falls apart.

I do think it can be rescued with what I call gang scheduling of
engines: I.e. when a given engine is running a context (or a group of
engines, depending how your hw works) that can cause a page fault, you
must flush out all workloads running on the same engine which could
block a dma_fence (preempt them, or for non-compute stuff, force their
completion). And the other way round, i.e. before you can run a legacy
gl workload with a dma_fence on these engines you need to preempt all
ctxs that could cause page faults and take them at least out of the hw
scheduler queue.

Just reserving an sdma engine for copy jobs and ptes updates and that
stuff is necessary, but not sufficient.
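
As pseudo-C, the gang-scheduling rule above looks roughly like this
(every name here is made up, it's just the rule written down):

static int gang_submit(struct engine_group *grp, struct job *job)
{
        if (job_can_page_fault(job)) {
                /* flush or preempt everything on this engine group
                 * that a dma_fence could still wait on */
                int r = flush_fence_workloads(grp);
                if (r)
                        return r;
        } else {
                /* dma_fence-based job: fault-capable ctxs must come
                 * out of the hw scheduler queue first */
                preempt_faultable_ctxs(grp);
        }
        return run_job(grp, job);
}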

Another approach that Jerome suggested is to track the reverse
dependency graph of all dma_fence somehow and make sure that direct
reclaim never recurses on an engine you're serving a pagefault for.
Possible in theory, but in practice I think not feasible to implement
because it would be way too much work.

Either way it's imo really nasty to come up with a scheme here that
doesn't fail in some corner, or becomes really nasty with inconsistent
rules across different drivers and hw :-(

Cheers, Daniel

> [snip]
Felix Kuehling Jan. 8, 2021, 4:36 p.m. UTC | #7
On 2021-01-08 at 11:06 a.m., Daniel Vetter wrote:
> On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>> On 2021-01-08 at 9:40 a.m., Daniel Vetter wrote:
>>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>>>> On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
>>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>>>> [cover letter snipped]
>>>>> Still catching up, but I think there's another one for your list:
>>>>>
>>>>>  * hmm gpu context preempt vs page fault handling. I've had a short
>>>>>    discussion about this one with Christian before the holidays, and also
>>>>>    some private chats with Jerome. It's nasty since no easy fix, much less
>>>>>    a good idea what's the best approach here.
>>>> Do you have a pointer to that discussion or any more details?
>>> Essentially if you're handling an hmm page fault from the gpu, you can
>>> deadlock by calling dma_fence_wait on (possibly a chain of) other command
>>> submissions or compute contexts. Which deadlocks if
>>> you can't preempt while you have that page fault pending. Two solutions:
>>>
>>> - your hw can (at least for compute ctx) preempt even when a page fault is
>>>   pending
>> Our GFXv9 GPUs can do this. GFXv10 cannot.
> Uh, why did your hw guys drop this :-/
>
>>> - lots of screaming in trying to come up with an alternate solution. They
>>>   all suck.
>> My idea for GFXv10 is to avoid preemption for memory management purposes
>> and rely 100% on page faults instead. That is, if the memory manager
>> needs to prevent GPU access to certain memory, just invalidate the GPU
>> page table entries pointing to that memory. No waiting for fences is
>> necessary, except for the SDMA job that invalidates the PTEs, which runs
>> on a special high-priority queue that should never deadlock. That should
>> prevent the CPU getting involved in deadlocks in kernel mode. But you
>> can still deadlock the GPU in user mode if all compute units get stuck
>> in page faults and can't switch to any useful work any more. So it's
>> possible that we won't be able to use GPU page faults on our GFXv10 GPUs.
> This only works if _everything_ in the system works like this, since
> you're de facto breaking the cross-driver contract. As soon as there's
> some legacy gl workload (userptr) or another driver involved, this
> approach falls apart.

I think the scenario you have in mind involves a dma_fence that depends
on the resolution of a GPU page fault. With our user mode command
submission model for compute contexts, there are no DMA fences that get
signaled by compute jobs that could get stuck on page faults.

The legacy GL workload would not get GPU page faults. The only way it
could get stuck is if all CUs are stuck on page faults and the command
processor can't find any HW resources to execute it on. That's my user
mode deadlock scenario below. So yeah, you're right, kernel mode can't
avoid getting involved in that unless everything uses user mode command
submissions.

If (big if) we switched to user mode command submission for all compute
and graphics contexts, and no longer use DMA fences to signal their
completion, I think that would solve the problem as far as the kernel is
concerned.


>
> I do think it can be rescued with what I call gang scheduling of
> engines: I.e. when a given engine is running a context (or a group of
> engines, depending how your hw works) that can cause a page fault, you
> must flush out all workloads running on the same engine which could
> block a dma_fence (preempt them, or for non-compute stuff, force their
> completion). And the other way round, i.e. before you can run a legacy
> gl workload with a dma_fence on these engines you need to preempt all
> ctxs that could cause page faults and take them at least out of the hw
> scheduler queue.

Yuck! But yeah, that would work. A less invasive alternative would be to
reserve some compute units for graphics contexts so we can guarantee
forward progress for graphics contexts even when all CUs working on
compute stuff are stuck on page faults.


>
> Just reserving an sdma engine for copy jobs and ptes updates and that
> stuff is necessary, but not sufficient.
>
> Another approach that Jerome suggested is to track the reverse
> dependency graph of all dma_fence somehow and make sure that direct
> reclaim never recurses on an engine you're serving a pagefault for.
> Possible in theory, but in practice I think not feasible to implement
> because it would be way too much work.

I agree.


>
> Either way it's imo really nasty to come up with a scheme here that
> doesn't fail in some corner, or becomes really nasty with inconsistent
> rules across different drivers and hw :-(

Yeah. The cleanest approach is to avoid DMA fences altogether for
device/engines that can get stuck on page faults. A user mode command
submission model would do that.

Reserving some compute units for graphics contexts that signal fences
but never page fault should also work.

Regards,
  Felix


> [snip]
On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>
>
> Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:
> > On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
> >> Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:
> >>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> >>>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> >>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> >>>>>> [...]
> >>>>>> Known issues:
> >>>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
> >>>>>> * still working on some race conditions and random bugs
> >>>>>> * performance is not great yet
> >>>>> Still catching up, but I think there's another one for your list:
> >>>>>
> >>>>>  * hmm gpu context preempt vs page fault handling. I've had a short
> >>>>>    discussion about this one with Christian before the holidays, and also
> >>>>>    some private chats with Jerome. It's nasty since there's no easy
> >>>>>    fix, much less a good idea of what's the best approach here.
> >>>> Do you have a pointer to that discussion or any more details?
> >>> Essentially if you're handling an hmm page fault from the gpu, you can
> >>> deadlock by calling dma_fence_wait on (a chain of, possibly) other
> >>> command submissions or compute contexts. Which deadlocks if you can't
> >>> preempt while you have that page fault pending. Two solutions:
> >>>
> >>> - your hw can (at least for compute ctx) preempt even when a page fault is
> >>>   pending
> >> Our GFXv9 GPUs can do this. GFXv10 cannot.
> > Uh, why did your hw guys drop this :-/
> >
> >>> - lots of screaming in trying to come up with an alternate solution. They
> >>>   all suck.
> >> My idea for GFXv10 is to avoid preemption for memory management purposes
> >> and rely 100% on page faults instead. That is, if the memory manager
> >> needs to prevent GPU access to certain memory, just invalidate the GPU
> >> page table entries pointing to that memory. No waiting for fences is
> >> necessary, except for the SDMA job that invalidates the PTEs, which runs
> >> on a special high-priority queue that should never deadlock. That should
> >> prevent the CPU getting involved in deadlocks in kernel mode. But you
> >> can still deadlock the GPU in user mode if all compute units get stuck
> >> in page faults and can't switch to any useful work any more. So it's
> >> possible that we won't be able to use GPU page faults on our GFXv10 GPUs.
> > This only works if _everything_ in the system works like this, since
> > you're de facto breaking the cross-driver contract. As soon as there's
> > some legacy gl workload (userptr) or another driver involved, this
> > approach falls apart.
>
> I think the scenario you have in mind involves a dma_fence that depends
> on the resolution of a GPU page fault. With our user mode command
> submission model for compute contexts, there are no DMA fences that get
> signaled by compute jobs that could get stuck on page faults.
>
> The legacy GL workload would not get GPU page faults. The only way it
> could get stuck is if all CUs are stuck on page faults and the command
> processor can't find any HW resources to execute it on. That's my user
> mode deadlock scenario below. So yeah, you're right, kernel mode can't
> avoid getting involved in that unless everything uses user mode command
> submissions.
>
> If (big if) we switched to user mode command submission for all compute
> > and graphics contexts, and no longer used DMA fences to signal their
> completion, I think that would solve the problem as far as the kernel is
> concerned.

We can't throw dma_fence away because it's uapi built into various
compositor protocols. Otherwise we could pull a WDDM2 like Microsoft
did on Windows and do what you're describing. So completely getting
rid of dma_fences (even if just limited to newer gpus) is also a
decade-long effort at least, since that's roughly how long it'll take
to sunset and convert everything over.

The other problem is that we're now building more stuff on top of
dma_resv like the dynamic dma-buf p2p stuff, now integrated into rdma.
I think even internally in the kernel it would be a massive pain to
untangle our fencing sufficiently to make this all happen without
loops. And I'm not even sure whether we could prevent deadlocks by
splitting dma_fence up into the userspace sync parts and the kernel
internal sync parts since they leak into one another.

> > I do think it can be rescued with what I call gang scheduling of
> > engines: I.e. when a given engine is running a context (or a group of
> > engines, depending how your hw works) that can cause a page fault, you
> > must flush out all workloads running on the same engine which could
> > block a dma_fence (preempt them, or for non-compute stuff, force their
> > completion). And the other way round, i.e. before you can run a legacy
> > gl workload with a dma_fence on these engines you need to preempt all
> > ctxs that could cause page faults and take them at least out of the hw
> > scheduler queue.
>
> Yuck! But yeah, that would work. A less invasive alternative would be to
> reserve some compute units for graphics contexts so we can guarantee
> forward progress for graphics contexts even when all CUs working on
> compute stuff are stuck on page faults.

Won't this hurt compute workloads? I think we need something where at
least pure compute or pure gl/vk workloads run at full performance.
And without preempt we can't take anything back when we need it, so we
would have to always reserve some cores upfront just in case.

> > Just reserving an sdma engine for copy jobs and ptes updates and that
> > stuff is necessary, but not sufficient.
> >
> > Another approach that Jerome suggested is to track the reverse
> > dependency graph of all dma_fence somehow and make sure that direct
> > reclaim never recurses on an engine you're serving a pagefault for.
> > Possible in theory, but in practice I think not feasible to implement
> > because way too much work to implement.
>
> I agree.
>
>
> >
> > Either way it's imo really nasty to come up with a scheme here that
> > doesn't fail in some corner, or becomes really nasty with inconsistent
> > rules across different drivers and hw :-(
>
> Yeah. The cleanest approach is to avoid DMA fences altogether for
> devices/engines that can get stuck on page faults. A user mode command
> submission model would do that.
>
> Reserving some compute units for graphics contexts that signal fences
> but never page fault should also work.

The trouble is you don't just need engines, you need compute
resources/cores behind them too (assuming I'm understanding correctly
how this works on amd hw). Otherwise you end up with a gl context that
should complete to resolve the deadlock, but can't because it can't
run its shaders because all the shader cores are stuck in compute page
faults somewhere. Hence the gang scheduling would need to be at a
level where you can guarantee full isolation of hw resources, either
because you can preempt stuck compute kernels and let gl shaders run,
or because of hw core partition or something else. If you can't, you
need to gang schedule the entire gpu.
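
To sketch what I mean, the gate would look something like this (all
helper names are made up, nothing like this exists today):

/* Sketch only: hypothetical scheduler-side gate. */
struct engine_group;
struct hw_ctx;
struct hw_job;

int preempt_all_fence_jobs(struct engine_group *grp);
int evict_faulting_ctxs(struct engine_group *grp);
int submit_user_queue(struct engine_group *grp, struct hw_ctx *ctx);
int submit_fence_job(struct engine_group *grp, struct hw_job *job);

/* Before a context that can page fault gets the engine group, flush
 * out every job whose dma_fence somebody might wait on. */
int run_faulting_ctx(struct engine_group *grp, struct hw_ctx *ctx)
{
	int r = preempt_all_fence_jobs(grp);

	if (r)
		return r;
	return submit_user_queue(grp, ctx);
}

/* And the other way round: fence-based jobs only run once no
 * page-faulting context is left in the hw scheduler. */
int run_fence_job(struct engine_group *grp, struct hw_job *job)
{
	int r = evict_faulting_ctxs(grp);

	if (r)
		return r;
	return submit_fence_job(grp, job);
}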

I think in practice that's not too ugly since for pure compute
workloads you're not going to have a desktop running most likely. And
for developer machines we should be able to push the occasional gfx
update through the gpu still without causing too much stutter on the
desktop or costing too much perf on the compute side. And pure gl/vk
or pure compute workloads should keep running at full performance.
-Daniel



> [...]
Felix Kuehling Jan. 8, 2021, 5:56 p.m. UTC | #9
Am 2021-01-08 um 11:53 a.m. schrieb Daniel Vetter:
> On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>
>> Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:
>>> On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>> Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:
>>>>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>>>>>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
>>>>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>>>>>> [...]
>>>>>>>> Known issues:
>>>>>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
>>>>>>>> * still working on some race conditions and random bugs
>>>>>>>> * performance is not great yet
>>>>>>> Still catching up, but I think there's another one for your list:
>>>>>>>
>>>>>>>  * hmm gpu context preempt vs page fault handling. I've had a short
>>>>>>>    discussion about this one with Christian before the holidays, and also
>>>>>>>    some private chats with Jerome. It's nasty since there's no easy
>>>>>>>    fix, much less a good idea of what's the best approach here.
>>>>>> Do you have a pointer to that discussion or any more details?
>>>>> Essentially if you're handling an hmm page fault from the gpu, you can
>>>>> deadlock by calling dma_fence_wait on (a chain of, possibly) other
>>>>> command submissions or compute contexts. Which deadlocks if you can't
>>>>> preempt while you have that page fault pending. Two solutions:
>>>>>
>>>>> - your hw can (at least for compute ctx) preempt even when a page fault is
>>>>>   pending
>>>> Our GFXv9 GPUs can do this. GFXv10 cannot.
>>> Uh, why did your hw guys drop this :-/

Performance. It's the same reason why the XNACK mode selection API
exists (patch 16). When we enable recoverable page fault handling in the
compute units on GFXv9, it costs some performance even when no page
faults are happening. On GFXv10 that retry fault handling moved out of
the compute units, so they don't take the performance hit. But that
sacrificed the ability to preempt during page faults. We'll need to work
with our hardware teams to restore that capability in a future generation.
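
For reference, in the proposed API that mode selection boils down to a
per-process switch along these lines (sketch; the exact ioctl and args
names in this series may differ):

#include <sys/ioctl.h>
#include <linux/kfd_ioctl.h>

/* enable = 1: recoverable retry faults, needed for HMM-based SVM on
 *             GFXv9, at some cost in CU performance
 * enable = 0: no retry faults, full CU performance, no SVM faulting */
static int set_xnack_mode(int kfd_fd, int enable)
{
	struct kfd_ioctl_set_xnack_mode_args args = {
		.xnack_enabled = enable,
	};

	return ioctl(kfd_fd, AMDKFD_IOC_SET_XNACK_MODE, &args);
}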


>>>
>>>>> - lots of screaming in trying to come up with an alternate solution. They
>>>>>   all suck.
>>>> My idea for GFXv10 is to avoid preemption for memory management purposes
>>>> and rely 100% on page faults instead. That is, if the memory manager
>>>> needs to prevent GPU access to certain memory, just invalidate the GPU
>>>> page table entries pointing to that memory. No waiting for fences is
>>>> necessary, except for the SDMA job that invalidates the PTEs, which runs
>>>> on a special high-priority queue that should never deadlock. That should
>>>> prevent the CPU getting involved in deadlocks in kernel mode. But you
>>>> can still deadlock the GPU in user mode if all compute units get stuck
>>>> in page faults and can't switch to any useful work any more. So it's
>>>> possible that we won't be able to use GPU page faults on our GFXv10 GPUs.
>>> This only works if _everything_ in the system works like this, since
>>> you're de facto breaking the cross-driver contract. As soon as there's
>>> some legacy gl workload (userptr) or another driver involved, this
>>> approach falls apart.
>> I think the scenario you have in mind involves a dma_fence that depends
>> on the resolution of a GPU page fault. With our user mode command
>> submission model for compute contexts, there are no DMA fences that get
>> signaled by compute jobs that could get stuck on page faults.
>>
>> The legacy GL workload would not get GPU page faults. The only way it
>> could get stuck is if all CUs are stuck on page faults and the command
>> processor can't find any HW resources to execute it on. That's my user
>> mode deadlock scenario below. So yeah, you're right, kernel mode can't
>> avoid getting involved in that unless everything uses user mode command
>> submissions.
>>
>> If (big if) we switched to user mode command submission for all compute
>> and graphics contexts, and no longer used DMA fences to signal their
>> completion, I think that would solve the problem as far as the kernel is
>> concerned.
> We can't throw dma_fence away because it's uapi built into various
> compositor protocols. Otherwise we could pull a WDDM2 like Microsoft
> did on Windows and do what you're describing. So completely getting
> rid of dma_fences (even if just limited to newer gpus) is also a
> decade-long effort at least, since that's roughly how long it'll take
> to sunset and convert everything over.

OK.


>
> The other problem is that we're now building more stuff on top of
> dma_resv like the dynamic dma-buf p2p stuff, now integrated into rdma.
> I think even internally in the kernel it would be a massive pain to
> untangle our fencing sufficiently to make this all happen without
> loops. And I'm not even sure whether we could prevent deadlocks by
> splitting dma_fence up into the userspace sync parts and the kernel
> internal sync parts since they leak into one another.
>
>>> I do think it can be rescued with what I call gang scheduling of
>>> engines: I.e. when a given engine is running a context (or a group of
>>> engines, depending how your hw works) that can cause a page fault, you
>>> must flush out all workloads running on the same engine which could
>>> block a dma_fence (preempt them, or for non-compute stuff, force their
>>> completion). And the other way round, i.e. before you can run a legacy
>>> gl workload with a dma_fence on these engines you need to preempt all
>>> ctxs that could cause page faults and take them at least out of the hw
>>> scheduler queue.
>> Yuck! But yeah, that would work. A less invasive alternative would be to
>> reserve some compute units for graphics contexts so we can guarantee
>> forward progress for graphics contexts even when all CUs working on
>> compute stuff are stuck on page faults.
> Won't this hurt compute workloads? I think we need something where at
> least pure compute or pure gl/vk workloads run at full performance.
> And without preempt we can't take anything back when we need it, so we
> would have to always reserve some cores upfront just in case.

Yes, it would hurt proportionally to how many CUs get reserved. On big
GPUs with many CUs the impact could be quite small.

That said, I'm not sure it'll work on our hardware. Our CUs can execute
multiple wavefronts from different contexts and switch between them with
fine granularity. I'd need to check with our HW engineers whether this
CU-internal context switching is still possible during page faults on
GFXv10.


>
>>> Just reserving an sdma engine for copy jobs and ptes updates and that
>>> stuff is necessary, but not sufficient.
>>>
>>> Another approach that Jerome suggested is to track the reverse
>>> dependency graph of all dma_fence somehow and make sure that direct
>>> reclaim never recurses on an engine you're serving a pagefault for.
>>> Possible in theory, but in practice I think not feasible to implement
>>> because way too much work to implement.
>> I agree.
>>
>>
>>> Either way it's imo really nasty to come up with a scheme here that
>>> doesn't fail in some corner, or becomes really nasty with inconsistent
>>> rules across different drivers and hw :-(
>> Yeah. The cleanest approach is to avoid DMA fences altogether for
>> devices/engines that can get stuck on page faults. A user mode command
>> submission model would do that.
>>
>> Reserving some compute units for graphics contexts that signal fences
>> but never page fault should also work.
> The trouble is you don't just need engines, you need compute
> resources/cores behind them too (assuming I'm understanding correctly
> how this works on amd hw). Otherwise you end up with a gl context that
> should complete to resolve the deadlock, but can't because it can't
> run its shaders because all the shader cores are stuck in compute page
> faults somewhere.

That's why I suggested reserving some CUs that would never execute
compute workloads that can page fault.


>  Hence the gang scheduling would need to be at a
> level where you can guarantee full isolation of hw resources, either
> because you can preempt stuck compute kernels and let gl shaders run,
> or because of hw core partition or something else. If you can't, you
> need to gang schedule the entire gpu.

Yes.


>
> I think in practice that's not too ugly since for pure compute
> workloads you're not going to have a desktop running most likely.

We still need legacy contexts for video decoding and post processing.
But maybe we can find a fix for that too.


>  And
> for developer machines we should be able to push the occasional gfx
> update through the gpu still without causing too much stutter on the
> desktop or costing too much perf on the compute side. And pure gl/vk
> or pure compute workloads should keep running at full performance.

I think it would be acceptable for mostly-compute workloads. It would be
bad for desktop workloads with some compute, e.g. games with
OpenCL-based physics. We're increasingly relying on KFD for all GPU
computing (including OpenCL) in desktop applications. But those could
live without GPU page faults until we can build sane hardware.

Regards,
  Felix


> [...]
Daniel Vetter Jan. 11, 2021, 4:29 p.m. UTC | #10
On Fri, Jan 08, 2021 at 12:56:24PM -0500, Felix Kuehling wrote:
> 
> Am 2021-01-08 um 11:53 a.m. schrieb Daniel Vetter:
> > On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
> >>
> >> Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:
> >>> On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
> >>>> Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:
> >>>>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> >>>>>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> >>>>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> >>>>>>>> [...]
> >>>>>>>> Known issues:
> >>>>>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
> >>>>>>>> * still working on some race conditions and random bugs
> >>>>>>>> * performance is not great yet
> >>>>>>> Still catching up, but I think there's another one for your list:
> >>>>>>>
> >>>>>>>  * hmm gpu context preempt vs page fault handling. I've had a short
> >>>>>>>    discussion about this one with Christian before the holidays, and also
> >>>>>>>    some private chats with Jerome. It's nasty since there's no easy
> >>>>>>>    fix, much less a good idea of what's the best approach here.
> >>>>>> Do you have a pointer to that discussion or any more details?
> >>>>> Essentially if you're handling an hmm page fault from the gpu, you can
> >>>>> deadlock by calling dma_fence_wait on (a chain of, possibly) other
> >>>>> command submissions or compute contexts. Which deadlocks if you can't
> >>>>> preempt while you have that page fault pending. Two solutions:
> >>>>>
> >>>>> - your hw can (at least for compute ctx) preempt even when a page fault is
> >>>>>   pending
> >>>> Our GFXv9 GPUs can do this. GFXv10 cannot.
> >>> Uh, why did your hw guys drop this :-/
> 
> Performance. It's the same reason why the XNACK mode selection API
> exists (patch 16). When we enable recoverable page fault handling in the
> compute units on GFXv9, it costs some performance even when no page
> faults are happening. On GFXv10 that retry fault handling moved out of
> the compute units, so they don't take the performance hit. But that
> sacrificed the ability to preempt during page faults. We'll need to work
> with our hardware teams to restore that capability in a future generation.

Ah yes, you need to stall at more points in the compute cores to make sure
you can recover if the page fault gets interrupted.

Maybe my knowledge is outdated, but my understanding is that nvidia can
also preempt (but only for compute jobs, since oh dear the pain this would
be for all the fixed function stuff). Since gfx10 moved page fault
handling further away from compute cores, do you know whether this now
means you can do page faults for (some?) fixed function stuff too? Or
still only for compute?

Supporting page faults for 3d would be a real pain with the corner we're
stuck in right now, but better we know about this early than later :-/

> >>>
> >>>>> - lots of screaming in trying to come up with an alternate solution. They
> >>>>>   all suck.
> >>>> My idea for GFXv10 is to avoid preemption for memory management purposes
> >>>> and rely 100% on page faults instead. That is, if the memory manager
> >>>> needs to prevent GPU access to certain memory, just invalidate the GPU
> >>>> page table entries pointing to that memory. No waiting for fences is
> >>>> necessary, except for the SDMA job that invalidates the PTEs, which runs
> >>>> on a special high-priority queue that should never deadlock. That should
> >>>> prevent the CPU getting involved in deadlocks in kernel mode. But you
> >>>> can still deadlock the GPU in user mode if all compute units get stuck
> >>>> in page faults and can't switch to any useful work any more. So it's
> >>>> possible that we won't be able to use GPU page faults on our GFXv10 GPUs.
> >>> This only works if _everything_ in the system works like this, since
> >>> you're de facto breaking the cross-driver contract. As soon as there's
> >>> some legacy gl workload (userptr) or another driver involved, this
> >>> approach falls apart.
> >> I think the scenario you have in mind involves a dma_fence that depends
> >> on the resolution of a GPU page fault. With our user mode command
> >> submission model for compute contexts, there are no DMA fences that get
> >> signaled by compute jobs that could get stuck on page faults.
> >>
> >> The legacy GL workload would not get GPU page faults. The only way it
> >> could get stuck is if all CUs are stuck on page faults and the command
> >> processor can't find any HW resources to execute it on. That's my user
> >> mode deadlock scenario below. So yeah, you're right, kernel mode can't
> >> avoid getting involved in that unless everything uses user mode command
> >> submissions.
> >>
> >> If (big if) we switched to user mode command submission for all compute
> >> and graphics contexts, and no longer used DMA fences to signal their
> >> completion, I think that would solve the problem as far as the kernel is
> >> concerned.
> > We can't throw dma_fence away because it's uapi built into various
> > compositor protocols. Otherwise we could pull a WDDM2 like Microsoft
> > did on Windows and do what you're describing. So completely getting
> > rid of dma_fences (even if just limited to newer gpus) is also a
> > decade-long effort at least, since that's roughly how long it'll take
> > to sunset and convert everything over.
> 
> OK.
> 
> 
> >
> > The other problem is that we're now building more stuff on top of
> > dma_resv like the dynamic dma-buf p2p stuff, now integrated into rdma.
> > I think even internally in the kernel it would be a massive pain to
> > untangle our fencing sufficiently to make this all happen without
> > loops. And I'm not even sure whether we could prevent deadlocks by
> > splitting dma_fence up into the userspace sync parts and the kernel
> > internal sync parts since they leak into one another.
> >
> >>> I do think it can be rescued with what I call gang scheduling of
> >>> engines: I.e. when a given engine is running a context (or a group of
> >>> engines, depending how your hw works) that can cause a page fault, you
> >>> must flush out all workloads running on the same engine which could
> >>> block a dma_fence (preempt them, or for non-compute stuff, force their
> >>> completion). And the other way round, i.e. before you can run a legacy
> >>> gl workload with a dma_fence on these engines you need to preempt all
> >>> ctxs that could cause page faults and take them at least out of the hw
> >>> scheduler queue.
> >> Yuck! But yeah, that would work. A less invasive alternative would be to
> >> reserve some compute units for graphics contexts so we can guarantee
> >> forward progress for graphics contexts even when all CUs working on
> >> compute stuff are stuck on page faults.
> > Won't this hurt compute workloads? I think we need something where at
> > least pure compute or pure gl/vk workloads run at full performance.
> > And without preempt we can't take anything back when we need it, so we
> > would have to always reserve some cores upfront just in case.
> 
> Yes, it would hurt proportionally to how many CUs get reserved. On big
> GPUs with many CUs the impact could be quite small.

Also, we could do the reservation only for the time when there's actually
a legacy context with normal dma_fence in the scheduler queue. Assuming
that reserving/unreserving CUs isn't too expensive an operation. If it's
as expensive as a full stall, it's probably not worth the complexity here;
just go with a full stall and only run one or the other at a time.
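
ie roughly this (sketch, all helpers made up):

#include <linux/atomic.h>

struct gpu_sched {
	atomic_t fence_jobs;	/* fence-based jobs currently queued */
	/* ... */
};

void reserve_cus_for_fence_work(struct gpu_sched *s);	/* assumed cheap */
void unreserve_cus(struct gpu_sched *s);

/* Only hold the CU reservation while fence-based work is queued. */
static void fence_job_queued(struct gpu_sched *s)
{
	if (atomic_inc_return(&s->fence_jobs) == 1)
		reserve_cus_for_fence_work(s);
}

static void fence_job_done(struct gpu_sched *s)
{
	if (atomic_dec_and_test(&s->fence_jobs))
		unreserve_cus(s);
}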

Wrt desktops I'm also somewhat worried that we might end up killing
desktop workloads if there aren't enough CUs reserved for them and they
end up taking too long, angering either TDR or, worse, the user, because
the desktop is unusable when you start a compute job and get a big pile
of faults. Probably needs some testing to see how bad it is.

> That said, I'm not sure it'll work on our hardware. Our CUs can execute
> multiple wavefronts from different contexts and switch between them with
> fine granularity. I'd need to check with our HW engineers whether this
> CU-internal context switching is still possible during page faults on
> GFXv10.

You'd need to do the reservation for all contexts/engines which can cause
page faults, otherwise it'd leak.
> 
> 
> >
> >>> Just reserving an sdma engine for copy jobs and ptes updates and that
> >>> stuff is necessary, but not sufficient.
> >>>
> >>> Another approach that Jerome suggested is to track the reverse
> >>> dependency graph of all dma_fence somehow and make sure that direct
> >>> reclaim never recurses on an engine you're serving a pagefault for.
> >>> Possible in theory, but in practice I think not feasible to implement
> >>> because way too much work to implement.
> >> I agree.
> >>
> >>
> >>> Either way it's imo really nasty to come up with a scheme here that
> >>> doesn't fail in some corner, or becomes really nasty with inconsistent
> >>> rules across different drivers and hw :-(
> >> Yeah. The cleanest approach is to avoid DMA fences altogether for
> >> devices/engines that can get stuck on page faults. A user mode command
> >> submission model would do that.
> >>
> >> Reserving some compute units for graphics contexts that signal fences
> >> but never page fault should also work.
> > The trouble is you don't just need engines, you need compute
> > resources/cores behind them too (assuming I'm understanding correctly
> > how this works on amd hw). Otherwise you end up with a gl context that
> > should complete to resolve the deadlock, but can't because it can't
> > run its shaders because all the shader cores are stuck in compute page
> > faults somewhere.
> 
> That's why I suggested reserving some CUs that would never execute
> compute workloads that can page fault.
> 
> 
> >  Hence the gang scheduling would need to be at a
> > level where you can guarantee full isolation of hw resources, either
> > because you can preempt stuck compute kernels and let gl shaders run,
> > or because of hw core partition or something else. If you can't, you
> > need to gang schedule the entire gpu.
> 
> Yes.
> 
> 
> >
> > I think in practice that's not too ugly since for pure compute
> > workloads you're not going to have a desktop running most likely.
> 
> We still need legacy contexts for video decoding and post processing.
> But maybe we can find a fix for that too.

Hm I'd expect video workloads to not use page faults (even if they use
compute for post processing). Same way that compute in vk/gl would still
use all the legacy fencing (which excludes page fault support).

So pure "compute always has to use page fault mode and user sync" I don't
think is feasible. And then all the mixed workloads useage should be fine
too.

> >  And
> > for developer machines we should be able to push the occasional gfx
> > update through the gpu still without causing too much stutter on the
> > desktop or costing too much perf on the compute side. And pure gl/vk
> > or pure compute workloads should keep running at full performance.
> 
> I think it would be acceptable for mostly-compute workloads. It would be
> bad for desktop workloads with some compute, e.g. games with
> OpenCL-based physics. We're increasingly relying on KFD for all GPU
> computing (including OpenCL) in desktop applications. But those could
> live without GPU page faults until we can build sane hardware.

Uh ... I guess the challenge here is noticing when your opencl should be
run in old-style mode. Perhaps you could link them together through some
backchannel, so when a gl or vk context is set up you run opencl in the
legacy mode without page faults for full perf together with vk. Still
doesn't work if the app sets up ocl before vk/gl :-/
-Daniel

> [...]
Jerome Glisse Jan. 13, 2021, 4:47 p.m. UTC | #11
On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> This is the first version of our HMM based shared virtual memory manager
> for KFD. There are still a number of known issues that we're working through
> (see below). This will likely lead to some pretty significant changes in
> MMU notifier handling and locking on the migration code paths. So don't
> get hung up on those details yet.

[...]

> Known issues:
> * won't work with IOMMU enabled, we need to dma_map all pages properly
> * still working on some race conditions and random bugs
> * performance is not great yet

What would those changes look like? Looking at the issues quoted above,
I do not see how they interplay with the MMU notifiers. Can you
elaborate?

Cheers,
Jérôme
Jerome Glisse Jan. 13, 2021, 4:56 p.m. UTC | #12
On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> > Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> > > On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> > >> [...]
> > >> Known issues:
> > >> * won't work with IOMMU enabled, we need to dma_map all pages properly
> > >> * still working on some race conditions and random bugs
> > >> * performance is not great yet
> > > Still catching up, but I think there's another one for your list:
> > >
> > >  * hmm gpu context preempt vs page fault handling. I've had a short
> > >    discussion about this one with Christian before the holidays, and also
> > >    some private chats with Jerome. It's nasty since there's no easy
> > >    fix, much less a good idea of what's the best approach here.
> > 
> > Do you have a pointer to that discussion or any more details?
> 
> Essentially if you're handling an hmm page fault from the gpu, you can
> deadlock by calling dma_fence_wait on (a chain of, possibly) other
> command submissions or compute contexts. Which deadlocks if you can't
> preempt while you have that page fault pending. Two solutions:
> 
> - your hw can (at least for compute ctx) preempt even when a page fault is
>   pending
> 
> - lots of screaming in trying to come up with an alternate solution. They
>   all suck.
> 
> Note that the dma_fence_wait is a hard requirement, because we need that for
> mmu notifiers and shrinkers, disallowing that would disable dynamic memory
> management. Which is the current "ttm is self-limited to 50% of system
> memory" limitation Christian is trying to lift. So that's really not
> a restriction we can lift, at least not in upstream where we need to also
> support old style hardware which doesn't have page fault support and
> really has no other option to handle memory management than
> dma_fence_wait.
> 
> Thread was here:
> 
> https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=edA@mail.gmail.com/
> 
> There's a few ways to resolve this (without having preempt-capable
> hardware), but they're all supremely nasty.
> -Daniel
> 

I had a new idea; I wanted to think more about it but have not yet, so
anyway here it is: add a new callback to dma_fence which asks the
question, can it deadlock? Any time a GPU driver has a pending page
fault (ie something calling into the mm) it answers yes, otherwise
no. The GPU shrinker would ask the question before waiting on any
dma-fence and back off if it gets a yes. The shrinker can still try the
many dma-buf objects for which it does not get a yes on the associated
fence.

This does not solve the mmu notifier case; for that you would just
invalidate the gem userptr object (with a flag, but not releasing the
page refcount) and you would not wait for the GPU (ie no dma-fence
wait in that code path anymore). The userptr API never really made
the contract that it will always be in sync with the mm view of the
world, so if a different page gets remapped to the same virtual
address while the GPU is still working with the old pages it should
not be an issue (it would not be in our usage of userptr for
compositors and whatnot).

Maybe I overlooked something there.
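
Roughly like this (sketch; the may_deadlock op is the new, proposed
part and does not exist in dma_fence_ops today):

#include <linux/dma-fence.h>

/* Sketch: assumes a new optional op in struct dma_fence_ops:
 *   bool (*may_deadlock)(struct dma_fence *fence);
 * Drivers answer true while the fence's context has a page fault
 * pending, ie something is calling into the mm. */
static bool dma_fence_may_deadlock(struct dma_fence *fence)
{
	return fence->ops->may_deadlock &&
	       fence->ops->may_deadlock(fence);
}

/* GPU shrinker side: back off instead of risking the deadlock, and
 * move on to the next dma-buf object. */
static bool shrinker_try_wait(struct dma_fence *fence)
{
	if (dma_fence_may_deadlock(fence))
		return false;
	return dma_fence_wait(fence, false) == 0;
}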

Cheers,
Jérôme
Daniel Vetter Jan. 13, 2021, 8:31 p.m. UTC | #13
On Wed, Jan 13, 2021 at 5:56 PM Jerome Glisse <jglisse@redhat.com> wrote:
> On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> > On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> > > Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> > > > On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> > [...]
> > There's a few ways to resolve this (without having preempt-capable
> > hardware), but they're all supremely nasty.
> > -Daniel
> >
>
> I had a new idea. I wanted to think more about it but have not yet,
> so anyway, here it is: add a new callback to dma fence which asks the
> question, can it deadlock? Any time a GPU driver has a pending page
> fault (i.e. something calling into the mm) it answers yes, otherwise
> no. The GPU shrinker would ask the question before waiting on any
> dma-fence and back off if it gets a yes. The shrinker can still try many
> dma-buf objects for which it does not get a yes on the associated fence.

Having that answer on a given fence isn't enough, you still need to
forward that information through the entire dependency graph, across
drivers. That's the hard part, since that dependency graph is very
implicit in the code, and we'd need to first roll it out across all
drivers.

> This does not solve the mmu notifier case; for this you would just
> invalidate the gem userptr object (with a flag, but not releasing the
> page refcount), but you would not wait for the GPU (i.e. no dma fence
> wait in that code path anymore). The userptr API never really made
> the contract that it will always be in sync with the mm view of the
> world, so if different pages get remapped to the same virtual address
> while the GPU is still working with the old pages it should not be an
> issue (it would not be in our usage of userptr for compositors and
> whatnot).
>
> Maybe I am overlooking something there.

tbh I'm never really clear on how much exactly we need, and whether
maybe the new pin/unpin api should fix it all.
-Daniel
Felix Kuehling Jan. 14, 2021, 12:06 a.m. UTC | #14
Am 2021-01-13 um 11:47 a.m. schrieb Jerome Glisse:
> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>> This is the first version of our HMM based shared virtual memory manager
>> for KFD. There are still a number of known issues that we're working through
>> (see below). This will likely lead to some pretty significant changes in
>> MMU notifier handling and locking on the migration code paths. So don't
>> get hung up on those details yet.
> [...]
>
>> Known issues:
>> * won't work with IOMMU enabled, we need to dma_map all pages properly
>> * still working on some race conditions and random bugs
>> * performance is not great yet
> What would those changes look like? Seeing the issues below I do not
> see how they interplay with the mmu notifier. Can you elaborate?

We currently have some race conditions when multiple threads are causing
migrations concurrently (e.g. CPU page faults, GPU page faults, memory
evictions, and explicit prefetch by the application).

In the current patch series we set up one MMU range notifier for the
entire address space because we had trouble setting up MMU notifiers for
specific address ranges. There are situations where we want to free or
free/resize/reallocate MMU range notifiers, but we can't due to the
locking context we're in:

  * MMU release notifier when a virtual address range is unmapped
  * CPU page fault handler

In both these situations we may need to split virtual address ranges
because we only want to free or migrate a part of it. If we have
per-address range notifiers we also need to free or create notifiers,
which is not possible in those contexts. On the other hand, using a
single range notifier for everything causes unnecessary serialization.

We're reworking all of this to have per-address range notifiers that are
updated with a deferred mechanism in workers. I finally figured out how
to do that in a clean way, hopefully without races or deadlocks, which
should also address the other race conditions we had with concurrent
migration triggers. Philip is working on the implementation.
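
[Editor's note: an illustrative sketch of the per-range scheme
described above, built on the kernel's mmu_interval_notifier API; the
svm_* names are assumptions, not the actual patches:]

    #include <linux/mmu_notifier.h>
    #include <linux/workqueue.h>

    struct svm_range {
            struct mmu_interval_notifier notifier;
            struct list_head deferred_list; /* pending insert/remove */
            unsigned long start, last;
    };

    /* Runs in fault/unmap context: may only mark work, never create
     * or free notifiers here. */
    static bool svm_range_invalidate(struct mmu_interval_notifier *mni,
                                     const struct mmu_notifier_range *range,
                                     unsigned long cur_seq)
    {
            mmu_interval_set_seq(mni, cur_seq);
            /* unmap from GPUs, queue split/free to the worker ... */
            return true;
    }

    static const struct mmu_interval_notifier_ops svm_range_mn_ops = {
            .invalidate = svm_range_invalidate,
    };

    /* Worker: the only context where notifiers are inserted/removed,
     * so splitting a range in the fault path stays lock-safe, e.g.
     *     mmu_interval_notifier_remove(&old->notifier);
     *     mmu_interval_notifier_insert(&new->notifier, mm, new->start,
     *             new->last - new->start + 1, &svm_range_mn_ops);
     */
    static void svm_deferred_work(struct work_struct *work)
    {
            /* drain the deferred list here */
    }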

Regards,
  Felix

Jerome Glisse Jan. 14, 2021, 3:27 a.m. UTC | #15
On Wed, Jan 13, 2021 at 09:31:11PM +0100, Daniel Vetter wrote:
> On Wed, Jan 13, 2021 at 5:56 PM Jerome Glisse <jglisse@redhat.com> wrote:
> > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> > > [...]
> >
> > I had a new idea. I wanted to think more about it but have not yet,
> > so anyway, here it is: add a new callback to dma fence which asks the
> > question, can it deadlock? Any time a GPU driver has a pending page
> > fault (i.e. something calling into the mm) it answers yes, otherwise
> > no. The GPU shrinker would ask the question before waiting on any
> > dma-fence and back off if it gets a yes. The shrinker can still try many
> > dma-buf objects for which it does not get a yes on the associated fence.
> 
> Having that answer on a given fence isn't enough, you still need to
> forward that information through the entire dependency graph, across
> drivers. That's the hard part, since that dependency graph is very
> implicit in the code, and we'd need to first roll it out across all
> drivers.

Here I am saying: do not wait on a fence you are not sure about.
Only wait on fences for which you are 100% certain you cannot
deadlock. So if you can never be sure about a dma fence, then never
wait on dma-fences in the shrinker. However, most drivers should have
enough information in their shrinker to know if it is safe to wait on
fences internal to their device driver (and also to know if any of
those fences has an implicit outside dependency). So a first
implementation would be to say everything deadlocks, and then have
each driver build confidence in what it can ascertain.
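
[Editor's note: a sketch of that conservative policy; the my_* names
are hypothetical driver internals, not existing code:]

    #include <linux/dma-fence.h>

    extern const struct dma_fence_ops my_driver_fence_ops;
    bool my_context_is_fixed_function(u64 ctx);
    bool my_context_has_external_deps(u64 ctx);

    /* Default answer is "might deadlock"; the driver then whitelists
     * only the contexts it can fully reason about. */
    static bool driver_fence_known_safe(struct dma_fence *fence)
    {
            if (fence->ops != &my_driver_fence_ops)
                    return false;   /* foreign fence: never sure */
            return my_context_is_fixed_function(fence->context) &&
                   !my_context_has_external_deps(fence->context);
    }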

> 
> > This does not solve the mmu notifier case; for this you would just
> > invalidate the gem userptr object (with a flag, but not releasing the
> > page refcount), but you would not wait for the GPU (i.e. no dma fence
> > wait in that code path anymore). The userptr API never really made
> > the contract that it will always be in sync with the mm view of the
> > world, so if different pages get remapped to the same virtual address
> > while the GPU is still working with the old pages it should not be an
> > issue (it would not be in our usage of userptr for compositors and
> > whatnot).
> >
> > Maybe I am overlooking something there.
> 
> tbh I'm never really clear on how much exactly we need, and whether
> maybe the new pin/unpin api should fix it all.

pin/unpin is not a solution; it is there to fix something with GUP (where
we need to know if a page is GUPed or not). GUP should die long term,
so anything using GUP (pin/unpin falls into that) should die long term.
Pinning memory is bad, period (it just breaks too much mm and it is
unsolvable for things like mremap, splice, ...).

Cheers,
Jérôme
Felix Kuehling Jan. 14, 2021, 5:34 a.m. UTC | #16
Am 2021-01-11 um 11:29 a.m. schrieb Daniel Vetter:
> On Fri, Jan 08, 2021 at 12:56:24PM -0500, Felix Kuehling wrote:
>> Am 2021-01-08 um 11:53 a.m. schrieb Daniel Vetter:
>>> On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>> Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:
>>>>> On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>>>> Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:
>>>>>>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>>>>>>>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
>>>>>>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>>>>>>>> [...]
>>>>>>>>>>
>>>>>>>>>> Known issues:
>>>>>>>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
>>>>>>>>>> * still working on some race conditions and random bugs
>>>>>>>>>> * performance is not great yet
>>>>>>>>> Still catching up, but I think there's another one for your list:
>>>>>>>>>
>>>>>>>>>  * hmm gpu context preempt vs page fault handling. I've had a short
>>>>>>>>>    discussion about this one with Christian before the holidays, and also
>>>>>>>>>    some private chats with Jerome. It's nasty since no easy fix, much less
>>>>>>>>>    a good idea what's the best approach here.
>>>>>>>> Do you have a pointer to that discussion or any more details?
>>>>>>> Essentially if you're handling an hmm page fault from the gpu, you can
>>>>>>> deadlock by calling dma_fence_wait on a (chain of, possibly) other command
>>>>>>> submissions or compute contexts. Which deadlocks if
>>>>>>> you can't preempt while you have that page fault pending. Two solutions:
>>>>>>>
>>>>>>> - your hw can (at least for compute ctx) preempt even when a page fault is
>>>>>>>   pending
>>>>>> Our GFXv9 GPUs can do this. GFXv10 cannot.
>>>>> Uh, why did your hw guys drop this :-/
>> Performance. It's the same reason why the XNACK mode selection API
>> exists (patch 16). When we enable recoverable page fault handling in the
>> compute units on GFXv9, it costs some performance even when no page
>> faults are happening. On GFXv10 that retry fault handling moved out of
>> the compute units, so they don't take the performance hit. But that
>> sacrificed the ability to preempt during page faults. We'll need to work
>> with our hardware teams to restore that capability in a future generation.
> Ah yes, you need to stall in more points in the compute cores to make sure
> you can recover if the page fault gets interrupted.
>
> Maybe my knowledge is outdated, but my understanding is that nvidia can
> also preempt (but only for compute jobs, since oh dear the pain this would
> be for all the fixed function stuff). Since gfx10 moved page fault
> handling further away from compute cores, do you know whether this now
> means you can do page faults for (some?) fixed function stuff too? Or
> still only for compute?

I'm not sure.


>
> Supporting page fault for 3d would be a real pain with the corner we're
> stuck in right now, but better we know about this early than later :-/

I know Christian hates the idea. We know that page faults on GPUs can be
a huge performance drain because you're stalling potentially so many
threads and the CPU can become a bottleneck dealing with all the page
faults from many GPU threads. On the compute side, applications will be
optimized to avoid them as much as possible, e.g. by pre-faulting or
pre-fetching data before it's needed.
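
[Editor's note: from user mode such pre-fetching could look roughly
like this; svm_set_attr() and the attribute name merely stand in for
the SVM ioctl wrapper of this series and may not match the posted
patches:]

    #include <stddef.h>
    #include <stdint.h>

    #define SVM_ATTR_PREFETCH_LOC 1 /* invented for this sketch */

    /* Hypothetical wrapper around the SVM ioctl of this series. */
    int svm_set_attr(void *addr, size_t size, int attr, uint32_t value);

    /* Migrate [addr, addr + size) to gpu_id before the dispatch that
     * touches it, so the GPU never takes a retry fault on it. */
    static int prefetch_to_gpu(void *addr, size_t size, uint32_t gpu_id)
    {
            return svm_set_attr(addr, size, SVM_ATTR_PREFETCH_LOC, gpu_id);
    }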

But I think you need page faults to make overcommitted memory with user
mode command submission not suck.


>
>>
>>>>> I do think it can be rescued with what I call gang scheduling of
>>>>> engines: I.e. when a given engine is running a context (or a group of
>>>>> engines, depending how your hw works) that can cause a page fault, you
>>>>> must flush out all workloads running on the same engine which could
>>>>> block a dma_fence (preempt them, or for non-compute stuff, force their
>>>>> completion). And the other way round, i.e. before you can run a legacy
>>>>> gl workload with a dma_fence on these engines you need to preempt all
>>>>> ctxs that could cause page faults and take them at least out of the hw
>>>>> scheduler queue.
>>>> Yuck! But yeah, that would work. A less invasive alternative would be to
>>>> reserve some compute units for graphics contexts so we can guarantee
>>>> forward progress for graphics contexts even when all CUs working on
>>>> compute stuff are stuck on page faults.
>>> Won't this hurt compute workloads? I think we need something where at
>>> least pure compute or pure gl/vk workloads run at full performance.
>>> And without preempt we can't take anything back when we need it, so
>>> would have to always upfront reserve some cores just in case.
>> Yes, it would hurt proportionally to how many CUs get reserved. On big
>> GPUs with many CUs the impact could be quite small.
> Also, we could do the reservation only for the time when there's actually
> a legacy context with normal dma_fence in the scheduler queue. Assuming
> that reserving/unreserving of CUs isn't too expensive an operation. If it's
> as expensive as a full stall probably not worth the complexity here and
> just go with a full stall and only run one or the other at a time.
>
> Wrt desktops I'm also somewhat worried that we might end up killing
> desktop workloads if there's not enough CUs reserved for these and they
> end up taking too long and anger either tdr or worse the user because the
> desktop is unusable when you start a compute job and get a big pile of
> faults. Probably needs some testing to see how bad it is.
>
>> That said, I'm not sure it'll work on our hardware. Our CUs can execute
>> multiple wavefronts from different contexts and switch between them with
>> fine granularity. I'd need to check with our HW engineers whether this
>> CU-internal context switching is still possible during page faults on
>> GFXv10.
> You'd need to do the reservation for all contexts/engines which can cause
> page faults, otherwise it'd leak.

All engines that can page fault and cannot be preempted during faults.

Regards,
  Felix


>>
>>>>> Just reserving an sdma engine for copy jobs and pte updates and that
>>>>> stuff is necessary, but not sufficient.
>>>>>
>>>>> Another approach that Jerome suggested is to track the reverse
>>>>> dependency graph of all dma_fence somehow and make sure that direct
>>>>> reclaim never recurses on an engine you're serving a pagefault for.
>>>>> Possible in theory, but in practice I think not feasible, because it
>>>>> would be way too much work to implement.
>>>> I agree.
>>>>
>>>>
>>>>> Either way it's imo really nasty to come up with a scheme here that
>>>>> doesn't fail in some corner, or becomes really nasty with inconsistent
>>>>> rules across different drivers and hw :-(
>>>> Yeah. The cleanest approach is to avoid DMA fences altogether for
>>>> device/engines that can get stuck on page faults. A user mode command
>>>> submission model would do that.
>>>>
>>>> Reserving some compute units for graphics contexts that signal fences
>>>> but never page fault should also work.
>>> The trouble is you don't just need engines, you need compute
>>> resources/cores behind them too (assuming I'm understanding correctly
>>> how this works on amd hw). Otherwise you end up with a gl context that
>>> should complete to resolve the deadlock, but can't because it can't
>>> run its shader because all the shader cores are stuck in compute page
>>> faults somewhere.
>> That's why I suggested reserving some CUs that would never execute
>> compute workloads that can page fault.
>>
>>
>>>  Hence the gang scheduling would need to be at a
>>> level where you can guarantee full isolation of hw resources, either
>>> because you can preempt stuck compute kernels and let gl shaders run,
>>> or because of hw core partition or something else. If you can't, you
>>> need to gang schedule the entire gpu.
>> Yes.
>>
>>
>>> I think in practice that's not too ugly since for pure compute
>>> workloads you're not going to have a desktop running most likely.
>> We still need legacy contexts for video decoding and post processing.
>> But maybe we can find a fix for that too.
> Hm I'd expect video workloads to not use page faults (even if they use
> compute for post processing). Same way that compute in vk/gl would still
> use all the legacy fencing (which excludes page fault support).
>
> So pure "compute always has to use page fault mode and user sync" I don't
> think is feasible. And then all the mixed-workload usage should be fine
> too.
>
>>>  And
>>> for developer machines we should be able to push the occasional gfx
>>> update through the gpu still without causing too much stutter on the
>>> desktop or costing too much perf on the compute side. And pure gl/vk
>>> or pure compute workloads should keep running at full performance.
>> I think it would be acceptable for mostly-compute workloads. It would be
>> bad for desktop workloads with some compute, e.g. games with
>> OpenCL-based physics. We're increasingly relying on KFD for all GPU
>> computing (including OpenCL) in desktop applications. But those could
>> live without GPU page faults until we can build sane hardware.
> Uh ... I guess the challenge here is noticing when your opencl should be
> run in old style mode. I guess you could link them together through some
> backchannel, so when a gl or vk context is set up you run opencl in the
> legacy mode without pagefault for full perf together with vk. Still
> doesn't work if the app sets up ocl before vk/gl :-/
> -Daniel
>
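
[Editor's note: a sketch of the gang-scheduling idea discussed in this
message: faultable compute work and dma_fence-signalling work never
share the relevant hardware at the same time. All names below are
hypothetical:]

    enum gpu_mode { GPU_MODE_DMA_FENCE, GPU_MODE_PAGEFAULT };

    struct gpu { enum gpu_mode mode; /* ... */ };

    int drain_dma_fence_work(struct gpu *gpu);
    int preempt_pagefault_contexts(struct gpu *gpu);

    static int gpu_switch_mode(struct gpu *gpu, enum gpu_mode want)
    {
            int r;

            if (gpu->mode == want)
                    return 0;

            if (want == GPU_MODE_PAGEFAULT) {
                    /* Flush or force-complete every job signalling a
                     * dma_fence on engines/CUs shared with faulting
                     * work, so no fence ends up waiting behind a
                     * pending page fault. */
                    r = drain_dma_fence_work(gpu);
            } else {
                    /* Preempt faultable compute contexts and take
                     * them out of the hw scheduler queue. The CU
                     * reservation discussed above is the alternative:
                     * keep a few CUs that never run faultable work. */
                    r = preempt_pagefault_contexts(gpu);
            }

            if (!r)
                    gpu->mode = want;
            return r;
    }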
Daniel Vetter Jan. 14, 2021, 9:26 a.m. UTC | #17
On Thu, Jan 14, 2021 at 4:27 AM Jerome Glisse <jglisse@redhat.com> wrote:
>
> On Wed, Jan 13, 2021 at 09:31:11PM +0100, Daniel Vetter wrote:
> > On Wed, Jan 13, 2021 at 5:56 PM Jerome Glisse <jglisse@redhat.com> wrote:
> > > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> > > > [...]
> > >
> > > I had a new idea. I wanted to think more about it but have not yet,
> > > so anyway, here it is: add a new callback to dma fence which asks the
> > > question, can it deadlock? Any time a GPU driver has a pending page
> > > fault (i.e. something calling into the mm) it answers yes, otherwise
> > > no. The GPU shrinker would ask the question before waiting on any
> > > dma-fence and back off if it gets a yes. The shrinker can still try many
> > > dma-buf objects for which it does not get a yes on the associated fence.
> >
> > Having that answer on a given fence isn't enough, you still need to
> > forward that information through the entire dependency graph, across
> > drivers. That's the hard part, since that dependency graph is very
> > implicit in the code, and we'd need to first roll it out across all
> > drivers.
>
> Here I am saying: do not wait on a fence you are not sure about.
> Only wait on fences for which you are 100% certain you cannot
> deadlock. So if you can never be sure about a dma fence, then never
> wait on dma-fences in the shrinker. However, most drivers should have
> enough information in their shrinker to know if it is safe to wait on
> fences internal to their device driver (and also to know if any of
> those fences has an implicit outside dependency). So a first
> implementation would be to say everything deadlocks, and then have
> each driver build confidence in what it can ascertain.

I just don't think that actually works in practice:

- on a single gpu you can't wait for vk/gl due to shared CUs, so only
sdma and uvd are left (or whatever else pure fixed function)

- for multi-gpu you get the guessing game of what leaks across gpus
and what doesn't. With p2p dma-buf we're now leaking dma_fence across
gpus even when there's no implicit syncing by userspace (although for
amdgpu this is tricky since iirc it still lacks the flag to let
userspace decide this, so this is more for other drivers).

- you don't just need to guarantee that there's no dma_fence
dependency going back to you, you also need to make sure there's no
other dependency chain through locks or whatever that closes the loop.
And since your proposal here is against the dma_fence lockdep
annotations we have now, lockdep won't help you (and let's be honest,
review doesn't catch this stuff either, so it's up to hangs in
production to catch this stuff)

- you still need the full dependency graph within the driver, and only
i915 scheduler has that afaik. And I'm not sure implementing that was
a bright idea

- assuming it's a deadlock by default means all gl/vk memory is
pinned. That's not nice, plus in addition you need hacks like ttm's
"max 50% of system memory" to paper over the worst fallout, which
Christian is trying to lift. I really do think we need to be able to
move towards more dynamic memory management, not less.

So in the end you're essentially disabling shrinking/eviction of other
gpu tasks, and I don't think that works. I really think the only two
realistic options are
- guarantee forward progress of other dma_fence (hw preemption,
reserved CUs, or whatever else you have)
- guarantee there's not a single offending dma_fence active in the
system that could cause problems

Hand-waving that in theory we could track the dependencies and that in
theory we could do some sort of deadlock avoidance around that
just doesn't look like a pragmatic & practical solution to me here. It
feels about as realistic as just creating a completely new memory
management model that sidesteps the entire dma_fence issues we have
due to mixing up kernel memory management and userspace sync fences in
one thing.

Cheers, Daniel

Daniel Vetter Jan. 14, 2021, 10:39 a.m. UTC | #18
On Thu, Jan 14, 2021 at 10:26 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Thu, Jan 14, 2021 at 4:27 AM Jerome Glisse <jglisse@redhat.com> wrote:
> >
> > [...]
>
> I just don't think that actually works in practice:
>
> [...]

Forgot one issue:

- somehow you need to transport the knowledge that you're in the gpu
fault repair path of a specific engine down to shrinkers/mmu notifiers
and all that. And it needs to be fairly specific, otherwise it just
amounts again to "no more dma_fence_wait allowed".

-Daniel
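
[Editor's note: one way to "transport the knowledge" would be a task
flag in the style of memalloc_nofs_save(); PF_GPU_FAULT_REPAIR below
is invented for illustration and does not exist:]

    #include <linux/sched.h>

    #define PF_GPU_FAULT_REPAIR 0x80000000 /* invented, value arbitrary */

    static inline unsigned int gpu_fault_repair_save(void)
    {
            unsigned int old = current->flags & PF_GPU_FAULT_REPAIR;

            current->flags |= PF_GPU_FAULT_REPAIR;
            return old;
    }

    static inline void gpu_fault_repair_restore(unsigned int old)
    {
            current->flags =
                    (current->flags & ~PF_GPU_FAULT_REPAIR) | old;
    }

    /* Shrinkers/mmu notifiers could then skip dma_fence waits while
     * the flag is set, but as noted above, without knowing *which*
     * fences feed back into the faulting engine this degenerates to
     * "no dma_fence_wait at all". */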

Christian König Jan. 14, 2021, 10:49 a.m. UTC | #19
Am 13.01.21 um 17:56 schrieb Jerome Glisse:
> On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
>> [...]
>
> I had a new idea. I wanted to think more about it but have not yet,
> so anyway, here it is: add a new callback to dma fence which asks the
> question, can it deadlock? Any time a GPU driver has a pending page
> fault (i.e. something calling into the mm) it answers yes, otherwise
> no. The GPU shrinker would ask the question before waiting on any
> dma-fence and back off if it gets a yes. The shrinker can still try many
> dma-buf objects for which it does not get a yes on the associated fence.
>
> This does not solve the mmu notifier case; for this you would just
> invalidate the gem userptr object (with a flag, but not releasing the
> page refcount), but you would not wait for the GPU (i.e. no dma fence
> wait in that code path anymore). The userptr API never really made
> the contract that it will always be in sync with the mm view of the
> world, so if different pages get remapped to the same virtual address
> while the GPU is still working with the old pages it should not be an
> issue (it would not be in our usage of userptr for compositors and
> whatnot).

The current working idea in my mind goes in a similar direction.

But instead of a callback I'm adding a completely new class of HMM fences.

Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
the dma_fences, and HMM fences are ignored in container objects.

When you handle an implicit or explicit synchronization request from
userspace you need to block for HMM fences to complete before taking any
resource locks.
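
[Editor's note: a sketch of how such a fence class could be told
apart; the flag below is invented, not part of the dma_fence API:]

    #include <linux/dma-fence.h>

    #define DMA_FENCE_FLAG_HMM_BIT  (DMA_FENCE_FLAG_USER_BITS + 0)

    static bool dma_fence_is_hmm(struct dma_fence *f)
    {
            return test_bit(DMA_FENCE_FLAG_HMM_BIT, &f->flags);
    }

    /* MMU notifier / scheduler / TTM: wait on regular dma_fences
     * only; fences marked as HMM are skipped when walking container
     * objects (resv objects, fence arrays/chains). */

    /* Implicit/explicit sync from userspace: drain HMM fences before
     * taking any resource locks, so the wait cannot deadlock. */
    static long sync_ioctl_prepare(struct dma_fence *f)
    {
            if (dma_fence_is_hmm(f))
                    return dma_fence_wait(f, true);
            return 0;
    }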

Regards,
Christian.

Daniel Vetter Jan. 14, 2021, 11:52 a.m. UTC | #20
On Thu, Jan 14, 2021 at 11:49 AM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> Am 13.01.21 um 17:56 schrieb Jerome Glisse:
> > [...]
>
> The current working idea in my mind goes in a similar direction.
>
> But instead of a callback I'm adding a completely new class of HMM fences.
>
> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
> the dma_fences, and HMM fences are ignored in container objects.
>
> When you handle an implicit or explicit synchronization request from
> userspace you need to block for HMM fences to complete before taking any
> resource locks.

Isn't that what I call gang scheduling? I.e. you either run in HMM
mode, or in legacy fencing mode (whether implicit or explicit doesn't
really matter I think). By forcing that split we avoid the problem,
but it means occasionally full stalls on mixed workloads.

But that's not what Jerome wants (afaiui at least), I think his idea
is to track the reverse dependencies of all the fences floating
around, and then skip evicting an object if you have to wait for any
fence that is problematic for the current calling context. And I don't
think that's very feasible in practice.

So what kind of hmm fences do you have in mind here?
-Daniel


Christian König Jan. 14, 2021, 12:19 p.m. UTC | #21
Am 14.01.21 um 06:34 schrieb Felix Kuehling:
> Am 2021-01-11 um 11:29 a.m. schrieb Daniel Vetter:
>> On Fri, Jan 08, 2021 at 12:56:24PM -0500, Felix Kuehling wrote:
>>> Am 2021-01-08 um 11:53 a.m. schrieb Daniel Vetter:
>>>> On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>>> Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:
>>>>>> On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>>>>> Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:
>>>>>>>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>>>>>>>>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
>>>>>>>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>>>>>>>>> [SNIP]
>>>>>>>>>>> Known issues:
>>>>>>>>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
>>>>>>>>>>> * still working on some race conditions and random bugs
>>>>>>>>>>> * performance is not great yet
>>>>>>>>>> Still catching up, but I think there's another one for your list:
>>>>>>>>>>
>>>>>>>>>>   * hmm gpu context preempt vs page fault handling. I've had a short
>>>>>>>>>>     discussion about this one with Christian before the holidays, and also
>>>>>>>>>>     some private chats with Jerome. It's nasty since no easy fix, much less
>>>>>>>>>>     a good idea what's the best approach here.
>>>>>>>>> Do you have a pointer to that discussion or any more details?
>>>>>>>> Essentially if you're handling an hmm page fault from the gpu, you can
>>>>>>>> deadlock by calling dma_fence_wait on a (chain of, possibly) other command
>>>>>>>> submissions or compute contexts with dma_fence_wait. Which deadlocks if
>>>>>>>> you can't preempt while you have that page fault pending. Two solutions:
>>>>>>>>
>>>>>>>> - your hw can (at least for compute ctx) preempt even when a page fault is
>>>>>>>>    pending
>>>>>>> Our GFXv9 GPUs can do this. GFXv10 cannot.
>>>>>> Uh, why did your hw guys drop this :-/
>>> Performance. It's the same reason why the XNACK mode selection API
>>> exists (patch 16). When we enable recoverable page fault handling in the
>>> compute units on GFXv9, it costs some performance even when no page
>>> faults are happening. On GFXv10 that retry fault handling moved out of
>>> the compute units, so they don't take the performance hit. But that
>>> sacrificed the ability to preempt during page faults. We'll need to work
>>> with our hardware teams to restore that capability in a future generation.
>> Ah yes, you need to stall at more points in the compute cores to make sure
>> you can recover if the page fault gets interrupted.
>>
>> Maybe my knowledge is outdated, but my understanding is that nvidia can
>> also preempt (but only for compute jobs, since oh dear the pain this would
>> be for all the fixed function stuff). Since gfx10 moved page fault
>> handling further away from compute cores, do you know whether this now
>> means you can do page faults for (some?) fixed function stuff too? Or
>> still only for compute?
> I'm not sure.
>
>
>> Supporting page faults for 3d would be a real pain with the corner we're
>> stuck in right now, but better we know about this early than later :-/
> I know Christian hates the idea.

Well I don't hate the idea. I just don't think that this will ever work 
correctly and with good performance.

A big part of the additional fun is that we currently have a mix of HMM 
capable engines (3D, compute, DMA) and not HMM capable engines (display, 
multimedia etc.).

> We know that page faults on GPUs can be
> a huge performance drain because you're stalling potentially so many
> threads and the CPU can become a bottleneck dealing with all the page
> faults from many GPU threads. On the compute side, applications will be
> optimized to avoid them as much as possible, e.g. by pre-faulting or
> pre-fetching data before it's needed.
>
> But I think you need page faults to make overcommitted memory with user
> mode command submission not suck.

Yeah, completely agree.

The only short-term alternative I see is to have an IOCTL telling the 
kernel which memory is currently in use. And that is complete nonsense, 
because it kills the main advantage of user mode command submission in 
the first place.

Regards,
Christian.

>>>>>> I do think it can be rescued with what I call gang scheduling of
>>>>>> engines: I.e. when a given engine is running a context (or a group of
>>>>>> engines, depending how your hw works) that can cause a page fault, you
>>>>>> must flush out all workloads running on the same engine which could
>>>>>> block a dma_fence (preempt them, or for non-compute stuff, force their
>>>>>> completion). And the other way round, i.e. before you can run a legacy
>>>>>> gl workload with a dma_fence on these engines you need to preempt all
>>>>>> ctxs that could cause page faults and take them at least out of the hw
>>>>>> scheduler queue.
>>>>> Yuck! But yeah, that would work. A less invasive alternative would be to
>>>>> reserve some compute units for graphics contexts so we can guarantee
>>>>> forward progress for graphics contexts even when all CUs working on
>>>>> compute stuff are stuck on page faults.
>>>> Won't this hurt compute workloads? I think we need something where at
>>>> least pure compute or pure gl/vk workloads run at full performance.
>>>> And without preempt we can't take anything back when we need it, so
>>>> we'd have to always reserve some cores upfront just in case.
>>> Yes, it would hurt proportionally to how many CUs get reserved. On big
>>> GPUs with many CUs the impact could be quite small.
>> Also, we could do the reservation only for the time when there's actually
>> a legacy context with normal dma_fence in the scheduler queue. Assuming
>> that reserving/unreserving of CUs isn't too expensive an operation. If it's
>> as expensive as a full stall, it's probably not worth the complexity here;
>> just go with a full stall and only run one or the other at a time.
>>
>> Wrt desktops I'm also somewhat worried that we might end up killing
>> desktop workloads if there aren't enough CUs reserved for these and they
>> end up taking too long and anger either tdr or, worse, the user because the
>> desktop is unusable when you start a compute job and get a big pile of
>> faults. Probably needs some testing to see how bad it is.
>>
>>> That said, I'm not sure it'll work on our hardware. Our CUs can execute
>>> multiple wavefronts from different contexts and switch between them with
>>> fine granularity. I'd need to check with our HW engineers whether this
>>> CU-internal context switching is still possible during page faults on
>>> GFXv10.
>> You'd need to do the reservation for all contexts/engines which can cause
>> page faults, otherwise it'd leak.
> All engines that can page fault and cannot be preempted during faults.
>
> Regards,
>    Felix
>
Christian König Jan. 14, 2021, 1:37 p.m. UTC | #22
Am 14.01.21 um 12:52 schrieb Daniel Vetter:
> [SNIP]
>> The current working idea in my mind goes into a similar direction.
>>
>> But instead of a callback I'm adding a complete new class of HMM fences.
>>
>> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
>> the dma_fences, and HMM fences are ignored in container objects.
>>
>> When you handle an implicit or explicit synchronization request from
>> userspace you need to block for HMM fences to complete before taking any
>> resource locks.
> Isn't that what I call gang scheduling? I.e. you either run in HMM
> mode, or in legacy fencing mode (whether implicit or explicit doesn't
> really matter I think). By forcing that split we avoid the problem,
> but it means occasionally full stalls on mixed workloads.
>
> But that's not what Jerome wants (afaiui at least), I think his idea
> is to track the reverse dependencies of all the fences floating
> around, and then skip evicting an object if you have to wait for any
> fence that is problematic for the current calling context. And I don't
> think that's very feasible in practice.
>
> So what kind of hmm fences do you have in mind here?

It's a bit more relaxed than your gang schedule.

See, the requirements are as follows:

1. dma_fences never depend on hmm_fences.
2. hmm_fences can never preempt dma_fences.
3. dma_fences must be able to preempt hmm_fences or we always reserve 
enough hardware resources (CUs) to guarantee forward progress of dma_fences.

Critical sections are MMU notifiers, page faults, GPU schedulers and 
dma_reservation object locks.

4. It is valid to wait for dma_fences in critical sections.
5. It is not valid to wait for hmm_fences in critical sections.

Fence creation either happens during command submission or by adding 
something like a barrier or signal command to your userspace queue.

6. If we have an hmm_fence as implicit or explicit dependency for 
creating a dma_fence we must wait for that before taking any locks or 
reserving resources.
7. If we have a dma_fence as implicit or explicit dependency for 
creating an hmm_fence we can wait later on. So busy waiting or special 
WAIT hardware commands are valid.

This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the 
same time on the hardware.

In other words we can have a high priority gfx queue running jobs based 
on dma_fences and a low priority compute queue running jobs based on 
hmm_fences.

Only when we switch from hmm_fence to dma_fence we need to block the 
submission until all the necessary resources (both memory as well as 
CUs) are available.

This is somewhat an extension to your gang submit idea.
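
[A minimal C sketch of rule 6 above. dma_fence_is_hmm(), struct
submission and submit_locked() are hypothetical names, used here only
for illustration.]

    /* Rule 6: every hmm_fence dependency must have signaled before the
     * new dma_fence is created, so a dma_fence never transitively
     * depends on an hmm_fence. */
    static int submit_with_dma_fence(struct submission *job)
    {
            int i;

            for (i = 0; i < job->num_deps; i++) {
                    struct dma_fence *dep = job->deps[i];
                    long r;

                    if (!dma_fence_is_hmm(dep))
                            continue;       /* rule 7: may wait later */

                    /* Wait *before* taking reservation locks or
                     * reserving resources; nothing is held yet, so an
                     * unbounded wait is harmless here. */
                    r = dma_fence_wait(dep, true);
                    if (r)
                            return r;
            }

            /* Only now take locks and publish the new dma_fence. */
            return submit_locked(job);
    }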

Regards,
Christian.

> -Daniel
>
Daniel Vetter Jan. 14, 2021, 1:57 p.m. UTC | #23
On Thu, Jan 14, 2021 at 2:37 PM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
> > [SNIP]
> > Isn't that what I call gang scheduling? I.e. you either run in HMM
> > mode, or in legacy fencing mode (whether implicit or explicit doesn't
> > really matter I think). By forcing that split we avoid the problem,
> > but it means occasionally full stalls on mixed workloads.
> >
> > But that's not what Jerome wants (afaiui at least), I think his idea
> > is to track the reverse dependencies of all the fences floating
> > around, and then skip evicting an object if you have to wait for any
> > fence that is problematic for the current calling context. And I don't
> > think that's very feasible in practice.
> >
> > So what kind of hmm fences do you have in mind here?
>
> It's a bit more relaxed than your gang schedule.
>
> See, the requirements are as follows:
>
> 1. dma_fences never depend on hmm_fences.
> 2. hmm_fences can never preempt dma_fences.
> 3. dma_fences must be able to preempt hmm_fences or we always reserve
> enough hardware resources (CUs) to guarantee forward progress of dma_fences.
>
> Critical sections are MMU notifiers, page faults, GPU schedulers and
> dma_reservation object locks.
>
> 4. It is valid to wait for dma_fences in critical sections.
> 5. It is not valid to wait for hmm_fences in critical sections.
>
> Fence creation either happens during command submission or by adding
> something like a barrier or signal command to your userspace queue.
>
> 6. If we have an hmm_fence as implicit or explicit dependency for
> creating a dma_fence we must wait for that before taking any locks or
> reserving resources.
> 7. If we have a dma_fence as implicit or explicit dependency for
> creating an hmm_fence we can wait later on. So busy waiting or special
> WAIT hardware commands are valid.
>
> This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the
> same time on the hardware.
>
> In other words we can have a high priority gfx queue running jobs based
> on dma_fences and a low priority compute queue running jobs based on
> hmm_fences.
>
> Only when we switch from hmm_fence to dma_fence we need to block the
> submission until all the necessary resources (both memory as well as
> CUs) are available.
>
> This is somewhat an extension to your gang submit idea.

Either I'm missing something, or this is just exactly what we
documented already with userspace fences in general, and how you can't
have a dma_fence depend upon a userspace fence (or hmm_fence).

My gang scheduling idea is really just an alternative for what you
have listed as item 3 above. Instead of requiring preempt or requiring
guaranteed forward progress of some other sort, we flush out any
pending dma_fence request. But _only_ those which would get stalled by
the job we're running, so high-priority sdma requests we need in the
kernel to shuffle buffers around are still all ok. This would be
needed if your hw can't preempt, and you also have shared engines
between compute and gfx, so reserving CUs won't solve the problem
either.

What I don't mean with my gang scheduling is a completely exclusive
mode between hmm_fence and dma_fence, since that would prevent us from
using copy engines and dma_fence in the kernel to shuffle memory
around for hmm jobs. And that would suck, even on compute-only
workloads. Maybe I should rename "gang scheduling" to "engine flush"
or something like that.
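
[A rough C sketch of this "engine flush" idea. All names are
illustrative -- job_uses_dma_fence(), engine_preempt() and friends are
not existing driver API.]

    /* Before a page-faulting (hmm) job runs, flush only those pending
     * dma_fence jobs that it could stall on this engine; e.g. kernel
     * sdma copy jobs on other engines keep running untouched. */
    static int run_hmm_job(struct engine *e, struct job *hmm_job)
    {
            struct job *j;

            list_for_each_entry(j, &e->pending, list) {
                    if (!job_uses_dma_fence(j))
                            continue;
                    if (!jobs_share_resources(j, hmm_job))
                            continue;

                    /* Preempt if the hw can, else force completion. */
                    if (!engine_preempt(e, j))
                            dma_fence_wait(j->done_fence, false);
            }

            return engine_queue(e, hmm_job);
    }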

I think the basics of userspace or hmm_fence or whatever we'll call it
we've documented already here:

https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences

I think the only thing missing is clarifying a bit what you have under
item 3, i.e. how do we make sure there's no accidental hidden
dependency between hmm_fence and dma_fence. Maybe a subsection about
gpu page fault handling?

Or are we still talking past each other a bit here?
-Daniel


> Regards,
> Christian.
>
> > -Daniel
> >
>
Christian König Jan. 14, 2021, 2:13 p.m. UTC | #24
Am 14.01.21 um 14:57 schrieb Daniel Vetter:
> On Thu, Jan 14, 2021 at 2:37 PM Christian König
> <christian.koenig@amd.com> wrote:
>> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
>>> [SNIP]
>> It's a bit more relaxed than your gang schedule.
>>
>> See, the requirements are as follows:
>>
>> 1. dma_fences never depend on hmm_fences.
>> 2. hmm_fences can never preempt dma_fences.
>> 3. dma_fences must be able to preempt hmm_fences or we always reserve
>> enough hardware resources (CUs) to guarantee forward progress of dma_fences.
>>
>> Critical sections are MMU notifiers, page faults, GPU schedulers and
>> dma_reservation object locks.
>>
>> 4. It is valid to wait for dma_fences in critical sections.
>> 5. It is not valid to wait for hmm_fences in critical sections.
>>
>> Fence creation either happens during command submission or by adding
>> something like a barrier or signal command to your userspace queue.
>>
>> 6. If we have an hmm_fence as implicit or explicit dependency for
>> creating a dma_fence we must wait for that before taking any locks or
>> reserving resources.
>> 7. If we have a dma_fence as implicit or explicit dependency for
>> creating an hmm_fence we can wait later on. So busy waiting or special
>> WAIT hardware commands are valid.
>>
>> This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the
>> same time on the hardware.
>>
>> In other words we can have a high priority gfx queue running jobs based
>> on dma_fences and a low priority compute queue running jobs based on
>> hmm_fences.
>>
>> Only when we switch from hmm_fence to dma_fence we need to block the
>> submission until all the necessary resources (both memory as well as
>> CUs) are available.
>>
>> This is somewhat an extension to your gang submit idea.
> Either I'm missing something, or this is just exactly what we
> documented already with userspace fences in general, and how you can't
> have a dma_fence depend upon a userspace fence (or hmm_fence).
>
> My gang scheduling idea is really just an alternative for what you
> have listed as item 3 above. Instead of requiring preempt or requiring
> guaranteed forward progress of some other sort, we flush out any
> pending dma_fence request. But _only_ those which would get stalled by
> the job we're running, so high-priority sdma requests we need in the
> kernel to shuffle buffers around are still all ok. This would be
> needed if your hw can't preempt, and you also have shared engines
> between compute and gfx, so reserving CUs won't solve the problem
> either.
>
> What I don't mean with my gang scheduling is a completely exclusive
> mode between hmm_fence and dma_fence, since that would prevent us from
> using copy engines and dma_fence in the kernel to shuffle memory
> around for hmm jobs. And that would suck, even on compute-only
> workloads. Maybe I should rename "gang scheduling" to "engine flush"
> or something like that.

Yeah, "engine flush" makes it much more clearer.

What I wanted to emphasis is that we have to mix dma_fences and 
hmm_fences running at the same time on the same hardware fighting over 
the same resources.

E.g. even on the newest hardware multimedia engines can't handle page 
faults, so video decoding/encoding will still produce dma_fences.

> I think the basics of userspace or hmm_fence or whatever we'll call it
> we've documented already here:
>
> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences

This talks about the restrictions we have for dma_fences and why 
infinite fences (even as hmm_fence) will never work.

But it doesn't talk about how to handle implicit or explicit 
dependencies with something like hmm_fences.

In other words my proposal above allows hmm_fences to show up in 
dma_reservation objects and to be used together with all the explicit 
synchronization we still have, with only a medium amount of work :)

> I think the only thing missing is clarifying a bit what you have under
> item 3, i.e. how do we make sure there's no accidental hidden
> dependency between hmm_fence and dma_fence. Maybe a subsection about
> gpu page fault handling?

The real improvement is item 6. The problem with it is that it requires 
auditing all occasions when we create dma_fences so that we don't 
accidentally depend on an HMM fence.

Regards,
Christian.

>
> Or are we still talking past each other a bit here?
> -Daniel
>
>
>> Regards,
>> Christian.
>>
>>> -Daniel
>>>
>
Daniel Vetter Jan. 14, 2021, 2:23 p.m. UTC | #25
On Thu, Jan 14, 2021 at 3:13 PM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> Am 14.01.21 um 14:57 schrieb Daniel Vetter:
> > On Thu, Jan 14, 2021 at 2:37 PM Christian König
> > <christian.koenig@amd.com> wrote:
> >> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
> >>> [SNIP]
> > Either I'm missing something, or this is just exactly what we
> > documented already with userspace fences in general, and how you can't
> > have a dma_fence depend upon a userspace fence (or hmm_fence).
> >
> > My gang scheduling idea is really just an alternative for what you
> > have listed as item 3 above. Instead of requiring preempt or requiring
> > guaranteed forward progress of some other sort, we flush out any
> > pending dma_fence request. But _only_ those which would get stalled by
> > the job we're running, so high-priority sdma requests we need in the
> > kernel to shuffle buffers around are still all ok. This would be
> > needed if your hw can't preempt, and you also have shared engines
> > between compute and gfx, so reserving CUs won't solve the problem
> > either.
> >
> > What I don't mean with my gang scheduling is a completely exclusive
> > mode between hmm_fence and dma_fence, since that would prevent us from
> > using copy engines and dma_fence in the kernel to shuffle memory
> > around for hmm jobs. And that would suck, even on compute-only
> > workloads. Maybe I should rename "gang scheduling" to "engine flush"
> > or something like that.
>
> Yeah, "engine flush" makes it much more clearer.
>
> What I wanted to emphasis is that we have to mix dma_fences and
> hmm_fences running at the same time on the same hardware fighting over
> the same resources.
>
> E.g. even on the newest hardware multimedia engines can't handle page
> faults, so video decoding/encoding will still produce dma_fences.

Well we also have to mix them so the kernel can shovel data around
using copy engines. Plus we have to mix it at the overall subsystem
level, because I'm not sure SoC-class gpus will ever get here, and
they definitely aren't there yet.

> > I think the basics of userspace or hmm_fence or whatever we'll call it
> > we've documented already here:
> >
> > https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences
>
> This talks about the restrictions we have for dma_fences and why
> infinite fences (even as hmm_fence) will never work.
>
> But it doesn't talk about how to handle implicit or explicit
> dependencies with something like hmm_fences.
>
> In other words my proposal above allows hmm_fences to show up in
> dma_reservation objects and to be used together with all the explicit
> synchronization we still have, with only a medium amount of work :)

Oh. I don't think we should put any hmm_fence or other infinite fence
into a dma_resv object. At least not into the current dma_resv object,
because then we have that infinite fences problem everywhere, and very
hard to audit.

What we could do is add new hmm_fence only slots for implicit sync,
but I think consensus is that implicit sync is bad, never do it again.
Last time around (for timeline syncobj) we've also pushed the waiting
on cross-over to userspace, and I think that's the right option, so we
need userspace to understand the hmm fence anyway. At that point we
might as well bite the bullet and do another round of wayland/dri
protocols.
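
[The "separate slots" idea from the start of this paragraph, as a
hypothetical C sketch: dma_resv has no such user_fences field, this is
purely illustrative of the separation being discussed.]

    /* User/hmm fences live next to, never inside, the dma_fence slots,
     * so kernel-internal waits cannot pick one up by accident. */
    struct dma_resv_ext {
            struct dma_resv base;           /* dma_fence slots, as today */
            struct list_head user_fences;   /* hmm/userspace fences only */
    };

    /* Kernel paths (eviction, shrinker, mmu notifier) would iterate only
     * base; user_fences would be consumed by userspace-visible sync
     * paths that are allowed to wait indefinitely. */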

So from that pov I think the kernel should at most deal with an
hmm_fence for cross-process communication and maybe some standard wait
primitives (for userspace to use, not for the kernel).

The only use case this would forbid is using page faults for legacy
implicit/explicit dma_fence synced workloads, and I think that's
perfectly ok to not allow. Especially since the motivation here for
all this is compute, and compute doesn't pass around dma_fences
anyway.

> > I think the only thing missing is clarifying a bit what you have under
> > item 3, i.e. how do we make sure there's no accidental hidden
> > dependency between hmm_fence and dma_fence. Maybe a subsection about
> > gpu page fault handling?
>
> The real improvement is item 6. The problem with it is that it requires
> auditing all occasions when we create dma_fences so that we don't
> accidentally depend on an HMM fence.

We have that rule already, it's the "dma_fence must not depend upon an
infinite fence anywhere" rule we documented last summer. So that
doesn't feel new.
-Daniel

>
> Regards,
> Christian.
>
> >
> > Or are we still talking past each other a bit here?
> > -Daniel
> >
> >
> >> Regards,
> >> Christian.
> >>
> >>> -Daniel
> >>>
> >
>
Christian König Jan. 14, 2021, 3:08 p.m. UTC | #26
Am 14.01.21 um 15:23 schrieb Daniel Vetter:
> On Thu, Jan 14, 2021 at 3:13 PM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
>> Am 14.01.21 um 14:57 schrieb Daniel Vetter:
>>> On Thu, Jan 14, 2021 at 2:37 PM Christian König
>>> <christian.koenig@amd.com> wrote:
>>>> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
>>>>> [SNIP]
>>> Either I'm missing something, or this is just exactly what we
>>> documented already with userspace fences in general, and how you can't
>>> have a dma_fence depend upon a userspace fence (or hmm_fence).
>>>
>>> My gang scheduling idea is really just an alternative for what you
>>> have listed as item 3 above. Instead of requiring preempt or requiring
>>> guaranteed forward progress of some other sort, we flush out any
>>> pending dma_fence request. But _only_ those which would get stalled by
>>> the job we're running, so high-priority sdma requests we need in the
>>> kernel to shuffle buffers around are still all ok. This would be
>>> needed if your hw can't preempt, and you also have shared engines
>>> between compute and gfx, so reserving CUs won't solve the problem
>>> either.
>>>
>>> What I don't mean with my gang scheduling is a completely exclusive
>>> mode between hmm_fence and dma_fence, since that would prevent us from
>>> using copy engines and dma_fence in the kernel to shuffle memory
>>> around for hmm jobs. And that would suck, even on compute-only
>>> workloads. Maybe I should rename "gang scheduling" to "engine flush"
>>> or something like that.
>> Yeah, "engine flush" makes it much more clearer.
>>
>> What I wanted to emphasis is that we have to mix dma_fences and
>> hmm_fences running at the same time on the same hardware fighting over
>> the same resources.
>>
>> E.g. even on the newest hardware multimedia engines can't handle page
>> faults, so video decoding/encoding will still produce dma_fences.
> Well we also have to mix them so the kernel can shovel data around
> using copy engines. Plus we have to mix it at the overall subsystem
> level, because I'm not sure SoC-class gpus will ever get here, and
> they definitely aren't there yet.
>
>>> I think the basics of userspace or hmm_fence or whatever we'll call it
>>> we've documented already here:
>>>
>>> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences
>> This talks about the restrictions we have for dma_fences and why
>> infinite fences (even as hmm_fence) will never work.
>>
>> But it doesn't talk about how to handle implicit or explicit
>> dependencies with something like hmm_fences.
>>
>> In other words my proposal above allows hmm_fences to show up in
>> dma_reservation objects and to be used together with all the explicit
>> synchronization we still have, with only a medium amount of work :)
> Oh. I don't think we should put any hmm_fence or other infinite fence
> into a dma_resv object. At least not into the current dma_resv object,
> because then we have that infinite fences problem everywhere, and very
> hard to audit.

Yes, exactly. That's why these rules describe how to mix them, or rather not mix them.

> What we could do is add new hmm_fence only slots for implicit sync,

Yeah, we would have them separated from the dma_fence objects.

> but I think consensus is that implicit sync is bad, never do it again.
> Last time around (for timeline syncobj) we've also pushed the waiting
> on cross-over to userspace, and I think that's the right option, so we
> need userspace to understand the hmm fence anyway. At that point we
> might as well bite the bullet and do another round of wayland/dri
> protocols.

As you said I don't see this happening in the next 5 years either.

So I think we have to somehow solve this in the kernel or we will go in 
circles all the time.

> So from that pov I think the kernel should at most deal with an
> hmm_fence for cross-process communication and maybe some standard wait
> primitives (for userspace to use, not for the kernel).
>
> The only use case this would forbid is using page faults for legacy
> implicit/explicit dma_fence synced workloads, and I think that's
> perfectly ok to not allow. Especially since the motivation here for
> all this is compute, and compute doesn't pass around dma_fences
> anyway.

As Alex said we will rather soon see this for gfx as well and we most 
likely will see combinations of old dma_fence based integrated graphics 
with new dedicated GPUs.

So I don't think we can say we reduce the problem to compute and don't 
support anything else.

Regards,
Christian.

>
>>> I think the only thing missing is clarifying a bit what you have under
>>> item 3, i.e. how do we make sure there's no accidental hidden
>>> dependency between hmm_fence and dma_fence. Maybe a subsection about
>>> gpu page fault handling?
>> The real improvement is item 6. The problem with it is that it requires
>> auditing all occasions when we create dma_fences so that we don't
>> accidentally depend on an HMM fence.
> We have that rule already, it's the "dma_fence must not depend upon an
> infinite fence anywhere" rule we documented last summer. So that
> doesn't feel new.
> -Daniel
>
>> Regards,
>> Christian.
>>
>>> Or are we still talking past each other a bit here?
>>> -Daniel
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> -Daniel
>>>>>
>
Daniel Vetter Jan. 14, 2021, 3:40 p.m. UTC | #27
On Thu, Jan 14, 2021 at 4:08 PM Christian König
<christian.koenig@amd.com> wrote:
> Am 14.01.21 um 15:23 schrieb Daniel Vetter:
> > On Thu, Jan 14, 2021 at 3:13 PM Christian König
> > <ckoenig.leichtzumerken@gmail.com> wrote:
> >> Am 14.01.21 um 14:57 schrieb Daniel Vetter:
> >>> On Thu, Jan 14, 2021 at 2:37 PM Christian König
> >>> <christian.koenig@amd.com> wrote:
> >>>> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
> >>>>> [SNIP]
> >> Yeah, "engine flush" makes it much more clearer.
> >>
> >> What I wanted to emphasis is that we have to mix dma_fences and
> >> hmm_fences running at the same time on the same hardware fighting over
> >> the same resources.
> >>
> >> E.g. even on the newest hardware multimedia engines can't handle page
> >> faults, so video decoding/encoding will still produce dma_fences.
> > Well we also have to mix them so the kernel can shovel data around
> > using copy engines. Plus we have to mix it at the overall subsystem
> > level, because I'm not sure SoC-class gpus will ever get here, and
> > they definitely aren't there yet.
> >
> >>> I think the basics of userspace or hmm_fence or whatever we'll call it
> >>> we've documented already here:
> >>>
> >>> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences
> >> This talks about the restrictions we have for dma_fences and why
> >> infinite fences (even as hmm_fence) will never work.
> >>
> >> But it doesn't talk about how to handle implicit or explicit
> >> dependencies with something like hmm_fences.
> >>
> >> In other words my proposal above allows hmm_fences to show up in
> >> dma_reservation objects and to be used together with all the explicit
> >> synchronization we still have, with only a medium amount of work :)
> > Oh. I don't think we should put any hmm_fence or other infinite fence
> > into a dma_resv object. At least not into the current dma_resv object,
> > because then we have that infinite fences problem everywhere, and very
> > hard to audit.
>
> Yes, exactly. That's why these rules describe how to mix them, or rather not mix them.
>
> > What we could do is add new hmm_fence only slots for implicit sync,
>
> Yeah, we would have them separated from the dma_fence objects.
>
> > but I think consensus is that implicit sync is bad, never do it again.
> > Last time around (for timeline syncobj) we've also pushed the waiting
> > on cross-over to userspace, and I think that's the right option, so we
> > need userspace to understand the hmm fence anyway. At that point we
> > might as well bite the bullet and do another round of wayland/dri
> > protocols.
>
> As you said I don't see this happening in the next 5 years either.

Well I guess we'll need to get started with that then, when you guys need it.

> So I think we have to somehow solve this in the kernel or we will go in
> circles all the time.
>
> > So from that pov I think the kernel should at most deal with an
> > hmm_fence for cross-process communication and maybe some standard wait
> > primitives (for userspace to use, not for the kernel).
> >
> > The only use case this would forbid is using page faults for legacy
> > implicit/explicit dma_fence synced workloads, and I think that's
> > perfectly ok to not allow. Especially since the motivation here for
> > all this is compute, and compute doesn't pass around dma_fences
> > anyway.
>
> As Alex said we will rather soon see this for gfx as well and we most
> likely will see combinations of old dma_fence based integrated graphics
> with new dedicated GPUs.
>
> So I don't think we can say we reduce the problem to compute and don't
> support anything else.

I'm not against pagefaults for gfx, just against pushing the magic into
the kernel. I don't think that works, because it means we add stall
points where userspace, especially vk userspace, really doesn't want
them. So, the same way as with timeline syncobj, we need to push the
compat work into
userspace.

There's going to be a few stall points:
- fully new stack, we wait for the userspace fence in the atomic
commit path (which we can, if we're really careful, since we pin all
buffers upfront and so there's no risk)
- userspace fencing gpu in the client, compositor protocol can pass
around userspace fences, but the compositor still uses dma_fence for
itself. There's some stalling in the compositor, which it does already
anyway when it's collecting new frames from clients
- userspace fencing gpu in the client, but no compositor protocol: We
wait in the swapchain, but in a separate thread so that nothing blocks
that shouldn't block (a rough sketch of this pattern follows below)
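
[A rough C sketch of that third stall point. wait_user_fence() and
present_image() are placeholders for whatever the driver stack and
window system actually provide; only the threading pattern matters.]

    #include <pthread.h>
    #include <stdlib.h>

    struct user_fence;                          /* placeholder type */
    struct image;                               /* placeholder type */
    void wait_user_fence(struct user_fence *f); /* placeholder wait  */
    void present_image(struct image *img);      /* placeholder flip  */

    struct present_req {
            struct user_fence *fence;           /* userspace/hmm fence */
            struct image *img;
    };

    /* Worker thread: may block indefinitely on the userspace fence,
     * but nothing else in the client is held up by it. */
    static void *present_worker(void *arg)
    {
            struct present_req *req = arg;

            wait_user_fence(req->fence);
            present_image(req->img);
            free(req);
            return NULL;
    }

    /* Swapchain side: queue the present and return immediately. */
    static void queue_present(struct present_req *req)
    {
            pthread_t t;

            pthread_create(&t, NULL, present_worker, req);
            pthread_detach(t);
    }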

If we instead go with "magic waits in the kernel behind userspace's
back", like what your item 6 would imply, then we're not really
solving anything.

For actual implementation I think the best would be an extension of
drm_syncobj. Those already have at least conceptually future/infinite
fences, and we already have fd passing, so "just" need some protocol
to pass them around. Plus we could use the same uapi for timeline
syncobj using dma_fence as for hmm_fence, so also easier to transition
for userspace to the new world, since we don't need the new hw capability
to roll out the new uapi and protocols.
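
[The drm_syncobj building blocks mentioned here already exist; a small
sketch using only current libdrm calls. How an hmm fence would actually
signal the timeline point is exactly the open question, and is not
shown.]

    #include <stdint.h>
    #include <xf86drm.h>

    /* Create a syncobj and export it as an fd that can be passed to
     * another process (SCM_RIGHTS, a wayland protocol, ...). */
    static int export_sync_point(int drm_fd, uint32_t *handle, int *sync_fd)
    {
            int ret = drmSyncobjCreate(drm_fd, 0, handle);

            if (ret)
                    return ret;
            return drmSyncobjHandleToFD(drm_fd, *handle, sync_fd);
    }

    /* Wait on a timeline point that may not even be submitted yet:
     * WAIT_FOR_SUBMIT is the "future fence" part of drm_syncobj. */
    static int wait_sync_point(int drm_fd, int sync_fd, uint64_t point)
    {
            uint32_t handle, first;
            int ret = drmSyncobjFDToHandle(drm_fd, sync_fd, &handle);

            if (ret)
                    return ret;
            return drmSyncobjTimelineWait(drm_fd, &handle, &point, 1,
                                          INT64_MAX,
                                          DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT,
                                          &first);
    }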

That's not that hard to roll out, and technically a lot better than
hacking up dma_resv and hoping we don't end up stalling in wrong
places, which sounds very "eeeek" to me :-)

Cheers, Daniel

> Regards,
> Christian.
>
> >
> >>> I think the only thing missing is clarifying a bit what you have under
> >>> item 3, i.e. how do we make sure there's no accidental hidden
> >>> dependency between hmm_fence and dma_fence. Maybe a subsection about
> >>> gpu page fault handling?
> >> The real improvement is item 6. The problem with it is that it requires
> >> auditing all occasions when we create dma_fences so that we don't
> >> accidentally depend on an HMM fence.
> > We have that rule already, it's the "dma_fence must not depend upon an
> > infinite fence anywhere" rule we documented last summer. So that
> > doesn't feel new.
> > -Daniel
> >
> >> Regards,
> >> Christian.
> >>
> >>> Or are we still talking past each other a bit here?
> >>> -Daniel
> >>>
> >>>
> >>>> Regards,
> >>>> Christian.
> >>>>
> >>>>> -Daniel
> >>>>>
> >
>
Christian König Jan. 14, 2021, 4:01 p.m. UTC | #28
Am 14.01.21 um 16:40 schrieb Daniel Vetter:
> [SNIP]
>> So I think we have to somehow solve this in the kernel or we will go in
>> circles all the time.
>>
>>> So from that pov I think the kernel should at most deal with an
>>> hmm_fence for cross-process communication and maybe some standard wait
>>> primitives (for userspace to use, not for the kernel).
>>>
>>> The only use case this would forbid is using page faults for legacy
>>> implicit/explicit dma_fence synced workloads, and I think that's
>>> perfectly ok to not allow. Especially since the motivation here for
>>> all this is compute, and compute doesn't pass around dma_fences
>>> anyway.
>> As Alex said we will rather soon see this for gfx as well and we most
>> likely will see combinations of old dma_fence based integrated graphics
>> with new dedicated GPUs.
>>
>> So I don't think we can say we reduce the problem to compute and don't
>> support anything else.
> I'm not against pagefaults for gfx, just against pushing the magic into
> the kernel. I don't think that works, because it means we add stall
> points where userspace, especially vk userspace, really doesn't want
> them. So, the same way as with timeline syncobj, we need to push the
> compat work into userspace.
>
> There's going to be a few stall points:
> - fully new stack, we wait for the userspace fence in the atomic
> commit path (which we can, if we're really careful, since we pin all
> buffers upfront and so there's no risk)
> - userspace fencing gpu in the client, compositor protocol can pass
> around userspace fences, but the compositor still uses dma_fence for
> itself. There's some stalling in the compositor, which it does already
> anyway when it's collecting new frames from clients
> - userspace fencing gpu in the client, but no compositor protocol: We
> wait in the swapchain, but in a separate thread so that nothing blocks
> that shouldn't block
>
> If we instead go with "magic waits in the kernel behind userspace's
> back", like what your item 6 would imply, then we're not really
> solving anything.
>
> For actual implementation I think the best would be an extension of
> drm_syncobj. Those already have at least conceptually future/infinite
> fences, and we already have fd passing, so we "just" need some protocol
> to pass them around. Plus we could use the same uapi for timeline
> syncobj using dma_fence as for hmm_fence, so it's also easier for
> userspace to transition to the new world since it doesn't need the new
> hw capability to roll out the new uapi and protocols.
>
> That's not that hard to roll out, and technically a lot better than
> hacking up dma_resv and hoping we don't end up stalling in wrong
> places, which sounds very "eeeek" to me :-)

Yeah, that's what I totally agree upon :)

My idea was just a last resort since we are mixing userspace sync and
memory management so creatively here.

Stalling in userspace will probably get some push back as well, but 
maybe not as much as stalling in the kernel.

Ok, if we can at least remove implicit sync from the picture, then the
question remains: how do we integrate HMM into drm_syncobj?

Regards,
Christian.

>
> Cheers, Daniel
>
Daniel Vetter Jan. 14, 2021, 4:36 p.m. UTC | #29
On Thu, Jan 14, 2021 at 5:01 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 14.01.21 at 16:40, Daniel Vetter wrote:
> > [SNIP]
> >> So I think we have to somehow solve this in the kernel or we will go in
> >> circles all the time.
> >>
> >>> So from that pov I think the kernel should at most deal with an
> >>> hmm_fence for cross-process communication and maybe some standard wait
> >>> primitives (for userspace to use, not for the kernel).
> >>>
> >>> The only use case this would forbid is using page faults for legacy
> >>> implicit/explicit dma_fence synced workloads, and I think that's
> >>> perfectly ok to not allow. Especially since the motivation here for
> >>> all this is compute, and compute doesn't pass around dma_fences
> >>> anyway.
> >> As Alex said we will rather soon see this for gfx as well and we most
> >> likely will see combinations of old dma_fence based integrated graphics
> >> with new dedicated GPUs.
> >>
> >> So I don't think we can say we reduce the problem to compute and don't
> >> support anything else.
> > I'm not against pagefaults for gfx, just in pushing the magic into the
> > kernel. I don't think that works, because it means we add stall points
> > where userspace, especially vk userspace, really doesn't want it. So
> > same way like timeline syncobj, we need to push the compat work into
> > userspace.
> >
> > There's going to be a few stall points:
> > - fully new stack, we wait for the userspace fence in the atomic
> > commit path (which we can, if we're really careful, since we pin all
> > buffers upfront and so there's no risk)
> > - userspace fencing gpu in the client, compositor protocol can pass
> > around userspace fences, but the compositor still uses dma_fence for
> > itself. There's some stalling in the compositor, which it does already
> > anyway when it's collecting new frames from clients
> > - userspace fencing gpu in the client, but no compositor protocol: We
> > wait in the swapchain, but in a separate thread so that nothing blocks
> > that shouldn't block
> >
> > If we instead go with "magic waits in the kernel behind userspace's
> > back", like what your item 6 would imply, then we're not really
> > solving anything.
> >
> > For actual implementation I think the best would be an extension of
> > drm_syncobj. Those already have at least conceptually future/infinite
> > fences, and we already have fd passing, so we "just" need some protocol
> > to pass them around. Plus we could use the same uapi for timeline
> > syncobj using dma_fence as for hmm_fence, so it's also easier for
> > userspace to transition to the new world since it doesn't need the new
> > hw capability to roll out the new uapi and protocols.
> >
> > That's not that hard to roll out, and technically a lot better than
> > hacking up dma_resv and hoping we don't end up stalling in wrong
> > places, which sounds very "eeeek" to me :-)
>
> Yeah, that's what I totally agree upon :)
>
> My idea was just a last resort since we are mixing userspace sync and
> memory management so creatively here.
>
> Stalling in userspace will probably get some push back as well, but
> maybe not as much as stalling in the kernel.

I guess we need to have last-resort stalling in the kernel, but no
more than what we do with drm_syncobj future fences right now. Like
when anything asks for a dma_fence out of an hmm_fence drm_syncobj, we
just stall until the hmm_fence is signalled, and then create a
dma_fence that's already signalled and return that to the caller.
Obviously this shouldn't happen, since anyone who's timeline-aware
will first check whether the fence has at least materialized and stall
somewhere more useful.
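
Roughly something like this sketch (hmm_fence and hmm_fence_wait() are
made-up placeholders here, not existing API; dma_fence_get_stub() is
the real helper that returns an already-signalled fence):

#include <linux/dma-fence.h>

struct dma_fence *syncobj_fence_from_hmm(struct hmm_fence *hfence)
{
	long ret;

	/* block here, outside of any dma_resv lock or reclaim path */
	ret = hmm_fence_wait(hfence, MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ERR_PTR(ret);

	/* hand the caller a dma_fence that is already signalled */
	return dma_fence_get_stub();
}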

> Ok, if we can at least remove implicit sync from the picture, then the
> question remains: how do we integrate HMM into drm_syncobj?

From a uapi pov, probably just an ioctl to create an hmm drm_syncobj,
and a syncobj ioctl to query whether it's an hmm_fence or dma_fence
syncobj, so that userspace can be a bit more clever with where it
should stall - for an hmm_fence the stall will most likely be directly
on the gpu in many cases (so the ioctl should also give us all the
details about that if it's an hmm fence).
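
As a purely hypothetical uapi sketch (the flag, struct and ioctl
number below are all made up for illustration):

/* made-up flag for DRM_IOCTL_SYNCOBJ_CREATE */
#define DRM_SYNCOBJ_CREATE_HMM_FENCE	(1 << 1)

/* made-up query so userspace knows where and how to stall */
struct drm_syncobj_fence_type {
	__u32 handle;		/* syncobj handle to query */
	__u32 type;		/* 0 = dma_fence, 1 = hmm_fence */
	__u64 gpu_wait_addr;	/* memory the GPU can spin on, if hmm */
	__u64 gpu_wait_value;	/* value that marks the fence signalled */
};
#define DRM_IOCTL_SYNCOBJ_QUERY_TYPE \
	DRM_IOWR(0xCC, struct drm_syncobj_fence_type)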

I think the real work is going through all the hardware and trying to
figure out what the common ground for userspace fences is. Stuff like:
can they be in system memory, or do they need something special (wc
maybe, but I hope system memory should be fine for everyone), and how
you count, wrap and compare. I also have no idea how/if we can optimize
cpu waits across different drivers.
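
For the counting part, the minimal common ground is probably a 64-bit
monotonic counter in system memory with a wrap-safe compare, something
like this sketch:

struct user_fence {
	u64 *counter;		/* system memory, possibly wc mapped */
	u64 wait_value;		/* fence is signalled at this value */
};

static bool user_fence_signalled(u64 current_value, u64 wait_value)
{
	/* signed difference survives wraparound, like jiffies compares */
	return (s64)(current_value - wait_value) >= 0;
}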

Plus ideally we get some actual wayland protocol going for passing
drm_syncobj around, so we can test it.
-Daniel
Jerome Glisse Jan. 14, 2021, 4:51 p.m. UTC | #30
On Thu, Jan 14, 2021 at 02:37:36PM +0100, Christian König wrote:
> On 14.01.21 at 12:52, Daniel Vetter wrote:
> > [SNIP]
> > > > I had a new idea, I wanted to think more about it but have not yet;
> > > > anyway, here it is. Add a new callback to dma fence which asks the
> > > > question: can it deadlock? Any time a GPU driver has a pending page
> > > > fault (ie something calling into the mm) it answers yes, otherwise
> > > > no. The GPU shrinker would ask the question before waiting on any
> > > > dma-fence and back off if it gets yes. The shrinker can still try
> > > > many dma-buf objects for which it does not get a yes on the
> > > > associated fence.
> > > >
> > > > This does not solve the mmu notifier case; for this you would just
> > > > invalidate the gem userptr object (with a flag but not releasing the
> > > > page refcount) but you would not wait for the GPU (ie no dma fence
> > > > wait in that code path anymore). The userptr API never really made
> > > > the contract that it will always be in sync with the mm view of the
> > > > world, so if different pages get remapped to the same virtual
> > > > address while the GPU is still working with the old pages it should
> > > > not be an issue (it would not be in our usage of userptr for
> > > > compositors and what not).
> > > The current working idea in my mind goes into a similar direction.
> > > 
> > > But instead of a callback I'm adding a complete new class of HMM fences.
> > > 
> > > Waiting in the MMU notifier, scheduler, TTM etc etc is only allowed for
> > > the dma_fences and HMM fences are ignored in container objects.
> > > 
> > > When you handle an implicit or explicit synchronization request from
> > > userspace you need to block for HMM fences to complete before taking any
> > > resource locks.
> > Isn't that what I call gang scheduling? I.e. you either run in HMM
> > mode, or in legacy fencing mode (whether implicit or explicit doesn't
> > really matter I think). By forcing that split we avoid the problem,
> > but it means occasionally full stalls on mixed workloads.
> > 
> > But that's not what Jerome wants (afaiui at least), I think his idea
> > is to track the reverse dependencies of all the fences floating
> > around, and then skip evicting an object if you have to wait for any
> > fence that is problematic for the current calling context. And I don't
> > think that's very feasible in practice.
> > 
> > So what kind of hmm fences do you have in mind here?
> 
> It's a bit more relaxed than your gang schedule.
> 
> See the requirements are as follow:
> 
> 1. dma_fences never depend on hmm_fences.
> 2. hmm_fences can never preempt dma_fences.
> 3. dma_fences must be able to preempt hmm_fences or we always reserve enough
> hardware resources (CUs) to guarantee forward progress of dma_fences.
> 
> Critical sections are MMU notifiers, page faults, GPU schedulers and
> dma_reservation object locks.
> 
> 4. It is valid to wait for a dma_fences in critical sections.
> 5. It is not valid to wait for hmm_fences in critical sections.
> 
> Fence creation either happens during command submission or by adding
> something like a barrier or signal command to your userspace queue.
> 
> 6. If we have an hmm_fence as implicit or explicit dependency for creating a
> dma_fence we must wait for that before taking any locks or reserving
> resources.
> 7. If we have a dma_fence as implicit or explicit dependency for creating an
> hmm_fence we can wait later on. So busy waiting or special WAIT hardware
> commands are valid.
> 
> This prevents hard cuts, e.g. can mix hmm_fences and dma_fences at the same
> time on the hardware.
> 
> In other words we can have a high priority gfx queue running jobs based on
> dma_fences and a low priority compute queue running jobs based on
> hmm_fences.
> 
> Only when we switch from hmm_fence to dma_fence we need to block the
> submission until all the necessary resources (both memory as well as CUs)
> are available.
> 
> This is somewhat an extension to your gang submit idea.

What is hmm_fence? You should not have fences with hmm at all.
So I am kind of scared now.

Cheers,
Jérôme
Christian König Jan. 14, 2021, 7:08 p.m. UTC | #31
On 14.01.21 at 17:36, Daniel Vetter wrote:
> On Thu, Jan 14, 2021 at 5:01 PM Christian König
> <christian.koenig@amd.com> wrote:
>> On 14.01.21 at 16:40, Daniel Vetter wrote:
>>> [SNIP]
>>>> So I think we have to somehow solve this in the kernel or we will go in
>>>> circles all the time.
>>>>
>>>>> So from that pov I think the kernel should at most deal with an
>>>>> hmm_fence for cross-process communication and maybe some standard wait
>>>>> primitives (for userspace to use, not for the kernel).
>>>>>
>>>>> The only use case this would forbid is using page faults for legacy
>>>>> implicit/explicit dma_fence synced workloads, and I think that's
>>>>> perfectly ok to not allow. Especially since the motivation here for
>>>>> all this is compute, and compute doesn't pass around dma_fences
>>>>> anyway.
>>>> As Alex said we will rather soon see this for gfx as well and we most
>>>> likely will see combinations of old dma_fence based integrated graphics
>>>> with new dedicated GPUs.
>>>>
>>>> So I don't think we can say we reduce the problem to compute and don't
>>>> support anything else.
>>> I'm not against pagefaults for gfx, just in pushing the magic into the
>>> kernel. I don't think that works, because it means we add stall points
>>> where userspace, especially vk userspace, really doesn't want it. So
>>> same way like timeline syncobj, we need to push the compat work into
>>> userspace.
>>>
>>> There's going to be a few stall points:
>>> - fully new stack, we wait for the userspace fence in the atomic
>>> commit path (which we can, if we're really careful, since we pin all
>>> buffers upfront and so there's no risk)
>>> - userspace fencing gpu in the client, compositor protocol can pass
>>> around userspace fences, but the compositor still uses dma_fence for
>>> itself. There's some stalling in the compositor, which it does already
>>> anyway when it's collecting new frames from clients
>>> - userspace fencing gpu in the client, but no compositor protocol: We
>>> wait in the swapchain, but in a separate thread so that nothing blocks
>>> that shouldn't block
>>>
>>> If we instead go with "magic waits in the kernel behind userspace's
>>> back", like what your item 6 would imply, then we're not really
>>> solving anything.
>>>
>>> For actual implementation I think the best would be an extension of
>>> drm_syncobj. Those already have at least conceptually future/infinite
>>> fences, and we already have fd passing, so we "just" need some protocol
>>> to pass them around. Plus we could use the same uapi for timeline
>>> syncobj using dma_fence as for hmm_fence, so it's also easier for
>>> userspace to transition to the new world since it doesn't need the new
>>> hw capability to roll out the new uapi and protocols.
>>>
>>> That's not that hard to roll out, and technically a lot better than
>>> hacking up dma_resv and hoping we don't end up stalling in wrong
>>> places, which sounds very "eeeek" to me :-)
>> Yeah, that's what I totally agree upon :)
>>
>> My idea was just a last resort since we are mixing userspace sync and
>> memory management so creatively here.
>>
>> Stalling in userspace will probably get some push back as well, but
>> maybe not as much as stalling in the kernel.
> I guess we need to have last-resort stalling in the kernel, but no
> more than what we do with drm_syncobj future fences right now. Like
> when anything asks for a dma_fence out of an hmm_fence drm_syncobj, we
> just stall until the hmm_fence is signalled, and then create a
> dma_fence that's already signalled and return that to the caller.

Good idea. BTW: We should somehow teach lockdep that this 
materialization of any future fence should not happen while holding a 
reservation lock?
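
Maybe with a fake lockdep map, similar to the dma_fence signalling
annotations. As a rough sketch (hmm_fence_materialize_annotate() is a
placeholder for whatever function ends up doing the materialization):

static struct lockdep_map hmm_fence_materialize_map = {
	.name = "hmm_fence_materialize"
};

/* prime the legal ordering once, e.g. at init time */
static void hmm_fence_lockdep_prime(struct dma_resv *resv)
{
	lock_map_acquire(&hmm_fence_materialize_map);
	dma_resv_lock(resv, NULL);
	dma_resv_unlock(resv);
	lock_map_release(&hmm_fence_materialize_map);
}

/* materializing with any dma_resv lock held then closes a cycle
 * in the lockdep graph and gets reported */
static void hmm_fence_materialize_annotate(void)
{
	lock_map_acquire(&hmm_fence_materialize_map);
	lock_map_release(&hmm_fence_materialize_map);
}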

> Obviously this shouldn't happen, since anyone who's timeline-aware
> will first check whether the fence has at least materialized and stall
> somewhere more useful.

Well if I'm not completely mistaken it should help with existing stuff 
like an implicit fence for atomic modeset etc...

>> Ok, if we can at least remove implicit sync from the picture, then the
>> question remains: how do we integrate HMM into drm_syncobj?
> From a uapi pov, probably just an ioctl to create an hmm drm_syncobj,
> and a syncobj ioctl to query whether it's an hmm_fence or dma_fence
> syncobj, so that userspace can be a bit more clever with where it
> should stall - for an hmm_fence the stall will most likely be directly
> on the gpu in many cases (so the ioctl should also give us all the
> details about that if it's an hmm fence).
>
> I think the real work is going through all the hardware and trying to
> figure out what the common ground for userspace fences is. Stuff like:
> can they be in system memory, or do they need something special (wc
> maybe, but I hope system memory should be fine for everyone), and how
> you count, wrap and compare. I also have no idea how/if we can optimize
> cpu waits across different drivers.

I think that this is absolutely hardware dependent. For example, AMD
will probably have handles, so that the hardware scheduler can counter
problems like priority inversion.

What we should probably do is handle this similar to how DMA-buf is
handled - if the drm_syncobj is on the same driver and device we can
use the same handle for both sides.

If it's a different driver or device we go through some CPU round trip
for the signaling.
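
As a sketch of that decision (the types and helpers here are
hypothetical illustrations, not existing API):

static int userspace_fence_import(struct drm_device *dev,
				  struct userspace_fence *f)
{
	/* same driver and device: the hw scheduler waits natively */
	if (f->exporter == dev)
		return share_native_handle(f);

	/* otherwise a CPU round trip: the exporter signals the importer */
	return signal_via_cpu_roundtrip(f);
}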

> Plus ideally we get some actual wayland protocol going for passing
> drm_syncobj around, so we can test it.

And DRI3 :)

Christian.

> -Daniel
Daniel Vetter Jan. 14, 2021, 8:09 p.m. UTC | #32
On Thu, Jan 14, 2021 at 08:08:06PM +0100, Christian König wrote:
> On 14.01.21 at 17:36, Daniel Vetter wrote:
> > On Thu, Jan 14, 2021 at 5:01 PM Christian König
> > <christian.koenig@amd.com> wrote:
> > > On 14.01.21 at 16:40, Daniel Vetter wrote:
> > > > [SNIP]
> > > > > So I think we have to somehow solve this in the kernel or we will go in
> > > > > circles all the time.
> > > > > 
> > > > > > So from that pov I think the kernel should at most deal with an
> > > > > > hmm_fence for cross-process communication and maybe some standard wait
> > > > > > primitives (for userspace to use, not for the kernel).
> > > > > > 
> > > > > > The only use case this would forbid is using page faults for legacy
> > > > > > implicit/explicit dma_fence synced workloads, and I think that's
> > > > > > perfectly ok to not allow. Especially since the motivation here for
> > > > > > all this is compute, and compute doesn't pass around dma_fences
> > > > > > anyway.
> > > > > As Alex said we will rather soon see this for gfx as well and we most
> > > > > likely will see combinations of old dma_fence based integrated graphics
> > > > > with new dedicated GPUs.
> > > > > 
> > > > > So I don't think we can say we reduce the problem to compute and don't
> > > > > support anything else.
> > > > I'm not against pagefaults for gfx, just in pushing the magic into the
> > > > kernel. I don't think that works, because it means we add stall points
> > > > where userspace, especially vk userspace, really doesn't want it. So
> > > > same way like timeline syncobj, we need to push the compat work into
> > > > userspace.
> > > > 
> > > > There's going to be a few stall points:
> > > > - fully new stack, we wait for the userspace fence in the atomic
> > > > commit path (which we can, if we're really careful, since we pin all
> > > > buffers upfront and so there's no risk)
> > > > - userspace fencing gpu in the client, compositor protocol can pass
> > > > around userspace fences, but the compositor still uses dma_fence for
> > > > itself. There's some stalling in the compositor, which it does already
> > > > anyway when it's collecting new frames from clients
> > > > - userspace fencing gpu in the client, but no compositor protocol: We
> > > > wait in the swapchain, but in a separate thread so that nothing blocks
> > > > that shouldn't block
> > > > 
> > > > If we instead go with "magic waits in the kernel behind userspace's
> > > > back", like what your item 6 would imply, then we're not really
> > > > solving anything.
> > > > 
> > > > For actual implementation I think the best would be an extension of
> > > > drm_syncobj. Those already have at least conceptually future/infinite
> > > > fences, and we already have fd passing, so we "just" need some protocol
> > > > to pass them around. Plus we could use the same uapi for timeline
> > > > syncobj using dma_fence as for hmm_fence, so it's also easier for
> > > > userspace to transition to the new world since it doesn't need the new
> > > > hw capability to roll out the new uapi and protocols.
> > > > 
> > > > That's not that hard to roll out, and technically a lot better than
> > > > hacking up dma_resv and hoping we don't end up stalling in wrong
> > > > places, which sounds very "eeeek" to me :-)
> > > Yeah, that's what I totally agree upon :)
> > > 
> > > My idea was just a last resort since we are mixing userspace sync and
> > > memory management so creatively here.
> > > 
> > > Stalling in userspace will probably get some push back as well, but
> > > maybe not as much as stalling in the kernel.
> > I guess we need to have last-resort stalling in the kernel, but no
> > more than what we do with drm_syncobj future fences right now. Like
> > when anything asks for a dma_fence out of an hmm_fence drm_syncobj, we
> > just stall until the hmm_fence is signalled, and then create a
> > dma_fence that's already signalled and return that to the caller.
> 
> Good idea. BTW: We should somehow teach lockdep that this materialization of
> any future fence should not happen while holding a reservation lock?

Good idea, should be easy to add (although the explanation why it works
needs a comment).

> > Obviously this shouldn't happen, since anyone who's timeline-aware
> > will first check whether the fence has at least materialized and stall
> > somewhere more useful.
> 
> Well if I'm not completely mistaken it should help with existing stuff like
> an implicit fence for atomic modeset etc...

Modeset is special:
- we fully pin buffers before we even start waiting. That means the loop
  can't close, since no one can try to evict our pinned buffer and would
  hence end up waiting on our hmm fence. We also only unpin them after
  everything is done (see the sketch below).

- there's out-fences, but as long as we require that the in and out
  fences are of the same type that should be all fine. Also since the
  explicit in/out fence stuff is there already it shouldn't be too hard to
  add support for syncobj fences without touching a lot of drivers - all
  the ones that use the atomic commit helpers should Just Work.
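
In sketch form, the ordering that keeps this safe (the helpers are
hypothetical stand-ins for the atomic commit helper steps):

static void commit_with_userspace_fences(struct commit_state *c)
{
	pin_all_framebuffers(c);	/* nothing can need to evict us */
	wait_userspace_in_fences(c);	/* safe, the loop cannot close */
	program_hardware(c);
	signal_out_fences(c);		/* same fence type as the input */
	unpin_all_framebuffers(c);	/* only after everything is done */
}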

> > > Ok, if we can at least remove implicit sync from the picture, then the
> > > question remains: how do we integrate HMM into drm_syncobj?
> > From a uapi pov, probably just an ioctl to create an hmm drm_syncobj,
> > and a syncobj ioctl to query whether it's an hmm_fence or dma_fence
> > syncobj, so that userspace can be a bit more clever with where it
> > should stall - for an hmm_fence the stall will most likely be directly
> > on the gpu in many cases (so the ioctl should also give us all the
> > details about that if it's an hmm fence).
> > 
> > I think the real work is going through all the hardware and trying to
> > figure out what the common ground for userspace fences is. Stuff like:
> > can they be in system memory, or do they need something special (wc
> > maybe, but I hope system memory should be fine for everyone), and how
> > you count, wrap and compare. I also have no idea how/if we can optimize
> > cpu waits across different drivers.
> 
> I think that this is absolutely hardware dependent. For example, AMD
> will probably have handles, so that the hardware scheduler can counter
> problems like priority inversion.
> 
> What we should probably do is handle this similar to how DMA-buf is
> handled - if the drm_syncobj is on the same driver and device we can
> use the same handle for both sides.
> 
> If it's a different driver or device we go through some CPU round trip
> for the signaling.

I think we should try to be slightly more standardized; dma-buf was a
bit too much of a free-for-all. But maybe that's not possible really, since we tried
this with dma-fence and ended up with exactly the situation you're
describing for hmm fences.

> > Plus ideally we get some actual wayland protocol going for passing
> > drm_syncobj around, so we can test it.
> 
> And DRI3 :)
 
Yeah. Well probably Present extension, since that's the thing that's doing
the flipping. At least we only have to really care about XWayland for
that, with this time horizon at least.
-Daniel
Felix Kuehling Jan. 14, 2021, 9:13 p.m. UTC | #33
On 2021-01-14 at 11:51 a.m., Jerome Glisse wrote:
> On Thu, Jan 14, 2021 at 02:37:36PM +0100, Christian König wrote:
>> On 14.01.21 at 12:52, Daniel Vetter wrote:
>>> [SNIP]
>>>>> I had a new idea, I wanted to think more about it but have not yet;
>>>>> anyway, here it is. Add a new callback to dma fence which asks the
>>>>> question: can it deadlock? Any time a GPU driver has a pending page
>>>>> fault (ie something calling into the mm) it answers yes, otherwise
>>>>> no. The GPU shrinker would ask the question before waiting on any
>>>>> dma-fence and back off if it gets yes. The shrinker can still try
>>>>> many dma-buf objects for which it does not get a yes on the
>>>>> associated fence.
>>>>>
>>>>> This does not solve the mmu notifier case; for this you would just
>>>>> invalidate the gem userptr object (with a flag but not releasing the
>>>>> page refcount) but you would not wait for the GPU (ie no dma fence
>>>>> wait in that code path anymore). The userptr API never really made
>>>>> the contract that it will always be in sync with the mm view of the
>>>>> world, so if different pages get remapped to the same virtual
>>>>> address while the GPU is still working with the old pages it should
>>>>> not be an issue (it would not be in our usage of userptr for
>>>>> compositors and what not).
>>>> The current working idea in my mind goes into a similar direction.
>>>>
>>>> But instead of a callback I'm adding a complete new class of HMM fences.
>>>>
>>>> Waiting in the MMU notifier, scheduler, TTM etc etc is only allowed for
>>>> the dma_fences and HMM fences are ignored in container objects.
>>>>
>>>> When you handle an implicit or explicit synchronization request from
>>>> userspace you need to block for HMM fences to complete before taking any
>>>> resource locks.
>>> Isn't that what I call gang scheduling? I.e. you either run in HMM
>>> mode, or in legacy fencing mode (whether implicit or explicit doesn't
>>> really matter I think). By forcing that split we avoid the problem,
>>> but it means occasionally full stalls on mixed workloads.
>>>
>>> But that's not what Jerome wants (afaiui at least), I think his idea
>>> is to track the reverse dependencies of all the fences floating
>>> around, and then skip evicting an object if you have to wait for any
>>> fence that is problematic for the current calling context. And I don't
>>> think that's very feasible in practice.
>>>
>>> So what kind of hmm fences do you have in mind here?
>> It's a bit more relaxed than your gang schedule.
>>
>> See the requirements are as follow:
>>
>> 1. dma_fences never depend on hmm_fences.
>> 2. hmm_fences can never preempt dma_fences.
>> 3. dma_fences must be able to preempt hmm_fences or we always reserve enough
>> hardware resources (CUs) to guarantee forward progress of dma_fences.
>>
>> Critical sections are MMU notifiers, page faults, GPU schedulers and
>> dma_reservation object locks.
>>
>> 4. It is valid to wait for a dma_fences in critical sections.
>> 5. It is not valid to wait for hmm_fences in critical sections.
>>
>> Fence creation either happens during command submission or by adding
>> something like a barrier or signal command to your userspace queue.
>>
>> 6. If we have an hmm_fence as implicit or explicit dependency for creating a
>> dma_fence we must wait for that before taking any locks or reserving
>> resources.
>> 7. If we have a dma_fence as implicit or explicit dependency for creating an
>> hmm_fence we can wait later on. So busy waiting or special WAIT hardware
>> commands are valid.
>>
>> This prevents hard cuts, e.g. can mix hmm_fences and dma_fences at the same
>> time on the hardware.
>>
>> In other words we can have a high priority gfx queue running jobs based on
>> dma_fences and a low priority compute queue running jobs based on
>> hmm_fences.
>>
>> Only when we switch from hmm_fence to dma_fence we need to block the
>> submission until all the necessary resources (both memory as well as CUs)
>> are available.
>>
>> This is somewhat an extension to your gang submit idea.
> What is hmm_fence? You should not have fences with hmm at all.
> So I am kind of scared now.

I kind of had the same question trying to follow Christian and Daniel's
discussion. I think an HMM fence would be any fence resulting from the
completion of a user mode operation in a context with HMM-based memory
management that may stall indefinitely due to page faults.

But on a hardware engine that cannot preempt page-faulting work and has
not reserved resources to guarantee forward progress for kernel jobs, I
think all fences will need to be HMM fences, because any work submitted
to such an engine can stall by getting stuck behind a stalled user mode
operation.

So for example, you have a DMA engine that can preempt during page
faults, but a graphics engine that cannot. Then work submitted to the
DMA engine can use dma_fence. But work submitted to the graphics engine
must use hmm_fence. To avoid deadlocks, dma_fences must never depend on
hmm_fences and resolution of page faults must never depend on hmm_fences.
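
As a sketch, the decision which fence class jobs on an engine may
signal could look like this (the fields and helper are hypothetical
illustrations):

struct engine_caps {
	bool preempt_pagefaults;	/* can preempt mid page fault */
	bool reserved_resources;	/* CUs reserved for fwd progress */
};

/* jobs may only signal dma_fences if nothing on this engine can get
 * stuck behind a stalled, page-faulting user mode operation */
static bool engine_jobs_use_dma_fence(const struct engine_caps *caps)
{
	return caps->preempt_pagefaults || caps->reserved_resources;
}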

Regards,
  Felix


>
> Cheers,
> Jérôme
>
Christian König Jan. 15, 2021, 7:47 a.m. UTC | #34
On 14.01.21 at 22:13, Felix Kuehling wrote:
> On 2021-01-14 at 11:51 a.m., Jerome Glisse wrote:
>> On Thu, Jan 14, 2021 at 02:37:36PM +0100, Christian König wrote:
>>> On 14.01.21 at 12:52, Daniel Vetter wrote:
>>>> [SNIP]
>>>>>> I had a new idea, I wanted to think more about it but have not yet;
>>>>>> anyway, here it is. Add a new callback to dma fence which asks the
>>>>>> question: can it deadlock? Any time a GPU driver has a pending page
>>>>>> fault (ie something calling into the mm) it answers yes, otherwise
>>>>>> no. The GPU shrinker would ask the question before waiting on any
>>>>>> dma-fence and back off if it gets yes. The shrinker can still try
>>>>>> many dma-buf objects for which it does not get a yes on the
>>>>>> associated fence.
>>>>>>
>>>>>> This does not solve the mmu notifier case; for this you would just
>>>>>> invalidate the gem userptr object (with a flag but not releasing the
>>>>>> page refcount) but you would not wait for the GPU (ie no dma fence
>>>>>> wait in that code path anymore). The userptr API never really made
>>>>>> the contract that it will always be in sync with the mm view of the
>>>>>> world, so if different pages get remapped to the same virtual
>>>>>> address while the GPU is still working with the old pages it should
>>>>>> not be an issue (it would not be in our usage of userptr for
>>>>>> compositors and what not).
>>>>> The current working idea in my mind goes into a similar direction.
>>>>>
>>>>> But instead of a callback I'm adding a complete new class of HMM fences.
>>>>>
>>>>> Waiting in the MMU notifier, scheduler, TTM etc etc is only allowed for
>>>>> the dma_fences and HMM fences are ignored in container objects.
>>>>>
>>>>> When you handle an implicit or explicit synchronization request from
>>>>> userspace you need to block for HMM fences to complete before taking any
>>>>> resource locks.
>>>> Isn't that what I call gang scheduling? I.e. you either run in HMM
>>>> mode, or in legacy fencing mode (whether implicit or explicit doesn't
>>>> really matter I think). By forcing that split we avoid the problem,
>>>> but it means occasionally full stalls on mixed workloads.
>>>>
>>>> But that's not what Jerome wants (afaiui at least), I think his idea
>>>> is to track the reverse dependencies of all the fences floating
>>>> around, and then skip evicting an object if you have to wait for any
>>>> fence that is problematic for the current calling context. And I don't
>>>> think that's very feasible in practice.
>>>>
>>>> So what kind of hmm fences do you have in mind here?
>>> It's a bit more relaxed than your gang schedule.
>>>
>>> See the requirements are as follow:
>>>
>>> 1. dma_fences never depend on hmm_fences.
>>> 2. hmm_fences can never preempt dma_fences.
>>> 3. dma_fences must be able to preempt hmm_fences or we always reserve enough
>>> hardware resources (CUs) to guarantee forward progress of dma_fences.
>>>
>>> Critical sections are MMU notifiers, page faults, GPU schedulers and
>>> dma_reservation object locks.
>>>
>>> 4. It is valid to wait for a dma_fences in critical sections.
>>> 5. It is not valid to wait for hmm_fences in critical sections.
>>>
>>> Fence creation either happens during command submission or by adding
>>> something like a barrier or signal command to your userspace queue.
>>>
>>> 6. If we have an hmm_fence as implicit or explicit dependency for creating a
>>> dma_fence we must wait for that before taking any locks or reserving
>>> resources.
>>> 7. If we have a dma_fence as implicit or explicit dependency for creating an
>>> hmm_fence we can wait later on. So busy waiting or special WAIT hardware
>>> commands are valid.
>>>
>>> This prevents hard cuts, e.g. can mix hmm_fences and dma_fences at the same
>>> time on the hardware.
>>>
>>> In other words we can have a high priority gfx queue running jobs based on
>>> dma_fences and a low priority compute queue running jobs based on
>>> hmm_fences.
>>>
>>> Only when we switch from hmm_fence to dma_fence we need to block the
>>> submission until all the necessary resources (both memory as well as CUs)
>>> are available.
>>>
>>> This is somewhat an extension to your gang submit idea.
>> What is hmm_fence? You should not have fences with hmm at all.
>> So I am kind of scared now.
> I kind of had the same question trying to follow Christian and Daniel's
> discussion. I think an HMM fence would be any fence resulting from the
> completion of a user mode operation in a context with HMM-based memory
> management that may stall indefinitely due to page faults.

It was more of a placeholder for something which can be used for inter 
process synchronization.

> But on a hardware engine that cannot preempt page-faulting work and has
> not reserved resources to guarantee forward progress for kernel jobs, I
> think all fences will need to be HMM fences, because any work submitted
> to such an engine can stall by getting stuck behind a stalled user mode
> operation.
>
> So for example, you have a DMA engine that can preempt during page
> faults, but a graphics engine that cannot. Then work submitted to the
> DMA engine can use dma_fence. But work submitted to the graphics engine
> must use hmm_fence. To avoid deadlocks, dma_fences must never depend on
> hmm_fences and resolution of page faults must never depend on hmm_fences.

Yeah, it's a bit more complicated but in general that fits.

Regards,
Christian.

>
> Regards,
>    Felix
>
>
>> Cheers,
>> Jérôme
>>